{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T16:37:20Z","timestamp":1774629440622,"version":"3.50.1"},"reference-count":16,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,6,6]],"date-time":"2024-06-06T00:00:00Z","timestamp":1717632000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Ada Lett."],"published-print":{"date-parts":[[2024,6,6]]},"abstract":"<jats:p>Deep neural networks (DNNs) have demonstrated promising performances in handling complex real-world scenarios, surpassing human intelligence. Despite their exciting performances, DNNs are not robust against adversarial attacks. They are specifically vulnerable to data poisoning attacks where attackers meddle with the initial training data, despite the multiple defensive methods available, such as defensive distillation. However, defensive distillation has shown promising results in robustifying image classification deep learning (DL) models against adversarial attacks at the inference level, but they remain vulnerable to data poisoning attacks. This work incorporates a data denoising and reconstruction framework with a defensive distillation methodology to defend against such attacks. We leverage a denoising autoencoder (DAE) to develop a data reconstruction and filtering pipeline with a well-designed reconstruction threshold. We added carefully created adversarial examples to the initial training data to assess the proposed method's performance. 
Our experimental findings demonstrate that the proposed methodology significantly reduced the vulnerability of the defensive distillation framework to a data poisoning attack.<\/jats:p>","DOI":"10.1145\/3672359.3672362","type":"journal-article","created":{"date-parts":[[2024,6,7]],"date-time":"2024-06-07T22:58:10Z","timestamp":1717801090000},"page":"30-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Denoising Autoencoder-Based Defensive Distillation as an Adversarial Robustness Algorithm Against Data Poisoning Attacks"],"prefix":"10.1145","volume":"43","author":[{"given":"Bakary","family":"Badjie","sequence":"first","affiliation":[{"name":"LASIGE, Departamento de Inform\u00e1tica, Faculdade de Ci\u00eancias da Universidade Lisboa, Lisboa"}]},{"given":"Jos\u00e9","family":"Cec\u00edlio","sequence":"additional","affiliation":[{"name":"LASIGE, Departamento de Inform\u00e1tica, Faculdade de Ci\u00eancias da Universidade Lisboa, Lisboa"}]},{"given":"Ant\u00f3nio","family":"Casimiro","sequence":"additional","affiliation":[{"name":"LASIGE, Departamento de Inform\u00e1tica, Faculdade de Ci\u00eancias da Universidade Lisboa, Lisboa"}]}],"member":"320","published-online":{"date-parts":[[2024,6,7]]},"reference":[{"key":"e_1_2_1_1_1","first-page":"424","volume-title":"Vision and Computing (ICIVC)","author":"Chen Y.","year":"2022","unstructured":"Y. Chen, M. Zhang, J. Li, and X. Kuang, \"Adversarial attacks and defenses in image classification: A practical perspective,\" in 2022 7th International Conference on Image, Vision and Computing (ICIVC), pp. 424--430, IEEE, 2022."},{"key":"e_1_2_1_2_1","first-page":"582","volume-title":"Distillation as a defense to adversarial perturbations against deep neural networks,\" in 2016 IEEE symposium on security and privacy (SP)","author":"Papernot N.","year":"2016","unstructured":"N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. 
Swami, \"Distillation as a defense to adversarial perturbations against deep neural networks,\" in 2016 IEEE symposium on security and privacy (SP), pp. 582--597, IEEE, 2016."},{"key":"e_1_2_1_3_1","volume-title":"Autoencoders,\" arXiv preprint arXiv:2003.05991","author":"Bank D.","year":"2020","unstructured":"D. Bank, N. Koenigstein, and R. Giryes, \"Autoencoders,\" arXiv preprint arXiv:2003.05991, 2020."},{"key":"e_1_2_1_4_1","volume-title":"Distilling the knowledge in a neural network,\" stat","author":"Hinton G.","year":"2015","unstructured":"G. Hinton, O. Vinyals, and J. Dean, \"Distilling the knowledge in a neural network,\" stat, vol. 1050, p. 9, 2015."},{"key":"e_1_2_1_5_1","volume-title":"Explaining and harnessing adversarial examples,\" arXiv preprint arXiv:1412.6572","author":"Goodfellow I. J.","year":"2014","unstructured":"I. J. Goodfellow, J. Shlens, and C. Szegedy, \"Explaining and harnessing adversarial examples,\" arXiv preprint arXiv:1412.6572, 2014."},{"key":"e_1_2_1_6_1","volume-title":"Adversarial machine learning at scale,\" arXiv preprint arXiv:1611.01236","author":"Kurakin A.","year":"2016","unstructured":"A. Kurakin, I. Goodfellow, and S. Bengio, \"Adversarial machine learning at scale,\" arXiv preprint arXiv:1611.01236, 2016."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5816"},{"key":"e_1_2_1_8_1","first-page":"1","article-title":"Adversarial security mitigations of mmwave beamforming prediction models using defensive distillation and adversarial retraining","author":"Kuzlu M.","year":"2022","unstructured":"M. Kuzlu, F. O. Catak, U. Cali, E. Catak, and O. Guler, \"Adversarial security mitigations of mmwave beamforming prediction models using defensive distillation and adversarial retraining,\" International Journal of Information Security, pp. 
1--14, 2022.","journal-title":"International Journal of Information Security"},{"key":"e_1_2_1_9_1","volume-title":"Low temperature distillation for robust adversarial training,\" arXiv preprint arXiv:2111.02331","author":"Chen E.-C.","year":"2021","unstructured":"E.-C. Chen and C.-R. Lee, \"Ltd: Low temperature distillation for robust adversarial training,\" arXiv preprint arXiv:2111.02331, 2021."},{"key":"e_1_2_1_10_1","unstructured":"N. Papernot and P. McDaniel \"Extending defensive distillation \" arXiv preprint arXiv:1705.05264 2017."},{"key":"e_1_2_1_11_1","first-page":"43027","article-title":"A denoising autoencoder approach for poisoning attack detection in federated learning","volume":"9","author":"Yue C.","year":"2021","unstructured":"C. Yue, X. Zhu, Z. Liu, X. He, Z. Zhang, and W. Zhao, \"A denoising autoencoder approach for poisoning attack detection in federated learning,\" IEEE Access, vol. 9, pp. 43027--43036, 2021.","journal-title":"IEEE Access"},{"key":"e_1_2_1_12_1","volume-title":"PMLR","author":"Kascenas A.","year":"2022","unstructured":"A. Kascenas, N. Pugeault, and A. Q. O'Neil, \"Denoising autoencoders for unsupervised anomaly detection in brain mri,\" in International Conference on Medical Imaging with Deep Learning, pp. 653--664, PMLR, 2022."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.37868\/hsd.v3i2.71"},{"key":"e_1_2_1_14_1","first-page":"1","volume-title":"An analysis of adversarial attacks and defenses on autonomous driving models,\" in 2020 IEEE international conference on pervasive computing and communications (PerCom)","author":"Deng Y.","year":"2020","unstructured":"Y. Deng, X. Zheng, T. Zhang, C. Chen, G. Lou, and M. Kim, \"An analysis of adversarial attacks and defenses on autonomous driving models,\" in 2020 IEEE international conference on pervasive computing and communications (PerCom), pp. 
1--10, IEEE, 2020."},{"key":"e_1_2_1_15_1","first-page":"547","volume-title":"Deep learning and its adversarial robustness: A brief introduction,\" in HANDBOOK ON COMPUTER LEARNING AND INTELLIGENCE: Volume 2: Deep Learning, Intelligent Control and Evolutionary Computation","author":"Wang F.","year":"2022","unstructured":"F. Wang, C. Zhang, P. Xu, and W. Ruan, \"Deep learning and its adversarial robustness: A brief introduction,\" in HANDBOOK ON COMPUTER LEARNING AND INTELLIGENCE: Volume 2: Deep Learning, Intelligent Control and Evolutionary Computation, pp. 547--584, World Scientific, 2022."},{"key":"e_1_2_1_16_1","first-page":"117","volume-title":"Kullback-leibler divergence revisited,\" in Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval","author":"Raiber F.","year":"2017","unstructured":"F. Raiber and O. Kurland, \"Kullback-leibler divergence revisited,\" in Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, pp. 
117--124, 2017."}],"container-title":["ACM SIGAda Ada Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672359.3672362","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3672359.3672362","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:57:49Z","timestamp":1750294669000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672359.3672362"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,6]]},"references-count":16,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,6,6]]}},"alternative-id":["10.1145\/3672359.3672362"],"URL":"https:\/\/doi.org\/10.1145\/3672359.3672362","relation":{},"ISSN":["1094-3641"],"issn-type":[{"value":"1094-3641","type":"print"}],"subject":[],"published":{"date-parts":[[2024,6,6]]},"assertion":[{"value":"2024-06-07","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}