{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T09:05:43Z","timestamp":1774602343196,"version":"3.50.1"},"reference-count":96,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2022,12,15]],"date-time":"2022-12-15T00:00:00Z","timestamp":1671062400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2020AAA0107705"],"award-info":[{"award-number":["2020AAA0107705"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62122066, U20A20182, 61872274, U20A20178"],"award-info":[{"award-number":["62122066, U20A20182, 61872274, U20A20178"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Key R&D Program of Zhejiang","award":["2022C01018"],"award-info":[{"award-number":["2022C01018"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2023,7,31]]},"abstract":"<jats:p>Machine learning (ML) has been universally adopted for automated decisions in a variety of fields, including recognition and classification applications, recommendation systems, natural language processing, and so on. However, given the high cost of training data and computing resources, recent years have witnessed a rapid increase in partially or completely outsourced ML training, which opens vulnerabilities for adversaries to exploit. 
A prime threat in the training phase is the poisoning attack, where adversaries strive to subvert the behavior of machine learning systems by poisoning training data or other means of interference. Although a growing number of relevant studies have been published, research on poisoning attacks remains scattered, with each paper focusing on a particular task in a specific domain. In this survey, we summarize and categorize existing attack methods and corresponding defenses, as well as demonstrate compelling application scenarios, thus providing a unified framework to analyze poisoning attacks. In addition, we discuss the main limitations of current works, along with corresponding future directions to facilitate further research. Our ultimate motivation is to provide a comprehensive and self-contained survey of this growing field of research and lay the foundation for a more standardized approach to reproducible studies.<\/jats:p>","DOI":"10.1145\/3538707","type":"journal-article","created":{"date-parts":[[2022,5,25]],"date-time":"2022-05-25T11:56:03Z","timestamp":1653479763000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":73,"title":["Threats to Training: A Survey of Poisoning Attacks and Defenses on Machine Learning Systems"],"prefix":"10.1145","volume":"55","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5804-3279","authenticated-orcid":false,"given":"Zhibo","family":"Wang","sequence":"first","affiliation":[{"name":"Wuhan University, China and Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7238-773X","authenticated-orcid":false,"given":"Jingjing","family":"Ma","sequence":"additional","affiliation":[{"name":"Wuhan University, Wuhan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9132-7348","authenticated-orcid":false,"given":"Xue","family":"Wang","sequence":"additional","affiliation":[{"name":"Wuhan University, Wuhan, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8771-7474","authenticated-orcid":false,"given":"Jiahui","family":"Hu","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7872-6969","authenticated-orcid":false,"given":"Zhan","family":"Qin","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1969-2591","authenticated-orcid":false,"given":"Kui","family":"Ren","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2022,12,15]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-010-5188-5"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/1128817.1128824"},{"key":"e_1_3_1_4_2","first-page":"634","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Bhagoji Arjun Nitin","year":"2019","unstructured":"Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. 2019. Analyzing federated learning through an adversarial lens. In Proceedings of the International Conference on Machine Learning. PMLR, 634\u2013643."},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.5555\/3042573.3042761"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_3_1_7_2","first-page":"118","volume-title":"Proceedings of the 31st International Conference on Neural Information Processing Systems","author":"Blanchard Peva","year":"2017","unstructured":"Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proceedings of the 31st International Conference on Neural Information Processing Systems. 
118\u2013128."},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9414862"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPADS47876.2019.00042"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.373"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2017.11.007"},{"issue":"2","key":"e_1_3_1_12_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3154503","article-title":"Distributed statistical machine learning in adversarial settings: Byzantine gradient descent","volume":"1","author":"Chen Yudong","year":"2017","unstructured":"Yudong Chen, Lili Su, and Jiaming Xu. 2017. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proceedings of the ACM on Measurement and Analysis of Computing Systems 1, 2 (2017), 1\u201325.","journal-title":"Proceedings of the ACM on Measurement and Analysis of Computing Systems"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2008.11"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-60248-2_27"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/NCA51143.2020.9306745"},{"key":"e_1_3_1_16_2","first-page":"1605","volume-title":"Proceedings of the 29th USENIX Security Symposium (USENIX Security\u201920)","author":"Fang Minghong","year":"2020","unstructured":"Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. 2020. Local model poisoning attacks to byzantine-robust federated learning. In Proceedings of the 29th USENIX Security Symposium (USENIX Security\u201920). 
1605\u20131622."},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3366423.3380072"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3274694.3274706"},{"key":"e_1_3_1_19_2","first-page":"11994","article-title":"Learning to confuse: Generating training time adversarial data with auto-encoder","volume":"32","author":"Feng Ji","year":"2019","unstructured":"Ji Feng, Qi-Zhi Cai, and Zhi-Hua Zhou. 2019. Learning to confuse: Generating training time adversarial data with auto-encoder. Adv. Neural Info. Process. Syst. 32 (2019), 11994\u201312004.","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_1_20_2","first-page":"253","article-title":"Robust logistic regression and classification","volume":"27","author":"Feng Jiashi","year":"2014","unstructured":"Jiashi Feng, Huan Xu, Shie Mannor, and Shuicheng Yan. 2014. Robust logistic regression and classification. Adv. Neural Info. Process. Syst. 27 (2014), 253\u2013261.","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_1_21_2","unstructured":"Adriano Franci Maxime Cordy Martin Gubri Mike Papadakis and Yves Le Traon. 2020. Effective and efficient data poisoning in semi-supervised learning. Retrieved from https:\/\/arXiv:2012.07381."},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2013.2292894"},{"key":"e_1_3_1_23_2","unstructured":"Shuhao Fu Chulin Xie Bo Li and Qifeng Chen. 2019. Attack-resistant federated learning with residual-based reweighting. Retrieved from https:\/\/arXiv:1912.11464."},{"key":"e_1_3_1_24_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Geiping Jonas","year":"2020","unstructured":"Jonas Geiping, Liam H. Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, and Tom Goldstein. 2020. Witches\u2019 Brew: Industrial scale data poisoning via gradient matching. 
In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3162397"},{"key":"e_1_3_1_26_2","first-page":"3521","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Guerraoui Rachid","year":"2018","unstructured":"Rachid Guerraoui, S\u00e9bastien Rouault, et\u00a0al. 2018. The hidden vulnerability of distributed learning in byzantium. In Proceedings of the International Conference on Machine Learning. PMLR, 3521\u20133530."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV48630.2021.00073"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58583-9_9"},{"key":"e_1_3_1_29_2","first-page":"10456","article-title":"Using trusted data to train deep networks on labels corrupted by severe noise","volume":"31","author":"Hendrycks Dan","year":"2018","unstructured":"Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. 2018. Using trusted data to train deep networks on labels corrupted by severe noise. Adv. Neural Info. Process. Syst. 31 (2018), 10456\u201310465.","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/GLOBECOM38437.2019.9013539"},{"key":"e_1_3_1_31_2","first-page":"12080","article-title":"MetaPoison: Practical general-purpose clean-label data poisoning","volume":"33","author":"Huang W. Ronny","year":"2020","unstructured":"W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, and Tom Goldstein. 2020. MetaPoison: Practical general-purpose clean-label data poisoning. Adv. Neural Info. Process. Syst. 33 (2020), 12080\u201312091.","journal-title":"Adv. Neural Info. Process. 
Syst."},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00057"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i9.16971"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.aaa8415"},{"key":"e_1_3_1_35_2","first-page":"1885","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning. PMLR, 1885\u20131894."},{"key":"e_1_3_1_36_2","unstructured":"Pang Wei Koh Jacob Steinhardt and Percy Liang. 2018. Stronger data poisoning attacks break data sanitization defenses. Retrieved from https:\/\/arXiv:1811.00741."},{"key":"e_1_3_1_37_2","first-page":"3488","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Konstantinov Nikola","year":"2019","unstructured":"Nikola Konstantinov and Christoph Lampert. 2019. Robust learning from untrusted sources. In Proceedings of the International Conference on Machine Learning. PMLR, 3488\u20133498."},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3412841.3441892"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.249"},{"key":"e_1_3_1_40_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Levine Alexander","year":"2020","unstructured":"Alexander Levine and Soheil Feizi. 2020. Deep partition aggregation: Provable defenses against general poisoning attacks. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_41_2","first-page":"1885","article-title":"Data poisoning attacks on factorization-based collaborative filtering","volume":"29","author":"Li Bo","year":"2016","unstructured":"Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. 
2016. Data poisoning attacks on factorization-based collaborative filtering. Adv. Neural Info. Process. Syst. 29 (2016), 1885\u20131893.","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_1_42_2","unstructured":"Suyi Li Yong Cheng Wei Wang Yang Liu and Tianjian Chen. 2020. Learning to detect malicious clients for robust federated learning. Retrieved from https:\/\/arXiv:2002.00211."},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.211"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140447"},{"key":"e_1_3_1_45_2","first-page":"4042","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Liu Fang","year":"2019","unstructured":"Fang Liu and Ness Shroff. 2019. Data poisoning attacks on stochastic bandits. In Proceedings of the International Conference on Machine Learning. PMLR, 4042\u20134050."},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"e_1_3_1_47_2","first-page":"6282","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Liu Sijia","year":"2020","unstructured":"Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, and Una-May O\u2019Reilly. 2020. Min-max optimization without gradients: Convergence and applications to black-box evasion and poisoning attacks. In Proceedings of the International Conference on Machine Learning. PMLR, 6282\u20136293."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455164"},{"key":"e_1_3_1_49_2","unstructured":"Yi Liu Xingliang Yuan Ruihui Zhao Yifeng Zheng and Yefeng Zheng. 2020. RC-SSFL: Towards robust and communication-efficient semi-supervised federated learning system. 
Retrieved from https:\/\/arXiv:2012.04432."},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01554-1_11"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455592"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/657"},{"key":"e_1_3_1_53_2","first-page":"572","volume-title":"Algorithmic Learning Theory","author":"Mahloujifar Saeed","year":"2018","unstructured":"Saeed Mahloujifar, Dimitrios I. Diochnos, and Mohammad Mahmoody. 2018. Learning under p-tampering attacks. In Algorithmic Learning Theory. PMLR, 572\u2013596."},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33014536"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-70503-3_8"},{"key":"e_1_3_1_56_2","first-page":"4274","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Mahloujifar Saeed","year":"2019","unstructured":"Saeed Mahloujifar, Mohammad Mahmoody, and Ameer Mohammed. 2019. Data poisoning attacks in multi-party learning. In Proceedings of the International Conference on Machine Learning. PMLR, 4274\u20134283."},{"key":"e_1_3_1_57_2","first-page":"1273","volume-title":"Artificial Intelligence and Statistics","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273\u20131282."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01304"},{"key":"e_1_3_1_59_2","unstructured":"Akshay Mehra Bhavya Kailkhura Pin-Yu Chen and Jihun Hamm. 2021. Understanding the limits of unsupervised domain adaptation via data poisoning. 
Retrieved from https:\/\/arXiv:2107.03919."},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v29i1.9569"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140451"},{"key":"e_1_3_1_62_2","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez Bjarne Pfitzner Matteo Russo Javier Carnerero-Cano and Emil C. Lupu. 2019. Poisoning attacks with generative adversarial nets. Retrieved from https:\/\/arXiv:1906.07773."},{"key":"e_1_3_1_63_2","article-title":"Exploiting machine learning to subvert your spam filter. In Proceedings of First USENIX Workshop on Large Scale Exploits and Emergent Threats","author":"Nelson Blaine","year":"2008","unstructured":"Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin I. P. Rubinstein, Udam Saini, Charles Sutton, J. Doug Tygar, and Kai Xia. 2008. Exploiting machine learning to subvert your spam filter. In Proceedings of First USENIX Workshop on Large Scale Exploits and Emergent Threats (LEET\u201908). 1\u20139.","journal-title":"(LEET\u201908)"},{"key":"e_1_3_1_64_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations","author":"Orekondy Tribhuvanesh","year":"2020","unstructured":"Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2020. Prediction poisoning: Towards defenses against DNN model stealing attacks. In Proceedings of the 8th International Conference on Learning Representations."},{"key":"e_1_3_1_65_2","unstructured":"Naman Patel Prashanth Krishnamurthy Siddharth Garg and Farshad Khorrami. 2020. Bait and switch: Online training data poisoning of autonomous driving systems. Retrieved from https:\/\/arXiv:2011.04065."},{"key":"e_1_3_1_66_2","first-page":"5","volume-title":"Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Paudice Andrea","year":"2018","unstructured":"Andrea Paudice, Luis Mu\u00f1oz-Gonz\u00e1lez, and Emil C. Lupu. 2018. 
Label sanitization against label flipping poisoning attacks. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 5\u201315."},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-66415-2_4"},{"key":"e_1_3_1_68_2","first-page":"7974","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Rakhsha Amin","year":"2020","unstructured":"Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, and Adish Singla. 2020. Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning. In Proceedings of the International Conference on Machine Learning. PMLR, 7974\u20137984."},{"key":"e_1_3_1_69_2","first-page":"4334","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Ren Mengye","year":"2018","unstructured":"Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2018. Learning to reweight examples for robust deep learning. In Proceedings of the International Conference on Machine Learning. PMLR, 4334\u20134343."},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICMLA.2015.152"},{"key":"e_1_3_1_71_2","first-page":"8147","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Roh Yuji","year":"2020","unstructured":"Yuji Roh, Kangwook Lee, Steven Whang, and Changho Suh. 2020. Fr-train: A mutual information-based approach to fair and robust training. In Proceedings of the International Conference on Machine Learning. PMLR, 8147\u20138157."},{"key":"e_1_3_1_72_2","first-page":"8230","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Rosenfeld Elan","year":"2020","unstructured":"Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, and Zico Kolter. 2020. Certified robustness to label-flipping attacks via randomized smoothing. 
In Proceedings of the International Conference on Machine Learning. PMLR, 8230\u20138241."},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP40000.2020.00115"},{"key":"e_1_3_1_74_2","first-page":"1559","volume-title":"Proceedings of the 30th USENIX Security Symposium (USENIX Security\u201921)","author":"Schuster Roei","year":"2021","unstructured":"Roei Schuster, Congzheng Song, Eran Tromer, and Vitaly Shmatikov. 2021. You autocomplete me: Poisoning vulnerabilities in neural code completion. In Proceedings of the 30th USENIX Security Symposium (USENIX Security\u201921). 1559\u20131575."},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3398394"},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.5555\/3327345.3327509"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1145\/2991079.2991125"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.5555\/3294996.3295110"},{"key":"e_1_3_1_79_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Sun Yanchao","year":"2020","unstructured":"Yanchao Sun, Da Huo, and Furong Huang. 2020. Vulnerability-aware poisoning mechanism for online RL with unknown dynamics. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_80_2","volume-title":"Proceedings of the 2nd International Conference on Learning Representations (ICLR\u201914)","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. 
In Proceedings of the 2nd International Conference on Learning Representations (ICLR\u201914)."},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.5555\/3327757.3327896"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.1145\/1968.1972"},{"key":"e_1_3_1_84_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.696"},{"key":"e_1_3_1_85_2","first-page":"1689","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Xiao Huang","year":"2015","unstructured":"Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. 2015. Is feature selection secure against training data poisoning? In Proceedings of the International Conference on Machine Learning. PMLR, 1689\u20131698."},{"key":"e_1_3_1_86_2","first-page":"870","volume-title":"Proceedings of the 20th European Conference on Artificial Intelligence (ECAI\u201912)","author":"Xiao Han","year":"2012","unstructured":"Han Xiao, Huang Xiao, and Claudia Eckert. 2012. Adversarial label flips attack on support vector machines. In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI\u201912). IOS Press, 870\u2013875."},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11633-019-1211-x"},{"key":"e_1_3_1_88_2","unstructured":"Chaofei Yang Qing Wu Hai Li and Yiran Chen. 2017. Generative poisoning attack method against neural networks. Retrieved from https:\/\/arXiv:1703.01340."},{"key":"e_1_3_1_89_2","first-page":"5650","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Yin Dong","year":"2018","unstructured":"Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the International Conference on Machine Learning. 
PMLR, 5650\u20135659."},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2886017"},{"key":"e_1_3_1_91_2","doi-asserted-by":"publisher","DOI":"10.1109\/TrustCom\/BigDataSE.2019.00057"},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.1109\/CISS.2017.7926118"},{"key":"e_1_3_1_93_2","first-page":"201","volume-title":"Learning for Dynamics and Control","author":"Zhang Xuezhou","year":"2020","unstructured":"Xuezhou Zhang, Xiaojin Zhu, and Laurent Lessard. 2020. Online data poisoning attacks. In Learning for Dynamics and Control. PMLR, 201\u2013210."},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.2986205"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.5555\/3172077.3172440"},{"key":"e_1_3_1_96_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11838"},{"key":"e_1_3_1_97_2","first-page":"7614","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Zhu Chen","year":"2019","unstructured":"Chen Zhu, W. Ronny Huang, Hengduo Li, Gavin Taylor, Christoph Studer, and Tom Goldstein. 2019. Transferable clean-label poisoning attacks on deep neural nets. In Proceedings of the International Conference on Machine Learning. 
PMLR, 7614\u20137623."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3538707","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3538707","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:09:38Z","timestamp":1750183778000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3538707"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,15]]},"references-count":96,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2023,7,31]]}},"alternative-id":["10.1145\/3538707"],"URL":"https:\/\/doi.org\/10.1145\/3538707","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,15]]},"assertion":[{"value":"2021-12-15","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-05-13","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}