{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T15:51:25Z","timestamp":1761061885755,"version":"3.41.0"},"reference-count":277,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2024,4,9]],"date-time":"2024-04-09T00:00:00Z","timestamp":1712620800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Centre for Smart Analytics, Federation University Australia"},{"name":"ARC DECRA","award":["DE210101458"],"award-info":[{"award-number":["DE210101458"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,7,31]]},"abstract":"<jats:p>Data Poisoning Attacks (DPA) represent a sophisticated technique aimed at distorting the training data of machine learning models, thereby manipulating their behavior. This process is not only technically intricate but also frequently dependent on the characteristics of the victim (target) model. To protect the victim model, the vast number of DPAs and their variants make defenders rely on trial and error techniques to find the ultimate defence solution which is exhausting and very time-consuming. This paper comprehensively summarises the latest research on DPAs and defences, proposes a DPA characterizing model to help investigate adversary attacks dependency on the victim model, and builds a DPA roadmap as the path navigating to defence. Having the roadmap as an applied framework that contains DPA families sharing the same features and mathematical computations will equip the defenders with a powerful tool to quickly find the ultimate defences, away from the exhausting trial and error methodology. 
The roadmap, validated by use cases, has been made available as an open-access platform, enabling other researchers to add new DPAs and update the map continuously.<\/jats:p>","DOI":"10.1145\/3627536","type":"journal-article","created":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T22:01:53Z","timestamp":1698444113000},"page":"1-39","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7646-7385","authenticated-orcid":false,"given":"Tarek","family":"Chaalan","sequence":"first","affiliation":[{"name":"Internet Commerce Security Lab and Center for Smart Analytics, Federation University, Ballarat, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4651-4990","authenticated-orcid":false,"given":"Shaoning","family":"Pang","sequence":"additional","affiliation":[{"name":"Internet Commerce Security Lab and Center for Smart Analytics, Federation University, Ballarat, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3748-0277","authenticated-orcid":false,"given":"Joarder","family":"Kamruzzaman","sequence":"additional","affiliation":[{"name":"Internet Commerce Security Lab and Center for Smart Analytics, Federation University, Ballarat, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7963-2446","authenticated-orcid":false,"given":"Iqbal","family":"Gondal","sequence":"additional","affiliation":[{"name":"School of Computing Technology, STEM College, RMIT University, Royal Melbourne Institute of Technology, Melbourne, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7353-4159","authenticated-orcid":false,"given":"Xuyun","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Computing, Macquarie University, Sydney, 
Australia"}]}],"member":"320","published-online":{"date-parts":[[2024,4,9]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"[n. d.]. Tesla denies car was driverless in fatal crash that killed two men in the United States - ABC News. https:\/\/www.abc.net.au\/news\/2021-04-28"},{"key":"e_1_3_2_3_2","unstructured":"2016. Tay: Microsoft issues apology over racist chatbot fiasco. (2016). https:\/\/www.bbc.com\/news\/technology-35902104"},{"key":"e_1_3_2_4_2","article-title":"Robustness to adversarial examples through an ensemble of specialists","author":"Abbasi Mahdieh","year":"2017","unstructured":"Mahdieh Abbasi and Christian Gagn\u00e9. 2017. Robustness to adversarial examples through an ensemble of specialists. arXiv preprint arXiv:1702.06856 (2017).","journal-title":"arXiv preprint arXiv:1702.06856"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1002\/wics.101"},{"issue":"5","key":"e_1_3_2_6_2","first-page":"2106","article-title":"Image transformation-based defense against adversarial perturbation on deep learning models","volume":"18","author":"Agarwal Akshay","year":"2020","unstructured":"Akshay Agarwal, Richa Singh, Mayank Vatsa, and Nalini Ratha. 2020. Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Transactions on Dependable and Secure Computing 18, 5 (2020), 2106\u20132121.","journal-title":"IEEE Transactions on Dependable and Secure Computing"},{"key":"e_1_3_2_7_2","unstructured":"Hojjat Aghakhani Lea Sch\u00f6nherr Thorsten Eisenhofer Dorothea Kolossa Thorsten Holz Christopher Kruegel and Giovanni Vigna. 2021. VenoMave: Targeted Poisoning against Speech Recognition. arxiv:2010.10682 [cs.SD]."},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2807385"},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Naveed Akhtar and Ajmal Mian. 2018. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. 
https:\/\/arxiv.org\/abs\/1801.00554","DOI":"10.1109\/ACCESS.2018.2807385"},{"key":"e_1_3_2_10_2","volume-title":"International Conference on Learning Representations","author":"Al-Dujaili Abdullah","year":"2020","unstructured":"Abdullah Al-Dujaili and Una-May O\u2019Reilly. 2020. Sign bits are all you need for black-box attacks. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=SygW0TEFwH"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","unstructured":"Zeyuan Allen-Zhu Faeze Ebrahimian Jerry Li and Dan Alistarh. 2020. Byzantine-Resilient Non-Convex Stochastic Gradient Descent. DOI:10.48550\/ARXIV.2012.14368","DOI":"10.48550\/ARXIV.2012.14368"},{"key":"e_1_3_2_12_2","article-title":"Did you hear that? Adversarial examples against automatic speech recognition","author":"Alzantot Moustafa","year":"2018","unstructured":"Moustafa Alzantot, Bharathan Balaji, and Mani Srivastava. 2018. Did you hear that? Adversarial examples against automatic speech recognition. arXiv preprint arXiv:1801.00554 (2018).","journal-title":"arXiv preprint arXiv:1801.00554"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TETCI.2019.2961157"},{"key":"e_1_3_2_15_2","volume-title":"CVPR","author":"Arnab Anurag","year":"2018","unstructured":"Anurag Arnab, Ondrej Miksik, and Philip H. S. Torr. 2018. On the robustness of semantic segmentation models to adversarial attacks. In CVPR."},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3380446.3430628"},{"key":"e_1_3_2_17_2","article-title":"The effects of JPEG and JPEG2000 compression on attacks using adversarial examples","author":"Aydemir Ayse Elvan","year":"2018","unstructured":"Ayse Elvan Aydemir, Alptekin Temizel, and Tugba Taskaya Temizel. 2018. The effects of JPEG and JPEG2000 compression on attacks using adversarial examples. 
arXiv preprint arXiv:1803.10418 (2018).","journal-title":"arXiv preprint arXiv:1803.10418"},{"key":"e_1_3_2_18_2","unstructured":"Shumeet Baluja and Ian Fischer. 2017. Adversarial Transformation Networks: Learning to Generate Adversarial Examples. arxiv:1703.09387 [cs.NE]."},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/CISS.2018.8362326"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01258-8_10"},{"key":"e_1_3_2_21_2","article-title":"A survey of black-box adversarial attacks on computer vision models","author":"Bhambri Siddhant","year":"2019","unstructured":"Siddhant Bhambri, Sumanyu Muku, Avinash Tulasi, and Arun Balaji Buduru. 2019. A survey of black-box adversarial attacks on computer vision models. arXiv preprint arXiv:1912.01667 (2019).","journal-title":"arXiv preprint arXiv:1912.01667"},{"key":"e_1_3_2_22_2","unstructured":"Anand Bhattad Min Jin Chong Kaizhao Liang Bo Li and D. A. Forsyth. 2020. Unrestricted Adversarial Examples via Semantic Manipulation. arxiv:1904.06347 [cs.CV]. (2020)."},{"key":"e_1_3_2_23_2","doi-asserted-by":"crossref","unstructured":"Pavol Bielik Veselin Raychev and Martin Vechev. 2017. Learning a static analyzer from data. (2017) 233\u2013253.","DOI":"10.1007\/978-3-319-63387-9_12"},{"key":"e_1_3_2_24_2","first-page":"42","volume-title":"Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR)","author":"Biggio Battista","year":"2014","unstructured":"Battista Biggio, Samuel Rota Bul\u00f2, Ignazio Pillai, Michele Mura, Eyasu Zemene Mequanint, Marcello Pelillo, and Fabio Roli. 2014. Poisoning complete-linkage hierarchical clustering. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). 
Springer, 42\u201352."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"e_1_3_2_26_2","first-page":"97","volume-title":"Asian Conference on Machine Learning","author":"Biggio Battista","year":"2011","unstructured":"Battista Biggio, Blaine Nelson, and Pavel Laskov. 2011. Support vector machines under adversarial label noise. In Asian Conference on Machine Learning. PMLR, 97\u2013112."},{"key":"e_1_3_2_27_2","unstructured":"Battista Biggio Blaine Nelson and Pavel Laskov. 2013. Poisoning Attacks against Support Vector Machines. arxiv:1206.6389 [cs.LG]."},{"key":"e_1_3_2_28_2","unstructured":"Franziska Boenisch Philip Sperl and Konstantin B\u00f6ttinger. 2021. Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning. arxiv:2105.07985 [cs.CR]. https:\/\/dl.acm.org\/doi\/10.1016\/j.eswa.2014.09.054"},{"key":"e_1_3_2_29_2","series-title":"Proceedings of the 37th International Conference on Machine Learning","first-page":"1014","volume":"119","author":"Boopathy Akhilan","year":"2020","unstructured":"Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, and Luca Daniel. 2020. Proper network interpretability helps adversarial robustness in classification. In Proceedings of the 37th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 119), Hal Daum\u00e9 III and Aarti Singh (Eds.). PMLR, 1014\u20131023. https:\/\/proceedings.mlr.press\/v119\/boopathy20a.html"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2014.09.054"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","unstructured":"Ajay Kumar Boyat and Brijendra Kumar Joshi. 2015. A review paper: Noise models in digital image processing. (2015). DOI:10.48550\/ARXIV.1505.03489","DOI":"10.48550\/ARXIV.1505.03489"},{"key":"e_1_3_2_32_2","unstructured":"Wieland Brendel Jonas Rauber and Matthias Bethge. 2018. 
Decision-Based Adversarial Attacks: Reliable Attacks against Black-Box Machine Learning Models. arxiv:1712.04248 [stat.ML]"},{"key":"e_1_3_2_33_2","volume-title":"International Conference on Learning Representations","author":"Buckman Jacob","year":"2018","unstructured":"Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. 2018. Thermometer encoding: One hot way to resist adversarial examples. In International Conference on Learning Representations."},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3134600.3134606"},{"key":"e_1_3_2_35_2","article-title":"Defensive distillation is not robust to adversarial examples","author":"Carlini Nicholas","year":"2016","unstructured":"Nicholas Carlini and David Wagner. 2016. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311 (2016).","journal-title":"arXiv preprint arXiv:1607.04311"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},{"key":"e_1_3_2_37_2","doi-asserted-by":"crossref","unstructured":"Nicholas Carlini and David Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. arxiv:1608.04644 [cs.CR].","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-018-5853-4"},{"key":"e_1_3_2_39_2","article-title":"Adversarial attacks and defences: A survey","author":"Chakraborty Anirban","year":"2018","unstructured":"Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 (2018).","journal-title":"arXiv preprint arXiv:1810.00069"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1049\/cit2.12028"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","unstructured":"Tanmay Chakraborty Utkarsh Trehan Khawla Mallat and Jean-Luc Dugelay. 2022. Generalizing Adversarial Explanations with Grad-CAM. 
DOI:10.48550\/ARXIV.2204.05427","DOI":"10.48550\/ARXIV.2204.05427"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","unstructured":"Alvin Chan Lei Ma Felix Juefei-Xu Xiaofei Xie Yang Liu and Yew Soon Ong. 2018. Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer. DOI:10.48550\/ARXIV.1809.02444","DOI":"10.48550\/ARXIV.1809.02444"},{"key":"e_1_3_2_43_2","article-title":"Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer","author":"Chang Hongyan","year":"2019","unstructured":"Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr. 2019. Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer. arXiv preprint arXiv:1912.11279 (2019).","journal-title":"arXiv preprint arXiv:1912.11279"},{"key":"e_1_3_2_44_2","unstructured":"Zhaohui Che Ali Borji Guangtao Zhai Suiyi Ling Jing Li and Patrick Le Callet. 2019. A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories. arxiv:1911.07682 [cs.LG]."},{"key":"e_1_3_2_45_2","unstructured":"Hongge Chen Huan Zhang Duane Boning and Cho-Jui Hsieh. 2019. Robust Decision Trees against Adversarial Examples. arxiv:1902.10660 [cs.LG]."},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2020.3031058"},{"key":"e_1_3_2_47_2","article-title":"ReabsNet: Detecting and revising adversarial examples","author":"Chen Jiefeng","year":"2017","unstructured":"Jiefeng Chen, Zihang Meng, Changtian Sun, Wei Tang, and Yinglun Zhu. 2017. ReabsNet: Detecting and revising adversarial examples. 
arXiv preprint arXiv:1712.08250 (2017).","journal-title":"arXiv preprint arXiv:1712.08250"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},{"key":"e_1_3_2_49_2","article-title":"Gradient band-based adversarial training for generalized attack immunity of A3C path finding","author":"Chen Tong","year":"2018","unstructured":"Tong Chen, Wenjia Niu, Yingxiao Xiang, Xiaoxuan Bai, Jiqiang Liu, Zhen Han, and Gang Li. 2018. Gradient band-based adversarial training for generalized attack immunity of A3C path finding. arXiv preprint arXiv:1807.06752 (2018).","journal-title":"arXiv preprint arXiv:1807.06752"},{"key":"e_1_3_2_50_2","unstructured":"Xihao Chen Jingya Yu Li Chen Shaoqun Zeng Xiuli Liu and Shenghua Cheng. 2019. Multi-stage domain adversarial style reconstruction for cytopathological image stain normalization."},{"key":"e_1_3_2_51_2","first-page":"1","article-title":"Perturbation-seeking generative adversarial networks: A defense framework for remote sensing image scene classification","volume":"60","author":"Cheng Gong","year":"2021","unstructured":"Gong Cheng, Xuxiang Sun, Ke Li, Lei Guo, and Junwei Han. 2021. Perturbation-seeking generative adversarial networks: A defense framework for remote sensing image scene classification. IEEE Transactions on Geoscience and Remote Sensing 60 (2021), 1\u201311.","journal-title":"IEEE Transactions on Geoscience and Remote Sensing"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5767"},{"key":"e_1_3_2_53_2","article-title":"Improving black-box adversarial attacks with a transfer-based prior","volume":"32","author":"Cheng Shuyu","year":"2019","unstructured":"Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. 2019. Improving black-box adversarial attacks with a transfer-based prior. 
Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_54_2","article-title":"Pasadena: Perceptually aware and stealthy adversarial denoise attack","author":"Cheng Yupeng","year":"2021","unstructured":"Yupeng Cheng, Qing Guo, Felix Juefei-Xu, Wei Feng, Shang-Wei Lin, Weisi Lin, and Yang Liu. 2021. Pasadena: Perceptually aware and stealthy adversarial denoise attack. IEEE Transactions on Multimedia (2021).","journal-title":"IEEE Transactions on Multimedia"},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","unstructured":"Ping-Yeh Chiang Jonas Geiping Micah Goldblum Tom Goldstein Renkun Ni Steven Reich and Ali Shafahi. 2019. WITCHcraft: Efficient PGD attacks with random step size. DOI:10.48550\/ARXIV.1911.07989","DOI":"10.48550\/ARXIV.1911.07989"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jisa.2019.102420"},{"key":"e_1_3_2_57_2","unstructured":"Moustapha Cisse Yossi Adi Natalia Neverova and Joseph Keshet. 2017. Houdini: Fooling Deep Structured Prediction Models. arxiv:1707.05373 [stat.ML]"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01446"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2008.11"},{"key":"e_1_3_2_60_2","unstructured":"Francesco Croce and Matthias Hein. 2020. Minimally Distorted Adversarial Examples with a Fast Adaptive Boundary Attack. 119 (2020) 2196\u20132205. https:\/\/proceedings.mlr.press\/v119\/croce20a.html"},{"key":"e_1_3_2_61_2","unstructured":"Francesco Croce and Matthias Hein. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arxiv:2003.01690 [cs.LG]."},{"key":"e_1_3_2_62_2","unstructured":"Francesco Croce and Matthias Hein. 2021. Mind the box: \\(l_1\\) -APGD for sparse adversarial attacks on image classifiers. 
arxiv:2103.01208 [cs.LG]."},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611974010.27"},{"key":"e_1_3_2_64_2","article-title":"Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression","author":"Das Nilaksh","year":"2017","unstructured":"Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, and Duen Horng Chau. 2017. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900 (2017).","journal-title":"arXiv preprint arXiv:1705.02900"},{"key":"e_1_3_2_65_2","unstructured":"Nilaksh Das Madhuri Shanbhogue Shang-Tse Chen Fred Hohman Siwei Li Li Chen Michael E. Kounavis and Duen Horng Chau. 2018. SHIELD: Fast Practical Defense and Vaccination for Deep Learning using JPEG Compression. (2018)."},{"key":"e_1_3_2_66_2","unstructured":"Shankar A. Deka Du\u0161an M. Stipanovi\u0107 and Claire J. Tomlin. 2020. Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks. arxiv:2009.02874 [cs.LG]."},{"key":"e_1_3_2_67_2","unstructured":"Ambra Demontis Marco Melis Maura Pintor Matthew Jagielski Battista Biggio Alina Oprea Cristina Nita-Rotaru and Fabio Roli. 2019. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. arxiv:1809.02861 [cs.LG]"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP40778.2020.9191288"},{"key":"e_1_3_2_69_2","unstructured":"Guneet S. Dhillon Kamyar Azizzadenesheli Zachary C. Lipton Jeremy Bernstein Jean Kossaifi Aran Khanna and Anima Anandkumar. 2018. Stochastic Activation Pruning for Robust Adversarial Defense. arxiv:1803.01442 [cs.LG]."},{"key":"e_1_3_2_70_2","first-page":"1596","volume-title":"International Conference on Machine Learning","author":"Diakonikolas Ilias","year":"2019","unstructured":"Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. 2019. 
Sever: A robust meta-algorithm for stochastic optimization. In International Conference on Machine Learning. PMLR, 1596\u20131606."},{"key":"e_1_3_2_71_2","first-page":"1","volume-title":"International Workshop on Multiple Classifier Systems","author":"Dietterich Thomas G.","year":"2000","unstructured":"Thomas G. Dietterich. 2000. Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems. Springer, 1\u201315."},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2019.00019"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","unstructured":"Brian Dolhansky and Cristian Canton Ferrer. 2020. Adversarial collision attacks on image hashing functions. DOI:10.48550\/ARXIV.2011.09473","DOI":"10.48550\/ARXIV.2011.09473"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00957"},{"key":"e_1_3_2_75_2","doi-asserted-by":"crossref","unstructured":"Yinpeng Dong Tianyu Pang Hang Su and Jun Zhu. 2019. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. arxiv:1904.02884 [cs.CV].","DOI":"10.1109\/CVPR.2019.00444"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.amc.2013.03.119"},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","unstructured":"Abhimanyu Dubey Laurens van der Maaten Zeki Yalniz Yixuan Li and Dhruv Mahajan. 2019. Defense against Adversarial Images using Web-Scale Nearest-Neighbor Search. DOI:10.48550\/ARXIV.1903.01612","DOI":"10.48550\/ARXIV.1903.01612"},{"key":"e_1_3_2_78_2","article-title":"HotFlip: White-box adversarial examples for text classification","author":"Ebrahimi Javid","year":"2017","unstructured":"Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. HotFlip: White-box adversarial examples for text classification. 
arXiv preprint arXiv:1712.06751 (2017).","journal-title":"arXiv preprint arXiv:1712.06751"},{"key":"e_1_3_2_79_2","unstructured":"Logan Engstrom Brandon Tran Dimitris Tsipras Ludwig Schmidt and Aleksander Madry. 2018. A rotation and a translation suffice: Fooling CNNs with simple transformations. (2018)."},{"key":"e_1_3_2_80_2","unstructured":"Okwudili M. Ezeme. 2020. Anomaly detection in kernel-level process events using machine learning-based context analysis. (2020)."},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1186\/s42492-019-0016-7"},{"key":"e_1_3_2_82_2","first-page":"35","volume-title":"Computer Vision \u2013 ECCV 2020","author":"Fan Yanbo","year":"2020","unstructured":"Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, and Yujiu Yang. 2020. Sparse Adversarial Attack via Perturbation Factorization. In Computer Vision \u2013 ECCV 2020, Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (Eds.). Springer International Publishing, Cham, 35\u201350."},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58542-6_3"},{"key":"e_1_3_2_84_2","unstructured":"Reuben Feinman Ryan R. Curtin Saurabh Shintre and Andrew B. Gardner. 2017. Detecting Adversarial Samples from Artifacts. arxiv:1703.00410 [stat.ML]"},{"key":"e_1_3_2_85_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV45572.2020.9093310"},{"key":"e_1_3_2_86_2","volume-title":"Advances in Neural Information Processing Systems","author":"Fowl Liam H.","year":"2021","unstructured":"Liam H. Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, and Tom Goldstein. 2021. Adversarial Examples Make Strong Poisons. In Advances in Neural Information Processing Systems, A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (Eds.). 
https:\/\/openreview.net\/forum?id=DE8MOQIgFTK"},{"key":"e_1_3_2_87_2","doi-asserted-by":"publisher","DOI":"10.1198\/016214502760047131"},{"key":"e_1_3_2_88_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_2_89_2","doi-asserted-by":"publisher","DOI":"10.1109\/SPW.2018.00016"},{"key":"e_1_3_2_90_2","article-title":"Backdoor attacks and countermeasures on deep learning: A comprehensive review","author":"Gao Yansong","year":"2020","unstructured":"Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, and Hyoungshick Kim. 2020. Backdoor attacks and countermeasures on deep learning: A comprehensive review. arXiv preprint arXiv:2007.10760 (2020).","journal-title":"arXiv preprint arXiv:2007.10760"},{"key":"e_1_3_2_91_2","unstructured":"Yue Gao and Kassem Fawaz. 2021. Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers. arxiv:2104.08690 [cs.LG]."},{"key":"e_1_3_2_92_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.498"},{"key":"e_1_3_2_93_2","article-title":"Machine Learning Approaches for Amharic Parts-of-speech Tagging","author":"Gashaw Ibrahim","year":"2020","unstructured":"Ibrahim Gashaw and H. L. Shashirekha. 2020. Machine Learning Approaches for Amharic Parts-of-speech Tagging. arXiv preprint arXiv:2001.03324 (2020).","journal-title":"arXiv preprint arXiv:2001.03324"},{"key":"e_1_3_2_94_2","doi-asserted-by":"crossref","unstructured":"Zoubin Ghahramani. 2003. Unsupervised learning. (2003) 72\u2013112.","DOI":"10.1007\/978-3-540-28650-9_5"},{"key":"e_1_3_2_95_2","unstructured":"Amin Ghiasi Ali Shafahi and Tom Goldstein. 2020. Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates. 
https:\/\/arxiv.org\/abs\/2012.10544"},{"key":"e_1_3_2_96_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-54978-1_17"},{"key":"e_1_3_2_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3162397"},{"key":"e_1_3_2_98_2","article-title":"Adversarial and clean data are not twins","author":"Gong Zhitao","year":"2017","unstructured":"Zhitao Gong, Wenlu Wang, and Wei-Shinn Ku. 2017. Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960 (2017).","journal-title":"arXiv preprint arXiv:1704.04960"},{"key":"e_1_3_2_99_2","article-title":"Explaining and harnessing adversarial examples","author":"Goodfellow Ian J.","year":"2014","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).","journal-title":"arXiv preprint arXiv:1412.6572"},{"key":"e_1_3_2_100_2","unstructured":"Sven Gowal Krishnamurthy Dvijotham Robert Stanforth Rudy Bunel Chongli Qin Jonathan Uesato Relja Arandjelovic Timothy Mann and Pushmeet Kohli. 2019. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models. arxiv:1810.12715 [cs.LG]."},{"key":"e_1_3_2_101_2","unstructured":"Kathrin Grosse Praveen Manoharan Nicolas Papernot Michael Backes and Patrick McDaniel. 2017. On the (Statistical) Detection of Adversarial Examples. arxiv:1702.06280 [cs.CR]."},{"key":"e_1_3_2_102_2","article-title":"Simple black-box adversarial attacks","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q. Weinberger. 2019. Simple black-box adversarial attacks. arXiv preprint arXiv:1905.07121 (2019).","journal-title":"arXiv preprint arXiv:1905.07121"},{"key":"e_1_3_2_103_2","first-page":"85","article-title":"Backpropagating linearly improves transferability of adversarial examples","volume":"33","author":"Guo Yiwen","year":"2020","unstructured":"Yiwen Guo, Qizhang Li, and Hao Chen. 
2020. Backpropagating linearly improves transferability of adversarial examples. Advances in Neural Information Processing Systems 33 (2020), 85\u201395.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_104_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-020-10379-6"},{"key":"e_1_3_2_105_2","unstructured":"Hamed Hassani Mahdi Soltanolkotabi and Amin Karbasi. 2017. Gradient Methods for Submodular Maximization. arxiv:1708.03949 [cs.LG]."},{"key":"e_1_3_2_106_2","article-title":"On the effectiveness of mitigating data poisoning attacks with gradient shaping","author":"Hong Sanghyun","year":"2020","unstructured":"Sanghyun Hong, Varun Chandrasekaran, Yi\u011fitcan Kaya, Tudor Dumitra\u015f, and Nicolas Papernot. 2020. On the effectiveness of mitigating data poisoning attacks with gradient shaping. arXiv preprint arXiv:2002.11497 (2020).","journal-title":"arXiv preprint arXiv:2002.11497"},{"key":"e_1_3_2_107_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i9.16955"},{"key":"e_1_3_2_108_2","unstructured":"Sandy Huang Nicolas Papernot Ian Goodfellow Yan Duan and Pieter Abbeel. 2017. Adversarial Attacks on Neural Network Policies. arxiv:1702.02284 [cs.LG]."},{"key":"e_1_3_2_109_2","unstructured":"Zhichao Huang and Tong Zhang. 2020. Black-Box Adversarial Attack with Transferable Model-based Embedding. arxiv:1911.07140 [cs.LG]."},{"key":"e_1_3_2_110_2","unstructured":"Andrew Ilyas Logan Engstrom Anish Athalye and Jessy Lin. 2018. Black-box Adversarial Attacks with Limited Queries and Information. arxiv:1804.08598 [cs.CV]."},{"key":"e_1_3_2_111_2","unstructured":"Andrew Ilyas Logan Engstrom and Aleksander Madry. 2019. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors. 
arxiv:1807.07978 [stat.ML]"},{"key":"e_1_3_2_112_2","volume-title":"29th USENIX Security Symposium (USENIX Security 20)","author":"Jagielski Matthew","year":"2020","unstructured":"Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. 2020. High Accuracy and High Fidelity Extraction of Neural Networks. In 29th USENIX Security Symposium (USENIX Security 20)."},{"key":"e_1_3_2_113_2","doi-asserted-by":"crossref","unstructured":"Matthew Jagielski Alina Oprea Battista Biggio Chang Liu Cristina Nita-Rotaru and Bo Li. 2018. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. arxiv:1804.00308 [cs.CR].","DOI":"10.1109\/SP.2018.00057"},{"key":"e_1_3_2_114_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01258-8_32"},{"key":"e_1_3_2_115_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00283"},{"key":"e_1_3_2_116_2","article-title":"Caffe: Convolutional Architecture for Fast Feature Embedding","author":"Jia Yangqing","year":"2014","unstructured":"Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional Architecture for Fast Feature Embedding. arXiv preprint arXiv:1408.5093 (2014).","journal-title":"arXiv preprint arXiv:1408.5093"},{"key":"e_1_3_2_117_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6311"},{"key":"e_1_3_2_118_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00487"},{"key":"e_1_3_2_119_2","doi-asserted-by":"publisher","unstructured":"Vishaal Munusamy Kabilan Brandon Morris and Anh Nguyen. 2018. VectorDefense: Vectorization as a Defense to Adversarial Examples. DOI:10.48550\/ARXIV.1804.08529","DOI":"10.48550\/ARXIV.1804.08529"},{"key":"e_1_3_2_120_2","unstructured":"Harini Kannan Alexey Kurakin and Ian Goodfellow. 2018. 
Adversarial Logit Pairing. arxiv:1803.06373 [cs.LG]."},{"key":"e_1_3_2_121_2","first-page":"2387","volume-title":"International Conference on Machine Learning","author":"Kantchelian Alex","year":"2016","unstructured":"Alex Kantchelian, J. Doug Tygar, and Anthony Joseph. 2016. Evasion and hardening of tree ensemble classifiers. In International Conference on Machine Learning. PMLR, 2387\u20132396."},{"key":"e_1_3_2_122_2","doi-asserted-by":"crossref","unstructured":"Kamran Khan Saif Ur Rehman Kamran Aziz Simon Fong and Sababady Sarasvady. 2014. DBSCAN: Past present and future. (2014) 232\u2013238.","DOI":"10.1109\/ICADIWT.2014.6814687"},{"key":"e_1_3_2_123_2","first-page":"1885","volume-title":"International Conference on Machine Learning","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning. PMLR, 1885\u20131894."},{"key":"e_1_3_2_124_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-021-06119-y"},{"key":"e_1_3_2_125_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1155\/2021\/4907754","article-title":"A survey on adversarial attack in the age of artificial intelligence","volume":"2021","author":"Kong Zixiao","year":"2021","unstructured":"Zixiao Kong, Jingfeng Xue, Yong Wang, Lu Huang, Zequn Niu, and Feng Li. 2021. A survey on adversarial attack in the age of artificial intelligence. Wireless Communications and Mobile Computing 2021 (2021), 1\u201322.","journal-title":"Wireless Communications and Mobile Computing"},{"key":"e_1_3_2_126_2","first-page":"981","volume-title":"2018 IEEE 16th Intl. Conf. on Dependable, Autonomic and Secure Computing, 16th Intl. Conf. on Pervasive Intelligence and Computing, 4th Intl. Conf. 
on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC\/PiCom\/DataCom\/CyberSciTech)","author":"Kontopoulos Ioannis","year":"2018","unstructured":"Ioannis Kontopoulos, Giannis Spiliopoulos, Dimitrios Zissis, Konstantinos Chatzikokolakis, and Alexander Artikis. 2018. Countering real-time stream poisoning: An architecture for detecting vessel spoofing in streams of AIS data. In 2018 IEEE 16th Intl. Conf. on Dependable, Autonomic and Secure Computing, 16th Intl. Conf. on Pervasive Intelligence and Computing, 4th Intl. Conf. on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC\/PiCom\/DataCom\/CyberSciTech). IEEE, 981\u2013986."},{"key":"e_1_3_2_127_2","unstructured":"Volodymyr Kuleshov Shantanu Thakoor Tingfung Lau and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. (2018)."},{"key":"e_1_3_2_128_2","article-title":"Adversarial machine learning at scale","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016).","journal-title":"arXiv preprint arXiv:1611.01236"},{"key":"e_1_3_2_129_2","unstructured":"Cassidy Laidlaw and Soheil Feizi. 2019. Functional Adversarial Attacks. arxiv:1906.00001 [cs.LG]."},{"key":"e_1_3_2_130_2","unstructured":"Cassidy Laidlaw Sahil Singla and Soheil Feizi. 2021. Perceptual Adversarial Robustness: Defense against Unseen Threat Models. arxiv:2006.12655 [cs.LG]."},{"key":"e_1_3_2_131_2","doi-asserted-by":"publisher","DOI":"10.1148\/radiol.2019190613"},{"key":"e_1_3_2_132_2","doi-asserted-by":"crossref","unstructured":"Alfred Laugros Alice Caplier and Matthieu Ospici. 2019. 
Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes? arxiv:1909.02436 [cs.LG].","DOI":"10.1109\/ICCVW.2019.00134"},{"key":"e_1_3_2_133_2","doi-asserted-by":"crossref","unstructured":"Mathias Lecuyer Vaggelis Atlidakis Roxana Geambasu Daniel Hsu and Suman Jana. 2019. Certified Robustness to Adversarial Examples with Differential Privacy. arxiv:1802.03471 [stat.ML]","DOI":"10.1109\/SP.2019.00044"},{"key":"e_1_3_2_134_2","doi-asserted-by":"publisher","unstructured":"Qi Lei Lingfei Wu Pin-Yu Chen Alexandros G. Dimakis Inderjit S. Dhillon and Michael Witbrock. 2018. Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification. DOI:10.48550\/ARXIV.1812.00151","DOI":"10.48550\/ARXIV.1812.00151"},{"key":"e_1_3_2_135_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-37337-5_25"},{"key":"e_1_3_2_136_2","article-title":"Adversarial Attacks Defense Method Based on Multiple Filtering and Image Rotation","volume":"2022","author":"Li Feng","year":"2022","unstructured":"Feng Li, Xuehui Du, and Liu Zhang. 2022. Adversarial Attacks Defense Method Based on Multiple Filtering and Image Rotation. Discrete Dynamics in Nature and Society 2022 (2022).","journal-title":"Discrete Dynamics in Nature and Society"},{"key":"e_1_3_2_137_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3109287"},{"key":"e_1_3_2_138_2","article-title":"TextBugger: Generating adversarial text against real-world applications","author":"Li Jinfeng","year":"2018","unstructured":"Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271 (2018).","journal-title":"arXiv preprint arXiv:1812.05271"},{"key":"e_1_3_2_139_2","doi-asserted-by":"publisher","unstructured":"Xiang Li and Shihao Ji. 2021. Generative Dynamic Patch Attack. 
DOI:10.48550\/ARXIV.2111.04266","DOI":"10.48550\/ARXIV.2111.04266"},{"key":"e_1_3_2_140_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0031-3203(02)00060-2"},{"key":"e_1_3_2_141_2","article-title":"ML Attack Models: Adversarial Attacks and Data Poisoning Attacks","author":"Lin Jing","year":"2021","unstructured":"Jing Lin, Long Dang, Mohamed Rahouti, and Kaiqi Xiong. 2021. ML Attack Models: Adversarial Attacks and Data Poisoning Attacks. arXiv preprint arXiv:2112.02797 (2021).","journal-title":"arXiv preprint arXiv:2112.02797"},{"key":"e_1_3_2_142_2","unstructured":"Tsung-Yu Lin Aruni RoyChowdhury and Subhransu Maji. 2015. Bilinear CNN Models for Fine-Grained Visual Recognition. (December2015)."},{"key":"e_1_3_2_143_2","unstructured":"Yuping Lin Kasra Ahmadi K. A. and Hui Jiang. 2019. Bandlimiting Neural Networks against Adversarial Attacks. arxiv:1905.12797 [cs.LG]."},{"key":"e_1_3_2_144_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220027"},{"key":"e_1_3_2_145_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2805680"},{"key":"e_1_3_2_146_2","unstructured":"Sijia Liu Pin-Yu Chen Xiangyi Chen and Mingyi Hong. [n. d.]. signSGD via Zeroth-Order Oracle. ([n. d.]). https:\/\/www.researchgate.net\/publication\/339404260_Adversarial_Attacks_on_Spoofing_Countermeasures_of_Automatic_Speaker_Verification"},{"key":"e_1_3_2_147_2","volume-title":"International Conference on Learning Representations","author":"Liu Sijia","year":"2019","unstructured":"Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong. 2019. signSGD via Zeroth-Order Oracle. In International Conference on Learning Representations. 
https:\/\/ieeexplore.ieee.org\/document\/9294026"},{"key":"e_1_3_2_148_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASRU46091.2019.9003763"},{"key":"e_1_3_2_149_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3045078"},{"key":"e_1_3_2_150_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01468"},{"key":"e_1_3_2_151_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00095"},{"key":"e_1_3_2_152_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.56"},{"key":"e_1_3_2_153_2","doi-asserted-by":"crossref","unstructured":"Keane Lucas Mahmood Sharif Lujo Bauer Michael K. Reiter and Saurabh Shintre. 2021. Malware Makeover: Breaking ML-Based Static Analysis by Modifying Executable Bytes.","DOI":"10.1145\/3433210.3453086"},{"key":"e_1_3_2_154_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2019.23415"},{"key":"e_1_3_2_155_2","article-title":"Characterizing adversarial subspaces using local intrinsic dimensionality","author":"Ma Xingjun","year":"2018","unstructured":"Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, and James Bailey. 2018. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613 (2018).","journal-title":"arXiv preprint arXiv:1801.02613"},{"key":"e_1_3_2_156_2","first-page":"307","volume-title":"ICEIS (1)","author":"Machado Gabriel R.","year":"2019","unstructured":"Gabriel R. Machado, Ronaldo R. Goldschmidt, and Eug\u00eanio Silva. 2019. MultiMagNet: A Non-deterministic Approach based on the Formation of Ensembles for Defending against Adversarial Images. In ICEIS (1). 307\u2013318."},{"key":"e_1_3_2_157_2","unstructured":"Gabriel Resende Machado Eug\u00eanio Silva and Ronaldo Ribeiro Goldschmidt. 2020. Adversarial Machine Learning in Image Classification: A Survey Towards the Defender\u2019s Perspective. 
arxiv:2009.03728 [cs.CV]."},{"key":"e_1_3_2_158_2","unstructured":"Aleksander Madry Aleksandar Makelov Ludwig Schmidt Dimitris Tsipras and Adrian Vladu. 2019. Towards Deep Learning Models Resistant to Adversarial Attacks. arxiv:1706.06083 [stat.ML]. https:\/\/arxiv.org\/abs\/1909.04068"},{"key":"e_1_3_2_159_2","unstructured":"Saeed Mahloujifar and Mohammad Mahmoody. 2019. Can Adversarially Robust Learning Leverage Computational Hardness? 98 (2019) 581\u2013609. https:\/\/proceedings.mlr.press\/v98\/mahloujifar19a.html"},{"key":"e_1_3_2_160_2","first-page":"6640","volume-title":"International Conference on Machine Learning","author":"Maini Pratyush","year":"2020","unstructured":"Pratyush Maini, Eric Wong, and Zico Kolter. 2020. Adversarial robustness against the union of multiple perturbation models. In International Conference on Machine Learning. PMLR, 6640\u20136650."},{"key":"e_1_3_2_161_2","unstructured":"Xiaofeng Mao Yuefeng Chen Shuhui Wang Hang Su Yuan He and Hui Xue. 2020. Composite Adversarial Attacks. arxiv:2012.05434 [cs.CR]."},{"key":"e_1_3_2_162_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01304"},{"key":"e_1_3_2_163_2","first-page":"17347","article-title":"Understanding the limits of unsupervised domain adaptation via data poisoning","volume":"34","author":"Mehra Akshay","year":"2021","unstructured":"Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm. 2021. Understanding the limits of unsupervised domain adaptation via data poisoning. Advances in Neural Information Processing Systems 34 (2021), 17347\u201317359.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_164_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134057"},{"key":"e_1_3_2_165_2","article-title":"On detecting adversarial perturbations","author":"Metzen Jan Hendrik","year":"2017","unstructured":"Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. 2017. On detecting adversarial perturbations. 
arXiv preprint arXiv:1702.04267 (2017).","journal-title":"arXiv preprint arXiv:1702.04267"},{"key":"e_1_3_2_166_2","unstructured":"Laurent Meunier Jamal Atif and Olivier Teytaud. 2019. Yet another but more efficient black-box adversarial attack: Tiling and evolution strategies. arxiv:1910.02244 [cs.LG]."},{"key":"e_1_3_2_167_2","article-title":"Evaluation of momentum diverse input iterative fast gradient sign method (M-DI2-FGSM) based attack method on MCS 2018 adversarial attacks on black box face recognition system","author":"Milton Md. Ashraful Alam","year":"2018","unstructured":"Md. Ashraful Alam Milton. 2018. Evaluation of momentum diverse input iterative fast gradient sign method (M-DI2-FGSM) based attack method on MCS 2018 adversarial attacks on black box face recognition system. arXiv preprint arXiv:1806.08970 (2018).","journal-title":"arXiv preprint arXiv:1806.08970"},{"key":"e_1_3_2_168_2","unstructured":"Seungyong Moon Gaon An and Hyun Oh Song. 2019. Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization. arxiv:1905.06635 [cs.LG]."},{"key":"e_1_3_2_169_2","doi-asserted-by":"crossref","unstructured":"Seyed-Mohsen Moosavi-Dezfooli Alhussein Fawzi Omar Fawzi and Pascal Frossard. 2017. Universal adversarial perturbations. arxiv:1610.08401 [cs.CV].","DOI":"10.1109\/CVPR.2017.17"},{"key":"e_1_3_2_170_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_2_171_2","unstructured":"Seyed-Mohsen Moosavi-Dezfooli Ashish Shrivastava and Oncel Tuzel. 2019. Divide Denoise and Defend against Adversarial Attacks. arxiv:1802.06806 [cs.CV]."},{"key":"e_1_3_2_172_2","article-title":"Fast feature fool: A data independent approach to universal adversarial perturbations","author":"Mopuri Konda Reddy","year":"2017","unstructured":"Konda Reddy Mopuri, Utsav Garg, and R. Venkatesh Babu. 2017. Fast feature fool: A data independent approach to universal adversarial perturbations. 
arXiv preprint arXiv:1707.05572 (2017).","journal-title":"arXiv preprint arXiv:1707.05572"},{"key":"e_1_3_2_173_2","article-title":"TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP","author":"Morris John X.","year":"2020","unstructured":"John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. arXiv preprint arXiv:2005.05909 (2020).","journal-title":"arXiv preprint arXiv:2005.05909"},{"key":"e_1_3_2_174_2","article-title":"Poisoning attacks with generative adversarial nets","author":"Mu\u00f1oz-Gonz\u00e1lez Luis","year":"2019","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, and Emil C. Lupu. 2019. Poisoning attacks with generative adversarial nets. arXiv preprint arXiv:1906.07773 (2019).","journal-title":"arXiv preprint arXiv:1906.07773"},{"key":"e_1_3_2_175_2","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2019.2940533"},{"key":"e_1_3_2_176_2","doi-asserted-by":"crossref","unstructured":"Luis Mu\u00f1oz-Gonz\u00e1lez Battista Biggio Ambra Demontis Andrea Paudice Vasin Wongrassamee Emil C. Lupu and Fabio Roli. 2017. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. arxiv:1708.08689 [cs.LG].","DOI":"10.1145\/3128572.3140451"},{"key":"e_1_3_2_177_2","article-title":"Cascade adversarial machine learning regularized with a unified embedding","author":"Na Taesik","year":"2017","unstructured":"Taesik Na, Jong Hwan Ko, and Saibal Mukhopadhyay. 2017. Cascade adversarial machine learning regularized with a unified embedding. arXiv preprint arXiv:1708.02582 (2017).","journal-title":"arXiv preprint arXiv:1708.02582"},{"key":"e_1_3_2_178_2","unstructured":"Elior Nehemya Yael Mathov Asaf Shabtai and Yuval Elovici. [n. d.]. 
Taking Over the Stock Market: Adversarial Perturbations against Algorithmic Traders. ([n. d.])."},{"key":"e_1_3_2_179_2","first-page":"1","article-title":"Exploiting Machine Learning to Subvert Your Spam Filter.","volume":"8","author":"Nelson Blaine","year":"2008","unstructured":"Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin I. P. Rubinstein, Udam Saini, Charles A. Sutton, J. Doug Tygar, and Kai Xia. 2008. Exploiting Machine Learning to Subvert Your Spam Filter. LEET 8 (2008), 1\u20139.","journal-title":"LEET"},{"key":"e_1_3_2_180_2","article-title":"Adversarial Robustness Toolbox v1.0.0","author":"Nicolae Maria-Irina","year":"2018","unstructured":"Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, et\u00a0al. 2018. Adversarial Robustness Toolbox v1.0.0. arXiv preprint arXiv:1807.01069 (2018).","journal-title":"arXiv preprint arXiv:1807.01069"},{"key":"e_1_3_2_181_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2018.10.315"},{"key":"e_1_3_2_182_2","article-title":"Towards robust detection of adversarial examples","author":"Pang Tianyu","year":"2017","unstructured":"Tianyu Pang, Chao Du, Yinpeng Dong, and Jun Zhu. 2017. Towards robust detection of adversarial examples. arXiv preprint arXiv:1706.00633 (2017).","journal-title":"arXiv preprint arXiv:1706.00633"},{"key":"e_1_3_2_183_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW56347.2022.00027"},{"key":"e_1_3_2_184_2","article-title":"Extending defensive distillation","author":"Papernot Nicolas","year":"2017","unstructured":"Nicolas Papernot and Patrick McDaniel. 2017. Extending defensive distillation. 
arXiv preprint arXiv:1705.05264 (2017).","journal-title":"arXiv preprint arXiv:1705.05264"},{"key":"e_1_3_2_185_2","article-title":"Consistency training with virtual adversarial discrete perturbation","author":"Park Jungsoo","year":"2021","unstructured":"Jungsoo Park, Gyuwan Kim, and Jaewoo Kang. 2021. Consistency training with virtual adversarial discrete perturbation. arXiv preprint arXiv:2104.07284 (2021).","journal-title":"arXiv preprint arXiv:2104.07284"},{"key":"e_1_3_2_186_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAC.2020.3029317"},{"key":"e_1_3_2_187_2","article-title":"Robust deep reinforcement learning with adversarial attacks","author":"Pattanaik Anay","year":"2017","unstructured":"Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. 2017. Robust deep reinforcement learning with adversarial attacks. arXiv preprint arXiv:1712.03632 (2017).","journal-title":"arXiv preprint arXiv:1712.03632"},{"key":"e_1_3_2_188_2","unstructured":"Maura Pintor Fabio Roli Wieland Brendel and Battista Biggio. 2021. Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints. arxiv:2102.12827 [cs.LG]."},{"key":"e_1_3_2_189_2","doi-asserted-by":"crossref","unstructured":"Aaditya Prakash Nick Moran Solomon Garber Antonella DiLillo and James Storer. 2018. Deflecting Adversarial Attacks with Pixel Deflection. arxiv:1801.08926 [cs.CV].","DOI":"10.1109\/CVPR.2018.00894"},{"key":"e_1_3_2_190_2","doi-asserted-by":"publisher","DOI":"10.3390\/app9050909"},{"key":"e_1_3_2_191_2","unstructured":"Erwin Quiring David Klein Daniel Arp Martin Johns and Konrad Rieck. [n. d.]. Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning. 
https:\/\/www.usenix.org\/conference\/usenixsecurity20\/presentation\/quiring"},{"key":"e_1_3_2_192_2","volume-title":"29th USENIX Security Symposium (USENIX Security 20)","author":"Quiring Erwin","year":"2020","unstructured":"Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck. 2020. Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning. In 29th USENIX Security Symposium (USENIX Security 20)."},{"key":"e_1_3_2_193_2","unstructured":"Adnan Siraj Rakin Zhezhi He Boqing Gong and Deliang Fan. 2018. Blind Pre-Processing: A Robust Defense Method against Adversarial Examples. arxiv:1802.01549 [cs.LG]."},{"key":"e_1_3_2_194_2","unstructured":"Miguel A. Ramirez Song-Kyoo Kim Hussam Al Hamadi Ernesto Damiani Young-Ji Byon Tae-Yeon Kim Chung-Suk Cho and Chan Yeob Yeun. 2022. Poisoning Attacks and Defenses on Artificial Intelligence: A Survey. arxiv:2202.10276 [cs.CR]."},{"key":"e_1_3_2_195_2","article-title":"Improving network robustness against adversarial attacks with compact convolution","author":"Ranjan Rajeev","year":"2017","unstructured":"Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo, and Rama Chellappa. 2017. Improving network robustness against adversarial attacks with compact convolution. arXiv preprint arXiv:1712.00699 (2017).","journal-title":"arXiv preprint arXiv:1712.00699"},{"key":"e_1_3_2_196_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1103"},{"key":"e_1_3_2_197_2","first-page":"arXiv\u20132009","article-title":"Adversarial Machine Learning in Image Classification: A Survey Towards the Defender\u2019s Perspective","author":"Machado Gabriel Resende","year":"2020","unstructured":"Gabriel Resende Machado, Eug\u00eanio Silva, and Ronaldo Ribeiro Goldschmidt. 2020. 
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender\u2019s Perspective. arXiv e-prints (2020), arXiv\u20132009.","journal-title":"arXiv e-prints"},{"key":"e_1_3_2_198_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-73003-5_196"},{"key":"e_1_3_2_199_2","doi-asserted-by":"publisher","DOI":"10.1007\/0-387-25465-X_9"},{"key":"e_1_3_2_200_2","doi-asserted-by":"publisher","DOI":"10.1145\/3453158"},{"key":"e_1_3_2_201_2","unstructured":"Andrew Slavin Ross and Finale Doshi-Velez. 2017. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients. arxiv:1711.09404 [cs.LG]."},{"key":"e_1_3_2_202_2","article-title":"Token-modification adversarial attacks for natural language processing: A survey","author":"Roth Tom","year":"2021","unstructured":"Tom Roth, Yansong Gao, Alsharif Abuadbba, Surya Nepal, and Wei Liu. 2021. Token-modification adversarial attacks for natural language processing: A survey. arXiv preprint arXiv:2103.00676 (2021).","journal-title":"arXiv preprint arXiv:2103.00676"},{"key":"e_1_3_2_203_2","volume-title":"International Conference on Learning Representations","author":"Ru Binxin","year":"2019","unstructured":"Binxin Ru, Adam Cobb, Arno Blaas, and Yarin Gal. 2019. BayesOpt adversarial attack. In International Conference on Learning Representations."},{"key":"e_1_3_2_204_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482029"},{"key":"e_1_3_2_205_2","doi-asserted-by":"publisher","DOI":"10.1145\/1644893.1644895"},{"key":"e_1_3_2_206_2","unstructured":"Sebastian Ruder. 2017. An overview of gradient descent optimization algorithms. arxiv:1609.04747 [cs.LG]."},{"key":"e_1_3_2_207_2","doi-asserted-by":"crossref","unstructured":"Vivek B. S. and R. Venkatesh Babu. 2020. Single-step Adversarial training with Dropout Scheduling. 
arxiv:2004.08628 [cs.LG].","DOI":"10.1109\/CVPR42600.2020.00103"},{"key":"e_1_3_2_208_2","article-title":"Poisoning Attacks and Defenses in Federated Learning: A Survey","author":"Sagar Subhash","year":"2023","unstructured":"Subhash Sagar, Chang-Sun Li, Seng W. Loke, and Jinho Choi. 2023. Poisoning Attacks and Defenses in Federated Learning: A Survey. arXiv preprint arXiv:2301.05795 (2023).","journal-title":"arXiv preprint arXiv:2301.05795"},{"key":"e_1_3_2_209_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR48806.2021.9413263"},{"key":"e_1_3_2_210_2","unstructured":"Leo Schwinn Ren\u00e9 Raab and Bj\u00f6rn Eskofier. 2020. Towards Rapid and Robust Adversarial Training with One-Step Attacks. arxiv:2002.10097 [cs.LG]."},{"key":"e_1_3_2_211_2","volume-title":"Workshops at the Thirty-second AAAI Conference on Artificial Intelligence","author":"Sengupta Sailik","year":"2018","unstructured":"Sailik Sengupta, Tathagata Chakraborti, and Subbarao Kambhampati. 2018. MTDeep: Boosting the security of deep neural nets against adversarial attacks with moving target defense. In Workshops at the Thirty-second AAAI Conference on Artificial Intelligence."},{"key":"e_1_3_2_212_2","article-title":"Adversarial examples-a complete characterisation of the phenomenon","author":"Serban Alexandru Constantin","year":"2018","unstructured":"Alexandru Constantin Serban, Erik Poll, and Joost Visser. 2018. Adversarial examples-a complete characterisation of the phenomenon. arXiv preprint arXiv:1810.01185 (2018).","journal-title":"arXiv preprint arXiv:1810.01185"},{"key":"e_1_3_2_213_2","unstructured":"Ali Shafahi Amin Ghiasi Furong Huang and Tom Goldstein. 2019. Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training? arxiv:1910.11585 [cs.LG]."},{"key":"e_1_3_2_214_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978392"},{"key":"e_1_3_2_215_2","unstructured":"Ali Shafahi Mahyar Najibi Zheng Xu John Dickerson Larry S. Davis and Tom Goldstein. 2019. 
Universal Adversarial Training. arxiv:1811.11304 [cs.CV]."},{"key":"e_1_3_2_216_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978392"},{"key":"e_1_3_2_217_2","article-title":"APE-GAN: Adversarial perturbation elimination with GAN","author":"Shen Shiwei","year":"2017","unstructured":"Shiwei Shen, Guoqing Jin, Ke Gao, and Yongdong Zhang. 2017. APE-GAN: Adversarial perturbation elimination with GAN. arXiv preprint arXiv:1707.05474 (2017).","journal-title":"arXiv preprint arXiv:1707.05474"},{"key":"e_1_3_2_218_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107309"},{"key":"e_1_3_2_219_2","volume-title":"NDSS","author":"Shih Ming-Wei","year":"2017","unstructured":"Ming-Wei Shih, Sangho Lee, Taesoo Kim, and Marcus Peinado. 2017. T-SGX: Eradicating controlled-channel attacks against enclave programs. In NDSS."},{"key":"e_1_3_2_220_2","first-page":"8","volume-title":"NIPS 2017 Workshop on Machine Learning and Computer Security","author":"Shin Richard","year":"2017","unstructured":"Richard Shin and Dawn Song. 2017. JPEG-resistant adversarial images. In NIPS 2017 Workshop on Machine Learning and Computer Security, Vol. 1. 8."},{"key":"e_1_3_2_221_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.41"},{"key":"e_1_3_2_222_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0197-0"},{"key":"e_1_3_2_223_2","unstructured":"Osvaldo Simeone. 2018. A Very Brief Introduction to Machine Learning with Applications to Communication Systems. arxiv:1808.02342 [cs.IT]."},{"key":"e_1_3_2_224_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00665"},{"key":"e_1_3_2_225_2","unstructured":"Yang Song Taesup Kim Sebastian Nowozin Stefano Ermon and Nate Kushman. 2018. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. 
arxiv:1710.10766 [cs.LG]."},{"key":"e_1_3_2_226_2","article-title":"Ensemble methods as a defense to adversarial perturbations against deep neural networks","author":"Strauss Thilo","year":"2017","unstructured":"Thilo Strauss, Markus Hanselmann, Andrej Junginger, and Holger Ulmer. 2017. Ensemble methods as a defense to adversarial perturbations against deep neural networks. arXiv preprint arXiv:1709.03423 (2017).","journal-title":"arXiv preprint arXiv:1709.03423"},{"key":"e_1_3_2_227_2","doi-asserted-by":"publisher","DOI":"10.1109\/tevc.2019.2890858"},{"key":"e_1_3_2_228_2","unstructured":"Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement learning: An introduction. (2018)."},{"key":"e_1_3_2_229_2","doi-asserted-by":"crossref","unstructured":"Christian Szegedy Wei Liu Yangqing Jia Pierre Sermanet Scott Reed Dragomir Anguelov Dumitru Erhan Vincent Vanhoucke and Andrew Rabinovich. 2014. Going Deeper with Convolutions. arxiv:1409.4842 [cs.CV].","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_3_2_230_2","unstructured":"Christian Szegedy Wojciech Zaremba Ilya Sutskever Joan Bruna Dumitru Erhan Ian Goodfellow and Rob Fergus. 2014. Intriguing properties of neural networks. arxiv:1312.6199 [cs.CV]."},{"key":"e_1_3_2_231_2","doi-asserted-by":"publisher","DOI":"10.1109\/SSCI.2018.8628742"},{"key":"e_1_3_2_232_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00715"},{"key":"e_1_3_2_233_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11828"},{"key":"e_1_3_2_234_2","doi-asserted-by":"publisher","DOI":"10.1145\/3551636"},{"key":"e_1_3_2_235_2","unstructured":"Florian Tramer Nicholas Carlini Wieland Brendel and Aleksander Madry. 2020. On Adaptive Attacks to Adversarial Example Defenses. 
arxiv:2002.08347 [cs.LG]."},{"key":"e_1_3_2_236_2","first-page":"601","volume-title":"25th USENIX Security Symposium (USENIX Security 16)","author":"Tram\u00e8r Florian","year":"2016","unstructured":"Florian Tram\u00e8r, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16). 601\u2013618."},{"key":"e_1_3_2_237_2","doi-asserted-by":"publisher","unstructured":"Florian Tram\u00e8r Alexey Kurakin Nicolas Papernot Ian Goodfellow Dan Boneh and Patrick McDaniel. 2017. Ensemble Adversarial Training: Attacks and Defenses. DOI:10.48550\/ARXIV.1705.07204","DOI":"10.48550\/ARXIV.1705.07204"},{"key":"e_1_3_2_238_2","doi-asserted-by":"publisher","DOI":"10.14429\/dsj.71.16110"},{"key":"e_1_3_2_239_2","first-page":"5025","volume-title":"International Conference on Machine Learning","author":"Uesato Jonathan","year":"2018","unstructured":"Jonathan Uesato, Brendan O\u2019Donoghue, Pushmeet Kohli, and Aaron Oord. 2018. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning. PMLR, 5025\u20135034."},{"key":"e_1_3_2_240_2","article-title":"Model-based hierarchical clustering","author":"Vaithyanathan Shivakumar","year":"2013","unstructured":"Shivakumar Vaithyanathan and Byron E. Dom. 2013. Model-based hierarchical clustering. arXiv preprint arXiv:1301.3899 (2013).","journal-title":"arXiv preprint arXiv:1301.3899"},{"key":"e_1_3_2_241_2","doi-asserted-by":"crossref","unstructured":"Eric Wallace Tony Z. Zhao Shi Feng and Sameer Singh. 2021. Concealed Data Poisoning Attacks on NLP Models. arxiv:2010.12563 [cs.CL].","DOI":"10.18653\/v1\/2021.naacl-main.13"},{"key":"e_1_3_2_242_2","doi-asserted-by":"crossref","unstructured":"Jianyu Wang and Haichao Zhang. 2019. 
Bilateral Adversarial Training: Towards Fast Training of More Robust Models against Adversarial Attacks. arxiv:1811.10716 [cs.CV].","DOI":"10.1109\/ICCV.2019.00673"},{"key":"e_1_3_2_243_2","unstructured":"Ling Wang Cheng Zhang Zejian Luo Chenguang Liu Jie Liu Xi Zheng and Athanasios Vasilakos. 2020. Progressive Defense against Adversarial Attacks for Deep Learning as a Service in Internet of Things. arxiv:2010.11143 [cs.CR]."},{"key":"e_1_3_2_244_2","doi-asserted-by":"crossref","unstructured":"Sun-Chong Wang. 2003. Artificial neural network. (2003) 81\u2013100.","DOI":"10.1007\/978-1-4615-0377-4_5"},{"key":"e_1_3_2_245_2","doi-asserted-by":"publisher","unstructured":"Xiaosen Wang Hao Jin Yichen Yang and Kun He. 2019. Natural Language Adversarial Defense through Synonym Encoding. DOI:10.48550\/ARXIV.1909.06723","DOI":"10.48550\/ARXIV.1909.06723"},{"key":"e_1_3_2_246_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i16.17648"},{"key":"e_1_3_2_247_2","first-page":"28","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops","author":"Wang Yu","year":"2017","unstructured":"Yu Wang, Luca Bondi, Paolo Bestagini, Stefano Tubaro, David J. Edward Delp, et\u00a0al. 2017. A counter-forensic method for CNN-based camera model identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 28\u201335."},{"key":"e_1_3_2_248_2","unstructured":"Yizhen Wang Somesh Jha and Kamalika Chaudhuri. 2020. An Investigation of Data Poisoning Defenses for Online Learning. arxiv:1905.12121 [cs.LG]."},{"key":"e_1_3_2_249_2","doi-asserted-by":"publisher","DOI":"10.1109\/ALLERTON.2017.8262842"},{"key":"e_1_3_2_250_2","doi-asserted-by":"publisher","unstructured":"Sandamal Weerasinghe Tansu Alpcan Sarah M. Erfani and Christopher Leckie. 2020. Defending Distributed Classifiers against Data Poisoning Attacks. 
DOI:10.48550\/ARXIV.2008.09284","DOI":"10.48550\/ARXIV.2008.09284"},{"key":"e_1_3_2_251_2","unstructured":"Daan Wierstra Tom Schaul Tobias Glasmachers Yi Sun Jan Peters and J\u00fcrgen Schmidhuber. 2014. Natural evolution strategies. (2014)."},{"key":"e_1_3_2_252_2","article-title":"Fast is better than free: Revisiting adversarial training","author":"Wong Eric","year":"2020","unstructured":"Eric Wong, Leslie Rice, and J. Zico Kolter. 2020. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994 (2020).","journal-title":"arXiv preprint arXiv:2001.03994"},{"key":"e_1_3_2_253_2","article-title":"Wasserstein adversarial examples via projected Sinkhorn iterations","author":"Wong Eric","year":"2019","unstructured":"Eric Wong, Frank R. Schmidt, and J. Zico Kolter. 2019. Wasserstein adversarial examples via projected Sinkhorn iterations. arXiv preprint arXiv:1902.07906 (2019).","journal-title":"arXiv preprint arXiv:1902.07906"},{"key":"e_1_3_2_254_2","volume-title":"The AAAI-22 Workshop on Adversarial Machine Learning and Beyond","author":"Worzyk Nils","year":"2021","unstructured":"Nils Worzyk and Stella Yu. 2021. Broad adversarial training with data augmentation in the output space. In The AAAI-22 Workshop on Adversarial Machine Learning and Beyond."},{"key":"e_1_3_2_255_2","doi-asserted-by":"publisher","unstructured":"Huimin Wu Zhengmian Hu and Bin Gu. 2021. Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients. DOI:10.48550\/ARXIV.2107.09937","DOI":"10.48550\/ARXIV.2107.09937"},{"key":"e_1_3_2_256_2","article-title":"Spatially transformed adversarial examples","author":"Xiao Chaowei","year":"2018","unstructured":"Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. 2018. Spatially transformed adversarial examples. 
arXiv preprint arXiv:1801.02612 (2018).","journal-title":"arXiv preprint arXiv:1801.02612"},{"key":"e_1_3_2_257_2","article-title":"Mitigating adversarial effects through randomization","author":"Xie Cihang","year":"2017","unstructured":"Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. 2017. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991 (2017).","journal-title":"arXiv preprint arXiv:1711.01991"},{"key":"e_1_3_2_258_2","unstructured":"Han Xu Yao Ma Haochen Liu Debayan Deb Hui Liu Jiliang Tang and Anil K. Jain. 2019. Adversarial Attacks and Defenses in Images Graphs and Text: A Review. arxiv:1909.08072 [cs.LG]."},{"key":"e_1_3_2_259_2","doi-asserted-by":"crossref","unstructured":"Xin Yan and Xiaogang Su. 2009. Linear regression analysis: Theory and computing. (2009).","DOI":"10.1142\/9789812834119"},{"key":"e_1_3_2_260_2","article-title":"ME-Net: Towards effective adversarial robustness with matrix estimation","author":"Yang Yuzhe","year":"2019","unstructured":"Yuzhe Yang, Guo Zhang, Dina Katabi, and Zhi Xu. 2019. ME-Net: Towards effective adversarial robustness with matrix estimation. arXiv preprint arXiv:1905.11971 (2019).","journal-title":"arXiv preprint arXiv:1905.11971"},{"key":"e_1_3_2_261_2","series-title":"Proceedings of the 36th International Conference on Machine Learning","first-page":"7074","volume":"97","author":"Yin Dong","year":"2019","unstructured":"Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. 2019. Defending against saddle point attack in Byzantine-robust distributed learning. In Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 7074\u20137084."},{"key":"e_1_3_2_262_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459212.3459217"},{"key":"e_1_3_2_263_2","unstructured":"Jongmin Yoon Sung Ju Hwang and Juho Lee. 2021. 
Adversarial purification with score-based generative models. (2021)."},{"key":"e_1_3_2_264_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5463"},{"key":"e_1_3_2_265_2","unstructured":"Matthew Yuan Matthew Wicker and Luca Laurenti. 2020. Gradient-Free Adversarial Attacks for Bayesian Neural Networks. arxiv:2012.12640 [cs.LG]."},{"key":"e_1_3_2_266_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10796-008-9131-2"},{"key":"e_1_3_2_267_2","unstructured":"Xiaoyong Yuan Pan He Qile Zhu and Xiaolin Li. 2018. Adversarial Examples: Attacks and Defenses for Deep Learning. arxiv:1712.07107 [cs.LG]."},{"key":"e_1_3_2_268_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140449"},{"key":"e_1_3_2_269_2","doi-asserted-by":"publisher","unstructured":"Huimin Zeng Jiahao Su and Furong Huang. 2021. Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders. DOI:10.48550\/ARXIV.2108.00491","DOI":"10.48550\/ARXIV.2108.00491"},{"key":"e_1_3_2_270_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.6154"},{"key":"e_1_3_2_271_2","article-title":"A survey on universal adversarial attack","author":"Zhang Chaoning","year":"2021","unstructured":"Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, and In So Kweon. 2021. A survey on universal adversarial attack. arXiv preprint arXiv:2103.01498 (2021).","journal-title":"arXiv preprint arXiv:2103.01498"},{"key":"e_1_3_2_272_2","doi-asserted-by":"crossref","unstructured":"Hengtong Zhang Tianhang Zheng Jing Gao Chenglin Miao Lu Su Yaliang Li and Kui Ren. 2019. Data Poisoning Attack against Knowledge Graph Embedding. arxiv:1904.12052 [cs.LG].","DOI":"10.24963\/ijcai.2019\/674"},{"key":"e_1_3_2_273_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM50108.2020.00088"},{"key":"e_1_3_2_274_2","doi-asserted-by":"publisher","unstructured":"Chenchen Zhao and Hao Li. 2020. Blurring Fools the Network \u2013 Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring. 
DOI:10.48550\/ARXIV.2012.11442","DOI":"10.48550\/ARXIV.2012.11442"},{"key":"e_1_3_2_275_2","doi-asserted-by":"publisher","DOI":"10.5555\/3327757.3327888"},{"key":"e_1_3_2_276_2","doi-asserted-by":"publisher","unstructured":"Zhun Zhong Liang Zheng Guoliang Kang Shaozi Li and Yi Yang. 2017. Random Erasing Data Augmentation. DOI:10.48550\/ARXIV.1708.04896","DOI":"10.48550\/ARXIV.1708.04896"},{"key":"e_1_3_2_277_2","doi-asserted-by":"publisher","unstructured":"Yi Zhou Xiaoqing Zheng Cho-Jui Hsieh Kai-wei Chang and Xuanjing Huang. 2020. Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble. DOI:10.48550\/ARXIV.2006.11627","DOI":"10.48550\/ARXIV.2006.11627"},{"key":"e_1_3_2_278_2","unstructured":"Chen Zhu W. Ronny Huang Ali Shafahi Hengduo Li Gavin Taylor Christoph Studer and Tom Goldstein. 2019. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets. arxiv:1905.05897 [stat.ML]"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3627536","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3627536","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:50:04Z","timestamp":1750287004000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3627536"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,9]]},"references-count":277,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,7,31]]}},"alternative-id":["10.1145\/3627536"],"URL":"https:\/\/doi.org\/10.1145\/3627536","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"type":"print","value":"0360-0300"},{"type":"electronic","value":"1557-7341"}],"subject":[],"published":{"date-parts":[[2024,4,9]]},"assertion":[{"value":"2022-09-14","order":0,"name":"rece
ived","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-26","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-04-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}