{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T15:11:42Z","timestamp":1778080302738,"version":"3.51.4"},"reference-count":202,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,10,7]],"date-time":"2024-10-07T00:00:00Z","timestamp":1728259200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia","award":["2023.02832.BD"],"award-info":[{"award-number":["2023.02832.BD"]}]},{"name":"European Union\u2019s Horizon 2020","award":["957197"],"award-info":[{"award-number":["957197"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2025,1,31]]},"abstract":"<jats:p>The rapid development of artificial intelligence (AI) and breakthroughs in Internet of Things (IoT) technologies have driven the innovation of advanced autonomous driving systems (ADSs). Image classification deep learning (DL) algorithms immensely contribute to the decision-making process in ADSs, showcasing their capabilities in handling complex real-world driving scenarios, surpassing human driving intelligence. However, these algorithms are vulnerable to adversarial attacks, which aim to fool them in real-time decision-making and compromise the reliability of the autonomous driving functions. This systematic review offers a comprehensive overview of the most recent literature on adversarial attacks and countermeasures on image classification DL models in ADSs. The review highlights the current challenges in applying successful countermeasures to mitigate these vulnerabilities. 
We also introduce taxonomies for categorizing adversarial attacks and countermeasures and provide recommendations and guidelines to help researchers design and evaluate countermeasures. We suggest interesting future research directions to improve the robustness of image classification DL models against adversarial attacks in autonomous driving scenarios.<\/jats:p>","DOI":"10.1145\/3691625","type":"journal-article","created":{"date-parts":[[2024,8,31]],"date-time":"2024-08-31T09:11:26Z","timestamp":1725095486000},"page":"1-52","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":43,"title":["Adversarial Attacks and Countermeasures on Image Classification-based Deep Learning Models in Autonomous Driving Systems: A Systematic Review"],"prefix":"10.1145","volume":"57","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8488-4955","authenticated-orcid":false,"given":"Bakary","family":"Badjie","sequence":"first","affiliation":[{"name":"LASIGE - Computer Science and Engineering Research Centre, University of Lisbon, Lisboa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5351-5580","authenticated-orcid":false,"given":"Jos\u00e9","family":"Cec\u00edlio","sequence":"additional","affiliation":[{"name":"LASIGE - Computer Science and Engineering Research Centre, University of Lisbon, Lisboa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5522-5739","authenticated-orcid":false,"given":"Antonio","family":"Casimiro","sequence":"additional","affiliation":[{"name":"LASIGE - Computer Science and Engineering Research Centre, University of Lisbon, Lisboa, 
Portugal"}]}],"member":"320","published-online":{"date-parts":[[2024,10,7]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/NILES50944.2020.9257874"},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.5220\/0012314700003636"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS52979.2021.00098"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3321707.3321749"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-020-04761-6"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-022-02660-6"},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"e_1_3_3_9_2","first-page":"16048","article-title":"Understanding and improving fast adversarial training","volume":"33","author":"Andriushchenko Maksym","year":"2020","unstructured":"Maksym Andriushchenko and Nicolas Flammarion. 2020. Understanding and improving fast adversarial training. Advances in Neural Information Processing Systems 33 (2020), 16048\u201316059.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10586-021-03421-1"},{"key":"e_1_3_3_11_2","first-page":"274","volume-title":"International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning. PMLR, 274\u2013283."},{"key":"e_1_3_3_12_2","first-page":"284","volume-title":"International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing robust adversarial examples. In International Conference on Machine Learning. 
PMLR, 284\u2013293."},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/IAVVC57316.2023.10328098"},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3672359.3672362"},{"key":"e_1_3_3_15_2","first-page":"6843","volume-title":"Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Baluja Shumeet","year":"2018","unstructured":"Shumeet Baluja and Ian Fischer. 2018. Adversarial transformation networks: Learning to generate adversarial examples. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 6843\u20136852."},{"key":"e_1_3_3_16_2","first-page":"97","volume-title":"Asian Conference on Machine Learning","author":"Biggio Battista","year":"2011","unstructured":"Battista Biggio, Blaine Nelson, and Pavel Laskov. 2011. Support vector machines under adversarial label noise. In Asian Conference on Machine Learning. PMLR, 97\u2013112."},{"key":"e_1_3_3_17_2","first-page":"1","article-title":"Potential cyber threats of adversarial attacks on autonomous driving models","author":"Boltachev Eldar","year":"2023","unstructured":"Eldar Boltachev. 2023. Potential cyber threats of adversarial attacks on autonomous driving models. Journal of Computer Virology and Hacking Techniques (2023), 1\u201311.","journal-title":"Journal of Computer Virology and Hacking Techniques"},{"key":"e_1_3_3_18_2","volume-title":"Federated Learning: Collaborative Machine Learning without Centralized Training Data)","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan and Daniel Ramage. 2017. Collaborative machine learning without centralized training data. In Federated Learning: Collaborative Machine Learning without Centralized Training Data)."},{"key":"e_1_3_3_19_2","volume-title":"International Conference on Learning Representations","author":"Brendel Wieland","year":"2018","unstructured":"Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2018. 
Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In International Conference on Learning Representations."},{"key":"e_1_3_3_20_2","article-title":"Unrestricted adversarial examples","author":"Brown Tom B.","year":"2018","unstructured":"Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, and Ian Goodfellow. 2018. Unrestricted adversarial examples. (2018). arXiv preprint arXiv:1809.08352.","journal-title":"arXiv preprint arXiv:1809.08352"},{"key":"e_1_3_3_21_2","article-title":"Adversarial patch","author":"Brown Tom B.","year":"2017","unstructured":"Tom B. Brown, Dandelion Man\u00e9, Aurko Roy, Mart\u00edn Abadi, and Justin Gilmer. 2017. Adversarial patch. Computer Vision and Pattern Recognition (2017).","journal-title":"Computer Vision and Pattern Recognition"},{"key":"e_1_3_3_22_2","first-page":"3259","article-title":"Learning to generate realistic noisy images via pixel-level noise-aware adversarial training","volume":"34","author":"Cai Yuanhao","year":"2021","unstructured":"Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, and Donglai Wei. 2021. Learning to generate realistic noisy images via pixel-level noise-aware adversarial training. Advances in Neural Information Processing Systems 34 (2021), 3259\u20133270.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3339815"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-018-5853-4"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2018.07.026"},{"key":"e_1_3_3_27_2","article-title":"Dynamic adversarial attacks on autonomous driving systems","author":"Chahe Amirhosein","year":"2023","unstructured":"Amirhosein Chahe, Chenan Wang, Abhishek Jeyapratap, Kaidi Xu, and Lifeng Zhou. 2023. 
Dynamic adversarial attacks on autonomous driving systems. Computer Vision and Pattern Recognition (2023).","journal-title":"Computer Vision and Pattern Recognition"},{"key":"e_1_3_3_28_2","article-title":"Adversarial attacks and defences: A survey","author":"Chakraborty Anirban","year":"2018","unstructured":"Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. Multidisciplinary Digital Publishing Institute (2018).","journal-title":"Multidisciplinary Digital Publishing Institute"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11302"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},{"key":"e_1_3_3_31_2","first-page":"52","volume-title":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Chen Shang-Tse","year":"2018","unstructured":"Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Polo Chau. 2018. ShapeShifter: Robust physical adversarial attack on faster R-CNN object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 52\u201368."},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58555-6_17"},{"key":"e_1_3_3_33_2","article-title":"A theory of transfer-based black-box attacks: Explanation and implications","volume":"36","author":"Chen Yanbo","year":"2024","unstructured":"Yanbo Chen and Weiwei Liu. 2024. A theory of transfer-based black-box attacks: Explanation and implications. 
Advances in Neural Information Processing Systems 36 (2024).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIVC55077.2022.9886997"},{"key":"e_1_3_3_35_2","volume-title":"International Conference on Learning Representations","author":"Cheng Minhao","year":"2019","unstructured":"Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, JinFeng Yi, and Cho-Jui Hsieh. 2019. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representations."},{"key":"e_1_3_3_36_2","article-title":"Improving black-box adversarial attacks with a transfer-based prior","volume":"32","author":"Cheng Shuyu","year":"2019","unstructured":"Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. 2019. Improving black-box adversarial attacks with a transfer-based prior. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2024.111729"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2019.2963791"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1155\/2023\/6376275"},{"key":"e_1_3_3_40_2","article-title":"Houdini: Fooling deep structured prediction models","author":"Cisse Moustapha","year":"2017","unstructured":"Moustapha Cisse, Yossi Adi, Natalia Neverova, and Joseph Keshet. 2017. Houdini: Fooling deep structured prediction models. Advances in Neural Information Processing Systems (NIPS) (2017).","journal-title":"Advances in Neural Information Processing Systems (NIPS)"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01446"},{"key":"e_1_3_3_42_2","first-page":"2196","volume-title":"International Conference on Machine Learning","author":"Croce Francesco","year":"2020","unstructured":"Francesco Croce and Matthias Hein. 
2020. Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning. PMLR, 2196\u20132205."},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3337638"},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1111\/ijmr.12156"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP40778.2020.9191288"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2021.3071405"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/PerCom45495.2020.9127389"},{"key":"e_1_3_3_48_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-60248-2_27"},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00957"},{"key":"e_1_3_3_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00444"},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/383259.383296"},{"key":"e_1_3_3_52_2","article-title":"Robust physical-world attacks on deep learning visual classification","author":"Eykholt Kevin","year":"2017","unstructured":"Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. Robust physical-world attacks on deep learning visual classification. Journal of Environmental Sciences (China) English Ed (2017).","journal-title":"Journal of Environmental Sciences (China) English Ed"},{"key":"e_1_3_3_53_2","first-page":"1","volume-title":"2019 International Joint Conference on Neural Networks (IJCNN)","author":"Fawaz Hassan Ismail","year":"2019","unstructured":"Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Adversarial attacks on deep neural networks for time series classification. In 2019 International Joint Conference on Neural Networks (IJCNN). 
IEEE, 1\u20138."},{"key":"e_1_3_3_54_2","article-title":"Waymo launches its first commercial self-driving car service","volume":"12","author":"Fingas Jon","year":"2018","unstructured":"Jon Fingas. 2018. Waymo launches its first commercial self-driving car service. Sunnyvale, California: Verizon Media 12 (2018).","journal-title":"Sunnyvale, California: Verizon Media"},{"key":"e_1_3_3_55_2","article-title":"DeepCloak: Masking deep neural network models for robustness against adversarial samples","author":"Gao Ji","year":"2017","unstructured":"Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. 2017. DeepCloak: Masking deep neural network models for robustness against adversarial samples. International Conference on Learning Representations (2017).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/OJVT.2023.3265363"},{"key":"e_1_3_3_57_2","article-title":"Deep anomaly detection using geometric transformations","volume":"31","author":"Golan Izhak","year":"2018","unstructured":"Izhak Golan and Ran El-Yaniv. 2018. Deep anomaly detection using geometric transformations. Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_58_2","first-page":"551","volume-title":"International Conference on Applied Human Factors and Ergonomics","author":"Gold Christian","year":"2017","unstructured":"Christian Gold, Frederik Naujoks, Jonas Radlmayr, Hanna Bellem, and Oliver Jarosch. 2017. Testing scenarios for human factors research in level 3 automated vehicles. In International Conference on Applied Human Factors and Ergonomics. 
Springer, 551\u2013559."},{"key":"e_1_3_3_59_2","article-title":"Generative adversarial nets","volume":"27","author":"Goodfellow Ian","year":"2014","unstructured":"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in Neural Information Processing Systems 27 (2014).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_60_2","first-page":"20","article-title":"Explaining and Harnessing Adversarial Examples","volume":"1050","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. Stat 1050 (2015), 20.","journal-title":"Stat"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.36548\/jucct.2022.4.002"},{"key":"e_1_3_3_62_2","first-page":"2484","volume-title":"International Conference on Machine Learning","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Weinberger. 2019. Simple black-box adversarial attacks. In International Conference on Machine Learning. PMLR, 2484\u20132493."},{"key":"e_1_3_3_63_2","article-title":"Countering adversarial images using input transformations","author":"Guo Chuan","year":"2017","unstructured":"Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. 2017. Countering adversarial images using input transformations. International Conference on Learning Representations (2017).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_64_2","article-title":"Invisible optical adversarial stripes on traffic sign against autonomous vehicles","author":"Guo Dongfang","year":"2024","unstructured":"Dongfang Guo, Yuting Wu, Yimin Dai, Pengfei Zhou, Xin Lou, and Rui Tan. 2024. Invisible optical adversarial stripes on traffic sign against autonomous vehicles. 
International Conference on Learning Representations (2024).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_65_2","article-title":"Co-teaching: Robust training of deep neural networks with extremely noisy labels","volume":"31","author":"Han Bo","year":"2018","unstructured":"Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","DOI":"10.1109\/JAS.2021.1004108"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/SPW.2018.00015"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.3233\/FAIA240141"},{"key":"e_1_3_3_69_2","unstructured":"Sanghyun Hong Varun Chandrasekaran Yi\u011fitcan Kaya Tudor Dumitra\u015f and Nicolas Papernot. 2020. On the effectiveness of mitigating data poisoning attacks with gradient shaping. https:\/\/www.scinapse.io\/papers\/3007358161 (2020)."},{"key":"e_1_3_3_70_2","article-title":"Blocking transferability of adversarial examples in black-box learning systems","author":"Hosseini Hossein","year":"2017","unstructured":"Hossein Hosseini, Yize Chen, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Blocking transferability of adversarial examples in black-box learning systems. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2017 (2017).","journal-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2017"},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV57701.2024.00387"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-19-7222-5_12"},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00483"},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2020.102632"},{"key":"e_1_3_3_75_2","article-title":"Black-box adversarial attack with transferable model-based embedding","author":"Huang Zhichao","year":"2019","unstructured":"Zhichao Huang and Tong Zhang. 2019. Black-box adversarial attack with transferable model-based embedding. International Conference on Learning Representations (2019).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_76_2","first-page":"2137","volume-title":"International Conference on Machine Learning","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning. PMLR, 2137\u20132146."},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-97-4182-3_26"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2023.3347860"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3184255"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/MCI.2022.3155327"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00467"},{"key":"e_1_3_3_82_2","first-page":"2387","volume-title":"International Conference on Machine Learning","author":"Kantchelian Alex","year":"2016","unstructured":"Alex Kantchelian, J. Doug Tygar, and Anthony Joseph. 2016. 
Evasion and hardening of tree ensemble classifiers. In International Conference on Machine Learning. PMLR, 2387\u20132396."},{"key":"e_1_3_3_83_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-020-10139-6"},{"key":"e_1_3_3_84_2","article-title":"A hybrid defense method against adversarial attacks on traffic sign classifiers in autonomous vehicles","author":"Khan Zadid","year":"2023","unstructured":"Zadid Khan, Mashrur Chowdhury, and Sakib Mahmud Khan. 2023. A hybrid defense method against adversarial attacks on traffic sign classifiers in autonomous vehicles. Authorea Preprints (2023).","journal-title":"Authorea Preprints"},{"key":"e_1_3_3_85_2","doi-asserted-by":"publisher","DOI":"10.1109\/OJITS.2022.3142612"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01426"},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ifacol.2016.11.078"},{"key":"e_1_3_3_88_2","article-title":"The unseen adversaries: Robust and generalized defense against adversarial patches","author":"Kumar Vishesh","year":"2023","unstructured":"Vishesh Kumar and Akshay Agarwal. 2023. The unseen adversaries: Robust and generalized defense against adversarial patches. Available at SSRN 4772716 (2023).","journal-title":"Available at SSRN 4772716"},{"key":"e_1_3_3_89_2","article-title":"Adversarial machine learning at scale","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial machine learning at scale. 
International Conference on Learning Representations (2016).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.249"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.3390\/sym10120738"},{"key":"e_1_3_3_92_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2023.109430"},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3048120"},{"key":"e_1_3_3_94_2","article-title":"Generative adversarial trainer: Defense to adversarial perturbations with GAN","author":"Lee Hyeungill","year":"2017","unstructured":"Hyeungill Lee, Sungyeob Han, and Jungwoo Lee. 2017. Generative adversarial trainer: Defense to adversarial perturbations with GAN. International Conference on Learning Representations (2017).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_95_2","article-title":"Certified adversarial robustness with additive noise","volume":"32","author":"Li Bai","year":"2019","unstructured":"Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. 2019. Certified adversarial robustness with additive noise. 
Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2020.2975749"},{"key":"e_1_3_3_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3016145"},{"key":"e_1_3_3_98_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00191"},{"key":"e_1_3_3_99_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33011028"},{"key":"e_1_3_3_100_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-19-7943-9_21"},{"key":"e_1_3_3_101_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"e_1_3_3_102_2","volume-title":"International Conference on Learning Representations","author":"Liu Yanpei","year":"2017","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations."},{"key":"e_1_3_3_103_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00095"},{"key":"e_1_3_3_104_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2024.01.010"},{"key":"e_1_3_3_105_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2015.84"},{"key":"e_1_3_3_106_2","doi-asserted-by":"publisher","DOI":"10.5220\/0007714203070318"},{"key":"e_1_3_3_107_2","doi-asserted-by":"publisher","DOI":"10.1145\/3330204.3330282"},{"key":"e_1_3_3_108_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485133"},{"key":"e_1_3_3_109_2","volume-title":"International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. 
In International Conference on Learning Representations."},{"key":"e_1_3_3_110_2","doi-asserted-by":"publisher","DOI":"10.2478\/acss-2021-0012"},{"key":"e_1_3_3_111_2","article-title":"Meta adversarial training against universal patches","author":"Metzen Jan Hendrik","year":"2021","unstructured":"Jan Hendrik Metzen, Nicole Finnie, and Robin Hutmacher. 2021. Meta adversarial training against universal patches. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2021 (2021).","journal-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2021"},{"key":"e_1_3_3_112_2","doi-asserted-by":"publisher","DOI":"10.1109\/DICTA60407.2023.00075"},{"key":"e_1_3_3_113_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2020.2985363"},{"key":"e_1_3_3_114_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.17"},{"key":"e_1_3_3_115_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58558-7_36"},{"key":"e_1_3_3_116_2","article-title":"Data-free defense of black box models against adversarial attacks","author":"Nayak Gaurav Kumar","year":"2023","unstructured":"Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, and Anirban Chakraborty. 2023. Data-free defense of black box models against adversarial attacks. Available at SSRN 4531714 (2023).","journal-title":"Available at SSRN 4531714"},{"key":"e_1_3_3_117_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV51458.2022.00288"},{"key":"e_1_3_3_118_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2024.3392760"},{"key":"e_1_3_3_119_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2023.109382"},{"key":"e_1_3_3_120_2","first-page":"2642","volume-title":"International Conference on Machine Learning","author":"Odena Augustus","year":"2017","unstructured":"Augustus Odena, Christopher Olah, and Jonathon Shlens. 2017. Conditional image synthesis with auxiliary classifier GANs. In International Conference on Machine Learning. 
PMLR, 2642\u20132651."},{"key":"e_1_3_3_121_2","first-page":"1","author":"Papaioannou Diana","year":"2016","unstructured":"Diana Papaioannou, Anthea Sutton, and Andrew Booth. 2016. Systematic Approaches to a Successful Literature Review (2016), 1\u2013336.","journal-title":"Systematic Approaches to a Successful Literature Review"},{"key":"e_1_3_3_122_2","article-title":"Extending defensive distillation","author":"Papernot Nicolas","year":"2017","unstructured":"Nicolas Papernot and Patrick McDaniel. 2017. Extending defensive distillation. International Conference on Learning Representations (2017).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_123_2","article-title":"Transferability in machine learning: From phenomena to black-box attacks using adversarial samples","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. International Conference on Learning Representations (2016).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_124_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_3_125_2","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},{"key":"e_1_3_3_126_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_3_3_127_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2021.102269"},{"key":"e_1_3_3_128_2","volume-title":"Adversarial Robustness for Machine Learning","author":"Hsieh Cho-Jui","year":"2023","unstructured":"Cho-Jui Hsieh and Pin-Yu Chen. 2023. Adversarial Robustness for Machine Learning. Elsevier. (
ISBN: 978-0-12-824020-5)."},{"key":"e_1_3_3_129_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00669"},{"key":"e_1_3_3_130_2","first-page":"7909","volume-title":"Proceedings of the 37th International Conference on Machine Learning","volume":"119","author":"Raghunathan Aditi","year":"2020","unstructured":"Aditi Raghunathan, Sang M Xie, Fanny Yang, John Duchi, and Percy Liang. 2020. Understanding and mitigating the tradeoff between robustness and accuracy. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. PMLR, 7909\u20137919."},{"key":"e_1_3_3_131_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2948775"},{"key":"e_1_3_3_132_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISM59092.2023.00040"},{"key":"e_1_3_3_133_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2018.00093"},{"key":"e_1_3_3_134_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-30648-8_7"},{"key":"e_1_3_3_135_2","article-title":"Adversarial training for free!","volume":"32","author":"Shafahi Ali","year":"2019","unstructured":"Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_136_2","doi-asserted-by":"publisher","DOI":"10.1145\/3548606.3560561"},{"key":"e_1_3_3_137_2","article-title":"Decision-based query efficient adversarial attack via adaptive boundary learning","author":"Shen Meng","year":"2023","unstructured":"Meng Shen, Changyue Li, Hao Yu, Qi Li, Liehuang Zhu, and Ke Xu. 2023. Decision-based query efficient adversarial attack via adaptive boundary learning. 
IEEE Transactions on Dependable and Secure Computing (2023).","journal-title":"IEEE Transactions on Dependable and Secure Computing"},{"key":"e_1_3_3_138_2","first-page":"5739","volume-title":"International Conference on Machine Learning","author":"Shen Yanyao","year":"2019","unstructured":"Yanyao Shen and Sujay Sanghavi. 2019. Learning with bad training data via iterative trimmed loss minimization. In International Conference on Machine Learning. PMLR, 5739\u20135748."},{"key":"e_1_3_3_139_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics11121814"},{"key":"e_1_3_3_140_2","article-title":"Query-efficient black-box adversarial attack with customized iteration and sampling","author":"Shi Yucheng","year":"2022","unstructured":"Yucheng Shi, Yahong Han, Qinghua Hu, Yi Yang, and Qi Tian. 2022. Query-efficient black-box adversarial attack with customized iteration and sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_3_141_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-10064-8"},{"key":"e_1_3_3_142_2","article-title":"Opportunities and challenges in deep learning adversarial robustness: A survey","author":"Silva Samuel Henrique","year":"2020","unstructured":"Samuel Henrique Silva and Peyman Najafirad. 2020. Opportunities and challenges in deep learning adversarial robustness: A survey. IEEE Transactions on Knowledge and Data Engineering (2020).","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"e_1_3_3_143_2","first-page":"1773","volume-title":"Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security","author":"Sitawarin Chawin","year":"2019","unstructured":"Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, and Prateek Mittal. 2019. DARTS: Deceiving autonomous cars with toxic signs. 
In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1773\u20131788."},{"key":"e_1_3_3_144_2","volume-title":"12th USENIX Workshop on Offensive Technologies (WOOT 18)","author":"Song Dawn","year":"2018","unstructured":"Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno. 2018. Physical adversarial examples for object detectors. In 12th USENIX Workshop on Offensive Technologies (WOOT 18)."},{"key":"e_1_3_3_145_2","article-title":"Constructing unrestricted adversarial examples with generative models","volume":"31","author":"Song Yang","year":"2018","unstructured":"Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. 2018. Constructing unrestricted adversarial examples with generative models. Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_146_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.tra.2020.03.024"},{"key":"e_1_3_3_147_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE48619.2023.00188"},{"key":"e_1_3_3_148_2","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2019.2890858"},{"key":"e_1_3_3_149_2","article-title":"Deep probabilistic models to detect data poisoning attacks","author":"Subedar Mahesh","year":"2019","unstructured":"Mahesh Subedar, Nilesh Ahuja, Ranganath Krishnan, Ibrahima J. Ndiour, and Omesh Tickoo. 2019. Deep probabilistic models to detect data poisoning attacks. International Conference on Learning Representations (2019).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_150_2","article-title":"Intriguing properties of neural networks","author":"Szegedy Christian","year":"2013","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. 
International Conference on Learning Representations (2013).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_151_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations (ICLR)."},{"key":"e_1_3_3_152_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compeleceng.2022.108446"},{"key":"e_1_3_3_153_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2021.11.005"},{"key":"e_1_3_3_154_2","article-title":"Adversarial training and robustness for multiple perturbations","volume":"32","author":"Tramer Florian","year":"2019","unstructured":"Florian Tramer and Dan Boneh. 2019. Adversarial training and robustness for multiple perturbations. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_155_2","volume-title":"6th International Conference on Learning Representations","author":"Tram\u00e8r Florian","year":"2018","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2018. Ensemble adversarial training: Attacks and defenses. In 6th International Conference on Learning Representations."},{"key":"e_1_3_3_156_2","article-title":"Spectral signatures in backdoor attacks","volume":"31","author":"Tran Brandon","year":"2018","unstructured":"Brandon Tran, Jerry Li, and Aleksander Madry. 2018. Spectral signatures in backdoor attacks. 
Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_157_2","volume-title":"International Conference on Learning Representations","author":"Tsipras Dimitris","year":"2018","unstructured":"Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2018. Robustness may be at odds with accuracy. In International Conference on Learning Representations."},{"key":"e_1_3_3_158_2","article-title":"Poster: Adversarial retroreflective patches: A novel stealthy attack on traffic sign recognition at night","author":"Tsuruoka Go","year":"2023","unstructured":"Go Tsuruoka, Takami Sato, Qi Alfred Chen, Kazuki Nomoto, Ryunosuke Kobayashi, Yuna Tanaka, and Tatsuya Mori. 2023. Poster: Adversarial retroreflective patches: A novel stealthy attack on traffic sign recognition at night. International Conference on Learning Representations (2023).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_159_2","doi-asserted-by":"publisher","DOI":"10.1201\/9781003338611-6"},{"key":"e_1_3_3_160_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00031"},{"key":"e_1_3_3_161_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.3014390"},{"key":"e_1_3_3_162_2","volume-title":"International Conference on Learning Representations","author":"Wang Hongjun","year":"2022","unstructured":"Hongjun Wang and Yisen Wang. 2022. Self-ensemble adversarial training for improved robustness. In International Conference on Learning Representations."},{"key":"e_1_3_3_163_2","first-page":"16020","article-title":"Adversarial attack generation empowered by min-max optimization","volume":"34","author":"Wang Jingkang","year":"2021","unstructured":"Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, and Bo Li. 2021. Adversarial attack generation empowered by min-max optimization. 
Advances in Neural Information Processing Systems 34 (2021), 16020\u201316033.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_164_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCC58397.2023.10218291"},{"key":"e_1_3_3_165_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-20065-6_10"},{"key":"e_1_3_3_166_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2023.110035"},{"key":"e_1_3_3_167_2","doi-asserted-by":"publisher","DOI":"10.1137\/070696143"},{"key":"e_1_3_3_168_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2024.3405006"},{"key":"e_1_3_3_169_2","article-title":"Maximal Jacobian-based saliency map attack","author":"Wiyatno Rey","year":"2018","unstructured":"Rey Wiyatno and Anqi Xu. 2018. Maximal Jacobian-based saliency map attack. International Conference on Learning Representations (2018).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_170_2","first-page":"5286","volume-title":"International Conference on Machine Learning","author":"Wong Eric","year":"2018","unstructured":"Eric Wong and Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning. PMLR, 5286\u20135295."},{"key":"e_1_3_3_171_2","article-title":"Fast is better than free: Revisiting adversarial training","author":"Wong Eric","year":"2020","unstructured":"Eric Wong, Leslie Rice, and J. Zico Kolter. 2020. Fast is better than free: Revisiting adversarial training. International Conference on Learning Representations (2020).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_172_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICME.2019.00095"},{"key":"e_1_3_3_173_2","article-title":"Adversarial driving: Attacking end-to-end autonomous driving systems","author":"Wu Han","year":"2021","unstructured":"Han Wu and Wenjie Ruan. 2021. 
Adversarial driving: Attacking end-to-end autonomous driving systems. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (2021).","journal-title":"Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision"},{"key":"e_1_3_3_174_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP49359.2023.10223158"},{"key":"e_1_3_3_175_2","article-title":"Generating adversarial examples with adversarial networks","author":"Xiao Chaowei","year":"2018","unstructured":"Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. 2018. Generating adversarial examples with adversarial networks. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV) (2018).","journal-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)"},{"key":"e_1_3_3_176_2","volume-title":"International Conference on Learning Representations","author":"Xiao Chaowei","year":"2018","unstructured":"Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. 2018. Spatially transformed adversarial examples. In International Conference on Learning Representations."},{"key":"e_1_3_3_177_2","article-title":"Mitigating adversarial effects through randomization","author":"Xie Cihang","year":"2017","unstructured":"Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. 2017. Mitigating adversarial effects through randomization. International Conference on Learning Representations (2017).","journal-title":"International Conference on Learning Representations"},{"key":"e_1_3_3_178_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVT.2021.3061065"},{"key":"e_1_3_3_179_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11633-019-1211-x"},{"key":"e_1_3_3_180_2","article-title":"Feature squeezing: Detecting adversarial examples in deep neural networks","author":"Xu W.","year":"2017","unstructured":"W. Xu, D. Evans, and Y. Qi. 2017. 
Feature squeezing: Detecting adversarial examples in deep neural networks. Network and Distributed System Security Symposium (2017).","journal-title":"Network and Distributed System Security Symposium"},{"key":"e_1_3_3_181_2","article-title":"Feature squeezing: Detecting adversarial examples in deep neural networks","author":"Xu Weilin","year":"2017","unstructured":"Weilin Xu, David Evans, and Yanjun Qi. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2017 (2017).","journal-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2017"},{"key":"e_1_3_3_182_2","article-title":"S3ANet: Spatial-spectral self-attention learning network for defending against adversarial attacks in hyperspectral image classification","author":"Xu Yichu","year":"2024","unstructured":"Yichu Xu, Yonghao Xu, Hongzan Jiao, Zhi Gao, and Lefei Zhang. 2024. S3ANet: Spatial-spectral self-attention learning network for defending against adversarial attacks in hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing (2024).","journal-title":"IEEE Transactions on Geoscience and Remote Sensing"},{"key":"e_1_3_3_183_2","doi-asserted-by":"publisher","DOI":"10.1007\/s42154-023-00220-9"},{"key":"e_1_3_3_184_2","doi-asserted-by":"publisher","DOI":"10.18178\/ijmlc.2018.8.3.688"},{"key":"e_1_3_3_185_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3034899"},{"key":"e_1_3_3_186_2","first-page":"11953","volume-title":"International Conference on Machine Learning","author":"Yeats Eric C.","year":"2021","unstructured":"Eric C. Yeats, Yiran Chen, and Hai Li. 2021. Improving gradient regularization using complex-valued neural networks. In International Conference on Machine Learning. 
PMLR, 11953\u201311963."},{"key":"e_1_3_3_187_2","article-title":"D-BADGE: Decision-based adversarial batch attack with directional gradient estimation","author":"Yu Geunhyeok","year":"2024","unstructured":"Geunhyeok Yu, Minwoo Jeon, and Hyoseok Hwang. 2024. D-BADGE: Decision-based adversarial batch attack with directional gradient estimation. IEEE Access (2024).","journal-title":"IEEE Access"},{"key":"e_1_3_3_188_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2886017"},{"key":"e_1_3_3_189_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2015.2442956"},{"key":"e_1_3_3_190_2","first-page":"1","article-title":"Visual privacy attacks and defenses in deep learning: A survey","author":"Zhang Guangsheng","year":"2022","unstructured":"Guangsheng Zhang, Bo Liu, Tianqing Zhu, Andi Zhou, and Wanlei Zhou. 2022. Visual privacy attacks and defenses in deep learning: A survey. Artificial Intelligence Review (2022), 1\u201355.","journal-title":"Artificial Intelligence Review"},{"key":"e_1_3_3_191_2","article-title":"Versatile defense against adversarial attacks on image recognition","author":"Zhang Haibo","year":"2024","unstructured":"Haibo Zhang, Zhihua Yao, and Kouichi Sakurai. 2024. Versatile defense against adversarial attacks on image recognition. arXiv preprint arXiv:2403.08170 (2024).","journal-title":"arXiv preprint arXiv:2403.08170"},{"key":"e_1_3_3_192_2","first-page":"7472","volume-title":"International Conference on Machine Learning","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning. 
PMLR, 7472\u20137482."},{"key":"e_1_3_3_193_2","article-title":"PWG-IDS: An intrusion detection model for solving class imbalance in IIoT networks using generative adversarial networks","author":"Zhang Lei","year":"2021","unstructured":"Lei Zhang, Shuaimin Jiang, Xiajiong Shen, Brij B. Gupta, and Zhihong Tian. 2021. PWG-IDS: An intrusion detection model for solving class imbalance in IIoT networks using generative adversarial networks. International Conference on Machine Learning (2021).","journal-title":"International Conference on Machine Learning"},{"key":"e_1_3_3_194_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM58522.2023.00089"},{"key":"e_1_3_3_195_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics12061464"},{"key":"e_1_3_3_196_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9747294"},{"key":"e_1_3_3_197_2","article-title":"Suppressing the unusual: Towards robust CNNs using symmetric activation functions","author":"Zhao Qiyang","year":"2016","unstructured":"Qiyang Zhao and Lewis D. Griffin. 2016. Suppressing the unusual: Towards robust CNNs using symmetric activation functions. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2016 (2016).","journal-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2016"},{"key":"e_1_3_3_198_2","doi-asserted-by":"publisher","DOI":"10.1109\/DSN.2019.00068"},{"key":"e_1_3_3_199_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8852078"},{"key":"e_1_3_3_200_2","article-title":"Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks","volume":"31","author":"Zheng Zhihao","year":"2018","unstructured":"Zhihao Zheng and Pengyu Hong. 2018. Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. 
Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_201_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380422"},{"key":"e_1_3_3_202_2","doi-asserted-by":"publisher","DOI":"10.3390\/fi13030073"},{"key":"e_1_3_3_203_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485730.3485935"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3691625","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3691625","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:09:40Z","timestamp":1750295380000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3691625"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,7]]},"references-count":202,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1,31]]}},"alternative-id":["10.1145\/3691625"],"URL":"https:\/\/doi.org\/10.1145\/3691625","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,7]]},"assertion":[{"value":"2023-06-02","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-08-25","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-10-07","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}