{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,22]],"date-time":"2026-04-22T19:33:35Z","timestamp":1776886415606,"version":"3.51.2"},"reference-count":128,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,3,15]],"date-time":"2024-03-15T00:00:00Z","timestamp":1710460800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62072309 and 62102359"],"award-info":[{"award-number":["62072309 and 62102359"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"CAS Project for Young Scientists in Basic Research","award":["YSBR-040"],"award-info":[{"award-number":["YSBR-040"]}]},{"name":"ISCAS New Cultivation Project","award":["ISCAS-PYFX-202201"],"award-info":[{"award-number":["ISCAS-PYFX-202201"]}]},{"name":"Key Research and Development Program of Zhejiang","award":["2022C01018"],"award-info":[{"award-number":["2022C01018"]}]},{"name":"Ministry of Education, Singapore, under its Academic Research Fund Tier 3","award":["MOET32020-0004"],"award-info":[{"award-number":["MOET32020-0004"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2024,3,31]]},"abstract":"<jats:p>\n            As a new programming paradigm, deep learning (DL) has achieved impressive performance in areas such as image processing and speech recognition, and has expanded its application to solve many real-world problems. 
However, neural networks and DL are normally black-box systems; even worse, DL-based software is vulnerable to threats from abnormal examples, such as adversarial and backdoored examples constructed by attackers with malicious intentions as well as unintentionally mislabeled samples. Therefore, it is important and urgent to detect such abnormal examples. Although various detection approaches have been proposed, each addressing specific types of abnormal examples, they suffer from limitations; to date, this problem remains of considerable interest. In this work, we first propose a novel characterization to distinguish abnormal examples from normal ones based on the observation that abnormal examples have significantly different (adversarial) robustness from normal ones. We systematically analyze these three different types of abnormal samples in terms of robustness and find that they have different characteristics from normal ones. As robustness measurement is computationally expensive and hence can be challenging to scale to large networks, we then propose to effectively and efficiently measure the robustness of an input sample using the cost of adversarially attacking the input, which was originally proposed to test the robustness of neural networks against adversarial examples. Next, we propose a novel detection method, named\n            <jats:italic>attack as detection<\/jats:italic>\n            (A\n            <jats:sup>2<\/jats:sup>\n            D for short), which uses the cost of adversarially attacking an input instead of robustness to check if it is abnormal. Our detection method is generic, and various adversarial attack methods could be leveraged. Extensive experiments show that A\n            <jats:sup>2<\/jats:sup>\n            D is more effective than recent promising approaches that were proposed to detect only one specific type of abnormal example. 
We also thoroughly discuss possible adaptive attack methods against our adversarial example detection method and show that A\n            <jats:sup>2<\/jats:sup>\n            D is still effective in defending against carefully designed adaptive adversarial attacks\u2014for example, the attack success rate drops to 0% on CIFAR10.\n          <\/jats:p>","DOI":"10.1145\/3631977","type":"journal-article","created":{"date-parts":[[2023,11,10]],"date-time":"2023-11-10T11:37:12Z","timestamp":1699616232000},"page":"1-45","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["Attack as Detection: Using Adversarial Attack Methods to Detect Abnormal Examples"],"prefix":"10.1145","volume":"33","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4189-3258","authenticated-orcid":false,"given":"Zhe","family":"Zhao","sequence":"first","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8277-3119","authenticated-orcid":false,"given":"Guangke","family":"Chen","sequence":"additional","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-5804-6551","authenticated-orcid":false,"given":"Tong","family":"Liu","sequence":"additional","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4121-000X","authenticated-orcid":false,"given":"Taishan","family":"Li","sequence":"additional","affiliation":[{"name":"ShanghaiTech University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0581-2679","authenticated-orcid":false,"given":"Fu","family":"Song","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7113-7635","authenticated-orcid":false,"given":"Jingyi","family":"Wang","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3545-1392","authenticated-orcid":false,"given":"Jun","family":"Sun","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2024,3,15]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"GitHub. GitHub. 2022. A \\(^2\\) D. Retrieved November 21 2023 from https:\/\/github.com\/S3L-official\/attack-as-detection"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2021.102277"},{"key":"e_1_3_1_4_2","article-title":"Apollo: An Open, Reliable and Secure Software Platform for Autonomous Driving Systems","year":"2018","unstructured":"Apollo. 2018. Apollo: An Open, Reliable and Secure Software Platform for Autonomous Driving Systems. Retrieved November 21, 2023 from http:\/\/apollo.auto","journal-title":"http:\/\/apollo.auto"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/1985793.1985795"},{"key":"e_1_3_1_6_2","first-page":"274","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning. 274\u2013283."},{"key":"e_1_3_1_7_2","first-page":"284","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing robust adversarial examples. In Proceedings of the International Conference on Machine Learning. 
284\u2013293."},{"key":"e_1_3_1_8_2","first-page":"2613","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Bastani Osbert","year":"2016","unstructured":"Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, Aditya V. Nori, and Antonio Criminisi. 2016. Measuring neural net robustness with constraints. In Proceedings of the Annual Conference on Neural Information Processing Systems. 2613\u20132621."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/34.1-2.28"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1964.tb00553.x"},{"key":"e_1_3_1_11_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Brendel Wieland","year":"2018","unstructured":"Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2018. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2021.3088661"},{"key":"e_1_3_1_13_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Buckman Jacob","year":"2018","unstructured":"Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. 2018. Thermometer encoding: One hot way to resist adversarial examples. In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_14_2","article-title":"Defensive distillation is not robust to adversarial examples","volume":"1607","author":"Carlini Nicholas","year":"2016","unstructured":"Nicholas Carlini and David Wagner. 2016. Defensive distillation is not robust to adversarial examples. 
CoRR abs\/1607.04311 (2016).","journal-title":"CoRR"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140444"},{"key":"e_1_3_1_16_2","article-title":"Magnet and \u201cefficient defenses against adversarial attacks\u201d are not robust to adversarial examples","volume":"1711","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Magnet and \u201cefficient defenses against adversarial attacks\u201d are not robust to adversarial examples. CoRR abs\/1711.08478 (2017).","journal-title":"CoRR"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_1_18_2","volume-title":"Proceedings of the Workshop on Artificial Intelligence Safety, Co-Located with the 33rd AAAI Conference on Artificial Intelligence","author":"Chen Bryant","year":"2019","unstructured":"Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian M. Molloy, and Biplav Srivastava. 2019. Detecting backdoor attacks on deep neural networks by activation clustering. In Proceedings of the Workshop on Artificial Intelligence Safety, Co-Located with the 33rd AAAI Conference on Artificial Intelligence."},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP40001.2021.00004"},{"key":"e_1_3_1_20_2","first-page":"2437","volume-title":"Proceedings of the 32nd USENIX Security Symposium","author":"Chen Guangke","year":"2023","unstructured":"Guangke Chen, Yedi Zhang, Zhe Zhao, and Fu Song. 2023. QFA2SR: Query-free adversarial transfer attacks to speaker recognition systems. In Proceedings of the 32nd USENIX Security Symposium, Joseph A. Calandrino and Carmela Troncoso (Eds.). 
USENIX Association, 2437\u20132454."},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2022.3220673"},{"key":"e_1_3_1_22_2","first-page":"1062","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Chen Pengfei","year":"2019","unstructured":"Pengfei Chen, Ben Ben Liao, Guangyong Chen, and Shengyu Zhang. 2019. Understanding and utilizing deep neural networks trained with noisy labels. In Proceedings of the 36th International Conference on Machine Learning. 1062\u20131070."},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/SIBGRAPI51738.2020.00010"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5778"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICECCS51672.2020.00016"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","unstructured":"Yizhak Yisrael Elboher Justin Gottschlich and Guy Katz. 2020. An abstraction-based framework for neural network verification. In Computer Aided Verification. Lecture Notes in Computer Science Vol. 12224. Springer 43\u201365.","DOI":"10.1007\/978-3-030-53288-8_3"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00175"},{"key":"e_1_3_1_28_2","volume-title":"Geometric Measure Theory","author":"Federer Herbert","year":"2014","unstructured":"Herbert Federer. 2014. Geometric Measure Theory. Springer."},{"key":"e_1_3_1_29_2","article-title":"Detecting adversarial samples from artifacts","volume":"1703","author":"Feinman Reuben","year":"2017","unstructured":"Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, and Andrew B. Gardner. 2017. Detecting adversarial samples from artifacts. 
CoRR abs\/1703.00410 (2017).","journal-title":"CoRR"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2013.2292894"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3359789.3359790"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00058"},{"key":"e_1_3_1_33_2","volume-title":"Proceedings of the 3rd International Conference on Learning Representations","author":"Goodfellow Ian","year":"2015","unstructured":"Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations."},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2909068"},{"key":"e_1_3_1_35_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Guo Chuan","year":"2018","unstructured":"Chuan Guo, Mayank Rana, Moustapha Ciss\u00e9, and Laurens van der Maaten. 2018. Countering adversarial images using input transformations. In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_36_2","article-title":"SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency","author":"Guo Junfeng","year":"2023","unstructured":"Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, and Cong Liu. 2023. SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency. 
arXiv preprint arXiv:2302.03251 (2023).","journal-title":"arXiv preprint arXiv:2302.03251"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM50108.2020.00025"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSRE52982.2021.00044"},{"key":"e_1_3_1_39_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations","author":"He Warren","year":"2018","unstructured":"Warren He, Bo Li, and Dawn Song. 2018. Decision boundary analysis of adversarial examples. In Proceedings of the 5th International Conference on Learning Representations."},{"key":"e_1_3_1_40_2","volume-title":"Proceedings of the 11th USENIX Workshop on Offensive Technologies","author":"He Warren","year":"2017","unstructured":"Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song. 2017. Adversarial example defense: Ensembles of weak defenses are not strong. In Proceedings of the 11th USENIX Workshop on Offensive Technologies."},{"key":"e_1_3_1_41_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations","author":"Hendrycks Dan","year":"2017","unstructured":"Dan Hendrycks and Kevin Gimpel. 2017. Early methods for detecting adversarial images. In Proceedings of the 5th International Conference on Learning Representations."},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33013771"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2013.6706807"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2020.100270"},{"key":"e_1_3_1_45_2","first-page":"2142","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In Proceedings of the 35th International Conference on Machine Learning. 
2142\u20132151."},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.3301962"},{"key":"e_1_3_1_47_2","first-page":"2304","volume-title":"Proceedings of the 35th International Conference on Machine Learning","author":"Jiang Lu","year":"2018","unstructured":"Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In Proceedings of the 35th International Conference on Machine Learning. 2304\u20132313."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-63387-9_5"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE.2019.00108"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00038"},{"key":"e_1_3_1_51_2","volume-title":"Learning Multiple Layers of Features from Tiny Images","author":"Krizhevsky. Alex","year":"2009","unstructured":"Alex Krizhevsky.2009. Learning Multiple Layers of Features from Tiny Images. Technical Report. University of Toronto."},{"key":"e_1_3_1_52_2","volume-title":"Proceedings of the 5th International Conference on Learning Representations","author":"Kurakin Alexey","year":"2017","unstructured":"Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2017. Adversarial examples in the physical world. In Proceedings of the 5th International Conference on Learning Representations."},{"key":"e_1_3_1_53_2","article-title":"MNIST and CIFAR10 Adversarial Examples Challenges","author":"Lab Madry","year":"2020","unstructured":"Madry Lab. 2020. MNIST and CIFAR10 Adversarial Examples Challenges. Retrieved November 21, 2023 from https:\/\/github.com\/MadryLab","journal-title":"https:\/\/github.com\/MadryLab"},{"key":"e_1_3_1_54_2","volume-title":"An Introduction to Mathematical Statistics and Its Applications","author":"Larsen Richard J.","year":"2011","unstructured":"Richard J. Larsen and Morris L. Marx. 2011. 
An Introduction to Mathematical Statistics and Its Applications. Prentice Hall."},{"key":"e_1_3_1_55_2","unstructured":"Yann LeCun Corinna Cortes and Christopher J. C. Burges. 1998. The MNIST Database of Handwritten Digits. Retrieved November 21 2023 from http:\/\/yann.lecun.com\/exdb\/mnist\/index.html"},{"key":"e_1_3_1_56_2","first-page":"7167","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Lee Kimin","year":"2018","unstructured":"Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Proceedings of the Annual Conference on Neural Information Processing Systems. 7167\u20137177."},{"key":"e_1_3_1_57_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Li Yiming","year":"2022","unstructured":"Yiming Li, Yang Bai, Yong Jiang, Yong Yang, Shu-Tao Xia, and Bo Li. 2022. Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection. In Proceedings of the Annual Conference on Neural Information Processing Systems."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01615"},{"key":"e_1_3_1_59_2","volume-title":"Proceedings of the 2023 ICLR Workshop","author":"Li Yiming","year":"2023","unstructured":"Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, and Shu-Tao Xia. 2023. BackdoorBox: A Python toolbox for backdoor learning. In Proceedings of the 2023 ICLR Workshop."},{"key":"e_1_3_1_60_2","article-title":"Backdoor attack in the physical world","author":"Li Yiming","year":"2021","unstructured":"Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. 2021. Backdoor attack in the physical world. 
arXiv preprint arXiv:2104.02361 (2021).","journal-title":"arXiv preprint arXiv:2104.02361"},{"key":"e_1_3_1_61_2","article-title":"Rethinking the trigger of backdoor attack","volume":"2004","author":"Li Yiming","year":"2020","unstructured":"Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shutao Xia. 2020. Rethinking the trigger of backdoor attack. CoRR abs\/2004.04692 (2020).","journal-title":"CoRR"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE-NIER.2019.00031"},{"key":"e_1_3_1_63_2","article-title":"Abstraction and refinement: Towards scalable and exact verification of neural networks","volume":"2207","author":"Liu Jiaxiang","year":"2022","unstructured":"Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, and Zhong Ming. 2022. Abstraction and refinement: Towards scalable and exact verification of neural networks. CoRR abs\/2207.00759 (2022).","journal-title":"CoRR"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-020-0546-7"},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.56"},{"key":"e_1_3_1_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3238147.3238202"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSRE.2018.00021"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2019.23415"},{"key":"e_1_3_1_71_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Ma Xingjun","year":"2018","unstructured":"Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi N. R. Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, and James Bailey. 2018. Characterizing adversarial subspaces using local intrinsic dimensionality. 
In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_72_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0020-0190(98)00165-3"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSMCB.2012.2223460"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.4324\/9781410611949"},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134057"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2212.10376"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2017.172"},{"key":"e_1_3_1_80_2","article-title":"WaNet\u2014Imperceptible warping-based backdoor attack","author":"Nguyen Anh","year":"2021","unstructured":"Anh Nguyen and Anh Tran. 2021. WaNet\u2014Imperceptible warping-based backdoor attack. arXiv preprint arXiv:2102.10369 (2021).","journal-title":"arXiv preprint arXiv:2102.10369"},{"key":"e_1_3_1_81_2","volume-title":"Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks","author":"Northcutt Curtis G.","year":"2021","unstructured":"Curtis G. Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabilize machine learning benchmarks. 
In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks."},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.12125"},{"key":"e_1_3_1_83_2","article-title":"Transferability in machine learning: From phenomena to black-box attacks using adversarial samples","volume":"1605","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. 2016. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. CoRR abs\/1605.07277 (2016).","journal-title":"CoRR"},{"key":"e_1_3_1_84_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_1_85_2","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1145\/3132747.3132785"},{"key":"e_1_3_1_88_2","article-title":"Foolbox: A Python toolbox to benchmark the robustness of machine learning models","volume":"1707","author":"Rauber Jonas","year":"2017","unstructured":"Jonas Rauber, Wieland Brendel, and Matthias Bethge. 2017. Foolbox: A Python toolbox to benchmark the robustness of machine learning models. CoRR abs\/1707.04131 (2017).","journal-title":"CoRR"},{"key":"e_1_3_1_89_2","first-page":"5498","volume-title":"Proceedings of the 36th International Conference on Machine Learning","author":"Roth Kevin","year":"2019","unstructured":"Kevin Roth, Yannic Kilcher, and Thomas Hofmann. 2019. The odds are odd: A statistical test for detecting adversarial examples. In Proceedings of the 36th International Conference on Machine Learning. 5498\u20135507."},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.5555\/3304889.3305029"},{"key":"e_1_3_1_91_2","article-title":"Don\u2019t trigger me! 
A triggerless backdoor attack against deep neural networks","author":"Salem Ahmed","year":"2020","unstructured":"Ahmed Salem, Michael Backes, and Yang Zhang. 2020. Don\u2019t trigger me! A triggerless backdoor attack against deep neural networks. arXiv preprint arXiv:2010.03282 (2020).","journal-title":"arXiv preprint arXiv:2010.03282"},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445518"},{"key":"e_1_3_1_93_2","first-page":"67","volume-title":"Proceedings of the ACM SIGSAC Conference on Computer and Communications Security","author":"Shan Shawn","year":"2020","unstructured":"Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, and Ben Y. Zhao. 2020. Gotta catch\u2019 em all: Using honeypots to catch adversarial attacks on neural networks. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 67\u201383."},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-bioeng-071516-044442"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22510"},{"key":"e_1_3_1_96_2","article-title":"Testing deep neural networks","volume":"1803","author":"Sun Youcheng","year":"2018","unstructured":"Youcheng Sun, Xiaowei Huang, and Daniel Kroening. 2018. Testing deep neural networks. CoRR abs\/1803.04792 (2018).","journal-title":"CoRR"},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.1145\/3238147.3238172"},{"key":"e_1_3_1_98_2","doi-asserted-by":"publisher","unstructured":"Zhensu Sun Xiaoning Du Fu Song Mingze Ni and Li Li. 2022. CoProtector: Protect open-source code against unauthorized training usage with data poisoning. In Proceedings of the ACM Web Conference. ACM New York NY 652\u2013660. 
DOI:10.1145\/3485447.3512225","DOI":"10.1145\/3485447.3512225"},{"key":"e_1_3_1_99_2","volume-title":"Proceedings of the 2nd International Conference on Learning Representations","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations."},{"key":"e_1_3_1_100_2","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP48549.2020.00019"},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377812.3382133"},{"key":"e_1_3_1_102_2","volume-title":"Proceedings of the Annual Conference on Neural Information Processing Systems","author":"Tram\u00e8r Florian","year":"2020","unstructured":"Florian Tram\u00e8r, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. 2020. On adaptive attacks to adversarial example defenses. In Proceedings of the Annual Conference on Neural Information Processing Systems."},{"key":"e_1_3_1_103_2","unstructured":"Alexander Turner Dimitris Tsipras and Aleksander Madry. 2018. Clean-label backdoor attacks. In Proceedings of the ICLR 2018 Conference."},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00031"},{"key":"e_1_3_1_105_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380379"},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00038"},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE.2019.00126"},{"key":"e_1_3_1_108_2","first-page":"1599","volume-title":"Proceedings of the USENIX Security Symposium","author":"Wang Shiqi","year":"2018","unstructured":"Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. 2018. Formal security analysis of neural networks using symbolic intervals. In Proceedings of the USENIX Security Symposium. 
1599\u20131614."},{"key":"e_1_3_1_109_2","first-page":"29909","article-title":"Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification","volume":"34","author":"Wang Shiqi","year":"2021","unstructured":"Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. 2021. Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. Advances in Neural Information Processing Systems 34 (2021), 29909\u201329921.","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"11","key":"e_1_3_1_110_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11432-015-5391-x","article-title":"Crowdsourcing label quality: A theoretical analysis","volume":"58","author":"Wang Wei","year":"2015","unstructured":"Wei Wang and Zhi-Hua Zhou. 2015. Crowdsourcing label quality: A theoretical analysis. Science China Information Sciences 58, 11 (2015), 1\u201312.","journal-title":"Science China Information Sciences"},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00906"},{"key":"e_1_3_1_112_2","first-page":"574","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Weiss Gary M.","year":"1998","unstructured":"Gary M. Weiss and Haym Hirsh. 1998. The problem with noise and small disjuncts. In Proceedings of the International Conference on Machine Learning. 574."},{"key":"e_1_3_1_113_2","volume-title":"Proceedings of the 6th International Conference on Learning Representations","author":"Weng Tsui-Wei","year":"2018","unstructured":"Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. 2018. Evaluating the robustness of neural networks: An extreme value theory approach. 
In Proceedings of the 6th International Conference on Learning Representations."},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-89960-2_22"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00059"},{"key":"e_1_3_1_116_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23198"},{"key":"e_1_3_1_117_2","doi-asserted-by":"publisher","DOI":"10.1109\/TrustCom53373.2021.00093"},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409671"},{"key":"e_1_3_1_119_2","first-page":"32598","article-title":"OpenOOD: Benchmarking generalized out-of-distribution detection","volume":"35","author":"Yang Jingkang","year":"2022","unstructured":"Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, and Ziwei Liu. 2022. OpenOOD: Benchmarking generalized out-of-distribution detection. Advances in Neural Information Processing Systems 35 (2022), 32598\u201332611.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_120_2","article-title":"Generalized out-of-distribution detection: A survey","volume":"2110","author":"Yang Jingkang","year":"2021","unstructured":"Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. 2021. Generalized out-of-distribution detection: A survey. CoRR abs\/2110.11334 (2021).","journal-title":"CoRR"},{"key":"e_1_3_1_121_2","doi-asserted-by":"publisher","unstructured":"Yedi Zhang Fu Song and Jun Sun. 2023. QEBVerif: Quantization error bound verification of neural networks. In Computer Aided Verification. Lecture Notes in Computer Science Vol. 13965. Springer 413\u2013437. 
DOI:10.1007\/978-3-031-37703-7_20","DOI":"10.1007\/978-3-031-37703-7_20"},{"key":"e_1_3_1_122_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-81685-8_8"},{"key":"e_1_3_1_123_2","doi-asserted-by":"publisher","DOI":"10.1145\/3563212"},{"key":"e_1_3_1_124_2","doi-asserted-by":"publisher","DOI":"10.1145\/3551349.3556916"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01445"},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.1145\/3460319.3464822"},{"key":"e_1_3_1_127_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-22308-2_20"},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-71500-7_16"},{"key":"e_1_3_1_129_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00160"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3631977","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3631977","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:51:02Z","timestamp":1750287062000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3631977"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,15]]},"references-count":128,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,3,31]]}},"alternative-id":["10.1145\/3631977"],"URL":"https:\/\/doi.org\/10.1145\/3631977","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,15]]},"assertion":[{"value":"2023-04-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-10-19","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}