{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T16:03:25Z","timestamp":1775837005434,"version":"3.50.1"},"reference-count":178,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2024,1,22]],"date-time":"2024-01-22T00:00:00Z","timestamp":1705881600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"HK RGC GRF","award":["PolyU 15201323"],"award-info":[{"award-number":["PolyU 15201323"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>Benefiting from the rapid development of deep learning, 2D and 3D computer vision applications are deployed in many safe-critical systems, such as autopilot and identity authentication. However, deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks. The physically realizable adversarial attacks further pose fatal threats to the application and human safety. Lots of papers have emerged to investigate the robustness and safety of deep learning models against adversarial attacks. To lead to trustworthy AI, we first construct a general threat model from different perspectives and then comprehensively review the latest progress of both 2D and 3D adversarial attacks. We extend the concept of adversarial examples beyond imperceptive perturbations and collate over 170 papers to give an overview of deep learning model robustness against various adversarial attacks. To the best of our knowledge, we are the first to systematically investigate adversarial attacks for 3D models, a flourishing field applied to many real-world applications. 
In addition, we examine physical adversarial attacks that lead to safety violations. Finally, we summarize currently popular topics, offer insights into open challenges, and shed light on future research toward trustworthy AI.<\/jats:p>","DOI":"10.1145\/3636551","type":"journal-article","created":{"date-parts":[[2023,12,7]],"date-time":"2023-12-07T11:54:43Z","timestamp":1701950083000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":25,"title":["A Survey of Robustness and Safety of 2D and 3D Deep Learning Models against Adversarial Attacks"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8859-8331","authenticated-orcid":false,"given":"Yanjie","family":"Li","sequence":"first","affiliation":[{"name":"The Hong Kong Polytechnic University, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5118-3570","authenticated-orcid":false,"given":"Bin","family":"Xie","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6741-4871","authenticated-orcid":false,"given":"Songtao","family":"Guo","sequence":"additional","affiliation":[{"name":"Chongqing University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7296-9222","authenticated-orcid":false,"given":"Yuanyuan","family":"Yang","sequence":"additional","affiliation":[{"name":"Stony Brook University, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4223-8220","authenticated-orcid":false,"given":"Bin","family":"Xiao","sequence":"additional","affiliation":[{"name":"The Hong Kong Polytechnic University, Hong Kong"}]}],"member":"320","published-online":{"date-parts":[[2024,1,22]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW50498.2020.00395"},{"key":"e_1_3_1_3_2","article-title":"There are no bit parts for sign bits in black-box attacks","author":"Al-Dujaili 
Abdullah","year":"2019","unstructured":"Abdullah Al-Dujaili and Una-May O\u2019Reilly. 2019. There are no bit parts for sign bits in black-box attacks. arXiv preprint arXiv:1902.06894 (2019).","journal-title":"arXiv preprint arXiv:1902.06894"},{"key":"e_1_3_1_4_2","volume-title":"International Conference on Learning Representations","author":"Alaifari Rima","year":"2018","unstructured":"Rima Alaifari, Giovanni S. Alberti, and Tandri Gauksson. 2018. ADef: An iterative algorithm to construct adversarial deformations. In International Conference on Learning Representations."},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"e_1_3_1_6_2","first-page":"274","volume-title":"International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning. PMLR, 274\u2013283."},{"key":"e_1_3_1_7_2","first-page":"284","volume-title":"International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing robust adversarial examples. In International Conference on Machine Learning. PMLR, 284\u2013293."},{"key":"e_1_3_1_8_2","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1145\/3475724.3483604","volume-title":"1st International Workshop on Adversarial Learning for Multimedia","author":"Aydin Ayberk","year":"2021","unstructured":"Ayberk Aydin, Deniz Sen, Berat Tuna Karli, Oguz Hanoglu, and Alptekin Temizel. 2021. Imperceptible adversarial examples by spatial chroma-shift. In 1st International Workshop on Adversarial Learning for Multimedia. 
8\u201314."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11672"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01258-8_10"},{"key":"e_1_3_1_11_2","volume-title":"International Conference on Learning Representations","author":"Bhattad Anand","year":"2019","unstructured":"Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, and D. A. Forsyth. 2019. Unrestricted adversarial examples via semantic manipulation. In International Conference on Learning Representations."},{"key":"e_1_3_1_12_2","article-title":"Decision-based adversarial attacks: Reliable attacks against black-box machine learning models","author":"Brendel Wieland","year":"2017","unstructured":"Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2017. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248 (2017).","journal-title":"arXiv preprint arXiv:1712.04248"},{"key":"e_1_3_1_13_2","article-title":"Adversarial patch","author":"Brown Tom B.","year":"2017","unstructured":"Tom B. Brown, Dandelion Man\u00e9, Aurko Roy, Mart\u00edn Abadi, and Justin Gilmer. 2017. Adversarial patch. arXiv preprint arXiv:1712.09665 (2017).","journal-title":"arXiv preprint arXiv:1712.09665"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00506"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01481"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3339815"},{"key":"e_1_3_1_17_2","article-title":"Adversarial objects against lidar-based autonomous driving systems","author":"Cao Yulong","year":"2019","unstructured":"Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, and Bo Li. 2019. Adversarial objects against lidar-based autonomous driving systems. 
arXiv preprint arXiv:1907.05418 (2019).","journal-title":"arXiv preprint arXiv:1907.05418"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22349"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403225"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP40000.2020.00045"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11302"},{"key":"e_1_3_1_23_2","first-page":"15","volume-title":"10th ACM Workshop on Artificial Intelligence and Security","author":"Chen Pin-Yu","year":"2017","unstructured":"Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In 10th ACM Workshop on Artificial Intelligence and Security. 15\u201326."},{"key":"e_1_3_1_24_2","first-page":"52","volume-title":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Chen Shang-Tse","year":"2018","unstructured":"Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Polo Chau. 2018. Shapeshifter: Robust physical adversarial attack on faster r-CNN object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 52\u201368."},{"key":"e_1_3_1_25_2","article-title":"Content-based unrestricted adversarial attack","author":"Chen Zhaoyu","year":"2023","unstructured":"Zhaoyu Chen, Bo Li, Shuang Wu, Kaixun Jiang, Shouhong Ding, and Wenqiang Zhang. 2023. Content-based unrestricted adversarial attack. arXiv preprint arXiv:2305.10665 (2023).","journal-title":"arXiv preprint arXiv:2305.10665"},{"key":"e_1_3_1_26_2","volume-title":"International Conference on Learning Representations","author":"Cheng Minhao","year":"2020","unstructured":"Minhao Cheng, Simranjit Singh, Patrick H. 
Chen, Pin-Yu Chen, Sijia Liu, and Cho-Jui Hsieh. 2020. Sign-OPT: A query-efficient hard-label adversarial attack. In International Conference on Learning Representations."},{"key":"e_1_3_1_27_2","volume-title":"International Conference on Learning Representations","author":"Cheng Minhao","year":"2019","unstructured":"Minhao Cheng, Huan Zhang, Cho-Jui Hsieh, Thong Le, Pin-Yu Chen, and Jinfeng Yi. 2019. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representations. ICLR."},{"key":"e_1_3_1_28_2","article-title":"Improving black-box adversarial attacks with a transfer-based prior","volume":"32","author":"Cheng Shuyu","year":"2019","unstructured":"Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. 2019. Improving black-box adversarial attacks with a transfer-based prior. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i6.20595"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00482"},{"key":"e_1_3_1_31_2","first-page":"11226","article-title":"GreedyFool: Distortion-aware sparse adversarial attack","volume":"33","author":"Dong Xiaoyi","year":"2020","unstructured":"Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, and Dong Chen. 2020. GreedyFool: Distortion-aware sparse adversarial attack. 
Advances in Neural Information Processing Systems 33 (2020), 11226\u201311236.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00957"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00444"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00108"},{"key":"e_1_3_1_35_2","unstructured":"Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. 2019. Exploring the landscape of spatial robustness. In International Conference on Machine Learning. PMLR, 1802\u20131811."},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00175"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.5244\/C.29.106"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00816"},{"key":"e_1_3_1_39_2","article-title":"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness","author":"Geirhos Robert","year":"2018","unstructured":"Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. 2018. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231 (2018).","journal-title":"arXiv preprint arXiv:1811.12231"},{"key":"e_1_3_1_40_2","article-title":"Motivating the rules of the game for adversarial example research","author":"Gilmer Justin","year":"2018","unstructured":"Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen, and George E. Dahl. 2018. Motivating the rules of the game for adversarial example research. 
arXiv preprint arXiv:1807.06732 (2018).","journal-title":"arXiv preprint arXiv:1807.06732"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW54120.2021.00016"},{"key":"e_1_3_1_42_2","article-title":"Explaining and harnessing adversarial examples","author":"Goodfellow Ian J.","year":"2014","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).","journal-title":"arXiv preprint arXiv:1412.6572"},{"key":"e_1_3_1_43_2","first-page":"2484","volume-title":"International Conference on Machine Learning","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Weinberger. 2019. Simple black-box adversarial attacks. In International Conference on Machine Learning. PMLR, 2484\u20132493."},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41095-021-0229-5"},{"key":"e_1_3_1_45_2","article-title":"Subspace attack: Exploiting promising subspaces for query-efficient black-box attacks","volume":"32","author":"Guo Yiwen","year":"2019","unstructured":"Yiwen Guo, Ziang Yan, and Changshui Zhang. 2019. Subspace attack: Exploiting promising subspaces for query-efficient black-box attacks. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58610-2_15"},{"key":"e_1_3_1_47_2","volume-title":"International Conference on Learning Representations","author":"He Warren","year":"2018","unstructured":"Warren He, Bo Li, and Dawn Song. 2018. Decision boundary analysis of adversarial examples. 
In International Conference on Learning Representations."},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2020.3034721"},{"key":"e_1_3_1_49_2","first-page":"14963","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"He Ziwen","year":"2022","unstructured":"Ziwen He, Wei Wang, Jing Dong, and Tieniu Tan. 2022. Transferable sparse adversarial attack. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 14963\u201314972."},{"key":"e_1_3_1_50_2","first-page":"6840","article-title":"Denoising diffusion probabilistic models","volume":"33","author":"Ho Jonathan","year":"2020","unstructured":"Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33 (2020), 6840\u20136851.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00212"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00775"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00109"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01490"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00483"},{"key":"e_1_3_1_56_2","first-page":"2137","volume-title":"International Conference on Machine Learning","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning. PMLR, 2137\u20132146."},{"key":"e_1_3_1_57_2","volume-title":"International Conference on Learning Representations","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, and Aleksander Madry. 2018. 
Prior convictions: Black-box adversarial attacks with bandits and priors. In International Conference on Learning Representations."},{"key":"e_1_3_1_58_2","article-title":"Adversarial examples are not bugs, they are features","volume":"32","author":"Ilyas Andrew","year":"2019","unstructured":"Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. 2019. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_59_2","first-page":"160","volume-title":"2021 IEEE Symposium on Security and Privacy (SP\u201921)","author":"Ji Xiaoyu","year":"2021","unstructured":"Xiaoyu Ji, Yushi Cheng, Yuepeng Zhang, Kai Wang, Chen Yan, Wenyuan Xu, and Kevin Fu. 2021. Poltergeist: Acoustic adversarial machine learning against cameras and computer vision. In 2021 IEEE Symposium on Security and Privacy (SP\u201921). IEEE, 160\u2013175."},{"key":"e_1_3_1_60_2","first-page":"34136","article-title":"ADV-attribute: Inconspicuous and transferable adversarial attack on face recognition","volume":"35","author":"Jia Shuai","year":"2022","unstructured":"Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, and Chao Ma. 2022. ADV-attribute: Inconspicuous and transferable adversarial attack on face recognition. 
Advances in Neural Information Processing Systems 35 (2022), 34136\u201334147.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00487"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00467"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00411"},{"key":"e_1_3_1_64_2","first-page":"8562","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition","author":"Khrulkov Valentin","year":"2018","unstructured":"Valentin Khrulkov and Ivan Oseledets. 2018. Art of singular vectors and universal adversarial perturbations. In IEEE Conference on Computer Vision and Pattern Recognition. 8562\u20138570."},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00770"},{"key":"e_1_3_1_66_2","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1201\/9781351251389-8","volume-title":"Artificial Intelligence Safety and Security","author":"Kurakin Alexey","year":"2018","unstructured":"Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2018. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security. Chapman and Hall\/CRC, 99\u2013112."},{"key":"e_1_3_1_67_2","doi-asserted-by":"crossref","first-page":"2793","DOI":"10.18653\/v1\/2020.acl-main.249","volume-title":"58th Annual Meeting of the Association for Computational Linguistics","author":"Kurita Keita","year":"2020","unstructured":"Keita Kurita, Paul Michel, and Graham Neubig. 2020. Weight poisoning attacks on pretrained models. In 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2793\u20132806."},{"key":"e_1_3_1_68_2","article-title":"Functional adversarial attacks","volume":"32","author":"Laidlaw Cassidy","year":"2019","unstructured":"Cassidy Laidlaw and Soheil Feizi. 2019. Functional adversarial attacks. 
Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00072"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00334"},{"key":"e_1_3_1_71_2","first-page":"3866","volume-title":"International Conference on Machine Learning","author":"Li Yandong","year":"2019","unstructured":"Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, and Boqing Gong. 2019. Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks. In International Conference on Machine Learning. PMLR, 3866\u20133876."},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.02366"},{"key":"e_1_3_1_73_2","volume-title":"International Conference on Learning Representations","author":"Lin Jiadong","year":"2019","unstructured":"Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft. 2019. Nesterov accelerated gradient and scale invariance for adversarial attacks. In International Conference on Learning Representations."},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58601-0_24"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3171659"},{"key":"e_1_3_1_76_2","first-page":"1","article-title":"Imperceptible transfer attack and defense on 3D point cloud classification","author":"Liu Daizong","year":"2022","unstructured":"Daizong Liu and Wei Hu. 2022. Imperceptible transfer attack and defense on 3D point cloud classification. 
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022), 1\u201318.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2019.8803770"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-66415-2_6"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2805680"},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3045078"},{"key":"e_1_3_1_81_2","volume-title":"International Conference on Learning Representations","author":"Liu Yanpei","year":"2016","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations."},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11499"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01488"},{"key":"e_1_3_1_84_2","first-page":"19288","article-title":"Finding optimal tangent points for reducing distortions of hard-label attacks","volume":"34","author":"Ma Chen","year":"2021","unstructured":"Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, and Yisen Wang. 2021. Finding optimal tangent points for reducing distortions of hard-label attacks. Advances in Neural Information Processing Systems 34 (2021), 19288\u201319300.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_85_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394171.3413875"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485133"},{"key":"e_1_3_1_87_2","article-title":"Towards deep learning models resistant to adversarial attacks","author":"Madry Aleksander","year":"2017","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. 
Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017).","journal-title":"arXiv preprint arXiv:1706.06083"},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01029"},{"key":"e_1_3_1_89_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2020.2970615"},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00930"},{"key":"e_1_3_1_91_2","first-page":"4636","volume-title":"International Conference on Machine Learning","author":"Moon Seungyong","year":"2019","unstructured":"Seungyong Moon, Gaon An, and Hyun Oh Song. 2019. Parsimonious black-box adversarial attacks via efficient combinatorial optimization. In International Conference on Machine Learning. PMLR, 4636\u20134645."},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.17"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2861800"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00084"},{"key":"e_1_3_1_96_2","first-page":"2","volume-title":"CVPR Workshops","author":"Narodytska Nina","year":"2017","unstructured":"Nina Narodytska and Shiva Prasad Kasiviswanathan. 2017. Simple black-box adversarial attacks on deep neural networks. In CVPR Workshops, Vol. 2. 2."},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW50498.2020.00415"},{"key":"e_1_3_1_98_2","article-title":"Diffusion models for adversarial purification","author":"Nie Weili","year":"2022","unstructured":"Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. 2022. Diffusion models for adversarial purification. 
arXiv preprint arXiv:2205.07460 (2022).","journal-title":"arXiv preprint arXiv:2205.07460"},{"key":"e_1_3_1_99_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2017.2718479"},{"key":"e_1_3_1_100_2","article-title":"Transferability in machine learning: From phenomena to black-box attacks using adversarial samples","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 (2016).","journal-title":"arXiv preprint arXiv:1605.07277"},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},{"key":"e_1_3_1_103_2","doi-asserted-by":"crossref","first-page":"399","DOI":"10.1109\/EuroSP.2018.00035","volume-title":"2018 IEEE European Symposium on Security and Privacy (EuroS&P\u201918)","author":"Papernot Nicolas","year":"2018","unstructured":"Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael P. Wellman. 2018. SOK: Security and privacy in machine learning. In 2018 IEEE European Symposium on Security and Privacy (EuroS&P\u201918). IEEE, 399\u2013414."},{"key":"e_1_3_1_104_2","doi-asserted-by":"crossref","first-page":"582","DOI":"10.1109\/SP.2016.41","volume-title":"2016 IEEE Symposium on Security and Privacy (SP\u201916)","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP\u201916). 
IEEE, 582\u2013597."},{"key":"e_1_3_1_105_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00465"},{"key":"e_1_3_1_106_2","first-page":"652","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition","author":"Qi Charles R.","year":"2017","unstructured":"Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. 2017. Pointnet: Deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition. 652\u2013660."},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58568-6_2"},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00847"},{"key":"e_1_3_1_109_2","article-title":"T-BFA: Targeted bit-flip adversarial weight attack","author":"Rakin Adnan Siraj","year":"2021","unstructured":"Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, and Deliang Fan. 2021. T-BFA: Targeted bit-flip adversarial weight attack. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_1_110_2","article-title":"CGBA: Curvature-aware geometric black-box attack","author":"Reza Md Farhamdur","year":"2023","unstructured":"Md Farhamdur Reza, Ali Rahmati, Tianfu Wu, and Huaiyu Dai. 2023. CGBA: Curvature-aware geometric black-box attack. arXiv preprint arXiv:2308.03163 (2023).","journal-title":"arXiv preprint arXiv:2308.03163"},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00445"},{"key":"e_1_3_1_112_2","first-page":"1291","volume-title":"29th USENIX Security Symposium (USENIX Security\u201920)","author":"Salem Ahmed","year":"2020","unstructured":"Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, and Yang Zhang. 2020. Updates-Leak: Data set inference and reconstruction attacks in online learning. 
In 29th USENIX Security Symposium (USENIX Security\u201920). 1291\u20131308."},{"key":"e_1_3_1_113_2","first-page":"3309","volume-title":"30th USENIX Security Symposium (USENIX Security\u201921)","author":"Sato Takami","year":"2021","unstructured":"Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jia, Xue Lin, and Qi Alfred Chen. 2021. Dirty road can attack: Security of deep learning based automated lane centering under Physical-World attack. In 30th USENIX Security Symposium (USENIX Security\u201921). 3309\u20133326."},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01443"},{"key":"e_1_3_1_115_2","article-title":"Distracting downpour: Adversarial weather attacks for motion estimation","author":"Schmalfuss Jenny","year":"2023","unstructured":"Jenny Schmalfuss, Lukas Mehl, and Andr\u00e9s Bruhn. 2023. Distracting downpour: Adversarial weather attacks for motion estimation. arXiv preprint arXiv:2305.06716 (2023).","journal-title":"arXiv preprint arXiv:2305.06716"},{"key":"e_1_3_1_116_2","doi-asserted-by":"publisher","DOI":"10.1145\/3398394"},{"key":"e_1_3_1_117_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP40776.2020.9054368"},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3112290"},{"key":"e_1_3_1_119_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00123"},{"key":"e_1_3_1_120_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978392"},{"key":"e_1_3_1_121_2","doi-asserted-by":"publisher","DOI":"10.1145\/3317611"},{"key":"e_1_3_1_122_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2021.3102492"},{"key":"e_1_3_1_123_2","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/SP.2017.41","volume-title":"2017 IEEE Symposium on Security and Privacy (SP\u201917)","author":"Shokri Reza","year":"2017","unstructured":"Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. 
Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP\u201917). IEEE, 3\u201318."},{"key":"e_1_3_1_124_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467386"},{"key":"e_1_3_1_125_2","article-title":"Constructing unrestricted adversarial examples with generative models","volume":"31","author":"Song Yang","year":"2018","unstructured":"Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. 2018. Constructing unrestricted adversarial examples with generative models. Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2019.2890858"},{"key":"e_1_3_1_127_2","first-page":"877","volume-title":"29th USENIX Security Symposium (USENIX Security\u201920)","author":"Sun Jiachen","year":"2020","unstructured":"Jiachen Sun, Yulong Cao, Qi Alfred Chen, and Z. Morley Mao. 2020. Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In 29th USENIX Security Symposium (USENIX Security\u201920). 877\u2013894."},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01487"},{"key":"e_1_3_1_129_2","article-title":"Intriguing properties of neural networks","author":"Szegedy Christian","year":"2013","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).","journal-title":"arXiv preprint arXiv:1312.6199"},{"key":"e_1_3_1_130_2","article-title":"3DHacker: Spectrum-based decision boundary generation for hard-label 3D point cloud attack","author":"Tao Yunbo","year":"2023","unstructured":"Yunbo Tao, Daizong Liu, Pan Zhou, Yulai Xie, Wei Du, and Wei Hu. 2023. 
3DHacker: Spectrum-based decision boundary generation for hard-label 3D point cloud attack. arXiv preprint arXiv:2308.07546 (2023).","journal-title":"arXiv preprint arXiv:2308.07546"},{"key":"e_1_3_1_131_2","article-title":"Ensemble adversarial training: Attacks and defenses","author":"Tram\u00e8r Florian","year":"2017","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2017. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 (2017).","journal-title":"arXiv preprint arXiv:1705.07204"},{"key":"e_1_3_1_132_2","article-title":"The space of transferable adversarial examples","author":"Tram\u00e8r Florian","year":"2017","unstructured":"Florian Tram\u00e8r, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2017. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453 (2017).","journal-title":"arXiv preprint arXiv:1704.03453"},{"key":"e_1_3_1_133_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5443"},{"key":"e_1_3_1_134_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.3301742"},{"key":"e_1_3_1_135_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01373"},{"key":"e_1_3_1_136_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_137_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i2.20141"},{"key":"e_1_3_1_138_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00846"},{"key":"e_1_3_1_139_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00196"},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-20065-6_10"},{"key":"e_1_3_1_141_2","doi-asserted-by":"publisher","DOI":"10.1145\/3326362"},{"key":"e_1_3_1_142_2","article-title":"Demiguise attack: Crafting invisible semantic adversarial perturbations with perceptual similarity","author":"Wang Yajie","year":"2021","unstructured":"Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan, and Quanxin Zhang. 2021. Demiguise attack: Crafting invisible semantic adversarial perturbations with perceptual similarity. arXiv preprint arXiv:2107.01396 (2021).","journal-title":"arXiv preprint arXiv:2107.01396"},{"key":"e_1_3_1_143_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00754"},{"key":"e_1_3_1_144_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01464"},{"key":"e_1_3_1_145_2","article-title":"Geometry-aware generation of adversarial point clouds","author":"Wen Yuxin","year":"2020","unstructured":"Yuxin Wen, Jiehong Lin, Ke Chen, C. L. Philip Chen, and Kui Jia. 2020. Geometry-aware generation of adversarial point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"e_1_3_1_146_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01204"},{"key":"e_1_3_1_147_2","first-page":"6808","volume-title":"International Conference on Machine Learning","author":"Wong Eric","year":"2019","unstructured":"Eric Wong, Frank Schmidt, and Zico Kolter. 2019. 
Wasserstein adversarial examples via projected Sinkhorn iterations. In International Conference on Machine Learning. PMLR, 6808\u20136817."},{"key":"e_1_3_1_148_2","unstructured":"Lei Wu, Zhanxing Zhu, Cheng Tai, and E. Weinan. 2018. Understanding and enhancing the transferability of adversarial examples. arXiv preprint arXiv:1802.09707 (2018)."},{"key":"e_1_3_1_149_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00891"},{"key":"e_1_3_1_150_2","doi-asserted-by":"publisher","DOI":"10.5555\/3304222.3304312"},{"key":"e_1_3_1_151_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00706"},{"key":"e_1_3_1_152_2","volume-title":"6th International Conference on Learning Representations (ICLR 2018)","author":"Xiao Chaowei","year":"2018","unstructured":"Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. 2018. Spatially transformed adversarial examples. In 6th International Conference on Learning Representations (ICLR 2018)."},{"key":"e_1_3_1_153_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01167"},{"key":"e_1_3_1_154_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.153"},{"key":"e_1_3_1_155_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00284"},{"key":"e_1_3_1_156_2","first-page":"12288","article-title":"Learning black-box attackers with transferable priors and query feedback","volume":"33","author":"Yang Jiancheng","year":"2020","unstructured":"Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, and Chenglong Zhao. 2020. Learning black-box attackers with transferable priors and query feedback. 
Advances in Neural Information Processing Systems 33 (2020), 12288\u201312299.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_157_2","article-title":"ADV-Makeup: A new imperceptible and transferable attack on face recognition","author":"Yin Bangjie","year":"2021","unstructured":"Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, and Cong Liu. 2021. ADV-Makeup: A new imperceptible and transferable attack on face recognition. International Joint Conferences on Artificial Intelligence (IJCAI) (2021).","journal-title":"International Joint Conferences on Artificial Intelligence (IJCAI)"},{"key":"e_1_3_1_158_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539241"},{"key":"e_1_3_1_159_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00443"},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01453"},{"key":"e_1_3_1_161_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i4.16441"},{"key":"e_1_3_1_162_2","doi-asserted-by":"publisher","DOI":"10.1186\/s13635-020-00112-z"},{"key":"e_1_3_1_163_2","volume-title":"International Conference on Learning Representations","author":"Zhang Huan","year":"2018","unstructured":"Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, and Cho-Jui Hsieh. 2018. The limitations of adversarial training and the blind-spot attack. In International Conference on Learning Representations."},{"key":"e_1_3_1_164_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01457"},{"key":"e_1_3_1_165_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01473"},{"key":"e_1_3_1_166_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"e_1_3_1_167_2","volume-title":"International Conference on Learning Representations","author":"Zhang Yang","year":"2018","unstructured":"Yang Zhang, Hassan Foroosh, Philip David, and Boqing Gong. 2018. 
CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild. In International Conference on Learning Representations."},{"key":"e_1_3_1_168_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01595"},{"key":"e_1_3_1_169_2","article-title":"Perturbations are not enough: Generating adversarial examples with spatial distortions","author":"Zhao He","year":"2019","unstructured":"He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, and Dinh Phung. 2019. Perturbations are not enough: Generating adversarial examples with spatial distortions. arXiv preprint arXiv:1910.01329 (2019).","journal-title":"arXiv preprint arXiv:1910.01329"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00128"},{"key":"e_1_3_1_171_2","volume-title":"International Conference on Learning Representations","author":"Zhao Zhengli","year":"2018","unstructured":"Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In International Conference on Learning Representations."},{"key":"e_1_3_1_172_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00112"},{"key":"e_1_3_1_173_2","first-page":"6115","article-title":"On success and simplicity: A second look at transferable targeted attacks","volume":"34","author":"Zhao Zhengyu","year":"2021","unstructured":"Zhengyu Zhao, Zhuoran Liu, and Martha Larson. 2021. On success and simplicity: A second look at transferable targeted attacks. 
Advances in Neural Information Processing Systems 34 (2021), 6115\u20136128.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_174_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33012253"},{"key":"e_1_3_1_175_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00168"},{"key":"e_1_3_1_176_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01037"},{"key":"e_1_3_1_177_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_28"},{"key":"e_1_3_1_178_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW59228.2023.00236"},{"key":"e_1_3_1_179_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58542-6_34"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3636551","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3636551","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:10:04Z","timestamp":1750295404000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3636551"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,22]]},"references-count":178,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3636551"],"URL":"https:\/\/doi.org\/10.1145\/3636551","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,22]]},"assertion":[{"value":"2022-10-14","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-12-01","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2024-01-22","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}