{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,29]],"date-time":"2025-12-29T22:11:21Z","timestamp":1767046281972,"version":"3.41.0"},"reference-count":60,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2023,11,14]],"date-time":"2023-11-14T00:00:00Z","timestamp":1699920000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61977046"],"award-info":[{"award-number":["61977046"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Shanghai Science and Technology Program","award":["22511105600"],"award-info":[{"award-number":["22511105600"]}]},{"name":"Shanghai Municipal Science and Technology Major Project","award":["2021SHZDZX0102"],"award-info":[{"award-number":["2021SHZDZX0102"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2023,12,31]]},"abstract":"<jats:p>\n            The wide application of deep neural networks (DNNs) demands an increasing amount of attention to their real-world robustness, i.e., whether a DNN resists black-box adversarial attacks, among which score-based query attacks (SQAs) are the most threatening since they can effectively hurt a victim network with only access to model outputs. Defending against SQAs requires a slight but artful variation of outputs due to the service purpose for users, who share the same output information with SQAs. In this article, we propose a real-world defense by Unifying Gradients (UniG) of different data so that SQAs could only probe a much weaker attack direction that is similar for different samples. Since such universal attack perturbations have been validated as less aggressive than the input-specific perturbations, UniG protects real-world DNNs by indicating to attackers a twisted and less informative attack direction. We implement UniG efficiently by a Hadamard product module, which is plug-and-play. According to extensive experiments on 5 SQAs, 2 adaptive attacks and 7 defense baselines, UniG significantly improves real-world robustness without hurting clean accuracy on CIFAR10 and ImageNet. For instance, UniG maintains a model of 77.80% accuracy under a 2500-query Square attack while the state-of-the-art adversarially trained model only has 67.34% on CIFAR10. Simultaneously, UniG outperforms all compared baselines in terms of clean accuracy and achieves the smallest modification of the model output. The code is released at\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"url\" xlink:href=\"https:\/\/github.com\/snowien\/UniG-pytorch\">https:\/\/github.com\/snowien\/UniG-pytorch<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3617895","type":"journal-article","created":{"date-parts":[[2023,8,31]],"date-time":"2023-08-31T11:15:42Z","timestamp":1693480542000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Unifying Gradients to Improve Real-World Robustness for Deep Networks"],"prefix":"10.1145","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9994-2414","authenticated-orcid":false,"given":"Yingwen","family":"Wu","sequence":"first","affiliation":[{"name":"Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3274-6926","authenticated-orcid":false,"given":"Sizhe","family":"Chen","sequence":"additional","affiliation":[{"name":"Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6351-201X","authenticated-orcid":false,"given":"Kun","family":"Fang","sequence":"additional","affiliation":[{"name":"Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4285-6520","authenticated-orcid":false,"given":"Xiaolin","family":"Huang","sequence":"additional","affiliation":[{"name":"Institute of Image Processing and Pattern Recognition and the MOE Key Laboratory of System Control and Information Processing, Shanghai Jiao Tong University, China"}]}],"member":"320","published-online":{"date-parts":[[2023,11,14]]},"reference":[{"key":"e_1_3_2_2_2","volume-title":"8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020","author":"Al-Dujaili Abdullah","year":"2020","unstructured":"Abdullah Al-Dujaili and Una-May O\u2019Reilly. 2020. Sign bits are all you need for black-box attacks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net."},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i6.20545"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"e_1_3_2_5_2","first-page":"274","volume-title":"International Conference on Machine Learning","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning. PMLR, 274\u2013283."},{"key":"e_1_3_2_6_2","volume-title":"Proceedings of the Asian Conference on Computer Vision","author":"Benz Philipp","year":"2020","unstructured":"Philipp Benz, Chaoning Zhang, Tooba Imtiaz, and In So Kweon. 2020. Double targeted universal adversarial perturbations. In Proceedings of the Asian Conference on Computer Vision."},{"key":"e_1_3_2_7_2","unstructured":"Mariusz Bojarski Davide Del Testa Daniel Dworakowski Bernhard Firner Beat Flepp Prasoon Goyal Lawrence D. Jackel Mathew Monfort Urs Muller Jiakai Zhang et\u00a0al. 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316."},{"key":"e_1_3_2_8_2","volume-title":"6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings","author":"Brendel Wieland","year":"2018","unstructured":"Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2018. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV51458.2022.00387"},{"key":"e_1_3_2_10_2","first-page":"39","volume-title":"the IEEE Symposium on Security and Privacy (SP)","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In the IEEE Symposium on Security and Privacy (SP). 39\u201357."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2788613"},{"key":"e_1_3_2_12_2","first-page":"1277","volume-title":"the IEEE Symposium on Security and Privacy (SP)","author":"Chen Jianbo","year":"2020","unstructured":"Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright. 2020. HopSkipJumpAttack: A query-efficient decision-based attack. In the IEEE Symposium on Security and Privacy (SP). 1277\u20131294."},{"key":"e_1_3_2_13_2","first-page":"15","volume-title":"the 10th ACM Workshop on Artificial Intelligence and Security","author":"Chen Pin-Yu","year":"2017","unstructured":"Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In the 10th ACM Workshop on Artificial Intelligence and Security. 15\u201326."},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3385003.3410925"},{"key":"e_1_3_2_15_2","first-page":"108491","volume-title":"Pattern Recognition","author":"Chen Sizhe","year":"2022","unstructured":"Sizhe Chen, Fan He, Xiaolin Huang, and Kun Zhang. 2022. Relevance attack on detectors. In Pattern Recognition. 108491."},{"key":"e_1_3_2_16_2","first-page":"2188","volume-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)","author":"Chen Sizhe","year":"2022","unstructured":"Sizhe Chen, Zhengbao He, Chengjin Sun, and Xiaolin Huang. 2022. Universal adversarial attack on attention and the resulting dataset DAmageNet. In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 2188\u20132197."},{"key":"e_1_3_2_17_2","article-title":"QueryNet: Attack by multi-identity surrogates","author":"Chen Sizhe","year":"2021","unstructured":"Sizhe Chen, Zhehao Huang, Qinghua Tao, and Xiaolin Huang. 2021. QueryNet: Attack by multi-identity surrogates. arXiv preprint arXiv:2105.15010 (2021).","journal-title":"arXiv preprint arXiv:2105.15010"},{"key":"e_1_3_2_18_2","first-page":"1310","volume-title":"Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA","author":"Cohen Jeremy M.","year":"2019","unstructured":"Jeremy M. Cohen, Elan Rosenfeld, and J. Zico Kolter. 2019. Certified adversarial robustness via randomized smoothing. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA. PMLR, 1310\u20131320."},{"key":"e_1_3_2_19_2","volume-title":"35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)","author":"Croce Francesco","year":"2021","unstructured":"Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. 2021. RobustBench: A standardized adversarial robustness benchmark. In 35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_21_2","unstructured":"Kun Fang Qinghua Tao Yingwen Wu Tao Li Jia Cai Feipeng Cai Xiaolin Huang and Jie Yang. 2020. Towards robust neural networks via orthogonal diversity. arXiv preprint arXiv:2010.12190."},{"key":"e_1_3_2_22_2","volume-title":"3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings."},{"key":"e_1_3_2_23_2","first-page":"4218","volume-title":"Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual","author":"Gowal Sven","year":"2021","unstructured":"Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A. Mann. 2021. Improving robustness using generated data. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual. 4218\u20134233."},{"key":"e_1_3_2_24_2","first-page":"2484","volume-title":"Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA","volume":"97","author":"Guo Chuan","year":"2019","unstructured":"Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q. Weinberger. 2019. Simple black-box adversarial attacks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, Vol. 97. PMLR, 2484\u20132493."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00068"},{"key":"e_1_3_2_27_2","volume-title":"8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020","author":"Huang Zhichao","year":"2020","unstructured":"Zhichao Huang and Tong Zhang. 2020. Black-box adversarial attack with transferable model-based embedding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net."},{"key":"e_1_3_2_28_2","first-page":"2142","volume-title":"Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018","author":"Ilyas Andrew","year":"2018","unstructured":"Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. 2018. Black-box adversarial attacks with limited queries and information. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018. PMLR, 2142\u20132151."},{"key":"e_1_3_2_29_2","volume-title":"7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019","author":"Ilyas Andrew","year":"2019","unstructured":"Andrew Ilyas, Logan Engstrom, and Aleksander Madry. 2019. Prior convictions: Black-box adversarial attacks with bandits and priors. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net."},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01360"},{"key":"e_1_3_2_31_2","unstructured":"Alex Krizhevsky Geoffrey Hinton et\u00a0al. 2009. Learning multiple layers of features from tiny images. (2009)."},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00044"},{"key":"e_1_3_2_33_2","unstructured":"Huiying Li Shawn Shan Emily Wenger Jiayun Zhang Haitao Zheng and Ben Y. Zhao. 2020. Blacklight: Defending black-box adversarial attacks on deep neural networks. arXiv preprint arXiv:2006.14042."},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01305"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_23"},{"key":"e_1_3_2_36_2","volume-title":"7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019","author":"Liu Xuanqing","year":"2019","unstructured":"Xuanqing Liu, Yao Li, Chongruo Wu, and Cho-Jui Hsieh. 2019. Adv-BNN: Improved adversarial defense through robust Bayesian neural network. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net."},{"key":"e_1_3_2_37_2","volume-title":"6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_3_2_38_2","first-page":"16805","volume-title":"International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA","volume":"162","author":"Nie Weili","year":"2022","unstructured":"Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Animashree Anandkumar. 2022. Diffusion models for adversarial purification. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, Vol. 162. PMLR, 16805\u201316827."},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00509"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403241"},{"key":"e_1_3_2_41_2","volume-title":"8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020","author":"Pang Tianyu","year":"2020","unstructured":"Tianyu Pang, Kun Xu, and Jun Zhu. 2020. Mixup inference: Better exploiting mixup to defend adversarial attacks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_43_2","first-page":"8024","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. 8024\u20138035."},{"key":"e_1_3_2_44_2","first-page":"11838","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada","author":"Pinot Rafael","year":"2019","unstructured":"Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, C\u00e9dric Gouy-Pailler, and Jamal Atif. 2019. Theoretical evidence for adversarial robustness through randomization. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. 11838\u201311848."},{"key":"e_1_3_2_45_2","first-page":"7650","volume-title":"Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual","author":"Qin Zeyu","year":"2021","unstructured":"Zeyu Qin, Yanbo Fan, Hongyuan Zha, and Baoyuan Wu. 2021. Random noise defense against query-based black-box attacks. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual. 7650\u20137663."},{"key":"e_1_3_2_46_2","article-title":"Adversarial training can hurt generalization","author":"Raghunathan Aditi","year":"2019","unstructured":"Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. 2019. Adversarial training can hurt generalization. arXiv preprint arXiv:1906.06032 (2019).","journal-title":"arXiv preprint arXiv:1906.06032"},{"key":"e_1_3_2_47_2","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual","author":"Salman Hadi","year":"2020","unstructured":"Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. 2020. Do adversarially robust ImageNet models transfer better?. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual."},{"key":"e_1_3_2_48_2","first-page":"11289","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada","author":"Salman Hadi","year":"2019","unstructured":"Hadi Salman, Jerry Li, Ilya P. Razenshteyn, Pengchuan Zhang, Huan Zhang, S\u00e9bastien Bubeck, and Greg Yang. 2019. Provably robust deep learning via adversarially trained smoothed classifiers. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. 11289\u201311300."},{"key":"e_1_3_2_49_2","volume-title":"2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings."},{"key":"e_1_3_2_50_2","volume-title":"6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings","author":"Tram\u00e8r Florian","year":"2018","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. 2018. Ensemble adversarial training: Attacks and defenses. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_3_2_51_2","volume-title":"7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019","author":"Tsipras Dimitris","year":"2019","unstructured":"Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net."},{"key":"e_1_3_2_52_2","unstructured":"Dequan Wang An Ju Evan Shelhamer David Wagner and Trevor Darrell. 2021. Fighting gradients with gradients: Dynamic defenses against adversarial attacks. arXiv preprint arXiv:2105.08714."},{"key":"e_1_3_2_53_2","first-page":"10399","volume-title":"Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event","volume":"119","author":"Wu Yi-Hsuan","year":"2020","unstructured":"Yi-Hsuan Wu, Chia-Hung Yuan, and Shan-Hung Wu. 2020. Adversarial robustness via runtime masking and cleansing. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, Vol. 119. PMLR, 10399\u201310409."},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00090"},{"key":"e_1_3_2_55_2","volume-title":"6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings","author":"Xie Cihang","year":"2018","unstructured":"Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan L. Yuille. 2018. Mitigating adversarial effects through randomization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.5244\/C.30.87"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.6154"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00777"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/635"},{"key":"e_1_3_2_60_2","first-page":"7472","volume-title":"International Conference on Machine Learning","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning. PMLR, 7472\u20137482."},{"issue":"3","key":"e_1_3_2_61_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3374217","article-title":"Adversarial attacks on deep-learning models in natural language processing: A survey","volume":"11","author":"Zhang Wei Emma","year":"2020","unstructured":"Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST) 11, 3 (2020), 1\u201341.","journal-title":"ACM Transactions on Intelligent Systems and Technology (TIST)"}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3617895","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3617895","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:57Z","timestamp":1750178277000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3617895"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,14]]},"references-count":60,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12,31]]}},"alternative-id":["10.1145\/3617895"],"URL":"https:\/\/doi.org\/10.1145\/3617895","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"type":"print","value":"2157-6904"},{"type":"electronic","value":"2157-6912"}],"subject":[],"published":{"date-parts":[[2023,11,14]]},"assertion":[{"value":"2022-08-12","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-09","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-11-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}