{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,19]],"date-time":"2025-12-19T09:59:57Z","timestamp":1766138397218,"version":"3.41.0"},"reference-count":36,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,10,18]],"date-time":"2023-10-18T00:00:00Z","timestamp":1697587200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"crossref","award":["2023JBZY033"],"award-info":[{"award-number":["2023JBZY033"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61832002 and 62172094"],"award-info":[{"award-number":["61832002 and 62172094"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Beijing Natural Science Foundation","award":["JQ20023"],"award-info":[{"award-number":["JQ20023"]}]},{"name":"CCF-Zhipu AI Large Model Fund"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2024,2,29]]},"abstract":"<jats:p>Adversarial examples present both an opportunity and a challenge for understanding image classification systems. Based on an analysis of the adversarial training solution Adversarial Logits Pairing (ALP), we observed in this work that: (1) The inference of adversarially robust models tends to rely on fewer high-contribution features than that of vulnerable ones. 
(2) The training target of ALP does not fit a noticeable portion of the samples well; for these samples, the logits pairing loss is overemphasized and obstructs minimizing the classification loss. Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution that modifies the training process and training target of ALP. Specifically, AALP consists of an adaptive feature optimization module that applies Guided Dropout to systematically pursue fewer high-contribution features, and an adaptive sample weighting module that sets sample-specific training weights to balance the logits pairing loss and the classification loss. Extensive experiments on multiple datasets demonstrate the superior defense performance of the proposed AALP solution.<\/jats:p>","DOI":"10.1145\/3616375","type":"journal-article","created":{"date-parts":[[2023,8,21]],"date-time":"2023-08-21T12:16:22Z","timestamp":1692620182000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Adaptive Adversarial Logits Pairing"],"prefix":"10.1145","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6826-6396","authenticated-orcid":false,"given":"Shangxi","family":"Wu","sequence":"first","affiliation":[{"name":"Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0699-3205","authenticated-orcid":false,"given":"Jitao","family":"Sang","sequence":"additional","affiliation":[{"name":"Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, China and Tianjin Normal University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7329-8619","authenticated-orcid":false,"given":"Kaiyan","family":"Xu","sequence":"additional","affiliation":[{"name":"Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6878-971X","authenticated-orcid":false,"given":"Guanhua","family":"Zheng","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8390-431X","authenticated-orcid":false,"given":"Changsheng","family":"Xu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences, China"}]}],"member":"320","published-online":{"date-parts":[[2023,10,18]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2020.2969784"},{"key":"e_1_3_1_3_2","first-page":"274","volume-title":"Proceedings of the ICML","author":"Athalye Anish","year":"2018","unstructured":"Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the ICML. 274\u2013283."},{"key":"e_1_3_1_4_2","first-page":"39","volume-title":"Proceedings of the SP","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the SP. 39\u201357."},{"key":"e_1_3_1_5_2","volume-title":"Proceedings of the SP","author":"Carlini Nicholas","year":"2018","unstructured":"Nicholas Carlini and David A. Wagner. 2018. Audio adversarial examples: Targeted attacks on speech-to-text. In Proceedings of the SP."},{"key":"e_1_3_1_6_2","volume-title":"Proceedings of the CVPR","author":"Choe Junsuk","year":"2019","unstructured":"Junsuk Choe and Hyunjung Shim. 2019. Attention-based dropout layer for weakly supervised object localization. In Proceedings of the CVPR."},{"key":"e_1_3_1_7_2","article-title":"Stochastic activation pruning for robust adversarial defense","author":"Dhillon Guneet S.","year":"2018","unstructured":"Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. 
Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. 2018. Stochastic activation pruning for robust adversarial defense. Retrieved from https:\/\/arxiv.org\/abs\/1803.01442","journal-title":"R"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2018.2887018"},{"key":"e_1_3_1_9_2","volume-title":"Proceedings of the ACL","author":"Ebrahimi Javid","year":"2018","unstructured":"Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the ACL."},{"key":"e_1_3_1_10_2","article-title":"Detecting adversarial samples from artifacts","author":"Feinman Reuben","year":"2017","unstructured":"Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, and Andrew B. Gardner. 2017. Detecting adversarial samples from artifacts. Retrieved from https:\/\/arxiv.org\/abs\/1703.00410","journal-title":"R"},{"key":"e_1_3_1_11_2","unstructured":"Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the ICLR."},{"key":"e_1_3_1_12_2","article-title":"Countering adversarial images using input transformations","author":"Guo Chuan","year":"2017","unstructured":"Chuan Guo, Mayank Rana, Moustapha Ciss\u00e9, and Laurens van der Maaten. 2017. Countering adversarial images using input transformations. Retrieved from https:\/\/arxiv.org\/abs\/1711.00117","journal-title":"R"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_14_2","article-title":"Adversarial logit pairing","author":"Kannan Harini","year":"2018","unstructured":"Harini Kannan, Alexey Kurakin, and Ian J. Goodfellow. 2018. Adversarial logit pairing. Retrieved from https:\/\/arxiv.org\/abs\/1803.06373","journal-title":"R"},{"key":"e_1_3_1_15_2","volume-title":"Proceedings of the ICLR","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In Proceedings of the ICLR."},{"key":"e_1_3_1_16_2","volume-title":"Learning Multiple Layers of Features from Tiny Images","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report. Citeseer."},{"key":"e_1_3_1_17_2","unstructured":"Alexey Kurakin Ian J. Goodfellow and Samy Bengio. 2017. Adversarial examples in the physical world. In 5th International Conference on Learning Representations (ICLR\u201917 Toulon France April 24-26 2017) Workshop Track Proceedings . https:\/\/openreview.net\/forum?id=HJGU3Rodl"},{"key":"e_1_3_1_18_2","volume-title":"Shape, Contour and Grouping in Computer Vision","author":"LeCun Yann","year":"1999","unstructured":"Yann LeCun, Patrick Haffner, L\u00e9on Bottou, and Yoshua Bengio. 1999. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision. Springer, Berlin."},{"key":"e_1_3_1_19_2","first-page":"1778","volume-title":"Proceedings of the CVPR","author":"Liao Fangzhou","year":"2018","unstructured":"Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. 2018. Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the CVPR. 1778\u20131787."},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2930550"},{"key":"e_1_3_1_21_2","article-title":"Feature prioritization and regularization improve standard accuracy and adversarial robustness","author":"Liu Chihuang","year":"2018","unstructured":"Chihuang Liu and Joseph J\u00e1J\u00e1. 2018. Feature prioritization and regularization improve standard accuracy and adversarial robustness. 
Retrieved from https:\/\/arxiv.org\/abs\/1810.02424","journal-title":"R"},{"key":"e_1_3_1_22_2","first-page":"4644","volume-title":"Proceedings of the ICIP","author":"Liu Yuan Yuan","year":"2019","unstructured":"Yuan Yuan Liu, Lu Yue Ye, Wen Ze Shao, Qi Ge, Li Qian Wang, Bing Kun Bao, and Hai Bo Li. 2019. Adversarial representation learning for dynamic scene deblurring: A simple, fast and robust approach. In Proceedings of the ICIP. 4644\u20134648."},{"key":"e_1_3_1_23_2","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. Retrieved from https:\/\/arxiv.org\/abs\/1706.06083"},{"key":"e_1_3_1_24_2","first-page":"2574","volume-title":"Proceedings of the CVPR","author":"Moosavi-Dezfooli Seyed-Mohsen","year":"2016","unstructured":"Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the CVPR. 2574\u20132582."},{"key":"e_1_3_1_25_2","volume-title":"Proceedings of the NIPS","author":"Netzer Yuval","year":"2011","unstructured":"Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. 2011. Reading digits in natural images with unsupervised feature learning. In Proceedings of the NIPS."},{"key":"e_1_3_1_26_2","first-page":"582","volume-title":"Proceedings of the SP","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot, Patrick D. McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In Proceedings of the SP. 582\u2013597."},{"key":"e_1_3_1_27_2","first-page":"618","volume-title":"Proceedings of the ICCV","author":"Selvaraju Ramprasaath R.","year":"2017","unstructured":"Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. 
Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the ICCV. 618\u2013626."},{"key":"e_1_3_1_28_2","article-title":"Adversarial generative nets: Neural network attacks on state-of-the-art face recognition","author":"Sharif Mahmood","year":"2018","unstructured":"Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. 2018. Adversarial generative nets: Neural network attacks on state-of-the-art face recognition. Retrieved from https:\/\/arxiv.org\/abs\/1801.00349","journal-title":"R"},{"key":"e_1_3_1_29_2","article-title":"Ape-gan: Adversarial perturbation elimination with gan","author":"Shen Shiwei","year":"2017","unstructured":"Shiwei Shen, Guoqing Jin, Ke Gao, and Yongdong Zhang. 2017. Ape-gan: Adversarial perturbation elimination with gan. Proceedings of the ICLR (2017).","journal-title":"Proceedings of the ICLR"},{"key":"e_1_3_1_30_2","article-title":"PixelDefend: Leveraging generative models to understand and defend against adversarial examples","author":"Song Yang","year":"2017","unstructured":"Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. 2017. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. Retrieved from https:\/\/arxiv.org\/abs\/1710.10766","journal-title":"R"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2670313"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2899279"},{"key":"e_1_3_1_33_2","volume-title":"Proceedings of the ICLR","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the ICLR."},{"key":"e_1_3_1_34_2","unstructured":"Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick D. McDaniel. 2017. Ensemble adversarial training: Attacks and defenses. 
Retrieved from https:\/\/arxiv.org\/abs\/1705.07204"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2949872"},{"key":"e_1_3_1_36_2","volume-title":"Proceedings of the ICCV","author":"Xie Cihang","year":"2017","unstructured":"Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, and Alan L. Yuille. 2017. Adversarial examples for semantic segmentation and object detection. In Proceedings of the ICCV."},{"key":"e_1_3_1_37_2","first-page":"7472","volume-title":"Proceedings of the ICML","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In Proceedings of the ICML. 7472\u20137482."}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3616375","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3616375","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:29:50Z","timestamp":1750285790000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3616375"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,18]]},"references-count":36,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2,29]]}},"alternative-id":["10.1145\/3616375"],"URL":"https:\/\/doi.org\/10.1145\/3616375","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2023,10,18]]},"assertion":[{"value":"2022-04-21","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"
Publication History"}},{"value":"2023-07-27","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-10-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}