{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,8]],"date-time":"2026-01-08T09:42:46Z","timestamp":1767865366410,"version":"3.49.0"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T00:00:00Z","timestamp":1737072000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T00:00:00Z","timestamp":1737072000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"National Key Research and Development Program for Young Scientists of China","award":["2022YFB3102800"],"award-info":[{"award-number":["2022YFB3102800"]}]},{"name":"Major Science and Technology Project of Henan Province","award":["221100240100"],"award-info":[{"award-number":["221100240100"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2025,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Neural networks are vulnerable to meticulously crafted adversarial examples, leading to high-confidence misclassifications in image classification tasks. Due to their consistency with regular input patterns and the absence of reliance on the target model and its output information, transferable adversarial attacks exhibit a notably high stealthiness and detection difficulty, making them a significant focus of defense. In this work, we propose a deep learning defense known as multi-source adversarial perturbations elimination (MAPE) to counter diverse transferable attacks. 
MAPE comprises the <jats:bold>single-source adversarial perturbation elimination<\/jats:bold> (SAPE) mechanism and the pre-trained models probabilistic scheduling algorithm (PPSA). SAPE utilizes a thoughtfully designed channel-attention U-Net as the defense model and employs adversarial examples generated by a pre-trained model (e.g., ResNet) for its training, thereby enabling the elimination of known adversarial perturbations. PPSA introduces model difference quantification and negative momentum to strategically schedule multiple pre-trained models, thereby maximizing the differences among adversarial examples during the defense model\u2019s training and enhancing its robustness in eliminating adversarial perturbations. MAPE effectively eliminates adversarial perturbations in various adversarial examples, providing a robust defense against attacks from different substitute models. In a black-box attack scenario utilizing ResNet-34 as the target model, our approach achieves average defense rates of over 95.1% on CIFAR-10 and over 71.5% on Mini-ImageNet, demonstrating state-of-the-art performance.<\/jats:p>","DOI":"10.1007\/s40747-024-01770-z","type":"journal-article","created":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T07:02:55Z","timestamp":1737097375000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Mape: defending against transferable adversarial attacks using multi-source adversarial perturbations 
elimination"],"prefix":"10.1007","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6586-4457","authenticated-orcid":false,"given":"Xinlei","family":"Liu","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3075-188X","authenticated-orcid":false,"given":"Jichao","family":"Xie","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7641-5622","authenticated-orcid":false,"given":"Tao","family":"Hu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0094-5982","authenticated-orcid":false,"given":"Peng","family":"Yi","sequence":"additional","affiliation":[]},{"given":"Yuxiang","family":"Hu","sequence":"additional","affiliation":[]},{"given":"Shumin","family":"Huo","sequence":"additional","affiliation":[]},{"given":"Zhen","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,17]]},"reference":[{"key":"1770_CR1","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436\u2013444. https:\/\/doi.org\/10.1038\/nature14539","journal-title":"Nature"},{"key":"1770_CR2","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770\u2013778. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"1770_CR3","doi-asserted-by":"publisher","unstructured":"Guo J, Han K, Wang Y, Wu H, Chen X, Xu C, Xu C (2021) Distilling object detectors via decoupled features. In: 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2154\u20132164. 
https:\/\/doi.org\/10.1109\/CVPR46437.2021.00219","DOI":"10.1109\/CVPR46437.2021.00219"},{"key":"1770_CR4","doi-asserted-by":"publisher","unstructured":"Siam M, Gamal M, Abdel-Razek M, Yogamani S, Jagersand M, Zhang H (2018) A comparative study of real-time semantic segmentation for autonomous driving. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 700\u2013710. https:\/\/doi.org\/10.1109\/CVPRW.2018.00101","DOI":"10.1109\/CVPRW.2018.00101"},{"key":"1770_CR5","doi-asserted-by":"publisher","unstructured":"Zhou Y, Han M, Liu L, He J, Gao X (2019) The adversarial attacks threats on computer vision: A survey. In: 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW), pp. 25\u201330. https:\/\/doi.org\/10.1109\/MASSW.2019.00012","DOI":"10.1109\/MASSW.2019.00012"},{"key":"1770_CR6","doi-asserted-by":"crossref","unstructured":"Akhtar N, Mian A, Kardan N, Shah M (2021) Threat of adversarial attacks on deep learning in computer vision: Survey II. CoRR arXiv:2108.00401","DOI":"10.1109\/ACCESS.2021.3127960"},{"key":"1770_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2024.111778","volume":"162","author":"H Gao","year":"2024","unstructured":"Gao H, Yang X, Hu Y, Liang Z, Xu H, Wang B, Mu H, Wang Y (2024) Adversarial sample attacks algorithm based on cycle-consistent generative networks. Appl Soft Comput 162:111778. https:\/\/doi.org\/10.1016\/j.asoc.2024.111778","journal-title":"Appl Soft Comput"},{"key":"1770_CR8","doi-asserted-by":"publisher","first-page":"5445","DOI":"10.1007\/s40747-024-01455-7","volume":"10","author":"Q Li","year":"2024","unstructured":"Li Q, Wang Z, Zhang X, Li Y (2024) Attack-cosm: attacking the camouflaged object segmentation model through digital world adversarial examples. Complex Intell Syst 10:5445\u20135457. 
https:\/\/doi.org\/10.1007\/s40747-024-01455-7","journal-title":"Complex Intell Syst"},{"key":"1770_CR9","doi-asserted-by":"crossref","unstructured":"Kurakin A, Goodfellow IJ, Bengio S (2017) Adversarial examples in the physical world. In: 2017 International Conference on Learning Representations (ICLR), OpenReview.net","DOI":"10.1201\/9781351251389-8"},{"key":"1770_CR10","unstructured":"Croce F, Hein M (2020) Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In: 2020 International Conference on Machine Learning (ICML), PMLR. pp. 2206\u20132216"},{"key":"1770_CR11","doi-asserted-by":"publisher","unstructured":"Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille AL (2019) Improving transferability of adversarial examples with input diversity. In: 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2725\u20132734. https:\/\/doi.org\/10.1109\/CVPR.2019.00284","DOI":"10.1109\/CVPR.2019.00284"},{"key":"1770_CR12","unstructured":"Lin J, Song C, He K, Wang L, Hopcroft JE (2020) Nesterov accelerated gradient and scale invariance for adversarial attacks. In: 2020 International Conference on Learning Representations (ICLR), OpenReview.net"},{"key":"1770_CR13","doi-asserted-by":"publisher","unstructured":"Wang X, He K (2021) Enhancing the transferability of adversarial attacks through variance tuning. In: 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1924\u20131933. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00196","DOI":"10.1109\/CVPR46437.2021.00196"},{"key":"1770_CR14","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2023.111088","volume":"150","author":"H Zhu","year":"2024","unstructured":"Zhu H, Ren Y, Liu C, Sui X, Zhang L (2024) Frequency-based methods for improving the imperceptibility and transferability of adversarial examples. Appl Soft Comput 150:111088. 
https:\/\/doi.org\/10.1016\/j.asoc.2023.111088","journal-title":"Appl Soft Comput"},{"key":"1770_CR15","doi-asserted-by":"publisher","first-page":"6667","DOI":"10.1007\/s40747-024-01506-z","volume":"10","author":"Z Chen","year":"2024","unstructured":"Chen Z, Luo W, Naseem ML, Kong L, Yang X (2024) Comprehensive comparisons of gradient-based multi-label adversarial attacks. Complex Intell Syst 10:6667\u20136692. https:\/\/doi.org\/10.1007\/s40747-024-01506-z","journal-title":"Complex Intell Syst"},{"key":"1770_CR16","doi-asserted-by":"publisher","unstructured":"Bhagoji AN, He W, Li B, Song D (2018) Practical black-box attacks on deep neural networks using efficient query mechanisms. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds), Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8\u201314, 2018, Proceedings, Part XII, Springer. pp. 158\u2013174. https:\/\/doi.org\/10.1007\/978-3-030-01258-8_10","DOI":"10.1007\/978-3-030-01258-8_10"},{"key":"1770_CR17","doi-asserted-by":"publisher","unstructured":"Li H, Xu X, Zhang X, Yang S, Li B (2020) QEBA: query-efficient boundary-based blackbox attack. In: 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13\u201319, 2020, Computer Vision Foundation \/ IEEE. pp. 1218\u20131227. https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/html\/Li_QEBA_Query-Efficient_Boundary-Based_Blackbox_Attack_CVPR_2020_paper.html, https:\/\/doi.org\/10.1109\/CVPR42600.2020.00130","DOI":"10.1109\/CVPR42600.2020.00130"},{"key":"1770_CR18","doi-asserted-by":"publisher","first-page":"2226","DOI":"10.1109\/TPAMI.2022.3169802","volume":"45","author":"Y Shi","year":"2023","unstructured":"Shi Y, Han Y, Hu Q, Yang Y, Tian Q (2023) Query-efficient black-box adversarial attack with customized iteration and sampling. IEEE Trans Pattern Anal Mach Intell 45:2226\u20132245. 
https:\/\/doi.org\/10.1109\/TPAMI.2022.3169802","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1770_CR19","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: 2018 International Conference on Learning Representations (ICLR), OpenReview.net"},{"key":"1770_CR20","unstructured":"Zhang H, Yu Y, Jiao J, Xing EP, Ghaoui LE, Jordan MI (2019) Theoretically principled trade-off between robustness and accuracy. In: Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9\u201315 June 2019, Long Beach, California, USA, PMLR. pp. 7472\u20137482"},{"key":"1770_CR21","doi-asserted-by":"publisher","first-page":"152","DOI":"10.1016\/j.procs.2018.10.315","volume":"140","author":"M Ozdag","year":"2018","unstructured":"Ozdag M (2018) Adversarial attacks and defenses against deep neural networks: a survey. Procedia Comput Sci 140:152\u2013161. https:\/\/doi.org\/10.1016\/j.procs.2018.10.315","journal-title":"Procedia Comput Sci"},{"key":"1770_CR22","doi-asserted-by":"publisher","unstructured":"Raff E, Sylvester J, Forsyth S, McLean M (2019) Barrage of random transforms for adversarially robust defense. In: 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6521\u20136530. https:\/\/doi.org\/10.1109\/CVPR.2019.00669","DOI":"10.1109\/CVPR.2019.00669"},{"key":"1770_CR23","unstructured":"Pang T, Xu K, Zhu J (2020) Mixup inference: Better exploiting mixup to defend adversarial attacks. In: 2020 International Conference on Learning Representations (ICLR), OpenReview.net"},{"key":"1770_CR24","unstructured":"Bahat Y, Irani M, Shakhnarovich G (2019) Natural and adversarial error detection using invariance to image transformations. 
CoRR arXiv:1902.00236"},{"key":"1770_CR25","doi-asserted-by":"publisher","first-page":"177","DOI":"10.1016\/j.neunet.2023.03.008","volume":"164","author":"J Li","year":"2023","unstructured":"Li J, Zhang S, Cao J, Tan M (2023) Learning defense transformations for counterattacking adversarial examples. Neural Netw 164:177\u2013185. https:\/\/doi.org\/10.1016\/j.neunet.2023.03.008","journal-title":"Neural Netw"},{"key":"1770_CR26","doi-asserted-by":"publisher","unstructured":"Liao F, Liang M, Dong Y, Pang T, Hu X, Zhu J (2018) Defense against adversarial attacks using high-level representation guided denoiser. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1778\u20131787. https:\/\/doi.org\/10.1109\/CVPR.2018.00191","DOI":"10.1109\/CVPR.2018.00191"},{"key":"1770_CR27","doi-asserted-by":"publisher","unstructured":"Xie C, Wu Y, Maaten Lvd, Yuille AL, He K (2019) Feature denoising for improving adversarial robustness. In: 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 501\u2013509. https:\/\/doi.org\/10.1109\/CVPR.2019.00059","DOI":"10.1109\/CVPR.2019.00059"},{"key":"1770_CR28","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. In: Bengio Y, LeCun Y (eds.), 2015 International Conference on Learning Representations (ICLR)"},{"key":"1770_CR29","unstructured":"Guo Y, Li Q, Chen H (2020) Backpropagating linearly improves transferability of adversarial examples. In: Larochelle H, Ranzato M, Hadsell R, Balcan M, Lin H (eds.), 2020 Neural Information Processing Systems (NeurIPS)"},{"key":"1770_CR30","doi-asserted-by":"publisher","unstructured":"Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition CVPR 2018, Salt Lake City, UT, USA, June 18\u201322, 2018, Computer Vision Foundation \/ IEEE Computer Society. pp. 9185\u20139193. 
https:\/\/doi.org\/10.1109\/CVPR.2018.00957","DOI":"10.1109\/CVPR.2018.00957"},{"key":"1770_CR31","doi-asserted-by":"publisher","unstructured":"Gubri M, Cordy M, Papadakis M, Traon YL, Sen K (2022) LGV: boosting adversarial example transferability from large geometric vicinity. In: Avidan S, Brostow GJ, Ciss\u00e9 M, Farinella GM, Hassner T (eds.), 2022 European Conference on Computer Vision (ECCV), Springer. pp. 603\u2013618. https:\/\/doi.org\/10.1007\/978-3-031-19772-7_35","DOI":"10.1007\/978-3-031-19772-7_35"},{"key":"1770_CR32","unstructured":"Huang Y, Kong AW (2022) Transferable adversarial attack based on integrated gradients. In: The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25\u201329, 2022, OpenReview.net"},{"key":"1770_CR33","doi-asserted-by":"publisher","unstructured":"Chen B, Yin J, Chen S, Chen B, Liu X (2023) An adaptive model ensemble adversarial attack for boosting adversarial transferability. In: IEEE\/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1\u20136, 2023, IEEE. pp. 4466\u20134475. https:\/\/doi.org\/10.1109\/ICCV51070.2023.00414","DOI":"10.1109\/ICCV51070.2023.00414"},{"key":"1770_CR34","unstructured":"Guo C, Rana M, Ciss\u00e9 M, van\u00a0der Maaten L (2018) Countering adversarial images using input transformations. In: 2018 International Conference on Learning Representations (ICLR), OpenReview.net"},{"key":"1770_CR35","doi-asserted-by":"publisher","unstructured":"Prakash A, Moran N, Garber S, DiLillo A, Storer JA (2018) Deflecting adversarial attacks with pixel deflection. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Computer Vision Foundation \/ IEEE Computer Society. pp. 8571\u20138580. https:\/\/doi.org\/10.1109\/CVPR.2018.00894","DOI":"10.1109\/CVPR.2018.00894"},{"key":"1770_CR36","unstructured":"Dziugaite GK, Ghahramani Z, Roy DM (2016) A study of the effect of JPG compression on adversarial images. 
CoRR abs\/1608.00853. arXiv:1608.00853"},{"key":"1770_CR37","doi-asserted-by":"publisher","unstructured":"Wang L (2021) Adversarial perturbation suppression using adaptive Gaussian smoothing and color reduction. In: IEEE International Symposium on Multimedia, ISM 2021, Naple, Italy, November 29\u2013Dec. 1, 2021, IEEE. pp. 158\u2013165. https:\/\/doi.org\/10.1109\/ISM52913.2021.00033","DOI":"10.1109\/ISM52913.2021.00033"},{"key":"1770_CR38","doi-asserted-by":"publisher","unstructured":"Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: 2015 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1\u20139. https:\/\/doi.org\/10.1109\/CVPR.2015.7298594","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"1770_CR39","doi-asserted-by":"publisher","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC (2018) Mobilenetv2: Inverted residuals and linear bottlenecks. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510\u20134520. https:\/\/doi.org\/10.1109\/CVPR.2018.00474","DOI":"10.1109\/CVPR.2018.00474"},{"key":"1770_CR40","doi-asserted-by":"publisher","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, III, WMW, Frangi AF (eds.), 2015 Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer. pp. 234\u2013241. https:\/\/doi.org\/10.1007\/978-3-319-24574-4","DOI":"10.1007\/978-3-319-24574-4"},{"key":"1770_CR41","doi-asserted-by":"publisher","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Computer Vision Foundation \/ IEEE Computer Society. pp. 7132\u20137141. 
https:\/\/doi.org\/10.1109\/CVPR.2018.00745","DOI":"10.1109\/CVPR.2018.00745"},{"key":"1770_CR42","unstructured":"Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville AC, Bengio Y(2014) Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ (eds), 2014 Neural Information Processing Systems(NeurIPS), pp. 2672\u20132680"},{"key":"1770_CR43","unstructured":"Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. In: Larochelle H, Ranzato M, Hadsell R, Balcan M, Lin H (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6\u201312, 2020, virtual. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/4c5bcfec8584af0d967f1ab10179ca4b- Abstract.html"},{"key":"1770_CR44","doi-asserted-by":"publisher","unstructured":"Huang G, Liu Z, Van Der\u00a0Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: 2017 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261\u20132269. https:\/\/doi.org\/10.1109\/CVPR.2017.243","DOI":"10.1109\/CVPR.2017.243"},{"key":"1770_CR45","unstructured":"Chen Y, Li J, Xiao H, Jin X, Yan S, Feng J (2017) Dual path networks. In: Guyon I, von Luxburg U, Bengio S, Wallach HM, Fergus R, Vishwanathan SVN, Garnett R (Eds.), 2017 Neural Information Processing Systems (NeurIPS), pp. 4467\u20134475"},{"key":"1770_CR46","doi-asserted-by":"publisher","unstructured":"Han D, Kim J, Kim J (2017) Deep pyramidal residual networks. In: 2017 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6307\u20136315. https:\/\/doi.org\/10.1109\/CVPR.2017.668","DOI":"10.1109\/CVPR.2017.668"},{"key":"1770_CR47","doi-asserted-by":"publisher","unstructured":"Radosavovic I, Kosaraju RP, Girshick R, He K, Doll\u00e1r P (2020) Designing network design spaces. 
In: 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10425\u201310433. https:\/\/doi.org\/10.1109\/CVPR42600.2020.01044","DOI":"10.1109\/CVPR42600.2020.01044"},{"key":"1770_CR48","doi-asserted-by":"publisher","unstructured":"Xie S, Girshick R, Doll\u00e1r P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: 2017 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5987\u20135995. https:\/\/doi.org\/10.1109\/CVPR.2017.634","DOI":"10.1109\/CVPR.2017.634"},{"key":"1770_CR49","doi-asserted-by":"crossref","unstructured":"Zagoruyko S, Komodakis N (2016) Wide residual networks. In: Wilson RC, Hancock ER, Smith WAP (eds) 2016 British Machine Vision Conference (BMVC). BMVA Press","DOI":"10.5244\/C.30.87"},{"key":"1770_CR50","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Identity mappings in deep residual networks. In: Leibe B, Matas J, Sebe N, Welling M (eds) 2016 European Conference on Computer Vision (ECCV), Springer. pp. 630\u2013645. https:\/\/doi.org\/10.1007\/978-3-319-46493-0_38","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"1770_CR51","doi-asserted-by":"publisher","unstructured":"Ma N, Zhang X, Zheng H, Sun J (2018) Shufflenet V2: practical guidelines for efficient CNN architecture design. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds.), 2018 European Conference on Computer Vision (ECCV), Springer. pp. 122\u2013138. https:\/\/doi.org\/10.1007\/978-3-030-01264-9_8","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"1770_CR52","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. 
In: Bengio Y, LeCun Y (eds), 2015 International Conference on Learning Representations (ICLR)"},{"key":"1770_CR53","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3\u20137, 2021, OpenReview.net"},{"key":"1770_CR54","unstructured":"Athalye A, Carlini N, Wagner DA (2018) Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: Dy JG, Krause A (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10\u201315, 2018, PMLR. pp. 274\u2013283"},{"key":"1770_CR55","unstructured":"Wang Y, Zou D, Yi J, Bailey J, Ma X, Gu Q (2020) Improving adversarial robustness requires revisiting misclassified examples. In: 2020 International Conference on Learning Representations (ICLR), OpenReview.net"},{"key":"1770_CR56","unstructured":"Wang Z, Pang T, Du C, Lin M, Liu W, Yan S (2023) Better diffusion models further improve adversarial training. In: International Conference on Machine Learning, ICML 2023, 23\u201329 July 2023, Honolulu, Hawaii, USA, PMLR. pp. 36246\u201336263. https:\/\/proceedings.mlr.press\/v202\/wang23ad.html"},{"key":"1770_CR57","unstructured":"Bartoldson BR, Diffenderfer J, Parasyris K, Kailkhura B (2024) Adversarial robustness limits via scaling-law and human-alignment studies. In: Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21\u201327, 2024, OpenReview.net. 
https:\/\/openreview.net\/forum?id=HQtTg1try7"},{"key":"1770_CR58","doi-asserted-by":"publisher","unstructured":"Wang Z, Wang H, Tian C, Jin Y (2024) Preventing catastrophic overfitting in fast adversarial training: A bi-level optimization perspective. In: Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XXVIII, Springer. pp. 144\u2013160. https:\/\/doi.org\/10.1007\/978-3-031-73390-1_9","DOI":"10.1007\/978-3-031-73390-1_9"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01770-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01770-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01770-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,2,7]],"date-time":"2025-02-07T16:27:34Z","timestamp":1738945654000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01770-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,17]]},"references-count":58,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,2]]}},"alternative-id":["1770"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01770-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,17]]},"assertion":[{"value":"17 October 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 December 
2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no conflict of interest to declare that is relevant to the content of this article","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"155"}}