{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,2]],"date-time":"2026-01-02T07:51:43Z","timestamp":1767340303291,"version":"3.37.3"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T00:00:00Z","timestamp":1728432000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T00:00:00Z","timestamp":1728432000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"the Science and Technology Innovation Program of Hunan Province","award":["2022GK5002"],"award-info":[{"award-number":["2022GK5002"]}]},{"name":"the Special Foundation for Distinguished Young Scientists of Changsha","award":["kq2209003"],"award-info":[{"award-number":["kq2209003"]}]},{"DOI":"10.13039\/501100013314","name":"111 Project","doi-asserted-by":"crossref","award":["No. D23006"],"award-info":[{"award-number":["No. D23006"]}],"id":[{"id":"10.13039\/501100013314","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Foreign Expert Project of China","award":["G2023041039L"],"award-info":[{"award-number":["G2023041039L"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis. Intell."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In recent years, defending against adversarial examples has gained significant importance, leading to a growing body of research in this area. Among these studies, pre-processing defense approaches have emerged as a prominent research direction. However, existing adversarial example pre-processing techniques often employ a single pre-processing model to counter different types of adversarial attacks. 
Such a strategy overlooks the nuances between different attack types, limiting both the comprehensiveness and the effectiveness of the defense. To address this issue, we propose a divide-and-conquer reconstruction pre-processing algorithm that combines multi-classification with multi-network training to defend more effectively against the mainstream types of adversarial attacks. The prerequisite for, and central challenge of, divide-and-conquer reconstruction defense is distinguishing between multiple types of adversarial attacks. Our method introduces an adversarial attack classification module that exploits the differences in high-frequency information across adversarial example types to perform this multi-classification, a capability that existing adversarial example detection methods can hardly achieve. In addition, we construct a divide-and-conquer reconstruction module that applies a separately trained image reconstruction model to each attack type, ensuring the best defense effectiveness for each. 
Extensive experiments show that our proposed divide-and-conquer defense algorithm exhibits superior performance compared to state-of-the-art pre-processing methods.<\/jats:p>","DOI":"10.1007\/s44267-024-00061-y","type":"journal-article","created":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T04:01:42Z","timestamp":1728446502000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["A divide-and-conquer reconstruction method for defending against adversarial example attacks"],"prefix":"10.1007","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2718-659X","authenticated-orcid":false,"given":"Xiyao","family":"Liu","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0003-9217-3805","authenticated-orcid":false,"given":"Jiaxin","family":"Hu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0002-2128-6856","authenticated-orcid":false,"given":"Qingying","family":"Yang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9264-1363","authenticated-orcid":false,"given":"Ming","family":"Jiang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3524-8877","authenticated-orcid":false,"given":"Jianbiao","family":"He","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9365-7420","authenticated-orcid":false,"given":"Hui","family":"Fang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,10,9]]},"reference":[{"issue":"6","key":"61_CR1","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A. Krizhevsky","year":"2017","unstructured":"Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. 
Communications of the ACM, 60(6), 84\u201390.","journal-title":"Communications of the ACM"},{"key":"61_CR2","first-page":"11976","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Z. Liu","year":"2022","unstructured":"Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A ConvNet for the 2020s. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a011976\u201311986). Piscataway: IEEE."},{"key":"61_CR3","first-page":"2805","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"J. Valmadre","year":"2017","unstructured":"Valmadre, J., Bertinetto, L., Henriques, J., Vedaldi, A., & Torr, P. H. (2017). End-to-end representation learning for correlation filter based tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp.\u00a02805\u20132813). Piscataway: IEEE."},{"issue":"3","key":"61_CR4","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1109\/JPROC.2023.3238524","volume":"111","author":"Z. Zou","year":"2023","unstructured":"Zou, Z., Chen, K., Shi, Z., Guo, Y., & Ye, J. (2023). Object detection in 20 years: a survey. Proceedings of the IEEE, 111(3), 257\u2013276.","journal-title":"Proceedings of the IEEE"},{"key":"61_CR5","first-page":"3781","volume-title":"Proceedings of the 32nd USENIX security symposium","author":"A. Liu","year":"2023","unstructured":"Liu, A., Guo, J., Wang, J., Liang, S., Tao, R., Zhou, W., et al. (2023). X-adv: physical adversarial object attacks against X-ray prohibited item detection. In J. A. Calandrino & C. Troncoso (Eds.), Proceedings of the 32nd USENIX security symposium (pp.\u00a03781\u20133798). Berkeley: USENIX Association."},{"key":"61_CR6","first-page":"395","volume-title":"Proceedings of the 16th European conference on computer vision","author":"A. 
Liu","year":"2020","unstructured":"Liu, A., Wang, J., Liu, X., Cao, B., Zhang, C., & Yu, H. (2020). Bias-based universal adversarial patch attack for automatic check-out. In A. Vedaldi, H. Bischof, T. Brox, et al. (Eds.), Proceedings of the 16th European conference on computer vision (pp.\u00a0395\u2013410). Cham: Springer."},{"key":"61_CR7","first-page":"8565","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"J. Wang","year":"2021","unstructured":"Wang, J., Liu, A., Yin, Z., Liu, S., Tang, S., & Liu, X. (2021). Dual attention suppression attack: generate adversarial camouflage in physical world. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a08565\u20138574). Piscataway: IEEE."},{"key":"61_CR8","first-page":"2456","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"J. Wang","year":"2022","unstructured":"Wang, J., Yin, Z., Hu, P., Liu, A., Tao, R., Qin, H., et al. (2022). Defensive patches for robust recognition in the physical world. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a02456\u20132465). Piscataway: IEEE."},{"key":"61_CR9","doi-asserted-by":"publisher","first-page":"598","DOI":"10.1109\/TIP.2021.3127849","volume":"31","author":"J. Wang","year":"2021","unstructured":"Wang, J., Liu, A., Bai, X., & Liu, X. (2021). Universal adversarial patch attack for automatic checkout using perceptual and attentional bias. IEEE Transactions on Image Processing, 31, 598\u2013611.","journal-title":"IEEE Transactions on Image Processing"},{"key":"61_CR10","doi-asserted-by":"crossref","unstructured":"Pintor, M., Angioni, D., Sotgiu, A., Demetrio, L., Demontis, A., Biggio, B., et\u00a0al. (2022). ImageNet-Patch: a dataset for benchmarking machine learning robustness against adversarial patches. arXiv preprint. 
arXiv:2203.04412.","DOI":"10.1016\/j.patcog.2022.109064"},{"key":"61_CR11","first-page":"770","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"K. He","year":"2016","unstructured":"He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a0770\u2013778). Piscataway: IEEE."},{"key":"61_CR12","doi-asserted-by":"publisher","first-page":"155161","DOI":"10.1109\/ACCESS.2021.3127960","volume":"9","author":"N. Akhtar","year":"2021","unstructured":"Akhtar, N., Mian, A., Kardan, N., & Shah, M. (2021). Advances in adversarial attacks and defenses in computer vision: a survey. IEEE Access, 9, 155161\u2013155196.","journal-title":"IEEE Access"},{"issue":"1","key":"61_CR13","doi-asserted-by":"publisher","first-page":"509","DOI":"10.1007\/s10489-022-03373-y","volume":"53","author":"A. Aldahdooh","year":"2023","unstructured":"Aldahdooh, A., Hamidouche, W., & D\u00e9forges, O. (2023). Revisiting model\u2019s uncertainty and confidences for adversarial example detection. Applied Intelligence, 53(1), 509\u2013531.","journal-title":"Applied Intelligence"},{"key":"61_CR14","first-page":"9877","volume-title":"Proceedings of the AAAI conference on artificial intelligence","author":"J. Tian","year":"2021","unstructured":"Tian, J., Zhou, J., Li, Y., & Duan, J. (2021). Detecting adversarial examples from sensitivity inconsistency of spatial-transform domain. In Proceedings of the AAAI conference on artificial intelligence (pp.\u00a09877\u20139885). Palo Alto: AAAI Press."},{"key":"61_CR15","first-page":"5498","volume-title":"Proceedings of the 36th international conference on machine learning","author":"K. Roth","year":"2019","unstructured":"Roth, K., Kilcher, Y., & Hofmann, T. (2019). The odds are odd: a statistical test for detecting adversarial examples. In K. Chaudhuri & R. 
Salakhutdinov (Eds.), Proceedings of the 36th international conference on machine learning. (pp.\u00a05498\u20135507). Retrieved December 15, 2023, from http:\/\/proceedings.mlr.press\/v97\/roth19a.html."},{"key":"61_CR16","first-page":"1","volume-title":"Proceedings of the international joint conference on neural networks","author":"G. Fidel","year":"2020","unstructured":"Fidel, G., Bitton, R., & Shabtai, A. (2020). When explainability meets adversarial learning: detecting adversarial examples using SHAP signatures. In Proceedings of the international joint conference on neural networks (pp.\u00a01\u20138). Piscataway: IEEE."},{"key":"61_CR17","first-page":"13398","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"X. Jia","year":"2022","unstructured":"Jia, X., Zhang, Y., Wu, B., Ma, K., Wang, J., & Cao, X. (2022). LAS-AT: adversarial training with learnable attack strategy. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a013398\u201313408). Piscataway: IEEE."},{"key":"61_CR18","first-page":"25595","volume-title":"Proceedings of the international conference on machine learning","author":"C. Yu","year":"2022","unstructured":"Yu, C., Han, B., Shen, L., Yu, J., Gong, C., Gong, M., et al. (2022). Understanding robust overfitting of adversarial training and beyond. In K. Chaudhuri, S. Jegelka, L. Song, et al. (Eds.), Proceedings of the international conference on machine learning (pp.\u00a025595\u201325610). Retrieved December 15, 2023, from https:\/\/proceedings.mlr.press\/v162\/yu22b.html."},{"key":"61_CR19","first-page":"1028","volume-title":"Proceedings of the AAAI conference on artificial intelligence","author":"A. Liu","year":"2019","unstructured":"Liu, A., Liu, X., Fan, J., Ma, Y., Zhang, A., Xie, H., et al. (2019). Perceptual-sensitive GAN for generating adversarial patches. 
In Proceedings of the AAAI conference on artificial intelligence (pp.\u00a01028\u20131035). Palo Alto: AAAI Press."},{"key":"61_CR20","first-page":"3353","volume-title":"Proceedings of the 34th international conference on neural information processing systems","author":"A. Shafahi","year":"2019","unstructured":"Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., et al. (2019). Adversarial training for free! In H. Wallach, H. Larochelle, A. Beygelzimer, et al. (Eds.), Proceedings of the 34th international conference on neural information processing systems (pp.\u00a03353\u20133364). Red Hook: Curran Associates."},{"key":"61_CR21","first-page":"1","volume-title":"Proceedings of the 3rd international conference on learning representations","author":"I. J. Goodfellow","year":"2015","unstructured":"Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In Proceedings of the 3rd international conference on learning representations (pp.\u00a01\u201311). Retrieved December 15, 2023, from https:\/\/arxiv.org\/pdf\/1412.6572."},{"key":"61_CR22","first-page":"1778","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"F. Liao","year":"2018","unstructured":"Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., & Zhu, J. (2018). Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a01778\u20131787). Piscataway: IEEE."},{"key":"61_CR23","doi-asserted-by":"publisher","DOI":"10.1109\/TC.2021.3076826","author":"H. Qiu","year":"2021","unstructured":"Qiu, H., Zeng, Y., Zheng, Q., Guo, S., Zhang, T., & Li, H. (2021). An efficient preprocessing-based approach to mitigate advanced adversarial attacks. IEEE Transactions on Computers. Advance online publication. 
https:\/\/doi.org\/10.1109\/TC.2021.3076826.","journal-title":"IEEE Transactions on Computers"},{"key":"61_CR24","first-page":"94","volume-title":"Proceedings of the IEEE 6th international conference on big data security on cloud, IEEE international conference on high performance and smart computing, and IEEE international conference on intelligent data and security","author":"M. Qiu","year":"2020","unstructured":"Qiu, M., & Qiu, H. (2020). Review on image processing based adversarial example defenses in computer vision. In Proceedings of the IEEE 6th international conference on big data security on cloud, IEEE international conference on high performance and smart computing, and IEEE international conference on intelligent data and security (pp.\u00a094\u201399). Piscataway: IEEE."},{"key":"61_CR25","first-page":"1","volume-title":"Proceedings of the 7th international conference on learning representations","author":"C. Song","year":"2019","unstructured":"Song, C., He, K., Wang, L., & Hopcroft, J. E. (2019). Improving the generalization of adversarial training with domain adaptation. In Proceedings of the 7th international conference on learning representations (pp.\u00a01\u201314). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=SJeQEp4YDH."},{"key":"61_CR26","first-page":"6084","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"X. Jia","year":"2019","unstructured":"Jia, X., Wei, X., Cao, X., & Foroosh, H. (2019). ComDefend: an efficient image compression model to defend adversarial examples. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a06084\u20136092). Piscataway: IEEE."},{"key":"61_CR27","first-page":"11447","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"B. Sun","year":"2019","unstructured":"Sun, B., Tsai, N.-H., Liu, F., Yu, R., & Su, H. (2019). 
Adversarial defense by stratified convolutional sparse coding. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a011447\u201311456). Piscataway: IEEE."},{"key":"61_CR28","first-page":"1","volume-title":"Proceedings of the 6th international conference on learning representations","author":"P. Samangouei","year":"2018","unstructured":"Samangouei, P., Kabkab, M., & Chellappa, R. (2018). Defense-GAN: protecting classifiers against adversarial attacks using generative models. In Proceedings of the 6th international conference on learning representations (pp.\u00a01\u201317). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=BkJ3ibb0-."},{"key":"61_CR29","first-page":"6840","volume-title":"Proceedings of the 34th international conference on neural information processing","author":"J. Ho","year":"2020","unstructured":"Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, et al. (Eds.), Proceedings of the 34th international conference on neural information processing (pp.\u00a06840\u20136851). Red Hook: Curran Associates."},{"key":"61_CR30","first-page":"16805","volume-title":"Proceedings of the international conference on machine learning","author":"W. Nie","year":"2022","unstructured":"Nie, W., Guo, B., Huang, Y., Xiao, C., Vahdat, A., & Anandkumar, A. (2022). Diffusion models for adversarial purification. In K. Chaudhuri, S. Jegelka, L. Song, et al. (Eds.), Proceedings of the international conference on machine learning (pp.\u00a016805\u201316827). Stroudsburg: International Machine Learning Society."},{"key":"61_CR31","first-page":"1","volume-title":"Proceedings of the 6th international conference on learning representations","author":"N. Das","year":"2018","unstructured":"Das, N., Shanbhogue, M., Chen, S.-T., Hohman, F., Chen, L., Kounavis, M. E., et al. (2018). 
Keeping the bad guys out: protecting and vaccinating deep learning with jpeg compression. In Proceedings of the 6th international conference on learning representations (pp.\u00a01\u201315). Retrieved December 15, 2023, from http:\/\/arxiv.org\/abs\/1705.02900."},{"key":"61_CR32","first-page":"1","volume-title":"Proceedings of the 6th international conference on learning representations","author":"C. Xie","year":"2018","unstructured":"Xie, C., Wang, J., Zhang, Z., Ren, Z., & Yuille, A. (2018). Mitigating adversarial effects through randomization. In Proceedings of the 6th international conference on learning representations (pp.\u00a01\u201316). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=Sk9yuql0Z."},{"key":"61_CR33","first-page":"6528","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"E. Raff","year":"2019","unstructured":"Raff, E., Sylvester, J., Forsyth, S., & McLean, M. (2019). Barrage of random transforms for adversarially robust defense. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a06528\u20136537). Piscataway: IEEE."},{"key":"61_CR34","first-page":"6988","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"R. Theagarajan","year":"2019","unstructured":"Theagarajan, R., Chen, M., Bhanu, B., & Zhang, J. (2019). ShieldNets: defending against adversarial attacks using probabilistic adversarial robustness. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a06988\u20136996). Piscataway: IEEE."},{"key":"61_CR35","first-page":"1","volume-title":"Proceedings of the 3rd international conference on learning representations","author":"S. Gu","year":"2015","unstructured":"Gu, S., & Rigazio, L. (2015). Towards deep neural network architectures robust to adversarial examples. 
In Proceedings of the 3rd international conference on learning representations (pp.\u00a01\u20139). Retrieved December 15, 2023, from http:\/\/arxiv.org\/abs\/1412.5068."},{"key":"61_CR36","first-page":"1","volume-title":"Proceedings of the 6th international conference on learning representations","author":"A. Madry","year":"2018","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. In Proceedings of the 6th international conference on learning representations (pp.\u00a01\u201323). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=rJzIBfZAb."},{"key":"61_CR37","first-page":"2574","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"S.-M. Moosavi-Dezfooli","year":"2016","unstructured":"Moosavi-Dezfooli, S.-M., Fawzi, A., & Frossard, P. (2016). DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp.\u00a02574\u20132582). Piscataway: IEEE."},{"key":"61_CR38","first-page":"39","volume-title":"Proceedings of the IEEE symposium on security and privacy","author":"N. Carlini","year":"2017","unstructured":"Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In Proceedings of the IEEE symposium on security and privacy (pp.\u00a039\u201357). Piscataway: IEEE."},{"key":"61_CR39","unstructured":"Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the knowledge in a neural network. arXiv preprint. arXiv:1503.02531."},{"key":"61_CR40","first-page":"2206","volume-title":"Proceedings of the 36th international conference on machine learning","author":"F. Croce","year":"2020","unstructured":"Croce, F., & Hein, M. (2020). Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. 
In Proceedings of the 36th international conference on machine learning (pp.\u00a02206\u20132216). Retrieved December 15, 2023, from http:\/\/proceedings.mlr.press\/v119\/croce20b.html."},{"key":"61_CR41","first-page":"1","volume-title":"Proceedings of the 5th international conference on learning representations","author":"J. H. Metzen","year":"2017","unstructured":"Metzen, J. H., Genewein, T., Fischer, V., & Bischoff, B. (2017). On detecting adversarial perturbations. In Proceedings of the 5th international conference on learning representations (pp.\u00a01\u201312). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=SJzCSf9xg."},{"key":"61_CR42","first-page":"1","volume-title":"Proceedings of the 25th annual network and distributed system security symposium","author":"W. Xu","year":"2018","unstructured":"Xu, W., Evans, D., & Qi, Y. (2018). Feature squeezing: detecting adversarial examples in deep neural networks. In Proceedings of the 25th annual network and distributed system security symposium (pp.\u00a01\u201315). Reston: The Internet Society."},{"key":"61_CR43","first-page":"4825","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"J. Liu","year":"2019","unstructured":"Liu, J., Zhang, W., Zhang, Y., Hou, D., Liu, Y., Zha, H., et al. (2019). Detection based defense against adversarial examples from the steganalysis point of view. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a04825\u20134834). Piscataway: IEEE."},{"key":"61_CR44","first-page":"1","volume-title":"Proceedings of the 8th international conference on learning representations","author":"X. Yin","year":"2020","unstructured":"Yin, X., Kolouri, S., & Rohde, G. K. (2020). GAT: generative adversarial training for adversarial example detection and robust classification. In Proceedings of the 8th international conference on learning representations (pp.\u00a01\u201327). 
Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=SJeQEp4YDH."},{"key":"61_CR45","first-page":"1","volume-title":"Proceedings of the 8th international conference on learning representations","author":"Y. Qin","year":"2020","unstructured":"Qin, Y., Frosst, N., Sabour, S., Raffel, C., Cottrell, G., & Hinton, G. (2020). Detecting and diagnosing adversarial images with class-conditional capsule reconstructions. In Proceedings of the 8th international conference on learning representations (pp.\u00a01\u201318). Retrieved December 15, 2023, from https:\/\/openreview.net\/forum?id=Skgy464Kvr."},{"key":"61_CR46","first-page":"3856","volume-title":"Proceedings of the 31st international conference on neural information processing","author":"S. Sabour","year":"2017","unstructured":"Sabour, S., Frosst, N., & Hinton, G. E. (2017). Dynamic routing between capsules. In I. Guyon, U. von Luxburg, S. Bengio, et al. (Eds.), Proceedings of the 31st international conference on neural information processing (pp.\u00a03856\u20133866). Red Hook: Curran Associates."},{"key":"61_CR47","first-page":"972","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Z. Deng","year":"2021","unstructured":"Deng, Z., Yang, X., Xu, S., Su, H., & Zhu, J. (2021). LiBRe: a practical Bayesian approach to adversarial detection. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp.\u00a0972\u2013982). Piscataway: IEEE."},{"key":"61_CR48","unstructured":"Manohar-Alers, N., Feng, R., Singh, S., Song, J., & Prakash, A. (2021). Using anomaly feature vectors for detecting, classifying and warning of outlier adversarial examples. arXiv preprint. arXiv:2107.00561."},{"issue":"11","key":"61_CR49","doi-asserted-by":"publisher","first-page":"2640","DOI":"10.1109\/TIFS.2017.2718479","volume":"12","author":"M. 
Osadchy","year":"2017","unstructured":"Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., & P\u00e9rez-Cabo, D. (2017). No bot expects the deepcaptcha! introducing immutable adversarial examples, with applications to captcha generation. IEEE Transactions on Information Forensics and Security, 12(11), 2640\u20132653.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"issue":"5","key":"61_CR50","first-page":"2106","volume":"18","author":"A. Agarwal","year":"2020","unstructured":"Agarwal, A., Singh, R., Vatsa, M., & Ratha, N. (2020). Image transformation-based defense against adversarial perturbation on deep learning models. IEEE Transactions on Dependable and Secure Computing, 18(5), 2106\u20132121.","journal-title":"IEEE Transactions on Dependable and Secure Computing"},{"key":"61_CR51","doi-asserted-by":"publisher","first-page":"6117","DOI":"10.1109\/TIP.2021.3092582","volume":"30","author":"S. Zhang","year":"2021","unstructured":"Zhang, S., Gao, H., & Rao, Q. (2021). Defense against adversarial attacks by reconstructing images. IEEE Transactions on Image Processing, 30, 6117\u20136129.","journal-title":"IEEE Transactions on Image Processing"},{"key":"61_CR52","first-page":"1","volume-title":"Proceedings of the 3rd international conference on learning representations","author":"K. Simonyan","year":"2015","unstructured":"Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd international conference on learning representations (pp.\u00a01\u201314). Retrieved December 15, 2023, from http:\/\/arxiv.org\/pdf\/1409.1556."},{"key":"61_CR53","first-page":"448","volume-title":"Proceedings of the 29th international conference on machine learning","author":"S. Ioffe","year":"2015","unstructured":"Ioffe, S., & Szegedy, C. (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. 
In Proceedings of the 29th international conference on machine learning (pp.\u00a0448\u2013456). Retrieved December 15, 2023, from http:\/\/proceedings.mlr.press\/v37\/ioffe15.html."},{"key":"61_CR54","unstructured":"Krizhevsky, A., Nair, V., & Hinton, G. (2010). Cifar-10 (Canadian Institute for Advanced Research). Retrieved December 15, 2023, from http:\/\/www.cs.toronto.edu\/kriz\/cifar.html."},{"key":"61_CR55","unstructured":"Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto."},{"key":"61_CR56","doi-asserted-by":"publisher","first-page":"1711","DOI":"10.1109\/TIP.2019.2940533","volume":"29","author":"A. Mustafa","year":"2020","unstructured":"Mustafa, A., Khan, S. H., Hayat, M., Shen, J., & Shao, L. (2020). Image super-resolution as a defense against adversarial attacks. IEEE Transactions on Image Processing, 29, 1711\u20131724.","journal-title":"IEEE Transactions on Image Processing"}],"container-title":["Visual 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-024-00061-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44267-024-00061-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-024-00061-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T04:18:40Z","timestamp":1728447520000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44267-024-00061-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,9]]},"references-count":56,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["61"],"URL":"https:\/\/doi.org\/10.1007\/s44267-024-00061-y","relation":{},"ISSN":["2731-9008"],"issn-type":[{"type":"electronic","value":"2731-9008"}],"subject":[],"published":{"date-parts":[[2024,10,9]]},"assertion":[{"value":"13 September 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 August 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 August 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 October 2024","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing 
interests"}}],"article-number":"30"}}