{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T10:00:54Z","timestamp":1776074454265,"version":"3.50.1"},"reference-count":69,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T00:00:00Z","timestamp":1773705600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T00:00:00Z","timestamp":1773705600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002701","name":"Ministry of Education","doi-asserted-by":"publisher","award":["BK21 FOUR project (AI-driven Convergence Software Education Research Program) (412020214871)"],"award-info":[{"award-number":["BK21 FOUR project (AI-driven Convergence Software Education Research Program) (412020214871)"]}],"id":[{"id":"10.13039\/501100002701","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int. J. Inf. Secur."],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The adversarial vulnerability of deep learning models poses a significant challenge to the safe commercialization of AI technologies. Although numerous adversarial defenses have been proposed, most offer limited robustness, emphasizing the need for continued exploration of the properties and causes of adversarial vulnerabilities. In this study, we hypothesize that the phenomenon of adversarially trained models exhibiting low adversarial accuracies is due to insufficient exploration and learning from adversarial examples that exist on the manifold. In this regard, we propose a novel perturbation generation method, \u201cmixed perturbation (MP),\u201d which aims to discover various adversarial examples for adversarial training. The proposed method generates perturbations by leveraging information from both the main task and auxiliary tasks, combining them through a random weighted summation. This approach preserves the primary directionality of the main task perturbation while introducing variability in perturbation directions, enabling the discovery of diverse adversarial examples from a defensive perspective. Extensive experiments on five benchmark datasets show that the non-optimized MP surpasses existing AT methods in several settings, while the optimized MP consistently achieves the highest robustness. We further analyze perturbation diversity, conduct ablation studies to explain MP\u2019s effectiveness. 
In addition, through combination experiments with a state-of-the-art AT method, we confirm the promising potential of MP to enhance model robustness and outline directions for future research.<\/jats:p>","DOI":"10.1007\/s10207-026-01225-1","type":"journal-article","created":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T18:53:58Z","timestamp":1773773638000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Mixed Perturbation: Generating Directionally Diverse Perturbations for Adversarial Training"],"prefix":"10.1007","volume":"25","author":[{"given":"Changhun","family":"Hyun","sequence":"first","affiliation":[]},{"given":"Hyeyoung","family":"Park","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,3,17]]},"reference":[{"issue":"6","key":"1225_CR1","doi-asserted-by":"publisher","first-page":"420","DOI":"10.1007\/s42979-021-00815-1","volume":"2","author":"H Sarker","year":"2021","unstructured":"Sarker, H.: Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2(6), 420 (2021). https:\/\/doi.org\/10.1007\/s42979-021-00815-1","journal-title":"SN Comput. Sci."},{"issue":"3","key":"1225_CR2","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1007\/s10462-023-10679-x","volume":"57","author":"AF Gambin","year":"2024","unstructured":"Gambin, A.F., Yazidi, A., Vasilakos, A., Haugerud, H., Djenouri, Y.: Deepfakes: Current and future trends. Artif. Intell. Rev. 57(3), 64 (2024). https:\/\/doi.org\/10.1007\/s10462-023-10679-x","journal-title":"Artif. Intell. Rev."},{"key":"1225_CR3","unstructured":"Marchal, N., Xu, R., Elasmar, R., Gabriel, I., Goldberg, B., and Isaac, W.: Generative AI misuse: A taxonomy of tactics and insights from real-world data. arXiv preprint arXiv:2406.13843 (2024)"},{"key":"1225_CR4","doi-asserted-by":"crossref","unstructured":"Anderljung, M., and Hazell, J.: Protecting society from AI misuse: When are restrictions on capabilities warranted? arXiv preprint arXiv:2303.09377 (2023)","DOI":"10.1007\/s00146-024-02130-8"},{"key":"1225_CR5","volume-title":"Artificial Intelligence Act","author":"T Madiega","year":"2021","unstructured":"Madiega, T.: Artificial Intelligence Act. European Parliamentary Research Service, European Parliament (2021)"},{"key":"1225_CR6","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R.: Intriguing properties of neural networks. In: Proc. 2nd Int. Conf. Learn. Represent. (ICLR) (2014)"},{"key":"1225_CR7","doi-asserted-by":"crossref","unstructured":"Nguyen, A., Yosinski, J., and Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 427\u2013436 (2015)","DOI":"10.1109\/CVPR.2015.7298640"},{"key":"1225_CR8","unstructured":"Hendrycks, D., Carlini, N., Schulman, J., and Steinhardt, J.: Unsolved problems in ML safety. arXiv preprint arXiv:2109.13916 (2021)"},{"key":"1225_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.129167","volume":"619","author":"S Kim","year":"2025","unstructured":"Kim, S., Im, H., Lee, W., Lee, S., Kang, P.: RobustMixGen: data augmentation for enhancing robustness of visual-language models in the presence of distribution shift. Neurocomputing 619, 129167 (2025). 
https:\/\/doi.org\/10.1016\/j.neucom.2024.129167","journal-title":"Neurocomputing"},{"key":"1225_CR10","doi-asserted-by":"crossref","unstructured":"Wang, Y., Hong, J., Cheraghian, A., Rahman, S., Ahmedt-Aristizabal, D., Petersson, L., and Harandi, M.: Continual test-time domain adaptation via dynamic sample selection. In: Proc. IEEE\/CVF Winter Conf. Appl. Comput. Vis. (WACV), pp. 1701\u20131710 (2024)","DOI":"10.1109\/WACV57701.2024.00172"},{"key":"1225_CR11","doi-asserted-by":"crossref","unstructured":"Wang, N., Cui, Z., Li, A., Lu, Y., Wang, R., and Nie, F.: Structured doubly stochastic graph-based clustering. IEEE Trans. Neural Netw. Learn. Syst. (2025)","DOI":"10.1109\/TNNLS.2025.3531987"},{"key":"1225_CR12","first-page":"E2023","volume":"100\u20132","author":"A Oprea","year":"2023","unstructured":"Oprea, A., Vassilev, A.: Adversarial machine learning: a taxonomy and terminology of attacks and mitigations. National Institute of Standards and Technology, NIST AI 100\u20132, E2023 (2023)","journal-title":"National Institute of Standards and Technology, NIST AI"},{"key":"1225_CR13","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)"},{"key":"1225_CR14","doi-asserted-by":"crossref","unstructured":"Kurakin, A., Goodfellow, I.J., and Bengio, S.: Adversarial examples in the physical world. In: Artificial Intelligence Safety and Security, pp. 99\u2013112. Chapman & Hall\/CRC (2018)","DOI":"10.1201\/9781351251389-8"},{"key":"1225_CR15","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: Proc. Int. Conf. Learn. Represent. (ICLR) (2018). https:\/\/arxiv.org\/abs\/1706.06083"},{"key":"1225_CR16","doi-asserted-by":"publisher","unstructured":"Carlini, N. and Wagner, D.: Towards evaluating the robustness of neural networks. In: Proc. IEEE Symp. Security Privacy (SP), pp. 39\u201357 (2017). https:\/\/doi.org\/10.1109\/SP.2017.49","DOI":"10.1109\/SP.2017.49"},{"issue":"6433","key":"1225_CR17","doi-asserted-by":"publisher","first-page":"1287","DOI":"10.1126\/science.aaw4399","volume":"363","author":"SG Finlayson","year":"2019","unstructured":"Finlayson, S.G., Bowers, J.D., Ito, J., Zittrain, J.L., Beam, A.L., Kohane, I.S.: Adversarial attacks on medical machine learning. Science 363(6433), 1287\u20131289 (2019). https:\/\/doi.org\/10.1126\/science.aaw4399","journal-title":"Science"},{"key":"1225_CR18","unstructured":"Angelos, F., Panagiotis, T., Rowan, M., Nicholas, R., Sergey, L., and Yarin, G.: Can autonomous vehicles identify, recover from, and adapt to distribution shifts? In: Proc. IEEE Int. Conf. Mach. Learn. (ICML), pp. 3145\u20133153 (2020)"},{"key":"1225_CR19","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., and Swami, A.: Practical black-box attacks against machine learning. In: Proc. ACM Asia Conf. Comput. Commun. Secur. (ASIACCS), pp. 506\u2013519 (2017)","DOI":"10.1145\/3052973.3053009"},{"key":"1225_CR20","unstructured":"Yue, K., Jin, R., Wong, C.W., Baron, D., and Dai, H.: Gradient obfuscation gives a false sense of security in federated learning. In: 32nd USENIX Security Symposium (USENIX Security 23), pp. 6381\u20136398 (2023)"},{"key":"1225_CR21","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. 
In: Proc. IEEE Symp. Security Privacy (SP), pp. 582\u2013597 (2016)","DOI":"10.1109\/SP.2016.41"},{"key":"1225_CR22","unstructured":"Kuang, H., Liu, H., Wu, Y., Satoh, S.I., and Ji, R.: Improving adversarial robustness via information bottleneck distillation. Adv. Neural Inf. Process. Syst. 36 (2024)"},{"issue":"3","key":"1225_CR23","doi-asserted-by":"publisher","first-page":"1329","DOI":"10.1109\/TNNLS.2021.3105238","volume":"34","author":"F Nesti","year":"2021","unstructured":"Nesti, F., Biondi, A., Buttazzo, G.: Detecting adversarial examples by input transformations, defense perturbations, and voting. IEEE Trans. Neural Netw. Learn. Syst. 34(3), 1329\u20131341 (2021)","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"1225_CR24","doi-asserted-by":"publisher","unstructured":"Prakash, A., Moran, N., Garber, S., DiLillo, A., and Storer, J.: Deflecting adversarial attacks with pixel deflection. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 8571\u20138580 (2018). https:\/\/doi.org\/10.1109\/CVPR.2018.00894","DOI":"10.1109\/CVPR.2018.00894"},{"key":"1225_CR25","unstructured":"Chen, J., Raghuram, J., Choi, J., Wu, X., Liang, Y., and Jha, S.: Revisiting adversarial robustness of classifiers with a reject option. In: Proc. AAAI Workshop on Adversarial Machine Learning Beyond (2021)"},{"key":"1225_CR26","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1016\/j.neucom.2021.04.113","volume":"470","author":"F Crecchi","year":"2022","unstructured":"Crecchi, F., Melis, M., Sotgiu, A., Bacciu, D., Biggio, B.: FADER: fast adversarial example rejection. Neurocomputing 470, 257\u2013268 (2022). https:\/\/doi.org\/10.1016\/j.neucom.2021.04.113","journal-title":"Neurocomputing"},{"key":"1225_CR27","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2022.103748","volume":"311","author":"A Aldahdooh","year":"2022","unstructured":"Aldahdooh, A., Hamidouche, W., Fezza, S.A., D\u00e9forges, O.: Adversarial example detection for dnn models: a review and experimental comparison. Artif. Intell. 311, 103748 (2022). https:\/\/doi.org\/10.1016\/j.artint.2022.103748","journal-title":"Artif. Intell."},{"key":"1225_CR28","doi-asserted-by":"publisher","unstructured":"Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., and Jana, S.: Certified robustness to adversarial examples with differential privacy. In: Proc. IEEE Symp. Secur. Privacy (SP), pp. 656\u2013672 (2019). https:\/\/doi.org\/10.1109\/SP.2019.00044","DOI":"10.1109\/SP.2019.00044"},{"key":"1225_CR29","doi-asserted-by":"publisher","unstructured":"Wang, H., Zhang, A., Zheng, S., Shi, X., Li, M., and Wang, Z.: Removing batch normalization boosts adversarial training. In: Proc. Int. Conf. Mach. Learn. (ICML), pp. 23433\u201323445 (2022). https:\/\/doi.org\/10.48550\/arXiv.2106.04554","DOI":"10.48550\/arXiv.2106.04554"},{"key":"1225_CR30","doi-asserted-by":"publisher","unstructured":"Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., and Jordan, M.: Theoretically principled trade-off between robustness and accuracy. In: Proc. Int. Conf. Mach. Learn. (ICML), pp. 7472\u20137482 (2019). https:\/\/doi.org\/10.48550\/arXiv.1901.08573","DOI":"10.48550\/arXiv.1901.08573"},{"key":"1225_CR31","doi-asserted-by":"publisher","unstructured":"Wong, E., Rice, L., and Kolter, J.Z.: Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994 (2020). 
https:\/\/doi.org\/10.48550\/arXiv.2001.03994","DOI":"10.48550\/arXiv.2001.03994"},{"key":"1225_CR32","doi-asserted-by":"publisher","unstructured":"Athalye, A., Carlini, N., and Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: Proc. Int. Conf. Mach. Learn. (ICML), pp. 274\u2013283 (2018). https:\/\/doi.org\/10.48550\/arXiv.1802.00420","DOI":"10.48550\/arXiv.1802.00420"},{"key":"1225_CR33","first-page":"1633","volume":"33","author":"F Tram\u00e9r","year":"2020","unstructured":"Tram\u00e9r, F., Boneh, D.: On adaptive attacks to adversarial example defenses. Adv. Neural Inf. Process. Syst. (NeurIPS) 33, 1633\u20131645 (2020)","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"1225_CR34","doi-asserted-by":"publisher","unstructured":"Liu, Z., Liu, Q., Liu, T., Xu, N., Lin, X., Wang, Y., and Wen, W.: Feature distillation: DNN-oriented JPEG compression against adversarial examples. In: Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 860\u2013868 (2019). https:\/\/doi.org\/10.1109\/CVPR.2019.00095","DOI":"10.1109\/CVPR.2019.00095"},{"key":"1225_CR35","doi-asserted-by":"publisher","unstructured":"Xu, W.: Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155 (2017). https:\/\/doi.org\/10.48550\/arXiv.1704.01155","DOI":"10.48550\/arXiv.1704.01155"},{"issue":"3","key":"1225_CR36","doi-asserted-by":"publisher","first-page":"1329","DOI":"10.1109\/TNNLS.2021.3076023","volume":"34","author":"F Nesti","year":"2021","unstructured":"Nesti, F., Biondi, A., Buttazzo, G.: Detecting adversarial examples by input transformations, defense perturbations, and voting. IEEE Trans. Neural Netw. Learn. Syst. 34(3), 1329\u20131341 (2021). https:\/\/doi.org\/10.1109\/TNNLS.2021.3076023","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"1225_CR37","doi-asserted-by":"publisher","unstructured":"Chen, Y., Zhang, M., Li, J., Kuang, X., Zhang, X., and Zhang, H.: Dynamic and diverse transformations for defending against adversarial examples. In: 2022 IEEE Int. Conf. Trust, Secur. Privacy Comput. Commun. (TrustCom), pp. 976\u2013983 (2022). https:\/\/doi.org\/10.1109\/TrustCom56396.2022.00059","DOI":"10.1109\/TrustCom56396.2022.00059"},{"key":"1225_CR38","doi-asserted-by":"publisher","unstructured":"Klingner, M., Kumar, V.R., Yogamani, S., B\u00e4r, A., and Fingscheidt, T.: Detecting adversarial perturbations in multi-task perception. In: 2022 IEEE\/RSJ Int. Conf. Intell. Robots Syst. (IROS), pp. 13050\u201313057 (2022). https:\/\/doi.org\/10.1109\/IROS47612.2022.9981661","DOI":"10.1109\/IROS47612.2022.9981661"},{"key":"1225_CR39","doi-asserted-by":"publisher","unstructured":"Carlini, N., and Wagner, D.: Adversarial examples are not easily detected: Bypassing ten detection methods. In: Proc. 10th ACM Workshop Artif. Intell. Secur. (AISec), pp. 3\u201314 (2017). https:\/\/doi.org\/10.1145\/3128572.3140444","DOI":"10.1145\/3128572.3140444"},{"key":"1225_CR40","doi-asserted-by":"publisher","unstructured":"Pfrommer, S., Anderson, B.G., and Sojoudi, S.: Projected randomized smoothing for certified adversarial robustness. arXiv preprint arXiv:2309.13794 (2023). https:\/\/doi.org\/10.48550\/arXiv.2309.13794","DOI":"10.48550\/arXiv.2309.13794"},{"key":"1225_CR41","doi-asserted-by":"publisher","unstructured":"Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A.: Robustness may be at odds with accuracy. In: Proc. Int. Conf. Learn. Represent. (ICLR), pp. 
1\u201312 (2019). https:\/\/doi.org\/10.48550\/arXiv.1805.12152","DOI":"10.48550\/arXiv.1805.12152"},{"key":"1225_CR42","doi-asserted-by":"publisher","DOI":"10.1145\/3597925","author":"S Sicong","year":"2023","unstructured":"Sicong, S., et al.: Interpreting adversarial examples in deep learning: a review. ACM Comput. Surv. (2023). https:\/\/doi.org\/10.1145\/3597925","journal-title":"ACM Comput. Surv."},{"key":"1225_CR43","unstructured":"Anonymous: Title omitted for blinded review. Name of journal omitted for blinded review (2024)"},{"key":"1225_CR44","doi-asserted-by":"publisher","first-page":"61113","DOI":"10.1109\/ACCESS.2024.3402641","volume":"12","author":"JC Costa","year":"2024","unstructured":"Costa, J.C., Roxo, T., Proen\u00e7a, H., In\u00e1cio, P.R.: How deep learning sees the world: a survey on adversarial attacks & defenses. IEEE Access 12, 61113\u201361136 (2024). https:\/\/doi.org\/10.1109\/ACCESS.2024.3402641","journal-title":"IEEE Access"},{"key":"1225_CR45","doi-asserted-by":"publisher","unstructured":"Wu, B., Wei, S., Zhu, M., Zheng, M., Zhu, Z., Zhang, M., et al.: Defenses in adversarial machine learning: A survey. arXiv preprint arXiv:1704.01155 (2023). https:\/\/doi.org\/10.48550\/arXiv.1704.01155","DOI":"10.48550\/arXiv.1704.01155"},{"key":"1225_CR46","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2023.100573","volume":"49","author":"P Bountakas","year":"2023","unstructured":"Bountakas, P., Zarras, A., Lekidis, A., Xenakis, C.: Defense strategies for adversarial machine learning: a survey. Comput. Sci. Rev. 49, 100573 (2023). https:\/\/doi.org\/10.1016\/j.cosrev.2023.100573","journal-title":"Comput. Sci. Rev."},{"key":"1225_CR47","doi-asserted-by":"publisher","unstructured":"Zhao, M., Zhang, L., Ye, J., Lu, H., Yin, B., and Wang, X.: Adversarial training: A survey. arXiv preprint arXiv:2410.15042 (2023). https:\/\/doi.org\/10.48550\/arXiv.2410.15042","DOI":"10.48550\/arXiv.2410.15042"},{"key":"1225_CR48","doi-asserted-by":"publisher","unstructured":"Wu, B., Gu, J., Li, Z., Cai, D., He, X., and Liu, W.: Towards efficient adversarial training on vision transformers. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 307\u2013325 (2022). https:\/\/doi.org\/10.1007\/978-3-031-20062-5_18","DOI":"10.1007\/978-3-031-20062-5_18"},{"key":"1225_CR49","doi-asserted-by":"publisher","unstructured":"Shafahi, A., Najibi, M., Ghiasi, M.A., Xu, Z., Dickerson, J., Studer, C., et al., and Goldstein, T.: Adversarial training for free!. Adv. Neural Inf. Process. Syst. (NeurIPS) 32 (2019). https:\/\/doi.org\/10.48550\/arXiv.1904.12843","DOI":"10.48550\/arXiv.1904.12843"},{"key":"1225_CR50","doi-asserted-by":"publisher","unstructured":"Tram\u00e9r, F., and Boneh, D.: Adversarial training and robustness for multiple perturbations. Adv. Neural Inf. Process. Syst. (NeurIPS) 32 (2019). https:\/\/doi.org\/10.48550\/arXiv.1904.13000","DOI":"10.48550\/arXiv.1904.13000"},{"issue":"22","key":"1225_CR51","doi-asserted-by":"publisher","first-page":"8079","DOI":"10.3390\/app10228079","volume":"10","author":"S Park","year":"2020","unstructured":"Park, S., So, J.: On the effectiveness of adversarial training in defending against adversarial example attacks for image classification. Appl. Sci. 10(22), 8079 (2020). https:\/\/doi.org\/10.3390\/app10228079","journal-title":"Appl. Sci."},{"key":"1225_CR52","doi-asserted-by":"publisher","unstructured":"Hu, Y., Wu, F., Zhang, H., and Zhao, H.: Understanding the impact of adversarial robustness on accuracy disparity. In: Proc. Int. Conf. Mach. Learn. 
(ICML), pp. 13679\u201313709 (2023). https:\/\/doi.org\/10.48550\/arXiv.2302.01493","DOI":"10.48550\/arXiv.2302.01493"},{"key":"1225_CR53","doi-asserted-by":"publisher","unstructured":"Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A.: Adversarial examples are not bugs, they are features. Adv. Neural Inf. Process. Syst. (NeurIPS) 32 (2019). https:\/\/doi.org\/10.48550\/arXiv.1905.02175","DOI":"10.48550\/arXiv.1905.02175"},{"key":"1225_CR54","doi-asserted-by":"publisher","unstructured":"Mao, C., et al.: Multitask learning strengthens adversarial robustness. In: Proc. Eur. Conf. Comput. Vis. (ECCV), pp. 158\u2013174 (2020). https:\/\/doi.org\/10.1007\/978-3-030-58586-0_10","DOI":"10.1007\/978-3-030-58586-0_10"},{"issue":"3","key":"1225_CR55","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3490488","volume":"18","author":"T Huang","year":"2022","unstructured":"Huang, T., Menkovski, V., Pei, Y., Wang, Y., Pechenizkiy, M.: Direction-aggregated attack for transferable adversarial examples. ACM J. Emerg. Technol. Comput. Syst. 18(3), 1\u201322 (2022). https:\/\/doi.org\/10.1145\/3490488","journal-title":"ACM J. Emerg. Technol. Comput. Syst."},{"key":"1225_CR56","doi-asserted-by":"publisher","unstructured":"Li, Z., Yin, B., Yao, T., Guo, J., Ding, S., Chen, S., and Liu, C.: Sibling-attack: Rethinking transferable adversarial attacks against face recognition. In: Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 24626\u201324637 (2023). https:\/\/doi.org\/10.1109\/CVPR52729.2023.02361","DOI":"10.1109\/CVPR52729.2023.02361"},{"key":"1225_CR57","doi-asserted-by":"publisher","unstructured":"Li, J.W., Liang, R.W., Yeh, C.H., Tsai, C.C., Yu, K., Lu, C.S., and Chen, S.T.: Adversarial robustness overestimation and instability in TRADES. arXiv preprint arXiv:2410.07675 (2024). https:\/\/doi.org\/10.48550\/arXiv.2410.07675","DOI":"10.48550\/arXiv.2410.07675"},{"key":"1225_CR58","doi-asserted-by":"publisher","unstructured":"Wu, B., Chen, J., Cai, D., He, X., and Gu, Q.: Do wider neural networks really help adversarial robustness?. Adv. Neural Inf. Process. Syst. (NeurIPS) 34, 7054\u20137067 (2021). https:\/\/doi.org\/10.48550\/arXiv.2103.11710","DOI":"10.48550\/arXiv.2103.11710"},{"issue":"2","key":"1225_CR59","doi-asserted-by":"publisher","first-page":"441","DOI":"10.1214\/24-AOS2348","volume":"52","author":"H Hassani","year":"2024","unstructured":"Hassani, H., Javanmard, A.: The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression. Ann. Stat. 52(2), 441\u2013465 (2024). https:\/\/doi.org\/10.1214\/24-AOS2348","journal-title":"Ann. Stat."},{"key":"1225_CR60","doi-asserted-by":"crossref","unstructured":"Caruana, R.: Multitask learning: A knowledge-based source of inductive bias. In: Proc. Int. Conf. Mach. Learn. (ICML), pp. 41\u201348 (1993)","DOI":"10.1016\/B978-1-55860-307-3.50012-5"},{"issue":"21","key":"1225_CR61","doi-asserted-by":"publisher","first-page":"2691","DOI":"10.3390\/electronics10212691","volume":"10","author":"SW Lee","year":"2021","unstructured":"Lee, S.W., Lee, R., Seo, M.S., Park, J.C., Noh, H.C., Ju, J.G., Choi, D.G.: Multi-task learning with task-specific feature filtering in low-data condition. Electronics 10(21), 2691 (2021). 
https:\/\/doi.org\/10.3390\/electronics10212691","journal-title":"Electronics"},{"issue":"11","key":"1225_CR62","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278\u20132324 (1998). https:\/\/doi.org\/10.1109\/5.726791","journal-title":"Proc. IEEE"},{"key":"1225_CR63","unstructured":"Krizhevsky, A., and Hinton, G.: Learning multiple layers of features from tiny images. Technical Report (2009)"},{"key":"1225_CR64","unstructured":"Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: NIPS Workshop on Deep Learning, Unsupervised Feature Learning, vol. 5, p. 7 (2011)"},{"key":"1225_CR65","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1016\/j.neunet.2012.02.016","volume":"32","author":"J Stallkamp","year":"2012","unstructured":"Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 32, 323\u2013332 (2012). https:\/\/doi.org\/10.1016\/j.neunet.2012.02.016","journal-title":"Neural Netw."},{"issue":"7","key":"1225_CR66","first-page":"3","volume":"7","author":"Y Le","year":"2015","unstructured":"Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N. 7(7), 3 (2015)","journal-title":"CS 231N."},{"key":"1225_CR67","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107332","volume":"110","author":"X Ma","year":"2021","unstructured":"Ma, X., Niu, Y., Gu, L., Wang, Y., Zhao, Y., Bailey, J., Lu, F.: Understanding adversarial attacks on deep learning-based medical image analysis systems. Pattern Recognit. 110, 107332 (2021). https:\/\/doi.org\/10.1016\/j.patcog.2020.107332","journal-title":"Pattern Recognit."},{"key":"1225_CR68","doi-asserted-by":"publisher","unstructured":"Jia, X., Zhang, Y., Wu, B., Ma, K., Wang, J., and Cao, X.: Las-at: Adversarial training with learnable attack strategy. In: Proc. IEEE\/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 13398\u201313405 (2022). https:\/\/doi.org\/10.1109\/CVPR52688.2022.01303","DOI":"10.1109\/CVPR52688.2022.01303"},{"issue":"7","key":"1225_CR69","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3639237","volume":"56","author":"P Xiong","year":"2024","unstructured":"Xiong, P., Tegegn, M., Sarin, J.S., Pal, S., Rubin, J.: It is all about data: a survey on the effects of data on adversarial robustness. ACM Comput. Surv. 56(7), 1\u201341 (2024). https:\/\/doi.org\/10.1145\/3639237","journal-title":"ACM Comput. 
Surv."}],"container-title":["International Journal of Information Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10207-026-01225-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10207-026-01225-1","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10207-026-01225-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T09:21:29Z","timestamp":1776072089000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10207-026-01225-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,17]]},"references-count":69,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2026,4]]}},"alternative-id":["1225"],"URL":"https:\/\/doi.org\/10.1007\/s10207-026-01225-1","relation":{},"ISSN":["1615-5270"],"issn-type":[{"value":"1615-5270","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,17]]},"assertion":[{"value":"31 July 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 January 2026","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 March 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"74"}}