{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T12:03:36Z","timestamp":1775217816272,"version":"3.50.1"},"reference-count":41,"publisher":"Institution of Engineering and Technology (IET)","issue":"1","license":[{"start":{"date-parts":[[2025,6,8]],"date-time":"2025-06-08T00:00:00Z","timestamp":1749340800000},"content-version":"vor","delay-in-days":158,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/doi.wiley.com\/10.1002\/tdm_license_1.1"}],"funder":[{"DOI":"10.13039\/100031478","name":"NextGenerationEU","doi-asserted-by":"publisher","award":["101120726"],"award-info":[{"award-number":["101120726"]}],"id":[{"id":"10.13039\/100031478","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100011698","name":"Junta de Comunidades de Castilla-La Mancha","doi-asserted-by":"publisher","award":["SBPLY\/21\/180501\/000025"],"award-info":[{"award-number":["SBPLY\/21\/180501\/000025"]}],"id":[{"id":"10.13039\/501100011698","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["ietresearch.onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["IET Image Processing"],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:title>ABSTRACT<\/jats:title>\n                  <jats:p>Adversarial examples are an intriguing and critical topic in the field of machine learning. The impact of malignant perturbations on deep learning\u2010based systems, especially in safety\u2010critical applications, highlights a significant security concern. 
While most research has focused on artificially generated adversarial attacks, crafted through optimization algorithms and constrained perturbations, it is important to note that adversarial examples can also occur naturally, without any artificial manipulation, during the prediction of real\u2010world images. These naturally occurring adversarial examples pose unique challenges, as they are harder to detect and interpret. Despite their importance, the study of natural adversarial examples remains in its early stages. Fundamental questions remain unanswered: Do natural adversarial examples exhibit similar behaviours or properties to artificially generated ones? How should models be adapted to improve their robustness against such natural inputs? To address these questions, this work proposes an in\u2010depth analysis of activation maps to compare the internal behaviour of neural networks when processing clean images, artificially perturbed inputs and natural adversarial examples. A set of quantitative metrics is extracted from activation heatmaps at various network layers, including mean activation intensity, centroid displacement and standard reference image quality metrics. These measurements enable a systematic comparison of how the network attends to different image regions under varying conditions. 
The experimental results demonstrate that natural adversarial examples exhibit statistically significant differences in activation patterns compared to their artificial counterparts, suggesting that they may require distinct strategies for detection and\u00a0defence.<\/jats:p>","DOI":"10.1049\/ipr2.70123","type":"journal-article","created":{"date-parts":[[2025,6,12]],"date-time":"2025-06-12T11:20:27Z","timestamp":1749727227000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Characterizing Natural Adversarial Examples Through Activation Map Analysis"],"prefix":"10.1049","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7748-6756","authenticated-orcid":false,"given":"Anibal","family":"Pedraza","sequence":"first","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda Industrial University of Castilla \u2010 La Mancha Ciudad Real Spain"}]},{"given":"Nerea","family":"Leon","sequence":"additional","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda Industrial University of Castilla \u2010 La Mancha Ciudad Real Spain"}]},{"given":"Harbinder","family":"Singh","sequence":"additional","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda Industrial University of Castilla \u2010 La Mancha Ciudad Real Spain"}]},{"given":"Oscar","family":"Deniz","sequence":"additional","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda Industrial University of Castilla \u2010 La Mancha Ciudad Real Spain"}]},{"given":"Gloria","family":"Bueno","sequence":"additional","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda Industrial University of Castilla \u2010 La Mancha Ciudad Real Spain"}]}],"member":"265","published-online":{"date-parts":[[2025,6,8]]},"reference":[{"key":"e_1_2_10_2_1","unstructured":"I. 
J.Goodfellow J.Shlens andC.Szegedy \u201cExplaining and Harnessing Adversarial Examples \u201darXiv:1412.6572(2014)."},{"key":"e_1_2_10_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2022.102847"},{"key":"e_1_2_10_4_1","first-page":"4536","article-title":"Diversity Can be Transferred: Output Diversification for White\u2010and Black\u2010Box Attacks","volume":"33","author":"Tashiro Y.","year":"2020","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_10_5_1","unstructured":"Y.Liu X.Chen C.Liu andD.Song \u201cDelving into Transferable Adversarial Examples and Black\u2010box Attacks \u201darXiv:1611.02770(2017)."},{"key":"e_1_2_10_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2024.3430860"},{"key":"e_1_2_10_7_1","doi-asserted-by":"crossref","unstructured":"D.Hendrycks K.Zhao S.Basart J.Steinhardt andD.Song \u201cNatural Adversarial Examples \u201d inProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(IEEE 2021) 15262\u201315271.","DOI":"10.1109\/CVPR46437.2021.01501"},{"key":"e_1_2_10_8_1","unstructured":"C.Song K.He L.Wang andJ. E.Hopcroft \u201cImproving the Generalization of Adversarial Training With Domain Adaptation \u201darXiv:1810.00740(2018)."},{"key":"e_1_2_10_9_1","doi-asserted-by":"crossref","unstructured":"J.Deng W.Dong R.Socher L. 
J.Li K.Li andL.Fei\u2010Fei \u201cImagenet: A Large\u2010scale Hierarchical Image Database \u201d in2009 IEEE Conference on Computer Vision and Pattern Recognition(IEEE 2009) 248\u2013255.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_2_10_10_1","unstructured":"X.Li J.Li T.Dai J.Shi J.Zhu andX.Hu \u201cRethinking Natural Adversarial Examples for Classification Models \u201darXiv:2102.11731(2021)."},{"key":"e_1_2_10_11_1","unstructured":"M.TanandQ.Le \u201cEfficientNet: Rethinking Model Scaling for Convolutional Neural Networks \u201d inInternational Conference on Machine Learning(Springer 2019) 6105\u20136114."},{"key":"e_1_2_10_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13042-021-01435-0"},{"key":"e_1_2_10_13_1","unstructured":"Y.Lin J.Zhang Y.Chen andH.Li \u201cSD\u2010NAE: Generating Natural Adversarial Examples with Stable Diffusion \u201darXiv:2311.12981(2023)."},{"key":"e_1_2_10_14_1","doi-asserted-by":"crossref","unstructured":"A.Agarwal N.Ratha M.Vatsa andR.Singh \u201cExploring Robustness Connection Between Artificial and Natural Adversarial Examples \u201d inProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(IEEE 2022) 179\u2013186.","DOI":"10.1109\/CVPRW56347.2022.00030"},{"key":"e_1_2_10_15_1","doi-asserted-by":"publisher","DOI":"10.1167\/jov.23.4.4"},{"key":"e_1_2_10_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2022.102876"},{"key":"e_1_2_10_17_1","unstructured":"C.SchweimerandS.Scher \u201cQuantifying Probabilistic Robustness of Tree\u2010based Classifiers Against Natural Distortions \u201darXiv:2208.10354(2022)."},{"key":"e_1_2_10_18_1","doi-asserted-by":"crossref","unstructured":"Y.Zhong X.Liu D.Zhai J.Jiang andX.Ji \u201cShadows Can Be Dangerous: Stealthy and Effective Physical\u2010World Adversarial Attack by Natural Phenomenon \u201d in2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)(IEEE 2022) 
15345\u201315354.","DOI":"10.1109\/CVPR52688.2022.01491"},{"key":"e_1_2_10_19_1","doi-asserted-by":"crossref","unstructured":"X.Chen X.Gao J.Zhao K.Ye andC. Z.Xu \u201cAdvDiffuser: Natural Adversarial Example Synthesis With Diffusion Models \u201d in (IEEE 2023) 4562\u20134572.","DOI":"10.1109\/ICCV51070.2023.00421"},{"key":"e_1_2_10_20_1","unstructured":"B.Li Z.Lin W.Peng et\u00a0al. \u201cNaturalBench: Evaluating Vision\u2010language Models On Natural Adversarial Samples \u201darXiv:2410.14669(2024)."},{"key":"e_1_2_10_21_1","doi-asserted-by":"crossref","unstructured":"Z.Wang W.Wang Q.Chen Q.Wang andA.Nguyen \u201cGenerating Valid and Natural Adversarial Examples With Large Language Models \u201d in2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD)(IEEE 2024) 1716\u20131721.","DOI":"10.1109\/CSCWD61410.2024.10580402"},{"key":"e_1_2_10_22_1","unstructured":"Y.Wu V.Schlegel andR.Batista\u2010Navarro \u201cPay Attention to Real World Perturbations! Natural Robustness Evaluation in Machine Reading Comprehension \u201darXiv:2502.16523(2025)."},{"key":"e_1_2_10_23_1","doi-asserted-by":"publisher","DOI":"10.32604\/cmc.2024.057866"},{"key":"e_1_2_10_24_1","unstructured":"N.CarliniandD. A.Wagner \u201cTowards Evaluating the Robustness of Neural Networks \u201darXiv:1608.04644(2016): abs\/1608.04644."},{"key":"e_1_2_10_25_1","doi-asserted-by":"crossref","unstructured":"J.Chen M. I.Jordan andM. J.Wainwright \u201cHopSkipJumpAttack: A Query\u2010Efficient Decision\u2010based Attack \u201d in2020 IEEE Symposium on Security and Privacy (SP)(IEEE 2020) 1277\u20131294.","DOI":"10.1109\/SP40000.2020.00045"},{"key":"e_1_2_10_26_1","doi-asserted-by":"crossref","unstructured":"C.Szegedy W.Liu Y.Jia et\u00a0al. 
\u201cGoing Deeper With Convolutions \u201d inProceedings of the IEEE Conference on Computer Vision and Pattern Recognition(IEEE 2015) 1\u20139.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_2_10_27_1","doi-asserted-by":"crossref","unstructured":"F.Chollet \u201cXception: Deep Learning With Depthwise Separable Convolutions \u201d inProceedings of the IEEE Conference on Computer Vision and Pattern Recognition(IEEE 2017) 1251\u20131258.","DOI":"10.1109\/CVPR.2017.195"},{"key":"e_1_2_10_28_1","doi-asserted-by":"crossref","unstructured":"Z.Liu H.Mao C. Y.Wu C.Feichtenhofer T.Darrell andS.Xie \u201cA ConvNet for the 2020s \u201d in2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)(IEEE 2022) 11976\u201311986.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"e_1_2_10_29_1","unstructured":"A.Madry A.Makelov L.Schmidt D.Tsipras andA.Vladu \u201cTowards Deep Learning Models Resistant to Adversarial Attacks \u201darXiv:1706.06083(2017)."},{"key":"e_1_2_10_30_1","unstructured":"W.Brendel J.Rauber andM.Bethge \u201cDecision\u2010based Adversarial Attacks: Reliable Attacks Against Black\u2010box Machine Learning Models \u201darXiv:1712.04248(2017)."},{"key":"e_1_2_10_31_1","unstructured":"M.Nicolae M.Sinn T. N.Minh et\u00a0al. 
\u201cAdversarial Robustness Toolbox v0.2.2 \u201darXiv:1807.01069(2018)."},{"key":"e_1_2_10_32_1","unstructured":"A.Raghunathan J.Steinhardt andP.Liang \u201cCertified Defenses Against Adversarial Examples \u201darXiv:1801.09344(2018)."},{"key":"e_1_2_10_33_1","unstructured":"F.Tram\u00e8r A.Kurakin N.Papernot I.Goodfellow D.Boneh andP.McDaniel \u201cEnsemble Adversarial Training: Attacks and Defenses \u201darXiv:1705.07204(2017)."},{"key":"e_1_2_10_34_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-024-02111-w"},{"key":"e_1_2_10_35_1","unstructured":"A.Kurakin I.Goodfellow andS.Bengio \u201cAdversarial Machine Learning at Scale \u201darXiv:1611.01236(2016)."},{"key":"e_1_2_10_36_1","doi-asserted-by":"crossref","unstructured":"D.Zhou N.Wang C.Peng et\u00a0al. \u201cRemoving Adversarial Noise in Class Activation Feature Space \u201d inProceedings of the IEEE\/CVF International Conference on Computer Vision(IEEE 2021) 7878\u20137887.","DOI":"10.1109\/ICCV48922.2021.00778"},{"key":"e_1_2_10_37_1","unstructured":"R. R.Selvaraju A.Das R.Vedantam M.Cogswell D.Parikh andD.Batra \u201cGrad\u2010CAM: Why Did You Say That? \u201darXiv:1611.07450(2016)."},{"key":"e_1_2_10_38_1","doi-asserted-by":"publisher","DOI":"10.3390\/systems13020088"},{"key":"e_1_2_10_39_1","doi-asserted-by":"crossref","unstructured":"W.Tan J.Renkhoff A.Velasquez et\u00a0al. 
\u201cNoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks \u201d in2023 IEEE International Conference on Fuzzy Systems (FUZZ)(IEEE 2023) 1\u20138.","DOI":"10.1109\/FUZZ52849.2023.10309766"},{"key":"e_1_2_10_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/26.477498"},{"key":"e_1_2_10_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_10_42_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/34.1-2.28"}],"container-title":["IET Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.70123","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/full-xml\/10.1049\/ipr2.70123","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.70123","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T11:25:34Z","timestamp":1775215534000},"score":1,"resource":{"primary":{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/10.1049\/ipr2.70123"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1]]},"references-count":41,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["10.1049\/ipr2.70123"],"URL":"https:\/\/doi.org\/10.1049\/ipr2.70123","archive":["Portico"],"relation":{},"ISSN":["1751-9659","1751-9667"],"issn-type":[{"value":"1751-9659","type":"print"},{"value":"1751-9667","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1]]},"assertion":[{"value":"2024-12-04","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-05-26","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e70123"}}