{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T14:04:55Z","timestamp":1773842695499,"version":"3.50.1"},"reference-count":51,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,4,5]],"date-time":"2025-04-05T00:00:00Z","timestamp":1743811200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,4,5]],"date-time":"2025-04-05T00:00:00Z","timestamp":1743811200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62101481"],"award-info":[{"award-number":["62101481"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62261060, 62101480"],"award-info":[{"award-number":["62261060, 62101480"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100007471","name":"Applied Basic Research Foundation of Yunnan Province","doi-asserted-by":"publisher","award":["202301AW070007, 202201AU070033, 202201AT070112"],"award-info":[{"award-number":["202301AW070007, 202201AU070033, 202201AT070112"]}],"id":[{"id":"10.13039\/100007471","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100007471","name":"Applied Basic Research Foundation of Yunnan Province","doi-asserted-by":"publisher","award":["202301AU070210, 202001BB050076, 202005AC160007"],"award-info":[{"award-number":["202301AU070210, 202001BB050076, 202005AC160007"]}],"id":[{"id":"10.13039\/100007471","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Major Scientific and Technological Project of Yunnan 
Province","award":["202202AD080002"],"award-info":[{"award-number":["202202AD080002"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Image preprocessing models typically serve as the initial step in advanced visual tasks, aiming to enhance the performance of subsequent tasks. For example, multi-focus image fusion technology significantly improves the performance of downstream semantic classification tasks. However, with the advancement of adversarial attack techniques, these models are facing significant challenges. Previous research has only explored the impact of adversarial attacks on the performance of individual models, lacking an in-depth investigation into the robustness of tasks involving the combination of multiple models. This study aims to delve into the robustness issues of tasks that combine multi-focus image fusion and image classification. To address this challenge, we have designed a new adversarial attack generator specifically for scenarios that combine multi-focus image fusion with image classification. This attack method uses a decision map surrogate model and a binary weight map to precisely add adversarial perturbations to the effective information parts of multi-focus images. It also incorporates attention mechanisms and Grad-CAM technology to optimize the perturbation areas, aiming to disrupt the key features of the fused image to improve the transferability of the attack. 
Comprehensive experimental results show that this method significantly improves the efficiency of attacks on downstream classification tasks while maintaining the effectiveness of the fusion model.<\/jats:p>","DOI":"10.1186\/s42400-024-00318-5","type":"journal-article","created":{"date-parts":[[2025,4,5]],"date-time":"2025-04-05T05:55:05Z","timestamp":1743832505000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Transferable adversarial attacks for multi-model systems coupling image fusion with classification models"],"prefix":"10.1186","volume":"8","author":[{"given":"Pengcheng","family":"Zhu","sequence":"first","affiliation":[]},{"given":"Xin","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Qian","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Xueshuai","family":"Gao","sequence":"additional","affiliation":[]},{"given":"Puming","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Shaowen","family":"Yao","sequence":"additional","affiliation":[]},{"given":"Wei","family":"Zhou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,5]]},"reference":[{"key":"318_CR1","doi-asserted-by":"crossref","unstructured":"Bai T, Zhao J, Zhu J, Han S, Chen J, Li B, Kot A (2021) Ai-gan: attack-inspired generation of adversarial examples. In: 2021 IEEE international conference on image processing (ICIP). IEEE, pp 2543\u20132547","DOI":"10.1109\/ICIP42928.2021.9506278"},{"key":"318_CR2","doi-asserted-by":"crossref","unstructured":"Baluja S, Fischer I (2018) Learning to attack: adversarial transformation networks. 
In: Proceedings of the AAAI conference on artificial intelligence, vol\u00a032, no\u00a01","DOI":"10.1609\/aaai.v32i1.11672"},{"key":"318_CR3","unstructured":"Croce F, Andriushchenko M, Sehwag V, Debenedetti E, Flammarion N, Chiang M, Mittal P, Hein M (2020) Robustbench: a standardized adversarial robustness benchmark, arXiv:2010.09670"},{"key":"318_CR4","unstructured":"Ding GW, Wang L, Jin X (2019) Advertorch v0.1: an adversarial robustness toolbox based on pytorch, arXiv:1902.07623"},{"key":"318_CR5","doi-asserted-by":"crossref","unstructured":"Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9185\u20139193","DOI":"10.1109\/CVPR.2018.00957"},{"key":"318_CR6","doi-asserted-by":"crossref","unstructured":"Dong Y, Pang T, Su H, Zhu J (2019) Evading defenses to transferable adversarial examples by translation-invariant attacks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 4312\u20134321","DOI":"10.1109\/CVPR.2019.00444"},{"key":"318_CR7","unstructured":"Dziugaite GK, Ghahramani Z, Roy DM (2016) A study of the effect of jpg compression on adversarial images, arXiv:1608.00853"},{"key":"318_CR8","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples, arXiv:1412.6572"},{"issue":"8","key":"318_CR9","doi-asserted-by":"publisher","first-page":"1982","DOI":"10.1109\/TMM.2019.2895292","volume":"21","author":"X Guo","year":"2019","unstructured":"Guo X, Nie R, Cao J, Zhou D, Mei L, He K (2019) Fusegan: learning to fuse multi-focus image via conditional generative adversarial network. 
IEEE Trans Multimed 21(8):1982\u20131996","journal-title":"IEEE Trans Multimed"},{"issue":"7","key":"318_CR10","doi-asserted-by":"publisher","first-page":"1019","DOI":"10.1631\/FITEE.1900336","volume":"21","author":"R Guo","year":"2020","unstructured":"Guo R, Shen X-J, Dong X-Y, Zhang X-L (2020) Multi-focus image fusion based on fully convolutional networks. Front Inf Technol Electron Eng 21(7):1019\u20131033","journal-title":"Front Inf Technol Electron Eng"},{"key":"318_CR11","unstructured":"Hendrycks D, Mu N, Cubuk ED, Zoph B, Gilmer J, Lakshminarayanan B (2019) Augmix: a simple data processing method to improve robustness and uncertainty, arXiv:1912.02781"},{"key":"318_CR12","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"issue":"3","key":"318_CR13","doi-asserted-by":"publisher","first-page":"1076","DOI":"10.1109\/TIP.2016.2633863","volume":"26","author":"P Hill","year":"2016","unstructured":"Hill P, Al-Mualla ME, Bull D (2016) Perceptual image fusion using wavelets. IEEE Trans Image Process 26(3):1076\u20131088","journal-title":"IEEE Trans Image Process"},{"issue":"1","key":"318_CR14","doi-asserted-by":"publisher","first-page":"216","DOI":"10.1049\/ipr2.12345","volume":"16","author":"Y Hu","year":"2022","unstructured":"Hu Y, Chen Z, Zhang B, Ma L, Li J (2022) A multi-focus image fusion method based on multi-source joint layering and convolutional sparse representation. IET Image Proc 16(1):216\u2013228","journal-title":"IET Image Proc"},{"key":"318_CR15","doi-asserted-by":"publisher","first-page":"127","DOI":"10.1016\/j.inffus.2022.11.014","volume":"92","author":"X Hu","year":"2023","unstructured":"Hu X, Jiang J, Liu X, Ma J (2023) Zmff: zero-shot multi-focus image fusion. 
Inf Fusion 92:127\u2013138","journal-title":"Inf Fusion"},{"key":"318_CR16","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"key":"318_CR17","unstructured":"Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K (2016) Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size, arXiv:1602.07360"},{"key":"318_CR18","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2023.103455","volume":"134","author":"X Jin","year":"2023","unstructured":"Jin X, Wang R, Lee S-J, Jiang Q, Yao S, Zhou W (2023) Adversarial attacks on multi-focus image fusion models. Comput Sec 134:103455","journal-title":"Comput Sec"},{"key":"318_CR19","doi-asserted-by":"crossref","unstructured":"Jin X, Jiang Q, Liu P, Gao X, Wang P, Lee S-J (2022) Generating frequency-limited adversarial examples to attack multi-focus image fusion models. In: IEEE Smartworld, ubiquitous intelligence & computing, scalable computing & communications, digital twin, privacy computing, metaverse, autonomous & trusted vehicles (SmartWorld\/UIC\/ScalCom\/DigitalTwin\/PriComp\/Meta). IEEE, pp 1216\u20131223","DOI":"10.1109\/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00180"},{"key":"318_CR20","doi-asserted-by":"crossref","unstructured":"Kim WJ, Hong S, Yoon S-E (2022) Diverse generative perturbations on attention space for transferable adversarial attacks. In: 2022 IEEE international conference on image processing (ICIP). 
IEEE, pp 281\u2013285","DOI":"10.1109\/ICIP46576.2022.9897346"},{"key":"318_CR22","unstructured":"Kurakin A, Goodfellow I, Bengio S (2016) Adversarial machine learning at scale, arXiv:1611.01236"},{"key":"318_CR21","doi-asserted-by":"crossref","unstructured":"Kurakin A, Goodfellow IJ, Bengio S (2018) Adversarial examples in the physical world. In: Artificial intelligence safety and security. Chapman and Hall\/CRC, pp 99\u2013112","DOI":"10.1201\/9781351251389-8"},{"key":"318_CR23","doi-asserted-by":"publisher","first-page":"4816","DOI":"10.1109\/TIP.2020.2976190","volume":"29","author":"J Li","year":"2020","unstructured":"Li J, Guo X, Lu G, Zhang B, Xu Y, Wu F, Zhang D (2020) Drpl: deep regression pair learning for multi-focus image fusion. IEEE Trans Image Process 29:4816\u20134831","journal-title":"IEEE Trans Image Process"},{"key":"318_CR24","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1016\/j.inffus.2014.05.004","volume":"23","author":"Y Liu","year":"2015","unstructured":"Liu Y, Liu S, Wang Z (2015) Multi-focus image fusion with dense sift. Inf Fusion 23:139\u2013155","journal-title":"Inf Fusion"},{"key":"318_CR25","doi-asserted-by":"publisher","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","volume":"24","author":"Y Liu","year":"2015","unstructured":"Liu Y, Liu S, Wang Z (2015) A general framework for image fusion based on multi-scale transform and sparse representation. Inf Fusion 24:147\u2013164","journal-title":"Inf Fusion"},{"key":"318_CR26","doi-asserted-by":"publisher","first-page":"191","DOI":"10.1016\/j.inffus.2016.12.001","volume":"36","author":"Y Liu","year":"2017","unstructured":"Liu Y, Chen X, Peng H, Wang Z (2017) Multi-focus image fusion with a deep convolutional neural network. 
Inf Fusion 36:191\u2013207","journal-title":"Inf Fusion"},{"key":"318_CR28","doi-asserted-by":"crossref","unstructured":"Liu H, Liu F, Fan X, Huang D (2021) Polarized self-attention: towards high-quality pixel-wise regression, arXiv:2107.00782","DOI":"10.1016\/j.neucom.2022.07.054"},{"key":"318_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.inffus.2022.06.001","volume":"86","author":"Y Liu","year":"2022","unstructured":"Liu Y, Wang L, Li H, Chen X (2022) Multi-focus image fusion with deep residual learning and focus property detection. Inf Fusion 86:1\u201316","journal-title":"Inf Fusion"},{"key":"318_CR29","doi-asserted-by":"crossref","unstructured":"Long Y, Zhang Q, Zeng B, Gao L, Liu X, Zhang J, Song J (2022) Frequency domain model augmentation for adversarial attack. In: European conference on computer vision. Springer, pp 549\u2013566","DOI":"10.1007\/978-3-031-19772-7_32"},{"key":"318_CR30","doi-asserted-by":"publisher","first-page":"5793","DOI":"10.1007\/s00521-020-05358-9","volume":"33","author":"B Ma","year":"2021","unstructured":"Ma B, Zhu Y, Yin X, Ban X, Huang H, Mukeshimana M (2021) Sesf-fuse: an unsupervised deep model for multi-focus image fusion. Neural Comput Appl 33:5793\u20135804","journal-title":"Neural Comput Appl"},{"key":"318_CR31","doi-asserted-by":"publisher","first-page":"204","DOI":"10.1016\/j.neucom.2021.10.115","volume":"470","author":"B Ma","year":"2022","unstructured":"Ma B, Yin X, Wu D, Shen H, Ban X, Wang Y (2022) End-to-end learning for simultaneously generating decision map and multi-focus image fusion result. Neurocomputing 470:204\u2013216","journal-title":"Neurocomputing"},{"key":"318_CR32","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks, arXiv:1706.06083"},{"key":"318_CR33","doi-asserted-by":"crossref","unstructured":"Poursaeed O, Katsman I, Gao B, Belongie S (2018) Generative adversarial perturbations. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4422\u20134431","DOI":"10.1109\/CVPR.2018.00465"},{"key":"318_CR34","doi-asserted-by":"crossref","unstructured":"Qin X, Zhang Z, Huang C, Gao C, Dehghan M, Jagersand M (2019) Basnet: boundary-aware salient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 7479\u20137489","DOI":"10.1109\/CVPR.2019.00766"},{"key":"318_CR35","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"},{"key":"318_CR36","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition, arXiv:1409.1556"},{"key":"318_CR38","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks, arXiv:1312.6199"},{"key":"318_CR37","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"318_CR39","unstructured":"Vinyals O, Blundell C, Lillicrap T, Wierstra D et al. (2016) Matching networks for one shot learning. Adv Neural Inf Process Syst 29"},{"key":"318_CR40","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2021.116295","volume":"96","author":"Y Wang","year":"2021","unstructured":"Wang Y, Xu S, Liu J, Zhao Z, Zhang C, Zhang J (2021) Mfif-gan: a new generative adversarial network for multi-focus image fusion. 
Signal Process Image Commun 96:116295","journal-title":"Signal Process Image Commun"},{"key":"318_CR41","doi-asserted-by":"crossref","unstructured":"Xiao C, Li B, Zhu J-Y, He W, Liu M, Song D (2018) Generating adversarial examples with adversarial networks, arXiv:1801.02610","DOI":"10.24963\/ijcai.2018\/543"},{"key":"318_CR42","doi-asserted-by":"crossref","unstructured":"Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille AL (2019) Improving transferability of adversarial examples with input diversity. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 2730\u20132739","DOI":"10.1109\/CVPR.2019.00284"},{"issue":"1","key":"318_CR43","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","volume":"44","author":"H Xu","year":"2020","unstructured":"Xu H, Ma J, Jiang J, Guo X, Ling H (2020a) U2fusion: a unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell 44(1):502\u2013518","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"318_CR44","doi-asserted-by":"publisher","first-page":"1561","DOI":"10.1109\/TCI.2020.3039564","volume":"6","author":"S Xu","year":"2020","unstructured":"Xu S, Ji L, Wang Z, Li P, Sun K, Zhang C, Zhang J (2020b) Towards reducing severe defocus spread effects for multi-focus image fusion via an optimization based strategy. IEEE Trans Comput Imaging 6:1561\u20131570","journal-title":"IEEE Trans Comput Imaging"},{"issue":"4","key":"318_CR45","doi-asserted-by":"publisher","first-page":"884","DOI":"10.1109\/TIM.2009.2026612","volume":"59","author":"B Yang","year":"2009","unstructured":"Yang B, Li S (2009) Multifocus image fusion and restoration with sparse representation. 
IEEE Trans Instrum Meas 59(4):884\u2013892","journal-title":"IEEE Trans Instrum Meas"},{"issue":"8","key":"318_CR46","doi-asserted-by":"publisher","first-page":"10883","DOI":"10.1007\/s11042-022-12046-4","volume":"81","author":"N Yu","year":"2022","unstructured":"Yu N, Li J, Hua Z (2022) Attention based dual path fusion networks for multi-focus image. Multimed Tools Appl 81(8):10883\u201310906","journal-title":"Multimed Tools Appl"},{"issue":"7","key":"318_CR47","doi-asserted-by":"publisher","first-page":"1334","DOI":"10.1016\/j.sigpro.2009.01.012","volume":"89","author":"Q Zhang","year":"2009","unstructured":"Zhang Q, Guo B-L (2009) Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process 89(7):1334\u20131346","journal-title":"Signal Process"},{"key":"318_CR48","doi-asserted-by":"publisher","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","volume":"54","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Liu Y, Sun P, Yan H, Zhao X, Zhang L (2020) Ifcnn: a general image fusion framework based on convolutional neural network. Inf Fusion 54:99\u2013118","journal-title":"Inf Fusion"},{"key":"318_CR49","doi-asserted-by":"publisher","first-page":"40","DOI":"10.1016\/j.inffus.2020.08.022","volume":"66","author":"H Zhang","year":"2021","unstructured":"Zhang H, Le Z, Shao Z, Xu H, Ma J (2021) Mff-gan: an unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Inf Fusion 66:40\u201353","journal-title":"Inf Fusion"},{"key":"318_CR50","unstructured":"Zhang Q, Li X, Chen Y, Song J, Gao L, He Y, Xue H (2022) Beyond imagenet attack: towards crafting adversarial examples for black-box domains, arXiv:2201.11528"},{"key":"318_CR51","doi-asserted-by":"crossref","unstructured":"Zhou W, Hou X, Chen Y, Tang M, Huang X, Gan X, Yang Y (2018) Transferable adversarial perturbations. 
In: Proceedings of the European conference on computer vision (ECCV), pp 452\u2013467","DOI":"10.1007\/978-3-030-01264-9_28"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-024-00318-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-024-00318-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-024-00318-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,5]],"date-time":"2025-04-05T05:55:33Z","timestamp":1743832533000},"score":1,"resource":{"primary":{"URL":"https:\/\/cybersecurity.springeropen.com\/articles\/10.1186\/s42400-024-00318-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,5]]},"references-count":51,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["318"],"URL":"https:\/\/doi.org\/10.1186\/s42400-024-00318-5","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,5]]},"assertion":[{"value":"18 June 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 August 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 April 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have 
appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"23"}}