{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T22:22:02Z","timestamp":1757542922989,"version":"3.37.3"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,8,3]],"date-time":"2023-08-03T00:00:00Z","timestamp":1691020800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,3]],"date-time":"2023-08-03T00:00:00Z","timestamp":1691020800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["2020YFC0833406"],"award-info":[{"award-number":["2020YFC0833406"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62102112"],"award-info":[{"award-number":["62102112"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"Basic Research Plan of Guizhou Province","doi-asserted-by":"publisher","award":["Qiankehejichu-ZK[2021]Yiban310"],"award-info":[{"award-number":["Qiankehejichu-ZK[2021]Yiban310"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"Guizhou Provincial Science and Technology Projects","doi-asserted-by":"publisher","award":["QKHJCZK2022YB19"],"award-info":[{"award-number":["QKHJCZK2022YB19"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Guizhou Provincial Science and Technology Projects","award":["QKHJCZK2023YB143"],"award-info":[{"award-number":["QKHJCZK2023YB143"]}]},{"name":"Youth Science and Technology Talents Cultivating Object of Guizhou Province","award":["QJHKY2021104"],"award-info":[{"award-number":["QJHKY2021104"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Infrared and visible image fusion aims to generate synthetic images including salient targets and abundant texture details. However, traditional techniques and recent deep learning-based approaches have faced challenges in preserving prominent structures and fine-grained features. In this study, we propose a lightweight infrared and visible image fusion network utilizing multi-scale attention modules and hybrid dilated convolutional blocks to preserve significant structural features and fine-grained textural details. First, we design a hybrid dilated convolutional block with different dilation rates that enable the extraction of prominent structure features by enlarging the receptive field in the fusion network. Compared with other deep learning methods, our method can obtain more high-level semantic information without piling up a large number of convolutional blocks, effectively improving the ability of feature representation. Second, distinct attention modules are designed to integrate into different layers of the network to fully exploit contextual information of the source images, and we leverage the total loss to guide the fusion process to focus on vital regions and compensate for missing information. Extensive qualitative and quantitative experiments demonstrate the superiority of our proposed method over state-of-the-art methods in both visual effects and evaluation metrics. The experimental results on public datasets show that our method can improve the entropy (EN) by 4.80%, standard deviation (SD) by 3.97%, correlation coefficient (CC) by 1.86%, correlations of differences (SCD) by 9.98%, and multi-scale structural similarity (MS_SSIM) by 5.64%, respectively. In addition, experiments with the VIFB dataset further indicate that our approach outperforms other comparable models.<\/jats:p>","DOI":"10.1007\/s40747-023-01185-2","type":"journal-article","created":{"date-parts":[[2023,8,3]],"date-time":"2023-08-03T01:01:41Z","timestamp":1691024501000},"page":"705-719","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Multi-scale attention-based lightweight network with dilated convolutions for infrared and visible image fusion"],"prefix":"10.1007","volume":"10","author":[{"given":"Fuquan","family":"Li","sequence":"first","affiliation":[]},{"given":"Yonghui","family":"Zhou","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4452-5725","authenticated-orcid":false,"given":"YanLi","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Jie","family":"Li","sequence":"additional","affiliation":[]},{"given":"ZhiCheng","family":"Dong","sequence":"additional","affiliation":[]},{"given":"Mian","family":"Tan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,3]]},"reference":[{"key":"1185_CR1","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1016\/j.inffus.2021.06.008","volume":"76","author":"H Zhang","year":"2021","unstructured":"Zhang H, Xu H, Tian X, Jiang J, Ma J (2021) Image fusion meets deep learning: a survey and perspective. Inform Fus 76:323\u2013336. https:\/\/doi.org\/10.1016\/j.inffus.2021.06.008","journal-title":"Inform Fus"},{"issue":"5","key":"1185_CR2","doi-asserted-by":"publisher","first-page":"1804","DOI":"10.1109\/TCSVT.2020.3014663","volume":"31","author":"Q Zhang","year":"2021","unstructured":"Zhang Q, Xiao T, Huang N, Zhang D, Han J (2021) Revisiting feature fusion for RGB-T salient object detection. IEEE Trans Circ Syst Video Technol 31(5):1804\u20131818. https:\/\/doi.org\/10.1109\/TCSVT.2020.3014663","journal-title":"IEEE Trans Circ Syst Video Technol"},{"issue":"4","key":"1185_CR3","doi-asserted-by":"publisher","first-page":"6497","DOI":"10.1109\/LRA.2021.3093652","volume":"6","author":"Y-H Kim","year":"2021","unstructured":"Kim Y-H, Shin U, Park J, Kweon IS (2021) Ms-uda: Multi-spectral unsupervised domain adaptation for thermal image semantic segmentation. IEEE Robot Automation Lett 6(4):6497\u20136504. https:\/\/doi.org\/10.1109\/LRA.2021.3093652","journal-title":"IEEE Robot Automation Lett"},{"key":"1185_CR4","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.119307","volume":"215","author":"X Zeng","year":"2023","unstructured":"Zeng X, Long J, Tian S, Xiao G (2023) Random area pixel variation and random area transform for visible-infrared cross-modal pedestrian re-identification. Expert Syst Appl 215:119307. https:\/\/doi.org\/10.1016\/j.eswa.2022.119307","journal-title":"Expert Syst Appl"},{"issue":"4","key":"1185_CR5","doi-asserted-by":"publisher","first-page":"4118","DOI":"10.1109\/JSEN.2023.3234091","volume":"23","author":"W Liu","year":"2023","unstructured":"Liu W, Liu W, Sun Y (2023) Visible-infrared dual-sensor fusion for single-object tracking. IEEE Sens J 23(4):4118\u20134128. https:\/\/doi.org\/10.1109\/JSEN.2023.3234091","journal-title":"IEEE Sens J"},{"doi-asserted-by":"publisher","unstructured":"Zhou, Z., Wang, B., Li, S., Dong, M.: Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters. Inform Fus 15\u201326 (2016) https:\/\/doi.org\/10.1016\/j.inffus.2015.11.003","key":"1185_CR6","DOI":"10.1016\/j.inffus.2015.11.003"},{"doi-asserted-by":"publisher","unstructured":"Yan L, Hao Q, Cao J, Saad R, Li K, Yan Z, Wu Z: Infrared and visible image fusion via octave gaussian pyramid framework. Sci Rep 11(1) (2021) https:\/\/doi.org\/10.1038\/s41598-020-80189-1","key":"1185_CR7","DOI":"10.1038\/s41598-020-80189-1"},{"issue":"12","key":"1185_CR8","doi-asserted-by":"publisher","first-page":"9645","DOI":"10.1109\/TIM.2020.3005230","volume":"69","author":"H Li","year":"2020","unstructured":"Li H, Wu X-J, Durrani T (2020) NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial\/channel attention models. IEEE Trans Instrument Measure 69(12):9645\u20139656. https:\/\/doi.org\/10.1109\/TIM.2020.3005230. arXiv:2007.00328 [cs]","journal-title":"IEEE Trans Instrument Measure"},{"doi-asserted-by":"publisher","unstructured":"Li H, Wu X-J, Kittler J (2021) Rfn-nest: An end-to-end residual fusion network for infrared and visible images. Inform Fus 72\u201386https:\/\/doi.org\/10.1016\/j.inffus.2021.02.023","key":"1185_CR9","DOI":"10.1016\/j.inffus.2021.02.023"},{"doi-asserted-by":"publisher","unstructured":"Ma J, Tang L, Xu M, Zhang H, Xiao G (2021) Stdfusionnet an infrared and visible image fusion network based on salient target detection.pdf. IEEE Trans Instrumentation Measurement 1\u201313 https:\/\/doi.org\/10.1109\/tim.2021.3075747","key":"1185_CR10","DOI":"10.1109\/tim.2021.3075747"},{"doi-asserted-by":"publisher","unstructured":"Wang Z, Shao W, Chen Y, Xu J, Zhang L (2023) A cross-scale iterative attentional adversarial fusion network for infrared and visible images. IEEE Trans Circ Syst Video Technol 1\u20131https:\/\/doi.org\/10.1109\/TCSVT.2023.3239627","key":"1185_CR11","DOI":"10.1109\/TCSVT.2023.3239627"},{"issue":"6","key":"1185_CR12","doi-asserted-by":"publisher","first-page":"4753","DOI":"10.1007\/s40747-022-00722-9","volume":"8","author":"J Li","year":"2022","unstructured":"Li J, Li B, Jiang Y, Cai W (2022) MSAt-GAN: a generative adversarial network based on multi-scale and deep attention mechanism for infrared and visible light image fusion. Complex Intell Syst 8(6):4753\u20134781. https:\/\/doi.org\/10.1007\/s40747-022-00722-9","journal-title":"Complex Intell Syst"},{"doi-asserted-by":"publisher","unstructured":"Chen J, Li X, Luo L, Mei X, Ma J (2020) Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Inform Sci 64\u201378. https:\/\/doi.org\/10.1016\/j.ins.2019.08.066","key":"1185_CR13","DOI":"10.1016\/j.ins.2019.08.066"},{"doi-asserted-by":"publisher","unstructured":"Liu Y, Liu S, Wang Z (2015) A general framework for image fusion based on multi-scale transform and sparse representation. Inform Fus 147\u2013164. https:\/\/doi.org\/10.1016\/j.inffus.2014.09.004","key":"1185_CR14","DOI":"10.1016\/j.inffus.2014.09.004"},{"doi-asserted-by":"publisher","unstructured":"Li H, Wu X-J (2017) Multi-focus image fusion using dictionary learning and low-rank representation, pp 675\u2013686. https:\/\/doi.org\/10.1007\/978-3-319-71607-7_59","key":"1185_CR15","DOI":"10.1007\/978-3-319-71607-7_59"},{"key":"1185_CR16","doi-asserted-by":"publisher","DOI":"10.1016\/j.infrared.2022.104178","volume":"123","author":"Z Guo","year":"2022","unstructured":"Guo Z, Yu X, Du Q (2022) Infrared and visible image fusion based on saliency and fast guided filtering. Infrared Phys Technol 123:104178. https:\/\/doi.org\/10.1016\/j.infrared.2022.104178","journal-title":"Infrared Phys Technol"},{"doi-asserted-by":"publisher","unstructured":"Vargas H, Ram\u00edrez J, Pinilla S, Mart\u00ednez-Torre JI (2022) Multi-sensor image feature fusion via subspace-based approach using $$\\ell _{1}$$-gradient regularization. IEEE J Selected Topics Signal Process 1\u201313. https:\/\/doi.org\/10.1109\/JSTSP.2022.3219357","key":"1185_CR17","DOI":"10.1109\/JSTSP.2022.3219357"},{"key":"1185_CR18","doi-asserted-by":"publisher","first-page":"100","DOI":"10.1016\/j.inffus.2016.02.001","volume":"31","author":"J Ma","year":"2016","unstructured":"Ma J, Chen C, Li C, Huang J (2016) Infrared and visible image fusion via gradient transfer and total variation minimization. Inform Fus 31:100\u2013109. https:\/\/doi.org\/10.1016\/j.inffus.2016.02.001","journal-title":"Inform Fus"},{"key":"1185_CR19","doi-asserted-by":"publisher","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","volume":"54","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Liu Y, Sun P, Yan H, Zhao X, Zhang L (2020) IFCNN: a general image fusion framework based on convolutional neural network. Inform Fus 54:99\u2013118. https:\/\/doi.org\/10.1016\/j.inffus.2019.07.011","journal-title":"Inform Fus"},{"doi-asserted-by":"publisher","unstructured":"Ma J, Xu H, Jiang J, Mei X, Zhang X-P (2020) Ddcgan: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Image Process 4980\u20134995. https:\/\/doi.org\/10.1109\/tip.2020.2977573","key":"1185_CR20","DOI":"10.1109\/tip.2020.2977573"},{"unstructured":"Zhang R (2019) Making convolutional networks shift-invariant again. In: ICML","key":"1185_CR21"},{"doi-asserted-by":"publisher","unstructured":"Toet A (2022) TNO image fusion dataset. https:\/\/doi.org\/10.6084\/m9.figshare.1008029.v2","key":"1185_CR22","DOI":"10.6084\/m9.figshare.1008029.v2"},{"issue":"1","key":"1185_CR23","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","volume":"44","author":"H Xu","year":"2022","unstructured":"Xu H, Ma J, Jiang J, Guo X, Ling H (2022) U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans Pattern Anal Mach Intell 44(1):502\u2013518. https:\/\/doi.org\/10.1109\/TPAMI.2020.3012548","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1185_CR24","doi-asserted-by":"publisher","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","volume":"48","author":"J Ma","year":"2019","unstructured":"Ma J, Yu W, Liang P, Li C, Jiang J (2019) FusionGAN: a generative adversarial network for infrared and visible image fusion. Inform Fus 48:11\u201326. https:\/\/doi.org\/10.1016\/j.inffus.2018.09.004","journal-title":"Inform Fus"},{"doi-asserted-by":"publisher","unstructured":"Liu X, Mei W, Du H (2017) Structure tensor and nonsubsampled shearlet transform based algorithm for ct and mri image fusion. Neurocomputing 131\u2013139. https:\/\/doi.org\/10.1016\/j.neucom.2017.01.006","key":"1185_CR25","DOI":"10.1016\/j.neucom.2017.01.006"},{"key":"1185_CR26","doi-asserted-by":"publisher","first-page":"11","DOI":"10.1016\/j.infrared.2015.11.003","volume":"74","author":"Q Zhang","year":"2016","unstructured":"Zhang Q, Maldague X (2016) An adaptive fusion approach for infrared and visible images based on nsct and compressed sensing. Infrared Phys Technol 74:11\u201320. https:\/\/doi.org\/10.1016\/j.infrared.2015.11.003","journal-title":"Infrared Phys Technol"},{"doi-asserted-by":"publisher","unstructured":"Li H, Wu X-J, Kittler J (2020) Mdlatlrr: A novel decomposition method for infrared and visible image fusion. IEEE Trans Image Process 4733\u20134746. https:\/\/doi.org\/10.1109\/tip.2020.2975984","key":"1185_CR27","DOI":"10.1109\/tip.2020.2975984"},{"doi-asserted-by":"publisher","unstructured":"Huang Y, Yao K (2020) Multi-exposure image fusion method based on independent component analysis. In: Proceedings of the 2020 international conference on pattern recognition and intelligent systems. PRIS 2020. Association for Computing Machinery, New York, NY, USA. https:\/\/doi.org\/10.1145\/3415048.3416099","key":"1185_CR28","DOI":"10.1145\/3415048.3416099"},{"key":"1185_CR29","doi-asserted-by":"publisher","first-page":"114","DOI":"10.1016\/j.infrared.2016.05.012","volume":"77","author":"Z Fu","year":"2016","unstructured":"Fu Z, Wang X, Xu J, Zhou N, Zhao Y (2016) Infrared and visible images fusion based on rpca and nsct. Infrared Phys Technol 77:114\u2013123. https:\/\/doi.org\/10.1016\/j.infrared.2016.05.012","journal-title":"Infrared Phys Technol"},{"doi-asserted-by":"publisher","unstructured":"Ahmad T, Lyngdoh RB, Anand SS, Gupta PK, Misra A, Raha S (2021) Robust coupled non-negative matrix factorization for hyperspectral and multispectral data fusion. In: 2021 IEEE international geoscience and remote sensing symposium IGARSS, pp 2456\u20132459. https:\/\/doi.org\/10.1109\/IGARSS47720.2021.9553681","key":"1185_CR30","DOI":"10.1109\/IGARSS47720.2021.9553681"},{"key":"1185_CR31","doi-asserted-by":"publisher","first-page":"191","DOI":"10.1016\/j.inffus.2016.12.001","volume":"36","author":"Y Liu","year":"2017","unstructured":"Liu Y, Chen X, Peng H, Wang Z (2017) Multi-focus image fusion with a deep convolutional neural network. Inform Fus 36:191\u2013207. https:\/\/doi.org\/10.1016\/j.inffus.2016.12.001","journal-title":"Inform Fus"},{"key":"1185_CR32","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TIM.2021.3075747","volume":"70","author":"J Ma","year":"2021","unstructured":"Ma J, Tang L, Xu M, Zhang H, Xiao G (2021) STDFusionNet: an infrared and visible image fusion wetwork based on salient target detection. IEEE Trans Instrument Measure 70:1\u201313. https:\/\/doi.org\/10.1109\/TIM.2021.3075747","journal-title":"IEEE Trans Instrument Measure"},{"key":"1185_CR33","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","volume":"83\u201384","author":"L Tang","year":"2022","unstructured":"Tang L, Yuan J, Zhang H, Jiang X, Ma J (2022) PIAFusion: a progressive infrared and visible image fusion network based on illumination aware. Inform Fus 83\u201384:79\u201392. https:\/\/doi.org\/10.1016\/j.inffus.2022.03.007","journal-title":"Inform Fus"},{"doi-asserted-by":"publisher","unstructured":"Tang L, Yuan J, Ma J (2022) Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Inform Fus 28\u201342. https:\/\/doi.org\/10.1016\/j.inffus.2021.12.004","key":"1185_CR34","DOI":"10.1016\/j.inffus.2021.12.004"},{"doi-asserted-by":"publisher","unstructured":"Li J, Huo H, Li C, Wang R, Sui C, Liu Z (2021) Multigrained attention network for infrared and visible image fusion. IEEE Trans Instrument Measure 1\u201312. https:\/\/doi.org\/10.1109\/tim.2020.3029360","key":"1185_CR35","DOI":"10.1109\/tim.2020.3029360"},{"doi-asserted-by":"publisher","unstructured":"Yu F, Koltun V, Funkhouser T (2017) Dilated residual networks. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). https:\/\/doi.org\/10.1109\/cvpr.2017.75","key":"1185_CR36","DOI":"10.1109\/cvpr.2017.75"},{"doi-asserted-by":"publisher","unstructured":"Guo M-H, Xu T-X, Liu J-J, Liu Z-N, Jiang P-T, Mu T-J, Zhang S-H, Martin RR, Cheng M-M, Hu S-M (2022) Attention mechanisms in computer vision: a survey. Comput Vis Media 331\u2013368. https:\/\/doi.org\/10.1007\/s41095-022-0271-y","key":"1185_CR37","DOI":"10.1007\/s41095-022-0271-y"},{"doi-asserted-by":"publisher","unstructured":"Hu J, Shen L, Albanie S, Sun G, Wu E (2020) Squeeze-and-excitation networks. IEEE transactions on pattern analysis and machine intelligence 2011\u20132023. https:\/\/doi.org\/10.1109\/tpami.2019.2913372","key":"1185_CR38","DOI":"10.1109\/tpami.2019.2913372"},{"doi-asserted-by":"publisher","unstructured":"Qin Z, Zhang P, Wu F, Li X (2021) Fcanet: Frequency channel attention networks. In: 2021 IEEE\/CVF International conference on computer vision (ICCV). https:\/\/doi.org\/10.1109\/iccv48922.2021.00082","key":"1185_CR39","DOI":"10.1109\/iccv48922.2021.00082"},{"unstructured":"Hu J, Shen L, Albanie S, Sun G, Vedaldi A (2018) Gather-excite: exploiting feature context in convolutional neural networks. In: Advances in Neural Information Processing Systems (NeurIPS)","key":"1185_CR40"},{"doi-asserted-by":"publisher","unstructured":"Wang X, Girshick R, Gupta A, He K (2018) Non-local neural networks. In: 2018 IEEE\/CVF conference on computer vision and pattern recognition. https:\/\/doi.org\/10.1109\/cvpr.2018.00813","key":"1185_CR41","DOI":"10.1109\/cvpr.2018.00813"},{"issue":"7","key":"1185_CR42","doi-asserted-by":"publisher","first-page":"1200","DOI":"10.1109\/JAS.2022.105686","volume":"9","author":"J Ma","year":"2022","unstructured":"Ma J, Tang L, Fan F, Huang J, Mei X, Ma Y (2022) Swinfusion: cross-domain long-range learning for general image fusion via swin transformer. IEEE\/CAA J Automatica Sinica 9(7):1200\u20131217. https:\/\/doi.org\/10.1109\/JAS.2022.105686","journal-title":"IEEE\/CAA J Automatica Sinica"},{"doi-asserted-by":"publisher","unstructured":"Prabhakar KR, Srikar VS, Babu RV (2017) Deepfuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs. In: 2017 IEEE international conference on computer vision (ICCV). https:\/\/doi.org\/10.1109\/iccv.2017.505","key":"1185_CR43","DOI":"10.1109\/iccv.2017.505"},{"doi-asserted-by":"publisher","unstructured":"Zhang X, Ye P, Xiao G (2020) Vifb: A visible and infrared image fusion benchmark. In: 2020 IEEE\/CVF conference on computer vision and pattern recognition workshops (CVPRW). https:\/\/doi.org\/10.1109\/cvprw50498.2020.00060","key":"1185_CR44","DOI":"10.1109\/cvprw50498.2020.00060"},{"key":"1185_CR45","doi-asserted-by":"publisher","first-page":"227","DOI":"10.1016\/j.infrared.2017.05.007","volume":"83","author":"Y Zhang","year":"2017","unstructured":"Zhang Y, Zhang L, Bai X, Zhang L (2017) Infrared and visual image fusion through infrared feature extraction and visual information preservation. Infrared Phys Technol 83:227\u2013237. https:\/\/doi.org\/10.1016\/j.infrared.2017.05.007","journal-title":"Infrared Phys Technol"},{"issue":"5","key":"1185_CR46","doi-asserted-by":"publisher","first-page":"1193","DOI":"10.1007\/s11760-013-0556-9","volume":"9","author":"BK Shreyamsha Kumar","year":"2015","unstructured":"Shreyamsha Kumar BK (2015) Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process 9(5):1193\u20131204. https:\/\/doi.org\/10.1007\/s11760-013-0556-9","journal-title":"Signal Image Video Process"},{"doi-asserted-by":"publisher","unstructured":"Liu J, Fan X, Jiang J, Liu R, Luo Z (2022) Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion. IEEE Trans Circ Syst Video Technol 32(1):105\u2013119. https:\/\/doi.org\/10.1109\/TCSVT.2021.3056725","key":"1185_CR47","DOI":"10.1109\/TCSVT.2021.3056725"},{"issue":"1","key":"1185_CR48","doi-asserted-by":"publisher","DOI":"10.1117\/1.2945910","volume":"2","author":"J Van Aardt","year":"2008","unstructured":"Van Aardt J (2008) Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J Appl Remote Sens 2(1):023522. https:\/\/doi.org\/10.1117\/1.2945910","journal-title":"J Appl Remote Sens"},{"doi-asserted-by":"publisher","unstructured":"Shreyamsha Kumar BK (2013) Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process 7(6):1125\u20131143. https:\/\/doi.org\/10.1007\/s11760-012-0361-x","key":"1185_CR49","DOI":"10.1007\/s11760-012-0361-x"},{"issue":"12","key":"1185_CR50","doi-asserted-by":"publisher","first-page":"1890","DOI":"10.1016\/j.aeue.2015.09.004","volume":"69","author":"V Aslantas","year":"2015","unstructured":"Aslantas V, Bendes E (2015) A new image quality metric for image fusion: the sum of the correlations of differences. AEU\u2014Int J Electron Commun 69(12):1890\u20131896. https:\/\/doi.org\/10.1016\/j.aeue.2015.09.004","journal-title":"AEU\u2014Int J Electron Commun"},{"issue":"12","key":"1185_CR51","doi-asserted-by":"publisher","first-page":"2959","DOI":"10.1109\/26.477498","volume":"43","author":"AM Eskicioglu","year":"1995","unstructured":"Eskicioglu AM, Fisher PS (1995) Image quality measures and their performance. IEEE Trans Commun 43(12):2959\u20132965. https:\/\/doi.org\/10.1109\/26.477498","journal-title":"IEEE Trans Commun"},{"issue":"4","key":"1185_CR52","doi-asserted-by":"publisher","first-page":"355","DOI":"10.1088\/0957-0233\/8\/4\/002","volume":"8","author":"Y-J Rao","year":"1997","unstructured":"Rao Y-J (1997) In-fibre bragg grating sensors. Measure Sci Technol 8(4):355\u2013375. https:\/\/doi.org\/10.1088\/0957-0233\/8\/4\/002","journal-title":"Measure Sci Technol"},{"key":"1185_CR53","doi-asserted-by":"publisher","first-page":"133","DOI":"10.1016\/j.aqpro.2015.02.019","volume":"4","author":"P Jagalingam","year":"2015","unstructured":"Jagalingam P, Hegde AV (2015) A review of quality metrics for fused image. Aquatic Proc 4:133\u2013142. https:\/\/doi.org\/10.1016\/j.aqpro.2015.02.019","journal-title":"Aquatic Proc"},{"doi-asserted-by":"publisher","unstructured":"Wang Z, Simoncelli EP, Bovik AC (2003) Multiscale structural similarity for image quality assessment. In: The thrity-seventh asilomar conference on signals, systems & computers pp 1398\u20131402. IEEE, Pacific Grove, CA, USA. https:\/\/doi.org\/10.1109\/ACSSC.2003.1292216","key":"1185_CR54","DOI":"10.1109\/ACSSC.2003.1292216"},{"doi-asserted-by":"publisher","unstructured":"Ha Q, Watanabe K, Karasawa T, Ushiku Y, Harada T (2017) MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. In: 2017 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 5108\u20135115. IEEE, Vancouver, BC. https:\/\/doi.org\/10.1109\/IROS.2017.8206396","key":"1185_CR55","DOI":"10.1109\/IROS.2017.8206396"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01185-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01185-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01185-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,10]],"date-time":"2024-02-10T22:23:39Z","timestamp":1707603819000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01185-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,3]]},"references-count":55,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["1185"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01185-2","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2023,8,3]]},"assertion":[{"value":"26 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 July 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 August 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}