{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,15]],"date-time":"2026-04-15T22:47:07Z","timestamp":1776293227809,"version":"3.50.1"},"reference-count":54,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2024,3,10]],"date-time":"2024-03-10T00:00:00Z","timestamp":1710028800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62173040"],"award-info":[{"award-number":["62173040"]}]},{"name":"National Natural Science Foundation of China","award":["62071036"],"award-info":[{"award-number":["62071036"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Infrared\u2013visible image fusion is valuable across various applications due to the complementary information that it provides. However, the current fusion methods face challenges in achieving high-quality fused images. This paper identifies a limitation in the existing fusion framework that affects the fusion quality: modal differences between infrared and visible images are often overlooked, resulting in the poor fusion of the two modalities. This limitation implies that features from different sources may not be consistently fused, which can impact the quality of the fusion results. Therefore, we propose a framework that utilizes feature-based decomposition and domain normalization. This decomposition method separates infrared and visible images into common and unique regions. To reduce modal differences while retaining unique information from the source images, we apply domain normalization to the common regions within the unified feature space. This space can transform infrared features into a pseudo-visible domain, ensuring that all features are fused within the same domain and minimizing the impact of modal differences during the fusion process. Noise in the source images adversely affects the fused images, compromising the overall fusion performance. Thus, we propose the non-local Gaussian filter. This filter can learn the shape and parameters of its filtering kernel based on the image features, effectively removing noise while preserving details. Additionally, we propose a novel dense attention in the feature extraction module, enabling the network to understand and leverage inter-layer information. 
Our experiments demonstrate a marked improvement in fusion quality with our proposed method.<\/jats:p>","DOI":"10.3390\/rs16060969","type":"journal-article","created":{"date-parts":[[2024,3,11]],"date-time":"2024-03-11T08:56:41Z","timestamp":1710147401000},"page":"969","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Infrared\u2013Visible Image Fusion through Feature-Based Decomposition and Domain Normalization"],"prefix":"10.3390","volume":"16","author":[{"given":"Weiyi","family":"Chen","sequence":"first","affiliation":[{"name":"School of Automation, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1782-4535","authenticated-orcid":false,"given":"Lingjuan","family":"Miao","sequence":"additional","affiliation":[{"name":"School of Automation, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5633-7927","authenticated-orcid":false,"given":"Yuhao","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Automation, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6871-8236","authenticated-orcid":false,"given":"Zhiqiang","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Automation, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-8120-9146","authenticated-orcid":false,"given":"Yajun","family":"Qiao","sequence":"additional","affiliation":[{"name":"School of Automation, Beijing Institute of Technology, Beijing 100081, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,3,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","article-title":"Infrared and visible image fusion methods and applications: A survey","volume":"45","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_2","first-page":"5001715","article-title":"Infrared and visible image fusion using visual saliency sparse representation and detail injection model","volume":"70","author":"Yang","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"880","DOI":"10.1109\/TIP.2018.2872630","article-title":"Spectral total-variation local scale signatures for image manipulation and fusion","volume":"28","author":"Hait","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"3367","DOI":"10.1109\/TIM.2018.2877285","article-title":"Image fusion using adjustable non-subsampled shearlet transform","volume":"68","author":"Vishwakarma","year":"2018","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"15","DOI":"10.1016\/j.inffus.2015.11.003","article-title":"Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters","volume":"30","author":"Zhou","year":"2016","journal-title":"Inf. Fusion"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"6480","DOI":"10.1364\/AO.55.006480","article-title":"Fusion of infrared and visible images for night-vision context enhancement","volume":"55","author":"Zhou","year":"2016","journal-title":"Appl. 
Opt."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"MDLatLRR: A novel decomposition method for infrared and visible image fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1109\/JSEN.2015.2478655","article-title":"Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform","volume":"16","author":"Bavirisetti","year":"2015","journal-title":"IEEE Sens. J."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"743","DOI":"10.1109\/JSEN.2007.894926","article-title":"Region-based multimodal image fusion using ICA bases","volume":"7","author":"Cvejic","year":"2007","journal-title":"IEEE Sens. J."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"624","DOI":"10.1109\/TMM.2009.2017640","article-title":"Segmentation-driven image fusion based on alpha-stable modeling of wavelet coefficients","volume":"11","author":"Wan","year":"2009","journal-title":"IEEE Trans. Multimed."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1016\/j.neucom.2012.12.015","article-title":"Fast saliency-aware multi-modality image fusion","volume":"111","author":"Han","year":"2013","journal-title":"Neurocomputing"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Ellmauthaler, A., da Silva, E.A., Pagliari, C.L., and Neves, S.R. (October, January 30). Infrared-visible image fusion using the undecimated wavelet transform with spectral factorization and target extraction. Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA.","DOI":"10.1109\/ICIP.2012.6467446"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1850018","DOI":"10.1142\/S0219691318500182","article-title":"Infrared and visible image fusion with convolutional neural networks","volume":"16","author":"Liu","year":"2018","journal-title":"Int. J. Wavelets Multiresolution Inf. Process."},{"key":"ref_14","first-page":"1","article-title":"A multilevel hybrid transmission network for infrared and visible image fusion","volume":"71","author":"Li","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A unified unsupervised image fusion network","volume":"44","author":"Xu","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_16","first-page":"5002215","article-title":"SEDRFuse: A symmetric encoder\u2013decoder with residual block network for infrared and visible image fusion","volume":"70","author":"Jian","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/JAS.2022.105686","article-title":"SwinFusion: Cross-domain long-range learning for general image fusion via swin transformer","volume":"9","author":"Ma","year":"2022","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","article-title":"PIAFusion: A progressive infrared and visible image fusion network based on illumination aware","volume":"83","author":"Tang","year":"2022","journal-title":"Inf. 
Fusion"},{"key":"ref_19","first-page":"565","article-title":"Multisensor image fusion based on generative adversarial networks","volume":"Volume 11155","author":"Lebedev","year":"2019","journal-title":"Proceedings of the Image and Signal Processing for Remote Sensing XXV"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"182185","DOI":"10.1109\/ACCESS.2019.2959034","article-title":"Infrared and visible image fusion using detail enhanced channel attention network","volume":"7","author":"Cui","year":"2019","journal-title":"IEEE Access"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"34685","DOI":"10.1007\/s11042-020-09301-x","article-title":"Unsupervised densely attention network for infrared and visible image fusion","volume":"79","author":"Li","year":"2020","journal-title":"Multimed. Tools Appl."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"640","DOI":"10.1109\/TCI.2020.2965304","article-title":"VIF-Net: An unsupervised framework for infrared and visible image fusion","volume":"6","author":"Hou","year":"2020","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"50","DOI":"10.1016\/j.neucom.2021.05.034","article-title":"Two-stream network for infrared and visible images fusion","volume":"460","author":"Liu","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"79754","DOI":"10.1109\/ACCESS.2020.2990539","article-title":"Fusion of infrared-visible images in UE-IoT for fault point detection based on GAN","volume":"8","author":"Liao","year":"2020","journal-title":"IEEE Access"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","article-title":"DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion","volume":"29","author":"Ma","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"183","DOI":"10.1016\/j.neucom.2022.02.025","article-title":"Triple-discriminator generative adversarial network for infrared and visible image fusion","volume":"483","author":"Song","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, H., Xiao, Y., Guo, X., and Ma, J. (2020, January 7\u201312). Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6975"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"477","DOI":"10.1016\/j.infrared.2014.09.019","article-title":"Fusion method for infrared and visible images by using non-negative sparse representation","volume":"67","author":"Wang","year":"2014","journal-title":"Infrared Phys. 
Technol."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"104242","DOI":"10.1016\/j.infrared.2022.104242","article-title":"Multi-scale unsupervised network for infrared and visible image fusion based on joint attention mechanism","volume":"125","author":"Xu","year":"2022","journal-title":"Infrared Phys. Technol."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","article-title":"AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks","volume":"23","author":"Li","year":"2020","journal-title":"IEEE Trans. Multimed."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"6384831","DOI":"10.1155\/2020\/6384831","article-title":"Flgc-fusion gan: An enhanced fusion gan model by importing fully learnable group convolution","volume":"2020","author":"Yuan","year":"2020","journal-title":"Math. Probl. Eng."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., and Catanzaro, B. (2018, January 18\u201323). High-resolution image synthesis and semantic manipulation with conditional gans. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00917"},{"key":"ref_36","unstructured":"Kim, T., Cha, M., Kim, H., Lee, J.K., and Kim, J. (2017, January 6\u201311). Learning to discover cross-domain relations with generative adversarial networks. Proceedings of the International Conference on Machine Learning, PMLR 2017, Sydney, Australia."},{"key":"ref_37","unstructured":"Liu, M.Y., Breuel, T., and Kautz, J. (2017, January 4\u20139). Unsupervised image-to-image translation networks. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_39","unstructured":"Zhu, J.Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., and Shechtman, E. (2017, January 4\u20139). Toward multimodal image-to-image translation. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., and Krishnan, D. (2017, January 21\u201326). Unsupervised pixel-level domain adaptation with generative adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.18"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Huang, X., and Belongie, S. (2017, January 22\u201329). Arbitrary style transfer in real-time with adaptive instance normalization. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.167"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Jing, Y., Liu, X., Ding, Y., Wang, X., Ding, E., Song, M., and Wen, S. (2020, January 7\u201312). Dynamic instance normalization for arbitrary style transfer. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i04.5862"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Liu, J., Fan, X., Huang, Z., Wu, G., Liu, R., Zhong, W., and Luo, Z. (2022, January 18\u201324). Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00571"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"249","DOI":"10.1016\/j.dib.2017.09.038","article-title":"The TNO multiband image data collection","volume":"15","author":"Toet","year":"2017","journal-title":"Data Brief"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zhang, P., Zhao, J., Wang, D., Lu, H., and Ruan, X. (2022, January 18\u201324). Visible-thermal UAV tracking: A large-scale benchmark and new baseline. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00868"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"174","DOI":"10.1016\/j.inffus.2022.12.022","article-title":"A perceptual framework for infrared\u2013visible image fusion based on multiscale structure decomposition and biological vision","volume":"93","author":"Zhou","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1109\/TCSVT.2021.3056725","article-title":"Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion","volume":"32","author":"Liu","year":"2021","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_49","unstructured":"Di, W., Jinyuan, L., Xin, F., and Liu, R. (2022, January 23\u201329). Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Vienna, Austria."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-Nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"199","DOI":"10.1016\/j.optcom.2014.12.032","article-title":"Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition","volume":"341","author":"Cui","year":"2015","journal-title":"Opt. Commun."},{"key":"ref_52","first-page":"484","article-title":"Image fusion and image quality assessment of fused images","volume":"4","author":"Deshmukh","year":"2010","journal-title":"Int. J. Image Process. 
(IJIP)"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"023522","DOI":"10.1117\/1.2945910","article-title":"Assessment of image fusion procedures using entropy, image quality, and multispectral classification","volume":"2","author":"Roberts","year":"2008","journal-title":"J. Appl. Remote Sens."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"3345","DOI":"10.1109\/TIP.2015.2442920","article-title":"Perceptual quality assessment for multi-exposure image fusion","volume":"24","author":"Ma","year":"2015","journal-title":"IEEE Trans. Image Process."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/6\/969\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:11:33Z","timestamp":1760105493000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/6\/969"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,10]]},"references-count":54,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2024,3]]}},"alternative-id":["rs16060969"],"URL":"https:\/\/doi.org\/10.3390\/rs16060969","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,10]]}}}