{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T16:06:08Z","timestamp":1774454768220,"version":"3.50.1"},"reference-count":42,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2023,6,27]],"date-time":"2023-06-27T00:00:00Z","timestamp":1687824000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62062048"],"award-info":[{"award-number":["62062048"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62263017"],"award-info":[{"award-number":["62263017"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["202201AT070113"],"award-info":[{"award-number":["202201AT070113"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Yunnan Department of Science and Technology Project","award":["62062048"],"award-info":[{"award-number":["62062048"]}]},{"name":"Yunnan Department of Science and Technology Project","award":["62263017"],"award-info":[{"award-number":["62263017"]}]},{"name":"Yunnan Department of Science and Technology Project","award":["202201AT070113"],"award-info":[{"award-number":["202201AT070113"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>The aim of infrared and visible image fusion is to integrate the complementary information of the two modalities for high-quality fused images. 
However, many deep learning fusion algorithms do not account for the characteristics of infrared images in low-light scenes, so existing methods suffer from weak texture details, low contrast of infrared targets, and poor visual perception. Therefore, in this paper, we propose a salient compensation-based fusion method that makes full use of the characteristics of infrared and visible images to generate high-quality fused images under low-light conditions. First, we design a multi-scale edge gradient module (MEGB) in the texture main branch to adequately extract texture information from the infrared and visible inputs; in parallel, a salient branch built on the salient dense residual module (SRDB) is pre-trained with a saliency loss to produce saliency maps, and the salient features it extracts compensate the main branch during overall network training. We also propose a spatial bias module (SBM) to fuse global information with local information. Finally, extensive comparison experiments with existing methods show that our method has significant advantages in describing target features and global scenes, and ablation experiments demonstrate the effectiveness of the proposed modules. 
In addition, we verify on a semantic segmentation task that the proposed method also facilitates high-level vision tasks.<\/jats:p>","DOI":"10.3390\/e25070985","type":"journal-article","created":{"date-parts":[[2023,6,28]],"date-time":"2023-06-28T00:50:52Z","timestamp":1687913452000},"page":"985","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["SCFusion: Infrared and Visible Fusion Based on Salient Compensation"],"prefix":"10.3390","volume":"25","author":[{"given":"Haipeng","family":"Liu","sequence":"first","affiliation":[{"name":"Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China"}]},{"given":"Meiyan","family":"Ma","sequence":"additional","affiliation":[{"name":"Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China"}]},{"given":"Meng","family":"Wang","sequence":"additional","affiliation":[{"name":"Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China"},{"name":"Yunnan Province Key Laboratory of Computer, Kunming University of Science and Technology, Kunming 650500, China"}]},{"given":"Zhaoyu","family":"Chen","sequence":"additional","affiliation":[{"name":"Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China"}]},{"given":"Yibo","family":"Zhao","sequence":"additional","affiliation":[{"name":"Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,6,27]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1016\/j.inffus.2021.06.008","article-title":"Image fusion meets deep learning: A survey and 
perspective","volume":"76","author":"Zhang","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"655","DOI":"10.1109\/TMM.2021.3057493","article-title":"Multi-focus image fusion based on multi-scale gradients and image matting","volume":"24","author":"Chen","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1410","DOI":"10.1049\/ipr2.12114","article-title":"Fusion-based simultaneous estimation of reflectance and illumination for low-light image enhancement","volume":"15","author":"Parihar","year":"2021","journal-title":"IET Image Process"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"13","DOI":"10.1186\/s13640-018-0251-4","article-title":"Nighttime low illumination image enhancement with single image using bright\/dark channel prior","volume":"2018","author":"Shi","year":"2018","journal-title":"EURASIP J. Image Video Process"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"111","DOI":"10.1016\/j.inffus.2021.02.005","article-title":"Benchmarking and comparing multi-exposure image fusion algorithms","volume":"74","author":"Zhang","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1016\/j.inffus.2022.09.019","article-title":"Current advances and future perspectives of image fusion: A comprehensive review","volume":"90","author":"Karim","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","article-title":"PIAFusion: A progressive infrared and visible image fusion network based on illumination aware","volume":"83","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Zhao, Y., Cheng, J., Zhou, W., and Zhang, C. (2019, January 18\u201321). Infrared pedestrian detection with converted temperature map. 
Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, China.","DOI":"10.1109\/APSIPAASC47483.2019.9023228"},{"key":"ref_9","unstructured":"Zhou, S., Yang, P., and Xie, W. (2011, January 26\u201328). Infrared image segmentation based on Otsu and genetic algorithm. Proceedings of the 2011 International Conference on Multimedia Technology, Hangzhou, China."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1016\/j.inffus.2021.02.008","article-title":"An infrared and visible image fusion method based on multi-scale transformation and norm optimization","volume":"71","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"64","DOI":"10.1016\/j.ins.2019.08.066","article-title":"Infrared and visible image fusion based on target-enhanced multiscale transform decomposition","volume":"508","author":"Chen","year":"2020","journal-title":"Inf. Sci."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"347","DOI":"10.1049\/iet-ipr.2014.0311","article-title":"Simultaneous image fusion and denoising with adaptive sparse representation","volume":"9","author":"Liu","year":"2015","journal-title":"IET Image Proc."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1016\/j.neucom.2014.07.003","article-title":"Sparse representation with learned multiscale dictionary for image fusion","volume":"148","author":"Yin","year":"2015","journal-title":"Neurocomputing"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","article-title":"A general framework for image fusion based on multi-scale transform and sparse representation","volume":"24","author":"Liu","year":"2015","journal-title":"Inf. 
Fusion"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"MDLatLRR: A novel decomposition method for infrared and visible image fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1016\/j.infrared.2017.02.005","article-title":"Infrared and visible image fusion based on visual saliency map and weighted least square optimization","volume":"82","author":"Ma","year":"2017","journal-title":"Infrared Phys. Technol."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"9645","DOI":"10.1109\/TIM.2020.3005230","article-title":"NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial\/channel attention models","volume":"69","author":"Li","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"103407","DOI":"10.1016\/j.cviu.2022.103407","article-title":"CUFD: An encoder\u2013decoder network for visible and infrared image fusion based on common and unique feature decomposition","volume":"218","author":"Xu","year":"2022","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_20","first-page":"1","article-title":"DRF: Disentangled representation for visible and infrared image fusion","volume":"70","author":"Xu","year":"2021","journal-title":"IEEE Trans. Instrum. 
Meas."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"2121","DOI":"10.1109\/JAS.2022.106082","article-title":"SuperFusion: A versatile image registration and fusion network with semantic awareness","volume":"9","author":"Tang","year":"2022","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_22","first-page":"1","article-title":"STDFusionNet: An infrared and visible image fusion network based on salient target detection","volume":"70","author":"Ma","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A unified unsupervised image fusion network","volume":"44","author":"Xu","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"104383","DOI":"10.1016\/j.infrared.2022.104383","article-title":"FLFuse-Net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information","volume":"127","author":"Xue","year":"2022","journal-title":"Infrared Phys. Technol."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Xu, H., Liang, P., Yu, W., Jiang, J., and Ma, J. (2019, January 10\u201316). Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators. 
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), Macao, China.","DOI":"10.24963\/ijcai.2019\/549"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1016\/j.inffus.2022.07.016","article-title":"Unified gradient-and intensity-discriminator generative adversarial network for image fusion","volume":"88","author":"Zhou","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Liu, J., Fan, X., Huang, Z., Wu, G., Liu, R., Zhong, W., and Luo, Z. (2022, January 18\u20134). Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00571"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.inffus.2021.12.004","article-title":"Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network","volume":"82","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-Nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_31","first-page":"5002215","article-title":"SEDRFuse: A symmetric encoder\u2013decoder with residual block network for infrared and visible image fusion","volume":"70","author":"Jian","year":"2020","journal-title":"IEEE Trans. Instrum. 
Meas."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","article-title":"AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks","volume":"23","author":"Li","year":"2020","journal-title":"IEEE Trans. Multimed."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"128","DOI":"10.1016\/j.inffus.2020.11.009","article-title":"RXDNFuse: A aggregated residual dense network for infrared and visible image fusion","volume":"69","author":"Long","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"4070","DOI":"10.1109\/TIP.2021.3069339","article-title":"Different input resolutions and arbitrary output resolution: A meta learning-based deep framework for infrared and visible image fusion","volume":"30","author":"Li","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ha, Q., Watanabe, K., Karasawa, T., Ushiku, Y., and Harada, T. (2017, January 24\u201328). MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. Proceedings of the 2017 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada.","DOI":"10.1109\/IROS.2017.8206396"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"2576","DOI":"10.1109\/LRA.2019.2904733","article-title":"Rtfnet: Rgb-thermal fusion network for semantic segmentation of urban scenes","volume":"4","author":"Sun","year":"2019","journal-title":"IEEE Robot. Autom. 
Lett."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"179","DOI":"10.1016\/j.patrec.2021.03.015","article-title":"Attention fusion network for multi-spectral semantic segmentation","volume":"146","author":"Xu","year":"2021","journal-title":"Pattern Recognit. Lett."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Liu, H., Chen, F., Zeng, Z., and Tan, X. (2022). AMFuse: Add\u2013Multiply-Based Cross-Modal Fusion Network for Multi-Spectral Semantic Segmentation. Remote Sens., 14.","DOI":"10.3390\/rs14143368"},{"key":"ref_40","unstructured":"Toet, A. (2023, May 31). TNO Image Fusion Dataset. Available online: https:\/\/figshare.com\/articles\/dataset\/TNO_Image_Fusion_Dataset\/1008029."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"2761","DOI":"10.1007\/s11263-021-01501-8","article-title":"SDNet: A versatile squeeze-and-decomposition network for real-time image fusion","volume":"129","author":"Zhang","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"188","DOI":"10.1016\/j.neunet.2021.01.021","article-title":"Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation","volume":"137","author":"Peng","year":"2021","journal-title":"Neural 
Netw."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/25\/7\/985\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:02:10Z","timestamp":1760126530000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/25\/7\/985"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,27]]},"references-count":42,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["e25070985"],"URL":"https:\/\/doi.org\/10.3390\/e25070985","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,27]]}}}