{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T14:35:55Z","timestamp":1774449355145,"version":"3.50.1"},"reference-count":40,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2022,7,8]],"date-time":"2022-07-08T00:00:00Z","timestamp":1657238400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Defense Industrial Technology Development Program","award":["JCKY2019602C015"],"award-info":[{"award-number":["JCKY2019602C015"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Infrared images are robust against illumination variation and disguises, containing the sharp edge contours of objects. Visible images are enriched with texture details. Infrared and visible image fusion seeks to obtain high-quality images, keeping the advantages of source images. This paper proposes an object-aware image fusion method based on a deep residual shrinkage network, termed as DRSNFuse. DRSNFuse exploits residual shrinkage blocks for image fusion and introduces a deeper network in infrared and visible image fusion tasks than existing methods based on fully convolutional networks. The deeper network can effectively extract semantic information, while the residual shrinkage blocks maintain the texture information throughout the whole network. The residual shrinkage blocks adapt a channel-wise attention mechanism to the fusion task, enabling feature map channels to focus on objects and backgrounds separately. A novel image fusion loss function is proposed to obtain better fusion performance and suppress artifacts. DRSNFuse trained with the proposed loss function can generate fused images with fewer artifacts and more original textures, which also satisfy the human visual system. 
Experiments show that our method has better fusion results than mainstream methods through quantitative comparison and obtains fused images with brighter targets, sharper edge contours, richer details, and fewer artifacts.<\/jats:p>","DOI":"10.3390\/s22145149","type":"journal-article","created":{"date-parts":[[2022,7,11]],"date-time":"2022-07-11T00:06:21Z","timestamp":1657497981000},"page":"5149","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion"],"prefix":"10.3390","volume":"22","author":[{"given":"Hongfeng","family":"Wang","sequence":"first","affiliation":[{"name":"School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Jianzhong","family":"Wang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Haonan","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Yong","family":"Sun","sequence":"additional","affiliation":[{"name":"School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Zibo","family":"Yu","sequence":"additional","affiliation":[{"name":"School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,7,8]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Lyu, C., Heyer, P., Goossens, B., and Philips, W. (2022). An Unsupervised Transfer Learning Framework for Visible-Thermal Pedestrian Detection. Sensors, 22.","DOI":"10.3390\/s22124416"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Khandakar, A., Chowdhury, M.E.H., Reaz, M.B.I., Ali, S.H.M., Kiranyaz, S., Rahman, T., Chowdhury, M.H., Ayari, M.A., Alfkey, R., and Bakar, A.A.A. (2022). A Novel Machine Learning Approach for Severity Classification of Diabetic Foot Complications Using Thermogram Images. Sensors, 22.","DOI":"10.3390\/s22114249"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"823","DOI":"10.1007\/s00170-020-06173-1","article-title":"Deep multi-sensorial data analysis for production monitoring in hard metal industry","volume":"115","author":"Kotsiopoulos","year":"2021","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"16040","DOI":"10.1109\/ACCESS.2017.2735865","article-title":"From Multi-scale Decomposition to Non-multi-scale Decomposition Methods: A Comprehensive Survey of Image Fusion Techniques and its Applications","volume":"5","author":"Dogra","year":"2017","journal-title":"IEEE Access"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"15750","DOI":"10.1109\/ACCESS.2017.2735019","article-title":"Image Segmentation-Based Multi-Focus Image Fusion Through Multi-Scale Convolutional Neural Network","volume":"5","author":"Du","year":"2017","journal-title":"IEEE Access"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"74","DOI":"10.1016\/j.inffus.2010.03.002","article-title":"Performance comparison of different multi-resolution transforms for image fusion","volume":"12","author":"Li","year":"2011","journal-title":"Inf. 
Fusion"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1109\/JSEN.2015.2478655","article-title":"Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform","volume":"16","author":"Bavirisetti","year":"2015","journal-title":"IEEE Sens. J."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"15","DOI":"10.1016\/j.inffus.2015.11.003","article-title":"Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters","volume":"30","author":"Zhou","year":"2016","journal-title":"Inf. Fusion"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1016\/j.bspc.2017.02.005","article-title":"Medical image fusion based on sparse representation of classified image patches","volume":"34","author":"Zong","year":"2017","journal-title":"Biomed. Signal Process. Control."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Patil, U., and Mudengudi, U. (2011, January 3\u20135). Image fusion using hierarchical PCA. Proceedings of the 2011 International Conference on Image Information Processing, Shimla, India.","DOI":"10.1109\/ICIIP.2011.6108966"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1400","DOI":"10.1364\/JOSAA.34.001400","article-title":"Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition","volume":"34","author":"Zhang","year":"2017","journal-title":"JOSA A"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1982","DOI":"10.1109\/TMM.2019.2895292","article-title":"FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network","volume":"21","author":"Guo","year":"2019","journal-title":"IEEE Trans. Multimed."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Li, H., Wu, X.J., and Kittler, J. (2018, January 20\u201324). Infrared and visible image fusion using a deep learning framework. Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China.","DOI":"10.1109\/ICPR.2018.8546006"},{"key":"ref_15","unstructured":"Lahoud, F., and S\u00fcsstrunk, S. (2019). Fast and efficient zero-learning image fusion. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Rakotonirina, N.C., and Rasoanaivo, A. (2020, January 4\u20138). ESRGAN+: Further improving enhanced super-resolution generative adversarial network. Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9054071"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. 
{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},
{"key":"ref_20","first-page":"1","article-title":"GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion","volume":"70","author":"Ma","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},
{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Xu, S., Zhang, C., Liu, J., Li, P., and Zhang, J. (2020). DIDFuse: Deep image decomposition for infrared and visible image fusion. arXiv.","DOI":"10.24963\/ijcai.2020\/135"},
{"key":"ref_22","doi-asserted-by":"crossref","first-page":"640","DOI":"10.1109\/TCI.2020.2965304","article-title":"VIF-Net: An unsupervised framework for infrared and visible image fusion","volume":"6","author":"Hou","year":"2020","journal-title":"IEEE Trans. Comput. Imaging"},
{"key":"ref_23","first-page":"1","article-title":"STDFusionNet: An infrared and visible image fusion network based on salient target detection","volume":"70","author":"Ma","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},
{"key":"ref_24","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.inffus.2019.07.005","article-title":"Infrared and visible image fusion via detail preserving adversarial learning","volume":"54","author":"Ma","year":"2020","journal-title":"Inf. Fusion"},
{"key":"ref_25","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},
{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Bai, Y., Zhang, Y., Ding, M., and Ghanem, B. (2018, January 8\u201314). SOD-MTGAN: Small object detection via multi-task generative adversarial network. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01261-8_13"},
{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},
{"key":"ref_28","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv."},
{"key":"ref_29","unstructured":"Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). YOLOX: Exceeding YOLO series in 2021. arXiv."},
{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, October 22\u201329). Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},
{"key":"ref_31","doi-asserted-by":"crossref","first-page":"4681","DOI":"10.1109\/TII.2019.2943898","article-title":"Deep residual shrinkage networks for fault diagnosis","volume":"16","author":"Zhao","year":"2019","journal-title":"IEEE Trans. Ind. Inform."},
Inform."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Yang, P., Geng, H., Wen, C., and Liu, P. (2021). An Intelligent Quadrotor Fault Diagnosis Method Based on Novel Deep Residual Shrinkage Network. Drones, 5.","DOI":"10.3390\/drones5040133"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Li, H., and Chen, L. (2021, January 11\u201313). Deep Residual Shrinkage Networks with Self-Adaptive Slope Thresholding for Fault Diagnosis. Proceedings of the 2021 7th International Conference on Condition Monitoring of Machinery in Non-Stationary Operations (CMMNO), Guangzhou, China.","DOI":"10.1109\/CMMNO53328.2021.9467549"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1049\/elp2.12147","article-title":"A novel method for transformer fault diagnosis based on refined deep residual shrinkage network","volume":"16","author":"Hu","year":"2022","journal-title":"IET Electr. Power Appl."},{"key":"ref_35","first-page":"8880960","article-title":"Fault diagnosis of rotating machinery based on one-dimensional deep residual shrinkage network with a wide convolution layer","volume":"2020","author":"Yang","year":"2020","journal-title":"Shock Vib."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1016\/j.infrared.2017.02.005","article-title":"Infrared and visible image fusion based on visual saliency map and weighted least square optimization","volume":"82","author":"Ma","year":"2017","journal-title":"Infrared Phys. Technol."},{"key":"ref_39","first-page":"12484","article-title":"FusionDN: A Unified Densely Connected Network for Image Fusion","volume":"34","author":"Xu","year":"2020","journal-title":"Proc. Aaai Conf. Artif. Intell."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"010901","DOI":"10.1117\/1.OE.51.1.010901","article-title":"Progress in color night vision","volume":"51","author":"Toet","year":"2012","journal-title":"Opt. Eng."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/14\/5149\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:47:03Z","timestamp":1760140023000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/14\/5149"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,8]]},"references-count":40,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2022,7]]}},"alternative-id":["s22145149"],"URL":"https:\/\/doi.org\/10.3390\/s22145149","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,7,8]]}}}