{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:24:29Z","timestamp":1760149469288,"version":"build-2065373602"},"reference-count":54,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T00:00:00Z","timestamp":1691712000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61871210","CX20230958"],"award-info":[{"award-number":["61871210","CX20230958"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Chuanshan Talent Project of the University of South China","award":["61871210","CX20230958"],"award-info":[{"award-number":["61871210","CX20230958"]}]},{"name":"2023 Hunan Postgraduate Research Innovation Project","award":["61871210","CX20230958"],"award-info":[{"award-number":["61871210","CX20230958"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. 
The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we apply the fusion results to the target detection task, which indirectly demonstrates the fusion performance of our method.<\/jats:p>","DOI":"10.3390\/s23167097","type":"journal-article","created":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T12:10:23Z","timestamp":1691755823000},"page":"7097","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network"],"prefix":"10.3390","volume":"23","author":[{"given":"Ruyi","family":"Yin","sequence":"first","affiliation":[{"name":"College of Electrical Engineering, University of South China, Hengyang 421001, China"}]},{"given":"Bin","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Electrical Engineering, University of South China, Hengyang 421001, China"}]},{"given":"Zuyan","family":"Huang","sequence":"additional","affiliation":[{"name":"College of Electrical Engineering, University of South China, Hengyang 421001, China"}]},{"given":"Xiaozhi","family":"Zhang","sequence":"additional","affiliation":[{"name":"College of Electrical Engineering, University of South China, Hengyang 421001, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,8,11]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Li, 
X., Li, X., and Liu, W. (2023). CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter. Remote Sens., 15.","DOI":"10.3390\/rs15122969"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TIM.2022.3218574","article-title":"CGTF: Convolution-Guided Transformer for Infrared and Visible Image Fusion","volume":"71","author":"Li","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"457","DOI":"10.1007\/s00542-022-05315-7","article-title":"Gagandeep IR and Visible Image Fusion Using DWT and Bilateral Filter","volume":"29","author":"Singh","year":"2023","journal-title":"Microsyst. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ma, W., Wang, K., Li, J., Yang, S.X., Li, J., Song, L., and Li, Q. (2023). Infrared and Visible Image Fusion Technology and Application: A Review. Sensors, 23.","DOI":"10.3390\/s23020599"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Zhang, L., Yang, X., Wan, Z., Cao, D., and Lin, Y. (2022). A Real-Time FPGA Implementation of Infrared and Visible Image Fusion Using Guided Filter and Saliency Detection. Sensors, 22.","DOI":"10.3390\/s22218487"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Jia, W., Song, Z., and Li, Z. (2022). Multi-Scale Fusion of Stretched Infrared and Visible Images. Sensors, 22.","DOI":"10.3390\/s22176660"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Liu, Y., Wu, Z., Han, X., Sun, Q., Zhao, J., and Liu, J. (2022). Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement. 
Sensors, 22.","DOI":"10.3390\/s22176390"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"104589","DOI":"10.1016\/j.infrared.2023.104589","article-title":"RDCa-Net: Residual Dense Channel Attention Symmetric Network for Infrared and Visible Image Fusion","volume":"130","author":"Huang","year":"2023","journal-title":"Infrared Phys. Technol."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Wang, H., Wang, J., Xu, H., Sun, Y., and Yu, Z. (2022). DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion. Sensors, 22.","DOI":"10.3390\/s22145149"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Zheng, X., Yang, Q., Si, P., and Wu, Q. (2022). A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism. Sensors, 22.","DOI":"10.3390\/s22103651"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"104701","DOI":"10.1016\/j.infrared.2023.104701","article-title":"Infrared and Visible Image Fusion Based on Domain Transform Filtering and Sparse Representation","volume":"131","author":"Li","year":"2023","journal-title":"Infrared Phys. Technol."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"4881","DOI":"10.1016\/j.ijleo.2014.04.036","article-title":"Visual Attention Guided Image Fusion with Sparse Representation","volume":"125","author":"Yang","year":"2014","journal-title":"Optik"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"933","DOI":"10.1007\/s10044-022-01073-4","article-title":"Infrared and Visible Image Fusion via Multi-Scale Multi-Layer Rolling Guidance Filter","volume":"25","author":"Prema","year":"2022","journal-title":"Pattern Anal. 
Applic."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"6010","DOI":"10.1016\/j.ijleo.2014.07.059","article-title":"A False Color Image Fusion Method Based on Multi-Resolution Color Transfer in Normalization YCbCr Space","volume":"125","author":"Yu","year":"2014","journal-title":"Optik"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"743","DOI":"10.1109\/JSEN.2007.894926","article-title":"Region-Based Multimodal Image Fusion Using ICA Bases","volume":"7","author":"Cvejic","year":"2007","journal-title":"IEEE Sens. J."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"131","DOI":"10.1016\/j.inffus.2005.09.001","article-title":"Pixel-Based and Region-Based Image Fusion Schemes Using ICA Bases","volume":"8","author":"Mitianoudis","year":"2007","journal-title":"Inf. Fusion"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.neucom.2016.11.051","article-title":"A Novel Infrared and Visible Image Fusion Algorithm Based on Shift-Invariant Dual-Tree Complex Shearlet Transform and Sparse Representation","volume":"226","author":"Yin","year":"2017","journal-title":"Neurocomputing"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Nguyen, H.-C., Nguyen, T.-H., Scherer, R., and Le, V.-H. (2023). Deep Learning for Human Activity Recognition on 3D Human Skeleton: Survey and Comparative Study. Sensors, 23.","DOI":"10.3390\/s23115121"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Song, J., Zhu, A.-X., and Zhu, Y. (2023). Transformer-Based Semantic Segmentation for Extraction of Building Footprints from Very-High-Resolution Images. Sensors, 23.","DOI":"10.3390\/s23115166"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"191","DOI":"10.1016\/j.inffus.2016.12.001","article-title":"Multi-Focus Image Fusion with a Deep Convolutional Neural Network","volume":"36","author":"Liu","year":"2017","journal-title":"Inf. 
Fusion"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1850018","DOI":"10.1142\/S0219691318500182","article-title":"Infrared and Visible Image Fusion with Convolutional Neural Networks","volume":"16","author":"Liu","year":"2018","journal-title":"Int. J. Wavelets Multiresolut Inf. Process."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A Fusion Approach to Infrared and Visible Images","volume":"28","author":"Li","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A Generative Adversarial Network for Infrared and Visible Image Fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","article-title":"DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion","volume":"29","author":"Ma","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_25","first-page":"12797","article-title":"Rethinking the Image Fusion: A Fast Unified Image Fusion Network Based on Proportional Maintenance of Gradient and Intensity","volume":"34","author":"Zhang","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A Unified Unsupervised Image Fusion Network","volume":"44","author":"Xu","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Tang, W., He, F., and Liu, Y. (2022). YDTR: Infrared and Visible Image Fusion via Y-Shape Dynamic Transformer. IEEE Trans. 
Multimed., 1\u201316.","DOI":"10.1109\/TMM.2022.3192661"},{"key":"ref_28","unstructured":"Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (2017). Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_29","unstructured":"Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv."},{"key":"ref_30","unstructured":"Wallach, H., Larochelle, H., Beygelzimer, A., Alch\u00e9-Buc, F.D., Fox, E., and Garnett, R. (2019). Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_31","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021). An Image Is Worth 16 \u00d7 16 Words: Transformers for Image Recognition at Scale. arXiv."},{"key":"ref_32","unstructured":"Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., and Zhou, Y. (2021). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. (2021, January 11\u201317). Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., and Torr, P.H.S. (2021, January 19\u201325). Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. 
Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00681"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Yan, B., Peng, H., Fu, J., Wang, D., and Lu, H. (2021, January 11\u201317). Learning Spatio-Temporal Transformer for Visual Tracking. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01028"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ren, P., Li, C., Wang, G., Xiao, Y., Du, Q., Liang, X., and Chang, X. (2022, January 19\u201320). Beyond Fixation: Dynamic Window Visual Transformer. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01168"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Vs, V., Jose Valanarasu, J.M., Oza, P., and Patel, V.M. (2022, January 16\u201319). Image Fusion Transformer. Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France.","DOI":"10.1109\/ICIP46576.2022.9897280"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Zhao, H., and Nie, R. (2021, January 24\u201326). DNDT: Infrared and Visible Image Fusion Via DenseNet and Dual-Transformer. Proceedings of the 2021 International Conference on Information Technology and Biomedical Engineering (ICITBE), Nanchang, China.","DOI":"10.1109\/ICITBE54178.2021.00025"},{"key":"ref_39","unstructured":"Fu, Y., Xu, T., Wu, X., and Kittler, J. (2021). PPT Fusion: Pyramid Patch Transformerfor a Case Study in Image Fusion. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Rao, D., Xu, T., and Wu, X.-J. (2023). TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network. IEEE Trans. 
Image Process.","DOI":"10.1109\/TIP.2023.3273451"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tian, Y., Kong, Y., Zhong, B., and Fu, Y. (2018, January 18\u201322). Residual Dense Network for Image Super-Resolution. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00262"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"103016","DOI":"10.1016\/j.cviu.2020.103016","article-title":"Infrared and Visible Image Fusion via Gradientlet Filter","volume":"197\u2013198","author":"Ma","year":"2020","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1109\/JSEN.2015.2478655","article-title":"Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform","volume":"16","author":"Bavirisetti","year":"2016","journal-title":"IEEE Sens. J."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1016\/j.inffus.2021.02.008","article-title":"An Infrared and Visible Image Fusion Method Based on Multi-Scale Transformation and Norm Optimization","volume":"71","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"1134","DOI":"10.1109\/TCI.2021.3119954","article-title":"GAN-FM: Infrared and Visible Image Fusion Using GAN With Full-Scale Skip Connection and Dual Markovian Discriminators","volume":"7","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"103407","DOI":"10.1016\/j.cviu.2022.103407","article-title":"CUFD: An Encoder\u2013Decoder Network for Visible and Infrared Image Fusion Based on Common and Unique Feature Decomposition","volume":"218","author":"Xu","year":"2022","journal-title":"Comput. Vis. 
Image Und."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"3159","DOI":"10.1109\/TCSVT.2023.3234340","article-title":"DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer","volume":"33","author":"Tang","year":"2023","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image Quality Assessment: From Error Visibility to Structural Similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"3231","DOI":"10.1016\/j.optcom.2009.05.021","article-title":"A Comparison of Criterion Functions for Fusion of Multi-Focus Noisy Images","volume":"282","author":"Aslantas","year":"2009","journal-title":"Opt. Commun."},{"key":"ref_50","first-page":"69","article-title":"A Guide to Appropriate Use of Correlation Coefficient in Medical Research","volume":"24","author":"Mukaka","year":"2012","journal-title":"Malawi Med. J."},{"key":"ref_51","first-page":"93","article-title":"Infrared and Visible Image Fusion Using Entropy and Neuro-Fuzzy Concepts","volume":"Volume 248","author":"Satapathy","year":"2014","journal-title":"ICT and Critical Infrastructure: Proceedings of the 48th Annual Convention of Computer Society of India-Vol I"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1890","DOI":"10.1016\/j.aeue.2015.09.004","article-title":"A New Image Quality Metric for Image Fusion: The Sum of the Correlations of Differences","volume":"69","author":"Aslantas","year":"2015","journal-title":"AEU\u2014Int. J. Electron. Commun."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"1421","DOI":"10.1016\/j.imavis.2007.12.002","article-title":"A New Automated Quality Assessment Algorithm for Image Fusion","volume":"27","author":"Chen","year":"2009","journal-title":"Image Vis. 
Comput."},{"key":"ref_54","unstructured":"Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). YOLOX: Exceeding YOLO Series in 2021. arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/16\/7097\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:31:00Z","timestamp":1760128260000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/16\/7097"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,11]]},"references-count":54,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2023,8]]}},"alternative-id":["s23167097"],"URL":"https:\/\/doi.org\/10.3390\/s23167097","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,8,11]]}}}