{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,13]],"date-time":"2026-05-13T03:54:58Z","timestamp":1778644498125,"version":"3.51.4"},"reference-count":94,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2023,1,4]],"date-time":"2023-01-04T00:00:00Z","timestamp":1672790400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Outstanding Scientist Training Program of Beijing Academy of Agriculture and Forestry Sciences","award":["JKZX202214"],"award-info":[{"award-number":["JKZX202214"]}]},{"name":"Beijing Digital Agriculture Innovation Consortium Project","award":["BAIC10-2022"],"award-info":[{"award-number":["BAIC10-2022"]}]},{"name":"Chongqing Municipal Education Commission Graduate Innovation Project","award":["CYS22726"],"award-info":[{"award-number":["CYS22726"]}]},{"name":"Research Foundation of Chongqing Education Committee","award":["KJQN202001531"],"award-info":[{"award-number":["KJQN202001531"]}]},{"name":"Natural Science Foundation of Chongqing","award":["cstc2018jcyjAX0336"],"award-info":[{"award-number":["cstc2018jcyjAX0336"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Images acquired by a single visible-light sensor are highly susceptible to lighting conditions, weather changes, and other factors, while images acquired by a single infrared sensor generally suffer from poor resolution, low contrast, a low signal-to-noise ratio, and blurred visual effects. Fusing visible and infrared images avoids the disadvantages of each single sensor and, by combining the advantages of both, significantly improves image quality. The fusion of infrared and visible images is widely used in agriculture, industry, medicine, and other fields. In this review, firstly, the architectures of mainstream infrared and visible image fusion technologies and their applications were surveyed; secondly, the state of applications in robot vision, medical imaging, agricultural remote sensing, and industrial defect detection was discussed; thirdly, the evaluation indicators of the main image fusion methods were grouped into subjective and objective evaluations, the properties of current mainstream technologies were then analyzed and compared, and the outlook for image fusion was assessed; finally, infrared and visible image fusion was summarized. The results show that the definition and efficiency of fused infrared and visible images have improved significantly. However, some problems remain, such as poor accuracy of the fused image and irretrievably lost pixels. 
There is a need to improve the adaptive design of traditional algorithm parameters and to combine fusion-algorithm innovation with neural-network optimization, so as to further improve image fusion accuracy, reduce noise interference, and improve the real-time performance of the algorithms.<\/jats:p>","DOI":"10.3390\/s23020599","type":"journal-article","created":{"date-parts":[[2023,1,5]],"date-time":"2023-01-05T02:28:53Z","timestamp":1672885733000},"page":"599","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":148,"title":["Infrared and Visible Image Fusion Technology and Application: A Review"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8838-0006","authenticated-orcid":false,"given":"Weihong","family":"Ma","sequence":"first","affiliation":[{"name":"Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5738-8072","authenticated-orcid":false,"given":"Kun","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Chongqing University of Science & Technology, Chongqing 401331, China"}]},{"given":"Jiawei","family":"Li","sequence":"additional","affiliation":[{"name":"Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6888-7993","authenticated-orcid":false,"given":"Simon X.","family":"Yang","sequence":"additional","affiliation":[{"name":"Advanced Robotics and Intelligent Systems Laboratory, School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada"}]},{"given":"Junfei","family":"Li","sequence":"additional","affiliation":[{"name":"Advanced Robotics and Intelligent Systems Laboratory, School of Engineering, University of Guelph, Guelph, ON N1G 2W1, 
Canada"}]},{"given":"Lepeng","family":"Song","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Chongqing University of Science & Technology, Chongqing 401331, China"}]},{"given":"Qifeng","family":"Li","sequence":"additional","affiliation":[{"name":"Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,1,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"211164","DOI":"10.1109\/ACCESS.2020.3036620","article-title":"Detection of Road Objects with Small Appearance in Images for Autonomous Driving in Various Traffic Situations Using a Deep Learning Based Approach","volume":"8","author":"Li","year":"2020","journal-title":"IEEE Access"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1016\/j.inffus.2017.10.007","article-title":"Deep learning for pixel-level image fusion: Recent advances and future prospects","volume":"42","author":"Liu","year":"2017","journal-title":"Inf. Fusion"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.inffus.2016.05.004","article-title":"Pixel-level image fusion: A survey of the state of the art","volume":"33","author":"Li","year":"2016","journal-title":"Inf. Fusion"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","article-title":"Infrared and visible image fusion methods and applications: A survey","volume":"45","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.eij.2015.09.002","article-title":"Current trends in medical image registration and fusion","volume":"17","author":"Elmogy","year":"2016","journal-title":"Egypt. Inform. 
J."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.inffus.2016.02.001","article-title":"Infrared and visible image fusion via gradient transfer and total variation minimization","volume":"31","author":"Ma","year":"2016","journal-title":"Inf. Fusion"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","article-title":"A general framework for image fusion based on multi-scale transform and sparse representation","volume":"24","author":"Liu","year":"2014","journal-title":"Inf. Fusion"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-Nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_11","first-page":"1","article-title":"GANMcC: A Generative Adversarial Network with Multiclassification Constraints for Infrared and Visible Image Fusion","volume":"70","author":"Ma","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","article-title":"IFCNN: A general image fusion framework based on convolutional neural network","volume":"54","author":"Zhang","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_13","unstructured":"Zhu, C., Zeng, M., and Huang, X. (2018). 
SDnet: Contextualized attention-based deep network for conversational question answering. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A unified unsupervised image fusion network","volume":"44","author":"Xu","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.inffus.2021.12.004","article-title":"Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network","volume":"82","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"8279342","DOI":"10.1155\/2020\/8279342","article-title":"A review of multimodal medical image fusion techniques","volume":"2020","author":"Huang","year":"2020","journal-title":"Comput. Math. Methods Med."},{"key":"ref_17","first-page":"129","article-title":"An overview of different image fusion methods for medical applications","volume":"4","author":"Pure","year":"2013","journal-title":"Int. J. Sci. Eng. Res."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1016\/j.neucom.2015.07.160","article-title":"An overview of multi-modal medical image fusion","volume":"215","author":"Du","year":"2016","journal-title":"Neurocomputing"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"108036","DOI":"10.1016\/j.sigpro.2021.108036","article-title":"Multimodal medical image fusion review: Theoretical background and recent advances","volume":"183","author":"Hermessi","year":"2021","journal-title":"Signal Process."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Yang, Y., Han, C., Kang, X., and Han, D. (2007, January 18\u201321). An overview on pixel-level image fusion in remote sensing. 
Proceedings of the 2007 IEEE International Conference on Automation and Logistics, Jinan, China.","DOI":"10.1109\/ICAL.2007.4338968"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1080\/17538947.2013.869266","article-title":"Remote sensing image fusion: An update in the context of Digital Earth","volume":"7","author":"Pohl","year":"2014","journal-title":"Int. J. Digit. Earth"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Belgiu, M., and Stein, A. (2019). Spatiotemporal Image Fusion in Remote Sensing. Remote Sens., 11.","DOI":"10.3390\/rs11070818"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Wang, Q., Yu, D., and Shen, Y. (2009, January 5\u20137). An overview of image fusion metrics. Proceedings of the 2009 IEEE Instrumentation and Measurement Technology Conference, Singapore.","DOI":"10.1109\/IMTC.2009.5168582"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Omar, Z., and Stathaki, T. (2014, January 27\u201329). Image fusion: An overview. Proceedings of the 2014 5th International Conference on Intelligent Systems, Modelling and Simulation, Langkawi, Malaysia.","DOI":"10.1109\/ISMS.2014.58"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"45","DOI":"10.1109\/MIM.2021.9400960","article-title":"Recent Advances in Sparse Representation Based Medical Image Fusion","volume":"24","author":"Liu","year":"2021","journal-title":"IEEE Instrum. Meas. Mag."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Burt, P.J., and Adelson, E.H. (1987). The Laplacian pyramid as a compact image code. 
Readings in Computer Vision, Morgan Kaufmann.","DOI":"10.1016\/B978-0-08-051581-6.50065-9"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1016\/j.sigpro.2013.10.010","article-title":"Region level based multi-focus image fusion using quaternion wavelet and normalized cut","volume":"97","author":"Liu","year":"2014","journal-title":"Signal Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"131","DOI":"10.1016\/j.neucom.2017.01.006","article-title":"Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion","volume":"235","author":"Liu","year":"2017","journal-title":"Neurocomputing"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.infrared.2015.11.003","article-title":"An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing","volume":"74","author":"Zhang","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1882","DOI":"10.1109\/LSP.2016.2618776","article-title":"Image Fusion with Convolutional Sparse Representation","volume":"23","author":"Liu","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"245","DOI":"10.1016\/0167-8655(89)90003-2","article-title":"Image fusion by a ratio of low-pass pyramid","volume":"9","author":"Toet","year":"1989","journal-title":"Pattern Recognit. 
Lett."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"789","DOI":"10.1117\/12.7977034","article-title":"Merging thermal and visual images by a contrast pyramid","volume":"28","author":"Toet","year":"1989","journal-title":"Opt. Eng."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"255","DOI":"10.1016\/0167-8655(89)90004-4","article-title":"A morphological pyramidal image decomposition","volume":"9","author":"Toet","year":"1989","journal-title":"Pattern Recognit. Lett."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"891","DOI":"10.1109\/34.93808","article-title":"The design and use of steerable filters","volume":"9","author":"Freeman","year":"1991","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"723","DOI":"10.1137\/0515056","article-title":"Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape","volume":"15","author":"Grossmann","year":"1984","journal-title":"SIAM J. Math. Anal."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"674","DOI":"10.1109\/34.192463","article-title":"A theory for multiresolution signal decomposition: The wavelet representation","volume":"11","author":"Mallat","year":"1989","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"3089","DOI":"10.1109\/TIP.2006.877507","article-title":"The Non-subsampled Contourlet Transform: Theory, Design, and Applications","volume":"15","author":"Zhou","year":"2006","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"6010","DOI":"10.1016\/j.ijleo.2014.07.059","article-title":"A false color image fusion method based on multi-resolution color transfer in normalization YCBCR space","volume":"125","author":"Yu","year":"2014","journal-title":"Optik"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"027002","DOI":"10.1117\/1.2857417","article-title":"Fusion of infrared and visual images based on contrast pyramid directional filter banks using clonal selection optimizing","volume":"47","author":"Jin","year":"2008","journal-title":"Opt. Eng."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Zhang, B. (2010, January 20\u201322). Study on image fusion based on different fusion rules of wavelet transform. Proceedings of the 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), Chengdu, China.","DOI":"10.1109\/ICACTE.2010.5579586"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1109\/MSP.2005.1550194","article-title":"The dual-tree complex wavelet transform","volume":"22","author":"Selesnick","year":"2005","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_43","first-page":"6290","article-title":"Visible and infrared image fusion using the lifting wavelet","volume":"11","author":"Zou","year":"2013","journal-title":"Telkomnika Indones. J. Electr. Eng."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Yin, S., Cao, L., Tan, Q., and Jin, G. (2010, January 4\u20137). Infrared and visible image fusion based on NSCT and fuzzy logic. 
Proceedings of the 2010 IEEE International Conference on Mechatronics and Automation, Xi\u2019an, China.","DOI":"10.1109\/ICMA.2010.5588318"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"526","DOI":"10.1007\/s12204-013-1437-7","article-title":"Infrared and visible image fusion based on region of interest detection and nonsubsampled contourlet transform","volume":"1","author":"Liu","year":"2013","journal-title":"J. Shanghai Jiaotong Univ. (Sci.)"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"298","DOI":"10.1137\/060649781","article-title":"Optimally Sparse Multidimensional Representation Using Shearlets","volume":"39","author":"Guo","year":"2007","journal-title":"SIAM J. Math. Anal."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"161","DOI":"10.1016\/j.infrared.2014.07.019","article-title":"Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization","volume":"67","author":"Kong","year":"2014","journal-title":"Infrared Phys. Technol."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"884","DOI":"10.1109\/TIM.2009.2026612","article-title":"Multifocus Image Fusion and Restoration with Sparse Representation","volume":"59","author":"Bin","year":"2010","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1553","DOI":"10.1109\/TSP.2009.2036477","article-title":"Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation","volume":"58","author":"Rubinstein","year":"2010","journal-title":"IEEE Trans. Signal Process."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1016\/j.inffus.2017.05.006","article-title":"Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review","volume":"40","author":"Zhang","year":"2018","journal-title":"Inf. 
Fusion"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"108301","DOI":"10.1016\/j.patcog.2021.108301","article-title":"Privacy-aware supervised classification: An informative subspace based multi-objective approach","volume":"122","author":"Biswas","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.infrared.2016.05.012","article-title":"Infrared and visible images fusion based on RPCA and NSCT","volume":"77","author":"Fu","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"743","DOI":"10.1109\/JSEN.2007.894926","article-title":"Region-Based Multimodal Image Fusion Using ICA Bases","volume":"7","author":"Cvejic","year":"2007","journal-title":"IEEE Sensors J."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/JAS.2022.105686","article-title":"SwinFusion: Cross-domain long-range learning for general image fusion via swin transformer","volume":"9","author":"Ma","year":"2022","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"83","DOI":"10.1016\/j.tifs.2017.12.006","article-title":"Use of principal component analysis (PCA) and hierarchical cluster analysis (HCA) for multivariate association between bioactive compounds and functional properties in foods: A critical perspective","volume":"72","author":"Granato","year":"2018","journal-title":"Trends Food Sci. Technol."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"52","DOI":"10.1016\/j.infrared.2016.01.009","article-title":"Two-scale image fusion of visible and infrared images using saliency detection","volume":"76","author":"Bavirisetti","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Cvejic, N., Lewis, J., Bull, D., and Canagarajah, N. (2006, January 10\u201313). 
Adaptive Region-Based Multimodal Image Fusion Using ICA Bases. Proceedings of the 2006 9th International Conference on Information Fusion, Florence, Italy.","DOI":"10.1109\/ICIF.2006.301600"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Song, H.A., and Lee, S.Y. (2013). Hierarchical Representation Using NMF. International Conference on Neural Information Processing, Springer.","DOI":"10.1007\/978-3-642-42054-2_58"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Mou, J., Gao, W., and Song, Z. (2013, January 16\u201318). Image fusion based on non-negative matrix factorization and infrared feature extraction. Proceedings of the 2013 6th International Congress on Image and Signal Processing, Hangzhou, China.","DOI":"10.1109\/CISP.2013.6745210"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"104048","DOI":"10.1016\/j.infrared.2022.104048","article-title":"VDFEFuse: A novel fusion approach to infrared and visible images","volume":"121","author":"Hao","year":"2022","journal-title":"Infrared Phys. Technol."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"9645","DOI":"10.1109\/TIM.2020.3005230","article-title":"NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial\/Channel Attention Models","volume":"69","author":"Li","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"824","DOI":"10.1109\/TCI.2021.3100986","article-title":"Classification Saliency-Based Rule for Visible and Infrared Image Fusion","volume":"7","author":"Xu","year":"2021","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Liu, Y., Chen, X., Cheng, J., and Peng, H. (2017, January 10\u201313). A medical image fusion method based on convolutional neural networks. 
Proceedings of the 2017 20th International Conference on Information Fusion, Xi\u2019an, China.","DOI":"10.23919\/ICIF.2017.8009769"},{"key":"ref_64","first-page":"12797","article-title":"Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity","volume":"34","author":"Zhang","year":"2020","journal-title":"Proc. Conf. AAAI Artif. Intell."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"2761","DOI":"10.1007\/s11263-021-01501-8","article-title":"SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion","volume":"129","author":"Zhang","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_66","first-page":"1","article-title":"STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection","volume":"70","author":"Ma","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., and Choo, J. (2018, January 18\u201323). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00916"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Xu, H., Liang, P., Yu, W., Jiang, J., and Ma, J. (2019, January 10\u201316). Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators. 
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China.","DOI":"10.24963\/ijcai.2019\/549"},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.inffus.2019.07.005","article-title":"Infrared and visible image fusion via detail preserving adversarial learning","volume":"54","author":"Ma","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","article-title":"DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion","volume":"29","author":"Ma","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","article-title":"AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks","volume":"23","author":"Li","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.optlaseng.2017.05.007","article-title":"A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain","volume":"97","author":"Liu","year":"2017","journal-title":"Opt. Lasers Eng."},{"key":"ref_74","doi-asserted-by":"crossref","first-page":"375","DOI":"10.1016\/j.compeleceng.2016.09.019","article-title":"Image fusion based on object region detection and Non-Subsampled Contourlet Transform","volume":"62","author":"Meng","year":"2017","journal-title":"Comput. Electr. Eng."},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"286","DOI":"10.1016\/j.infrared.2015.10.004","article-title":"A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled Shearlet transform","volume":"73","author":"Zhang","year":"2015","journal-title":"Infrared Phys. 
Technol."},{"key":"ref_76","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.infrared.2017.01.026","article-title":"Fusion of infrared and visible images based on nonsubsampled contourlet transform and sparse K-SVD dictionary learning","volume":"82","author":"Cai","year":"2017","journal-title":"Infrared Phys. Technol."},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.neucom.2016.11.051","article-title":"A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation","volume":"226","author":"Yin","year":"2017","journal-title":"Neurocomputing"},{"key":"ref_78","doi-asserted-by":"crossref","first-page":"1204","DOI":"10.1109\/JSEN.2018.2882239","article-title":"Recent advances in multifunctional sensing technology on a perspective of multi-sensor system: A review","volume":"19","author":"Majumder","year":"2018","journal-title":"IEEE Sens. J."},{"key":"ref_79","doi-asserted-by":"crossref","first-page":"4425","DOI":"10.1007\/s11831-021-09540-7","article-title":"Image Fusion Techniques: A Survey","volume":"28","author":"Kaur","year":"2021","journal-title":"Arch. Comput. Methods Eng."},{"key":"ref_80","doi-asserted-by":"crossref","first-page":"022002","DOI":"10.1088\/2631-7990\/abe0d0","article-title":"Defect inspection technologies for additive manufacturing","volume":"3","author":"Chen","year":"2021","journal-title":"Int. J. Extrem. Manuf."},{"key":"ref_81","first-page":"1","article-title":"End-to-End Ship Detection in SAR Images for Complex Scenes Based on Deep CNNs","volume":"2021","author":"Chen","year":"2021","journal-title":"J. 
Sensors"},{"key":"ref_82","doi-asserted-by":"crossref","first-page":"374","DOI":"10.1016\/j.measurement.2017.08.002","article-title":"Quality inspection of machined metal parts using an image fusion technique","volume":"111","author":"Ortega","year":"2017","journal-title":"Measurement"},{"key":"ref_83","doi-asserted-by":"crossref","first-page":"017004","DOI":"10.1117\/1.OE.52.1.017004","article-title":"Fusing concurrent visible and infrared videos for improved tracking performance","volume":"52","author":"Chan","year":"2013","journal-title":"Opt. Eng."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1016\/S1566-2535(03)00046-0","article-title":"A general framework for multiresolution image fusion: From pixels to regions","volume":"4","author":"Piella","year":"2003","journal-title":"Inf. Fusion"},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/S0141-9382(97)00014-0","article-title":"Fusion of visible and thermal imagery improves situational awareness","volume":"18","author":"Toet","year":"1997","journal-title":"Displays"},{"key":"ref_86","doi-asserted-by":"crossref","first-page":"25","DOI":"10.1016\/S0141-9382(02)00069-0","article-title":"Perceptual evaluation of different image fusion schemes","volume":"24","author":"Toet","year":"2003","journal-title":"Displays"},{"key":"ref_87","doi-asserted-by":"crossref","first-page":"338","DOI":"10.1007\/s10278-007-9044-5","article-title":"Information entropy measure for evaluation of image quality","volume":"21","author":"Tsai","year":"2008","journal-title":"J. Digit. Imaging"},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"430","DOI":"10.1109\/TIP.2005.859378","article-title":"Image information and visual quality","volume":"15","author":"Sheikh","year":"2006","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"228","DOI":"10.1109\/TIP.2004.823821","article-title":"Gradient-based multiresolution image fusion","volume":"13","author":"Petrovic","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_90","doi-asserted-by":"crossref","first-page":"023522","DOI":"10.1117\/1.2945910","article-title":"Assessment of image fusion procedures using entropy, image quality, and multispectral classification","volume":"2","year":"2008","journal-title":"J. Appl. Remote Sens."},{"key":"ref_91","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1016\/j.inffus.2011.08.002","article-title":"A new image fusion performance metric based on visual information fidelity","volume":"14","author":"Han","year":"2013","journal-title":"Inf. Fusion"},{"key":"ref_92","doi-asserted-by":"crossref","unstructured":"Petrovic, V., and Xydeas, C. (2005, January 17\u201320). Objective image fusion performance characterization. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV\u201905), Beijing, China.","DOI":"10.1109\/ICCV.2005.175"},{"key":"ref_93","doi-asserted-by":"crossref","first-page":"2827","DOI":"10.1109\/TGRS.2012.2213604","article-title":"A Sparse Image Fusion Algorithm with Application to Pan-Sharpening","volume":"51","author":"Zhu","year":"2013","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_94","unstructured":"Piella, G., and Heijmans, H. (2003, January 14\u201317). A new quality metric for image fusion. Proceedings of the 2003 International Conference on Image Processing (Cat. 
No 03CH37429), Barcelona, Spain."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/599\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T17:59:21Z","timestamp":1760119161000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/599"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,4]]},"references-count":94,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2023,1]]}},"alternative-id":["s23020599"],"URL":"https:\/\/doi.org\/10.3390\/s23020599","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1,4]]}}}