{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,17]],"date-time":"2026-04-17T04:37:12Z","timestamp":1776400632179,"version":"3.51.2"},"reference-count":80,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2024,10,13]],"date-time":"2024-10-13T00:00:00Z","timestamp":1728777600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["92152109"],"award-info":[{"award-number":["92152109"]}]},{"name":"National Natural Science Foundation of China","award":["62261053"],"award-info":[{"award-number":["62261053"]}]},{"name":"National Natural Science Foundation of China","award":["2024CX02065"],"award-info":[{"award-number":["2024CX02065"]}]},{"name":"National Natural Science Foundation of China","award":["BNR2019TD01022"],"award-info":[{"award-number":["BNR2019TD01022"]}]},{"name":"National Natural Science Foundation of China","award":["2023TSYCTD0012"],"award-info":[{"award-number":["2023TSYCTD0012"]}]},{"name":"Technology Innovation Program of Beijing Institute of Technology","award":["92152109"],"award-info":[{"award-number":["92152109"]}]},{"name":"Technology Innovation Program of Beijing Institute of Technology","award":["62261053"],"award-info":[{"award-number":["62261053"]}]},{"name":"Technology Innovation Program of Beijing Institute of Technology","award":["2024CX02065"],"award-info":[{"award-number":["2024CX02065"]}]},{"name":"Technology Innovation Program of Beijing Institute of Technology","award":["BNR2019TD01022"],"award-info":[{"award-number":["BNR2019TD01022"]}]},{"name":"Technology Innovation Program of Beijing Institute of Technology","award":["2023TSYCTD0012"],"award-info":[{"award-number":["2023TSYCTD0012"]}]},{"name":"Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology 
(BNRist)","award":["92152109"],"award-info":[{"award-number":["92152109"]}]},{"name":"Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist)","award":["62261053"],"award-info":[{"award-number":["62261053"]}]},{"name":"Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist)","award":["2024CX02065"],"award-info":[{"award-number":["2024CX02065"]}]},{"name":"Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist)","award":["BNR2019TD01022"],"award-info":[{"award-number":["BNR2019TD01022"]}]},{"name":"Cross-Media Intelligent Technology Project of Beijing National Research Center for Information Science and Technology (BNRist)","award":["2023TSYCTD0012"],"award-info":[{"award-number":["2023TSYCTD0012"]}]},{"name":"Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program","award":["92152109"],"award-info":[{"award-number":["92152109"]}]},{"name":"Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program","award":["62261053"],"award-info":[{"award-number":["62261053"]}]},{"name":"Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program","award":["2024CX02065"],"award-info":[{"award-number":["2024CX02065"]}]},{"name":"Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program","award":["BNR2019TD01022"],"award-info":[{"award-number":["BNR2019TD01022"]}]},{"name":"Tianshan Talent Training Project-Xinjiang Science and Technology Innovation Team Program","award":["2023TSYCTD0012"],"award-info":[{"award-number":["2023TSYCTD0012"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>The fusion of infrared and visible images can fully leverage the respective 
advantages of each, providing more comprehensive and richer information. This is applicable in fields such as military surveillance, night navigation, and environmental monitoring. In this paper, a novel infrared and visible image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. The source images are first decomposed into low- and high-frequency bands by the LP. Sparse representation has proven highly effective in image fusion and is therefore used to process the low-frequency band. Guided filtering has excellent edge-preserving properties and can effectively maintain the spatial continuity of the high-frequency bands; it is therefore combined with the weighted sum of eight-neighborhood-based modified Laplacian (WSEML) to process the high-frequency bands. Finally, the inverse LP transform is used to reconstruct the fused image. We conducted simulation experiments on the publicly available TNO dataset to validate the superiority of the proposed algorithm in fusing infrared and visible images. 
Our algorithm preserves both the thermal radiation characteristics of the infrared image and the detailed features of the visible image.<\/jats:p>","DOI":"10.3390\/rs16203804","type":"journal-article","created":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T07:47:05Z","timestamp":1728892025000},"page":"3804","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":35,"title":["Infrared and Visible Image Fusion via Sparse Representation and Guided Filtering in Laplacian Pyramid Domain"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7354-7494","authenticated-orcid":false,"given":"Liangliang","family":"Li","sequence":"first","affiliation":[{"name":"School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3660-1014","authenticated-orcid":false,"given":"Yan","family":"Shi","sequence":"additional","affiliation":[{"name":"School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"given":"Ming","family":"Lv","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Xinjiang University, Urumqi 830046, China"}]},{"given":"Zhenhong","family":"Jia","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Xinjiang University, Urumqi 830046, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4333-4468","authenticated-orcid":false,"given":"Minqin","family":"Liu","sequence":"additional","affiliation":[{"name":"National Key Laboratory of Space Integrated Information System, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9828-1976","authenticated-orcid":false,"given":"Xiaobin","family":"Zhao","sequence":"additional","affiliation":[{"name":"School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, 
China"}]},{"given":"Xueyu","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1785-4024","authenticated-orcid":false,"given":"Hongbing","family":"Ma","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Tsinghua University, Beijing 100084, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,10,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","article-title":"A general framework for image fusion based on multi-scale transform and sparse representation","volume":"24","author":"Liu","year":"2015","journal-title":"Inf. Fusion"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Huo, X., Deng, Y., and Shao, K. (2022). Infrared and visible image fusion with significant target enhancement. Entropy, 24.","DOI":"10.3390\/e24111633"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Luo, Y., and Luo, Z. (2023). Infrared and visible image fusion: Methods, datasets, applications, and prospects. Appl. Sci., 13.","DOI":"10.3390\/app131910891"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Li, L., Lv, M., Jia, Z., Jin, Q., Liu, M., Chen, L., and Ma, H. (2023). An effective infrared and visible image fusion approach via rolling guidance filtering and gradient saliency map. Remote Sens., 15.","DOI":"10.3390\/rs15102486"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ma, X., Li, T., and Deng, J. (2024). Infrared and visible image fusion algorithm based on double-domain transform filter and contrast transform feature extraction. Sensors, 24.","DOI":"10.3390\/s24123949"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, Q., Yan, X., Xie, W., and Wang, Y. (2024). Image fusion method based on snake visual imaging mechanism and PCNN. 
Sensors, 24.","DOI":"10.3390\/s24103077"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Feng, B., Ai, C., and Zhang, H. (2024). Fusion of infrared and visible light images based on improved adaptive dual-channel pulse coupled neural network. Electronics, 13.","DOI":"10.3390\/electronics13122337"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1109\/TCI.2022.3151472","article-title":"Injected infrared and visible image fusion via L1 decomposition model and guided filtering","volume":"8","author":"Yang","year":"2022","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhang, X., Boutat, D., and Liu, D. (2023). Applications of fractional operator in image processing and stability of control systems. Fractal Fract., 7.","DOI":"10.3390\/fractalfract7050359"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"703","DOI":"10.1016\/j.isatra.2022.03.003","article-title":"Multi-focus image fusion based on fractional order differentiation and closed image matting","volume":"129","author":"Zhang","year":"2022","journal-title":"ISA Trans."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1688","DOI":"10.1049\/ipr2.12137","article-title":"Medical image fusion and noise suppression with fractional-order total variation and multi-scale decomposition","volume":"15","author":"Zhang","year":"2021","journal-title":"IET Image Process."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"160","DOI":"10.1016\/j.isatra.2020.07.040","article-title":"Adaptive fractional multi-scale edge-preserving decomposition and saliency detection fusion algorithm","volume":"107","author":"Yan","year":"2020","journal-title":"ISA Trans."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1631\/FITEE.1900737","article-title":"Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy 
sets","volume":"21","author":"Zhang","year":"2020","journal-title":"Front. Inf. Technol. Electron. Eng."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"5500","DOI":"10.1109\/TAC.2024.3365726","article-title":"Fault-tolerant prescribed performance control of wheeled mobile robots: A mixed-gain adaption approach","volume":"69","author":"Zhang","year":"2024","journal-title":"IEEE Trans. Autom. Control"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1557","DOI":"10.1109\/JAS.2023.123831","article-title":"Prescribed performance tracking control of time-delay nonlinear systems with output constraints","volume":"11","author":"Zhang","year":"2024","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Wu, D., Wang, Y., Wang, H., Wang, F., and Gao, G. (2024). DCFNet: Infrared and visible image fusion network based on discrete wavelet transform and convolutional neural network. Sensors, 24.","DOI":"10.3390\/s24134065"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Wei, Q., Liu, Y., Jiang, X., Zhang, B., Su, Q., and Yu, M. (2024). DDFNet-A: Attention-based dual-branch feature decomposition fusion network for infrared and visible image fusion. Remote Sens., 16.","DOI":"10.3390\/rs16101795"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Li, X., He, H., and Shi, J. (2024). HDCCT: Hybrid densely connected CNN and transformer for infrared and visible image fusion. Electronics, 13.","DOI":"10.3390\/electronics13173470"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Mao, Q., Zhai, W., Lei, X., Wang, Z., and Liang, Y. (2024). CT and MRI image fusion via coupled feature-learning GAN. Electronics, 13.","DOI":"10.3390\/electronics13173491"},{"key":"ref_20","first-page":"5016412","article-title":"SwinFuse: A residual swin transformer fusion network for infrared and visible images","volume":"71","author":"Wang","year":"2023","journal-title":"IEEE Trans. Instrum. 
Meas."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/JAS.2022.105686","article-title":"SwinFusion: Cross-domain long-range learning for general image fusion via swin transformer","volume":"9","author":"Ma","year":"2022","journal-title":"IEEE-CAA J. Autom. Sin."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Gao, F., Lang, P., Yeh, C., Li, Z., Ren, D., and Yang, J. (2024). An interpretable target-aware vision transformer for polarimetric HRRP target recognition with a novel attention loss. Remote Sens., 16.","DOI":"10.36227\/techrxiv.172101236.64867447\/v1"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Huang, L., Chen, Y., and He, X. (2024). Spectral-spatial Mamba for hyperspectral image classification. Remote Sens., 16.","DOI":"10.3390\/rs16132449"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"10535","DOI":"10.1109\/TPAMI.2023.3261282","article-title":"Visible and infrared image fusion using deep learning","volume":"45","author":"Zhang","year":"2023","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Zhang, X., Ye, P., and Xiao, G. (2020, January 14\u201319). VIFB: A visible and infrared image fusion benchmark. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00060"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"102147","DOI":"10.1016\/j.inffus.2023.102147","article-title":"CrossFuse: A novel cross attention mechanism based infrared and visible image fusion approach","volume":"103","author":"Li","year":"2024","journal-title":"Inf. Fusion"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1016\/j.inffus.2017.10.007","article-title":"Deep learning for pixel-level image fusion: Recent advances and future prospects","volume":"42","author":"Liu","year":"2018","journal-title":"Inf. 
Fusion"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"1850018","DOI":"10.1142\/S0219691318500182","article-title":"Infrared and visible image fusion with convolutional neural networks","volume":"16","author":"Liu","year":"2018","journal-title":"Int. J. Wavelets Multiresolut. Inf. Process."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"104015","DOI":"10.1016\/j.jvcir.2023.104015","article-title":"Multi-scale convolutional neural networks and saliency weight maps for infrared and visible image fusion","volume":"98","author":"Yang","year":"2024","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wei, H., Fu, X., Wang, Z., and Zhao, J. (2024). Infrared\/Visible light fire image fusion method based on generative adversarial network of wavelet-guided pooling vision transformer. Forests, 15.","DOI":"10.3390\/f15060976"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","article-title":"DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion","volume":"29","author":"Ma","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"127391","DOI":"10.1016\/j.neucom.2024.127391","article-title":"DUGAN: Infrared and visible image fusion based on dual fusion paths and a U-type discriminator","volume":"578","author":"Chang","year":"2024","journal-title":"Neurocomputing"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Lv, M., Jia, Z., Li, L., and Ma, H. (2023). Multi-focus image fusion via PAPCNN and fractal dimension in NSST domain. Mathematics, 11.","DOI":"10.3390\/math11183803"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Lv, M., Li, L., Jin, Q., Jia, Z., Chen, L., and Ma, H. (2023). Multi-focus image fusion via distance-weighted regional energy and structure tensor in NSCT domain. 
Sensors, 23.","DOI":"10.3390\/s23136135"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Li, L., Lv, M., Jia, Z., and Ma, H. (2023). Sparse representation-based multi-focus image fusion method via local energy in shearlet domain. Sensors, 23.","DOI":"10.3390\/s23062888"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","article-title":"Infrared and visible image fusion methods and applications: A survey","volume":"45","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.inffus.2020.06.013","article-title":"Multi-focus image fusion: A survey of the state of the art","volume":"64","author":"Liu","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_38","first-page":"5011615","article-title":"SFCFusion: Spatial-frequency collaborative infrared and visible image fusion","volume":"73","author":"Chen","year":"2024","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Chen, H., Deng, L., Zhu, L., and Dong, M. (2023). ECFuse: Edge-consistent and correlation-driven fusion framework for infrared and visible image fusion. Sensors, 23.","DOI":"10.3390\/s23198071"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"104701","DOI":"10.1016\/j.infrared.2023.104701","article-title":"Infrared and visible image fusion based on domain transform filtering and sparse representation","volume":"131","author":"Li","year":"2023","journal-title":"Infrared Phys. Technol."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Chen, Y., and Liu, Y. (IEEE Sens. J., 2024). Multi-focus image fusion with complex sparse representation, IEEE Sens. 
J., early access.","DOI":"10.1109\/JSEN.2024.3411588"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"985","DOI":"10.1016\/S0167-8655(02)00029-6","article-title":"Multifocus image fusion using artificial neural networks","volume":"23","author":"Li","year":"2002","journal-title":"Pattern Recognit. Lett."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"5510122","DOI":"10.1109\/TGRS.2024.3367127","article-title":"Iterative Gaussian\u2013Laplacian pyramid network for hyperspectral image classification","volume":"62","author":"Chang","year":"2024","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"532","DOI":"10.1109\/TCOM.1983.1095851","article-title":"The laplacian pyramid as a compact image code","volume":"31","author":"Burt","year":"1983","journal-title":"IEEE Trans. Commun."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"64","DOI":"10.1016\/j.ins.2019.08.066","article-title":"Infrared and visible image fusion based on target-enhanced multiscale transform decomposition","volume":"508","author":"Chen","year":"2020","journal-title":"Inf. Sci."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1109\/TIM.2018.2838778","article-title":"Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain","volume":"68","author":"Yin","year":"2019","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1397","DOI":"10.1109\/TPAMI.2012.213","article-title":"Guided image filtering","volume":"35","author":"He","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"2864","DOI":"10.1109\/TIP.2013.2244222","article-title":"Image fusion with guided filtering","volume":"22","author":"Li","year":"2013","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_49","unstructured":"(2024, May 01). Available online: https:\/\/figshare.com\/articles\/dataset\/TNO_Image_Fusion_Dataset\/1008029."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"131","DOI":"10.1016\/j.inffus.2005.09.001","article-title":"Pixel-based and region-based image fusion schemes using ICA bases","volume":"8","author":"Mitianoudis","year":"2007","journal-title":"Inf. Fusion"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1109\/JSEN.2015.2478655","article-title":"Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform","volume":"16","author":"Bavirisetti","year":"2016","journal-title":"IEEE Sens. J."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"52","DOI":"10.1016\/j.infrared.2016.01.009","article-title":"Two-scale image fusion of visible and infrared images using saliency detection","volume":"76","author":"Bavirisetti","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"MDLatLRR: A novel decomposition method for infrared and visible image fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, H., and Xiao, Y. (2020, January 7\u201312). Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6975"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-Nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. 
Fusion"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"385","DOI":"10.1109\/TCI.2024.3369398","article-title":"EgeFusion: Towards edge gradient enhancement in infrared and visible image fusion with multi-scale transform","volume":"10","author":"Tang","year":"2024","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Xiang, W., Shen, J., Zhang, L., and Zhang, Y. (2024). Infrared and visual image fusion based on a local-extrema-driven image filter. Sensors, 24.","DOI":"10.3390\/s24072271"},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"1508","DOI":"10.3724\/SP.J.1004.2008.01508","article-title":"Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain","volume":"34","author":"Qu","year":"2008","journal-title":"Acta Autom. Sin."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Li, S., Han, M., Qin, Y., and Li, Q. (2024). Self-attention progressive network for infrared and visible image fusion. Remote Sens., 16.","DOI":"10.3390\/rs16183370"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Li, L., Zhao, X., Hou, H., Zhang, X., Lv, M., Jia, Z., and Ma, H. (2024). Fractal dimension-based multi-focus image fusion via coupled neural P systems in NSCT domain. Fractal Fract., 8.","DOI":"10.3390\/fractalfract8100554"},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"102837","DOI":"10.1016\/j.displa.2024.102837","article-title":"MSI-DTrans: A multi-focus image fusion using multilayer semantic interaction and dynamic transformer","volume":"85","author":"Zhai","year":"2024","journal-title":"Displays"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"12389","DOI":"10.1007\/s11042-020-10462-y","article-title":"A novel multiscale transform decomposition based multi-focus image fusion framework","volume":"80","author":"Li","year":"2021","journal-title":"Multimed. 
Tools Appl."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"106603","DOI":"10.1016\/j.neunet.2024.106603","article-title":"Multi-focus image fusion with parameter adaptive dual channel dynamic threshold neural P systems","volume":"179","author":"Li","year":"2024","journal-title":"Neural Netw."},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"94","DOI":"10.1109\/TPAMI.2011.109","article-title":"Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study","volume":"34","author":"Liu","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"105210","DOI":"10.1016\/j.imavis.2024.105210","article-title":"W-shaped network combined with dual transformers and edge protection for multi-focus image fusion","volume":"150","author":"Zhai","year":"2024","journal-title":"Image Vis. Comput."},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Haghighat, M., and Razian, M. (2014, January 15\u201317). Fast-FMI: Non-reference image fusion metric. Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies, Astana, Kazakhstan.","DOI":"10.1109\/ICAICT.2014.7036000"},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"111041","DOI":"10.1016\/j.patcog.2024.111041","article-title":"MMAE: A universal image fusion method via mask attention mechanism","volume":"158","author":"Wang","year":"2025","journal-title":"Pattern Recognit."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"120615","DOI":"10.1016\/j.eswa.2023.120615","article-title":"Hyperspectral pathology image classification using dimension-driven multi-path attention residual network","volume":"230","author":"Zhang","year":"2023","journal-title":"Expert Syst. 
Appl."},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"1552","DOI":"10.1109\/JBHI.2024.3350245","article-title":"FD-Net: Feature distillation network for oral squamous cell carcinoma lymph node segmentation in hyperspectral imagery","volume":"28","author":"Zhang","year":"2024","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2014.10.004","article-title":"Multi-focus image fusion using dictionary-based sparse representation","volume":"25","author":"Nejati","year":"2015","journal-title":"Inf. Fusion"},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1016\/j.inffus.2020.08.022","article-title":"MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion","volume":"66","author":"Zhang","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Xu, H., Ma, J., and Le, Z. (2020, January 7\u201312). FusionDN: A unified densely connected network for image fusion. Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6936"},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A unified unsupervised image fusion network","volume":"44","author":"Xu","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Zhang, Y., and Xiang, W. (2022). Local extreme map guided multi-modal brain image fusion. Front. Neurosci., 16.","DOI":"10.3389\/fnins.2022.1055451"},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1016\/j.inffus.2022.11.014","article-title":"ZMFF: Zero-shot multi-focus image fusion","volume":"92","author":"Hu","year":"2023","journal-title":"Inf. 
Fusion"},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Li, J., Zhang, J., Yang, C., Liu, H., Zhao, Y., and Ye, Y. (2023). Comparative analysis of pixel-level fusion algorithms and a new high-resolution dataset for SAR and optical image fusion. Remote Sens., 15.","DOI":"10.3390\/rs15235514"},{"key":"ref_77","doi-asserted-by":"crossref","unstructured":"Li, L., Ma, H., and Jia, Z. (2022). Multiscale geometric analysis fusion-based unsupervised change detection in remote sensing images via FLICM model. Entropy, 24.","DOI":"10.3390\/e24020291"},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Li, L., Ma, H., Zhang, X., Zhao, X., Lv, M., and Jia, Z. (2024). Synthetic aperture radar image change detection based on principal component analysis and two-level clustering. Remote Sens., 16.","DOI":"10.3390\/rs16111861"},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Li, L., Ma, H., and Jia, Z. (2021). Change detection from SAR images based on convolutional neural networks guided by saliency enhancement. Remote Sens., 13.","DOI":"10.3390\/rs13183697"},{"key":"ref_80","doi-asserted-by":"crossref","first-page":"1077","DOI":"10.1007\/s12524-023-01674-4","article-title":"Gamma correction-based automatic unsupervised change detection in SAR images via FLICM model","volume":"51","author":"Li","year":"2023","journal-title":"J. Indian Soc. 
Remote Sens."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/20\/3804\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T16:12:27Z","timestamp":1760112747000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/20\/3804"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,13]]},"references-count":80,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2024,10]]}},"alternative-id":["rs16203804"],"URL":"https:\/\/doi.org\/10.3390\/rs16203804","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,13]]}}}