{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T08:53:17Z","timestamp":1774255997716,"version":"3.50.1"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T00:00:00Z","timestamp":1774224000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T00:00:00Z","timestamp":1774224000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Pattern Anal Applic"],"published-print":{"date-parts":[[2026,6]]},"DOI":"10.1007\/s10044-026-01649-4","type":"journal-article","created":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T07:52:14Z","timestamp":1774252334000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["FDCFusion: frequency-domain CAFormer-CNN fusion network for infrared and visible image fusion"],"prefix":"10.1007","volume":"29","author":[{"given":"Huifang","family":"Kong","sequence":"first","affiliation":[]},{"given":"Zixiang","family":"Dong","sequence":"additional","affiliation":[]},{"given":"Dacheng","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,3,23]]},"reference":[{"key":"1649_CR1","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1016\/j.inffus.2021.06.008","volume":"76","author":"H Zhang","year":"2021","unstructured":"Zhang H, Xu H, Tian X, Jiang J, Ma J (2021) Image fusion meets deep learning: a survey and perspective. Inf Fusion 76:323\u2013336","journal-title":"Inf Fusion"},{"issue":"19","key":"1649_CR2","doi-asserted-by":"publisher","DOI":"10.3390\/s23198071","volume":"23","author":"H Chen","year":"2023","unstructured":"Chen H, Li W, Chen J, Hou Y, Chen C, Zhou Y, Li B (2023) ECFuse: edge-consistent and correlation-driven fusion framework for infrared and visible image fusion. Sensors 23(19):8071","journal-title":"Sensors"},{"key":"1649_CR3","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1016\/j.inffus.2018.06.001","volume":"46","author":"Y Cao","year":"2019","unstructured":"Cao Y, Zhang H, Wu D, Shen L (2019) Pedestrian detection with unsupervised multispectral feature learning using deep neural networks. Inf Fusion 46:206\u2013217. https:\/\/doi.org\/10.1016\/j.inffus.2018.06.001","journal-title":"Inf Fusion"},{"key":"1649_CR4","doi-asserted-by":"publisher","unstructured":"Ha, Q., Watanabe, K., Karasawa, T., Ushiku, Y., & Harada, T. (2017). MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. In: Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5108\u20135115). https:\/\/doi.org\/10.1109\/IROS.2017.8206396","DOI":"10.1109\/IROS.2017.8206396"},{"key":"1649_CR5","doi-asserted-by":"crossref","unstructured":"Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y., Wang, X., Feng, J., & Yan, S. (2022). MetaFormer is actually what you need for vision. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 
10809\u201310819).","DOI":"10.1109\/CVPR52688.2022.01055"},{"issue":"2","key":"1649_CR6","doi-asserted-by":"publisher","first-page":"896","DOI":"10.1109\/TPAMI.2023.3329173","volume":"46","author":"W Yu","year":"2024","unstructured":"Yu W, Luo M, Zhou P, Si C, Zhou Y, Wang X, Feng J, Yan S (2024) MetaFormer baselines for vision. IEEE Trans Pattern Anal Mach Intell 46(2):896\u2013912","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1649_CR7","doi-asserted-by":"publisher","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","volume":"24","author":"Y Liu","year":"2015","unstructured":"Liu Y, Chen X, Wang Z (2015) A general framework for image fusion based on multi-scale transform and sparse representation. Inf Fusion 24:147\u2013164","journal-title":"Inf Fusion"},{"issue":"4","key":"1649_CR8","doi-asserted-by":"publisher","first-page":"532","DOI":"10.1109\/TCOM.1983.1095851","volume":"31","author":"PJ Burt","year":"1983","unstructured":"Burt PJ, Adelson EH (1983) The Laplacian pyramid as a compact image code. IEEE Trans Commun 31(4):532\u2013540","journal-title":"IEEE Trans Commun"},{"issue":"20","key":"1649_CR9","doi-asserted-by":"publisher","DOI":"10.3390\/rs16203804","volume":"16","author":"L Li","year":"2024","unstructured":"Li L, Shi Y, Lv M, Jia Z, Liu M, Zhao X, Zhang X, Ma H (2024) Infrared and visible image fusion via sparse representation and guided filtering in Laplacian pyramid domain. Remote Sens 16(20):3804","journal-title":"Remote Sens"},{"key":"1649_CR10","doi-asserted-by":"crossref","unstructured":"Li, X., Li, X., Ye, T., Cheng, X., Liu, W., & Tan, H. (2024). Bridging the gap between multi-focus and multi-modal: a focused integration framework for multi-modal image fusion. In: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV) (pp. 1628\u20131637).","DOI":"10.1109\/WACV57701.2024.00165"},{"issue":"5","key":"1649_CR11","doi-asserted-by":"publisher","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","volume":"28","author":"H Li","year":"2019","unstructured":"Li H, Wu X-J (2019) DenseFuse: a fusion approach to infrared and visible images. IEEE Trans Image Process 28(5):2614\u20132623. https:\/\/doi.org\/10.1109\/TIP.2018.2887342","journal-title":"IEEE Trans Image Process"},{"issue":"22","key":"1649_CR12","first-page":"22168","volume":"22","author":"Y Jiang","year":"2022","unstructured":"Jiang Y, Ma J, Chen C, Wang Z, Liu Y, Feng Y (2022) SEDRFuse: a symmetric network for infrared and visible image fusion with residual dense block and attention mechanism. IEEE Sens J 22(22):22168\u201322180","journal-title":"IEEE Sens J"},{"key":"1649_CR13","doi-asserted-by":"publisher","unstructured":"Wang, Z., Liu, J., Xu, H., & Ma, J. (2022). SwinFuse: A simple and strong fusion baseline with Swin Transformer. arXiv preprint arXiv:2203.11450. https:\/\/doi.org\/10.48550\/arXiv.2203.11450","DOI":"10.48550\/arXiv.2203.11450"},{"issue":"5","key":"1649_CR14","doi-asserted-by":"publisher","first-page":"2510","DOI":"10.1109\/TIP.2019.2899947","volume":"28","author":"J Ma","year":"2019","unstructured":"Ma J, Yu W, Liang P, Li C, Jiang J (2019) FusionGAN: a generative adversarial network for infrared and visible image fusion. IEEE Trans Image Process 28(5):2510\u20132524. 
https:\/\/doi.org\/10.1109\/TIP.2019.2899947","journal-title":"IEEE Trans Image Process"},{"issue":"12","key":"1649_CR15","doi-asserted-by":"crossref","first-page":"5468","DOI":"10.1109\/TNNLS.2021.3068762","volume":"32","author":"H Xu","year":"2021","unstructured":"Xu H, Ma J, Jiang J, Guo X, Ling H (2021) DDcGAN: dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Neural Netw Learn Syst 32(12):5468\u20135482","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1649_CR16","doi-asserted-by":"publisher","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","volume":"23","author":"H Li","year":"2021","unstructured":"Li H, Wu X-J, Kittler J, Li S (2021) RFN-Nest: an end-to-end residual fusion network for infrared and visible image fusion. IEEE Trans Multimed 23:1383\u20131396","journal-title":"IEEE Trans Multimed"},{"issue":"12","key":"1649_CR17","first-page":"10161","volume":"34","author":"Z Zhao","year":"2023","unstructured":"Zhao Z, Liu X, Xu Y, Zhang Y, Xu H, Ma J (2023) AUIF: algorithm unrolling for infrared and visible image fusion. IEEE Trans Neural Netw Learn Syst 34(12):10161\u201310174","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1649_CR18","first-page":"3342","volume":"32","author":"J Zhang","year":"2023","unstructured":"Zhang J, Ma J (2023) SDNet: a gradient and intensity-based network for multi-task image fusion. IEEE Trans Image Process 32:3342\u20133355","journal-title":"IEEE Trans Image Process"},{"key":"1649_CR19","doi-asserted-by":"publisher","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2020). An image is worth 16\u00d716 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. https:\/\/doi.org\/10.48550\/arXiv.2010.11929","DOI":"10.48550\/arXiv.2010.11929"},{"issue":"1","key":"1649_CR20","doi-asserted-by":"publisher","DOI":"10.1117\/1.JEI.34.1.013024","volume":"34","author":"S Liu","year":"2025","unstructured":"Liu S, Zhao Y (2025) TransUNetFormer: let hybrid convolutional neural network + transformer encoder and decoder provide powerful support for remote sensing image segmentation. J Electron Imaging 34(1):013024","journal-title":"J Electron Imaging"},{"key":"1649_CR21","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Xu, H., Zhang, J., Liu, X., & Ma, J. (2023). CDDFuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5906\u20135916).","DOI":"10.1109\/CVPR52729.2023.00572"},{"issue":"9","key":"1649_CR22","first-page":"4420","volume":"27","author":"L Wang","year":"2018","unstructured":"Wang L, Wang S, Liang Z (2018) GLADNet: low-light enhancement network with global illumination awareness. IEEE Trans Image Process 27(9):4420\u20134434","journal-title":"IEEE Trans Image Process"},{"key":"1649_CR23","volume":"111","author":"C Li","year":"2021","unstructured":"Li C, Song D, Tong R, Tang M (2021) Illumination-aware Faster R-CNN for robust multispectral pedestrian detection. 
Pattern Recogn 111:107697","journal-title":"Pattern Recogn"},{"key":"1649_CR24","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","volume":"83\u201384","author":"L Tang","year":"2022","unstructured":"Tang L, Deng L, Ma J, Huang J (2022) PIAFusion: a progressive infrared and visible image fusion network based on illumination aware. Inf Fusion 83\u201384:79\u201392","journal-title":"Inf Fusion"},{"key":"1649_CR25","doi-asserted-by":"publisher","first-page":"6231","DOI":"10.1109\/TIP.2025.3607623","volume":"34","author":"X Li","year":"2025","unstructured":"Li X, Li X, Tan T, Li H, Ye T (2025) UMCFuse: a unified multiple complex scenes infrared and visible image fusion framework. IEEE Trans Image Process 34:6231\u20136245","journal-title":"IEEE Trans Image Process"},{"key":"1649_CR26","doi-asserted-by":"crossref","unstructured":"Yi, X., Xu, H., Zhang, H., Tang, L., & Ma, J. (2024). Text-IF: Leveraging semantic text guidance for degradation-aware and interactive image fusion. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 23045\u201323054).","DOI":"10.1109\/CVPR52733.2024.02552"},{"key":"1649_CR27","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2026.104130","author":"X Li","year":"2026","unstructured":"Li X, Liu W, Li X, Zhou F, Li H, Nie F (2026) All-weather multi-modality image fusion: unified framework and 100k benchmark. Inf Fusion. https:\/\/doi.org\/10.1016\/j.inffus.2026.104130","journal-title":"Inf Fusion"},{"key":"1649_CR28","unstructured":"Gonzalez, R. C., & Woods, R. E. (2018). Digital image processing (4th ed.). Pearson."},{"key":"1649_CR29","unstructured":"Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2017). Density estimation using Real NVP. In: International Conference on Learning Representations (ICLR)."},{"key":"1649_CR30","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted Residuals and Linear Bottlenecks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR.2018.00474"},{"key":"1649_CR31","doi-asserted-by":"publisher","first-page":"249","DOI":"10.1016\/j.dib.2017.09.027","volume":"15","author":"A Toet","year":"2017","unstructured":"Toet A (2017) The TNO multiband image data collection. Data Brief 15:249\u2013251. https:\/\/doi.org\/10.1016\/j.dib.2017.09.027","journal-title":"Data Brief"},{"issue":"1","key":"1649_CR32","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","volume":"44","author":"H Xu","year":"2022","unstructured":"Xu H, Ma J, Jiang J, Guo X, Ling H (2022) U2Fusion: A unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell 44(1):502\u2013518. https:\/\/doi.org\/10.1109\/TPAMI.2020.3012548","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1649_CR33","first-page":"1","volume":"90","author":"Y Zheng","year":"2024","unstructured":"Zheng Y, Zhang Z, Zhang L (2024) Frequency integration and spatial compensation network for infrared and visible image fusion. Inf Fusion 90:1\u201320","journal-title":"Inf Fusion"},{"key":"1649_CR34","doi-asserted-by":"publisher","DOI":"10.1016\/j.optlaseng.2024.108094","volume":"176","author":"W Tang","year":"2024","unstructured":"Tang W, He F, Liu Y (2024) MPCFusion: Multi-scale parallel cross fusion for infrared and visible images via convolution and vision transformer. Opt Lasers Eng 176:108094. 
https:\/\/doi.org\/10.1016\/j.optlaseng.2024.108094","journal-title":"Opt Lasers Eng"},{"key":"1649_CR35","doi-asserted-by":"crossref","unstructured":"Hu, K., Zhang, Q., Yuan, M., & Zhang, Y. (2024). SFDFusion: An Efficient Spatial-Frequency Domain Fusion Network for Infrared and Visible Image Fusion. arXiv preprint arXiv:2410.22837.","DOI":"10.3233\/FAIA240524"},{"key":"1649_CR36","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2025.111391","volume":"162","author":"L Guo","year":"2025","unstructured":"Guo L, Luo X, Liu Y, Zhang X, Li H (2025) SAM-guided multi-level collaborative transformer for infrared and visible image fusion. Pattern Recogn 162:111391","journal-title":"Pattern Recogn"},{"key":"1649_CR37","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","volume":"45","author":"J Ma","year":"2019","unstructured":"Ma J, Ma Y, Li C (2019) Infrared and visible image fusion methods and applications: A survey. Inf Fusion 45:153\u2013178","journal-title":"Inf Fusion"},{"key":"1649_CR38","doi-asserted-by":"publisher","unstructured":"Liu, J., Fan, X., Huang, Z., Wu, G., Liu, R., Zhong, W., & Luo, Z. (2022). Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp.5792\u20135801). https:\/\/doi.org\/10.1109\/CVPR52688.2022.00573","DOI":"10.1109\/CVPR52688.2022.00573"},{"key":"1649_CR39","doi-asserted-by":"crossref","unstructured":"Wang, C.-Y., Bochkovskiy, A., & Liao, H.-Y. M. (2023). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 
7464\u20137475).","DOI":"10.1109\/CVPR52729.2023.00721"}],"container-title":["Pattern Analysis and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10044-026-01649-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10044-026-01649-4","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10044-026-01649-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T07:52:19Z","timestamp":1774252339000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10044-026-01649-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,23]]},"references-count":39,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,6]]}},"alternative-id":["1649"],"URL":"https:\/\/doi.org\/10.1007\/s10044-026-01649-4","relation":{},"ISSN":["1433-7541","1433-755X"],"issn-type":[{"value":"1433-7541","type":"print"},{"value":"1433-755X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,23]]},"assertion":[{"value":"14 November 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 March 2026","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 March 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"We declare that there are no financial interests, commercial affiliations, or other potential conflicts of interest that have influenced the objectivity of this research or the writing of this paper. The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"74"}}
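
The block above is a standard Crossref work record for the article (DOI 10.1007/s10044-026-01649-4). As a minimal, illustrative sketch only: a record of this shape can be retrieved from the public Crossref REST API at the endpoint pattern https://api.crossref.org/works/{DOI}, and the field names used below (message, title, author, container-title, references-count, DOI) are taken directly from the record shown here. Operational details such as rate limiting and polite-pool request headers are assumptions to check against the Crossref documentation, not part of this record.

# Illustrative sketch, not part of the record above: fetch a Crossref work
# record via the public REST API and read a few of the fields visible in it.
import json
import urllib.request

DOI = "10.1007/s10044-026-01649-4"  # DOI taken from the record above
url = f"https://api.crossref.org/works/{DOI}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

msg = record["message"]
title = msg["title"][0]                      # "FDCFusion: frequency-domain ..."
authors = [f'{a["given"]} {a["family"]}' for a in msg.get("author", [])]
journal = msg["container-title"][0]          # "Pattern Analysis and Applications"
n_refs = msg["references-count"]             # 39 in this record

print(title)
print(", ".join(authors))
print(f"{journal}, {n_refs} references, DOI https://doi.org/{msg['DOI']}")

Running the sketch should reproduce the headline metadata seen in the JSON: the FDCFusion title, the three authors (Huifang Kong, Zixiang Dong, Dacheng Li), the journal name, and the count of 39 references.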