{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T18:35:29Z","timestamp":1771958129015,"version":"3.50.1"},"reference-count":70,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T00:00:00Z","timestamp":1717545600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The aim of infrared and visible image fusion is to generate a fused image that not only contains salient targets and rich texture details, but also facilitates high-level vision tasks. However, due to the hardware limitations of digital cameras and other devices, there are more low-resolution images in the existing datasets, and low-resolution images are often accompanied by the problem of losing details and structural information. At the same time, existing fusion algorithms focus too much on the visual quality of the fused images, while ignoring the requirements of high-level vision tasks. To address the above challenges, in this paper, we skillfully unite the super-resolution network, fusion network and segmentation network, and propose a super-resolution-based semantic-aware fusion network. First, we design a super-resolution network based on a multi-branch hybrid attention module (MHAM), which aims to enhance the quality and details of the source image, enabling the fusion network to integrate the features of the source image more accurately. Then, a comprehensive information extraction module (STDC) is designed in the fusion network to enhance the network\u2019s ability to extract finer-grained complementary information from the source image. Finally, the fusion network and segmentation network are jointly trained to utilize semantic loss to guide the semantic information back to the fusion network, which effectively improves the performance of the fused images on high-level vision tasks. Extensive experiments show that our method is more effective than other state-of-the-art image fusion methods. In particular, our fused images not only have excellent visual perception effects, but also help to improve the performance of high-level vision tasks.<\/jats:p>","DOI":"10.3390\/s24113665","type":"journal-article","created":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T10:05:50Z","timestamp":1717581950000},"page":"3665","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Semantic-Aware Fusion Network Based on Super-Resolution"],"prefix":"10.3390","volume":"24","author":[{"given":"Lingfeng","family":"Xu","sequence":"first","affiliation":[{"name":"School of Microelectronics, Tianjin University, Tianjin 300072, China"}]},{"given":"Qiang","family":"Zou","sequence":"additional","affiliation":[{"name":"School of Microelectronics, Tianjin University, Tianjin 300072, China"},{"name":"Tianjin International Joint Research Center for Internet of Things, Tianjin 300072, China"},{"name":"Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin University, Tianjin 300072, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,6,5]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Lu, Y., Wu, Y., Liu, B., Zhang, T., Li, B., Chu, Q., and Yu, N. (2020, January 13\u201319). 
Cross-modality person re-identification with shared-specific feature transfer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01339"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1016\/j.inffus.2018.06.005","article-title":"Pedestrian detection with unsupervised multispectral feature learning using deep neural networks","volume":"46","author":"Cao","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Li, C., Zhu, C., Huang, Y., Tang, J., and Wang, L. (2018, September 8\u201314). Cross-modal ranking with soft consistency and noisy labels for robust RGB-T tracking. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01261-8_49"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ha, Q., Watanabe, K., Karasawa, T., Ushiku, Y., and Harada, T. (2017, September 24\u201328). MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. Proceedings of the 2017 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada.","DOI":"10.1109\/IROS.2017.8206396"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","article-title":"Infrared and visible image fusion methods and applications: A survey","volume":"45","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1016\/j.sigpro.2013.10.010","article-title":"Region level based multi-focus image fusion using quaternion wavelet and normalized cut","volume":"97","author":"Liu","year":"2014","journal-title":"Signal Process."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"10951","DOI":"10.1007\/s11042-023-16074-6","article-title":"MFIF-DWT-CNN: Multi-focus image fusion based on discrete wavelet transform with deep convolutional neural network","volume":"83","author":"Sert","year":"2024","journal-title":"Multimed. Tools Appl."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1181","DOI":"10.1007\/s00371-021-02396-9","article-title":"Image fusion using dual tree discrete wavelet transform and weights optimization","volume":"39","author":"Aghamaleki","year":"2023","journal-title":"Vis. Comput."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Wang, J., Xi, X., Li, D., Li, F., and Zhang, G. (2023). GRPAFusion: A gradient residual and pyramid attention-based multiscale network for multimodal image fusion. Entropy, 25.","DOI":"10.3390\/e25010169"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"326","DOI":"10.1016\/j.neucom.2016.02.047","article-title":"Union Laplacian pyramid with multiple features for medical image fusion","volume":"194","author":"Du","year":"2016","journal-title":"Neurocomputing"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"64","DOI":"10.1016\/j.ins.2019.08.066","article-title":"Infrared and visible image fusion based on target-enhanced multiscale transform decomposition","volume":"508","author":"Jun","year":"2020","journal-title":"Inf. Sci."},{"key":"ref_12","unstructured":"Sadjadi, F. (2005, June 20\u201325). Comparative image fusion analysis. 
Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.infrared.2015.11.003","article-title":"An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing","volume":"74","author":"Zhang","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1295","DOI":"10.1016\/j.patrec.2008.02.002","article-title":"Multifocus image fusion by combining curvelet and wavelet transform","volume":"29","author":"Li","year":"2008","journal-title":"Pattern Recognit. Lett."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhao, X., Jin, S., Bian, G., Cui, Y., Wang, J., and Zhou, B. (2023). A curvelet-transform-based image fusion method incorporating side-scan sonar image features. J. Mar. Sci. Eng., 11.","DOI":"10.3390\/jmse11071291"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"MDLatLRR: A novel decomposition method for infrared and visible image fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1882","DOI":"10.1109\/LSP.2016.2618776","article-title":"Image fusion with convolutional sparse representation","volume":"23","author":"Liu","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"113","DOI":"10.1109\/TPAMI.2013.109","article-title":"Joint sparse representation for robust multimodal biometrics recognition","volume":"36","author":"Shekhar","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"743","DOI":"10.1109\/JSEN.2007.894926","article-title":"Region-based multimodal image fusion using ICA bases","volume":"7","author":"Cvejic","year":"2007","journal-title":"IEEE Sens. J."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Mou, J., Gao, W., and Song, Z. (2013, January 16\u201318). Image fusion based on non-negative matrix factorization and infrared feature extraction. Proceedings of the 2013 6th International Congress on Image and Signal Processing (CISP), Hangzhou, China.","DOI":"10.1109\/CISP.2013.6745210"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.infrared.2016.05.012","article-title":"Infrared and visible images fusion based on RPCA and NSCT","volume":"77","author":"Fu","year":"2016","journal-title":"Infrared Phys. Technol."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1016\/j.infrared.2017.02.005","article-title":"Infrared and visible image fusion based on visual saliency map and weighted least square optimization","volume":"82","author":"Ma","year":"2017","journal-title":"Infrared Phys. Technol."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1519","DOI":"10.1109\/JSEN.2010.2041924","article-title":"Hybrid multiresolution method for multisensor multimodal image fusion","volume":"10","author":"Li","year":"2010","journal-title":"IEEE Sens. J."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"043019","DOI":"10.1117\/1.JEI.22.4.043019","article-title":"Image fusion with nonsubsampled contourlet transform and sparse representation","volume":"22","author":"Wang","year":"2013","journal-title":"J. Electron. 
Imaging"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","article-title":"A general framework for image fusion based on multi-scale transform and sparse representation","volume":"24","author":"Liu","year":"2015","journal-title":"Inf. Fusion"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"108929","DOI":"10.1016\/j.patcog.2022.108929","article-title":"Infrared and visible image fusion via parallel scene and texture learning","volume":"132","author":"Xu","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"824","DOI":"10.1109\/TCI.2021.3100986","article-title":"Classification saliency-based rule for visible and infrared image fusion","volume":"7","author":"Xu","year":"2021","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_29","first-page":"5006713","article-title":"DRF: Disentangled representation for visible and infrared image fusion","volume":"70","author":"Xu","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"20139","DOI":"10.1007\/s11042-022-14314-9","article-title":"An end-to-end multi-scale network based on autoencoder for infrared and visible image fusion","volume":"82","author":"Liu","year":"2023","journal-title":"Multimed. Tools Appl."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"5009513","DOI":"10.1109\/TIM.2021.3075747","article-title":"STDFusionNet: An infrared and visible image fusion network based on salient target detection","volume":"70","author":"Ma","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","article-title":"IFCNN: A general image fusion framework based on convolutional neural network","volume":"54","author":"Zhang","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"128","DOI":"10.1016\/j.inffus.2020.11.009","article-title":"RXDNFuse: A aggregated residual dense network for infrared and visible image fusion","volume":"69","author":"Long","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","article-title":"PIAFusion: A progressive infrared and visible image fusion network based on illumination aware","volume":"83","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"477","DOI":"10.1016\/j.inffus.2022.10.034","article-title":"DIVFusion: Darkness-free infrared and visible image fusion","volume":"91","author":"Tang","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Rao, D., Xu, T., and Wu, X. (2023). TGFuse: An infrared and visible image fusion approach based on transformer and generative adversarial network. IEEE Trans. 
Image Process., 1.","DOI":"10.1109\/TIP.2023.3273451"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"26","DOI":"10.1016\/j.inffus.2023.02.011","article-title":"Feature dynamic alignment and refinement for infrared\u2013visible image fusion: Translation robust fusion","volume":"95","author":"Li","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.inffus.2019.07.005","article-title":"Infrared and visible image fusion via detail preserving adversarial learning","volume":"54","author":"Ma","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","article-title":"DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion","volume":"29","author":"Ma","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1134","DOI":"10.1109\/TCI.2021.3119954","article-title":"GAN-FM: Infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators","volume":"7","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Wang, D., Liu, J., Fan, X., and Liu, R. (2022). Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration. Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), Vienna, Austria.","DOI":"10.24963\/ijcai.2022\/487"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"5705","DOI":"10.1109\/TIP.2023.3322046","article-title":"Dif-fusion: Towards high color fidelity in infrared and visible image fusion with diffusion models","volume":"32","author":"Yue","year":"2023","journal-title":"IEEE Trans. Image Process."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.inffus.2021.12.004","article-title":"Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network","volume":"82","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Bai, H., Zhang, J., Zhang, Y., Xu, S., Lin, Z., Timofte, R., and Van Gool, L. (2023, June 17\u201324). Cddfuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00572"},{"key":"ref_46","unstructured":"Howard, A., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, October 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","article-title":"Image super-resolution using deep convolutional networks","volume":"38","author":"Dong","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., and Fu, Y. (2018, September 8\u201314). Image super-resolution using very deep residual channel attention networks. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_18"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"3098","DOI":"10.1109\/TIP.2021.3058764","article-title":"Deep coupled feedback network for joint exposure fusion and image super-resolution","volume":"30","author":"Deng","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"489","DOI":"10.1007\/s00371-023-02795-0","article-title":"MFFN: Image super-resolution via multi-level features fusion network","volume":"40","author":"Chen","year":"2024","journal-title":"Vis. Comput."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Li, Y., Dong, Y., Li, H., Liu, D., Xue, F., and Gao, D. (2024). No-Reference Hyperspectral Image Quality Assessment via Ranking Feature Learning. Remote Sens., 16.","DOI":"10.3390\/rs16101657"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"5501805","DOI":"10.1109\/LGRS.2024.3353706","article-title":"Image Quality Assessment of UAV Hyperspectral Images Using Radiant, Spatial, and Spectral Features Based on Fuzzy Comprehensive Evaluation Method","volume":"21","author":"Tian","year":"2024","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"108975","DOI":"10.1016\/j.sigpro.2023.108975","article-title":"A method to improve full-resolution remote sensing pansharpening image quality assessment via feature combination","volume":"208","author":"Wang","year":"2023","journal-title":"Signal Process."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Chen, W., Lin, W., Xu, X., Lin, L., and Zhao, T. (2024). Face Super-Resolution Quality Assessment Based On Identity and Recognizability. IEEE T-BIOM, 1.","DOI":"10.1109\/TBIOM.2024.3389982"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"6475","DOI":"10.1109\/TMM.2024.3352400","article-title":"RISTRA: Recursive Image Super-resolution Transformer with Relativistic Assessment","volume":"26","author":"Zhou","year":"2024","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"405","DOI":"10.1016\/j.inffus.2022.08.032","article-title":"Multispectral and hyperspectral image fusion in remote sensing: A survey","volume":"89","author":"Vivone","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Liu, J., Fan, X., Huang, Z., Wu, G., Liu, R., Zhong, W., and Luo, Z. (2022, June 18\u201324). Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00571"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Sun, Y., Cao, B., Zhu, P., and Hu, Q. (2022, October 10\u201314). Detfusion: A detection-driven infrared and visible image fusion network. Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal.","DOI":"10.1145\/3503161.3547902"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"188","DOI":"10.1016\/j.neunet.2021.01.021","article-title":"Bilateral attention decoder: A lightweight decoder for real-time semantic segmentation","volume":"137","author":"Peng","year":"2021","journal-title":"Neural Netw."},{"key":"ref_61","unstructured":"Woo, S., Park, J., Lee, J.-Y., and Kweon, I.S. (2018, September 8\u201314). CBAM: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-Nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","article-title":"AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks","volume":"23","author":"Li","year":"2020","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2020.3012548","article-title":"U2Fusion: A unified unsupervised image fusion network","volume":"44","author":"Xu","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"249","DOI":"10.1016\/j.dib.2017.09.038","article-title":"The TNO multiband image data collection","volume":"15","author":"Toet","year":"2017","journal-title":"Data Brief"},{"key":"ref_66","first-page":"023522","article-title":"Assessment of image fusion procedures using entropy, image quality, and multispectral classification","volume":"2","author":"Roberts","year":"2008","journal-title":"J. Appl. Remote Sens."},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1049\/el:20020212","article-title":"Information measure for performance of image fusion","volume":"38","author":"Qu","year":"2002","journal-title":"Electron. Lett."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1016\/j.inffus.2011.08.002","article-title":"A new image fusion performance metric based on visual information fidelity","volume":"14","author":"Han","year":"2013","journal-title":"Inf. Fusion"},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"2959","DOI":"10.1109\/26.477498","article-title":"Image quality measures and their performance","volume":"43","author":"Eskicioglu","year":"1995","journal-title":"IEEE Trans. Commun."},{"key":"ref_70","unstructured":"Prabhakar, K.R., Srikar, V.S., and Babu, R.V. (2017, October 22\u201329). Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. 
Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/11\/3665\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:54:12Z","timestamp":1760108052000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/11\/3665"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,5]]},"references-count":70,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2024,6]]}},"alternative-id":["s24113665"],"URL":"https:\/\/doi.org\/10.3390\/s24113665","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,5]]}}}
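For context, the JSON above is a standard Crossref REST API "work" response: the bibliographic payload lives under "message", with list-valued "title" and "container-title", an "author" array of given/family pairs, and an "issued" date expressed as nested "date-parts". The sketch below is a minimal, hedged example (not part of the record) of how such a record can be fetched and its key fields read; it assumes the public Crossref endpoint https://api.crossref.org/works/{doi}, the third-party requests package, and a placeholder contact address.

```python
# Minimal sketch: fetch the Crossref work record shown above and read
# a few fields. Assumes the public Crossref REST API and "requests";
# the mailto address is a placeholder, not a real contact.
import requests

DOI = "10.3390/s24113665"  # DOI taken from the record above

resp = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    # Crossref's "polite pool" asks clients to identify themselves.
    headers={"User-Agent": "example-client/0.1 (mailto:you@example.org)"},
    timeout=30,
)
resp.raise_for_status()
work = resp.json()["message"]  # envelope matches the JSON shown above

# Field names ("title", "author", "issued", ...) appear in the record.
title = work["title"][0]
journal = work["container-title"][0]
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
year = work["issued"]["date-parts"][0][0]

print(f"{authors} ({year}). {title}. {journal}. doi:{work['DOI']}")
print(f"References: {work.get('reference-count', 0)}; "
      f"cited by: {work.get('is-referenced-by-count', 0)}")
```

The same "message" envelope carries the "reference" array cleaned above, so the snippet extends naturally to iterating over work.get("reference", []) if per-citation metadata is needed.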