{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:22:30Z","timestamp":1760145750729,"version":"build-2065373602"},"reference-count":64,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2024,8,16]],"date-time":"2024-08-16T00:00:00Z","timestamp":1723766400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U22B20115"],"award-info":[{"award-number":["U22B20115"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>The complementary combination of emphasizing target objects in infrared images and rich texture details in visible images can effectively enhance the information entropy of fused images, thereby providing substantial assistance for downstream composite high-level vision tasks, such as nighttime vehicle intelligent driving. However, mainstream fusion algorithms lack specific research on the contradiction between the low information entropy and high pixel intensity of visible images under harsh light nighttime road environments. As a result, fusion algorithms that perform well in normal conditions can only produce low information entropy fusion images similar to the information distribution of visible images under harsh light interference. In response to these problems, we designed an image fusion network resilient to harsh light environment interference, incorporating entropy and information theory principles to enhance robustness and information retention. Specifically, an edge feature extraction module was designed to extract key edge features of salient targets to optimize fusion information entropy. Additionally, a harsh light environment aware (HLEA) module was proposed to avoid the decrease in fusion image quality caused by the contradiction between low information entropy and high pixel intensity based on the information distribution characteristics of harsh light visible images. Finally, an edge-guided hierarchical fusion (EGHF) module was designed to achieve robust feature fusion, minimizing irrelevant noise entropy and maximizing useful information entropy. 
Extensive experiments demonstrate that, compared to other advanced algorithms, the method proposed fusion results contain more useful information and have significant advantages in high-level vision tasks under harsh nighttime lighting conditions.<\/jats:p>","DOI":"10.3390\/e26080696","type":"journal-article","created":{"date-parts":[[2024,8,16]],"date-time":"2024-08-16T09:15:41Z","timestamp":1723799741000},"page":"696","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Infrared and Harsh Light Visible Image Fusion Using an Environmental Light Perception Network"],"prefix":"10.3390","volume":"26","author":[{"given":"Aiyun","family":"Yan","sequence":"first","affiliation":[{"name":"College of Information Science and Engineering, Northeastern University, Shenyang 110167, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-6620-8641","authenticated-orcid":false,"given":"Shang","family":"Gao","sequence":"additional","affiliation":[{"name":"College of Information Science and Engineering, Northeastern University, Shenyang 110167, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-3898-1771","authenticated-orcid":false,"given":"Zhenlin","family":"Lu","sequence":"additional","affiliation":[{"name":"Beijing Microelectronics Technology Institute, Beijing 100076, China"}]},{"given":"Shuowei","family":"Jin","sequence":"additional","affiliation":[{"name":"College of Information Science and Engineering, Northeastern University, Shenyang 110167, China"}]},{"given":"Jingrong","family":"Chen","sequence":"additional","affiliation":[{"name":"College of Information Science and Engineering, Northeastern University, Shenyang 110167, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,8,16]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"722","DOI":"10.1109\/TITS.2020.3023541","article-title":"Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review","volume":"23","author":"Cui","year":"2022","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1261","DOI":"10.1109\/TITS.2022.3183893","article-title":"Research on Road Environmental Sense Method of Intelligent Vehicle based on Tracking check","volume":"24","author":"Han","year":"2023","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1038\/s41586-019-0912-1","article-title":"Deep learning and process understanding for data-driven Earth system science","volume":"566","author":"Reichstein","year":"2019","journal-title":"Nature"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ma, W., Wang, K., Li, J., Yang, S.X., Li, J., Song, L., and Li, Q. (2023). Infrared and visible image fusion technology and application: A review. Sensors, 23.","DOI":"10.3390\/s23020599"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1016\/j.inffus.2021.06.008","article-title":"Image fusion meets deep learning: A survey and perspective","volume":"76","author":"Zhang","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1882","DOI":"10.1109\/LSP.2016.2618776","article-title":"Image fusion with convolutional sparse representation","volume":"23","author":"Liu","year":"2016","journal-title":"IEEE Signal Process. 
Lett."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"64","DOI":"10.1016\/j.ins.2019.08.066","article-title":"Infrared and visible image fusion based on target-enhanced multiscale transform decomposition","volume":"508","author":"Luo","year":"2020","journal-title":"Inform. Sci."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"161","DOI":"10.1016\/j.infrared.2014.07.019","article-title":"Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization","volume":"67","author":"Kong","year":"2014","journal-title":"Infrared Phys. Technol."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.inffus.2016.02.001","article-title":"Infrared and visible image fusion via gradient transfer and total variation minimization","volume":"31","author":"Ma","year":"2016","journal-title":"Inf. Fusion"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1016\/j.inffus.2014.09.004","article-title":"A general framework for image fusion based on multi-scale transform and sparse representation","volume":"24","author":"Liu","year":"2015","journal-title":"Inf. Fusion"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"720","DOI":"10.1016\/j.inffus.2021.02.023","article-title":"RFN-nest: An end-to-end residual fusion network for infrared and visible images","volume":"73","author":"Li","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","article-title":"FusionGAN: A generative adversarial network for infrared and visible image fusion","volume":"48","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","article-title":"IFCNN: A general image fusion framework based on convolutional neural network","volume":"54","author":"Zhang","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TIM.2022.3216413","article-title":"SwinFuse: A residual swin transformer fusion network for infrared and visible images","volume":"71","author":"Wang","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Luo, Y., and Luo, Z. (2023). Infrared and visible image fusion: Methods, datasets, applications, and prospects. Appl. Sci., 13.","DOI":"10.3390\/app131910891"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"3008","DOI":"10.1109\/TMI.2020.2983721","article-title":"CPFNet: Context pyramid fusion network for medical image segmentation","volume":"39","author":"Feng","year":"2020","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/j.inffus.2012.09.005","article-title":"Fusion of multimodal medical images using Daubechies complex wavelet transform\u2014A multiresolution approach","volume":"19","author":"Singh","year":"2014","journal-title":"Inf. Fusion"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"532","DOI":"10.1109\/TCOM.1983.1095851","article-title":"The Laplacian pyramid as a compact image code","volume":"31","author":"Burt","year":"1983","journal-title":"IEEE Trans. Commun."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Jose, J., Gautam, N., Tiwari, M., Tiwari, T., Suresh, A., Sundararaj, V., and Mr, R. (2021). 
An image quality enhancement scheme employing adolescent identity search algorithm in the NSST domain for multimodal medical image fusion. Biomed. Signal Process. Control, 66.","DOI":"10.1016\/j.bspc.2021.102480"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1193","DOI":"10.1007\/s11760-013-0556-9","article-title":"Image fusion based on pixel significance using cross bilateral filter","volume":"9","year":"2015","journal-title":"Signal Image Video Process."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zuo, Y., Liu, J., Bai, G., Wang, X., and Sun, M. (2017). Airborne Infrared and Visible Image Fusion Combined with Region Segmentation. Sensors, 17.","DOI":"10.3390\/s17051127"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Vaish, A., and Patel, S. (J. King. Saud. Univ.-Comput. Inf. Sci., 2022). A sparse representation based compression of fused images using WDR coding, J. King. Saud. Univ.-Comput. Inf. Sci., in press.","DOI":"10.1016\/j.jksuci.2022.02.002"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"4733","DOI":"10.1109\/TIP.2020.2975984","article-title":"M DLatLRR: A novel decomposition method for infrared and visible image fusion","volume":"29","author":"Li","year":"2020","journal-title":"IEEE Trans. Image Process"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"2192","DOI":"10.1109\/TMM.2021.3077767","article-title":"CCAFNet: Crossflow and Cross-Scale Adaptive Fusion Network for Detecting Salient Objects in RGB-D Images","volume":"24","author":"Zhou","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Broussard, R., and Rogers, S. (1996). Physiologically motivated image fusion using pulse-coupled neural networks. Applications and Science of Artificial Neural Networks II, SPIE.","DOI":"10.1117\/12.235981"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Ram Prabhakar, K., Sai Srikar, V., and Venkatesh Babu, R. (2017, January 22\u201329). Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.505"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","article-title":"DenseFuse: A fusion approach to infrared and visible images","volume":"28","author":"Li","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"9645","DOI":"10.1109\/TIM.2020.3005230","article-title":"NestFuse: An infrared and visible image fusion architecture based on nest connection and spatial\/channel attention models","volume":"69","author":"Li","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Xu, S., Zhang, C., Liu, J., Zhang, J., and Li, P. (2020, January 11\u201317). DIDFuse: Deep image decomposition for infrared and visible image fusion. Proceedings of the IJCAI, Yokohama, Japan.","DOI":"10.24963\/ijcai.2020\/135"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Li, H., Wu, X., and Kittler, J. (2018, January 20\u201324). Infrared and visible image fusion using a deep learning framework. 
Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China.","DOI":"10.1109\/ICPR.2018.8546006"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"103039","DOI":"10.1016\/j.infrared.2019.103039","article-title":"Infrared and visible image fusion with ResNet and zero-phase component analysis","volume":"102","author":"Li","year":"2019","journal-title":"Infrared Phys. Technol."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"36303","DOI":"10.1007\/s11042-023-14967-0","article-title":"LatLRR-CNN: An infrared and visible image fusion method combining latent low-rank representation and CNN","volume":"82","author":"Yang","year":"2023","journal-title":"Multimed. Tools Appl."},{"key":"ref_33","first-page":"5006713","article-title":"DRF: Disentangled representation for visible and infrared image fusion","volume":"70","author":"Xu","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_34","first-page":"1","article-title":"GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion","volume":"70","author":"Ma","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"147928","DOI":"10.1109\/ACCESS.2018.2872695","article-title":"SCGAN: Disentangled Representation Learning by Adding Similarity Constraint on Generative Adversarial Nets","volume":"7","author":"Li","year":"2019","journal-title":"IEEE Access"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Xu, H., Ma, J., Le, Z., Jiang, J., and Guo, X. (2020, January 3). Fusiondn: A unified densely connected network for image fusion. Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA.","DOI":"10.1609\/aaai.v34i07.6936"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Zhao, H., and Nie, R. (2021, January 24\u201326). DNDT: Infrared and visible image fusion via DenseNet and dual-transformer. Proceedings of the 2021 International Conference on Information Technology and Biomedical Engineering (ICITBE), Nanchang, China.","DOI":"10.1109\/ICITBE54178.2021.00025"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"4920","DOI":"10.1109\/JSEN.2023.3346886","article-title":"EV-Fusion: A Novel Infrared and Low-Light Color Visible Image Fusion Network Integrating Unsupervised Visible Image Enhancement","volume":"24","author":"Zhang","year":"2024","journal-title":"IEEE Sens."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.inffus.2022.03.007","article-title":"PIAFusion: A progressive infrared and visible image fusion network based on illumination aware","volume":"83","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"608","DOI":"10.1109\/TMM.2021.3129354","article-title":"IFSepR: A General Framework for Image Fusion Based on Separate Representation Learning","volume":"25","author":"Luo","year":"2023","journal-title":"IEEE Trans. Multimed."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"336","DOI":"10.1016\/j.inffus.2022.12.007","article-title":"AT-GAN: A generative adversarial network with attention and transition for infrared and visible image fusion","volume":"92","author":"Rao","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Liu, X., Yang, L., Zhang, X., and Duan, X. (2022, January 6\u20138). 
MA-ResNet50: A General Encoder Network for Video Segmentation. Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022), Online.","DOI":"10.5220\/0010800800003124"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., and Bernstein, M. (2014). ImageNet Large Scale Visual Recognition Challenge. arXiv.","DOI":"10.1007\/s11263-015-0816-y"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"4695","DOI":"10.1109\/TIP.2012.2214050","article-title":"No-reference image quality assessment in the spatial domain","volume":"21","author":"Mittal","year":"2012","journal-title":"IEEE Trans. Image Process"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.inffus.2021.12.004","article-title":"Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network","volume":"82","author":"Tang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"1303","DOI":"10.1109\/TCSVT.2017.2654543","article-title":"Residual Networks of Residual Networks: Multilevel Residual Networks","volume":"28","author":"Zhang","year":"2018","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Ma, L., Ma, T., Liu, R., Fan, X., and Luo, Z. (2022, January 19\u201324). Toward fast, flexible, and robust low-light image enhancement. Proceedings of the 2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00555"},{"key":"ref_48","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_49","unstructured":"Han, D., Ye, T., Ha Han, D., Ye, T., Han, Y., Xia, Z., Song, S., and Huang, G. (2023). Agent attention: On the integration of softmax and linear attention. arXiv."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"20991","DOI":"10.1109\/TITS.2022.3182311","article-title":"MFNet: Multi-feature fusion network for real-time semantic segmentation in road scenes","volume":"23","author":"Lu","year":"2022","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_51","first-page":"1","article-title":"Assessment of image fusion procedures using entropy, image quality, and multispectral classification","volume":"2","author":"Roberts","year":"2008","journal-title":"J. Appl. Remote Sens."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1049\/el:20020212","article-title":"Information measure for performance of image fusion","volume":"38","author":"Qu","year":"2002","journal-title":"Electron. Lett."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"760","DOI":"10.3748\/wjg.v27.i9.760","article-title":"Update on the management of sigmoid diverticulitis","volume":"27","author":"Hanna","year":"2021","journal-title":"World J. Gastroenterol."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1016\/j.inffus.2011.08.002","article-title":"A new image fusion performance metric based on visual information fidelity","volume":"14","author":"Han","year":"2013","journal-title":"Inf. 
Fusion"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"2959","DOI":"10.1109\/26.477498","article-title":"Image quality measures and their performance","volume":"43","author":"Eskicioglu","year":"1995","journal-title":"IEEE Trans. Commun."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1016\/j.inffus.2022.07.008","article-title":"Multi-modal knowledge graphs representation learning via multi-headed self-attention","volume":"88","author":"Wang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"1102","DOI":"10.1109\/TCSVT.2018.2821177","article-title":"Multi-focus image fusion with a natural enhancement via a joint multi-level deeply supervised convolutional neural network","volume":"29","author":"Zhao","year":"2018","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_58","unstructured":"Toet, A. (2024, June 27). TNO Image Fusion Dataset. Available online: https:\/\/figshare.com\/articles\/dataset\/TNO_Image_Fusion_Dataset\/1008029."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Liu, J., Fan, X., Huang, Z., Wu, G., Liu, R., Zhong, W., and Luo, Z. (2022, January 19\u201324). Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection. Proceedings of the 2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00571"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_61","first-page":"187","article-title":"Theory of edge detection","volume":"207","author":"Marr","year":"1980","journal-title":"Proc. R. Soc. Lond. Ser. B Biol. Sci."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"679","DOI":"10.1109\/TPAMI.1986.4767851","article-title":"A computational approach to edge detection","volume":"PAMI-8","author":"Canny","year":"1986","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Liu, Y., Cheng, M.M., Hu, X., Wang, K., and Bai, X. (2017, January 21\u201326). Richer convolutional features for edge detection. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.622"},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Kang, H., Lee, S., and Chui, C.K. (2007, January 4\u20135). Coherent line drawing. 
Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, Stuttgart, Germany.","DOI":"10.1145\/1274871.1274878"}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/26\/8\/696\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:37:49Z","timestamp":1760110669000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/26\/8\/696"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,16]]},"references-count":64,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2024,8]]}},"alternative-id":["e26080696"],"URL":"https:\/\/doi.org\/10.3390\/e26080696","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2024,8,16]]}}}
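For programmatic use, the record above can be retrieved directly from Crossref. A minimal sketch, assuming network access to the public Crossref REST API at https://api.crossref.org/works/{doi}; the DOI and field names are taken from the record shown above, and nothing here is specific to this paper's method:

```python
# Fetch the Crossref work record for DOI 10.3390/e26080696 and read a few
# of the fields shown above. Uses only the Python standard library.
import json
import urllib.request

DOI = "10.3390/e26080696"

with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    work = json.load(resp)["message"]          # same "message" object as above

print(work["title"][0])                        # article title
print(work["container-title"][0])              # journal: Entropy
print(*work["issued"]["date-parts"][0])        # publication date: 2024 8 16
print(len(work.get("reference", [])), "refs")  # 64 references
```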
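The abstract measures fusion quality by information entropy; in this literature (see ref_51), the EN metric is the Shannon entropy of the fused image's gray-level histogram, EN = -sum_i p_i * log2(p_i). A small sketch of that computation, assuming an 8-bit grayscale image; image_entropy is an illustrative helper, not code from the paper:

```python
# Shannon entropy (EN) of an 8-bit grayscale image, in bits per pixel:
# EN = -sum_i p_i * log2(p_i), with p_i the normalized 256-bin histogram.
# Illustrative helper only; not code from the paper.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A constant image carries no information (EN = 0), while uniform noise
# approaches the 8-bit maximum of 8 bits.
rng = np.random.default_rng(0)
print(image_entropy(np.full((64, 64), 128, dtype=np.uint8)))          # 0.0
print(image_entropy(rng.integers(0, 256, (64, 64), dtype=np.uint8)))  # ~7.95
```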