{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,14]],"date-time":"2025-11-14T05:19:18Z","timestamp":1763097558474,"version":"3.45.0"},"reference-count":57,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2025,11,12]],"date-time":"2025-11-12T00:00:00Z","timestamp":1762905600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No. 61772125"],"award-info":[{"award-number":["No. 61772125"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["No. N2317004"],"award-info":[{"award-number":["No. N2317004"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Automated violence detection in video surveillance is critical for public safety; however, existing methods frequently suffer notable performance degradation across diverse real-world scenarios due to domain shift. Substantial distributional discrepancies between source training data and target environments severely hinder model generalization, limiting practical deployment. To overcome this, we propose CoMT-VD, a new contrastive Mean Teacher-based violence detection model, engineered for enhanced adaptability in unseen target domains. CoMT-VD innovatively integrates a Mean Teacher architecture to adequately leverage unlabeled target domain data, fostering stable, domain-invariant feature representations by enforcing consistency regularization between student and teacher networks, crucial for bridging the domain gap. 
Furthermore, to mitigate supervisory noise from pseudo-labels and refine the feature space, CoMT-VD incorporates a dual-strategy contrastive learning (DCL) module. DCL systematically refines features through intra-sample consistency, minimizing latent space distances for compact representations, and inter-sample consistency, maximizing feature dissimilarity across distinct categories to sharpen decision boundaries. This dual regularization purifies the learned feature space, boosting discriminativeness while mitigating noisy pseudo-labels. Extensive evaluations on five benchmark datasets demonstrate that CoMT-VD achieves superior generalization performance (in the four integrated scenarios from five benchmark datasets, the improvements were 5.0\u223c12.0%, 6.0\u223c12.5%, 5.0\u223c11.2%, 5.0\u223c11.2%, and 6.3\u223c12.3%, respectively), marking a notable advancement towards robust and reliable real-world violence detection systems.<\/jats:p>","DOI":"10.3390\/bdcc9110286","type":"journal-article","created":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T09:59:09Z","timestamp":1763027949000},"page":"286","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Overcoming Domain Shift in Violence Detection with Contrastive Consistency Learning"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8671-2591","authenticated-orcid":false,"given":"Zhenche","family":"Xia","sequence":"first","affiliation":[{"name":"School of Software, Northeastern University, Shenyang 110819, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9870-8925","authenticated-orcid":false,"given":"Zhenhua","family":"Tan","sequence":"additional","affiliation":[{"name":"School of Software, Northeastern University, Shenyang 110819, China"},{"name":"National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, 
China"},{"name":"Key Laboratory of Data Analytics and Optimization for Smart Industry, Ministry of Education, Northeastern University, Shenyang 110819, China"}]},{"given":"Bin","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Software, Northeastern University, Shenyang 110819, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,11,12]]},"reference":[{"key":"ref_1","first-page":"150","article-title":"DOVE: Detection of movie violence using motion intensity analysis on skin and blood","volume":"6","author":"Clarin","year":"2005","journal-title":"PCSC"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"De Souza, F.D., Chavez, G.C., do Valle Jr, E.A., and Ara\u00fajo, A.d.A. (September, January 30). Violence detection in video using spatio-temporal features. Proceedings of the 2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images, Gramado, Brazil.","DOI":"10.1109\/SIBGRAPI.2010.38"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"15363","DOI":"10.1109\/ACCESS.2025.3531213","article-title":"Violence Detection from Industrial Surveillance Videos Using Deep Learning","volume":"13","author":"Khan","year":"2025","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Sultani, W., Chen, C., and Shah, M. (2018, January 18\u201322). Real-world anomaly detection in surveillance videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00678"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"18693","DOI":"10.1007\/s11042-021-10570-3","article-title":"Anomaly recognition from surveillance videos using 3D convolution neural network","volume":"80","author":"Maqsood","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, Z., She, Q., and Smolic, A. (2021, January 20\u201325). 
Action-net: Multipath excitation for action recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01301"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Soliman, M.M., Kamal, M.H., Nashed, M.A.E.M., Mostafa, Y.M., Chawky, B.S., and Khattab, D. (2019, January 8\u201310). Violence recognition from videos using deep learning techniques. Proceedings of the 2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt.","DOI":"10.1109\/ICICIS46948.2019.9014714"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"4737","DOI":"10.1007\/s13042-025-02540-0","article-title":"A multi-stream framework using spatial\u2013temporal collaboration learning networks for violence and non-violence classification in complex video environments","volume":"16","author":"Pandey","year":"2025","journal-title":"Int. J. Mach. Learn. Cybern."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Sun, S., and Gong, X. (2024, January 15\u201319). Multi-scale bottleneck transformer for weakly supervised multimodal violence detection. Proceedings of the 2024 IEEE International Conference on Multimedia and Expo (ICME), Niagara Falls, ON, Canada.","DOI":"10.1109\/ICME57554.2024.10688202"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1016\/j.neunet.2023.01.048","article-title":"Crimenet: Neural structured learning using vision transformer for violence detection","volume":"161","author":"Tommasi","year":"2023","journal-title":"Neural Netw."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"199","DOI":"10.1109\/TNN.2010.2091281","article-title":"Domain adaptation via transfer component analysis","volume":"22","author":"Pan","year":"2010","journal-title":"IEEE Trans. 
Neural Netw."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L., and Paluri, M. (2015, January 7\u201313). Learning spatiotemporal features with 3D convolutional networks. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.510"},{"key":"ref_13","unstructured":"Tarvainen, A., and Valpola, H. (2017, January 4\u20139). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"6767","DOI":"10.1109\/TNNLS.2022.3212909","article-title":"Aligning correlation information for domain adaptation in action recognition","volume":"35","author":"Xu","year":"2022","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_15","unstructured":"Da Costa, V.G.T., Zara, G., Rota, P., Oliveira-Santos, T., Sebe, N., Murino, V., and Ricci, E. (2022, January 3\u20138). Dual-head contrastive domain adaptation for video action recognition. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Li, J., Xu, R., Liu, X., Ma, J., Li, B., Zou, Q., Ma, J., and Yu, H. (2023). Domain adaptation based object detection for autonomous driving in foggy and rainy weather. 
arXiv.","DOI":"10.1109\/WACV56688.2023.00068"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"111328","DOI":"10.1016\/j.patcog.2024.111328","article-title":"Source-free video domain adaptation by learning from noisy labels","volume":"161","author":"Dasgupta","year":"2025","journal-title":"Pattern Recognit."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3679010","article-title":"Video unsupervised domain adaptation with deep learning: A comprehensive survey","volume":"56","author":"Xu","year":"2024","journal-title":"ACM Comput. Surv."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"13197","DOI":"10.1109\/TCYB.2021.3105637","article-title":"A novel multiple-view adversarial learning network for unsupervised domain adaptation action recognition","volume":"52","author":"Gao","year":"2021","journal-title":"IEEE Trans. Cybern."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Huang, S.W., Lin, C.T., Chen, S.P., Wu, Y.Y., Hsu, P.H., and Lai, S.H. (2018, January 8\u201314). Auggan: Cross domain adaptation with gan-based data augmentation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01240-3_44"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Choi, J., Sharma, G., Chandraker, M., and Huang, J.B. (2020, January 1\u20135). Unsupervised and semi-supervised domain adaptation for action recognition from drones. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA.","DOI":"10.1109\/WACV45572.2020.9093511"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"107737","DOI":"10.1016\/j.patcog.2020.107737","article-title":"Deep feature augmentation for occluded image classification","volume":"111","author":"Cen","year":"2021","journal-title":"Pattern Recognit."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Wang, X., Zhang, R., Shen, C., Kong, T., and Li, L. 
(2021, January 19\u201325). Dense contrastive learning for self-supervised visual pre-training. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00304"},{"key":"ref_24","unstructured":"Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020, January 13\u201318). A simple framework for contrastive learning of visual representations. Proceedings of the International Conference on Machine Learning, PMLR, Virtual."},{"key":"ref_25","first-page":"48639","article-title":"Dual mean-teacher: An unbiased semi-supervised framework for audio-visual source localization","volume":"36","author":"Guo","year":"2023","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"46","DOI":"10.1007\/s10044-024-01265-0","article-title":"A spatio-temporal model for violence detection based on spatial and temporal attention modules and 2D CNNs","volume":"27","author":"Mahmoodi","year":"2024","journal-title":"Pattern Anal. Appl."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"e70034","DOI":"10.1111\/coin.70034","article-title":"Violence Detection in Video Using Statistical Features of the Optical Flow and 2D Convolutional Neural Network","volume":"41","author":"Mahmoodi","year":"2025","journal-title":"Comput. Intell."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Bucilu\u01ce, C., Caruana, R., and Niculescu-Mizil, A. (2006, January 20\u201323). Model compression. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA.","DOI":"10.1145\/1150402.1150464"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wu, M.C., Chiu, C.T., and Wu, K.H. (2019, January 12\u201317). Multi-teacher knowledge distillation for compressed video action recognition on deep neural networks. 
Proceedings of the ICASSP 2019\u20142019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682450"},{"key":"ref_30","unstructured":"Kumar, A., Mitra, S., and Rawat, Y.S. (March, January 25). Stable mean teacher for semi-supervised video action detection. Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Wang, X., Hu, J.F., Lai, J.H., Zhang, J., and Zheng, W.S. (2019, January 15\u201320). Progressive teacher-student learning for early action prediction. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00367"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Xiong, B., Yang, X., Song, Y., Wang, Y., and Xu, C. (2024, January 17\u201321). Modality-Collaborative Test-Time Adaptation for Action Recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.02524"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. (2020, January 13\u201319). Momentum contrast for unsupervised visual representation learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Singh, A., Chakraborty, O., Varshney, A., Panda, R., Feris, R., Saenko, K., and Das, A. (2021, January 20\u201325). Semi-supervised action recognition with temporal contrastive learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01025"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Shah, K., Shah, A., Lau, C.P., de Melo, C.M., and Chellappa, R. 
(2023, January 2\u20137). Multi-view action recognition using contrastive learning. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV56688.2023.00338"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Lorre, G., Rabarisoa, J., Orcesi, A., Ainouz, S., and Canu, S. (2020, January 1\u20135). Temporal contrastive pretraining for video action recognition. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA.","DOI":"10.1109\/WACV45572.2020.9093278"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Zheng, S., Chen, S., and Jin, Q. (2022). Few-shot action recognition with hierarchical matching and contrastive learning. Computer Vision\u2014ECCV 2022, Proceedings of the 17th European Conference, Tel Aviv, Israel, 23\u201327 October 2022, Springer.","DOI":"10.1007\/978-3-031-19772-7_18"},{"key":"ref_38","unstructured":"Nguyen, T.T., Bin, Y., Wu, X., Hu, Z., Nguyen, C.D.T., Ng, S.K., and Luu, A.T. (March, January 25). Multi-scale contrastive learning for video temporal grounding. Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"103406","DOI":"10.1016\/j.cviu.2022.103406","article-title":"Tclr: Temporal contrastive learning for video representation","volume":"219","author":"Dave","year":"2022","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"129694","DOI":"10.1016\/j.neucom.2025.129694","article-title":"STCLR: Sparse Temporal Contrastive Learning for Video Representation","volume":"630","author":"Altabrawee","year":"2025","journal-title":"Neurocomputing"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Sohn, K., Liu, S., Zhong, G., Yu, X., Yang, M.H., and Chandraker, M. (2017, January 22\u201329). 
Unsupervised domain adaptation for face recognition in unlabeled videos. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.630"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Kim, D., Tsai, Y.H., Zhuang, B., Yu, X., Sclaroff, S., Saenko, K., and Chandraker, M. (2021, January 11\u201317). Learning cross-modal contrastive features for video domain adaptation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.01336"},{"key":"ref_43","unstructured":"Chen, M.H., Kira, Z., AlRegib, G., Yoo, J., Chen, R., and Zheng, J. (November, January 27). Temporal attentive alignment for large-scale video domain adaptation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Aich, A., Peng, K.C., and Roy-Chowdhury, A.K. (2023, January 2\u20137). Cross-domain video anomaly detection without target domain adaptation. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV56688.2023.00261"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Valois, P.H.V., Niinuma, K., and Fukui, K. (2024, January 1\u20136). Occlusion Sensitivity Analysis With Augmentation Subspace Perturbation in Deep Feature Space. 
Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA.","DOI":"10.1109\/WACV57701.2024.00476"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"130120","DOI":"10.1016\/j.neucom.2025.130120","article-title":"Multi-receptive field feature disentanglement with Distance-Aware Gaussian Brightness Augmentation for single-source domain generalization in medical image segmentation","volume":"638","author":"Wang","year":"2025","journal-title":"Neurocomputing"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"881","DOI":"10.28991\/ESJ-2022-06-04-015","article-title":"Brightness as an augmentation technique for image classification","volume":"6","author":"Kandel","year":"2022","journal-title":"Emerg. Sci. J."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Choi, J., Sharma, G., Schulter, S., and Huang, J.B. (2020). Shuffle and attend: Video domain adaptation. Computer Vision\u2013ECCV 2020, Proceedings of the 16th European Conference, Glasgow, UK, 23\u201328 August 2020, Springer. Proceedings, Part XII 16.","DOI":"10.1007\/978-3-030-58610-2_40"},{"key":"ref_49","first-page":"23386","article-title":"Contrast and mix: Temporal contrastive video domain adaptation with background mixing","volume":"34","author":"Sahoo","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Yuan, J., Liu, Y., Shen, C., Wang, Z., and Li, H. (2021, January 11\u201317). A simple baseline for semi-supervised semantic segmentation with strong data augmentation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.00812"},{"key":"ref_51","unstructured":"Sohn, K. (2016, January 5\u201310). Improved deep metric learning with multi-class n-pair loss objective. 
Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Cheng, M., Cai, K., and Li, M. (2021, January 10\u201315). RWF-2000: An open large scale video database for violence detection. Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9412502"},{"key":"ref_53","unstructured":"Bermejo Nievas, E., Deniz Suarez, O., Bueno Garc\u00eda, G., and Sukthankar, R. (2011). Violence detection in video using computer vision techniques. Computer Analysis of Images and Patterns, Proceedings of the 14th International Conference, CAIP 2011, Seville, Spain, 29\u201331 August 2011, Springer. Proceedings, Part II 14."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Hassner, T., Itcher, Y., and Kliper-Gross, O. (2012, January 16\u201321). Violent flows: Real-time detection of violent crowd behavior. Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA.","DOI":"10.1109\/CVPRW.2012.6239348"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"39172","DOI":"10.1109\/ACCESS.2019.2906275","article-title":"A novel violent video detection scheme based on modified 3D convolutional neural networks","volume":"7","author":"Song","year":"2019","journal-title":"IEEE Access"},{"key":"ref_56","first-page":"36899","article-title":"SCTF: An efficient neural network based on local spatial compression and full temporal fusion for video violence detection","volume":"83","author":"Tan","year":"2024","journal-title":"Multimed. Tools Appl."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lu\u010di\u0107, M., and Schmid, C. (2021, January 11\u201317). Vivit: A video vision transformer. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.00676"}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/11\/286\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,14]],"date-time":"2025-11-14T05:15:38Z","timestamp":1763097338000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/11\/286"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,12]]},"references-count":57,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2025,11]]}},"alternative-id":["bdcc9110286"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9110286","relation":{},"ISSN":["2504-2289"],"issn-type":[{"type":"electronic","value":"2504-2289"}],"subject":[],"published":{"date-parts":[[2025,11,12]]}}}