{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T06:42:56Z","timestamp":1771915376276,"version":"3.50.1"},"reference-count":54,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2023,4,28]],"date-time":"2023-04-28T00:00:00Z","timestamp":1682640000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Institute of Information &amp; communications Technology Planning &amp; Evaluation (IITP)","award":["IITP-2022-0-00078"],"award-info":[{"award-number":["IITP-2022-0-00078"]}]},{"name":"Institute of Information &amp; communications Technology Planning &amp; Evaluation (IITP)","award":["IITP-2017-0-00655"],"award-info":[{"award-number":["IITP-2017-0-00655"]}]},{"name":"Institute of Information &amp; communications Technology Planning &amp; Evaluation (IITP)","award":["IITP-2022-2020-0-01489"],"award-info":[{"award-number":["IITP-2022-2020-0-01489"]}]},{"name":"Lean UX core technology and platform for any digital artifacts UX evaluation","award":["IITP-2022-0-00078"],"award-info":[{"award-number":["IITP-2022-0-00078"]}]},{"name":"Lean UX core technology and platform for any digital artifacts UX evaluation","award":["IITP-2017-0-00655"],"award-info":[{"award-number":["IITP-2017-0-00655"]}]},{"name":"Lean UX core technology and platform for any digital artifacts UX evaluation","award":["IITP-2022-2020-0-01489"],"award-info":[{"award-number":["IITP-2022-2020-0-01489"]}]},{"name":"Grand Information Technology Research Center support program","award":["IITP-2022-0-00078"],"award-info":[{"award-number":["IITP-2022-0-00078"]}]},{"name":"Grand Information Technology Research Center support program","award":["IITP-2017-0-00655"],"award-info":[{"award-number":["IITP-2017-0-00655"]}]},{"name":"Grand Information Technology Research Center support program","award":["IITP-2022-2020-0-01489"],"award-info":[{"award-number":["IITP-2022-2020-0-01489"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Multimodal emotion recognition has gained much traction in the field of affective computing, human\u2013computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is growing demand to automate analysis of user emotion towards HCI, AI, and UX evaluation applications for providing affective services. Emotions are increasingly being used, obtained through the videos, audio, text or physiological signals. This has led to process emotions from multiple modalities, usually combined through ensemble-based systems with static weights. Due to numerous limitations like missing modality data, inter-class variations, and intra-class similarities, an effective weighting scheme is thus required to improve the aforementioned discrimination between modalities. This article takes into account the importance of difference between multiple modalities and assigns dynamic weights to them by adapting a more efficient combination process with the application of generalized mixture (GM) functions. Therefore, we present a hybrid multimodal emotion recognition (H-MMER) framework using multi-view learning approach for unimodal emotion recognition and introducing multimodal feature fusion level, and decision level fusion using GM functions. 
In an experimental study, we evaluated the ability of the proposed framework to model four emotional states (Happiness, Neutral, Sadness, and Anger) and found that most of them can be modeled with high accuracy using GM functions. The experiments show that the proposed framework achieves an average accuracy of 98.19%, a significant performance gain over traditional approaches. The overall evaluation results indicate that the framework identifies emotional states with high accuracy and increases the robustness of the emotion classification required for UX measurement.<\/jats:p>","DOI":"10.3390\/s23094373","type":"journal-article","created":{"date-parts":[[2023,4,28]],"date-time":"2023-04-28T09:54:53Z","timestamp":1682675693000},"page":"4373","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0061-8834","authenticated-orcid":false,"given":"Muhammad Asif","family":"Razzaq","sequence":"first","affiliation":[{"name":"Department of Computer Science, Fatima Jinnah Women University, Rawalpindi 46000, Pakistan"},{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3862-8787","authenticated-orcid":false,"given":"Jamil","family":"Hussain","sequence":"additional","affiliation":[{"name":"Department of Data Science, Sejong University, Seoul 30019, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3675-2258","authenticated-orcid":false,"given":"Jaehun","family":"Bang","sequence":"additional","affiliation":[{"name":"Hanwha Corporation\/Momentum, Hanwha Building, 86 Cheonggyecheon-ro, Jung-gu, Seoul 04541, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2556-4991","authenticated-orcid":false,"given":"Cam-Hao","family":"Hua","sequence":"additional","affiliation":[{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9883-3355","authenticated-orcid":false,"given":"Fahad Ahmed","family":"Satti","sequence":"additional","affiliation":[{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"},{"name":"Department of Computing, School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2155-8911","authenticated-orcid":false,"given":"Ubaid Ur","family":"Rehman","sequence":"additional","affiliation":[{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"},{"name":"Department of Computing, School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8920-4231","authenticated-orcid":false,"given":"Hafiz Syed 
Muhammad","family":"Bilal","sequence":"additional","affiliation":[{"name":"Department of Computing, School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2132-6021","authenticated-orcid":false,"given":"Seong Tae","family":"Kim","sequence":"additional","affiliation":[{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5962-1587","authenticated-orcid":false,"given":"Sungyoung","family":"Lee","sequence":"additional","affiliation":[{"name":"Ubiquitous Computing Lab, Department of Computer Science and Engineering, Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 17104, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2023,4,28]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Wang, Y., and Wang, Y. (2022). Multi-level Fusion of Wav2vec 2.0 and BERT for Multimodal Emotion Recognition. arXiv.","DOI":"10.21437\/Interspeech.2022-10230"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"108580","DOI":"10.1016\/j.knosys.2022.108580","article-title":"Deep learning based multimodal emotion recognition using model-level fusion of audio\u2013visual modalities","volume":"244","author":"Middya","year":"2022","journal-title":"Knowl.-Based Syst."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Medjden, S., Ahmed, N., and Lataifeh, M. (2020). Adaptive user interface design and analysis using emotion recognition through facial expressions and body posture from an RGB-D sensor. PLoS ONE, 15.","DOI":"10.1371\/journal.pone.0235908"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"168865","DOI":"10.1109\/ACCESS.2020.3023871","article-title":"Cross-subject multimodal emotion recognition based on hybrid fusion","volume":"8","author":"Cimtay","year":"2020","journal-title":"IEEE Access"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1016\/j.inffus.2020.01.011","article-title":"Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review","volume":"59","author":"Zhang","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"157","DOI":"10.1145\/3161174","article-title":"Multimodal deep learning for activity and context recognition","volume":"1","author":"Radu","year":"2018","journal-title":"Proc. Acm Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"20","DOI":"10.1016\/j.eswa.2019.04.051","article-title":"Advancing ensemble learning performance through data transformation and classifiers fusion in granular computing context","volume":"131","author":"Liu","year":"2019","journal-title":"Expert Syst. 
Appl."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"402","DOI":"10.1016\/j.neucom.2018.06.021","article-title":"Combining multiple algorithms in classifier ensembles using generalized mixture functions","volume":"313","author":"Costa","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1622","DOI":"10.3390\/s18051622","article-title":"A multimodal deep log-based user experience (UX) platform for UX evaluation","volume":"18","author":"Hussain","year":"2018","journal-title":"Sensors"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1016\/j.inffus.2017.02.003","article-title":"A review of affective computing: From unimodal analysis to multimodal fusion","volume":"37","author":"Poria","year":"2017","journal-title":"Inf. Fusion"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Liu, Z., Shen, Y., Lakshminarasimhan, V.B., Liang, P.P., Zadeh, A., and Morency, L.P. (2018). Efficient low-rank multimodal fusion with modality-specific factors. arXiv.","DOI":"10.18653\/v1\/P18-1209"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1016\/j.inffus.2018.06.003","article-title":"Audio-visual emotion fusion (AVEF): A deep efficient weighted approach","volume":"46","author":"Ma","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1576","DOI":"10.1109\/TMM.2017.2766843","article-title":"Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching","volume":"20","author":"Zhang","year":"2017","journal-title":"IEEE Trans. Multimed."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Li, S., Zhang, T., Chen, B., and Chen, C.P. (2023). MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis. IEEE Trans. Affect. Comput., 1\u201315.","DOI":"10.1109\/TAFFC.2023.3259010"},{"key":"ref_15","first-page":"423","article-title":"Multimodal machine learning: A survey and taxonomy","volume":"41","author":"Ahuja","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"68","DOI":"10.1016\/j.inffus.2016.09.005","article-title":"Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges","volume":"35","author":"Gravina","year":"2017","journal-title":"Inf. Fusion"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"60736","DOI":"10.1109\/ACCESS.2019.2913393","article-title":"Robust human activity recognition using multimodal feature-level fusion","volume":"7","author":"Javed","year":"2019","journal-title":"IEEE Access"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Huang, J., Li, Y., Tao, J., Lian, Z., Wen, Z., Yang, M., and Yi, J. (2017, January 23\u201327). Continuous multimodal emotion prediction based on long short term memory recurrent neural network. 
Proceedings of the 7th Annual Workshop on Audio\/Visual Emotion Challenge, Mountain View, CA, USA.","DOI":"10.1145\/3133944.3133946"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"174","DOI":"10.1016\/j.neucom.2022.04.019","article-title":"EmoSeC: Emotion recognition from scene context","volume":"492","author":"Thuseethan","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"19","DOI":"10.1016\/j.inffus.2022.03.009","article-title":"A systematic review on affective computing: Emotion models, databases, and recent advances","volume":"83\u201384","author":"Wang","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"119601","DOI":"10.1016\/j.eswa.2023.119601","article-title":"Practically motivated adaptive fusion method with tie analysis for multilabel dispersed data","volume":"219","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1016\/j.neucom.2016.02.040","article-title":"Untrained weighted classifier combination with embedded ensemble pruning","volume":"196","author":"Krawczyk","year":"2016","journal-title":"Neurocomputing"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1217","DOI":"10.1109\/TFUZZ.2017.2718483","article-title":"Combination of Classifiers With Optimal Weight Based on Evidential Reasoning","volume":"26","author":"Liu","year":"2018","journal-title":"IEEE Trans. Fuzzy Syst."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.eswa.2016.06.005","article-title":"A multiobjective weighted voting ensemble classifier based on differential evolution algorithm for text sentiment classification","volume":"62","author":"Onan","year":"2016","journal-title":"Expert Syst. Appl."},{"key":"ref_25","unstructured":"(2023, April 02). Lean UX: Mixed Method Approach for ux Evaluation. Available online: https:\/\/github.com\/ubiquitous-computing-lab\/Lean-UX-Platform\/."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"715","DOI":"10.1109\/TCDS.2021.3071170","article-title":"Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition","volume":"14","author":"Liu","year":"2021","journal-title":"IEEE Trans. Cogn. Dev. Syst."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ghoniem, R.M., Algarni, A.D., and Shaalan, K. (2019). Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information. Information, 10.","DOI":"10.3390\/info10070239"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"257","DOI":"10.1109\/JPROC.2023.3238524","article-title":"Object detection in 20 years: A survey","volume":"111","author":"Zou","year":"2023","journal-title":"Proc. IEEE"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Zhang, J., and Xiu, Y. (2023). Image stitching based on human visual system and SIFT algorithm. Vis. Comput., 1\u201313.","DOI":"10.1007\/s00371-023-02791-4"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"102447","DOI":"10.1016\/j.jnca.2019.102447","article-title":"Multimodal big data affective analytics: A comprehensive survey using text, audio, visual and physiological signals","volume":"149","author":"Shoumy","year":"2020","journal-title":"J. Netw. Comput. Appl."},{"key":"ref_32","unstructured":"Park, E.L., and Cho, S. (2014, January 11\u201314). KoNLPy: Korean natural language processing in Python. Proceedings of the 26th Annual Conference on Human & Cognitive Language Technology, Chuncheon, Korea."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"410","DOI":"10.1080\/00038628.2020.1748562","article-title":"Deep learning-based natural language sentiment classification model for recognizing users\u2019 sentiments toward residential space","volume":"64","author":"Chang","year":"2020","journal-title":"Archit. Sci. Rev."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Bang, J., Hur, T., Kim, D., Huynh-The, T., Lee, J., Han, Y., Banos, O., Kim, J.I., and Lee, S. (2018). Adaptive Data Boosting Technique for Robust Personalized Speech Emotion in Emotionally-Imbalanced Small-Sample Environments. Sensors, 18.","DOI":"10.3390\/s18113744"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"1458","DOI":"10.3390\/s150101458","article-title":"Time-frequency feature representation using multi-resolution texture analysis and acoustic activity detector for real-life speech emotion recognition","volume":"15","author":"Wang","year":"2015","journal-title":"Sensors"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Razzaq, M.A., Bang, J., Kang, S.S., and Lee, S. (2020, January 7\u201310). UnSkEm: Unobtrusive Skeletal-based Emotion Recognition for User Experience. Proceedings of the 2020 International Conference on Information Networking (ICOIN), Barcelona, Spain.","DOI":"10.1109\/ICOIN48656.2020.9016601"},{"key":"ref_37","first-page":"1","article-title":"A Novel Emotion-Aware Method Based on the Fusion of Textual Description of Speech, Body Movements, and Facial Expressions","volume":"71","author":"Du","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"301346","DOI":"10.1016\/j.fsidi.2022.301346","article-title":"A semi-supervised deep learning based video anomaly detection framework using RGB-D for surveillance of real-world critical environments","volume":"40","author":"Khaire","year":"2022","journal-title":"Forensic Sci. Int. Digit. Investig."},{"key":"ref_39","first-page":"1566","article-title":"Multimodal sentiment analysis: A systematic review of history, datasets, multimodal fusion methods, applications, challenges and future directions","volume":"91","author":"Gandhi","year":"2022","journal-title":"Inf. Fusion"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"26777","DOI":"10.1109\/ACCESS.2019.2901352","article-title":"Emotion Recognition Using Hybrid Gaussian Mixture Model and Deep Neural Network","volume":"7","author":"Shahin","year":"2019","journal-title":"IEEE Access"},{"key":"ref_41","unstructured":"(2023, April 02). Deep Learning Library for the Java. 
Available online: https:\/\/deeplearning4j.org\/."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"118681","DOI":"10.1016\/j.eswa.2022.118681","article-title":"Multimodal spatiotemporal skeletal kinematic gait feature fusion for vision-based fall detection","volume":"212","author":"Amsaprabhaa","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Samadiani, N., Huang, G., Cai, B., Luo, W., Chi, C.H., Xiang, Y., and He, J. (2019). A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors, 19.","DOI":"10.3390\/s19081863"},{"key":"ref_44","unstructured":"Pereira, R.M., and Pasi, G. (1999, January 25\u201328). On non-monotonic aggregation: Mixture operators. Proceedings of the 4th Meeting of the EURO Working Group on Fuzzy Sets (EUROFUSE\u201999) and 2nd International Conference on Soft and Intelligent Computing (SIC\u201999), Budapest, Hungary."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1108\/JICES-03-2019-0034","article-title":"Uncertainty in emotion recognition","volume":"17","author":"Landowska","year":"2019","journal-title":"J. Inf. Commun. Ethics Soc."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Beliakov, G., Sola, H.B., and S\u00e1nchez, T.C. (2016). A Practical Guide to Averaging Functions, Springer.","DOI":"10.1007\/978-3-319-24753-3"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","article-title":"Affectnet: A database for facial expression, valence, and arousal computing in the wild","volume":"10","author":"Mollahosseini","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"104342","DOI":"10.1016\/j.imavis.2021.104342","article-title":"Facial expression recognition using densely connected convolutional neural network and hierarchical spatial attention","volume":"117","author":"Gan","year":"2022","journal-title":"Image Vis. Comput."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Hua, C.H., Huynh-The, T., Seo, H., and Lee, S. (2020, January 3\u20135). Convolutional network with densely backward attention for facial expression recognition. Proceedings of the 2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM), Taichung, Taiwan.","DOI":"10.1109\/IMCOM48794.2020.9001686"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"107316","DOI":"10.1016\/j.knosys.2021.107316","article-title":"A multimodal hierarchical approach to speech emotion recognition from audio and text","volume":"229","author":"Singh","year":"2021","journal-title":"Knowl.-Based Syst."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"802","DOI":"10.1109\/TCYB.2017.2787717","article-title":"Multiscale amplitude feature and significance of enhanced vocal tract information for emotion classification","volume":"49","author":"Deb","year":"2018","journal-title":"IEEE Trans. Cybern."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1109\/TAFFC.2016.2591039","article-title":"Perception of emotions and body movement in the emilya database","volume":"9","author":"Fourati","year":"2016","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Livingstone, S.R., and Russo, F.A. (2018). 
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13.","DOI":"10.1371\/journal.pone.0196391"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1109\/MIS.2022.3147585","article-title":"Multiscale 3D-shift graph convolution network for emotion recognition from human actions","volume":"37","author":"Shi","year":"2022","journal-title":"IEEE Intell. Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4373\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:25:48Z","timestamp":1760124348000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4373"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,28]]},"references-count":54,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2023,5]]}},"alternative-id":["s23094373"],"URL":"https:\/\/doi.org\/10.3390\/s23094373","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,28]]}}}
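For illustration, the decision-level fusion with dynamic weights described in the abstract can be sketched in a few lines. This is a minimal, hypothetical sketch, not the authors' implementation: it assumes each modality outputs posterior scores over the four emotion classes and uses one common member of the generalized mixture (GM) family, with weights that shrink as a modality's scores move away from the consensus mean (in the spirit of refs. 8 and 44). The helper name gm_fuse and the example scores are invented for this example.

```python
import numpy as np

def gm_fuse(scores):
    """Fuse per-modality class scores with GM-style dynamic weighting.

    scores: (n_modalities, n_classes) array of per-modality posteriors.
    Weights are recomputed for every sample: modalities whose score
    vectors lie close to the consensus (arithmetic mean) get larger
    weights, so the weighting is dynamic rather than static.
    """
    scores = np.asarray(scores, dtype=float)
    consensus = scores.mean(axis=0)                    # (n_classes,)
    # Distance of each modality's scores from the consensus.
    dist = np.linalg.norm(scores - consensus, axis=1)  # (n_modalities,)
    # Closer to consensus -> larger raw weight; then normalize to sum 1.
    w = 1.0 - dist / (dist.sum() + 1e-12)
    w = w / w.sum()
    return w @ scores, w                               # fused (n_classes,), weights

# Toy example: face, audio, and text classifiers scoring the four
# emotional states (Happiness, Neutral, Sadness, Anger).
face  = [0.70, 0.10, 0.10, 0.10]
audio = [0.55, 0.20, 0.15, 0.10]
text  = [0.10, 0.20, 0.30, 0.40]   # disagreeing modality is down-weighted
fused, weights = gm_fuse([face, audio, text])
print("weights:", weights.round(3))   # text gets the smallest weight
print("fused  :", fused.round(3))
print("label  :", ["Happiness", "Neutral", "Sadness", "Anger"][int(fused.argmax())])
```

A static ensemble would fix the weight vector in advance; the point of the GM formulation is that the weights are a function of the incoming scores themselves, which is what lets the fusion tolerate a weak or missing modality on a per-sample basis.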