{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,19]],"date-time":"2025-12-19T05:28:03Z","timestamp":1766122083344,"version":"3.48.0"},"reference-count":33,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T00:00:00Z","timestamp":1765929600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Emotional Footprint Identification refers to the process of recognizing or understanding the emotional impact that a person, experience, or interaction leaves on others. Emotion Recognition plays an important role in human\u2013computer interaction for identifying emotions such as fear, sadness, anger, happiness, and surprise on the human face during the conversation. However, accurate emotional footprint identification plays a crucial role due to the dynamic changes. Conventional deep learning techniques integrate advanced technologies for emotional footprint identification, but challenges in accurately detecting emotions in minimal time. To address these challenges, a novel Divergence Shepherd Feature Optimization-based Stochastic-Tuned Deep Multilayer Perceptron (DSFO-STDMP) is proposed. The proposed DSFO-STDMP model consists of three distinct processes namely data acquisition, feature selection or reduction, and classification. First, the data acquisition phase collects a number of conversation data samples from a dataset to train the model. These conversation samples are given to the Sokal\u2013Sneath Divergence shuffling shepherd optimization to select more important features and remove the others. This optimization process accurately performs the feature reduction process to minimize the emotional footprint identification time. Once the features are selected, classification is carried out using the Rosenthal correlative stochastic-tuned deep multilayer perceptron classifier, which analyzes the correlation score between data samples. Based on this analysis, the system successfully classifies different emotions footprints during the conversations. In the fine-tuning phase, the stochastic gradient method is applied to adjust the weights between layers of deep learning architecture for minimizing errors and improving the model\u2019s accuracy. Experimental evaluations are conducted using various performance metrics, including accuracy, precision, recall, F1 score, and emotional footprint identification time. The quantitative results reveal enhancement in the 95% accuracy, 93% precision, 97% recall and 97% F1 score. 
Additionally, the DSFO-STDMP minimized the training time by 35% when compared to traditional techniques.<\/jats:p>","DOI":"10.3390\/a18120801","type":"journal-article","created":{"date-parts":[[2025,12,18]],"date-time":"2025-12-18T10:06:56Z","timestamp":1766052416000},"page":"801","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Divergence Shepherd Feature Optimization-Based Stochastic-Tuned Deep Multilayer Perceptron for Emotional Footprint Identification"],"prefix":"10.3390","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-1728-0902","authenticated-orcid":false,"given":"Karthikeyan","family":"Jagadeesan","sequence":"first","affiliation":[{"name":"Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0551-3705","authenticated-orcid":false,"given":"Annapurani","family":"Kumarappan","sequence":"additional","affiliation":[{"name":"Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India"}]}],"member":"1968","published-online":{"date-parts":[[2025,12,17]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1109\/TAFFC.2022.3175578","article-title":"Emotion Intensity and its Control for Emotional Voice Conversion","volume":"14","author":"Zhou","year":"2023","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"112825","DOI":"10.1016\/j.knosys.2024.112825","article-title":"SDR-GNN: Spectral Domain Reconstruction Graph Neural Network for incomplete multimodal learning in conversational emotion recognition","volume":"309","author":"Fu","year":"2025","journal-title":"Knowl.-Based Syst."},{"key":"ref_3","first-page":"11418","article-title":"Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum","volume":"39","author":"Ai","year":"2025","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1754","DOI":"10.1109\/TAFFC.2022.3216551","article-title":"ECPEC: Emotion-Cause Pair Extraction in Conversations","volume":"14","author":"Li","year":"2023","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1693","DOI":"10.1109\/TASLP.2023.3268571","article-title":"iEmoTTS: Toward Robust Cross-Speaker Emotion Transfer and Control for Speech Synthesis based on Disentanglement between Prosody and Timbre","volume":"31","author":"Zhang","year":"2023","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"36","DOI":"10.17694\/bajece.1372107","article-title":"Multimodal Emotion Recognition Using Bi-LG-GCN for MELD Dataset","volume":"12","year":"2024","journal-title":"Balk. J. Electr. Comput. Eng."},{"doi-asserted-by":"crossref","unstructured":"Nassif, A.B., Shahin, I., Lataifeh, M., Elnagar, A., and Nemmour, N. (2022). Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments. Information, 13.","key":"ref_7","DOI":"10.3390\/info13100456"},{"doi-asserted-by":"crossref","unstructured":"Triantafyllopoulos, A., Reichel, U., Liu, S., Huber, S., Eyben, F., and Schuller, B.W. (2023). 
Multistage linguistic conditioning of convolutional layers for speech emotion recognition. Front. Comput. Sci., 5.","key":"ref_8","DOI":"10.3389\/fcomp.2023.1072479"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"4298","DOI":"10.1109\/TASLP.2024.3434495","article-title":"Masked Graph Learning With Recurrent Alignment for Multimodal Emotion Recognition in Conversation","volume":"32","author":"Meng","year":"2024","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"6472","DOI":"10.1109\/TAI.2024.3445325","article-title":"Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations","volume":"5","author":"Meng","year":"2024","journal-title":"IEEE Trans. Artif. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1186\/s13636-023-00303-9","article-title":"Speech emotion recognition based on Graph-LSTM neural network","volume":"2023","author":"Li","year":"2023","journal-title":"EURASIP J. Audio, Speech, Music. Process."},{"doi-asserted-by":"crossref","unstructured":"Bhangale, K., and Kothandaraman, M. (2023). Speech Emotion Recognition Based on Multiple Acoustic Features and Deep Convolutional Neural Network. Electronics, 12.","key":"ref_12","DOI":"10.3390\/electronics12040839"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"11824","DOI":"10.1038\/s41598-025-95734-z","article-title":"Speech emotion recognition with light weight deep neural ensemble model using hand crafted features","volume":"15","author":"Chowdhury","year":"2025","journal-title":"Sci. Rep."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"8855","DOI":"10.1038\/s41598-025-89758-8","article-title":"Multi-modal emotion recognition in conversation based on prompt learning with text-audio fusion features","volume":"15","author":"Wu","year":"2025","journal-title":"Sci. Rep."},{"doi-asserted-by":"crossref","unstructured":"Pallewela, N., Alahakoon, D., Adikari, A., Pierce, J.E., and Rose, M.L. (2024). Optimizing Speech Emotion Recognition with Machine Learning Based Advanced Audio Cue Analysis. Technologies, 12.","key":"ref_15","DOI":"10.3390\/technologies12070111"},{"doi-asserted-by":"crossref","unstructured":"Filali, H., Boulealam, C., El Fazazy, K., Mahraz, A.M., Tairi, H., and Riffi, J. (2025). Meaningful Multimodal Emotion Recognition Based on Capsule Graph Transformer Architecture. Information, 16.","key":"ref_16","DOI":"10.2139\/ssrn.5113515"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1109\/TMM.2023.3260635","article-title":"GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition","volume":"26","author":"Li","year":"2023","journal-title":"IEEE Trans. 
Multimed."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"2349","DOI":"10.1109\/ACCESS.2023.3348518","article-title":"Residual Relation-Aware Attention Deep Graph-Recurrent Model for Emotion Recognition in Conversation","volume":"12","author":"Duong","year":"2024","journal-title":"IEEE Access"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"126427","DOI":"10.1016\/j.neucom.2023.126427","article-title":"GraphMFT: A graph network based multimodal fusion technique for emotion recognition in conversation","volume":"550","author":"Li","year":"2023","journal-title":"Neurocomputing"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"102306","DOI":"10.1016\/j.inffus.2024.102306","article-title":"Fusing pairwise modalities for emotion recognition in conversations","volume":"106","author":"Fan","year":"2024","journal-title":"Inf. Fusion"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1341","DOI":"10.1016\/j.dcan.2022.10.018","article-title":"An autoencoder-based feature level fusion for speech emotion recognition","volume":"10","author":"Shixin","year":"2024","journal-title":"Digit. Commun. Networks"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1177","DOI":"10.1109\/TAFFC.2024.3498443","article-title":"A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method for Conversational Emotion Recognition","volume":"16","author":"Shou","year":"2024","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1711","DOI":"10.1109\/TAFFC.2024.3369726","article-title":"Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition","volume":"15","author":"Chen","year":"2024","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"102367","DOI":"10.1016\/j.inffus.2024.102367","article-title":"GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition","volume":"108","author":"Lian","year":"2024","journal-title":"Inf. Fusion"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"4484","DOI":"10.1038\/s41598-024-52989-2","article-title":"Speech emotion recognition via graph-based representations","volume":"14","author":"Pentari","year":"2024","journal-title":"Sci. Rep."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"127370","DOI":"10.1016\/j.neucom.2024.127370","article-title":"Self-supervised utterance order prediction for emotion recognition in conversations","volume":"577","author":"Jiang","year":"2024","journal-title":"Neurocomputing"},{"doi-asserted-by":"crossref","unstructured":"Makhmudov, F., Kultimuratov, A., and Cho, Y.-I. (2024). Enhancing Multimodal Emotion Recognition through Attention Mechanisms in BERT and CNN Architectures. Appl. Sci., 14.","key":"ref_27","DOI":"10.20944\/preprints202404.1574.v1"},{"doi-asserted-by":"crossref","unstructured":"Li, J., Mei, H., Jia, L., and Zhang, X. (2023). Multimodal Emotion Recognition in Conversation Based on Hypergraphs. 
Electronics, 12.","key":"ref_28","DOI":"10.3390\/electronics12224703"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"110285","DOI":"10.1016\/j.knosys.2023.110285","article-title":"Hierarchically stacked graph convolution for emotion recognition in conversation","volume":"263","author":"Wang","year":"2023","journal-title":"Knowl.-Based Syst."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"102272","DOI":"10.1016\/j.inffus.2024.102272","article-title":"Bi-stream graph learning based multimodal fusion for emotion recognition in conversation","volume":"106","author":"Lu","year":"2024","journal-title":"Inf. Fusion"},{"doi-asserted-by":"crossref","unstructured":"Chouhayebi, H., Mahraz, M.A., Riffi, J., Tairi, H., and Alioua, N. (2024). Human Emotion Recognition Based on Spatio-Temporal Facial Features Using HOG-HOF and VGG-LSTM. Computers, 13.","key":"ref_31","DOI":"10.3390\/computers13040101"},{"doi-asserted-by":"crossref","unstructured":"Azizian, P., Honarmand, M., Jaiswal, A., Kline, A., Dunlap, K., Washington, P., and Wall, D.P. (2025). Multimodal LLM vs. Human-Measured Features for AI Predictions of Autism in Home Videos. Algorithms, 18.","key":"ref_32","DOI":"10.3390\/a18110687"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"107391","DOI":"10.1016\/j.knosys.2021.107391","article-title":"Oriented stochastic loss descent algorithm to train very deep multi-layer neural networks without vanishing gradients","volume":"230","author":"Abuqaddom","year":"2021","journal-title":"Knowl.-Based Syst."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/12\/801\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,19]],"date-time":"2025-12-19T05:15:00Z","timestamp":1766121300000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/12\/801"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,17]]},"references-count":33,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["a18120801"],"URL":"https:\/\/doi.org\/10.3390\/a18120801","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2025,12,17]]}}}