{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:39:13Z","timestamp":1760060353606,"version":"build-2065373602"},"reference-count":46,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T00:00:00Z","timestamp":1755561600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Automatic Facial Expression Recognition (AFER) is a key component of affective computing, enabling machines to recognize and interpret human emotions across various applications such as human\u2013computer interaction, healthcare, entertainment, and social robotics. Dynamic AFER systems, which exploit image sequences, can capture the temporal evolution of facial expressions but often suffer from high computational costs, limiting their suitability for real-time use. In this paper, we propose an efficient dynamic AFER approach based on a novel spatio-temporal representation. Facial landmarks are extracted, and all possible Euclidean distances are computed to model the spatial structure. To capture temporal variations, three statistical metrics are applied to each distance sequence. A feature selection stage based on the Extremely Randomized Trees (ExtRa-Trees) algorithm is then performed to reduce dimensionality and enhance classification performance. Finally, the emotions are classified using a linear multi-class Support Vector Machine (SVM) and compared against the k-Nearest Neighbors (k-NN) method. The proposed approach is evaluated on three benchmark datasets: CK+, MUG, and MMI, achieving recognition rates of 94.65%, 93.98%, and 75.59%, respectively. Our results demonstrate that the proposed method achieves a strong balance between accuracy and computational efficiency, making it well-suited for real-time facial expression recognition applications.<\/jats:p>","DOI":"10.3390\/bdcc9080213","type":"journal-article","created":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T15:29:29Z","timestamp":1755617369000},"page":"213","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Efficient Dynamic Emotion Recognition from Facial Expressions Using Statistical Spatio-Temporal Geometric Features"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4704-1398","authenticated-orcid":false,"given":"Yacine","family":"Yaddaden","sequence":"first","affiliation":[{"name":"Department of Mathematics, Computer Science and Engineering, L\u00e9vis Campus, Universit\u00e9 du Qu\u00e9bec \u00e0 Rimouski, L\u00e9vis, QC G6V 0A6, Canada"}]}],"member":"1968","published-online":{"date-parts":[[2025,8,19]]},"reference":[{"key":"ref_1","first-page":"51","article-title":"Communication without words","volume":"2","author":"Mehrabian","year":"1968","journal-title":"Psychol. Today"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"173","DOI":"10.1016\/j.eswa.2018.06.033","article-title":"User action and facial expression recognition for error detection system in an ambient assisted environment","volume":"112","author":"Yaddaden","year":"2018","journal-title":"Expert Syst. Appl."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1037\/h0030377","article-title":"Constants across cultures in the face and emotion","volume":"17","author":"Ekman","year":"1971","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ekman, P., and Rosenberg, E.L. (2005). What the Face Reveals Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), Oxford University Press.","DOI":"10.1093\/acprof:oso\/9780195179644.001.0001"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"8316","DOI":"10.1109\/TIP.2020.3011846","article-title":"Facial expression recognition in videos using dynamic kernels","volume":"29","author":"Perveen","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"136974","DOI":"10.1109\/ACCESS.2021.3117253","article-title":"Photogram classification-based emotion recognition","volume":"9","year":"2021","journal-title":"IEEE Access"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"102949","DOI":"10.1016\/j.jvcir.2020.102949","article-title":"Contour and region harmonic features for sub-local facial expression recognition","volume":"73","author":"Shahid","year":"2020","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Ngoc, Q.T., Lee, S., and Song, B.C. (2020). Facial landmark-based emotion recognition via directed graph neural network. Electronics, 9.","DOI":"10.3390\/electronics9050764"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"22861","DOI":"10.1007\/s11042-019-7530-7","article-title":"Multi-stream CNN for facial expression recognition in limited training data","volume":"78","author":"Aghamaleki","year":"2019","journal-title":"Multimed. Tools Appl."},{"key":"ref_10","unstructured":"Kanade, T., Cohn, J.F., and Tian, Y. (2000, January 28\u201330). Comprehensive database for facial expression analysis. Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France."},{"key":"ref_11","unstructured":"Aifanti, N., Papachristou, C., and Delopoulos, A. (2010, January 12\u201314). The MUG facial expression database. Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services, Desenzano del Garda, Italy."},{"key":"ref_12","unstructured":"Pantic, M., Valstar, M., Rademaker, R., and Maat, L. (2005, January 6). Web-based database for facial expression analysis. Proceedings of the IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Konar, A., Halder, A., and Chakraborty, A. (2015). Introduction to Emotion Recognition. Emotion Recognition, John Wiley & Sons, Inc.","DOI":"10.1002\/9781118910566"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Yaddaden, Y., Adda, M., Bouzouane, A., Gaboury, S., and Bouchard, B. (2018, January 24\u201325). One-class and bi-class SVM classifier comparison for automatic facial expression recognition. Proceedings of the 2018 International Conference on Applied Smart Systems (ICASS), Medea, Algeria.","DOI":"10.1109\/ICASS.2018.8651969"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"32297","DOI":"10.1109\/ACCESS.2019.2901521","article-title":"Learning affective video features for facial expression recognition via hybrid deep learning","volume":"7","author":"Zhang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Yaddaden, Y., Adda, M., Bouzouane, A., Gaboury, S., and Bouchard, B. (2017, January 11\u201313). Facial Expression Recognition from Video using Geometric Features. Proceedings of the 8th International Conference on Pattern Recognition Systems, Madrid, Spain.","DOI":"10.1049\/cp.2017.0133"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"10287","DOI":"10.1007\/s11042-018-6537-9","article-title":"Facial emotion classification using concatenated geometric and textural features","volume":"78","author":"Sen","year":"2019","journal-title":"Multimed. Tools Appl."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yaddaden, Y., Adda, M., Bouzouane, A., Gaboury, S., and Bouchard, B. (2018, January 29\u201331). Hybrid-based facial expression recognition approach for human-computer interaction. Proceedings of the 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), Vancouver, BC, Canada.","DOI":"10.1109\/MMSP.2018.8547081"},{"key":"ref_19","first-page":"200166","article-title":"An efficient facial expression recognition system with appearance-based fused descriptors","volume":"17","author":"Yaddaden","year":"2023","journal-title":"Intell. Syst. Appl."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Pham, T.D., Duong, M.T., Ho, Q.T., Lee, S., and Hong, M.C. (2023). CNN-based facial expression recognition with simultaneous consideration of inter-class and intra-class variations. Sensors, 23.","DOI":"10.20944\/preprints202311.0027.v1"},{"key":"ref_21","unstructured":"Vaijayanthi, S., and Arunnehru, J. (2021, January 23\u201324). Dense SIFT-based facial expression recognition using machine learning techniques. Proceedings of the 6th International Conference on Advance Computing and Intelligent Engineering: ICACIE 2021, Bhubaneswar, India."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"169053","DOI":"10.1016\/j.ijleo.2022.169053","article-title":"Windmill graph based feature descriptors for facial expression recognition","volume":"260","author":"Kartheek","year":"2022","journal-title":"Optik"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"259","DOI":"10.13164\/re.2020.0259","article-title":"Facial expression recognition based on multi-dataset neural network","volume":"29","author":"Yang","year":"2020","journal-title":"Radioengineering"},{"key":"ref_24","first-page":"1819","article-title":"Facial expression recognition in videos using hybrid CNN & ConvLSTM","volume":"15","author":"Singh","year":"2023","journal-title":"Int. J. Inf. Technol."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"6499","DOI":"10.1007\/s00521-022-08005-7","article-title":"A deep-learning-based facial expression recognition method using textural features","volume":"35","author":"Mukhopadhyay","year":"2023","journal-title":"Neural Comput. Appl."},{"key":"ref_26","first-page":"27619","article-title":"CC-CNN: A cross connected convolutional neural network using feature level fusion for facial expression recognition","volume":"83","author":"Kartheek","year":"2024","journal-title":"Multimed. Tools Appl."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"11707","DOI":"10.1007\/s11042-024-19364-9","article-title":"Cross-centroid ripple pattern for facial expression recognition","volume":"84","author":"Verma","year":"2025","journal-title":"Multimed. Tools Appl."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"369","DOI":"10.1007\/s11760-021-01941-2","article-title":"Deep cross feature adaptive network for facial emotion classification","volume":"16","author":"Reddy","year":"2022","journal-title":"Signal Image Video Process."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Li, J., Huang, S., Zhang, X., Fu, X., Chang, C.C., Tang, Z., and Luo, Z. (2020). Facial expression recognition by transfer learning for small datasets. Security with Intelligent Computing and Big-data Services, Proceedings of the Second International Conference on Security with Intelligent Computing and Big Data Services (SICBS-2018), Guilin, China, 14\u201316 December 2018, Springer.","DOI":"10.1007\/978-3-030-16946-6_62"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"e5764","DOI":"10.1002\/cpe.5764","article-title":"A multiple feature fusion framework for video emotion recognition in the wild","volume":"34","author":"Samadiani","year":"2022","journal-title":"Concurr. Comput. Pract. Exp."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"110951","DOI":"10.1016\/j.patcog.2024.110951","article-title":"Poster++: A simpler and stronger facial expression recognition network","volume":"157","author":"Mao","year":"2025","journal-title":"Pattern Recognit."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Liao, L., Wu, S., Song, C., and Fu, J. (2024). RS-Xception: A lightweight network for facial expression recognition. Electronics, 13.","DOI":"10.3390\/electronics13163217"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"7363","DOI":"10.1007\/s00521-025-10974-4","article-title":"Enhancing facial expression recognition in uncontrolled environment: A lightweight CNN approach with pre-processing","volume":"37","author":"Grover","year":"2025","journal-title":"Neural Comput. Appl."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"3553","DOI":"10.1007\/s11760-024-03020-8","article-title":"A lightweight facial expression recognition model for automated engagement detection","volume":"18","author":"Zhao","year":"2024","journal-title":"Signal Image Video Process."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"6853","DOI":"10.1007\/s11760-024-03356-1","article-title":"MVT-CEAM: A lightweight MobileViT with channel expansion and attention mechanism for facial expression recognition","volume":"18","author":"Wang","year":"2024","journal-title":"Signal Image Video Process."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Kopalidis, T., Solachidis, V., Vretos, N., and Daras, P. (2024). Advances in facial expression recognition: A survey of methods, benchmarks, models, and datasets. Information, 15.","DOI":"10.3390\/info15030135"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Tagmatova, Z., Umirzakova, S., Kutlimuratov, A., Abdusalomov, A., and Im Cho, Y. (2025). A Hyper-Attentive Multimodal Transformer for Real-Time and Robust Facial Expression Recognition. Appl. Sci., 15.","DOI":"10.3390\/app15137100"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"103525","DOI":"10.1016\/j.inffus.2025.103525","article-title":"Rank-aware LDL hybrid MetaFormer for Compound Facial Expression Recognition in-the-wild","volume":"126","author":"Khelifa","year":"2025","journal-title":"Inf. Fusion"},{"key":"ref_39","unstructured":"Viola, P., and Jones, M. (2001, January 8\u201314). Rapid object detection using a boosted cascade of simple features. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Kazemi, V., and Sullivan, J. (2014, January 23\u201328). One millisecond face alignment with an ensemble of regression trees. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.241"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1010933404324","article-title":"Random forests","volume":"45","author":"Breiman","year":"2001","journal-title":"Mach. Learn."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1007\/s10994-006-6226-1","article-title":"Extremely randomized trees","volume":"63","author":"Geurts","year":"2006","journal-title":"Mach. Learn."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1023\/A:1022627411411","article-title":"Support-vector networks","volume":"20","author":"Cortes","year":"1995","journal-title":"Mach. Learn."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Shalev-Shwartz, S., and Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press.","DOI":"10.1017\/CBO9781107298019"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1023\/A:1022689900470","article-title":"Instance-based learning algorithms","volume":"6","author":"Aha","year":"1991","journal-title":"Mach. Learn."},{"key":"ref_46","first-page":"720","article-title":"Efficient Driver Drowsiness Detection Using Spatiotemporal Features with Support Vector Machine","volume":"23","author":"Lamouchi","year":"2025","journal-title":"Int. J. Intell. Transp. Syst. Res."}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/8\/213\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:31:18Z","timestamp":1760034678000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/8\/213"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,19]]},"references-count":46,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["bdcc9080213"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9080213","relation":{},"ISSN":["2504-2289"],"issn-type":[{"type":"electronic","value":"2504-2289"}],"subject":[],"published":{"date-parts":[[2025,8,19]]}}}