{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T16:05:00Z","timestamp":1772553900069,"version":"3.50.1"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T00:00:00Z","timestamp":1752537600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T00:00:00Z","timestamp":1752537600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia","doi-asserted-by":"publisher","award":["FCT\/MCTES (PIDDAC) to CeDRI, UIDB\/05757\/2020 (DOI: 10.54499\/UIDB\/05757\/2020) and UIDP\/05757\/2020 (DOI: 10.54499\/UIDP\/05757\/2020) and SusTEC, LA\/P\/0007\/2020 (DOI: 10.54499\/LA\/P\/0007\/2020)"],"award-info":[{"award-number":["FCT\/MCTES (PIDDAC) to CeDRI, UIDB\/05757\/2020 (DOI: 10.54499\/UIDB\/05757\/2020) and UIDP\/05757\/2020 (DOI: 10.54499\/UIDP\/05757\/2020) and SusTEC, LA\/P\/0007\/2020 (DOI: 10.54499\/LA\/P\/0007\/2020)"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100015322","name":"Instituto Polit\u00e9cnico de Bragan\u00e7a","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100015322","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["SN COMPUT. SCI."],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Facial expressions are an important channel for interpersonal communication and comprehension since people externalize their emotions through a variety of facial expressions. 
Technology, in particular, deep learning algorithms, can detect and analyze human emotions in real-time, which paves the way for advanced user interfaces or adjustable devices and applications. Based on this, the work described in this paper presents a system that identifies three groups of emotions, positive, negative, and neutral, in three execution scenarios: in resource-limited devices, such as mobile phones, for desktop or web applications, and with a state-of-the-art model. To address this problem, three classifiers were used: MobileNetV3 Small, VGG-19, and FER-VT. The experimental results revealed that each model has distinct strengths and weaknesses, with MobileNetV3 Small being the most efficient for resource-constrained environments, VGG-19 achieving the highest accuracy across metrics while maintaining the ideal balance of performance and computational requirements, and FER-VT struggling with generalization issues. These findings emphasize the importance of choosing an appropriate model based on the specific application requirements, while balancing computational constraints and performance.<\/jats:p>","DOI":"10.1007\/s42979-025-04178-9","type":"journal-article","created":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T12:04:26Z","timestamp":1752581066000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["From Constricted Models to State-of-the-Art for Facial Expression Classification"],"prefix":"10.1007","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-7037-2293","authenticated-orcid":false,"given":"Ana Sofia","family":"Rodrigues","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5842-4602","authenticated-orcid":false,"given":"J\u00falio Castro","family":"Lopes","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9170-5078","authenticated-orcid":false,"given":"Rui 
Pedro","family":"Lopes","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,15]]},"reference":[{"issue":"4","key":"4178_CR1","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1049\/htl2.12049","volume":"10","author":"AA Alarood","year":"2023","unstructured":"Alarood AA, Faheem M, Al-Khasawneh MA, Alzahrani AIA, Alshdadi AA. Secure medical image transmission using deep neural network in e-health applications. Healthc Technol Lett. 2023;10(4):87\u201398. https:\/\/doi.org\/10.1049\/htl2.12049.","journal-title":"Healthc Technol Lett."},{"key":"4178_CR2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-024-19392-5","author":"M Aly","year":"2024","unstructured":"Aly M. Revolutionizing online education: advanced facial expression recognition for real-time student progress tracking via deep learning model. Multimed Tools Appl. 2024. https:\/\/doi.org\/10.1007\/s11042-024-19392-5.","journal-title":"Multimed Tools Appl."},{"key":"4178_CR3","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-023-15808-w","author":"M Bie","year":"2023","unstructured":"Bie M, Liu Q, Xu H, Gao Y, Che X. FEMFER: feature enhancement for multi-faces expression recognition in classroom images. Multimed Tools Appl. 2023. https:\/\/doi.org\/10.1007\/s11042-023-15808-w.","journal-title":"Multimed Tools Appl."},{"issue":"8","key":"4178_CR4","doi-asserted-by":"publisher","first-page":"5619","DOI":"10.1109\/TII.2022.3141400","volume":"18","author":"C Bisogni","year":"2022","unstructured":"Bisogni C, Castiglione A, Hossain S, Narducci F, Umer S. Impact of deep learning approaches on facial expression recognition in healthcare industries. IEEE Trans Ind Inf. 2022;18(8):5619\u201327. 
https:\/\/doi.org\/10.1109\/TII.2022.3141400.","journal-title":"IEEE Trans Ind Inf."},{"issue":"4","key":"4178_CR5","doi-asserted-by":"publisher","first-page":"345","DOI":"10.1007\/s11760-008-0074-3","volume":"3","author":"I Buciu","year":"2009","unstructured":"Buciu I, Kotropoulos C, Pitas I. Comparison of ICA approaches for facial expression recognition. Signal Image Video Process. 2009;3(4):345\u201361. https:\/\/doi.org\/10.1007\/s11760-008-0074-3.","journal-title":"Signal Image Video Process."},{"key":"4178_CR6","unstructured":"Canedo D, Neves A. Mood estimation based on facial expressions and postures. In: Proceedings of the RECPAD; 2020. p. 49\u201350."},{"key":"4178_CR7","doi-asserted-by":"publisher","unstructured":"Chattopadhyay J, Kundu S, Chakraborty A, Banerjee JS. Facial expression recognition for human\u2013computer interaction. In: Smys, S., Iliyasu, A.M., Bestak, R., Shi, F. (eds.) New trends in computational vision and bio-inspired computing: selected works presented at the ICCVBIC 2018, Coimbatore. Cham: Springer International Publishing; 2020. p. 1181\u20131192. https:\/\/doi.org\/10.1007\/978-3-030-41862-5_119","DOI":"10.1007\/978-3-030-41862-5_119"},{"key":"4178_CR8","doi-asserted-by":"publisher","first-page":"1717","DOI":"10.1007\/s10055-022-00720-9","volume":"27","author":"X Chen","year":"2023","unstructured":"Chen X, Chen H. Emotion recognition using facial expressions in an immersive virtual reality application. Virtual Real. 2023;27:1717\u201332.","journal-title":"Virtual Real."},{"issue":"7","key":"4178_CR9","doi-asserted-by":"publisher","first-page":"1340","DOI":"10.1016\/j.patcog.2008.10.010","volume":"42","author":"Y Cheon","year":"2009","unstructured":"Cheon Y, Kim D. Natural facial expression recognition using differential-AAM and manifold learning. Pattern Recogn. 2009;42(7):1340\u201350. 
https:\/\/doi.org\/10.1016\/j.patcog.2008.10.010.","journal-title":"Pattern Recogn."},{"key":"4178_CR10","doi-asserted-by":"publisher","unstructured":"del Castillo Torres G, Roig-Maim\u00f3 MF, Mascar\u00f3-Oliver M, Amengual-Alcover E, Mas-Sans\u00f3 R. Understanding how CNNs recognize facial expressions: a case study with LIME and CEM. Sensors. 2023;23(1):131. https:\/\/doi.org\/10.3390\/s23010131.","DOI":"10.3390\/s23010131"},{"key":"4178_CR11","doi-asserted-by":"publisher","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N. An image is worth 16\u00a0$$\\times$$\u00a016 words: transformers for image recognition at scale 2020. https:\/\/doi.org\/10.48550\/ARXIV.2010.11929, version Number: 2","DOI":"10.48550\/ARXIV.2010.11929"},{"key":"4178_CR12","doi-asserted-by":"publisher","unstructured":"Ekman P, Friesen WV. Facial action coding system; 2019. https:\/\/doi.org\/10.1037\/t27734-000","DOI":"10.1037\/t27734-000"},{"key":"4178_CR13","doi-asserted-by":"crossref","unstructured":"Georgescu MI, Ionescu RT. Teacher\u2013student training and triplet loss for facial expression recognition under occlusion. In: 2020 25th International Conference on Pattern Recognition (ICPR). IEEE; 2021. p. 2288\u20132295.","DOI":"10.1109\/ICPR48806.2021.9412493"},{"issue":"1","key":"4178_CR14","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1007\/s42979-023-02447-z","volume":"5","author":"H Ghazouani","year":"2023","unstructured":"Ghazouani H. Challenges and emerging trends for machine reading of the mind from facial expressions. SN Comput Sci. 2023;5(1):103. https:\/\/doi.org\/10.1007\/s42979-023-02447-z.","journal-title":"SN Comput Sci."},{"issue":"2","key":"4178_CR15","doi-asserted-by":"publisher","first-page":"627","DOI":"10.5465\/annals.2018.0057","volume":"14","author":"E Glikson","year":"2020","unstructured":"Glikson E, Woolley AW. 
Human trust in artificial intelligence: review of empirical research. Acad Manag Ann. 2020;14(2):627\u201360. https:\/\/doi.org\/10.5465\/annals.2018.0057.","journal-title":"Acad Manag Ann."},{"key":"4178_CR16","doi-asserted-by":"publisher","first-page":"59","DOI":"10.1016\/j.neunet.2014.09.005","volume":"64","author":"IJ Goodfellow","year":"2015","unstructured":"Goodfellow IJ, Erhan D, Luc Carrier P, Courville A, Mirza M, Hamner B, Cukierski W, Tang Y, Thaler D, Lee DH, Zhou Y, Ramaiah C, Feng F, Li R, Wang X, Athanasakis D, Shawe-Taylor J, Milakov M, Park J, Ionescu R, Popescu M, Grozea C, Bergstra J, Xie J, Romaszko L, Xu B, Chuang Z, Bengio Y. Challenges in representation learning: a report on three machine learning contests. Neural Netw. 2015;64:59\u201363. https:\/\/doi.org\/10.1016\/j.neunet.2014.09.005.","journal-title":"Neural Netw."},{"key":"4178_CR17","doi-asserted-by":"publisher","first-page":"106944","DOI":"10.1016\/j.chb.2021.106944","volume":"125","author":"JW Hong","year":"2021","unstructured":"Hong JW, Cruz I, Williams D. AI, you can drive my car: how we evaluate human drivers vs. self-driving cars. Comput Hum Behav. 2021;125:106944. https:\/\/doi.org\/10.1016\/j.chb.2021.106944.","journal-title":"Comput Hum Behav."},{"key":"4178_CR18","doi-asserted-by":"publisher","unstructured":"Hong K, Chalup SK, King RA. A component based approach for classifying the seven universal facial expressions of emotion. In: 2013 IEEE Symposium on Computational Intelligence for Creativity and Affective Computing (CICAC); 2013. p. 1\u20138. https:\/\/doi.org\/10.1109\/CICAC.2013.6595214","DOI":"10.1109\/CICAC.2013.6595214"},{"key":"4178_CR19","doi-asserted-by":"crossref","unstructured":"Howard A, Sandler M, Chu G, Chen LC, Chen B, Tan M, Wang W, Zhu Y, Pang R, Vasudevan V, Le QV, Adam H. Searching for MobileNetV3; 2019. 
arXiv:1905.02244 [cs]","DOI":"10.1109\/ICCV.2019.00140"},{"key":"4178_CR20","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1016\/j.ins.2021.08.043","volume":"580","author":"Q Huang","year":"2021","unstructured":"Huang Q, Huang C, Wang X, Jiang F. Facial expression recognition with grid-wise attention and visual transformer. Inf Sci. 2021;580:35\u201354. https:\/\/doi.org\/10.1016\/j.ins.2021.08.043.","journal-title":"Inf Sci."},{"issue":"2","key":"4178_CR21","doi-asserted-by":"publisher","first-page":"375","DOI":"10.1016\/j.gltp.2021.08.027","volume":"2","author":"AV Ikechukwu","year":"2021","unstructured":"Ikechukwu AV, Murali S, Deepu R, Shivamurthy R. ResNet-50 vs VGG-19 vs training from scratch: a comparative analysis of the segmentation and classification of Pneumonia from chest X-ray images. Glob Transit Proc. 2021;2(2):375\u201381.","journal-title":"Glob Transit Proc."},{"issue":"2","key":"4178_CR22","doi-asserted-by":"publisher","first-page":"401","DOI":"10.3390\/s18020401","volume":"18","author":"BC Ko","year":"2018","unstructured":"Ko BC. A brief review of facial emotion recognition based on visual information. Sensors. 2018;18(2):401. https:\/\/doi.org\/10.3390\/s18020401.","journal-title":"Sensors."},{"issue":"3","key":"4178_CR23","doi-asserted-by":"publisher","first-page":"135","DOI":"10.3390\/info15030135","volume":"15","author":"T Kopalidis","year":"2024","unstructured":"Kopalidis T, Solachidis V, Vretos N, Daras P. Advances in facial expression recognition: a survey of methods, benchmarks, models, and datasets. Information. 2024;15(3):135. https:\/\/doi.org\/10.3390\/info15030135.","journal-title":"Information."},{"key":"4178_CR24","doi-asserted-by":"publisher","unstructured":"Korgialas C, Pantraki E, Kotropoulos C. Interpretable face aging: enhancing conditional adversarial autoencoders with lime explanations. In: ICASSP 2024\u20142024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2024. p. 
5260\u20135264. https:\/\/doi.org\/10.1109\/ICASSP48485.2024.10447128. ISSN: 2379-190X","DOI":"10.1109\/ICASSP48485.2024.10447128"},{"key":"4178_CR25","doi-asserted-by":"publisher","first-page":"2016","DOI":"10.1109\/TIP.2021.3049955","volume":"30","author":"H Li","year":"2021","unstructured":"Li H, Wang N, Ding X, Yang X, Gao X. Adaptively learning facial expression representation via CF labels and distillation. IEEE Trans Image Process. 2021;30:2016\u201328.","journal-title":"IEEE Trans Image Process."},{"key":"4178_CR26","doi-asserted-by":"publisher","unstructured":"Li R, Liu P, Jia K, Wu Q. Facial Expression recognition under partial occlusion based on Gabor filter and gray-level cooccurrence matrix. In: 2015 International Conference on Computational Intelligence and Communication Networks (CICN); 2015. p. 347\u2013351. https:\/\/doi.org\/10.1109\/CICN.2015.75. ISSN: 2472-7555","DOI":"10.1109\/CICN.2015.75"},{"key":"4178_CR27","doi-asserted-by":"publisher","unstructured":"Li Z, Imai Ji, Kaneko M. Facial-component-based bag of words and PHOG descriptor for facial expression recognition. In: 2009 IEEE international conference on systems, man and cybernetics; 2009. p. 1353\u20131358. https:\/\/doi.org\/10.1109\/ICSMC.2009.5346254. ISSN: 1062-922X","DOI":"10.1109\/ICSMC.2009.5346254"},{"issue":"2","key":"4178_CR28","doi-asserted-by":"publisher","first-page":"863","DOI":"10.1007\/s10055-022-00689-5","volume":"27","author":"Y Lin","year":"2023","unstructured":"Lin Y, Lan Y, Wang S. A method for evaluating the learning concentration in head-mounted virtual reality interaction. Virtual Real. 2023;27(2):863\u201385.","journal-title":"Virtual Real."},{"key":"4178_CR29","doi-asserted-by":"publisher","unstructured":"Liu SS, Zhang Y, Liu KP, Li Y. Facial expression recognition under partial occlusion based on Gabor multi-orientation features fusion and local Gabor binary pattern histogram sequence. 
In: 2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing; 2013. p. 218\u2013222. https:\/\/doi.org\/10.1109\/IIH-MSP.2013.63","DOI":"10.1109\/IIH-MSP.2013.63"},{"key":"4178_CR30","doi-asserted-by":"publisher","unstructured":"Liu, S., Zhang, Y., Liu, K.: Facial expression recognition under partial occlusion based on Weber Local Descriptor histogram and decision fusion. In: Proceedings of the 33rd Chinese control conference; 2014. p. 4664\u20134668. https:\/\/doi.org\/10.1109\/ChiCC.2014.6895725. ISSN: 1934-1768","DOI":"10.1109\/ChiCC.2014.6895725"},{"issue":"6","key":"4178_CR31","doi-asserted-by":"publisher","first-page":"2227","DOI":"10.1007\/s00530-022-00949-z","volume":"28","author":"S Liu","year":"2022","unstructured":"Liu S, Ren Y, Li L, Sun X, Song Y, Hung CC. Micro-expression recognition based on SqueezeNet and C3D. Multimed Syst. 2022;28(6):2227\u201336.","journal-title":"Multimed Syst."},{"key":"4178_CR32","doi-asserted-by":"publisher","unstructured":"Lopes JC, Lopes RP. A Review of Dynamic Difficulty Adjustment Methods for Serious Games. In: Pereira, A.I., Ko\u0161ir, A., Fernandes, F.P., Pacheco, M.F., Teixeira, J.P., Lopes, R.P. (eds.) Optimization, learning algorithms and applications. Communications in computer and information science. Cham: Springer International Publishing; 2022. p. 144\u2013159. https:\/\/doi.org\/10.1007\/978-3-031-23236-7_11","DOI":"10.1007\/978-3-031-23236-7_11"},{"issue":"18","key":"4178_CR33","doi-asserted-by":"publisher","first-page":"2260","DOI":"10.3390\/electronics10182260","volume":"10","author":"RP Lopes","year":"2021","unstructured":"Lopes RP, Barroso B, Deusdado L, Novo A, Guimar\u00e3es M, Teixeira JP, Leit\u00e3o P. Digital technologies for innovative mental health rehabilitation. Electronics. 2021;10(18):2260. 
https:\/\/doi.org\/10.3390\/electronics10182260.","journal-title":"Electronics."},{"key":"4178_CR34","doi-asserted-by":"crossref","unstructured":"Mozaffari L, Brekke MM, Gajaruban B, Purba D, Zhang J. Facial expression recognition using deep neural network. In: 2023 3rd International Conference on Applied Artificial Intelligence (ICAPAI). IEEE; 2023. p. 1\u20139.","DOI":"10.1109\/ICAPAI58366.2023.10193866"},{"key":"4178_CR35","doi-asserted-by":"crossref","unstructured":"Petrou N, Christodoulou G, Avgerinakis K, Kosmides P. Lightweight mood estimation algorithm For faces under partial occlusion. In: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments; 2023. p. 402\u2013407.","DOI":"10.1145\/3594806.3596553"},{"issue":"1","key":"4178_CR36","doi-asserted-by":"publisher","first-page":"663","DOI":"10.1007\/s11277-022-10127-z","volume":"129","author":"A Rajpal","year":"2023","unstructured":"Rajpal A, Sehra K, Bagri R, Sikka P. XAI-FR: explainable AI-based face recognition using deep neural networks. Wirel Perso Commun. 2023;129(1):663\u201380. https:\/\/doi.org\/10.1007\/s11277-022-10127-z.","journal-title":"Wirel Perso Commun."},{"key":"4178_CR37","doi-asserted-by":"publisher","first-page":"730317","DOI":"10.3389\/frobt.2021.730317","volume":"8","author":"N Rawal","year":"2022","unstructured":"Rawal N, Koert D, Turan C, Kersting K, Peters J, Stock-Homburg R. ExGenNet: learning to generate robotic facial expression using facial expression recognition. Front Robot AI. 2022;8:730317. https:\/\/doi.org\/10.3389\/frobt.2021.730317.","journal-title":"Front Robot AI"},{"key":"4178_CR38","doi-asserted-by":"publisher","unstructured":"Ribeiro MT, Singh S, Guestrin C. \u201cWhy should i trust you?\u201d: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD \u201916. New York: Association for Computing Machinery; 2016. p. 
1135\u20131144. https:\/\/doi.org\/10.1145\/2939672.2939778","DOI":"10.1145\/2939672.2939778"},{"key":"4178_CR39","volume-title":"Optimization, learning algorithms and applications","author":"ASF Rodrigues","year":"2024","unstructured":"Rodrigues ASF, Lopes JC, Lopes RP. Facial expression recognition in virtual reality simulations. In: Pereira AI, Fernandes FP, Coelho JP, Teixeira JP, Lima J, Pacheco MF, Lopes RP, Santiago TA, editors. Optimization, learning algorithms and applications. Cham: Springer; 2024."},{"key":"4178_CR40","doi-asserted-by":"publisher","first-page":"804","DOI":"10.1007\/978-3-031-23236-7_55","volume-title":"Optimization, learning algorithms and applications","author":"ASF Rodrigues","year":"2022","unstructured":"Rodrigues ASF, Lopes JC, Lopes RP, Teixeira LF. Classification of facial expressions under partial occlusion for VR games. In: Pereira AI, Ko\u0161ir A, Fernandes FP, Pacheco MF, Teixeira JP, Lopes RP, editors. Optimization, learning algorithms and applications. Cham: Springer; 2022. p. 804\u201319."},{"issue":"7","key":"4178_CR41","doi-asserted-by":"publisher","first-page":"4292","DOI":"10.1007\/s00034-023-02320-7","volume":"42","author":"GK Sahoo","year":"2023","unstructured":"Sahoo GK, Das SK, Singh P. Performance comparison of facial emotion recognition: a transfer learning-based driver assistance framework for in-vehicle applications. Circuits Syst Signal Process. 2023;42(7):4292\u2013319. https:\/\/doi.org\/10.1007\/s00034-023-02320-7.","journal-title":"Circuits Syst Signal Process."},{"key":"4178_CR42","doi-asserted-by":"crossref","unstructured":"Schroff F, Kalenichenko D, Philbin J. Facenet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 
815\u2013823.","DOI":"10.1109\/CVPR.2015.7298682"},{"issue":"8","key":"4178_CR43","doi-asserted-by":"publisher","first-page":"1272","DOI":"10.1016\/j.neucom.2010.07.017","volume":"74","author":"A S\u00e1nchez","year":"2011","unstructured":"S\u00e1nchez A, Ruiz JV, Moreno AB, Montemayor AS, Hern\u00e1ndez J, Pantrigo JJ. Differential optical flow applied to automatic facial expression recognition. Neurocomputing. 2011;74(8):1272\u201382. https:\/\/doi.org\/10.1016\/j.neucom.2010.07.017.","journal-title":"Neurocomputing."},{"key":"4178_CR44","doi-asserted-by":"publisher","DOI":"10.1007\/s12193-023-00410-z","author":"PC S\u00e1nchez","year":"2023","unstructured":"S\u00e1nchez PC, Bennett CC. Facial expression recognition via transfer learning in cooperative game paradigms for enhanced social AI. J Multimodal User Interfaces. 2023. https:\/\/doi.org\/10.1007\/s12193-023-00410-z.","journal-title":"J Multimodal User Interfaces."},{"issue":"5","key":"4178_CR45","doi-asserted-by":"publisher","first-page":"1685","DOI":"10.18280\/ts.390526","volume":"39","author":"MZ Uzun","year":"2022","unstructured":"Uzun MZ, Celik Y, Basaran E. Micro-expression recognition by using CNN features with PSO algorithm and SVM methods. Traitement du Signal. 2022;39(5):1685\u201393. https:\/\/doi.org\/10.18280\/ts.390526.","journal-title":"Traitement du Signal"},{"key":"4178_CR46","doi-asserted-by":"publisher","unstructured":"Wu T, Bartlett MS, Movellan JR. Facial expression recognition using Gabor motion energy filters. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition\u2014Workshops; 2010. p. 42\u201347. https:\/\/doi.org\/10.1109\/CVPRW.2010.5543267. ISSN: 2160-7516","DOI":"10.1109\/CVPRW.2010.5543267"},{"key":"4178_CR47","doi-asserted-by":"crossref","unstructured":"Yang B, Jianming W, Hattori G. Face mask aware robust facial expression recognition during the COVID-19 pandemic. In: 2021 IEEE International conference on image processing (ICIP). IEEE; 2021. p. 
240\u2013244.","DOI":"10.1109\/ICIP42928.2021.9506047"},{"key":"4178_CR48","doi-asserted-by":"publisher","unstructured":"Zhao L, Zhuang G, Xu X. Facial expression recognition based on PCA and NMF. In: 2008 7th World Congress on Intelligent Control and Automation; 2008. p. 6826\u20136829. https:\/\/doi.org\/10.1109\/WCICA.2008.4593968","DOI":"10.1109\/WCICA.2008.4593968"},{"key":"4178_CR49","doi-asserted-by":"publisher","unstructured":"Zilu Y, Xieyan F. Combining LBP and Adaboost for facial expression recognition. In: 2008 9th International Conference on Signal Processing; 2008. p. 1461\u20131464. https:\/\/doi.org\/10.1109\/ICOSP.2008.4697408. ISSN: 2164-523X","DOI":"10.1109\/ICOSP.2008.4697408"}],"container-title":["SN Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-025-04178-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s42979-025-04178-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-025-04178-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,7]],"date-time":"2025-09-07T11:16:12Z","timestamp":1757243772000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s42979-025-04178-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,15]]},"references-count":49,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["4178"],"URL":"https:\/\/doi.org\/10.1007\/s42979-025-04178-9","relation":{},"ISSN":["2661-8907"],"issn-type":[{"value":"2661-8907","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,15]]},"assertion":[{"value":"17 December 
2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 June 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 July 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Research Involving Human and\/or Animals"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed Consent"}}],"article-number":"651"}}