{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,12]],"date-time":"2026-02-12T10:35:31Z","timestamp":1770892531694,"version":"3.50.1"},"reference-count":324,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T00:00:00Z","timestamp":1765929600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"},{"start":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T00:00:00Z","timestamp":1765929600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"DOI":"10.1007\/s44163-025-00553-w","type":"journal-article","created":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T11:37:35Z","timestamp":1765971455000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Revolutionizing facial emotion recognition: in-depth analysis of cutting-edge models, methodologies, and datasets"],"prefix":"10.1007","volume":"5","author":[{"given":"Ketan","family":"Sarvakar","sequence":"first","affiliation":[]},{"given":"Kaushikkumar","family":"Rana","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,17]]},"reference":[{"key":"553_CR1","doi-asserted-by":"crossref","unstructured":"Pantic M, Rothkrantz LJM. Automatic analysis of facial expressions: the state of the art 2000.","DOI":"10.1016\/S0262-8856(00)00034-2"},{"issue":"1","key":"553_CR2","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1016\/S0031-3203(02)00052-3","volume":"36","author":"B Fasel","year":"2003","unstructured":"Fasel B, Luettin J. Automatic facial expression analysis: a survey. Pattern Recognit. 
2003;36(1):259\u201375.","journal-title":"Pattern Recognit"},{"issue":"3","key":"553_CR3","doi-asserted-by":"publisher","first-page":"1449","DOI":"10.1109\/TSMCB.2004.825931","volume":"34","author":"M Pantic","year":"2004","unstructured":"Pantic M, Rothkrantz LJM. Facial action recognition for facial expression analysis from static face images. IEEE Trans Syst Man Cybern B Cybern. 2004;34(3):1449\u201361. https:\/\/doi.org\/10.1109\/TSMCB.2004.825931.","journal-title":"IEEE Trans Syst Man Cybern B Cybern"},{"key":"553_CR4","doi-asserted-by":"crossref","unstructured":"Tian Y-L, Kanade T, Cohn JF. Facial expression analysis 2005.","DOI":"10.1007\/11564386_1"},{"issue":"6","key":"553_CR5","doi-asserted-by":"publisher","first-page":"803","DOI":"10.1016\/j.imavis.2008.08.005","volume":"27","author":"C Shan","year":"2009","unstructured":"Shan C, Gong S, McOwan PW. Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis Comput. 2009;27(6):803\u201316. https:\/\/doi.org\/10.1016\/j.imavis.2008.08.005.","journal-title":"Image Vis Comput"},{"key":"553_CR6","unstructured":"Bettadapura V. Face expression recognition and analysis: the state of the art 2012."},{"key":"553_CR7","doi-asserted-by":"crossref","unstructured":"Konar A, Chakraborty A. Emotion recognition: a pattern analysis approach. Wiley 2015.","DOI":"10.1002\/9781118910566"},{"issue":"5","key":"553_CR8","doi-asserted-by":"publisher","first-page":"505","DOI":"10.1080\/02564602.2015.1117403","volume":"33","author":"X Zhao","year":"2016","unstructured":"Zhao X, Zhang S. A review on facial expression recognition: feature extraction and classification. IETE Tech Rev. 2016;33(5):505\u201317. https:\/\/doi.org\/10.1080\/02564602.2015.1117403.","journal-title":"IETE Tech Rev"},{"key":"553_CR9","doi-asserted-by":"publisher","unstructured":"Martinez B, Valstar MF, Jiang B, Pantic M. 
Automatic analysis of facial actions: a survey, 2019, Institute of Electrical and Electronics Engineers Inc. https:\/\/doi.org\/10.1109\/TAFFC.2017.2731763.","DOI":"10.1109\/TAFFC.2017.2731763"},{"key":"553_CR10","unstructured":"Azizan I, Fatimah K, Khalid F. Facial emotion recognition: a brief review. 2018. [Online]. Available: https:\/\/www.researchgate.net\/publication\/343443531"},{"key":"553_CR11","doi-asserted-by":"publisher","DOI":"10.3390\/s18020416","author":"D Mehta","year":"2018","unstructured":"Mehta D, Siddiqui MFH, Javaid AY. Facial emotion recognition: a survey and real-world user experiences in mixed reality. Sensors. 2018. https:\/\/doi.org\/10.3390\/s18020416.","journal-title":"Sensors"},{"key":"553_CR12","doi-asserted-by":"publisher","unstructured":"Wei H, Zhang Z. A survey of facial expression recognition based on deep learning, 2020, pp. 90\u201394. https:\/\/doi.org\/10.1109\/ICIEA48937.2020.9248180.","DOI":"10.1109\/ICIEA48937.2020.9248180"},{"key":"553_CR13","doi-asserted-by":"publisher","unstructured":"Revina IM, Emmanuel WRS. A survey on human face expression recognition techniques, 2021. King Saud bin Abdulaziz University. https:\/\/doi.org\/10.1016\/j.jksuci.2018.09.002.","DOI":"10.1016\/j.jksuci.2018.09.002"},{"key":"553_CR14","doi-asserted-by":"publisher","unstructured":"Mellouk W, Handouzi W. Facial emotion recognition using deep learning: review and insights. In Procedia computer science, Elsevier BV, 2020, pp. 689\u2013694. https:\/\/doi.org\/10.1016\/j.procs.2020.07.101.","DOI":"10.1016\/j.procs.2020.07.101"},{"key":"553_CR15","doi-asserted-by":"publisher","first-page":"90495","DOI":"10.1109\/ACCESS.2020.2993803","volume":"8","author":"K Patel","year":"2020","unstructured":"Patel K, et al. Facial sentiment analysis using AI techniques: state-of-the-art, taxonomies, and challenges. IEEE Access. 2020;8:90495\u2013519. 
https:\/\/doi.org\/10.1109\/ACCESS.2020.2993803.","journal-title":"IEEE Access"},{"key":"553_CR16","doi-asserted-by":"publisher","first-page":"593","DOI":"10.1016\/j.ins.2021.10.005","volume":"582","author":"FZ Canal","year":"2022","unstructured":"Canal FZ, et al. A survey on facial emotion recognition techniques: a state-of-the-art literature review. Inf Sci (N Y). 2022;582:593\u2013617. https:\/\/doi.org\/10.1016\/j.ins.2021.10.005.","journal-title":"Inf Sci (N Y)"},{"issue":"4","key":"553_CR17","doi-asserted-by":"publisher","first-page":"2086","DOI":"10.1109\/TAFFC.2022.3184995","volume":"13","author":"M Jampour","year":"2022","unstructured":"Jampour M, Javidi M. Multiview facial expression recognition, a survey. IEEE Trans Affect Comput. 2022;13(4):2086\u2013105. https:\/\/doi.org\/10.1109\/TAFFC.2022.3184995.","journal-title":"IEEE Trans Affect Comput"},{"key":"553_CR18","doi-asserted-by":"publisher","unstructured":"Khan AR. Facial emotion recognition using conventional machine learning and deep learning methods: current achievements, analysis and remaining challenges, 2022, MDPI. https:\/\/doi.org\/10.3390\/info13060268.","DOI":"10.3390\/info13060268"},{"issue":"3","key":"553_CR19","doi-asserted-by":"publisher","first-page":"7457","DOI":"10.1007\/s11042-023-15139-w","volume":"83","author":"MJAI Dujaili","year":"2024","unstructured":"Dujaili MJAI. Survey on facial expressions recognition: databases, features and classification schemes. Multimed Tools Appl. 2024;83(3):7457\u201378.","journal-title":"Multimed Tools Appl"},{"key":"553_CR20","doi-asserted-by":"publisher","unstructured":"Cai Y, Li X, Li J. Emotion recognition using different sensors, emotion models, methods and datasets: a comprehensive review, 2023, MDPI. https:\/\/doi.org\/10.3390\/s23052455.","DOI":"10.3390\/s23052455"},{"issue":"5","key":"553_CR21","first-page":"672","volume":"20","author":"P Dulguerov","year":"1999","unstructured":"Dulguerov P, Marchal F, Wang D, Gysin C. 
Review of objective topographic facial nerve evaluation methods. Otol Neurotol. 1999;20(5):672\u20138.","journal-title":"Otol Neurotol"},{"key":"553_CR22","doi-asserted-by":"crossref","unstructured":"Assari MA, Rahmati M. Driver drowsiness detection using face expression recognition. In: 2011 IEEE international conference on signal and image processing applications (ICSIPA) 2011 Nov 16 (pp. 337\u2013341). IEEE.","DOI":"10.1109\/ICSIPA.2011.6144162"},{"key":"553_CR23","doi-asserted-by":"publisher","unstructured":"Abdat F, Maaoui C, Pruski A. Human\u2013computer interaction using emotion recognition from facial expression. In: 2011 UKSim 5th European symposium on computer modeling and simulation, 2011, pp. 196\u2013201. https:\/\/doi.org\/10.1109\/EMS.2011.20.","DOI":"10.1109\/EMS.2011.20"},{"key":"553_CR24","unstructured":"Hickson S, Dufour N, Sud A, Kwatra V, Essa I. Eyemotion: classifying facial expressions in VR using eye-tracking cameras, 2017 [Online]. Available: http:\/\/arxiv.org\/abs\/1707.07204"},{"key":"553_CR25","doi-asserted-by":"publisher","first-page":"396","DOI":"10.1016\/j.ridd.2014.10.015","volume":"36","author":"C-H Chen","year":"2015","unstructured":"Chen C-H, Lee I-J, Lin L-Y. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders. Res Dev Disabil. 2015;36:396\u2013403. https:\/\/doi.org\/10.1016\/j.ridd.2014.10.015.","journal-title":"Res Dev Disabil"},{"key":"553_CR26","doi-asserted-by":"publisher","DOI":"10.1155\/2008\/542918","author":"C Zhan","year":"2008","unstructured":"Zhan C, Li W, Ogunbona P, Safaei F. A real-time facial expression recognition system for online games. Int J Comput Games Technol. 2008. 
https:\/\/doi.org\/10.1155\/2008\/542918.","journal-title":"Int J Comput Games Technol"},{"issue":"7553","key":"553_CR27","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y Lecun","year":"2015","unstructured":"Lecun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436\u201344. https:\/\/doi.org\/10.1038\/nature14539.","journal-title":"Nature"},{"key":"553_CR28","doi-asserted-by":"publisher","first-page":"242","DOI":"10.1016\/j.paid.2015.03.002","volume":"82","author":"M Stankovi\u0107","year":"2015","unstructured":"Stankovi\u0107 M, Ne\u0161i\u0107 M, Obrenovi\u0107 J, Stojanovi\u0107 D, Milo\u0161evi\u0107 V. Recognition of facial expressions of emotions in criminal and non-criminal psychopaths: Valence-specific hypothesis. Pers Individ Dif. 2015;82:242\u20137. https:\/\/doi.org\/10.1016\/j.paid.2015.03.002.","journal-title":"Pers Individ Dif"},{"key":"553_CR29","doi-asserted-by":"crossref","unstructured":"Cowie R, Douglas-Cowie E, Tsapatsoulis N, Votsis G, Kollias S, Fellenz W, Taylor JG. Emotion recognition in human-computer interaction. IEEE Signal Process Mag 2001;18(1):32\u201380.","DOI":"10.1109\/79.911197"},{"key":"553_CR30","unstructured":"Sharma T, Sumant O. Emotion detection and recognition market size, share, competitive landscape and trend analysis report, by software tool, by application, by technology, by end user\u202f: global Opportunity Analysis and Industry Forecast, 2021\u20132031. Allied Market Research."},{"key":"553_CR31","doi-asserted-by":"crossref","unstructured":"Ekman P, Friesen WV. Constants across cultures in the face and emotion. 1971.","DOI":"10.1016\/B978-0-08-016643-8.50004-5"},{"key":"553_CR32","unstructured":"Ekman P, Friesen WV. Unmasking the face: a guide to recognizing emotions from facial clues. Ishk 2003."},{"key":"553_CR33","doi-asserted-by":"publisher","unstructured":"Antoniadis P, Filntisis PP, Maragos P. 
Exploiting emotional dependencies with graph convolutional networks for facial expression recognition, 2021. https:\/\/doi.org\/10.1109\/FG52635.2021.9667014.","DOI":"10.1109\/FG52635.2021.9667014"},{"key":"553_CR34","doi-asserted-by":"crossref","unstructured":"Matsumoto D. More evidence for the universality of a contempt expression 1992.","DOI":"10.1007\/BF00992972"},{"key":"553_CR35","doi-asserted-by":"crossref","unstructured":"Ekman P. An argument for basic emotions 1992.","DOI":"10.1037\/\/0033-295X.99.3.550"},{"issue":"4","key":"553_CR36","doi-asserted-by":"publisher","first-page":"364","DOI":"10.1177\/1754073911410740","volume":"3","author":"P Ekman","year":"2011","unstructured":"Ekman P, Cordaro D. What is meant by calling emotions basic. Emot Rev. 2011;3(4):364\u201370. https:\/\/doi.org\/10.1177\/1754073911410740.","journal-title":"Emot Rev"},{"key":"553_CR37","doi-asserted-by":"crossref","unstructured":"Zangeneh Soroush M, Maghooli K, Setarehdan SK, Motie Nasrabadi A. Emotion classification through nonlinear EEG analysis using machine learning methods. Int Clin Neurosci J 2018;5(4):135\u2013149.","DOI":"10.15171\/icnj.2018.26"},{"issue":"4","key":"553_CR38","doi-asserted-by":"publisher","first-page":"2132","DOI":"10.1109\/TAFFC.2022.3188390","volume":"13","author":"AV Savchenko","year":"2022","unstructured":"Savchenko AV, Savchenko LV, Makarov I. Classifying emotions and engagement in online learning based on a single facial expression recognition neural network. IEEE Trans Affect Comput. 2022;13(4):2132\u201343. https:\/\/doi.org\/10.1109\/TAFFC.2022.3188390.","journal-title":"IEEE Trans Affect Comput"},{"key":"553_CR39","doi-asserted-by":"publisher","unstructured":"Savchenko AV. Facial expression and attributes recognition based on multi-task learning of lightweight neural networks 2021. 
https:\/\/doi.org\/10.1109\/SISY52375.2021.9582508.","DOI":"10.1109\/SISY52375.2021.9582508"},{"key":"553_CR40","unstructured":"Tereikovska L, Tereikovskyi I, Mussiraliyeva S, Akhmed G, Beketova A, Sambetbayeva A. Recognition of emotions by facial geometry using a capsule neural network. Int J Civ Eng Technol 2019;10(3)."},{"key":"553_CR41","unstructured":"Roy S, Etemad A. Contrastive learning of view-invariant representations for facial expressions recognition, 2023 [Online]. Available: http:\/\/arxiv.org\/abs\/2311.06852"},{"key":"553_CR42","doi-asserted-by":"publisher","unstructured":"Alrfou K, Kordijazi A. Computer vision methods for the microstructural analysis of materials: the state-of-the-art and future perspectives. https:\/\/doi.org\/10.48550\/arXiv.2208.04149.","DOI":"10.48550\/arXiv.2208.04149"},{"key":"553_CR43","doi-asserted-by":"crossref","unstructured":"Bassili JN. Facial motion in the perception of faces and of emotional expression. J Exp Psychol Hum Percept Perform 1978;4(3):373.","DOI":"10.1037\/\/0096-1523.4.3.373"},{"key":"553_CR44","unstructured":"Roy S, Etemad A. Analysis of semi-supervised methods for facial expression recognition 2022 [Online]. Available: http:\/\/arxiv.org\/abs\/2208.00544"},{"key":"553_CR45","doi-asserted-by":"crossref","unstructured":"Meena G, Mohbey KK, Lokesh K. FSTL-SA: few-shot transfer learning for sentiment analysis from facial expressions. Multimedia Tools Appl: 2024;1\u201329.","DOI":"10.1007\/s11042-024-20518-y"},{"issue":"2","key":"553_CR46","first-page":"64","volume":"12","author":"G Meena","year":"2024","unstructured":"Meena G, Indian A, Mohbey KK, Jangid K. Point of interest recommendation system using sentiment analysis. J Inf Sci Theory Pract. 2024;12(2):64\u201378.","journal-title":"J Inf Sci Theory Pract"},{"key":"553_CR47","doi-asserted-by":"crossref","unstructured":"Indian A, Manethia P, Meena G, Mohbey KK. Decoding emotions: unveiling sentiments and sarcasm through text analysis. 
In International conference on deep learning, artificial intelligence and robotics (pp. 714\u2013731). Cham: Springer Nature Switzerland;2023.","DOI":"10.1007\/978-3-031-60935-0_62"},{"issue":"2","key":"553_CR48","doi-asserted-by":"publisher","first-page":"2531","DOI":"10.30574\/ijsra.2024.12.2.1549","volume":"12","author":"I Qutab","year":"2024","unstructured":"Qutab I, Jiangbin Z, Aqeel M, Fatima U, Butt IA. Advancing emotion recognition in facial expressions through PCA, RFE, and MLP Integration. Int J Sci Res Arch. 2024;12(2):2531\u201342. https:\/\/doi.org\/10.30574\/ijsra.2024.12.2.1549.","journal-title":"Int J Sci Res Arch"},{"key":"553_CR49","doi-asserted-by":"publisher","first-page":"1429","DOI":"10.1016\/j.enbuild.2017.11.045","volume":"158","author":"F Bre","year":"2018","unstructured":"Bre F, Gimenez JM, Fachinotti VD. Prediction of wind pressure coefficients on building surfaces using artificial neural networks. Energy Build. 2018;158:1429\u201341. https:\/\/doi.org\/10.1016\/j.enbuild.2017.11.045.","journal-title":"Energy Build"},{"key":"553_CR50","unstructured":"Padgett C, Cottrell G. Representing face images for emotion classification."},{"key":"553_CR51","doi-asserted-by":"publisher","DOI":"10.3390\/ma9070531","author":"E Garc\u00eda-Gonzalo","year":"2016","unstructured":"Garc\u00eda-Gonzalo E, Fern\u00e1ndez-Mu\u00f1iz Z, Nieto PJG, S\u00e1nchez AB, Fern\u00e1ndez MM. Hard-rock stability analysis for span design in entry-type excavations with learning classifiers. Materials. 2016. https:\/\/doi.org\/10.3390\/ma9070531.","journal-title":"Materials"},{"key":"553_CR52","unstructured":"Guo G, Li SZ, Chan K. Face recognition by support vector machines."},{"key":"553_CR53","doi-asserted-by":"publisher","DOI":"10.3390\/app11073138","author":"M Zhang","year":"2021","unstructured":"Zhang M, Chen X, Li W. A hybrid hidden Markov model for pipeline leakage detection. Appl Sci. 2021. 
https:\/\/doi.org\/10.3390\/app11073138.","journal-title":"Appl Sci"},{"key":"553_CR54","doi-asserted-by":"publisher","DOI":"10.3390\/electronics10091036","author":"MAH Akhand","year":"2021","unstructured":"Akhand MAH, Roy S, Siddique N, Kamal MAS, Shimamura T. Facial emotion recognition using transfer learning in the deep CNN. Electronics (Switzerland). 2021. https:\/\/doi.org\/10.3390\/electronics10091036.","journal-title":"Electronics (Switzerland)"},{"key":"553_CR55","doi-asserted-by":"crossref","unstructured":"Singh M, Majumder A, Behera L. Facial expressions recognition system using Bayesian inference. Institute of Electrical and Electronics Engineers 2014.","DOI":"10.1109\/IJCNN.2014.6889754"},{"key":"553_CR56","doi-asserted-by":"publisher","DOI":"10.1016\/S0893-6080(03)00115-1","author":"M Matsugu","year":"2003","unstructured":"Matsugu M, Mori K, Mitari Y, Kaneda Y. Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Netw. 2003. https:\/\/doi.org\/10.1016\/S0893-6080(03)00115-1.","journal-title":"Neural Netw"},{"key":"553_CR57","unstructured":"Cohen I, Sebe N, Cozman FG, Cirelo MC, Huang TS. Learning Bayesian network classifiers for facial expression recognition using both labeled and unlabeled data."},{"key":"553_CR58","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-12-817736-5.00009-0","author":"S Misra","year":"2019","unstructured":"Misra S, Li H. Noninvasive fracture characterization based on the classification of sonic wave travel times. Mach Learn Subsurf Charact. 2019. https:\/\/doi.org\/10.1016\/B978-0-12-817736-5.00009-0.","journal-title":"Mach Learn Subsurf Charact"},{"key":"553_CR59","doi-asserted-by":"publisher","unstructured":"Ai H, Huang C, Wang Y, Wu B. Real time facial expression recognition with Adaboost. 2004. 
https:\/\/doi.org\/10.1109\/ICPR.2004.733.","DOI":"10.1109\/ICPR.2004.733"},{"key":"553_CR60","doi-asserted-by":"publisher","DOI":"10.3390\/en13246668","author":"R Muzzammel","year":"2020","unstructured":"Muzzammel R, Raza A. A support vector machine learning-based protection technique for MT-HVDC systems. Energies. 2020. https:\/\/doi.org\/10.3390\/en13246668.","journal-title":"Energies"},{"issue":"1","key":"553_CR61","doi-asserted-by":"publisher","first-page":"172","DOI":"10.1109\/TIP.2006.884954","volume":"16","author":"I Kotsia","year":"2007","unstructured":"Kotsia I, Pitas I. Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans Image Process. 2007;16(1):172\u201387. https:\/\/doi.org\/10.1109\/TIP.2006.884954.","journal-title":"IEEE Trans Image Process"},{"key":"553_CR62","doi-asserted-by":"publisher","first-page":"94499","DOI":"10.1109\/ACCESS.2020.2995629","volume":"8","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Jiang H, Li X, Lu B, Rabie KM, Rehman AU. A new framework combining local-region division and feature selection for micro-expressions recognition. IEEE Access. 2020;8:94499\u2013509. https:\/\/doi.org\/10.1109\/ACCESS.2020.2995629.","journal-title":"IEEE Access"},{"key":"553_CR63","unstructured":"Zhao G, Pietik\u00e4inen M. Dynamic texture recognition using local binary patterns with an application to facial expressions."},{"key":"553_CR64","doi-asserted-by":"publisher","DOI":"10.3390\/s19010204","author":"C Li","year":"2019","unstructured":"Li C, Wang Y, Zhang X, Gao H, Yang Y, Wang J. Deep belief network for spectral\u2013spatial classification of hyperspectral remote sensor data. Sensors. 2019. https:\/\/doi.org\/10.3390\/s19010204.","journal-title":"Sensors"},{"key":"553_CR65","unstructured":"Ranzato A, Susskind J, Mnih V, Hinton G. 
On deep generative models with applications to recognition."},{"key":"553_CR66","doi-asserted-by":"publisher","DOI":"10.1155\/2016\/5687602","author":"W Wang","year":"2016","unstructured":"Wang W, Xu L. A modified sparse representation method for facial expression recognition. Comput Intell Neurosci. 2016. https:\/\/doi.org\/10.1155\/2016\/5687602.","journal-title":"Comput Intell Neurosci"},{"issue":"8","key":"553_CR67","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/TCYB.2014.2354351","volume":"45","author":"L Zhong","year":"2015","unstructured":"Zhong L, Liu Q, Yang P, Huang J, Metaxas DN. Learning multiscale active facial patches for expression analysis. IEEE Trans Cybern. 2015;45(8):1499\u2013510. https:\/\/doi.org\/10.1109\/TCYB.2014.2354351.","journal-title":"IEEE Trans Cybern"},{"key":"553_CR68","doi-asserted-by":"publisher","DOI":"10.3390\/s19051168","author":"S Park","year":"2019","unstructured":"Park S, Gil MS, Im H, Moon YS. Measurement noise recommendation for efficient kalman filtering over a large amount of sensor data. Sensors. 2019. https:\/\/doi.org\/10.3390\/s19051168.","journal-title":"Sensors"},{"key":"553_CR69","unstructured":"Tang Y. Deep learning using linear support vector machines 2013 [Online]. Available: http:\/\/arxiv.org\/abs\/1306.0239"},{"key":"553_CR70","doi-asserted-by":"publisher","unstructured":"Kahou SE, et al. Combining modality specific deep neural networks for emotion recognition in video. In ICMI 2013\u2014Proceedings of the 2013 ACM international conference on multimodal interaction, 2013, pp. 543\u2013550. https:\/\/doi.org\/10.1145\/2522848.2531745","DOI":"10.1145\/2522848.2531745"},{"key":"553_CR71","unstructured":"Liu P, Han S, Meng Z, Tong Y. Facial expression recognition via a boosted deep belief network."},{"key":"553_CR72","unstructured":"Liu M, Li S, Shan S, Wang R, Chen X. 
Deeply learning deformable facial action parts model for dynamic expression analysis."},{"key":"553_CR73","doi-asserted-by":"publisher","first-page":"74850","DOI":"10.1109\/ACCESS.2022.3187406","volume":"10","author":"F Arias","year":"2022","unstructured":"Arias F, ZambranoNunez M, Guerra-Adames A, Tejedor-Flores N, Vargas-Lombardo M. Sentiment analysis of public social media as a tool for health-related topics. IEEE Access. 2022;10:74850\u201372. https:\/\/doi.org\/10.1109\/ACCESS.2022.3187406.","journal-title":"IEEE Access"},{"key":"553_CR74","doi-asserted-by":"publisher","unstructured":"Kahou SE, Michalski V, Konda K, Memisevic R, Pal C. Recurrent neural networks for emotion recognition in video. In: ICMI 2015\u2014proceedings of the 2015 ACM international conference on multimodal interaction, association for computing machinery, Inc, 2015, pp. 467\u2013474. https:\/\/doi.org\/10.1145\/2818346.2830596.","DOI":"10.1145\/2818346.2830596"},{"key":"553_CR75","doi-asserted-by":"publisher","unstructured":"Yang L, Jiang D, Han W, Sahli H. DCNN and DNN based multi-modal depression recognition. In 2017 7th international conference on affective computing and intelligent interaction, ACII 2017, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 484\u2013489. https:\/\/doi.org\/10.1109\/ACII.2017.8273643.","DOI":"10.1109\/ACII.2017.8273643"},{"key":"553_CR76","doi-asserted-by":"publisher","unstructured":"Kim BK, Lee H, Roh J, Lee SY. Hierarchical committee of deep CNNs with exponentially-weighted decision fusion for static facial expression recognition. In: ICMI 2015\u2014Proceedings of the 2015 ACM international conference on multimodal interaction, association for computing machinery, Inc, Nov. 2015, pp. 427\u2013434. 
https:\/\/doi.org\/10.1145\/2818346.2830590.","DOI":"10.1145\/2818346.2830590"},{"issue":"29","key":"553_CR77","doi-asserted-by":"publisher","first-page":"31694","DOI":"10.1021\/acsomega.4c02393","volume":"9","author":"T Liang","year":"2024","unstructured":"Liang T, Liu W, Tan K, Wu A, Lu X. Advancing ionic liquid research with pSCNN: a novel approach for accurate normal melting temperature predictions. ACS Omega. 2024;9(29):31694\u2013702. https:\/\/doi.org\/10.1021\/acsomega.4c02393.","journal-title":"ACS Omega"},{"key":"553_CR78","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2020.104037","author":"A Amyar","year":"2020","unstructured":"Amyar A, Modzelewski R, Li H, Ruan S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: classification and segmentation. Comput Biol Med. 2020. https:\/\/doi.org\/10.1016\/j.compbiomed.2020.104037.","journal-title":"Comput Biol Med"},{"issue":"4","key":"553_CR79","doi-asserted-by":"publisher","first-page":"1819","DOI":"10.1007\/s41870-023-01183-0","volume":"15","author":"R Singh","year":"2023","unstructured":"Singh R, Saurav S, Kumar T, Saini R, Vohra A, Singh S. Facial expression recognition in videos using hybrid CNN & ConvLSTM. Int J Inf Technol (Singapore). 2023;15(4):1819\u201330. https:\/\/doi.org\/10.1007\/s41870-023-01183-0.","journal-title":"Int J Inf Technol (Singapore)"},{"issue":"9","key":"553_CR80","doi-asserted-by":"publisher","first-page":"4193","DOI":"10.1109\/TIP.2017.2689999","volume":"26","author":"K Zhang","year":"2017","unstructured":"Zhang K, Huang Y, Du Y, Wang L. Facial expression recognition based on deep evolutional spatial-temporal networks. IEEE Trans Image Process. 2017;26(9):4193\u2013203. https:\/\/doi.org\/10.1109\/TIP.2017.2689999.","journal-title":"IEEE Trans Image Process"},{"key":"553_CR81","doi-asserted-by":"publisher","unstructured":"Fan Y, Lu X, Li D, Liu Y. Video-based emotion recognition using CNN-RNN and C3D hybrid networks. 
In ICMI 2016\u2014proceedings of the 18th ACM international conference on multimodal interaction, Association for Computing Machinery, Inc, 2016, pp. 445\u2013450. https:\/\/doi.org\/10.1145\/2993148.2997632.","DOI":"10.1145\/2993148.2997632"},{"key":"553_CR82","unstructured":"Surani M. GANmut: generating and modifying facial expressions 2024 [Online]. Available: http:\/\/arxiv.org\/abs\/2406.11079"},{"key":"553_CR83","unstructured":"Zhang F, Zhang T, Mao Q, Xu C. Joint pose and expression modeling for facial expression recognition."},{"key":"553_CR84","doi-asserted-by":"publisher","DOI":"10.3390\/math11030776","author":"SB Punuri","year":"2023","unstructured":"Punuri SB, et al. Efficient Net-XGBoost: an implementation for facial emotion recognition using transfer learning. Mathematics. 2023. https:\/\/doi.org\/10.3390\/math11030776.","journal-title":"Mathematics"},{"issue":"7","key":"553_CR85","doi-asserted-by":"publisher","first-page":"550","DOI":"10.1080\/08839514.2020.1730631","volume":"34","author":"K Pytel","year":"2020","unstructured":"Pytel K. Hybrid multi-evolutionary algorithm to solve optimization problems. Appl Artif Intell. 2020;34(7):550\u201363. https:\/\/doi.org\/10.1080\/08839514.2020.1730631.","journal-title":"Appl Artif Intell"},{"key":"553_CR86","doi-asserted-by":"crossref","unstructured":"Liu C, Jiang W, Wang M, Tang T. Group level audio-video emotion recognition using hybrid networks. In: Proceedings of the 2020 international conference on multimodal interaction, 2020, pp. 807\u2013812.","DOI":"10.1145\/3382507.3417968"},{"key":"553_CR87","unstructured":"Xue F, Wang Q, Guo G. TransFER: learning relation-aware facial expression representations with transformers."},{"key":"553_CR88","unstructured":"Li Y, Wang M, Gong M, Lu Y, Liu L. FER-former: multi-modal transformer for facial expression recognition 2023 [Online]. 
Available: http:\/\/arxiv.org\/abs\/2303.12997"},{"key":"553_CR89","doi-asserted-by":"publisher","unstructured":"Aironi C, Cornell S, Principi E, Squartini S. Graph-based representation of audio signals for sound event classification. In European signal processing conference EUSIPCO, 2021, pp. 566\u2013570. https:\/\/doi.org\/10.23919\/EUSIPCO54536.2021.9616143.","DOI":"10.23919\/EUSIPCO54536.2021.9616143"},{"issue":"4","key":"553_CR90","doi-asserted-by":"publisher","first-page":"747","DOI":"10.1109\/THMS.2022.3163211","volume":"52","author":"J Zhang","year":"2022","unstructured":"Zhang J, Sun G, Zheng K, Mazhar S, Fu X, Li Y, et al. SSGNN: a macro and microfacial expression recognition graph neural network combining spatial and spectral domain features. IEEE Trans Hum Mach Syst. 2022;52(4):747\u201360.","journal-title":"IEEE Trans Hum Mach Syst"},{"issue":"5","key":"553_CR91","doi-asserted-by":"publisher","first-page":"01","DOI":"10.5121\/ijcses.2015.6501","volume":"6","author":"S Roychowdhury","year":"2015","unstructured":"Roychowdhury S, Emmons M. A survey of the trends in facial and expression recognition databases and methods. Int J Comput Sci Eng Surv. 2015;6(5):01\u201319. https:\/\/doi.org\/10.5121\/ijcses.2015.6501.","journal-title":"Int J Comput Sci Eng Surv"},{"key":"553_CR92","unstructured":"Zhao Z, Cao Y, Gong S, Patras I. Enhancing Zero-shot facial expression recognition by LLM knowledge transfer May 2024 [Online]. Available: http:\/\/arxiv.org\/abs\/2405.19100"},{"key":"553_CR93","doi-asserted-by":"publisher","DOI":"10.1007\/s11554-023-01310-x","author":"CL Kim","year":"2023","unstructured":"Kim CL, Kim BG. Few-shot learning for facial expression recognition: a comprehensive survey. J Real Time Image Process. 2023. https:\/\/doi.org\/10.1007\/s11554-023-01310-x.","journal-title":"J Real Time Image Process"},{"key":"553_CR94","doi-asserted-by":"crossref","unstructured":"Barsoum E, Zhang C, Ferrer CC, Zhang Z. 
Training deep networks for facial expression recognition with crowd-sourced label distribution 2016 [Online]. Available: http:\/\/arxiv.org\/abs\/1608.01041","DOI":"10.1145\/2993148.2993165"},{"key":"553_CR95","unstructured":"Lyons MJ, Akamatsu S, Kamachi M, Gyoba J, Budynek J. The Japanese female facial expression (JAFFE) database. In Proceedings of third international conference on automatic face and gesture recognition, 1998, pp. 14\u201316."},{"key":"553_CR96","unstructured":"Li X et al. Analyzing facial expressions and emotions in three dimensional space with multimodal sensing. IEEE\/CVF international conference on computer vision (ICCV)."},{"key":"553_CR97","doi-asserted-by":"publisher","unstructured":"Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn\u2013Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE computer society conference on computer vision and pattern recognition\u2014workshops, CVPRW 2010, 2010, pp. 94\u2013101. https:\/\/doi.org\/10.1109\/CVPRW.2010.5543262.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"553_CR98","unstructured":"Kanade T, Cohn JF, Tian Y. Comprehensive database for facial expression analysis. [Online]. Available: http:\/\/www.cs.cmu.edu\/~face"},{"issue":"2","key":"553_CR99","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1109\/34.908962","volume":"23","author":"YL Tian","year":"2001","unstructured":"Tian YL, Kanade T, Cohn JF. Recognizing action units for facial expression analysis. IEEE Trans Pattern Anal Mach Intell. 2001;23(2):97\u2013115. https:\/\/doi.org\/10.1109\/34.908962.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"553_CR100","unstructured":"Yin L, Wei X, Sun Y, Wang J, Rosato MJ. A 3D facial expression database for facial behavior research. In: 7th international conference on automatic face and gesture recognition (FGR06) 2006 Apr 10 (pp. 211\u2013216). 
IEEE."},{"key":"553_CR101","unstructured":"Fedorov I. Extended Yale B Database 2024."},{"key":"553_CR102","unstructured":"Susskind E. The Toronto Face Database; Technical Report 3; Department of Computer Science, University of Toronto: Toronto, ON, Canada."},{"issue":"9","key":"553_CR103","doi-asserted-by":"publisher","first-page":"607","DOI":"10.1016\/j.imavis.2011.07.002","volume":"29","author":"G Zhao","year":"2011","unstructured":"Zhao G, Huang X, Taini M, Li SZ, Pietik\u00e4inen M. Facial expression recognition from near-infrared videos. Image Vis Comput. 2011;29(9):607\u201319. https:\/\/doi.org\/10.1016\/j.imavis.2011.07.002.","journal-title":"Image Vis Comput"},{"key":"553_CR104","doi-asserted-by":"crossref","unstructured":"Lundqvist D, Flykt A, \u00d6hman A. Karolinska directed emotional faces. Cogn Emot 1998.","DOI":"10.1037\/t27732-000"},{"key":"553_CR105","doi-asserted-by":"crossref","unstructured":"Lundqvist D, Flykt A, \u00d6hman A. Karolinska directed emotional faces. Cognition and Emotion. 1998 Jan 1.","DOI":"10.1037\/t27732-000"},{"key":"553_CR106","unstructured":"Li S, Deng W, Du J. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild [Online]. Available: http:\/\/whdeng.cn\/RAF\/model1.html"},{"key":"553_CR107","unstructured":"Zhang S, Luo P, Loy CC, Tang X. From facial expression recognition to interpersonal relation prediction 2016 [Online]. Available: http:\/\/arxiv.org\/abs\/1609.06426"},{"key":"553_CR108","doi-asserted-by":"publisher","unstructured":"Gross R, Matthews I, Cohn J, Kanade T, Baker S. Multi-PIE. In 2008 8th IEEE international conference on automatic face & gesture recognition, 2008, pp. 1\u20138. https:\/\/doi.org\/10.1109\/AFGR.2008.4813399.","DOI":"10.1109\/AFGR.2008.4813399"},{"key":"553_CR109","unstructured":"Valstar M, Pantic M. 
Induced disgust, happiness and surprise: an addition to the MMI facial expression database."},{"key":"553_CR110","unstructured":"Pantic M, Valstar M, Rademaker R, Maat L. Web-based database for facial expression analysis."},{"key":"553_CR111","unstructured":"Papachristou C, Aifanti N, Delopoulos A. The MUG facial expression database 2010 [Online]. Available: https:\/\/www.researchgate.net\/publication\/224187946"},{"key":"553_CR112","unstructured":"Multimedia Understanding Group. Multimedia Understanding Group (MUG) Database."},{"key":"553_CR113","doi-asserted-by":"publisher","unstructured":"Kosti R, Alvarez JM, Recasens A, Lapedriza A. Context based emotion recognition using EMOTIC dataset 2020. https:\/\/doi.org\/10.1109\/TPAMI.2019.2916866.","DOI":"10.1109\/TPAMI.2019.2916866"},{"issue":"11","key":"553_CR114","doi-asserted-by":"publisher","first-page":"2755","DOI":"10.1109\/TPAMI.2019.2916866","volume":"42","author":"R Kosti","year":"2020","unstructured":"Kosti R, Alvarez JM, Recasens A, Lapedriza A. Context based emotion recognition using EMOTIC dataset. IEEE Trans Pattern Anal Mach Intell. 2020;42(11):2755\u201366. https:\/\/doi.org\/10.1109\/TPAMI.2019.2916866.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"553_CR115","unstructured":"Kollias D, Zafeiriou S. Aff-Wild2: extending the Aff-wild database for affect recognition 2018 [Online]. Available: http:\/\/arxiv.org\/abs\/1811.07770"},{"key":"553_CR116","unstructured":"Kotsiantis SB, Kanellopoulos D, Pintelas PE. Data preprocessing for supervised learning. International journal of computer science. 2006;1(2):111\u20137."},{"key":"553_CR117","unstructured":"Pasha M, Peng Wong K. CMED: a child micro-expression dataset 2025 [Online]. Available: http:\/\/arxiv.org\/abs\/2503.21690"},{"key":"553_CR118","doi-asserted-by":"publisher","DOI":"10.1016\/j.jvcir.2023.104033","volume":"98","author":"L Ulrich","year":"2024","unstructured":"Ulrich L, et al. 
CalD3r and MenD3s: spontaneous 3D facial expression databases. J Vis Commun Image Represent. 2024;98:104033. https:\/\/doi.org\/10.1016\/j.jvcir.2023.104033.","journal-title":"J Vis Commun Image Represent"},{"key":"553_CR119","doi-asserted-by":"publisher","unstructured":"Zeng D, Veldhuis R, Spreeuwers L. A survey of face recognition techniques under occlusion 2021. John Wiley and Sons Inc. https:\/\/doi.org\/10.1049\/bme2.12029.","DOI":"10.1049\/bme2.12029"},{"key":"553_CR120","unstructured":"Isabelle J, Wood R, Olszewska JI. Lighting-variable AdaBoost based-on system for robust face detection 2012."},{"key":"553_CR121","doi-asserted-by":"crossref","unstructured":"Braje WL, Tarr MJ, Troje NF. Illumination effects in face recognition 1998.","DOI":"10.3758\/BF03330623"},{"key":"553_CR122","unstructured":"Prikler F. Preparing the camera ready paper for proceedings of international conference CADSM Polyana-Svalyava 2016. [Online]. Available: https:\/\/www.youtube.com\/watch?v=EO-EM03J8LE"},{"key":"553_CR123","unstructured":"Zou W, Yuen PC. Very low resolution face recognition problem."},{"key":"553_CR124","doi-asserted-by":"publisher","unstructured":"Ekenel HK, Stiefelhagen R. Why is facial occlusion a challenging problem? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2009, pp. 299\u2013308. https:\/\/doi.org\/10.1007\/978-3-642-01793-3_31.","DOI":"10.1007\/978-3-642-01793-3_31"},{"issue":"4","key":"553_CR125","doi-asserted-by":"publisher","first-page":"314","DOI":"10.1049\/iet-bmt.2014.0022","volume":"3","author":"A Abaza","year":"2014","unstructured":"Abaza A, Harrison MA, Bourlai T, Ross A. Design and evaluation of photometric image quality measures for effective face recognition. IET Biom. 2014;3(4):314\u201324. https:\/\/doi.org\/10.1049\/iet-bmt.2014.0022.","journal-title":"IET Biom"},{"key":"553_CR126","unstructured":"Shreve M, Godavarthy S, Goldgof D, Sarkar S. 
Macro- and micro-expression spotting in long videos using spatio-temporal strain."},{"key":"553_CR127","doi-asserted-by":"publisher","unstructured":"Hasan MK, Ahsan MS, Abdullah-Al-Mamun, Newaz SHS, Lee GM. Human face detection techniques: a comprehensive review and future research directions. Electronics (Switzerland) 2021. https:\/\/doi.org\/10.3390\/electronics10192354.","DOI":"10.3390\/electronics10192354"},{"key":"553_CR128","unstructured":"Felzenszwalb P, Girshick R, Mcallester D, Ramanan D. Object detection with discriminatively trained part based models."},{"key":"553_CR129","doi-asserted-by":"publisher","unstructured":"Cootes TF, Taylor CJ. Active Shape Models\u2014\u2018smart snakes\u2019, British Machine Vision Association and Society for Pattern Recognition, Feb. 2013, pp. 28.1\u201328.10. https:\/\/doi.org\/10.5244\/c.6.28.","DOI":"10.5244\/c.6.28"},{"key":"553_CR130","unstructured":"Viola P, Jones M. Rapid object detection using a boosted cascade of simple features 2004. [Online]. Available: http:\/\/www.merl.com"},{"key":"553_CR131","unstructured":"Liu C, Wechsler H. Independent component analysis of gabor features for face recognition 2003."},{"issue":"2","key":"553_CR132","doi-asserted-by":"publisher","first-page":"927","DOI":"10.1007\/s10462-018-9650-2","volume":"52","author":"A Kumar","year":"2019","unstructured":"Kumar A, Kaur A, Kumar M. Face detection techniques: a review. Artif Intell Rev. 2019;52(2):927\u201348. https:\/\/doi.org\/10.1007\/s10462-018-9650-2.","journal-title":"Artif Intell Rev"},{"key":"553_CR133","unstructured":"Bhele SG, Mankar VH. A review paper on face recognition techniques 2012."},{"key":"553_CR134","doi-asserted-by":"publisher","DOI":"10.1088\/1742-6596\/1591\/1\/012028","author":"WK Mutlag","year":"2020","unstructured":"Mutlag WK, Ali SK, Aydam ZM, Taher BH. Feature extraction methods: a review. J Phys Conf Ser. 2020. 
https:\/\/doi.org\/10.1088\/1742-6596\/1591\/1\/012028.","journal-title":"J Phys Conf Ser"},{"key":"553_CR135","doi-asserted-by":"crossref","unstructured":"Jun H, Shuai L, Jinming S, Yue L, Jingwei W, Peng J. Facial expression recognition based on VGGNet convolutional neural network. In: 2018 Chinese automation congress (CAC) 2018 Nov 30 (pp. 4146\u20134151). IEEE.","DOI":"10.1109\/CAC.2018.8623238"},{"key":"553_CR136","doi-asserted-by":"publisher","unstructured":"Abhishree TM, Latha J, Manikantan K, Ramachandran S. Face recognition using gabor filter based feature extraction with anisotropic diffusion as a pre-processing technique. In Procedia computer science, Elsevier B.V., 2015, pp. 312\u2013321. https:\/\/doi.org\/10.1016\/j.procs.2015.03.149.","DOI":"10.1016\/j.procs.2015.03.149"},{"key":"553_CR137","unstructured":"Zhang Z, Lyons M, Schuster M, Akamatsu S. Comparison between geometry-based and gabor-wavelets-based facial expression recognition using multi-layer perceptron."},{"key":"553_CR138","doi-asserted-by":"publisher","unstructured":"Mita T, Kaneko T, Hori O. Joint Haar-like features for face detection. In Proceedings of the IEEE international conference on computer vision, 2005, pp. 1619\u20131626. https:\/\/doi.org\/10.1109\/ICCV.2005.129.","DOI":"10.1109\/ICCV.2005.129"},{"key":"553_CR139","doi-asserted-by":"publisher","unstructured":"Xiong X, De La Torre F. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE computer society conference on computer vision and pattern recognition, 2013, pp. 532\u2013539. https:\/\/doi.org\/10.1109\/CVPR.2013.75.","DOI":"10.1109\/CVPR.2013.75"},{"key":"553_CR140","doi-asserted-by":"crossref","unstructured":"Zhang K, Zhang Z, Li Z, Qiao Y. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett 2016;23(10):1499\u20131503.","DOI":"10.1109\/LSP.2016.2603342"},{"key":"553_CR141","unstructured":"Cootes TF, Edwards GJ, Taylor CJ. 
Active appearance models."},{"key":"553_CR142","unstructured":"Asthana A, Zafeiriou S, Cheng S, Pantic M. Incremental face alignment in the wild."},{"key":"553_CR143","unstructured":"Ren S, Cao X, Wei Y, Sun J. Face alignment at 3000 FPS via regressing local binary features."},{"key":"553_CR144","doi-asserted-by":"publisher","unstructured":"Li S, Deng W. Deep facial expression recognition: a survey 2018. https:\/\/doi.org\/10.1109\/TAFFC.2020.2981446.","DOI":"10.1109\/TAFFC.2020.2981446"},{"key":"553_CR145","doi-asserted-by":"publisher","unstructured":"Johnston B, de Chazal P. Review of image-based automatic facial landmark identification techniques Dec. 01, 2018, Springer International Publishing. https:\/\/doi.org\/10.1186\/s13640-018-0324-4.","DOI":"10.1186\/s13640-018-0324-4"},{"key":"553_CR146","doi-asserted-by":"crossref","unstructured":"Cootes TF, Taylor CJ, Cooper DH, Graham J. Active shape models-their training and application. Comput Vis Image Understand 1995;61(1):38\u201359.","DOI":"10.1006\/cviu.1995.1004"},{"key":"553_CR147","unstructured":"Hong X, Xu Y, Zhao G. LBP-TOP: a tensor unfolding revisit. [Online]. Available: http:\/\/www.ee.oulu.fi\/research\/imag\/cmvs\/files\/code\/Fast"},{"key":"553_CR148","doi-asserted-by":"crossref","unstructured":"Comon P. Independent component analysis, a new concept? Signal Processing 1994.","DOI":"10.1016\/0165-1684(94)90029-9"},{"key":"553_CR149","unstructured":"Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis 2004."},{"key":"553_CR150","doi-asserted-by":"crossref","unstructured":"Hyv\u00e4rinen A, Oja E. Independent component analysis: algorithms and applications 2000.","DOI":"10.1002\/0471221317"},{"issue":"2","key":"553_CR151","doi-asserted-by":"publisher","first-page":"169","DOI":"10.3233\/AIC-170729","volume":"30","author":"A Tharwat","year":"2017","unstructured":"Tharwat A, Gaber T, Ibrahim A, Hassanien AE. Linear discriminant analysis: A detailed tutorial. AI Commun. 2017;30(2):169\u201390. 
https:\/\/doi.org\/10.3233\/AIC-170729.","journal-title":"AI Commun"},{"key":"553_CR152","doi-asserted-by":"publisher","unstructured":"Dalal N, Triggs B. Histograms of oriented gradients for human detection, pp. 886\u2013893, 2005. https:\/\/doi.org\/10.1109\/CVPR.2005.177.","DOI":"10.1109\/CVPR.2005.177"},{"key":"553_CR153","unstructured":"Bosch A, Zisserman A, Mu\u00f1oz X. Image classification using random forests and ferns."},{"issue":"10","key":"553_CR154","first-page":"3474","volume":"74","author":"K Mase","year":"1991","unstructured":"Mase K. Recognition of facial expression from optical flow. IEICE Trans Inf Syst. 1991;74(10):3474\u201383.","journal-title":"IEICE Trans Inf Syst"},{"key":"553_CR155","unstructured":"Cohn JF, Lien JJ, Zlochower AJ, Kanade T. Feature-point tracking by optical flow discriminates subtle differences in facial expression."},{"key":"553_CR156","unstructured":"Wold S, Esbensen K, Geladi P. Principal component analysis."},{"key":"553_CR157","doi-asserted-by":"crossref","unstructured":"Barron JL, Fleet DJ, Beauchemin SS. Performance of optical flow techniques 1994.","DOI":"10.1007\/BF01420984"},{"key":"553_CR158","doi-asserted-by":"crossref","unstructured":"Abdi H, Williams LJ. Principal component analysis 2010. [Online]. Available: www.utdallas.edu\/.","DOI":"10.1002\/wics.101"},{"key":"553_CR159","doi-asserted-by":"crossref","unstructured":"Ekman P, Friesen WV. Facial action coding system. Environ Psychol Nonverbal Behav 1978.","DOI":"10.1037\/t27734-000"},{"key":"553_CR160","unstructured":"Tomasi C. Histograms of oriented gradients 2017."},{"issue":"2","key":"553_CR161","first-page":"5","volume":"3","author":"E Friesen","year":"1978","unstructured":"Friesen E, Ekman P. Facial action coding system: a technique for the measurement of facial movement. 
Palo Alto. 1978;3(2):5.","journal-title":"Palo Alto"},{"key":"553_CR162","doi-asserted-by":"publisher","unstructured":"Canedo D, Neves AJR. Facial expression recognition using computer vision: a systematic review 2019, MDPI AG. https:\/\/doi.org\/10.3390\/app9214678.","DOI":"10.3390\/app9214678"},{"key":"553_CR163","unstructured":"Carreira-Perpi\u00f1\u00e1n MA. A review of dimension reduction techniques 1997."},{"key":"553_CR164","doi-asserted-by":"crossref","unstructured":"Calder AJ, Burton AM, Miller P, Young AW, Akamatsu S. A principal component analysis of facial expressions 2001. [Online]. Available: www.elsevier.com","DOI":"10.1016\/S0042-6989(01)00002-5"},{"key":"553_CR165","doi-asserted-by":"crossref","unstructured":"Lowe DG. Object recognition from local scale-invariant features 1999.","DOI":"10.1109\/ICCV.1999.790410"},{"key":"553_CR166","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1162\/jocn.1991.3.1.71","volume":"3","author":"MA Turk","year":"1991","unstructured":"Turk MA, Pentland A. Eigenfaces for recognition. J Cogn Neurosci. 1991;3:71\u201386.","journal-title":"J Cogn Neurosci"},{"key":"553_CR167","unstructured":"Cover TM, Hart PE. Approximate formulas for the information transmitted by a discrete communication channel 1952."},{"key":"553_CR168","unstructured":"Deng H-B, Jin L-W, Zhen L-X, Huang J-C. A new facial expression recognition method based on local gabor filter bank and PCA plus LDA."},{"key":"553_CR169","doi-asserted-by":"publisher","unstructured":"Kotsiantis SB. Decision trees: a recent overview 2013. https:\/\/doi.org\/10.1007\/s10462-011-9272-4.","DOI":"10.1007\/s10462-011-9272-4"},{"key":"553_CR170","unstructured":"Vretos N, Tefas A, Pitas I. Facial expression recognition with robust covariance estimation and support vector machines."},{"key":"553_CR171","unstructured":"Zadeh LA. Fuzzy sets 1965."},{"key":"553_CR172","unstructured":"Nefian AV, Hayes MH. 
Hidden Markov models for face recognition."},{"key":"553_CR173","doi-asserted-by":"crossref","unstructured":"Michel P, El Kaliouby R. Real Time facial expression recognition in video using support vector machines 2003.","DOI":"10.1145\/958468.958479"},{"key":"553_CR174","doi-asserted-by":"publisher","DOI":"10.1186\/s42492-019-0034-5","author":"I Dagher","year":"2019","unstructured":"Dagher I, Dahdah E, Al SM. Facial expression recognition using three-stage support vector machines. Vis Comput Ind Biomed Art. 2019. https:\/\/doi.org\/10.1186\/s42492-019-0034-5.","journal-title":"Vis Comput Ind Biomed Art"},{"key":"553_CR175","doi-asserted-by":"publisher","unstructured":"Sarker IH. Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. 2021, Springer. https:\/\/doi.org\/10.1007\/s42979-021-00815-1.","DOI":"10.1007\/s42979-021-00815-1"},{"key":"553_CR176","doi-asserted-by":"publisher","unstructured":"Wu S, et al. Deep learning in clinical natural language processing: a methodical review, Mar. 01, 2020, Oxford University Press. https:\/\/doi.org\/10.1093\/jamia\/ocz200.","DOI":"10.1093\/jamia\/ocz200"},{"key":"553_CR177","doi-asserted-by":"publisher","first-page":"19143","DOI":"10.1109\/ACCESS.2019.2896880","volume":"7","author":"AB Nassif","year":"2019","unstructured":"Nassif AB, Shahin I, Attili I, Azzeh M, Shaalan K. Speech recognition using deep neural networks: a systematic review. IEEE Access. 2019;7:19143\u201365. https:\/\/doi.org\/10.1109\/ACCESS.2019.2896880.","journal-title":"IEEE Access"},{"key":"553_CR178","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-022-29268-7","author":"N Sapoval","year":"2022","unstructured":"Sapoval N, et al. Current progress and open challenges for applying deep learning across the biosciences. Nat Res. 2022. https:\/\/doi.org\/10.1038\/s41467-022-29268-7.","journal-title":"Nat Res"},{"key":"553_CR179","doi-asserted-by":"publisher","unstructured":"Wang X, Wang K, Lian S. 
A survey on face data augmentation for the training of deep neural networks, Oct. 01, 2020, Springer Science and Business Media Deutschland GmbH. https:\/\/doi.org\/10.1007\/s00521-020-04748-3.","DOI":"10.1007\/s00521-020-04748-3"},{"key":"553_CR180","doi-asserted-by":"publisher","unstructured":"Deng L, Yu D. Deep learning: methods and applications 2013. Now Publishers Inc. https:\/\/doi.org\/10.1561\/2000000039.","DOI":"10.1561\/2000000039"},{"key":"553_CR181","doi-asserted-by":"crossref","unstructured":"Olszewska JI. Automated face recognition: challenges and solutions. Pattern Recognit Anal Appl 2016.","DOI":"10.5772\/66013"},{"key":"553_CR182","doi-asserted-by":"publisher","unstructured":"Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 86(11) 1998. https:\/\/doi.org\/10.1109\/5.726791.","DOI":"10.1109\/5.726791"},{"key":"553_CR183","unstructured":"Hinton GE, Osindero S, Teh Y-W. A fast learning algorithm for deep belief nets."},{"key":"553_CR184","unstructured":"Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation."},{"issue":"5786","key":"553_CR185","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1126\/science.1129198","volume":"313","author":"MW Klein","year":"2006","unstructured":"Klein MW, Enkrich C, Wegener M, Linden S. Second-harmonic generation from magnetic metamaterials. Science. 2006;313(5786):502\u20134. https:\/\/doi.org\/10.1126\/science.1129198.","journal-title":"Science"},{"key":"553_CR186","doi-asserted-by":"publisher","unstructured":"Mollahosseini A, Hasani B, Mahoor MH. AffectNet: a database for facial expression, valence, and arousal computing in the wild. 2017. https:\/\/doi.org\/10.1109\/TAFFC.2017.2740923.","DOI":"10.1109\/TAFFC.2017.2740923"},{"key":"553_CR187","unstructured":"McCulloch WS, Pitts W. 
A logical calculus of the ideas immanent in nervous activity."},{"key":"553_CR188","unstructured":"Padgett C, Cottrell G. Representing face images for emotion classification 2010."},{"key":"553_CR189","unstructured":"Fabian Benitez-Quiroz C, Srinivasan R, Martinez AM. EmotioNet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild."},{"key":"553_CR190","unstructured":"Benitez-Quiroz CF, Srinivasan R, Feng Q, Wang Y, Martinez AM. EmotioNet Challenge: recognition of facial expressions of emotion in the wild, 2017 [Online]. Available: http:\/\/arxiv.org\/abs\/1703.01210"},{"key":"553_CR191","doi-asserted-by":"publisher","first-page":"643","DOI":"10.1016\/j.neucom.2017.08.043","volume":"273","author":"N Zeng","year":"2018","unstructured":"Zeng N, Zhang H, Song B, Liu W, Li Y, Dobaie AM. Facial expression recognition via learning deep sparse autoencoders. Neurocomputing. 2018;273:643\u20139. https:\/\/doi.org\/10.1016\/j.neucom.2017.08.043.","journal-title":"Neurocomputing"},{"key":"553_CR192","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-021-00444-8","author":"L Alzubaidi","year":"2021","unstructured":"Alzubaidi L, et al. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data. 2021. https:\/\/doi.org\/10.1186\/s40537-021-00444-8.","journal-title":"J Big Data"},{"key":"553_CR193","doi-asserted-by":"publisher","first-page":"610","DOI":"10.1016\/j.patcog.2016.07.026","volume":"61","author":"AT Lopes","year":"2017","unstructured":"Lopes AT, de Aguiar E, De Souza AF, Oliveira-Santos T. Facial expression recognition with convolutional neural networks: coping with few data and the training sample order. Pattern Recognit. 2017;61:610\u201328. https:\/\/doi.org\/10.1016\/j.patcog.2016.07.026.","journal-title":"Pattern Recognit"},{"key":"553_CR194","unstructured":"Ponti MA, Ribeiro LSF, Nazare TS, Bui T, Collomosse J. 
Everything you wanted to know about deep learning for computer vision but were afraid to ask."},{"key":"553_CR195","unstructured":"O\u2019Shea K, Nash R. An introduction to convolutional neural networks, 2015 [Online]. Available: http:\/\/arxiv.org\/abs\/1511.08458"},{"key":"553_CR196","unstructured":"Vijayakumar S, Schaal S. Locally weighted projection regression: an O(n) algorithm for incremental real time learning in high dimensional space."},{"key":"553_CR197","unstructured":"Le T, Duan Y. PointGrid: a deep network for 3D shape understanding."},{"key":"553_CR198","unstructured":"Tharwat A. Principal component analysis\u2014a tutorial 2009."},{"key":"553_CR199","doi-asserted-by":"publisher","unstructured":"Yu Z, Zhang C. Image based static facial expression recognition with multiple deep network learning. https:\/\/doi.org\/10.1145\/2823327.2823341.","DOI":"10.1145\/2823327.2823341"},{"key":"553_CR200","unstructured":"Gholamalinezhad H, Khosravi H. Pooling methods in deep neural networks: a review."},{"key":"553_CR201","unstructured":"Zhang C-L, Luo J-H, Wei X-S, Wu J. In defense of fully connected layers in visual representation transfer."},{"key":"553_CR202","unstructured":"Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. [Online]. Available: http:\/\/code.google.com\/p\/cuda-convnet\/"},{"key":"553_CR203","unstructured":"Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition, 2014 [Online]. Available: http:\/\/arxiv.org\/abs\/1409.1556"},{"key":"553_CR204","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J. Rethinking the inception architecture for computer vision."},{"key":"553_CR205","unstructured":"Parkhi OM, Vedaldi A, Zisserman A. Deep face recognition."},{"key":"553_CR206","unstructured":"Canziani A, Paszke A, Culurciello E. An analysis of deep neural network models for practical applications, May 2016 [Online]. 
Available: http:\/\/arxiv.org\/abs\/1605.07678"},{"key":"553_CR207","unstructured":"Chollet F. Xception: deep learning with depthwise separable convolutions."},{"key":"553_CR208","unstructured":"Szegedy C, et al. Going deeper with convolutions."},{"key":"553_CR209","unstructured":"Targ S, Almeida D, Lyman K. Resnet in Resnet: generalizing residual architectures, Mar. 2016 [Online]. Available: http:\/\/arxiv.org\/abs\/1603.08029"},{"key":"553_CR210","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. [Online]. Available: https:\/\/github.com\/liuzhuang13\/DenseNet."},{"key":"553_CR211","unstructured":"Tan M, Le QV. EfficientNet: rethinking model scaling for convolutional neural networks."},{"key":"553_CR212","unstructured":"Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. [Online]. Available: http:\/\/www.image-net.org."},{"key":"553_CR213","unstructured":"Konda K, Memisevic R, Krueger D. Zero-bias autoencoders and the benefits of co-adapting features, 2014 [Online]. Available: http:\/\/arxiv.org\/abs\/1402.3337"},{"key":"553_CR214","unstructured":"Ding H, Zhou SK, Chellappa R. FaceNet2ExpNet: regularizing a deep face recognition net for expression recognition, 2016 [Online]. Available: http:\/\/arxiv.org\/abs\/1609.06591"},{"key":"553_CR215","unstructured":"Wang H, Raj B. On the origin of deep learning, 2017 [Online]. Available: http:\/\/arxiv.org\/abs\/1702.07800"},{"key":"553_CR216","unstructured":"Hochreiter S, Schmidhuber J. Long short-term memory."},{"key":"553_CR217","unstructured":"Werbos PJ. Backpropagation through time: what it does and how to do it."},{"key":"553_CR218","unstructured":"Tran D, Bourdev L, Fergus R, Torresani L, Paluri M. Learning spatiotemporal features with 3D convolutional networks."},{"key":"553_CR219","doi-asserted-by":"crossref","unstructured":"Schuster M, Paliwal KK. 
Bidirectional recurrent neural networks 1997.","DOI":"10.1109\/78.650093"},{"key":"553_CR220","doi-asserted-by":"crossref","unstructured":"Cho K, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation, 2014 [Online]. Available: http:\/\/arxiv.org\/abs\/1406.1078","DOI":"10.3115\/v1\/D14-1179"},{"key":"553_CR221","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1016\/j.neucom.2018.07.028","volume":"317","author":"Z Yu","year":"2018","unstructured":"Yu Z, Liu G, Liu Q, Deng J. Spatio-temporal convolutional features with nested LSTM for facial expression recognition. Neurocomputing. 2018;317:50\u20137. https:\/\/doi.org\/10.1016\/j.neucom.2018.07.028.","journal-title":"Neurocomputing"},{"key":"553_CR222","doi-asserted-by":"publisher","first-page":"128","DOI":"10.1016\/j.patrec.2019.12.013","volume":"131","author":"H Zhang","year":"2020","unstructured":"Zhang H, Huang B, Tian G. Facial expression recognition based on deep convolution long short-term memory networks of double-channel weighted mixture. Pattern Recognit Lett. 2020;131:128\u201334. https:\/\/doi.org\/10.1016\/j.patrec.2019.12.013.","journal-title":"Pattern Recognit Lett"},{"key":"553_CR223","doi-asserted-by":"publisher","DOI":"10.3390\/mti6020011","author":"D Dresvyanskiy","year":"2022","unstructured":"Dresvyanskiy D, Ryumina E, Kaya H, Markitantov M, Karpov A, Minker W. End-to-end modeling and transfer learning for audiovisual emotion recognition in-the-wild. Multimodal Technol Interact. 2022. https:\/\/doi.org\/10.3390\/mti6020011.","journal-title":"Multimodal Technol Interact"},{"key":"553_CR224","doi-asserted-by":"publisher","first-page":"4630","DOI":"10.1109\/ACCESS.2017.2784096","volume":"6","author":"B Yang","year":"2017","unstructured":"Yang B, Cao J, Ni R, Zhang Y. Facial expression recognition using weighted mixture deep neural network based on double-channel facial images. IEEE Access. 2017;6:4630\u201340. 
https:\/\/doi.org\/10.1109\/ACCESS.2017.2784096.","journal-title":"IEEE Access"},{"issue":"11","key":"553_CR225","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1145\/3422622","volume":"63","author":"I Goodfellow","year":"2020","unstructured":"Goodfellow I, et al. Generative adversarial networks. Commun ACM. 2020;63(11):139\u201344. https:\/\/doi.org\/10.1145\/3422622.","journal-title":"Commun ACM"},{"key":"553_CR226","unstructured":"Shen Y, Zhou B, Luo P, Tang X. FaceFeat-GAN: a two-stage approach for identity-preserving face synthesis, 2018 [Online]. Available: http:\/\/arxiv.org\/abs\/1812.01288"},{"issue":"6\u20137","key":"553_CR227","doi-asserted-by":"publisher","first-page":"863","DOI":"10.1007\/s11263-019-01169-1","volume":"127","author":"F Shiri","year":"2019","unstructured":"Shiri F, Yu X, Porikli F, Hartley R, Koniusz P. Identity-preserving face recovery from stylized portraits. Int J Comput Vis. 2019;127(6\u20137):863\u201383. https:\/\/doi.org\/10.1007\/s11263-019-01169-1.","journal-title":"Int J Comput Vis"},{"key":"553_CR228","doi-asserted-by":"crossref","unstructured":"Li J, Lam EY. Facial expression recognition using deep neural networks. IEEE 2015.","DOI":"10.1109\/IST.2015.7294547"},{"key":"553_CR229","doi-asserted-by":"publisher","DOI":"10.3390\/electronics11193084","author":"A Khanum","year":"2022","unstructured":"Khanum A, Lee CY, Yang CS. Deep-learning-based network for lane following in autonomous vehicles. Electronics. 2022. https:\/\/doi.org\/10.3390\/electronics11193084.","journal-title":"Electronics"},{"key":"553_CR230","doi-asserted-by":"publisher","first-page":"32297","DOI":"10.1109\/ACCESS.2019.2901521","volume":"7","author":"S Zhang","year":"2019","unstructured":"Zhang S, Pan X, Cui Y, Zhao X, Liu L. Learning affective video features for facial expression recognition via hybrid deep learning. IEEE Access. 2019;7:32297\u2013304. 
https:\/\/doi.org\/10.1109\/ACCESS.2019.2901521.","journal-title":"IEEE Access"},{"issue":"1","key":"553_CR231","doi-asserted-by":"publisher","first-page":"176","DOI":"10.1049\/iet-ipr.2019.0293","volume":"14","author":"X Pan","year":"2020","unstructured":"Pan X. Fusing HOG and convolutional neural network spatial-temporal features for video-based facial expression recognition. IET Image Process. 2020;14(1):176\u201382. https:\/\/doi.org\/10.1049\/iet-ipr.2019.0293.","journal-title":"IET Image Process"},{"key":"553_CR232","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1016\/j.patrec.2018.04.010","volume":"115","author":"N Jain","year":"2018","unstructured":"Jain N, Kumar S, Kumar A, Shamsolmoali P, Zareapoor M. Hybrid deep neural networks for face emotion recognition. Pattern Recognit Lett. 2018;115:101\u20136. https:\/\/doi.org\/10.1016\/j.patrec.2018.04.010.","journal-title":"Pattern Recognit Lett"},{"issue":"4","key":"553_CR233","doi-asserted-by":"publisher","first-page":"587","DOI":"10.1007\/s12559-019-09654-y","volume":"11","author":"X Sun","year":"2019","unstructured":"Sun X, Lv M. Facial expression recognition based on a hybrid model combining deep and shallow features. Cognit Comput. 2019;11(4):587\u201397. https:\/\/doi.org\/10.1007\/s12559-019-09654-y.","journal-title":"Cognit Comput"},{"issue":"3","key":"553_CR234","doi-asserted-by":"publisher","first-page":"1350","DOI":"10.11591\/eei.v11i3.3722","volume":"11","author":"NS Abdulsattar","year":"2022","unstructured":"Abdulsattar NS, Hussain MN. Facial expression recognition using HOG and LBP features with convolutional neural network. Bull Electr Eng Inf. 2022;11(3):1350\u20137. https:\/\/doi.org\/10.11591\/eei.v11i3.3722.","journal-title":"Bull Electr Eng Inf"},{"key":"553_CR235","unstructured":"Vaswani A, et al. Attention is all you need."},{"key":"553_CR236","unstructured":"Wen Y, Zhang K, Li Z, Qiao Y. 
A discriminative feature learning approach for deep face recognition."},{"key":"553_CR237","unstructured":"Li Y, Lu Y, Li J, Lu G. Separate loss for basic and compound facial expression recognition in the wild 2019."},{"key":"553_CR238","unstructured":"Cai J, Meng Z, Khan AS, Li Z, O\u2019Reilly J, Tong Y. Island loss for learning discriminative features in facial expression recognition, Oct. 2017, [Online]. Available: http:\/\/arxiv.org\/abs\/1710.03144"},{"key":"553_CR239","unstructured":"Schroff F, Kalenichenko D, Philbin J. FaceNet: a unified embedding for face recognition and clustering."},{"key":"553_CR240","unstructured":"Guo Y, Tao D, Yu J, Xiong H, Li Y, Tao D. Deep neural networks with relativity learning for facial expression recognition."},{"key":"553_CR241","doi-asserted-by":"crossref","unstructured":"Liu X, Vijaya Kumar BV, You J, Jia P. Adaptive deep metric learning for identity-aware facial expression recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops 2017 (pp. 20\u201329).","DOI":"10.1109\/CVPRW.2017.79"},{"key":"553_CR242","doi-asserted-by":"publisher","first-page":"26756","DOI":"10.1109\/ACCESS.2022.3156598","volume":"10","author":"AP Fard","year":"2022","unstructured":"Fard AP, Mahoor MH. Ad-Corre: adaptive correlation-based loss for facial expression recognition in the wild. IEEE Access. 2022;10:26756\u201368. https:\/\/doi.org\/10.1109\/ACCESS.2022.3156598.","journal-title":"IEEE Access"},{"key":"553_CR243","doi-asserted-by":"publisher","first-page":"2016","DOI":"10.1109\/TIP.2021.3049955","volume":"30","author":"H Li","year":"2021","unstructured":"Li H, Wang N, Ding X, Yang X, Gao X. Adaptively learning facial expression representation via C-F labels and distillation. IEEE Trans Image Process. 2021;30:2016\u201328. https:\/\/doi.org\/10.1109\/TIP.2021.3049955.","journal-title":"IEEE Trans Image Process"},{"key":"553_CR244","unstructured":"Farzaneh AH, Qi X. 
Facial expression recognition in the wild via deep attentive center loss."},{"key":"553_CR245","unstructured":"Sharif Razavian A, Azizpour H, Sullivan J, Carlsson S. CNN features off-the-shelf: an astounding baseline for recognition."},{"key":"553_CR246","unstructured":"Jung H, Lee S, Yim J, Park S, Kim J. Joint fine-tuning in deep neural networks for facial expression recognition."},{"key":"553_CR247","doi-asserted-by":"publisher","unstructured":"Bargal SA, Barsoum E, Ferrer CC, Zhang C. Emotion recognition in the wild from videos using images. In ICMI 2016\u2014proceedings of the 18th ACM international conference on multimodal interaction, Association for Computing Machinery, Inc, Oct. 2016, pp. 433\u2013436. https:\/\/doi.org\/10.1145\/2993148.2997627.","DOI":"10.1145\/2993148.2997627"},{"key":"553_CR248","doi-asserted-by":"publisher","unstructured":"Mollahosseini A, Chan D, Mahoor MH. Going deeper in facial expression recognition using deep neural networks 2015. https:\/\/doi.org\/10.1109\/WACV.2016.7477450.","DOI":"10.1109\/WACV.2016.7477450"},{"key":"553_CR249","unstructured":"Lin M, Chen Q, Yan S. Network in network. 2013 [Online]. Available: http:\/\/arxiv.org\/abs\/1312.4400"},{"key":"553_CR250","unstructured":"Fan Y, Lam JCK, Li VOK. Multi-region ensemble convolutional neural network for facial expression recognition."},{"key":"553_CR251","doi-asserted-by":"publisher","unstructured":"Dhall A, Goecke R, Ghosh S, Joshi J, Hoey J, Gedeon T. From individual to group-level emotion recognition: EmotiW 5.0. In: ICMI 2017\u2014Proceedings of the 19th ACM international conference on multimodal interaction, Association for Computing Machinery, Inc, Nov. 2017, pp. 524\u2013528. https:\/\/doi.org\/10.1145\/3136755.3143004.","DOI":"10.1145\/3136755.3143004"},{"key":"553_CR252","unstructured":"Dhall A, et al. Predicting performance of Intel cluster OpenMP with code analysis method 2008. [Online]. 
Available: http:\/\/cs.anu.edu.au\/techreports\/."},{"key":"553_CR253","unstructured":"Dhall A, Goecke R, Lucey S, Gedeon T. Collecting large, richly annotated facial-expression databases from movies constructing facial-expression datasets large-scale multimedia data collections."},{"key":"553_CR254","doi-asserted-by":"publisher","first-page":"64827","DOI":"10.1109\/ACCESS.2019.2917266","volume":"7","author":"MI Georgescu","year":"2019","unstructured":"Georgescu MI, Ionescu RT, Popescu M. Local learning with deep and handcrafted features for facial expression recognition. IEEE Access. 2019;7:64827\u201336. https:\/\/doi.org\/10.1109\/ACCESS.2019.2917266.","journal-title":"IEEE Access"},{"issue":"7","key":"553_CR255","doi-asserted-by":"publisher","first-page":"1227","DOI":"10.1049\/iet-ipr.2019.1188","volume":"14","author":"S Rajan","year":"2020","unstructured":"Rajan S, Chenniappan P, Devaraj S, Madian N. Novel deep learning model for facial expression recognition based on maximum boosted CNN and LSTM. IET Image Process. 2020;14(7):1227\u201332. https:\/\/doi.org\/10.1049\/iet-ipr.2019.1188.","journal-title":"IET Image Process"},{"key":"553_CR256","unstructured":"Khaireddin Y, Chen Z. Facial Emotion recognition: state of the art performance on FER2013."},{"issue":"3","key":"553_CR257","doi-asserted-by":"publisher","first-page":"1195","DOI":"10.1109\/TAFFC.2020.2981446","volume":"13","author":"S Li","year":"2022","unstructured":"Li S, Deng W. Deep facial expression recognition: a survey. IEEE Trans Affect Comput. 2022;13(3):1195\u2013215. https:\/\/doi.org\/10.1109\/TAFFC.2020.2981446.","journal-title":"IEEE Trans Affect Comput"},{"key":"553_CR258","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107694","author":"Z Wang","year":"2021","unstructured":"Wang Z, Zeng F, Liu S, Zeng B. OAENet: oriented attention ensemble for accurate facial expression recognition. Pattern Recognit. 2021. 
https:\/\/doi.org\/10.1016\/j.patcog.2020.107694.","journal-title":"Pattern Recognit"},{"key":"553_CR259","unstructured":"Ruder S. An overview of multi-task learning in deep neural networks, Jun. 2017, [Online]. Available: http:\/\/arxiv.org\/abs\/1706.05098"},{"key":"553_CR260","unstructured":"Ming Z, Xia J, Luqman MM, Burie J-C, Zhao K. Dynamic multi-task learning for face recognition with facial expression, Nov. 2019, [Online]. Available: http:\/\/arxiv.org\/abs\/1911.03281"},{"key":"553_CR261","doi-asserted-by":"publisher","unstructured":"Serengil SI, Ozpinar A. HyperExtended LightFace: a facial attribute analysis framework. In: 2021 international conference on engineering and emerging technologies (ICEET), Oct. 2021, pp. 1\u20134. https:\/\/doi.org\/10.1109\/ICEET53442.2021.9659697.","DOI":"10.1109\/ICEET53442.2021.9659697"},{"key":"553_CR262","unstructured":"Zhang Z, Song Y, Qi H. Age progression\/regression by conditional adversarial autoencoder. [Online]. Available: https:\/\/zzutk.github.io\/Face-Aging-CAAE"},{"key":"553_CR263","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.ins.2020.04.041","volume":"533","author":"H Zheng","year":"2020","unstructured":"Zheng H, et al. Discriminative deep multi-task learning for facial expression recognition. Inf Sci (N Y). 2020;533:60\u201371. https:\/\/doi.org\/10.1016\/j.ins.2020.04.041.","journal-title":"Inf Sci (N Y)"},{"key":"553_CR264","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2022.105651","volume":"118","author":"P Foggia","year":"2023","unstructured":"Foggia P, Greco A, Saggese A, Vento M. Multi-task learning on the edge for effective gender, age, ethnicity and emotion recognition. Eng Appl Artif Intell. 2023;118:105651.","journal-title":"Eng Appl Artif Intell"},{"key":"553_CR265","unstructured":"Hu J, Shen L, Sun G. Squeeze-and-excitation networks. [Online]. Available: http:\/\/image-net.org\/challenges\/LSVRC\/2017\/results."},{"key":"553_CR266","unstructured":"Kollias D. 
ABAW: valence-arousal estimation, expression recognition, action unit detection and multi-task learning challenges. [Online]. Available: https:\/\/ibug.doc.ic.ac.uk\/resources\/fg-2020."},{"key":"553_CR267","doi-asserted-by":"crossref","unstructured":"Yan L, Sheng M, Wang C, Gao R, Yu H. Hybrid neural networks based facial expression recognition for smart city. Multimed Tools Appl 2022;1\u201324.","DOI":"10.1007\/s11042-021-11530-7"},{"key":"553_CR268","doi-asserted-by":"publisher","first-page":"4637","DOI":"10.1109\/TIP.2022.3186536","volume":"31","author":"H Li","year":"2022","unstructured":"Li H, Wang N, Yang X, Gao X. CRS-CONT: a well-trained general encoder for facial expression analysis. IEEE Trans Image Process. 2022;31:4637\u201350. https:\/\/doi.org\/10.1109\/TIP.2022.3186536.","journal-title":"IEEE Trans Image Process"},{"key":"553_CR269","doi-asserted-by":"publisher","DOI":"10.3390\/s22041350","author":"X Zhu","year":"2022","unstructured":"Zhu X, He Z, Zhao L, Dai Z, Yang Q. A cascade attention based facial expression recognition network by fusing multi-scale spatio-temporal features. Sensors. 2022. https:\/\/doi.org\/10.3390\/s22041350.","journal-title":"Sensors"},{"key":"553_CR270","doi-asserted-by":"crossref","unstructured":"Huang Y, Khan SM. DyadGAN: generating facial expressions in dyadic interactions. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops 2017 (pp. 11\u201318).","DOI":"10.1109\/CVPRW.2017.280"},{"key":"553_CR271","doi-asserted-by":"crossref","unstructured":"Yang H, Ciftci U, Yin L. Facial expression recognition by de-expression residue learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition 2018 (pp. 2168\u20132177).","DOI":"10.1109\/CVPR.2018.00231"},{"key":"553_CR272","doi-asserted-by":"crossref","unstructured":"Wu R, Zhang G, Lu S, Chen T. Cascade EF-GAN: progressive facial expression editing with local focuses. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition 2020 (pp. 5021\u20135030).","DOI":"10.1109\/CVPR42600.2020.00507"},{"key":"553_CR273","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2022.109157","volume":"135","author":"Z Sun","year":"2023","unstructured":"Sun Z, Zhang H, Bai J, Liu M, Hu Z. A discriminatively deep fusion approach with improved conditional GAN (im-cGAN) for facial expression recognition. Pattern Recognit. 2023;135:109157. https:\/\/doi.org\/10.1016\/j.patcog.2022.109157.","journal-title":"Pattern Recognit"},{"issue":"4","key":"553_CR274","doi-asserted-by":"publisher","first-page":"2657","DOI":"10.1109\/TAFFC.2022.3215918","volume":"14","author":"Y Liu","year":"2023","unstructured":"Liu Y, Zhang X, Li Y, Zhou J, Li X, Zhao G. Graph-based facial affect analysis: a review. IEEE Trans Affect Comput. 2023;14(4):2657\u201377. https:\/\/doi.org\/10.1109\/TAFFC.2022.3215918.","journal-title":"IEEE Trans Affect Comput"},{"issue":"1","key":"553_CR275","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1109\/TNNLS.2020.2978386","volume":"32","author":"Z Wu","year":"2021","unstructured":"Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS. A comprehensive survey on graph neural networks. IEEE Trans Neural Netw Learn Syst. 2021;32(1):4\u201324. https:\/\/doi.org\/10.1109\/TNNLS.2020.2978386.","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"553_CR276","doi-asserted-by":"publisher","DOI":"10.1007\/s00138-022-01288-9","author":"L Liao","year":"2022","unstructured":"Liao L, Zhu Y, Zheng B, Jiang X, Lin J. FERGCN: facial expression recognition based on graph convolution network. Mach Vis Appl. 2022. https:\/\/doi.org\/10.1007\/s00138-022-01288-9.","journal-title":"Mach Vis Appl"},{"key":"553_CR277","doi-asserted-by":"publisher","first-page":"6544","DOI":"10.1109\/TIP.2021.3093397","volume":"30","author":"Z Zhao","year":"2021","unstructured":"Zhao Z, Liu Q, Wang S. 
Learning deep global multi-scale and local attention features for facial expression recognition in the wild. IEEE Trans Image Process. 2021;30:6544\u201356. https:\/\/doi.org\/10.1109\/TIP.2021.3093397.","journal-title":"IEEE Trans Image Process"},{"key":"553_CR278","doi-asserted-by":"publisher","unstructured":"Wu C, Chai L, Yang J, Sheng Y. Facial expression recognition using convolutional neural network on graphs. In Chinese control conference, CCC, IEEE Computer Society, 2019, pp. 7572\u20137576. https:\/\/doi.org\/10.23919\/ChiCC.2019.8866311.","DOI":"10.23919\/ChiCC.2019.8866311"},{"key":"553_CR279","unstructured":"Aouayeb M, Hamidouche W, Soladie C, Kpalma K, Seguier R. Learning vision transformer with squeeze and excitation for facial expression recognition, 2021 [Online]. Available: http:\/\/arxiv.org\/abs\/2107.03107"},{"key":"553_CR280","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1016\/j.ins.2021.08.043","volume":"580","author":"Q Huang","year":"2021","unstructured":"Huang Q, Huang C, Wang X, Jiang F. Facial expression recognition with grid-wise attention and visual transformer. Inf Sci (N Y). 2021;580:35\u201354. https:\/\/doi.org\/10.1016\/j.ins.2021.08.043.","journal-title":"Inf Sci (N Y)"},{"key":"553_CR281","doi-asserted-by":"publisher","unstructured":"Wasi AT, \u0160erbetar K, Islam R, Rafi TH, Chae D-K. ARBEx: attentive feature extraction with reliability balancing for robust facial expression learning, 2023. https:\/\/doi.org\/10.1007\/978-981-96-0911-6_27.","DOI":"10.1007\/978-981-96-0911-6_27"},{"issue":"18","key":"553_CR282","first-page":"17","volume":"48","author":"N Perveen","year":"2012","unstructured":"Perveen N, Gupta S, Verma K. Facial expression recognition system using statistical feature and neural network. Int J Comput Appl. 2012;48(18):17\u201323.","journal-title":"Int J Comput Appl"},{"key":"553_CR283","doi-asserted-by":"crossref","unstructured":"Meng D, Peng X, Wang K, Qiao Y. 
Frame attention networks for facial expression recognition in videos, 2019 [Online]. Available: http:\/\/arxiv.org\/abs\/1907.00193","DOI":"10.1109\/ICIP.2019.8803603"},{"key":"553_CR284","doi-asserted-by":"publisher","first-page":"499","DOI":"10.1007\/s00371-019-01636-3","volume":"36","author":"D Liang","year":"2020","unstructured":"Liang D, Liang H, Yu Z, Zhang Y. Deep convolutional BiLSTM fusion network for facial expression recognition. Vis Comput. 2020;36:499\u2013508.","journal-title":"Vis Comput"},{"key":"553_CR285","doi-asserted-by":"publisher","first-page":"435","DOI":"10.1016\/j.neucom.2022.10.013","volume":"514","author":"E Ryumina","year":"2022","unstructured":"Ryumina E, Dresvyanskiy D, Karpov A. In search of a robust facial expressions recognition model: a large-scale visual cross-corpus study. Neurocomputing. 2022;514:435\u201350.","journal-title":"Neurocomputing"},{"key":"553_CR286","unstructured":"Gholam P, Montazer A, Esmaili F. Using self-supervised auxiliary tasks to improve fine-grained facial representation."},{"issue":"3","key":"553_CR287","doi-asserted-by":"publisher","first-page":"1252","DOI":"10.1109\/TCDS.2022.3203822","volume":"15","author":"X Wang","year":"2023","unstructured":"Wang X, Zhang T, Chen CLP. PAU-Net: privileged action unit network for facial expression recognition. IEEE Trans Cogn Dev Syst. 2023;15(3):1252\u201362. https:\/\/doi.org\/10.1109\/TCDS.2022.3203822.","journal-title":"IEEE Trans Cogn Dev Syst"},{"key":"553_CR288","unstructured":"Kervadec C, Vielzeuf V, Pateux S, Lechervy A, Jurie F. CAKE: compact and accurate K-dimensional representation of emotion, 2018 [Online]. Available: http:\/\/arxiv.org\/abs\/1807.11215"},{"key":"553_CR289","doi-asserted-by":"crossref","unstructured":"Hayale W, Negi P, Mahoor M. Facial expression recognition using deep siamese neural networks with a supervised loss function. 
IEEE Computer Society, 2019.","DOI":"10.1109\/FG.2019.8756571"},{"key":"553_CR290","doi-asserted-by":"publisher","DOI":"10.3390\/biomimetics8020199","author":"Z Wen","year":"2023","unstructured":"Wen Z, Lin W, Wang T, Xu G. Distract your attention: multi-head cross attention network for facial expression recognition. Biomimetics. 2023. https:\/\/doi.org\/10.3390\/biomimetics8020199.","journal-title":"Biomimetics"},{"key":"553_CR291","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2017.2788081","author":"T Zhang","year":"2017","unstructured":"Zhang T, Zheng W, Cui Z, Zong Y, Li Y. Spatial-temporal recurrent neural network for emotion recognition. IEEE Trans Cybern. 2017. https:\/\/doi.org\/10.1109\/TCYB.2017.2788081.","journal-title":"IEEE Trans Cybern"},{"issue":"1","key":"553_CR292","doi-asserted-by":"publisher","first-page":"52","DOI":"10.11591\/ijece.v8i1.pp52-59","volume":"8","author":"FZ Salmam","year":"2018","unstructured":"Salmam FZ, Madani A, Kissi M. Emotion recognition from facial expression based on fiducial points detection and using neural network. Int J Electr Comput Eng. 2018;8(1):52\u20139. https:\/\/doi.org\/10.11591\/ijece.v8i1.pp52-59.","journal-title":"Int J Electr Comput Eng"},{"key":"553_CR293","doi-asserted-by":"publisher","DOI":"10.3390\/s21093046","author":"S Minaee","year":"2021","unstructured":"Minaee S, Minaei M, Abdolrashidi A. Deep-emotion: facial expression recognition using attentional convolutional network. Sensors. 2021. https:\/\/doi.org\/10.3390\/s21093046.","journal-title":"Sensors"},{"issue":"11","key":"553_CR294","doi-asserted-by":"publisher","first-page":"1940015","DOI":"10.1142\/S0218001419400159","volume":"33","author":"H-D Nguyen","year":"2019","unstructured":"Nguyen H-D, Yeom S, Lee G-S, Yang H-J, Na I-S, Kim S-H. Facial emotion recognition using an ensemble of multi-level convolutional neural networks. Int J Pattern Recognit Artif Intell. 2019;33(11):1940015. 
https:\/\/doi.org\/10.1142\/S0218001419400159.","journal-title":"Int J Pattern Recognit Artif Intell"},{"key":"553_CR295","doi-asserted-by":"publisher","unstructured":"Vulpe-Grigorasi A, Grigore O. Convolutional neural network hyperparameters optimization for facial emotion recognition. In: 12th international symposium on advanced topics in electrical engineering, ATEE 2021, Institute of Electrical and Electronics Engineers Inc., 2021. https:\/\/doi.org\/10.1109\/ATEE52255.2021.9425073.","DOI":"10.1109\/ATEE52255.2021.9425073"},{"key":"553_CR296","unstructured":"Burkert P, Trier F, Afzal MZ, Dengel A, Liwicki M. DeXpression: deep convolutional neural network for expression recognition, 2015. Available: http:\/\/arxiv.org\/abs\/1509.05371"},{"key":"553_CR297","doi-asserted-by":"publisher","unstructured":"Hasani B, Mahoor MH. Spatio-temporal facial expression recognition using convolutional neural networks and conditional random fields, 2017. https:\/\/doi.org\/10.1109\/FG.2017.99.","DOI":"10.1109\/FG.2017.99"},{"key":"553_CR298","doi-asserted-by":"publisher","DOI":"10.3390\/s21216954","author":"SJ Park","year":"2021","unstructured":"Park SJ, Kim BG, Chilamkurti N. A robust facial expression recognition algorithm based on multi-rate feature fusion scheme. Sensors. 2021. https:\/\/doi.org\/10.3390\/s21216954.","journal-title":"Sensors"},{"key":"553_CR299","unstructured":"Wang K, Peng X, Yang J, Meng D, Qiao Y. Region attention networks for pose and occlusion robust facial expression recognition, 2019 [Online]. Available: http:\/\/arxiv.org\/abs\/1905.04075"},{"key":"553_CR300","doi-asserted-by":"publisher","unstructured":"Jiang J, Deng W. Disentangling identity and pose for facial expression recognition, 2022. https:\/\/doi.org\/10.1109\/TAFFC.2022.3197761.","DOI":"10.1109\/TAFFC.2022.3197761"},{"key":"553_CR301","doi-asserted-by":"crossref","unstructured":"Zhao X, et al. Peak-piloted deep network for facial expression recognition, 2016. 
Available: http:\/\/arxiv.org\/abs\/1607.06997","DOI":"10.1007\/978-3-319-46475-6_27"},{"issue":"12","key":"553_CR302","doi-asserted-by":"publisher","first-page":"1691","DOI":"10.1007\/s00371-017-1443-0","volume":"34","author":"Z Yu","year":"2018","unstructured":"Yu Z, Liu Q, Liu G. Deeper cascaded peak-piloted network for weak expression recognition. Vis Comput. 2018;34(12):1691\u20139. https:\/\/doi.org\/10.1007\/s00371-017-1443-0.","journal-title":"Vis Comput"},{"key":"553_CR303","unstructured":"Kuo C-M, Lai S-H, Sarkis M. A compact deep learning model for robust facial expression recognition."},{"issue":"5","key":"553_CR304","doi-asserted-by":"publisher","first-page":"1455","DOI":"10.1007\/s11263-020-01304-3","volume":"128","author":"D Kollias","year":"2020","unstructured":"Kollias D, Cheng S, Ververas E, Kotsia I, Zafeiriou S. Deep neural network augmentation: generating faces for affect analysis. Int J Comput Vis. 2020;128(5):1455\u201384. https:\/\/doi.org\/10.1007\/s11263-020-01304-3.","journal-title":"Int J Comput Vis"},{"key":"553_CR305","doi-asserted-by":"publisher","first-page":"131988","DOI":"10.1109\/ACCESS.2020.3010018","volume":"8","author":"TH Vo","year":"2020","unstructured":"Vo TH, Lee GS, Yang HJ, Kim SH. Pyramid with super resolution for in-the-wild facial expression recognition. IEEE Access. 2020;8:131988\u20132001. https:\/\/doi.org\/10.1109\/ACCESS.2020.3010018.","journal-title":"IEEE Access"},{"key":"553_CR306","unstructured":"Psaroudakis A, Kollias D. MixAugment & mixup: augmentation methods for facial expression recognition."},{"key":"553_CR307","doi-asserted-by":"crossref","unstructured":"Zhao Z, Liu Q, Zhou F. Robust lightweight facial expression recognition network with label distribution training, 2021. [Online]. Available: www.aaai.org","DOI":"10.1609\/aaai.v35i4.16465"},{"key":"553_CR308","unstructured":"Zhang Y, Wang C, Deng W. Relative uncertainty learning for facial expression recognition. [Online]. 
Available: https:\/\/github.com\/zyh-uaiaaaa\/Relative-Uncertainty-Learning."},{"key":"553_CR309","doi-asserted-by":"crossref","unstructured":"Zhang Y, Wang C, Ling X, Deng W. Learn from all: erasing attention consistency for noisy label facial expression recognition, 2022, [Online]. Available: http:\/\/arxiv.org\/abs\/2207.10299","DOI":"10.1007\/978-3-031-19809-0_24"},{"key":"553_CR310","doi-asserted-by":"publisher","unstructured":"Zhou H, et al. Exploring emotion features and fusion strategies for audio-video emotion recognition. In ICMI 2019\u2014proceedings of the 2019 international conference on multimodal interaction, association for computing machinery, Inc, Oct. 2019, pp. 562\u2013566. https:\/\/doi.org\/10.1145\/3340555.3355713.","DOI":"10.1145\/3340555.3355713"},{"key":"553_CR311","doi-asserted-by":"publisher","unstructured":"Kumar V, Rao S, Yu L. Noisy student training using body language dataset improves facial expression recognition, 2020. https:\/\/doi.org\/10.1007\/978-3-030-66415-2_53.","DOI":"10.1007\/978-3-030-66415-2_53"},{"key":"553_CR312","doi-asserted-by":"crossref","unstructured":"Wang L, Jia G, Jiang N, Wu H, Yang J. Ease: robust facial expression recognition via emotion ambiguity-sensitive cooperative networks. In: Proceedings of the 30th ACM international conference on multimedia, 2022, pp. 218\u2013227.","DOI":"10.1145\/3503161.3548005"},{"key":"553_CR313","doi-asserted-by":"publisher","DOI":"10.1007\/s42452-020-2234-1","author":"N Mehendale","year":"2020","unstructured":"Mehendale N. Facial emotion recognition using convolutional neural networks (FERC). SN Appl Sci. 2020. https:\/\/doi.org\/10.1007\/s42452-020-2234-1.","journal-title":"SN Appl Sci"},{"key":"553_CR314","doi-asserted-by":"crossref","unstructured":"Chen Y, Wang J, Chen S, Shi Z, Cai J. Facial motion prior networks for facial expression recognition, 2019, [Online]. 
Available: http:\/\/arxiv.org\/abs\/1902.08788","DOI":"10.1109\/VCIP47243.2019.8965826"},{"key":"553_CR315","unstructured":"Dhall A, Goecke R, Lucey S, Gedeon T. Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark. [Online]. Available: http:\/\/cs.anu.edu.au\/few"},{"key":"553_CR316","doi-asserted-by":"publisher","unstructured":"Dhall A, Ramana Murthy OV, Goecke R, Joshi J, Gedeon T. Video and image based Emotion recognition challenges in the wild: EmotiW 2015. In ICMI 2015\u2014proceedings of the 2015 ACM international conference on multimodal interaction, association for computing machinery, Inc, 2015, pp. 423\u2013426. https:\/\/doi.org\/10.1145\/2818346.2829994.","DOI":"10.1145\/2818346.2829994"},{"key":"553_CR317","unstructured":"Goodfellow IJ, et al. Challenges in representation learning: a report on three machine learning contests 2013. [Online]. http:\/\/arxiv.org\/abs\/1307.0414"},{"issue":"8","key":"553_CR318","doi-asserted-by":"publisher","first-page":"1377","DOI":"10.1080\/02699930903485076","volume":"24","author":"O Langner","year":"2010","unstructured":"Langner O, Dotsch R, Bijlstra G, Wigboldus DHJ, Hawk ST, van Knippenberg A. Presentation and validation of the radboud faces database. Cogn Emot. 2010;24(8):1377\u201388. https:\/\/doi.org\/10.1080\/02699930903485076.","journal-title":"Cogn Emot"},{"key":"553_CR319","unstructured":"Cheng S, Kotsia I, Pantic M, Zafeiriou S. 4DFAB: a large scale 4D database for facial expression analysis and biometric applications. [Online]. Available: http:\/\/www.di3d.com"},{"issue":"1","key":"553_CR320","doi-asserted-by":"publisher","first-page":"298","DOI":"10.1109\/TCDS.2022.3157772","volume":"15","author":"N Sun","year":"2023","unstructured":"Sun N, Tao J, Liu J, Sun H, Han G. 3-D facial feature reconstruction and learning network for facial expression recognition in the wild. IEEE Trans Cogn Dev Syst. 2023;15(1):298\u2013309. 
https:\/\/doi.org\/10.1109\/TCDS.2022.3157772.","journal-title":"IEEE Trans Cogn Dev Syst"},{"key":"553_CR321","doi-asserted-by":"crossref","unstructured":"Wu Z, Wang X, Jiang Y-G, Ye H, Xue X. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification, 2015. Available: http:\/\/arxiv.org\/abs\/1504.01561","DOI":"10.1145\/2733373.2806222"},{"key":"553_CR322","doi-asserted-by":"publisher","unstructured":"Dang CN, Moreno-Garc\u00eda MN, De La Prieta F. Hybrid deep learning models for sentiment analysis. Complexity 2021. https:\/\/doi.org\/10.1155\/2021\/9986920.","DOI":"10.1155\/2021\/9986920"},{"key":"553_CR323","unstructured":"Baltru\u0161aitis T, Ahuja C, Morency L-P. Multimodal machine learning: a survey and taxonomy. May 2017, [Online]. Available: http:\/\/arxiv.org\/abs\/1705.09406"},{"key":"553_CR324","unstructured":"Khan MM, Ward RD, Lngleby M. Automated classification and recognition of facial expressions using infrared thermal imaging."}],"container-title":["Discover Artificial 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00553-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-025-00553-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00553-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T11:37:57Z","timestamp":1765971477000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-025-00553-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,17]]},"references-count":324,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["553"],"URL":"https:\/\/doi.org\/10.1007\/s44163-025-00553-w","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,17]]},"assertion":[{"value":"1 June 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 September 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"As no human participants were involved and the data is sourced from openly available, pre-approved datasets, ethical approval and 
informed consent were not required. Our study is based entirely on publicly available benchmark datasets for facial emotion recognition (e.g., FER-2013, CK+\u2009, AffectNet, etc.) which are anonymized and pre-approved for research use. There was no direct involvement of human participants in our study. The study used publicly available datasets with anonymized data.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"This manuscript does not contain any individual person\u2019s identifiable information, photographs, genetic profiles, or other personal data (such as names, dates of birth, identity numbers, facial features, fingerprints, writing style, voice patterns, or DNA).","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}],"article-number":"388"}}