{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,27]],"date-time":"2026-02-27T23:53:32Z","timestamp":1772236412602,"version":"3.50.1"},"reference-count":44,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2022,5,19]],"date-time":"2022-05-19T00:00:00Z","timestamp":1652918400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Spanish Ministry of Science and Innovation","award":["RTI2018-101372-B-I00"],"award-info":[{"award-number":["RTI2018-101372-B-I00"]}]},{"name":"Spanish Ministry of Science and Innovation","award":["PRE2019-088146"],"award-info":[{"award-number":["PRE2019-088146"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Facial motion analysis is a research field with many practical applications, and it has developed strongly in recent years. However, most effort has focused on the recognition of basic facial expressions of emotion and has neglected the analysis of facial motions related to non-verbal communication signals. This paper focuses on the classification of facial expressions that are of the utmost importance in sign languages (Grammatical Facial Expressions) but are also present in expressive spoken language. We have collected a dataset of Spanish Sign Language sentences and extracted the intervals for three types of Grammatical Facial Expressions: negation, closed queries and open queries. 
A study of several deep learning models using different input features on the collected dataset (LSE_GFE) and an external dataset (BUHMAP) shows that GFEs can be learned reliably with Graph Convolutional Networks simply fed with face landmarks.<\/jats:p>","DOI":"10.3390\/s22103839","type":"journal-article","created":{"date-parts":[[2022,5,20]],"date-time":"2022-05-20T00:18:11Z","timestamp":1653005891000},"page":"3839","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Facial Motion Analysis beyond Emotional Expressions"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6474-4533","authenticated-orcid":false,"given":"Manuel","family":"Porta-Lorenzo","sequence":"first","affiliation":[{"name":"atlanTTic Research Center, University of Vigo, 36310 Vigo, Spain"}]},{"given":"Manuel","family":"V\u00e1zquez-Enr\u00edquez","sequence":"additional","affiliation":[{"name":"atlanTTic Research Center, University of Vigo, 36310 Vigo, Spain"}]},{"given":"Ania","family":"P\u00e9rez-P\u00e9rez","sequence":"additional","affiliation":[{"name":"atlanTTic Research Center, University of Vigo, 36310 Vigo, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6319-5915","authenticated-orcid":false,"given":"Jos\u00e9 Luis","family":"Alba-Castro","sequence":"additional","affiliation":[{"name":"atlanTTic Research Center, University of Vigo, 36310 Vigo, Spain"}]},{"given":"Laura","family":"Doc\u00edo-Fern\u00e1ndez","sequence":"additional","affiliation":[{"name":"atlanTTic Research Center, University of Vigo, 36310 Vigo, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2022,5,19]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1037\/h0030377","article-title":"Constants across cultures in the face and emotion","volume":"17","author":"Ekman","year":"1971","journal-title":"J. Personal. Soc. 
Psychol."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1016\/j.cogbrainres.2004.08.012","article-title":"Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners","volume":"22","author":"McCullough","year":"2005","journal-title":"Cogn. Brain Res."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Bartoli, A., and Fusiello, A. (2020, January 23\u201328). Recognition of Affective and Grammatical Facial Expressions: A Study for Brazilian Sign Language. Proceedings of the Computer Vision\u2014ECCV 2020 Workshops, Glasgow, UK.","DOI":"10.1007\/978-3-030-67070-2"},{"key":"ref_4","unstructured":"Li, S., and Deng, W. (2020). Deep facial expression recognition: A survey. IEEE Trans. Affect. Comput."},{"key":"ref_5","unstructured":"Ouellet, S. (2014). Real-time emotion recognition for gaming using deep convolutional network features. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Khor, H.Q., See, J., Phan, R.C.W., and Lin, W. (2018, January 15\u201319). Enriched long-term recurrent convolutional network for facial micro-expression recognition. Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00105"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Ekman, P., and Friesen, W. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press.","DOI":"10.1037\/t27734-000"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1109\/TPAMI.2008.52","article-title":"A Survey of Affect Recognition Methods: Audio Visual and Spontaneous Expressions","volume":"31","author":"Zeng","year":"2009","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Valstar, M., Gratch, J., Schuller, B., Ringeval, F., Cowie, R., and Pantic, M. (2016, January 16). AVEC 2016-Depression, Mood, and Emotion Recognition Workshop and Challenge. Proceedings of the 6th International Workshop on Audio\/Visual Emotion Challenge, Amsterdam, The Netherlands.","DOI":"10.1145\/2964284.2980532"},{"key":"ref_10","unstructured":"Lien, J.J., Kanade, T., Cohn, J.F., and Li, C.C. (1998, January 14\u201316). Automated facial expression recognition based on FACS action units. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Devries, T., Biswaranjan, K., and Taylor, G.W. (2014, January 6\u20139). Multi-task Learning of Facial Landmarks and Expression. Proceedings of the 2014 Canadian Conference on Computer and Robot Vision, Montreal, QC, Canada.","DOI":"10.1109\/CRV.2014.21"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"11532","DOI":"10.1109\/JSEN.2020.3028075","article-title":"GA-SVM-Based Facial Emotion Recognition Using Facial Geometric Features","volume":"21","author":"Liu","year":"2021","journal-title":"IEEE Sens. J."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Qiu, Y., and Wan, Y. (2019, January 20\u201322). Facial Expression Recognition based on Landmarks. 
Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China.","DOI":"10.1109\/IAEAC47372.2019.8997580"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1016\/j.neucom.2018.03.068","article-title":"Multi-cue fusion for emotion recognition in the wild","volume":"309","author":"Yan","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"128","DOI":"10.1109\/TAI.2021.3076974","article-title":"Graph Convolutional Neural Network for Human Action Recognition: A Comprehensive Survey","volume":"2","author":"Ahmad","year":"2021","journal-title":"IEEE Trans. Artif. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Ngoc, Q.T., Lee, S., and Song, B.C. (2020). Facial landmark-based emotion recognition via directed graph neural network. Electronics, 9.","DOI":"10.3390\/electronics9050764"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"311","DOI":"10.1109\/TCDS.2019.2917711","article-title":"Facial Expression Recognition via Deep Action Units Graph Network Based on Psychological Mechanism","volume":"12","author":"Liu","year":"2020","journal-title":"IEEE Trans. Cogn. Dev. Syst."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Jung, H., Lee, S., Yim, J., Park, S., and Kim, J. (2015, January 7\u201313). Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.341"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Yan, S., Xiong, Y., and Lin, D. (2018, January 2\u20137). Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. 
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.12328"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Heidari, N., and Iosifidis, A. (2021, January 6\u20138). Progressive Spatio\u2013Temporal Bilinear Network with Monte Carlo Dropout for Landmark-based Facial Expression Recognition with Uncertainty Estimation. Proceedings of the 23rd International Workshop on Multimedia Signal Processing, MMSP 2021, Tampere, Finland.","DOI":"10.1109\/MMSP53017.2021.9733455"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition workshops, San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"607","DOI":"10.1016\/j.imavis.2011.07.002","article-title":"Facial expression recognition from near-infrared videos","volume":"29","author":"Zhao","year":"2011","journal-title":"Image Vis. Comput."},{"key":"ref_23","unstructured":"Valstar, M.F., and Pantic, M. (2010, January 23). Induced disgust, happiness and surprise: An addition to the mmi facial expression database. 
Proceedings of the 3rd International Workshop on EMOTION (Satellite of LREC): Corpora for Research on Emotion and Affect, Valetta, Malta."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/MMUL.2012.26","article-title":"Collecting Large, Richly Annotated Facial-Expression Databases from Movies","volume":"19","author":"Dhall","year":"2012","journal-title":"IEEE Multimed."},{"key":"ref_25","unstructured":"Aifanti, N., Papachristou, C., and Delopoulos, A. (2010, January 12\u201314). The MUG facial expression database. Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10, Desenzano del Garda, Italy."},{"key":"ref_26","unstructured":"Yin, L., Wei, X., Sun, Y., Wang, J., and Rosato, M. (2006, January 10\u201312). A 3D facial expression database for facial behavior research. Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, UK."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Gunes, H., and Piccardi, M. (2006, January 20\u201324). A Bimodal Face and Body Gesture Database for Automatic Analysis of Human Nonverbal Affective Behavior. Proceedings of the 18th International Conference on Pattern Recognition (ICPR\u201906), Hong Kong, China.","DOI":"10.1109\/ICPR.2006.39"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Aran, O., Ari, I., Guvensan, A., Haberdar, H., Kurt, Z., Turkmen, I., Uyar, A., and Akarun, L. (2007, January 11\u201313). A Database of Non-Manual Signs in Turkish Sign Language. Proceedings of the 2007 IEEE 15th Signal Processing and Communications Applications, Eskisehir, Turkey.","DOI":"10.1109\/SIU.2007.4298708"},{"key":"ref_29","unstructured":"Freitas, F.D.A. (2014). Grammatical Facial Expressions, UCI Machine Learning Repository."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Jiang, X., Zong, Y., Zheng, W., Tang, C., Xia, W., Lu, C., and Liu, J. 
(2020, January 12\u201316). DFEW: A Large-Scale Database for Recognizing Dynamic Facial Expressions in the Wild. Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA.","DOI":"10.1145\/3394171.3413620"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Sheerman-Chase, T., Ong, E.J., and Bowden, R. (2011, January 6\u201313). Cultural factors in the regression of non-verbal communication perception. Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain.","DOI":"10.1109\/ICCVW.2011.6130393"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Silva, E.P.d., Costa, P.D.P., Kumada, K.M.O., and De Martino, J.M. (2020, January 16\u201320). SILFA: Sign Language Facial Action Database for the Development of Assistive Technologies for the Deaf. Proceedings of the 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina.","DOI":"10.1109\/FG47880.2020.00059"},{"key":"ref_33","unstructured":"Doc\u00edo-Fern\u00e1ndez, L., Alba-Castro, J.L., Torres-Guijarro, S., Rodr\u00edguez-Banga, E., Rey-Area, M., P\u00e9rez-P\u00e9rez, A., Rico-Alonso, S., and Garc\u00eda-Mateo, C. (2020, January 11\u201316). LSE_UVIGO: A Multi-source Database for Spanish Sign Language Recognition. Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, Marseille, France."},{"key":"ref_34","unstructured":"Max Planck Institute for Psycholinguistics (2020, November 20). The Language Archive [Computer Software]. Available online: https:\/\/archive.mpi.nl\/tla\/elan."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Baltru\u0161aitis, T., Mahmoud, M., and Robinson, P. (2015, January 4\u20138). 
Cross-dataset learning and person-specific normalisation for automatic action unit detection. Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia.","DOI":"10.1109\/FG.2015.7284869"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Zadeh, A., Lim, Y.C., Baltru\u0161aitis, T., and Morency, L.P. (2017, January 22\u201329). Convolutional experts constrained local model for 3D facial landmark detection. Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, Venice, Italy.","DOI":"10.1109\/ICCVW.2017.296"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2019). MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Liu, Z., Zhang, H., Chen, Z., Wang, Z., and Ouyang, W. (2020, January 13\u201319). Disentangling and unifying graph convolutions for skeleton-based action recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00022"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"V\u00e1zquez-Enr\u00edquez, M., Alba-Castro, J.L., Fern\u00e1ndez, L.D., and Banga, E.R. (2021, January 19\u201325). Isolated Sign Language Recognition with Multi-Scale Spatial-Temporal Graph Convolutional Networks. Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Virtual.","DOI":"10.1109\/CVPRW53098.2021.00385"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Li, M., Chen, S., Chen, X., Zhang, Y., Wang, Y., and Tian, Q. (2019). Actional-Structural Graph Convolutional Networks for Skeleton-based Action Recognition. 
arXiv.","DOI":"10.1109\/CVPR.2019.00371"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1776","DOI":"10.1016\/j.patcog.2009.12.002","article-title":"A multi-class classification strategy for Fisher scores: Application to signer independent sign language recognition","volume":"43","author":"Aran","year":"2010","journal-title":"Pattern Recognit."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"470","DOI":"10.1016\/j.imavis.2011.03.001","article-title":"Robust classification of face and head gestures in video","volume":"29","author":"Sankur","year":"2011","journal-title":"Image Vis. Comput."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Chouhayebi, H., Riffi, J., Mahraz, M.A., Yahyaouy, A., Tairi, H., and Alioua, N. (2020, January 9\u201311). Facial expression recognition based on geometric features. Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco.","DOI":"10.1109\/ISCV49265.2020.9204111"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Ari, I., Uyar, A., and Akarun, L. (2008, January 27\u201329). Facial feature tracking and expression recognition for sign language. 
Proceedings of the 23rd International Symposium on Computer and Information Sciences, Istanbul, Turkey.","DOI":"10.1109\/ISCIS.2008.4717948"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/10\/3839\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:14:33Z","timestamp":1760138073000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/10\/3839"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,5,19]]},"references-count":44,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2022,5]]}},"alternative-id":["s22103839"],"URL":"https:\/\/doi.org\/10.3390\/s22103839","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,5,19]]}}}