{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:30:23Z","timestamp":1775068223724,"version":"3.50.1"},"reference-count":96,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2013,11,14]],"date-time":"2013-11-14T00:00:00Z","timestamp":1384387200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human\u2013robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the ChucK language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve a greater degree of satisfaction during the human\u2013robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. 
Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.<\/jats:p>","DOI":"10.3390\/s131115549","type":"journal-article","created":{"date-parts":[[2013,11,14]],"date-time":"2013-11-14T11:24:25Z","timestamp":1384428265000},"page":"15549-15581","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":92,"title":["A Multimodal Emotion Detection System during Human\u2013Robot Interaction"],"prefix":"10.3390","volume":"13","author":[{"given":"Fernando","family":"Alonso-Mart\u00edn","sequence":"first","affiliation":[{"name":"Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, Legan\u00e9s, Madrid 28911, Spain"}]},{"given":"Mar\u00eda","family":"Malfaz","sequence":"additional","affiliation":[{"name":"Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, Legan\u00e9s, Madrid 28911, Spain"}]},{"given":"Jo\u00e3o","family":"Sequeira","sequence":"additional","affiliation":[{"name":"Institute for Systems and Robotics (ISR), North Tower, Av.Rovisco Pais 1, Lisbon, 1049-001, Portugal"}]},{"given":"Javier","family":"Gorostiza","sequence":"additional","affiliation":[{"name":"Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, Legan\u00e9s, Madrid 28911, Spain"}]},{"given":"Miguel","family":"Salichs","sequence":"additional","affiliation":[{"name":"Robotics Lab, Universidad Carlos III de Madrid, Av. 
de la Universidad 30, Legan\u00e9s, Madrid 28911, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2013,11,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Picard, R. (1999). Affective Computing for HCI, MIT Press.","DOI":"10.7551\/mitpress\/1140.001.0001"},{"key":"ref_2","unstructured":"Alonso-Mart\u00edn, F., Gorostiza, J.F., and Salichs, M.A. (July). Descripci\u00f3n General del Sistema de Interacci\u00f3n Humano-Robot Robotics Dialog System (RDS). Madrid, Spain."},{"key":"ref_3","unstructured":"Alonso-Martin, F., Gorostiza, J.F., and Salichs, M.A. (March). Preliminary Experiments on HRI for Improvement the Robotic Dialog System (RDS). Legan\u00e9s, Spain."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Rich, C., and Ponsler, B. (2010, January 2\u20135). Recognizing Engagement in Human-Robot Interaction. Osaka, Japan.","DOI":"10.1109\/HRI.2010.5453163"},{"key":"ref_5","unstructured":"Arnold, M. (1960). Emotion and Personality, Columbia University Press."},{"key":"ref_6","unstructured":"Ekman, P., Friesen, W., and Ellsworth, P. (1972). Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press."},{"key":"ref_7","unstructured":"Plutchik, R. (1980). Emotion, a Psychoevolutionary Synthesis, Harper and Row."},{"key":"ref_8","unstructured":"Cowie, R., and Douglas-Cowie, E. (2000, January 5\u20137). \u2018FEELTRACE\u2019: An Instrument for Recording Perceived Emotion in Real Time. Newcastle, Northern Ireland, UK."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/0005-7916(94)90063-9","article-title":"Measuring emotion: The self-assessment manikin and the semantic differential","volume":"25","author":"Bradley","year":"1994","journal-title":"J. Behav. Ther. Exp. Psychiatry"},{"key":"ref_10","unstructured":"Bradley, M., and Lang, P. (2000). 
Cognitive Neuroscience of Emotion, Oxford University Press."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"726","DOI":"10.1109\/TSMCA.2009.2014645","article-title":"Emotion recognition from facial expressions and its control using fuzzy logic","volume":"39","author":"Chakraborty","year":"2009","journal-title":"IEEE Trans. Syst. Man Cybern. A"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Cheng, J., and Deng, Y. (2013, January 22\u201326). A Facial Expression Based Continuous Emotional State Monitoring System with GPU Acceleration. Shanghai, China.","DOI":"10.1109\/FG.2013.6553811"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"803","DOI":"10.1016\/j.imavis.2008.08.005","article-title":"Facial expression recognition based on Local Binary Patterns: A comprehensive study","volume":"27","author":"Shan","year":"2009","journal-title":"Image Vis. Comput."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"32","DOI":"10.1109\/79.911197","article-title":"Emotion recognition in human\u2013computer interaction","volume":"18","author":"Cowie","year":"2001","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_15","first-page":"46","article-title":"Emotion recognition based on audio speech","volume":"1","author":"Dar","year":"2013","journal-title":"IOSR J. Comput. Eng."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Pantic, M., Sebe, N., Cohn, J.F., and Huang, T. Affective Multimodal Human\u2013Computer Interaction. New York, NY, USA. 2005.","DOI":"10.1145\/1101149.1101299"},{"key":"ref_17","unstructured":"Roy, D., and Pentland, A. (1996, January 14\u201316). Automatic Spoken Affect Classification and Analysis. Killington, VT, USA."},{"key":"ref_18","first-page":"46","article-title":"Detecting changing emotions in human speech by machine and humans","volume":"39","author":"Kowalczyk","year":"2013","journal-title":"Appl. Intell."},{"key":"ref_19","unstructured":"Yu, C., Tian, Q., Cheng, F., and Zhang, S. 
Advanced Research on Computer Science and Information Engineering, Springer."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"691","DOI":"10.1037\/a0017088","article-title":"Emotion recognition from expressions in face, voice, and body: The multimodal emotion recognition test (MERT)","volume":"9","author":"Grandjean","year":"2009","journal-title":"Emotion"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Busso, C., Deng, Z., Yildirim, S., and Bulut, M. (2004, January 13\u201315). Analysis of Emotion Recognition Using Facial Expressions, Speech and Multimodal Information. New York, NY, USA.","DOI":"10.1145\/1027933.1027968"},{"key":"ref_22","unstructured":"Castellano, G., Kessous, L., and Caridakis, G. (2008). Affect and Emotion in Human-Computer Interaction, Springer."},{"key":"ref_23","unstructured":"De Silva, L., Miyasato, T., and Nakatsu, R. (2007, January 10\u201313). Facial Emotion Recognition Using Multi-Modal Information. Singapore."},{"key":"ref_24","unstructured":"Chen, L., Huang, T., Miyasato, T., and Nakatsu, R. (1998, January 14\u201316). Multimodal Human Emotion\/Expression Recognition. Nara, Japan."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"691","DOI":"10.1007\/978-3-642-25664-6_81","article-title":"Bimodal emotion recognition based on speech signals and facial expression","volume":"122","author":"Tu","year":"2012","journal-title":"Found. Intell. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1109\/TPAMI.2008.52","article-title":"A survey of affect recognition methods: Audio, visual, and spontaneous expressions","volume":"31","author":"Zeng","year":"2009","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_27","unstructured":"Yoshitomi, Y., Kawano, T., and Kitazoe, T. (2000, January 27\u201329). Effect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face. 
Osaka, Japan."},{"key":"ref_28","unstructured":"Merchant, J. (2013). Lymbix Sentiment Analysis Reinvented."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Rao, Y., Lei, J., Wenyin, L., Li, Q., and Chen, M. (2013). Building emotional dictionary for sentiment analysis of online news. World Wide Web.","DOI":"10.1007\/s11280-013-0221-9"},{"key":"ref_30","unstructured":"Serban, O., Pauchet, A., and Pop, H. (2012, January 6\u20138). Recognizing Emotions in Short Texts. Vilamoura, Portugal."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Alghowinem, S., Goecke, R., and Wagner, M. (2013, January 2\u20135). Head Pose and Movement Analysis as an Indicator of Depression. Geneva, Switzerland.","DOI":"10.1109\/ACII.2013.53"},{"key":"ref_32","unstructured":"Russell, J., and Dols, J. (1997). The Psychology of Facial Expression, Cambridge University Press."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Liu, Y., Sourina, O., and Nguyen, M. (2010, January 20\u201322). Real-Time EEG-Based Human Emotion Recognition and Visualization. Singapore, Republic of Singapore.","DOI":"10.1109\/CW.2010.37"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Valderrama Cuadros, C.E., and Ulloa Villegas, G.V. (2012). Spectral analysis of physiological parameters for consumers' emotion detection.","DOI":"10.1109\/STSIVA.2012.6340595"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"248","DOI":"10.1037\/h0024648","article-title":"Inference of attitudes from nonverbal communication in two channels","volume":"31","author":"Mehrabian","year":"1967","journal-title":"J. Consult. Psychol."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1037\/h0024532","article-title":"Decoding of inconsistent communications","volume":"6","author":"Mehrabian","year":"1967","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_37","unstructured":"Truong, K., van Leeuwen, D., and Neerincx, M. (2007). 
Foundations of Augmented Cognition, Springer."},{"key":"ref_38","unstructured":"Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Neumann, U., and Narayanan, S. Analysis of Emotion Recognition Using Facial Expressions, Speech and Multimodal Information."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Sebe, N., Cohen, I., Gevers, T., and Huang, T. (2006, January 20\u201324). Emotion Recognition Based on Joint Visual and Audio Cues. Hong Kong.","DOI":"10.1109\/ICPR.2006.489"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"29","DOI":"10.1016\/j.csl.2009.12.004","article-title":"Detecting emotional state of a child in a conversational computer game","volume":"25","author":"Yildirim","year":"2011","journal-title":"Comput. Speech Lang."},{"key":"ref_41","unstructured":"Microsoft Kinect. Available online at http:\/\/en.wikipedia.org\/wiki\/Kinect#Kinect_on_the_Xbox_-One."},{"key":"ref_42","unstructured":"Toledo-Ronen, O., and Sorin, A. (2012, January 1\u20132). Emotion Detection for Dementia Patients Monitoring. Haifa, Israel."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1016\/j.schres.2013.06.044","article-title":"Fearful face recognition in schizophrenia: An electrophysiological study","volume":"149","author":"Csukly","year":"2013","journal-title":"Schizophr. Res."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"290","DOI":"10.1007\/s11065-010-9138-6","article-title":"Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies","volume":"20","author":"Harms","year":"2010","journal-title":"Neuropsychol. Rev."},{"key":"ref_45","unstructured":"Petrushin, V. Emotion in Speech: Recognition and Application to Call Centers."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Inanoglu, Z., and Caneel, R. (2005, January 9\u201312). Emotive Alert: HMM-Based Emotion Detection in Voicemail Messages. 
New York, NY, USA.","DOI":"10.1145\/1040830.1040885"},{"key":"ref_47","unstructured":"Ram\u00edrez, A.P. (2009). Uso del Cry Translator (Traductor del llanto del beb\u00e9) de Biloop Technologic SL (Espa\u00f1a) como identificador del llanto en el ni\u00f1o y pautas a seguir. Ped. Rur. Ext., 39."},{"key":"ref_48","unstructured":"XML. Available online: http:\/\/en.wikipedia.org\/wiki\/Extensible_Markup_Language."},{"key":"ref_49","unstructured":"VoiceXML. Available online: http:\/\/en.wikipedia.org\/wiki\/VoiceXML."},{"key":"ref_50","unstructured":"Bladeware. Available online: http:\/\/sourceforge.net\/projects\/bladeware-vxml."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Litman, D.J., and Forbes-Riley, K. (2004, January 21\u201326). Predicting Student Emotions in Computer-Human Tutoring Dialogues. Barcelona, Spain.","DOI":"10.3115\/1218955.1219000"},{"key":"ref_52","unstructured":"Cowie, R., Douglas-Cowie, E., and Romano (1999, January 1\u20133). Changing Emotional Tone in Dialogue and its Prosodic Correlates. Veldhoven, Netherlands."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Eyben, F., W\u00f6llmer, M., and Schuller, B. (2010, January 25\u201329). openSMILE: The Munich versatile and fast open-source audio feature extractor. Firenze, Italy.","DOI":"10.1145\/1873951.1874246"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Eyben, F., W\u00f6llmer, M., and Schuller, B. (2009, January 10\u201312). OpenEAR\u2014Introducing the Munich Open-Source Emotion and Affect Recognition Toolkit. Amsterdam, Netherlands.","DOI":"10.1109\/ACII.2009.5349350"},{"key":"ref_55","first-page":"341","article-title":"Praat, a system for doing phonetics by computer","volume":"5","author":"Boersma","year":"2002","journal-title":"Glot Int."},{"key":"ref_56","unstructured":"Verbio. Available online: http:\/\/www.verbio.com\/webverbio3\/es\/tecnologia\/verbio-tts\/52-verbio-speech-analytics.html."},{"key":"ref_57","unstructured":"ChucK. 
Available online: http:\/\/chuck.cs.princeton.edu."},{"key":"ref_58","unstructured":"Fiebrink, R., and Cook, P. A Meta-Instrument for Interactive, On-the-fly Machine Learning. Software Available online: https:\/\/code.google.com\/p\/wekinator\/downloads\/detail?name=wekinator-2011-06-06.zip&can=2q=."},{"key":"ref_59","unstructured":"Discrete Wavelet Transform. Available online: http:\/\/www.cs.ucf.edu\/m\u02dcali\/haar."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"1917","DOI":"10.1121\/1.1458024","article-title":"YIN, A fundamental frequency estimator for speech and music","volume":"111","author":"Kawahara","year":"2002","journal-title":"J. Acoust. Soc. Am."},{"key":"ref_61","unstructured":"McLeod, P., and Wyvill, G. (September). A Smarter Way to Find Pitch. Barcelona, Spain."},{"key":"ref_62","unstructured":"Larson, E., and Maddox, R. (2005, January 3). Real-Time Time-Domain Pitch Tracking Using Wavelets. Champaign, IL, USA."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"416","DOI":"10.1016\/j.specom.2008.01.001","article-title":"Influence of contextual information in emotion annotation for spoken dialogue systems","volume":"50","author":"Callejas","year":"2008","journal-title":"Speech Commun."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Vlasenko, B., and Schuller, B. (2007, January 27\u201331). Combining Frame and Turn-Level Information for Robust Recognition of Emotions Within Speech. Antwerp, Belgium.","DOI":"10.21437\/Interspeech.2007-611"},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Schuller, B., and Arsic, D. (2006, January 2\u20135). Emotion Recognition in the Noise Applying Large Acoustic Feature Sets. Dresden, Germany.","DOI":"10.21437\/SpeechProsody.2006-150"},{"key":"ref_66","unstructured":"Steidl, S. (2009). Automatic Classification of Emotion Related User States in Spontaneous Children's Speech. [Ph.D. 
thesis, University of Erlangen-N\u00fcrnberg]."},{"key":"ref_67","unstructured":"FAU. Available online: http:\/\/www5.cs.fau.de\/de\/mitarbeiter\/steidl-stefan\/fau-aibo-emotion-corpus."},{"key":"ref_68","unstructured":"Holmes, G., Donkin, A., and Witten, I. (1994, January 29). WEKA: A Machine Learning Workbench. Brisbane, Australia."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Bartlett, M.S., Littlewort, G., Fasel, I., and Movellan, J.R. (2003, January 16\u201322). Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction. Madison, WI, USA.","DOI":"10.1109\/CVPRW.2003.10057"},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"1424","DOI":"10.1109\/34.895976","article-title":"Automatic analysis of facial expressions: The state of the art","volume":"22","author":"Pantic","year":"2000","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1023\/B:VISI.0000013087.49260.fb","article-title":"Robust real-time face detection","volume":"57","author":"Viola","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_72","unstructured":"OpenCV. Available online: http:\/\/docs.opencv.org\/trunk\/modules\/contrib\/doc\/facerec."},{"key":"ref_73","unstructured":"Kobayashi, H., and Hara, F. (1997, January 12\u201315). Facial Interaction Between Animated 3D Face Robot and Human Beings. Orlando, FL, USA."},{"key":"ref_74","unstructured":"Padgett, C., and Cottrell, G. (1997). Advances in Neural Information Processing Systems, The MIT Press."},{"key":"ref_75","doi-asserted-by":"crossref","unstructured":"Cootes, T., Edwards, G., and Taylor, C. (1998, January 7). Active Appearance Models. Freiburg, Germany.","DOI":"10.1109\/ICCV.1999.791209"},{"key":"ref_76","unstructured":"Lucey, S., Matthews, I., and Hu, C. (2006, January 10\u201312). AAM Derived Face Representations for Robust Facial Action Recognition. 
Southampton, UK."},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"569","DOI":"10.1109\/34.216726","article-title":"Analysis and synthesis of facial image sequences using physical and anatomical models","volume":"15","author":"Terzopoulos","year":"1993","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_78","first-page":"589","article-title":"Machine interpretation of emotion: Design of a memory-based expert system for interpreting facial expressions in terms of signaled emotions","volume":"17","author":"Kearney","year":"1993","journal-title":"Cogn. Sci."},{"key":"ref_79","first-page":"373","article-title":"Facial motion in the perception of faces and of emotional expression","volume":"4","author":"Bassili","year":"1978","journal-title":"J. Exp. Psychol.: Hum. Percept. Perform."},{"key":"ref_80","doi-asserted-by":"crossref","unstructured":"Ekman, P., Friesen, W., and Hager, J. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press.","DOI":"10.1037\/t27734-000"},{"key":"ref_81","unstructured":"Ernst, A., Ruf, T., and Kueblbeck, C. (2009, January 2). A Modular Framework to Detect and Analyze Faces for Audience Measurement Systems. L\u00fcbeck, Germany."},{"key":"ref_82","unstructured":"Izard, C.E. (1983). The Maximally Discriminative Facial Movement Coding System, University of Delaware."},{"key":"ref_83","unstructured":"Izard, C., Dougherty, L., and Hembree, E. (1983). A System for Identifying Affect Expressions by Holistic Judgments (Affex), University of Delaware."},{"key":"ref_84","unstructured":"Pelachaud, C., Badler, N., and Viaud, M. Technical Report No. IRCS-94-21. Available online: http:\/\/repository.upenn.edu\/ircs_reports\/167\/."},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1037\/h0030377","article-title":"Constants across cultures in the face and emotion","volume":"17","author":"Ekman","year":"1971","journal-title":"J. Personal. Soc. 
Psychol."},{"key":"ref_86","first-page":"423","article-title":"Recognizing faces [and discussion]","volume":"302","author":"Bruce","year":"1983","journal-title":"Philos. Trans. R. Soc. B: Biol. Sci."},{"key":"ref_87","doi-asserted-by":"crossref","first-page":"487","DOI":"10.1037\/0022-3514.58.3.487","article-title":"Facial expressions and the regulation of emotions","volume":"58","author":"Izard","year":"1990","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"1683","DOI":"10.1109\/TPAMI.2007.1094","article-title":"Facial action unit recognition by exploiting their dynamic and semantic relationships","volume":"29","author":"Tong","year":"2007","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Littlewort, G., Whitehill, J., Wu, T.-F., Butko, N., Ruvolo, P., Movellan, J., and Bartlett, M. (2011, January 21\u201325). The Motion in Emotion\u2014A CERT Based Approach to the FERA Emotion Challenge. Santa Barbara, CA, USA.","DOI":"10.1109\/FG.2011.5771370"},{"key":"ref_90","unstructured":"CERT. Available online: http:\/\/mpt4u.com\/AFECT."},{"key":"ref_91","unstructured":"Ernst, A., Ruf, T., and Kueblbeck, C. (2009, January 2). A Modular Framework to Detect and Analyze Faces for Audience Measurement Systems. L\u00fcbeck, Germany."},{"key":"ref_92","doi-asserted-by":"crossref","first-page":"564","DOI":"10.1016\/j.imavis.2005.08.005","article-title":"Face detection and tracking in video sequences using the modified census transformation","volume":"24","author":"Ernst","year":"2006","journal-title":"Image Vis. Comput."},{"key":"ref_93","unstructured":"SHORE demonstration. Available online: http:\/\/www.iis.fraunhofer.de\/en\/bf\/bsy\/produkte\/shore.html."},{"key":"ref_94","unstructured":"Wierzbicki, R., Tschoeppe, C., Ruf, T., and Garbas, J. 
EDIS-Emotion-Driven Interactive Systems."},{"key":"ref_95","unstructured":"Alonso-Martin, F., Gorostiza, J.F., and Salichs, M.A. (June). Musical Expression in a Social Robot. Alcal\u00e1 de Henares, Spain."},{"key":"ref_96","doi-asserted-by":"crossref","unstructured":"Salichs, M., Barber, R., Khamis, A., Malfaz, M., Gorostiza, J., Pacheco, R., Rivas, R., Corrales, A., Delgado, E., and Garcia, D. (2006, January 1\u20133). Maggie: A Robotic Platform for Human-Robot Social Interaction. Bangkok, Thailand.","DOI":"10.1109\/RAMECH.2006.252754"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/13\/11\/15549\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T21:50:37Z","timestamp":1760219437000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/13\/11\/15549"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2013,11,14]]},"references-count":96,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2013,11]]}},"alternative-id":["s131115549"],"URL":"https:\/\/doi.org\/10.3390\/s131115549","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2013,11,14]]}}}