{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T15:37:02Z","timestamp":1774539422914,"version":"3.50.1"},"reference-count":122,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2020,8,6]],"date-time":"2020-08-06T00:00:00Z","timestamp":1596672000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MTI"],"abstract":"<jats:p>The exigency of emotion recognition is pushing the envelope for meticulous strategies of discerning actual emotions through the use of superior multimodal techniques. This work presents a multimodal automatic emotion recognition (AER) framework capable of differentiating between expressed emotions with high accuracy. The contribution involves implementing an ensemble-based approach for the AER through the fusion of visible images and infrared (IR) images with speech. The framework is implemented in two layers, where the first layer detects emotions using single modalities while the second layer combines the modalities and classifies emotions. Convolutional Neural Networks (CNN) have been used for feature extraction and classification. A hybrid fusion approach comprising early (feature-level) and late (decision-level) fusion, was applied to combine the features and the decisions at different stages. The output of the CNN trained with voice samples of the RAVDESS database was combined with the image classifier\u2019s output using decision-level fusion to obtain the final decision. An accuracy of 86.36% and similar recall (0.86), precision (0.88), and f-measure (0.87) scores were obtained. A comparison with contemporary work endorsed the competitiveness of the framework with the rationale for exclusivity in attaining this accuracy in wild backgrounds and light-invariant conditions.<\/jats:p>","DOI":"10.3390\/mti4030046","type":"journal-article","created":{"date-parts":[[2020,8,6]],"date-time":"2020-08-06T09:41:21Z","timestamp":1596706881000},"page":"46","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":49,"title":["A Multimodal Facial Emotion Recognition Framework through the Fusion of Speech with Visible and Infrared Images"],"prefix":"10.3390","volume":"4","author":[{"given":"Mohammad Faridul Haque","family":"Siddiqui","sequence":"first","affiliation":[{"name":"Electrical Engineering and Computer Science, The University of Toledo, Toledo, OH 43606, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4719-4941","authenticated-orcid":false,"given":"Ahmad Y.","family":"Javaid","sequence":"additional","affiliation":[{"name":"Electrical Engineering and Computer Science, The University of Toledo, Toledo, OH 43606, USA"}]}],"member":"1968","published-online":{"date-parts":[[2020,8,6]]},"reference":[
{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Ekman, P., and Friesen, W.V. (1977). Facial Action Coding System, Consulting Psychologists Press, Stanford University.","DOI":"10.1037\/t27734-000"},
{"key":"ref_2","unstructured":"Ekman, P., Friesen, W., and Hager, J. (2002). Facial Action Coding System: The Manual on CD ROM. A Human Face, Network Information Research Co."},
{"key":"ref_3","unstructured":"Ekman, P., Friesen, W.V., and Hager, J.C. (2002). FACS investigator\u2019s guide. A Human Face, Network Information Research Co."},
{"key":"ref_4","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1007\/s12193-015-0195-2","article-title":"Emonets: Multimodal deep learning approaches for emotion recognition in video","volume":"10","author":"Kahou","year":"2016","journal-title":"J. Multimodal User Interfaces"},
{"key":"ref_5","unstructured":"Kim, B.K., Dong, S.Y., Roh, J., Kim, G., and Lee, S.Y. (July, January 26). Fusing Aligned and Non-Aligned Face Information for Automatic Affect Recognition in the Wild: A Deep Learning Approach. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA."},
{"key":"ref_6","doi-asserted-by":"crossref","first-page":"125","DOI":"10.1007\/s12193-015-0203-6","article-title":"Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild","volume":"10","author":"Sun","year":"2016","journal-title":"J. Multimodal User Interfaces"},
{"key":"ref_7","doi-asserted-by":"crossref","first-page":"415","DOI":"10.1080\/10447318.2016.1159799","article-title":"Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning","volume":"32","author":"Bahreini","year":"2016","journal-title":"Int. J.-Hum.-Comput. Interact."},
{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Xu, C., Cao, T., Feng, Z., and Dong, C. (2012, January 29\u201331). Multi-Modal Fusion Emotion Recognition Based on HMM and ANN. Proceedings of the Contemporary Research on E-business Technology and Strategy, Tianjin, China.","DOI":"10.1007\/978-3-642-34447-3_48"},
{"key":"ref_9","doi-asserted-by":"crossref","first-page":"15549","DOI":"10.3390\/s131115549","article-title":"A multimodal emotion detection system during human\u2013robot interaction","volume":"13","author":"Malfaz","year":"2013","journal-title":"Sensors"},
{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Chen, J., Chen, Z., Chi, Z., and Fu, H. (2014, January 12\u201316). Emotion recognition in the wild with feature fusion and multiple kernel learning. Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey.","DOI":"10.1145\/2663204.2666277"},
{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Tzirakis, P., Trigeorgis, G., Nicolaou, M.A., Schuller, B., and Zafeiriou, S. (2017). End-to-End Multimodal Emotion Recognition using Deep Neural Networks. arXiv.","DOI":"10.1109\/ICASSP.2018.8462677"},
{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Torres, J.M.M., and Stepanov, E.A. (2017, January 23\u201327). Enhanced face\/audio emotion recognition: Video and instance level classification using ConvNets and restricted Boltzmann Machines. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany.","DOI":"10.1145\/3106426.3109423"},
{"key":"ref_13","doi-asserted-by":"crossref","first-page":"53","DOI":"10.5772\/54002","article-title":"Towards efficient multi-modal emotion recognition","volume":"10","year":"2013","journal-title":"Int. J. Adv. Robot. Syst."},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Kim, Y., Lee, H., and Provost, E.M. (2013, January 26\u201331). Deep learning for robust feature generation in audiovisual emotion recognition. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6638346"},
{"key":"ref_15","doi-asserted-by":"crossref","first-page":"753","DOI":"10.1007\/s11036-016-0685-9","article-title":"Audio-visual emotion recognition using big data towards 5G","volume":"21","author":"Hossain","year":"2016","journal-title":"Mob. Netw. Appl."},
{"key":"ref_16","doi-asserted-by":"crossref","first-page":"325","DOI":"10.1007\/s12193-015-0207-2","article-title":"Audio-visual emotion recognition using multi-directional regression and Ridgelet transform","volume":"10","author":"Hossain","year":"2016","journal-title":"J. Multimodal User Interfaces"},
{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Noroozi, F., Marjanovic, M., Njegus, A., Escalera, S., and Anbarjafari, G. (2016, January 4\u20138). Fusion of classifier predictions for audio-visual emotion recognition. Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico.","DOI":"10.1109\/ICPR.2016.7899608"},
{"key":"ref_18","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1109\/TAFFC.2016.2593719","article-title":"Facial expression recognition in video with multiple feature fusion","volume":"9","author":"Chen","year":"2016","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1319","DOI":"10.1109\/TMM.2016.2557721","article-title":"Sparse Kernel Reduced-Rank Regression for Bimodal Emotion Recognition From Facial Expression and Speech","volume":"18","author":"Yan","year":"2016","journal-title":"IEEE Trans. Multimed."},
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kim, Y. (2015, January 21\u201324). Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344653"},
{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Pei, E., Yang, L., Jiang, D., and Sahli, H. (2015, January 21\u201324). Multimodal dimensional affect recognition using deep bidirectional long short-term memory recurrent neural networks. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344573"},
{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Nguyen, D., Nguyen, K., Sridharan, S., Ghasemi, A., Dean, D., and Fookes, C. (2017, January 24\u201331). Deep spatio-temporal features for multimodal emotion recognition. Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA.","DOI":"10.1109\/WACV.2017.140"},
{"key":"ref_23","doi-asserted-by":"crossref","first-page":"451","DOI":"10.1007\/s00530-017-0547-8","article-title":"Multimodal shared features learning for emotion recognition by enhanced sparse local discriminative canonical correlation analysis","volume":"25","author":"Fu","year":"2019","journal-title":"Multimed. Syst."},
{"key":"ref_24","doi-asserted-by":"crossref","first-page":"3030","DOI":"10.1109\/TCSVT.2017.2719043","article-title":"Learning Affective Features with a Hybrid Deep Model for Audio-Visual Emotion Recognition","volume":"28","author":"Zhang","year":"2017","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},
{"key":"ref_25","unstructured":"Cid, F., Manso, L.J., and N\u00fanez, P. (October, January 28). A Novel Multimodal Emotion Recognition Approach for Affective Human Robot Interaction. Proceedings of the 2015 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany."},
{"key":"ref_26","unstructured":"Haq, S., Jan, T., Jehangir, A., Asif, M., Ali, A., and Ahmad, N. (2015). Bimodal Human Emotion Classification in the Speaker-Dependent Scenario, Pakistan Academy of Sciences."},
{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Gideon, J., Zhang, B., Aldeneh, Z., Kim, Y., Khorram, S., Le, D., and Provost, E.M. (2016, January 12\u201316). Wild wild emotion: A multimodal ensemble approach. Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan.","DOI":"10.1145\/2993148.2997626"},
{"key":"ref_28","doi-asserted-by":"crossref","first-page":"597","DOI":"10.1109\/TMM.2012.2189550","article-title":"Kernel cross-modal factor analysis for information fusion with application to bimodal emotion recognition","volume":"14","author":"Wang","year":"2012","journal-title":"IEEE Trans. Multimed."},
{"key":"ref_29","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1109\/TAFFC.2017.2713783","article-title":"Audio-visual emotion recognition in video clips","volume":"10","author":"Noroozi","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_30","unstructured":"Afouras, T., Chung, J.S., Senior, A., Vinyals, O., and Zisserman, A. (2018). Deep audio-visual speech recognition. IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Albanie, S., Nagrani, A., Vedaldi, A., and Zisserman, A. (2018). Emotion recognition in speech using cross-modal transfer in the wild. arXiv.","DOI":"10.1145\/3240508.3240578"},
{"key":"ref_32","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1016\/j.neucom.2018.03.068","article-title":"Multi-cue fusion for emotion recognition in the wild","volume":"309","author":"Yan","year":"2018","journal-title":"Neurocomputing"},
{"key":"ref_33","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/TAFFC.2016.2588488","article-title":"A Combined Rule-Based & Machine Learning Audio-Visual Emotion Recognition Approach","volume":"9","author":"Seng","year":"2018","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Dhall, A., Kaur, A., Goecke, R., and Gedeon, T. (2018, January 16\u201320). Emotiw 2018: Audio-video, student engagement and group-level affect prediction. Proceedings of the 2018 on International Conference on Multimodal Interaction, Boulder, CO, USA.","DOI":"10.1145\/3242969.3264993"},
{"key":"ref_35","doi-asserted-by":"crossref","first-page":"975","DOI":"10.1007\/s00138-018-0960-9","article-title":"Audiovisual emotion recognition in wild","volume":"30","author":"Avots","year":"2018","journal-title":"Mach. Vis. Appl."},
{"key":"ref_36","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1109\/T-AFFC.2011.12","article-title":"Exploring fusion methods for multimodal emotion recognition with missing data","volume":"2","author":"Wagner","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_37","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1007\/s12193-009-0025-5","article-title":"Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis","volume":"3","author":"Kessous","year":"2010","journal-title":"J. Multimodal User Interfaces"},
{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Ranganathan, H., Chakraborty, S., and Panchanathan, S. (2016, January 7\u201310). Multimodal emotion recognition using deep learning architectures. Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA.","DOI":"10.1109\/WACV.2016.7477679"},
{"key":"ref_39","first-page":"375","article-title":"Multimodal emotion recognition from expressive faces, body gestures and speech","volume":"Volume 247","author":"Caridakis","year":"2007","journal-title":"Artificial Intelligence and Innovations 2007: From Theory to Applications"},
{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Ghayoumi, M., and Bansal, A.K. (2016, January 6\u20137). Multimodal architecture for emotion in robots using deep learning. Proceedings of the Future Technologies Conference (FTC), San Francisco, CA, USA.","DOI":"10.1109\/FTC.2016.7821710"},
{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Ghayoumi, M., Thafar, M., and Bansal, A.K. (2016, January 25\u201326). Towards Formal Multimodal Analysis of Emotions for Affective Computing. Proceedings of the 22nd International Conference on Distributed Multimedia Systems, Salerno, Italy.","DOI":"10.18293\/DMS2016-030"},
{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Filntisis, P.P., Efthymiou, N., Koutras, P., Potamianos, G., and Maragos, P. (2019). Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction. arXiv.","DOI":"10.1109\/LRA.2019.2930434"},
{"key":"ref_43","doi-asserted-by":"crossref","first-page":"609","DOI":"10.1007\/s11704-014-3295-3","article-title":"Emotion recognition from thermal infrared images using deep Boltzmann machine","volume":"8","author":"Wang","year":"2014","journal-title":"Front. Comput. Sci."},
{"key":"ref_44","doi-asserted-by":"crossref","first-page":"682","DOI":"10.1109\/TMM.2010.2060716","article-title":"A natural visible and infrared facial expression database for expression recognition and emotion inference","volume":"12","author":"Wang","year":"2010","journal-title":"IEEE Trans. Multimed."},
{"key":"ref_45","unstructured":"Abidi, B. (2020, August 06). Dataset 02: IRIS Thermal\/Visible Face Database. DOE University Research Program in Robotics under grant DOE-DE-FG02-86NE37968. Available online: http:\/\/vcipl-okstate.org\/pbvs\/bench\/."},
{"key":"ref_46","doi-asserted-by":"crossref","first-page":"580","DOI":"10.11591\/eei.v7i4.1230","article-title":"Local Entropy and Standard Deviation for Facial Expressions Recognition in Thermal Imaging","volume":"7","author":"Elbarawy","year":"2018","journal-title":"Bull. Electr. Eng. Inform."},
{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"He, S., Wang, S., Lan, W., Fu, H., and Ji, Q. (2013, January 2\u20135). Facial expression recognition using deep Boltzmann machine from thermal infrared images. Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), Geneva, Switzerland.","DOI":"10.1109\/ACII.2013.46"},
{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Basu, A., Routray, A., Shit, S., and Deb, A.K. (2015, January 17\u201320). Human emotion recognition from facial thermal image based on fused statistical feature and multi-class SVM. Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India.","DOI":"10.1109\/INDICON.2015.7443712"},
{"key":"ref_49","doi-asserted-by":"crossref","first-page":"250","DOI":"10.1016\/j.infrared.2017.01.002","article-title":"Human emotions detection based on a smart-thermal system of thermographic images","volume":"81","year":"2017","journal-title":"Infrared Phys. Technol."},
{"key":"ref_50","unstructured":"Yoshitomi, Y., Kim, S.I., Kawano, T., and Kilazoe, T. (2000, January 27\u201329). Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face. Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication, RO-MAN 2000, Osaka, Japan."},
{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Kitazoe, T., Kim, S.I., Yoshitomi, Y., and Ikeda, T. (2000, January 16\u201320). Recognition of emotional states using voice, face image and thermal image of face. Proceedings of the Sixth International Conference on Spoken Language Processing, Beijing, China.","DOI":"10.21437\/ICSLP.2000-162"},
{"key":"ref_52","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1109\/T-AFFC.2011.25","article-title":"A multimodal database for affect recognition and implicit tagging","volume":"3","author":"Soleymani","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_53","unstructured":"Caridakis, G., Wagner, J., Raouzaiou, A., Curto, Z., Andre, E., and Karpouzis, K. (2010). A multimodal corpus for gesture expressivity analysis. Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, LREC."},
{"key":"ref_54","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1007\/s12193-012-0112-x","article-title":"A cross-cultural, multimodal, affective corpus for gesture expressivity analysis","volume":"7","author":"Caridakis","year":"2013","journal-title":"J. Multimodal User Interfaces"},
{"key":"ref_55","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","article-title":"Deap: A database for emotion analysis; using physiological signals","volume":"3","author":"Koelstra","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Ringeval, F., Sonderegger, A., Sauer, J., and Lalanne, D. (2013, January 22\u201326). Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China.","DOI":"10.1109\/FG.2013.6553805"},
{"key":"ref_57","doi-asserted-by":"crossref","first-page":"567","DOI":"10.3758\/s13428-015-0601-4","article-title":"The EU-emotion stimulus set: A validation study","volume":"48","author":"Pigat","year":"2016","journal-title":"Behav. Res. Methods"},
{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Martin, O., Kotsia, I., Macq, B., and Pitas, I. (2006, January 3\u20137). The eNTERFACE\u201905 audio-visual emotion database. Proceedings of the 22nd International Conference on Data Engineering Workshops, Atlanta, GA, USA.","DOI":"10.1109\/ICDEW.2006.145"},
{"key":"ref_59","doi-asserted-by":"crossref","first-page":"936","DOI":"10.1109\/TMM.2008.927665","article-title":"Recognizing human emotional state from audiovisual signals","volume":"10","author":"Wang","year":"2008","journal-title":"IEEE Trans. Multimed."},
{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Valstar, M.F., Jiang, B., Mehu, M., Pantic, M., and Scherer, K. (2011, January 21\u201325). The first facial expression recognition and analysis challenge. Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA.","DOI":"10.1109\/FG.2011.5771374"},
{"key":"ref_61","unstructured":"B\u00e4nziger, T., and Scherer, K.R. (2010). Introducing the geneva multimodal emotion portrayal (gemep) corpus. Blueprint for Affective Computing: A sourcebook, Oxford University Press."},
{"key":"ref_62","unstructured":"Haq, S., and Jackson, P.J. (2010). Multimodal emotion recognition. Machine Audition: Principles, Algorithms and Systems, University of Surrey."},
{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Valstar, M., Schuller, B., Smith, K., Eyben, F., Jiang, B., Bilakhia, S., Schnieder, S., Cowie, R., and Pantic, M. (2013, January 21). AVEC 2013: The continuous audio\/visual emotion and depression recognition challenge. Proceedings of the 3rd ACM International Workshop on Audio\/Visual Emotion Challenge, Barcelona, Spain.","DOI":"10.1145\/2512530.2512533"},
{"key":"ref_64","unstructured":"Valstar, M., Schuller, B., Smith, K., Almaev, T., Eyben, F., Krajewski, J., Cowie, R., and Pantic, M. (2014, January 3\u20137). Avec 2014: 3d dimensional affect and depression recognition challenge. Proceedings of the 4th International Workshop on Audio\/Visual Emotion Challenge, Orlando, FL, USA."},
{"key":"ref_65","doi-asserted-by":"crossref","first-page":"300","DOI":"10.1109\/TAFFC.2016.2553038","article-title":"BAUM-1: A Spontaneous Audio-Visual Face Database of Affective and Mental States","volume":"8","author":"Zhalehpour","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_66","doi-asserted-by":"crossref","first-page":"7429","DOI":"10.1007\/s11042-014-1986-2","article-title":"BAUM-2: A multilingual audio-visual affective face database","volume":"74","author":"Erdem","year":"2015","journal-title":"Multimed. Tools Appl."},
{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Douglas-Cowie, E., Cowie, R., Sneddon, I., Cox, C., Lowry, O., Mcrorie, M., Martin, J.C., Devillers, L., Abrilian, S., and Batliner, A. (2007). The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data. Affective Computing and Intelligent Interaction, Springer.","DOI":"10.1007\/978-3-540-74889-2_43"},
{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Grimm, M., Kroschel, K., and Narayanan, S. (April, January 23). The Vera am Mittag German audio-visual emotional speech database. Proceedings of the 2008 IEEE International Conference on Multimedia and Expo, Hannover, Germany.","DOI":"10.1109\/ICME.2008.4607572"},
{"key":"ref_69","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1109\/T-AFFC.2011.20","article-title":"The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent","volume":"3","author":"McKeown","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},
{"key":"ref_70","doi-asserted-by":"crossref","unstructured":"McKeown, G., Valstar, M.F., Cowie, R., and Pantic, M. (2010, January 19\u201323). The SEMAINE corpus of emotionally coloured character interactions. Proceedings of the 2010 IEEE International Conference on Multimedia and Expo (ICME), Suntec City, Singapore.","DOI":"10.1109\/ICME.2010.5583006"},
{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Gunes, H., and Pantic, M. (2010). Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. Intelligent Virtual Agents, Springer.","DOI":"10.1007\/978-3-642-15892-6_39"},
{"key":"ref_72","doi-asserted-by":"crossref","first-page":"497","DOI":"10.1007\/s10579-015-9300-0","article-title":"The USC CreativeIT database of multimodal dyadic interactions: From speech and full body motion capture to continuous emotional annotations","volume":"50","author":"Metallinou","year":"2016","journal-title":"Lang. Resour. Eval."},
{"key":"ref_73","doi-asserted-by":"crossref","unstructured":"Chang, C.M., and Lee, C.C. (2017, January 5\u20139). Fusion of multiple emotion perspectives: Improving affect recognition through integrating cross-lingual emotion information. Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA.","DOI":"10.1109\/ICASSP.2017.7953272"},
{"key":"ref_74","doi-asserted-by":"crossref","first-page":"874","DOI":"10.1037\/a0020019","article-title":"Evidence and a computational explanation of cultural differences in facial expression recognition","volume":"10","author":"Dailey","year":"2010","journal-title":"Emotion"},
{"key":"ref_75","unstructured":"Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998, January 14\u201316). Coding facial expressions with gabor wavelets. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},
{"key":"ref_76","doi-asserted-by":"crossref","first-page":"1357","DOI":"10.1109\/34.817413","article-title":"Automatic classification of single facial images","volume":"21","author":"Lyons","year":"1999","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_77","unstructured":"Kanade, T., Cohn, J.F., and Tian, Y. (2000, January 28\u201330). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France."},
{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},
{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W.F., and Weiss, B. (2005, January 4\u20138). A database of german emotional speech. Proceedings of the Interspeech, Lisbon, Portugal.","DOI":"10.21437\/Interspeech.2005-446"},
{"key":"ref_80","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/MMUL.2012.26","article-title":"Collecting large, richly annotated facial-expression databases from movies","volume":"1","author":"Dhall","year":"2012","journal-title":"IEEE Multimed."},
{"key":"ref_81","doi-asserted-by":"crossref","unstructured":"Goodfellow, I.J., Erhan, D., Carrier, P.L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., and Lee, D.H. (2013, January 3\u20137). Challenges in representation learning: A report on three machine learning contests. Proceedings of the International Conference on Neural Information Processing, Daegu, Korea.","DOI":"10.1007\/978-3-642-42051-1_16"},
{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Ng, H.W., Nguyen, V.D., Vonikakis, V., and Winkler, S. (2015, January 3\u201313). Deep learning for emotion recognition on small datasets using transfer learning. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA.","DOI":"10.1145\/2818346.2830593"},
{"key":"ref_83","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1007\/s12193-016-0213-z","article-title":"Emotion recognition in the wild","volume":"Volume 10","author":"Dhall","year":"2016","journal-title":"Journal on Multimodal User Interfaces"},
{"key":"ref_84","doi-asserted-by":"crossref","first-page":"335","DOI":"10.1007\/s10579-008-9076-6","article-title":"IEMOCAP: Interactive emotional dyadic motion capture database","volume":"42","author":"Busso","year":"2008","journal-title":"Lang. Resour. Eval."},
{"key":"ref_85","doi-asserted-by":"crossref","unstructured":"Mehta, D., Siddiqui, M.F.H., and Javaid, A.Y. (2018). Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality. Sensors, 18.","DOI":"10.3390\/s18020416"},
{"key":"ref_86","doi-asserted-by":"crossref","unstructured":"Gao, Y., Hendricks, L.A., Kuchenbecker, K.J., and Darrell, T. (2016, January 16\u201321). Deep learning for tactile understanding from visual and haptic data. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487176"},
{"key":"ref_87","unstructured":"Pramerdorfer, C., and Kampel, M. (2016). Facial Expression Recognition using Convolutional Neural Networks: State of the Art. arXiv."},
{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Sun, B., Cao, S., Li, L., He, J., and Yu, L. (2016, January 12\u201316). Exploring multimodal visual features for continuous affect recognition. Proceedings of the 6th International Workshop on Audio\/Visual Emotion Challenge, Amsterdam, The Netherlands.","DOI":"10.1145\/2988257.2988270"},
{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Keren, G., Kirschstein, T., Marchi, E., Ringeval, F., and Schuller, B. (2017, January 10\u201314). End-to-end learning for dimensional emotion recognition from physiological signals. Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China.","DOI":"10.1109\/ICME.2017.8019533"},
{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Zhang, S., Zhang, S., Huang, T., and Gao, W. (2016, January 6\u20139). Multimodal Deep Convolutional Neural Network for Audio-Visual Emotion Recognition. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, New York, NY, USA.","DOI":"10.1145\/2911996.2912051"},
{"key":"ref_91","doi-asserted-by":"crossref","first-page":"597","DOI":"10.1007\/s12559-017-9472-6","article-title":"Ensemble of Deep Neural Networks with Probability-Based Fusion for Facial Expression Recognition","volume":"9","author":"Wen","year":"2017","journal-title":"Cogn. Comput."},
{"key":"ref_92","doi-asserted-by":"crossref","unstructured":"Cho, J., Pappagari, R., Kulkarni, P., Villalba, J., Carmiel, Y., and Dehak, N. (2018, January 2\u20136). Deep neural networks for emotion recognition combining audio and transcripts. Proceedings of the Interspeech 2018, Hyderabad, India.","DOI":"10.21437\/Interspeech.2018-2466"},
{"key":"ref_93","doi-asserted-by":"crossref","unstructured":"Gu, Y., Chen, S., and Marsic, I. (2018). Deep Multimodal Learning for Emotion Recognition in Spoken Language. arXiv.","DOI":"10.1109\/ICASSP.2018.8462440"},
{"key":"ref_94","doi-asserted-by":"crossref","first-page":"1282","DOI":"10.1016\/j.patcog.2013.10.010","article-title":"Emotion recognition from geometric facial features using self-organizing map","volume":"47","author":"Majumder","year":"2014","journal-title":"Pattern Recognit."},
{"key":"ref_95","doi-asserted-by":"crossref","first-page":"172","DOI":"10.1109\/TIP.2006.884954","article-title":"Facial expression recognition in image sequences using geometric deformation features and support vector machines","volume":"16","author":"Kotsia","year":"2007","journal-title":"IEEE Trans. Image Process."},
{"key":"ref_96","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1109\/34.908962","article-title":"Recognizing action units for facial expression analysis","volume":"23","author":"Tian","year":"2001","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_97","unstructured":"Lien, J.J., Kanade, T., Cohn, J.F., and Li, C.C. (1998, January 14\u201316). Automated facial expression recognition based on FACS action units. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},
{"key":"ref_98","unstructured":"Tian, Y.l., Kanade, T., and Cohn, J.F. (2002, January 21). Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity. Proceedings of the Fifth IEEE International Conference on Automatic Face Gesture Recognition, Washington, DC, USA."},
{"key":"ref_99","doi-asserted-by":"crossref","first-page":"131","DOI":"10.1016\/S0921-8890(99)00103-7","article-title":"Detection, tracking, and classification of action units in facial expression","volume":"31","author":"Lien","year":"2000","journal-title":"Robot. Auton. Syst."},
{"key":"ref_100","doi-asserted-by":"crossref","unstructured":"Siddiqui, M.F.H., Javaid, A.Y., and Carvalho, J.D. (2017, January 14\u201316). A Genetic Algorithm Based Approach for Data Fusion at Grammar Level. Proceedings of the 2017 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA.","DOI":"10.1109\/CSCI.2017.48"},
{"key":"ref_101","doi-asserted-by":"crossref","first-page":"345","DOI":"10.1007\/s00530-010-0182-0","article-title":"Multimodal fusion for multimedia analysis: A survey","volume":"16","author":"Atrey","year":"2010","journal-title":"Multimed. Syst."},
{"key":"ref_102","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1016\/j.neuroimage.2013.11.007","article-title":"Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals","volume":"102","author":"Verma","year":"2014","journal-title":"NeuroImage"},
{"key":"ref_103","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.cviu.2015.09.015","article-title":"Multi-modal emotion analysis from facial expressions and electroencephalogram","volume":"147","author":"Huang","year":"2016","journal-title":"Comput. Vis. Image Underst."},
{"key":"ref_104","doi-asserted-by":"crossref","first-page":"617","DOI":"10.1007\/s13246-017-0571-1","article-title":"Fusion of heart rate variability and pulse rate variability for emotion recognition using lagged poincare plots","volume":"40","author":"Goshvarpour","year":"2017","journal-title":"Australas Phys. Eng. Sci. Med."},
{"key":"ref_105","doi-asserted-by":"crossref","unstructured":"Gievska, S., Koroveshovski, K., and Tagasovska, N. (2015, January 21\u201324). Bimodal feature-based fusion for real-time emotion recognition in a mobile context. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344602"},
{"key":"ref_106","doi-asserted-by":"crossref","unstructured":"Yoon, S., Byun, S., and Jung, K. (2018). Multimodal Speech Emotion Recognition Using Audio and Text. arXiv.","DOI":"10.1109\/SLT.2018.8639583"},
{"key":"ref_107","doi-asserted-by":"crossref","unstructured":"Hazarika, D., Gorantla, S., Poria, S., and Zimmermann, R. (2018, January 10\u201312). Self-attentive feature-level fusion for multimodal emotion detection. Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA.","DOI":"10.1109\/MIPR.2018.00043"},
{"key":"ref_108","unstructured":"Lee, C.W., Song, K.Y., Jeong, J., and Choi, W.Y. (2018). Convolutional Attention Networks for Multimodal Emotion Recognition from Speech and Text Data. arXiv."},
{"key":"ref_109","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1016\/j.knosys.2018.07.041","article-title":"Multimodal sentiment analysis using hierarchical fusion with context modeling","volume":"161","author":"Majumder","year":"2018","journal-title":"Knowl.-Based Syst."},
{"key":"ref_110","doi-asserted-by":"crossref","unstructured":"Hazarika, D., Poria, S., Zadeh, A., Cambria, E., Morency, L.P., and Zimmermann, R. (2018, January 1\u20136). Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, LA, USA. (Long Papers).","DOI":"10.18653\/v1\/N18-1193"},
{"key":"ref_111","doi-asserted-by":"crossref","first-page":"556","DOI":"10.1016\/j.proeng.2015.08.716","article-title":"Continuous monitoring of emotions by a multimodal cooperative sensor system","volume":"120","author":"Mencattini","year":"2015","journal-title":"Procedia Eng."},
{"key":"ref_112","doi-asserted-by":"crossref","unstructured":"Shah, M., Chakrabarti, C., and Spanias, A. (2014, January 1\u20135). A multi-modal approach to emotion recognition using undirected topic models. 
Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, Australia.","DOI":"10.1109\/ISCAS.2014.6865245"},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Liang, P.P., Zadeh, A., and Morency, L.P. (2018, January 16\u201320). Multimodal local-global ranking fusion for emotion recognition. Proceedings of the 2018 on International Conference on Multimodal Interaction, Boulder, CO, USA.","DOI":"10.1145\/3242969.3243019"},{"key":"ref_114","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1016\/j.cmpb.2016.12.005","article-title":"Recognition of emotions using multimodal physiological signals and an ensemble deep learning model","volume":"140","author":"Yin","year":"2017","journal-title":"Comput. Methods Prog. Biomed."},{"key":"ref_115","unstructured":"Tripathi, S., and Beigi, H. (2018). Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning. arXiv."},{"key":"ref_116","unstructured":"Dataset 01: NIST Thermal\/Visible Face Database 2012."},{"key":"ref_117","unstructured":"Nguyen, H., Kotani, K., Chen, F., and Le, B. (November, January 28). A thermal facial emotion database and its analysis. Proceedings of the Pacific-Rim Symposium on Image and Video Technology, Guanajuato, Mexico."},{"key":"ref_118","doi-asserted-by":"crossref","unstructured":"Livingstone, S.R., and Russo, F.A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13.","DOI":"10.1371\/journal.pone.0196391"},{"key":"ref_119","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009, January 22\u201324). ImageNet: A Large-Scale Hierarchical Image Database. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_120","doi-asserted-by":"crossref","first-page":"2437","DOI":"10.1016\/j.patcog.2004.12.013","article-title":"A new method of feature fusion and its application in image recognition","volume":"38","author":"Sun","year":"2005","journal-title":"Pattern Recognit."},{"key":"ref_121","doi-asserted-by":"crossref","first-page":"23","DOI":"10.1016\/j.eswa.2015.10.047","article-title":"Fully automatic face normalization and single sample face recognition in unconstrained environments","volume":"47","author":"Haghighat","year":"2016","journal-title":"Expert Syst. Appl."},{"key":"ref_122","first-page":"3943859","article-title":"A Multiple Classifier Fusion Algorithm Using Weighted Decision Templates","volume":"2016","author":"Mi","year":"2016","journal-title":"Sci. Prog."}],"container-title":["Multimodal Technologies and Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2414-4088\/4\/3\/46\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T09:57:19Z","timestamp":1760176639000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2414-4088\/4\/3\/46"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,6]]},"references-count":122,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2020,9]]}},"alternative-id":["mti4030046"],"URL":"https:\/\/doi.org\/10.3390\/mti4030046","relation":{},"ISSN":["2414-4088"],"issn-type":[{"value":"2414-4088","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,8,6]]}}}