{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,11]],"date-time":"2026-04-11T09:26:42Z","timestamp":1775899602802,"version":"3.50.1"},"reference-count":232,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2022,6,17]],"date-time":"2022-06-17T00:00:00Z","timestamp":1655424000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MTI"],"abstract":"<jats:p>Multimodal human\u2013computer interaction (HCI) systems pledge a more human\u2013human-like interaction between machines and humans. Their prowess in emanating an unambiguous information exchange between the two makes these systems more reliable, efficient, less error prone, and capable of solving complex tasks. Emotion recognition is a realm of HCI that follows multimodality to achieve accurate and natural results. The prodigious use of affective identification in e-learning, marketing, security, health sciences, etc., has increased demand for high-precision emotion recognition systems. Machine learning (ML) is getting its feet wet to ameliorate the process by tweaking the architectures or wielding high-quality databases (DB). This paper presents a survey of such DBs that are being used to develop multimodal emotion recognition (MER) systems. The survey illustrates the DBs that contain multi-channel data, such as facial expressions, speech, physiological signals, body movements, gestures, and lexical features. Few unimodal DBs are also discussed that work in conjunction with other DBs for affect recognition. Further, VIRI, a new DB of visible and infrared (IR) images of subjects expressing five emotions in an uncontrolled, real-world environment, is presented. A rationale for the superiority of the presented corpus over the existing ones is instituted.<\/jats:p>","DOI":"10.3390\/mti6060047","type":"journal-article","created":{"date-parts":[[2022,6,17]],"date-time":"2022-06-17T11:45:44Z","timestamp":1655466344000},"page":"47","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":33,"title":["A Survey on Databases for Multimodal Emotion Recognition and an Introduction to the VIRI (Visible and InfraRed Image) Database"],"prefix":"10.3390","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8515-062X","authenticated-orcid":false,"given":"Mohammad Faridul Haque","family":"Siddiqui","sequence":"first","affiliation":[{"name":"Department of Computer Science, West Texas A&M University, Canyon, TX 79016, USA"}]},{"given":"Parashar","family":"Dhakal","sequence":"additional","affiliation":[{"name":"Manufacturing Department, Grote Industries, Madison, IN 47250, USA"}]},{"given":"Xiaoli","family":"Yang","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Fairfield University, Fairfield, CT 06824, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4719-4941","authenticated-orcid":false,"given":"Ahmad Y.","family":"Javaid","sequence":"additional","affiliation":[{"name":"Electrical Engineering and Computer Science Department, The University of Toledo, Toledo, OH 43606, USA"}]}],"member":"1968","published-online":{"date-parts":[[2022,6,17]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"415","DOI":"10.1080\/10447318.2016.1159799","article-title":"Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning","volume":"32","author":"Bahreini","year":"2016","journal-title":"Int. J. Hum.\u2013Comput. Interact."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Sun, B., Li, L., Zhou, G., Wu, X., He, J., Yu, L., Li, D., and Wei, Q. (2015, January 9\u201313). Combining multimodal features within a fusion network for emotion recognition in the wild. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA.","DOI":"10.1145\/2818346.2830586"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"541","DOI":"10.1007\/978-3-642-34447-3_48","article-title":"Multi-Modal Fusion Emotion Recognition Based on HMM and ANN","volume":"332","author":"Xu","year":"2012","journal-title":"Contemp. Res.-Bus. Technol. Strategy"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"15549","DOI":"10.3390\/s131115549","article-title":"A multimodal emotion detection system during human\u2013robot interaction","volume":"13","author":"Malfaz","year":"2013","journal-title":"Sensors"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1007\/s12193-015-0195-2","article-title":"Emonets: Multimodal deep learning approaches for emotion recognition in video","volume":"10","author":"Kahou","year":"2016","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Sun, B., Li, L., Zuo, T., Chen, Y., Zhou, G., and Wu, X. (2014, January 12\u201316). Combining multimodal features with hierarchical classifier fusion for emotion recognition in the wild. Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey.","DOI":"10.1145\/2663204.2666272"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Chen, J., Chen, Z., Chi, Z., and Fu, H. (2014, January 12\u201316). Emotion recognition in the wild with feature fusion and multiple kernel learning. Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey.","DOI":"10.1145\/2663204.2666277"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Tzirakis, P., Trigeorgis, G., Nicolaou, M.A., Schuller, B., and Zafeiriou, S. (2017). End-to-End Multimodal Emotion Recognition using Deep Neural Networks. arXiv.","DOI":"10.1109\/ICASSP.2018.8462677"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"125","DOI":"10.1007\/s12193-015-0203-6","article-title":"Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild","volume":"10","author":"Sun","year":"2016","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Torres, J.M.M., and Stepanov, E.A. (2017, January 23\u201326). Enhanced face\/audio emotion recognition: Video and instance level classification using ConvNets and restricted Boltzmann Machines. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany.","DOI":"10.1145\/3106426.3109423"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"53","DOI":"10.5772\/54002","article-title":"Towards efficient multi-modal emotion recognition","volume":"10","year":"2013","journal-title":"Int. J. Adv. Robot. Syst."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Noroozi, F., Marjanovic, M., Njegus, A., Escalera, S., and Anbarjafari, G. (2016, January 4\u20138). Fusion of classifier predictions for audio-visual emotion recognition. Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico.","DOI":"10.1109\/ICPR.2016.7899608"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Kim, Y., Lee, H., and Provost, E.M. (2013, January 26\u201331). Deep learning for robust feature generation in audiovisual emotion recognition. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6638346"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"325","DOI":"10.1007\/s12193-015-0207-2","article-title":"Audio-visual emotion recognition using multi-directional regression and Ridgelet transform","volume":"10","author":"Hossain","year":"2016","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"753","DOI":"10.1007\/s11036-016-0685-9","article-title":"Audio-visual emotion recognition using big data towards 5G","volume":"21","author":"Hossain","year":"2016","journal-title":"Mob. Netw. Appl."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1109\/TAFFC.2016.2593719","article-title":"Facial expression recognition in video with multiple feature fusion","volume":"9","author":"Chen","year":"2016","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1319","DOI":"10.1109\/TMM.2016.2557721","article-title":"Sparse Kernel Reduced-Rank Regression for Bimodal Emotion Recognition From Facial Expression and Speech","volume":"18","author":"Yan","year":"2016","journal-title":"IEEE Trans. Multimed."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zhang, S., Zhang, S., Huang, T., and Gao, W. (2016, January 6\u20139). Multimodal Deep Convolutional Neural Network for Audio-Visual Emotion Recognition. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, New York, NY, USA.","DOI":"10.1145\/2911996.2912051"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Kim, Y. (2015, January 21\u201324). Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344653"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Pei, E., Yang, L., Jiang, D., and Sahli, H. (2015, January 21\u201324). Multimodal dimensional affect recognition using deep bidirectional long short-term memory recurrent neural networks. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344573"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Nguyen, D., Nguyen, K., Sridharan, S., Ghasemi, A., Dean, D., and Fookes, C. (2017, January 24\u201331). Deep spatio-temporal features for multimodal emotion recognition. Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA.","DOI":"10.1109\/WACV.2017.140"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"451","DOI":"10.1007\/s00530-017-0547-8","article-title":"Multimodal shared features learning for emotion recognition by enhanced sparse local discriminative canonical correlation analysis","volume":"25","author":"Fu","year":"2017","journal-title":"Multimed. Syst."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"3030","DOI":"10.1109\/TCSVT.2017.2719043","article-title":"Learning Affective Features with a Hybrid Deep Model for Audio-Visual Emotion Recognition","volume":"28","author":"Zhang","year":"2017","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_24","unstructured":"Cid, F., Manso, L.J., and N\u00fanez, P. (2015, January 1). A Novel Multimodal Emotion Recognition Approach for Affective Human Robot Interaction. Proceedings of the Workshop on Multimodal and Semantics for Robotics Systems, Hamburg, Germany."},{"key":"ref_25","first-page":"27","article-title":"Bimodal Human Emotion Classification in the Speaker-dependent Scenario","volume":"52","author":"Haq","year":"2015","journal-title":"Pak. Acad. Sci."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Gideon, J., Zhang, B., Aldeneh, Z., Kim, Y., Khorram, S., Le, D., and Provost, E.M. (2016, January 12\u201316). Wild wild emotion: A multimodal ensemble approach. Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan.","DOI":"10.1145\/2993148.2997626"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1109\/TAFFC.2017.2713783","article-title":"Audio-visual emotion recognition in video clips","volume":"10","author":"Noroozi","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1109\/T-AFFC.2011.12","article-title":"Exploring fusion methods for multimodal emotion recognition with missing data","volume":"2","author":"Wagner","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Ghayoumi, M., and Bansal, A.K. (2016, January 6\u20137). Multimodal architecture for emotion in robots using deep learning. Proceedings of the Future Technologies Conference (FTC), San Francisco, CA, USA.","DOI":"10.1109\/FTC.2016.7821710"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1007\/s12193-009-0025-5","article-title":"Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis","volume":"3","author":"Kessous","year":"2010","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_31","unstructured":"Yoshitomi, Y., Kim, S.I., Kawano, T., and Kilazoe, T. (2000, January 27\u201329). Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face. Proceedings of the Proceedings 9th IEEE International Workshop on Robot and Human Interactive Communication. IEEE RO-MAN 2000 (Cat. No.00TH8499), Osaka, Japan."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Kitazoe, T., Kim, S.I., Yoshitomi, Y., and Ikeda, T. (2000, January 16\u201320). Recognition of emotional states using voice, face image and thermal image of face. Proceedings of the Sixth International Conference on Spoken Language Processing, Beijing, China.","DOI":"10.21437\/ICSLP.2000-162"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Shah, M., Chakrabarti, C., and Spanias, A. (2014, January 1\u20135). A multi-modal approach to emotion recognition using undirected topic models. Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, Australia.","DOI":"10.1109\/ISCAS.2014.6865245"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1016\/j.neuroimage.2013.11.007","article-title":"Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals","volume":"102","author":"Verma","year":"2014","journal-title":"NeuroImage"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Keren, G., Kirschstein, T., Marchi, E., Ringeval, F., and Schuller, B. (2017, January 10\u201314). End-to-end learning for dimensional emotion recognition from physiological signals. Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China.","DOI":"10.1109\/ICME.2017.8019533"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1016\/j.cmpb.2016.12.005","article-title":"Recognition of emotions using multimodal physiological signals and an ensemble deep learning model","volume":"140","author":"Yin","year":"2017","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"408","DOI":"10.1016\/j.measurement.2017.06.006","article-title":"Wearable Biosensor Network Enabled Multimodal Daily-life Emotion Recognition Employing Reputation-driven Imbalanced Fuzzy Classification","volume":"109","author":"Dai","year":"2017","journal-title":"Measurement"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Kortelainen, J., Tiinanen, S., Huang, X., Li, X., Laukka, S., Pietik\u00e4inen, M., and Sepp\u00e4nen, T. (September, January 28). Multimodal emotion recognition by combining physiological signals and facial expressions: A preliminary study. Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA.","DOI":"10.1109\/EMBC.2012.6347175"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"120","DOI":"10.1037\/a0013386","article-title":"Darwin and emotion expression","volume":"64","author":"Hess","year":"2009","journal-title":"Am. Psychol."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1177\/1754073913494899","article-title":"Bodily influences on emotional feelings: Accumulating evidence and extensions of William James\u2019s theory of emotion","volume":"6","author":"Laird","year":"2014","journal-title":"Emot. Rev."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1548","DOI":"10.1109\/TPAMI.2016.2515606","article-title":"Survey on rgb, 3d, thermal, and multimodal approaches for facial expression recognition: History, trends, and affect-related applications","volume":"38","author":"Corneanu","year":"2016","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"268","DOI":"10.1037\/0033-2909.115.2.268","article-title":"Strong evidence for universals in facial expressions: A reply to Russell\u2019s mistaken critique","volume":"115","author":"Ekman","year":"1994","journal-title":"Psychol. Bull."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Ekman, P., Friesen, W.V., and Hager, J. (1978). Investigator\u2019s Guide to the Facial Action Coding System, Consulting Psychologists Press.","DOI":"10.1037\/t27734-000"},{"key":"ref_44","first-page":"3474","article-title":"Recognition of facial expression from optical flow","volume":"74","author":"Mase","year":"1991","journal-title":"IEICE Trans. Inf. Syst."},{"key":"ref_45","unstructured":"Lanitis, A., Taylor, C.J., and Cootes, T.F. (1995, January 20\u201323). A unified approach to coding and interpreting face images. Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA."},{"key":"ref_46","unstructured":"Black, M.J., and Yacoob, Y. (1995, January 20\u201323). Tracking and recognizing rigid and non-rigid facial motions using local parametric models of image motion. Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1121","DOI":"10.1109\/72.536309","article-title":"Human expression recognition from motion using a radial basis function network architecture","volume":"7","author":"Rosenblum","year":"1996","journal-title":"IEEE Trans. Neural Netw."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"757","DOI":"10.1109\/34.598232","article-title":"Coding, analysis, interpretation, and recognition of facial expressions","volume":"19","author":"Essa","year":"1997","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"636","DOI":"10.1109\/34.506414","article-title":"Recognizing human facial expressions from long image sequences using optical flow","volume":"18","author":"Yacoob","year":"1996","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_50","first-page":"39","article-title":"International affective picture system (IAPS): Technical manual and affective ratings","volume":"1","author":"Lang","year":"1997","journal-title":"NIMH Cent. Study Emot. Atten."},{"key":"ref_51","first-page":"57","article-title":"Using deep convolutional neural network for emotion detection on a physiological signals dataset (AMIGOS)","volume":"7","author":"Abdulhay","year":"2018","journal-title":"IEEE Access"},{"key":"ref_52","unstructured":"Sourina, O., and Liu, Y. (2011, January 26\u201329). A fractal-based algorithm of emotion recognition from EEG using arousal-valence model. Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, Rome, Italy."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Liu, Y., Sourina, O., and Nguyen, M.K. (2011). Real-time EEG-Based emotion recognition and its applications. Transactions on Computational Science XII, Springer.","DOI":"10.1007\/978-3-642-22336-5_13"},{"key":"ref_54","first-page":"355","article-title":"Emotion recognition based on EEG using LSTM recurrent neural network","volume":"8","author":"Alhagry","year":"2017","journal-title":"Emotion"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1016\/j.compind.2017.04.005","article-title":"Respiration-based emotion recognition with deep learning","volume":"92","author":"Zhang","year":"2017","journal-title":"Comput. Ind."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/TIFS.2005.863510","article-title":"Automatic facial expression recognition using facial animation parameters and multistream HMMs","volume":"1","author":"Aleksic","year":"2006","journal-title":"IEEE Trans. Inf. Forensics Secur."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1109\/34.908962","article-title":"Recognizing action units for facial expression analysis","volume":"23","author":"Tian","year":"2001","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"555","DOI":"10.1016\/S0893-6080(03)00115-1","article-title":"Subject independent facial expression recognition with robust face detection using a convolutional neural network","volume":"16","author":"Matsugu","year":"2003","journal-title":"Neural Netw."},{"key":"ref_59","unstructured":"Yin, L., Wei, X., Sun, Y., Wang, J., and Rosato, M.J. (2006, January 10\u201312). A 3D facial expression database for facial behavior research. Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, UK. Available online: https:\/\/dl.acm.org\/doi\/abs\/10.5555\/1126250.1126340."},{"key":"ref_60","unstructured":"Mandal, T., Majumdar, A., and Wu, Q.J. (2007, January 22\u201324). Face recognition by curvelet based feature extraction. Proceedings of the International Conference Image Analysis and Recognition, Montreal, QC, Canada."},{"key":"ref_61","first-page":"30","article-title":"Automatic facial expression recognition using 3D faces","volume":"3","author":"Li","year":"2011","journal-title":"Int. J. Eng. Res. Innov."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1016\/j.patrec.2019.01.008","article-title":"Extended deep neural network for facial emotion recognition","volume":"120","author":"Jain","year":"2019","journal-title":"Pattern Recognit. Lett."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/j.ins.2017.10.044","article-title":"Softmax regression based deep sparse autoencoder network for facial emotion recognition in human-robot interaction","volume":"428","author":"Chen","year":"2018","journal-title":"Inf. Sci."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Bazrafkan, S., Nedelcu, T., Filipczuk, P., and Corcoran, P. (2017, January 8\u201311). Deep learning for facial expression recognition: A step closer to a smartphone that knows your moods. Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA.","DOI":"10.1109\/ICCE.2017.7889290"},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"2528","DOI":"10.1109\/TMM.2016.2598092","article-title":"A deep neural network-driven feature learning method for multi-view facial expression recognition","volume":"18","author":"Zhang","year":"2016","journal-title":"IEEE Trans. Multimed."},{"key":"ref_66","unstructured":"Sebe, N., Cohen, I., Gevers, T., and Huang, T.S. (2005, January 16\u201320). Multimodal approaches for emotion recognition: A survey. Proceedings of the SPIE Internet Imaging VI, San Jose, CA, USA."},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"386","DOI":"10.1109\/T-AFFC.2013.26","article-title":"Iterative feature normalization scheme for automatic emotion detection from speech","volume":"4","author":"Busso","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"2203","DOI":"10.1109\/TMM.2014.2360798","article-title":"Learning salient features for speech emotion recognition using convolutional neural networks","volume":"16","author":"Mao","year":"2014","journal-title":"IEEE Trans. Multimed."},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"1056","DOI":"10.1109\/TASLP.2014.2319157","article-title":"Multiview supervised dictionary learning in speech emotion recognition","volume":"22","author":"Gangeh","year":"2014","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1109\/TAFFC.2015.2392101","article-title":"Speech emotion recognition using Fourier parameters","volume":"6","author":"Wang","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Fayek, H.M., Lech, M., and Cavedon, L. (2015, January 14\u201316). Towards real-time speech emotion recognition using deep neural networks. Proceedings of the 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS), Cairns, Australia.","DOI":"10.1109\/ICSPCS.2015.7391796"},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Satt, A., Rozenberg, S., and Hoory, R. (2017, January 20\u201324). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. Proceedings of the INTERSPEECH, Stockholm, Sweden.","DOI":"10.21437\/Interspeech.2017-200"},{"key":"ref_73","doi-asserted-by":"crossref","unstructured":"Trigeorgis, G., Ringeval, F., Brueckner, R., Marchi, E., Nicolaou, M.A., Schuller, B., and Zafeiriou, S. (2016, January 20\u201325). Adieu features? end-to-end speech emotion recognition using a deep convolutional recurrent network. Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China.","DOI":"10.1109\/ICASSP.2016.7472669"},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Rozgi\u0107, V., Vitaladevuni, S.N., and Prasad, R. (2013, January 26\u201331). Robust EEG emotion classification using segment level decision fusion. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6637858"},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"238","DOI":"10.1109\/T-AFFC.2013.3","article-title":"Using a smartphone to measure heart rate changes during relived happiness and anger","volume":"4","author":"Lakens","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Hernandez, J., McDuff, D., Fletcher, R., and Picard, R.W. (2013, January 18\u201322). Inside-out: Reflecting on your inner state. Proceedings of the 2013 IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), San Diego, CA, USA.","DOI":"10.1109\/PerComW.2013.6529507"},{"key":"ref_77","unstructured":"Fridlund, A., and Izard, C.E. (1983). Electromyographic studies of facial expressions of emotions and patterns of emotions. Social Psychophysiology: A Sourcebook, Guilford Press."},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Lin, W., Li, C., and Sun, S. (2017, January 13\u201315). Deep convolutional neural network for emotion recognition using EEG and peripheral physiological signal. Proceedings of the International Conference on Image and Graphics, Shanghai, China.","DOI":"10.1007\/978-3-319-71589-6_33"},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Paleari, M., Chellali, R., and Huet, B. (2010, January 28\u201330). Features for multimodal emotion recognition: An extensive study. Proceedings of the 2010 IEEE Conference on Cybernetics and Intelligent Systems, Singapore.","DOI":"10.1109\/ICCIS.2010.5518574"},{"key":"ref_80","unstructured":"Viola, P., and Jones, M. (2001, January 8\u201314). Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA."},{"key":"ref_81","unstructured":"De Silva, L.C., and Ng, P.C. (2000, January 26\u201330). Bimodal emotion recognition. Proceedings of the Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), Grenoble, France."},{"key":"ref_82","unstructured":"Chen, L.S., and Huang, T.S. (August, January 30). Emotional expressions in audiovisual human computer interaction. Proceedings of the 2000 IEEE International Conference on Multimedia and Expo, ICME2000, Latest Advances in the Fast Changing World of Multimedia (Cat. No. 00TH8532), New York, NY, USA."},{"key":"ref_83","doi-asserted-by":"crossref","unstructured":"Caridakis, G., Castellano, G., Kessous, L., Raouzaiou, A., Malatesta, L., Asteriadis, S., and Karpouzis, K. (2007). Multimodal emotion recognition from expressive faces, body gestures and speech. IFIP International Conference on Artificial Intelligence Applications and Innovations, Springer.","DOI":"10.1007\/978-0-387-74161-1_41"},{"key":"ref_84","doi-asserted-by":"crossref","unstructured":"Tang, K., Tie, Y., Yang, T., and Guan, L. (2014, January 4\u20137). Multimodal emotion recognition (MER) system. Proceedings of the 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), Toronto, ON, Canada.","DOI":"10.1109\/CCECE.2014.6900993"},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1109\/T-AFFC.2011.25","article-title":"A multimodal database for affect recognition and implicit tagging","volume":"3","author":"Soleymani","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_86","doi-asserted-by":"crossref","unstructured":"Ranganathan, H., Chakraborty, S., and Panchanathan, S. (2016, January 7\u201310). Multimodal emotion recognition using deep learning architectures. Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA.","DOI":"10.1109\/WACV.2016.7477679"},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Lee, H., Grosse, R., Ranganath, R., and Ng, A.Y. (2009, January 14\u201318). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada.","DOI":"10.1145\/1553374.1553453"},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Poria, S., Chaturvedi, I., Cambria, E., and Hussain, A. (2016, January 12\u201315). Convolutional MKL based multimodal emotion recognition and sentiment analysis. Proceedings of the 2016 IEEE 16th International Conference on Data Mining (ICDM), Barcelona, Spain.","DOI":"10.1109\/ICDM.2016.0055"},{"key":"ref_89","unstructured":"(2022, June 14). Dataset 02: IRIS Thermal\/Visible Face Database 2016. Available online: http:\/\/vcipl-okstate.org\/pbvs\/bench\/."},{"key":"ref_90","unstructured":"(2022, June 14). Dataset 01: NIST Thermal\/Visible Face Database 2012, Available online: https:\/\/www.google.com.hk\/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjRhJn--LP4AhVFCd4KHYiOAhgQFnoECAYQAQ&url=https%3A%2F%2Fwww.nist.gov%2Fdocument%2Fklare-nistdatasets2015pdf&usg=AOvVaw0O-vRUczPwxCTSp2_SWWe7."},{"key":"ref_91","doi-asserted-by":"crossref","first-page":"682","DOI":"10.1109\/TMM.2010.2060716","article-title":"A natural visible and infrared facial expression database for expression recognition and emotion inference","volume":"12","author":"Wang","year":"2010","journal-title":"IEEE Trans. Multimed."},{"key":"ref_92","unstructured":"Nguyen, H., Kotani, K., Chen, F., and Le, B. (November, January 28). A thermal facial emotion database and its analysis. Proceedings of the Pacific-Rim Symposium on Image and Video Technology, Guanajuato, M\u00e9xico."},{"key":"ref_93","doi-asserted-by":"crossref","first-page":"335","DOI":"10.1007\/s10579-008-9076-6","article-title":"IEMOCAP: Interactive emotional dyadic motion capture database","volume":"42","author":"Busso","year":"2008","journal-title":"Lang. Resour. Eval."},{"key":"ref_94","doi-asserted-by":"crossref","first-page":"479","DOI":"10.1109\/TAFFC.2018.2884461","article-title":"Amigos: A dataset for affect, personality and mood research on individuals and groups","volume":"12","author":"Correa","year":"2018","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_95","unstructured":"(2020, May 18). EMOTIV | Brain Data Measuring Hardware and Software Solutions. Available online: https:\/\/www.emotiv.com\/."},{"key":"ref_96","unstructured":"(2020, May 18). SHIMMER | Wearable Sensor Technology | Wireless IMU | ECG | EMG | GSR. Available online: http:\/\/www.shimmersensing.com\/."},{"key":"ref_97","doi-asserted-by":"crossref","first-page":"147","DOI":"10.1109\/TAFFC.2016.2625250","article-title":"ASCERTAIN: Emotion and personality recognition using commercial sensors","volume":"9","author":"Subramanian","year":"2016","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_98","unstructured":"Caridakis, G., Wagner, J., Raouzaiou, A., Curto, Z., Andre, E., and Karpouzis, K. (2010, January 18). A multimodal corpus for gesture expressivity analysis. Proceedings of the Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, LREC, Valetta, Malta."},{"key":"ref_99","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1007\/s12193-012-0112-x","article-title":"A cross-cultural, multimodal, affective corpus for gesture expressivity analysis","volume":"7","author":"Caridakis","year":"2013","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_100","doi-asserted-by":"crossref","unstructured":"Markova, V., Ganchev, T., and Kalinkov, K. (2019, January 8\u20139). CLAS: A Database for Cognitive Load, Affect and Stress Recognition. Proceedings of the 2019 International Conference on Biomedical Innovations and Applications (BIA), Varna, Bulgaria.","DOI":"10.1109\/BIA48344.2019.8967457"},{"key":"ref_101","unstructured":"(2020, May 19). SHIMMER3 ECG Unit| Wearable ECG Sensor | Wireless ECG Sensor | Electrocardiogram. Available online: https:\/\/www.shimmersensing.com\/products\/shimmer3-ecg-sensor."},{"key":"ref_102","unstructured":"(2020, May 19). Shimmer3 GSR+ Sensor. Available online: http:\/\/www.shimmersensing.com\/shimmer3-gsr-sensor\/."},{"key":"ref_103","unstructured":"Zadeh, A.B., Liang, P.P., Poria, S., Cambria, E., and Morency, L.P. (2018, January 15\u201320). Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia."},{"key":"ref_104","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","article-title":"Deap: A database for emotion analysis; using physiological signals","volume":"3","author":"Koelstra","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_105","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/TAFFC.2015.2392932","article-title":"DECAF: MEG-based multimodal database for decoding affective physiological responses","volume":"6","author":"Abadi","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_106","doi-asserted-by":"crossref","first-page":"567","DOI":"10.3758\/s13428-015-0601-4","article-title":"The EU-emotion stimulus set: A validation study","volume":"48","author":"Pigat","year":"2016","journal-title":"Behav. Res. Methods"},{"key":"ref_107","doi-asserted-by":"crossref","first-page":"8669","DOI":"10.1007\/s00521-020-05616-w","article-title":"HEU Emotion: A large-scale database for multimodal emotion recognition in the wild","volume":"33","author":"Chen","year":"2021","journal-title":"Neural Comput. Appl."},{"key":"ref_108","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.cviu.2015.09.015","article-title":"Multi-modal emotion analysis from facial expressions and electroencephalogram","volume":"147","author":"Huang","year":"2016","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_109","doi-asserted-by":"crossref","unstructured":"Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., and Mihalcea, R. (2018). Meld: A multimodal multi-party dataset for emotion recognition in conversations. arXiv.","DOI":"10.18653\/v1\/P19-1050"},{"key":"ref_110","unstructured":"Chen, S.Y., Hsu, C.C., Kuo, C.C., and Ku, L.W. (2018). Emotionlines: An emotion corpus of multi-party conversations. arXiv."},{"key":"ref_111","doi-asserted-by":"crossref","unstructured":"Tu, G., Wen, J., Liu, C., Jiang, D., and Cambria, E. (2022). Context-and sentiment-aware networks for emotion recognition in conversation. IEEE Trans. Artif. Intell.","DOI":"10.1109\/TAI.2022.3149234"},{"key":"ref_112","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Girard, J.M., Wu, Y., Zhang, X., Liu, P., Ciftci, U., Canavan, S., Reale, M., Horowitz, A., and Yang, H. (2016, January 27\u201330). Multimodal spontaneous emotion corpus for human behavior analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.374"},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Yang, H., Ciftci, U., and Yin, L. (2018, January 18\u201323). Facial expression recognition by de-expression residue learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00231"},{"key":"ref_114","doi-asserted-by":"crossref","unstructured":"Jannat, R., Tynes, I., Lime, L.L., Adorno, J., and Canavan, S. (2018, January 8\u201312). Ubiquitous emotion recognition using audio and video data. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, Singapore.","DOI":"10.1145\/3267305.3267689"},{"key":"ref_115","doi-asserted-by":"crossref","first-page":"12177","DOI":"10.1109\/ACCESS.2019.2891579","article-title":"MPED: A multi-modal physiological emotion database for discrete emotion recognition","volume":"7","author":"Song","year":"2019","journal-title":"IEEE Access"},{"key":"ref_116","doi-asserted-by":"crossref","unstructured":"Song, T., Zheng, W., Liu, S., Zong, Y., Cui, Z., and Li, Y. (2021). Graph-Embedded Convolutional Neural Network for Image-based EEG Emotion Recognition. IEEE Trans. Emerg. Top. Comput.","DOI":"10.1109\/TETC.2021.3087174"},{"key":"ref_117","doi-asserted-by":"crossref","unstructured":"Castro, S., Hazarika, D., P\u00e9rez-Rosas, V., Zimmermann, R., Mihalcea, R., and Poria, S. (2019). Towards multimodal sarcasm detection (an _obviously_ perfect paper). arXiv.","DOI":"10.18653\/v1\/P19-1455"},{"key":"ref_118","unstructured":"(2020, May 17). Sarcasm | Psychology Today. Available online: https:\/\/www.psychologytoday.com\/us\/blog\/stronger-the-broken-places\/201907\/sarcasm."},{"key":"ref_119","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tiwari, P., Rong, L., Chen, R., AlNajem, N.A., and Hossain, M.S. (2021). Affective Interaction: Attentive Representation Learning for Multi-Modal Sentiment Classification. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), ACM.","DOI":"10.1145\/3527175"},{"key":"ref_120","doi-asserted-by":"crossref","unstructured":"Pramanick, S., Roy, A., and Patel, V.M. (2022, January 5\u20137). Multimodal Learning using Optimal Transport for Sarcasm and Humor Detection. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikola, HI, USA.","DOI":"10.1109\/WACV51458.2022.00062"},{"key":"ref_121","doi-asserted-by":"crossref","unstructured":"Chou, H.C., Lin, W.C., Chang, L.C., Li, C.C., Ma, H.P., and Lee, C.C. (2017, January 23\u201326). NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus. Proceedings of the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, USA.","DOI":"10.1109\/ACII.2017.8273615"},{"key":"ref_122","doi-asserted-by":"crossref","first-page":"1675","DOI":"10.1109\/TASLP.2021.3076364","article-title":"Speech emotion recognition considering nonverbal vocalization in affective conversations","volume":"29","author":"Hsu","year":"2021","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_123","doi-asserted-by":"crossref","unstructured":"Perepelkina, O., Kazimirova, E., and Konstantinova, M. (2018, January 18\u201322). RAMAS: Russian multimodal corpus of dyadic interaction for affective computing. Proceedings of the International Conference on Speech and Computer, Leipzig, Germany.","DOI":"10.7287\/peerj.preprints.26688"},{"key":"ref_124","unstructured":"Sloetjes, H., and Wittenburg, P. (2008, January 28\u201330). Annotation by category-ELAN and ISO DCR. Proceedings of the 6th international Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco."},{"key":"ref_125","first-page":"80","article-title":"A Bimodal Approach for Speech Emotion Recognition using Audio and Text","volume":"11","author":"Verkholyak","year":"2021","journal-title":"J. Internet Serv. Inf. Secur."},{"key":"ref_126","doi-asserted-by":"crossref","unstructured":"Ringeval, F., Sonderegger, A., Sauer, J., and Lalanne, D. (2013, January 22\u201326). Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China.","DOI":"10.1109\/FG.2013.6553805"},{"key":"ref_127","doi-asserted-by":"crossref","first-page":"556","DOI":"10.1016\/j.proeng.2015.08.716","article-title":"Continuous monitoring of emotions by a multimodal cooperative sensor system","volume":"120","author":"Mencattini","year":"2015","journal-title":"Procedia Eng."},{"key":"ref_128","unstructured":"Ganchev, T., Markova, V., Lefterov, I., and Kalinin, Y. (October, January 30). Overall Design of the SLADE Data Acquisition System. Proceedings of the International Conference on Intelligent Information Technologies for Industry, Sirius, Russia."},{"key":"ref_129","doi-asserted-by":"crossref","unstructured":"Valstar, M., Schuller, B., Smith, K., Eyben, F., Jiang, B., Bilakhia, S., Schnieder, S., Cowie, R., and Pantic, M. (2013, January 21). AVEC 2013: The continuous audio\/visual emotion and depression recognition challenge. Proceedings of the 3rd ACM International Workshop on Audio\/Visual Emotion Challenge, Barcelona, Spain.","DOI":"10.1145\/2512530.2512533"},{"key":"ref_130","unstructured":"Valstar, M., Schuller, B., Smith, K., Almaev, T., Eyben, F., Krajewski, J., Cowie, R., and Pantic, M. (2014, January 7). Avec 2014: 3d dimensional affect and depression recognition challenge. Proceedings of the 4th International Workshop on Audio\/Visual Emotion Challenge, Orlando, FL, USA."},{"key":"ref_131","doi-asserted-by":"crossref","unstructured":"Tian, L., Moore, J., and Lai, C. (2016, January 13\u201316). Recognizing emotions in spoken dialogue with hierarchically fused acoustic and lexical features. Proceedings of the 2016 IEEE Spoken Language Technology Workshop (SLT), San Diego, CA, USA.","DOI":"10.1109\/SLT.2016.7846319"},{"key":"ref_132","doi-asserted-by":"crossref","first-page":"300","DOI":"10.1109\/TAFFC.2016.2553038","article-title":"BAUM-1: A Spontaneous Audio-Visual Face Database of Affective and Mental States","volume":"8","author":"Zhalehpour","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_133","doi-asserted-by":"crossref","unstructured":"Zhang, L., Walter, S., Ma, X., Werner, P., Al-Hamadi, A., Traue, H.C., and Gruss, S. (2016, January 6\u20139). \u201cBioVid Emo DB\u201d: A multimodal database for emotion analyses validated by subjective ratings. Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece.","DOI":"10.1109\/SSCI.2016.7849931"},{"key":"ref_134","doi-asserted-by":"crossref","unstructured":"Prabha, R., Anandan, P., Sivarajeswari, S., Saravanakumar, C., and Babu, D.V. (2022, January 20\u201322). Design of an Automated Recurrent Neural Network for Emotional Intelligence Using Deep Neural Networks. Proceedings of the 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India.","DOI":"10.1109\/ICSSIT53264.2022.9716420"},{"key":"ref_135","doi-asserted-by":"crossref","first-page":"913","DOI":"10.1007\/s12652-016-0406-z","article-title":"CHEAVD: A Chinese natural emotional audio\u2013visual database","volume":"8","author":"Li","year":"2017","journal-title":"J. Ambient. Intell. Humaniz. Comput."},{"key":"ref_136","doi-asserted-by":"crossref","unstructured":"Li, Y., Tao, J., Schuller, B., Shan, S., Jiang, D., and Jia, J. (2018, January 20\u201322). Mec 2017: Multimodal emotion recognition challenge. Proceedings of the 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia), Beijing, China.","DOI":"10.1109\/ACIIAsia.2018.8470342"},{"key":"ref_137","doi-asserted-by":"crossref","first-page":"4897","DOI":"10.1007\/s11042-021-10553-4","article-title":"Speech emotion recognition based on multi-feature and multi-lingual fusion","volume":"81","author":"Wang","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_138","doi-asserted-by":"crossref","unstructured":"Liang, J., Chen, S., Zhao, J., Jin, Q., Liu, H., and Lu, L. (2019, January 12\u201317). Cross-culture multimodal emotion recognition with adversarial learning. Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8683725"},{"key":"ref_139","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1109\/JBHI.2017.2688239","article-title":"DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices","volume":"22","author":"Katsigiannis","year":"2017","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_140","doi-asserted-by":"crossref","first-page":"e38","DOI":"10.7717\/peerj.38","article-title":"Validation of the Emotiv EPOC\u00ae EEG gaming system for measuring research quality auditory ERPs","volume":"1","author":"Badcock","year":"2013","journal-title":"PeerJ"},{"key":"ref_141","unstructured":"Ekanayake, H. (2022, June 06). P300 and Emotiv EPOC: Does Emotiv EPOC Capture Real EEG?. Available online: http:\/\/neurofeedback.visaduma.info\/emotivresearch.htm."},{"key":"ref_142","doi-asserted-by":"crossref","first-page":"1527","DOI":"10.1109\/JSEN.2010.2045498","article-title":"SHIMMER\u2122\u2013A wireless sensor platform for noninvasive biomedical research","volume":"10","author":"Burns","year":"2010","journal-title":"IEEE Sens. J."},{"key":"ref_143","doi-asserted-by":"crossref","unstructured":"Martin, O., Kotsia, I., Macq, B., and Pitas, I. (2006, January 3\u20137). The enterface\u201905 audio-visual emotion database. Proceedings of the 22nd International Conference on Data Engineering Workshops, Washington, DC, USA.","DOI":"10.1109\/ICDEW.2006.145"},{"key":"ref_144","doi-asserted-by":"crossref","unstructured":"Gunes, H., and Piccardi, M. (2006, January 20\u201326). A bimodal face and body gesture database for automatic analysis of human nonverbal affective behavior. Proceedings of the 18th International Conference on Pattern Recognition (ICPR\u201906), Hong Kong, China.","DOI":"10.1109\/ICPR.2006.39"},{"key":"ref_145","doi-asserted-by":"crossref","unstructured":"Karatay, B., Bestepe, D., Sailunaz, K., Ozyer, T., and Alhajj, R. (2022, January 1\u20133). A Multi-Modal Emotion Recognition System Based on CNN-Transformer Deep Learning Technique. Proceedings of the 2022 7th International Conference on Data Science and Machine Learning Applications (CDMA), Riyadh, Saudi Arabia.","DOI":"10.1109\/CDMA54072.2022.00029"},{"key":"ref_146","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"ref_147","doi-asserted-by":"crossref","unstructured":"Valstar, M.F., Jiang, B., Mehu, M., Pantic, M., and Scherer, K. (2011, January 21\u201323). The first facial expression recognition and analysis challenge. Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA.","DOI":"10.1109\/FG.2011.5771374"},{"key":"ref_148","first-page":"271","article-title":"Introducing the geneva multimodal emotion portrayal (gemep) corpus","volume":"2010","author":"Scherer","year":"2010","journal-title":"Bluepr. Affect. Comput. Sourceb."},{"key":"ref_149","first-page":"488","article-title":"The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data","volume":"4738","author":"Cowie","year":"2007","journal-title":"Affect. Comput. Intell. Interact."},{"key":"ref_150","doi-asserted-by":"crossref","unstructured":"Baveye, Y., Bettinelli, J.N., Dellandr\u00e9a, E., Chen, L., and Chamaret, C. (2013, January 2\u20135). A large video database for computational models of induced emotion. Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland.","DOI":"10.1109\/ACII.2013.9"},{"key":"ref_151","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1109\/TAFFC.2015.2396531","article-title":"Liris-accede: A video database for affective content analysis","volume":"6","author":"Baveye","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_152","doi-asserted-by":"crossref","unstructured":"Livingstone, S.R., and Russo, F.A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13.","DOI":"10.1371\/journal.pone.0196391"},{"key":"ref_153","doi-asserted-by":"crossref","unstructured":"Iqbal, A., and Barua, K. (2019, January 7\u20139). A Real-time Emotion Recognition from Speech using Gradient Boosting. Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox\u2019sBazar, Bangladesh.","DOI":"10.1109\/ECACE.2019.8679271"},{"key":"ref_154","doi-asserted-by":"crossref","unstructured":"Haque, A., Guo, M., Verma, P., and Fei-Fei, L. (2019, January 12\u201317). Audio-linguistic embeddings for spoken sentences. Proceedings of the ICASSP 2019\u20142019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682553"},{"key":"ref_155","doi-asserted-by":"crossref","first-page":"936","DOI":"10.1109\/TMM.2008.927665","article-title":"Recognizing human emotional state from audiovisual signals","volume":"10","author":"Wang","year":"2008","journal-title":"IEEE Trans. Multimed."},{"key":"ref_156","doi-asserted-by":"crossref","unstructured":"Gievska, S., Koroveshovski, K., and Tagasovska, N. (2015, January 21\u201324). Bimodal feature-based fusion for real-time emotion recognition in a mobile context. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi\u2019an, China.","DOI":"10.1109\/ACII.2015.7344602"},{"key":"ref_157","doi-asserted-by":"crossref","unstructured":"Gunes, H., and Pantic, M. (2010, January 20\u201322). Dimensional emotion prediction from spontaneous head gestures for interaction with sensitive artificial listeners. Proceedings of the Intelligent Virtual Agents, Philadelphia, PA, USA.","DOI":"10.1007\/978-3-642-15892-6_39"},{"key":"ref_158","unstructured":"Haq, S., and Jackson, P.J. (2010). Multimodal emotion recognition. Machine Audition: Principles, Algorithms and Systems, IGI Global."},{"key":"ref_159","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1109\/TAMD.2015.2431497","article-title":"Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks","volume":"7","author":"Zheng","year":"2015","journal-title":"IEEE Trans. Auton. Ment. Dev."},{"key":"ref_160","unstructured":"Liu, W., Qiu, J.L., Zheng, W.L., and Lu, B.L. (2019). Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis. arXiv."},{"key":"ref_161","doi-asserted-by":"crossref","unstructured":"Duan, R.N., Zhu, J.Y., and Lu, B.L. (2013, January 6\u20138). Differential entropy feature for EEG-based emotion classification. Proceedings of the 2013 6th International IEEE\/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA.","DOI":"10.1109\/NER.2013.6695876"},{"key":"ref_162","doi-asserted-by":"crossref","first-page":"1110","DOI":"10.1109\/TCYB.2018.2797176","article-title":"Emotionmeter: A multimodal framework for recognizing human emotions","volume":"49","author":"Zheng","year":"2018","journal-title":"IEEE Trans. Cybern."},{"key":"ref_163","doi-asserted-by":"crossref","unstructured":"Li, T.H., Liu, W., Zheng, W.L., and Lu, B.L. (2019, January 20\u201323). Classification of five emotions from EEG and eye movement signals: Discrimination ability and stability over time. Proceedings of the 2019 9th International IEEE\/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA.","DOI":"10.1109\/NER.2019.8716943"},{"key":"ref_164","doi-asserted-by":"crossref","first-page":"026017","DOI":"10.1088\/1741-2552\/aa5a98","article-title":"A multimodal approach to estimating vigilance using EEG and forehead EOG","volume":"14","author":"Zheng","year":"2017","journal-title":"J. Neural Eng."},{"key":"ref_165","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1109\/T-AFFC.2011.20","article-title":"The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent","volume":"3","author":"McKeown","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_166","doi-asserted-by":"crossref","first-page":"497","DOI":"10.1007\/s10579-015-9300-0","article-title":"The USC CreativeIT database of multimodal dyadic interactions: From speech and full body motion capture to continuous emotional annotations","volume":"50","author":"Metallinou","year":"2016","journal-title":"Lang. Resour. Eval."},{"key":"ref_167","doi-asserted-by":"crossref","unstructured":"Chang, C.M., and Lee, C.C. (2017, January 5\u20139). Fusion of multiple emotion perspectives: Improving affect recognition through integrating cross-lingual emotion information. Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA.","DOI":"10.1109\/ICASSP.2017.7953272"},{"key":"ref_168","doi-asserted-by":"crossref","unstructured":"Grimm, M., Kroschel, K., and Narayanan, S. (2008, January 23\u201326). The Vera am Mittag German audio-visual emotional speech database. Proceedings of the 2008 IEEE International Conference on Multimedia and Expo, Hannover, Germany.","DOI":"10.1109\/ICME.2008.4607572"},{"key":"ref_169","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/MMUL.2012.26","article-title":"Collecting large, richly annotated facial-expression databases from movies","volume":"19","author":"Dhall","year":"2012","journal-title":"IEEE Multimed."},{"key":"ref_170","doi-asserted-by":"crossref","unstructured":"Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W.F., and Weiss, B. (2005, January 4\u20138). A database of german emotional speech. Proceedings of the Interspeech, Lisbon, Portugal.","DOI":"10.21437\/Interspeech.2005-446"},{"key":"ref_171","doi-asserted-by":"crossref","unstructured":"Staroniewicz, P., and Majewski, W. (2009). Polish emotional speech database\u2013recording and preliminary validation. Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, Springer.","DOI":"10.1007\/978-3-642-03320-9_5"},{"key":"ref_172","doi-asserted-by":"crossref","unstructured":"Lee, S., Yildirim, S., Kazemzadeh, A., and Narayanan, S. (2005, January 4\u20138). An articulatory study of emotional speech production. Proceedings of the Interspeech, Lisbon, Portugal.","DOI":"10.21437\/Interspeech.2005-325"},{"key":"ref_173","doi-asserted-by":"crossref","unstructured":"Strapparava, C., and Mihalcea, R. (2007, January 23\u201324). Semeval-2007 task 14: Affective text. Proceedings of the 4th International Workshop on Semantic Evaluations. Association for Computational Linguistics, Prague, Czech Republic.","DOI":"10.3115\/1621474.1621487"},{"key":"ref_174","doi-asserted-by":"crossref","first-page":"763","DOI":"10.1177\/053901886025004001","article-title":"How universal and specific is emotional experience? Evidence from 27 countries on five continents","volume":"25","author":"Wallbott","year":"1986","journal-title":"Soc. Sci. Inf."},{"key":"ref_175","unstructured":"Kanade, T., Cohn, J.F., and Tian, Y. (2000, January 26\u201330). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France."},{"key":"ref_176","unstructured":"Ekman, P., and Friesen, W.V. (2022, June 14). Facial Action Coding System. Available online: https:\/\/psycnet.apa.org\/doiLanding?doi=10.1037%2Ft27734-000."},{"key":"ref_177","unstructured":"Ekman, P., Friesen, W.V., and Hager, J.C. (2022, June 14). FACS Investigator\u2019s Guide. 2002, 96 Chapter 4 pp 29. Available online: https:\/\/www.scirp.org\/%28S%28i43dyn45teexjx455qlt3d2q%29%29\/reference\/ReferencesPapers.aspx?ReferenceID=1850657."},{"key":"ref_178","doi-asserted-by":"crossref","unstructured":"Ranganathan, H., Chakraborty, S., and Panchanathan, S. (2016, January 6\u20139). Transfer of multimodal emotion features in deep belief networks. Proceedings of the 2016 50th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA.","DOI":"10.1109\/ACSSC.2016.7869079"},{"key":"ref_179","doi-asserted-by":"crossref","first-page":"597","DOI":"10.1007\/s12559-017-9472-6","article-title":"Ensemble of Deep Neural Networks with Probability-Based Fusion for Facial Expression Recognition","volume":"9","author":"Wen","year":"2017","journal-title":"Cogn. Comput."},{"key":"ref_180","doi-asserted-by":"crossref","unstructured":"Goodfellow, I.J., Erhan, D., Carrier, P.L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., and Lee, D.H. (2013, January 3\u20137). Challenges in representation learning: A report on three machine learning contests. Proceedings of the International Conference on Neural Information Processing, Daegu, Korea.","DOI":"10.1007\/978-3-642-42051-1_16"},{"key":"ref_181","doi-asserted-by":"crossref","unstructured":"Ng, H.W., Nguyen, V.D., Vonikakis, V., and Winkler, S. (2015, January 9\u201313). Deep learning for emotion recognition on small datasets using transfer learning. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA.","DOI":"10.1145\/2818346.2830593"},{"key":"ref_182","doi-asserted-by":"crossref","first-page":"874","DOI":"10.1037\/a0020019","article-title":"Evidence and a computational explanation of cultural differences in facial expression recognition","volume":"10","author":"Dailey","year":"2010","journal-title":"Emotion"},{"key":"ref_183","unstructured":"Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998, January 14\u201316). Coding facial expressions with gabor wavelets. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},{"key":"ref_184","doi-asserted-by":"crossref","first-page":"1357","DOI":"10.1109\/34.817413","article-title":"Automatic classification of single facial images","volume":"21","author":"Lyons","year":"1999","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_185","unstructured":"Pantic, M., Valstar, M., Rademaker, R., and Maat, L. (2005, January 6). Web-based database for facial expression analysis. Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands."},{"key":"ref_186","unstructured":"Valstar, M., and Pantic, M. (2010, January 23). Induced disgust, happiness and surprise: An addition to the mmi facial expression database. Proceedings of the 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect, Valletta, Malta."},{"key":"ref_187","unstructured":"Dhall, A., Goecke, R., Lucey, S., and Gedeon, T. (2022, June 04). Static Facial Expressions In The Wild: Data and Experiment Protocol. Available online: http:\/\/citeseerx.ist.psu.edu\/viewdoc\/versions?doi=10.1.1.671.1708."},{"key":"ref_188","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3490686","article-title":"A Multimodal Framework for Large-Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals","volume":"18","author":"Yin","year":"2022","journal-title":"ACM Trans. Multimed. Comput. Commun. Appl. (Tomm)"},{"key":"ref_189","doi-asserted-by":"crossref","unstructured":"Udovi\u010di\u0107, G., \u00d0erek, J., Russo, M., and Sikora, M. (2017, January 23). Wearable emotion recognition system based on GSR and PPG signals. Proceedings of the 2nd International Workshop on Multimedia for Personal Health and Health Care, Mountain View, CA, USA.","DOI":"10.1145\/3132635.3132641"},{"key":"ref_190","unstructured":"Radhika, K., and Oruganti, V.R.M. (2021, January 28\u201329). Deep Multimodal Fusion for Subject-Independent Stress Detection. Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India."},{"key":"ref_191","doi-asserted-by":"crossref","unstructured":"Mittal, T., Bhattacharya, U., Chandra, R., Bera, A., and Manocha, D. (2020, January 7\u201312). M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i02.5492"},{"key":"ref_192","doi-asserted-by":"crossref","first-page":"4040","DOI":"10.1109\/LRA.2021.3067867","article-title":"Negative emotion management using a smart shirt and a robot assistant","volume":"6","author":"Pham","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_193","doi-asserted-by":"crossref","unstructured":"Sun, B., Cao, S., Li, L., He, J., and Yu, L. (2016, January 16). Exploring multimodal visual features for continuous affect recognition. Proceedings of the 6th International Workshop on Audio\/Visual Emotion Challenge, Amsterdam, The Netherlands.","DOI":"10.1145\/2988257.2988270"},{"key":"ref_194","doi-asserted-by":"crossref","first-page":"7429","DOI":"10.1007\/s11042-014-1986-2","article-title":"BAUM-2: A multilingual audio-visual affective face database","volume":"74","author":"Erdem","year":"2015","journal-title":"Multimed. Tools Appl."},{"key":"ref_195","doi-asserted-by":"crossref","unstructured":"Dar, M.N., Akram, M.U., Khawaja, S.G., and Pujari, A.N. (2020). CNN and LSTM-based emotion charting using physiological signals. Sensors, 20.","DOI":"10.3390\/s20164551"},{"key":"ref_196","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1109\/TAFFC.2019.2916015","article-title":"Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing","volume":"13","author":"Siddharth","year":"2019","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_197","doi-asserted-by":"crossref","unstructured":"Yi, Y., Wang, H., and Tang, P. (2022, June 14). Unified Multi-Stage Fusion Network for Affective Video Content Analysis. Available at SSRN 4080629. Available online: https:\/\/ssrn.com\/abstract=4080629.","DOI":"10.2139\/ssrn.4080629"},{"key":"ref_198","doi-asserted-by":"crossref","unstructured":"McKeown, G., Valstar, M.F., Cowie, R., and Pantic, M. (2010, January 19\u201323). The SEMAINE corpus of emotionally coloured character interactions. Proceedings of the 2010 IEEE International Conference on Multimedia and Expo (ICME), Singapore.","DOI":"10.1109\/ICME.2010.5583006"},{"key":"ref_199","doi-asserted-by":"crossref","unstructured":"Siddiqui, M.F.H., and Javaid, A.Y. (2020). A Multimodal Facial Emotion Recognition Framework through the Fusion of Speech with Visible and Infrared Images. Multimodal Technol. Interact., 4.","DOI":"10.3390\/mti4030046"},{"key":"ref_200","unstructured":"(2018, May 03). Andrew Ng: Why AI Is the New Electricity | The Dish. Available online: https:\/\/news.stanford.edu\/thedish\/2017\/03\/14\/andrew-ng-why-ai-is-the-new-electricity\/."},{"key":"ref_201","unstructured":"(2018, May 03). Emotional Intelligence is the Future of Artificial Intelligence: Fjord | ZDNet. Available online: http:\/\/www.zdnet.com\/article\/emotional-intelligence-is-the-future-of-artificial-intelligence-fjord\/."},{"key":"ref_202","unstructured":"(2018, May 03). Synced | Emotional Intelligence is the Future of Artificial Intelligence. Available online: https:\/\/syncedreview.com\/2017\/03\/14\/emotional-intelligence-is-the-future-of-artificial-intelligence\/."},{"key":"ref_203","doi-asserted-by":"crossref","unstructured":"Olszewska, J.I. (2016). Automated Face Recognition: Challenges and Solutions. Pattern Recognition-Analysis and Applications, InTech.","DOI":"10.5772\/66013"},{"key":"ref_204","unstructured":"(2018, June 03). Lie to Me | Paul Ekman Group. Available online: https:\/\/www.paulekman.com\/lie-to-me\/."},{"key":"ref_205","doi-asserted-by":"crossref","unstructured":"Arellano, D., Varona, J., and Perales, F.J. (2015). Emotional Context? Or Contextual Emotions?. Handbook of Research on Synthesizing Human Emotion in Intelligent Systems and Robotics, IGI Global.","DOI":"10.4018\/978-1-4666-7278-9.ch018"},{"key":"ref_206","doi-asserted-by":"crossref","unstructured":"Bullington, J. (2005, January 23\u201324). \u2019Affective\u2019computing and emotion recognition systems: The future of biometric surveillance?. Proceedings of the 2nd Annual Conference on Information Security Curriculum Development, Kennesaw, GA, USA.","DOI":"10.1145\/1107622.1107644"},{"key":"ref_207","unstructured":"(2018, June 03). Disney Is Using Facial Recognition to Predict How You\u2019ll React to Movies. Available online: https:\/\/mashable.com\/2017\/07\/27\/disney-facial-recognition-prediction-movies\/#aoVIBBcxxmqI."},{"key":"ref_208","unstructured":"Xie, Z., and Guan, L. (2013, January 15\u201319). Multimodal information fusion of audiovisual emotion recognition using novel information theoretic tools. Proceedings of the 2013 IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA."},{"key":"ref_209","doi-asserted-by":"crossref","first-page":"e12","DOI":"10.1017\/ATSIP.2014.11","article-title":"Survey on audiovisual emotion recognition: Databases, features, and data fusion strategies","volume":"3","author":"Wu","year":"2014","journal-title":"APSIPA Trans. Signal Inf. Process."},{"key":"ref_210","doi-asserted-by":"crossref","first-page":"597","DOI":"10.1109\/TMM.2012.2189550","article-title":"Kernel cross-modal factor analysis for information fusion with application to bimodal emotion recognition","volume":"14","author":"Wang","year":"2012","journal-title":"IEEE Trans. Multimed."},{"key":"ref_211","doi-asserted-by":"crossref","first-page":"444","DOI":"10.1016\/j.compeleceng.2016.04.009","article-title":"A novel feature extraction method based on late positive potential for emotion recognition in human brain signal patterns","volume":"53","author":"Mehmood","year":"2016","journal-title":"Comput. Electr. Eng."},{"key":"ref_212","unstructured":"Pramerdorfer, C., and Kampel, M. (2016). Facial Expression Recognition using Convolutional Neural Networks: State of the Art. arXiv."},{"key":"ref_213","first-page":"70","article-title":"The International Affective Picture System (IAPS) in the study of emotion and attention","volume":"29","author":"Lang","year":"2007","journal-title":"Handb. Emot. Elicitation Assess."},{"key":"ref_214","unstructured":"Kim, B.K., Dong, S.Y., Roh, J., Kim, G., and Lee, S.Y. (July, January 26). Fusing Aligned and Non-Aligned Face Information for Automatic Affect Recognition in the Wild: A Deep Learning Approach. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA."},{"key":"ref_215","doi-asserted-by":"crossref","first-page":"617","DOI":"10.1007\/s13246-017-0571-1","article-title":"Fusion of heart rate variability and pulse rate variability for emotion recognition using lagged poincare plots","volume":"40","author":"Goshvarpour","year":"2017","journal-title":"Australas. Phys. Eng. Sci. Med."},{"key":"ref_216","doi-asserted-by":"crossref","unstructured":"Ghayoumi, M., Thafar, M., and Bansal, A.K. (2016, January 25\u201326). Towards Formal Multimodal Analysis of Emotions for Affective Computing. Proceedings of the DMS, Salerno, Italy.","DOI":"10.18293\/DMS2016-030"},{"key":"ref_217","doi-asserted-by":"crossref","unstructured":"Gao, Y., Hendricks, L.A., Kuchenbecker, K.J., and Darrell, T. (2016, January 16\u201320). Deep learning for tactile understanding from visual and haptic data. Proceedings of the Robotics and Automation (ICRA), 2016 IEEE International Conference on IEEE, Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487176"},{"key":"ref_218","first-page":"1","article-title":"Emotion Analysis using Different Stimuli with EEG Signals in Emotional Space","volume":"2","author":"Dasdemir","year":"2017","journal-title":"Nat. Eng. Sci."},{"key":"ref_219","doi-asserted-by":"crossref","unstructured":"Callejas-Cuervo, M., Mart\u00ednez-Tejada, L., and Botero-Fagua, J. (2017). Architecture of an emotion recognition and video games system to identify personality traits. VII Latin American Congress on Biomedical Engineering CLAIB 2016, Bucaramanga, Santander, Colombia, October 26th\u201328th, 2016, Springer.","DOI":"10.1007\/978-981-10-4086-3_11"},{"key":"ref_220","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1016\/j.patrec.2014.11.007","article-title":"Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data","volume":"66","author":"Ringeval","year":"2015","journal-title":"Pattern Recognit. Lett."},{"key":"ref_221","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1109\/T-AFFC.2011.40","article-title":"Context-sensitive learning for enhanced audiovisual emotion classification","volume":"3","author":"Metallinou","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_222","unstructured":"Haq, S., Jackson, P.J., and Edge, J. (2009, January 10\u201313). Speaker-dependent audio-visual emotion recognition. Proceedings of the AVSP, Norwich, UK."},{"key":"ref_223","unstructured":"Haq, S., Jackson, P.J., and Edge, J. (2008, January 26\u201329). Audio-visual feature selection and reduction for emotion classification. Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP\u201908), Moreton Island, Australia."},{"key":"ref_224","doi-asserted-by":"crossref","first-page":"787","DOI":"10.1016\/j.specom.2007.01.010","article-title":"Primitives-based evaluation and estimation of emotions in speech","volume":"49","author":"Grimm","year":"2007","journal-title":"Speech Commun."},{"key":"ref_225","unstructured":"Pringle, H. (2008). Brand Immortality: How Brands Can Live Long and Prosper, Kogan Page Publishers."},{"key":"ref_226","doi-asserted-by":"crossref","unstructured":"Ko\u0142akowska, A., Landowska, A., Szwoch, M., Szwoch, W., and Wrobel, M.R. (2014). Emotion recognition and its applications. Human-Computer Systems Interaction: Backgrounds and Applications 3, Springer.","DOI":"10.1007\/978-3-319-08491-6_5"},{"key":"ref_227","doi-asserted-by":"crossref","unstructured":"Li, G., and Wang, Y. (2018, January 12\u201314). Research on leamer\u2019s emotion recognition for intelligent education system. Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China.","DOI":"10.1109\/IAEAC.2018.8577590"},{"key":"ref_228","first-page":"8887","article-title":"Implementation of Hybrid Model of Particle Filter and Kalman Filter based Real-Time Tracking for handling Occlusion on Beagleboard-xM","volume":"95","author":"Majumdar","year":"2014","journal-title":"Int. J. Comput. Appl."},{"key":"ref_229","first-page":"28","article-title":"Implementation of Real Time Local Search Particle Filter Based Tracking Algorithms on BeagleBoard-xM","volume":"11","author":"Majumdar","year":"2014","journal-title":"Int. J. Comput. Sci. Issues (IJCSI)"},{"key":"ref_230","doi-asserted-by":"crossref","unstructured":"Smith, J.R., Joshi, D., Huet, B., Hsu, W., and Cota, J. (2017, January 23\u201327). Harnessing ai for augmenting creativity: Application to movie trailer creation. Proceedings of the 25th ACM international conference on Multimedia, Mountain View, CA, USA.","DOI":"10.1145\/3123266.3127906"},{"key":"ref_231","doi-asserted-by":"crossref","unstructured":"Mehta, D., Siddiqui, M.F.H., and Javaid, A.Y. (2019). Recognition of emotion intensities using machine learning algorithms: A comparative study. Sensors, 19.","DOI":"10.3390\/s19081897"},{"key":"ref_232","doi-asserted-by":"crossref","first-page":"14231","DOI":"10.1007\/s11042-018-6755-1","article-title":"An intelligent recommendation system using gaze and emotion detection","volume":"78","author":"Jaiswal","year":"2019","journal-title":"Multimed. Tools Appl."}],"container-title":["Multimodal Technologies and Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2414-4088\/6\/6\/47\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:33:55Z","timestamp":1760139235000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2414-4088\/6\/6\/47"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,17]]},"references-count":232,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2022,6]]}},"alternative-id":["mti6060047"],"URL":"https:\/\/doi.org\/10.3390\/mti6060047","relation":{},"ISSN":["2414-4088"],"issn-type":[{"value":"2414-4088","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,17]]}}}