{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T20:38:08Z","timestamp":1776112688055,"version":"3.50.1"},"reference-count":123,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2020,12,24]],"date-time":"2020-12-24T00:00:00Z","timestamp":1608768000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimuli, or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimuli (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA) which we collected using a smart wristband and wearable eyetracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. 
Our findings show: (1) instance segment lengths between 1\u20134 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (\u226464 Hz); (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.<\/jats:p>","DOI":"10.3390\/s21010052","type":"journal-article","created":{"date-parts":[[2020,12,24]],"date-time":"2020-12-24T09:02:44Z","timestamp":1608800564000},"page":"52","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":63,"title":["CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6293-881X","authenticated-orcid":false,"given":"Tianyi","family":"Zhang","sequence":"first","affiliation":[{"name":"Multimedia Computing Group, Delft University of Technology, 2600AA Delft, The Netherlands"},{"name":"Centrum Wiskunde &amp; Informatica (CWI), 1098XG Amsterdam, The Netherlands"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9954-4088","authenticated-orcid":false,"given":"Abdallah","family":"El Ali","sequence":"additional","affiliation":[{"name":"Centrum Wiskunde &amp; Informatica (CWI), 1098XG Amsterdam, The Netherlands"}]},{"given":"Chen","family":"Wang","sequence":"additional","affiliation":[{"name":"Future Media and Convergence Institute, Xinhuanet &amp; State Key Laboratory of Media Convergence Production Technology and Systems, Xinhua News Agency, Beijing 100000, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5771-2549","authenticated-orcid":false,"given":"Alan","family":"Hanjalic","sequence":"additional","affiliation":[{"name":"Multimedia Computing Group, Delft University of Technology, 2600AA Delft, The 
Netherlands"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1752-6837","authenticated-orcid":false,"given":"Pablo","family":"Cesar","sequence":"additional","affiliation":[{"name":"Multimedia Computing Group, Delft University of Technology, 2600AA Delft, The Netherlands"},{"name":"Centrum Wiskunde &amp; Informatica (CWI), 1098XG Amsterdam, The Netherlands"}]}],"member":"1968","published-online":{"date-parts":[[2020,12,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1109\/T-AFFC.2011.37","article-title":"Multimodal emotion recognition in response to videos","volume":"3","author":"Soleymani","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"422","DOI":"10.1016\/j.neucom.2012.07.050","article-title":"Affivir: An affect-based Internet video recommendation system","volume":"120","author":"Niu","year":"2013","journal-title":"Neurocomputing"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"51185","DOI":"10.1109\/ACCESS.2019.2911235","article-title":"EmoWare: A Context-Aware Framework for Personalized Video Recommendation Using Affective Video Sequences","volume":"7","author":"Tripathi","year":"2019","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2133366.2133373","article-title":"Affect recognition based on physiological changes during the watching of music videos","volume":"2","author":"Yazdani","year":"2012","journal-title":"ACM Trans. Interact. Intell. Syst. (TiiS)"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ali, M., Al Machot, F., Haj Mosa, A., Jdeed, M., Al Machot, E., and Kyamakya, K. (2018). A globally generalized emotion recognition system involving different physiological signals. Sensors, 18.","DOI":"10.3390\/s18061905"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Shu, L., Xie, J., Yang, M., Li, Z., Li, Z., Liao, D., Xu, X., and Yang, X. (2018). 
A Review of Emotion Recognition Using Physiological Signals. Sensors, 18.","DOI":"10.3390\/s18072074"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Jerritta, S., Murugappan, M., Nagarajan, R., and Wan, K. (2011, January 4\u20136). Physiological signals based human emotion recognition: A review. Proceedings of the 2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia.","DOI":"10.1109\/CSPA.2011.5759912"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1016\/j.entcs.2019.04.009","article-title":"Emotion recognition from physiological signal analysis: A review","volume":"343","author":"Maria","year":"2019","journal-title":"Electron. Notes Theor. Comput. Sci."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"283","DOI":"10.3758\/BF03193159","article-title":"EMuJoy: Software for continuous measurement of perceived emotions in music","volume":"39","author":"Nagel","year":"2007","journal-title":"Behav. Res. Methods"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1109\/TAFFC.2015.2436926","article-title":"Analysis of EEG signals and facial expressions for continuous emotion detection","volume":"7","author":"Soleymani","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1037\/0003-066X.50.5.372","article-title":"The emotion probe: Studies of motivation and attention","volume":"50","author":"Lang","year":"1995","journal-title":"Am. Psychol."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1161","DOI":"10.1037\/h0077714","article-title":"A circumplex model of affect","volume":"39","author":"Russell","year":"1980","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_13","unstructured":"Paul, E. (2007). Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life, OWL Books."},{"key":"ref_14","unstructured":"Levenson, R.W. 
(1988). Emotion and the autonomic nervous system: A prospectus for research on autonomic specificity. Soc. Psychophysiol. Theory Clin. Appl., 17\u201342."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"101646","DOI":"10.1016\/j.bspc.2019.101646","article-title":"A machine learning model for emotion recognition from physiological signals","volume":"55","author":"Delahoz","year":"2020","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/0005-7916(94)90063-9","article-title":"Measuring emotion: The self-assessment manikin and the semantic differential","volume":"25","author":"Bradley","year":"1994","journal-title":"J. Behav. Ther. Exp. Psychiatry"},{"key":"ref_17","unstructured":"Cowie, R., Douglas-Cowie, E., Savvidou, S., McMahon, E., Sawey, M., and Schr\u00f6der, M. (2000, January 5\u20137). \u2019FEELTRACE\u2019: An instrument for recording perceived emotion in real time. Proceedings of the ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion, Newcastle, UK."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"902","DOI":"10.3758\/s13428-017-0915-5","article-title":"DARMA: Software for dual axis rating and media annotation","volume":"50","author":"Girard","year":"2018","journal-title":"Behav. Res. Methods"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41597-019-0209-0","article-title":"A dataset of continuous affect annotations and physiological signals for emotion analysis","volume":"6","author":"Sharma","year":"2019","journal-title":"Sci. Data"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Soleymani, M., Asghari-Esfeden, S., Pantic, M., and Fu, Y. (2014, January 14\u201318). Continuous emotion detection using EEG signals and facial expressions. 
Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China.","DOI":"10.1109\/ICME.2014.6890301"},{"key":"ref_21","first-page":"264","article-title":"EEG Based Human Facial Emotion Recognition System Using LSTMRNN","volume":"2","author":"Haripriyadharshini","year":"2018","journal-title":"Asian J. Appl. Sci. Technol. (AJAST)"},{"key":"ref_22","unstructured":"Hasanzadeh, F., Annabestani, M., and Moghimi, S. (2019). Continuous Emotion Recognition during Music Listening Using EEG Signals: A Fuzzy Parallel Cascades Model. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Wu, S., Du, Z., Li, W., Huang, D., and Wang, Y. (2019, January 14\u201318). Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze. Proceedings of the 2019 International Conference on Multimodal Interaction, Suzhou, China.","DOI":"10.1145\/3340555.3353739"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhao, S., Yao, H., and Jiang, X. (2015, January 26\u201330). Predicting continuous probability distribution of image emotions in valence-arousal space. Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia.","DOI":"10.1145\/2733373.2806354"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"031001","DOI":"10.1088\/1741-2552\/ab0ab5","article-title":"Deep learning for electroencephalogram (EEG) classification tasks: A review","volume":"16","author":"Craik","year":"2019","journal-title":"J. Neural Eng."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1007\/s13534-018-00093-6","article-title":"Wearable EEG and beyond","volume":"9","author":"Casson","year":"2019","journal-title":"Biomed. Eng. Lett."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Khamis, M., Baier, A., Henze, N., Alt, F., and Bulling, A. (2018, January 21\u201326). 
Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones Used in the Wild. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI \u201918), Montreal, QC, Canada.","DOI":"10.1145\/3173574.3173854"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"235","DOI":"10.1207\/s15327051hci2102_3","article-title":"The watcher and the watched: Social judgments about privacy in a public place","volume":"21","author":"Friedman","year":"2006","journal-title":"Hum. Comput. Interact."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"712","DOI":"10.5465\/amj.2012.0911","article-title":"Watching you watching me: Boundary control and capturing attention in the context of ubiquitous technology use","volume":"58","author":"Stanko","year":"2015","journal-title":"Acad. Manag. J."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Ragot, M., Martin, N., Em, S., Pallamin, N., and Diverrez, J.M. (2017). Emotion recognition using physiological signals: Laboratory vs. wearable sensors. International Conference on Applied Human Factors and Ergonomics, Springer.","DOI":"10.1007\/978-3-319-60639-2_2"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3397316","article-title":"Detection of Artifacts in Ambulatory Electrodermal Activity Data","volume":"4","author":"Gashi","year":"2020","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Zhang, T., El Ali, A., Wang, C., Hanjalic, A., and Cesar, P. (2020, January 26). RCEA: Real-Time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI\u201920), Honolulu, HI, USA.","DOI":"10.1145\/3313831.3376808"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Ma, J., Tang, H., Zheng, W.L., and Lu, B.L. (2019, January 21\u201325). 
Emotion Recognition using Multimodal Residual LSTM Network. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350871"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Zhong, S.H., Fares, A., and Jiang, J. (2019, January 21\u201325). An Attentional-LSTM for Improved Classification of Brain Activities Evoked by Images. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350886"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"2222","DOI":"10.1109\/TNNLS.2016.2582924","article-title":"LSTM: A search space odyssey","volume":"28","author":"Greff","year":"2016","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_36","first-page":"3104","article-title":"Sequence to sequence learning with neural networks","volume":"27","author":"Sutskever","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Chen, Q., Zhu, X., Ling, Z., Wei, S., Jiang, H., and Inkpen, D. (2016). Enhanced lstm for natural language inference. arXiv.","DOI":"10.18653\/v1\/P17-1152"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Bentley, F., and Lottridge, D. (2019, January 4\u20139). Understanding Mass-Market Mobile TV Behaviors in the Streaming Era. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI\u201919), Glasgow, UK.","DOI":"10.1145\/3290605.3300491"},{"key":"ref_39","first-page":"1","article-title":"Moodexplorer: Towards compound emotion detection via smartphone sensing","volume":"1","author":"Zhang","year":"2018","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"169","DOI":"10.1080\/02699939208411068","article-title":"An argument for basic emotions","volume":"6","author":"Ekman","year":"1992","journal-title":"Cogn. 
Emot."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Taylor, B., Dey, A., Siewiorek, D., and Smailagic, A. (2015, January 7\u201311). Using physiological sensors to detect levels of user frustration induced by system delays. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan.","DOI":"10.1145\/2750858.2805847"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Kyriakou, K., Resch, B., Sagl, G., Petutschnig, A., Werner, C., Niederseer, D., Liedlgruber, M., Wilhelm, F.H., Osborne, T., and Pykett, J. (2019). Detecting moments of stress from measurements of wearable physiological sensors. Sensors, 19.","DOI":"10.3390\/s19173805"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1139","DOI":"10.12928\/telkomnika.v17i3.9719","article-title":"Stress detection and relief using wearable physiological sensors","volume":"17","author":"Sethi","year":"2019","journal-title":"Telkomnika"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3214284","article-title":"A weakly supervised learning framework for detecting social anxiety and depression","volume":"2","author":"Salekin","year":"2018","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"479","DOI":"10.1016\/j.future.2018.03.038","article-title":"Emotions detection on an ambient intelligent system using wearable devices","volume":"92","author":"Costa","year":"2019","journal-title":"Future Gener. Comput. Syst."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zenonos, A., Khan, A., Kalogridis, G., Vatsikas, S., Lewis, T., and Sooriyabandara, M. (2016, January 14\u201318). HealthyOffice: Mood recognition at work using smartphones and wearable sensors. 
Proceedings of the 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), Sydney, Australia.","DOI":"10.1109\/PERCOMW.2016.7457166"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"196","DOI":"10.1109\/TCE.2018.2844736","article-title":"Emotion based music recommendation system using wearable physiological sensors","volume":"64","author":"Ayata","year":"2018","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Yao, L., Liu, Y., Li, W., Zhou, L., Ge, Y., Chai, J., and Sun, X. (2014). Using physiological measures to evaluate user experience of mobile applications. International Conference on Engineering Psychology and Cognitive Ergonomics, Springer.","DOI":"10.1007\/978-3-319-07515-0_31"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3314400","article-title":"Using unobtrusive wearable sensors to measure the physiological synchrony between presenters and audience members","volume":"3","author":"Gashi","year":"2019","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Puke, S., Suzuki, T., Nakayama, K., Tanaka, H., and Minami, S. (2013, January 3\u20137). Blood pressure estimation from pulse wave velocity measured on the chest. Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan.","DOI":"10.1109\/EMBC.2013.6610946"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3191745","article-title":"EngageMon: Multi-Modal Engagement Sensing for Mobile Games","volume":"2","author":"Huynh","year":"2018","journal-title":"Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol."},{"key":"ref_52","first-page":"1","article-title":"Unobtrusive assessment of students\u2019 emotional engagement during lectures using electrodermal activity sensors","volume":"2","author":"Gashi","year":"2018","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Yang, W., Rifqi, M., Marsala, C., and Pinna, A. (2018, January 11\u201314). Towards Better Understanding of Player\u2019s Game Experience. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan.","DOI":"10.1145\/3206025.3206072"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Wioleta, S. (2013, January 6\u20138). Using physiological signals for emotion recognition. Proceedings of the 2013 6th International Conference on Human System Interactions (HSI), Sopot, Poland.","DOI":"10.1109\/HSI.2013.6577880"},{"key":"ref_55","first-page":"147","article-title":"Emotion pattern recognition using physiological signals","volume":"172","author":"Niu","year":"2014","journal-title":"Sens. Transducers"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"459","DOI":"10.1615\/CritRevBiomedEng.v30.i456.80","article-title":"Control of multifunctional prosthetic hands by processing the electromyographic signal","volume":"30","author":"Zecca","year":"2002","journal-title":"Crit. Rev. Biomed. Eng."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2010.1","article-title":"Affect detection: An interdisciplinary review of models, methods, and their applications","volume":"1","author":"Calvo","year":"2010","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"He, C., Yao, Y.J., and Ye, X.S. (2017). An emotion recognition system based on physiological signals obtained by wearable sensors. 
Wearable Sensors and Robots, Springer.","DOI":"10.1007\/978-981-10-2404-7_2"},{"key":"ref_59","unstructured":"Chen, L., Li, M., Su, W., Wu, M., Hirota, K., and Pedrycz, W. (2019). Adaptive Feature Selection-Based AdaBoost-KNN With Direct Optimization for Dynamic Emotion Recognition in Human\u2013Robot Interaction. IEEE Trans. Emerg. Top. Comput. Intell."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Rigas, G., Katsis, C.D., Ganiatsas, G., and Fotiadis, D.I. (2007). A user independent, biosignal based, emotion recognition method. International Conference on User Modeling, Springer.","DOI":"10.1007\/978-3-540-73078-1_36"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Ali, M., Al Machot, F., Mosa, A.H., and Kyamakya, K. (2016). Cnn based subject-independent driver emotion recognition system involving physiological signals for adas. Advanced Microsystems for Automotive Applications 2016, Springer.","DOI":"10.1007\/978-3-319-44766-7_11"},{"key":"ref_62","first-page":"57","article-title":"Using deep convolutional neural network for emotion detection on a physiological signals dataset (AMIGOS)","volume":"7","author":"Abdulhay","year":"2018","journal-title":"IEEE Access"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Suhara, Y., Xu, Y., and Pentland, A. (2017, January 3\u20137). Deepmood: Forecasting depressed mood based on self-reported histories via recurrent neural networks. Proceedings of the 26th International Conference on World Wide Web, Perth, Australia.","DOI":"10.1145\/3038912.3052676"},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Zhang, T. (2019, January 14\u201318). Multi-modal Fusion Methods for Robust Emotion Recognition using Body-worn Physiological Sensors in Mobile Environments. 
Proceedings of the 2019 International Conference on Multimodal Interaction, Suzhou, China.","DOI":"10.1145\/3340555.3356089"},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"391","DOI":"10.1109\/TMM.2012.2229970","article-title":"Affective labeling in a content-based recommender system for images","volume":"15","author":"Tkalcic","year":"2012","journal-title":"IEEE Trans. Multimed."},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Chang, C.Y., Zheng, J.Y., and Wang, C.J. (2010, January 18\u201323). Based on support vector regression for emotion recognition using physiological signals. Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain.","DOI":"10.1109\/IJCNN.2010.5596878"},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.bspc.2018.05.039","article-title":"Intelligent human emotion recognition based on elephant herding optimization tuned support vector regression","volume":"45","author":"Hassanien","year":"2018","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"23384","DOI":"10.1038\/srep23384","article-title":"Higher-order multivariable polynomial regression to estimate human affective states","volume":"6","author":"Wei","year":"2016","journal-title":"Sci. Rep."},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"92","DOI":"10.1109\/T-AFFC.2011.9","article-title":"Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space","volume":"2","author":"Nicolaou","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_70","unstructured":"Romeo, L., Cavallo, A., Pepa, L., Berthouze, N., and Pontil, M. (2019). Multiple Instance Learning for Emotion Recognition using Physiological Signals. IEEE Trans. Affect. 
Comput."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1109\/TAFFC.2015.2510625","article-title":"Multiple instance learning for behavioral coding","volume":"8","author":"Gibson","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Lee, C.C., Katsamanis, A., Black, M.P., Baucom, B.R., Georgiou, P.G., and Narayanan, S.S. (2011). Affective state recognition in married couples\u2019 interactions using PCA-based vocal entrainment measures with multiple instance learning. International Conference on Affective Computing and Intelligent Interaction, Springer.","DOI":"10.1007\/978-3-642-24571-8_4"},{"key":"ref_73","doi-asserted-by":"crossref","unstructured":"Wu, B., Zhong, E., Horner, A., and Yang, Q. (2014, January 18\u201319). Music emotion recognition by multi-label multi-layer multi-instance multi-view learning. Proceedings of the 22nd ACM International Conference on Multimedia, Mountain View, CA, USA.","DOI":"10.1145\/2647868.2654904"},{"key":"ref_74","first-page":"570","article-title":"A framework for multiple-instance learning","volume":"10","author":"Maron","year":"1997","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"428","DOI":"10.3171\/jns.1987.67.3.0428","article-title":"Oculomotor nerve regeneration in rats: Functional, histological, and neuroanatomical studies","volume":"67","author":"Fernandez","year":"1987","journal-title":"J. Neurosurg."},{"key":"ref_76","doi-asserted-by":"crossref","first-page":"10952","DOI":"10.1523\/JNEUROSCI.3950-08.2008","article-title":"Saccadic modulation of neural responses: Possible roles in saccadic suppression, enhancement, and time compression","volume":"28","author":"Ibbotson","year":"2008","journal-title":"J. 
Neurosci."},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"3575","DOI":"10.1098\/rstb.2009.0143","article-title":"Future affective technology for autism and emotion communication","volume":"364","author":"Picard","year":"2009","journal-title":"Philos. Trans. R. Soc. B Biol. Sci."},{"key":"ref_78","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1016\/j.autneu.2015.11.002","article-title":"Sympathetic regulation during thermal stress in human aging and disease","volume":"196","author":"Greaney","year":"2016","journal-title":"Auton. Neurosci."},{"key":"ref_79","unstructured":"Chen, M., Shi, X., Zhang, Y., Wu, D., and Guizani, M. (2017). Deep features learning for medical image analysis with convolutional autoencoder neural network. IEEE Trans. Big Data."},{"key":"ref_80","unstructured":"Creswell, A., Arulkumaran, K., and Bharath, A.A. (2017). On denoising autoencoders trained to minimise binary cross-entropy. arXiv."},{"key":"ref_81","first-page":"1853","article-title":"An autoencoder approach to learning bilingual word representations","volume":"27","author":"Ap","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Zhang, T., El Ali, A., Wang, C., Zhu, X., and Cesar, P. (2019, January 14\u201318). CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition. Proceedings of the 2019 International Conference on Multimodal Interaction, Suzhou, China.","DOI":"10.1145\/3340555.3353716"},{"key":"ref_83","unstructured":"Andrew, G., Arora, R., Bilmes, J., and Livescu, K. (2013, January 17\u201319). Deep canonical correlation analysis. 
Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"10","DOI":"10.1109\/TNNLS.2017.2716952","article-title":"Broad learning system: An effective and efficient incremental learning system without the need for deep architecture","volume":"29","author":"Chen","year":"2018","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"642","DOI":"10.1109\/JBHI.2017.2727218","article-title":"Deep belief networks for electroencephalography: A review of recent contributions and future outlooks","volume":"22","author":"Movahedi","year":"2018","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_86","doi-asserted-by":"crossref","unstructured":"Liu, C., Tang, T., Lv, K., and Wang, M. (2018, January 16\u201320). Multi-Feature Based Emotion Recognition for Video Clips. Proceedings of the ACM 2018 on International Conference on Multimodal Interaction, Boulder, CO, USA.","DOI":"10.1145\/3242969.3264989"},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Chen, H., Jiang, B., and Ding, S.X. (2020). A Broad Learning Aided Data-Driven Framework of Fast Fault Diagnosis for High-Speed Trains. IEEE Intell. Transp. Syst. Mag.","DOI":"10.1109\/MITS.2019.2907629"},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"2270","DOI":"10.1016\/j.patcog.2005.01.012","article-title":"Score normalization in multimodal biometric systems","volume":"38","author":"Jain","year":"2005","journal-title":"Pattern Recognit."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"3311","DOI":"10.1016\/S0042-6989(97)00169-7","article-title":"Sparse coding with an overcomplete basis set: A strategy employed by V1?","volume":"37","author":"Olshausen","year":"1997","journal-title":"Vis. 
Res."},{"key":"ref_90","doi-asserted-by":"crossref","first-page":"1095","DOI":"10.1080\/02699930541000084","article-title":"A revised film set for the induction of basic emotions","volume":"19","author":"Hewig","year":"2005","journal-title":"Cogn. Emot."},{"key":"ref_91","unstructured":"Bartolini, E.E. (2011). Eliciting Emotion with Film: Development of a Stimulus Set, Wesleyan University."},{"key":"ref_92","doi-asserted-by":"crossref","unstructured":"Park, C.Y., Cha, N., Kang, S., Kim, A., Khandoker, A.H., Hadjileontiadis, L., Oh, A., Jeong, Y., and Lee, U. (2020). K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations. arXiv.","DOI":"10.1038\/s41597-020-00630-y"},{"key":"ref_93","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/TAFFC.2015.2392932","article-title":"DECAF: MEG-based multimodal database for decoding affective physiological responses","volume":"6","author":"Abadi","year":"2015","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_94","first-page":"74","article-title":"Investigating adopter categories and determinants affecting the adoption of mobile television in China","volume":"10","author":"Lin","year":"2014","journal-title":"China Media Res."},{"key":"ref_95","doi-asserted-by":"crossref","unstructured":"McNally, J., and Harrington, B. (2017, January 14\u201316). How Millennials and Teens Consume Mobile Video. Proceedings of the 2017 ACM International Conference on Interactive Experiences for TV and Online Video (TVX \u201917), Hilversum, The Netherlands.","DOI":"10.1145\/3077548.3077555"},{"key":"ref_96","unstructured":"O\u2019Hara, K., Mitchell, A.S., and Vorbau, A. (May, January 28). Consuming Video on Mobile Devices. 
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI\u201907), San Jose, CA, USA."},{"key":"ref_97","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1109\/T-AFFC.2011.25","article-title":"A multimodal database for affect recognition and implicit tagging","volume":"3","author":"Soleymani","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_98","doi-asserted-by":"crossref","unstructured":"Ferdinando, H., Sepp\u00e4nen, T., and Alasaarela, E. (2017, January 24\u201326). Enhancing Emotion Recognition from ECG Signals using Supervised Dimensionality Reduction. Proceedings of the ICPRAM, Porto, Portugal.","DOI":"10.5220\/0006147801120118"},{"key":"ref_99","doi-asserted-by":"crossref","unstructured":"Gui, D., Zhong, S.H., and Ming, Z. (2018). Implicit Affective Video Tagging Using Pupillary Response. International Conference on Multimedia Modeling, Springer.","DOI":"10.1007\/978-3-319-73600-6_15"},{"key":"ref_100","unstructured":"Olson, D.H., Russell, C.S., and Sprenkle, D.H. (1989). Circumplex Model: Systemic Assessment and Treatment of Families, Psychology Press."},{"key":"ref_101","unstructured":"Itten, J. (1963). Mein Vorkurs am Bauhaus, Otto Maier Verlag."},{"key":"ref_102","doi-asserted-by":"crossref","unstructured":"Schmidt, P., Reiss, A., D\u00fcrichen, R., and Van Laerhoven, K. (2018, January 8\u201312). Labelling Affective States \u201cin the Wild\u201d Practical Guidelines and Lessons Learned. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, Singapore.","DOI":"10.1145\/3267305.3267551"},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Zhao, B., Wang, Z., Yu, Z., and Guo, B. (2018, January 8\u201312). EmotionSense: Emotion recognition based on wearable wristband. 
Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld\/SCALCOM\/UIC\/ATC\/CBDCom\/IOP\/SCI), Guangzhou, China.","DOI":"10.1109\/SmartWorld.2018.00091"},{"key":"ref_104","doi-asserted-by":"crossref","first-page":"319","DOI":"10.1109\/5.993400","article-title":"A chronology of interpolation: From ancient astronomy to modern signal and image processing","volume":"90","author":"Meijering","year":"2002","journal-title":"Proc. IEEE"},{"key":"ref_105","unstructured":"Daniels, R.W. (1974). Approximation Methods for Electronic Filter Design: With Applications to Passive, Active, and Digital Networks, McGraw-Hill."},{"key":"ref_106","doi-asserted-by":"crossref","unstructured":"Fleureau, J., Guillotel, P., and Orlac, I. (2013, January 2\u20135). Affective benchmarking of movies based on the physiological responses of a real audience. Proceedings of the IEEE 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland.","DOI":"10.1109\/ACII.2013.19"},{"key":"ref_107","doi-asserted-by":"crossref","first-page":"279","DOI":"10.3389\/fnins.2017.00279","article-title":"Physiological signal-based method for measurement of pain intensity","volume":"11","author":"Chu","year":"2017","journal-title":"Front. Neurosci."},{"key":"ref_108","doi-asserted-by":"crossref","first-page":"1341","DOI":"10.1589\/jpts.24.1341","article-title":"Descriptive analysis of skin temperature variability of sympathetic nervous system activity in stress","volume":"24","author":"Karthikeyan","year":"2012","journal-title":"J. Phys. Ther. Sci."},{"key":"ref_109","unstructured":"Zeiler, M.D. (2012). Adadelta: An adaptive learning rate method. arXiv."},{"key":"ref_110","doi-asserted-by":"crossref","unstructured":"Prechelt, L. (1998). Early stopping-but when?. 
Neural Networks: Tricks of the Trade, Springer.","DOI":"10.1007\/3-540-49430-8_3"},{"key":"ref_111","doi-asserted-by":"crossref","unstructured":"Chinchor, N. (1991). MUC-3 evaluation metrics. Proceedings of the 3rd Conference on Message Understanding, Association for Computational Linguistics.","DOI":"10.3115\/1071958.1071961"},{"key":"ref_112","doi-asserted-by":"crossref","unstructured":"Fatourechi, M., Ward, R.K., Mason, S.G., Huggins, J., Schl\u00f6gl, A., and Birch, G.E. (2008, January 11\u201313). Comparison of evaluation metrics in classification applications with imbalanced datasets. Proceedings of the IEEE 2008 Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA.","DOI":"10.1109\/ICMLA.2008.34"},{"key":"ref_113","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.neunet.2014.09.003","article-title":"Deep learning in neural networks: An overview","volume":"61","author":"Schmidhuber","year":"2015","journal-title":"Neural Netw."},{"key":"ref_114","unstructured":"Huang, Z., Xu, W., and Yu, K. (2015). Bidirectional LSTM-CRF models for sequence tagging. arXiv."},{"key":"ref_115","doi-asserted-by":"crossref","unstructured":"Wickramasuriya, D.S., and Faghih, R.T. (2017, January 6\u20138). Online and offline anger detection via electromyography analysis. Proceedings of the 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), Bethesda, MD, USA.","DOI":"10.1109\/HIC.2017.8227582"},{"key":"ref_116","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","article-title":"Deap: A database for emotion analysis; using physiological signals","volume":"3","author":"Koelstra","year":"2011","journal-title":"IEEE Trans. Affect. 
Comput."},{"key":"ref_117","doi-asserted-by":"crossref","first-page":"717","DOI":"10.1016\/j.ijhcs.2014.05.006","article-title":"Comparative analysis of emotion estimation methods based on physiological measurements for real-time applications","volume":"72","author":"Kukolja","year":"2014","journal-title":"Int. J. Hum. Comput. Stud."},{"key":"ref_118","doi-asserted-by":"crossref","first-page":"7","DOI":"10.1016\/j.copsyc.2017.04.020","article-title":"Interoception and emotion","volume":"17","author":"Critchley","year":"2017","journal-title":"Curr. Opin. Psychol."},{"key":"ref_119","first-page":"2672","article-title":"Generative adversarial nets","volume":"27","author":"Goodfellow","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_120","doi-asserted-by":"crossref","unstructured":"Ed-doughmi, Y., and Idrissi, N. (2019, January 12\u201315). Driver fatigue detection using recurrent neural networks. Proceedings of the 2nd International Conference on Networking, Information Systems & Security, Sochi, Russia.","DOI":"10.1145\/3320326.3320376"},{"key":"ref_121","doi-asserted-by":"crossref","first-page":"730","DOI":"10.1109\/TMM.2019.2933338","article-title":"Realistic facial expression reconstruction for VR HMD users","volume":"22","author":"Lou","year":"2019","journal-title":"IEEE Trans. Multimed."},{"key":"ref_122","doi-asserted-by":"crossref","unstructured":"Gen\u00e7, \u00c7., Colley, A., L\u00f6chtefeld, M., and H\u00e4kkil\u00e4, J. (2020, January 14\u201317). Face mask design to mitigate facial expression occlusion. Proceedings of the 2020 International Symposium on Wearable Computers, Cancun, Mexico.","DOI":"10.1145\/3410531.3414303"},{"key":"ref_123","doi-asserted-by":"crossref","unstructured":"Oulefki, A., Aouache, M., and Bengherabi, M. (2019). Low-Light Face Image Enhancement Based on Dynamic Face Part Selection. 
Iberian Conference on Pattern Recognition and Image Analysis, Springer.","DOI":"10.1007\/978-3-030-31321-0_8"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/1\/52\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T10:45:39Z","timestamp":1760179539000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/1\/52"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,12,24]]},"references-count":123,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2021,1]]}},"alternative-id":["s21010052"],"URL":"https:\/\/doi.org\/10.3390\/s21010052","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,12,24]]}}}