{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T05:55:59Z","timestamp":1773294959041,"version":"3.50.1"},"reference-count":47,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2019,4,7]],"date-time":"2019-04-07T00:00:00Z","timestamp":1554595200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>One of the main objectives of Active and Assisted Living (AAL) environments is to ensure that elderly and\/or disabled people perform\/live well in their immediate environments; this can be monitored by, among others, the recognition of emotions based on non-highly intrusive sensors such as Electrodermal Activity (EDA) sensors. However, designing a learning system or building a machine-learning model to recognize human emotions while training the system on a specific group of persons and testing the system on a totally new group of persons is still a serious challenge in the field, as it is possible that the second testing group of persons may have different emotion patterns. Accordingly, the purpose of this paper is to contribute to the field of human emotion recognition by proposing a Convolutional Neural Network (CNN) architecture which ensures promising robustness-related results for both subject-dependent and subject-independent human emotion recognition. The CNN model has been trained using a grid search technique, which is a model hyperparameter optimization technique, to fine-tune the parameters of the proposed CNN architecture. The overall concept\u2019s performance is validated and stress-tested by using the MAHNOB and DEAP datasets. The results demonstrate a promising robustness improvement regarding various evaluation metrics. 
We could increase the accuracy for subject-independent classification to 78% and 82% for MAHNOB and DEAP, respectively, and to 81% and 85% for subject-dependent classification for MAHNOB and DEAP, respectively (4 classes\/labels). The work shows clearly that, while using solely the non-intrusive EDA sensors, a robust classification of human emotion is possible even without involving additional\/other physiological signals.<\/jats:p>","DOI":"10.3390\/s19071659","type":"journal-article","created":{"date-parts":[[2019,4,8]],"date-time":"2019-04-08T11:54:52Z","timestamp":1554724492000},"page":"1659","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":97,"title":["A Deep-Learning Model for Subject-Independent Human Emotion Recognition Using Electrodermal Activity Sensors"],"prefix":"10.3390","volume":"19","author":[{"given":"Fadi","family":"Al Machot","sequence":"first","affiliation":[{"name":"Research Center Borstel\u2014Leibniz Lung Center, 23845 Borstel, Germany"}]},{"given":"Ali","family":"Elmachot","sequence":"additional","affiliation":[{"name":"Faculty of Mechanical and Electrical Engineering, University of Damascus, Damascus, Syria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7941-5900","authenticated-orcid":false,"given":"Mouhannad","family":"Ali","sequence":"additional","affiliation":[{"name":"Institute for Smart Systems Technologies, Alpen-Adria University, 9020 Klagenfurt, Austria"}]},{"given":"Elyan","family":"Al Machot","sequence":"additional","affiliation":[{"name":"Carl Gustav Carus Faculty of Medicine, Dresden University of Technology, 01069 Dresden, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0773-9476","authenticated-orcid":false,"given":"Kyandoghere","family":"Kyamakya","sequence":"additional","affiliation":[{"name":"Institute for Smart Systems Technologies, Alpen-Adria University, 9020 Klagenfurt, 
Austria"}]}],"member":"1968","published-online":{"date-parts":[[2019,4,7]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Suryadevara, N.K., Quazi, M., and Mukhopadhyay, S.C. (2012, January 26\u201329). Intelligent sensing systems for measuring wellness indices of the daily activities for the elderly. Proceedings of the 2012 8th IEEE International Conference on Intelligent Environments (IE), Guanajuato, Mexico.","DOI":"10.1109\/IE.2012.49"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Al Machot, F., Mosa, A.H., Dabbour, K., Fasih, A., Schwarzlmuller, C., Ali, M., and Kyamakya, K. (2011, January 25\u201327). A novel real-time emotion detection system from audio streams based on bayesian quadratic discriminate classifier for adas. Proceedings of the 2011 Joint 3rd Int\u2019l Workshop on IEEE Nonlinear Dynamics and Synchronization (INDS) & 16th Int\u2019l Symposium on Theoretical Electrical Engineering (ISTET), Klagenfurt, Austria.","DOI":"10.1109\/INDS.2011.6024783"},{"key":"ref_3","first-page":"4","article-title":"Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion","volume":"5","author":"Krause","year":"1987","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1037\/0003-066X.50.5.372","article-title":"The emotion probe: Studies of motivation and attention","volume":"50","author":"Lang","year":"1995","journal-title":"Am. Psychol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"2067","DOI":"10.1109\/TPAMI.2008.26","article-title":"Emotion recognition based on physiological changes in music listening","volume":"30","author":"Kim","year":"2008","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Ali, M., Mosa, A.H., Al Machot, F., and Kyamakya, K. (2016, January 5\u20138). EEG-based emotion recognition approach for e-healthcare applications. 
Proceedings of the 2016 IEEE Eighth International Conference on Ubiquitous and Future Networks (ICUFN), Vienna, Austria.","DOI":"10.1109\/ICUFN.2016.7536936"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Kim, Y., Lee, H., and Provost, E.M. (2013, January 26\u201331). Deep learning for robust feature generation in audiovisual emotion recognition. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6638346"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1109\/TPAMI.2008.52","article-title":"A survey of affect recognition methods: Audio, visual, and spontaneous expressions","volume":"31","author":"Zeng","year":"2009","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1109\/TAMD.2015.2431497","article-title":"Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks","volume":"7","author":"Zheng","year":"2015","journal-title":"IEEE Trans. Auton. Mental Dev."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Ranganathan, H., Chakraborty, S., and Panchanathan, S. (2016, January 7\u20139). Multimodal emotion recognition using deep learning architectures. Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA.","DOI":"10.1109\/WACV.2016.7477679"},{"key":"ref_12","first-page":"1770","article-title":"Approximation of phenol concentration using computational intelligence methods based on signals from the metal-oxide sensor array","volume":"15","author":"Rzecki","year":"2015","journal-title":"IEEE Sens. 
J."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"117","DOI":"10.1016\/j.snb.2013.10.065","article-title":"Classification of tea specimens using novel hybrid artificial intelligence methods","volume":"192","author":"Maziarz","year":"2014","journal-title":"Sens. Actuators B Chem."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"411","DOI":"10.1016\/j.compbiomed.2018.09.009","article-title":"Arrhythmia detection using deep convolutional neural network with long duration ECG signals","volume":"102","author":"Tan","year":"2018","journal-title":"Comput. Biol. Med."},{"key":"ref_15","unstructured":"P\u0142awiak, P., and Acharya, U.R. (2019, April 05). Novel Deep Genetic Ensemble of Classifiers for Arrhythmia Detection Using ECG Signals. Available online: https:\/\/www.researchgate.net\/profile\/Pawel_Plawiak\/publication\/329782366_Novel_Deep_Genetic_Ensemble_of_Classifiers_for_Arrhythmia_Detection_Using_ECG_Signals\/links\/5c1bad6792851c22a338cd02\/Novel-Deep-Genetic-Ensemble-of-Classifiers-for-Arrhythmia-Detection-Using-ECG-Signals.pdf."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"874","DOI":"10.1037\/a0017399","article-title":"Emotion recognition across cultures: The influence of ethnicity on empathic accuracy and physiological linkage","volume":"9","author":"Soto","year":"2009","journal-title":"Emotion"},{"key":"ref_17","unstructured":"Ooi, J.S.K., Ahmad, S.A., Chong, Y.Z., Ali, S.H.M., Ai, G., and Wagatsuma, H. (2016, January 4\u20137). Driver emotion recognition framework based on electrodermal activity measurements during simulated driving conditions. 
Proceedings of the 2016 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), Kuala Lumpur, Malaysia."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"716","DOI":"10.1109\/JSEN.2016.2623677","article-title":"Arousal and valence recognition of affective sounds based on electrodermal activity","volume":"17","author":"Greco","year":"2017","journal-title":"IEEE Sens. J."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"204","DOI":"10.1111\/1469-8986.3720204","article-title":"Affective reactions to acoustic stimuli","volume":"37","author":"Bradley","year":"2000","journal-title":"Psychophysiology"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1109\/T-AFFC.2012.28","article-title":"Directing physiology and mood through music: Validation of an affective music player","volume":"4","author":"Janssen","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"186","DOI":"10.1016\/j.cmpb.2016.01.002","article-title":"Multimodal analysis of startle type responses","volume":"129","author":"Kukolja","year":"2016","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_22","unstructured":"Keren, G., Kirschstein, T., Marchi, E., Ringeval, F., and Schuller, B. (2019, April 05). END-TO-END Learning for Dimensional Emotion Recognition from Physiological Signals. Available online: https:\/\/ieeexplore.ieee.org\/document\/8019533."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Weber, R., Barrielle, V., Soladi\u00e9, C., and S\u00e9guier, R. (2016, January 15\u201319). High-level geometry-based features of video modality for emotion prediction. 
Proceedings of the 6th ACM International Workshop on Audio\/Visual Emotion Challenge, Amsterdam, The Netherlands.","DOI":"10.1145\/2988257.2988262"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Povolny, F., Matejka, P., Hradis, M., Popkov\u00e1, A., Otrusina, L., Smrz, P., Wood, I., Robin, C., and Lamel, L. (2016, January 15\u201319). Multimodal emotion recognition for AVEC 2016 challenge. Proceedings of the 6th ACM International Workshop on Audio\/Visual Emotion Challenge, Amsterdam, The Netherlands.","DOI":"10.1145\/2988257.2988268"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TSMCA.2008.918624","article-title":"Toward emotion recognition in car-racing drivers: A biosignal processing approach","volume":"38","author":"Katsis","year":"2008","journal-title":"IEEE Trans. Syst. Man Cybern. Part A Syst. Hum."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Ali, M., Al Machot, F., Mosa, A.H., and Kyamakya, K. (2016). CNN Based Subject-Independent Driver Emotion Recognition System Involving Physiological Signals for ADAS. Advanced Microsystems for Automotive Applications 2016, Springer.","DOI":"10.1007\/978-3-319-44766-7_11"},{"key":"ref_27","first-page":"147","article-title":"Emotion pattern recognition using physiological signals","volume":"172","author":"Niu","year":"2014","journal-title":"Sens. Trans."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Xia, V., Jaques, N., Taylor, S., Fedor, S., and Picard, R. (2015, January 12). Active learning for electrodermal activity classification. Proceedings of the 2015 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA.","DOI":"10.1109\/SPMB.2015.7405467"},{"key":"ref_29","unstructured":"Paragliola, G., and Coronato, A. (2019, April 05). A Deep Learning-Based Approach for the Recognition of Sleep Disorders in Patients with Cognitive Diseases: A Case Study. 
Available online: https:\/\/annals-csis.org\/Volume_12\/drp\/pdf\/532.pdf."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Al Machot, F., Ali, M., Ranasinghe, S., Mosa, A.H., and Kyandoghere, K. (2018, January 26\u201329). Improving Subject-independent Human Emotion Recognition Using Electrodermal Activity Sensors for Active and Assisted Living. Proceedings of the 11th ACM PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece.","DOI":"10.1145\/3197768.3201523"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1109\/T-AFFC.2011.25","article-title":"A multimodal database for affect recognition and implicit tagging","volume":"3","author":"Soleymani","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/0005-7916(94)90063-9","article-title":"Measuring emotion: The self-assessment manikin and the semantic differential","volume":"25","author":"Bradley","year":"1994","journal-title":"J. Behav. Ther. Exp. Psychiatry"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","article-title":"Deap: A database for emotion analysis; using physiological signals","volume":"3","author":"Koelstra","year":"2012","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_34","unstructured":"Frijda, N.H. (1986). The Emotions, Cambridge University Press."},{"key":"ref_35","first-page":"1995","article-title":"Convolutional networks for images, speech, and time series","volume":"3361","author":"LeCun","year":"1995","journal-title":"Handb. Brain Theory Neural Netw."},{"key":"ref_36","unstructured":"Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R.R. (arXiv, 2012). 
Improving neural networks by preventing co-adaptation of feature detectors, arXiv."},{"key":"ref_37","first-page":"2825","article-title":"Scikit-learn: Machine Learning in Python","volume":"12","author":"Pedregosa","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1007\/BF00994018","article-title":"Support-vector networks","volume":"20","author":"Cortes","year":"1995","journal-title":"Mach. Learn."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"175","DOI":"10.1080\/00031305.1992.10475879","article-title":"An introduction to kernel and nearest-neighbor nonparametric regression","volume":"46","author":"Altman","year":"1992","journal-title":"Am. Stat."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Webb, G.I. (2017). Na\u00efve Bayes. Encyclopedia of Machine Learning and Data Mining, Springer.","DOI":"10.1007\/978-1-4899-7687-1_581"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1010933404324","article-title":"Random forests","volume":"45","author":"Breiman","year":"2001","journal-title":"Mach. Learn."},{"key":"ref_42","unstructured":"Powers, D.M. (2019, April 05). Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness and Correlation. Available online: https:\/\/dspace2.flinders.edu.au\/xmlui\/handle\/2328\/27165."},{"key":"ref_43","unstructured":"Fukunaga, K. (2013). Introduction to Statistical Pattern Recognition, Academic Press."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Lawrence, I., and Lin, K. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 255\u2013268.","DOI":"10.2307\/2532051"},{"key":"ref_45","unstructured":"Bradley, M.M., and Lang, P.J. (2007). 
The International Affective Digitized Sounds (IADS-2): Affective Ratings of Sounds and Instruction Manual, University of Florida."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"548","DOI":"10.1017\/S0048577201000622","article-title":"The joint impact of mood state and task difficulty on cardiovascular and electrodermal reactivity in active coping","volume":"38","author":"Gendolla","year":"2001","journal-title":"Psychophysiology"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"599","DOI":"10.1080\/00223980.2012.727891","article-title":"Facial Expressions of Emotions: Recognition Accuracy and Affective Reactions During Late Childhood","volume":"147","author":"Mancini","year":"2013","journal-title":"J. Psychol."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/7\/1659\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T12:43:29Z","timestamp":1760186609000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/7\/1659"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,4,7]]},"references-count":47,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2019,4]]}},"alternative-id":["s19071659"],"URL":"https:\/\/doi.org\/10.3390\/s19071659","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,4,7]]}}}