{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T08:55:23Z","timestamp":1773392123431,"version":"3.50.1"},"reference-count":38,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2021,1,1]],"date-time":"2021-01-01T00:00:00Z","timestamp":1609459200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,1,15]],"date-time":"2021-01-15T00:00:00Z","timestamp":1610668800000},"content-version":"vor","delay-in-days":14,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Projekt DEAL"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Ambient Intell Human Comput"],"published-print":{"date-parts":[[2021,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>We address the problem of facial expression analysis. The proposed approach predicts both basic emotion and valence\/arousal values as a continuous measure for the emotional state. Experimental results, including cross-database evaluation on the AffectNet, Aff-Wild, and AFEW datasets, show that our approach predicts emotion categories and valence\/arousal values with high accuracy and that the simultaneous learning of discrete categories and continuous values improves the prediction of both. 
In addition, we use our approach to measure the emotional states of users in a Human-Robot-Collaboration (HRC) scenario, show how these emotional states are affected by multiple difficulties that arise for the test subjects, and examine how different feedback mechanisms counteract negative emotions users experience while interacting with a robot system.<\/jats:p>","DOI":"10.1007\/s12652-020-02851-w","type":"journal-article","created":{"date-parts":[[2021,1,15]],"date-time":"2021-01-15T09:03:26Z","timestamp":1610701406000},"page":"57-73","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Simultaneous prediction of valence \/ arousal and emotion categories and its application in an HRC scenario"],"prefix":"10.1007","volume":"12","author":[{"given":"Sebastian","family":"Handrich","sequence":"first","affiliation":[]},{"given":"Laslo","family":"Dinges","sequence":"additional","affiliation":[]},{"given":"Ayoub","family":"Al-Hamadi","sequence":"additional","affiliation":[]},{"given":"Philipp","family":"Werner","sequence":"additional","affiliation":[]},{"given":"Frerk","family":"Saxen","sequence":"additional","affiliation":[]},{"given":"Zaher","family":"Al Aghbari","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,1,15]]},"reference":[{"key":"2851_CR1","doi-asserted-by":"publisher","first-page":"11761","DOI":"10.1109\/ACCESS.2019.2963113","volume":"8","author":"F Ahmed","year":"2020","unstructured":"Ahmed F, Bari ASMH, Gavrilova ML (2020) Emotion recognition from body movement. IEEE Access 8:11761\u201311781. 
https:\/\/doi.org\/10.1109\/ACCESS.2019.2963113","journal-title":"IEEE Access"},{"issue":"6","key":"2851_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.9734\/BJAST\/2016\/27294","volume":"16","author":"A Al-Hamadi","year":"2016","unstructured":"Al-Hamadi A, Saeed A, Niese R, Handrich S, Neumann H (2016) Emotional trace: mapping of facial expression to valence-arousal space. Br J Appl Sci Technol 16(6):1\u201314. https:\/\/doi.org\/10.9734\/BJAST\/2016\/27294","journal-title":"Br J Appl Sci Technol"},{"key":"2851_CR3","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2019.8794176","author":"M Anvaripour","year":"2019","unstructured":"Anvaripour M, Khoshnam M, Menon C, Saif M (2019) Safe human robot cooperation in task performed on the shared load. Proc IEEE Int Conf Robot Autom. https:\/\/doi.org\/10.1109\/ICRA.2019.8794176","journal-title":"Proc IEEE Int Conf Robot Autom"},{"issue":"8","key":"2851_CR4","doi-asserted-by":"publisher","first-page":"1125","DOI":"10.1109\/TNSRE.2016.2583464","volume":"25","author":"D Ao","year":"2017","unstructured":"Ao D, Song R, Gao J (2017) Movement performance of human-robot cooperation control based on EMG-driven hill-type and proportional models for an ankle power-assist exoskeleton robot. IEEE Trans Neural Syst Rehab Eng 25(8):1125\u20131134. https:\/\/doi.org\/10.1109\/TNSRE.2016.2583464","journal-title":"IEEE Trans Neural Syst Rehab Eng"},{"issue":"5","key":"2851_CR5","doi-asserted-by":"publisher","first-page":"709","DOI":"10.1007\/s12369-019-00593-0","volume":"11","author":"C Br\u00f6hl","year":"2019","unstructured":"Br\u00f6hl C, Nelles J, Brandl C, Mertens A, Nitsch V (2019) Human robot collaboration acceptance model: development and comparison for Germany, Japan, China and the USA. Int J Soc Robot 11(5):709\u2013726. 
https:\/\/doi.org\/10.1007\/s12369-019-00593-0","journal-title":"Int J Soc Robot"},{"key":"2851_CR6","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2017.246","author":"WY Chang","year":"2017","unstructured":"Chang WY, Hsu SH, Chien JH (2017) FATAUVA-Net: an integrated deep learning framework for facial attribute recognition, action unit detection, and valence-arousal estimation. IEEE Conf Comput Vis Pattern Recognit Workshops. https:\/\/doi.org\/10.1109\/CVPRW.2017.246","journal-title":"IEEE Conf Comput Vis Pattern Recognit Workshops"},{"key":"2851_CR7","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2017.13","author":"WS Chu","year":"2017","unstructured":"Chu WS, De la Torre F, Cohn JF (2017) Learning spatial and temporal cues for multi-label facial action unit detection. IEEE Int Conf Autom Face Gesture Recogn. https:\/\/doi.org\/10.1109\/FG.2017.13","journal-title":"IEEE Int Conf Autom Face Gesture Recogn"},{"key":"2851_CR8","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1016\/j.mechatronics.2018.03.006","volume":"51","author":"K Darvish","year":"2018","unstructured":"Darvish K, Wanderlingh F, Bruno B, Simetti E, Mastrogiovanni F, Casalino G (2018) Flexible human-robot cooperation models for assisted shop-floor tasks. Mechatronics 51:97\u2013114. https:\/\/doi.org\/10.1016\/j.mechatronics.2018.03.006","journal-title":"Mechatronics"},{"issue":"1","key":"2851_CR9","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1007\/s11263-014-0733-5","volume":"111","author":"M Everingham","year":"2015","unstructured":"Everingham M, Eslami SMA, Van Gool L, Williams CKI, Winn J, Zisserman A (2015) The Pascal visual object classes challenge: a retrospective. Int J Comput Vis 111(1):98\u2013136. 
https:\/\/doi.org\/10.1007\/s11263-014-0733-5","journal-title":"Int J Comput Vis"},{"key":"2851_CR10","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2017.245","author":"B Hasani","year":"2017","unstructured":"Hasani B, Mahoor MH (2017) Facial affect estimation in the wild using deep residual and convolutional networks. IEEE Conf Comput Vis Pattern Recognit Workshops. https:\/\/doi.org\/10.1109\/CVPRW.2017.245","journal-title":"IEEE Conf Comput Vis Pattern Recognit Workshops"},{"key":"2851_CR11","doi-asserted-by":"publisher","DOI":"10.1109\/ICSMC.2012.6378303","author":"H Hoffmann","year":"2012","unstructured":"Hoffmann H, Scheck A, Schuster T, Walter S, Limbrecht K, Traue HC, Kessler H (2012) Mapping discrete emotions into the dimensional space: an empirical approach. IEEE Int Conf Syst Man Cybern. https:\/\/doi.org\/10.1109\/ICSMC.2012.6378303","journal-title":"IEEE Int Conf Syst Man Cybern"},{"key":"2851_CR12","doi-asserted-by":"publisher","first-page":"1388","DOI":"10.3389\/fpsyg.2020.01388","volume":"11","author":"TTA H\u00f6fling","year":"2020","unstructured":"H\u00f6fling TTA, Gerdes ABM, F\u00f6hl U, Alpers GW (2020) Read my face: automatic facial coding versus psychophysiological indicators of emotional valence and arousal. Front Psychol 11:1388. https:\/\/doi.org\/10.3389\/fpsyg.2020.01388","journal-title":"Front Psychol"},{"issue":"5","key":"2851_CR13","doi-asserted-by":"publisher","first-page":"1787","DOI":"10.1007\/s12652-017-0644-8","volume":"10","author":"Y Huang","year":"2019","unstructured":"Huang Y, Tian K, Wu A, Zhang G (2019) Feature fusion methods research based on deep belief networks for speech emotion recognition under noise condition. J Ambient Intell Humaniz Comput 10(5):1787\u20131798. 
https:\/\/doi.org\/10.1007\/s12652-017-0644-8","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"2851_CR14","volume-title":"Emotions and affect in human factors and human-computer interaction","author":"M Jeon","year":"2017","unstructured":"Jeon M (2017) Emotions and affect in human factors and human-computer interaction, 1st edn. Academic Press Inc, Orlando","edition":"1"},{"key":"2851_CR15","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2015.12","author":"P Khorrami","year":"2015","unstructured":"Khorrami P, Paine TL, Huang TS (2015) Do deep neural networks learn facial action units when doing expression recognition? IEEE Int Conf Comput Vis Workshop (ICCVW). https:\/\/doi.org\/10.1109\/ICCVW.2015.12","journal-title":"IEEE Int Conf Comput Vis Workshop (ICCVW)"},{"issue":"6\u20137","key":"2851_CR16","doi-asserted-by":"publisher","first-page":"907","DOI":"10.1007\/s11263-019-01158-4","volume":"127","author":"D Kollias","year":"2019","unstructured":"Kollias D, Tzirakis P, Nicolaou MA, Papaioannou A, Zhao G, Schuller B, Kotsia I, Zafeiriou S (2019) Deep affect prediction in-the-wild: aff-wild database and challenge, deep architectures, and beyond. Int J Comput Vis 127(6\u20137):907\u2013929. https:\/\/doi.org\/10.1007\/s11263-019-01158-4","journal-title":"Int J Comput Vis"},{"key":"2851_CR17","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1016\/j.imavis.2017.02.001","volume":"65","author":"J Kossaifi","year":"2017","unstructured":"Kossaifi J, Tzimiropoulos G, Todorovic S, Pantic M (2017) AFEW-VA database for valence and arousal estimation in-the-wild. Image Vis Comput 65:23\u201336. https:\/\/doi.org\/10.1016\/j.imavis.2017.02.001","journal-title":"Image Vis Comput"},{"issue":"6","key":"2851_CR18","doi-asserted-by":"publisher","first-page":"444","DOI":"10.1016\/j.tics.2016.03.011","volume":"20","author":"PA Kragel","year":"2016","unstructured":"Kragel PA, LaBar KS (2016) Decoding the nature of emotion in the brain. 
Trends Cognit Sci 20(6):444\u2013455. https:\/\/doi.org\/10.1016\/j.tics.2016.03.011","journal-title":"Trends Cognit Sci"},{"issue":"7","key":"2851_CR19","doi-asserted-by":"publisher","first-page":"2701","DOI":"10.1007\/s12652-019-01329-8","volume":"11","author":"PR Krishnappa Babu","year":"2020","unstructured":"Krishnappa Babu PR, Lahiri U (2020) Classification approach for understanding implications of emotions using eye-gaze. J Ambient Intell Humaniz Comput 11(7):2701\u20132713. https:\/\/doi.org\/10.1007\/s12652-019-01329-8","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"2851_CR20","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2017.244","author":"J Li","year":"2017","unstructured":"Li J, Chen Y, Xiao S, Zhao J, Roy S, Feng J, Yan S, Sim T (2017) Estimation of affective level in the wild with multiple memory networks. IEEE Conf Comput Vis Pattern Recognit Workshops (CVPRW). https:\/\/doi.org\/10.1109\/CVPRW.2017.244","journal-title":"IEEE Conf Comput Vis Pattern Recognit Workshops (CVPRW)"},{"key":"2851_CR21","doi-asserted-by":"publisher","first-page":"740","DOI":"10.1007\/978-3-319-10602-1_48","volume-title":"European conference on computer vision","author":"TY Lin","year":"2014","unstructured":"Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL (2014) Microsoft COCO: common objects in context. European conference on computer vision. Springer, Berlin, pp 740\u2013755. https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48"},{"key":"2851_CR22","first-page":"266","volume-title":"An approach to environmental psychology","author":"A Mehrabian","year":"1974","unstructured":"Mehrabian A, Russell JA (1974) An approach to environmental psychology. 
MIT Press, Cambridge, p 266"},{"issue":"1","key":"2851_CR23","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","volume":"10","author":"A Mollahosseini","year":"2019","unstructured":"Mollahosseini A, Hasani B, Mahoor MH (2019) AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Trans Affect Comput 10(1):18\u201331. https:\/\/doi.org\/10.1109\/TAFFC.2017.2740923","journal-title":"IEEE Trans Affect Comput"},{"issue":"1","key":"2851_CR24","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1016\/j.cirp.2016.04.035","volume":"65","author":"S Pellegrinelli","year":"2016","unstructured":"Pellegrinelli S, Moro FL, Pedrocchi N, Molinari Tosatti L, Tolio T (2016) A probabilistic approach to workspace sharing for human-robot cooperation in assembly tasks. CIRP Ann Manuf Technol 65(1):57\u201360. https:\/\/doi.org\/10.1016\/j.cirp.2016.04.035","journal-title":"CIRP Ann Manuf Technol"},{"key":"2851_CR25","unstructured":"Redmon JC, Bochkovskiy A (2018) Darknet framework.\u00a0Retrieved from https:\/\/github.com\/AlexeyAB\/darknet"},{"key":"2851_CR26","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.690","author":"J Redmon","year":"2017","unstructured":"Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. IEEE Conf Comput Vis Pattern Recognit. https:\/\/doi.org\/10.1109\/CVPR.2017.690","journal-title":"IEEE Conf Comput Vis Pattern Recognit"},{"issue":"6","key":"2851_CR27","doi-asserted-by":"publisher","first-page":"1161","DOI":"10.1037\/h0077714","volume":"39","author":"JA Russell","year":"1980","unstructured":"Russell JA (1980) A circumplex model of affect. J Personal Soc Psychol 39(6):1161\u20131178. 
https:\/\/doi.org\/10.1037\/h0077714","journal-title":"J Personal Soc Psychol"},{"issue":"8","key":"2851_CR28","doi-asserted-by":"publisher","first-page":"3187","DOI":"10.1007\/s12652-019-01485-x","volume":"11","author":"MG Salido Ortega","year":"2020","unstructured":"Salido Ortega MG, Rodr\u00edguez LF, Gutierrez-Garcia JO (2020) Towards emotion recognition from contextual information using machine learning. J Ambient Intell Humaniz Comput 11(8):3187\u20133207. https:\/\/doi.org\/10.1007\/s12652-019-01485-x","journal-title":"J Ambient Intell Humaniz Comput"},{"issue":"6","key":"2851_CR29","doi-asserted-by":"publisher","first-page":"2175","DOI":"10.1007\/s12652-017-0636-8","volume":"10","author":"A Samara","year":"2019","unstructured":"Samara A, Galway L, Bond R, Wang H (2019) Affective state detection via facial expression analysis within a human-computer interaction context. J Ambient Intell Humaniz Comput 10(6):2175\u20132184. https:\/\/doi.org\/10.1007\/s12652-017-0636-8","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"2851_CR30","unstructured":"S\u00e1r\u00e1ndi I, Linder T, Arras KO, Leibe B (2018) Synthetic occlusion augmentation with volumetric heatmaps for the 2018 ECCV PoseTrack challenge on 3D human pose estimation. In: ECCV."},{"key":"2851_CR31","doi-asserted-by":"publisher","DOI":"10.1109\/SMC.2019.8914593","author":"C Savur","year":"2019","unstructured":"Savur C, Kumar S, Sahin F (2019) A framework for monitoring human physiological response during human robot collaborative task. IEEE Int Conf Syst Man Cybern. https:\/\/doi.org\/10.1109\/SMC.2019.8914593","journal-title":"IEEE Int Conf Syst Man Cybern"},{"issue":"10","key":"2851_CR32","doi-asserted-by":"publisher","first-page":"3831","DOI":"10.1007\/s12652-019-01196-3","volume":"10","author":"J Seo","year":"2019","unstructured":"Seo J, Laine TH, Sohn KA (2019) Machine learning approaches for boredom classification using EEG. J Ambient Intell Humaniz Comput 10(10):3831\u20133846. 
https:\/\/doi.org\/10.1007\/s12652-019-01196-3","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"2851_CR33","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00246","author":"I Tautkute","year":"2018","unstructured":"Tautkute I, Trzcinski T, Bielski A (2018) I know how you feel: emotion recognition with facial landmarks. IEEE\/CVF Conf Comput Vis Pattern Recognit Workshops (CVPRW). https:\/\/doi.org\/10.1109\/CVPRW.2018.00246","journal-title":"IEEE\/CVF Conf Comput Vis Pattern Recognit Workshops (CVPRW)"},{"key":"2851_CR34","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2018.00081","author":"D Vinkemeier","year":"2018","unstructured":"Vinkemeier D, Valstar M, Gratch J (2018) Predicting folds in poker using action unit detectors and decision trees. IEEE Int Conf Autom Face Gesture Recognit. https:\/\/doi.org\/10.1109\/FG.2018.00081","journal-title":"IEEE Int Conf Autom Face Gesture Recognit"},{"issue":"5","key":"2851_CR35","doi-asserted-by":"publisher","first-page":"e0177239","DOI":"10.1371\/journal.pone.0177239","volume":"12","author":"M Wegrzyn","year":"2017","unstructured":"Wegrzyn M, Vogt M, Kireclioglu B, Schneider J, Kissler J (2017) Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLOS One 12(5):e0177239. https:\/\/doi.org\/10.1371\/journal.pone.0177239","journal-title":"PLOS One"},{"key":"2851_CR36","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2017.8273631","author":"P Werner","year":"2017","unstructured":"Werner P, Handrich S, Al-Hamadi A (2017) Facial action unit intensity estimation and feature relevance visualization with random regression forests. Seventh Int Conf Affect Comput Intell Interact (ACII). 
https:\/\/doi.org\/10.1109\/ACII.2017.8273631","journal-title":"Seventh Int Conf Affect Comput Intell Interact (ACII)"},{"key":"2851_CR37","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2019.2946774","author":"P Werner","year":"2019","unstructured":"Werner P, Lopez-Martinez D, Walter S, Al-Hamadi A, Gruss S, Picard R (2019) Automatic recognition methods supporting pain assessment: a survey. IEEE Trans Affect Comput. https:\/\/doi.org\/10.1109\/TAFFC.2019.2946774","journal-title":"IEEE Trans Affect Comput"},{"key":"2851_CR38","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2019.2951656","author":"L Zhang","year":"2020","unstructured":"Zhang L, Peng S, Winkler S (2020) PersEmoN: a deep network for joint analysis of apparent personality, emotion and their relationship. IEEE Trans Affect Comput. https:\/\/doi.org\/10.1109\/TAFFC.2019.2951656","journal-title":"IEEE Trans Affect Comput"}],"container-title":["Journal of Ambient Intelligence and Humanized Computing"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s12652-020-02851-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s12652-020-02851-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s12652-020-02851-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,2,19]],"date-time":"2021-02-19T22:42:44Z","timestamp":1613774564000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s12652-020-02851-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,1]]},"references-count":38,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,1]]}},"alternative-id":["2851"],"URL":"https:\/\/doi.org\/10.1007\/s12652-020-02851-w","relation":{},"ISSN":["1868-5137","1868-5145"],"issn-type":[{"value":"1868-5137","type":"print"},{"value":"1868-5145","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1]]},"assertion":[{"value":"15 July 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 November 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 January 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Compliance with ethical standards"}},{"value":"The authors declare they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}