{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T07:34:53Z","timestamp":1773387293745,"version":"3.50.1"},"reference-count":21,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T00:00:00Z","timestamp":1604966400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T00:00:00Z","timestamp":1604966400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Korea government","award":["NRF-2020R1A4A1019191"],"award-info":[{"award-number":["NRF-2020R1A4A1019191"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimed Tools Appl"],"published-print":{"date-parts":[[2021,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Emotion recognition is one of the most active fields in affective computing research. Recognizing emotions is an important task for facilitating communication between machines and humans. However, it remains very challenging owing to the lack of ethnically diverse databases. In particular, emotional expressions tend to differ considerably between Western and Eastern people, so diverse emotion databases are required for studying emotional expression. However, the majority of well-known emotion databases focus on Western people, who exhibit different characteristics from Eastern people. In this study, we constructed a novel emotion dataset, the Korean Video Dataset for Emotion Recognition in the Wild (KVDERW), containing more than 1200 video clips collected from Korean movies under conditions similar to the real world, with the goal of studying the emotions of Eastern people, particularly Korean people. 
Additionally, we developed a semi-automatic video emotion labelling tool that can be used to generate video clips and annotate the emotions in the clips.<\/jats:p>","DOI":"10.1007\/s11042-020-10106-1","type":"journal-article","created":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T12:27:16Z","timestamp":1605011236000},"page":"9479-9492","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":24,"title":["Korean video dataset for emotion recognition in the wild"],"prefix":"10.1007","volume":"80","author":[{"given":"Trinh Le Ba","family":"Khanh","sequence":"first","affiliation":[]},{"given":"Soo-Hyung","family":"Kim","sequence":"additional","affiliation":[]},{"given":"Gueesang","family":"Lee","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3024-5060","authenticated-orcid":false,"given":"Hyung-Jeong","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Eu-Tteum","family":"Baek","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,11,10]]},"reference":[{"key":"10106_CR1","doi-asserted-by":"publisher","unstructured":"Avola D, Cinque L, Fagioli A, Foresti GL, Massaroni C (2020) Deep temporal analysis for non-acted body affect recognition. In: IEEE Transactions on Affective Computing, https:\/\/doi.org\/10.1109\/TAFFC.2020.3003816","DOI":"10.1109\/TAFFC.2020.3003816"},{"key":"10106_CR2","doi-asserted-by":"publisher","first-page":"78","DOI":"10.4236\/jsip.2017.82006","volume":"8","author":"G Benitez-Garcia","year":"2017","unstructured":"Benitez-Garcia G, Nakamura T, Kaneko M (2017) Methodical analysis of Western-Caucasian and East-Asian basic facial expressions of emotions based on specific facial regions. 
Journal of Signal and Information Processing 8:78\u201398","journal-title":"Journal of Signal and Information Processing"},{"issue":"4","key":"10106_CR3","doi-asserted-by":"publisher","first-page":"335","DOI":"10.1007\/s10579-008-9076-6","volume":"42","author":"C Busso","year":"2008","unstructured":"Busso C, Bulut M, Lee CC, Kazemzadeh A, Mower E, et al. (2008) IEMOCAP: Interactive emotional dyadic motion capture database. Lang Resour Eval 42(4):335\u2013359","journal-title":"Lang Resour Eval"},{"key":"10106_CR4","doi-asserted-by":"crossref","unstructured":"Dhall A, Goecke R, Joshi J, Wagner M, Gedeon T (2011) Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark. In: 2011 IEEE International Conference on Computer Vision Workshops, pp 2106\u20132112","DOI":"10.1109\/ICCVW.2011.6130508"},{"key":"10106_CR5","doi-asserted-by":"crossref","unstructured":"Dhall A, Goecke R, Joshi J, Wagner M, Gedeon T (2013) Emotion recognition in the wild challenge 2013. In: Proceedings of the 15th ACM International Conference on Multimodal Interaction, pp 509\u2013516","DOI":"10.1145\/2522848.2531739"},{"key":"10106_CR6","doi-asserted-by":"crossref","unstructured":"Dhall A, Kaur A, Goecke R, Gedeon T (2018) EmotiW 2018: Audio-video, student engagement and group-level affect prediction. In: Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp 653\u2013656","DOI":"10.1145\/3242969.3264993"},{"key":"10106_CR7","unstructured":"Douglas-Cowie E, Cowie R, Schroder M (2000) A new emotion database: considerations, sources and scope. In: Proc. ISCA ITRW on Speech and Emotion, pp 39\u201344"},{"issue":"2","key":"10106_CR8","doi-asserted-by":"publisher","first-page":"124","DOI":"10.1037\/h0030377","volume":"17","author":"P Ekman","year":"1971","unstructured":"Ekman P, Friesen WV (1971) Constants across cultures in the face and emotion. 
J Pers Soc Psychol 17(2):124\u2013129","journal-title":"J Pers Soc Psychol"},{"key":"10106_CR9","doi-asserted-by":"crossref","unstructured":"Hu P, Ramanan D (2017) Finding tiny faces. In: IEEE Conference on Computer Vision and Pattern Recognition, pp 1522\u20131530","DOI":"10.1109\/CVPR.2017.166"},{"issue":"18","key":"10106_CR10","doi-asserted-by":"publisher","first-page":"1543","DOI":"10.1016\/j.cub.2009.07.051","volume":"19","author":"RE Jack","year":"2009","unstructured":"Jack RE, Blais C, Scheepers C, Schyns PG, Caldara R (2009) Cultural confusions show that facial expressions are not universal. Curr Biol 19 (18):1543\u20131548","journal-title":"Curr Biol"},{"key":"10106_CR11","doi-asserted-by":"publisher","first-page":"51","DOI":"10.1007\/978-3-319-08491-6_5","volume":"300","author":"A Kolakowska","year":"2014","unstructured":"Kolakowska A, Landowska A, Szwoch M, Szwoch W, Wr\u00f3bel M R (2014) Emotion recognition and its applications. Advances in Intelligent Systems and Computing 300:51\u201362","journal-title":"Advances in Intelligent Systems and Computing"},{"key":"10106_CR12","unstructured":"Li S, Deng W (2018) Deep Facial Expression Recognition: A Survey. arXiv:1804.08348, Accessed 30 Sep 2019"},{"issue":"2","key":"10106_CR13","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1016\/j.imr.2016.03.004","volume":"5","author":"N Lim","year":"2016","unstructured":"Lim N (2016) Cultural differences in emotion: East-West differences in emotional arousal level. Integrative Medicine Research 5(2):105\u2013109","journal-title":"Integrative Medicine Research"},{"key":"10106_CR14","doi-asserted-by":"publisher","unstructured":"Muszynski M., et al. (2019) Recognizing Induced Emotions of Movie Audiences From Multimodal Information. 
In: IEEE Transactions on Affective Computing, https:\/\/doi.org\/10.1109\/TAFFC.2019.2902091","DOI":"10.1109\/TAFFC.2019.2902091"},{"key":"10106_CR15","doi-asserted-by":"crossref","unstructured":"Pantic M, Valstar M, Rademaker R, Maat L (2005) Web-based database for facial expression analysis. In: Proceedings of the IEEE International Conference on Multimedia and Expo, pp 317\u2013321","DOI":"10.1109\/ICME.2005.1521424"},{"issue":"6","key":"10106_CR16","doi-asserted-by":"publisher","first-page":"1161","DOI":"10.1037\/h0077714","volume":"39","author":"JA Russell","year":"1980","unstructured":"Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161\u20131178","journal-title":"J Pers Soc Psychol"},{"key":"10106_CR17","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, Accessed 1 Sep 2019"},{"issue":"8","key":"10106_CR18","doi-asserted-by":"publisher","first-page":"2144","DOI":"10.1016\/j.patcog.2013.01.032","volume":"46","author":"K Yu","year":"2013","unstructured":"Yu K, Wang Z, Zhuo L, Wang J, Chi Z, Feng D (2013) Learning realistic facial expressions from web images. Pattern Recogn 46(8):2144\u20132155","journal-title":"Pattern Recogn"},{"key":"10106_CR19","doi-asserted-by":"crossref","unstructured":"Zafeiriou S, Kollias D, Nicolaou M, Papaioannou A, Zhao G, Kotsia I (2017) Aff-wild: Valence and arousal in-the-wild challenge. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp 1980\u20131987","DOI":"10.1109\/CVPRW.2017.248"},{"issue":"12","key":"10106_CR20","doi-asserted-by":"publisher","first-page":"1067","DOI":"10.1016\/j.imavis.2014.09.005","volume":"32","author":"L Zhang","year":"2014","unstructured":"Zhang L, Tjondronegoro D, Chandran V (2014) Representation of facial expression categories in continuous arousal-valence space: Feature and correlation. 
Image Vis Comput 32(12):1067\u20131079","journal-title":"Image Vis Comput"},{"issue":"10","key":"10106_CR21","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang K, Zhang Z, Li Z, Qiao Y (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23(10):1499\u20131503","journal-title":"IEEE Signal Processing Letters"}],"container-title":["Multimedia Tools and Applications"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-020-10106-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s11042-020-10106-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-020-10106-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,3,1]],"date-time":"2021-03-01T21:47:43Z","timestamp":1614635263000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s11042-020-10106-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,11,10]]},"references-count":21,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2021,3]]}},"alternative-id":["10106"],"URL":"https:\/\/doi.org\/10.1007\/s11042-020-10106-1","relation":{},"ISSN":["1380-7501","1573-7721"],"issn-type":[{"value":"1380-7501","type":"print"},{"value":"1573-7721","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,11,10]]},"assertion":[{"value":"14 October 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 September 
2020","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 October 2020","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 November 2020","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}