{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,15]],"date-time":"2026-01-15T08:53:26Z","timestamp":1768467206832,"version":"3.49.0"},"reference-count":41,"publisher":"Institute of Electronics, Information and Communications Engineers (IEICE)","issue":"8","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IEICE Trans. Inf. &amp; Syst."],"published-print":{"date-parts":[[2018,8,1]]},"DOI":"10.1587\/transinf.2017edp7362","type":"journal-article","created":{"date-parts":[[2018,7,31]],"date-time":"2018-07-31T23:10:06Z","timestamp":1533078606000},"page":"2092-2100","source":"Crossref","is-referenced-by-count":4,"title":["Construction of Spontaneous Emotion Corpus from Indonesian TV Talk Shows and Its Application on Multimodal Emotion Recognition"],"prefix":"10.1587","volume":"E101.D","author":[{"given":"Nurul","family":"LUBIS","sequence":"first","affiliation":[{"name":"Augmented Human Communication Lab, Nara Institute of Science and Technology"}]},{"given":"Dessi","family":"LESTARI","sequence":"additional","affiliation":[{"name":"School of Informatics and Electrical Engineering, Institut Teknologi Bandung"}]},{"given":"Sakriani","family":"SAKTI","sequence":"additional","affiliation":[{"name":"Augmented Human Communication Lab, Nara Institute of Science and Technology"},{"name":"RIKEN, Center for Advanced Intelligence Project AIP"}]},{"given":"Ayu","family":"PURWARIANTI","sequence":"additional","affiliation":[{"name":"School of Informatics and Electrical Engineering, Institut Teknologi Bandung"}]},{"given":"Satoshi","family":"NAKAMURA","sequence":"additional","affiliation":[{"name":"Augmented Human Communication Lab, Nara Institute of Science and Technology"},{"name":"RIKEN, Center for Advanced Intelligence Project AIP"}]}],"member":"532","reference":[{"key":"1","doi-asserted-by":"publisher","unstructured":"[1] M. Schroder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M.T. Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, and M. Wollmer, \u201cBuilding autonomous sensitive artificial listeners,\u201d IEEE Trans. Affective Computing, vol.3, no.2, pp.165-183, 2012. 10.1109\/t-affc.2011.34","DOI":"10.1109\/T-AFFC.2011.34"},{"key":"2","doi-asserted-by":"crossref","unstructured":"[2] K.J. Williams, J.C. Peters, and C.L. Breazeal, \u201cTowards leveraging the driver&apos;s mobile device for an intelligent, sociable in-car robotic assistant,\u201d 2013 IEEE Intelligent Vehicles Symposium (IV), pp.369-376, IEEE, 2013. 10.1109\/ivs.2013.6629497","DOI":"10.1109\/IVS.2013.6629497"},{"key":"3","doi-asserted-by":"publisher","unstructured":"[3] D. Benyon, B. Gamb\u00e4ck, P. Hansen, O. Mival, and N. Webb, \u201cHow was your day? evaluating a conversational companion,\u201d IEEE Trans. Affective Computing, vol.4, no.3, pp.299-311, 2013. 10.1109\/t-affc.2013.15","DOI":"10.1109\/T-AFFC.2013.15"},{"key":"4","doi-asserted-by":"crossref","unstructured":"[4] B. Schuller, S. Steidl, and A. Batliner, \u201cThe INTERSPEECH 2009 emotion challenge,\u201d INTERSPEECH, pp.312-315, Citeseer, 2009.","DOI":"10.21437\/Interspeech.2009-103"},{"key":"5","unstructured":"[5] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. M\u00fcller, and S. Narayanan, \u201cThe INTERSPEECH 2010 paralinguistic challenge,\u201d INTERSPEECH, pp.2794-2797, 2010."},{"key":"6","doi-asserted-by":"crossref","unstructured":"[6] B. Schuller, M. Valstar, F. Eyben, G. McKeown, R. Cowie, and M. Pantic, \u201cAvec 2011-the first international audio\/visual emotion challenge,\u201d Affective Computing and Intelligent Interaction, pp.415-424, Springer, 2011. 10.1007\/978-3-642-24571-8_53","DOI":"10.1007\/978-3-642-24571-8_53"},{"key":"7","doi-asserted-by":"crossref","unstructured":"[7] B. Schuller, M. Valster, F. Eyben, R. Cowie, and M. Pantic, \u201cAvec 2012: the continuous audio\/visual emotion challenge,\u201d Proc. 14th ACM international conference on Multimodal interaction, pp.449-456, ACM, 2012. 10.1145\/2388676.2388776","DOI":"10.1145\/2388676.2388776"},{"key":"8","doi-asserted-by":"crossref","unstructured":"[8] F. Ringeval, B. Schuller, M. Valstar, S. Jaiswal, E. Marchi, D. Lalanne, R. Cowie, and M. Pantic, \u201cAv+ ec 2015: The first affect recognition challenge bridging across audio, video, and physiological data,\u201d Proc. 5th International Workshop on Audio\/Visual Emotion Challenge, pp.3-8, ACM, 2015. 10.1145\/2808196.2811642","DOI":"10.1145\/2808196.2811642"},{"key":"9","doi-asserted-by":"crossref","unstructured":"[9] M. Valstar, J. Gratch, B. Schuller, F. Ringeval, D. Lalanne, M.T. Torres, S. Scherer, G. Stratou, R. Cowie, and M. Pantic, \u201cAVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge,\u201d AVEC &apos;16 Proceedings of the 6th International Workshop on Audio\/Visual Emotion Challenge, pp.3-10, ACM 2016. 10.1145\/2988257.2988258","DOI":"10.1145\/2988257.2988258"},{"key":"10","doi-asserted-by":"crossref","unstructured":"[10] T. Polzehl, A. Schmitt, and F. Metze, \u201cApproaching multi-lingual emotion recognition from speech-on language dependency of acoustic\/prosodic features for anger detection,\u201d 2010.","DOI":"10.21437\/SpeechProsody.2010-123"},{"key":"11","doi-asserted-by":"crossref","unstructured":"[11] H. Sagha, J. Deng, M. Gavryukova, J. Han, and B. Schuller, \u201cCross lingual speech emotion recognition using canonical correlation analysis on principal component subspace,\u201d 2016 IEEE International Conference on Acoustics, Speech Signal Process. (ICASSP), pp.5800-5804, IEEE, 2016. 10.1109\/icassp.2016.7472789","DOI":"10.1109\/ICASSP.2016.7472789"},{"key":"12","unstructured":"[12] Y. Pan, P. Shen, and L. Shen, \u201cSpeech emotion recognition using support vector machine,\u201d Int. J. Smart Home, vol.6, no.2, pp.101-108, 2012."},{"key":"13","doi-asserted-by":"publisher","unstructured":"[13] J.C.P. Gonzaga, J.A. Seguerra, J.A. Turingan, M.P.A. Ulit, and R.A. Sagum, \u201cEmotional techy basyang: An automated filipino narrative storyteller,\u201d Int. J. Future Computer and Communication, vol.3, pp.271-274, Aug. 2014. 10.7763\/ijfcc.2014.v3.310","DOI":"10.7763\/IJFCC.2014.V3.310"},{"key":"14","doi-asserted-by":"crossref","unstructured":"[14] S. Sakti, A.A. Arman, S. Nakamura, and P. Hutagaol, \u201cIndonesian speech recognition for hearing and speaking impaired people,\u201d INTERSPEECH, 2004.","DOI":"10.21437\/Interspeech.2004-366"},{"key":"15","doi-asserted-by":"crossref","unstructured":"[15] S. Sakti, M. Paul, R. Maia, S. Sakai, N. Kimura, Y. Ashikari, E. Sumita, and S. Nakamura, \u201cToward translating Indonesian spoken utterances to\/from other languages,\u201d Proc. O-COCOSDA, pp.137-142, 2009. 10.1109\/icsda.2009.5278362","DOI":"10.1109\/ICSDA.2009.5278362"},{"key":"16","doi-asserted-by":"crossref","unstructured":"[16] E. Cahyaningtyas and D. Arifianto, \u201cHMM-based indonesian speech synthesis system with declarative and question sentences intonation,\u201d 2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp.153-158, IEEE, 2015. 10.1109\/ispacs.2015.7432756","DOI":"10.1109\/ISPACS.2015.7432756"},{"key":"17","doi-asserted-by":"publisher","unstructured":"[17] M. El Ayadi, M.S. Kamel, and F. Karray, \u201cSurvey on speech emotion recognition: Features, classification schemes, and databases,\u201d Pattern Recognit., vol.44, no.3, pp.572-587, 2011. 10.1016\/j.patcog.2010.09.020","DOI":"10.1016\/j.patcog.2010.09.020"},{"key":"18","doi-asserted-by":"publisher","unstructured":"[18] P.R. Shaver, U. Murdaya, and R.C. Fraley, \u201cStructure of the indonesian emotion lexicon,\u201d Asian journal of social psychology, vol.4, no.3, pp.201-224, 2001. 10.1111\/1467-839x.00086","DOI":"10.1111\/1467-839X.00086"},{"key":"19","doi-asserted-by":"publisher","unstructured":"[19] K.R. Scherer, R. Banse, and H.G. Wallbott, \u201cEmotion inferences from vocal expression correlate across languages and cultures,\u201d J. Cross-cultural psychology, vol.32, no.1, pp.76-92, 2001. 10.1177\/0022022101032001009","DOI":"10.1177\/0022022101032001009"},{"key":"20","doi-asserted-by":"crossref","unstructured":"[20] F. Koto and M. Adriani, \u201cHbe: Hashtag-based emotion lexicons for twitter sentiment analysis,\u201d Proc. 7th Forum for Information Retrieval Evaluation, pp.31-34, ACM, 2015. 10.1145\/2838706.2838718","DOI":"10.1145\/2838706.2838718"},{"key":"21","doi-asserted-by":"crossref","unstructured":"[21] J.E. The, A.F. Wicaksono, and M. Adriani, \u201cA two-stage emotion detection on indonesian tweets,\u201d 2015 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp.143-146, IEEE, 2015. 10.1109\/icacsis.2015.7415174","DOI":"10.1109\/ICACSIS.2015.7415174"},{"key":"22","doi-asserted-by":"crossref","unstructured":"[22] T.P. Tomo, G. Enriquez, and S. Hashimoto, \u201cIndonesian puppet theater robot with gamelan music emotion recognition,\u201d 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp.1177-1182, IEEE, 2015. 10.1109\/robio.2015.7418931","DOI":"10.1109\/ROBIO.2015.7418931"},{"key":"23","doi-asserted-by":"publisher","unstructured":"[23] L. Devillers, M. Tahon, M.A. Sehili, and A. Delaborde, \u201cInference of human beings emotional states from speech in human-robot interactions,\u201d Int. J. Social Robotics, vol.7, no.4, pp.451-463, 2015. 10.1007\/s12369-015-0297-8","DOI":"10.1007\/s12369-015-0297-8"},{"key":"24","unstructured":"[24] E. Douglas-Cowie, R. Cowie, C. Cox, N. Amier, and D. Heylen, \u201cThe sensitive artificial listener: an induction technique for generating emotionally coloured conversation,\u201d 2008. 10.1007\/978-3-540-76442-7_23"},{"key":"25","doi-asserted-by":"crossref","unstructured":"[25] M. Grimm, K. Kroschel, and S. Narayanan, \u201cThe Vera am Mittag German audio-visual emotional speech database,\u201d Proc. IEEE International Conference on Multimedia and Expo (ICME), IEEE, 2008. 10.1109\/icme.2008.4607572","DOI":"10.1109\/ICME.2008.4607572"},{"key":"26","doi-asserted-by":"publisher","unstructured":"[26] J.A. Russell, \u201cA circumplex model of affect,\u201d J. Personality and Social Psychology, vol.39, no.6, p.1161, 1980. 10.1037\/h0077714","DOI":"10.1037\/h0077714"},{"key":"27","doi-asserted-by":"crossref","unstructured":"[27] E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. Mcrorie, J.C. Martin, L. Devillers, S. Abrilian, A. Batliner, N. Amir, and K. Karpouzis, \u201cThe HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data,\u201d Affective computing and intelligent interaction, pp.488-500, Springer, 2007. 10.1007\/978-3-540-74889-2_43","DOI":"10.1007\/978-3-540-74889-2_43"},{"key":"28","doi-asserted-by":"crossref","unstructured":"[28] F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne, \u201cIntroducing the recola multimodal corpus of remote collaborative and affective interactions,\u201d 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp.1-8, IEEE, 2013. 10.1109\/fg.2013.6553805","DOI":"10.1109\/FG.2013.6553805"},{"key":"29","doi-asserted-by":"crossref","unstructured":"[29] B. Xue, C. Fu, and Z. Shaobin, \u201cA study on sentiment computing and classification of sina weibo with word2vec,\u201d 2014 IEEE International Congress on Big Data, pp.358-363, IEEE, 2014. 10.1109\/bigdata.congress.2014.59","DOI":"10.1109\/BigData.Congress.2014.59"},{"key":"30","unstructured":"[30] C. Strapparava and A. Valitutti, \u201cWordnet affect: An affective extension of wordnet,\u201d LREC, pp.1083-1086, 2004."},{"key":"31","doi-asserted-by":"crossref","unstructured":"[31] L. Tian, J. Moore, and C. Lai, \u201cRecognizing emotions in spoken dialogue with hierarchically fused acoustic and lexical features,\u201d 2016 IEEE Spoken Language Technology Workshop (SLT), pp.565-572, IEEE, 2016. 10.1109\/slt.2016.7846319","DOI":"10.1109\/SLT.2016.7846319"},{"key":"32","doi-asserted-by":"crossref","unstructured":"[32] F. Eyben, M. Woellmer, and B. Schuller, \u201cOpensmile: the munich versatile and fast open-source audio feature extractor,\u201d Proc. international conference on Multimedia, pp.1459-1462, ACM, 2010. 10.1145\/1873951.1874246","DOI":"10.1145\/1873951.1874246"},{"key":"33","doi-asserted-by":"publisher","unstructured":"[33] F. Eyben, K.R. Scherer, B.W. Schuller, J. Sundberg, E. Andr\u00e9, C. Busso, L.Y. Devillers, J. Epps, P. Laukka, S.S. Narayanan, and K.P. Truong, \u201cThe geneva minimalistic acoustic parameter set (GeMAPS) for voice research and affective computing,\u201d IEEE Trans. Affective Computing, vol.7, no.2, pp.190-202, 2016. 10.1109\/taffc.2015.2457417","DOI":"10.1109\/TAFFC.2015.2457417"},{"key":"34","doi-asserted-by":"crossref","unstructured":"[34] T. Baltru\u0161aitis, P. Robinson, and L.P. Morency, \u201cOpenface: An open source facial behavior analysis toolkit,\u201d 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp.1-10, IEEE, 2016. 10.1109\/wacv.2016.7477553","DOI":"10.1109\/WACV.2016.7477553"},{"key":"35","doi-asserted-by":"publisher","unstructured":"[35] C.C. Chang and C.J. Lin, \u201cLibSVM: A library for support vector machines,\u201d ACM Trans. Intelligent Systems and Technology (TIST), vol.2, no.3, p.27, 2011. 10.1145\/1961189.1961199","DOI":"10.1145\/1961189.1961199"},{"key":"36","unstructured":"[36] C.W. Hsu, C.C. Chang, and C.J. Lin, \u201cA practical guide to support vector classification,\u201d 2003."},{"key":"37","doi-asserted-by":"crossref","unstructured":"[37] C. Busso, Z. Deng, S. Yildirim, M. Bulut, C.M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, and S. Narayanan, \u201cAnalysis of emotion recognition using facial expressions, speech and multimodal information,\u201d Proc. 6th international conference on Multimodal interfaces, pp.205-211, ACM, 2004. 10.1145\/1027933.1027968","DOI":"10.1145\/1027933.1027968"},{"key":"38","doi-asserted-by":"crossref","unstructured":"[38] S. Poria, I. Chaturvedi, E. Cambria, and A. Hussain, \u201cConvolutional mkl based multimodal emotion recognition and sentiment analysis,\u201d 2016 IEEE 16th International Conference on Data Mining (ICDM), pp.439-448, IEEE, 2016. 10.1109\/icdm.2016.0055","DOI":"10.1109\/ICDM.2016.0055"},{"key":"39","doi-asserted-by":"crossref","unstructured":"[39] B. Nojavanasghari, T. Baltru\u0161aitis, C.E. Hughes, and L.P. Morency, \u201cEmoreact: A multimodal approach and dataset for recognizing emotional responses in children,\u201d Proc. 18th ACM International Conference on Multimodal Interaction, pp.137-144, ACM, 2016. 10.1145\/2993148.2993168","DOI":"10.1145\/2993148.2993168"},{"key":"40","doi-asserted-by":"publisher","unstructured":"[40] Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang, \u201cA survey of affect recognition methods: Audio, visual, and spontaneous expressions,\u201d IEEE Trans. Pattern Anal. Mach. Intell., vol.31, no.1, pp.39-58, 2009. 10.1109\/tpami.2008.52","DOI":"10.1109\/TPAMI.2008.52"},{"key":"41","doi-asserted-by":"publisher","unstructured":"[41] C.N. Anagnostopoulos, T. Iliou, and I. Giannoukos, \u201cFeatures and classifiers for emotion recognition from speech: a survey from 2000 to 2011,\u201d Artificial Intelligence Review, vol.43, no.2, pp.155-177, 2015. 10.1007\/s10462-012-9368-5","DOI":"10.1007\/s10462-012-9368-5"}],"container-title":["IEICE Transactions on Information and Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E101.D\/8\/E101.D_2017EDP7362\/_pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,8]],"date-time":"2024-07-08T20:59:26Z","timestamp":1720472366000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E101.D\/8\/E101.D_2017EDP7362\/_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,8,1]]},"references-count":41,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2018]]}},"URL":"https:\/\/doi.org\/10.1587\/transinf.2017edp7362","relation":{},"ISSN":["0916-8532","1745-1361"],"issn-type":[{"value":"0916-8532","type":"print"},{"value":"1745-1361","type":"electronic"}],"subject":[],"published":{"date-parts":[[2018,8,1]]}}}