{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:27:09Z","timestamp":1750220829030,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":28,"publisher":"ACM","license":[{"start":{"date-parts":[[2019,10,14]],"date-time":"2019-10-14T00:00:00Z","timestamp":1571011200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,10,14]]},"DOI":"10.1145\/3340555.3353762","type":"proceedings-article","created":{"date-parts":[[2019,10,17]],"date-time":"2019-10-17T12:49:48Z","timestamp":1571316588000},"page":"435-439","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["Exploring Transfer Learning between Scripted and Spontaneous Speech for Emotion Recognition"],"prefix":"10.1145","author":[{"given":"Qingqing","family":"Li","sequence":"first","affiliation":[{"name":"Texas A&amp;M University, USA"}]},{"given":"Theodora","family":"Chaspari","sequence":"additional","affiliation":[{"name":"Texas A&amp;M University, USA"}]}],"member":"320","published-online":{"date-parts":[[2019,10,14]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2015.7178934"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2018.2867099"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.25080\/Majora-8b375195-003"},{"key":"e_1_3_2_1_4_1","volume-title":"IEMOCAP: Interactive emotional dyadic motion capture database. 
Language resources and evaluation 42, 4","author":"Busso Carlos","year":"2008","unstructured":"Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette\u00a0N Chang, Sungbok Lee, and Shrikanth\u00a0S Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language resources and evaluation 42, 4 (2008), 335."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2016.2515617"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2016.2515617"},{"key":"e_1_3_2_1_7_1","volume-title":"CREMA-D: Crowd-sourced emotional multimodal actors dataset","author":"Cao Houwei","year":"2014","unstructured":"Houwei Cao, David\u00a0G Cooper, Michael\u00a0K Keutmann, Ruben\u00a0C Gur, Ani Nenkova, and Ragini Verma. 2014. CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE transactions on affective computing 5, 4 (2014), 377\u2013390."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2017.7952656"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2013.90"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"crossref","unstructured":"Abhinav Dhall, Roland Goecke, Tom Gedeon, and Nicu Sebe. 2016. Emotion recognition in the wild.","DOI":"10.1145\/2993148.3007626"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1873951.1874246"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-13560-1_76"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"crossref","unstructured":"John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, and Emily\u00a0Mower Provost. 2017. Progressive neural networks for transfer learning in emotion recognition. arXiv preprint arXiv:1706.03256 (2017).","DOI":"10.21437\/Interspeech.2017-1637"},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2016.7472099"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0100795"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1611835114"},{"key":"e_1_3_2_1_17_1","volume-title":"Big Data and Smart Computing (BigComp), 2017 IEEE International Conference on. IEEE, 437\u2013440","author":"Lee Dongkeon","year":"2017","unstructured":"Dongkeon Lee, Kyo-Joong Oh, and Ho-Jin Choi. 2017. The chatbot feels you-a counseling service using emotional response generation. In Big Data and Smart Computing (BigComp), 2017 IEEE International Conference on. IEEE, 437\u2013440."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2015.7344652"},{"key":"e_1_3_2_1_19_1","volume-title":"Annual meeting of the canadian society for brain, behaviour and cognitive science. 
205\u2013211","author":"Livingstone R","year":"2012","unstructured":"Steven\u00a0R Livingstone, Katlyn Peck, and Frank\u00a0A Russo. 2012. RAVDESS: The Ryerson audio-visual database of emotional speech and song. In Annual meeting of the canadian society for brain, behaviour and cognitive science. 205\u2013211."},{"key":"e_1_3_2_1_20_1","volume-title":"Data Engineering Workshops, 2006. Proceedings. 22nd International Conference on. IEEE, 8\u20138.","author":"Martin Olivier","year":"2006","unstructured":"Olivier Martin, Irene Kotsia, Benoit Macq, and Ioannis Pitas. 2006. The enterface\u201905 audio-visual emotion database. In Data Engineering Workshops, 2006. Proceedings. 22nd International Conference on. IEEE, 8\u20138."},{"key":"e_1_3_2_1_21_1","unstructured":"Andrei\u00a0A Rusu, Neil\u00a0C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016)."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"crossref","unstructured":"Saurabh Sahu, Rahul Gupta, Ganesh Sivaraman, Wael AbdAlmageed, and Carol Espy-Wilson. 2018. Adversarial Auto-encoders for Speech Based Emotion Recognition. arXiv preprint arXiv:1806.02146 (2018).","DOI":"10.21437\/Interspeech.2017-1421"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2009-103"},{"key":"e_1_3_2_1_24_1","volume-title":"Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2250\u20132252","author":"Sohn S","year":"2018","unstructured":"Samuel\u00a0S Sohn, Xun Zhang, Fernando Geraci, and Mubbasir Kapadia. 2018. An Emotionally Aware Embodied Conversational Agent. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2250\u20132252."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2015.7178762"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"crossref","unstructured":"Adela\u00a0C Timmons, Theodora Chaspari, Sohyun\u00a0C Han, Laura Perrone, Shrikanth\u00a0S Narayanan, and Gayla Margolin. 2017. Using multimodal wearable technology to detect conflict among couples. Computer 3 (2017), 50\u201359.","DOI":"10.1109\/MC.2017.83"},{"key":"e_1_3_2_1_27_1","unstructured":"Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474 (2014)."},{"key":"e_1_3_2_1_28_1","unstructured":"Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks?. In Advances in neural information processing systems. 3320\u20133328."}],"event":{"name":"ICMI '19: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION","acronym":"ICMI '19","location":"Suzhou China"},"container-title":["2019 International Conference on Multimodal Interaction"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340555.3353762","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3340555.3353762","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:13:28Z","timestamp":1750202008000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340555.3353762"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,10,14]]},"references-count":28,"alternative-id":["10.1145\/3340555.3353762","10.1145\/3340555"],"URL":"https:\/\/doi.org\/10.1145\/3340555.3353762","relation":{},"subject":[],"published":{"date-parts":[[2019,10,14]]},"assertion":[{"value":"2019-10-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}