{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:35:25Z","timestamp":1750221325636,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":18,"publisher":"ACM","license":[{"start":{"date-parts":[[2017,11,13]],"date-time":"2017-11-13T00:00:00Z","timestamp":1510531200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Swedish Foundation for Strategic Research project EACare","award":["RIT15-0107"],"award-info":[{"award-number":["RIT15-0107"]}]},{"name":"EU Horizon 2020 project BabyRobot","award":["687831"],"award-info":[{"award-number":["687831"]}]},{"name":"Swedish Research Council Project InkSynt","award":["2013-4935"],"award-info":[{"award-number":["2013-4935"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2017,11,13]]},"DOI":"10.1145\/3139491.3139499","type":"proceedings-article","created":{"date-parts":[[2017,11,6]],"date-time":"2017-11-06T13:30:29Z","timestamp":1509975029000},"page":"37-38","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Using crowd-sourcing for the design of listening agents: challenges and opportunities"],"prefix":"10.1145","author":[{"given":"Catharine","family":"Oertel","sequence":"first","affiliation":[{"name":"KTH, Sweden"}]},{"given":"Patrik","family":"Jonell","sequence":"additional","affiliation":[{"name":"KTH, Sweden"}]},{"given":"Kevin","family":"El Haddad","sequence":"additional","affiliation":[{"name":"University of Mons, Belgium"}]},{"given":"Eva","family":"Szekely","sequence":"additional","affiliation":[{"name":"KTH, Sweden"}]},{"given":"Joakim","family":"Gustafson","sequence":"additional","affiliation":[{"name":"KTH, 
Sweden"}]}],"member":"320","published-online":{"date-parts":[[2017,11,13]]},"reference":[{"volume-title":"2016 IEEE Winter Conference on Applications of Computer Vision (WACV). 1\u201310","author":"Baltru\u0161aitis T.","key":"e_1_3_2_1_1_1","unstructured":"T. Baltru\u0161aitis, P. Robinson, and L. P. Morency. 2016. OpenFace: An open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). 1\u201310."},
{"key":"e_1_3_2_1_2_1","unstructured":"7477553"},
{"key":"e_1_3_2_1_3_1","first-page":"357","article-title":"Speaker adaptation using constrained estimation of Gaussian mixtures","volume":"3","author":"Digalakis Vassilios V","year":"1995","unstructured":"Vassilios V Digalakis, Dimitry Rtischev, and Leonardo G Neumeyer. 1995. Speaker adaptation using constrained estimation of Gaussian mixtures. IEEE Transactions on Speech and Audio Processing 3, 5 (1995), 357\u2013366.","journal-title":"IEEE Transactions on Speech and Audio Processing"},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/2993148.2993182"},
{"volume-title":"2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 4939\u20134943","author":"Haddad K. E.","key":"e_1_3_2_1_5_1","unstructured":"K. E. Haddad, S. Dupont, J. Urbain, and T. Dutoit. 2015. Speech-laughs: An HMM-based approach for amused speech synthesis. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 4939\u20134943."},
{"key":"e_1_3_2_1_6_1","unstructured":"Xugang Lu, Yu Tsao, Shigeki Matsuda, and Chiori Hori. 2013. Speech enhancement based on deep denoising autoencoder. In Interspeech. 436\u2013440."},
{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10458-009-9092-y"},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3011263.3011272"},
{"key":"e_1_3_2_1_9_1","volume-title":"Crowd-Sourced Design of Artificial Attentive Listeners. In accepted at Interspeech","author":"Oertel Catharine","year":"2017","unstructured":"Catharine Oertel, Patrik Jonell, Dimosthenis Kontogiorgos, Joseph Mendelson, Jonas Beskow, and Joakim Gustafson. 2017. Crowd-Sourced Design of Artificial Attentive Listeners. In accepted at Interspeech 2017."},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2993148.2993188"},
{"key":"e_1_3_2_1_11_1","unstructured":"Sathish Pammi, Marc Schr\u00f6der, Marcela Charfuelan, Oytun T\u00fcrk, and Ingmar Steiner. 2010. Synthesis of listener vocalisations with imposed intonation contours. In SSW. 240\u2013245."},
{"volume-title":"Crowd-Powered Design of Virtual Attentive Listeners. In accepted at IVA 2017: The 17th International Conference on Intelligent Virtual Agents","author":"Patrik Jonell","key":"e_1_3_2_1_12_1","unstructured":"Patrik Jonell, Catharine Oertel, Dimosthenis Kontogiorgos, Jonas Beskow, and Joakim Gustafson. 2017. Crowd-Powered Design of Virtual Attentive Listeners. In accepted at IVA 2017: The 17th International Conference on Intelligent Virtual Agents."},
{"volume-title":"Crowdsourced Multimodal Corpora Collection Tool. In submitted to MMC 2017: The 12th Workshop on Multimodal Corpora","author":"Patrik Jonell","key":"e_1_3_2_1_13_1","unstructured":"Patrik Jonell, Catharine Oertel, Dimosthenis Kontogiorgos, Jonas Beskow, and Joakim Gustafson. 2017. Crowdsourced Multimodal Corpora Collection Tool. In submitted to MMC 2017: The 12th Workshop on Multimodal Corpora."},
{"key":"e_1_3_2_1_14_1","doi-asserted-by":"crossref","unstructured":"Thorsten Stocksmeier, Stefan Kopp, and Dafydd Gibbon. 2007. Synthesis of prosodic attitudinal variants in German backchannel ja. In INTERSPEECH. 1290\u20131293.","DOI":"10.21437\/Interspeech.2007-232"},
{"key":"e_1_3_2_1_15_1","volume-title":"Proc. of Interspeech","author":"Truong Khiet P.","year":"2011","unstructured":"Khiet P. Truong, Ronald Poppe, Iwan De Kok, and Dirk Heylen. 2011. A multimodal analysis of vocal and visual backchannels in spontaneous dialogs. Proc. of Interspeech (2011), 2973\u20132976."},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0378-2166(99)00109-5"},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2009.04.004"},
{"key":"e_1_3_2_1_18_1","unstructured":"Abstract 1 Introduction 2 Background 3 Need For Crowd-Sourced Data 4 Discussion 5 Conclusion Acknowledgments References"}],
"event":{"name":"ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"],"location":"Glasgow UK","acronym":"ICMI '17"},"container-title":["Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial 
Agents"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3139491.3139499","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3139491.3139499","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T02:14:01Z","timestamp":1750212841000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3139491.3139499"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,11,13]]},"references-count":18,"alternative-id":["10.1145\/3139491.3139499","10.1145\/3139491"],"URL":"https:\/\/doi.org\/10.1145\/3139491.3139499","relation":{},"subject":[],"published":{"date-parts":[[2017,11,13]]},"assertion":[{"value":"2017-11-13","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}