{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,20]],"date-time":"2025-07-20T03:58:54Z","timestamp":1752983934886},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2010,3,1]],"date-time":"2010-03-01T00:00:00Z","timestamp":1267401600000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/www.springer.com\/tdm"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J Multimodal User Interfaces"],"published-print":{"date-parts":[[2010,3]]},"DOI":"10.1007\/s12193-010-0053-1","type":"journal-article","created":{"date-parts":[[2010,9,27]],"date-time":"2010-09-27T14:43:06Z","timestamp":1285598586000},"page":"47-58","source":"Crossref","is-referenced-by-count":29,"title":["AVLaughterCycle"],"prefix":"10.1007","volume":"4","author":[{"given":"J\u00e9r\u00f4me","family":"Urbain","sequence":"first","affiliation":[]},{"given":"Radoslaw","family":"Niewiadomski","sequence":"additional","affiliation":[]},{"given":"Elisabetta","family":"Bevacqua","sequence":"additional","affiliation":[]},{"given":"Thierry","family":"Dutoit","sequence":"additional","affiliation":[]},{"given":"Alexis","family":"Moinet","sequence":"additional","affiliation":[]},{"given":"Catherine","family":"Pelachaud","sequence":"additional","affiliation":[]},{"given":"Benjamin","family":"Picart","sequence":"additional","affiliation":[]},{"given":"Jo\u00eblle","family":"Tilmanne","sequence":"additional","affiliation":[]},{"given":"Johannes","family":"Wagner","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2010,9,28]]},"reference":[{"issue":"6","key":"53_CR1","first-page":"22","volume":"1","author":"MS Bartlett","year":"2006","unstructured":"Bartlett MS, Littlewort GC, Frank MG, Lainscsek C, Fasel IR, Movellan JR (2006) Automatic recognition of facial actions in spontaneous expression. J\u00a0Multimed 1(6):22\u201335","journal-title":"J\u00a0Multimed"},{"key":"53_CR2","first-page":"287","volume-title":"Intl workshop on social intelligence design (SID2009)","author":"C Becker-Asano","year":"2009","unstructured":"Becker-Asano C, Ishiguro H (2009) Laughter in social robotics\u2014no laughing matter. In: Intl workshop on social intelligence design (SID2009), pp\u00a0287\u2013300"},{"issue":"1","key":"53_CR3","first-page":"115A","volume":"37","author":"L Berk","year":"1989","unstructured":"Berk L, Tan S, Napier B, Evy W (1989) Eustress of mirthful laughter modifies natural killer cell activity. Clin Res 37(1):115A","journal-title":"Clin Res"},{"key":"53_CR4","unstructured":"Cantoche (2010) http:\/\/www.cantoche.com\/"},{"key":"53_CR5","first-page":"520","volume-title":"Proceedings of the CHI\u201999 conference","author":"J Cassell","year":"1999","unstructured":"Cassell J, Bickmore T, Billinghurst M, Campbell L, Chang K, Vilhj\u00e1lmsson H, Yan H (1999) Embodiment in conversational interfaces: Rea. In: Proceedings of the CHI\u201999 conference. ACM, New York, pp\u00a0520\u2013527"},{"key":"53_CR6","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1145\/1140491.1140508","volume-title":"APGV\u201906: Proceedings of the 3rd symposium on applied perception in graphics and visualization","author":"C Curio","year":"2006","unstructured":"Curio C, Breidt M, Kleiner M, Vuong QC, Giese MA, B\u00fclthoff HH (2006) Semantic 3D motion retargeting for facial animation. In: APGV\u201906: Proceedings of the 3rd symposium on applied perception in graphics and visualization. ACM, New York, pp\u00a077\u201384"},{"key":"53_CR7","volume-title":"SIGGRAPH\u201908: ACM SIGGRAPH 2008","author":"PC DiLorenzo","year":"2008","unstructured":"DiLorenzo PC, Zordan VB, Sanders BL (2008) Laughing out loud. In: SIGGRAPH\u201908: ACM SIGGRAPH 2008. ACM, New York"},{"key":"53_CR8","volume-title":"Proc of IEEE content based multimedia indexing conference (CBMI09)","author":"S Dupont","year":"2009","unstructured":"Dupont S, Dubuisson T, Urbain J, Frisson C, Sebbe R, D\u2019Alessandro N (2009) Audiocycle: browsing musical loop libraries. In: Proc of IEEE content based multimedia indexing conference (CBMI09)"},{"key":"53_CR9","unstructured":"Haptek (2010) http:\/\/www.haptek.com\/"},{"key":"53_CR10","volume-title":"The first functional markup language workshop","author":"D Heylen","year":"2008","unstructured":"Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhj\u00e1lmsson H (2008) Why conversational agents do what they do? Functional representations for generating conversational agent behavior. In: The first functional markup language workshop, Estoril, Portugal"},{"key":"53_CR11","volume-title":"2003 IEEE international conference on acoustics, speech, and signal processing (ICASSP)","author":"A Janin","year":"2003","unstructured":"Janin A, Baron D, Edwards J, Ellis D, Gelbart D, Morgan N, Peskin B, Pfau T, Shriberg E, Stolcke A, Wooters C (2003) The ICSI meeting corpus. In: 2003 IEEE international conference on acoustics, speech, and signal processing (ICASSP), Hong-Kong"},{"key":"53_CR12","doi-asserted-by":"crossref","first-page":"2973","DOI":"10.21437\/Interspeech.2007-741","volume-title":"Proceedings of interspeech 2007","author":"MT Knox","year":"2007","unstructured":"Knox MT, Mirghafori N (2007) Automatic laughter detection using neural networks. In: Proceedings of interspeech 2007, Antwerp, Belgium, pp\u00a02973\u20132976"},{"issue":"4","key":"53_CR13","first-page":"11","volume":"17","author":"S Kopp","year":"2003","unstructured":"Kopp S, Jung B, Le\u00dfmann N, Wachsmuth I (2003) Max\u2014a multimodal assistant in virtual reality construction. K\u00fcnstl Intell 17(4):11\u201318","journal-title":"K\u00fcnstl Intell"},{"key":"53_CR14","first-page":"43","volume-title":"Proceedings of the interdisciplinary workshop on the phonetics of laughter","author":"E Lasarcyk","year":"2007","unstructured":"Lasarcyk E, Trouvain J (2007) Imitating conversational laughter with an articulatory speech synthesis. In: Proceedings of the interdisciplinary workshop on the phonetics of laughter, Saarbrucken, Germany, pp\u00a043\u201348"},{"key":"53_CR15","unstructured":"Natural Point, Inc (2009) Optitrack\u2014optical motion tracking solutions. http:\/\/www.naturalpoint.com\/optitrack\/"},{"key":"53_CR16","first-page":"1399","volume-title":"8th international joint conference on autonomous agents and multiagent systems (AAMAS 2009), IFAAMAS","author":"R Niewiadomski","year":"2009","unstructured":"Niewiadomski R, Bevacqua E, Mancini M, Pelachaud C (2009) Greta: an interactive expressive ECA system. In: Sierra C, Castelfranchi C, Decker KS, Sichman JS (eds) 8th international joint conference on autonomous agents and multiagent systems (AAMAS 2009), IFAAMAS, Budapest, Hungary, 10\u201315 May 2009, vol\u00a02, pp\u00a01399\u20131400"},{"key":"53_CR17","first-page":"101","volume-title":"Proc Twente workshop on language technology\u00a020 (TWLT 20)","author":"A Nijholt","year":"2002","unstructured":"Nijholt A (2002) Embodied agents: a new impetus to humor research. In: Proc Twente workshop on language technology\u00a020 (TWLT 20), pp\u00a0101\u2013111"},{"key":"53_CR18","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1002\/0470854626.ch2","volume-title":"MPEG-4 facial animation\u2014the standard implementation and applications","author":"J Ostermann","year":"2002","unstructured":"Ostermann J (2002) Face animation in MPEG-4. In: Pandzic IS, Forchheimer R (eds) MPEG-4 facial animation\u2014the standard implementation and applications. Wiley, New York, pp\u00a017\u201355"},{"key":"53_CR19","first-page":"377","volume-title":"Face recognition","author":"M Pantic","year":"2007","unstructured":"Pantic M, Bartlett MS (2007) Machine analysis of facial expressions. In: Delac K, Grgic M (eds) Face recognition. I-Tech Education and Publishing, Vienna, pp\u00a0377\u2013416"},{"issue":"9","key":"53_CR20","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1109\/MCG.1982.1674492","volume":"2","author":"FI Parke","year":"1982","unstructured":"Parke FI (1982) Parameterized models for facial animation. IEEE Comput Graph Appl 2(9):61\u201368","journal-title":"IEEE Comput Graph Appl"},{"key":"53_CR21","unstructured":"Peeters G (2004) A large set of audio features for sound description (similarity and classification) in the CUIDADO project. Tech rep, Institut de Recherche et Coordination Acoustique\/Musique (IRCAM)"},{"key":"53_CR22","first-page":"1444","volume-title":"Proceedings of the IEEE international conference on multimedia and expo","author":"S Petridis","year":"2009","unstructured":"Petridis S, Pantic M (2009) Is this joke really funny? Judging the mirth by audiovisual laughter analysis. In: Proceedings of the IEEE international conference on multimedia and expo, New York, USA, pp\u00a01444\u20131447"},{"key":"53_CR23","first-page":"75","volume-title":"SIGGRAPH\u201998","author":"F Pighin","year":"1998","unstructured":"Pighin F, Hecker J, Lischinski D, Szeliski R, Salesin DH (1998) Synthesizing realistic facial expressions from photographs. In: SIGGRAPH\u201998. ACM, New York, pp\u00a075\u201384"},{"key":"53_CR24","doi-asserted-by":"crossref","first-page":"426","DOI":"10.1142\/9789812810687_0033","volume-title":"Emotion, qualia and consciousness","author":"W Ruch","year":"2001","unstructured":"Ruch W, Ekman P (2001) The expressive pattern of laughter. In: Kaszniak A (ed) Emotion, qualia and consciousness. World Scientific, Singapore, pp\u00a0426\u2013443"},{"key":"53_CR25","first-page":"1993","volume-title":"Proceedings of the IEEE 12th international conference on computer vision workshops (ICCV workshops)","author":"A Savran","year":"2009","unstructured":"Savran A, Sankur B (2009) Automatic detection of facial actions from 3D data. In: Proceedings of the IEEE 12th international conference on computer vision workshops (ICCV workshops), Kyoto, Japan, pp\u00a01993\u20132000"},{"issue":"1\u20132","key":"53_CR26","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/S0167-6393(02)00078-X","volume":"40","author":"M Schr\u00f6der","year":"2003","unstructured":"Schr\u00f6der M (2003) Experimental study of affect bursts. Speech Commun 40(1\u20132):99\u2013116","journal-title":"Speech Commun"},{"key":"53_CR27","first-page":"19","volume-title":"QPSR of the numediart research program, Numediart research program on digital art technologies","author":"X Siebert","year":"2009","unstructured":"Siebert X, Dupont S, Fortemps P, Tardieu D (2009) MediaCycle: browsing and performing with sound and image libraries. In: Dutoit T, Macq B (eds) QPSR of the numediart research program, Numediart research program on digital art technologies, vol\u00a02, pp\u00a019\u201322"},{"key":"53_CR28","unstructured":"Skype Communications S \u00e0 rl (2009) The skype laughter chain. http:\/\/www.skypelaughterchain.com\/"},{"issue":"1","key":"53_CR29","first-page":"39","volume":"21","author":"N Stoiber","year":"2010","unstructured":"Stoiber N, Seguier R, Breton G (2010) Facial animation retargeting and control based on a human appearance space. J\u00a0Vis Comput Animat 21(1):39\u201354","journal-title":"J\u00a0Vis Comput Animat"},{"key":"53_CR30","first-page":"528","volume-title":"Proceedings of ACM CHI\u201999","author":"E Strommen","year":"1999","unstructured":"Strommen E, Alexander K (1999) Emotional interfaces for interactive aardvarks: designing affect into social interfaces for children. In: Proceedings of ACM CHI\u201999, pp\u00a0528\u2013535"},{"issue":"1","key":"53_CR31","doi-asserted-by":"crossref","first-page":"527","DOI":"10.1121\/1.2390679","volume":"121","author":"S Sundaram","year":"2007","unstructured":"Sundaram S, Narayanan S (2007) Automatic acoustic synthesis of human-like laughter. J\u00a0Acoust Soc Am 121(1):527\u2013535","journal-title":"J\u00a0Acoust Soc Am"},{"key":"53_CR32","first-page":"8","volume-title":"Proc of AAAI-05 workshop on modular construction of human-like intelligence","author":"KR Th\u00f3risson","year":"2005","unstructured":"Th\u00f3risson KR, List T, Pennock C, DiPirro J (2005) Whiteboards: scheduling blackboards for semantic routing of messages & streams. In: Th\u00f3risson KR, Vilhj\u00e1lmsson H, Marsella S (eds) Proc of AAAI-05 workshop on modular construction of human-like intelligence, Pittsburgh, Pennsylvania, pp\u00a08\u201315"},{"key":"53_CR33","first-page":"2793","volume-title":"Proceedings of the 15th international congress of phonetic sciences","author":"J Trouvain","year":"2003","unstructured":"Trouvain J (2003) Segmenting phonetic units in laughter. In: Proceedings of the 15th international congress of phonetic sciences, Barcelona, Spain, pp\u00a02793\u20132796"},{"issue":"2","key":"53_CR34","doi-asserted-by":"crossref","first-page":"144","DOI":"10.1016\/j.specom.2007.01.001","volume":"49","author":"KP Truong","year":"2007","unstructured":"Truong KP, van Leeuwen DA (2007) Automatic discrimination between laughter and speech. Speech Commun 49(2):144\u2013158","journal-title":"Speech Commun"},{"key":"53_CR35","first-page":"49","volume-title":"Proceedings of the interdisciplinary workshop on the phonetics of laughter","author":"KP Truong","year":"2007","unstructured":"Truong KP, van Leeuwen DA (2007) Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features. In: Proceedings of the interdisciplinary workshop on the phonetics of laughter, Saarbrucken, Germany, pp\u00a049\u201353"},{"key":"53_CR36","volume-title":"Proceedings of the 5th international summer workshop on multimodal interfaces (eNTERFACE\u201909)","author":"J Urbain","year":"2010","unstructured":"Urbain J, Bevacqua E, Dutoit T, Moinet A, Niewiadomski R, Pelachaud C, Picart B, Tilmanne J, Wagner J (2010) AVLaughterCycle: an audiovisual laughing machine. In: Camurri A, Mancini M, Volpe G (eds) Proceedings of the 5th international summer workshop on multimodal interfaces (eNTERFACE\u201909). DIST-University of Genova, Genova"},{"key":"53_CR37","volume-title":"Proceedings of the seventh conference on international language resources and evaluation (LREC\u201910), European Language Resources Association (ELRA)","author":"J Urbain","year":"2010","unstructured":"Urbain J, Bevacqua E, Dutoit T, Moinet A, Niewiadomski R, Pelachaud C, Picart B, Tilmanne J, Wagner J (2010) The AVLaughterCycle database. In: Proceedings of the seventh conference on international language resources and evaluation (LREC\u201910), European Language Resources Association (ELRA), Valletta, Malta"},{"key":"53_CR38","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1007\/978-3-540-74997-4_10","volume-title":"7th international conference on intelligent virtual agents","author":"H Vilhj\u00e1lmsson","year":"2007","unstructured":"Vilhj\u00e1lmsson H, Cantelmo N, Cassell J, Chafai NE, Kipp M, Kopp\u00a0S, Mancini M, Marsella S, Marshall AN, Pelachaud C, Ruttkay\u00a0Z, Th\u00f3risson KR, van Welbergen H, van\u00a0der Werf R (2007) The behavior markup language: recent developments and challenges. In: 7th international conference on intelligent virtual agents, Paris, France, pp\u00a099\u2013111"},{"key":"53_CR39","first-page":"1","volume-title":"Affective computing and intelligent interaction (ACII 2009)","author":"J Wagner","year":"2009","unstructured":"Wagner J, Andr\u00e9 E, Jung F (2009) Smart sensor integration: a\u00a0framework for multimodal emotion recognition in real-time. In: Affective computing and intelligent interaction (ACII 2009), Amsterdam, The Netherlands, pp\u00a01\u20138"},{"key":"53_CR40","doi-asserted-by":"crossref","first-page":"1027","DOI":"10.1145\/1631272.1631502","volume-title":"MM \u201909: proceedings of the seventeen ACM international conference on multimedia","author":"W Zhang","year":"2009","unstructured":"Zhang W, Wang Q, Tang X (2009) Performance driven face animation via non-rigid 3D tracking. In: MM \u201909: proceedings of the seventeen ACM international conference on multimedia. ACM, New York, pp\u00a01027\u20131028"},{"key":"53_CR41","unstructured":"Zign Creations: Zign Track (2009) http:\/\/www.zigncreations.com\/zigntrack.html"}],"container-title":["Journal on Multimodal User Interfaces"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s12193-010-0053-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s12193-010-0053-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s12193-010-0053-1","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,11,10]],"date-time":"2021-11-10T18:41:18Z","timestamp":1636569678000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s12193-010-0053-1"}},"subtitle":["Enabling a virtual agent to join in laughing with a\u00a0conversational partner using a\u00a0similarity-driven audiovisual laughter animation"],"short-title":[],"issued":{"date-parts":[[2010,3]]},"references-count":41,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2010,3]]}},"alternative-id":["53"],"URL":"https:\/\/doi.org\/10.1007\/s12193-010-0053-1","relation":{},"ISSN":["1783-7677","1783-8738"],"issn-type":[{"value":"1783-7677","type":"print"},{"value":"1783-8738","type":"electronic"}],"subject":[],"published":{"date-parts":[[2010,3]]}}}