{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,18]],"date-time":"2025-11-18T12:14:42Z","timestamp":1763468082216,"version":"3.41.0"},"reference-count":47,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2012,3,1]],"date-time":"2012-03-01T00:00:00Z","timestamp":1330560000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001602","name":"Science Foundation Ireland","doi-asserted-by":"publisher","award":["09\/IN.1\/I2631"],"award-info":[{"award-number":["09\/IN.1\/I2631"]}],"id":[{"id":"10.13039\/501100001602","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2012,3]]},"abstract":"<jats:p>It is essential for the advancement of human-centered multimodal interfaces to be able to infer the current user's state or communication state. In order to enable a system to do that, the recognition and interpretation of multimodal social signals (i.e., paralinguistic and nonverbal behavior) in real-time applications is required. Since we believe that laughs are one of the most important and widely understood social nonverbal signals indicating affect and discourse quality, we focus in this work on the detection of laughter in natural multiparty discourses. The conversations are recorded in a natural environment without any specific constraint on the discourses using unobtrusive recording devices. This setup ensures natural and unbiased behavior, which is one of the main foci of this work. 
Three methods, namely Gaussian Mixture Model (GMM) supervectors as input to a Support Vector Machine (SVM), so-called Echo State Networks (ESN), and a Hidden Markov Model (HMM) approach, are compared in online and offline detection experiments. The SVM approach proves very accurate in the offline classification task, but is outperformed by the ESN and HMM approaches in online detection (F<jats:sub>1<\/jats:sub> scores: GMM SVM 0.45, ESN 0.63, HMM 0.72). Further, we were able to utilize the proposed HMM approach in a cross-corpus experiment without any retraining, with respectable generalization capability (F<jats:sub>1<\/jats:sub> score: 0.49). The results and possible reasons for these outcomes are shown and discussed in the article. The proposed methods may be directly utilized in practical tasks such as the labeling or the online detection of laughter in conversational data and affect-aware applications.<\/jats:p>","DOI":"10.1145\/2133366.2133370","type":"journal-article","created":{"date-parts":[[2012,6,1]],"date-time":"2012-06-01T15:51:28Z","timestamp":1338565888000},"page":"1-31","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":34,"title":["Spotting laughter in natural multiparty conversations"],"prefix":"10.1145","volume":"2","author":[{"given":"Stefan","family":"Scherer","sequence":"first","affiliation":[{"name":"Trinity College Dublin; Ulm University, UK"}]},{"given":"Michael","family":"Glodek","sequence":"additional","affiliation":[{"name":"Ulm University, Germany"}]},{"given":"Friedhelm","family":"Schwenker","sequence":"additional","affiliation":[{"name":"Ulm University, Germany"}]},{"given":"Nick","family":"Campbell","sequence":"additional","affiliation":[{"name":"Trinity College Dublin, UK"}]},{"given":"G\u00fcnther","family":"Palm","sequence":"additional","affiliation":[{"name":"Ulm University, 
Germany"}]}],"member":"320","published-online":{"date-parts":[[2012,3,20]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/CW.2005.82"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.1391244"},{"key":"e_1_2_2_3_1","unstructured":"Batliner, A., Steidl, S., Eyben, F., and Schuller, B. 2010. On laughter and speech laugh, based on observations of child-robot interaction. The Phonetics of Laughter, Trends in Linguistics. Mouton de Gruyter. To appear."},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/380995.380999"},{"key":"e_1_2_2_5_1","volume-title":"Pattern Recognition and Machine Learning (Information Science and Statistics)","author":"Bishop C. M.","unstructured":"Bishop, C. M. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics) 1st ed. Springer.","edition":"1"},{"key":"e_1_2_2_6_1","volume-title":"Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction (ACII'09)","volume":"2","author":"Bousmalis K.","unstructured":"Bousmalis, K., Mehu, M., and Pantic, M. 2009. Spotting agreement and disagreement: A survey of nonverbal audiovisual cues and tools. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction (ACII'09). Vol. 2. IEEE, 1--9."},{"volume-title":"Proceedings of the Interspeech Conference. 465--468","author":"Campbell N.","key":"e_1_2_2_7_1","unstructured":"Campbell, N., Kashioka, H., and Ohara, R. 2005. No laughing matter. In Proceedings of the Interspeech Conference. 465--468."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2006.870086"},{"volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06)","author":"Campbell W. M.","key":"e_1_2_2_9_1","unstructured":"Campbell, W. M., Sturim, D. E., Reynolds, D. A., and Solomonoff, A. 2006b. SVM-Based speaker verification using a GMM supervector kernel and NAP variability compensation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'06). IEEE, 97--100."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.5555\/1783474.1783490"},{"key":"e_1_2_2_11_1","volume-title":"Proceedings of the 6th International Language Resources and Evaluation (LREC'08)","author":"Campbell W. N.","year":"2008","unstructured":"Campbell, W. N. 2008. Tools and resources for visualising conversational-speech interaction. In Proceedings of the 6th International Language Resources and Evaluation (LREC'08). 231--234."},{"volume-title":"Proceedings of the Conference. 2546--2549","author":"Campbell W. 
N.","key":"e_1_2_2_12_1","unstructured":"Campbell, W. N. and Scherer, S. 2010. Comparing measures of synchrony and alignment in dialogue speech timing with respect to turn-taking activity. In Proceedings of the Conference. 2546--2549."},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12193-009-0030-8"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/11677482_3"},{"key":"e_1_2_2_15_1","first-page":"1","article-title":"Robust Real Time Face Tracking for the Analysis of Human Behaviour","volume":"4892","author":"Douxchamps D.","year":"2007","unstructured":"Douxchamps, D. and Campbell, W. N. 2007. Robust Real Time Face Tracking for the Analysis of Human Behaviour. Lecture Notes in Computer Science Series, vol. 4892, Springer, 1--10.","journal-title":"Lecture Notes in Computer Science Series"},{"volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'11)","author":"Eyben F.","key":"e_1_2_2_16_1","unstructured":"Eyben, F., Petridis, S., Schuller, B., Tzimiropoulos, G., Zafeiriou, S., and Pantic, M. 2011. Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'11). IEEE, 5844--5847."},{"volume-title":"Theoria motus corporum coelestium in sectionibus conicis solem ambientium","author":"Gauss C. F.","key":"e_1_2_2_17_1","unstructured":"Gauss, C. F. 1809. Theoria motus corporum coelestium in sectionibus conicis solem ambientium. Latin Ed. F. Perthes and I. H. Besser, Hamburgi."},{"volume-title":"Proceedings of Interspeech Conference. 2269--2272","author":"Glodek M.","key":"e_1_2_2_18_1","unstructured":"Glodek, M., Scherer, S., Schwenker, F., and Palm, G. 2011. Conditioned hidden Markov model fusion for multimodal classification. In Proceedings of Interspeech Conference. 2269--2272."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.399423"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASRU.1997.658998"},{"key":"e_1_2_2_21_1","volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'85)","volume":"10","author":"Hermansky H.","unstructured":"Hermansky, H., Hanson, B., and Wakita, H. 1985. Perceptually based linear predictive analysis of speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'85). Vol. 10. 
IEEE, 509--512."},{"key":"e_1_2_2_22_1","volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'92)","volume":"1","author":"Hermansky H.","unstructured":"Hermansky, H., Morgan, N., Bayya, A., and Kohn, P. 1992. Rasta-PLP speech analysis technique. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'92). Vol. 1. IEEE, 121--124."},{"volume-title":"RTRL, EKF and the echo state network approach. Tech. rep. 159","author":"Jaeger H.","key":"e_1_2_2_23_1","unstructured":"Jaeger, H. 2002. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the echo state network approach. Tech. rep. 159, Fraunhofer-Gesellschaft, St. Augustin, Germany."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.1091277"},{"volume-title":"The ICSI meeting corpus. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03)","author":"Janin A.","key":"e_1_2_2_25_1","unstructured":"Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. 2003. The ICSI meeting corpus. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03). IEEE, 364--367."},{"volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'04)","author":"Kennedy L.","key":"e_1_2_2_26_1","unstructured":"Kennedy, L. and Ellis, D. 2004. Laughter detection in meetings. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'04), Meeting Recognition Workshop. IEEE, 118--121."},{"volume-title":"Proceedings of the Interspeech Conference. 2973--2976","author":"Knox M.","key":"e_1_2_2_27_1","unstructured":"Knox, M. and Mirghafori, N. 2007. Automatic laughter detection using neural networks. In Proceedings of the Interspeech Conference. 2973--2976."},{"key":"e_1_2_2_28_1","unstructured":"Koller, D. and Friedman, N. 2009. Probabilistic Graphical Models: Principles and Techniques. The MIT Press."},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/SLT.2008.4777845"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2009.4960696"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-74889-2_62"},{"volume-title":"Proceedings of IEEE International Conference on Multimedia and Expo (ICME'09)","author":"Petridis S.","key":"e_1_2_2_32_1","unstructured":"Petridis, S. and Pantic, M. 2009. Is this joke really funny? Judging the mirth by audiovisual laughter analysis. 
In Proceedings of IEEE International Conference on Multimedia and Expo (ICME'09). IEEE, 1444--1447."},{"key":"e_1_2_2_33_1","first-page":"115","article-title":"Laughter: A stereotyped human vocalization","volume":"89","author":"Provine R.","year":"1991","unstructured":"Provine, R. and Yong, L. 1991. Laughter: A stereotyped human vocalization. English 89, 2, 115--124.","journal-title":"English"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.18626"},{"key":"e_1_2_2_35_1","doi-asserted-by":"crossref","unstructured":"Risko, E. F. and Kingstone, A. 2010. Eyes wide shut: Implied social presence, eye tracking and attention. J. Attent. Percept. Psychophys. 1--6.","DOI":"10.3758\/s13414-010-0042-1"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2005.12.010"},{"volume-title":"Proceedings of the 6th International Language Resources and Evaluation (LREC'08)","author":"Scherer S.","key":"e_1_2_2_37_1","unstructured":"Scherer, S., Hofmann, H., Lampmann, M., Pfeil, M., Rhinow, S., Schwenker, F., and Palm, G. 2008a. Emotion recognition from speech: Stress experiment. In Proceedings of the 6th International Language Resources and Evaluation (LREC'08). 1325--1330."},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-69939-2_20"},{"key":"e_1_2_2_39_1","volume-title":"Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII'11)","volume":"2","author":"Scherer S.","unstructured":"Scherer, S., Schels, M., and Palm, G. 2011. How low level observations can help to reveal the user's state in HCI. In Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII'11). S. D'Mello, A. Graesser, B. Schuller, and J.-C. Martin, Eds., Vol. 2, Springer, 81--90."},{"key":"e_1_2_2_40_1","doi-asserted-by":"crossref","DOI":"10.7551\/mitpress\/4175.001.0001","volume-title":"Kernels: Support Vector Machines, Regularization, Optimization, and Beyond","author":"Sch\u00f6lkopf B.","year":"2001","unstructured":"Sch\u00f6lkopf, B. and Smola, A. J. 2001. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA."},{"volume-title":"Proceedings of the Interspeech Conference. 2794--2797","author":"Schuller B.","key":"e_1_2_2_41_1","unstructured":"Schuller, B., Steidl, S., Batliner, A., Burkhardt, F., Devillers, L., M\u00fcller, C., and Narayanan, S. S. 2010. 
The Interspeech 2010 paralinguistic challenge. In Proceedings of the Interspeech Conference. 2794--2797."},{"volume-title":"Proceedings of the 3rd IET International Conference on Intelligent Environments 2007 (IE'07)","author":"Strauss P.-M.","key":"e_1_2_2_42_1","unstructured":"Strauss, P.-M., Hoffmann, H., and Scherer, S. 2007. Evaluation and user acceptance of a dialogue system using wizard-of-oz recordings. In Proceedings of the 3rd IET International Conference on Intelligent Environments 2007 (IE'07). IEEE, 521--524."},{"volume-title":"Proceedings of the Interspeech Conference. 485--488","author":"Truong K. P.","key":"e_1_2_2_43_1","unstructured":"Truong, K. P. and Van Leeuwen, D. A. 2005. Automatic detection of laughter. In Proceedings of the Interspeech Conference. 485--488."},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2007.01.001"},{"volume-title":"Proceedings of the Workshop on the Phonetics of Laughter. 49--53","author":"Truong K. P.","key":"e_1_2_2_45_1","unstructured":"Truong, K. P. and Van Leeuwen, D. A. 2007b. Evaluating laughter segmentation in meetings with acoustic and acoustic-phonetic features. In Proceedings of the Workshop on the Phonetics of Laughter. 49--53."},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:VISI.0000013087.49260.fb"},{"key":"e_1_2_2_47_1","first-page":"522","article-title":"Advances in SVM-based system using GMM super vectors for text-independent speaker verification","volume":"13","author":"Zhao J.","year":"2008","unstructured":"Zhao, J., Dong, Y., Zhao, X., Yang, H., Lu, L., and Wang, H. 2008. Advances in SVM-based system using GMM super vectors for text-independent speaker verification. Tsinghua Sci. Technol. 13, 4, 522--527.","journal-title":"Tsinghua Sci. Technol."}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2133366.2133370","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/2133366.2133370","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T10:06:05Z","timestamp":1750241165000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2133366.2133370"}},"subtitle":["A comparison of automatic online and offline approaches using audiovisual 
data"],"short-title":[],"issued":{"date-parts":[[2012,3]]},"references-count":47,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2012,3]]}},"alternative-id":["10.1145\/2133366.2133370"],"URL":"https:\/\/doi.org\/10.1145\/2133366.2133370","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"type":"print","value":"2160-6455"},{"type":"electronic","value":"2160-6463"}],"subject":[],"published":{"date-parts":[[2012,3]]},"assertion":[{"value":"2010-12-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2011-11-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2012-03-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}