{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T18:59:53Z","timestamp":1776106793784,"version":"3.50.1"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"CSCW1","license":[{"start":{"date-parts":[[2024,4,17]],"date-time":"2024-04-17T00:00:00Z","timestamp":1713312000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"JST Moonshot R&D Program","award":["Grant number: JPMJMS2013"],"award-info":[{"award-number":["Grant number: JPMJMS2013"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2024,4,17]]},"abstract":"<jats:p>The limited nonverbal cues and spatially distributed nature of remote communication make it challenging for unacquainted members to be expressive during social interactions over video conferencing. Though video conferencing enables seeing others' facial expressions, the visual feedback can instead lead to unexpected self-focus, resulting in users missing cues for others to engage in the conversation equally. To support expressive communication and equal participation among unacquainted counterparts, we propose SealMates, a behavior-driven avatar that infers the engagement level of the group from collective gaze and speech patterns and then moves across interlocutors' windows in the video conferencing interface. In a controlled experiment with 15 triads, we found that the avatar's movement encouraged people to disclose more about themselves and led them to perceive everyone as more equally engaged in the conversation than when there was no behavior-driven avatar. 
We discuss how a behavior-driven avatar influences distributed members' perceptions and the implications of avatar-mediated communication for future platforms.<\/jats:p>","DOI":"10.1145\/3637395","type":"journal-article","created":{"date-parts":[[2024,4,29]],"date-time":"2024-04-29T10:05:31Z","timestamp":1714385131000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["SealMates: Improving Communication in Video Conferencing using a Collective Behavior-Driven Avatar"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-9651-3893","authenticated-orcid":false,"given":"Mark","family":"Armstrong","sequence":"first","affiliation":[{"name":"Graduate School of Media Design, Keio University, Yokohama, Kanagawa, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0603-2807","authenticated-orcid":false,"given":"Chi-Lan","family":"Yang","sequence":"additional","affiliation":[{"name":"Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Tokyo, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7418-5704","authenticated-orcid":false,"given":"Kinga","family":"Skiers","sequence":"additional","affiliation":[{"name":"Graduate School of Media Design, Keio University, Yokohama, Kanagawa, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7178-5648","authenticated-orcid":false,"given":"Mengzhen","family":"Lim","sequence":"additional","affiliation":[{"name":"Graduate School of Arts and Letters, Meiji University, Tokyo, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7008-5458","authenticated-orcid":false,"given":"Tamil Selvan","family":"Gunasekaran","sequence":"additional","affiliation":[{"name":"Empathic Computing Lab, The University of Auckland, Auckland, New Zealand"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7785-8887","authenticated-orcid":false,"given":"Ziyue","family":"Wang","sequence":"additional","affiliation":[{"name":"Keio 
University Graduate School of Media Design, Tokyo, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9010-1491","authenticated-orcid":false,"given":"Takuji","family":"Narumi","sequence":"additional","affiliation":[{"name":"The University of Tokyo, Tokyo, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6303-5791","authenticated-orcid":false,"given":"Kouta","family":"Minamizawa","sequence":"additional","affiliation":[{"name":"Keio University Graduate School of Media Design, Yokohama, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6090-2837","authenticated-orcid":false,"given":"Yun Suen","family":"Pai","sequence":"additional","affiliation":[{"name":"Keio University Graduate School of Media Design, Yokohama, Japan"}]}],"member":"320","published-online":{"date-parts":[[2024,4,26]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3479597"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1177\/0146167297234003"},{"key":"e_1_2_2_3_1","volume-title":"Looking At Yourself on Zoom. Master's thesis","author":"Balogov\u00e1 Karol\u00edna","unstructured":"Karol\u00edna Balogov\u00e1. 2021. Looking At Yourself on Zoom. Master's thesis. University College London, London, UK."},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3027063.3053207"},{"key":"e_1_2_2_5_1","first-page":"13","article-title":"The Impact of Transitioning to Online Learning and Virtual Conferences on Students and Educators During the Coronavirus Pandemic","volume":"18","author":"Borror Jia","year":"2021","unstructured":"Jia Borror, Sarah Ransdell, Jenna Binaco, and April Feeser. 2021. The Impact of Transitioning to Online Learning and Virtual Conferences on Students and Educators During the Coronavirus Pandemic. Distance Learning, Vol. 
18, 1 (2021), 13--24.","journal-title":"Distance Learning"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/1531674.1531712"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyg.2021.616471"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3479579"},{"key":"e_1_2_2_9_1","doi-asserted-by":"crossref","unstructured":"Cathy Mengying Fang GR Marvez Neska ElHaouij and Rosalind Picard. 2022. Cardiac Arrest: Evaluating the Role of Biosignals in Gameplay Strategies and Players' Physiological Synchrony in Social Deception Games. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1--7.","DOI":"10.1145\/3491101.3519670"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3134679"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2931002.2931014"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.24251\/HICSS.2022.582"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-020-01392-6"},{"key":"e_1_2_2_14_1","unstructured":"Erving Goffman. 2021. The presentation of self in everyday life. Anchor."},{"key":"e_1_2_2_15_1","unstructured":"William W Hahn. 1973. Attention and heart rate: a critical appraisal of the hypothesis of Lacey and Lacey. (1973)."},{"key":"e_1_2_2_16_1","volume-title":"Augmenting the Sense of Social Presence in Online Video Games Through the Sharing of Biosignals. Available at SSRN 4409781","author":"Hassan Modar","year":"2023","unstructured":"Modar Hassan, Maxwell Kennard, Seiji Yoshitake, Karlos Ishac, Shion Takahashi, SunKyoung Kim, Takashi Matsui, and Masakazu Hirokawa. 2023. Augmenting the Sense of Social Presence in Online Video Games Through the Sharing of Biosignals. 
Available at SSRN 4409781 (2023)."},{"key":"e_1_2_2_17_1","first-page":"94","article-title":"Reflections on an unexpected presentation of turn-taking difficulties in an English discussion class","volume":"2","author":"Jonathan Hennessy","year":"2022","unstructured":"Jonathan Hennessy et al. 2022. Reflections on an unexpected presentation of turn-taking difficulties in an English discussion class. Journal of Multilingual Pedagogy and Practice , Vol. 2 (2022), 94--100.","journal-title":"Journal of Multilingual Pedagogy and Practice"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2499474.2499481"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2002333.2002352"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376785"},{"key":"e_1_2_2_21_1","unstructured":"Mark R Leary. 2016. Introduction to behavioral research methods. Pearson education company."},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517451"},{"key":"e_1_2_2_23_1","first-page":"1","article-title":"Animo: Sharing biosignals on a smartwatch for lightweight social connection","volume":"3","author":"Liu Fannie","year":"2019","unstructured":"Fannie Liu, Mario Esparza, Maria Pavlovskaia, Geoff Kaufman, Laura Dabbish, and Andr\u00e9s Monroy-Hern\u00e1ndez. 2019. Animo: Sharing biosignals on a smartwatch for lightweight social connection. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 
3, 1 (2019), 1--19.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445200"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.3390\/s130810273"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1002\/asi.20794"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025548"},{"key":"e_1_2_2_28_1","volume-title":"ICASSP 2023--2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Mizuno Saki","unstructured":"Saki Mizuno, Nobukatsu Hojo, Satoshi Kobashikawa, and Ryo Masumura. 2023. Next-Speaker Prediction Based on Non-Verbal Information in Multi-Party Video Conversation. In ICASSP 2023--2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1--5."},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3139131.3141217"},{"key":"e_1_2_2_30_1","volume-title":"The relationship between Zoom use with the camera on and Zoom fatigue: considering self-monitoring and social interaction anxiety. Information","author":"Ngien Annabel","year":"2022","unstructured":"Annabel Ngien and Bernie Hogan. 2022. The relationship between Zoom use with the camera on and Zoom fatigue: considering self-monitoring and social interaction anxiety. Information, Communication & Society (2022), 1--19."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.12840\/issn.2255-4165.2018.06.01.015"},{"key":"e_1_2_2_32_1","volume-title":"Conversations over video conferences: An evaluation of the spoken aspects of video-mediated communication. Human-computer interaction","author":"O'Conaill Brid","year":"1993","unstructured":"Brid O'Conaill, Steve Whittaker, and Sylvia Wilbur. 1993. Conversations over video conferences: An evaluation of the spoken aspects of video-mediated communication. Human-computer interaction, Vol. 
8, 4 (1993), 389--428."},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0161794"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1002\/jaal.1159"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2022.2041897"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/958160.958166"},{"key":"e_1_2_2_37_1","volume-title":"GAZE IN REMOTE COLLABORATION: WHERE DO PEOPLE LOOK DURING VIDEO CONFERENCE IN THE CONTEXT OF STUDENT COLLABORATION.","author":"Qin MoShi","year":"2023","unstructured":"MoShi Qin. 2023. GAZE IN REMOTE COLLABORATION: WHERE DO PEOPLE LOOK DURING VIDEO CONFERENCE IN THE CONTEXT OF STUDENT COLLABORATION. (2023)."},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/BSN.2017.7936031"},{"key":"e_1_2_2_39_1","volume-title":"Francesco Ruotolo, Gennaro Ruggiero, and Tina Iachini.","author":"Rapuano Mariachiara","year":"2020","unstructured":"Mariachiara Rapuano, Antonella Ferrara, Filomena Leonela Sbordone, Francesco Ruotolo, Gennaro Ruggiero, and Tina Iachini. 2020. The appearance of the avatar can enhance the sense of co-presence during virtual interactions with users.. In PSYCHOBIT."},{"key":"e_1_2_2_40_1","volume-title":"Challenges and issues in adopting speech recognition. Speech and language processing for human-machine communications","author":"Sahu Priyanka","year":"2018","unstructured":"Priyanka Sahu, Mohit Dua, and Ankit Kumar. 2018. Challenges and issues in adopting speech recognition. Speech and language processing for human-machine communications (2018), 209--215."},{"key":"e_1_2_2_41_1","volume-title":"Mohammad Rafayet Ali, and Mohammed Ehsan Hoque","author":"Samrose Samiha","year":"2018","unstructured":"Samiha Samrose, Ru Zhao, Jeffery White, Vivian Li, Luis Nova, Yichen Lu, Mohammad Rafayet Ali, and Mohammed Ehsan Hoque. 2018. Coco: Collaboration coach for understanding team dynamics during video conferencing. 
Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies, Vol. 1, 4 (2018), 1--24."},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCE.2019.8661836"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173965"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173965"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/URAI.2018.8441766"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1177\/0265407521996055"},{"key":"e_1_2_2_47_1","unstructured":"P. O. N. Staff. 2022. Body Language in Negotiation Can Build Rapport-Without Saying a Word. https:\/\/www.pon.harvard.edu\/daily\/negotiation-skills-daily\/build-rapport-without-saying-a-word-nb\/ Section: Negotiation Skills."},{"key":"e_1_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1561\/9781680836578"},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025574"},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025844"},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2019.10.003"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/985921.986016"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581135"},{"key":"e_1_2_2_54_1","volume-title":"GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos. The 34th Annual ACM Symposium on User Interface Software and Technology","author":"Zhenyi He","year":"2021","unstructured":"Zhenyi He, Keru Wang, Brandon Yushan Feng, Ruofei Du, and Ken Perlin. 2021. GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos. The 34th Annual ACM Symposium on User Interface Software and Technology (2021). 
"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637395","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3637395","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T17:19:56Z","timestamp":1755883196000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637395"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,17]]},"references-count":54,"journal-issue":{"issue":"CSCW1","published-print":{"date-parts":[[2024,4,17]]}},"alternative-id":["10.1145\/3637395"],"URL":"https:\/\/doi.org\/10.1145\/3637395","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,17]]},"assertion":[{"value":"2024-04-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}