{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,10]],"date-time":"2026-01-10T01:53:24Z","timestamp":1768010004210,"version":"3.49.0"},"reference-count":84,"publisher":"Association for Computing Machinery (ACM)","issue":"MHCI","license":[{"start":{"date-parts":[[2023,9,11]],"date-time":"2023-09-11T00:00:00Z","timestamp":1694390400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2023,9,11]]},"abstract":"<jats:p>Teleconferencing is poised to become one of the most frequent use cases of immersive platforms, since it supports high levels of presence and embodiment in collaborative settings. On desktop and mobile platforms, teleconferencing solutions are already among the most popular apps and accumulate significant usage time---not least due to the pandemic or as a desirable substitute for air travel or commuting.<\/jats:p>\n          <jats:p>In this paper, we present ViGather, an immersive teleconferencing system that integrates users of all platform types into a joint experience via equal representation and a first-person experience. ViGather renders all participants as embodied avatars in one shared scene to establish co-presence and elicit natural behavior during collocated conversations, including nonverbal communication cues such as eye contact between participants as well as body language such as turning one's body to another person or using hand gestures to emphasize parts of a conversation during the virtual hangout. 
Since each user embodies an avatar and experiences situated meetings from an egocentric perspective no matter the device they join from, ViGather alleviates potential concerns about self-perception and appearance while mitigating potential 'Zoom fatigue', as users' self-views are not shown. For participants in Mixed Reality, our system leverages the rich sensing and reconstruction capabilities of today's headsets. For users of tablets, laptops, or PCs, ViGather reconstructs the user's pose from the device's front-facing camera, estimates eye contact with other participants, and relates these non-verbal cues to immediate avatar animations in the shared scene.<\/jats:p>\n          <jats:p>Our evaluation compared participants' behavior and impressions while videoconferencing in groups of four inside ViGather with those in Meta Horizon as a baseline for a social VR setting. Participants who joined on traditional screen devices (e.g., laptops and desktops) using ViGather reported a significantly higher sense of physical, spatial, and self-presence than when using Horizon, while all perceived similar levels of active social presence when using Virtual Reality headsets. 
Our follow-up study confirmed the importance of representing users on traditional screen devices as reconstructed avatars for perceiving self-presence.<\/jats:p>","DOI":"10.1145\/3604279","type":"journal-article","created":{"date-parts":[[2023,9,13]],"date-time":"2023-09-13T15:16:20Z","timestamp":1694618180000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["ViGather: Inclusive Virtual Conferencing with a Joint Experience Across Traditional Screen Devices and Mixed Reality Headsets"],"prefix":"10.1145","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7792-0241","authenticated-orcid":false,"given":"Huajian","family":"Qiu","sequence":"first","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3334-7727","authenticated-orcid":false,"given":"Paul","family":"Streli","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1351-1606","authenticated-orcid":false,"given":"Tiffany","family":"Luong","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7162-0133","authenticated-orcid":false,"given":"Christoph","family":"Gebhardt","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9655-9519","authenticated-orcid":false,"given":"Christian","family":"Holz","sequence":"additional","affiliation":[{"name":"ETH Z\u00fcrich, Zurich, Switzerland"}]}],"member":"320","published-online":{"date-parts":[[2023,9,13]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3463499"},{"key":"e_1_2_1_2_1","unstructured":"Alphabet. 2022. MediaPipe. https:\/\/mediapipe.dev\/"},{"key":"e_1_2_1_3_1","unstructured":"Apple. 2022. ARKit. 
https:\/\/developer.apple.com\/augmented-reality\/"},{"key":"e_1_2_1_4_1","unstructured":"Sara Atske. 2021. 1. How the internet and technology shaped Americans' personal experiences amid COVID-19. https:\/\/www.pewresearch.org\/internet\/2021\/09\/01\/how-the-internet-and-technology-shaped-americans-personal-experiences-amid-covid-19\/"},{"key":"e_1_2_1_5_1","unstructured":"Autodesk. 2023. The Wild. https:\/\/thewild.com\/"},{"key":"e_1_2_1_6_1","volume-title":"Blazeface: Sub-millisecond neural face detection on mobile gpus. arXiv preprint arXiv:1907.05047","author":"Bazarevsky Valentin","year":"2019","unstructured":"Valentin Bazarevsky, Yury Kartynnik, Andrey Vakunov, Karthik Raveendran, and Matthias Grundmann. 2019. Blazeface: Sub-millisecond neural face detection on mobile gpus. arXiv preprint arXiv:1907.05047 (2019)."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/632716.632838"},{"key":"e_1_2_1_8_1","volume-title":"Future of information and communication conference","author":"Campbell Abraham G","unstructured":"Abraham G Campbell, Thomas Holz, Jonny Cosgrove, Mike Harlick, and Tadhg O'Sullivan. 2019. Uses of virtual reality for communication in financial services: A case study on comparing different telepresence interfaces: Virtual reality compared to video conferencing. In Future of information and communication conference. Springer, 463--481."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2207676.2208639"},{"key":"e_1_2_1_10_1","doi-asserted-by":"crossref","unstructured":"Xu Chen Tianjian Jiang Jie Song Jinlong Yang Michael J Black Andreas Geiger and Otmar Hilliges. 2022. gDNA: Towards Generative Detailed Neural Avatars. 
In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR52688.2022.01978"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2019.102897"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR55827.2022.00029"},{"key":"e_1_2_1_13_1","unstructured":"Cluster. 2023. Cluster. https:\/\/cluster.mu\/"},{"key":"e_1_2_1_14_1","unstructured":"Zoom Video Communications. 2022. zoom. https:\/\/zoom.us\/"},{"key":"e_1_2_1_15_1","unstructured":"HTC Corporation. 2022. HTC Vive Flow. https:\/\/www.vive.com\/us\/product\/vive-flow\/overview\/"},{"key":"e_1_2_1_16_1","unstructured":"Arthur Digital. 2023. Arthur. https:\/\/www.arthur.digital\/"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/142750.143052"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.fusengdes.2008.11.021"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3501836"},{"key":"e_1_2_1_20_1","unstructured":"Mozilla Foundation. 2022. mozilla:hubs. https:\/\/hubs.mozilla.com\/"},{"key":"e_1_2_1_21_1","unstructured":"Frame. 2023. Frame. https:\/\/learn.framevr.io\/\/"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3270316.3271543"},{"key":"e_1_2_1_23_1","volume-title":"Manipulating Avatars for Enhanced Communication in Extended Reality. In 2021 IEEE International Conference on Intelligent Reality (ICIR). IEEE, 9--16","author":"Hart Jonathon Derek","year":"2021","unstructured":"Jonathon Derek Hart, Thammathip Piumsomboon, Gun A Lee, Ross T Smith, and Mark Billinghurst. 2021. Manipulating Avatars for Enhanced Communication in Extended Reality. In 2021 IEEE International Conference on Intelligent Reality (ICIR). 
IEEE, 9--16."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300577"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR50242.2020.00082"},{"key":"e_1_2_1_26_1","volume-title":"GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos. In The 34th Annual ACM Symposium on User Interface Software and Technology. 769--782","author":"He Zhenyi","year":"2021","unstructured":"Zhenyi He, Keru Wang, Brandon Yushan Feng, Ruofei Du, and Ken Perlin. 2021. GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos. In The 34th Annual ACM Symposium on User Interface Software and Technology. 769--782."},{"key":"e_1_2_1_27_1","unstructured":"Hyperspace. 2023. MootUp. https:\/\/mootup.com\/"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-20065-6_26"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1531326.1531370"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3382820"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR52148.2021.00021"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2582051.2582097"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2598153.2598165"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3473856.3473865"},{"key":"e_1_2_1_35_1","volume-title":"Body Language of Avatars in VR Meetings as Communication Status Cue: Recommendations for Interaction Design and Implementation. i-com","volume":"21","author":"Kurzweg Marco","year":"2022","unstructured":"Marco Kurzweg and Katrin Wolf. 2022. Body Language of Avatars in VR Meetings as Communication Status Cue: Recommendations for Interaction Design and Implementation. i-com, Vol. 21 (2022), 175 -- 201."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/142750.142980"},{"key":"e_1_2_1_37_1","unstructured":"J Lafferty and P Eady. 1974. 
The Desert Survival Problem Manual."},{"key":"e_1_2_1_38_1","volume-title":"Proceedings of the 12th annual international workshop on presence. 1--15","author":"Lombard Matthew","year":"2009","unstructured":"Matthew Lombard, Theresa B Ditton, and Lisa Weinstein. 2009. Measuring presence: the temple presence inventory. In Proceedings of the 12th annual international workshop on presence. 1--15."},{"key":"e_1_2_1_39_1","unstructured":"Tencent Holdings Ltd. 2022. Wechat. https:\/\/www.wechat.com\/"},{"key":"e_1_2_1_40_1","volume-title":"Juhyun Lee, et al.","author":"Lugaresi Camillo","year":"2019","unstructured":"Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, et al. 2019. Mediapipe: A framework for building perception pipelines. arXiv preprint arXiv:1906.08172 (2019)."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2017.02.066"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073596"},{"key":"e_1_2_1_43_1","unstructured":"Inc Meta Platforms. 2022a. Facebook. https:\/\/facebook.com"},{"key":"e_1_2_1_44_1","unstructured":"Inc Meta Platforms. 2022b. Meta Avatar SDK. https:\/\/developer.oculus.com\/documentation\/unity\/meta-avatars-overview\/"},{"key":"e_1_2_1_45_1","unstructured":"Inc Meta Platforms. 2022c. Meta Horizon Workrooms. https:\/\/www.oculus.com\/workrooms"},{"key":"e_1_2_1_46_1","unstructured":"Inc Meta Platforms. 2022d. Meta Quest Pro. https:\/\/www.meta.com\/ch\/en\/quest\/quest-pro\/"},{"key":"e_1_2_1_47_1","unstructured":"Inc Microsoft. 2022a. Microsoft Teams. https:\/\/www.microsoft.com\/en-us\/microsoft-teams\/group-chat-software"},{"key":"e_1_2_1_48_1","unstructured":"Inc Microsoft. 2022b. Skype. 
https:\/\/www.skype.com\/en\/"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compedu.2018.04.012"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compedu.2006.12.008"},{"key":"e_1_2_1_51_1","first-page":"1","article-title":"The Acceptance and Use of Video Conferencing for Teaching in Covid-19 Pandemic: An Empirical Study in Vietnam","volume":"12","author":"Nguyen Thanh Khuong","year":"2021","unstructured":"Thanh Khuong Nguyen and Thi Hong Tham Nguyen. 2021. The Acceptance and Use of Video Conferencing for Teaching in Covid-19 Pandemic: An Empirical Study in Vietnam. AsiaCALL Online Journal, Vol. 12, 5 (2021), 1--16.","journal-title":"AsiaCALL Online Journal"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/2807442.2807497"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/1959022.1959025"},{"key":"e_1_2_1_54_1","unstructured":"OptiTrack. 2022. Motion Capture Systems. http:\/\/optitrack.com\/"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984517"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1186\/s40691-021-00257-6"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/2818048.2819965"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compedu.2018.10.006"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173620"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1089\/cyber.2021.0112"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.5555\/3113194.3113386"},{"key":"e_1_2_1_62_1","volume-title":"Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities. 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","author":"Roth Daniel","year":"2018","unstructured":"Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, and Marc Erich Latoschik. 2018. 
Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities. 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (2018), 215--222."},{"key":"e_1_2_1_63_1","doi-asserted-by":"crossref","unstructured":"Jiwon Ryu and Gerard Jounghyun Kim. 2020. Interchanging the Mode of Display Between Desktop and Immersive Headset for Effective and Usable On-line Learning. In IHCI.","DOI":"10.1007\/978-3-030-68449-5_22"},{"key":"e_1_2_1_64_1","volume-title":"DS 60: Proceedings of DESIGN 2010, the 11th International Design Conference","author":"Schroeer Bernd","year":"2010","unstructured":"Bernd Schroeer, Andreas Kain, and Udo Lindemann. 2010. Supporting creativity in conceptual design: Method 635-extended. In DS 60: Proceedings of DESIGN 2010, the 11th International Design Conference, Dubrovnik, Croatia."},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3284432.3284439"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2011.5995316"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3196709.3196788"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385959.3422699"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126338"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581468"},{"key":"e_1_2_1_71_1","unstructured":"Spatial Systems. 2023. Spatial. 
https:\/\/www.spatial.io\/\/"},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.603"},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.603"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.214"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1111\/jcal.12309"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385956.3418940"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/1978942.1978963"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/VR51125.2022.00106"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00622"},{"key":"e_1_2_1_80_1","volume-title":"Borst","author":"Yoshimura Andrew","year":"2020","unstructured":"Andrew Yoshimura and Christoph W. Borst. 2020a. Evaluation and Comparison of Desktop Viewing and Headset Viewing of Remote Lectures in VR with Mozilla Hubs. In ICAT-EGVE."},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385956.3422124"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2898737"},{"key":"e_1_2_1_83_1","volume-title":"Mediapipe hands: On-device real-time hand tracking. arXiv preprint arXiv:2006.10214","author":"Zhang Fan","year":"2020","unstructured":"Fan Zhang, Valentin Bazarevsky, Andrey Vakunov, Andrei Tkachenka, George Sung, Chuo-Ling Chang, and Matthias Grundmann. 2020. Mediapipe hands: On-device real-time hand tracking. arXiv preprint arXiv:2006.10214 (2020)."},{"key":"e_1_2_1_84_1","volume-title":"Computer graphics forum","author":"Zollh\u00f6fer Michael","unstructured":"Michael Zollh\u00f6fer, Justus Thies, Pablo Garrido, Derek Bradley, Thabo Beeler, Patrick P\u00e9rez, Marc Stamminger, Matthias Nie\u00dfner, and Christian Theobalt. 2018. State of the art on monocular 3D face reconstruction, tracking, and applications. In Computer graphics forum, Vol. 37. 
Wiley Online Library, 523--550."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3604279","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3604279","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:47:17Z","timestamp":1750178837000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3604279"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,11]]},"references-count":84,"journal-issue":{"issue":"MHCI","published-print":{"date-parts":[[2023,9,11]]}},"alternative-id":["10.1145\/3604279"],"URL":"https:\/\/doi.org\/10.1145\/3604279","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,11]]},"assertion":[{"value":"2023-09-13","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}