{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,28]],"date-time":"2025-11-28T12:33:34Z","timestamp":1764333214316,"version":"3.41.0"},"reference-count":48,"publisher":"Association for Computing Machinery (ACM)","issue":"MHCI","license":[{"start":{"date-parts":[[2022,9,19]],"date-time":"2022-09-19T00:00:00Z","timestamp":1663545600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2022,9,19]]},"abstract":"<jats:p>Research in human-robot collaboration explores aspects of using interaction modalities and their effect on human perception. Particular attention is paid to intent communication, which is essential for successful interaction and collaboration. This work investigates the effect of using audio, visual, and haptic feedback on intent communication in a human-robot collaboration task where the collaborators do not share a direct line of sight. A user study was conducted in virtual reality with 20 participants. Qualitative and quantitative feedback was collected from all participants. When compared with a baseline of no feedback given to the participants, results show that using visual feedback had a significant impact on task efficiency, user experience, and cognitive load. Audio feedback was slightly less impactful, while haptic feedback had a divisive effect. 
Multimodal feedback combining the three modalities showed the highest impact compared to the individual modalities, leading to the highest task efficiency and user experience, and the lowest cognitive load.<\/jats:p>","DOI":"10.1145\/3546731","type":"journal-article","created":{"date-parts":[[2022,9,20]],"date-time":"2022-09-20T23:14:30Z","timestamp":1663715670000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["What Is Happening Behind The Wall?"],"prefix":"10.1145","volume":"6","author":[{"given":"Khaled","family":"Kassem","sequence":"first","affiliation":[{"name":"TU Wien, Vienna, Austria"}]},{"given":"Tobias","family":"Ungerb\u00f6ck","sequence":"additional","affiliation":[{"name":"TU Wien, Vienna, Austria"}]},{"given":"Philipp","family":"Wintersberger","sequence":"additional","affiliation":[{"name":"TU Wien, Vienna, Austria"}]},{"given":"Florian","family":"Michahelles","sequence":"additional","affiliation":[{"name":"TU Wien, Vienna, Austria"}]}],"member":"320","published-online":{"date-parts":[[2022,9,20]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROBOT.1999.770061"},{"key":"e_1_2_1_2_1","volume-title":"Improving Spatial Perception for Augmented Reality X-Ray Vision. In 2009 IEEE Virtual Reality Conference. IEEE. https:\/\/doi.org\/10","author":"Avery Benjamin","year":"2009","unstructured":"Benjamin Avery , Christian Sandor , and Bruce H. Thomas . 2009 . Improving Spatial Perception for Augmented Reality X-Ray Vision. In 2009 IEEE Virtual Reality Conference. IEEE. https:\/\/doi.org\/10.1109\/vr.2009.4811002 10.1109\/vr.2009.4811002 Benjamin Avery, Christian Sandor, and Bruce H. Thomas. 2009. Improving Spatial Perception for Augmented Reality X-Ray Vision. In 2009 IEEE Virtual Reality Conference. IEEE. 
https:\/\/doi.org\/10.1109\/vr.2009.4811002"},{"key":"e_1_2_1_3_1","volume-title":"Michal Kapinus, V\u00edt\u011bzslav Beran, and Pavel Smr\u017e.","author":"Daniel","year":"2019","unstructured":"Daniel Bambu\u0161ek , Zden\u011bk Materna , Michal Kapinus, V\u00edt\u011bzslav Beran, and Pavel Smr\u017e. 2019 . Combining Interactive Spatial Augmented Reality with Head-Mounted Display for End-User Collaborative Robot Programming. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE , 1--8. Daniel Bambu\u0161ek, Zden\u011bk Materna, Michal Kapinus, V\u00edt\u011bzslav Beran, and Pavel Smr\u017e. 2019. Combining Interactive Spatial Augmented Reality with Head-Mounted Display for End-User Collaborative Robot Programming. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 1--8."},{"volume-title":"Robot 2015: Second Iberian Robotics Conference","author":"Baraka Kim","key":"e_1_2_1_4_1","unstructured":"Kim Baraka , Ana Paiva , and Manuela Veloso . 2016. Expressive lights for revealing mobile service robot state . In Robot 2015: Second Iberian Robotics Conference . Springer , 107--119. Kim Baraka, Ana Paiva, and Manuela Veloso. 2016. Expressive lights for revealing mobile service robot state. In Robot 2015: Second Iberian Robotics Conference. Springer, 107--119."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ifacol.2016.10.614"},{"key":"#cr-split#-e_1_2_1_6_1.1","doi-asserted-by":"crossref","unstructured":"Gabriele Bolano Christian Juelg Arne Roennau and Ruediger Dillmann. 2019. Transparent Robot Behavior Using Augmented Reality in Close Human-Robot Interaction. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 1--7. 
https:\/\/doi.org\/10.1109\/RO-MAN46459.2019.8956296 10.1109\/RO-MAN46459.2019.8956296","DOI":"10.1109\/RO-MAN46459.2019.8956296"},{"key":"#cr-split#-e_1_2_1_6_1.2","doi-asserted-by":"crossref","unstructured":"Gabriele Bolano Christian Juelg Arne Roennau and Ruediger Dillmann. 2019. Transparent Robot Behavior Using Augmented Reality in Close Human-Robot Interaction. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 1--7. https:\/\/doi.org\/10.1109\/RO-MAN46459.2019.8956296","DOI":"10.1109\/RO-MAN46459.2019.8956296"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROMAN.2018.8525671"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2018.2865034"},{"volume-title":"Proceedings of the 2018 ACM\/IEEE International Conference on Human-Robot Interaction","author":"Cha Elizabeth","key":"e_1_2_1_9_1","unstructured":"Elizabeth Cha , Naomi T. Fitter , Yunkyung Kim , Terrence Fong , and Maja J . Matari\u0107. 2018a. Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration . In Proceedings of the 2018 ACM\/IEEE International Conference on Human-Robot Interaction ( Chicago, IL, USA) (HRI '18). Association for Computing Machinery, New York, NY, USA, 434--442. https:\/\/doi.org\/10.1145\/3171221.3171285 10.1145\/3171221.3171285 Elizabeth Cha, Naomi T. Fitter, Yunkyung Kim, Terrence Fong, and Maja J. Matari\u0107. 2018a. Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration. In Proceedings of the 2018 ACM\/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI '18). Association for Computing Machinery, New York, NY, USA, 434--442. https:\/\/doi.org\/10.1145\/3171221.3171285"},{"key":"e_1_2_1_10_1","volume-title":"Matari\u0107","author":"Cha Elizabeth","year":"2018","unstructured":"Elizabeth Cha , Yunkyung Kim , Terrence Fong , and Maja J . Matari\u0107 . 2018 b. A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots . 
Elizabeth Cha, Yunkyung Kim, Terrence Fong, and Maja J. Matari\u0107. 2018b. A Survey of Nonverbal Signaling Methods for Non-Humanoid Robots."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243503"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.promfg.2020.10.009"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.promfg.2020.10.088"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2371574.2371610"},{"volume-title":"19th International Conference on Mobile and Ubiquitous Multimedia. ACM. https:\/\/doi.org\/10.1145\/3428361.3428402","author":"Gruenefeld Uwe","key":"e_1_2_1_15_1","unstructured":"Uwe Gruenefeld , Yvonne Br\u00fcck , and Susanne Boll . 2020. Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-mounted Optical See-through Augmented Reality . In 19th International Conference on Mobile and Ubiquitous Multimedia. ACM. https:\/\/doi.org\/10.1145\/3428361.3428402 10.1145\/3428361.3428402 Uwe Gruenefeld, Yvonne Br\u00fcck, and Susanne Boll. 2020. Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-mounted Optical See-through Augmented Reality. In 19th International Conference on Mobile and Ubiquitous Multimedia. ACM. https:\/\/doi.org\/10.1145\/3428361.3428402"},{"key":"e_1_2_1_16_1","volume-title":"2017 3rd International Conference on Control, Automation and Robotics, ICCAR 2017 (2017","author":"Huy D.Q.","year":"2017","unstructured":"D.Q. Huy , I. Vietcheslav , and S.G.L. Gerald . 2017 . See-through and spatial augmented reality - A novel framework for human-robot interaction . 2017 3rd International Conference on Control, Automation and Robotics, ICCAR 2017 (2017 ), 719--726. https:\/\/doi.org\/10.1109\/ICCAR.2017.7942791 10.1109\/ICCAR.2017.7942791 D.Q. Huy, I. Vietcheslav, and S.G.L. Gerald. 2017. See-through and spatial augmented reality - A novel framework for human-robot interaction. 
2017 3rd International Conference on Control, Automation and Robotics, ICCAR 2017 (2017), 719--726. https:\/\/doi.org\/10.1109\/ICCAR.2017.7942791"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1080\/21693277.2019.1645628"},{"key":"e_1_2_1_18_1","volume-title":"It's your turn! - A collaborative human-robot pick-and-place scenario in a virtual industrial setting. CoRR","author":"Krenn Brigitte","year":"2021","unstructured":"Brigitte Krenn , Tim Reinboth , Stephanie Gross , Christine Busch , Martina Mara , Kathrin Meyer , Michael Heiml , and Thomas Layer-Wagner . 2021. It's your turn! - A collaborative human-robot pick-and-place scenario in a virtual industrial setting. CoRR , Vol. abs\/ 2105 .13838 ( 2021 ). arxiv: 2105.13838 https:\/\/arxiv.org\/abs\/2105.13838 Brigitte Krenn, Tim Reinboth, Stephanie Gross, Christine Busch, Martina Mara, Kathrin Meyer, Michael Heiml, and Thomas Layer-Wagner. 2021. It's your turn! - A collaborative human-robot pick-and-place scenario in a virtual industrial setting. CoRR , Vol. abs\/2105.13838 (2021). arxiv: 2105.13838 https:\/\/arxiv.org\/abs\/2105.13838"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cirp.2009.09.009"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSMC.2020.3041231"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.5555\/1734454.1734544"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3472223"},{"key":"e_1_2_1_23_1","volume-title":"IFIP International Conference on Advances in Production Management Systems. Springer, 606--613","author":"Matsas Elias","year":"2012","unstructured":"Elias Matsas , Dimitrios Batras , and George-Christopher Vosniakos . 2012 . Beware of the robot: a highly interactive and immersive Virtual Reality training application in robotic manufacturing systems . In IFIP International Conference on Advances in Production Management Systems. Springer, 606--613 . 
Elias Matsas, Dimitrios Batras, and George-Christopher Vosniakos. 2012. Beware of the robot: a highly interactive and immersive Virtual Reality training application in robotic manufacturing systems. In IFIP International Conference on Advances in Production Management Systems. Springer, 606--613."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2927929.2927948"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00170-017-0428-5"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.rcim.2017.09.005"},{"key":"e_1_2_1_27_1","volume-title":"David Williamson Shaffer, and Bilge Mutlu","author":"Michaelis Joseph E.","year":"2020","unstructured":"Joseph E. Michaelis , Amanda Siebert-Evenstone , David Williamson Shaffer, and Bilge Mutlu . 2020 . Collaborative or Simply Uncaged? Understanding Human-Cobot Interactions in Automation. Association for Computing Machinery , New York, NY, USA, 1--12. https:\/\/doi.org\/10.1145\/3313831.3376547 10.1145\/3313831.3376547 Joseph E. Michaelis, Amanda Siebert-Evenstone, David Williamson Shaffer, and Bilge Mutlu. 2020. Collaborative or Simply Uncaged? Understanding Human-Cobot Interactions in Automation. Association for Computing Machinery, New York, NY, USA, 1--12. 
https:\/\/doi.org\/10.1145\/3313831.3376547"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10055-008-0081-2"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1027933.1027957"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procir.2018.01.009"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2935334.2935348"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0166-4115(08)62386-9"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.9781\/ijimai.2017.09.001"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3338286.3340134"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/aim.2016.7577007"},{"volume-title":"Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services","author":"Song T. H.","key":"e_1_2_1_36_1","unstructured":"T. H. Song , J. H. Park , S. M. Chung , S. H. Hong , K. H. Kwon , S. Lee , and J. W. Jeon . 2007. A Study on Usability of Human-Robot Interaction Using a Mobile Computer and a Human Interface Device . In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services ( Singapore) (MobileHCI '07). Association for Computing Machinery, New York, NY, USA, 462--466. https:\/\/doi.org\/10.1145\/1377999.1378055 10.1145\/1377999.1378055 T. H. Song, J. H. Park, S. M. Chung, S. H. Hong, K. H. Kwon, S. Lee, and J. W. Jeon. 2007. A Study on Usability of Human-Robot Interaction Using a Mobile Computer and a Human Interface Device. In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services (Singapore) (MobileHCI '07). Association for Computing Machinery, New York, NY, USA, 462--466. 
https:\/\/doi.org\/10.1145\/1377999.1378055"},{"volume-title":"Forecasting User Attention during Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors (MobileHCI '18)","author":"Steil Julian","key":"e_1_2_1_37_1","unstructured":"Julian Steil , Philipp M\u00fcller , Yusuke Sugano , and Andreas Bulling . 2018. Forecasting User Attention during Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors (MobileHCI '18) . Association for Computing Machinery , New York, NY, USA , Article 1, 13 pages. https:\/\/doi.org\/10.1145\/3229434.3229439 10.1145\/3229434.3229439 Julian Steil, Philipp M\u00fcller, Yusuke Sugano, and Andreas Bulling. 2018. Forecasting User Attention during Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors (MobileHCI '18). Association for Computing Machinery, New York, NY, USA, Article 1, 13 pages. https:\/\/doi.org\/10.1145\/3229434.3229439"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.rcim.2018.08.005"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.promfg.2017.07.127"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300737"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.promfg.2020.01.066"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3171221.3171253"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/THMS.2021.3092684"},{"key":"e_1_2_1_44_1","volume-title":"Presence, Embodiment, Performance, nor Visual Attention","author":"Wenk Nicolas","year":"2022","unstructured":"Nicolas Wenk , Mirjam V. Jordi , Karin A. Buetler , and Laura Marchal-Crespo . 2022. Hiding Assistive Robots During Training in Immersive VR Does not Affect Users' Motivation , Presence, Embodiment, Performance, nor Visual Attention . IEEE Transactions on Neural Systems and Rehabilitation Engineering ( 2022 ), 1--1. 
https:\/\/doi.org\/10.1109\/TNSRE.2022.3147260 10.1109\/TNSRE.2022.3147260 Nicolas Wenk, Mirjam V. Jordi, Karin A. Buetler, and Laura Marchal-Crespo. 2022. Hiding Assistive Robots During Training in Immersive VR Does not Affect Users' Motivation, Presence, Embodiment, Performance, nor Visual Attention. IEEE Transactions on Neural Systems and Rehabilitation Engineering (2022), 1--1. https:\/\/doi.org\/10.1109\/TNSRE.2022.3147260"},{"key":"e_1_2_1_45_1","volume-title":"Multiple resources and mental workload. Human factors","author":"Wickens Christopher D","year":"2008","unstructured":"Christopher D Wickens . 2008. Multiple resources and mental workload. Human factors , Vol. 50 , 3 ( 2008 ), 449--455. Christopher D Wickens. 2008. Multiple resources and mental workload. Human factors, Vol. 50, 3 (2008), 449--455."},{"key":"e_1_2_1_46_1","volume-title":"Compatibility and resource competition between modalities of input, central processing, and output. Human factors","author":"Wickens Christopher D","year":"1983","unstructured":"Christopher D Wickens , Diane L Sandry , and Michael Vidulich . 1983. Compatibility and resource competition between modalities of input, central processing, and output. Human factors , Vol. 25 , 2 ( 1983 ), 227--248. Christopher D Wickens, Diane L Sandry, and Michael Vidulich. 1983. Compatibility and resource competition between modalities of input, central processing, and output. Human factors, Vol. 
25, 2 (1983), 227--248."},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2943753"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3546731","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3546731","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T18:44:04Z","timestamp":1750272244000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3546731"}},"subtitle":["Towards a Better Understanding of a Hidden Robot's Intent By Multimodal Cues"],"short-title":[],"issued":{"date-parts":[[2022,9,19]]},"references-count":48,"journal-issue":{"issue":"MHCI","published-print":{"date-parts":[[2022,9,19]]}},"alternative-id":["10.1145\/3546731"],"URL":"https:\/\/doi.org\/10.1145\/3546731","relation":{},"ISSN":["2573-0142"],"issn-type":[{"type":"electronic","value":"2573-0142"}],"subject":[],"published":{"date-parts":[[2022,9,19]]},"assertion":[{"value":"2022-09-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}