{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T15:38:33Z","timestamp":1768318713737,"version":"3.49.0"},"reference-count":60,"publisher":"Cambridge University Press (CUP)","issue":"12","license":[{"start":{"date-parts":[[2024,10,30]],"date-time":"2024-10-30T00:00:00Z","timestamp":1730246400000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Robotica"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Precise and efficient performance in remote robotic teleoperation relies on intuitive interaction. This requires both accurate control actions and complete perception (vision, haptic, and other sensory feedback) of the remote environment. Especially in immersive remote teleoperation, complete 3D perception of the remote environment allows operators to gain improved situational awareness. Color and Depth (RGB-D) cameras capture remote environments as dense 3D point clouds for real-time visualization. However, providing sufficient situational awareness requires fast, high-quality data transmission from acquisition to virtual reality rendering. Unfortunately, dense point-cloud data can suffer from network delays and bandwidth limits, impacting the teleoperator\u2019s situational awareness. Understanding how the human eye works can help mitigate these challenges. This paper introduces a solution by implementing foveation, mimicking the human eye\u2019s focus by smartly sampling and rendering dense point clouds for an intuitive remote teleoperation interface. This provides high resolution in the user\u2019s central field, which gradually reduces toward the edges. 
However, this reduced resolution in the peripheral vision may either go unnoticed or risk losing information and increasing the user\u2019s cognitive load. This work investigates these advantages and drawbacks through an experimental study and describes the overall system, including its software, hardware, and communication framework. The proposed framework achieves significant enhancements in latency and throughput, exceeding 60% and 40% improvements, respectively, compared with state-of-the-art works. A user study reveals that the framework has minimal impact on the user\u2019s visual quality of experience while helping to reduce the error rate significantly. Further, a 50% reduction in task execution time highlights the benefits of the proposed framework in immersive remote telerobotics applications.<\/jats:p>","DOI":"10.1017\/s0263574724001784","type":"journal-article","created":{"date-parts":[[2024,10,30]],"date-time":"2024-10-30T09:19:33Z","timestamp":1730279973000},"page":"4223-4248","source":"Crossref","is-referenced-by-count":1,"title":["Immersive remote telerobotics: foveated unicasting and remote visualization for intuitive interaction"],"prefix":"10.1017","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9938-4379","authenticated-orcid":false,"given":"Yonas T.","family":"Tefera","sequence":"first","affiliation":[{"name":"Istituto Italiano di Tecnologia (IIT)"}]},{"given":"Yaesol","family":"Kim","sequence":"additional","affiliation":[{"name":"Istituto Italiano di Tecnologia (IIT)"}]},{"given":"Sara","family":"Anastasi","sequence":"additional","affiliation":[{"name":"Istituto Nazionale per l\u2019Assicurazione contro gli Infortuni sul Lavoro (INAIL)"}]},{"given":"Paolo","family":"Fiorini","sequence":"additional","affiliation":[{"name":"University of Verona"}]},{"given":"Darwin G.","family":"Caldwell","sequence":"additional","affiliation":[{"name":"Istituto Italiano di Tecnologia 
(IIT)"}]},{"given":"Nikhil","family":"Deshpande","sequence":"additional","affiliation":[{"name":"Istituto Italiano di Tecnologia (IIT)"}]}],"member":"56","published-online":{"date-parts":[[2024,10,30]]},"reference":[{"key":"S0263574724001784_ref40","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2018.2794119"},{"key":"S0263574724001784_ref47","doi-asserted-by":"crossref","unstructured":"[47] Ishiguro, Y. and Rekimoto, J. , \u201cPeripheral Vision Annotation: Noninterference Information Presentation Method for Mobile Augmented Reality,\u201d In Proceedings of the 2nd Augmented Human International Conference, New York, NY, USA (2011) pp. 1\u20135.","DOI":"10.1145\/1959826.1959834"},{"key":"S0263574724001784_ref50","doi-asserted-by":"publisher","DOI":"10.1364\/JOSA.67.000202"},{"key":"S0263574724001784_ref11","doi-asserted-by":"publisher","DOI":"10.20965\/jrm.2014.p0486"},{"key":"S0263574724001784_ref12","doi-asserted-by":"crossref","unstructured":"[12] Dima, E. , Brunnstr\u00f6m, K. , Sj\u00f6str\u00f6m, M. , Andersson, M. , Edlund, J. , Johanson, M. and Qureshi, T. , \u201cView Position Impact on QoE in an Immersive Telepresence System for Remote Operation,\u201d In: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), IEEE, Berlin, Germany (2019) pp. 1\u20133.","DOI":"10.1109\/QoMEX.2019.8743147"},{"key":"S0263574724001784_ref36","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.12956"},{"key":"S0263574724001784_ref23","doi-asserted-by":"crossref","unstructured":"[23] Izadi, S. , Kim, D. , Hilliges, O. , Molyneaux, D. , Newcombe, R. , Kohli, P. , Shotton, J. , Hodges, S. , Freeman, D. and Davison, A. , \u201cKinectfusion: Real-Time 3D Reconstruction and Interaction Using a Moving Depth Camera,\u201d In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Association for Computing Machinery, New York, NY, USA (2011) pp. 
559\u2013568.","DOI":"10.1145\/2047196.2047270"},{"key":"S0263574724001784_ref31","doi-asserted-by":"crossref","unstructured":"[31] De Pace, F. , Gorjup, G. , Bai, H. , Sanna, A. , Liarokapis, M. and Billinghurst, M. , \u201cLeveraging Enhanced Virtual Reality Methods and Environments for Efficient, Intuitive, and Immersive Teleoperation of Robots,\u201d In: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, Xi\u2019an, China (2021) pp. 12967\u201312973.","DOI":"10.1109\/ICRA48506.2021.9560757"},{"key":"S0263574724001784_ref60","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2898435"},{"key":"S0263574724001784_ref26","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2016.2543039"},{"key":"S0263574724001784_ref52","doi-asserted-by":"publisher","DOI":"10.1167\/11.5.14"},{"key":"S0263574724001784_ref14","doi-asserted-by":"publisher","DOI":"10.3389\/frvir.2020.582204"},{"key":"S0263574724001784_ref55","first-page":"1","volume-title":"Practical Packet Analysis: Using Wireshark to Solve Real-World Network Problems","author":"Sanders","year":"2017"},{"key":"S0263574724001784_ref25","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2947048"},{"key":"S0263574724001784_ref44","doi-asserted-by":"publisher","DOI":"10.1016\/j.preteyeres.2018.10.001"},{"key":"S0263574724001784_ref4","doi-asserted-by":"publisher","DOI":"10.1007\/s10846-021-01311-7"},{"key":"S0263574724001784_ref21","first-page":"391","volume-title":"Wissenschaftlich-Technische Jahrestagung der 
DGPF","author":"Weinmann","year":"2020"},{"key":"S0263574724001784_ref3","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574721000576"},{"key":"S0263574724001784_ref45","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-12-800965-9.00003-9"},{"key":"S0263574724001784_ref8","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574709005517"},{"key":"S0263574724001784_ref17","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366183"},{"key":"S0263574724001784_ref1","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574713000234"},{"key":"S0263574724001784_ref18","doi-asserted-by":"crossref","unstructured":"[18] Maimone, A. and Fuchs, H. , \u201cEncumbrance-Free Telepresence System with Real-Time 3D Capture and Display Using Commodity Depth Cameras,\u201d In: 10th IEEE International Symposium on Mixed and Augmented Reality, IEEE,Basel, Switzerland (2011) pp. 137\u2013146.","DOI":"10.1109\/ISMAR.2011.6092379"},{"key":"S0263574724001784_ref30","doi-asserted-by":"crossref","unstructured":"[30] Van Der Hooft, J. , Wauters, T. , De Turck, F. , Timmerer, C. and Hellwagner, H. , \u201cTowards 6DoF http Adaptive Streaming Through Point Cloud Compression,\u201d In: Proceedings of the 27th ACM International Conference on Multimedia, New York, NY, USA (2019) pp. 2405\u20132413.","DOI":"10.1145\/3343031.3350917"},{"key":"S0263574724001784_ref28","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2007.70441"},{"key":"S0263574724001784_ref22","doi-asserted-by":"publisher","DOI":"10.3390\/app132212129"},{"key":"S0263574724001784_ref42","doi-asserted-by":"crossref","unstructured":"[42] Sch\u00fctz, M. , Kr\u00f6sl, K. and Wimmer, M. , \u201cReal-Time Continuous Level of Detail Rendering of Point Clouds,\u201d In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan (2019) pp. 
103\u2013110.","DOI":"10.1109\/VR.2019.8798284"},{"key":"S0263574724001784_ref35","doi-asserted-by":"publisher","DOI":"10.1177\/2041669520983338"},{"key":"S0263574724001784_ref37","unstructured":"[37] Bruder, V. , Schulz, C. , Bauer, R. , Frey, S. , Weiskopf, D. and Ertl, T. , \u201cVoronoi-Based Foveated Volume Rendering,\u201d In: EuroVis (Short Papers) pp. 67\u201371 (2019)."},{"key":"S0263574724001784_ref41","doi-asserted-by":"publisher","DOI":"10.2352\/ISSN.2470-1173.2020.1.VDA-374"},{"key":"S0263574724001784_ref6","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574700007517"},{"key":"S0263574724001784_ref56","unstructured":"[56] M. 3DG and Requirements 2017. Call for Proposals for Point Cloud Compression V2. Technical report, MPEG 3DG and Requirements, Hobart, AU."},{"key":"S0263574724001784_ref38","unstructured":"[38] Charlton, A. , What is foveated rendering? Explaining the VR technology key to lifelike realism (2021) (Accessed: 05-Sep-2021)."},{"key":"S0263574724001784_ref53","doi-asserted-by":"publisher","DOI":"10.1098\/rsos.172331"},{"key":"S0263574724001784_ref57","first-page":"2","article-title":"Cloudcompare-open source project","volume":"588","author":"Girardeau-Montaut","year":"2011","journal-title":"OpenSource Project"},{"key":"S0263574724001784_ref5","first-page":"1","volume-title":"The International Workshop on Virtual Augmented, and Mixed-Reality for Human-Robot Interactions at HRI","author":"Tefera","year":"2022"},{"key":"S0263574724001784_ref9","first-page":"3630","volume-title":"IEEE\/RSJ IROS","author":"Stotko","year":"2019"},{"key":"S0263574724001784_ref2","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574722001035"},{"key":"S0263574724001784_ref33","first-page":"24","article-title":"Eye movements of aircraft pilots during instrument-landing approaches","volume":"9","author":"Fitts","year":"1949","journal-title":"Aeronaut. Eng. 
Rev."},{"key":"S0263574724001784_ref43","first-page":"597","volume-title":"Guyton and Hall Textbook of Medical Physiology","author":"Guyton","year":"2011"},{"key":"S0263574724001784_ref13","doi-asserted-by":"crossref","unstructured":"[13] Rosen, E. , Whitney, D. , Fishman, M. , Ullman, D. and Tellex, S. , \u201cMixed Reality as a Bidirectional Communication Interface for Human-Robot Interaction,\u201d In: 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA (2020) pp. 11431\u201311438.","DOI":"10.1109\/IROS45743.2020.9340822"},{"key":"S0263574724001784_ref24","first-page":"1697","article-title":"ElasticFusion: Dense SLAM without a pose graph","volume":"11","author":"Whelan","year":"2015","journal-title":"In Robot.: Sci. Syst"},{"key":"S0263574724001784_ref58","first-page":"1","volume-title":"Recommendation ITU-T P.919: Subjective Test Methodologies for 360\u00b0 Video On Head-Mounted Displays","year":"2020"},{"key":"S0263574724001784_ref49","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2017.08.001"},{"key":"S0263574724001784_ref27","first-page":"31","article-title":"Video coding of dynamic 3D point cloud data","volume":"8","author":"Schwarz","year":"2019","journal-title":"APSIPA Trans. Signal Info. Process."},{"key":"S0263574724001784_ref59","doi-asserted-by":"publisher","DOI":"10.1016\/j.edurev.2018.03.003"},{"key":"S0263574724001784_ref51","doi-asserted-by":"publisher","DOI":"10.1016\/0002-9394(58)90042-4"},{"key":"S0263574724001784_ref46","first-page":"820","volume-title":"The Oxford Handbook of Eye Movements","author":"Hy\u00f6n\u00e4","year":"2011"},{"key":"S0263574724001784_ref7","doi-asserted-by":"crossref","unstructured":"[7] Mossel, A. and Kr\u00f6ter, M. , \u201cStreaming and Exploration of Dynamically Changing Dense 3D Reconstructions in Immersive Virtual Reality,\u201d In: 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, Merida, Mexico (2016) pp. 
43\u201348.","DOI":"10.1109\/ISMAR-Adjunct.2016.0035"},{"key":"S0263574724001784_ref54","doi-asserted-by":"crossref","unstructured":"[54] Handa, A. , Whelan, T. , McDonald, J. B. and Davison, A. J. , \u201cA Benchmark for RGB-D Visual Odometry, 3D Reconstruction and SLAM,\u201d In: IEEE International Conference on Robotics and Automation, ICRA, Hong Kong, China (2014) pp. 1524\u20131531.","DOI":"10.1109\/ICRA.2014.6907054"},{"key":"S0263574724001784_ref48","doi-asserted-by":"publisher","DOI":"10.1167\/11.5.13"},{"key":"S0263574724001784_ref16","first-page":"1","volume-title":"Macular Degeneration","author":"Hendrickson","year":"2005"},{"key":"S0263574724001784_ref29","doi-asserted-by":"crossref","unstructured":"[29] Shi, Y. , Venkatram, P. , Ding, Y. and Ooi, W. T. , \u201cEnabling Low Bit-Rate mpeg v-pcc Encoded Volumetric Video Streaming with 3D Sub-Sampling,\u201d In: Proceedings of the 14th Conference on ACM Multimedia Systems, New York, NY, USA (2023) pp. 108\u2013118.","DOI":"10.1145\/3587819.3590981"},{"key":"S0263574724001784_ref19","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574716000631"},{"key":"S0263574724001784_ref39","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356557"},{"key":"S0263574724001784_ref32","volume-title":"The Psychology and Pedagogy of Reading: With a Review of the History of Reading and Writing and of Methods, Texts, and Hygiene in Reading","author":"Huey","year":"1968"},{"key":"S0263574724001784_ref15","doi-asserted-by":"crossref","unstructured":"[15] Orts-Escolano, S. , Rhemann, C. , Fanello, S. , Chang, W. , Kowdle, A. , Degtyarev, Y. , Kim, D. , Davidson, P. L. , Khamis, S. , Dou, M. , Tankovich, V. , Loop, C. , Cai, Q. , Chou, P. A. , Mennicken, S. , Valentin, J. , Pradeep, V. , Wang, S. , Kang, S. B. , Kohli, P. , Lutchyn, Y. , Keskin, C. and Izadi, S. . 
\u201cHoloportation: Virtual 3D Teleportation in Real-Time,\u201d In: 29th Annual Symposium on User Interface Software and Technology (UIST), New York, NY, USA, Association for Computing Machinery (2016) pp. 741\u2013754.","DOI":"10.1145\/2984511.2984517"},{"key":"S0263574724001784_ref20","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2016.2580425"},{"key":"S0263574724001784_ref34","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4899-5379-7"},{"key":"S0263574724001784_ref10","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2899231"}],"container-title":["Robotica"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S0263574724001784","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T07:52:08Z","timestamp":1755589928000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S0263574724001784\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,30]]},"references-count":60,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["S0263574724001784"],"URL":"https:\/\/doi.org\/10.1017\/s0263574724001784","relation":{},"ISSN":["0263-5747","1469-8668"],"issn-type":[{"value":"0263-5747","type":"print"},{"value":"1469-8668","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,30]]}}}