{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T00:18:04Z","timestamp":1773447484988,"version":"3.50.1"},"reference-count":40,"publisher":"Association for Computing Machinery (ACM)","issue":"ETRA","license":[{"start":{"date-parts":[[2023,5,17]],"date-time":"2023-05-17T00:00:00Z","timestamp":1684281600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2023,5,17]]},"abstract":"<jats:p>Enabling gaze interaction in real-time on handheld mobile devices has attracted significant attention in recent years. An increasing number of research projects have focused on sophisticated appearance-based deep learning models to enhance the precision of gaze estimation on smartphones. This inspires important research questions, including how the gaze can be used in a real-time application, and what type of gaze interaction methods are preferable under dynamic conditions in terms of both user acceptance and delivering reliable performance. To address these questions, we design four types of gaze scrolling techniques: three explicit technique based on Gaze Gesture, Dwell time, and Pursuit; and one implicit technique based on reading speed to support touch-free, page-scrolling on a reading application. We conduct a 20-participant user study under both sitting and walking settings and our results reveal that Gaze Gesture and Dwell time-based interfaces are more robust while walking and Gaze Gesture has achieved consistently good scores on usability while not causing high cognitive workload.<\/jats:p>","DOI":"10.1145\/3591127","type":"journal-article","created":{"date-parts":[[2023,5,18]],"date-time":"2023-05-18T20:21:03Z","timestamp":1684441263000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["DynamicRead: Exploring Robust Gaze Interaction Methods for Reading on Handheld Mobile Devices under Dynamic Conditions"],"prefix":"10.1145","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0697-7942","authenticated-orcid":false,"given":"Yaxiong","family":"Lei","sequence":"first","affiliation":[{"name":"University of St Andrews, St Andrews, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3335-8706","authenticated-orcid":false,"given":"Yuheng","family":"Wang","sequence":"additional","affiliation":[{"name":"University of St Andrews, St-Andrews, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-0158-2563","authenticated-orcid":false,"given":"Tyler","family":"Caslin","sequence":"additional","affiliation":[{"name":"University of St Andrews, St Andrews, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-6660-6512","authenticated-orcid":false,"given":"Alexander","family":"Wisowaty","sequence":"additional","affiliation":[{"name":"University of St Andrews, St Andrews, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2801-3271","authenticated-orcid":false,"given":"Xu","family":"Zhu","sequence":"additional","affiliation":[{"name":"University of St Andrews, St Andrews, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7051-5200","authenticated-orcid":false,"given":"Mohamed","family":"Khamis","sequence":"additional","affiliation":[{"name":"University of 
Glasgow, Glasgow, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2838-6836","authenticated-orcid":false,"given":"Juan","family":"Ye","sequence":"additional","affiliation":[{"name":"University of St Andrews, St Andrews, United Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2023,5,18]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"state of mobile","author":"Annie App","year":"2022","unstructured":"App Annie. 2022. state of mobile 2022. https:\/\/www.data.ai\/en\/go\/state-of-mobile-2022\/"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR48806.2021.9412205"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","unstructured":"John Brooke et al. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry 189 194 (1996) 4--7. https:\/\/doi.org\/10.1201\/9781498710411--35","DOI":"10.1201\/9781498710411--35"},{"key":"e_1_2_2_4_1","volume-title":"Shi","author":"Chen Zhaokang","year":"2019","unstructured":"Zhaokang Chen and Bertram E. Shi. 2019. Appearance-Based Gaze Estimation Using Dilated-Convolutions. In Computer Vision -- ACCV 2018, C.V. Jawahar, Hongdong Li, Greg Mori, and Konrad Schindler (Eds.). Springer International Publishing, Cham, 309--324."},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i07.6636"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR56361.2022.9956687"},{"key":"e_1_2_2_7_1","volume-title":"Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark. arXiv preprint abs\/2104.12668","author":"Cheng Yihua","year":"2021","unstructured":"Yihua Cheng, Haofei Wang, Yiwei Bao, and Feng Lu. 2021. Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark. arXiv preprint abs\/2104.12668 (2021)."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1080\/17470210802168583"},{"key":"e_1_2_2_9_1","volume-title":"Automatic Gaze Analysis: A Survey of Deep Learning Based Approaches. arXiv preprint abs\/2108.05479","author":"Ghosh Shreya","year":"2021","unstructured":"Shreya Ghosh, Abhinav Dhall, Munawar Hayat, Jarrod Knibbe, and Qiang Ji. 2021. Automatic Gaze Analysis: A Survey of Deep Learning Based Approaches. arXiv preprint abs\/2108.05479 (2021)."},{"key":"e_1_2_2_10_1","unstructured":"Google. 2022. Flutter dev. https:\/\/flutter.dev"},{"key":"e_1_2_2_11_1","unstructured":"Google. 2022. ML Kit. https:\/\/developers.google.com\/ml-kit"},{"key":"e_1_2_2_12_1","volume-title":"Jae-Joon Han, and Changkyu Choi.","author":"Guo Tianchu","year":"2019","unstructured":"Tianchu Guo, Yongchao Liu, Hui Zhang, Xiabing Liu, Youngjun Kwak, Byung In Yoo, Jae-Joon Han, and Changkyu Choi. 2019. A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone. arXiv:1910.07331 [cs.CV]"},{"key":"e_1_2_2_13_1","unstructured":"Sandra G Hart. 1986. NASA task load index (TLX). 
(1986)."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1037\/e577632012-009"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.3389\/fpsyg.2013.00277"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1080\/00222890109603151"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/123078.128728"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1523\/JNEUROSCI.5570-08.2009"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3229434.3229452"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173854"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3453988"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.239"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1294211.1294249"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2018.1455307"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1518758"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376479"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025517"},{"key":"e_1_2_2_28_1","volume-title":"Proceedings of British Machine Vision Conference (BMVC). The British Machine Vision Association(BMVA)","author":"Palmero Cristina","year":"2018","unstructured":"Cristina Palmero, Javier Selva, Mohammad Ali Bagheri, and Sergio Escalera. 2018. Recurrent cnn for 3d gaze estimation using appearance and shape cues. In Proceedings of British Machine Vision Conference (BMVC). The British Machine Vision Association(BMVA), Northumbria, UK."},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2639189.2639242"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2509315.2509319"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--981--19--3747--7_6"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","unstructured":"Jayson Turner Shamsi Iqbal and Susan Dumais. 2015. Understanding Gaze and Scrolling Strategies in Text Consumption Tasks. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (Osaka Japan) (UbiComp\/ISWC'15 Adjunct). Association for Computing Machinery NewYork NY USA 829--838. https:\/\/doi.org\/10.1145\/2800835.2804331","DOI":"10.1145\/2800835.2804331"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","unstructured":"Nachiappan Valliappan Na Dai Ethan Steinberg Junfeng He Kantwon Rogers Venky Ramachandran Pingmei Xu Mina Shojaeizadeh Li Guo Kai Kohlhoff et al. 2020. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature communications 11 1 (2020) 1--12. 
https:\/\/doi.org\/10.1038\/s41467-020--18360--5","DOI":"10.1038\/s41467-020--18360--5"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.humov.2020.102616"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3204493.3204556"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978--3-030--85607--6_50"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.2196\/ijmr.2402"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174198"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300646"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511103"}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3591127","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3591127","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:30Z","timestamp":1750178250000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3591127"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,17]]},"references-count":40,"journal-issue":{"issue":"ETRA","published-print":{"date-parts":[[2023,5,17]]}},"alternative-id":["10.1145\/3591127"],"URL":"https:\/\/doi.org\/10.1145\/3591127","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,17]]},"assertion":[{"value":"2023-05-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
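A minimal sketch (not part of the record above) of how a work record in this envelope can be retrieved and its basic bibliographic fields read back out of the "message" object. It assumes the public Crossref REST API endpoint https://api.crossref.org/works/{DOI} and the third-party Python requests library; the DOI is taken from the record's "DOI" field.

import requests

# DOI taken from the record's "DOI" field.
DOI = "10.1145/3591127"

# Crossref returns the same envelope as above:
# {"status": "ok", "message-type": "work", ..., "message": {...}}
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# Pull a citation-style summary from the structured fields.
title = work["title"][0]
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work.get("author", []))
print(f"{authors}. {title}. {work['container-title'][0]}, issue {work['issue']}.")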