{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T07:03:32Z","timestamp":1771484612265,"version":"3.50.1"},"reference-count":280,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,9,15]],"date-time":"2023-09-15T00:00:00Z","timestamp":1694736000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,2,29]]},"abstract":"<jats:p>In recent years, we have witnessed an increasing number of interactive systems on handheld mobile devices which utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, higher resolution and capacity of their cameras, and improved gaze estimation accuracy obtained from advanced machine learning techniques, especially in deep learning. As the literature is fast progressing, there is a pressing need to review the state-of-the-art, delineate the boundary, and identify the key research challenges and opportunities in gaze estimation and interaction. 
This article aims to serve this purpose by presenting an end-to-end holistic view in this area, from gaze capturing sensors, to gaze estimation workflows, to deep learning techniques, and to gaze interactive applications.<\/jats:p>\n          <jats:p\/>","DOI":"10.1145\/3606947","type":"journal-article","created":{"date-parts":[[2023,6,30]],"date-time":"2023-06-30T11:57:49Z","timestamp":1688126269000},"page":"1-38","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["An End-to-End Review of Gaze Estimation and its Interactive Applications on Handheld Mobile Devices"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0697-7942","authenticated-orcid":false,"given":"Yaxiong","family":"Lei","sequence":"first","affiliation":[{"name":"University of St Andrews, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3697-0706","authenticated-orcid":false,"given":"Shijing","family":"He","sequence":"additional","affiliation":[{"name":"King\u2019s College London, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7051-5200","authenticated-orcid":false,"given":"Mohamed","family":"Khamis","sequence":"additional","affiliation":[{"name":"University of Glasgow, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2838-6836","authenticated-orcid":false,"given":"Juan","family":"Ye","sequence":"additional","affiliation":[{"name":"University of St Andrews, UK"}]}],"member":"320","published-online":{"date-parts":[[2023,9,15]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"crossref","unstructured":"Ahmed A. Abdelrahman Thorsten Hempel Aly Khalifa and Ayoub Al-Hamadi. 2022. L2CS-Net: Fine-Grained gaze estimation in unconstrained environments. arXiv:2203.03339. 
Retrieved from https:\/\/arxiv.org\/abs\/2203.03339.","DOI":"10.1109\/ICFSP59764.2023.10372944"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.3389\/fneur.2018.00144"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2013.6738869"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0085701"},{"key":"e_1_3_1_6_2","unstructured":"Rishi Athavale Lakshmi Sritan Motati and Rohan Kalahasty. 2022. One eye is all you need: Lightweight ensembles for gaze estimation with single encoders. arXiv:2211.11936. Retrieved from https:\/\/arxiv.org\/abs\/2211.11936."},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neuroscience.2005.08.023"},{"key":"e_1_3_1_8_2","first-page":"1","article-title":"Understanding the role of proximity and eye gaze in human\u2013computer interaction for individuals with autism","volume":"5","author":"Babu Pradeep Raj Krishnappa","year":"2019","unstructured":"Pradeep Raj Krishnappa Babu and Uttama Lahiri. 2019. Understanding the role of proximity and eye gaze in human\u2013computer interaction for individuals with autism. Journal of Ambient Intelligence and Humanized Computing 5 (2019), 1\u201315.","journal-title":"Journal of Ambient Intelligence and Humanized Computing"},{"key":"e_1_3_1_9_2","first-page":"10","volume-title":"ETRA\u201920","author":"Bace Mihai","year":"2020","unstructured":"Mihai Bace, Vincent Becker, Chenyang Wang, and Andreas Bulling. 2020. Combining gaze estimation and optical flow for pursuits interaction. In ETRA\u201920. ACM, 10 pages."},{"key":"e_1_3_1_10_2","first-page":"5","volume-title":"SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications","author":"B\u00e2ce Mihai","year":"2016","unstructured":"Mihai B\u00e2ce, Teemu Lepp\u00e4nen, David Gil de Gomez, and Argenis Ramirez Gomez. 2016. UbiGaze: Ubiquitous augmented reality messaging using gaze gestures. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications. 
ACM, 5 pages."},{"key":"e_1_3_1_11_2","first-page":"21","article-title":"PrivacyScout: Assessing vulnerability to shoulder surfing on mobile devices","volume":"1","author":"B\u00e2ce Mihai","year":"2022","unstructured":"Mihai B\u00e2ce, Alia Saad, Mohamed Khamis, Stefan Schneegass, and Andreas Bulling. 2022. PrivacyScout: Assessing vulnerability to shoulder surfing on mobile devices. Proceedings on Privacy Enhancing Technologies 1, 3 (2022), 21.","journal-title":"Proceedings on Privacy Enhancing Technologies"},{"key":"e_1_3_1_12_2","volume-title":"Onnx: Open Neural Network Exchange","author":"others Bai, Junjie and Lu, Fang and Zhang, Ke and","year":"2019","unstructured":"Bai, Junjie and Lu, Fang and Zhang, Ke and others. 2019. Onnx: Open Neural Network Exchange. Github. Retrieved from https:\/\/github.com\/onnx\/onnx. Accessed 9-4-2023."},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2018.00019"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3171416"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR48806.2021.9412205"},{"key":"e_1_3_1_16_2","first-page":"10","volume-title":"ETRA\u201918","author":"Barz Michael","year":"2018","unstructured":"Michael Barz, Florian Daiber, Daniel Sonntag, and Andreas Bulling. 2018. Error-aware gaze-based interfaces for robust mobile gaze interaction. In ETRA\u201918. ACM, 10 pages."},{"key":"e_1_3_1_17_2","first-page":"8","volume-title":"ETRA\u201920","author":"Barz Michael","year":"2020","unstructured":"Michael Barz, Sven Stauden, and Daniel Sonntag. 2020. Visual search target inference in natural interaction settings with machine learning. In ETRA\u201920. 
ACM, 8 pages."},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSMC.1998.727531"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/2134203.2134205"},{"key":"e_1_3_1_20_2","first-page":"77","volume-title":"Conference on Fairness, Accountability and Transparency","author":"Buolamwini Joy","year":"2018","unstructured":"Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. PMLR, 77\u201391."},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1523\/JNEUROSCI.2178-18.2019"},{"key":"e_1_3_1_22_2","doi-asserted-by":"crossref","unstructured":"Mihai B\u00e2ce Sander Staal and Andreas Bulling. 2019. Accurate and robust eye contact detection during everyday mobile device interactions. arXiv:1907.11115. Retrieved from https:\/\/arxiv.org\/abs\/1907.11115.","DOI":"10.1145\/3313831.3376449"},{"key":"e_1_3_1_23_2","first-page":"201","volume-title":"Biometric Recognition","author":"Cai Lijun","year":"2015","unstructured":"Lijun Cai, Lei Huang, and Changping Liu. 2015. Person-specific face spoofing detection for replay attack based on gaze estimation. In Biometric Recognition. Springer, 201\u2013211."},{"key":"e_1_3_1_24_2","first-page":"3415","volume-title":"CHI\u201916","author":"Carter Marcus","year":"2016","unstructured":"Marcus Carter, Eduardo Velloso, John Downs, Abigail Sellen, Kenton O\u2019Hara, and Frank Vetere. 2016. PathSync: Multi-user gestural interaction with touchless rhythmic path mimicry. In CHI\u201916. 
ACM, 3415\u20133427."},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/2578153.2583042"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP39728.2021.9414624"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-20876-9_20"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.vrih.2021.10.003"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i1.19921"},{"key":"e_1_3_1_30_2","doi-asserted-by":"crossref","unstructured":"Yihua Cheng and Feng Lu. 2022. Gaze estimation using transformer. In 26th International Conference on Pattern Recognition (ICPR\u201922) IEEE 3341\u20133347.","DOI":"10.1109\/ICPR56361.2022.9956687"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_7"},{"key":"e_1_3_1_32_2","unstructured":"Yihua Cheng Haofei Wang Yiwei Bao and Feng Lu. 2021. Appearance-based gaze estimation with deep learning: A review and benchmark. arXiv:2104.12668. Retrieved from https:\/\/arxiv.org\/abs\/2104.12668."},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00464-011-2143-x"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00147"},{"key":"e_1_3_1_35_2","first-page":"1","volume-title":"CHI\u201920","author":"Creed Chris","year":"2020","unstructured":"Chris Creed, Maite Frutos-Pascual, and Ian Williams. 2020. Multimodal gaze interaction for creative design. In CHI\u201920. ACM, 1\u201313."},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2017.8296894"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-013-0422-2"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2019.2915540"},{"key":"e_1_3_1_39_2","unstructured":"Samuel Forbes Jacob Dink and Brock Ferguson. 2021. eyetrackingR. R package version 0.2.0. 
http:\/\/www.eyetracking-r.com\/."},{"key":"e_1_3_1_40_2","unstructured":"Alexey Dosovitskiy Lucas Beyer Alexander Kolesnikov Dirk Weissenborn Xiaohua Zhai Thomas Unterthiner Mostafa Dehghani Matthias Minderer Georg Heigold Sylvain Gelly Jakob Uszkoreit and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv:2010.11929. Retrieved from https:\/\/arxiv.org\/abs\/2010.11929."},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/1378063.1378122"},{"key":"e_1_3_1_42_2","first-page":"10","volume-title":"MUM\u201919","author":"Drewes Heiko","year":"2019","unstructured":"Heiko Drewes, Mohamed Khamis, and Florian Alt. 2019. DialPlates: Enabling pursuits-based user interfaces with large target numbers. In MUM\u201919. ACM, 10 pages."},{"key":"e_1_3_1_43_2","first-page":"8","volume-title":"ETRA\u201919","author":"Drewes Heiko","year":"2019","unstructured":"Heiko Drewes, Ken Pfeuffer, and Florian Alt. 2019. Time- and space-efficient eye tracker calibration. In ETRA\u201919. ACM, 8 pages."},{"key":"e_1_3_1_44_2","doi-asserted-by":"crossref","first-page":"475","DOI":"10.1007\/978-3-540-74800-7_43","volume-title":"Human-Computer Interaction \u2013 INTERACT 2007","author":"Drewes Heiko","year":"2007","unstructured":"Heiko Drewes and Albrecht Schmidt. 2007. Interacting with the computer using gaze gestures. In Human-Computer Interaction \u2013 INTERACT 2007. C\u00e9cilia Baranauskas, Philippe Palanque, Julio Abascal, and Simone Diniz Junqueira Barbosa (Eds.), Springer, Berlin, 475\u2013488."},{"key":"e_1_3_1_45_2","unstructured":"Lingyu Du and Guohao Lan. 2022. FreeGaze: Resource-efficient gaze estimation via frequency domain contrastive learning. arXiv:2209.06692. 
Retrieved from https:\/\/arxiv.org\/abs\/2209.06692."},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8851961"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cag.2018.04.002"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57883-5"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/2168556.2168601"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2016.2520093"},{"key":"e_1_3_1_51_2","doi-asserted-by":"crossref","first-page":"1581","DOI":"10.1109\/SSCI47803.2020.9308238","volume-title":"2020 IEEE Symposium Series on Computational Intelligence (SSCI\u201920)","author":"Elbattah Mahmoud","year":"2020","unstructured":"Mahmoud Elbattah, Jean-Luc Gu\u00e9rin, Romuald Carette, Federica Cilia, and Gilles Dequen. 2020. NLP-based approach to detect autism spectrum disorder in saccadic eye movement. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI\u201920). IEEE, 1581\u20131587."},{"key":"e_1_3_1_52_2","first-page":"7","volume-title":"CHI EA\u201921","author":"Elmadjian Carlos","year":"2021","unstructured":"Carlos Elmadjian and Carlos H. Morimoto. 2021. GazeBar: Exploiting the midas touch in gaze interaction. In CHI EA\u201921. ACM, 7 pages."},{"key":"e_1_3_1_53_2","first-page":"1","volume-title":"ETRA\u201921","author":"Emery Kara J.","year":"2021","unstructured":"Kara J. Emery, Marina Zannoli, James Warren, Lei Xiao, and Sachin S. Talathi. 2021. OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments. In ETRA\u201921. 1\u20137."},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0042-6989(03)00084-1"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/2807442.2807499"},{"key":"e_1_3_1_56_2","volume-title":"EyeOn Air \u2013 Eye Tracking Communication Aid","year":"2023","unstructured":"EyeTech. 2023. 
EyeOn Air \u2013 Eye Tracking Communication Aid. eyetechds. Retrieved 2023-03-10 from https:\/\/eyetechds.com\/eyeon-air\/."},{"key":"e_1_3_1_57_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-017-0857-y"},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1037\/xhp0000743"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373625.3416987"},{"key":"e_1_3_1_60_2","first-page":"1","volume-title":"ACM Symposium on Eye Tracking Research and Applications","author":"Feit Anna Maria","year":"2020","unstructured":"Anna Maria Feit, Lukas Vordemann, Seonwook Park, Caterina B\u00e9rub\u00e9, and Otmar Hilliges. 2020. Detecting relevance during decision-making from eye movements for UI adaptation. In ACM Symposium on Eye Tracking Research and Applications. Association for Computing Machinery, 1\u201311."},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_33"},{"key":"e_1_3_1_62_2","unstructured":"Paul Festor Ali Shafti Alex Harston Michey Li Pavel Orlov and A. Aldo Faisal. 2022. MIDAS: Deep learning human action intention prediction from natural eye movement patterns. arXiv:2201.09135. Retrieved from https:\/\/arxiv.org\/abs\/2201.09135."},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01249-6_21"},{"key":"e_1_3_1_64_2","first-page":"5","volume-title":"ETRA\u201916","author":"Fuhl Wolfgang","year":"2018","unstructured":"Wolfgang Fuhl, Shahram Eivazi, Benedikt Hosp, Anna Eivazi, Wolfgang Rosenstiel, and Enkelejda Kasneci. 2018. BORE: Boosted-oriented edge optimization for robust, real time remote pupil center detection. In ETRA\u201916. ACM, 5 pages."},{"key":"e_1_3_1_65_2","first-page":"6","volume-title":"ETRA\u201918","author":"Fuhl Wolfgang","year":"2018","unstructured":"Wolfgang Fuhl, David Geisler, Thiago Santini, Tobias Appel, Wolfgang Rosenstiel, and Enkelejda Kasneci. 2018. CBF: Circular binary features for robust and real-time pupil center detection. 
In ETRA\u201918. ACM, 6 pages."},{"key":"e_1_3_1_66_2","doi-asserted-by":"crossref","first-page":"367","DOI":"10.1109\/ISMAR52148.2021.00053","volume-title":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR\u201921)","author":"Fuhl Wolfgang","year":"2021","unstructured":"Wolfgang Fuhl, Gjergji Kasneci, and Enkelejda Kasneci. 2021. TEyeD: Over 20 million real-world eye images with pupil, eyelid, and iris 2D and 3D segmentations, 2D and 3D landmarks, 3D eyeball, gaze vector, and eye movement types. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR\u201921). IEEE, 367\u2013375."},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-23192-1_4"},{"key":"e_1_3_1_68_2","unstructured":"Wolfgang Fuhl Thiago Santini Gjergji Kasneci Wolfgang Rosenstiel and Enkelejda Kasneci. 2017. PupilNet v2.0: Convolutional neural networks for CPU based real time robust pupil detection. arXiv:1711.00112. Retrieved from https:\/\/arxiv.org\/abs\/1711.00112."},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857505"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnsys.2013.00004"},{"key":"e_1_3_1_71_2","doi-asserted-by":"publisher","DOI":"10.1145\/2578153.2578190"},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-020-01374-8"},{"key":"e_1_3_1_73_2","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1007\/978-3-030-58465-8_5","volume-title":"Augmented Reality, Virtual Reality, and Computer Graphics","author":"George Ceenu","year":"2020","unstructured":"Ceenu George, Daniel Buschek, Andrea Ngao, and Mohamed Khamis. 2020. GazeRoomLock: Using gaze and head-pose to improve the usability and observation resistance of 3D passwords in virtual reality. In Augmented Reality, Virtual Reality, and Computer Graphics. 
Lucio Tommaso De Paolis and Patrick Bourdot (Eds.), Springer, 61\u201381."},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-020-01392-6"},{"key":"e_1_3_1_75_2","unstructured":"Shreya Ghosh Abhinav Dhall Munawar Hayat Jarrod Knibbe and Qiang Ji. 2022. Automatic gaze analysis: A survey of deep learning based approaches. arXiv:2108.05479. Retrieved from https:\/\/arxiv.org\/abs\/2108.05479."},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03195488"},{"key":"e_1_3_1_77_2","volume-title":"Eye Tracking Market Size and Share Report, 2022\u20132030","author":"Research Grand View","year":"2022","unstructured":"Grand View Research. 2022. Eye Tracking Market Size and Share Report, 2022\u20132030. grandviewresearch. Retrieved from https:\/\/www.grandviewresearch.com\/industry-analysis\/eye-tracking-market. Accessed 05-04-2023."},{"issue":"8","key":"e_1_3_1_78_2","first-page":"33","article-title":"Eye-tracking technologies in mobile devices using edge computing: A systematic review","volume":"55","author":"Gunawardena Nishan","year":"2022","unstructured":"Nishan Gunawardena, Jeewani Anupama Ginige, and Bahman Javadi. 2022. Eye-tracking technologies in mobile devices using edge computing: A systematic review. ACM Computing Surveys 55, 8, (2022), 33 pages.","journal-title":"ACM Computing Surveys"},{"key":"e_1_3_1_79_2","unstructured":"Tianchu Guo Yongchao Liu Hui Zhang Xiabing Liu Youngjun Kwak Byung In Yoo Jae-Joon Han and Changkyu Choi. 2019. A generalized and robust method towards practical gaze estimation on smart phone. In Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops 1131\u20131139."},{"key":"e_1_3_1_80_2","first-page":"292","volume-title":"the Asian Conference on Computer Vision","author":"Guo Zidong","year":"2020","unstructured":"Zidong Guo, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, and Shenghao Zhang. 2020. 
Domain adaptation gaze estimation by embedding with prediction consistency. In the Asian Conference on Computer Vision. Springer, 292\u2013307."},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857514"},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compmedimag.2017.04.006"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00146"},{"key":"e_1_3_1_84_2","first-page":"770","volume-title":"CVPR\u201916","author":"He Kaiming","year":"2016","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR\u201916. IEEE, 770\u2013778."},{"key":"e_1_3_1_85_2","first-page":"418","volume-title":"Image Analysis","author":"He Qiuhai","year":"2015","unstructured":"Qiuhai He, Xiaopeng Hong, Xiujuan Chai, Jukka Holappa, Guoying Zhao, Xilin Chen, and Matti Pietik\u00e4inen. 2015. OMEG: Oulu multi-pose eye gaze dataset. In Image Analysis. Rasmus R. Paulsen and Kim S. Pedersen (Eds.), Springer, 418\u2013427."},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3472749.3474785"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1145\/2168556.2168579"},{"key":"e_1_3_1_88_2","doi-asserted-by":"crossref","unstructured":"Oliver Hein and Wolfgang Zangemeister. 2017. Topology for Gaze Analyses-Raw Data Segmentation. Retrieved April 19 2023 from https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC7141061\/.","DOI":"10.16910\/jemr.10.1.1"},{"key":"e_1_3_1_89_2","doi-asserted-by":"publisher","DOI":"10.3390\/ijerph17051639"},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.1111\/infa.12093"},{"key":"e_1_3_1_91_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-015-0676-y"},{"key":"e_1_3_1_92_2","first-page":"12","volume-title":"ETRA\u201920","author":"Hirzle Teresa","year":"2020","unstructured":"Teresa Hirzle, Maurice Cordts, Enrico Rukzio, and Andreas Bulling. 2020. 
A survey of digital eye strain in gaze-based interactive systems. In ETRA\u201920. ACM, 12 pages."},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.3389\/fnhum.2018.00105"},{"key":"e_1_3_1_95_2","first-page":"4700","volume-title":"CVPR\u201917","author":"Huang Gao","year":"2017","unstructured":"Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In CVPR\u201917. IEEE, 4700\u20134708."},{"key":"e_1_3_1_96_2","first-page":"1","volume-title":"Workshop on Faces in \u2019Real-Life\u2019 Images: Detection, Alignment, and Recognition","author":"Huang Gary B.","year":"2008","unstructured":"Gary B. Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller. 2008. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in \u2019Real-Life\u2019 Images: Detection, Alignment, and Recognition. Erik Learned-Miller and Andras Ferencz and Fr\u00e9d\u00e9ric Jurie, Inria, 1\u201314. Retrieved from https:\/\/hal.inria.fr\/inria-00321923."},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9747911"},{"key":"e_1_3_1_98_2","first-page":"10","volume-title":"ETRA\u201919","author":"Huang Michael Xuelin","year":"2019","unstructured":"Michael Xuelin Huang and Andreas Bulling. 2019. SacCalib: Reducing calibration distortion for stationary eye trackers using saccadic eye movements. In ETRA\u201919.
ACM, 10 pages."},{"key":"e_1_3_1_99_2","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858404"},{"key":"e_1_3_1_100_2","doi-asserted-by":"publisher","DOI":"10.1145\/2964284.2964318"},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025794"},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00138-017-0852-4"},{"issue":"4","key":"e_1_3_1_103_2","first-page":"1","article-title":"iMon: Appearance-based gaze tracking system on mobile devices","volume":"5","author":"Huynh Sinh","year":"2021","unstructured":"Sinh Huynh, Rajesh Krishna Balan, and JeongGil Ko. 2021. iMon: Appearance-based gaze tracking system on mobile devices. IMWUT 5, 4 (2021), 1\u201326.","journal-title":"IMWUT"},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.1080\/0144929X.2020.1813330"},{"key":"e_1_3_1_105_2","doi-asserted-by":"publisher","DOI":"10.1145\/123078.128728"},{"key":"e_1_3_1_106_2","first-page":"7","volume-title":"CHI EA\u201922","author":"Jannat Marium-E","year":"2022","unstructured":"Marium-E Jannat, Thuan T. Vo, and Khalad Hasan. 2022. Face-centered spatial user interfaces on smartwatches. In CHI EA\u201922. ACM, 7 pages."},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIV.2022.3141071"},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2021.106992"},{"key":"e_1_3_1_109_2","first-page":"1","volume-title":"CHI\u201920","author":"Jiang Xinhui","year":"2020","unstructured":"Xinhui Jiang, Yang Li, Jussi P.P. Jokinen, Viet Ba Hirvola, Antti Oulasvirta, and Xiangshi Ren. 2020. How we type: Eye and finger movement strategies in mobile typing. In CHI\u201920. ACM, 1\u201314."},{"key":"e_1_3_1_110_2","unstructured":"Swati Jindal and Roberto Manduchi. 2023. Contrastive representation learning for gaze estimation. 
In Annual Conference on Neural Information Processing Systems PMLR 37\u201349."},{"key":"e_1_3_1_111_2","first-page":"10","volume-title":"ETRA\u201918","author":"Jungwirth Florian","year":"2018","unstructured":"Florian Jungwirth, Michael Haslgr\u00fcbler, and Alois Ferscha. 2018. Contour-guided gaze gestures: Using object contours as visual guidance for triggering interactions. In ETRA\u201918. ACM, 10 pages."},{"key":"e_1_3_1_112_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2017.2735633"},{"key":"e_1_3_1_113_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394974"},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/355017.355030"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857511"},{"key":"e_1_3_1_116_2","first-page":"1151","volume-title":"UbiComp\u201914 Adjunct","author":"Kassner Moritz","year":"2014","unstructured":"Moritz Kassner, William Patera, and Andreas Bulling. 2014. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In UbiComp\u201914 Adjunct. ACM, 1151\u20131160."},{"key":"e_1_3_1_117_2","first-page":"1","volume-title":"CHI\u201920","author":"Katsini Christina","year":"2020","unstructured":"Christina Katsini, Yasmeen Abdrabou, George E. Raptis, Mohamed Khamis, and Florian Alt. 2020. The role of eye gaze in security and privacy applications: Survey and future HCI research directions. In CHI\u201920. ACM, 1\u201321."},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00701"},{"key":"e_1_3_1_119_2","first-page":"17","volume-title":"MobileHCI\u201918","author":"Khamis Mohamed","year":"2018","unstructured":"Mohamed Khamis, Florian Alt, and Andreas Bulling. 2018. The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned. In MobileHCI\u201918. 
ACM, 17 pages."},{"key":"e_1_3_1_120_2","first-page":"1","volume-title":"CHI\u201918","author":"Khamis Mohamed","year":"2018","unstructured":"Mohamed Khamis, Anita Baier, Niels Henze, Florian Alt, and Andreas Bulling. 2018. Understanding face and eye visibility in front-facing cameras of smartphones used in the wild. In CHI\u201918. ACM, 1\u201312."},{"key":"e_1_3_1_121_2","first-page":"9","volume-title":"PerDis\u201917","author":"Khamis Mohamed","year":"2017","unstructured":"Mohamed Khamis, Regina Hasholzner, Andreas Bulling, and Florian Alt. 2017. GTmoPass: Two-factor authentication on public displays using gaze-touch passwords and personal mobile devices. In PerDis\u201917. ACM, 9 pages."},{"key":"e_1_3_1_122_2","doi-asserted-by":"publisher","DOI":"10.1145\/3136755.3136809"},{"key":"e_1_3_1_123_2","doi-asserted-by":"publisher","DOI":"10.1080\/0144929X.2022.2069597"},{"key":"e_1_3_1_124_2","doi-asserted-by":"publisher","DOI":"10.1145\/3453988"},{"key":"e_1_3_1_125_2","first-page":"1","volume-title":"CHI\u201919","author":"Kim Joohwan","year":"2019","unstructured":"Joohwan Kim, Michael Stengel, Alexander Majercik, Shalini De Mello, David Dunn, Samuli Laine, Morgan McGuire, and David Luebke. 2019. NVGaze: An anatomically-informed dataset for low-latency, near-eye gaze estimation. In CHI\u201919. ACM, 1\u201312."},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.1145\/2983323.2983720"},{"key":"e_1_3_1_127_2","doi-asserted-by":"publisher","DOI":"10.1002\/asi.23628"},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2019.2897758"},{"key":"e_1_3_1_129_2","doi-asserted-by":"publisher","DOI":"10.5555\/1577069.1755843"},{"key":"e_1_3_1_130_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03207917"},{"key":"e_1_3_1_131_2","volume-title":"Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry","author":"Koch Christof","year":"1987","unstructured":"Christof Koch and Shimon Ullman. 1987. 
Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry. Springer Netherlands."},{"key":"e_1_3_1_132_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBME.2010.2057429"},{"key":"e_1_3_1_133_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-012-0234-9"},{"key":"e_1_3_1_134_2","doi-asserted-by":"publisher","DOI":"10.1145\/3462244.3479938"},{"key":"e_1_3_1_135_2","first-page":"9980","volume-title":"CVPR\u201921","author":"Kothari Rakshit","year":"2021","unstructured":"Rakshit Kothari, Shalini De Mello, Umar Iqbal, Wonmin Byeon, Seonwook Park, and Jan Kautz. 2021. Weakly-supervised physically unconstrained gaze estimation. In CVPR\u201921. IEEE, 9980\u20139989."},{"key":"e_1_3_1_136_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-020-59251-5"},{"key":"e_1_3_1_137_2","first-page":"88","volume-title":"CVPR\u201917 Workshops","author":"Kowalski Marek","year":"2017","unstructured":"Marek Kowalski, Jacek Naruniec, and Tomasz Trzcinski. 2017. Deep alignment network: A convolutional neural network for robust face alignment. In CVPR\u201917 Workshops. IEEE, 88\u201397."},{"key":"e_1_3_1_138_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-vision-091718-014901"},{"key":"e_1_3_1_139_2","first-page":"2176","volume-title":"CVPR\u201916","author":"Krafka Kyle","year":"2016","unstructured":"Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Suchendra Bhandarkar, Wojciech Matusik, and Antonio Torralba. 2016. Eye tracking for everyone. In CVPR\u201916. IEEE, 2176\u20132184."},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.16910\/jemr.7.1.1"},{"key":"e_1_3_1_141_2","first-page":"9","volume-title":"ETRA\u201920","author":"Sharma Vinay Krishna","year":"2020","unstructured":"Vinay Krishna Sharma, Kamalpreet Saluja, Vimal Mollyn, and Pradipta Biswas. 2020. Eye gaze controlled robotic arm for persons with severe speech and motor impairment. In ETRA\u201920. 
ACM, 9 pages."},{"key":"e_1_3_1_142_2","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"e_1_3_1_143_2","doi-asserted-by":"publisher","DOI":"10.1145\/3041021.3054730"},{"key":"e_1_3_1_144_2","first-page":"2531","volume-title":"CHI EA\u201907","author":"Kumar Manu","year":"2007","unstructured":"Manu Kumar, Terry Winograd, and Andreas Paepcke. 2007. Gaze-enhanced scrolling techniques. In CHI EA\u201907. ACM, 2531\u20132536."},{"key":"e_1_3_1_145_2","doi-asserted-by":"publisher","DOI":"10.1145\/3415207"},{"key":"e_1_3_1_146_2","doi-asserted-by":"publisher","DOI":"10.1145\/2600428.2609631"},{"key":"e_1_3_1_147_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"issue":"23","key":"e_1_3_1_148_2","first-page":"17","article-title":"DynamicRead: Exploring robust gaze interaction methods for reading on handheld mobile devices under dynamic conditions","volume":"7","author":"Lei Yaxiong","year":"2023","unstructured":"Yaxiong Lei, Yuheng Wang, Tyler Caslin, Alexander Wisowaty, Xu Zhu, Mohamed Khamis, and Juan Ye. 2023. DynamicRead: Exploring robust gaze interaction methods for reading on handheld mobile devices under dynamic conditions. Proceedings of the ACM on Human-Computer Interaction 7, ETRA23(2023), 17.","journal-title":"Proceedings of the ACM on Human-Computer Interaction"},{"key":"e_1_3_1_149_2","first-page":"4","volume-title":"ETRA\u201921 (ETRA\u201921 Adjunct)","author":"Lewien Ryan","year":"2021","unstructured":"Ryan Lewien. 2021. GazeHelp: Exploring practical gaze-assisted interactions for graphic design tools. In ETRA\u201921 (ETRA\u201921 Adjunct). 
ACM, 4 pages."},{"key":"e_1_3_1_150_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-87358-5_39"},{"key":"e_1_3_1_151_2","doi-asserted-by":"publisher","DOI":"10.1145\/3041021.3054182"},{"key":"e_1_3_1_152_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICAICA50127.2020.9181854"},{"key":"e_1_3_1_153_2","doi-asserted-by":"publisher","DOI":"10.1093\/iwcomp\/iwaa002"},{"key":"e_1_3_1_154_2","first-page":"231","volume-title":"Graphics Interface 2021","author":"Li Zhi","year":"2021","unstructured":"Zhi Li, Maozheng Zhao, Yifan Wang, Sina Rashidian, Furqan Baig, Rui Liu, Wanyu Liu, Michel Beaudouin-Lafon, Brooke Ellison, Fusheng Wang, IV Ramakrishnan, and Xiaojun Bi. 2021. BayesGaze: A Bayesian approach to eye-gaze based target selection. In Graphics Interface 2021. Canadian Information Processing Society, 231\u2013240."},{"key":"e_1_3_1_155_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2865525"},{"key":"e_1_3_1_156_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33012488"},{"key":"e_1_3_1_157_2","doi-asserted-by":"publisher","DOI":"10.1145\/2638728.2641692"},{"key":"e_1_3_1_158_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.425"},{"key":"e_1_3_1_159_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10593-2_9"},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2313123"},{"key":"e_1_3_1_161_2","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1518758"},{"key":"e_1_3_1_162_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4471-6392-3_3"},{"key":"e_1_3_1_163_2","first-page":"5","volume-title":"ETRA\u201919","author":"Majaranta P\u00e4ivi","year":"2019","unstructured":"P\u00e4ivi Majaranta, Jari Laitinen, Jari Kangas, and Poika Isokoski. 2019. Inducing gaze gestures by static illustrations. In ETRA\u201919. 
ACM, 5 pages."},{"key":"e_1_3_1_164_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0165-0270(03)00151-1"},{"key":"e_1_3_1_165_2","doi-asserted-by":"publisher","DOI":"10.1038\/nrn1348"},{"key":"e_1_3_1_166_2","doi-asserted-by":"publisher","DOI":"10.1038\/nrn3405"},{"key":"e_1_3_1_167_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICAR53236.2021.9659338"},{"key":"e_1_3_1_168_2","first-page":"1","volume-title":"CHI\u201920","author":"Mayer Sven","year":"2020","unstructured":"Sven Mayer, Gierad Laput, and Chris Harrison. 2020. Enhancing mobile voice assistants with WorldGaze. In CHI\u201920. ACM, 1\u201310."},{"issue":"6","key":"e_1_3_1_169_2","first-page":"220","article-title":"Schau genau! A gaze-controlled 3D game for entertainment and education","volume":"10","author":"Menges Raphael","year":"2017","unstructured":"Raphael Menges, Chandan Kumar, Ulrich Wechselberger, Christoph Schaefer, Tina Walber, and Steffen Staab. 2017. Schau genau! A gaze-controlled 3D game for entertainment and education. Journal of Eye Movement Research 10, 6 (2017), 220.","journal-title":"Journal of Eye Movement Research"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1080\/07370024.2020.1716762"},{"key":"e_1_3_1_171_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-020-80126-2"},{"key":"e_1_3_1_172_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2013.6738574"},{"key":"e_1_3_1_173_2","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025517"},{"key":"e_1_3_1_174_2","doi-asserted-by":"publisher","DOI":"10.1145\/3172944.3172969"},{"key":"e_1_3_1_175_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV56688.2023.00095"},{"key":"e_1_3_1_176_2","first-page":"17","volume-title":"CHI\u201923","author":"Namnakani Omar","year":"2023","unstructured":"Omar Namnakani, Yasmeen Abdrabou, Jonathan Grizou, Augusto Esteves, and Mohamed Khamis. 2023. Comparing dwell time, pursuits and gaze gestures for gaze interaction on handheld mobile devices. In CHI\u201923. 
ACM, 17 pages."},{"key":"e_1_3_1_177_2","doi-asserted-by":"publisher","DOI":"10.1145\/2371574.2371598"},{"key":"e_1_3_1_178_2","first-page":"1","volume-title":"CHI\u201920","author":"Newman Anelise","year":"2020","unstructured":"Anelise Newman, Barry McNamara, Camilo Fosco, Yun Bin Zhang, Pat Sukhum, Matthew Tancik, Nam Wook Kim, and Zoya Bylinskii. 2020. TurkEyes: A web-based toolbox for crowdsourcing attention data. In CHI\u201920. ACM, 1\u201313."},{"key":"e_1_3_1_179_2","first-page":"4992","volume-title":"CVPR\u201922","author":"Oh Jun O.","year":"2022","unstructured":"Jun O. Oh, Hyung Jin Chang, and Sang-Il Choi. 2022. Self-attention with convolution and deconvolution for efficient eye gaze estimation from a full face image. In CVPR\u201922. IEEE, 4992\u20135000."},{"key":"e_1_3_1_180_2","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2019.8756523"},{"key":"e_1_3_1_181_2","doi-asserted-by":"publisher","DOI":"10.1080\/10584609.2021.2000082"},{"key":"e_1_3_1_182_2","unstructured":"Cristina Palmero Javier Selva Mohammad Ali Bagheri and Sergio Escalera. 2018. Recurrent CNN for 3D gaze estimation using appearance and shape cues. arXiv:1805.03064. Retrieved from https:\/\/arxiv.org\/abs\/1805.03064."},{"key":"e_1_3_1_183_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.apergo.2022.103775"},{"key":"e_1_3_1_184_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2021.102676"},{"key":"e_1_3_1_185_2","doi-asserted-by":"publisher","DOI":"10.1109\/PERCOM50583.2021.9439113"},{"key":"e_1_3_1_186_2","doi-asserted-by":"crossref","first-page":"747","DOI":"10.1007\/978-3-030-58610-2_44","volume-title":"Computer Vision \u2013 ECCV 2020","author":"Park Seonwook","year":"2020","unstructured":"Seonwook Park, Emre Aksan, Xucong Zhang, and Otmar Hilliges. 2020. Towards end-to-end video-based eye-tracking. In Computer Vision \u2013 ECCV 2020. 
Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (Eds.), Springer, 747\u2013763."},{"key":"e_1_3_1_187_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00946"},{"key":"e_1_3_1_188_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01261-8_44"},{"key":"e_1_3_1_189_2","doi-asserted-by":"publisher","DOI":"10.5244\/C.29.41"},{"key":"e_1_3_1_190_2","doi-asserted-by":"publisher","DOI":"10.1145\/355017.355042"},{"key":"e_1_3_1_191_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357236.3395553"},{"key":"e_1_3_1_192_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.displa.2021.101997"},{"key":"e_1_3_1_193_2","first-page":"7","volume-title":"ETRA\u201921","author":"Pfeuffer Ken","year":"2021","unstructured":"Ken Pfeuffer, Jason Alexander, and Hans Gellersen. 2021. Multi-user gaze-based interaction techniques on collaborative touchscreens. In ETRA\u201921. ACM, 7 pages."},{"key":"e_1_3_1_194_2","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984514"},{"key":"e_1_3_1_195_2","first-page":"1199","volume-title":"2012 Federated Conference on Computer Science and Information Systems (FedCSIS\u201912)","author":"Pino Carmelo","year":"2012","unstructured":"Carmelo Pino and Isaak Kavasidis. 2012. Improving mobile device interaction by eye tracking analysis. In 2012 Federated Conference on Computer Science and Information Systems (FedCSIS\u201912). IEEE, 1199\u20131202."},{"key":"e_1_3_1_196_2","volume-title":"How to Position the Eye Tracker and Participant in a Study","author":"Pro Tobbi","year":"2015","unstructured":"Tobbi Pro. 2015. How to Position the Eye Tracker and Participant in a Study. Tobii AB. 
Retrieved April 19, 2023 from https:\/\/www.tobiipro.com\/learn-and-support\/learn\/steps-in-an-eye-tracking-study\/run\/how-to-position-the-participant-and-the-eye-tracker\/."},{"key":"e_1_3_1_197_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVBVS.2000.855245"},{"key":"e_1_3_1_198_2","first-page":"6","volume-title":"MobileHCI\u201919","author":"Ragozin Kirill","year":"2019","unstructured":"Kirill Ragozin, Yun Suen Pai, Olivier Augereau, Koichi Kise, Jochen Kerdels, and Kai Kunze. 2019. Private reader: Using eye tracking to improve reading privacy in public spaces. In MobileHCI\u201919. ACM, 6 pages."},{"key":"e_1_3_1_199_2","first-page":"3","volume-title":"ETRA\u201918","author":"Rajanna Vijay","year":"2018","unstructured":"Vijay Rajanna and Tracy Hammond. 2018. A gaze gesture-based paradigm for situational impairments, Accessibility, and rich interactions. In ETRA\u201918. ACM, 3 pages."},{"key":"e_1_3_1_200_2","first-page":"12","volume-title":"ETRA\u201921","author":"Gomez Argenis Ramirez Ramirez","year":"2021","unstructured":"Argenis Ramirez Ramirez Gomez, Christopher Clarke, Ludwig Sidenmark, and Hans Gellersen. 2021. Gaze+Hold: Eyes-only direct manipulation with continuous gaze modulated by closure of one eye. In ETRA\u201921. ACM, 12 pages."},{"key":"e_1_3_1_201_2","first-page":"1","volume-title":"Advances in Neural Information Processing Systems","author":"Recasens Adria","year":"2015","unstructured":"Adria Recasens, Aditya Khosla, Carl Vondrick, and Antonio Torralba. 2015. Where are they looking?. In Advances in Neural Information Processing Systems. C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (Eds.), Vol. 28. Curran Associates, Inc., 1\u20139."},{"key":"e_1_3_1_202_2","doi-asserted-by":"publisher","DOI":"10.1145\/2745555.2746644"},{"key":"e_1_3_1_203_2","first-page":"7","volume-title":"ETRA\u201919","author":"Rivu Sheikh","year":"2019","unstructured":"Sheikh Rivu, Yasmeen Abdrabou, Thomas Mayer, Ken Pfeuffer, and Florian Alt. 2019. 
GazeButton: Enhancing buttons with eye gaze interactions. In ETRA\u201919. ACM, 7 pages."},{"key":"e_1_3_1_204_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00363977"},{"key":"e_1_3_1_205_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2009.08.010"},{"key":"e_1_3_1_206_2","doi-asserted-by":"crossref","unstructured":"David Rozado Javier S. Agustin Francisco B. Rodriguez and Pablo Varona. 2012. Gliding and saccadic gaze gesture recognition in real time. ACM Transactions on Interactive Intelligent Systems 1 2 (2012) 27 pages.","DOI":"10.1145\/2070719.2070723"},{"key":"e_1_3_1_207_2","doi-asserted-by":"publisher","DOI":"10.1080\/07370024.2013.870385"},{"key":"e_1_3_1_208_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2012.02.023"},{"key":"e_1_3_1_209_2","doi-asserted-by":"publisher","DOI":"10.1038\/323533a0"},{"key":"e_1_3_1_210_2","doi-asserted-by":"publisher","DOI":"10.16910\/jemr.12.6.6"},{"key":"e_1_3_1_211_2","doi-asserted-by":"publisher","DOI":"10.1145\/355017.355028"},{"key":"e_1_3_1_212_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857512"},{"key":"e_1_3_1_213_2","first-page":"981","volume-title":"CVPR\u201915","author":"Sattar Hosnieh","year":"2015","unstructured":"Hosnieh Sattar, Sabine Muller, Mario Fritz, and Andreas Bulling. 2015. Prediction of search targets from fixations in open-world settings. In CVPR\u201915. IEEE, 981\u2013990."},{"key":"e_1_3_1_214_2","first-page":"3034","volume-title":"CHI\u201917","author":"Schenk Simon","year":"2017","unstructured":"Simon Schenk, Marc Dreiser, Gerhard Rigoll, and Michael Dorr. 2017. GazeEverywhere: Enabling gaze-only user interaction on an unmodified desktop PC in everyday scenarios. In CHI\u201917. ACM, 3034\u20133044."},{"key":"e_1_3_1_215_2","first-page":"5","volume-title":"COGAIN\u201918","author":"Schl\u00f6sser Christian","year":"2018","unstructured":"Christian Schl\u00f6sser, Benedikt Schr\u00f6der, Linda Cedli, and Andrea Kienle. 2018. 
Beyond gaze cursor: Exploring information-based gaze sharing in chat. In COGAIN\u201918. ACM, 5 pages."},{"key":"e_1_3_1_216_2","doi-asserted-by":"publisher","DOI":"10.1136\/bjo.44.2.89"},{"key":"e_1_3_1_217_2","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.709952"},{"key":"e_1_3_1_218_2","doi-asserted-by":"publisher","DOI":"10.1002\/npr2.12046"},{"key":"e_1_3_1_219_2","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. Retrieved from https:\/\/arxiv.org\/abs\/1409.1556."},{"key":"e_1_3_1_220_2","first-page":"13","volume-title":"CHI\u201919","author":"Sindhwani Shyamli","year":"2019","unstructured":"Shyamli Sindhwani, Christof Lutteroth, and Gerald Weber. 2019. ReType: Quick text editing with keyboard and gaze. In CHI\u201919. ACM, 13 pages."},{"key":"e_1_3_1_221_2","doi-asserted-by":"publisher","DOI":"10.1177\/1357633X20926819"},{"key":"e_1_3_1_222_2","volume-title":"Driver Monitoring System - Smart Eye","author":"Eye Smart","year":"2022","unstructured":"Smart Eye. 2022. Driver Monitoring System - Smart Eye. Smart Eye Co., Ltd. Retrieved from https:\/\/smarteye.se\/solutions\/automotive\/driver-monitoring-system\/. 10-2-2023."},{"key":"e_1_3_1_223_2","doi-asserted-by":"publisher","DOI":"10.1145\/2501988.2501994"},{"key":"e_1_3_1_224_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-012-0286-x"},{"key":"e_1_3_1_225_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2016.7524367"},{"key":"e_1_3_1_226_2","volume-title":"Biology: Concepts and Applications","author":"Starr Cecie","year":"2014","unstructured":"Cecie Starr, Christine Evers, and Lisa Starr. 2014. Biology: Concepts and Applications. 
Cengage Learning."},{"key":"e_1_3_1_227_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-018-1144-2"},{"key":"e_1_3_1_228_2","first-page":"13","volume-title":"MobileHCI\u201918","author":"Steil Julian","year":"2018","unstructured":"Julian Steil, Philipp M\u00fcller, Yusuke Sugano, and Andreas Bulling. 2018. Forecasting user attention during everyday mobile interactions using device-integrated and wearable sensors. In MobileHCI\u201918. ACM, 13 pages."},{"key":"e_1_3_1_229_2","first-page":"1821","volume-title":"CVPR\u201914","author":"Sugano Yusuke","year":"2014","unstructured":"Yusuke Sugano, Yasuyuki Matsushita, and Yoichi Sato. 2014. Learning-by-synthesis for appearance-based 3d gaze estimation. In CVPR\u201914. IEEE, 1821\u20131828."},{"key":"e_1_3_1_230_2","unstructured":"Tobbi. 2020. Data Quality Reports for 3 Tobii Eye Trackers - Tobii. Retrieved April 19 2023 from https:\/\/www.tobii.com\/resource-center\/data-quality#cta-section."},{"key":"e_1_3_1_231_2","first-page":"3125","volume-title":"CVPR\u201921","author":"Tomas Henri","year":"2021","unstructured":"Henri Tomas, Marcus Reyes, Raimarc Dionido, Mark Ty, Jonric Mirando, Joel Casimiro, Rowel Atienza, and Richard Guinto. 2021. Goo: A dataset for gaze object prediction in retail environments. In CVPR\u201921. IEEE, 3125\u20133133."},{"issue":"3","key":"e_1_3_1_232_2","first-page":"1","article-title":"Invisibleeye: Mobile eye tracking using multiple low-resolution cameras and learning-based gaze estimation","volume":"1","author":"Tonsen Marc","year":"2017","unstructured":"Marc Tonsen, Julian Steil, Yusuke Sugano, and Andreas Bulling. 2017. Invisibleeye: Mobile eye tracking using multiple low-resolution cameras and learning-based gaze estimation. 
IMWUT 1, 3 (2017), 1\u201321.","journal-title":"IMWUT"},{"key":"e_1_3_1_233_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-020-18360-5"},{"key":"e_1_3_1_234_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-017-0909-3"},{"key":"e_1_3_1_235_2","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf."},{"key":"e_1_3_1_236_2","doi-asserted-by":"publisher","DOI":"10.1145\/3064937"},{"key":"e_1_3_1_237_2","first-page":"8","volume-title":"ETRA\u201919","author":"Venuprasad Pranav","year":"2019","unstructured":"Pranav Venuprasad, Tushal Dobhal, Anurag Paul, Tu N. M. Nguyen, Andrew Gilman, Pamela Cosman, and Leanne Chukoskie. 2019. Characterizing joint attention behavior during real world interactions using automated object and gaze detection. In ETRA\u201919. ACM, 8 pages."},{"key":"e_1_3_1_238_2","doi-asserted-by":"publisher","DOI":"10.1080\/14626260500476523"},{"key":"e_1_3_1_239_2","doi-asserted-by":"publisher","DOI":"10.1145\/2493432.2493477"},{"key":"e_1_3_1_240_2","first-page":"1","volume-title":"CHI\u201920","author":"Voelker Simon","year":"2020","unstructured":"Simon Voelker, Sebastian Hueber, Christian Holz, Christian Remy, and Nicolai Marquardt. 2020. GazeConduits: Calibration-free cross-device collaboration through gaze and touch. In CHI\u201920. 
ACM, 1\u201310."},{"key":"e_1_3_1_241_2","doi-asserted-by":"crossref","first-page":"849","DOI":"10.1109\/IVS.2017.7995822","volume-title":"2017 IEEE Intelligent Vehicles Symposium (IV\u201917)","author":"Vora Sourabh","year":"2017","unstructured":"Sourabh Vora, Akshay Rangesh, and Mohan M. Trivedi. 2017. On generalizing driver gaze zone estimation using convolutional neural networks. In 2017 IEEE Intelligent Vehicles Symposium (IV\u201917). IEEE, 849\u2013854."},{"key":"e_1_3_1_242_2","first-page":"9831","volume-title":"CVPR\u201919","author":"Wang Kang","year":"2019","unstructured":"Kang Wang, Hui Su, and Qiang Ji. 2019. Neuro-inspired eye tracking with eye movement dynamics. In CVPR\u201919. IEEE, 9831\u20139840."},{"key":"e_1_3_1_243_2","first-page":"11907","volume-title":"CVPR\u201919","author":"Wang Kang","year":"2019","unstructured":"Kang Wang, Rui Zhao, Hui Su, and Qiang Ji. 2019. Generalizing eye tracking with bayesian adversarial learning. In CVPR\u201919. IEEE, 11907\u201311916."},{"key":"e_1_3_1_244_2","first-page":"1","article-title":"Scanpath prediction on information visualisations","author":"Wang Yao","year":"2023","unstructured":"Yao Wang, Mihai B\u00e2 ce, and Andreas Bulling. 2023. Scanpath prediction on information visualisations. IEEE Transactions on Visualization and Computer Graphics Early Access (2023), 1\u201315.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"e_1_3_1_245_2","doi-asserted-by":"publisher","DOI":"10.1111\/infa.12055"},{"key":"e_1_3_1_246_2","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-021-00108-6"},{"key":"e_1_3_1_247_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2888592"},{"key":"e_1_3_1_248_2","first-page":"5","volume-title":"ETRA\u201918","author":"Wilson Andrew D.","year":"2018","unstructured":"Andrew D. Wilson and Shane Williams. 2018. Autopager: Exploiting change blindness for gaze-assisted reading. In ETRA\u201918. 
ACM, 5 pages."},{"key":"e_1_3_1_249_2","first-page":"529","volume-title":"CVPR\u201911","author":"Wolf Lior","year":"2011","unstructured":"Lior Wolf, Tal Hassner, and Itay Maoz. 2011. Face recognition in unconstrained videos with matched background similarity. In CVPR\u201911. IEEE, 529\u2013534."},{"key":"e_1_3_1_250_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.428"},{"key":"e_1_3_1_251_2","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857492"},{"key":"e_1_3_1_252_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2022.3152800"},{"key":"e_1_3_1_253_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00455"},{"key":"e_1_3_1_254_2","first-page":"7743","volume-title":"CVPR\u201919","author":"Xiong Yunyang","year":"2019","unstructured":"Yunyang Xiong, Hyunwoo J. Kim, and Vikas Singh. 2019. Mixed effects neural networks (menets) with applications to gaze estimation. In CVPR\u201919. IEEE, 7743\u20137752."},{"key":"e_1_3_1_255_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-85607-6_50"},{"key":"e_1_3_1_256_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP40776.2020.9053659"},{"key":"e_1_3_1_257_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM42981.2021.9488668"},{"key":"e_1_3_1_258_2","first-page":"7314","volume-title":"CVPR\u201920","author":"Yu Yu","year":"2020","unstructured":"Yu Yu and Jean-Marc Odobez. 2020. Unsupervised representation learning for gaze estimation. In CVPR\u201920. IEEE, 7314\u20137324."},{"key":"e_1_3_1_259_2","first-page":"3361","volume-title":"the Asian Conference on Computer Vision","author":"Yun Jun-Seok","year":"2022","unstructured":"Jun-Seok Yun, Youngju Na, Hee Hyeon Kim, Hyung-Il Kim, and Seok Bong Yoo. 2022. HAZE-Net: High-frequency attentive super-resolved gaze estimation in low-resolution face images. In the Asian Conference on Computer Vision. 
IEEE, 3361\u20133378."},{"key":"e_1_3_1_260_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-018-1133-5"},{"key":"e_1_3_1_261_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13428-017-0860-3"},{"key":"e_1_3_1_262_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2017.2743987"},{"key":"e_1_3_1_263_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-017-5426-y"},{"key":"e_1_3_1_264_2","doi-asserted-by":"publisher","DOI":"10.1109\/HRI.2019.8673093"},{"key":"e_1_3_1_265_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2016.2603342"},{"key":"e_1_3_1_266_2","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3174198"},{"key":"e_1_3_1_267_2","first-page":"8","volume-title":"CHI\u201922","author":"Zhang Xiang","year":"2022","unstructured":"Xiang Zhang, Kaori Ikematsu, Kunihiro Kato, and Yuta Sugiura. 2022. ReflecTouch: Detecting grasp posture of smartphone using corneal reflection images. In CHI\u201922. ACM, 8 pages."},{"key":"e_1_3_1_268_2","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025790"},{"key":"e_1_3_1_269_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58558-7_22"},{"key":"e_1_3_1_270_2","first-page":"13","volume-title":"CHI\u201919","author":"Zhang Xucong","year":"2019","unstructured":"Xucong Zhang, Yusuke Sugano, and Andreas Bulling. 2019. Evaluation of appearance-based methods and implications for gaze-based applications. In CHI\u201919. ACM, 13 pages."},{"key":"e_1_3_1_271_2","first-page":"86","volume-title":"31st British Machine Vision Conference (BMVC\u201920)","author":"Zhang Xucong","year":"2020","unstructured":"Xucong Zhang, Yusuke Sugano, Andreas Bulling, and Otmar Hilliges. 2020. Learning-based region selection for end-to-end gaze estimation. In 31st British Machine Vision Conference (BMVC\u201920). 
British Machine Vision Association, BMVA, 86."},{"key":"e_1_3_1_272_2","first-page":"4511","volume-title":"CVPR\u201915","author":"Zhang Xucong","year":"2015","unstructured":"Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 2015. Appearance-based gaze estimation in the wild. In CVPR\u201915. IEEE, 4511\u20134520."},{"key":"e_1_3_1_273_2","first-page":"2299","volume-title":"CVPR\u201917 Workshops","author":"Zhang Xucong","year":"2017","unstructured":"Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 2017. It\u2019s written all over your face: Full-face appearance-based gaze estimation. In CVPR\u201917 Workshops. IEEE, 2299\u20132308."},{"key":"e_1_3_1_274_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2778103"},{"key":"e_1_3_1_275_2","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511103"},{"key":"e_1_3_1_276_2","first-page":"15","volume-title":"CHI\u201922","author":"Zhao Xuan","year":"2022","unstructured":"Xuan Zhao, Mingming Fan, and Teng Han. 2022. \u201cI don\u2019t want people to look at me differently\u201d: Designing user-defined above-the-neck gestures for people with upper body motor impairments. In CHI\u201922. ACM, 15 pages."},{"issue":"1","key":"e_1_3_1_277_2","first-page":"1","article-title":"Calibration-free gaze interfaces based on linear smooth pursuit","volume":"13","author":"Zhe Zeng","year":"2020","unstructured":"Zeng Zhe, Felix Wilhelm Siebert, Antje Christine Venjakob, and Matthias Roetting. 2020. Calibration-free gaze interfaces based on linear smooth pursuit. Journal of Eye Movement Research 13, 1 (2020), 1\u201312.","journal-title":"Journal of Eye Movement Research"},{"key":"e_1_3_1_278_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.04.099"},{"key":"e_1_3_1_279_2","first-page":"3143","volume-title":"the IEEE International Conference on Computer Vision","author":"Zhu Wangjiang","year":"2017","unstructured":"Wangjiang Zhu and Haoping Deng. 2017. 
Monocular free-head 3d gaze tracking with deep learning and geometry constraints. In the IEEE International Conference on Computer Vision. IEEE, 3143\u20133152."},{"key":"e_1_3_1_280_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2778152"},{"key":"e_1_3_1_281_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-66415-2_35"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3606947","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3606947","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:52Z","timestamp":1750182532000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3606947"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,15]]},"references-count":280,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2,29]]}},"alternative-id":["10.1145\/3606947"],"URL":"https:\/\/doi.org\/10.1145\/3606947","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,15]]},"assertion":[{"value":"2022-05-20","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-22","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}