{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T18:28:58Z","timestamp":1776104938634,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":64,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,10,18]],"date-time":"2021-10-18T00:00:00Z","timestamp":1634515200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,10,18]]},"DOI":"10.1145\/3462244.3479938","type":"proceedings-article","created":{"date-parts":[[2021,10,15]],"date-time":"2021-10-15T14:41:47Z","timestamp":1634308907000},"page":"577-585","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices"],"prefix":"10.1145","author":[{"given":"Andy","family":"Kong","sequence":"first","affiliation":[{"name":"School of Computer Science, Carnegie Mellon University, USA"}]},{"given":"Karan","family":"Ahuja","sequence":"additional","affiliation":[{"name":"Human-Computer Interaction Institute, Carnegie Mellon University, USA"}]},{"given":"Mayank","family":"Goel","sequence":"additional","affiliation":[{"name":"School of Computer Science, Carnegie Mellon University, USA"}]},{"given":"Chris","family":"Harrison","sequence":"additional","affiliation":[{"name":"Human-Computer Interaction Institute, Carnegie Mellon University, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2021,10,18]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.5898\/JHRI.6.1.Admoni"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2016.7532934"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3214260"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3379337.3415588"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMS.2012.23"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/1057237.1057239"},{"key":"e_1_3_2_1_7_1","volume-title":"Graphics Interface. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques(SIGGRAPH \u201980)","author":"Bolt A.","year":"1980","unstructured":"Richard\u00a0A. Bolt. 1980. \u201cPut-That-There\u201d: Voice and Gesture at the Graphics Interface. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH \u201980). ACM, New York, NY, USA, 262\u2013270. https:\/\/doi.org\/10.1145\/800250.807503"},{"key":"e_1_3_2_1_8_1","volume-title":"Accurate Model-Based Point of Gaze Estimation on Mobile Devices. Vision 2, 3","author":"Brousseau Braiden","year":"2018","unstructured":"Braiden Brousseau, Jonathan Rose, and Moshe Eizenman. 2018. Accurate Model-Based Point of Gaze Estimation on Mobile Devices. Vision 2, 3 (2018). 
https:\/\/doi.org\/10.3390\/vision2030035"},{"key":"e_1_3_2_1_9_1","volume-title":"Electronics Mobile Communication Conference (UEMCON). 951\u2013959","author":"Brousseau B.","year":"2018","unstructured":"B. Brousseau, J. Rose, and M. Eizenman. 2018. SmartEye: An Accurate Infrared Eye Tracking System for Smartphones. In 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). 951\u2013959. https:\/\/doi.org\/10.1109\/UEMCON.2018.8796799"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2838739.2838778"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2818346.2820752"},{"key":"e_1_3_2_1_12_1","volume-title":"Oxiod: The dataset for deep inertial odometry. arXiv preprint arXiv:1809.07491(2018).","author":"Chen Changhao","year":"2018","unstructured":"Changhao Chen, Peijun Zhao, Chris\u00a0Xiaoxuan Lu, Wei Wang, Andrew Markham, and Niki Trigoni. 2018. Oxiod: The dataset for deep inertial odometry. arXiv preprint arXiv:1809.07491 (2018)."},{"key":"e_1_3_2_1_13_1","volume-title":"INFORMATIK 2008","author":"Dachselt Raimund","year":"2008","unstructured":"Raimund Dachselt and Robert Buchholz. 2008. Throw and tilt\u2013seamless interaction across devices using mobile phone gestures. INFORMATIK 2008. 
Beherrschbare Systeme\u2013dank Informatik. Band 1 (2008)."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2168556.2168601"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/19.744648"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.3390\/info10100290"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2578153.2578190"},{"key":"e_1_3_2_1_18_1","unstructured":"Google. 2021. MediaPipe Face Mesh. https:\/\/google.github.io\/mediapipe\/solutions\/face_mesh.html Accessed: 2021-04-04."},{"key":"e_1_3_2_1_19_1","unstructured":"Tianchu Guo, Yongchao Liu, Hui Zhang, Xiabing Liu, Youngjun Kwak, ByungIn Yoo, Jae-Joon Han, and Changkyu Choi. 2019. A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone. CoRR abs\/1910.07331 (2019). arXiv:1910.07331 http:\/\/arxiv.org\/abs\/1910.07331"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2016.2581023"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/274644.274647"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/354401.354417"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1709886.1709906"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/97243.97246"},{"key":"e_1_3_2_1_25_1","volume-title":"Free-Head Appearance-Based Eye Gaze Estimation on Mobile Devices. In 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). 232\u2013237","author":"Jigang L.","year":"2019","unstructured":"L. Jigang, B.\u00a0S.\u00a0L. Francis, and D. Rajan. 2019. Free-Head Appearance-Based Eye Gaze Estimation on Mobile Devices. In 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). 232\u2013237. https:\/\/doi.org\/10.1109\/ICAIIC.2019.8669057"},{"key":"e_1_3_2_1_26_1","volume-title":"IMU Sensor-Based Hand Gesture Recognition for Human-Machine Interfaces. Sensors 19, 18","author":"Kim Minwoo","year":"2019","unstructured":"Minwoo Kim, Jaechan Cho, Seongjoo Lee, and Yunho Jung. 2019. IMU Sensor-Based Hand Gesture Recognition for Human-Machine Interfaces. Sensors 19, 18 (2019). https:\/\/doi.org\/10.3390\/s19183827"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2818346.2820751"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.239"},{"key":"e_1_3_2_1_29_1","volume-title":"Appearance-Based Gaze Tracking with Free Head Movement. In 2014 22nd International Conference on Pattern Recognition. 1869\u20131873","author":"Lai C.","year":"2014","unstructured":"C. Lai, Y. Chen, K. Chen, S. Chen, S. Shih, and Y. Hung. 2014. Appearance-Based Gaze Tracking with Free Head Movement. In 2014 22nd International Conference on Pattern Recognition. 
1869\u20131873. https:\/\/doi.org\/10.1109\/ICPR.2014.327"},{"key":"e_1_3_2_1_30_1","unstructured":"Joseph Lemley, Anuradha Kar, Alexandru Drimbarean, and Peter Corcoran. 2018. Efficient CNN Implementation for Eye-Gaze Estimation on Low-Power\/Low-Quality Consumer Imaging Systems. CoRR abs\/1806.10890 (2018). arXiv:1806.10890 http:\/\/arxiv.org\/abs\/1806.10890"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2019.2899869"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.pmcj.2009.07.007"},{"key":"e_1_3_2_1_33_1","volume-title":"CHI \u201908 Extended Abstracts on Human Factors in Computing Systems(CHI EA \u201908)","author":"Mateo C.","unstructured":"Julio\u00a0C. Mateo, Javier San\u00a0Agustin, and John\u00a0Paulin Hansen. 2008. Gaze Beats Mouse: Hands-Free Selection by Combining Gaze and EMG. In CHI \u201908 Extended Abstracts on Human Factors in Computing Systems (CHI EA \u201908). ACM, New York, NY, USA, 3039\u20133044. 
https:\/\/doi.org\/10.1145\/1358628.1358804"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376479"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2008.926867"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2010.937500"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.5555\/1061935.1649095"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2011.05.001"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"crossref","unstructured":"K. Noh, D. Lee, and H. Jeong. 2015. Description and recognition based on directional motion vector for spatial hand gestures. In 2015 IEEE SENSORS. 1\u20134. https:\/\/doi.org\/10.1109\/ICSENS.2015.7370260","DOI":"10.1109\/ICSENS.2015.7370260"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3204493.3204552"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3020165.3020170"},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2987386.2987410"},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/2642918.2647397"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.3390\/s19173731"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/2982142.2982145"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00290"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/1226969.1227023"},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/2686612.2686676"},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.23919\/ICIF.2018.8455482"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/2642918.2647373"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1142\/S0218213097000116"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1080\/03637759309376314"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/KST48564.2020.9059376"},{"key":"e_1_3_2_1_54_1","unstructured":"Tobii. 2014. Tobii eye tracker for HTC Vive. https:\/\/blog.tobii.com\/eye-tracking-vr-devkit-for-htc-vive-311cbca952df"},{"key":"e_1_3_2_1_55_1","volume-title":"Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature communications 11, 1","author":"Valliappan Nachiappan","year":"2020","unstructured":"Nachiappan Valliappan, Na Dai, Ethan Steinberg, Junfeng He, Kantwon Rogers, Venky Ramachandran, Pingmei Xu, Mina Shojaeizadeh, Li Guo, Kai Kohlhoff, 2020. Accelerating eye movement research via accurate and affordable smartphone eye tracking. Nature communications 11, 1 (2020), 1\u201312."},{"key":"e_1_3_2_1_56_1","volume-title":"ISWC","author":"Whitmire Eric","unstructured":"Eric Whitmire, Laura Trutoiu, Robert Cavin, David Perek, Brian Scally, James Phillips, and Shwetak Patel. 2016. EyeContact: scleral coil eye tracking for virtual reality. In ISWC. 
ACM, New York, NY, USA, 184\u2013191."},{"key":"e_1_3_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857492"},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2018.2856119"},{"key":"e_1_3_2_1_59_1","volume-title":"Turkergaze: Crowdsourcing saliency with webcam based eye tracking. arXiv preprint arXiv:1504.06755(2015).","author":"Xu Pingmei","year":"2015","unstructured":"Pingmei Xu, Krista\u00a0A Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev\u00a0R Kulkarni, and Jianxiong Xiao. 2015. Turkergaze: Crowdsourcing saliency with webcam based eye tracking. arXiv preprint arXiv:1504.06755 (2015)."},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858270"},{"key":"e_1_3_2_1_61_1","volume-title":"CHI \u201910 Extended Abstracts on Human Factors in Computing Systems(CHI EA \u201910)","author":"Yoo ByungIn","unstructured":"ByungIn Yoo, Jae-Joon Han, Changkyu Choi, Kwonju Yi, Sungjoo Suh, Dusik Park, and Changyeong Kim. 2010. 3D User Interface Combining Gaze and Hand Gestures for Large-Scale Display. In CHI \u201910 Extended Abstracts on Human Factors in Computing Systems (CHI EA \u201910). ACM, New York, NY, USA, 3709\u20133714. 
https:\/\/doi.org\/10.1145\/1753846.1754043"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/302979.303053"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025790"},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58558-7_22"}],"event":{"name":"ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION","location":"Montr\u00e9al QC Canada","acronym":"ICMI '21","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"]},"container-title":["Proceedings of the 2021 International Conference on Multimodal Interaction"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3462244.3479938","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3462244.3479938","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:48:55Z","timestamp":1750193335000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3462244.3479938"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,18]]},"references-count":64,"alternative-id":["10.1145\/3462244.3479938","10.1145\/3462244"],"URL":"https:\/\/doi.org\/10.1145\/3462244.3479938","relation":{},"subject":[],"published":{"date-parts":[[2021,10,18]]},"assertion":[{"value":"2021-10-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}