{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,24]],"date-time":"2025-08-24T01:54:05Z","timestamp":1756000445658,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":39,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,9,26]],"date-time":"2023-09-26T00:00:00Z","timestamp":1695686400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,9,26]]},"DOI":"10.1145\/3565066.3608690","type":"proceedings-article","created":{"date-parts":[[2023,9,22]],"date-time":"2023-09-22T22:08:36Z","timestamp":1695420516000},"page":"1-7","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["V-ir-Net: A Novel Neural Network for Pupil and Corneal Reflection Detection trained on Simulated Light Distributions"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-5580-2304","authenticated-orcid":false,"given":"Virmarie","family":"Maquiling","sequence":"first","affiliation":[{"name":"Human-Centered Technologies for Learning, Technical University of Munich, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-5685-7318","authenticated-orcid":false,"given":"Sean Anthony","family":"Byrne","sequence":"additional","affiliation":[{"name":"MoMiLab, IMT School for Advanced Studies Lucca, IMT Lucca, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2089-9012","authenticated-orcid":false,"given":"Marcus","family":"Nystr\u00f6m","sequence":"additional","affiliation":[{"name":"Lund University Humanities Lab, Lund University, Sweden"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3146-4484","authenticated-orcid":false,"given":"Enkelejda","family":"Kasneci","sequence":"additional","affiliation":[{"name":"Technical 
University of Munich, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4672-8756","authenticated-orcid":false,"given":"Diederick C.","family":"Niehorster","sequence":"additional","affiliation":[{"name":"Lund University Humanities Lab & Department of Psychology, Lund University, Sweden"}]}],"member":"320","published-online":{"date-parts":[[2023,9,26]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/EMBC.2019.8857218"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.5555\/3379393"},{"key":"e_1_3_2_1_3_1","volume-title":"Precise localization of corneal reflections in eye images using deep learning trained on synthetic data. arXiv preprint arXiv:2304.05673","author":"Byrne Sean\u00a0Anthony","year":"2023","unstructured":"Sean\u00a0Anthony Byrne, Marcus Nystr\u00f6m, Virmarie Maquiling, Enkelejda Kasneci, and Diederick\u00a0C Niehorster. 2023. Precise localization of corneal reflections in eye images using deep learning trained on synthetic data. arXiv preprint arXiv:2304.05673 (2023)."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01471"},{"key":"e_1_3_2_1_5_1","volume-title":"Detection and Correspondence Matching of Corneal Reflections for Eye Tracking Using Deep Learning. In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2210\u20132217","author":"Chugh Soumil","year":"2021","unstructured":"Soumil Chugh, Braiden Brousseau, Jonathan Rose, and Moshe Eizenman. 2021. Detection and Correspondence Matching of Corneal Reflections for Eye Tracking Using Deep Learning. In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2210\u20132217.
"},{"key":"e_1_3_2_1_6_1","volume-title":"Pupilnet: Convolutional neural networks for robust pupil detection. arXiv preprint arXiv:1601.04902","author":"Fuhl Wolfgang","year":"2016","unstructured":"Wolfgang Fuhl, Thiago Santini, Gjergji Kasneci, and Enkelejda Kasneci. 2016. Pupilnet: Convolutional neural networks for robust pupil detection. arXiv preprint arXiv:1601.04902 (2016)."},{"key":"e_1_3_2_1_7_1","volume-title":"Pupilnet v2.0: Convolutional neural networks for cpu based real time robust pupil detection. arXiv preprint arXiv:1711.00112","author":"Fuhl Wolfgang","year":"2017","unstructured":"Wolfgang Fuhl, Thiago Santini, Gjergji Kasneci, Wolfgang Rosenstiel, and Enkelejda Kasneci. 2017. Pupilnet v2.0: Convolutional neural networks for cpu based real time robust pupil detection. arXiv preprint arXiv:1711.00112 (2017)."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00138-016-0776-4"},{"key":"e_1_3_2_1_9_1","volume-title":"Pistol: Pupil invisible supportive tool to extract pupil, iris, eye opening, eye movements, pupil and iris gaze vector, and 2d as well as 3d gaze. arXiv preprint arXiv:2201.06799","author":"Fuhl Wolfgang","year":"2022","unstructured":"Wolfgang Fuhl, Daniel Weber, and Shahram Eivazi. 2022. 
Pistol: Pupil invisible supportive tool to extract pupil, iris, eye opening, eye movements, pupil and iris gaze vector, and 2d as well as 3d gaze. arXiv preprint arXiv:2201.06799 (2022)."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1364\/OPTICA.6.000506"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2638728.2641695"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISSC49989.2020.9180166"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300780"},{"key":"e_1_3_2_1_14_1","volume-title":"arXiv:2304.02643","author":"Kirillov Alexander","year":"2023","unstructured":"Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander\u00a0C. Berg, Wan-Yen Lo, Piotr Doll\u00e1r, and Ross Girshick. 2023. Segment Anything. arXiv:2304.02643 (2023)."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2021.3067765"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.324"},{"key":"e_1_3_2_1_17_1","volume-title":"Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101","author":"Loshchilov Ilya","year":"2017","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. 
arXiv preprint arXiv:1711.05101 (2017)."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3317956.3318153"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1063\/5.0034891"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Maria Mikhailenko, Nadezhda Maksimenko, and Mikhail Kurushkin. 2022. Eye-tracking in immersive virtual reality for education: a review of the current progress and applications. In Frontiers in Education, Vol.\u00a07. Frontiers Media SA, 697032.","DOI":"10.3389\/feduc.2022.697032"},{"key":"e_1_3_2_1_21_1","volume-title":"V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV). IEEE, 565\u2013571.","author":"Milletari Fausto","year":"2016","unstructured":"Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. 2016. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 fourth international conference on 3D vision (3DV). 
IEEE, 565\u2013571."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3379157.3391990"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/CoG47356.2020.9231896"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1177\/2041669517708205"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490355.3490359"},{"key":"e_1_3_2_1_26_1","volume-title":"The amplitude of small eye movements can be accurately estimated with video-based eye trackers. Behavior Research Methods","author":"Nystr\u00f6m Marcus","year":"2022","unstructured":"Marcus Nystr\u00f6m, Diederick\u00a0C Niehorster, Richard Andersson, Roy\u00a0S Hessels, and Ignace\u00a0TC Hooge. 2022. The amplitude of small eye movements can be accurately estimated with video-based eye trackers. Behavior Research Methods (2022), 1\u201313."},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/CBMS.2018.00086"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00451"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAES.2021.3139848"},{"key":"e_1_3_2_1_30_1","article-title":"Binary cross entropy with deep learning technique for image classification","volume":"9","author":"Ruby Usha","year":"2020","unstructured":"Usha Ruby and Vamsidhar Yendapalli. 2020. Binary cross entropy with deep learning technique for image classification. Int. J. Adv. Trends Comput. Sci. Eng 9, 10 (2020).","journal-title":"Int. J. Adv. Trends Comput. Sci. 
Eng"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Alexandra Sipatchin, Siegfried Wahl, and Katharina Rifai. 2021. Eye-tracking for clinical ophthalmology with virtual reality (VR): A case study of the HTC Vive Pro Eye\u2019s usability. In Healthcare, Vol.\u00a09. MDPI, 180.","DOI":"10.3390\/healthcare9020180"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57987-0_30"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.235"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/2988458.2988466"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00366"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2019.00455"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1177\/1071181321651090"},{"key":"e_1_3_2_1_38_1","volume-title":"gazeNet: End-to-end eye-movement event detection with deep neural networks. Behavior research methods 51","author":"Zemblys Raimondas","year":"2019","unstructured":"Raimondas Zemblys, Diederick\u00a0C Niehorster, and Kenneth Holmqvist. 2019. gazeNet: End-to-end eye-movement event detection with deep neural networks. Behavior research methods 51 (2019), 840\u2013864."},{"key":"e_1_3_2_1_39_1","volume-title":"Proceedings, Part XXIX 16","author":"Zhu Tyler","year":"2020","unstructured":"Tyler Zhu, Per Karlsson, and Christoph Bregler. 2020. Simpose: Effectively learning densepose and surface normals of people from simulated data. 
In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XXIX 16. Springer, 225\u2013242."}],"event":{"name":"MobileHCI '23: 25th International Conference on Mobile Human-Computer Interaction","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"],"location":"Athens, Greece","acronym":"MobileHCI '23"},"container-title":["Proceedings of the 25th International Conference on Mobile Human-Computer Interaction"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565066.3608690","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3565066.3608690","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:02:53Z","timestamp":1750186973000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565066.3608690"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,26]]},"references-count":39,"alternative-id":["10.1145\/3565066.3608690","10.1145\/3565066"],"URL":"https:\/\/doi.org\/10.1145\/3565066.3608690","relation":{},"subject":[],"published":{"date-parts":[[2023,9,26]]},"assertion":[{"value":"2023-09-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}