{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,18]],"date-time":"2026-02-18T23:25:32Z","timestamp":1771457132422,"version":"3.50.1"},"reference-count":28,"publisher":"Cambridge University Press (CUP)","issue":"4","license":[{"start":{"date-parts":[[2020,9,11]],"date-time":"2020-09-11T00:00:00Z","timestamp":1599782400000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"content-domain":{"domain":["cambridge.org"],"crossmark-restriction":true},"short-container-title":["AIEDAM"],"published-print":{"date-parts":[[2020,11]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The user's gaze can provide important information for human\u2013machine interaction, but the analysis of manual gaze data is extremely time-consuming, inhibiting wide adoption in usability studies. Existing methods for automated areas of interest (AOI) analysis cannot be applied to tangible products with a screen-based user interface (UI), which have become ubiquitous in everyday life. The objective of this paper is to present and evaluate a method to automatically map the user's gaze to dynamic AOIs on tangible screen-based UIs based on computer vision and deep learning. This paper presents an algorithm for<jats:italic>automated Dynamic AOI Mapping<\/jats:italic>(aDAM), which allows the automated mapping of gaze data recorded with mobile eye tracking to the predefined AOIs on tangible screen-based UIs. The evaluation of the algorithm is performed using two medical devices, which represent two extreme examples of tangible screen-based UIs. The different elements of aDAM are examined for accuracy and robustness, as well as the time saved compared to manual mapping. The break-even point for an analyst's effort for aDAM compared to manual analysis is found to be 8.9 min gaze data time. 
The accuracy and robustness of both the automated gaze mapping and the screen matching indicate that aDAM can be applied to a wide range of products. aDAM allows, for the first time, automated AOI analysis of tangible screen-based UIs with AOIs that dynamically change over time. The algorithm requires some additional initial input for the setup and training, but thereafter the analysis effort is determined only by computation time, regardless of the duration of the gaze data, and requires no additional manual work. The efficiency of the approach has the potential to enable broader adoption of mobile eye tracking in usability testing for the development of new products and may contribute to a more data-driven usability engineering process in the future.<\/jats:p>","DOI":"10.1017\/s0890060420000372","type":"journal-article","created":{"date-parts":[[2020,9,11]],"date-time":"2020-09-11T08:58:19Z","timestamp":1599814699000},"page":"505-514","update-policy":"https:\/\/doi.org\/10.1017\/policypage","source":"Crossref","is-referenced-by-count":8,"title":["Automated areas of interest analysis for usability studies of tangible screen-based user interfaces using mobile eye 
tracking"],"prefix":"10.1017","volume":"34","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1330-7338","authenticated-orcid":false,"given":"M.","family":"Batliner","sequence":"first","affiliation":[]},{"given":"S.","family":"Hess","sequence":"additional","affiliation":[]},{"given":"C.","family":"Ehrlich-Ad\u00e1m","sequence":"additional","affiliation":[]},{"given":"Q.","family":"Lohmeyer","sequence":"additional","affiliation":[]},{"given":"M.","family":"Meboldt","sequence":"additional","affiliation":[]}],"member":"56","published-online":{"date-parts":[[2020,9,11]]},"reference":[{"key":"S0890060420000372_ref14","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"S0890060420000372_ref23","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"S0890060420000372_ref15","doi-asserted-by":"publisher","DOI":"10.1080\/17425247.2019.1563070"},{"key":"S0890060420000372_ref26","doi-asserted-by":"publisher","DOI":"10.1080\/00140139.2014.990524"},{"key":"S0890060420000372_ref17","unstructured":"Lucas, BD and Kanade, T (1981) An iterative image registration technique with an application to stereo vision. IJCAI81. San Francisco, CA, US: Morgan Kaufmann Publishers Inc., pp. 
674\u2013679."},{"key":"S0890060420000372_ref3","volume-title":"Eye Tracking Methodology","author":"Duchowski","year":"2007"},{"key":"S0890060420000372_ref4","first-page":"3304","volume-title":"Fully-automatic annotation of scene videos: establish eye tracking effectively in various industrial applications","author":"Essig","year":"2010"},{"key":"S0890060420000372_ref6","doi-asserted-by":"publisher","DOI":"10.5201\/ipol.2012.gjmr-lsd"},{"key":"S0890060420000372_ref20","doi-asserted-by":"publisher","DOI":"10.1145\/2857491.2857530"},{"key":"S0890060420000372_ref19","doi-asserted-by":"publisher","DOI":"10.18489\/sacj.v30i1.511"},{"key":"S0890060420000372_ref2","first-page":"67","volume-title":"Proceedings of the 2012 ACM Conference on Ubiquitous Computing - UbiComp \u201812","author":"De Beugher","year":"2012"},{"key":"S0890060420000372_ref9","volume-title":"Eyetracking: A Comprehensive Guide to Methods, Paradigms and Measures","author":"Holmqvist","year":"2017"},{"key":"S0890060420000372_ref8","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.322"},{"key":"S0890060420000372_ref18","doi-asserted-by":"publisher","DOI":"10.1101\/299925"},{"key":"S0890060420000372_ref16","doi-asserted-by":"publisher","DOI":"10.1023\/B:VISI.0000029664.99615.94"},{"key":"S0890060420000372_ref27","doi-asserted-by":"crossref","first-page":"1","DOI":"10.16910\/jemr.11.6.6","article-title":"Automating areas of interest analysis in mobile eye tracking experiments based on machine learning","volume":"11","author":"Wolf","year":"2018","journal-title":"Journal of Eye Movement Research"},{"key":"S0890060420000372_ref12","doi-asserted-by":"publisher","DOI":"10.1561\/0600000001"},{"key":"S0890060420000372_ref1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.1986.4767851"},{"key":"S0890060420000372_ref10","doi-asserted-by":"crossref","unstructured":"Kiefer, P , Giannopoulos, I , Kremer, D , (2014) Starting to get bored. 
Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA \u201814. New York, NY, USA: ACM Press, pp. 315\u2013318.","DOI":"10.1145\/2578153.2578216"},{"key":"S0890060420000372_ref5","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2018.05.018"},{"key":"S0890060420000372_ref25","doi-asserted-by":"crossref","unstructured":"Toyama, T , Kieninger, T , Shafait, F , (2012) Gaze guided object recognition using a head-mounted eye tracker. Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA \u201912. New York, NY, USA: ACM Press, p. 91.","DOI":"10.1145\/2168556.2168570"},{"key":"S0890060420000372_ref11","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2016.2598695"},{"key":"S0890060420000372_ref28","doi-asserted-by":"publisher","DOI":"10.1109\/APSIPA.2015.7415350"},{"key":"S0890060420000372_ref21","doi-asserted-by":"publisher","DOI":"10.1561\/0600000007"},{"key":"S0890060420000372_ref7","volume-title":"Multiple View Geometry in Computer Vision","author":"Hartley","year":"2003"},{"key":"S0890060420000372_ref24","doi-asserted-by":"crossref","unstructured":"Saluja, KS , Jeevithashree, D , Arjun, S , (2019) Analyzing eye gaze of users with different reading abilities due to learning disability. International Conference on Graphics and Signal Processing. New York NY US: Association for Computing Machinery.","DOI":"10.1145\/3338472.3338481"},{"key":"S0890060420000372_ref13","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126542"},{"key":"S0890060420000372_ref22","unstructured":"Mussgnug, M , Waldern, MF and Meboldt, M (2017) Mobile eye tracking in usability testing: designers analysing the user\u2013product interaction. International Conference on Engineering Design. 
Glasgow, Scotland: Design Society."}],"container-title":["Artificial Intelligence for Engineering Design, Analysis and Manufacturing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S0890060420000372","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,3,31]],"date-time":"2021-03-31T22:44:12Z","timestamp":1617230652000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S0890060420000372\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,9,11]]},"references-count":28,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,11]]}},"alternative-id":["S0890060420000372"],"URL":"https:\/\/doi.org\/10.1017\/s0890060420000372","relation":{},"ISSN":["0890-0604","1469-1760"],"issn-type":[{"value":"0890-0604","type":"print"},{"value":"1469-1760","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,9,11]]},"assertion":[{"value":"Copyright \u00a9 The Author(s), 2020. Published by Cambridge University Press","name":"copyright","label":"Copyright","group":{"name":"copyright_and_licensing","label":"Copyright and Licensing"}}]}}