{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T02:58:54Z","timestamp":1771556334784,"version":"3.50.1"},"reference-count":44,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2020,4,8]],"date-time":"2020-04-08T00:00:00Z","timestamp":1586304000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In the Cultural Heritage (CH) context, art galleries and museums employ technological devices to enhance and personalise the museum visit experience. However, the most challenging aspect is determining what the visitor is interested in. In this work, a novel Visual Attentive Model (VAM), learned from eye-tracking data, is proposed. In particular, eye-tracking data of adults and children observing five paintings with similar characteristics have been collected. The images were selected by CH experts: the three \u201cIdeal Cities\u201d (Urbino, Baltimore and Berlin), the Inlaid chest in the National Gallery of Marche, and the Wooden panel with Marche view in the \u201cStudiolo del Duca\u201d. These pictures have been recognized by experts as having analogous features, thus providing coherent visual stimuli. Our proposed method combines a new coordinate representation of eye sequences, obtained using Geometric Algebra, with a deep learning model for the automated recognition of people (to identify, differentiate, or authenticate individuals) by the attention focus of their distinctive eye movement patterns.
The experiments compared five Deep Convolutional Neural Networks (DCNNs), which yield high accuracy (more than 80%), demonstrating the effectiveness and suitability of the proposed approach for identifying adults and children among museum visitors.<\/jats:p>","DOI":"10.3390\/s20072101","type":"journal-article","created":{"date-parts":[[2020,4,9]],"date-time":"2020-04-09T03:40:19Z","timestamp":1586403619000},"page":"2101","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":18,"title":["A Visual Attentive Model for Discovering Patterns in Eye-Tracking Data\u2014A Proposal in Cultural Heritage"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9160-834X","authenticated-orcid":false,"given":"Roberto","family":"Pierdicca","sequence":"first","affiliation":[{"name":"Dipartimento di Ingegneria Civile, Edile e dell\u2019Architettura, Universit\u00e1 Politecnica delle Marche, 60131 Ancona, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5523-7174","authenticated-orcid":false,"given":"Marina","family":"Paolanti","sequence":"additional","affiliation":[{"name":"Dipartimento di Ingegneria dell\u2019Informazione, Universit\u00e1 Politecnica delle Marche, 60131 Ancona, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5637-6582","authenticated-orcid":false,"given":"Ramona","family":"Quattrini","sequence":"additional","affiliation":[{"name":"Dipartimento di Ingegneria Civile, Edile e dell\u2019Architettura, Universit\u00e1 Politecnica delle Marche, 60131 Ancona, Italy"}]},{"given":"Marco","family":"Mameli","sequence":"additional","affiliation":[{"name":"Dipartimento di Ingegneria dell\u2019Informazione, Universit\u00e1 Politecnica delle Marche, 60131 Ancona, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8893-9244","authenticated-orcid":false,"given":"Emanuele","family":"Frontoni","sequence":"additional","affiliation":[{"name":"Dipartimento di Ingegneria 
dell\u2019Informazione, Universit\u00e1 Politecnica delle Marche, 60131 Ancona, Italy"}]}],"member":"1968","published-online":{"date-parts":[[2020,4,8]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Pierdicca, R., Marques-Pita, M., Paolanti, M., and Malinverni, E.S. (2019). IoT and Engagement in the Ubiquitous Museum. Sensors, 19.","DOI":"10.3390\/s19061387"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"244","DOI":"10.1109\/JIOT.2015.2506258","article-title":"An indoor location-aware system for an IoT-based smart museum","volume":"3","author":"Alletto","year":"2015","journal-title":"IEEE Internet Things J."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1016\/j.chb.2015.12.035","article-title":"Use of digital guides in museum galleries: Determinants of information selection","volume":"57","author":"Merkt","year":"2016","journal-title":"Comput. Hum. Behav."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Fontanella, F., Molinara, M., Gallozzi, A., Cigola, M., Senatore, L.J., Florio, R., Clini, P., and Celis D\u2019Amico, F. (2019). HeritageGO (HeGO): A Social Media Based Project for Cultural Heritage Valorization. Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, ACM.","DOI":"10.1145\/3314183.3323863"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Yanulevskaya, V., Uijlings, J., Bruni, E., Sartori, A., Zamboni, E., Bacci, F., Melcher, D., and Sebe, N. (2012, January 29). In the eye of the beholder: Employing statistical analysis and eye tracking for analyzing abstract paintings. 
Proceedings of the 20th ACM international conference on Multimedia, Nara, Japan.","DOI":"10.1145\/2393347.2393399"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.patrec.2014.06.002","article-title":"Markov chain based computational visual attention model that learns from eye tracking data","volume":"49","author":"Zhong","year":"2014","journal-title":"Pattern Recognit. Lett."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"5142","DOI":"10.1109\/TIP.2018.2851672","article-title":"Predicting human eye fixations via an lstm-based saliency attentive model","volume":"27","author":"Cornia","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1145\/1658349.1658355","article-title":"Computational visual attention systems and their cognitive foundations: A survey","volume":"7","author":"Frintrop","year":"2010","journal-title":"ACM Trans. Appl. Percept."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1401","DOI":"10.1016\/j.sigpro.2012.06.014","article-title":"Learning saliency-based visual attention: A review","volume":"93","author":"Zhao","year":"2013","journal-title":"Signal Process."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Poole, A., and Ball, L.J. (2006). Eye tracking in HCI and usability research. Encyclopedia of Human Computer Interaction, IGI Global.","DOI":"10.4018\/978-1-59140-562-7.ch034"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"218","DOI":"10.1016\/j.jbi.2017.04.001","article-title":"Predicting healthcare trajectories from medical records: A deep learning approach","volume":"69","author":"Pham","year":"2017","journal-title":"J. Biomed. Inf."},{"key":"ref_12","unstructured":"Erhan, D., Manzagol, P.A., Bengio, Y., Bengio, S., and Vincent, P. (2009, January 16\u201318). The difficulty of training deep architectures and the effect of unsupervised pre-training. 
Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS) 2009, Clearwater Beach, FL, USA."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1016\/j.patcog.2017.02.030","article-title":"Enhanced skeleton visualization for view invariant human action recognition","volume":"68","author":"Liu","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Naspetti, S., Pierdicca, R., Mandolesi, S., Paolanti, M., Frontoni, E., and Zanoli, R. (2016, January 15\u201318). Automatic analysis of eye-tracking data for augmented reality applications: A prospective outlook. Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy.","DOI":"10.1007\/978-3-319-40651-0_17"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Pierdicca, R., Paolanti, M., Naspetti, S., Mandolesi, S., Zanoli, R., and Frontoni, E. (2018). User-Centered Predictive Model for Improving Cultural Heritage Augmented Reality Applications: An HMM-Based Approach for Eye-Tracking Data. J. Imaging, 4.","DOI":"10.3390\/jimaging4080101"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Clini, P., Frontoni, E., Quattrini, R., and Pierdicca, R. (2014). Augmented reality experience: From high-resolution acquisition to real time augmented contents. Adv. Multimedia, 2014.","DOI":"10.1155\/2014\/597476"},{"key":"ref_17","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv, Available online: https:\/\/arxiv.org\/pdf\/1409.1556.pdf."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A.A. (2017, January 4\u20139). Inception-v4, inception-resnet and the impact of residual connections on learning. 
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Aquilanti, L., Osimani, A., Cardinali, F., Clementi, F., Foligni, R., Garofalo, C., Loreto, N., Mandolesi, S., Milanovi\u0107, V., and Mozzon, M. (2020). Valorization of Foods: From Tradition to Innovation. The First Outstanding 50 Years of \u201cUniversit\u00e0 Politecnica delle Marche\u201d, Springer.","DOI":"10.1007\/978-3-030-33832-9_36"},{"key":"ref_22","first-page":"32","article-title":"Eye tracking in neuromarketing: A research agenda for marketing studies","volume":"7","author":"Rocha","year":"2015","journal-title":"Int. J. Psychol. Stud."},{"key":"ref_23","unstructured":"Nielsen, J., and Pernice, K. (2010). Eyetracking Web Usability, New Riders."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Br\u00f4ne, G., Oben, B., and Goedem\u00e9, T. (2011, January 18). Towards a more effective method for analyzing mobile eye-tracking data: Integrating gaze data with object recognition algorithms. 
Proceedings of the 1st International Workshop on Pervasive Eye Tracking & Mobile Eye-Based Interaction, Beijing, China.","DOI":"10.1145\/2029956.2029971"},{"key":"ref_25","unstructured":"De Beugher, S., Br\u00f4ne, G., and Goedem\u00e9, T. (2014, January 5\u20138). Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection. Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Nakano, Y.I., and Ishii, R. (2010, January 2\u20137). Estimating user\u2019s engagement from eye-gaze behaviors in human-agent conversations. Proceedings of the 15th International Conference on Intelligent User Interfaces, Hong Kong, China.","DOI":"10.1145\/1719970.1719990"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Pfeiffer, T., and Renner, P. (2014, January 26\u201328). EyeSee3D: A low-cost approach for analysing mobile 3D eye tracking data using augmented reality technology. Proceedings of the Symposium on Eye Tracking Research and Applications, Safety Harbor, FL, USA.","DOI":"10.1145\/2578153.2628814"},{"key":"ref_28","unstructured":"Ohm, C., M\u00fcller, M., Ludwig, B., and Bienk, S. (2014, January 23). Where is the landmark? Eye tracking studies in large-scale indoor environments. Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research co-located with the 8th International Conference on Geographic Information Science, Vienna, Austria."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Ma, K.T., Xu, Q., Lim, R., Li, L., Sim, T., and Kankanhalli, M. (2017, January 4\u20136). Eye-2-I: Eye-tracking for just-in-time implicit user profiling. 
Proceedings of the 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), Singapore.","DOI":"10.1109\/SIPROCESS.2017.8124555"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1016\/j.edurev.2013.10.001","article-title":"A review of using eye-tracking technology in exploring learning from 2000 to 2012","volume":"10","author":"Lai","year":"2013","journal-title":"Educ. Res. Rev."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Tabbers, H.K., Paas, F., Lankford, C., Martens, R.L., and van Merrienboer, J.J. (2008). Studying eye movements in multimedia learning. Understanding Multimedia Documents, Springer.","DOI":"10.1007\/978-0-387-73337-1_9"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"304","DOI":"10.1016\/j.ijme.2019.05.002","article-title":"Using eye-tracking for analyzing case study materials","volume":"17","author":"Berger","year":"2019","journal-title":"Int. J. Manag. Educ."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Schrammel, J., Mattheiss, E., D\u00f6belt, S., Paletta, L., Almer, A., and Tscheligi, M. (2011). Attentional behavior of users on the move towards pervasive advertising media. Pervasive Advertising, Springer.","DOI":"10.1007\/978-0-85729-352-7_14"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Schrammel, J., Regal, G., and Tscheligi, M. (2014, January 4\u20139). Attention approximation of mobile users towards their environment. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, Toronto, ON, Canada.","DOI":"10.1145\/2559206.2581295"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"366","DOI":"10.1016\/j.ijmedinf.2019.07.010","article-title":"Eye-tracking retrospective think-aloud as a novel approach for a usability evaluation","volume":"129","author":"Cho","year":"2019","journal-title":"Int. J. Med. 
Inform."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Massaro, D., Savazzi, F., Di Dio, C., Freedberg, D., Gallese, V., Gilli, G., and Marchetti, A. (2012). When art moves the eyes: A behavioral and eye-tracking study. PLoS ONE, 7.","DOI":"10.1371\/journal.pone.0037285"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"549","DOI":"10.1214\/16-AOAS921","article-title":"What we look at in paintings: A comparison between experienced and inexperienced art viewers","volume":"10","author":"Ylitalo","year":"2016","journal-title":"Ann. Appl. Stat."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Kiefer, P., Giannopoulos, I., Kremer, D., Schlieder, C., and Raubal, M. (2014, January 19). Starting to get bored: An outdoor eye tracking study of tourists exploring a city panorama. Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, FL, USA.","DOI":"10.1145\/2578153.2578216"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1503","DOI":"10.1016\/j.visres.2010.05.002","article-title":"Statistical regularities in art: Relations with visual coding and perception","volume":"50","author":"Graham","year":"2010","journal-title":"Vis. Res."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"98","DOI":"10.3389\/fnhum.2011.00098","article-title":"How do we see art: An eye-tracker study","volume":"5","author":"Quiroga","year":"2011","journal-title":"Front. Hum. 
Neurosci."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"397","DOI":"10.1016\/j.neuropsychologia.2019.04.022","article-title":"A novel machine learning analysis of eye-tracking data reveals suboptimal visual information extraction from facial stimuli in individuals with autism","volume":"129","year":"2019","journal-title":"Neuropsychologia"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"103779","DOI":"10.1016\/j.compedu.2019.103779","article-title":"Does visual attention to the instructor in online video affect learning and learner perceptions? An eye-tracking analysis","volume":"146","author":"Wang","year":"2020","journal-title":"Comput. Educ."},{"key":"ref_43","unstructured":"Camerota, F., and Kemp, M. (2006). La prospettiva del Rinascimento: Arte, architettura, scienza, Mondadori Electa."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Li, F.-F. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/7\/2101\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T09:16:36Z","timestamp":1760174196000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/7\/2101"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,4,8]]},"references-count":44,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2020,4]]}},"alternative-id":["s20072101"],"URL":"https:\/\/doi.org\/10.3390\/s20072101","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,4,8]]}}}