{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T17:57:33Z","timestamp":1764784653816,"version":"build-2065373602"},"reference-count":63,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2021,4,29]],"date-time":"2021-04-29T00:00:00Z","timestamp":1619654400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Ministerio de Ciencia, Innovaci\u00f3n y Universidades","award":["RED2018-102511-T"],"award-info":[{"award-number":["RED2018-102511-T"]}]},{"name":"Universitat Jaume I","award":["UJI-B2018-44"],"award-info":[{"award-number":["UJI-B2018-44"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance to understand the temporal evolution of attention on dynamic contents. To address this gap, we proposed Glimpse, a novel measure to compute temporal salience based on the observer-spatio-temporal consistency of raw gaze data. The measure is conceptually simple, training free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. 
Glimpse\u2019s software and data are publicly available.<\/jats:p>","DOI":"10.3390\/s21093099","type":"journal-article","created":{"date-parts":[[2021,4,29]],"date-time":"2021-04-29T10:30:41Z","timestamp":1619692241000},"page":"3099","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Glimpse: A Gaze-Based Measure of Temporal Salience"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1596-8466","authenticated-orcid":false,"given":"V. Javier","family":"Traver","sequence":"first","affiliation":[{"name":"Institute of New Imaging Technologies, Universitat Jaume I, Av. Vicent Sos Baynat, s\/n, E12071 Castell\u00f3n, Spain"}]},{"given":"Judith","family":"Zor\u00edo","sequence":"additional","affiliation":[{"name":"Universitat Jaume I, Av. Vicent Sos Baynat, s\/n, E12071 Castell\u00f3n, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5011-1847","authenticated-orcid":false,"given":"Luis A.","family":"Leiva","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Luxembourg, Belval, 6 Avenue de la Fonte, L-4264 Esch-sur-Alzette, Luxembourg"}]}],"member":"1968","published-online":{"date-parts":[[2021,4,29]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"770","DOI":"10.1017\/S0140525X00072484","article-title":"Is Complexity Theory appropriate for analyzing biological systems?","volume":"14","author":"Tsotsos","year":"1991","journal-title":"Behav. Brain Sci."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"20160113","DOI":"10.1098\/rstb.2016.0113","article-title":"How is visual salience computed in the brain? Insights from behavior, neurobiology and modeling","volume":"372","author":"Veale","year":"2017","journal-title":"Philos. Trans. R. Soc. Lond. B. Biol. 
Sci."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1295","DOI":"10.1016\/j.visres.2008.09.007","article-title":"Bayesian surprise attracts human attention","volume":"49","author":"Itti","year":"2009","journal-title":"Vis. Res."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1254","DOI":"10.1109\/34.730558","article-title":"A Model of Saliency-Based Visual Attention for Rapid Scene Analysis","volume":"20","author":"Itti","year":"1998","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Krasovskaya, S., and MacInnes, W.J. (2019). Salience Models: A Computational Cognitive Neuroscience Review. Vision, 3.","DOI":"10.3390\/vision3040056"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Leiva, L.A., Xue, Y., Bansal, A., Tavakoli, H.R., K\u00f6ro\u011flu, T., Du, J., Dayama, N.R., and Oulasvirta, A. (2020, January 5\u20139). Understanding Visual Saliency in Mobile User Interfaces. Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), Oldenburg, Germany.","DOI":"10.1145\/3379503.3403557"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Shen, C., and Zhao, Q. (2014, January 6\u201312). Webpage Saliency. Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10584-0_3"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Sidorov, O., Pedersen, M., Shekhar, S., and Kim, N.W. (2020, January 25\u201330). Are All the Frames Equally Important?. Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA), Honolulu, HI, USA.","DOI":"10.1145\/3334480.3382980"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhou, K., Qiao, Y., and Xiang, T. (2018, January 2\u20137). Deep Reinforcement Learning for Unsupervised Video Summarization With Diversity-Representativeness Reward. 
Proceedings of the Annual AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.12255"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Mahasseni, B., Lam, M., and Todorovic, S. (2017, January 21\u201326). Unsupervised Video Summarization With Adversarial LSTM Networks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.318"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yarbus, A.L. (1967). Eye Movements and Vision, Plenum Press.","DOI":"10.1007\/978-1-4899-5379-7"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1016\/j.dcn.2016.11.001","article-title":"Beyond eye gaze: What else can eyetracking reveal about cognition and cognitive development?","volume":"25","author":"Eckstein","year":"2017","journal-title":"Dev. Cogn. Neurosci."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Yun, K., Peng, Y., Samaras, D., Zelinsky, G.J., and Berg, T.L. (2013, January 23\u201328). Studying Relationships between Human Gaze, Description, and Computer Vision. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA.","DOI":"10.1109\/CVPR.2013.101"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Karthikeyan, S., Thuyen, N., Eckstein, M., and Manjunath, B.S. (2015, January 8\u201310). Eye tracking assisted extraction of attentionally important objects from videos. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298944"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Karessli, N., Akata, Z., Schiele, B., and Bulling, A. (2017, January 21\u201326). Gaze Embeddings for Zero-Shot Image Classification. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.679"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Salehin, M.M., and Paul, M. (2017, January 10\u201314). A novel framework for video summarization based on smooth pursuit information from eye tracker data. Proceedings of the IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China.","DOI":"10.1109\/ICMEW.2017.8026294"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Xu, J., Mukherjee, L., Li, Y., Warner, J., Rehg, J.M., and Singh, V. (2015, January 8\u201310). Gaze-enabled egocentric video summarization via constrained submodular maximization. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298836"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"394","DOI":"10.1109\/LSP.2016.2523339","article-title":"Novelty-based Spatiotemporal Saliency Detection for Prediction of Gaze in Egocentric Video","volume":"23","author":"Polatsek","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_19","unstructured":"Neves, A.C., Silva, M.M., Campos, M.F.M., and do Nascimento, E.R. (2020, January 23). A gaze driven fast-forward method for first-person videos. Proceedings of the EPIC@ECCV Workshop, Glasgow, UK."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"e2016980118","DOI":"10.1073\/pnas.2016980118","article-title":"Synchronized eye movements predict test scores in online video education","volume":"118","author":"Madsen","year":"2021","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"898","DOI":"10.1109\/TIP.2011.2165292","article-title":"Eye-Tracking Database for a Set of Standard Video Sequences","volume":"21","author":"Hadizadeh","year":"2012","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1007\/s10044-016-0568-5","article-title":"Fusion of eye movement and mouse dynamics for reliable behavioral biometrics","volume":"21","author":"Kasprowski","year":"2018","journal-title":"Pattern Anal. Appl."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.jvcir.2011.08.005","article-title":"Key frame extraction based on visual attention model","volume":"23","author":"Lai","year":"2012","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ma, Y.F., Lu, L., Zhang, H.J., and Li, M. (2002, January 1\u20136). A User Attention Model for Video Summarization. Proceedings of the ACM International Conference on Multimedia (MULTIMEDIA), New York, NY, USA.","DOI":"10.1145\/641007.641116"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Gitman, Y., Erofeev, M., Vatolin, D., Andrey, B., and Alexey, F. (2014, January 27\u201330). Semiautomatic visual-attention modeling and its application to video compression. Proceedings of the International Conference on Image Processing (ICIP), Paris, France.","DOI":"10.1109\/ICIP.2014.7025220"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Fosco, C., Newman, A., Sukhum, P., Zhang, Y.B., Zhao, N., Oliva, A., and Bylinskii, Z. (2020, January 14\u201319). How Much Time Do You Have? Modeling Multi-Duration Saliency. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00453"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Nguyen, T.V., Xu, M., Gao, G., Kankanhalli, M., Tian, Q., and Yan, S. (2013, January 18\u201319). Static Saliency vs. Dynamic Saliency: A Comparative Study. 
Proceedings of the ACM International Conference on Multimedia (MULTIMEDIA), Barcelona, Spain.","DOI":"10.1145\/2502081.2502128"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"231","DOI":"10.1007\/s11263-009-0215-3","article-title":"Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos","volume":"82","author":"Marat","year":"2009","journal-title":"Int. J. Comput. Vis."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"150","DOI":"10.1007\/s11263-010-0354-6","article-title":"Probabilistic Multi-Task Learning for Visual Saliency Estimation in Video","volume":"90","author":"Li","year":"2010","journal-title":"Int. J. Comput. Vis."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"510","DOI":"10.1109\/LSP.2016.2611485","article-title":"Beyond Frame-level CNN: Saliency-Aware 3-D CNN With LSTM for Video Action Recognition","volume":"24","author":"Wang","year":"2017","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"154","DOI":"10.1109\/LSP.2017.2775212","article-title":"A Novel Bottom-Up Saliency Detection Method for Video With Dynamic Background","volume":"25","author":"Chen","year":"2018","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_32","unstructured":"Min, K., and Corso, J. (November, January 27). TASED-Net: Temporally-Aggregating Spatial Encoder-Decoder Network for Video Saliency Detection. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1113","DOI":"10.1109\/TIP.2019.2936112","article-title":"Video Saliency Prediction Using Spatiotemporal Residual Attentive Networks","volume":"29","author":"Lai","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Droste, R., Jiao, J., and Noble, J.A. (2020, January 23). Unified Image and Video Saliency Modeling. 
Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58558-7_25"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Tangemann, M., K\u00fcmmerer, M., Wallis, T.S., and Bethge, M. (2020, January 23). Measuring the Importance of Temporal Features in Video Saliency. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58604-1_40"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Palmero Cantarino, C., Komogortsev, O.V., and Talathi, S.S. (2020, January 2\u20135). Benefits of Temporal Information for Appearance-Based Gaze Estimation. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA), Stuttgart, Germany.","DOI":"10.1145\/3379156.3391376"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Jiang, M., Huang, S., Duan, J., and Zhao, Q. (2015, January 8\u201310). SALICON: Saliency in context. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298710"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"57","DOI":"10.3758\/BF03195497","article-title":"A tool for tracking visual attention: The Restricted Focus Viewer","volume":"35","author":"Jansen","year":"2003","journal-title":"Behav. Res. Methods Instrum. Comput."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3131275","article-title":"BubbleView: An Interface for Crowdsourcing Image Importance Maps and Tracking Visual Attention","volume":"24","author":"Kim","year":"2017","journal-title":"ACM Trans. Comput.-Hum. Interact."},{"key":"ref_40","unstructured":"Cooke, L. (2006, January 7\u201310). Is the Mouse a \u201cPoor Man\u2019s Eye Tracker\u201d?. Proceedings of the STC Summit, Las Vegas, NV, USA."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Lyudvichenko, V.A., and Vatolin, D.S. (2019, January 23\u201326). 
Predicting video saliency using crowdsourced mouse-tracking data. Proceedings of the GraphiCon, Bryansk, Russia.","DOI":"10.30987\/graphicon-2019-2-127-130"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"417","DOI":"10.1080\/07370024.2012.731332","article-title":"Alternatives to Eye Tracking for Predicting Stimulus-Driven Attentional Selection Within Interfaces","volume":"28","author":"Masciocchi","year":"2013","journal-title":"Hum. Comput. Interact."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Newman, A., McNamara, B., Fosco, C., Zhang, Y.B., Sukhum, P., Tancik, M., Kim, N.W., and Bylinskii, Z. (2020, January 25\u201330). TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), Honolulu, HI, USA.","DOI":"10.1145\/3313831.3376799"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"255","DOI":"10.2307\/3212829","article-title":"The second-order analysis of stationary point processes","volume":"13","author":"Ripley","year":"1976","journal-title":"J. Appl. Probab."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"159","DOI":"10.1007\/s11258-006-9198-0","article-title":"Spatial Patterns on the Sagebrush Steppe\/Western Juniper Ecotone","volume":"190","author":"Strand","year":"2007","journal-title":"Plant Ecolog. Divers."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"1281","DOI":"10.1111\/jbi.12534","article-title":"Spatial distribution patterns of plague hosts: Point pattern analysis of the burrows of great gerbils in Kazakhstan","volume":"42","author":"Wilschut","year":"2015","journal-title":"J. Biogeogr."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Potapov, D., Douze, M., Harchaoui, Z., and Schmid, C. (2014, January 6\u201312). Category-specific video summarization. 
Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10599-4_35"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Wang, C., Wang, W., Qiu, Y., Hu, Y., and Scherer, S. (2020, January 23). Visual Memorability for Robotic Interestingness via Unsupervised Online Learning. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58536-5_4"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Otani, M., Nakashima, Y., Rahtu, E., and Heikkil\u00e4, J. (2019, January 16\u201320). Rethinking the Evaluation of Video Summaries. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00778"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"491","DOI":"10.3389\/fnhum.2017.00491","article-title":"How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models","volume":"11","author":"Nuthmann","year":"2017","journal-title":"Front. Hum. Neurosci."},{"key":"ref_52","unstructured":"Harel, J., Koch, C., and Perona, P. (2006, January 4\u20135). Graph-Based Visual Saliency. Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Geisler, D., Weber, D., Castner, N., and Kasneci, E. (2020, January 2\u20135). Exploiting the GBVS for Saliency Aware Gaze Heatmaps. 
Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA), Stuttgart, Germany.","DOI":"10.1145\/3379156.3391367"},{"key":"ref_54","unstructured":"Borji, A. (2018). Saliency Prediction in the Deep Learning Era: Successes, Limitations, and Future Challenges. arXiv Prepr."},{"key":"ref_55","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2014, January 14\u201316). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. Proceedings of the International Conference on Learning Representations (ICLR), Banff, AB, Canada."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Kim, B., Seo, J., Jeon, S., Koo, J., Choe, J., and Jeon, T. (2019, January 27\u201328). Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps. Proceedings of the ICCV Workshops, Seoul, Korea.","DOI":"10.1109\/ICCVW.2019.00510"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., and Savarese, S. (2019, January 16\u201320). Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00075"},{"key":"ref_58","unstructured":"Zou, Z., Shi, Z., Guo, Y., and Ye, J. (2019). Object Detection in 20 Years: A Survey. arXiv Prepr."},{"key":"ref_59","first-page":"906","article-title":"A comparative study of statistical methods used to identify dependencies between gene expression signals","volume":"15","author":"Takahashi","year":"2013","journal-title":"Briefings Bioinf."},{"key":"ref_60","unstructured":"Purves, D., Augustine, G.J., Fitzpatrick, D., Katz, L.C., LaMantia, A.S., McNamara, J.O., and Williams, S.M. (2001). Chapter Eye Movements and Sensory Motor Integration. 
Neuroscience, Sinauer Associates."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Kasprowski, P., and Harezlak, K. (2019, January 25\u201328). Using Mutual Distance Plot and Warped Time Distance Chart to Compare Scan-Paths of Multiple Observers. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA), Denver, CO, USA.","DOI":"10.1145\/3317958.3318226"},{"key":"ref_62","first-page":"75","article-title":"Designing Calm Technology","volume":"1","author":"Weiser","year":"1996","journal-title":"PowerGrid J."},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Papoutsaki, A., Sangkloy, P., Laskey, J., Daskalova, N., Huang, J., and Hays, J. (2016, January 9\u201316). WebGazer: Scalable Webcam Eye Tracking Using User Interactions. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA.","DOI":"10.1145\/2702613.2702627"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/9\/3099\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:55:17Z","timestamp":1760162117000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/9\/3099"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,4,29]]},"references-count":63,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2021,5]]}},"alternative-id":["s21093099"],"URL":"https:\/\/doi.org\/10.3390\/s21093099","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2021,4,29]]}}}