{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T01:15:14Z","timestamp":1772068514193,"version":"3.50.1"},"reference-count":66,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2020,1,31]],"date-time":"2020-01-31T00:00:00Z","timestamp":1580428800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100010663","name":"H2020 European Research Council","doi-asserted-by":"publisher","award":["643924"],"award-info":[{"award-number":["643924"]}],"id":[{"id":"10.13039\/100010663","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>An automatic \u201cmuseum audio guide\u201d is presented as a new type of audio guide for museums. The device consists of a headset equipped with a camera that captures exhibit pictures and the Eyes of Things (EoT) computer vision device. The EoT board recognizes artworks using features from accelerated segment test (FAST) keypoints and a random forest classifier, and can operate for an entire day without recharging its batteries. In addition, application logic has been implemented that enables a highly efficient behavior upon recognition of a painting. Two different use-case scenarios have been implemented. The main testing was performed in a piloting phase in a real-world museum. 
Results show that the system delivers its main benefit, simplicity of use, and that users prefer the proposed system over traditional audio guides.<\/jats:p>","DOI":"10.3390\/s20030779","type":"journal-article","created":{"date-parts":[[2020,1,31]],"date-time":"2020-01-31T11:55:56Z","timestamp":1580471756000},"page":"779","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["Automatic Museum Audio Guide"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5092-8275","authenticated-orcid":false,"given":"Noelia","family":"Vallez","sequence":"first","affiliation":[{"name":"Visilab (Vision and Artificial Intelligence Group), University of Castilla-La Mancha (UCLM), E.T.S.I. Industrial, Avda Camilo Jose Cela s\/n, 13071 Ciudad Real, Spain"}]},{"given":"Stephan","family":"Krauss","sequence":"additional","affiliation":[{"name":"DFKI (Deutsches Forschungszentrum f\u00fcr K\u00fcnstliche Intelligenz), Augmented Vision Research Group, Trippstadter Str. 122, 67663 Kaiserslautern, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5377-848X","authenticated-orcid":false,"given":"Jose Luis","family":"Espinosa-Aranda","sequence":"additional","affiliation":[{"name":"Visilab (Vision and Artificial Intelligence Group), University of Castilla-La Mancha (UCLM), E.T.S.I. Industrial, Avda Camilo Jose Cela s\/n, 13071 Ciudad Real, Spain"}]},{"given":"Alain","family":"Pagani","sequence":"additional","affiliation":[{"name":"DFKI (Deutsches Forschungszentrum f\u00fcr K\u00fcnstliche Intelligenz), Augmented Vision Research Group, Trippstadter Str. 
122, 67663 Kaiserslautern, Germany"}]},{"given":"Kasra","family":"Seirafi","sequence":"additional","affiliation":[{"name":"Fluxguide, Burggasse 7-9\/9, 1070 Vienna, Austria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0841-4131","authenticated-orcid":false,"given":"Oscar","family":"Deniz","sequence":"additional","affiliation":[{"name":"Visilab (Vision and Artificial Intelligence Group), University of Castilla-La Mancha (UCLM), E.T.S.I. Industrial, Avda Camilo Jose Cela s\/n, 13071 Ciudad Real, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2020,1,31]]},"reference":[{"key":"ref_1","unstructured":"Hu, F. (2013). Classification and Regression Trees, CRC Press. [1st ed.]."},{"key":"ref_2","unstructured":"Commission, E. (2020, January 31). Report from the Workshop on Cyber-Physical Systems: Uplifting Europe\u2019s Innovation Capacity. Available online: https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/report-workshop-cyber-physical-systems-uplifting-europe\u2019s-innovation-capacity."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Szeliski, R. (2010). Computer Vision: Algorithms and Applications, Springer Science & Business Media.","DOI":"10.1007\/978-1-84882-935-0"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Belbachir, A.N. (2010). Smart Cameras, Springer.","DOI":"10.1007\/978-1-4419-0953-4"},{"key":"ref_5","unstructured":"BDTI (2020, January 31). Implementing Vision Capabilities in Embedded Systems. Available online: https:\/\/www.bdti.com\/MyBDTI\/pubs\/BDTI_ESC_Boston_Embedded_Vision.pdf."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Kisa\u010danin, B., Bhattacharyya, S.S., and Chai, S. (2009). Embedded Computer Vision, Springer International Publishing.","DOI":"10.1007\/978-1-84800-304-0"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Bailey, D. (2011). 
Design for Embedded Image Processing on FPGAs, John Wiley & Sons Asia Pte Ltd.","DOI":"10.1002\/9780470828519"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"921","DOI":"10.1016\/j.comnet.2006.10.002","article-title":"A survey on wireless multimedia sensor networks","volume":"51","author":"Akyildiz","year":"2007","journal-title":"Comput. Netw."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Farooq, M.O., and Kunz, T. (2011). Wireless multimedia sensor networks testbeds and state-of-the-art hardware: A survey. Communication and Networking, Proceedings of the International Conference on Future Generation Communication and Networking, Jeju Island, Korea, 8\u201310 December 2011, Springer.","DOI":"10.1007\/978-3-642-27192-2_1"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"6662","DOI":"10.3390\/s100706662","article-title":"Wireless multimedia sensor networks: Current trends and future directions","volume":"10","author":"Almalkawi","year":"2010","journal-title":"Sensors"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Soro, S., and Heinzelman, W. (2020, January 30). A Survey of Visual Sensor Networks. Available online: https:\/\/www.hindawi.com\/journals\/am\/2009\/640386\/.","DOI":"10.1155\/2009\/640386"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Fern\u00e1ndez-Berni, J., Carmona-Gal\u00e1n, R., and Rodr\u00edguez-V\u00e1zquez, \u00c1. (2012). Vision-enabled WSN nodes: State of the art. 
Low-Power Smart Imagers for Vision-Enabled Sensor Networks, Springer.","DOI":"10.1007\/978-1-4614-2392-8"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"689","DOI":"10.1007\/s11042-011-0840-z","article-title":"A survey of visual sensor network platforms","volume":"60","author":"Tavli","year":"2012","journal-title":"Multimedia Tools Appl."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Chen, P., Ahammad, P., Boyer, C., Huang, S.I., Lin, L., Lobaton, E., Meingast, M., Oh, S., Wang, S., and Yan, P. (2008, January 7\u201311). CITRIC: A low-bandwidth wireless camera network platform. Proceedings of the 2008 Second ACM\/IEEE International Conference on Distributed Smart Cameras, Stanford, CA, USA.","DOI":"10.1109\/ICDSC.2008.4635675"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Hengstler, S., Prashanth, D., Fong, S., and Aghajan, H. (2007, January 25\u201327). MeshEye: A hybrid-resolution smart camera mote for applications in distributed intelligent surveillance. Proceedings of the 6th International Conference on Information Processing in Sensor Networks, Cambridge, MA, USA.","DOI":"10.1109\/IPSN.2007.4379696"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"331","DOI":"10.1007\/s11554-007-0048-7","article-title":"A low-power wireless video sensor node for distributed object detection","volume":"2","author":"Kerhet","year":"2007","journal-title":"J. Real-Time Image Process."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Kleihorst, R., Abbo, A., Schueler, B., and Danilin, A. (2007, January 5\u20137). Camera mote with a high-performance parallel processor for real-time frame-based video processing. 
Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK.","DOI":"10.1109\/AVSS.2007.4425288"},{"key":"ref_18","first-page":"151","article-title":"Panoptes: A scalable architecture for video sensor networking applications","volume":"1","author":"Feng","year":"2003","journal-title":"ACM Multimedia"},{"key":"ref_19","unstructured":"Boice, J., Lu, X., Margi, C., Stanek, G., Zhang, G., Manduchi, R., and Obraczka, K. (2020, January 30). Meerkats: A Power-Aware, Self-Managing Wireless Camera Network For Wide Area Monitoring. Available online: http:\/\/users.soe.ucsc.edu\/~manduchi\/papers\/meerkats-dsc06-final.pdf."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"847","DOI":"10.1016\/j.sysarc.2013.05.010","article-title":"Towards commoditized smart-camera design","volume":"59","author":"Murovec","year":"2013","journal-title":"J. Syst. Archit."},{"key":"ref_21","unstructured":"(2020, January 30). Qualcomm, Snapdragon. Available online: http:\/\/www.qualcomm.com\/snapdragon."},{"key":"ref_22","unstructured":"Deniz, O. (2020, January 30). EoT Project. Available online: http:\/\/eyesofthings.eu."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Deniz, O., Vallez, N., Espinosa-Aranda, J.L., Rico-Saavedra, J.M., Parra-Patino, J., Bueno, G., Moloney, D., Dehghani, A., Dunne, A., and Pagani, A. (2017). Eyes of Things. Sensors, 17.","DOI":"10.3390\/s17051173"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Wacker, P., Kreutz, K., Heller, F., and Borchers, J.O. (2016, January 7\u201312). Maps and Location: Acceptance of Modern Interaction Techniques for Audio Guides. 
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA.","DOI":"10.1145\/2858036.2858189"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1007\/s00779-010-0295-7","article-title":"Electronic mobile guides: A survey","volume":"15","author":"Kenteris","year":"2011","journal-title":"Pers. Ubiquitous Comput."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"421","DOI":"10.1023\/A:1019194325861","article-title":"Cyberguide: A mobile context-aware tour guide","volume":"3","author":"Abowd","year":"1997","journal-title":"Wireless Netw."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kim, D., Seo, D., Yoo, B., and Ko, H. (2016, January 17\u201322). Development and Evaluation of Mobile Tour Guide Using Wearable and Hand-Held Devices. Proceedings of the International Conference on Human-Computer Interaction, Toronto, ON, Canada.","DOI":"10.1007\/978-3-319-39513-5_27"},{"key":"ref_28","first-page":"74","article-title":"Soundscape of an Archaeological Site Recreated with Audio Augmented Reality","volume":"14","author":"Sikora","year":"2018","journal-title":"ACM Trans. Multimedia Comput. Commun. Appl."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Lee, G.A., D\u00fcnser, A., Kim, S., and Billinghurst, M. (2012, January 5\u20138). CityViewAR: A mobile outdoor AR application for city visualization. Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality-Arts, Media, and Humanities (ISMAR-AMH), Atlanta, GA, USA.","DOI":"10.1109\/ISMAR-AMH.2012.6483989"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1084","DOI":"10.1016\/j.cag.2012.10.001","article-title":"Exploring the use of handheld AR for outdoor navigation","volume":"36","author":"Billinghurst","year":"2012","journal-title":"Comput. Graphics"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Baldauf, M., Fr\u00f6hlich, P., and Hutter, S. (2010, January 2\u20133). 
KIBITZER: A wearable system for eye-gaze-based mobile urban exploration. Proceedings of the 1st Augmented Human International Conference, Meg\u00e8ve, France.","DOI":"10.1145\/1785455.1785464"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Szymczak, D., Rassmus-Gr\u00f6hn, K., Magnusson, C., and Hedvall, P.O. (2012, January 21\u201324). A real-world study of an audio-tactile tourist guide. Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, San Francisco, CA, USA.","DOI":"10.1145\/2371574.2371627"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Lim, J.H., Li, Y., You, Y., and Chevallet, J.P. (2007, January 2\u20135). Scene Recognition with Camera Phones for Tourist Information Access. Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, Beijing, China.","DOI":"10.1109\/ICME.2007.4284596"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Skoryukina, N., Nikolaev, D.P., and Arlazarov, V.V. (2019, January 1). 2D art recognition in uncontrolled conditions using one-shot learning. Proceedings of the International Conference on Machine Vision, Amsterdam, The Netherlands.","DOI":"10.1117\/12.2523017"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Fasel, B., and Gool, L.V. (2006). Interactive Museum Guide: Accurate Retrieval of Object Descriptions. Adaptive Multimedia Retrieval, Springer.","DOI":"10.1007\/978-3-540-71545-0_14"},{"key":"ref_36","unstructured":"Temmermans, F., Jansen, B., Deklerck, R., Schelkens, P., and Cornelis, J. (2011, January 13\u201315). The mobile Museum guide: Artwork recognition with eigenpaintings and SURF. Proceedings of the 12th International Workshop on Image Analysis for Multimedia Interactive Services, Delft, The Netherlands."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Greci, L. (2016). An Augmented Reality Guide for Religious Museum. 
Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy, 15\u201318 June 2016, Springer.","DOI":"10.1007\/978-3-319-40651-0_23"},{"key":"ref_38","unstructured":"Raptis, G.E., Katsini, C.P., and Chrysikos, T. (November, January 29). CHISTA: Cultural Heritage Information Storage and reTrieval Application. Proceedings of the 6th EuroMed Conference, Nicosia, Cyprus."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Ali, S., Koleva, B., Bedwell, B., and Benford, S. (2018, January 9\u201313). Deepening Visitor Engagement with Museum Exhibits through Hand-crafted Visual Markers. Proceedings of the 2018 Designing Interactive Systems Conference (DIS \u201918), Hong Kong, China.","DOI":"10.1145\/3196709.3196786"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"739","DOI":"10.1007\/s00779-018-1126-5","article-title":"Treasure codes: Augmenting learning from physical museum exhibits through treasure hunting","volume":"22","author":"Ng","year":"2018","journal-title":"Pers. Ubiquitous Comput."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Wein, L. (2014, January 26). Visual recognition in museum guide apps: Do visitors want it?. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada.","DOI":"10.1145\/2556288.2557270"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Detyniecki, M., Leiner, U., and N\u00fcrnberger, A. (2010). Mobile Museum Guide Based on Fast SIFT Recognition. Adaptive Multimedia Retrieval. Identifying, Summarizing, and Recommending Image and Music, Springer.","DOI":"10.1007\/978-3-642-14758-6"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Serubugo, S., Skantarova, D., Nielsen, L., and Kraus, M. (2017). Comparison of Wearable Optical See-through and Handheld Devices as Platform for an Augmented Reality Museum Guide. 
Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SCITEPRESS Digital Library.","DOI":"10.5220\/0006093901790186"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Altwaijry, H., Moghimi, M., and Belongie, S. (2014, January 24\u201326). Recognizing locations with google glass: A case study. Proceedings of the IEEE winter conference on applications of computer vision, Steamboat Springs, CO, USA.","DOI":"10.1109\/WACV.2014.6836105"},{"key":"ref_45","unstructured":"Yanai, K., Tanno, R., and Okamoto, K. (, January October). Efficient mobile implementation of a cnn-based object recognition system. Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1145\/3092832","article-title":"Deep artwork detection and retrieval for automatic context-aware audio guides","volume":"13","author":"Seidenari","year":"2017","journal-title":"ACM Trans. Multimedia Comput. Commun. Appl."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Seidenari, L., Baecchi, C., Uricchio, T., Ferracani, A., Bertini, M., and Del Bimbo, A. (2019). Wearable systems for improving tourist experience. Multimodal Behavior Analysis in the Wild, Elsevier.","DOI":"10.1016\/B978-0-12-814601-9.00020-1"},{"key":"ref_48","unstructured":"(2020, January 30). Crystalsound Audio Guide. Available online: https:\/\/crystal-sound.com\/en\/audio-guide."},{"key":"ref_49","unstructured":"(2020, January 30). Locatify. Available online: https:\/\/locatify.com\/."},{"key":"ref_50","unstructured":"(2020, January 30). Copernicus Guide. Available online: http:\/\/www.copernicus-guide.com\/en\/index-museum.html."},{"key":"ref_51","unstructured":"(2020, January 30). xamoom Museum Guide. Available online: https:\/\/xamoom.com\/museum\/."},{"key":"ref_52","unstructured":"(2020, January 30). Orpheo Touch. 
Available online: https:\/\/orpheogroup.com\/us\/products\/visioguide\/orpheo-touch."},{"key":"ref_53","unstructured":"(2020, January 30). Headphone Weight. Available online: https:\/\/www.headphonezone.in\/pages\/headphone-weight."},{"key":"ref_54","unstructured":"(2020, January 30). OASIS Standards\u2014MQTT v3.1.1. Available online: https:\/\/www.oasis-open.org\/standards."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Espinosa-Aranda, J.L., V\u00e1llez, N., Sanchez-Bueno, C., Aguado-Araujo, D., Garc\u00eda, G.B., and D\u00e9niz-Su\u00e1rez, O. (2015, January 28\u201330). Pulga, a tiny open-source MQTT broker for flexible and secure IoT deployments. Proceedings of the 2015 IEEE Conference on Communications and Network Security (CNS), Florence, Italy.","DOI":"10.1109\/CNS.2015.7346889"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Monteiro, D.M., Rodrigues, J.J.P.C., and Lloret, J. (2012, January 13). A secure NFC application for credit transfer among mobile phones. Proceedings of the 2012 International Conference on Computer, Information and Telecommunication Systems (CITS), Amman, Jordan.","DOI":"10.1109\/CITS.2012.6220369"},{"key":"ref_57","unstructured":"Lepetit, V., Pilet, J., and Fua, P. (July, January 27). Point matching as a classification problem for fast and robust object pose estimation. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Espinosa-Aranda, J., Vallez, N., Rico-Saavedra, J., Parra-Patino, J., Bueno, G., Sorci, M., Moloney, D., Pena, D., and Deniz, O. (2018). Smart Doll: Emotion Recognition Using Embedded Deep Learning. 
Symmetry, 10.","DOI":"10.3390\/sym10090387"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"2409","DOI":"10.1016\/S0167-8655(03)00070-9","article-title":"Fast features for face authentication under illumination direction changes","volume":"24","author":"Sanderson","year":"2003","journal-title":"Pattern Recognit. Lett."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1109\/TPAMI.2008.275","article-title":"Faster and better: A machine learning approach to corner detection","volume":"32","author":"Rosten","year":"2008","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Tareen, S.A.K., and Saleem, Z. (2018, January 30). A comparative analysis of sift, surf, kaze, akaze, orb, and brisk. Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan.","DOI":"10.1109\/ICOMET.2018.8346440"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"1947","DOI":"10.1021\/ci034160g","article-title":"Random Forest: A Classification and Regression Tool for Compound Classification and QSAR Modeling","volume":"43","author":"Svetnik","year":"2003","journal-title":"J. Chem. Inf. Comput. Sci."},{"key":"ref_63","unstructured":"Breiman, L., Friedman, J.H., Olshen, R.A., and Stone, C.J. (1984). Classification and Regression Trees, Wadsworth and Brooks."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Bosch, A., Zisserman, A., and Munoz, X. (2007, January 14\u201320). Image classification using random forests and ferns. 
Proceedings of the 2007 IEEE 11th international conference on computer vision, Rio de Janeiro, Brazil.","DOI":"10.1109\/ICCV.2007.4409066"},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"381","DOI":"10.1145\/358669.358692","article-title":"Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography","volume":"24","author":"Fischler","year":"1981","journal-title":"Commun. ACM"},{"key":"ref_66","unstructured":"(2020, January 30). Nvidia Developer Blogs: NVIDIA\u00ae Jetson\u2122 TX1 Supercomputer-on-Module Drives Next Wave of Autonomous Machines. Available online: https:\/\/devblogs.nvidia.com\/."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/3\/779\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T08:53:30Z","timestamp":1760172810000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/3\/779"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,1,31]]},"references-count":66,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2020,2]]}},"alternative-id":["s20030779"],"URL":"https:\/\/doi.org\/10.3390\/s20030779","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,1,31]]}}}