{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T16:45:42Z","timestamp":1776185142781,"version":"3.50.1"},"reference-count":63,"publisher":"MDPI AG","issue":"23","license":[{"start":{"date-parts":[[2022,12,4]],"date-time":"2022-12-04T00:00:00Z","timestamp":1670112000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Disorders of swallowing often lead to pneumonia when material enters the airways (aspiration). Flexible Endoscopic Evaluation of Swallowing (FEES) plays a key role in the diagnostics of aspiration but is prone to human errors. An AI-based tool could facilitate this process. Recent non-endoscopic\/non-radiologic attempts to detect aspiration using machine-learning approaches have led to unsatisfying accuracy and show black-box characteristics. Hence, for clinical users it is difficult to trust in these model decisions. Our aim is to introduce an explainable artificial intelligence (XAI) approach to detect aspiration in FEES. Our approach is to teach the AI about the relevant anatomical structures, such as the vocal cords and the glottis, based on 92 annotated FEES videos. Simultaneously, it is trained to detect boluses that pass the glottis and become aspirated. During testing, the AI successfully recognized the glottis and the vocal cords but could not yet achieve satisfying aspiration detection quality. While detection performance must be optimized, our architecture results in a final model that explains its assessment by locating meaningful frames with relevant aspiration events and by highlighting suspected boluses. In contrast to comparable AI tools, our framework is verifiable and interpretable and, therefore, accountable for clinical users.<\/jats:p>","DOI":"10.3390\/s22239468","type":"journal-article","created":{"date-parts":[[2022,12,5]],"date-time":"2022-12-05T08:10:57Z","timestamp":1670227857000},"page":"9468","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["AI-Based Detection of Aspiration for Video-Endoscopy with Visual Aids in Meaningful Frames to Interpret the Model Outcome"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2297-3482","authenticated-orcid":false,"given":"J\u00fcrgen","family":"Konradi","sequence":"first","affiliation":[{"name":"Institute of Physical Therapy, Prevention and Rehabilitation, University Medical Center of the Johannes Gutenberg-University Mainz, 55131 Mainz, Germany"}]},{"given":"Milla","family":"Zajber","sequence":"additional","affiliation":[{"name":"Department for Health Care & Nursing, Catholic University of Applied Sciences, 55122 Mainz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2157-4287","authenticated-orcid":false,"given":"Ulrich","family":"Betz","sequence":"additional","affiliation":[{"name":"Institute of Physical Therapy, Prevention and Rehabilitation, University Medical Center of the Johannes Gutenberg-University Mainz, 55131 Mainz, Germany"}]},{"given":"Philipp","family":"Drees","sequence":"additional","affiliation":[{"name":"Department of Orthopedics and Trauma Surgery, University Medical Center of the Johannes Gutenberg-University Mainz, 55131 Mainz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4327-6352","authenticated-orcid":false,"given":"Annika","family":"Gerken","sequence":"additional","affiliation":[{"name":"Fraunhofer Institute for Digital Medicine MEVIS, 28359 Bremen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7557-5007","authenticated-orcid":false,"given":"Hans","family":"Meine","sequence":"additional","affiliation":[{"name":"Fraunhofer Institute for Digital Medicine MEVIS, 28359 Bremen, Germany"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1109\/MC.2021.3074263","article-title":"The Ten Commandments of Ethical Medical AI","volume":"54","author":"Muller","year":"2021","journal-title":"Computer"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"IEEE Access"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"4793","DOI":"10.1109\/TNNLS.2020.3027314","article-title":"A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI","volume":"32","author":"Tjoa","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"11974","DOI":"10.1109\/ACCESS.2021.3051315","article-title":"A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence","volume":"9","author":"Stepin","year":"2021","journal-title":"IEEE Access"},{"key":"ref_5","first-page":"29","article-title":"A Survey of Data-Driven and Knowledge-Aware eXplainable AI","volume":"34","author":"Li","year":"2020","journal-title":"IEEE Trans. Knowl. Data Eng."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Aceves-Fernandez, M.A. (2020). Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models. Advances and Applications in Deep Learning, IntechOpen.","DOI":"10.5772\/intechopen.87786"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"153316","DOI":"10.1109\/ACCESS.2021.3127881","article-title":"A Systematic Review of Human\u2013Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques","volume":"9","author":"Nazar","year":"2021","journal-title":"IEEE Access"},{"key":"ref_8","unstructured":"Ali, S., and Tilendra Shishir, S. (2020). Deep Learning Approach to Key Frame Detection in Human Action Videos. Recent Trends in Computational Intelligence, IntechOpen. Chapter 7."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Yan, X., Gilani, S.Z., Feng, M., Zhang, L., Qin, H., and Mian, A. (2020). Self-Supervised Learning to Detect Key Frames in Videos. Sensors, 20.","DOI":"10.3390\/s20236941"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"765","DOI":"10.1177\/0194599814549156","article-title":"The prevalence of dysphagia among adults in the United States","volume":"151","author":"Bhattacharyya","year":"2014","journal-title":"Otolaryngol.-Head Neck Surg. Off. J. Am. Acad. Otolaryngol. Head Neck Surg."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"594","DOI":"10.1186\/s12913-018-3376-3","article-title":"Impact of oropharyngeal dysphagia on healthcare cost and length of stay in hospital: A systematic review","volume":"18","author":"Attrill","year":"2018","journal-title":"BMC Health Serv. Res."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1007\/s00455-001-0087-3","article-title":"Prevention of pneumonia in elderly stroke patients by systematic diagnosis and treatment of dysphagia: An evidence-based comprehensive analysis of the literature","volume":"16","author":"Doggett","year":"2001","journal-title":"Dysphagia"},{"key":"ref_13","first-page":"306","article-title":"Role of videofluoroscopy in evaluation of neurologic dysphagia","volume":"27","author":"Rugiu","year":"2007","journal-title":"Acta Otorhinolaryngol. Ital."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Aviv, J.E., Sataloff, R.T., Cohen, M., Spitzer, J., Ma, G., Bhayani, R., and Close, L.G. (2001). Cost-effectiveness of two types of dysphagia care in head and neck cancer: A preliminary report. Ear Nose Throat J., 80.","DOI":"10.1177\/014556130108000818"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1006","DOI":"10.1007\/s00115-014-4114-7","article-title":"FEES f\u00fcr neurogene Dysphagien","volume":"85","author":"Dziewas","year":"2014","journal-title":"Der. Nervenarzt."},{"key":"ref_16","unstructured":"L\u00fcttje, D., Meisel, M., Meyer, A.-K., and Wittrich, A. (2022, October 18). \u00c4nderungsvorschlag f\u00fcr den OPS 2011. Bundesinstitut f\u00fcr Arzneimittel und Medizinprodukte. Available online: https:\/\/www.bfarm.de\/DE\/Kodiersysteme\/Services\/Downloads\/OPS\/_functions\/ops-vorschlaege-2011.html?nn=841246&cms_gtp=1005398_list%253D5."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"216","DOI":"10.1055\/s-0043-120430","article-title":"Fiberendoskopische Evaluation des Schluckens\u2013FEES","volume":"41","author":"Bohlender","year":"2017","journal-title":"Sprache Stimme Geh\u00f6r"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"418","DOI":"10.1007\/s00455-015-9616-3","article-title":"Penetration\u2013Aspiration: Is Their Detection in FEES\u00ae Reliable Without Video Recording?","volume":"30","author":"Hey","year":"2015","journal-title":"Dysphagia"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1007\/BF00417897","article-title":"A penetration-aspiration scale","volume":"11","author":"Rosenbek","year":"1996","journal-title":"Dysphagia"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"308","DOI":"10.1007\/s00455-002-0073-4","article-title":"Interjudge and Intrajudge Reliabilities in Fiberoptic Endoscopic Evaluation of Swallowing (Fees\u00ae) Using the Penetration\u2013Aspiration Scale: A Replication Study","volume":"17","author":"Colodny","year":"2002","journal-title":"Dysphagia"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"417","DOI":"10.1007\/s00455-021-10293-5","article-title":"Visual Analysis of Swallowing Efficiency and Safety (VASES): A Standardized Approach to Rating Pharyngeal Residue, Penetration, and Aspiration During FEES","volume":"37","author":"Curtis","year":"2022","journal-title":"Dysphagia"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"480","DOI":"10.1177\/0003489414566267","article-title":"Reliability of the Penetration Aspiration Scale With Flexible Endoscopic Evaluation of Swallowing","volume":"124","author":"Butler","year":"2015","journal-title":"Ann. Otol. Rhinol. Laryngol."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"443","DOI":"10.1007\/s00455-017-9784-4","article-title":"Narrow Band Imaging Enhances the Detection Rate of Penetration and Aspiration in FEES","volume":"32","author":"Nienstedt","year":"2017","journal-title":"Dysphagia"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"591","DOI":"10.1007\/s00455-021-10309-0","article-title":"Detecting Aspiration During FEES with Narrow Band Imaging in a Clinical Setting","volume":"37","author":"Stanley","year":"2022","journal-title":"Dysphagia"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"e42","DOI":"10.3346\/jkms.2022.37.e42","article-title":"Deep Learning Analysis to Automatically Detect the Presence of Penetration or Aspiration in Videofluoroscopic Swallowing Study","volume":"37","author":"Kim","year":"2022","journal-title":"J. Korean Med. Sci."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1007\/s00455-020-10124-z","article-title":"Tracking Hyoid Bone Displacement During Swallowing Without Videofluoroscopy Using Machine Learning of Vibratory Signals","volume":"36","author":"Donohue","year":"2021","journal-title":"Dysphagia"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kuramoto, N., Ichimura, K., Jayatilake, D., Shimokakimoto, T., Hidaka, K., and Suzuki, K. (2020, January 20\u201324). Deep Learning-Based Swallowing Monitor for Realtime Detection of Swallow Duration. Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada.","DOI":"10.1109\/EMBC44109.2020.9176721"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1186\/1743-0003-3-14","article-title":"A radial basis classifier for the automatic detection of aspiration in children with dysphagia","volume":"3","author":"Lee","year":"2006","journal-title":"J. Neuroeng. Rehabil."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"181982","DOI":"10.1098\/rsos.181982","article-title":"Neck sensor-supported hyoid bone movement tracking during swallowing","volume":"6","author":"Mao","year":"2019","journal-title":"R. Soc. Open Sci."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Feng, S., Shea, Q.-T.-K., Ng, K.-Y., Tang, C.-N., Kwong, E., and Zheng, Y. (2021). Automatic Hyoid Bone Tracking in Real-Time Ultrasound Swallowing Videos Using Deep Learning Based and Correlation Filter Based Trackers. Sensors, 21.","DOI":"10.3390\/s21113712"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1016\/j.cmpb.2016.07.010","article-title":"Computer-assisted detection of swallowing difficulty","volume":"134","author":"Lee","year":"2016","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"12310","DOI":"10.1038\/s41598-018-30182-6","article-title":"Automatic hyoid bone detection in fluoroscopic images using deep learning","volume":"8","author":"Zhang","year":"2018","journal-title":"Sci. Rep."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1482","DOI":"10.1007\/s00455-022-10410-y","article-title":"Using an Automated Speech Recognition Approach to Differentiate Between Normal and Aspirating Swallowing Sounds Recorded from Digital Cervical Auscultation in Children","volume":"37","author":"Frakking","year":"2022","journal-title":"Dysphagia"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"8704","DOI":"10.1038\/s41598-020-65492-1","article-title":"Non-invasive identification of swallows via deep learning in high resolution cervical auscultation recordings","volume":"10","author":"Khalifa","year":"2020","journal-title":"Sci. Rep."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"698","DOI":"10.1007\/s00455-018-09974-5","article-title":"Development of a Non-invasive Device for Swallow Screening in Patients at Risk of Oropharyngeal Dysphagia: Results from a Prospective Exploratory Study","volume":"34","author":"Steele","year":"2019","journal-title":"Dysphagia"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"176","DOI":"10.1007\/s00455-014-9593-y","article-title":"Neural Network Pattern Recognition of Lingual\u2013Palatal Pressure for Automated Detection of Swallow","volume":"30","author":"Hadley","year":"2015","journal-title":"Dysphagia"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/JTEHM.2015.2500562","article-title":"Smartphone-Based Real-time Assessment of Swallowing Ability From the Swallowing Sound","volume":"3","author":"Jayatilake","year":"2015","journal-title":"IEEE J. Transl. Eng. Health Med."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"e13236","DOI":"10.1111\/nmo.13236","article-title":"Identification of swallowing disorders in early and mid-stage Parkinson\u2019s disease using pattern recognition of pharyngeal high-resolution manometry data","volume":"30","author":"Jones","year":"2018","journal-title":"Neurogastroenterol. Motil."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"336","DOI":"10.1111\/nmo.12730","article-title":"Objective prediction of pharyngeal swallow dysfunction in dysphagia through artificial neural network modeling","volume":"28","author":"Kritas","year":"2016","journal-title":"Neurogastroenterol. Motil. Off. J. Eur. Gastrointest. Motil. Soc."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"1049","DOI":"10.1016\/j.medengphy.2009.07.001","article-title":"Swallow segmentation with artificial neural networks and multi-sensor fusion","volume":"31","author":"Lee","year":"2009","journal-title":"Med. Eng. Phys."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"14735","DOI":"10.1038\/s41598-020-71713-4","article-title":"Machine learning analysis to automatically measure response time of pharyngeal swallowing reflex in videofluoroscopic swallowing study","volume":"10","author":"Lee","year":"2020","journal-title":"Sci. Rep."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Sakai, K., Gilmour, S., Hoshino, E., Nakayama, E., Momosaki, R., Sakata, N., and Yoneoka, D. (2021). A Machine Learning-Based Screening Test for Sarcopenic Dysphagia Using Image Recognition. Nutrients, 13.","DOI":"10.3390\/nu13114009"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"106248","DOI":"10.1016\/j.cmpb.2021.106248","article-title":"Machine learning based analysis of speech dimensions in functional oropharyngeal dysphagia","volume":"208","year":"2021","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_44","unstructured":"(2022, October 18). Regulation (EU) 2016\/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95\/46\/EC (General Data Protection Regulation) (Text with EEA relevance). Available online: http:\/\/data.europa.eu\/eli\/reg\/2016\/679\/oj."},{"key":"ref_45","unstructured":"Holzinger, A., Biemann, C., Pattichis, C.S., and Kell, D.B. (2017). What Do We Need to Build Explainable AI Systems for the Medical Domain?. arXiv, Available online: https:\/\/arxiv.org\/pdf\/1712.09923.pdf."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Fehling, M.K., Grosch, F., Schuster, M.E., Schick, B., and Lohscheller, J. (2020). Fully automatic segmentation of glottis and vocal folds in endoscopic laryngeal high-speed videos using a deep Convolutional LSTM Network. PLoS ONE, 15.","DOI":"10.1371\/journal.pone.0227791"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"483","DOI":"10.1007\/s11548-018-01910-0","article-title":"A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation","volume":"14","author":"Laves","year":"2019","journal-title":"Int. J. Comput. Assist. Radiol. Surg."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1007\/s10916-019-1481-4","article-title":"A Convolutional Neural Network for Real Time Classification, Identification, and Labelling of Vocal Cord and Tracheal Using Laryngoscopy and Bronchoscopy Video","volume":"44","author":"Matava","year":"2020","journal-title":"J. Med. Syst."},{"key":"ref_49","unstructured":"Meine, H., and Moltz, J.H. (2022, September 27). SATORI. Available online: https:\/\/www.mevis.fraunhofer.de\/en\/research-and-technologies\/ai-collaboration-toolkit.html."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"611","DOI":"10.1007\/s13244-018-0639-9","article-title":"Convolutional neural networks: An overview and application in radiology","volume":"9","author":"Yamashita","year":"2018","journal-title":"Insights Into Imaging"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1038\/s41592-020-01008-z","article-title":"nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation","volume":"18","author":"Isensee","year":"2021","journal-title":"Nat. Methods"},{"key":"ref_53","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_54","first-page":"448","article-title":"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift","volume":"37","author":"Ioffe","year":"2015","journal-title":"Proc. 32nd Int. Conf. Mach. Learn."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015, January 7\u201313). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.123"},{"key":"ref_56","unstructured":"Kingma, D.P., and Ba, J. (2022, October 18). Adam: A Method for Stochastic Optimization. Available online: https:\/\/arxiv.org\/abs\/1412.6980."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Milletari, F., Navab, N., and Ahmadi, S.A. (2016, January 25\u201328). V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.79"},{"key":"ref_58","first-page":"37","article-title":"Evaluation: From Precision, Recall And F-Measure To Roc, Informedness, Markedness & Correlation","volume":"2","author":"Powers","year":"2011","journal-title":"J. Mach. Learn. Technol."},{"key":"ref_59","unstructured":"Sasaki, Y. (2022, October 18). The Truth of the F-Measure. Available online: https:\/\/www.toyota-ti.ac.jp\/Lab\/Denshi\/COIN\/people\/yutaka.sasaki\/F-measure-YS-26Oct07.pdf."},{"key":"ref_60","first-page":"2825","article-title":"Scikit-learn: Machine Learning in Python","volume":"12","author":"Pedregosa","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1038\/s41592-019-0686-2","article-title":"SciPy 1.0: Fundamental algorithms for scientific computing in Python","volume":"17","author":"Virtanen","year":"2020","journal-title":"Nat. Methods"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"2529","DOI":"10.1109\/TBME.2018.2807487","article-title":"Using Machine Learning and a Combination of Respiratory Flow, Laryngeal Motion, and Swallowing Sounds to Classify Safe and Unsafe Swallowing","volume":"65","author":"Inoue","year":"2018","journal-title":"IEEE Trans. Biomed. Eng."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"167","DOI":"10.1159\/000517144","article-title":"Advanced Machine Learning Tools to Monitor Biomarkers of Dysphagia: A Wearable Sensor Proof-of-Concept Study","volume":"5","author":"Botonis","year":"2021","journal-title":"Digit. Biomark."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/23\/9468\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:33:41Z","timestamp":1760146421000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/23\/9468"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,4]]},"references-count":63,"journal-issue":{"issue":"23","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["s22239468"],"URL":"https:\/\/doi.org\/10.3390\/s22239468","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,4]]}}}