{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,1]],"date-time":"2026-05-01T20:30:38Z","timestamp":1777667438436,"version":"3.51.4"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,2,27]],"date-time":"2024-02-27T00:00:00Z","timestamp":1708992000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,27]],"date-time":"2024-02-27T00:00:00Z","timestamp":1708992000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>This review aims to explore the growing impact of machine learning and deep learning algorithms in the medical field, with a specific focus on the critical issues of explainability and interpretability associated with black-box algorithms. While machine learning algorithms are increasingly employed for medical analysis and diagnosis, their complexity underscores the importance of understanding how these algorithms explain and interpret data to take informed decisions. This review comprehensively analyzes challenges and solutions presented in the literature, offering an overview of the most recent techniques utilized in this field. It also provides precise definitions of interpretability and explainability, aiming to clarify the distinctions between these concepts and their implications for the decision-making process. Our analysis, based on 448 articles and addressing seven research questions, reveals an exponential growth in this field over the last decade. 
The psychological dimensions of public perception underscore the necessity for effective communication regarding the capabilities and limitations of artificial intelligence. Researchers are actively developing techniques to enhance interpretability, employing visualization methods and reducing model complexity. However, the persistent challenge lies in finding the delicate balance between achieving high performance and maintaining interpretability. Acknowledging the growing significance of artificial intelligence in aiding medical diagnosis and therapy, the creation of interpretable artificial intelligence models is considered essential. In this dynamic context, an unwavering commitment to transparency, ethical considerations, and interdisciplinary collaboration is imperative to ensure the responsible use of artificial intelligence. This collective commitment is vital for establishing enduring trust between clinicians and patients, addressing emerging challenges, and facilitating the informed adoption of these advanced technologies in medicine.<\/jats:p>","DOI":"10.1007\/s44163-024-00114-7","type":"journal-article","created":{"date-parts":[[2024,2,28]],"date-time":"2024-02-28T00:02:18Z","timestamp":1709078538000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":140,"title":["Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review"],"prefix":"10.1007","volume":"4","author":[{"given":"Maria","family":"Frasca","sequence":"first","affiliation":[]},{"given":"Davide","family":"La 
Torre","sequence":"additional","affiliation":[]},{"given":"Gabriella","family":"Pravettoni","sequence":"additional","affiliation":[]},{"given":"Ilaria","family":"Cutica","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,27]]},"reference":[{"issue":"1","key":"114_CR1","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"AJ London","year":"2019","unstructured":"London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep. 2019;49(1):15\u201321.","journal-title":"Hastings Cent Rep"},{"key":"114_CR2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2021.108391","volume":"117","author":"H Hakkoum","year":"2022","unstructured":"Hakkoum H, Abnane I, Idri A. Interpretability in the medical field: a systematic mapping and review study. Appl Soft Comput. 2022;117: 108391.","journal-title":"Appl Soft Comput"},{"key":"114_CR3","doi-asserted-by":"publisher","first-page":"154096","DOI":"10.1109\/ACCESS.2019.2949286","volume":"7","author":"O Loyola-Gonzalez","year":"2019","unstructured":"Loyola-Gonzalez O. Black-box vs. white-box: understanding their advantages and weaknesses from a practical point of view. IEEE Access. 2019;7:154096\u2013113.","journal-title":"IEEE Access"},{"key":"114_CR4","doi-asserted-by":"crossref","unstructured":"Kolasinska A, Lauriola I, Quadrio G. Do people believe in artificial intelligence? a cross-topic multicultural study. In Proceedings of the 5th EAI International Conference on Smart Objects and Technologies for Social Good, 2019:31\u20136.","DOI":"10.1145\/3342428.3342667"},{"issue":"8","key":"114_CR5","doi-asserted-by":"publisher","first-page":"555","DOI":"10.1016\/j.tips.2019.06.001","volume":"40","author":"C Gilvary","year":"2019","unstructured":"Gilvary C, Madhukar N, Elkhader J, Elemento O. The missing pieces of artificial intelligence in medicine. Trends Pharmacol Sci. 
2019;40(8):555\u201364.","journal-title":"Trends Pharmacol Sci"},{"key":"114_CR6","unstructured":"General Data Protection Regulation. General data protection regulation (GDPR). Intersoft Consulting. Accessed in October, 2018;24(1)."},{"issue":"1","key":"114_CR7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01332-6","volume":"20","author":"J Amann","year":"2020","unstructured":"Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):1\u20139.","journal-title":"BMC Med Inform Decis Mak"},{"key":"114_CR8","volume-title":"Four principles of explainable artificial intelligence","author":"PJ Phillips","year":"2020","unstructured":"Phillips PJ, Hahn CA, Fontana PC, Broniatowski DA, Przybocki MA. Four principles of explainable artificial intelligence, vol. 18. Gaithersburg: National Institute of Standards and Technology; 2020."},{"key":"114_CR9","doi-asserted-by":"crossref","unstructured":"Nassih Rym, Berrado Abdelaziz. State of the art of fairness, interpretability and explainability in machine learning: Case of prim. In Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications, 2020:1\u20135.","DOI":"10.1145\/3419604.3419776"},{"issue":"1","key":"114_CR10","doi-asserted-by":"publisher","first-page":"18","DOI":"10.3390\/e23010018","volume":"23","author":"P Linardatos","year":"2020","unstructured":"Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: a review of machine learning interpretability methods. Entropy. 2020;23(1):18.","journal-title":"Entropy"},{"key":"114_CR11","doi-asserted-by":"publisher","first-page":"502","DOI":"10.1016\/j.cag.2021.09.002","volume":"102","author":"G Alicioglu","year":"2022","unstructured":"Alicioglu G, Sun B. A survey of visual analytics for explainable artificial intelligence methods. Comput Graph. 
2022;102:502\u201320.","journal-title":"Comput Graph"},{"issue":"5","key":"114_CR12","doi-asserted-by":"publisher","first-page":"1589","DOI":"10.1109\/JBHI.2017.2767063","volume":"22","author":"B Shickel","year":"2017","unstructured":"Shickel B, Tighe PJ, Bihorac A, Rashidi P. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform. 2017;22(5):1589\u2013604.","journal-title":"IEEE J Biomed Health Inform"},{"key":"114_CR13","unstructured":"Shashanka M, Raj B, Smaragdis P. Sparse overcomplete latent variable decomposition of counts data. Adv Neural Inform Process Syst, 2007;20."},{"key":"114_CR14","doi-asserted-by":"crossref","unstructured":"Ribeiro MT, Singh S, Guestrin C. \u201dwhy should i trust you?\u201d explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 2016:1135\u201344.","DOI":"10.1145\/2939672.2939778"},{"key":"114_CR15","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103473","volume":"296","author":"M Langer","year":"2021","unstructured":"Langer M, Oster D, Speith T, Hermanns H, K\u00e4stner L, Schmidt E, Sesing A, Baum K. What do we want from explainable artificial intelligence (XAI)?\u2014a stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artif Intell. 2021;296: 103473.","journal-title":"Artif Intell"},{"issue":"44","key":"114_CR16","doi-asserted-by":"publisher","first-page":"22071","DOI":"10.1073\/pnas.1900654116","volume":"116","author":"WJ Murdoch","year":"2019","unstructured":"Murdoch WJ, Singh C, Kumbier K, Abbasi-Asl R, Yu B. Definitions, methods, and applications in interpretable machine learning. Proc Natl Acad Sci. 
2019;116(44):22071\u201380.","journal-title":"Proc Natl Acad Sci"},{"key":"114_CR17","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2022.102423","volume":"133","author":"C Combi","year":"2022","unstructured":"Combi C, Amico B, Bellazzi R, Holzinger A, Moore JH, Zitnik M, Holmes JH. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med. 2022;133: 102423.","journal-title":"Artif Intell Med"},{"issue":"4","key":"114_CR18","doi-asserted-by":"publisher","first-page":"1607","DOI":"10.1007\/s13347-021-00477-0","volume":"34","author":"WJ von Eschenbach","year":"2021","unstructured":"von Eschenbach WJ. Transparency and the black box problem: why we do not trust ai. Philos Technol. 2021;34(4):1607\u201322.","journal-title":"Philos Technol"},{"key":"114_CR19","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta AB, D\u00edaz-Rodr\u00edguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, Garc\u00eda S, Gil-L\u00f3pez S, Molina D, Benjamins R, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible ai. Inf Fus. 2020;58:82\u2013115.","journal-title":"Inf Fus"},{"key":"114_CR20","unstructured":"Biran O, Cotton C. Explanation and justification in machine learning: a survey. In IJCAI-17 workshop on explainable AI (XAI), 2017;8:8\u201313."},{"issue":"5","key":"114_CR21","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2018","unstructured":"Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D. A survey of methods for explaining black box models. ACM Comput Surv (CSUR). 
2018;51(5):1\u201342.","journal-title":"ACM Comput Surv (CSUR)"},{"key":"114_CR22","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2019","unstructured":"Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1\u201338.","journal-title":"Artif Intell"},{"issue":"11","key":"114_CR23","doi-asserted-by":"publisher","first-page":"4793","DOI":"10.1109\/TNNLS.2020.3027314","volume":"32","author":"E Tjoa","year":"2020","unstructured":"Tjoa E, Guan C. A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans Neural Netw Learn Syst. 2020;32(11):4793\u2013813.","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"issue":"5","key":"114_CR24","doi-asserted-by":"publisher","DOI":"10.1002\/widm.1379","volume":"10","author":"G Stiglic","year":"2020","unstructured":"Stiglic G, Kocbek P, Fijacko N, Zitnik M, Verbert K, Cilar L. Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip Rev Data Min Knowl Discov. 2020;10(5): e1379.","journal-title":"Wiley Interdiscip Rev Data Min Knowl Discov"},{"issue":"6","key":"114_CR25","first-page":"1","volume":"54","author":"N Mehrabi","year":"2021","unstructured":"Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv(CSUR). 2021;54(6):1\u201335.","journal-title":"ACM Comput Surv(CSUR)"},{"key":"114_CR26","unstructured":"Chakrobartty S, El-Gayar O. Explainable artificial intelligence in the medical domain: a systematic review. 2021."},{"key":"114_CR27","doi-asserted-by":"crossref","unstructured":"Hatherley J, Sparrow R, Howard M. The virtues of interpretable medical artificial intelligence. 
Camb Q Healthc Ethics, 2022:1\u201310.","DOI":"10.1017\/S0963180122000305"},{"issue":"2","key":"114_CR28","first-page":"120","volume":"1","author":"L Farah","year":"2023","unstructured":"Farah L, Murris JM, Borget I, Guilloux A, Martelli NM, Katsahian SIM. Assessment of performance, interpretability, and explainability in artificial intelligence-based health technologies: what healthcare stakeholders need to know. Mayo Clin Proc. 2023;1(2):120\u201338.","journal-title":"Mayo Clin Proc"},{"key":"114_CR29","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2023.101805","volume":"99","author":"S Ali","year":"2023","unstructured":"Ali S, Abuhmed T, El-Sappagh S, Muhammad K, Alonso-Moral JM, Confalonieri R, Guidotti R, Del Ser J, D\u00edaz-Rodr\u00edguez N, Herrera F. Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence. Inf Fusion. 2023;99: 101805.","journal-title":"Inf Fusion"},{"key":"114_CR30","doi-asserted-by":"publisher","DOI":"10.1016\/j.imu.2023.101286","volume":"40","author":"SS Band","year":"2023","unstructured":"Band SS, Yarahmadi A, Hsu C-C, Biyari M, Sookhak M, Ameri R, Dehzangi I, Chronopoulos AT, Liang H-W. Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods. Inform Med Unlocked. 2023;40: 101286.","journal-title":"Inform Med Unlocked"},{"issue":"3","key":"114_CR31","first-page":"245","volume":"6","author":"BS Ballew","year":"2009","unstructured":"Ballew BS. Elsevier\u2019s scopus\u00ae database. J Electron Resour Med Libr. 2009;6(3):245\u201352.","journal-title":"J Electron Resour Med Libr"},{"key":"114_CR32","volume-title":"Encyclopedia of library and information science","author":"M Drake","year":"2003","unstructured":"Drake M. Encyclopedia of library and information science, vol. 1. 
Boca Raton: CRC Press; 2003."},{"issue":"2","key":"114_CR33","doi-asserted-by":"publisher","first-page":"523","DOI":"10.1007\/s11192-009-0146-3","volume":"84","author":"N Van Eck","year":"2010","unstructured":"Van Eck N, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. 2010;84(2):523\u201338.","journal-title":"Scientometrics"},{"key":"114_CR34","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2020.105608","volume":"196","author":"L Brunese","year":"2020","unstructured":"Brunese L, Mercaldo F, Reginelli A, Santone A. Explainable deep learning for pulmonary disease and coronavirus covid-19 detection from x-rays. Comput Methods Programs Biomed. 2020;196: 105608.","journal-title":"Comput Methods Programs Biomed"},{"issue":"1","key":"114_CR35","doi-asserted-by":"publisher","first-page":"10","DOI":"10.1038\/s41746-019-0216-8","volume":"3","author":"A Ghorbani","year":"2020","unstructured":"Ghorbani A, Ouyang D, Abid A, He B, Chen JH, Harrington RA, Liang DH, Ashley EA, Zou JY. Deep learning interpretation of echocardiograms. NPJ Digit Med. 2020;3(1):10.","journal-title":"NPJ Digit Med"},{"issue":"4","key":"114_CR36","doi-asserted-by":"publisher","first-page":"e179","DOI":"10.1016\/S2589-7500(20)30018-2","volume":"2","author":"H-C Thorsen-Meyer","year":"2020","unstructured":"Thorsen-Meyer H-C, Nielsen AB, Nielsen AP, Kaas-Hansen BS, Toft P, Schierbeck J, Str\u00f8m T, Chmura PJ, Heimann M, Dybdahl L, et al. Dynamic and explainable machine learning prediction of mortality in patients in the intensive care unit: a retrospective study of high-frequency data in electronic patient records. Lancet Digit Health. 2020;2(4):e179\u201391.","journal-title":"Lancet Digit Health"},{"issue":"1","key":"114_CR37","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1186\/1471-244X-14-76","volume":"14","author":"T Tran","year":"2014","unstructured":"Tran T, Luo W, Phung D, Harvey R, Berk M, Kennedy RL, Venkatesh S. 
Risk stratification using data from electronic medical records better predicts suicide risks than clinician assessments. BMC Psychiatry. 2014;14(1):76.","journal-title":"BMC Psychiatry"},{"key":"114_CR38","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10916-020-01597-4","volume":"44","author":"D Brinati","year":"2020","unstructured":"Brinati D, Campagner A, Ferrari D, Locatelli M, Banfi G, Cabitza F. Detection of covid-19 infection from routine blood exams with machine learning: a feasibility study. J Med Syst. 2020;44:1\u201312.","journal-title":"J Med Syst"},{"key":"114_CR39","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1016\/j.artmed.2019.01.001","volume":"94","author":"J-B Lamy","year":"2019","unstructured":"Lamy J-B, Sekar B, Guezennec G, Bouaud J, S\u00e9roussi B. Explainable artificial intelligence for breast cancer: a visual case-based reasoning approach. Artif Intell Med. 2019;94:42\u201353.","journal-title":"Artif Intell Med"},{"issue":"1","key":"114_CR40","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-019-0874-0","volume":"19","author":"R Elshawi","year":"2019","unstructured":"Elshawi R, Al-Mallah MH, Sakr S. On the interpretability of machine learning-based model for predicting hypertension. BMC Med Inform Decis Mak. 2019;19(1):1\u201332.","journal-title":"BMC Med Inform Decis Mak"},{"key":"114_CR41","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1007\/s00769-006-0191-z","volume":"12","author":"A Menditto","year":"2007","unstructured":"Menditto A, Patriarca M, Magnusson B. Understanding the meaning of accuracy, trueness and precision. Accredit Qual Assur. 2007;12:45\u20137.","journal-title":"Accredit Qual Assur"},{"key":"114_CR42","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1007\/s00769-014-1093-0","volume":"20","author":"E Prenesti","year":"2015","unstructured":"Prenesti E, Gosmaro F. 
Trueness, precision and accuracy: a critical overview of the concepts as well as proposals for revision. Accredit Qual Assur. 2015;20:33\u201340.","journal-title":"Accredit Qual Assur"},{"issue":"1","key":"114_CR43","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1002\/(SICI)1097-4571(199401)45:1<12::AID-ASI2>3.0.CO;2-L","volume":"45","author":"M Buckland","year":"1994","unstructured":"Buckland M, Gey F. The relationship between recall and precision. J Am Soc Inform Sci. 1994;45(1):12\u20139.","journal-title":"J Am Soc Inform Sci"},{"issue":"3","key":"114_CR44","doi-asserted-by":"publisher","first-page":"299","DOI":"10.1109\/TKDE.2005.50","volume":"17","author":"J Huang","year":"2005","unstructured":"Huang J, Ling CX. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans Knowl Data Eng. 2005;17(3):299\u2013310.","journal-title":"IEEE Trans Knowl Data Eng"},{"issue":"4","key":"114_CR45","doi-asserted-by":"publisher","first-page":"731","DOI":"10.1093\/jamia\/ocw011","volume":"23","author":"Y Halpern","year":"2016","unstructured":"Halpern Y, Horng S, Choi Y, Sontag D. Electronic medical record phenotyping using the anchor and learn framework. J Am Med Inform Assoc. 2016;23(4):731\u201340.","journal-title":"J Am Med Inform Assoc"},{"key":"114_CR46","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-019-1014-6","volume":"20","author":"AM Carrington","year":"2020","unstructured":"Carrington AM, Fieguth PW, Qazi H, Holzinger A, Chen HH, Mayr F, Manuel DG. A new concordant partial AUC and partial c statistic for imbalanced data in the evaluation of machine learning algorithms. BMC Med Inform Decis Mak. 2020;20:1\u201312.","journal-title":"BMC Med Inform Decis Mak"},{"key":"114_CR47","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2023.101882","volume":"99","author":"E Mariotti","year":"2023","unstructured":"Mariotti E, Moral JMA, Gatt A. 
Exploring the balance between interpretability and performance with carefully designed constrainable neural additive models. Inf Fus. 2023;99: 101882.","journal-title":"Inf Fus"},{"key":"114_CR48","doi-asserted-by":"crossref","unstructured":"Ashwath VA, Sikha OK, Benitez R. TS-CNN: a three-tier self-interpretable CNN for multi-region medical image classification. IEEE Access; 2023.","DOI":"10.1109\/ACCESS.2023.3299850"},{"issue":"8","key":"114_CR49","doi-asserted-by":"publisher","first-page":"9115","DOI":"10.1007\/s10489-022-03886-6","volume":"53","author":"B La Rosa","year":"2023","unstructured":"La Rosa B, Capobianco R, Nardi D. A self-interpretable module for deep image classification on small data. Appl Intell. 2023;53(8):9115\u201347.","journal-title":"Appl Intell"},{"issue":"9","key":"114_CR50","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3561048","volume":"55","author":"R Dwivedi","year":"2023","unstructured":"Dwivedi R, Dave D, Naik H, Singhal S, Omer R, Patel P, Qian B, Wen Z, Shah T, Morgan G, et al. Explainable AI (XAI): core ideas, techniques, and solutions. ACM Comput Surv. 2023;55(9):1\u201333.","journal-title":"ACM Comput Surv"},{"key":"114_CR51","doi-asserted-by":"publisher","first-page":"66","DOI":"10.1002\/9781119790686.ch7","volume-title":"AI in clinical medicine: a practical guide for healthcare professionals","author":"SM Anwar","year":"2023","unstructured":"Anwar SM. Expert systems for interpretable decisions in the clinical domain. In: Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in clinical medicine: a practical guide for healthcare professionals. Hoboken: Wiley Online Library; 2023. p. 66\u201372."},{"issue":"1","key":"114_CR52","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41598-019-56847-4","volume":"10","author":"B-J Cho","year":"2020","unstructured":"Cho B-J, Choi YJ, Lee M-J, Kim JH, Son G-H, Park S-H, Kim H-B, Joo Y-J, Cho H-Y, Kyung MS, et al. 
Classification of cervical neoplasms on colposcopic photography using deep learning. Sci Rep. 2020;10(1):1\u201310.","journal-title":"Sci Rep"},{"key":"114_CR53","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2020.103792","volume":"121","author":"T Ozturk","year":"2020","unstructured":"Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Acharya UR. Automated detection of covid-19 cases using deep neural networks with x-ray images. Comput Biol Med. 2020;121: 103792.","journal-title":"Comput Biol Med"},{"key":"114_CR54","doi-asserted-by":"crossref","unstructured":"Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. Chestx-ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017:2097\u2013106.","DOI":"10.1109\/CVPR.2017.369"},{"key":"114_CR55","doi-asserted-by":"publisher","first-page":"96","DOI":"10.1016\/j.jbi.2015.01.012","volume":"54","author":"T Tran","year":"2015","unstructured":"Tran T, Nguyen TD, Phung D, Venkatesh S. Learning vector representation of medical objects via EMR-driven nonnegative restricted Boltzmann machines (eNRBM). J Biomed Inform. 2015;54:96\u2013105.","journal-title":"J Biomed Inform"},{"key":"114_CR56","doi-asserted-by":"publisher","first-page":"555","DOI":"10.1007\/s10115-014-0740-4","volume":"43","author":"T Tran","year":"2015","unstructured":"Tran T, Phung D, Luo W, Venkatesh S. Stabilized sparse ordinal regression for medical risk stratification. Knowl Inform Syst. 
2015;43:555\u201382.","journal-title":"Knowl Inform Syst"}],"container-title":["Discover Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-024-00114-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-024-00114-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-024-00114-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,28]],"date-time":"2024-02-28T12:10:27Z","timestamp":1709122227000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-024-00114-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,27]]},"references-count":56,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["114"],"URL":"https:\/\/doi.org\/10.1007\/s44163-024-00114-7","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,27]]},"assertion":[{"value":"31 October 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 February 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"15"}}