{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,14]],"date-time":"2026-05-14T20:39:26Z","timestamp":1778791166156,"version":"3.51.4"},"reference-count":44,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,11,30]],"date-time":"2020-11-30T00:00:00Z","timestamp":1606694400000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2020,11,30]],"date-time":"2020-11-30T00:00:00Z","timestamp":1606694400000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Horizon 2020 Research and Innovation Programme","award":["777107"],"award-info":[{"award-number":["777107"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Med Inform Decis Mak"],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n<jats:title>Background<\/jats:title>\n<jats:p>Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. 
This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Methods<\/jats:title>\n<jats:p>Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the \u201cPrinciples of Biomedical Ethics\u201d by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Results<\/jats:title>\n<jats:p>Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. 
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Conclusions<\/jats:title>\n<jats:p>To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.<\/jats:p>\n<\/jats:sec>","DOI":"10.1186\/s12911-020-01332-6","type":"journal-article","created":{"date-parts":[[2020,11,30]],"date-time":"2020-11-30T16:03:07Z","timestamp":1606752187000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1551,"title":["Explainability for artificial intelligence in healthcare: a multidisciplinary perspective"],"prefix":"10.1186","volume":"20","author":[{"name":"the Precise4Q consortium","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2155-5286","authenticated-orcid":false,"given":"Julia","family":"Amann","sequence":"first","affiliation":[]},{"given":"Alessandro","family":"Blasimme","sequence":"additional","affiliation":[]},{"given":"Effy","family":"Vayena","sequence":"additional","affiliation":[]},{"given":"Dietmar","family":"Frey","sequence":"additional","affiliation":[]},{"given":"Vince I.","family":"Madai","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,11,30]]},"reference":[{"key":"1332_CR1","doi-asserted-by":"publisher","first-page":"2000052","DOI":"10.1002\/aisy.202000052","volume":"2","author":"D Higgins","year":"2020","unstructured":"Higgins D, Madai VI. From bit to bedside: a practical framework for artificial intelligence product development in healthcare. Adv Intell Syst. 
2020;2:2000052.","journal-title":"Adv Intell Syst."},{"key":"1332_CR2","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1:206\u201315.","journal-title":"Nat Mach Intell"},{"key":"1332_CR3","unstructured":"Doran D, Schulz S, Besold TR. What does explainable AI really mean? A new conceptualization of perspectives. ArXiv171000794 Cs. 2017. http:\/\/arxiv.org\/abs\/1710.00794. Accessed 3 Sept 2019."},{"key":"1332_CR4","doi-asserted-by":"publisher","first-page":"2199","DOI":"10.1001\/jama.2018.17163","volume":"320","author":"EH Shortliffe","year":"2018","unstructured":"Shortliffe EH, Sep\u00falveda MJ. Clinical decision support in the era of artificial intelligence. JAMA. 2018;320:2199\u2013200.","journal-title":"JAMA"},{"key":"1332_CR5","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1126\/science.aax2342","volume":"366","author":"Z Obermeyer","year":"2019","unstructured":"Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447\u201353.","journal-title":"Science"},{"key":"1332_CR6","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-28954-6","volume-title":"Explainable AI: interpreting, explaining and visualizing deep learning","year":"2019","unstructured":"Samek W, Montavon G, Vedaldi A, Hansen LK, M\u00fcller K-R, editors. Explainable AI: interpreting, explaining and visualizing deep learning. Berlin: Springer; 2019. https:\/\/doi.org\/10.1007\/978-3-030-28954-6."},{"key":"1332_CR7","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1038\/s41591-018-0316-z","volume":"25","author":"A Esteva","year":"2019","unstructured":"Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. 
A guide to deep learning in healthcare. Nat Med. 2019;25:24\u20139.","journal-title":"Nat Med"},{"key":"1332_CR8","unstructured":"Islam SR, Eberle W, Ghafoor SK. Towards quantification of explainability in explainable artificial intelligence methods. ArXiv191110104 Cs Q-Fin. 2019. http:\/\/arxiv.org\/abs\/1911.10104. Accessed 2 Oct 2020."},{"key":"1332_CR9","unstructured":"Samek W, Montavon G, Lapuschkin S, Anders CJ, M\u00fcller K-R. Toward interpretable machine learning: transparent deep neural networks and beyond. ArXiv200307631 Cs Stat. 2020. http:\/\/arxiv.org\/abs\/2003.07631. Accessed 2 Oct 2020."},{"key":"1332_CR10","doi-asserted-by":"publisher","first-page":"1096","DOI":"10.1038\/s41467-019-08987-4","volume":"10","author":"S Lapuschkin","year":"2019","unstructured":"Lapuschkin S, W\u00e4ldchen S, Binder A, Montavon G, Samek W, M\u00fcller K-R. Unmasking Clever Hans predictors and assessing what machines really learn. Nat Commun. 2019;10:1096.","journal-title":"Nat Commun"},{"key":"1332_CR11","doi-asserted-by":"publisher","first-page":"e1002683","DOI":"10.1371\/journal.pmed.1002683","volume":"15","author":"JR Zech","year":"2018","unstructured":"Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLOS Med. 2018;15:e1002683.","journal-title":"PLOS Med"},{"key":"1332_CR12","doi-asserted-by":"publisher","unstructured":"Olsen HP, Slosser JL, Hildebrandt TT, Wiesener C. What\u2019s in the box? The legal requirement of explainability in computationally aided decision-making in public administration. SSRN Scholarly Paper. Rochester: Social Science Research Network; 2019. 
https:\/\/doi.org\/10.2139\/ssrn.3402974.","DOI":"10.2139\/ssrn.3402974"},{"key":"1332_CR13","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1093\/ijlit\/eaz002","volume":"27","author":"D Sch\u00f6nberger","year":"2019","unstructured":"Sch\u00f6nberger D. Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int J Law Inf Technol. 2019;27:171\u2013203.","journal-title":"Int J Law Inf Technol"},{"key":"1332_CR14","doi-asserted-by":"publisher","unstructured":"Cohen IG. Informed consent and medical artificial intelligence: what to tell the patient? SSRN Scholarly Paper. Rochester, NY: Social Science Research Network; 2020. https:\/\/doi.org\/10.2139\/ssrn.3529576.","DOI":"10.2139\/ssrn.3529576"},{"key":"1332_CR15","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3604924","author":"V Beaudouin","year":"2020","unstructured":"Beaudouin V, Bloch I, Bounie D, Cl\u00e9men\u00e7on S, d\u2019Alch\u00e9-Buc F, Eagan J, et al. Identifying the \u201cright\u201d level of explanation in a given situation. SSRN Electron J. 2020. https:\/\/doi.org\/10.2139\/ssrn.3604924.","journal-title":"SSRN Electron J"},{"key":"1332_CR16","unstructured":"FDA. Proposed regulatory framework for modifications to artificial intelligence\/machine learning (AI\/ML)-based Software as a Medical Device (SaMD). 2020. https:\/\/www.fda.gov\/files\/medical%20devices\/published\/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf. Accessed 5 July 2020."},{"key":"1332_CR17","doi-asserted-by":"crossref","unstructured":"Hacker P, Krestel R, Grundmann S, Naumann F. Explainable AI under contract and tort law: legal incentives and technical challenges. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network; 2020. https:\/\/papers.ssrn.com\/abstract=3513433. 
Accessed 13 Feb 2020.","DOI":"10.2139\/ssrn.3513433"},{"key":"1332_CR18","doi-asserted-by":"publisher","first-page":"320","DOI":"10.21552\/edpl\/2018\/3\/10","volume":"4","author":"A Ferretti","year":"2018","unstructured":"Ferretti A, Schneider M, Blasimme A. Machine learning in medicine: opening the new data protection black box. Eur Data Prot Law Rev EDPL. 2018;4:320.","journal-title":"Eur Data Prot Law Rev EDPL"},{"key":"1332_CR19","doi-asserted-by":"publisher","first-page":"e0174944","DOI":"10.1371\/journal.pone.0174944","volume":"12","author":"SF Weng","year":"2017","unstructured":"Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE. 2017;12:e0174944.","journal-title":"PLoS ONE"},{"key":"1332_CR20","doi-asserted-by":"publisher","first-page":"e009476","DOI":"10.1161\/JAHA.118.009476","volume":"7","author":"IA Kakadiaris","year":"2018","unstructured":"Kakadiaris IA, Vrigkas M, Yen AA, Kuznetsova T, Budoff M, Naghavi M. Machine learning outperforms ACC\/AHA CVD risk calculator in MESA. J Am Heart Assoc. 2018;7:e009476.","journal-title":"J Am Heart Assoc."},{"key":"1332_CR21","doi-asserted-by":"publisher","first-page":"101723","DOI":"10.1016\/j.artmed.2019.101723","volume":"101","author":"T Liu","year":"2019","unstructured":"Liu T, Fan W, Wu C. A hybrid machine learning approach to cerebral stroke prediction based on imbalanced medical dataset. Artif Intell Med. 2019;101:101723\u2013101723.","journal-title":"Artif Intell Med."},{"key":"1332_CR22","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41746-020-0254-2","volume":"3","author":"CM Cutillo","year":"2020","unstructured":"Cutillo CM, Sharma KR, Foschini L, Kundu S, Mackintosh M, Mandl KD. Machine intelligence in healthcare\u2014perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit Med. 
2020;3:1\u20135.","journal-title":"NPJ Digit Med"},{"key":"1332_CR23","unstructured":"Tonekaboni S, Joshi S, McCradden MD, Goldenberg A. What clinicians want: contextualizing explainable machine learning for clinical end use. ArXiv190505134 Cs Stat. 2019. http:\/\/arxiv.org\/abs\/1905.05134. Accessed 3 Sept 2019."},{"key":"1332_CR24","unstructured":"Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academies Press (US); 2001. http:\/\/www.ncbi.nlm.nih.gov\/books\/NBK222274\/. Accessed 21 May 2020."},{"key":"1332_CR25","doi-asserted-by":"publisher","first-page":"780","DOI":"10.1056\/NEJMp1109283","volume":"366","author":"MJ Barry","year":"2012","unstructured":"Barry MJ, Edgman-Levitan S. Shared decision making\u2014the pinnacle of patient-centered care. N Engl J Med. 2012;366:780\u20131.","journal-title":"N Engl J Med"},{"key":"1332_CR26","doi-asserted-by":"publisher","first-page":"1320","DOI":"10.1111\/acem.13065","volume":"23","author":"M Kunneman","year":"2016","unstructured":"Kunneman M, Montori VM, Castaneda-Guarderas A, Hess EP. What is shared decision making? (and What it is not). Acad Emerg Med. 2016;23:1320\u20134.","journal-title":"Acad Emerg Med"},{"key":"1332_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.ahj.2017.05.014","volume":"191","author":"ES O\u2019Neill","year":"2017","unstructured":"O\u2019Neill ES, Grande SW, Sherman A, Elwyn G, Coylewright M. Availability of patient decision aids for stroke prevention in atrial fibrillation: a systematic review. Am Heart J. 2017;191:1\u201311.","journal-title":"Am Heart J"},{"key":"1332_CR28","doi-asserted-by":"publisher","first-page":"159","DOI":"10.1007\/s10840-018-0465-5","volume":"56","author":"PA Noseworthy","year":"2019","unstructured":"Noseworthy PA, Brito JP, Kunneman M, Hargraves IG, Zeballos-Palacios C, Montori VM, et al. 
Shared decision-making in atrial fibrillation: navigating complex issues in partnership with the patient. J Interv Card Electrophysiol. 2019;56:159\u201363.","journal-title":"J Interv Card Electrophysiol"},{"key":"1332_CR29","doi-asserted-by":"publisher","first-page":"499","DOI":"10.1136\/bmjqs-2018-008022","volume":"28","author":"CC Dobler","year":"2019","unstructured":"Dobler CC, Sanchez M, Gionfriddo MR, Alvarez-Villalobos NA, Ospina NS, Spencer-Bonilla G, et al. Impact of decision aids used during clinical encounters on clinician outcomes and consultation length: a systematic review. BMJ Qual Saf. 2019;28:499\u2013510.","journal-title":"BMJ Qual Saf"},{"key":"1332_CR30","doi-asserted-by":"publisher","first-page":"e944","DOI":"10.1161\/CIR.0000000000000740","volume":"140","author":"PA Noseworthy","year":"2019","unstructured":"Noseworthy PA, Kaufman ES, Chen LY, Chung MK, Elkind Mitchell SV, Joglar JA, et al. Subclinical and device-detected atrial fibrillation: pondering the knowledge gap: a scientific statement from the American Heart Association. Circulation. 2019;140:e944\u201363.","journal-title":"Circulation"},{"key":"1332_CR31","doi-asserted-by":"publisher","first-page":"395","DOI":"10.1186\/s13063-020-04305-2","volume":"21","author":"G Spencer-Bonilla","year":"2020","unstructured":"Spencer-Bonilla G, Thota A, Organick P, Ponce OJ, Kunneman M, Giblon R, et al. Normalization of a conversation tool to promote shared decision making about anticoagulation in patients with atrial fibrillation within a practical randomized trial of its effectiveness: a cross-sectional study. Trials. 2020;21:395.","journal-title":"Trials"},{"key":"1332_CR32","doi-asserted-by":"publisher","first-page":"19","DOI":"10.1186\/s12872-018-0760-1","volume":"18","author":"C Bonner","year":"2018","unstructured":"Bonner C, Bell K, Jansen J, Glasziou P, Irwig L, Doust J, et al. Should heart age calculators be used alongside absolute cardiovascular disease risk assessment? 
BMC Cardiovasc Disord. 2018;18:19.","journal-title":"BMC Cardiovasc Disord"},{"key":"1332_CR33","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-019-00391-6","author":"JC Bjerring","year":"2020","unstructured":"Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philos Technol. 2020. https:\/\/doi.org\/10.1007\/s13347-019-00391-6.","journal-title":"Philos Technol"},{"key":"1332_CR34","doi-asserted-by":"publisher","DOI":"10.1136\/bmj.f7066","author":"MC Politi","year":"2013","unstructured":"Politi MC, Dizon DS, Frosch DL, Kuzemchak MD, Stiggelbout AM. Importance of clarifying patients\u2019 desired role in shared decision making to match their level of engagement with their preferences. BMJ. 2013. https:\/\/doi.org\/10.1136\/bmj.f7066.","journal-title":"BMJ"},{"key":"1332_CR35","doi-asserted-by":"publisher","DOI":"10.1002\/14651858.CD001431.pub5","author":"D Stacey","year":"2017","unstructured":"Stacey D, L\u00e9gar\u00e9 F, Lewis K, Barry MJ, Bennett CL, Eden KB, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2017. https:\/\/doi.org\/10.1002\/14651858.CD001431.pub5.","journal-title":"Cochrane Database Syst Rev"},{"key":"1332_CR36","unstructured":"Beauchamp TL. Principles of biomedical ethics. Paperback May-2008. New York: Oxford University Press; 2008."},{"key":"1332_CR37","doi-asserted-by":"publisher","first-page":"111","DOI":"10.1136\/medethics-2014-102282","volume":"41","author":"R Gillon","year":"2015","unstructured":"Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics. 2015;41:111\u20136.","journal-title":"J Med Ethics"},{"key":"1332_CR38","doi-asserted-by":"publisher","first-page":"501","DOI":"10.1038\/s42256-019-0114-4","volume":"1","author":"B Mittelstadt","year":"2019","unstructured":"Mittelstadt B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. 
2019;1:501\u20137.","journal-title":"Nat Mach Intell"},{"key":"1332_CR39","volume-title":"A history and theory of informed consent","author":"RR Faden","year":"1986","unstructured":"Faden RR, Beauchamp TL. A history and theory of informed consent. Oxford: Oxford University Press; 1986."},{"key":"1332_CR40","doi-asserted-by":"publisher","DOI":"10.1093\/0198248075.001.0001\/acprof-9780198248071","volume-title":"The Morality of Freedom","author":"J Raz","year":"2020","unstructured":"Raz J. The Morality of Freedom. Oxford: Oxford University Press; 2020. https:\/\/doi.org\/10.1093\/0198248075.001.0001\/acprof-9780198248071."},{"key":"1332_CR41","doi-asserted-by":"publisher","first-page":"156","DOI":"10.1136\/medethics-2018-105118","volume":"45","author":"RJ McDougall","year":"2019","unstructured":"McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019;45:156\u201360.","journal-title":"J Med Ethics"},{"key":"1332_CR42","doi-asserted-by":"publisher","DOI":"10.1136\/medethics-2019-105586","author":"T Grote","year":"2019","unstructured":"Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. J Med Ethics. 2019. https:\/\/doi.org\/10.1136\/medethics-2019-105586.","journal-title":"J Med Ethics"},{"key":"1332_CR43","doi-asserted-by":"publisher","DOI":"10.1186\/s40635-019-0286-6","author":"M Beil","year":"2019","unstructured":"Beil M, Proft I, van Heerden D, Sviri S, van Heerden PV. Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med Exp. 2019. https:\/\/doi.org\/10.1186\/s40635-019-0286-6.","journal-title":"Intensive Care Med Exp"},{"key":"1332_CR44","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"AJ London","year":"2019","unstructured":"London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep. 
2019;49:15\u201321.","journal-title":"Hastings Cent Rep"}],"container-title":["BMC Medical Informatics and Decision Making"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01332-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1186\/s12911-020-01332-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01332-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,11,30]],"date-time":"2020-11-30T16:28:17Z","timestamp":1606753697000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcmedinformdecismak.biomedcentral.com\/articles\/10.1186\/s12911-020-01332-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,11,30]]},"references-count":44,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["1332"],"URL":"https:\/\/doi.org\/10.1186\/s12911-020-01332-6","relation":{},"ISSN":["1472-6947"],"issn-type":[{"value":"1472-6947","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,11,30]]},"assertion":[{"value":"22 July 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 November 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 November 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Not applicable.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not 
applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare no competing interests.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"310"}}