{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T06:24:42Z","timestamp":1774592682501,"version":"3.50.1"},"reference-count":44,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,10,8]],"date-time":"2020-10-08T00:00:00Z","timestamp":1602115200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,10,8]],"date-time":"2020-10-08T00:00:00Z","timestamp":1602115200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100000092","name":"U.S. National Library of Medicine","doi-asserted-by":"publisher","award":["T15 LM007059"],"award-info":[{"award-number":["T15 LM007059"]}],"id":[{"id":"10.13039\/100000092","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000071","name":"National Institute of Child Health and Human Development","doi-asserted-by":"publisher","award":["K23HD099331"],"award-info":[{"award-number":["K23HD099331"]}],"id":[{"id":"10.13039\/100000071","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Med Inform Decis Mak"],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n<jats:title>Background<\/jats:title>\n<jats:p>There is an increasing interest in clinical prediction tools that can achieve high prediction accuracy and provide explanations of the factors leading to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs and user evaluations of model interpretability are lacking in the healthcare domain. 
We used extended revisions of previously published theoretical frameworks to propose a framework for the design of user-centered displays of explanations. This new framework served as the basis for qualitative inquiries and design review sessions with critical care nurses and physicians that informed the design of a user-centered explanation display for an ML-based prediction tool.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Methods<\/jats:title>\n<jats:p>We used our framework to propose explanation displays for predictions from a <jats:underline>p<\/jats:underline>ediatric <jats:underline>i<\/jats:underline>ntensive <jats:underline>c<\/jats:underline>are <jats:underline>u<\/jats:underline>nit (PICU) in-hospital mortality risk model. Proposed displays were based on a model-agnostic, instance-level explanation approach using feature influence, as determined by Shapley values. Focus group sessions solicited critical care provider feedback on the proposed displays, which were then revised accordingly.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Results<\/jats:title>\n<jats:p>The proposed displays were perceived as useful tools in assessing model predictions. However, specific explanation goals and information needs varied by clinical role and level of predictive modeling knowledge. Providers preferred explanation displays that required less information processing effort and could support the information needs of a variety of users. Providing supporting information to assist in interpretation was seen as critical for fostering provider understanding and acceptance of the predictions and explanations. The user-centered explanation display for the PICU in-hospital mortality risk model incorporated elements from the initial displays along with enhancements suggested by providers.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Conclusions<\/jats:title>\n<jats:p>We proposed a framework for the design of user-centered displays of explanations for ML models. 
We used the proposed framework to motivate the design of a user-centered display of an explanation for predictions from a PICU in-hospital mortality risk model. Positive feedback from focus group participants provides preliminary support for the use of model-agnostic, instance-level explanations of feature influence as an approach to understand ML model predictions in healthcare and advances the discussion on how to effectively communicate ML model information to healthcare providers.<\/jats:p>\n<\/jats:sec>","DOI":"10.1186\/s12911-020-01276-x","type":"journal-article","created":{"date-parts":[[2020,10,8]],"date-time":"2020-10-08T16:05:45Z","timestamp":1602173145000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":81,"title":["A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare"],"prefix":"10.1186","volume":"20","author":[{"given":"Amie J.","family":"Barda","sequence":"first","affiliation":[]},{"given":"Christopher M.","family":"Horvat","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8793-9982","authenticated-orcid":false,"given":"Harry","family":"Hochheiser","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,10,8]]},"reference":[{"key":"1276_CR1","doi-asserted-by":"publisher","first-page":"1317","DOI":"10.1001\/jama.2017.18391","volume":"319","author":"AL Beam","year":"2018","unstructured":"Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319:1317\u20138. https:\/\/doi.org\/10.1001\/jama.2017.18391.","journal-title":"JAMA."},{"key":"1276_CR2","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1001\/jama.2018.5602","volume":"320","author":"ND Shah","year":"2018","unstructured":"Shah ND, Steyerberg EW, Kent DM. Big data and predictive analytics: recalibrating expectations. JAMA. 
2018;320:27\u20138. https:\/\/doi.org\/10.1001\/jama.2018.5602.","journal-title":"JAMA."},{"key":"1276_CR3","doi-asserted-by":"publisher","first-page":"1766","DOI":"10.1253\/circj.CJ-17-1185","volume":"81","author":"F Nakamura","year":"2017","unstructured":"Nakamura F, Nakai M. Prediction models - why are they used or not used? Circ J. 2017;81:1766\u20137. https:\/\/doi.org\/10.1253\/circj.CJ-17-1185.","journal-title":"Circ J"},{"key":"1276_CR4","volume-title":"Machine learning model interpretability for precision medicine","author":"GJ Katuwal","year":"2016","unstructured":"Katuwal GJ, Chen R. Machine learning model interpretability for precision medicine. 2016. http:\/\/arxiv.org\/abs\/1610.09045."},{"key":"1276_CR5","doi-asserted-by":"publisher","first-page":"559","DOI":"10.1145\/3233547.3233667","volume-title":"Proceedings of the 2018 ACM international conference on bioinformatics, computational biology, and health informatics - BCB \u201818","author":"MA Ahmad","year":"2018","unstructured":"Ahmad MA, Eckert C, Teredesai A. Interpretable machine learning in healthcare. In: Proceedings of the 2018 ACM international conference on bioinformatics, computational biology, and health informatics - BCB \u201818. New York: ACM Press; 2018. p. 559\u201360. https:\/\/doi.org\/10.1145\/3233547.3233667."},{"key":"1276_CR6","doi-asserted-by":"publisher","first-page":"11","DOI":"10.1159\/000492428","volume":"5","author":"A Vellido","year":"2019","unstructured":"Vellido A. Societal Issues Concerning the Application of Artificial Intelligence in Medicine. Kidney Dis (Basel, Switzerland). 2019;5:11\u20137. https:\/\/doi.org\/10.1159\/000492428.","journal-title":"Kidney Dis (Basel, Switzerland)"},{"key":"1276_CR7","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1609\/aimag.v38i3.2741","volume":"38","author":"B Goodman","year":"2017","unstructured":"Goodman B, Flaxman S. European Union Regulations on Algorithmic Decision-Making and a \u201cRight to Explanation.\u201d. 
AI Mag. 2017;38:50\u20137. https:\/\/doi.org\/10.1609\/aimag.v38i3.2741.","journal-title":"AI Mag"},{"key":"1276_CR8","unstructured":"U.S. Food and Drug Administration. Clinical and Patient Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff. Washington, D.C., USA; 2017. https:\/\/www.fda.gov\/downloads\/MedicalDevices\/DeviceRegulationandGuidance\/GuidanceDocuments\/UCM587819.pdf."},{"key":"1276_CR9","doi-asserted-by":"publisher","first-page":"1181","DOI":"10.13063\/2327-9214.1181","volume":"3","author":"TL Johnson","year":"2015","unstructured":"Johnson TL, Brewer D, Estacio R, Vlasimsky T, Durfee MJ, Thompson KR, et al. Augmenting Predictive Modeling Tools with Clinical Insights for Care Coordination Program Design and Implementation. EGEMS (Washington, DC). 2015;3:1181. https:\/\/doi.org\/10.13063\/2327-9214.1181.","journal-title":"EGEMS (Washington, DC)"},{"key":"1276_CR10","doi-asserted-by":"publisher","DOI":"10.1111\/1559-8918.2018.01213","volume-title":"Ethnographic Praxis in Industry Conference Proceedings","author":"MC Elish","year":"2018","unstructured":"Elish MC. The stakes of uncertainty: developing and integrating machine learning in clinical care. In: Ethnographic Praxis in Industry Conference Proceedings; 2018. p. 364\u201380. https:\/\/doi.org\/10.1111\/1559-8918.2018.01213."},{"key":"1276_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2019","unstructured":"Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell. 2019;267:1\u201338. 
https:\/\/doi.org\/10.1016\/j.artint.2018.07.007.","journal-title":"Artif Intell"},{"key":"1276_CR12","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3173574.3174156","volume-title":"Proceedings of the 2018 CHI conference on human factors in computing systems - CHI \u201818","author":"A Abdul","year":"2018","unstructured":"Abdul A, Vermeulen J, Wang D, Lim BY, Kankanhalli M. Trends and trajectories for explainable, accountable and intelligible systems. In: Proceedings of the 2018 CHI conference on human factors in computing systems - CHI \u201818. New York: ACM Press; 2018. p. 1\u201318. https:\/\/doi.org\/10.1145\/3173574.3174156."},{"key":"1276_CR13","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/CIG.2018.8490433","volume-title":"2018 IEEE conference on computational intelligence and games (CIG). IEEE","author":"J Zhu","year":"2018","unstructured":"Zhu J, Liapis A, Risi S, Bidarra R, Youngblood GM. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation. In: 2018 IEEE conference on computational intelligence and games (CIG). IEEE; 2018. p. 1\u20138. https:\/\/doi.org\/10.1109\/CIG.2018.8490433."},{"key":"1276_CR14","doi-asserted-by":"publisher","first-page":"19","DOI":"10.1007\/978-3-319-98131-4_2","volume-title":"Explainable and Interpretable Models in Computer Vision and Machine Learning","author":"G Ras","year":"2018","unstructured":"Ras G, van Gerven M, Haselager P. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges. In: Escalante HJ, Escalera S, Guyon I, Bar\u00f3 X, G\u00fc\u00e7l\u00fct\u00fcrk Y, G\u00fc\u00e7l\u00fc U, et al., editors. Explainable and Interpretable Models in Computer Vision and Machine Learning. Cham: Springer; 2018. p. 19\u201336. 
https:\/\/doi.org\/10.1007\/978-3-319-98131-4_2."},{"key":"1276_CR15","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3290605.3300831","volume-title":"Proceedings of the 2019 CHI conference on human factors in computing systems - CHI \u201819","author":"D Wang","year":"2019","unstructured":"Wang D, Yang Q, Abdul A, Lim BY. Designing theory-driven user-centric explainable AI. In: Proceedings of the 2019 CHI conference on human factors in computing systems - CHI \u201819. New York: ACM Press; 2019. p. 1\u201315. https:\/\/doi.org\/10.1145\/3290605.3300831."},{"key":"1276_CR16","volume-title":"Joint Proceedings of the ACM IUI 2019 Workshops. Los Angeles, CA, USA","author":"BY Lim","year":"2019","unstructured":"Lim BY, Yang Q, Abdul A, Wang D. Why these Explanations? Selecting Intelligibility Types for Explanation Goals. In: Joint Proceedings of the ACM IUI 2019 Workshops. Los Angeles, CA, USA; 2019."},{"key":"1276_CR17","volume-title":"Joint Proceedings of the ACM IUI 2019 Workshops. Los Angeles, CA, USA","author":"M Ribera","year":"2019","unstructured":"Ribera M, Lapedriza A. Can we do better explanations? A proposal of User-Centered Explainable AI. In: Joint Proceedings of the ACM IUI 2019 Workshops. Los Angeles, CA, USA; 2019."},{"key":"1276_CR18","unstructured":"Doshi-Velez F, Kim B. Towards A Rigorous science of interpretable machine learning. 2017. http:\/\/arxiv.org\/abs\/1702.08608."},{"key":"1276_CR19","volume-title":"A multidisciplinary survey and framework for design and evaluation of explainable AI systems","author":"S Mohseni","year":"2018","unstructured":"Mohseni S, Zarei N, Ragan ED. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. 2018. http:\/\/arxiv.org\/abs\/1811.11839."},{"key":"1276_CR20","doi-asserted-by":"publisher","first-page":"56","DOI":"10.1016\/j.ijmedinf.2016.12.001","volume":"98","author":"E Kilsdonk","year":"2017","unstructured":"Kilsdonk E, Peute LW, Jaspers MWM. 
Factors influencing implementation success of guideline-based clinical decision support systems: a systematic review and gaps analysis. Int J Med Inform. 2017;98:56\u201364. https:\/\/doi.org\/10.1016\/j.ijmedinf.2016.12.001.","journal-title":"Int J Med Inform"},{"key":"1276_CR21","volume-title":"M\u00fcller K-R","author":"W Samek","year":"2017","unstructured":"Samek W, Wiegand T, M\u00fcller K-R. Understanding, Visualizing and Interpreting Deep Learning Models: Explainable Artificial Intelligence; 2017. http:\/\/arxiv.org\/abs\/1708.08296."},{"key":"1276_CR22","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1002\/widm.1312","volume":"9","author":"A Holzinger","year":"2019","unstructured":"Holzinger A, Langs G, Denk H, Zatloukal K, M\u00fcller H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019;9:1\u201313.","journal-title":"Wiley Interdiscip Rev Data Min Knowl Discov"},{"key":"1276_CR23","doi-asserted-by":"publisher","unstructured":"Teasdale G, Jennett B. Assessment of coma and impaired consciousness. A practical scale. Lancet (London, England). 1974;2:81\u20134. https:\/\/doi.org\/10.1016\/s0140-6736(74)91639-0.","DOI":"10.1016\/s0140-6736(74)91639-0"},{"key":"1276_CR24","unstructured":"Fayyad UM, Irani KB. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In: 13th International Joint Conference on Artificial Intelligence. 1993. p. 1022\u20137."},{"key":"1276_CR25","unstructured":"Hall MA. Correlation-based feature selection for machine learning: The University of Waikato; 1999. https:\/\/www.cs.waikato.ac.nz\/~mhall\/thesis.pdf."},{"key":"1276_CR26","unstructured":"Lundberg S, Lee S-I. An unexpected unity among methods for interpreting model predictions. In: NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems. Barcelona, Spain; 2016. 
http:\/\/arxiv.org\/abs\/1611.07478."},{"key":"1276_CR27","unstructured":"Lundberg S, Lee S-I. A Unified Approach to Interpreting Model Predictions. In: Advances in Neural Information Processing Systems. Long Beach, CA, USA; 2017. p. 4765\u201374. http:\/\/arxiv.org\/abs\/1705.07874."},{"key":"1276_CR28","volume-title":"IJCAI-17 Workshop on Explainable Artificial Intelligence (XAI). Melbourne, Australia","author":"O Biran","year":"2017","unstructured":"Biran O, Cotton C. Explanation and Justification in Machine Learning : A Survey. In: IJCAI-17 Workshop on Explainable Artificial Intelligence (XAI). Melbourne, Australia; 2017."},{"key":"1276_CR29","doi-asserted-by":"publisher","first-page":"542","DOI":"10.1016\/j.knosys.2007.04.004","volume":"20","author":"P Pu","year":"2007","unstructured":"Pu P, Chen L. Trust-inspiring explanation interfaces for recommender systems. Knowledge-Based Syst. 2007;20:542\u201356. https:\/\/doi.org\/10.1016\/j.knosys.2007.04.004.","journal-title":"Knowledge-Based Syst"},{"key":"1276_CR30","volume-title":"User-oriented assessment of classification model understandability. In: 11th Scandinavian Conference on Artificial Intelligence. Trondheim, Norway","author":"H Allahyari","year":"2011","unstructured":"Allahyari H, Lavesson N. User-oriented assessment of classification model understandability. In: 11th Scandinavian Conference on Artificial Intelligence. Trondheim, Norway; 2011."},{"key":"1276_CR31","doi-asserted-by":"publisher","first-page":"749","DOI":"10.1101\/206540","volume":"2","author":"SM Lundberg","year":"2018","unstructured":"Lundberg SM, Nair B, Vavilala MS, Horibe M, Eisses MJ, Adams T, et al. Explainable machine learning predictions to help anesthesiologists prevent hypoxemia during surgery. Nat Biomed Eng. 2018;2:749\u201360. 
https:\/\/doi.org\/10.1101\/206540.","journal-title":"Nat Biomed Eng"},{"key":"1276_CR32","doi-asserted-by":"publisher","first-page":"1135","DOI":"10.1145\/2939672.2939778","volume-title":"Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining","author":"MT Ribeiro","year":"2016","unstructured":"Ribeiro MT, Singh S, Guestrin C. \u201cWhy should I trust you?\u201d: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. San Francisco: ACM; 2016. p. 1135\u201344. http:\/\/arxiv.org\/abs\/1602.04938."},{"key":"1276_CR33","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0132614","volume":"10","author":"V Van Belle","year":"2015","unstructured":"Van Belle V, Van Calster B. Visualizing risk prediction models. PLoS One. 2015;10:e0132614. https:\/\/doi.org\/10.1371\/journal.pone.0132614.","journal-title":"PLoS One"},{"key":"1276_CR34","unstructured":"Lundberg SM. SHAP (SHapley Additive exPlanations). https:\/\/github.com\/slundberg\/shap."},{"key":"1276_CR35","unstructured":"Bokeh Development Team. Bokeh: Python library for interactive visualization. https:\/\/bokeh.org."},{"key":"1276_CR36","doi-asserted-by":"publisher","DOI":"10.4135\/9781452230153","volume-title":"Basics of qualitative research: techniques and procedures for developing grounded theory","author":"J Corbin","year":"2008","unstructured":"Corbin J, Strauss A. Basics of qualitative research: techniques and procedures for developing grounded theory. 3rd ed. Los Angeles: SAGE Publications; 2008.","edition":"3"},{"issue":"123","key":"1276_CR37","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.ijmedinf.2018.12.003","volume":"2019","author":"G Kennedy","year":"2017","unstructured":"Kennedy G, Gallego B. Clinical prediction rules: a systematic review of healthcare provider opinions and preferences. Int J Med Inform. 2017;2019(123):1\u201310. 
https:\/\/doi.org\/10.1016\/j.ijmedinf.2018.12.003.","journal-title":"Int J Med Inform"},{"key":"1276_CR38","unstructured":"NVivo qualitative data analysis software. Version 12. QSR International Pty Ltd.; 2018."},{"key":"1276_CR39","doi-asserted-by":"publisher","first-page":"2","DOI":"10.1097\/PCC.0000000000000558","volume":"17","author":"MM Pollack","year":"2016","unstructured":"Pollack MM, Holubkov R, Funai T, Dean JM, Berger JT, Wessel DL, et al. The pediatric risk of mortality score: update 2015. Pediatr Crit Care Med. 2016;17:2\u20139. https:\/\/doi.org\/10.1097\/PCC.0000000000000558.","journal-title":"Pediatr Crit Care Med"},{"key":"1276_CR40","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/HealthCom.2016.7749452","volume-title":"2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom). IEEE","author":"C Yang","year":"2016","unstructured":"Yang C, Delcher C, Shenkman E, Ranka S. Predicting 30-day all-cause readmissions from hospital inpatient discharge data. In: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom). IEEE; 2016. p. 1\u20136. https:\/\/doi.org\/10.1109\/HealthCom.2016.7749452."},{"key":"1276_CR41","doi-asserted-by":"publisher","first-page":"827","DOI":"10.1136\/bmj.324.7341.827","volume":"324","author":"A Edwards","year":"2002","unstructured":"Edwards A. Explaining risks: turning numerical data into meaningful pictures. BMJ. 2002;324:827\u201330. https:\/\/doi.org\/10.1136\/bmj.324.7341.827.","journal-title":"BMJ."},{"key":"1276_CR42","doi-asserted-by":"publisher","first-page":"1102","DOI":"10.1093\/jamia\/ocx060","volume":"24","author":"AD Jeffery","year":"2017","unstructured":"Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Informatics Assoc. 2017;24:1102\u201310. 
https:\/\/doi.org\/10.1093\/jamia\/ocx060.","journal-title":"J Am Med Informatics Assoc"},{"key":"1276_CR43","doi-asserted-by":"publisher","first-page":"136","DOI":"10.1016\/j.jclinepi.2015.09.008","volume":"70","author":"TH Kappen","year":"2016","unstructured":"Kappen TH, van Loon K, Kappen MAM, van Wolfswinkel L, Vergouwe Y, van Klei WA, et al. Barriers and facilitators perceived by physicians when using prediction models in practice. J Clin Epidemiol. 2016;70:136\u201345. https:\/\/doi.org\/10.1016\/j.jclinepi.2015.09.008.","journal-title":"J Clin Epidemiol"},{"key":"1276_CR44","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/s13218-020-00636-z","volume":"34","author":"A Holzinger","year":"2020","unstructured":"Holzinger A, Carrington A, M\u00fcller H. Measuring the quality of explanations: the system Causability scale (SCS): comparing human and machine explanations. KI - Kunstl Intelligenz. 2020;34:193\u20138. https:\/\/doi.org\/10.1007\/s13218-020-00636-z.","journal-title":"KI - Kunstl Intelligenz"}],"container-title":["BMC Medical Informatics and Decision 
Making"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01276-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12911-020-01276-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01276-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,10,7]],"date-time":"2021-10-07T23:51:34Z","timestamp":1633650694000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcmedinformdecismak.biomedcentral.com\/articles\/10.1186\/s12911-020-01276-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,8]]},"references-count":44,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["1276"],"URL":"https:\/\/doi.org\/10.1186\/s12911-020-01276-x","relation":{},"ISSN":["1472-6947"],"issn-type":[{"value":"1472-6947","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,10,8]]},"assertion":[{"value":"2 April 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 September 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 October 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The IRB at the University of Pittsburgh approved the use of de-identified patient data for model development (PRO17030743) and determined the focus group study to be an exempt study (STUDY19020074). 
A formal informed consent process was not required for any portion of this work.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"Corresponding author Harry Hochheiser is an Associate Editor for <i>BMC Medical Informatics and Decision Making<\/i>.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"257"}}