{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,24]],"date-time":"2026-04-24T14:04:51Z","timestamp":1777039491285,"version":"3.51.4"},"reference-count":79,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,2,28]],"date-time":"2022-02-28T00:00:00Z","timestamp":1646006400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,2,28]],"date-time":"2022-02-28T00:00:00Z","timestamp":1646006400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100004440","name":"wellcome trust","doi-asserted-by":"publisher","award":["213660\/Z\/18\/Z"],"award-info":[{"award-number":["213660\/Z\/18\/Z"]}],"id":[{"id":"10.13039\/100004440","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000275","name":"leverhulme trust","doi-asserted-by":"publisher","award":["RC-2015-067"],"award-info":[{"award-number":["RC-2015-067"]}],"id":[{"id":"10.13039\/501100000275","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100006112","name":"microsoft research","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100006112","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2022,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. 
However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called <jats:italic>Explanatory Pragmatism<\/jats:italic>, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements that may underpin conflicting claims about explainability regarding the purposes for which explanations are sought. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering disorders of consciousness were likely to recover consciousness.<\/jats:p>","DOI":"10.1007\/s10676-022-09632-3","type":"journal-article","created":{"date-parts":[[2022,2,28]],"date-time":"2022-02-28T17:02:52Z","timestamp":1646067772000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":44,"title":["Explanatory pragmatism: a context-sensitive framework for explainable medical AI"],"prefix":"10.1007","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9880-6912","authenticated-orcid":false,"given":"Rune","family":"Nyrup","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7468-0123","authenticated-orcid":false,"given":"Diana","family":"Robinson","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,2,28]]},"reference":[{"key":"9632_CR1","volume-title":"How to do things with words","author":"JL Austin","year":"1962","unstructured":"Austin, J. L. (1962). How to do things with words. 
Clarendon Press."},{"key":"9632_CR2","doi-asserted-by":"publisher","first-page":"421","DOI":"10.1126\/science.aaz3873","volume":"366","author":"R Benjamin","year":"2019","unstructured":"Benjamin, R. (2019). Assessing risk, automating racism. Science, 366, 421\u2013422. https:\/\/doi.org\/10.1126\/science.aaz3873","journal-title":"Science"},{"key":"9632_CR3","unstructured":"Besold, T.R. and Uckelman, S.L. 2018. The what, the why, and the how of explanations in automated decision-making. https:\/\/arXiv.org\/1808.07074"},{"key":"9632_CR4","unstructured":"Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable Artificial Intelligence (XAI). Accessed 1 July 2018. http:\/\/www.cs.columbia.edu\/~orb\/papers\/xai_survey_paper_2017.pdf"},{"key":"9632_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-019-00391-6","author":"JC Bjerring","year":"2020","unstructured":"Bjerring, J. C., & Busch, J. (2020). Artificial intelligence and patient-centred decision-making. Philosophy &amp; Technology. https:\/\/doi.org\/10.1007\/s13347-019-00391-6","journal-title":"Philosophy & Technology"},{"issue":"50","key":"9632_CR6","doi-asserted-by":"publisher","first-page":"20254","DOI":"10.1073\/pnas.1112029108","volume":"108","author":"JA Brewer","year":"2011","unstructured":"Brewer, J. A., Worhunsky, P. D., Gray, J. R., Tang, Y., Weber, J., & Kober, H. (2011). Meditation experience is associated with differences in default mode network activity and connectivity. PNAS, 108(50), 20254\u201320259. https:\/\/doi.org\/10.1073\/pnas.1112029108","journal-title":"PNAS"},{"key":"9632_CR7","unstructured":"Buolamwini, J. and Gebru, T. (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research 81:1\u201315. Accessed 20 Apr 2021. 
https:\/\/proceedings.mlr.press\/v81\/buolamwini18a\/buolamwini18a.pdf."},{"key":"9632_CR8","doi-asserted-by":"publisher","DOI":"10.1177\/2053951715622512","author":"J Burrell","year":"2016","unstructured":"Burrell, J. (2016). How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data &amp; Society. https:\/\/doi.org\/10.1177\/2053951715622512","journal-title":"Big Data & Society"},{"key":"9632_CR9","doi-asserted-by":"crossref","unstructured":"Cai, C.J., Reif, E., Hegde, N., Hipp, J., Kim, B., Smilkov, D., Wattenberg, M., Viegas, F., Corrado, G.S., Stumpe, M.C. and Terry, M., 2019. Human-centered tools for coping with imperfect algorithms during medical decision-making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-14).","DOI":"10.1145\/3290605.3300234"},{"key":"9632_CR10","unstructured":"Camburu, O.M., Giunchiglia, E., Foerster, J., Lukasiewicz, T. and Blunsom, P., 2019. Can I trust the explainer? Verifying post-hoc explanatory methods. arXiv preprint.https:\/\/arXiv.org\/1910.02065"},{"key":"9632_CR11","doi-asserted-by":"publisher","first-page":"P1400","DOI":"10.1016\/S0140-6736(11)60563-1","volume":"377","author":"N Cartwright","year":"2011","unstructured":"Cartwright, N. (2011). A philosopher\u2019s view of the long road from RCTs to effectiveness. The Lancet, 377, P1400\u2013P1401. https:\/\/doi.org\/10.1016\/S0140-6736(11)60563-1","journal-title":"The Lancet"},{"key":"9632_CR12","doi-asserted-by":"publisher","first-page":"973","DOI":"10.1086\/668041","volume":"79","author":"N Cartwright","year":"2013","unstructured":"Cartwright, N. (2013). Presidential address: Will this policy work for you? Predicting effectiveness better: How philosophy helps. Philosophy of Science, 79, 973\u2013989. 
https:\/\/doi.org\/10.1086\/668041","journal-title":"Philosophy of Science"},{"issue":"7623","key":"9632_CR13","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1038\/538020a","volume":"538","author":"D Castelvecchi","year":"2016","unstructured":"Castelvecchi, D. (2016). Can we open the black box of AI? Nature, 538(7623), 20\u201323. https:\/\/doi.org\/10.1038\/538020a","journal-title":"Nature"},{"key":"9632_CR14","doi-asserted-by":"publisher","unstructured":"Chakraborty, S., Tomsett, R., Raghavendra, R., Harborne, D., Alzantot, M., Cerutti, F. et al. 2018. Interpretability of deep learning models: A survey of results. 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld\/SCALCOM\/UIC\/ATC\/CBDCom\/IOP\/SCI). https:\/\/doi.org\/10.1109\/UIC-ATC.2017.8397411","DOI":"10.1109\/UIC-ATC.2017.8397411"},{"key":"9632_CR15","unstructured":"Chen, S. 2018. Doctors said the coma patients would never wake. AI said they would - and they did. South China Morning Post. Accessed 1 July 2018. https:\/\/www.scmp.com\/news\/china\/science\/article\/2163298\/doctors-said-coma-patients-would-never-wake-ai-said-they-would"},{"key":"9632_CR16","doi-asserted-by":"publisher","first-page":"638","DOI":"10.1111\/jep.12852","volume":"24","author":"B Chin-Yee","year":"2018","unstructured":"Chin-Yee, B., & Upshur, R. (2018). Clinical judgement in the era of big data and predictive analytics. Journal of Evaluation in Clinical Practice, 24, 638\u2013645. https:\/\/doi.org\/10.1111\/jep.12852","journal-title":"Journal of Evaluation in Clinical Practice"},{"key":"9632_CR17","volume-title":"Explanation in the biological and historical sciences","author":"C Craver","year":"2014","unstructured":"Craver, C. (2014). The ontic conception of scientific explanation. 
In Andreas H\u00fcttemann & Marie Kaiser (Eds.), Explanation in the biological and historical sciences. Springer."},{"key":"9632_CR18","unstructured":"Crawford, K. 2017. The trouble with bias. NIPS 2017 keynote address. Retrieved 29 June 2021 from https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk."},{"key":"9632_CR19","doi-asserted-by":"publisher","DOI":"10.1093\/oso\/9780190652913.001.0001","volume-title":"Understanding scientific understanding","author":"H de Regt","year":"2017","unstructured":"de Regt, H. (2017). Understanding scientific understanding. OUP."},{"key":"9632_CR20","unstructured":"Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint. https:\/\/arXiv.org\/abs\/1702.08608"},{"key":"9632_CR21","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-020-00435-2","author":"A Erasmus","year":"2020","unstructured":"Erasmus, A., Brunet, T. D. P., & Fish, E. (2020). What is interpretability? Philosophy &amp; Technology. https:\/\/doi.org\/10.1007\/s13347-020-00435-2","journal-title":"Philosophy & Technology"},{"key":"9632_CR22","unstructured":"Felten, E. 2017. What does it mean to ask for an \u2018explainable\u2019 algorithm? Freedom to Tinker (blog), 31 May 2017. Accessed 1 Aug 2019. https:\/\/freedom-to-tinker.com\/2017\/05\/31\/what-does-it-mean-to-ask-for-an-explainable-algorithm\/"},{"key":"9632_CR23","doi-asserted-by":"publisher","first-page":"1005","DOI":"10.1086\/705452","volume":"86","author":"PL Franco","year":"2019","unstructured":"Franco, P. L. (2019). Speech act theory and the multiple aims of science. Philosophy of Science, 86, 1005\u20131015. https:\/\/doi.org\/10.1086\/705452","journal-title":"Philosophy of Science"},{"key":"9632_CR24","doi-asserted-by":"publisher","first-page":"1","DOI":"10.5195\/POM.2021.27","volume":"2","author":"K Genin","year":"2021","unstructured":"Genin, K., & Grote, T. (2021). Randomized controlled trials in medical AI: A methodological critique. 
Philosophy of Medicine, 2, 1\u201315. https:\/\/doi.org\/10.5195\/POM.2021.27","journal-title":"Philosophy of Medicine"},{"key":"9632_CR25","unstructured":"Ghorbani, A., Wexler, J., Zou, J. and Kim, B. 2019. Towards automatic concept-based explanations. arXiv preprint. https:\/\/arXiv.org\/1902.03129"},{"key":"9632_CR26","unstructured":"Gil, Yolanda (2021) \u2018Accelerate programme: An AI revolution in science? Using machine learning for scientific discovery\u2019 [Panel Discussion]. University of Cambridge. 26 April."},{"key":"9632_CR27","unstructured":"Gray, A. 2018 7 Amazing ways artificial intelligence is used in healthcare, World Economic Forum, 20 September 2018. Accessed 1 July 2018. https:\/\/www.weforum.org\/agenda\/2018\/09\/7-amazing-ways-artificial-intelligence-is-used-in-healthcare"},{"key":"9632_CR28","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2018","unstructured":"Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51, 93. https:\/\/doi.org\/10.1145\/3236009","journal-title":"ACM Computing Surveys"},{"key":"9632_CR29","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1609\/aimag.v40i2.2850","volume":"40","author":"D Gunning","year":"2019","unstructured":"Gunning, D., & Aha, D. W. (2019). DARPA\u2019s explainable artificial intelligence (XAI) programme. AI Magazine, 40, 44\u201358. https:\/\/doi.org\/10.1609\/aimag.v40i2.2850","journal-title":"AI Magazine"},{"key":"9632_CR30","doi-asserted-by":"publisher","first-page":"9781","DOI":"10.1073\/pnas.0711791105","volume":"105","author":"BJ Harrison","year":"2008","unstructured":"Harrison, B.J., Pujol, J., Lopez-Sola, M., Hernandez-Ribas, R., Deus, J., Ortiz, H. et al. 2008. Consistency and functional specialization in the default mode network. Accessed 20 Jan 2021. 
PNAS 105:9781\u20139786.","journal-title":"PNAS"},{"key":"9632_CR31","unstructured":"Heaven, W. 2020. New standards for AI clinical trials will help spot snake oil and hype. MIT Technology Review. 11 September."},{"key":"9632_CR32","doi-asserted-by":"publisher","first-page":"1435","DOI":"10.1002\/hbm.24886","volume":"41","author":"B Heinrichs","year":"2020","unstructured":"Heinrichs, B., & Eickhoff, S. (2020). Your evidence? Machine learning algorithms for medical diagnosis and prediction. Human Brain Mapping, 41, 1435\u20131444. https:\/\/doi.org\/10.1002\/hbm.24886","journal-title":"Human Brain Mapping"},{"key":"9632_CR33","unstructured":"UK House of Lords Select Committee on Artificial Intelligence. AI in the UK: Ready, willing and able? 2018. HL Paper 100. Accessed 1 July 2018. https:\/\/publications.parliament.uk\/pa\/ld201719\/ldselect\/ldai\/100\/10002.htm"},{"key":"9632_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1017\/S0266267100000468","volume":"8","author":"F Jackson","year":"1992","unstructured":"Jackson, F., & Pettit, P. (1992). In defense of explanatory ecumenism. Economics &amp; Philosophy, 8, 1\u201321. https:\/\/doi.org\/10.1017\/S0266267100000468","journal-title":"Economics & Philosophy"},{"key":"9632_CR35","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1038\/s42256-019-0088-2","volume":"1","author":"A Jobin","year":"2019","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: The global landscape of ethics guidelines. Nature Machine Intelligence, 1, 389\u2013399. https:\/\/doi.org\/10.1038\/s42256-019-0088-2","journal-title":"Nature Machine Intelligence"},{"key":"9632_CR36","unstructured":"Keeling, G., & Nyrup, R. manuscript. Explainable machine learning, clinical reasoning and patient autonomy. 
Unpublished manuscript under review."},{"key":"9632_CR37","doi-asserted-by":"publisher","first-page":"3799","DOI":"10.1007\/s11229-014-0616-x","volume":"192","author":"C Kelp","year":"2015","unstructured":"Kelp, C. (2015). Understanding phenomena. Synthese, 192, 3799\u20133816. https:\/\/doi.org\/10.1007\/s11229-014-0616-x","journal-title":"Synthese"},{"key":"9632_CR38","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1080\/1350178X.2018.1561078","volume":"26","author":"D Khosrowi","year":"2019","unstructured":"Khosrowi, D. (2019). Extrapolation of causal effects\u2013hopes, assumptions, and the extrapolator\u2019s circle. Journal of Economic Methodology, 26, 45\u201358. https:\/\/doi.org\/10.1080\/1350178X.2018.1561078","journal-title":"Journal of Economic Methodology"},{"key":"9632_CR39","unstructured":"Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J. and Viegas, F., (2018). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International conference on machine learning (pp. 2668\u20132677). PMLR."},{"key":"9632_CR40","unstructured":"Kim, B. 2021. Interpretability for everyone [Lecture]. Oxford Applied and Theoretical Machine Learning Group."},{"key":"9632_CR41","unstructured":"Kirsch, A. 2017. Explain to whom? Putting the user in the center of explainable AI. In: Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML. Accessed 1 Aug 2019. https:\/\/hal.archives-ouvertes.fr\/hal-01845135"},{"key":"9632_CR42","doi-asserted-by":"publisher","first-page":"315","DOI":"10.2307\/2026782","volume":"84","author":"P Kitcher","year":"1987","unstructured":"Kitcher, P., & Salmon, W. (1987). Van Fraassen on explanation. 
Journal of Philosophy, 84, 315\u2013330.","journal-title":"Journal of Philosophy"},{"key":"9632_CR43","doi-asserted-by":"publisher","first-page":"487","DOI":"10.1007\/s13347-019-00372-9","volume":"33","author":"M Krishnan","year":"2019","unstructured":"Krishnan, M. (2019). Against interpretability: A critical examination of the interpretability problem in machine learning. Philosophy &amp; Technology, 33, 487\u2013502. https:\/\/doi.org\/10.1007\/s13347-019-00372-9","journal-title":"Philosophy & Technology"},{"key":"9632_CR44","unstructured":"Lawrence, N. 2020. Intellectual debt and the death of the programmer [Lecture]. University of Cambridge, Department of Engineering."},{"key":"9632_CR45","volume-title":"Scientific understanding: Philosophical perspectives","author":"S Leonelli","year":"2009","unstructured":"Leonelli, S. (2009). Understanding in biology: the impure nature of biological understanding. In H. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives. University of Pittsburgh Press."},{"key":"9632_CR46","unstructured":"Lipton, Z.C. 2017. The mythos of model interpretability. 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). Accessed 1 July 2018. https:\/\/arxiv.org\/abs\/1606.03490"},{"key":"9632_CR47","doi-asserted-by":"publisher","first-page":"1364","DOI":"10.1038\/s41591-020-1034-x","volume":"26","author":"X Liu","year":"2020","unstructured":"Liu, X., Cruz Rivera, S., Moher, D., et al. (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nature Medicine, 26, 1364\u20131374.","journal-title":"Nature Medicine"},{"key":"9632_CR48","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"A London","year":"2019","unstructured":"London, A. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. 
The Hastings Center Report, 49, 15\u201321. https:\/\/doi.org\/10.1002\/hast.973","journal-title":"The Hastings Center Report"},{"key":"9632_CR49","doi-asserted-by":"publisher","first-page":"162","DOI":"10.1016\/j.bbi.2017.01.013","volume":"62","author":"AL Marsland","year":"2017","unstructured":"Marsland, A. L., Kuan, C. D., Sheu, L. K., Krajina, K., Kraynak, T., Manuck, S., & Gianaros, P. J. (2017). Systemic inflammation and resting state connectivity of the default mode network. Brain, Behaviour and Immunology, 62, 162\u2013170. https:\/\/doi.org\/10.1016\/j.bbi.2017.01.013","journal-title":"Brain, Behaviour and Immunology"},{"key":"9632_CR50","doi-asserted-by":"publisher","first-page":"2251","DOI":"10.1056\/NEJMe068134","volume":"355","author":"G Norman","year":"2006","unstructured":"Norman, G. (2006). Building on experience\u2013the development of clinical reasoning. New England Journal of Medicine, 355, 2251\u20132252. https:\/\/doi.org\/10.1056\/NEJMe068134","journal-title":"New England Journal of Medicine"},{"key":"9632_CR51","doi-asserted-by":"publisher","first-page":"96","DOI":"10.1016\/j.shpsa.2019.09.002","volume":"81","author":"R Northcott","year":"2020","unstructured":"Northcott, R. (2020). Big data and prediction: Four case studies. Studies in the History and Philosophy of Science Part A, 81, 96\u2013104. https:\/\/doi.org\/10.1016\/j.shpsa.2019.09.002","journal-title":"Studies in the History and Philosophy of Science Part A"},{"key":"9632_CR52","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1126\/science.aax2342","volume":"366","author":"Z Obermeyer","year":"2019","unstructured":"Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366, 447\u2013453. 
https:\/\/doi.org\/10.1126\/science.aax2342","journal-title":"Science"},{"key":"9632_CR53","doi-asserted-by":"publisher","first-page":"905","DOI":"10.1086\/683328","volume":"82","author":"W Pietsch","year":"2015","unstructured":"Pietsch, W. (2015). Aspects of theory-ladenness in data-intensive science. Philosophy of Science, 82, 905\u2013916. https:\/\/doi.org\/10.1086\/683328","journal-title":"Philosophy of Science"},{"key":"9632_CR54","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1007\/s13347-015-0202-2","volume":"29","author":"W Pietsch","year":"2016","unstructured":"Pietsch, W. (2016). The causal nature of modeling with big data. Philosophy &amp; Technology, 29, 137\u2013171. https:\/\/doi.org\/10.1007\/s13347-015-0202-2","journal-title":"Philosophy & Technology"},{"key":"9632_CR55","doi-asserted-by":"publisher","first-page":"373","DOI":"10.1001\/jamapsychiatry.2013.455","volume":"70","author":"J Posner","year":"2013","unstructured":"Posner, J., Hellerstein, D. J., Gat, I., Mechling, A., Klahr, K., Wang, Z., et al. (2013). Antidepressants normalize the default mode network in patients with dysthymia. JAMA Psychiatry, 70, 373\u2013382. https:\/\/doi.org\/10.1001\/jamapsychiatry.2013.455","journal-title":"JAMA Psychiatry"},{"key":"9632_CR56","doi-asserted-by":"publisher","first-page":"721","DOI":"10.1086\/687858","volume":"83","author":"A Potochnik","year":"2016","unstructured":"Potochnik, A. (2016). Scientific explanation: Putting communication first. Philosophy of Science, 83, 721\u2013732. https:\/\/doi.org\/10.1086\/687858","journal-title":"Philosophy of Science"},{"key":"9632_CR57","unstructured":"Selbst, A. and Barocas, S. 2018. The intuitive appeal of explainable machines. Fordham Law Review 87:1085-1139. Accessed 1 Aug 2019. https:\/\/ir.lawnet.fordham.edu\/flr\/vol87\/iss3\/11"},{"key":"9632_CR58","doi-asserted-by":"crossref","unstructured":"Sendak, M., Elish, M.C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., Bedoya, A., Balu, S. 
and O'Brien, C., (2020) \"The human body is a black box\" supporting clinical decision-making with deep learning. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 99\u2013109).","DOI":"10.1145\/3351095.3372827"},{"key":"9632_CR59","doi-asserted-by":"publisher","first-page":"e36173","DOI":"10.7554\/eLife.36173","volume":"7","author":"M Song","year":"2018","unstructured":"Song, M., Yang, Y., He, J., Yang, Z., Yu, S., Xie, Q., et al. (2018). Prognostication of chronic disorders of consciousness using brain functional networks and clinical characteristics. eLife, 7, e36173. https:\/\/doi.org\/10.7554\/eLife.36173","journal-title":"eLife"},{"key":"9632_CR60","doi-asserted-by":"publisher","first-page":"2244","DOI":"10.1038\/npp.2014.75","volume":"39","author":"R Sripada","year":"2014","unstructured":"Sripada, R., Swain, J., Evans, G. W., Welsh, R. C., & Liberzon, I. (2014). Childhood poverty and stress reactivity are associated with aberrant functional connectivity in default mode network. Neuropsychopharmacology, 39, 2244\u20132251. https:\/\/doi.org\/10.1038\/npp.2014.75","journal-title":"Neuropsychopharmacology"},{"key":"9632_CR61","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:oso\/9780195331448.001.0001","volume-title":"Across the boundaries: Extrapolation in biology and social science","author":"D Steel","year":"2007","unstructured":"Steel, D. (2007). Across the boundaries: Extrapolation in biology and social science. OUP."},{"key":"9632_CR62","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/BF00128919","volume":"11","author":"K Sterelny","year":"1996","unstructured":"Sterelny, K. (1996). Explanatory pluralism in evolutionary biology. Biology and Philosophy, 11, 193\u2013214. 
https:\/\/doi.org\/10.1007\/BF00128919","journal-title":"Biology and Philosophy"},{"key":"9632_CR63","volume-title":"The routledge companion to thought experiments","author":"M Stuart","year":"2018","unstructured":"Stuart, M., et al. (2018). How thought experiments increase understanding. In M. Stuart (Ed.), The routledge companion to thought experiments. Routledge."},{"key":"9632_CR64","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1007\/s11098-017-0863-z","volume":"175","author":"E Sullivan","year":"2018","unstructured":"Sullivan, E. (2018). Understanding: Not know-how. Philosophical Studies, 175, 221\u2013240. https:\/\/doi.org\/10.1007\/s11098-017-0863-z","journal-title":"Philosophical Studies"},{"key":"9632_CR65","doi-asserted-by":"publisher","DOI":"10.1093\/bjps\/axz035","author":"E Sullivan","year":"2019","unstructured":"Sullivan, E. (2019). Understanding from machine learning models. British Journal for the Philosophy of Science. https:\/\/doi.org\/10.1093\/bjps\/axz035","journal-title":"British Journal for the Philosophy of Science"},{"key":"9632_CR66","unstructured":"Tomsett, R., Braines, D., Harborne, D., Preece, A., and Chakraborty, S. (2018). Interpretable to whom? A role-based model for analyzing interpretable machine learning. 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018). https:\/\/arXiv.org\/1806.07552"},{"key":"9632_CR67","doi-asserted-by":"publisher","DOI":"10.1093\/0198244274.001.0001","volume-title":"The scientific image","author":"B Van Fraassen","year":"1980","unstructured":"Van Fraassen, B. (1980). The scientific image. Oxford University Press."},{"issue":"364","key":"9632_CR68","doi-asserted-by":"publisher","DOI":"10.1136\/bmj.l886","volume":"2019","author":"DS Watson","year":"2019","unstructured":"Watson, D. S., Krutzinna, J., Bruce, I., Griffiths, C. E. M., McInnes, I. B., Barnes, M. R., & Floridi, L. (2019). Clinical applications of machine learning: Beyond the black box. BMJ, 2019(364), l886. 
https:\/\/doi.org\/10.1136\/bmj.l886","journal-title":"BMJ"},{"key":"9632_CR69","unstructured":"Weinberger, D. 2018. Optimization over explanation. Accessed 1 Aug 2018. https:\/\/medium.com\/berkman-klein-center\/optimization-over-explanation-41ecb135763d"},{"key":"9632_CR70","doi-asserted-by":"publisher","first-page":"639","DOI":"10.5840\/jphil20071041240","volume":"104","author":"M Weisberg","year":"2007","unstructured":"Weisberg, M. (2007). Three kinds of idealization. Journal of Philosophy, 104, 639\u2013659. https:\/\/doi.org\/10.5840\/jphil20071041240","journal-title":"Journal of Philosophy"},{"key":"9632_CR71","unstructured":"Weller, A. 2017. Challenges for transparency. 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017) https:\/\/arXiv.org\/1708.01870v1"},{"key":"9632_CR72","doi-asserted-by":"publisher","first-page":"997","DOI":"10.1007\/s11229-011-0055-x","volume":"190","author":"D Wilkenfeld","year":"2013","unstructured":"Wilkenfeld, D. (2013). Understanding as representation manipulability. Synthese, 190, 997\u20131016. https:\/\/doi.org\/10.1007\/s11229-011-0055-x","journal-title":"Synthese"},{"key":"9632_CR73","doi-asserted-by":"publisher","first-page":"3367","DOI":"10.1007\/s11229-014-0452-z","volume":"191","author":"D Wilkenfeld","year":"2014","unstructured":"Wilkenfeld, D. (2014). Functional explaining: A new approach to the philosophy of explanation. Synthese, 191, 3367\u20133391. https:\/\/doi.org\/10.1007\/s11229-014-0452-z","journal-title":"Synthese"},{"key":"9632_CR74","doi-asserted-by":"publisher","first-page":"1273","DOI":"10.1007\/s11229-015-0992-x","volume":"194","author":"D Wilkenfeld","year":"2017","unstructured":"Wilkenfeld, D. (2017). MUDdy Understanding. Synthese, 194, 1273\u20131293. 
https:\/\/doi.org\/10.1007\/s11229-015-0992-x","journal-title":"Synthese"},{"key":"9632_CR75","doi-asserted-by":"publisher","first-page":"e1105","DOI":"10.1038\/tp.2017.40","volume":"7","author":"T Wise","year":"2017","unstructured":"Wise, T., Marwood, L., Perkins, A. M., Herane-Vives, A., Joules, R., Lythgoe, D. J., et al. (2017). Instability of default mode network connectivity in major depression: A two-sample confirmation study. Translational Psychiatry, 7, e1105. https:\/\/doi.org\/10.1038\/tp.2017.40","journal-title":"Translational Psychiatry"},{"key":"9632_CR76","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-019-00382-7","author":"C Zednik","year":"2019","unstructured":"Zednik, C. (2019). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy &amp; Technology. https:\/\/doi.org\/10.1007\/s13347-019-00382-7","journal-title":"Philosophy & Technology"},{"key":"9632_CR77","doi-asserted-by":"publisher","first-page":"6457","DOI":"10.1038\/s41598-020-63540-4","volume":"10","author":"L Zhang","year":"2020","unstructured":"Zhang, L., Zuo, X., Ng, K. K., Chong, J. S. X., Shim, H. Y., Ong, M. Q. W., et al. (2020). Distinct BOLD variability changes in the default mode and salience networks in Alzheimer\u2019s disease spectrum and associations with cognitive decline. Scientific Reports, 10, 6457. https:\/\/doi.org\/10.1038\/s41598-020-63540-4","journal-title":"Scientific Reports"},{"key":"9632_CR78","doi-asserted-by":"publisher","first-page":"16220","DOI":"10.1038\/s41598-019-52674-9","volume":"9","author":"M Zhang","year":"2019","unstructured":"Zhang, M., Savill, N., Margulies, D. S., Smallwood, J., & Jefferies, E. (2019). Distinct individual differences in default mode network connectivity relate to off-task thought and text memory during reading. Scientific Reports, 9, 16220. https:\/\/doi.org\/10.1038\/s41598-019-52674-9","journal-title":"Scientific Reports"},{"key":"9632_CR79","unstructured":"Zittrain, J. 2019. 
Intellectual debt: With great power comes great ignorance. Medium, Retrieved July 24. https:\/\/medium.com\/berkman-klein-center\/from-technical-debt-to-intellectual-debt-in-ai-e05ac56a502c."}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09632-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-022-09632-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09632-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,3,22]],"date-time":"2022-03-22T19:18:06Z","timestamp":1647976686000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-022-09632-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,28]]},"references-count":79,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,3]]}},"alternative-id":["9632"],"URL":"https:\/\/doi.org\/10.1007\/s10676-022-09632-3","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,2,28]]},"assertion":[{"value":"5 January 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 February 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no relevant financial or non-financial interests to 
disclose.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"13"}}