{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T14:39:22Z","timestamp":1775745562376,"version":"3.50.1"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,6,16]],"date-time":"2022-06-16T00:00:00Z","timestamp":1655337600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,6,16]],"date-time":"2022-06-16T00:00:00Z","timestamp":1655337600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100008375","name":"University of Basel","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100008375","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2022,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather than explainable. Yet, there is a grave lack of agreement concerning these terms in much of the literature on AI. 
We argue that the seminal distinction made by the philosopher and physician Karl Jaspers between different types of explaining and understanding in psychopathology can be used to promote greater conceptual clarity in the context of Machine Learning (ML). Following Jaspers, we claim that explaining and understanding constitute multi-faceted epistemic approaches that should not be seen as mutually exclusive, but rather as complementary ones as in and of themselves they are necessarily limited. Drawing on the famous example of Watson for Oncology we highlight how Jaspers\u2019 methodology translates to the case of medical AI. Classical considerations from the philosophy of psychiatry can therefore inform a debate at the centre of current AI ethics, which in turn may be crucial for a successful implementation of ethically and legally sound AI in medicine.<\/jats:p>","DOI":"10.1007\/s10676-022-09650-1","type":"journal-article","created":{"date-parts":[[2022,6,16]],"date-time":"2022-06-16T18:02:57Z","timestamp":1655402577000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine"],"prefix":"10.1007","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7428-2619","authenticated-orcid":false,"given":"Georg","family":"Starke","sequence":"first","affiliation":[]},{"given":"Christopher","family":"Poppe","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,6,16]]},"reference":[{"key":"9650_CR1","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). 
IEEE Access, 6, 52138\u201352160","journal-title":"IEEE Access"},{"issue":"1","key":"9650_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s12911-020-01332-6","volume":"20","author":"J Amann","year":"2020","unstructured":"Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 1\u20139","journal-title":"BMC Medical Informatics and Decision Making"},{"key":"9650_CR3","doi-asserted-by":"publisher","first-page":"185","DOI":"10.1016\/j.neunet.2020.07.010","volume":"130","author":"P Angelov","year":"2020","unstructured":"Angelov, P., & Soares, E. (2020). Towards explainable deep neural networks (xDNN). Neural Networks, 130, 185\u2013194","journal-title":"Neural Networks"},{"key":"9650_CR4","doi-asserted-by":"crossref","unstructured":"Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: an analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5), e1424","DOI":"10.1002\/widm.1424"},{"key":"9650_CR60","doi-asserted-by":"crossref","unstructured":"Arbelaez Ossa, L., Starke, G., Lorenzini, G., Vogt, J. E., Shaw, D. M., & Elger, B. S. (2022). Re-focusing explainability in medicine. Digital Health, 8, 20552076221074488.","DOI":"10.1177\/20552076221074488"},{"key":"9650_CR5","volume-title":"Principles of biomedical ethics","author":"TL Beauchamp","year":"2019","unstructured":"Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford: Oxford University Press","edition":"8"},{"key":"9650_CR6","doi-asserted-by":"crossref","unstructured":"Braun, M., Hummel, P., Beck, S., & Dabrock, P. (2021). Primer on an ethics of AI-based decision support systems in the clinic. Journal of medical ethics. 
2021;47:e3.","DOI":"10.1136\/medethics-2019-105860"},{"issue":"1","key":"9650_CR8","doi-asserted-by":"publisher","first-page":"205395171562251","DOI":"10.1177\/2053951715622512","volume":"3","author":"J Burrell","year":"2016","unstructured":"Burrell, J. (2016). How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512","journal-title":"Big Data & Society"},{"key":"9650_CR9","doi-asserted-by":"crossref","unstructured":"Bos, N., Glasgow, K., Gersh, J., Harbison, I., & Lyn Paul, C. (2019, November). Mental models of AI-based systems: User predictions and explanations of image classification results. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol.\u00a063, No. 1, pp.\u00a0184\u2013188). Sage CA: Los Angeles, CA: SAGE Publications","DOI":"10.1177\/1071181319631392"},{"issue":"3","key":"9650_CR10","first-page":"223","volume":"3","author":"D Bzdok","year":"2018","unstructured":"Bzdok, D., & Meyer-Lindenberg, A. (2018). Machine learning for precision psychiatry: opportunities and challenges. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(3), 223\u2013230","journal-title":"Biological Psychiatry: Cognitive Neuroscience and Neuroimaging"},{"issue":"8","key":"9650_CR12","doi-asserted-by":"publisher","first-page":"1301","DOI":"10.1038\/s41591-019-0508-1","volume":"25","author":"G Campanella","year":"2019","unstructured":"Campanella, G., Hanna, M. G., Geneslaw, L., Miraflor, A., Silva, V. W. K., Busam, K. J. \u2026 Fuchs, T. J. (2019). Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature medicine, 25(8), 1301\u20131309","journal-title":"Nature medicine"},{"key":"9650_CR13","doi-asserted-by":"crossref","unstructured":"Choi, Y. I., Chung, J. W., Kim, K. O., Kwon, K. A., Kim, Y. J., Park, D. K. \u2026 Lee, U. (2019). 
Concordance rate between clinicians and Watson for Oncology among patients with advanced gastric cancer: early, real-world experience in Korea. Canadian Journal of Gastroenterology and Hepatology. 2019:8072928.","DOI":"10.1155\/2019\/8072928"},{"key":"9650_CR14","doi-asserted-by":"crossref","unstructured":"DeCamp, M., & Tilburt, J. C. (2019). Why we cannot trust artificial intelligence in medicine. The Lancet Digital Health, 1(8), e390","DOI":"10.1016\/S2589-7500(19)30197-9"},{"issue":"2","key":"9650_CR15","doi-asserted-by":"publisher","first-page":"205395172110359","DOI":"10.1177\/20539517211035955","volume":"8","author":"E Denton","year":"2021","unstructured":"Denton, E., Hanna, A., Amironesei, R., Smart, A., & Nicole, H. (2021). On the genealogy of machine learning datasets: A critical history of ImageNet. Big Data & Society, 8(2), 20539517211035955","journal-title":"Big Data & Society"},{"issue":"4","key":"9650_CR16","doi-asserted-by":"publisher","first-page":"592","DOI":"10.1093\/jamia\/ocz229","volume":"27","author":"WK Diprose","year":"2020","unstructured":"Diprose, W. K., Buist, N., Hua, N., Thurier, Q., Shand, G., & Robinson, R. (2020). Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association, 27(4), 592\u2013600","journal-title":"Journal of the American Medical Informatics Association"},{"key":"9650_CR17","doi-asserted-by":"publisher","first-page":"103498","DOI":"10.1016\/j.artint.2021.103498","volume":"297","author":"JM Dur\u00e1n","year":"2021","unstructured":"Dur\u00e1n, J. M. (2021). Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Artificial Intelligence, 297, 103498","journal-title":"Artificial Intelligence"},{"issue":"5","key":"9650_CR18","first-page":"329","volume":"47","author":"JM Dur\u00e1n","year":"2021","unstructured":"Dur\u00e1n, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? 
on the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329\u2013335","journal-title":"Journal of Medical Ethics"},{"key":"9650_CR19","doi-asserted-by":"crossref","unstructured":"Elgin, C. Z. (2017). True enough. Cambridge, MA: MIT Press","DOI":"10.7551\/mitpress\/9780262036535.001.0001"},{"issue":"6","key":"9650_CR20","doi-asserted-by":"publisher","first-page":"800","DOI":"10.1192\/bjp.151.6.800","volume":"151","author":"KP Ebmeier","year":"1987","unstructured":"Ebmeier, K. P. (1987). Explaining and understanding in psychopathology. The British Journal of Psychiatry, 151(6), 800\u2013804","journal-title":"The British Journal of Psychiatry"},{"issue":"7639","key":"9650_CR21","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115\u2013118","journal-title":"Nature"},{"key":"9650_CR22","doi-asserted-by":"crossref","unstructured":"Ferrario, A., & Loi, M. (2021). The meaning of \u201cExplainability fosters trust in AI\u201d. Available at SSRN 3916396","DOI":"10.2139\/ssrn.3916396"},{"issue":"4","key":"9650_CR23","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V. \u2026 Vayena, E. (2018). AI4People\u2014an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689\u2013707","journal-title":"Minds and Machines"},{"issue":"1","key":"9650_CR24","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s13347-020-00396-6","volume":"33","author":"L Floridi","year":"2020","unstructured":"Floridi, L. (2020). 
AI and its new winter: From myths to realities. Philosophy & Technology, 33(1), 1\u20133","journal-title":"Philosophy & Technology"},{"issue":"4","key":"9650_CR25","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V. \u2026 Schafer, B. (2018). AI4People\u2014an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689\u2013707","journal-title":"Minds and Machines"},{"key":"9650_CR26","doi-asserted-by":"crossref","unstructured":"Gough, J. (2021). On the proper epistemology of the mental in psychiatry: what\u2019s the point of understanding and explaining? The British Journal for the Philosophy of Science (accepted). doi: 10.1086\/715106","DOI":"10.1086\/715106"},{"key":"9650_CR28","doi-asserted-by":"crossref","unstructured":"Hoerl, C. (2013). Jaspers on explaining and understanding in psychiatry. In Stanghellini, G., & Fuchs, T. (Eds.). (2013). One century of Karl Jaspers\u2019 general psychopathology. Oxford: Oxford University Press. 107\u2013120","DOI":"10.1093\/med\/9780199609253.003.0008"},{"key":"9650_CR29","doi-asserted-by":"crossref","unstructured":"Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & M\u00fcller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312","DOI":"10.1002\/widm.1312"},{"key":"9650_CR30","doi-asserted-by":"crossref","unstructured":"Husserl, E. (2020). Studien zur Struktur des Bewusstseins: Teilband III Wille und Handlung Texte aus dem Nachlass (1902\u20131934). Edited by U. Melle, & T. Vongehr. 
Cham: Springer","DOI":"10.1007\/978-3-030-35928-7"},{"issue":"3","key":"9650_CR62","doi-asserted-by":"publisher","first-page":"364","DOI":"10.1038\/s41591-020-0789-4","volume":"26","author":"Stephanie L. Hyland","year":"2020","unstructured":"Hyland, S. L., Faltys, M., H\u00fcser, M., Lyu, X., Gumbsch, T., Esteban, C., Bock, C., Horn, M., Moor, M., Rieck, B., Zimmermann, M., Bodenham, D., Borgwardt, K., R\u00e4tsch, G., Merz, T. M. (2020) Early prediction of circulatory failure in the intensive care unit using machine learning. Nature Medicine 26(3) 364-373 10.1038\/s41591-020-0789-4","journal-title":"Nature Medicine"},{"key":"9650_CR31","doi-asserted-by":"crossref","unstructured":"Jacovi, A., Marasovi\u0107, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp.\u00a0624\u2013635)","DOI":"10.1145\/3442188.3445923"},{"key":"9650_CR32","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-11111-6","volume-title":"Allgemeine Psychopathologie","author":"K Jaspers","year":"1946","unstructured":"Jaspers, K. (1946). Allgemeine Psychopathologie (4th ed.). Berlin: Springer","edition":"4"},{"issue":"1","key":"9650_CR33","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41598-021-84973-5","volume":"11","author":"Z Jie","year":"2021","unstructured":"Jie, Z., Zhiying, Z., & Li, L. (2021). A meta-analysis of Watson for Oncology in clinical application. Scientific reports, 11(1), 1\u201313","journal-title":"Scientific reports"},{"issue":"1","key":"9650_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41598-019-49506-1","volume":"9","author":"PG Knoops","year":"2019","unstructured":"Knoops, P. G., Papaioannou, A., Borghi, A., Breakey, R. W., Wilson, A. T., Jeelani, O. \u2026 Schievano, S. (2019). 
A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Scientific reports, 9(1), 1\u201312","journal-title":"Scientific reports"},{"issue":"2","key":"9650_CR35","doi-asserted-by":"publisher","first-page":"212","DOI":"10.1177\/0957154X13476201","volume":"24","author":"T Kumazaki","year":"2013","unstructured":"Kumazaki, T. (2013). The theoretical root of Karl Jaspers\u2019 General Psychopathology. Part 1: Reconsidering the influence of phenomenology and hermeneutics. History of Psychiatry, 24(2), 212\u2013226","journal-title":"History of Psychiatry"},{"key":"9650_CR36","doi-asserted-by":"publisher","first-page":"700","DOI":"10.3389\/fnhum.2014.00700","volume":"8","author":"T Lombrozo","year":"2014","unstructured":"Lombrozo, T., & Gwynne, N. Z. (2014). Explanation and inference: Mechanistic and functional explanations guide property generalization. Frontiers in Human Neuroscience, 8, 700","journal-title":"Frontiers in Human Neuroscience"},{"issue":"1","key":"9650_CR37","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"AJ London","year":"2019","unstructured":"London, A. J. (2019). Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Center Report, 49(1), 15\u201321","journal-title":"Hastings Center Report"},{"key":"9650_CR38","doi-asserted-by":"crossref","unstructured":"Mittelstadt, B., Russell, C., & Wachter, S. (2019, January). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency, 279\u2013288","DOI":"10.1145\/3287560.3287574"},{"key":"9650_CR39","unstructured":"M\u00fcller, V. C. (2020). Ethics of Artificial Intelligence and Robotics. In E. N. Zalta (ed.) The Stanford Encyclopedia of Philosophy. https:\/\/plato.stanford.edu\/archives\/win2020\/entries\/ethics-ai\/"},{"key":"9650_CR40","doi-asserted-by":"crossref","unstructured":"Nguyen, J. (2020). 
Do fictions explain? Synthese, 199, 3219\u20133244","DOI":"10.1007\/s11229-020-02931-6"},{"issue":"3","key":"9650_CR41","doi-asserted-by":"publisher","first-page":"441","DOI":"10.1007\/s11023-019-09502-w","volume":"29","author":"A P\u00e1ez","year":"2019","unstructured":"P\u00e1ez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441\u2013459","journal-title":"Minds and Machines"},{"key":"9650_CR42","unstructured":"Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., & Sch\u00f6lkopf, B. (2018, July). Learning independent causal mechanisms. Proceedings of the 35th International Conference on Machine Learning, PMLR 80, 4036\u20134044"},{"key":"9650_CR43","doi-asserted-by":"publisher","first-page":"101901","DOI":"10.1016\/j.artmed.2020.101901","volume":"107","author":"T Ploug","year":"2020","unstructured":"Ploug, T., & Holm, S. (2020). The four dimensions of contestable AI diagnostics-A patient-centric approach to explainable AI. Artificial Intelligence in Medicine, 107, 101901","journal-title":"Artificial Intelligence in Medicine"},{"issue":"5\u20136","key":"9650_CR44","doi-asserted-by":"publisher","first-page":"950","DOI":"10.1016\/j.artint.2011.01.006","volume":"175","author":"D Proudfoot","year":"2011","unstructured":"Proudfoot, D. (2011). Anthropomorphism and AI: Turing\u02bcs much misunderstood imitation game. Artificial Intelligence, 175(5\u20136), 950\u2013957","journal-title":"Artificial Intelligence"},{"issue":"2","key":"9650_CR46","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1080\/21507740.2020.1740350","volume":"11","author":"A Salles","year":"2020","unstructured":"Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. 
AJOB neuroscience, 11(2), 88\u201395","journal-title":"AJOB neuroscience"},{"issue":"1","key":"9650_CR47","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1007\/s00115-011-3365-9","volume":"83","author":"JE Schlimme","year":"2012","unstructured":"Schlimme, J. E., Paprotny, T., & Br\u00fcckner, B. (2012). Karl Jaspers. Der Nervenarzt, 83(1), 84\u201391","journal-title":"Der Nervenarzt"},{"key":"9650_CR48","unstructured":"Sch\u00f6lkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., & Mooij, J. (2012). On causal and anticausal learning. 29th International Conference on Machine Learning (ICML 2012), 1255\u20131262"},{"key":"9650_CR49","unstructured":"Shanahan, M. (2016). Conscious exotica. Aeon.\nhttps:\/\/aeon.co\/essays\/beyond-humans-what-other-kinds-of-minds-might-be-out-there (6.4.2021)"},{"key":"9650_CR50","doi-asserted-by":"publisher","unstructured":"Spano, N. (2021). Volitional causality vs natural causality: reflections on their compatibility in Husserl\u2019s phenomenology of action. Phenomenology and the Cognitive Sciences, 1\u201319. doi: https:\/\/doi.org\/10.1007\/s11097-020-09724-9","DOI":"10.1007\/s11097-020-09724-9"},{"issue":"4","key":"9650_CR51","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1109\/MSPEC.2019.8678513","volume":"56","author":"E Strickland","year":"2019","unstructured":"Strickland, E. (2019). IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectrum, 56(4), 24\u201331","journal-title":"IEEE Spectrum"},{"key":"9650_CR61","doi-asserted-by":"crossref","unstructured":"Starke, G. (2021). The Emperor\u2019s New Clothes? Transparency and Trust in Machine Learning for Clinical Neuroscience. In: Friedrich, O., Wolkenstein, A., Bublitz, C., Jox, R.J., Racine, E. (eds.), Clinical Neurotechnology meets Artificial Intelligence. Advances in Neuroethics. Cham: Springer. 
183\u2013196.","DOI":"10.1007\/978-3-030-64590-8_14"},{"key":"9650_CR52","doi-asserted-by":"publisher","DOI":"10.1093\/bjps\/axz035","author":"E Sullivan","year":"2020","unstructured":"Sullivan, E. (2020). Understanding from machine learning models. The British Journal for the Philosophy of Science. doi: https:\/\/doi.org\/10.1093\/bjps\/axz035","journal-title":"The British Journal for the Philosophy of Science"},{"issue":"1","key":"9650_CR53","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1038\/s41591-018-0300-7","volume":"25","author":"EJ Topol","year":"2019","unstructured":"Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature medicine, 25(1), 44\u201356","journal-title":"Nature medicine"},{"key":"9650_CR54","doi-asserted-by":"crossref","unstructured":"Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS medicine, 15(11), e1002689","DOI":"10.1371\/journal.pmed.1002689"},{"issue":"2","key":"9650_CR56","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1093\/idpl\/ipx005","volume":"7","author":"S Wachter","year":"2017","unstructured":"Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76\u201399","journal-title":"International Data Privacy Law"},{"issue":"3","key":"9650_CR57","doi-asserted-by":"publisher","first-page":"417","DOI":"10.1007\/s11023-019-09506-6","volume":"29","author":"D Watson","year":"2019","unstructured":"Watson, D. (2019). The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds and Machines, 29(3), 417\u2013440","journal-title":"Minds and Machines"},{"issue":"2","key":"9650_CR58","doi-asserted-by":"publisher","first-page":"169","DOI":"10.2307\/2504798","volume":"19","author":"W Windelband","year":"1980","unstructured":"Windelband, W. (1980). 
Rectorial Address, Strasbourg, 1894. Translation by Guy Oakes. History and Theory, 19(2), 169\u2013185","journal-title":"History and Theory"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09650-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-022-09650-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09650-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T16:16:55Z","timestamp":1672762615000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-022-09650-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,16]]},"references-count":56,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,9]]}},"alternative-id":["9650"],"URL":"https:\/\/doi.org\/10.1007\/s10676-022-09650-1","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,16]]},"assertion":[{"value":"16 June 2022","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"None.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"26"}}