{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T15:56:38Z","timestamp":1772726198906,"version":"3.50.1"},"reference-count":79,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T00:00:00Z","timestamp":1761091200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T00:00:00Z","timestamp":1761091200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100019559","name":"Carl von Ossietzky Universit\u00e4t Oldenburg","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100019559","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>While trust is foundational to the doctor-patient relationship, the introduction of AI into healthcare settings poses the risk of eroding this trust, and such erosion cannot be countered simply by appealing to the notion of \u201ctrustworthy AI.\u201d We argue that trust presupposes specific epistemic attitudes that cannot be meaningfully applied to AI systems. Accordingly, our focus is not on specifying which capabilities AI must exhibit in order to appear trustworthy, but on examining from an epistemological perspective how the use of AI reshapes the dynamics of trust within the doctor-patient relationship. To this end, we first sketch conceptions of trust and demonstrate how trust differs from reliance. 
We then combine the model of Computational Reliabilism with an epistemic framework to develop a matrix for the ethical analysis of our use cases. Finally, we apply this framework to three scenarios of melanoma detection, risk prediction, and psychotherapy chatbots, which we construct by mapping epistemic stances across different modes of human-machine interaction, ranging from collaborative support with varying degrees of autonomy to the replacement of human-human interaction. We argue that the application of AI in the doctor-patient relationship exposes what we call a \u201creliability gap\u201d \u2014 a conceptual space where the opaque nature of advanced AI systems prevents both doctors and patients from independently verifying their reliability. This creates a dynamic where reliability in the AI\u2019s performance is increasingly mediated by the doctor as a proxy. Our use cases demonstrate that the more autonomous and opaque AI systems are, the more trust in the doctor becomes essential for bridging reliability gaps, while threatening to overburden the doctor\u2019s central role.<\/jats:p>","DOI":"10.1007\/s10676-025-09867-w","type":"journal-article","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T03:32:09Z","timestamp":1761103929000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Trust and Artificial Intelligence in the Doctor-Patient Relationship: Epistemological Preconditions and Reliability 
Gaps"],"prefix":"10.1007","volume":"27","author":[{"given":"Eike","family":"Buhr","sequence":"first","affiliation":[]},{"given":"Orhan","family":"Onder","sequence":"additional","affiliation":[]},{"given":"Pranab","family":"Rudra","sequence":"additional","affiliation":[]},{"given":"Frank","family":"Ursin","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,22]]},"reference":[{"issue":"2","key":"9867_CR1","doi-asserted-by":"publisher","first-page":"e0000016","DOI":"10.1371\/journal.pdig.0000016","volume":"1","author":"J Amann","year":"2022","unstructured":"Amann, J., Vetter, D., Blomberg, S. N., Christensen, H. C., Coffee, M., Gerke, S., Gilbert, T. K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Str\u00fcmke, I., Zicari, R. V., & Madai, V. I. (2022). To explain or not to explain?\u2014Artificial intelligence explainability in clinical decision support systems. PLoS Digital Health, 1(2), e0000016. https:\/\/doi.org\/10.1371\/journal.pdig.0000016. & on behalf of the, Z. I. i.","journal-title":"PLoS Digital Health"},{"issue":"6","key":"9867_CR2","doi-asserted-by":"publisher","first-page":"e15154","DOI":"10.2196\/15154","volume":"22","author":"O Asan","year":"2020","unstructured":"Asan, O., Bayrak, A. E., & Choudhury, A. (2020). Artificial intelligence and human trust in healthcare: Focus on clinicians [Viewpoint]. Journal of Medical Internet Research, 22(6), e15154. https:\/\/doi.org\/10.2196\/15154","journal-title":"Journal of Medical Internet Research"},{"issue":"6","key":"9867_CR3","doi-asserted-by":"publisher","first-page":"589","DOI":"10.1001\/jamainternmed.2023.1838","volume":"183","author":"JW Ayers","year":"2023","unstructured":"Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). 
Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589\u2013596. https:\/\/doi.org\/10.1001\/jamainternmed.2023.1838","journal-title":"JAMA Internal Medicine"},{"issue":"2","key":"9867_CR4","doi-asserted-by":"publisher","first-page":"231","DOI":"10.1086\/292745","volume":"96","author":"A Baier","year":"1986","unstructured":"Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231\u2013260.","journal-title":"Ethics"},{"key":"9867_CR5","unstructured":"Baier, A. (1991). Two lectures on trust: lecture 1, trust and its vulnerabilities and lecture 2, sustaining trust. In G. B. Peterson (Ed.), Tanner lectures on human values (Vol. 13, pp. 109\u2013174). University of Utah."},{"issue":"15","key":"9867_CR6","doi-asserted-by":"publisher","first-page":"2522","DOI":"10.1017\/S0033291721001008","volume":"51","author":"DS Barron","year":"2021","unstructured":"Barron, D. S. (2021). Commentary: The ethical challenges of machine learning in psychiatry: A focus on data, diagnosis, and treatment. Psychological Medicine, 51(15), 2522\u20132524. https:\/\/doi.org\/10.1017\/S0033291721001008","journal-title":"Psychological Medicine"},{"issue":"4","key":"9867_CR7","doi-asserted-by":"publisher","first-page":"4167","DOI":"10.1007\/s43681-025-00690-z","volume":"5","author":"S Blanco","year":"2025","unstructured":"Blanco, S. (2025a). Human trust in AI: A relationship beyond reliance. AI and Ethics, 5(4), 4167\u20134180. https:\/\/doi.org\/10.1007\/s43681-025-00690-z","journal-title":"AI and Ethics"},{"key":"9867_CR8","unstructured":"Blanco, S. (2025b). Trusting as a moral act: Trustworthy AI and responsibility. Eberhard Karls Universit\u00e4t T\u00fcbingen."},{"issue":"3","key":"9867_CR9","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1002\/hast.1207","volume":"51","author":"M Braun","year":"2021","unstructured":"Braun, M., Bleher, H., & Hummel, P. (2021). 
A leap of faith: Is there a formula for trustworthy AI? Hastings Center Report, 51(3), 17\u201322. https:\/\/doi.org\/10.1002\/hast.1207","journal-title":"Hastings Center Report"},{"key":"9867_CR10","unstructured":"Bryson, J. (2018). AI & Global Governance: No One Should Trust AI. UNU-CPR (blog). https:\/\/unu.edu\/cpr\/blog-post\/ai-global-governance-no-one-should-trust-ai"},{"issue":"1","key":"9867_CR11","doi-asserted-by":"publisher","first-page":"205395171562251","DOI":"10.1177\/2053951715622512","volume":"3","author":"J Burrell","year":"2016","unstructured":"Burrell, J. (2016). How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512. https:\/\/doi.org\/10.1177\/2053951715622512","journal-title":"Big Data & Society"},{"key":"9867_CR12","unstructured":"Carrington, B. (2023). AI mental health chatbot that predicts disorders becomes first in world to gain Class IIa UKCA medical device status. https:\/\/www.limbic.ai\/blog\/class-ii-a"},{"issue":"1","key":"9867_CR13","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1007\/s10676-011-9279-1","volume":"14","author":"M Coeckelbergh","year":"2012","unstructured":"Coeckelbergh, M. (2012). Can we trust robots? Ethics and Information Technology, 14(1), 53\u201360. https:\/\/doi.org\/10.1007\/s10676-011-9279-1","journal-title":"Ethics and Information Technology"},{"issue":"2","key":"9867_CR14","doi-asserted-by":"publisher","first-page":"94","DOI":"10.7861\/futurehosp.6-2-94","volume":"6","author":"T Davenport","year":"2019","unstructured":"Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94\u201398. https:\/\/doi.org\/10.7861\/futurehosp.6-2-94","journal-title":"Future Healthcare Journal"},{"key":"9867_CR15","unstructured":"Dreyfus, H. L. (1992). What computers still can\u2019t do: A critique of artificial reason. 
MIT Press."},{"issue":"4","key":"9867_CR16","doi-asserted-by":"publisher","first-page":"645","DOI":"10.1007\/s11023-018-9481-6","volume":"28","author":"JM Dur\u00e1n","year":"2018","unstructured":"Dur\u00e1n, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645\u2013666. https:\/\/doi.org\/10.1007\/s11023-018-9481-6","journal-title":"Minds and Machines"},{"issue":"5","key":"9867_CR17","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1136\/medethics-2020-106820","volume":"47","author":"JM Dur\u00e1n","year":"2021","unstructured":"Dur\u00e1n, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329\u2013335. https:\/\/doi.org\/10.1136\/medethics-2020-106820","journal-title":"Journal of Medical Ethics"},{"issue":"1","key":"9867_CR18","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1007\/s13347-025-00843-2","volume":"38","author":"JM Dur\u00e1n","year":"2025","unstructured":"Dur\u00e1n, J. M., & Pozzi, G. (2025). Trust and trustworthiness in AI. Philosophy & Technology, 38(1), 16. https:\/\/doi.org\/10.1007\/s13347-025-00843-2","journal-title":"Philosophy & Technology"},{"issue":"3","key":"9867_CR19","doi-asserted-by":"publisher","first-page":"731","DOI":"10.3233\/jad-161197","volume":"65","author":"M Dyrba","year":"2018","unstructured":"Dyrba, M., Grothe, M. J., Mohammadi, A., Binder, H., Kirste, T., & Teipel, S. J. (2018). Comparison of different hypotheses regarding the spread of Alzheimer\u2019s disease using Markov random fields and multimodal imaging. Journal of Alzheimer\u2019s Disease, 65(3), 731\u2013746. 
https:\/\/doi.org\/10.3233\/jad-161197","journal-title":"Journal of Alzheimer's Disease"},{"issue":"1","key":"9867_CR20","doi-asserted-by":"publisher","first-page":"191","DOI":"10.1186\/s13195-021-00924-2","volume":"13","author":"M Dyrba","year":"2021","unstructured":"Dyrba, M., Hanzig, M., Altenstein, S., Bader, S., Ballarini, T., Brosseron, F., Buerger, K., Cantr\u00e9, D., Dechent, P., Dobisch, L., D\u00fczel, E., Ewers, M., Fliessbach, K., Glanz, W., Haynes, J. D., Heneka, M. T., Janowitz, D., Keles, D. B., & Kilimann, I. (2021). Improving 3D convolutional neural network comprehensibility via interactive visualization of relevance maps: Evaluation in Alzheimer\u2019s disease. Alzheimer\u2019s Research & Therapy, 13(1), 191. https:\/\/doi.org\/10.1186\/s13195-021-00924-2. \u2026 for the Adni, A. D. s. g.","journal-title":"Alzheimer's Research & Therapy"},{"key":"9867_CR21","doi-asserted-by":"publisher","first-page":"e43882","DOI":"10.2196\/43882","volume":"10","author":"Y Ehrt-Sch\u00e4fer","year":"2023","unstructured":"Ehrt-Sch\u00e4fer, Y., Rusmir, M., Vetter, J., Seifritz, E., M\u00fcller, M., & Kleim, B. (2023). Feasibility, Adherence, and effectiveness of blended psychotherapy for severe mental illnesses: Scoping review [Review]. JMIR Ment Health, 10, e43882. https:\/\/doi.org\/10.2196\/43882","journal-title":"JMIR Ment Health"},{"issue":"7639","key":"9867_CR22","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115\u2013118. https:\/\/doi.org\/10.1038\/nature21056","journal-title":"Nature"},{"key":"9867_CR23","doi-asserted-by":"publisher","unstructured":"Faulkner, P. (2011). Knowledge on trust. Oxford University Press. 
https:\/\/doi.org\/10.1093\/acprof:oso\/9780199589784.001.0001","DOI":"10.1093\/acprof:oso\/9780199589784.001.0001"},{"key":"9867_CR24","doi-asserted-by":"publisher","unstructured":"Faulkner, P. (2017). The Problem of Trust. In P. Faulkner & T. Simpson (Eds.), The Philosophy of Trust (pp. 109\u2013128). Oxford University Press. https:\/\/doi.org\/10.1093\/acprof:oso\/9780198732549.003.0007","DOI":"10.1093\/acprof:oso\/9780198732549.003.0007"},{"key":"9867_CR25","doi-asserted-by":"publisher","unstructured":"Ferrario, A., & Loi, M. (2022). How Explainability Contributes to Trust in AI. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea. https:\/\/doi.org\/10.1145\/3531146.3533202","DOI":"10.1145\/3531146.3533202"},{"issue":"1","key":"9867_CR26","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1007\/s11136-008-9430-6","volume":"18","author":"J Greenhalgh","year":"2009","unstructured":"Greenhalgh, J. (2009). The applications of PROs in clinical practice: What are they, do they work, and why? Quality of Life Research, 18(1), 115\u2013123. https:\/\/doi.org\/10.1007\/s11136-008-9430-6","journal-title":"Quality of Life Research"},{"issue":"3","key":"9867_CR27","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1136\/medethics-2019-105586","volume":"46","author":"T Grote","year":"2019","unstructured":"Grote, T., & Berens, P. (2019). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205\u2013211. https:\/\/doi.org\/10.1136\/medethics-2019-105586","journal-title":"Journal of Medical Ethics"},{"key":"9867_CR28","doi-asserted-by":"publisher","unstructured":"Gunkel, D. J. (2023). Person, Thing, Robot: A moral and legal ontology for the 21st century and beyond. The MIT Press. 
https:\/\/doi.org\/10.7551\/mitpress\/14983.001.0001","DOI":"10.7551\/mitpress\/14983.001.0001"},{"issue":"4","key":"9867_CR29","doi-asserted-by":"publisher","first-page":"613","DOI":"10.1111\/1468-0009.00223","volume":"79","author":"MA Hall","year":"2001","unstructured":"Hall, M. A., Dugan, E., Zheng, B., & Mishra, A. K. (2001). Trust in physicians and medical institutions: What is it, can it be measured, and does it matter? Milbank Quarterly, 79(4), 613\u2013639. https:\/\/doi.org\/10.1111\/1468-0009.00223","journal-title":"Milbank Quarterly"},{"key":"9867_CR30","unstructured":"Hardin, R. (2002). Trust and trustworthiness. Russell Sage Foundation. http:\/\/www.jstor.org\/stable\/10.7758\/9781610442718"},{"issue":"7","key":"9867_CR31","doi-asserted-by":"publisher","first-page":"478","DOI":"10.1136\/medethics-2019-105935","volume":"46","author":"JJ Hatherley","year":"2020","unstructured":"Hatherley, J. J. (2020). Limits of trust in medical AI. Journal of Medical Ethics, 46(7), 478\u2013481. https:\/\/doi.org\/10.1136\/medethics-2019-105935","journal-title":"Journal of Medical Ethics"},{"issue":"1","key":"9867_CR32","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1111\/nous.12000","volume":"48","author":"K Hawley","year":"2014","unstructured":"Hawley, K. (2014). Trust, distrust and commitment. No\u00fbs, 48(1), 1\u201320. https:\/\/doi.org\/10.1111\/nous.12000","journal-title":"No\u00fbs"},{"key":"9867_CR33","doi-asserted-by":"publisher","unstructured":"Hawley, K. (2017). Trustworthy groups and organizations. In P. Faulkner & T. Simpson (Eds.), The philosophy of trust. Oxford University Press. https:\/\/doi.org\/10.1093\/acprof:oso\/9780198732549.003.0014","DOI":"10.1093\/acprof:oso\/9780198732549.003.0014"},{"key":"9867_CR34","unstructured":"HLEGoAI (2019). Ethics guidelines for trustworthy AI. 
Brussels."},{"issue":"1","key":"9867_CR35","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1080\/00048409412345881","volume":"72","author":"R Holton","year":"1994","unstructured":"Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72(1), 63\u201376. https:\/\/doi.org\/10.1080\/00048409412345881","journal-title":"Australasian Journal of Philosophy"},{"key":"9867_CR36","doi-asserted-by":"publisher","unstructured":"Jacovi, A., Marasovi\u0107, A., Miller, T., & Goldberg, Y. (2021). Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada. https:\/\/doi.org\/10.1145\/3442188.3445923","DOI":"10.1145\/3442188.3445923"},{"issue":"1","key":"9867_CR37","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1086\/233694","volume":"107","author":"K Jones","year":"1996","unstructured":"Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4\u201325. http:\/\/www.jstor.org\/stable\/2382241","journal-title":"Ethics"},{"issue":"2","key":"9867_CR38","doi-asserted-by":"publisher","first-page":"55","DOI":"10.2307\/2564672","volume":"96","author":"K Jones","year":"1999","unstructured":"Jones, K. (1999). Second-hand moral knowledge. Journal of Philosophy, 96(2), 55.","journal-title":"Journal of Philosophy"},{"issue":"1","key":"9867_CR39","doi-asserted-by":"publisher","first-page":"61","DOI":"10.1086\/667838","volume":"123","author":"K Jones","year":"2012","unstructured":"Jones, K. (2012). Trustworthiness. Ethics, 123(1), 61\u201385.","journal-title":"Ethics"},{"issue":"4","key":"9867_CR40","doi-asserted-by":"publisher","first-page":"955","DOI":"10.1007\/s11098-018-1221-5","volume":"176","author":"K Jones","year":"2019","unstructured":"Jones, K. (2019). Trust, distrust, and affective looping. Philosophical Studies, 176(4), 955\u2013968. 
https:\/\/doi.org\/10.1007\/s11098-018-1221-5","journal-title":"Philosophical Studies"},{"issue":"3","key":"9867_CR41","doi-asserted-by":"publisher","first-page":"368","DOI":"10.3366\/E1742360007000147","volume":"4","author":"A Keren","year":"2007","unstructured":"Keren, A. (2007). Epistemic authority, testimony and the transmission of knowledge. Episteme, 4(3), 368\u2013381. https:\/\/doi.org\/10.3366\/E1742360007000147","journal-title":"Episteme"},{"issue":"3","key":"9867_CR42","doi-asserted-by":"publisher","first-page":"52","DOI":"10.1007\/s44206-023-00073-z","volume":"2","author":"BH Lang","year":"2023","unstructured":"Lang, B. H., Nyholm, S., & Blumenthal-Barby, J. (2023). Responsibility gaps and black box healthcare AI: Shared responsibilization as a solution. Digital Society, 2(3), 52. https:\/\/doi.org\/10.1007\/s44206-023-00073-z","journal-title":"Digital Society"},{"issue":"1","key":"9867_CR43","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"AJ London","year":"2019","unstructured":"London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15\u201321. https:\/\/doi.org\/10.1002\/hast.973","journal-title":"Hastings Center Report"},{"issue":"5","key":"9867_CR44","doi-asserted-by":"publisher","first-page":"424","DOI":"10.1111\/bioe.13158","volume":"37","author":"G Lorenzini","year":"2023","unstructured":"Lorenzini, G., Arbelaez Ossa, L., Shaw, D. M., & Elger, B. S. (2023). Artificial intelligence and the doctor\u2013patient relationship expanding the paradigm of shared decision making. Bioethics, 37(5), 424\u2013429. https:\/\/doi.org\/10.1111\/bioe.13158","journal-title":"Bioethics"},{"key":"9867_CR45","doi-asserted-by":"publisher","unstructured":"Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in Human-Robot Interaction (pp. 
3\u201325). Academic Press. https:\/\/doi.org\/10.1016\/B978-0-12-819472-0.00001-0","DOI":"10.1016\/B978-0-12-819472-0.00001-0"},{"key":"9867_CR46","unstructured":"McLeod, C. (2023). Trust. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy."},{"issue":"1","key":"9867_CR47","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1007\/s11023-022-09620-y","volume":"33","author":"J M\u00f6kander","year":"2023","unstructured":"M\u00f6kander, J., Sheth, M., Watson, D. S., & Floridi, L. (2023). The switch, the ladder, and the matrix: Models for classifying AI systems. Minds and Machines, 33(1), 221\u2013248. https:\/\/doi.org\/10.1007\/s11023-022-09620-y","journal-title":"Minds and Machines"},{"key":"9867_CR48","doi-asserted-by":"publisher","unstructured":"Montgomery, K. (2005). How doctors think: Clinical judgment and the practice of medicine. Oxford University Press. https:\/\/doi.org\/10.1093\/oso\/9780195187120.001.0001","DOI":"10.1093\/oso\/9780195187120.001.0001"},{"issue":"1","key":"9867_CR49","doi-asserted-by":"publisher","DOI":"10.1186\/s12910-025-01243-z","volume":"26","author":"SCE Nouis","year":"2025","unstructured":"Nouis, S. C. E., Uren, V., & Jariwala, S. (2025). Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: A qualitative study of healthcare professionals\u2019 perspectives in the UK. BMC Medical Ethics, 26(1), Article 89. https:\/\/doi.org\/10.1186\/s12910-025-01243-z","journal-title":"BMC Medical Ethics"},{"issue":"6464","key":"9867_CR50","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1126\/science.aax2342","volume":"366","author":"Z Obermeyer","year":"2019","unstructured":"Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447\u2013453. 
https:\/\/doi.org\/10.1126\/science.aax2342","journal-title":"Science"},{"key":"9867_CR200","unstructured":"Onder, O. (2022). Epistemolojik ve Etik Acidan Klinik Karar Destek Sistemleri. In T. Bardak\u00e7\u0131 & M. I. Karaman (Eds.), Yapay Zeka Eti\u011fi (pp. 147-160). Isar Yay\u0131nlar\u0131."},{"key":"9867_CR101","unstructured":"Onder, O. (2025). Klinik Karar Destek Sistemleri Ba\u011flam\u0131nda T\u0131pta Yapay Zeka Kullan\u0131m\u0131nda Etik Sorun Alanlar\u0131 (Publication No. 949041) [Doctoral dissertation, Istanbul University]. Y\u00d6K Ulusal Tez Merkezi. https:\/\/tez.yok.gov.tr\/UlusalTezMerkezi\/"},{"key":"9867_CR51","doi-asserted-by":"publisher","unstructured":"Pasquale, F. (2015). The black box society. Harvard University Press. https:\/\/doi.org\/10.4159\/harvard.9780674736061","DOI":"10.4159\/harvard.9780674736061"},{"key":"9867_CR52","doi-asserted-by":"publisher","unstructured":"Patel, V. L., Arocha, J. F., & Zhang, J. (2012). Medical reasoning and thinking. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning. Oxford University Press. https:\/\/doi.org\/10.1093\/oxfordhb\/9780199734689.013.0037","DOI":"10.1093\/oxfordhb\/9780199734689.013.0037"},{"key":"9867_CR53","unstructured":"Polanyi, M. (1966). The Tacit dimension. Chicago University Press."},{"issue":"3","key":"9867_CR54","doi-asserted-by":"publisher","first-page":"735","DOI":"10.1007\/s43681-022-00200-5","volume":"3","author":"K Reinhardt","year":"2023","unstructured":"Reinhardt, K. (2023). Trust and trustworthiness in AI ethics. AI and Ethics, 3(3), 735\u2013744. https:\/\/doi.org\/10.1007\/s43681-022-00200-5","journal-title":"AI and Ethics"},{"key":"9867_CR55","doi-asserted-by":"publisher","unstructured":"Riedl, R., Hogeterp, S. A., & Reuter, M. (2024). Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? 
Empirical evidence and implications for medical practice [Original Research]. Frontiers in Psychology, 15, Article 1422177. https:\/\/doi.org\/10.3389\/fpsyg.2024.1422177","DOI":"10.3389\/fpsyg.2024.1422177"},{"key":"9867_CR56","unstructured":"Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson."},{"issue":"5","key":"9867_CR57","doi-asserted-by":"publisher","first-page":"2749","DOI":"10.1007\/s11948-020-00228-y","volume":"26","author":"M Ryan","year":"2020","unstructured":"Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749\u20132767. https:\/\/doi.org\/10.1007\/s11948-020-00228-y","journal-title":"Science and Engineering Ethics"},{"key":"9867_CR120","doi-asserted-by":"publisher","unstructured":"Safdari, A. (2025). Toward an empathy-based trust in human-otheroid relations. AI & SOCIETY, 40(5), 3123\u20133138. https:\/\/doi.org\/10.1007\/s00146-024-02155-z","DOI":"10.1007\/s00146-024-02155-z"},{"issue":"1","key":"9867_CR58","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1186\/s12911-023-02162-y","volume":"23","author":"A Sauerbrei","year":"2023","unstructured":"Sauerbrei, A., Kerasidou, A., Lucivero, F., & Hallowell, N. (2023). The impact of artificial intelligence on the person-centred, doctor-patient relationship: Some problems and solutions. BMC Medical Informatics and Decision Making, 23(1), 73. https:\/\/doi.org\/10.1186\/s12911-023-02162-y","journal-title":"BMC Medical Informatics and Decision Making"},{"key":"9867_CR59","doi-asserted-by":"publisher","DOI":"10.2196\/47031","volume":"11","author":"D Shevtsova","year":"2024","unstructured":"Shevtsova, D., Ahmed, A., Boot, I. W. A., Sanges, C., Hudecek, M., Jacobs, J. J. L., Hort, S., & Vrijhoef, H. J. M. (2024). Trust in and acceptance of artificial intelligence applications in medicine: Mixed methods study. JMIR Human Factors, 11, Article e47031. 
https:\/\/doi.org\/10.2196\/47031","journal-title":"JMIR Human Factors"},{"key":"9867_CR60","doi-asserted-by":"publisher","unstructured":"Simmler, M., & Frischknecht, R. (2021). A taxonomy of human\u2013machine collaboration: capturing automation and technical autonomy. AI & SOCIETY, 36. https:\/\/doi.org\/10.1007\/s00146-020-01004-z","DOI":"10.1007\/s00146-020-01004-z"},{"key":"9867_CR105","doi-asserted-by":"publisher","unstructured":"Sperling, R. A., Aisen, P. S., Beckett, L. A., Bennett, D. A., Craft, S., Fagan, A. M., Iwatsubo, T., Jack, C. R., Jr., Kaye, J., Montine, T. J., Park, D. C., Reiman, E. M., Rowe, C. C., Siemers, E., Stern, Y., Yaffe, K., Carrillo, M. C., Thies, B., Morrison-Bogorad, M., \u2026 Phelps, C. H. (2011). Toward defining the preclinical stages of Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement, 7(3), 280\u2013292. https:\/\/doi.org\/10.1016\/j.jalz.2011.03.003","DOI":"10.1016\/j.jalz.2011.03.003"},{"issue":"4","key":"9867_CR61","doi-asserted-by":"publisher","first-page":"e0000826","DOI":"10.1371\/journal.pdig.0000826","volume":"4","author":"AM Stroud","year":"2025","unstructured":"Stroud, A. M., Minteer, S. A., Zhu, X., Ridgeway, J. L., Miller, J. E., & Barry, B. A. (2025). Patient information needs for transparent and trustworthy cardiovascular artificial intelligence: A qualitative study. PLoS Digital Health, 4(4), e0000826. https:\/\/doi.org\/10.1371\/journal.pdig.0000826","journal-title":"PLoS Digital Health"},{"issue":"3","key":"9867_CR62","doi-asserted-by":"publisher","first-page":"491","DOI":"10.1111\/jep.13530","volume":"27","author":"J Szalai","year":"2021","unstructured":"Szalai, J. (2021). The potential use of artificial intelligence in the therapy of borderline personality disorder. Journal of Evaluation in Clinical Practice, 27(3), 491\u2013496. 
https:\/\/doi.org\/10.1111\/jep.13530","journal-title":"Journal of Evaluation in Clinical Practice"},{"issue":"2","key":"9867_CR63","doi-asserted-by":"publisher","first-page":"243","DOI":"10.1007\/s11023-010-9201-3","volume":"20","author":"M Taddeo","year":"2010","unstructured":"Taddeo, M. (2010a). Modelling trust in artificial agents, a first step toward the analysis of e-Trust. Minds and Machines, 20(2), 243\u2013257. https:\/\/doi.org\/10.1007\/s11023-010-9201-3","journal-title":"Minds and Machines"},{"issue":"3","key":"9867_CR64","doi-asserted-by":"publisher","first-page":"283","DOI":"10.1007\/s12130-010-9113-9","volume":"23","author":"M Taddeo","year":"2010","unstructured":"Taddeo, M. (2010b). Trust in technology: A distinctive and a problematic relation. Knowledge Technology & Policy, 23(3), 283\u2013286. https:\/\/doi.org\/10.1007\/s12130-010-9113-9","journal-title":"Knowledge Technology & Policy"},{"key":"9867_CR65","doi-asserted-by":"publisher","first-page":"23","DOI":"10.4018\/jthi.2009040102","volume":"5","author":"M Taddeo","year":"2011","unstructured":"Taddeo, M. (2011). Defining trust and e-trust. International Journal of Technology and Human Interaction, 5, 23\u201335. https:\/\/doi.org\/10.4018\/jthi.2009040102","journal-title":"International Journal of Technology and Human Interaction"},{"issue":"4","key":"9867_CR66","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1002\/tht3.259","volume":"6","author":"J Tallant","year":"2017","unstructured":"Tallant, J. (2017). Commitment in cases of trust and distrust. Thought: A Journal of Philosophy, 6(4), 261\u2013267. https:\/\/doi.org\/10.1002\/tht3.259","journal-title":"Thought: A Journal of Philosophy"},{"key":"9867_CR67","unstructured":"Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again (1st ed.). Basic Books."},{"key":"9867_CR100","doi-asserted-by":"publisher","unstructured":"Ursin, F., Lindner, F., Ropinski, T., Salloch, S., & Timmermann, C. 
(2023). Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Ethik in der Medizin, 6(5), 52138. https:\/\/doi.org\/10.1007\/s00481-023-00761-x","DOI":"10.1007\/s00481-023-00761-x"},{"key":"9867_CR109","doi-asserted-by":"publisher","unstructured":"Ursin, F., Timmermann, C., & Steger, F. (2021). Ethical Implications of Alzheimer's Disease Prediction in Asymptomatic Individuals through Artificial Intelligence. Diagnostics (Basel, Switzerland), 11(3), Article 440. https:\/\/doi.org\/10.3390\/diagnostics11030440","DOI":"10.3390\/diagnostics11030440"},{"key":"9867_CR68","unstructured":"Vernon, D. (2014). Artificial cognitive systems: A primer. The MIT Press. http:\/\/www.jstor.org\/stable\/j.ctt17kk720"},{"key":"9867_CR69","doi-asserted-by":"publisher","unstructured":"Walker, M. (2006). Moral repair: Reconstructing moral relations after wrongdoing. Cambridge University Press. https:\/\/doi.org\/10.1017\/CBO9780511618024","DOI":"10.1017\/CBO9780511618024"},{"issue":"9","key":"9867_CR70","doi-asserted-by":"publisher","first-page":"1660","DOI":"10.1111\/jdv.18192","volume":"36","author":"T Willem","year":"2022","unstructured":"Willem, T., Krammer, S., B\u00f6hm, A. S., French, L. E., Hartmann, D., Lasser, T., & Buyx, A. (2022). Risks and benefits of dermatological machine learning health care applications\u2014An overview and ethical analysis. Journal of The European Academy of Dermatology and Venereology, 36(9), 1660\u20131668. https:\/\/doi.org\/10.1111\/jdv.18192","journal-title":"Journal of The European Academy of Dermatology and Venereology"},{"key":"9867_CR71","doi-asserted-by":"publisher","first-page":"146","DOI":"10.1016\/j.ejca.2020.12.010","volume":"145","author":"JK Winkler","year":"2021","unstructured":"Winkler, J. K., Sies, K., Fink, C., Toberer, F., Enk, A., Abassi, M. S., Fuchs, T., & Haenssle, H. A. (2021). 
Association between different scale bars in dermoscopic images and diagnostic performance of a market-approved deep learning convolutional neural network for melanoma recognition. European Journal of Cancer, 145, 146\u2013154. https:\/\/doi.org\/10.1016\/j.ejca.2020.12.010","journal-title":"European Journal of Cancer"},{"issue":"6","key":"9867_CR72","doi-asserted-by":"publisher","first-page":"1388","DOI":"10.1007\/s11606-021-07008-9","volume":"37","author":"Q Wu","year":"2022","unstructured":"Wu, Q., Jin, Z., & Wang, P. (2022). The relationship between the physician-patient relationship, physician empathy, and patient trust. Journal of General Internal Medicine, 37(6), 1388\u20131393. https:\/\/doi.org\/10.1007\/s11606-021-07008-9","journal-title":"Journal of General Internal Medicine"},{"key":"9867_CR73","doi-asserted-by":"publisher","first-page":"e50853","DOI":"10.2196\/50853","volume":"26","author":"AGM Zondag","year":"2024","unstructured":"Zondag, A. G. M., Rozestraten, R., Grimmelikhuijsen, S. G., Jongsma, K. R., van Solinge, W. W., Bots, M. L., Vernooij, R. W. M., & Haitjema, S. (2024). The effect of artificial intelligence on patient-physician trust: Cross-sectional vignette study. Journal of Medical Internet Research, 26, e50853. 
https:\/\/doi.org\/10.2196\/50853","journal-title":"Journal of Medical Internet Research"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09867-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09867-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09867-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:34:07Z","timestamp":1763782447000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09867-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,22]]},"references-count":79,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9867"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09867-w","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,22]]},"assertion":[{"value":"22 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no competing financial interests to declare.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"60"}}