{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T11:12:27Z","timestamp":1774264347810,"version":"3.50.1"},"reference-count":64,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T00:00:00Z","timestamp":1769040000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T00:00:00Z","timestamp":1769040000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Fachhochschule des BFI Wien GmbH"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2026,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The widespread invocation of \u201ctrust\u201d in discussions surrounding generative AI (GenAI) conceals a deeper conceptual and normative tension. This paper critically examines the increasing tendency to treat AI systems as recipients or objects of trust, arguing that such a move risks both conceptual distortion and ethical erosion. Drawing on a digital humanist perspective, the paper contends that trust is a foundationally human, moral concept\u2014anchored in vulnerability, autonomy, and reciprocal recognition\u2014that cannot be meaningfully transferred to machines. The paper differentiates trust from mere reliance and explores the consequences of anthropomorphising AI systems. Using cases such as AI chatbots in therapeutic contexts, it shows how the application of trust to AI not only undermines human agency and emotional depth but also encourages a mechanisation of human social practices and an erosion of moral responsibility. 
In contrast to functionalist or system-oriented views of trust, a digital humanist approach insists on maintaining the conceptual boundary between humans and machines, advocating for transparency, controllability, and accountability in AI systems\u2014without misappropriating the language of trust. The paper scrutinizes how framing AI as trustworthy reconfigures social norms, blurs lines of responsibility, and endangers the cultivation of human morality. A digital humanist reassertion of trust as an intersubjective and ethical relation is vital to resist these trends and to uphold the dignity and agency of human actors in the age of artificial intelligence.<\/jats:p>","DOI":"10.1007\/s43681-025-00923-1","type":"journal-article","created":{"date-parts":[[2026,1,23]],"date-time":"2026-01-23T11:19:59Z","timestamp":1769167199000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Trusting the machine: a digital humanist perspective on misplaced trust in artificial intelligence"],"prefix":"10.1007","volume":"6","author":[{"given":"Pia-Zoe","family":"Hahne","sequence":"first","affiliation":[]},{"given":"Alexander","family":"Schmoelz","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,1,22]]},"reference":[{"key":"923_CR1","doi-asserted-by":"crossref","unstructured":"Bailey, M.: Why humans can\u2019t trust AI: You don\u2019t know how it works, what it\u2019s going to do or whether it\u2019ll serve your interests. http:\/\/theconversation.com\/why-humans-cant-trust-ai-you-dont-know-how-it-works-what-its-going-to-do-or-whether-itll-serve-your-interests-213115, last accessed 2024\/11\/06","DOI":"10.64628\/AAI.j4xr5e773"},{"key":"923_CR2","unstructured":"Constantino, T.: Can You Trust AI Search? 
New Study Reveals The Shocking Truth, https:\/\/www.forbes.com\/sites\/torconstantino\/2025\/03\/28\/can-you-trust-ai-search-new-study-reveals-the-shocking-truth\/, last accessed 2025\/08\/12."},{"key":"923_CR3","unstructured":"Chow, A.R., Haupt, A.: What Happened When a Doctor Posed As a Teen for AI Therapy. https:\/\/time.com\/7291048\/ai-chatbot-therapy-kids\/, last accessed 2025\/08\/06"},{"key":"923_CR4","unstructured":"Landymore, F.: Psychiatrist horrified when he actually tried talking to an AI Therapist, posing as a vulnerable teen, https:\/\/futurism.com\/psychiatrist-horrified-ai-therapist, last accessed 2025\/08\/06."},{"key":"923_CR5","unstructured":"Milmo, D.: \u2018It cannot provide nuance\u2019: UK experts warn AI therapy chatbots are not safe, (2025). https:\/\/www.theguardian.com\/technology\/2025\/may\/07\/experts-warn-therapy-ai-chatbots-are-not-safe-to-use"},{"key":"923_CR6","unstructured":"Wells, S.: Exploring the Dangers of AI in Mental Health Care. https:\/\/hai.stanford.edu\/news\/exploring-the-dangers-of-ai-in-mental-health-care, last accessed 2025\/08\/06"},{"key":"923_CR7","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1007\/s13347-024-00837-6","volume":"38","author":"S Baron","year":"2025","unstructured":"Baron, S.: Trust, explainability and AI. Philos. Technol. 38, 4 (2025). https:\/\/doi.org\/10.1007\/s13347-024-00837-6","journal-title":"Philos. Technol."},{"key":"923_CR8","doi-asserted-by":"publisher","first-page":"10","DOI":"10.1007\/s13347-024-00820-1","volume":"38","author":"C Budnik","year":"2025","unstructured":"Budnik, C.: Can we trust artificial intelligence? Philos. Technol. 38, 10 (2025). https:\/\/doi.org\/10.1007\/s13347-024-00820-1","journal-title":"Philos. Technol."},{"key":"923_CR9","doi-asserted-by":"publisher","unstructured":"Kaminski, A.: Hat Vertrauen Gr\u00fcnde oder ist Vertrauen ein Grund? \u2013 Eine (dialektische) Tugendtheorie von Vertrauen und Vertrauensw\u00fcrdigkeit. In: Kertscher, J. and M\u00fcller, J. 
(eds.) Praxis und \u203azweite Natur\u2039. pp. 167\u2013188. Brill | mentis (2017). https:\/\/doi.org\/10.30965\/9783957438249_017","DOI":"10.30965\/9783957438249_017"},{"key":"923_CR10","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1007\/s13347-024-00752-w","volume":"37","author":"J Dorsch","year":"2024","unstructured":"Dorsch, J., Deroy, O.: Quasi-Metacognitive machines: Why we don\u2019t need morally trustworthy AI and communicating reliability is enough. Philos. Technol. 37, 62 (2024). https:\/\/doi.org\/10.1007\/s13347-024-00752-w","journal-title":"Philos. Technol."},{"key":"923_CR11","doi-asserted-by":"publisher","unstructured":"Jope, M.: Trust, risk, and Mere vulnerability. Philos. Phenomenol Res. Phpr. 70040 (2025). https:\/\/doi.org\/10.1111\/phpr.70040","DOI":"10.1111\/phpr.70040"},{"key":"923_CR12","doi-asserted-by":"publisher","first-page":"110","DOI":"10.1080\/21515581.2018.1552592","volume":"9","author":"F Kroeger","year":"2019","unstructured":"Kroeger, F.: Unlocking the treasure trove: How can luhmann\u2019s theory of trust enrich trust research? J. Trust Res. 9, 110\u2013124 (2019). https:\/\/doi.org\/10.1080\/21515581.2018.1552592","journal-title":"J. Trust Res."},{"key":"923_CR13","doi-asserted-by":"publisher","DOI":"10.36198\/9783838540047","volume-title":"Vertrauen: Ein Mechanismus Der Reduktion Sozialer Komplexit\u00e4t","author":"N Luhmann","year":"2014","unstructured":"Luhmann, N.: Vertrauen: Ein Mechanismus Der Reduktion Sozialer Komplexit\u00e4t. UVK Verlagsgesellschaft, Konstanz (2014)"},{"key":"923_CR14","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1007\/s10676-024-09774-6","volume":"26","author":"E P\u00f6ll","year":"2024","unstructured":"P\u00f6ll, E.: Engineering the trust machine. Aligning the concept of trust in the context of blockchain applications. Ethics Inf. Technol. 26, 37 (2024). https:\/\/doi.org\/10.1007\/s10676-024-09774-6","journal-title":"Ethics Inf. 
Technol."},{"key":"923_CR15","unstructured":"Werthner, H., Lee, E., Akkermans, H., Vardi, M., et al.: Wiener Manifest f\u00fcr Digitalen Humanismus, (2019). https:\/\/www.informatik.tuwien.ac.at\/dighum\/wp-content\/uploads\/2019\/07\/Vienna_Manifesto_on_Digital_Humanism_DE.pdf"},{"key":"923_CR16","doi-asserted-by":"publisher","first-page":"691","DOI":"10.1007\/s43681-024-00419-4","volume":"4","author":"A Placani","year":"2024","unstructured":"Placani, A.: Anthropomorphism in AI: Hype and fallacy. AI Ethics. 4, 691\u2013698 (2024). https:\/\/doi.org\/10.1007\/s43681-024-00419-4","journal-title":"AI Ethics"},{"key":"923_CR17","doi-asserted-by":"crossref","unstructured":"Stokel-Walker, C.: People are starting to trust AI more \u2013 and view it as more human-like. https:\/\/www.newscientist.com\/article\/2467435-people-are-starting-to-trust-ai-more-and-view-it-as-more-human-like\/, last accessed 2025\/08\/12","DOI":"10.1016\/S0262-4079(25)00304-5"},{"key":"923_CR18","doi-asserted-by":"publisher","DOI":"10.1201\/9781003530244","volume-title":"Debiasing AI: Rethinking the Intersection of Innovation and Sustainability","author":"D Shin","year":"2025","unstructured":"Shin, D.: Debiasing AI: Rethinking the Intersection of Innovation and Sustainability. Routledge, New York, NY Abingdon, Oxon (2025)"},{"key":"923_CR19","doi-asserted-by":"publisher","first-page":"143","DOI":"10.1521\/soco.2008.26.2.143","volume":"26","author":"N Epley","year":"2008","unstructured":"Epley, N., Waytz, A., Akalis, S., Cacioppo, J.T.: When we need A human: Motivational determinants of anthropomorphism. Soc. Cogn. 26, 143\u2013155 (2008). https:\/\/doi.org\/10.1521\/soco.2008.26.2.143","journal-title":"Soc. 
Cogn."},{"key":"923_CR20","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-12482-2","volume-title":"Digital Humanism: for a Humane Transformation of Democracy, Economy and Culture in the Digital Age","author":"J Nida-R\u00fcmelin","year":"2022","unstructured":"Nida-R\u00fcmelin, J., Weidenfeld, N.: Digital Humanism: for a Humane Transformation of Democracy, Economy and Culture in the Digital Age. Springer International Publishing, Cham (2022). https:\/\/doi.org\/10.1007\/978-3-031-12482-2"},{"key":"923_CR21","doi-asserted-by":"publisher","unstructured":"Schmoelz, A.: Die conditio humana Im digitalen Zeitalter. Zur grundlegung des digitalen humanismus und des wiener manifests. MedienP\u00e4dagogik Z. f\u00fcr Theorie Und Praxis Der Medienbildung. 208\u2013234 (2020). https:\/\/doi.org\/10.21240\/mpaed\/00\/2020.11.13.X","DOI":"10.21240\/mpaed\/00\/2020.11.13.X"},{"key":"923_CR22","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1007\/s10676-011-9279-1","volume":"14","author":"M Coeckelbergh","year":"2012","unstructured":"Coeckelbergh, M.: Can we trust robots? Ethics Inf. Technol. 14, 53\u201360 (2012). https:\/\/doi.org\/10.1007\/s10676-011-9279-1","journal-title":"Ethics Inf. Technol."},{"key":"923_CR23","doi-asserted-by":"publisher","first-page":"523","DOI":"10.1007\/s13347-019-00378-3","volume":"33","author":"A Ferrario","year":"2020","unstructured":"Ferrario, A., Loi, M., Vigan\u00f2, E.: In AI we trust incrementally: A Multi-layer model of trust to analyze Human-Artificial intelligence interactions. Philos. Technol. 33, 523\u2013539 (2020). https:\/\/doi.org\/10.1007\/s13347-019-00378-3","journal-title":"Philos. Technol."},{"key":"923_CR24","doi-asserted-by":"publisher","first-page":"94","DOI":"10.1007\/s13347-025-00924-2","volume":"38","author":"I Robertson","year":"2025","unstructured":"Robertson, I.: Trust and reliability. Philos. Technol. 38, 94 (2025). https:\/\/doi.org\/10.1007\/s13347-025-00924-2","journal-title":"Philos. 
Technol."},{"key":"923_CR25","doi-asserted-by":"publisher","first-page":"1412","DOI":"10.1057\/s41599-024-03897-3","volume":"11","author":"A Sch\u00e4fer","year":"2024","unstructured":"Sch\u00e4fer, A., Esterbauer, R., Kubicek, B.: Trusting robots: A relational trust definition based on human intentionality. Humanit. Soc. Sci. Commun. 11, 1412 (2024). https:\/\/doi.org\/10.1057\/s41599-024-03897-3","journal-title":"Humanit. Soc. Sci. Commun."},{"key":"923_CR26","doi-asserted-by":"publisher","first-page":"1753","DOI":"10.1007\/s00146-022-01401-6","volume":"38","author":"S Kr\u00fcger","year":"2023","unstructured":"Kr\u00fcger, S., Wilson, C.: The problem with trust: On the discursive commodification of trust in AI. AI Soc. 38, 1753\u20131761 (2023). https:\/\/doi.org\/10.1007\/s00146-022-01401-6","journal-title":"AI Soc."},{"key":"923_CR27","doi-asserted-by":"publisher","unstructured":"Stamboliev, E., Christiaens, T.: How empty is trustworthy AI? A discourse analysis of the ethics guidelines of trustworthy AI. Crit. Policy Stud. 1\u201318 (2024). https:\/\/doi.org\/10.1080\/19460171.2024.2315431","DOI":"10.1080\/19460171.2024.2315431"},{"key":"923_CR28","doi-asserted-by":"publisher","first-page":"735","DOI":"10.1007\/s43681-022-00200-5","volume":"3","author":"K Reinhardt","year":"2023","unstructured":"Reinhardt, K.: Trust and trustworthiness in AI ethics. AI Ethics. 3, 735\u2013744 (2023). https:\/\/doi.org\/10.1007\/s43681-022-00200-5","journal-title":"AI Ethics"},{"key":"923_CR29","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/978-3-031-45304-5_1","volume-title":"Introduction To Digital Humanism: A Textbook","author":"J Nida-R\u00fcmelin","year":"2024","unstructured":"Nida-R\u00fcmelin, J., Winter, D.: Humanism and enlightenment. In: Werthner, H., Ghezzi, C., Kramer, J., Nida-R\u00fcmelin, J., Nuseibeh, B., Prem, E., Stanger, A. (eds.) Introduction To Digital Humanism: A Textbook, pp. 3\u201316. 
Springer Nature Switzerland, Cham (2024)"},{"key":"923_CR30","doi-asserted-by":"publisher","DOI":"10.1177\/14779714231161449","author":"A Schmoelz","year":"2023","unstructured":"Schmoelz, A.: Digital Humanism, progressive neoliberalism and the European digital governance system for vocational and adult education. J. Adult Continuing Educ. (2023). https:\/\/doi.org\/10.1177\/14779714231161449","journal-title":"J. Adult Continuing Educ."},{"key":"923_CR31","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1007\/978-3-031-45304-5_2","volume-title":"Introduction To Digital Humanism","author":"J Nida-R\u00fcmelin","year":"2024","unstructured":"Nida-R\u00fcmelin, J., Staudacher, K.: Philosophical foundations of digital humanism. In: Werthner, H., Ghezzi, C., Kramer, J., Nida-R\u00fcmelin, J., Nuseibeh, B., Prem, E., Stanger, A. (eds.) Introduction To Digital Humanism, pp. 17\u201330. Springer Nature Switzerland, Cham (2024). https:\/\/doi.org\/10.1007\/978-3-031-45304-5_2"},{"key":"923_CR32","volume-title":"Pour Un Humanisme num\u00e9rique. \u00c9d","author":"M Doueihi","year":"2011","unstructured":"Doueihi, M.: Pour Un Humanisme num\u00e9rique. \u00c9d. du Seuil, Paris (2011)"},{"key":"923_CR33","doi-asserted-by":"publisher","first-page":"392","DOI":"10.1177\/05390184241270835","volume":"63","author":"M Uttenthal","year":"2024","unstructured":"Uttenthal, M.: A conceptual analysis of trust. Social Sci. Inform. 63, 392\u2013410 (2024). https:\/\/doi.org\/10.1177\/05390184241270835","journal-title":"Social Sci. Inform."},{"key":"923_CR34","doi-asserted-by":"publisher","first-page":"231","DOI":"10.1086\/292745","volume":"96","author":"A Baier","year":"1986","unstructured":"Baier, A.: Trust and antitrust. Ethics. 96, 231\u2013260 (1986). https:\/\/doi.org\/10.1086\/292745","journal-title":"Ethics"},{"key":"923_CR35","volume-title":"Trust and Trustworthiness","author":"R Hardin","year":"2002","unstructured":"Hardin, R.: Trust and Trustworthiness. 
Russell Sage Foundation, New York (2002)"},{"key":"923_CR36","doi-asserted-by":"publisher","DOI":"10.5040\/9781474217590","volume-title":"Trust, ethics, and Human Reason","author":"O Lagerspetz","year":"2015","unstructured":"Lagerspetz, O.: Trust, ethics, and Human Reason. Bloomsbury Academic, New York (2015). https:\/\/doi.org\/10.5040\/9781474217590"},{"key":"923_CR37","doi-asserted-by":"publisher","unstructured":"Faulkner, P.: The problem of trust. In: Faulkner, P., Simpson, T. (eds.) The Philosophy of Trust, pp. 109\u2013128. Oxford University Press (2017). https:\/\/doi.org\/10.1093\/acprof:oso\/9780198732549.003.0007","DOI":"10.1093\/acprof:oso\/9780198732549.003.0007"},{"key":"923_CR38","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1007\/978-3-032-11108-1_12","volume-title":"Digital Humanism","author":"P-Z Hahne","year":"2026","unstructured":"Hahne, P.-Z., Schmoelz, A.: Understanding the Humanist notion of trust in the age of generative AI. In: Hagedorn, L., Schmid, U., Winter, S., Woltran, S. (eds.) Digital Humanism, pp. 169\u2013178. Springer Nature Switzerland, Cham (2026). https:\/\/doi.org\/10.1007\/978-3-032-11108-1_12"},{"key":"923_CR39","doi-asserted-by":"publisher","first-page":"4167","DOI":"10.1007\/s43681-025-00690-z","volume":"5","author":"S Blanco","year":"2025","unstructured":"Blanco, S.: Human trust in AI: A relationship beyond reliance. AI Ethics. 5, 4167\u20134180 (2025). https:\/\/doi.org\/10.1007\/s43681-025-00690-z","journal-title":"AI Ethics"},{"key":"923_CR40","unstructured":"Hahne, P.-Z.: Invisible Labour: Who Keeps the Algorithm Running? In: M\u00fcller, V.C., Dung, L., L\u00f6hr, G., and Rumana, A. (eds.) Philosophy of Artificial Intelligence: The State of the Art. p. in Print. Springer (forthcoming)"},{"key":"923_CR41","unstructured":"Schmoelz, A., Hahne, P.-Z., Klocker, S., Katzian, W.: KI in Der arbeitswelt: Der mensch Im Spannungsfeld von entfremdung und digitalem humanismus. In: KI Trifft Arbeit. 
Wie k\u00fcnstliche Intelligenz Die Arbeitswelt ver\u00e4ndert. Waxmann (2025)"},{"key":"923_CR42","doi-asserted-by":"publisher","unstructured":"Gozalo-Brizuela, R., Garrido-Merchan, E.C.: ChatGPT is not all you need. A State of the Art Review of large Generative AI models, https:\/\/arxiv.org\/abs\/2301.04655, (2023). https:\/\/doi.org\/10.48550\/ARXIV.2301.04655","DOI":"10.48550\/ARXIV.2301.04655"},{"key":"923_CR43","doi-asserted-by":"publisher","first-page":"1651","DOI":"10.1007\/s00146-021-01336-4","volume":"38","author":"J Brusseau","year":"2023","unstructured":"Brusseau, J.: From the ground truth up: Doing AI ethics from practice to principles. AI Soc. 38, 1651\u20131657 (2023). https:\/\/doi.org\/10.1007\/s00146-021-01336-4","journal-title":"AI Soc."},{"key":"923_CR44","doi-asserted-by":"publisher","unstructured":"Pink, S., Quilty, E., Grundy, J., Hoda, R.: Trust, Artificial Intelligence and Software Practitioners: an Interdisciplinary Agenda. AI & Soc (2024). https:\/\/doi.org\/10.1007\/s00146-024-01882-7","DOI":"10.1007\/s00146-024-01882-7"},{"key":"923_CR45","doi-asserted-by":"publisher","first-page":"2749","DOI":"10.1007\/s11948-020-00228-y","volume":"26","author":"M Ryan","year":"2020","unstructured":"Ryan, M.: In AI we trust: Ethics, artificial Intelligence, and reliability. Sci. Eng. Ethics. 26, 2749\u20132767 (2020). https:\/\/doi.org\/10.1007\/s11948-020-00228-y","journal-title":"Sci. Eng. Ethics"},{"key":"923_CR46","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1007\/s13347-014-0156-9","volume":"28","author":"S Vallor","year":"2015","unstructured":"Vallor, S.: Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philos. Technol. 28, 107\u2013124 (2015). https:\/\/doi.org\/10.1007\/s13347-014-0156-9","journal-title":"Philos. 
Technol."},{"key":"923_CR47","doi-asserted-by":"publisher","unstructured":"Zhang, R., Li, H., Meng, H., Zhan, J., Gan, H., Lee, Y.-C.: The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships. In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. pp. 1\u201317. ACM, Yokohama Japan (2025). https:\/\/doi.org\/10.1145\/3706598.3713429","DOI":"10.1145\/3706598.3713429"},{"key":"923_CR48","doi-asserted-by":"publisher","first-page":"5923","DOI":"10.1177\/14614448221142007","volume":"26","author":"L Laestadius","year":"2024","unstructured":"Laestadius, L., Bishop, A., Gonzalez, M., Illen\u010d\u00edk, D., Campos-Castillo, C.: Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New. Media Soc. 26, 5923\u20135941 (2024). https:\/\/doi.org\/10.1177\/14614448221142007","journal-title":"New. Media Soc."},{"key":"923_CR49","doi-asserted-by":"crossref","unstructured":"Xie, T., Pentina, I.: Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. In: Proceedings of the 55th Hawaii International Conference on System Sciences. pp. 
2046\u20132055 (2022)","DOI":"10.24251\/HICSS.2022.258"},{"key":"923_CR50","unstructured":"Koebler, J.: AI Chatbot Added to Mushroom Foraging Facebook Group Immediately Gives Tips for Cooking Dangerous Mushroom, https:\/\/www.404media.co\/ai-chatbot-added-to-mushroom-foraging-facebook-group-immediately-gives-tips-for-cooking-dangerous-mushroom\/"},{"key":"923_CR51","unstructured":"McMahon, L., Kleinman, Z.: Glue pizza and eat rocks: Google AI search errors go viral, https:\/\/www.bbc.com\/news\/articles\/cd11gzejgz4o"},{"key":"923_CR52","volume-title":"Communicative AI: a Critical Introduction To Large Language Models","author":"M Coeckelbergh","year":"2025","unstructured":"Coeckelbergh, M., Gunkel, D.J.: Communicative AI: a Critical Introduction To Large Language Models. Polity, Cambridge Hoboken (2025)"},{"key":"923_CR53","unstructured":"Jurberg, A.: These Tragic AI Fails Are Proof That You Can\u2019t Fully Rely On ChatGPT To Plan Your Trip, https:\/\/www.huffpost.com\/entry\/chatgpt-travel-plans-itinerary-trip_l_687107c9e4b00de383c0cf1f, last accessed 2025\/08\/06."},{"key":"923_CR54","unstructured":"Druteik\u0117, J.: Who To Trust: ChatBot or a Travel Agent? - Skycop. https:\/\/www.skycop.com\/news\/tips-and-tricks\/chatbot-said-yes-border-agent-said-no-who-to-trust\/, last accessed 2025\/08\/06"},{"key":"923_CR55","doi-asserted-by":"publisher","unstructured":"Barberi, A., Missomelius, P., Nida-R\u00fcmelin, J., Schmoelz, A., Werthner, H.: Digitaler Humanismus Medienimpulse. 1\u201320 (2021). https:\/\/doi.org\/10.21243\/MI-02-21-27","DOI":"10.21243\/MI-02-21-27"},{"key":"923_CR56","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10676-010-9263-1","volume":"13","author":"M Taddeo","year":"2011","unstructured":"Taddeo, M., Floridi, L.: The case for e-trust. Ethics Inf. Technol. 13, 1\u20133 (2011). https:\/\/doi.org\/10.1007\/s10676-010-9263-1","journal-title":"Ethics Inf. 
Technol."},{"key":"923_CR57","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1007\/s10676-010-9255-1","volume":"13","author":"FS Grodzinsky","year":"2011","unstructured":"Grodzinsky, F.S., Miller, K.W., Wolf, M.J.: Developing artificial agents worthy of trust: Would you buy a used car from this artificial agent? Ethics Inf. Technol. 13, 17\u201327 (2011). https:\/\/doi.org\/10.1007\/s10676-010-9255-1","journal-title":"Ethics Inf. Technol."},{"key":"923_CR58","doi-asserted-by":"publisher","first-page":"243","DOI":"10.1007\/s11023-010-9201-3","volume":"20","author":"M Taddeo","year":"2010","unstructured":"Taddeo, M.: Modelling trust in artificial Agents, A first step toward the analysis of e-Trust. Minds Machines. 20, 243\u2013257 (2010). https:\/\/doi.org\/10.1007\/s11023-010-9201-3","journal-title":"Minds Machines"},{"key":"923_CR59","doi-asserted-by":"publisher","first-page":"392","DOI":"10.1111\/tops.12682","volume":"17","author":"KD Evans","year":"2025","unstructured":"Evans, K.D., Robbins, S.A., Bryson, J.J.: Do we collaborate with what we design? Top. Cogn. Sci. 17, 392\u2013411 (2025). https:\/\/doi.org\/10.1111\/tops.12682","journal-title":"Top. Cogn. Sci."},{"key":"923_CR60","doi-asserted-by":"publisher","unstructured":"Mittelstadt, B.: Interpretability and transparency in artificial intelligence. In: V\u00e9liz, C. (ed.) Oxford Handbook of Digital Ethics, pp. 378\u2013410. Oxford University Press (2022). https:\/\/doi.org\/10.1093\/oxfordhb\/9780198857815.013.20","DOI":"10.1093\/oxfordhb\/9780198857815.013.20"},{"key":"923_CR61","doi-asserted-by":"publisher","first-page":"1607","DOI":"10.1007\/s13347-021-00477-0","volume":"34","author":"WJ Von Eschenbach","year":"2021","unstructured":"Von Eschenbach, W.J.: Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 34, 1607\u20131622 (2021). https:\/\/doi.org\/10.1007\/s13347-021-00477-0","journal-title":"Philos. 
Technol."},{"key":"923_CR62","unstructured":"Hornyak, T.: Zen and the Art of Aibo Engineering - IEEE Spectrum. https:\/\/spectrum.ieee.org\/aibo, last accessed 2025\/10\/13"},{"key":"923_CR63","doi-asserted-by":"publisher","first-page":"434","DOI":"10.1037\/a0036054","volume":"14","author":"A Waytz","year":"2014","unstructured":"Waytz, A., Norton, M.I.: Botsourcing and outsourcing: Robot, British, Chinese, and German workers are for thinking\u2014not feeling\u2014jobs. Emotion. 14, 434\u2013444 (2014). https:\/\/doi.org\/10.1037\/a0036054","journal-title":"Emotion"},{"key":"923_CR64","doi-asserted-by":"publisher","unstructured":"Smuha, N.A.: Beyond the individual: Governing ai\u2019s societal harm. Internet Policy Rev. 10 (2021). https:\/\/doi.org\/10.14763\/2021.3.1574","DOI":"10.14763\/2021.3.1574"}],"container-title":["AI and Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00923-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43681-025-00923-1","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00923-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T10:28:51Z","timestamp":1774261731000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43681-025-00923-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,22]]},"references-count":64,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,2]]}},"alternative-id":["923"],"URL":"https:\/\/doi.org\/10.1007\/s43681-025-00923-1","relation":{},"ISSN":["2730-5953","2730-5961"],"issn-type":[{"value":"2730-5953","type":"print"},{"value":"2730-5961","type":
"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,22]]},"assertion":[{"value":"17 September 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 January 2026","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"115"}}