{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T01:47:47Z","timestamp":1774316867445,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,10,12]],"date-time":"2023-10-12T00:00:00Z","timestamp":1697068800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,10,12]],"date-time":"2023-10-12T00:00:00Z","timestamp":1697068800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100003407","name":"Ministero dell\u2019Istruzione, dell\u2019Universit\u00e0 e della Ricerca","doi-asserted-by":"publisher","award":["PRIN2020 2020SSKZ7R"],"award-info":[{"award-number":["PRIN2020 2020SSKZ7R"]}],"id":[{"id":"10.13039\/501100003407","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100006690","name":"Politecnico di Milano","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100006690","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee\u2019s motivations and goodwill makes the notion of TAI a categorical error. 
After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.<\/jats:p>","DOI":"10.1007\/s00146-023-01789-9","type":"journal-article","created":{"date-parts":[[2023,10,12]],"date-time":"2023-10-12T07:01:49Z","timestamp":1697094109000},"page":"2691-2702","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["Keep trusting! A plea for the notion of Trustworthy AI"],"prefix":"10.1007","volume":"39","author":[{"given":"Giacomo","family":"Zanotti","sequence":"first","affiliation":[]},{"given":"Mattia","family":"Petrolo","sequence":"additional","affiliation":[]},{"given":"Daniele","family":"Chiffi","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9127-6165","authenticated-orcid":false,"given":"Viola","family":"Schiaffonati","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,10,12]]},"reference":[{"issue":"11","key":"1789_CR1","doi-asserted-by":"publisher","first-page":"1247","DOI":"10.1001\/jamadermatol.2018.2348","volume":"154","author":"AS Adamson","year":"2018","unstructured":"Adamson AS, Smith A (2018) Machine learning and health care disparities in dermatology. JAMA Dermatol 154(11):1247\u20131248. https:\/\/doi.org\/10.1001\/jamadermatol.2018.2348","journal-title":"JAMA Dermatol"},{"key":"1789_CR2","volume-title":"Machine Ethics","year":"2011","unstructured":"Anderson M, Anderson S (eds) (2011) Machine Ethics. 
Cambridge University Press, Cambridge"},{"key":"1789_CR3","doi-asserted-by":"publisher","first-page":"611","DOI":"10.1007\/s00146-019-00931-w","volume":"35","author":"T Araujo","year":"2020","unstructured":"Araujo T, Helberger N, Kruikemeier S et al (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Soc 35:611\u2013623. https:\/\/doi.org\/10.1007\/s00146-019-00931-w","journal-title":"AI & Soc"},{"issue":"2","key":"1789_CR4","doi-asserted-by":"publisher","first-page":"231","DOI":"10.1086\/292745","volume":"96","author":"A Baier","year":"1986","unstructured":"Baier A (1986) Trust and antitrust. Ethics 96(2):231\u2013260","journal-title":"Ethics"},{"issue":"3","key":"1789_CR5","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1017\/can.2020.27","volume":"52","author":"JB Biddle","year":"2022","unstructured":"Biddle JB (2022) On predicting recidivism: epistemic risk, tradeoffs, and values in machine learning. Can J Philos 52(3):321\u2013341. https:\/\/doi.org\/10.1017\/can.2020.27","journal-title":"Can J Philos"},{"key":"1789_CR6","unstructured":"Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D, Wu J, Winter C, Amodei D. (2020). Language Models are Few-Shot Learners. NIPS\u201920: Proceedings of the 34th International Conference on Neural Information Processing Systems, 1877\u20131901"},{"key":"1789_CR7","unstructured":"Buechner J, Simon J, Tavani HT. (2014). Re-Thinking Trust and Trustworthiness in Digital Environments. In Buchanan E. et al. (Eds.), Autonomous Technologies: Philosophical Issues, Practical Solutions, Human Nature. 
Proceedings of the Tenth International Conference on Computer Ethics Philosophical Enquiry, INSEIT, 65\u201379"},{"issue":"1","key":"1789_CR8","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1007\/s10676-011-9279-1","volume":"14","author":"M Coeckelbergh","year":"2012","unstructured":"Coeckelbergh M (2012) Can We Trust Robots? Ethics Inf Technol 14(1):53\u201360. https:\/\/doi.org\/10.1007\/s10676-011-9279-1","journal-title":"Ethics Inf Technol"},{"issue":"32","key":"1789_CR9","doi-asserted-by":"publisher","first-page":"6147","DOI":"10.1126\/sciadv.abq6147","volume":"8","author":"R Daneshjou","year":"2022","unstructured":"Daneshjou R, Vodrahalli K, Novoa RA, Jenkins M, Liang W, Rotemberg V, Ko J, Swetter SM, Bailey EE, Gevaert O, Mukherjee P, Phung M, Yekrang K, Fong B, Sahasrabudhe R, Allerup JAC, Okata-Karigane U, Zou J, Chiou A (2022) Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv 8(32):6147. https:\/\/doi.org\/10.1126\/sciadv.abq6147","journal-title":"Sci Adv"},{"issue":"8","key":"1789_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/S2589-7500(19)30197-9","volume":"1","author":"M DeCamp","year":"2019","unstructured":"DeCamp M, Tilburt JC (2019) Why we cannot trust artificial intelligence in medicine. Lancet Digital Health 1(8):e390. https:\/\/doi.org\/10.1016\/S2589-7500(19)30197-9","journal-title":"Lancet Digital Health"},{"issue":"4","key":"1789_CR11","doi-asserted-by":"publisher","first-page":"645","DOI":"10.1007\/s11023-018-9481-6","volume":"28","author":"JM Dur\u00e1n","year":"2018","unstructured":"Dur\u00e1n JM, Formanek N (2018) Grounds for trust: essential epistemic opacity and computational reliabilism. Mind Mach 28(4):645\u2013666. 
https:\/\/doi.org\/10.1007\/s11023-018-9481-6","journal-title":"Mind Mach"},{"key":"1789_CR12","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1136\/medethics-2020-106820","volume":"47","author":"JM Dur\u00e1n","year":"2021","unstructured":"Dur\u00e1n JM, Jongsma KR (2021) Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics 47:329\u2013335. https:\/\/doi.org\/10.1136\/medethics-2020-106820","journal-title":"J Med Ethics"},{"key":"1789_CR13","doi-asserted-by":"publisher","first-page":"7639","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542:7639. https:\/\/doi.org\/10.1038\/nature21056","journal-title":"Nature"},{"key":"1789_CR14","unstructured":"European Commission (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence\u2014Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts. https:\/\/digital-strategy.ec.europa.eu\/en\/library\/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence"},{"issue":"3","key":"1789_CR15","doi-asserted-by":"publisher","first-page":"523","DOI":"10.1007\/s13347-019-00378-3","volume":"33","author":"A Ferrario","year":"2020","unstructured":"Ferrario A, Loi M, Vigan\u00f2 E (2020) In AI we trust incrementally: a multi-layer model of trust to analyze human-artificial intelligence interactions. Phil Technol 33(3):523\u2013539. 
https:\/\/doi.org\/10.1007\/s13347-019-00378-3","journal-title":"Phil Technol"},{"issue":"6","key":"1789_CR16","doi-asserted-by":"publisher","first-page":"437","DOI":"10.1136\/medethics-2020-106922","volume":"47","author":"A Ferrario","year":"2021","unstructured":"Ferrario A, Loi M, Vigan\u00f2 E (2021) Trust does not need to be human: It is possible to trust medical AI. J Med Ethics 47(6):437\u2013438. https:\/\/doi.org\/10.1136\/medethics-2020-106922","journal-title":"J Med Ethics"},{"issue":"1","key":"1789_CR17","doi-asserted-by":"publisher","first-page":"63","DOI":"10.4454\/teoria.v39i1.57","volume":"39","author":"F Fossa","year":"2019","unstructured":"Fossa F (2019) \u00abI don\u2019t trust you, you faker!\u00bb on trust, reliance, and artificial agency. TESOL J 39(1):63\u201380. https:\/\/doi.org\/10.4454\/teoria.v39i1.57","journal-title":"TESOL J"},{"key":"1789_CR18","volume-title":"Humane Robotics A Multidisciplinary Approach Towards the Development of Humane-Centred Technologies","author":"F Fossa","year":"2022","unstructured":"Fossa F, Chiffi D, De Florio C (2022) A Conceptual Characterization of Autonomy in the Philosophy of Robotics. In: Riva G, Marchetti A (eds) Humane Robotics. A Multidisciplinary Approach Towards the Development of Humane-Centred Technologies. Vita e Pensiero, Milano"},{"key":"1789_CR19","first-page":"213","volume-title":"Trust: Making and Breaking Cooperative Relations","author":"D Gambetta","year":"1988","unstructured":"Gambetta D (1988) Can We Trust Trust? In: Gambetta D (ed) Trust: Making and Breaking Cooperative Relations. Blackwell, Oxford, pp 213\u2013237"},{"key":"1789_CR20","doi-asserted-by":"publisher","first-page":"97","DOI":"10.4324\/9781315542294-8","volume-title":"The Routledge Handbook of Trust and Philosophy","author":"SC Goldberg","year":"2020","unstructured":"Goldberg SC (2020) Trust and Reliance. In: Simon J (ed) The Routledge Handbook of Trust and Philosophy. 
Routledge, New York, pp 97\u2013108"},{"key":"1789_CR21","doi-asserted-by":"publisher","first-page":"298","DOI":"10.4324\/9781315542294-23","volume-title":"The Routledge Handbook of Trust and Philosophy","author":"F Grodzinsky","year":"2020","unstructured":"Grodzinsky F, Miller K, Wolf MJ (2020) Trust in artificial agents. In: Simon J (ed) The Routledge Handbook of Trust and Philosophy. Routledge, New York, pp 298\u2013312"},{"key":"1789_CR22","volume-title":"Trust and Trustworthiness","author":"R Hardin","year":"2002","unstructured":"Hardin R (2002) Trust and Trustworthiness. Russell Sage Foundation, New York"},{"issue":"7","key":"1789_CR23","doi-asserted-by":"publisher","first-page":"478","DOI":"10.1136\/medethics-2019-105935","volume":"46","author":"JJ Hatherley","year":"2020","unstructured":"Hatherley JJ (2020) Limits of trust in medical AI. J Med Ethics 46(7):478\u2013481. https:\/\/doi.org\/10.1136\/medethics-2019-105935","journal-title":"J Med Ethics"},{"key":"1789_CR24","unstructured":"AI HLEG (2019). Ethics guidelines for trustworthy AI. https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai"},{"issue":"3","key":"1789_CR25","doi-asserted-by":"publisher","first-page":"407","DOI":"10.1177\/0018720814547570","volume":"57","author":"KA Hoff","year":"2015","unstructured":"Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57(3):407\u2013434","journal-title":"Hum Factors"},{"key":"1789_CR26","volume-title":"Wording robotics: discourses and representations on robotics","author":"M Hunyadi","year":"2019","unstructured":"Hunyadi M (2019) Artificial Moral Agents. Really? In: Laumond J-P, Danblon E, Pieters C (eds) Wording robotics: discourses and representations on robotics. 
Springer International Publishing, Cham"},{"issue":"1","key":"1789_CR27","doi-asserted-by":"publisher","first-page":"10","DOI":"10.1177\/1555343418810936","volume":"13","author":"HA Klein","year":"2019","unstructured":"Klein HA, Lin M-H, Miller NL, Militello LG, Lyons JB, Finkeldey JG (2019) Trust across culture and context. J Cognitive Eng Decision Making 13(1):10\u201329. https:\/\/doi.org\/10.1177\/1555343418810936","journal-title":"J Cognitive Eng Decision Making"},{"key":"1789_CR28","volume-title":"Trust and Power: Two Works","author":"N Luhmann","year":"1979","unstructured":"Luhmann N (1979) Trust and Power: Two Works. Wiley, Chichester"},{"key":"1789_CR29","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-022-01412-3","author":"M L\u00fcnich","year":"2022","unstructured":"L\u00fcnich M, Kieslich K (2022) Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations. AI Soc. https:\/\/doi.org\/10.1007\/s00146-022-01412-3","journal-title":"AI Soc"},{"key":"1789_CR30","unstructured":"McLeod C. (2021). Trust. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2021). Metaphysics Research Lab, Stanford University. https:\/\/plato.stanford.edu\/archives\/fall2021\/entries\/trust\/"},{"key":"1789_CR31","unstructured":"Metzinger T. (2019). EU guidelines: Ethics washing made in Europe. Der Tagesspiegel Online. https:\/\/www.tagesspiegel.de\/politik\/ethics-washing-made-in-europe-5937028.html"},{"issue":"4","key":"1789_CR32","doi-asserted-by":"publisher","first-page":"751","DOI":"10.1007\/s11023-022-09612-y","volume":"32","author":"J M\u00f6kander","year":"2022","unstructured":"M\u00f6kander J, Juneja P, Watson DS, Floridi L (2022) The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other? Mind Mach 32(4):751\u2013758. 
https:\/\/doi.org\/10.1007\/s11023-022-09612-y","journal-title":"Mind Mach"},{"key":"1789_CR33","volume-title":"Trust: Reason, Routine, Reflexivity","author":"G Mollering","year":"2006","unstructured":"Mollering G (2006) Trust: Reason, Routine, Reflexivity. Bingley, Emerald Group"},{"issue":"4","key":"1789_CR34","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/MIS.2006.80","volume":"21","author":"JH Moor","year":"2006","unstructured":"Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18\u201321","journal-title":"IEEE Intell Syst"},{"key":"1789_CR35","doi-asserted-by":"publisher","unstructured":"Murray-Rust, D. S., Nicenboim, I., & Lockton, D. (2022). Metaphors for Designers Working with AI. In DRS Conference Proceedings 2022 (RS Biennial Conference Series). https:\/\/doi.org\/10.21606\/drs.2022.667","DOI":"10.21606\/drs.2022.667"},{"key":"1789_CR36","volume-title":"Oxford Studies in Epistemology, 7","author":"CT Nguyen","year":"2022","unstructured":"Nguyen CT (2022) Trust as an Unquestioning Attitude. In: Gendler TS, Hawthorne J, Chung J (eds) Oxford Studies in Epistemology, 7. Oxford University Press, Oxford"},{"issue":"3","key":"1789_CR37","doi-asserted-by":"publisher","first-page":"309","DOI":"10.1007\/s10677-007-9069-3","volume":"10","author":"PJ Nickel","year":"2007","unstructured":"Nickel PJ (2007) Trust and obligation-ascription. Ethical Theory Moral Pract 10(3):309\u2013319. https:\/\/doi.org\/10.1007\/s10677-007-9069-3","journal-title":"Ethical Theory Moral Pract"},{"issue":"3","key":"1789_CR38","doi-asserted-by":"publisher","first-page":"429","DOI":"10.1007\/s12130-010-9124-6","volume":"23","author":"PJ Nickel","year":"2010","unstructured":"Nickel PJ, Franssen M, Kroes P (2010) Can we make sense of the notion of trustworthy technology? Knowl Technol Policy 23(3):429\u2013444. 
https:\/\/doi.org\/10.1007\/s12130-010-9124-6","journal-title":"Knowl Technol Policy"},{"key":"1789_CR39","volume-title":"In AI we trust: power, illusion and control of predictive algorithms","author":"H Nowotny","year":"2021","unstructured":"Nowotny H (2021) In AI we trust: power, illusion and control of predictive algorithms. Polity, Cambridge"},{"key":"1789_CR40","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-022-01462-7","author":"G Papagni","year":"2022","unstructured":"Papagni G, de Pagter J, Zafari S et al (2022) Artificial agents\u2019 explainability to support trust: considerations on timing and context. AI & Soc. https:\/\/doi.org\/10.1007\/s00146-022-01462-7","journal-title":"AI & Soc"},{"key":"1789_CR41","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-022-01617-6","author":"F Russo","year":"2023","unstructured":"Russo F, Schliesser E, Wagemans J (2023) Connecting ethics and epistemology of AI. AI & Soc. https:\/\/doi.org\/10.1007\/s00146-022-01617-6","journal-title":"AI & Soc"},{"key":"1789_CR42","doi-asserted-by":"publisher","first-page":"2749","DOI":"10.1007\/s11948-020-00228-y","volume":"26","author":"M Ryan","year":"2020","unstructured":"Ryan M (2020) In AI we trust: ethics, artificial intelligence, and reliability. Sci Eng Ethics 26:2749\u20132767. https:\/\/doi.org\/10.1007\/s11948-020-00228-y","journal-title":"Sci Eng Ethics"},{"key":"1789_CR43","volume-title":"The Routledge Handbook of Trust and Philosophy","year":"2020","unstructured":"Simon J (ed) (2020) The Routledge Handbook of Trust and Philosophy. Routledge, New York"},{"key":"1789_CR44","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijhcs.2021.102601","volume":"149","author":"M Skjuve","year":"2021","unstructured":"Skjuve M, F\u00f8lstad A, Fostervold KI, Brandtzaeg PB (2021) My chatbot companion-a study of human-chatbot relationships. Int J Hum Comput Stud 149:102601. 
https:\/\/doi.org\/10.1016\/j.ijhcs.2021.102601","journal-title":"Int J Hum Comput Stud"},{"issue":"581","key":"1789_CR45","doi-asserted-by":"publisher","first-page":"3652","DOI":"10.1126\/scitranslmed.abb3652","volume":"13","author":"LR Soenksen","year":"2021","unstructured":"Soenksen LR, Kassis T, Conover ST, Marti-Fuster B, Birkenfeld JS, Tucker-Schwartz J, Naseem A, Stavert RR, Kim CC, Senna MM, Avil\u00e9s-Izquierdo J, Collins JJ, Barzilay R, Gray ML (2021) Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images. Sci Transl Med 13(581):3652. https:\/\/doi.org\/10.1126\/scitranslmed.abb3652","journal-title":"Sci Transl Med"},{"issue":"2","key":"1789_CR46","doi-asserted-by":"publisher","first-page":"154","DOI":"10.1111\/bioe.12891","volume":"36","author":"G Starke","year":"2022","unstructured":"Starke G, Brule R, Elger BS, Haselager P (2022) Intentional machines: a defence of trust in medical artificial intelligence. Bioethics 36(2):154\u2013161. https:\/\/doi.org\/10.1111\/bioe.12891","journal-title":"Bioethics"},{"issue":"2","key":"1789_CR47","doi-asserted-by":"publisher","first-page":"23","DOI":"10.4018\/jthi.2009040102","volume":"5","author":"M Taddeo","year":"2009","unstructured":"Taddeo M (2009) Defining trust and E-trust: from old theories to new problems. Int J Technol Human Interact 5(2):23\u201335. https:\/\/doi.org\/10.4018\/jthi.2009040102","journal-title":"Int J Technol Human Interact"},{"key":"1789_CR48","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41591-018-0300-7","volume":"25","author":"EJ Topol","year":"2019","unstructured":"Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25:1. 
https:\/\/doi.org\/10.1038\/s41591-018-0300-7","journal-title":"Nat Med"},{"key":"1789_CR49","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1002\/9781118551424.ch4","volume-title":"Responsible innovation: managing the responsible emergence of science and innovation in society","author":"J van den Hoven","year":"2013","unstructured":"van den Hoven J (2013) Value sensitive design and responsible innovation. In: Owen R, Bessant J, Heintz H (eds) Responsible innovation: managing the responsible emergence of science and innovation in society. Wiley, London, pp 75\u201383"},{"issue":"1","key":"1789_CR50","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1017\/psa.2021.13","volume":"89","author":"J Zerilli","year":"2022","unstructured":"Zerilli J (2022) Explaining machine learning decisions. Philos Sci 89(1):1\u201319. https:\/\/doi.org\/10.1017\/psa.2021.13","journal-title":"Philos Sci"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01789-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-023-01789-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01789-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,3]],"date-time":"2024-12-03T05:05:45Z","timestamp":1733202345000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-023-01789-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,12]]},"references-count":50,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["1789"],"URL":"https:\/\/doi.org\/10.1007\/s00146-023-01789-9","rel
ation":{},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,12]]},"assertion":[{"value":"24 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 September 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 October 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}