{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T12:23:04Z","timestamp":1776082984158,"version":"3.50.1"},"reference-count":75,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,10,6]],"date-time":"2023-10-06T00:00:00Z","timestamp":1696550400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,10,6]],"date-time":"2023-10-06T00:00:00Z","timestamp":1696550400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000286","name":"British Academy","doi-asserted-by":"publisher","award":["PF22\\220076"],"award-info":[{"award-number":["PF22\\220076"]}],"id":[{"id":"10.13039\/501100000286","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000276","name":"Department of Health and Social Care","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100000276","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000879","name":"Alfred P. 
Sloan Foundation","doi-asserted-by":"publisher","award":["G-2021-16779"],"award-info":[{"award-number":["G-2021-16779"]}],"id":[{"id":"10.13039\/100000879","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010269","name":"Wellcome Trust","doi-asserted-by":"publisher","award":["223765\/Z\/21\/Z"],"award-info":[{"award-number":["223765\/Z\/21\/Z"]}],"id":[{"id":"10.13039\/100010269","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Luminate Group"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>\n                        <jats:sc>Abstract<\/jats:sc>\n                     <\/jats:title><jats:p>Human oversight has become a key mechanism for the governance of artificial intelligence (\u201cAI\u201d). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union\u2019s Artificial Intelligence Act (\u201cAIA\u201d). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of a decision made or supported by an AI. The taxonomy allows us to propose suggestions for improving effectiveness tailored to the type of oversight in question. 
Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.<\/jats:p>","DOI":"10.1007\/s00146-023-01777-z","type":"journal-article","created":{"date-parts":[[2023,10,6]],"date-time":"2023-10-06T15:01:41Z","timestamp":1696604501000},"page":"2853-2866","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":50,"title":["Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act"],"prefix":"10.1007","volume":"39","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3043-075X","authenticated-orcid":false,"given":"Johann","family":"Laux","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,10,6]]},"reference":[{"key":"1777_CR1","volume-title":"Power and prediction: the disruptive economics of artificial intelligence","author":"A Agrawal","year":"2022","unstructured":"Agrawal A, Gans J, Goldfarb A (2022) Power and prediction: the disruptive economics of artificial intelligence. Harvard Business Review Press, Boston, M.A."},{"key":"1777_CR2","unstructured":"Andrade NNG de and Zarra A (2022) Artificial intelligence act: a policy prototyping experiment: operationalizing the requirements for AI systems\u2014Part I. 
https:\/\/openloop.org\/reports\/2022\/11\/Artificial_Intelligence_Act_A_Policy_Prototyping_Experiment_Operationalizing_Reqs_Part1.pdf."},{"key":"1777_CR3","unstructured":"Angwin J, Larson J, Mattu S, et al. (2016) Machine Bias. In: ProPublica. https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 8 Jan 2023."},{"key":"1777_CR8","doi-asserted-by":"publisher","unstructured":"Aoki N (2021) The importance of the assurance that \u201chumans are still in the decision loop\u201d for public trust in artificial intelligence: Evidence from an online experiment. Computers in Human Behavior 114. Elsevier Ltd.  https:\/\/doi.org\/10.1016\/j.chb.2020.106572.","DOI":"10.1016\/j.chb.2020.106572"},{"key":"1777_CR9","unstructured":"Article 29 Data Protection Working Party (2017) Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016\/679. 17\/EN WP 251, 3 October."},{"key":"1777_CR10","unstructured":"AWS (n.d.) Moderating content. https:\/\/docs.aws.amazon.com\/rekognition\/latest\/dg\/moderation.html?pg=ln&sec=ft. Accessed 6 Feb 2023."},{"issue":"6","key":"1777_CR11","doi-asserted-by":"publisher","first-page":"775","DOI":"10.1016\/0005-1098(83)90046-8","volume":"19","author":"L Bainbridge","year":"1983","unstructured":"Bainbridge L (1983) Ironies of automation. Automatica 19(6):775\u2013779. https:\/\/doi.org\/10.1016\/0005-1098(83)90046-8","journal-title":"Automatica"},{"issue":"3","key":"1777_CR12","doi-asserted-by":"publisher","first-page":"250","DOI":"10.1080\/1463922X.2018.1432716","volume":"20","author":"VA Banks","year":"2019","unstructured":"Banks VA, Plant KL, Stanton NA (2019) Driving aviation forward; contrasting driving automation and aviation automation. Theor Issues Ergon Sci 20(3):250\u2013264. 
https:\/\/doi.org\/10.1080\/1463922X.2018.1432716","journal-title":"Theor Issues Ergon Sci"},{"key":"1777_CR13","doi-asserted-by":"crossref","unstructured":"Bentham J, Schofield P and Bentham J (1990) Securities against Misrule and Other Constitutional Writings for Tripoli and Greece. The collected works of Jeremy Bentham. Oxford; New York: Clarendon Press; Oxford University Press.","DOI":"10.1093\/oseo\/instance.00077277"},{"issue":"2","key":"1777_CR14","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1017\/S1755773919000080","volume":"11","author":"E Bertsou","year":"2019","unstructured":"Bertsou E (2019) Rethinking political distrust. Eur Polit Sci Rev 11(2):213\u2013230. https:\/\/doi.org\/10.1017\/S1755773919000080","journal-title":"Eur Polit Sci Rev"},{"issue":"4","key":"1777_CR15","doi-asserted-by":"publisher","first-page":"543","DOI":"10.1007\/s13347-017-0263-5","volume":"31","author":"R Binns","year":"2018","unstructured":"Binns R (2018) Algorithmic accountability and public reason. Philosophy Technol 31(4):543\u2013556. https:\/\/doi.org\/10.1007\/s13347-017-0263-5","journal-title":"Philosophy Technol"},{"key":"1777_CR16","doi-asserted-by":"crossref","unstructured":"Bod\u00f3 B (2021) Mediated trust: a theoretical framework to address the trustworthiness of technological trust mediators. New Media & Society 23(9): 2668\u20132690.","DOI":"10.1177\/1461444820939922"},{"key":"1777_CR17","first-page":"343","volume-title":"Trust and governance","author":"J Braithwaite","year":"1998","unstructured":"Braithwaite J (1998) Institutionalizing distrust, enculturating trust. Trust and governance. Russell Sage Foundation, The Russell Sage Foundation Series on Trust. 
New York, pp 343\u2013375"},{"issue":"3","key":"1777_CR18","doi-asserted-by":"publisher","first-page":"745","DOI":"10.15779\/Z385X25D2W","volume":"34","author":"K Brennan-Marquez","year":"2019","unstructured":"Brennan-Marquez K, Levy K, Susser D (2019) Strange loops: apparent versus actual human involvement in automated decision making. Berkeley Technol Law J 34(3):745\u2013772. https:\/\/doi.org\/10.15779\/Z385X25D2W","journal-title":"Berkeley Technol Law J"},{"issue":"1","key":"1777_CR19","doi-asserted-by":"publisher","first-page":"205395171562251","DOI":"10.1177\/2053951715622512","volume":"3","author":"J Burrell","year":"2016","unstructured":"Burrell J (2016) How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data Soc 3(1):205395171562251. https:\/\/doi.org\/10.1177\/2053951715622512","journal-title":"Big Data Soc"},{"key":"1777_CR20","unstructured":"Colombian police cartoon (2022) 2022\u2013004-FB-UA."},{"key":"1777_CR21","unstructured":"Council of the European Union (2022) Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts\u2014General approach\u2014Interinstitutional File: 2021\/0106(COD). 14954\/22."},{"key":"1777_CR22","unstructured":"Cranor LF (2008) A framework for reasoning about the human in the loop. Proceedings of the 1st Conference on Usability, Psychology, and Security (UPSEC\u201908): 1\u201315."},{"key":"1777_CR23","doi-asserted-by":"publisher","first-page":"1017677","DOI":"10.3389\/fdata.2022.1017677","volume":"5","author":"J Davidovic","year":"2023","unstructured":"Davidovic J (2023) On the purpose of meaningful human control of AI. Front Big Data 5:1017677. 
https:\/\/doi.org\/10.3389\/fdata.2022.1017677","journal-title":"Front Big Data"},{"issue":"6","key":"1777_CR24","doi-asserted-by":"publisher","first-page":"100489","DOI":"10.1016\/j.patter.2022.100489","volume":"3","author":"D De Silva","year":"2022","unstructured":"De Silva D, Alahakoon D (2022) An artificial intelligence life cycle: from conception to production. Patterns 3(6):100489. https:\/\/doi.org\/10.1016\/j.patter.2022.100489","journal-title":"Patterns"},{"issue":"1","key":"1777_CR25","doi-asserted-by":"publisher","first-page":"114","DOI":"10.1037\/xge0000033","volume":"144","author":"BJ Dietvorst","year":"2015","unstructured":"Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144(1):114\u2013126. https:\/\/doi.org\/10.1037\/xge0000033","journal-title":"J Exp Psychol Gen"},{"key":"1777_CR26","doi-asserted-by":"publisher","unstructured":"Ebers M, Hoch VRS, Rosenkranz F, et al. (2021) The European Commission\u2019s Proposal for an Artificial intelligence act\u2014a critical assessment by members of the robotics and AI law society (RAILS). J 4(4): 589\u2013603. https:\/\/doi.org\/10.3390\/j4040043.","DOI":"10.3390\/j4040043"},{"key":"1777_CR27","unstructured":"Edwards L (2022) Regulating AI in Europe: four problems and four solutions. March. Ada Lovelace Institute."},{"key":"1777_CR28","doi-asserted-by":"crossref","unstructured":"Elster J (2013) Securities against Misrule: Juries, Assemblies, Elections. Cambridge\u202f; New York: Cambridge University Press.","DOI":"10.1017\/CBO9781139382762"},{"key":"1777_CR29","volume-title":"Democracy and distrust: a theory of judicial review","author":"JH Ely","year":"1980","unstructured":"Ely JH (1980) Democracy and distrust: a theory of judicial review. 
Harvard University Press, Cambridge"},{"issue":"1","key":"1777_CR30","doi-asserted-by":"publisher","first-page":"123","DOI":"10.1080\/13600834.2021.1958860","volume":"31","author":"T Enarsson","year":"2022","unstructured":"Enarsson T, Enqvist L, Naarttij\u00e4rvi M (2022) Approaching the human in the loop\u2014legal perspectives on hybrid human\/algorithmic decision-making in three contexts. Inform Commun Technol Law 31(1):123\u2013153","journal-title":"Inform Commun Technol Law"},{"key":"1777_CR31","unstructured":"European Commission (2020) White paper on artificial intelligence\u2014a European approach to excellence and trust. COM(2020) 65 final."},{"key":"1777_CR32","unstructured":"European Commission (2021) Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM(2021) 206 final."},{"key":"1777_CR33","unstructured":"European Commission (2022a) Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). COM(2022) 496 final."},{"key":"1777_CR34","unstructured":"European Commission (2022b) Proposal for a Directive of the European Parliament and of the Council on liability for defective products. COM(2022) 495 final."},{"key":"1777_CR35","unstructured":"European Parliament (2023) Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206\u2014C9-0146\/2021\u20142021\/0106(COD)). 
P9_TA(2023)0236."},{"key":"1777_CR36","unstructured":"European Commission (2023a) Annexes to the Commission implementing decision on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence. C(2023) 3215 final."},{"key":"1777_CR37","unstructured":"European Commission (2023b) Commission implementing decision of 22.05.2023 on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence. C(2023) 3215 final."},{"key":"1777_CR38","doi-asserted-by":"publisher","unstructured":"Flechais I, Riegelsberger J and Sasse MA (2005) Divide and conquer: the role of trust and assurance in the design of secure socio-technical systems. In: Proceedings of the 2005 workshop on New security paradigms\u2014NSPW \u201905, Lake Arrowhead, California, 2005, p. 33. ACM Press. https:\/\/doi.org\/10.1145\/1146269.1146280.","DOI":"10.1145\/1146269.1146280"},{"key":"1777_CR39","doi-asserted-by":"publisher","first-page":"105681","DOI":"10.1016\/j.clsr.2022.105681","volume":"45","author":"B Green","year":"2022","unstructured":"Green B (2022) The flaws of policies requiring human oversight of government algorithms. Comput Law Secur Rev 45:105681. https:\/\/doi.org\/10.1016\/j.clsr.2022.105681","journal-title":"Comput Law Secur Rev"},{"key":"1777_CR40","doi-asserted-by":"publisher","unstructured":"Gyevnar B, Ferguson N and Schafer B (2023) Bridging the transparency gap: What can explainable AI learn from the AI Act? arXiv. https:\/\/doi.org\/10.48550\/ARXIV.2302.10766.","DOI":"10.48550\/ARXIV.2302.10766"},{"key":"1777_CR41","doi-asserted-by":"publisher","unstructured":"Hacker P (2022) The European AI liability directives\u2014Critique of a half-hearted approach and lessons for the future. arXiv. 
https:\/\/doi.org\/10.48550\/ARXIV.2211.13960.","DOI":"10.48550\/ARXIV.2211.13960"},{"issue":"1","key":"1777_CR42","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1017\/S1062798702000078","volume":"10","author":"R Hardin","year":"2002","unstructured":"Hardin R (2002) Liberal distrust. Euro Rev 10(1):73\u201389. https:\/\/doi.org\/10.1017\/S1062798702000078","journal-title":"Euro Rev"},{"key":"1777_CR43","volume-title":"Distrust. Russell Sage Foundation series on trust","year":"2004","unstructured":"Hardin R (ed) (2004) Distrust. Russell Sage Foundation series on trust, vol 8. Russell Sage Foundation, New York"},{"key":"1777_CR44","unstructured":"High-Level Expert Group on Artificial Intelligence (2019) Ethics Guidelines for Trustworthy AI."},{"issue":"46","key":"1777_CR45","doi-asserted-by":"publisher","first-page":"16385","DOI":"10.1073\/pnas.0403723101","volume":"101","author":"L Hong","year":"2004","unstructured":"Hong L, Page SE (2004) Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proc Natl Acad Sci 101(46):16385\u201316389. https:\/\/doi.org\/10.1073\/pnas.0403723101","journal-title":"Proc Natl Acad Sci"},{"key":"1777_CR46","unstructured":"International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) (2020) Information technology\u2014Artificial intelligence\u2014Overview of trustworthiness in artificial intelligence. ISO\/IEC TR 24028:2020 (E), May. Geneva."},{"key":"1777_CR47","unstructured":"International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) (2022) Information technology\u2014Artificial intelligence\u2014Overview of ethical and societal concerns. ISO\/IEC TR 24368:2022, August. Geneva."},{"key":"1777_CR48","doi-asserted-by":"publisher","unstructured":"Jones-Jang SM and Park YJ (2022) How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. 
J Comput-Med Commun Yao M (ed.) 28(1): zmac029. https:\/\/doi.org\/10.1093\/jcmc\/zmac029.","DOI":"10.1093\/jcmc\/zmac029"},{"key":"1777_CR49","doi-asserted-by":"publisher","DOI":"10.5281\/ZENODO.5783447","author":"K Kyriakou","year":"2021","unstructured":"Kyriakou K, Barlas P, Kleanthous S et al (2021) Crowdsourcing human oversight on image tagging algorithms: an initial study of image diversity. Zenodo. https:\/\/doi.org\/10.5281\/ZENODO.5783447"},{"key":"1777_CR50","volume-title":"Collective Wisdom: principles and mechanisms","year":"2012","unstructured":"Landemore H, Elster J (eds) (2012) Collective Wisdom: principles and mechanisms. Cambridge University Press, Cambridge; New York"},{"key":"1777_CR51","doi-asserted-by":"publisher","DOI":"10.1007\/s10869-022-09829-9","author":"M Langer","year":"2022","unstructured":"Langer M, K\u00f6nig CJ, Back C et al (2022) Trust in artificial Intelligence: comparing Trust Processes between human and automated trustees in light of unfair bias. J Bus Psychol. https:\/\/doi.org\/10.1007\/s10869-022-09829-9","journal-title":"J Bus Psychol"},{"key":"1777_CR4","doi-asserted-by":"publisher","unstructured":"Laux J, Wachter S, Mittelstadt B (2021) Taming the Few: Platform Regulation, Independent Audits, and the Risks of Capture Created by the DMA and DSA. Comput Law  Secur Rev 43:105613. https:\/\/doi.org\/10.1016\/j.clsr.2021.105613","DOI":"10.1016\/j.clsr.2021.105613"},{"key":"1777_CR5","doi-asserted-by":"publisher","unstructured":"Laux J (2022) Normative Institutional Design for EU Law. In:  Public Epistemic Authority. Grundlagen Der Rechtswissenschaft vol 42. T\u00fcbingen Mohr Siebeck. https:\/\/doi.org\/10.1628\/978-3-16-160257-3","DOI":"10.1628\/978-3-16-160257-3"},{"key":"1777_CR6","doi-asserted-by":"publisher","unstructured":"Laux J, Wachter S, Mittelstadt B (2023a) Three Pathways for Standardisation and Ethical Disclosure by Default under the European Union Artificial Intelligence Act. SSRN Electron J.  
https:\/\/doi.org\/10.2139\/ssrn.4365079","DOI":"10.2139\/ssrn.4365079"},{"key":"1777_CR7","doi-asserted-by":"publisher","unstructured":"Laux J, Wachter S, Mittelstadt B (2023b) Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk. Regul  Gov.  https:\/\/doi.org\/10.1111\/rego.12512","DOI":"10.1111\/rego.12512"},{"key":"1777_CR52","doi-asserted-by":"publisher","first-page":"90","DOI":"10.1016\/j.obhdp.2018.12.005","volume":"151","author":"JM Logg","year":"2019","unstructured":"Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151:90\u2013103. https:\/\/doi.org\/10.1016\/j.obhdp.2018.12.005","journal-title":"Organ Behav Hum Decis Process"},{"key":"1777_CR53","unstructured":"Meta (2022a) How review teams work. https:\/\/transparency.fb.com\/enforcement\/detecting-violations\/how-review-teams-work\/. Accessed 9 Jan 2023."},{"key":"1777_CR54","unstructured":"Meta (2022b) How technology detects violations. https:\/\/transparency.fb.com\/enforcement\/detecting-violations\/technology-detects-violations\/. Accessed 9 Jan 2023."},{"key":"1777_CR55","doi-asserted-by":"publisher","unstructured":"Metcalf K, Theobald B-J, Weinberg G, et al. (2019) Mirroring to Build Trust in Digital Assistants. arXiv. https:\/\/doi.org\/10.48550\/ARXIV.1904.01664.","DOI":"10.48550\/ARXIV.1904.01664"},{"key":"1777_CR56","doi-asserted-by":"publisher","unstructured":"Mittelstadt B, Russell C and Wachter S (2019) Explaining explanations in AI. In: Proceedings of the conference on fairness, accountability, and transparency, Atlanta GA USA, 29 January 2019, pp. 279\u2013288. ACM. 
https:\/\/doi.org\/10.1145\/3287560.3287574.","DOI":"10.1145\/3287560.3287574"},{"issue":"2","key":"1777_CR57","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1007\/s11023-021-09577-4","volume":"32","author":"J M\u00f6kander","year":"2022","unstructured":"M\u00f6kander J, Axente M, Casolari F et al (2022) Conformity assessments and post-market monitoring: a guide to the role of auditing in the proposed european ai regulation. Mind Mach 32(2):241\u2013268. https:\/\/doi.org\/10.1007\/s11023-021-09577-4","journal-title":"Mind Mach"},{"key":"1777_CR58","unstructured":"Oversight Board (n.d.) The purpose of the board. https:\/\/oversightboard.com\/. Accessed 9 Jan 2023"},{"key":"1777_CR59","doi-asserted-by":"crossref","unstructured":"Page SE (2007) The difference: How the power of diversity creates better groups, firms, schools, and societies. 3. print., and 1. paperback print., with a new preface. Princeton, NJ: Princeton Univ. Press.","DOI":"10.1515\/9781400830282"},{"issue":"3","key":"1777_CR60","doi-asserted-by":"publisher","first-page":"381","DOI":"10.1177\/0018720810376055","volume":"52","author":"R Parasuraman","year":"2010","unstructured":"Parasuraman R, Manzey DH (2010) Complacency and bias in human use of automation: an attentional integration. Hum Factors 52(3):381\u2013410. https:\/\/doi.org\/10.1177\/0018720810376055","journal-title":"Hum Factors"},{"key":"1777_CR61","doi-asserted-by":"publisher","DOI":"10.4159\/harvard.9780674736061","volume-title":"The black box society: the secret algorithms that control money and information","author":"F Pasquale","year":"2015","unstructured":"Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge"},{"key":"1777_CR62","unstructured":"Patty JW and Penn EM (2014) Social choice and legitimacy: The possibilities of impossibility. Political economy of institutions and decisions. 
Cambridge\u202f; New York: Cambridge University Press."},{"key":"1777_CR63","unstructured":"Perrigo B (2023) OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time, 18 January. https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/. Accessed 14 Feb 2023."},{"issue":"5","key":"1777_CR64","doi-asserted-by":"publisher","first-page":"991","DOI":"10.1006\/ijhc.1999.0252","volume":"51","author":"LJ Skitka","year":"1999","unstructured":"Skitka LJ, Mosier KL, Burdick M (1999) Does automation bias decision-making? Int J Hum Comput Stud 51(5):991\u20131006. https:\/\/doi.org\/10.1006\/ijhc.1999.0252","journal-title":"Int J Hum Comput Stud"},{"key":"1777_CR65","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3899991","author":"NA Smuha","year":"2021","unstructured":"Smuha NA, Ahmed-Rengers E, Harkens A et al (2021) How the EU can achieve legally trustworthy AI: a response to the European commission\u2019s proposal for an artificial intelligence act. SSRN Electron J. https:\/\/doi.org\/10.2139\/ssrn.3899991","journal-title":"SSRN Electron J"},{"issue":"4","key":"1777_CR66","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1109\/MSPEC.2019.8678513","volume":"56","author":"E Strickland","year":"2019","unstructured":"Strickland E (2019) IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectr 56(4):24\u201331. https:\/\/doi.org\/10.1109\/MSPEC.2019.8678513","journal-title":"IEEE Spectr"},{"key":"1777_CR67","volume-title":"Wiser: getting beyond groupthink to make groups smarter","author":"CR Sunstein","year":"2015","unstructured":"Sunstein CR, Hastie R (2015) Wiser: getting beyond groupthink to make groups smarter. Harvard Business Review Press, Boston, Massachusetts"},{"issue":"1","key":"1777_CR68","first-page":"5","volume":"29","author":"P Sztompka","year":"2000","unstructured":"Sztompka P (2000) Trust, distrust and the paradox of democracy. 
Polish Polit Sci Yearbook 29(1):5\u201322","journal-title":"Polish Polit Sci Yearbook"},{"issue":"2","key":"1777_CR69","doi-asserted-by":"publisher","first-page":"398","DOI":"10.1016\/j.clsr.2017.12.002","volume":"34","author":"M Veale","year":"2018","unstructured":"Veale M, Edwards L (2018) Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Comput Law Secur Rev 34(2):398\u2013404. https:\/\/doi.org\/10.1016\/j.clsr.2017.12.002","journal-title":"Comput Law Secur Rev"},{"issue":"3","key":"1777_CR70","doi-asserted-by":"publisher","first-page":"615","DOI":"10.3390\/make3030032","volume":"3","author":"G Vilone","year":"2021","unstructured":"Vilone G, Longo L (2021) Classification of explainable artificial intelligence methods through their output formats. Mach Learn Knowl Extraction 3(3):615\u2013661. https:\/\/doi.org\/10.3390\/make3030032","journal-title":"Mach Learn Knowl Extraction"},{"issue":"2","key":"1777_CR71","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1093\/idpl\/ipx005","volume":"7","author":"S Wachter","year":"2017","unstructured":"Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Privacy Law 7(2):76\u201399. https:\/\/doi.org\/10.1093\/idpl\/ipx005","journal-title":"Int Data Privacy Law"},{"issue":"2","key":"1777_CR72","first-page":"841","volume":"31","author":"S Wachter","year":"2018","unstructured":"Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard J Law Technol 31(2):841\u2013887","journal-title":"Harvard J Law Technol"},{"key":"1777_CR73","unstructured":"Wendehorst C (2021) The Proposal for an Artificial Intelligence Act COM(2021) 206 from a Consumer Policy Perspective. 14 December. 
Vienna: Federal Ministry of Social Affairs, Health, Care and Consumer Protection."},{"key":"1777_CR74","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-022-09613-x","author":"D Wong","year":"2022","unstructured":"Wong D, Floridi L (2022) Meta\u2019s oversight board: a review and critical assessment. Mind Mach. https:\/\/doi.org\/10.1007\/s11023-022-09613-x","journal-title":"Mind Mach"},{"issue":"3","key":"1777_CR75","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1504\/IJVD.2007.014908","volume":"45","author":"MS Young","year":"2007","unstructured":"Young MS, Stanton NA, Harris D (2007) Driving automation: learning from aviation about design philosophies. Int J Veh Des 45(3):323. https:\/\/doi.org\/10.1504\/IJVD.2007.014908","journal-title":"Int J Veh Des"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01777-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-023-01777-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01777-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,3]],"date-time":"2024-12-03T05:05:57Z","timestamp":1733202357000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-023-01777-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,6]]},"references-count":75,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["1777"],"URL":"https:\/\/doi.org\/10.1007\/s00146-023-01777-z","relation":{"references":[{"id-type":"uri","id":"","asserted-by":"subject"}]},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0
951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,6]]},"assertion":[{"value":"12 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 September 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 October 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all the authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}