{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T08:08:02Z","timestamp":1771488482159,"version":"3.50.1"},"reference-count":79,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2023,2,13]],"date-time":"2023-02-13T00:00:00Z","timestamp":1676246400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2023,2,13]],"date-time":"2023-02-13T00:00:00Z","timestamp":1676246400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2024,8]]},"DOI":"10.1007\/s00146-023-01640-1","type":"journal-article","created":{"date-parts":[[2023,2,13]],"date-time":"2023-02-13T19:04:35Z","timestamp":1676315075000},"page":"1891-1903","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":25,"title":["When something goes wrong: Who is responsible for errors in ML decision-making?"],"prefix":"10.1007","volume":"39","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2463-410X","authenticated-orcid":false,"given":"Andrea","family":"Berber","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8512-1578","authenticated-orcid":false,"given":"Sanja","family":"Sre\u0107kovi\u0107","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,2,13]]},"reference":[{"issue":"3","key":"1640_CR1","doi-asserted-by":"publisher","first-page":"973","DOI":"10.1177\/1461444816676645","volume":"20","author":"M Ananny","year":"2018","unstructured":"Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency 
ideal and its application to algorithmic accountability. New Media Soc 20(3):973\u2013989","journal-title":"New Media Soc"},{"issue":"4","key":"1640_CR2","first-page":"15","volume":"28","author":"M Anderson","year":"2007","unstructured":"Anderson M, Anderson S (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15\u201326","journal-title":"AI Mag"},{"key":"1640_CR3","unstructured":"Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias: there\u2019s software used across the country to predict future criminals. And it\u2019s biased against blacks. ProPublica. Retrieved November 9, 2021, from https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"1640_CR4","unstructured":"Apps P (2021) New era of robot war may be underway unnoticed. Reuters. Retrieved September 7, 2021, from https:\/\/www.reuters.com\/article\/apps-drones-idUSL5N2NS2E8"},{"issue":"886","key":"1640_CR5","doi-asserted-by":"publisher","first-page":"687","DOI":"10.1017\/S1816383112000768","volume":"94","author":"PM Asaro","year":"2012","unstructured":"Asaro PM (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. Int Rev Red Cross 94(886):687\u2013709","journal-title":"Int Rev Red Cross"},{"key":"1640_CR6","first-page":"169","volume-title":"Robot ethics: the ethical and social implications of robotics","author":"PM Asaro","year":"2014","unstructured":"Asaro PM (2014) A body to kick, but still no soul to damn: legal perspectives on robotics. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, pp 169\u2013186"},{"key":"1640_CR7","volume-title":"The science and art of simulation II","author":"FJ Boge","year":"2019","unstructured":"Boge FJ, Gr\u00fcnke P (2019) Computer simulations, machine learning and the Laplacean demon: opacity in the case of high energy physics. 
In: Kaminski A, Resch M, Gehring P (eds) The science and art of simulation II. Springer"},{"key":"1640_CR8","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1075\/nlp.8.11bry","volume-title":"Close engagements with artificial companions: key social, psychological, ethical and design issues","author":"JJ Bryson","year":"2010","unstructured":"Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins, pp 63\u201374"},{"key":"1640_CR9","doi-asserted-by":"publisher","unstructured":"Burrell J (2016) How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data &\nSociety, 3(1). https:\/\/doi.org\/10.1177\/2053951715622512","DOI":"10.1177\/2053951715622512"},{"key":"1640_CR10","first-page":"399","volume":"530","author":"D Butler","year":"2016","unstructured":"Butler D (2016) Tomorrow\u2019s world. Nature 530:399\u2013401","journal-title":"Nature"},{"issue":"3","key":"1640_CR11","doi-asserted-by":"publisher","first-page":"25","DOI":"10.7748\/ncyp2011.04.23.3.25.c8417","volume":"23","author":"M Cornock","year":"2011","unstructured":"Cornock M (2011) Legal definitions of responsibility, accountability and liability. Nurs Child Young People 23(3):25\u201326","journal-title":"Nurs Child Young People"},{"key":"1640_CR12","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511814594","volume-title":"Responsibility and control: a theory of moral responsibility","author":"JM Fischer","year":"1998","unstructured":"Fischer JM, Ravizza MSJ (1998) Responsibility and control: a theory of moral responsibility. 
Cambridge University Press"},{"issue":"2","key":"1640_CR13","first-page":"38","volume":"80","author":"AW Flores","year":"2016","unstructured":"Flores AW, Lowenkamp CT, Bechtel K (2016) False positives, false negatives, and false analyses: a rejoinder to \u201cMachine bias: there\u2019s software used across the country to predict future criminals. And it\u2019s biased against blacks.\u201d Fed Probat J 80(2):38\u201346","journal-title":"Fed Probat J"},{"key":"1640_CR14","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People\u2014an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28:689\u2013707","journal-title":"Mind Mach"},{"issue":"7397","key":"1640_CR15","doi-asserted-by":"publisher","first-page":"997","DOI":"10.1136\/bmj.326.7397.997","volume":"326","author":"WJ Gaine","year":"2003","unstructured":"Gaine WJ (2003) No-fault compensation systems. BMJ 326(7397):997\u2013998","journal-title":"BMJ"},{"key":"1640_CR16","doi-asserted-by":"crossref","unstructured":"Gilpin LH, Bau D, Yuan BZ, Bajwa A, Specter M, Kagal L (2018) Explaining explanations: an overview of interpretability of machine learning. In: Proceedings of the 2018 IEEE 5th international conference on Data Science and Advanced Analytics (DSAA). IEEE, pp 80\u201389","DOI":"10.1109\/DSAA.2018.00018"},{"key":"1640_CR17","unstructured":"Goertzel B (2002) Thoughts on AI morality. Dyn Psychol Int Interdiscip J Complex Ment Process. 
Retrieved October 31, 2021, from http:\/\/www.goertzel.org\/dynapsyc\/2002\/AIMorality.htm"},{"key":"1640_CR18","doi-asserted-by":"publisher","first-page":"1197","DOI":"10.1007\/s11192-020-03614-2","volume":"125","author":"YC Goh","year":"2020","unstructured":"Goh YC, Cai XQ, Theseira W, Ko G, Khor KA (2020) Evaluating human versus machine learning performance in classifying research abstracts. Scientometrics 125:1197\u20131212","journal-title":"Scientometrics"},{"issue":"3","key":"1640_CR19","first-page":"50","volume":"38","author":"B Goodman","year":"2017","unstructured":"Goodman B, Flaxman S (2017) EU regulations on algorithmic decision-making and a \u2018Right to Explanation.\u2019 AI Mag 38(3):50\u201357","journal-title":"AI Mag"},{"key":"1640_CR20","unstructured":"Grossmann J, Wiesbrock HW, Motta M (2021) Testing ML-based systems. Federal Ministry for Economic Affairs and Energy. https:\/\/docbox.etsi.org\/mts\/mts\/05-CONTRIBUTIONS\/2022\/MTS(22)086017_Testing_ML-based_Systems.pdf"},{"issue":"5","key":"1640_CR21","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","volume":"51","author":"R Guidotti","year":"2018","unstructured":"Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv 51(5):1\u201342","journal-title":"ACM Comput Surv"},{"key":"1640_CR22","doi-asserted-by":"publisher","first-page":"307","DOI":"10.1007\/s10676-017-9428-2","volume":"22","author":"DJ Gunkel","year":"2020","unstructured":"Gunkel DJ (2020) Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol 22:307\u2013320","journal-title":"Ethics Inf Technol"},{"key":"1640_CR23","unstructured":"Hall JS (2001) Ethics for machines. Kurzweil Essays. 
Retrieved June 15, 2021, from KurzweilAI.net http:\/\/www.kurzweilai.net\/ethics-for-machines"},{"key":"1640_CR24","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1007\/s10676-009-9184-z","volume":"11","author":"FA Hanson","year":"2009","unstructured":"Hanson FA (2009) Beyond the skin bag: on the moral responsibility of extended agencies. Ethics Inf Technol 11:91\u201399","journal-title":"Ethics Inf Technol"},{"key":"1640_CR25","unstructured":"Hart E (2019) Machine learning 101: the what, why, and how of weighting. KDnuggets. Retrieved May 21, 2021, from https:\/\/www.kdnuggets.com\/2019\/11\/machine-learning-what-why-how-weighting.html"},{"issue":"3","key":"1640_CR26","first-page":"645","volume":"2","author":"LM Henry","year":"2015","unstructured":"Henry LM, Larkin ME, Pike ER (2015) Just compensation: a no-fault proposal for research-related injuries. J Law Biosci 2(3):645\u2013668","journal-title":"J Law Biosci"},{"key":"1640_CR27","unstructured":"Hoffman RR, Mueller ST, Klein G, Litman J (2018) Metrics for explainable AI: challenges and prospects. XAI Metrics. Retrieved October 1, 2021, from https:\/\/arxiv.org\/ftp\/arxiv\/papers\/1812\/1812.04608.pdf"},{"key":"1640_CR28","doi-asserted-by":"publisher","DOI":"10.1093\/0195158709.001.0001","volume-title":"Extending ourselves: computational science, empiricism, and scientific method","author":"P Humphreys","year":"2004","unstructured":"Humphreys P (2004) Extending ourselves: computational science, empiricism, and scientific method. Oxford University Press"},{"key":"1640_CR29","doi-asserted-by":"publisher","first-page":"615","DOI":"10.1007\/s11229-008-9435-2","volume":"169","author":"P Humphreys","year":"2009","unstructured":"Humphreys P (2009) The philosophical novelty of computer simulation methods. 
Synthese 169:615\u2013626","journal-title":"Synthese"},{"issue":"4","key":"1640_CR30","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1007\/s10676-006-9111-5","volume":"8","author":"DG Johnson","year":"2006","unstructured":"Johnson DG (2006) Computer systems: moral entities but not moral agents. Ethics Inf Technol 8(4):195\u2013204","journal-title":"Ethics Inf Technol"},{"issue":"2\u20133","key":"1640_CR31","doi-asserted-by":"publisher","first-page":"123","DOI":"10.1007\/s10676-008-9174-6","volume":"10","author":"DG Johnson","year":"2008","unstructured":"Johnson DG, Miller KW (2008) Un-making artificial moral agents. Ethics Inf Technol 10(2\u20133):123\u2013133","journal-title":"Ethics Inf Technol"},{"key":"1640_CR32","unstructured":"Lauret J (2019) Amazon\u2019s sexist AI recruiting tool: how did it go so wrong? Medium. Retrieved November 9, 2021, from https:\/\/becominghuman.ai\/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e"},{"issue":"8","key":"1640_CR33","doi-asserted-by":"publisher","first-page":"e19918","DOI":"10.2196\/19918","volume":"22","author":"J Lee","year":"2020","unstructured":"Lee J (2020) Is artificial intelligence better than human clinicians in predicting patient outcomes? J Med Internet Res 22(8):e19918. https:\/\/doi.org\/10.2196\/19918","journal-title":"J Med Internet Res"},{"key":"1640_CR34","unstructured":"Lipton ZC (2016) The mythos of model interpretability. In: 2016 ICML workshop on human interpretability in machine learning (WHI 2016). New York. https:\/\/arxiv.org\/abs\/1606.03490"},{"key":"1640_CR35","volume-title":"Tort law","author":"M Lunney","year":"2013","unstructured":"Lunney M, Oliphant K (2013) Tort law, 5th edn. 
Oxford University Press","edition":"5"},{"issue":"3","key":"1640_CR36","doi-asserted-by":"publisher","first-page":"175","DOI":"10.1007\/s10676-004-3422-1","volume":"6","author":"A Matthias","year":"2004","unstructured":"Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175\u2013183","journal-title":"Ethics Inf Technol"},{"key":"1640_CR37","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1007\/s11098-007-9100-5","volume":"139","author":"M McKenna","year":"2008","unstructured":"McKenna M (2008) Putting the lie on the control condition for moral responsibility. Philos Stud 139:29\u201337","journal-title":"Philos Stud"},{"key":"1640_CR38","unstructured":"Mehta S (2022) Deterministic vs stochastic machine learning [Blog post]. https:\/\/analyticsindiamag.com\/deterministic-vs-stochastic-machine-learning\/"},{"key":"1640_CR39","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2017","unstructured":"Miller T (2017) Explanation in artificial intelligence: insights from the social sciences. Artif Intell 267:1\u201338","journal-title":"Artif Intell"},{"key":"1640_CR40","doi-asserted-by":"publisher","first-page":"501","DOI":"10.1038\/s42256-019-0114-4","volume":"1","author":"B Mittelstadt","year":"2019","unstructured":"Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1:501\u2013507","journal-title":"Nat Mach Intell"},{"key":"1640_CR41","doi-asserted-by":"crossref","unstructured":"Mittelstadt B, Russell C, Wachter S (2019) Explaining explanations in AI. In: FAT* \u201919: conference on fairness, accountability, and transparency (FAT* \u201919). Retrieved October 30, 2021, from https:\/\/arxiv.org\/pdf\/1811.01439.pdf","DOI":"10.1145\/3287560.3287574"},{"key":"1640_CR42","unstructured":"Molnar C (2019) Interpretable Machine Learning. 
Available online: https:\/\/christophm.github.io\/interpretable-mlbook\/"},{"key":"1640_CR43","doi-asserted-by":"crossref","unstructured":"Moor J (2006) The nature, importance and difficulty of machine ethics. IEEE Intelligent Systems 21(4): 18-21","DOI":"10.1109\/MIS.2006.80"},{"issue":"3","key":"1640_CR44","doi-asserted-by":"publisher","first-page":"271","DOI":"10.1007\/s00146-007-0147-9","volume":"22","author":"A Mowshowitz","year":"2008","unstructured":"Mowshowitz A (2008) Technology as excuse for questionable ethics. AI Soc 22(3):271\u2013282","journal-title":"AI Soc"},{"issue":"1","key":"1640_CR45","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1007\/BF02639315","volume":"2","author":"H Nissenbaum","year":"1996","unstructured":"Nissenbaum H (1996) Accountability in a computerized society. Sci Eng Ethics 2(1):25\u201342","journal-title":"Sci Eng Ethics"},{"key":"1640_CR46","first-page":"9","volume":"23","author":"J Ombach","year":"2014","unstructured":"Ombach J (2014) A short introduction to stochastic optimization. Schedae Informaticae 23:9\u201320","journal-title":"Schedae Informaticae"},{"key":"1640_CR47","doi-asserted-by":"publisher","first-page":"441","DOI":"10.1007\/s11023-019-09502-w","volume":"29","author":"A Paez","year":"2019","unstructured":"Paez A (2019) The pragmatic turn in explainable artificial intelligence (XAI). Mind Mach 29:441\u2013459","journal-title":"Mind Mach"},{"key":"1640_CR48","unstructured":"Pant K (2021) AI in the courts [Blog post]. Retrieved from https:\/\/indianexpress.com\/article\/opinion\/artificial-intelligence-in-the-courts-7399436\/"},{"key":"1640_CR49","doi-asserted-by":"crossref","unstructured":"Price M (2019) Hospital \u2018risk scores' prioritize white patients. Science. 
Retrieved November 9, 2021, from https:\/\/www.science.org\/content\/article\/hospital-risk-scores-prioritize-white-patients","DOI":"10.1126\/science.aaz9777"},{"key":"1640_CR50","unstructured":"Ribera TM, Lapedriza A (2019) Can we do better explanations? A proposal of user-centered explainable AI. In joint Proceedings of the ACM IUI 2019 workshops"},{"issue":"5","key":"1640_CR51","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206\u2013215","journal-title":"Nat Mach Intell"},{"key":"1640_CR52","unstructured":"Russ M (2021) Artificial intelligence, machine learning, and deep learning\u2014what is the difference and why it matters [Blog post]. Retrieved from https:\/\/bluehealthintelligence.com\/how-to-know-the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-and-why-it-matters\/"},{"key":"1640_CR53","volume-title":"Artificial intelligence: a modern approach","year":"2016","unstructured":"Russell SJ, Norvig P (eds) (2016) Artificial intelligence: a modern approach. Pearson Education Limited, Cham"},{"key":"1640_CR54","volume-title":"Explainable AI: interpreting, explaining and visualizing deep learning","year":"2019","unstructured":"Samek W, Montavon G, Vedaldi A, Hansen LK, M\u00fcller KR (eds) (2019) Explainable AI: interpreting, explaining and visualizing deep learning. Springer"},{"key":"1640_CR55","doi-asserted-by":"publisher","first-page":"51","DOI":"10.1007\/978-3-319-55762-5_5","volume-title":"The science and art of simulation I: exploring\u2014understanding\u2014knowing","author":"B Schembera","year":"2017","unstructured":"Schembera B (2017) Myths of Simulation. In: Resch MM, Kaminski A, Gehring P (eds) The science and art of simulation I: exploring\u2014understanding\u2014knowing. 
Springer, Cham, pp 51\u201363"},{"key":"1640_CR56","unstructured":"Sidelov P (2021) Machine learning in banking: top use cases [Blog post]. Retrieved from https:\/\/sdk.finance\/top-machine-learning-use-cases-in-banking\/"},{"issue":"4","key":"1640_CR57","doi-asserted-by":"publisher","first-page":"279","DOI":"10.1007\/s10676-005-6710-5","volume":"6","author":"M Siponen","year":"2004","unstructured":"Siponen M (2004) A pragmatic evaluation of the theory of information ethics. Ethics Inf Technol 6(4):279\u2013290","journal-title":"Ethics Inf Technol"},{"issue":"1","key":"1640_CR58","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1111\/j.1468-5930.2007.00346.x","volume":"24","author":"R Sparrow","year":"2007","unstructured":"Sparrow R (2007) Killer robots. J Appl Philos 24(1):62","journal-title":"J Appl Philos"},{"key":"1640_CR59","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-021-09575-6","author":"S Sre\u0107kovi\u0107","year":"2022","unstructured":"Sre\u0107kovi\u0107 S, Berber A, Filipovi\u0107 N (2022) The automated Laplacean demon: how ML challenges our views on prediction and explanation. Mind Mach. https:\/\/doi.org\/10.1007\/s11023-021-09575-6","journal-title":"Mind Mach"},{"issue":"12","key":"1640_CR60","first-page":"23","volume":"6","author":"JP Sullins","year":"2006","unstructured":"Sullins JP (2006) When is a robot a moral agent? Int Rev Inf Ethics 6(12):23\u201330","journal-title":"Int Rev Inf Ethics"},{"key":"1640_CR61","unstructured":"Talbert M (2022) Moral responsibility. In: Zalta EN, Nodelman U (eds) The Stanford encyclopedia of philosophy (Fall 2022 edition). https:\/\/plato.stanford.edu\/archives\/fall2022\/entries\/moral-responsibility\/"},{"key":"1640_CR62","unstructured":"Tkachenko N (2021) Machine learning in healthcare: 12 real-world use cases to know [Blog post]. 
Retrieved from https:\/\/nix-united.com\/blog\/machine-learning-in-healthcare-12-real-world-use-cases-to-know\/#:~:text=One%20of%20the%20uses%20of,decision%2Dmaking%20and%20patient%20care."},{"key":"1640_CR63","first-page":"37","volume-title":"Computer media and communication: a reader","author":"A Turing","year":"1999","unstructured":"Turing A (1999) Computing machinery and intelligence. In: Meyer PA (ed) Computer media and communication: a reader. Oxford University Press, pp 37\u201358"},{"key":"1640_CR64","unstructured":"UNI Global Union (2018) 10 principles for ethical AI. UNI Global Union, February 21, 2021. http:\/\/www.thefutureworldofwork.org\/opinions\/10-principles-for-ethical-ai\/"},{"issue":"3","key":"1640_CR65","doi-asserted-by":"publisher","first-page":"246","DOI":"10.1089\/big.2016.0051","volume":"5","author":"KR Varshney","year":"2017","unstructured":"Varshney KR, Alemzadeh H (2017) On the safety of machine learning: cyber-physical systems, decision sciences, and data products. Big Data 5(3):246\u2013255","journal-title":"Big Data"},{"key":"1640_CR66","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226852904.001.0001","volume-title":"Moralizing technology: understanding and designing the morality of things","author":"PP Verbeek","year":"2011","unstructured":"Verbeek PP (2011) Moralizing technology: understanding and designing the morality of things. University of Chicago Press"},{"issue":"2","key":"1640_CR67","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1093\/idpl\/ipx005","volume":"7","author":"S Wachter","year":"2016","unstructured":"Wachter S, Mittelstadt B, Floridi L (2016) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. 
Int Data Privacy Law 7(2):76\u201399","journal-title":"Int Data Privacy Law"},{"issue":"2","key":"1640_CR68","first-page":"841","volume":"31","author":"S Wachter","year":"2018","unstructured":"Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law Technol 31(2):841\u2013887","journal-title":"Harv J Law Technol"},{"key":"1640_CR69","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:oso\/9780195374049.001.0001","volume-title":"Moral machines: teaching robots right from wrong","author":"W Wallach","year":"2009","unstructured":"Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press"},{"issue":"4","key":"1640_CR70","doi-asserted-by":"publisher","first-page":"549","DOI":"10.1093\/biostatistics\/kxy002","volume":"20","author":"F Wang","year":"2019","unstructured":"Wang F, Rudin C, McCormick TH, Gore JL (2019) Modeling recovery curves with application to prostatectomy. Biostatistics 20(4):549\u2013564","journal-title":"Biostatistics"},{"key":"1640_CR71","doi-asserted-by":"publisher","first-page":"19626","DOI":"10.1038\/s41598-022-20474-3","volume":"12","author":"H Wang","year":"2022","unstructured":"Wang H, Shuai P, Deng Y et al (2022) A correlation-based feature analysis of physical examination indicators can help predict the overall underlying health status using machine learning. Sci Rep 12:19626","journal-title":"Sci Rep"},{"key":"1640_CR72","unstructured":"Wexler R (2017) When a computer program keeps you in jail: how computers are harming criminal justice. New York Times. 
Retrieved October 3, 2021, https:\/\/www.nytimes.com\/2017\/06\/13\/opinion\/how-computers-are-harming-criminal-justice.html"},{"issue":"3","key":"1640_CR73","doi-asserted-by":"publisher","first-page":"203","DOI":"10.2471\/BLT.14.139022","volume":"93","author":"R Wyber","year":"2015","unstructured":"Wyber R, Vaillancourt S, Perry W, Mannava P, Folaranmi T, Celi LA (2015) Big data in global health: improving health in low- and middle-income countries. Bull World Health Organ 93(3):203\u2013208","journal-title":"Bull World Health Organ"},{"issue":"2","key":"1640_CR74","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1142\/S2705078520500150","volume":"7","author":"R Yampolskiy","year":"2020","unstructured":"Yampolskiy R (2020) Unexplainability and incomprehensibility of AI. J Artif Intell Conscious 7(2):277\u2013291","journal-title":"J Artif Intell Conscious"},{"key":"1640_CR75","unstructured":"Yeung K (2019) Responsibility and AI: a study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe Study Series. Council of Europe"},{"key":"1640_CR76","doi-asserted-by":"publisher","first-page":"265","DOI":"10.1007\/s13347-019-00382-7","volume":"34","author":"C Zednik","year":"2019","unstructured":"Zednik C (2019) Solving the black box problem: a normative framework for explainable artificial intelligence. Philos Technol 34:265\u2013288","journal-title":"Philos Technol"},{"key":"1640_CR77","doi-asserted-by":"publisher","first-page":"661","DOI":"10.1007\/s13347-018-0330-6","volume":"32","author":"J Zerilli","year":"2019","unstructured":"Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? 
Philos Technol 32:661\u2013683","journal-title":"Philos Technol"},{"key":"1640_CR78","doi-asserted-by":"publisher","unstructured":"Zhao T, Dai E, Shu K, Wang S (2022) Towards fair classifiers without sensitive attributes: exploring biases in related features. In: Conference: WSDM '22: the fifteenth ACM international conference on web search and data mining, pp 1433\u20131442. https:\/\/doi.org\/10.1145\/3488560.3498493","DOI":"10.1145\/3488560.3498493"},{"issue":"3","key":"1640_CR79","doi-asserted-by":"publisher","first-page":"410","DOI":"10.1086\/233742","volume":"107","author":"MJ Zimmerman","year":"1997","unstructured":"Zimmerman MJ (1997) Moral responsibility and ignorance. Ethics 107(3):410\u2013426","journal-title":"Ethics"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01640-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-023-01640-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-023-01640-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,13]],"date-time":"2024-08-13T06:08:45Z","timestamp":1723529325000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-023-01640-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,13]]},"references-count":79,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1640"],"URL":"https:\/\/doi.org\/10.1007\/s00146-023-01640-1","relation":{},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts"
:[[2023,2,13]]},"assertion":[{"value":"24 January 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 January 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 February 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}