{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T00:38:23Z","timestamp":1774226303114,"version":"3.50.1"},"reference-count":51,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,8,3]],"date-time":"2024-08-03T00:00:00Z","timestamp":1722643200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,8,3]],"date-time":"2024-08-03T00:00:00Z","timestamp":1722643200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2024,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Artificially intelligent machines are different in kind from all previous machines and tools. While many are used for relatively benign purposes, the types of artificially intelligent machines that we should care about, the ones that are worth focusing on, are the machines that <jats:italic>purport<\/jats:italic> to replace humans entirely and thereby engage in what Brian Cantwell Smith calls \u201cjudgment.\u201d As impressive as artificially intelligent machines are, their abilities are still derived from humans and as such lack the sort of normative commitments that humans have. So while artificially intelligent machines possess a great capacity for \u201creckoning,\u201d to use Smith\u2019s terminology, i.e., a calculative prowess of extraordinary utility and importance, they still lack the kind of considered human judgment that accompanies the ethical commitment and responsible action we humans must ultimately aspire toward. But there is a perfect technological storm brewing. 
Artificially intelligent machines are analogous to a perfect storm in that such machines involve the convergence of a number of factors that threaten our ability to behave ethically and maintain meaningful human control over the outcomes of processes involving artificial intelligence. I argue that the storm in the context of artificially intelligent machines makes us vulnerable to moral complacency. That is, this perfect technological storm is capable of lulling people into a state in which they abdicate responsibility for decision-making and behaviour precipitated by the use of artificially intelligent machines, a state that I am calling \u201cmoral complacency.\u201d I focus on three salient problems that converge to make us especially vulnerable to becoming morally complacent and losing meaningful human control. The first problem is that of transparency\/opacity. The second problem is that of overtrust in machines, often referred to as the automation bias. The third problem is that of ascribing responsibility. I examine each of these problems and how together they threaten to render us morally complacent.<\/jats:p>","DOI":"10.1007\/s10676-024-09788-0","type":"journal-article","created":{"date-parts":[[2024,8,3]],"date-time":"2024-08-03T04:01:50Z","timestamp":1722657710000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["The perfect technological storm: artificial intelligence and moral complacency"],"prefix":"10.1007","volume":"26","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0963-1004","authenticated-orcid":false,"given":"Marten H. L.","family":"Kaas","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,8,3]]},"reference":[{"key":"9788_CR1","doi-asserted-by":"crossref","unstructured":"Adams, Z., & Browning, J. (Eds.). (2017). Giving a damn: Essays in dialogue with John Haugeland. 
MIT Press.","DOI":"10.7551\/mitpress\/9780262035248.001.0001"},{"issue":"3","key":"9788_CR2","doi-asserted-by":"publisher","first-page":"973","DOI":"10.1177\/1461444816676645","volume":"20","author":"M Ananny","year":"2018","unstructured":"Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973\u2013989. https:\/\/doi.org\/10.1177\/1461444816676645","journal-title":"New Media & Society"},{"key":"9788_CR3","unstructured":"Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"9788_CR4","unstructured":"Armstrong, K. (2023, May 27). ChatGPT: US lawyer admits using AI for case research. BBC News. https:\/\/www.bbc.com\/news\/world-us-canada-65735769"},{"key":"9788_CR5","doi-asserted-by":"publisher","unstructured":"Bainbridge, L. (1983). Ironies of automation. In Analysis, Design and Evaluation of Man\u2013Machine Systems (pp. 129\u2013135). Elsevier. https:\/\/doi.org\/10.1016\/B978-0-08-029348-6.50026-9","DOI":"10.1016\/B978-0-08-029348-6.50026-9"},{"key":"9788_CR6","unstructured":"Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., & Liang, P. (2022). On the Opportunities and Risks of Foundation Models (arXiv:2108.07258). arXiv. http:\/\/arxiv.org\/abs\/2108.07258."},{"issue":"1","key":"9788_CR7","doi-asserted-by":"publisher","first-page":"205395171562251","DOI":"10.1177\/2053951715622512","volume":"3","author":"J Burrell","year":"2016","unstructured":"Burrell, J. (2016). How the machine \u2018thinks\u2019: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. 
https:\/\/doi.org\/10.1177\/2053951715622512","journal-title":"Big Data & Society"},{"key":"9788_CR8","doi-asserted-by":"publisher","first-page":"103201","DOI":"10.1016\/j.artint.2019.103201","volume":"279","author":"S Burton","year":"2020","unstructured":"Burton, S., Habli, I., Lawton, T., McDermid, J., Morgan, P., & Porter, Z. (2020). Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artificial Intelligence, 279, 103201. https:\/\/doi.org\/10.1016\/j.artint.2019.103201","journal-title":"Artificial Intelligence"},{"issue":"2","key":"9788_CR9","doi-asserted-by":"publisher","first-page":"309","DOI":"10.1007\/s00146-019-00888-w","volume":"35","author":"M Carabantes","year":"2020","unstructured":"Carabantes, M. (2020). Black-box artificial intelligence: An epistemological and critical analysis. AI & SOCIETY, 35(2), 309\u2013317. https:\/\/doi.org\/10.1007\/s00146-019-00888-w","journal-title":"AI & SOCIETY"},{"issue":"3","key":"9788_CR10","doi-asserted-by":"publisher","first-page":"Article3","DOI":"10.1109\/JPROC.2018.2865996","volume":"107","author":"S Cave","year":"2019","unstructured":"Cave, S., Nyrup, R., Vold, K., & Weller, A. (2019). Motivations and risks of Machine Ethics. Proceedings of the IEEE, 107(3), Article3. https:\/\/doi.org\/10.1109\/JPROC.2018.2865996","journal-title":"Proceedings of the IEEE"},{"key":"9788_CR11","unstructured":"Computer Security Division, I. T. L. (2019, October 28). CSRC Topic: Artificial intelligence | CSRC. CSRC | NIST. https:\/\/csrc.nist.gov\/Topics\/technologies\/artificial-intelligence"},{"key":"9788_CR12","unstructured":"Cuthbertson, A. (2023, February 22). Hundreds of AI-written books flood Amazon. The Independent. https:\/\/www.independent.co.uk\/tech\/ai-author-books-amazon-chatgpt-b2287111.html"},{"key":"9788_CR13","doi-asserted-by":"publisher","unstructured":"Diakopoulos, N. (2020). Transparency. In M. D. Dubber, F. Pasquale, & S. 
Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 196\u2013213). Oxford University Press. https:\/\/doi.org\/10.1093\/oxfordhb\/9780190067397.013.11","DOI":"10.1093\/oxfordhb\/9780190067397.013.11"},{"issue":"1","key":"9788_CR14","doi-asserted-by":"publisher","first-page":"eaao5580","DOI":"10.1126\/sciadv.aao5580","volume":"4","author":"J Dressel","year":"2018","unstructured":"Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https:\/\/doi.org\/10.1126\/sciadv.aao5580","journal-title":"Science Advances"},{"key":"9788_CR15","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3518482","author":"J Fjeld","year":"2020","unstructured":"Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in ethical and rights-based approaches to principles for AI. SSRN Electronic Journal. https:\/\/doi.org\/10.2139\/ssrn.3518482","journal-title":"SSRN Electronic Journal"},{"issue":"6","key":"9788_CR16","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1145\/242485.242493","volume":"3","author":"B Friedman","year":"1996","unstructured":"Friedman, B. (1996). Value-sensitive design. Interactions, 3(6), 16\u201323.","journal-title":"Interactions"},{"issue":"3","key":"9788_CR17","doi-asserted-by":"publisher","first-page":"397","DOI":"10.3197\/096327106778226293","volume":"15","author":"SM Gardiner","year":"2006","unstructured":"Gardiner, S. M. (2006). A Perfect Moral Storm: Climate Change, Intergenerational Ethics and the Problem of Moral Corruption. Environmental Values, 15(3), 397\u2013413. https:\/\/doi.org\/10.3197\/096327106778226293","journal-title":"Environmental Values"},{"key":"9788_CR18","doi-asserted-by":"publisher","unstructured":"Gebru, T. (2020). Race and gender. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 251\u2013269). Oxford University Press. 
https:\/\/doi.org\/10.1093\/oxfordhb\/9780190067397.013.16","DOI":"10.1093\/oxfordhb\/9780190067397.013.16"},{"issue":"4","key":"9788_CR19","doi-asserted-by":"publisher","first-page":"3473","DOI":"10.1007\/s10462-022-10256-8","volume":"56","author":"M Graziani","year":"2023","unstructured":"Graziani, M., Dutkiewicz, L., Calvaresi, D., Amorim, J. P., Yordanova, K., Vered, M., Nair, R., Abreu, P. H., Blanke, T., Pulignano, V., Prior, J. O., Lauwaert, L., Reijers, W., Depeursinge, A., Andrearczyk, V., & M\u00fcller, H. (2023). A global taxonomy of interpretable AI: Unifying the terminology for the technical and social sciences. Artificial Intelligence Review, 56(4), 3473\u20133504. https:\/\/doi.org\/10.1007\/s10462-022-10256-8","journal-title":"Artificial Intelligence Review"},{"key":"9788_CR20","unstructured":"Harford, T. (2016, October 11). Crash: How computers are setting us up for disaster. The Guardian. https:\/\/www.theguardian.com\/technology\/2016\/oct\/11\/crash-how-computers-are-setting-us-up-disaster"},{"issue":"3","key":"9788_CR21","doi-asserted-by":"publisher","first-page":"849","DOI":"10.1007\/s40685-020-00138-6","volume":"13","author":"P Hayes","year":"2020","unstructured":"Hayes, P. (2020). An ethical intuitionist account of transparency of algorithms and its gradations. Business Research, 13(3), 849\u2013874. https:\/\/doi.org\/10.1007\/s40685-020-00138-6","journal-title":"Business Research"},{"key":"9788_CR22","doi-asserted-by":"publisher","unstructured":"Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An Overview of Catastrophic AI Risks. https:\/\/doi.org\/10.48550\/ARXIV.2306.12001","DOI":"10.48550\/ARXIV.2306.12001"},{"issue":"4","key":"9788_CR23","doi-asserted-by":"publisher","first-page":"1746","DOI":"10.1109\/TETC.2022.3171314","volume":"10","author":"Y Jia","year":"2022","unstructured":"Jia, Y., McDermid, J., Lawton, T., & Habli, I. (2022). The role of Explainability in Assuring Safety of Machine Learning in Healthcare. 
IEEE Transactions on Emerging Topics in Computing, 10(4), 1746\u20131760. https:\/\/doi.org\/10.1109\/TETC.2022.3171314","journal-title":"IEEE Transactions on Emerging Topics in Computing"},{"issue":"9","key":"9788_CR24","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1038\/s42256-019-0088-2","volume":"1","author":"A Jobin","year":"2019","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389\u2013399. https:\/\/doi.org\/10.1038\/s42256-019-0088-2","journal-title":"Nature Machine Intelligence"},{"key":"9788_CR25","doi-asserted-by":"publisher","DOI":"10.1038\/s41433-022-02289-8","author":"S Khavandi","year":"2022","unstructured":"Khavandi, S., Lim, E., Higham, A., de Pennington, N., Bindra, M., Maling, S., Adams, M., & Mole, G. (2022). User-acceptability of an automated telephone call for post-operative follow-up after uncomplicated cataract surgery. Eye (London, England). https:\/\/doi.org\/10.1038\/s41433-022-02289-8","journal-title":"Eye (London, England)"},{"key":"9788_CR26","unstructured":"Lajka, A., & Marcelo, P. (2023, March 23). Fake AI images of Putin, Trump being arrested spread online. PBS NewsHour. https:\/\/www.pbs.org\/newshour\/politics\/fake-ai-images-of-putin-trump-being-arrested-spread-online"},{"key":"9788_CR27","doi-asserted-by":"publisher","DOI":"10.22541\/au.168209222.21704626\/v1","author":"T Lawton","year":"2023","unstructured":"Lawton, T., Morgan, P., Porter, Z., Cunningham, A., Hughes, N., Iacovides, I., Jia, Y., Sharma, V., & Habli, I. (2023). Clinicians risk becoming \u2018Liability sinks\u2019 for Artificial Intelligence. Preprints. https:\/\/doi.org\/10.22541\/au.168209222.21704626\/v1. Preprint.","journal-title":"Preprints"},{"key":"9788_CR28","doi-asserted-by":"publisher","unstructured":"Lipton, Z. C. (2016). The Mythos of Model Interpretability. 
https:\/\/doi.org\/10.48550\/ARXIV.1606.03490","DOI":"10.48550\/ARXIV.1606.03490"},{"key":"9788_CR29","doi-asserted-by":"publisher","first-page":"90","DOI":"10.1016\/j.obhdp.2018.12.005","volume":"151","author":"JM Logg","year":"2019","unstructured":"Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90\u2013103. https:\/\/doi.org\/10.1016\/j.obhdp.2018.12.005","journal-title":"Organizational Behavior and Human Decision Processes"},{"issue":"3","key":"9788_CR30","doi-asserted-by":"publisher","first-page":"421","DOI":"10.1007\/s11023-021-09570-x","volume":"31","author":"J Maclure","year":"2021","unstructured":"Maclure, J. (2021). AI, explainability and public reason: The argument from the limitations of the human mind. Minds and Machines, 31(3), 421\u2013438. https:\/\/doi.org\/10.1007\/s11023-021-09570-x","journal-title":"Minds and Machines"},{"issue":"2207","key":"9788_CR31","doi-asserted-by":"publisher","first-page":"20200363","DOI":"10.1098\/rsta.2020.0363","volume":"379","author":"JA McDermid","year":"2021","unstructured":"McDermid, J. A., Jia, Y., Porter, Z., & Habli, I. (2021). Artificial intelligence explainability: The technical and ethical dimensions. Philosophical Transactions of the Royal Society A: Mathematical Physical and Engineering Sciences, 379(2207), 20200363. https:\/\/doi.org\/10.1098\/rsta.2020.0363","journal-title":"Philosophical Transactions of the Royal Society A: Mathematical Physical and Engineering Sciences"},{"key":"9788_CR32","unstructured":"Milmo, D. (2023, February 9). Google AI chatbot Bard sends shares plummeting after it gives wrong answer. The Guardian. https:\/\/www.theguardian.com\/technology\/2023\/feb\/09\/google-ai-chatbot-bard-error-sends-shares-plummeting-in-battle-with-microsoft"},{"key":"9788_CR33","unstructured":"Minsky, M. (1968). Semantic information Processing. MIT Press. 
https:\/\/books.google.co.uk\/books?id=F3NSAQAACAAJ"},{"issue":"4","key":"9788_CR34","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/MIS.2006.80","volume":"21","author":"JH Moor","year":"2006","unstructured":"Moor, J. H. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18\u201321. https:\/\/doi.org\/10.1109\/MIS.2006.80","journal-title":"IEEE Intelligent Systems"},{"issue":"1","key":"9788_CR35","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1007\/BF02639315","volume":"2","author":"H Nissenbaum","year":"1996","unstructured":"Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25\u201342. https:\/\/doi.org\/10.1007\/BF02639315","journal-title":"Science and Engineering Ethics"},{"issue":"2","key":"9788_CR36","doi-asserted-by":"publisher","first-page":"293","DOI":"10.1080\/09672559.2018.1454637","volume":"26","author":"O O\u2019Neill","year":"2018","unstructured":"O\u2019Neill, O. (2018). Linking Trust to Trustworthiness. International Journal of Philosophical Studies, 26(2), 293\u2013300. https:\/\/doi.org\/10.1080\/09672559.2018.1454637","journal-title":"International Journal of Philosophical Studies"},{"key":"9788_CR37","doi-asserted-by":"publisher","unstructured":"Ozturk, B., Lawton, T., Smith, S., & Habli, I. (2023). Predicting Progression of type 2 diabetes using primary Care Data with the help of machine learning. In M. H\u00e4gglund, M. Blusi, S. Bonacina, L. Nilsson, I. Cort Madsen, S. Pelayo, A. Moen, A. Benis, L. Lindsk\u00f6ld, & P. Gallos (Eds.), Studies in Health Technology and Informatics. IOS. https:\/\/doi.org\/10.3233\/SHTI230060","DOI":"10.3233\/SHTI230060"},{"key":"9788_CR38","unstructured":"Pause Giant AI Experiments: An Open Letter. (2023, March). Future of Life Institute. 
https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/"},{"key":"9788_CR39","doi-asserted-by":"publisher","unstructured":"Porter, Z., Al-Qaddoumi, J., Conmy, P. R., Morgan, P., McDermid, J., & Habli, I. (2023). Unravelling Responsibility for AI. https:\/\/doi.org\/10.48550\/ARXIV.2308.02608","DOI":"10.48550\/ARXIV.2308.02608"},{"key":"9788_CR40","doi-asserted-by":"publisher","unstructured":"Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016). Overtrust of robots in emergency evacuation scenarios. 2016 11th ACM\/IEEE International Conference on Human-Robot Interaction (HRI), 101\u2013108. https:\/\/doi.org\/10.1109\/HRI.2016.7451740","DOI":"10.1109\/HRI.2016.7451740"},{"key":"9788_CR41","doi-asserted-by":"publisher","unstructured":"Ryan Conmy, P., Ozturk, B., Lawton, T., & Habli, I. (2023). The Impact of Training Data Shortfalls on Safety of AI-Based Clinical Decision Support Systems. In J. Guiochet, S. Tonetta, & F. Bitsch (Eds.), Computer Safety, Reliability, and Security (Vol. 14181, pp. 213\u2013226). Springer Nature Switzerland. https:\/\/doi.org\/10.1007\/978-3-031-40923-3_16","DOI":"10.1007\/978-3-031-40923-3_16"},{"issue":"3","key":"9788_CR42","doi-asserted-by":"publisher","first-page":"13","DOI":"10.1109\/MCE.2021.3075329","volume":"11","author":"S Saeedi","year":"2022","unstructured":"Saeedi, S., Fong, A. C. M., Mohanty, S. P., Gupta, A. K., & Carr, S. (2022). Consumer Artificial Intelligence mishaps and Mitigation Strategies. IEEE Consumer Electronics Magazine, 11(3), 13\u201324. https:\/\/doi.org\/10.1109\/MCE.2021.3075329","journal-title":"IEEE Consumer Electronics Magazine"},{"issue":"7837","key":"9788_CR43","doi-asserted-by":"publisher","first-page":"S102","DOI":"10.1038\/d41586-020-03409-8","volume":"588","author":"N Savage","year":"2020","unstructured":"Savage, N. (2020). The race to the top among the world\u2019s leaders in artificial intelligence. Nature, 588(7837), S102\u2013S104. 
https:\/\/doi.org\/10.1038\/d41586-020-03409-8","journal-title":"Nature"},{"key":"9788_CR44","doi-asserted-by":"crossref","unstructured":"Smith, B. C. (2019). The promise of artificial intelligence: Reckoning and judgment. The MIT Press.","DOI":"10.7551\/mitpress\/12385.001.0001"},{"issue":"4","key":"9788_CR45","doi-asserted-by":"publisher","first-page":"905","DOI":"10.2307\/1954312","volume":"74","author":"DF Thompson","year":"1980","unstructured":"Thompson, D. F. (1980). Moral responsibility of public officials: The Problem of many hands. American Political Science Review, 74(4), 905\u2013916. https:\/\/doi.org\/10.2307\/1954312","journal-title":"American Political Science Review"},{"issue":"4","key":"9788_CR46","doi-asserted-by":"publisher","first-page":"683","DOI":"10.1007\/s11023-022-09614-w","volume":"32","author":"B Townsend","year":"2022","unstructured":"Townsend, B., Paterson, C., Arvind, T. T., Nemirovsky, G., Calinescu, R., Cavalcanti, A., Habli, I., & Thomas, A. (2022). From pluralistic normative principles to Autonomous-Agent rules. Minds and Machines, 32(4), 683\u2013715. https:\/\/doi.org\/10.1007\/s11023-022-09614-w","journal-title":"Minds and Machines"},{"issue":"9","key":"9788_CR47","doi-asserted-by":"publisher","first-page":"827","DOI":"10.1016\/S0016-3287(97)00035-9","volume":"29","author":"H Tsoukas","year":"1997","unstructured":"Tsoukas, H. (1997). The tyranny of light. Futures, 29(9), 827\u2013843. https:\/\/doi.org\/10.1016\/S0016-3287(97)00035-9","journal-title":"Futures"},{"issue":"2","key":"9788_CR48","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1007\/s10676-009-9187-9","volume":"11","author":"M Turilli","year":"2009","unstructured":"Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105\u2013112. 
https:\/\/doi.org\/10.1007\/s10676-009-9187-9","journal-title":"Ethics and Information Technology"},{"key":"9788_CR49","doi-asserted-by":"crossref","unstructured":"United Nations Activities on Artificial Intelligence (AI). (2021).","DOI":"10.1201\/9781003175865-5"},{"key":"9788_CR50","doi-asserted-by":"publisher","unstructured":"Wallach, W., & Vallor, S. (2020). Moral machines: From Value Alignment to Embodied Virtue. In S. M. Liao (Ed.), Ethics of Artificial Intelligence (1st ed., pp. 383\u2013412). Oxford University PressNew York. https:\/\/doi.org\/10.1093\/oso\/9780190905033.003.0014","DOI":"10.1093\/oso\/9780190905033.003.0014"},{"issue":"2","key":"9788_CR51","doi-asserted-by":"publisher","first-page":"585","DOI":"10.1007\/s00146-020-01066-z","volume":"36","author":"J Walmsley","year":"2021","unstructured":"Walmsley, J. (2021). Artificial intelligence and the value of transparency. AI & SOCIETY, 36(2), 585\u2013595. https:\/\/doi.org\/10.1007\/s00146-020-01066-z","journal-title":"AI & SOCIETY"}],"container-title":["Ethics and Information 
Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09788-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-024-09788-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09788-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T17:05:30Z","timestamp":1729184730000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-024-09788-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,3]]},"references-count":51,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,9]]}},"alternative-id":["9788"],"URL":"https:\/\/doi.org\/10.1007\/s10676-024-09788-0","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,8,3]]},"assertion":[{"value":"25 July 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 August 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 October 2024","order":3,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Update","order":4,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Minor typo in abstract was corrected","order":5,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"49"}}