{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T13:34:32Z","timestamp":1761226472928,"version":"build-2065373602"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2025,9,18]],"date-time":"2025-09-18T00:00:00Z","timestamp":1758153600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,9,18]],"date-time":"2025-09-18T00:00:00Z","timestamp":1758153600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005416","name":"Norges Forskningsr\u00e5d","doi-asserted-by":"publisher","award":["315580"],"award-info":[{"award-number":["315580"]}],"id":[{"id":"10.13039\/501100005416","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Sci Eng Ethics"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e. cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.<\/jats:p>","DOI":"10.1007\/s11948-025-00555-y","type":"journal-article","created":{"date-parts":[[2025,9,18]],"date-time":"2025-09-18T13:59:52Z","timestamp":1758203992000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["After Harm: A Plea for Moral Repair after Algorithms Have Failed"],"prefix":"10.1007","volume":"31","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8720-9912","authenticated-orcid":false,"given":"Pak-Hang","family":"Wong","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0930-8602","authenticated-orcid":false,"given":"Gernot","family":"Rieder","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,9,18]]},"reference":[{"key":"555_CR1","unstructured":"AI HLEG (EU High-Level Expert Group on AI) (2019). Ethics guidelines for trustworthy AI. Retrieved February 19, 2025, from https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai"},{"key":"555_CR2","unstructured":"AlgorithmWatch (2020). 
Automating society report 2020. Retrieved February 19, 2025, from https:\/\/automatingsociety.algorithmwatch.org\/"},{"issue":"1","key":"555_CR3","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1177\/0162243915606523","volume":"41","author":"M Ananny","year":"2016","unstructured":"Ananny, M., & Science (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Technology & Human Values, 41(1), 93\u2013117. https:\/\/doi.org\/10.1177\/0162243915606523","journal-title":"Technology &amp; Human Values"},{"key":"555_CR4","unstructured":"Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, 23 May. Retrieved February 19, 2025, from https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"555_CR5","doi-asserted-by":"publisher","first-page":"671","DOI":"10.2139\/ssrn.2477899","volume":"104","author":"S Barocas","year":"2016","unstructured":"Barocas, S., & Selbst, A. D. (2016). Big data\u2019s disparate impact. California Law Review, 104, 671\u2013732. https:\/\/doi.org\/10.2139\/ssrn.2477899","journal-title":"California Law Review"},{"key":"555_CR6","unstructured":"Berlinger, N. (2005). After harm: Medical error and the ethics of forgiveness. Johns Hopkins University."},{"key":"555_CR7","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1613\/jair.1.14263","volume":"76","author":"F Bianchi","year":"2023","unstructured":"Bianchi, F., Curry, A. C., & Hovy, D. (2023). Viewpoint: Artificial intelligence accidents waiting to happen? Journal of Artificial Intelligence Research, 76, 193\u2013199. https:\/\/doi.org\/10.1613\/jair.1.14263","journal-title":"Journal of Artificial Intelligence Research"},{"key":"555_CR8","doi-asserted-by":"publisher","first-page":"77110","DOI":"10.1109\/ACCESS.2022.3191790","volume":"10","author":"TF Blauth","year":"2022","unstructured":"Blauth, T. F., Gstrein, O. J., & Zwitter, A. (2022). Artificial intelligence crime: An overview of malicious use and abuse of AI. IEEE Access: Practical Innovations, Open Solutions, 10, 77110\u201377122. https:\/\/doi.org\/10.1109\/ACCESS.2022.3191790","journal-title":"Ieee Access: Practical Innovations, Open Solutions"},{"key":"555_CR9","doi-asserted-by":"crossref","unstructured":"Cohen, A. (2020). Apologies and moral repair. Routledge.","DOI":"10.4324\/9781003023647"},{"key":"555_CR10","doi-asserted-by":"publisher","unstructured":"Diberardino, N., Baleshta, C., & Stark, L. (2024). Algorithmic harms and algorithmic wrongs. In Proceedings of the 2024 ACM conference on fairness, accountability, and transparency (FAccT \u201824), (pp. 1725\u20131732). Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3630106.3659001","DOI":"10.1145\/3630106.3659001"},{"key":"555_CR11","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1007\/s13347-025-00843-2","volume":"38","author":"JM Dur\u00e1n","year":"2025","unstructured":"Dur\u00e1n, J. M., & Pozzi, G. (2025). Trust and trustworthiness in AI. Philosophy & Technology, 38, 16. https:\/\/doi.org\/10.1007\/s13347-025-00843-2","journal-title":"Philosophy & Technology"},{"key":"555_CR12","doi-asserted-by":"publisher","unstructured":"Ehsan, U., Singh, R., Metcalf, J., & Riedl, M. (2022). The algorithmic imprint. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (FAccT \u201822), (pp. 1305\u20131317). Association for Computing Machinery. 
https:\/\/doi.org\/10.48550\/arXiv.2206.03275","DOI":"10.48550\/arXiv.2206.03275"},{"key":"555_CR13","unstructured":"EU (European Union) (2024). The Artificial Intelligence Act - regulation 2024\/1689. Retrieved February 19, 2025, from https:\/\/eur-lex.europa.eu\/eli\/reg\/2024\/1689\/oj\/eng"},{"issue":"4","key":"555_CR14","doi-asserted-by":"publisher","first-page":"289","DOI":"10.1017\/S1352325223000198","volume":"29","author":"G Fornaroli","year":"2024","unstructured":"Fornaroli, G. (2024). Neglecting others and making it up to them: The idea of a corrective duty. Legal Theory, 29(4), 289\u2013313. https:\/\/doi.org\/10.1017\/S1352325223000198","journal-title":"Legal Theory"},{"issue":"2","key":"555_CR15","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1109\/mts.2022.3173342","volume":"41","author":"A Gardner","year":"2022","unstructured":"Gardner, A. (2022). Responsibility, recourse, and redress: A focus on the three R\u2019s of AI ethics. IEEE Technology and Society Magazine, 41(2), 84\u201389. https:\/\/doi.org\/10.1109\/mts.2022.3173342","journal-title":"IEEE Technology and Society Magazine"},{"key":"555_CR16","doi-asserted-by":"publisher","unstructured":"Johnson, N., Moharana, S., Harrington, C., Andalibi, N., Heidari, H., & Eslami, M. (2024). The fall of an algorithm: Characterizing the dynamics toward abandonment. In Proceedings of the 2024 ACM conference on fairness, accountability, and transparency (FAccT \u201824), (pp. 337\u2013358). Association for Computing Machinery. https:\/\/doi.org\/10.48550\/arXiv.2404.13802","DOI":"10.48550\/arXiv.2404.13802"},{"key":"555_CR17","doi-asserted-by":"publisher","first-page":"175","DOI":"10.1007\/s10676-004-3422-1","volume":"6","author":"A Matthias","year":"2004","unstructured":"Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175\u2013183. https:\/\/doi.org\/10.1007\/s10676-004-3422-1","journal-title":"Ethics and Information Technology"},{"key":"555_CR18","unstructured":"Mayer-Sch\u00f6nberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt."},{"key":"555_CR19","doi-asserted-by":"publisher","unstructured":"McGregor, S. (2021). Preventing repeated real world ai failures by cataloging incidents: The AI incident database. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15458\u201315463. Held virtually. https:\/\/doi.org\/10.48550\/arXiv.2011.08512","DOI":"10.48550\/arXiv.2011.08512"},{"issue":"3\u20134","key":"555_CR20","doi-asserted-by":"publisher","first-page":"429","DOI":"10.1007\/s12130-010-9124-6","volume":"23","author":"PJ Nickel","year":"2010","unstructured":"Nickel, P. J., Franssen, M., & Kroes, P. (2010). Can we make sense of the notion of trustworthy technology? Knowledge. Technology & Policy, 23(3\u20134), 429\u2013444. https:\/\/doi.org\/10.1007\/s12130-010-9124-6","journal-title":"Technology &amp; Policy"},{"key":"555_CR21","doi-asserted-by":"crossref","unstructured":"NIST (National Institute of Standards and Technology) (2023). Artificial intelligence risk management framework (AI RMF 1.0). Retrieved February 19, 2025, from https:\/\/www.nist.gov\/itl\/ai-risk-management-framework","DOI":"10.6028\/NIST.AI.100-1.jpn"},{"key":"555_CR22","unstructured":"Perrow, C. (1984). Normal accidents. 
Princeton University Press."},{"key":"555_CR23","doi-asserted-by":"publisher","first-page":"735","DOI":"10.1007\/s43681-022-00200-5","volume":"3","author":"K Reinhardt","year":"2023","unstructured":"Reinhardt, K. (2023). Trust and trustworthiness in AI ethics. AI and Ethics, 3, 735\u2013744. https:\/\/doi.org\/10.1007\/s43681-022-00200-5","journal-title":"AI and Ethics"},{"issue":"4","key":"555_CR24","doi-asserted-by":"publisher","first-page":"1057","DOI":"10.1007\/s13347-021-00450-x","volume":"34","author":"F Santoni de Sio","year":"2021","unstructured":"Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057\u20131084. https:\/\/doi.org\/10.1007\/s13347-021-00450-x","journal-title":"Philosophy & Technology"},{"key":"555_CR25","doi-asserted-by":"publisher","unstructured":"Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N. M., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. In Proceedings of the 2023 AAAI\/ACM conference on AI, ethics, and society (AIES \u201823), (pp. 723\u2013741). Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3600211.360467","DOI":"10.1145\/3600211.360467"},{"key":"555_CR35","doi-asserted-by":"publisher","unstructured":"Simon, J., Wong, P.H., & Rieder, G. (2020) Algorithmic bias and the value sensitive design approach. Internet Policy Review, 9(4) 1\u201316. https:\/\/doi.org\/10.14763\/2020.4.1534","DOI":"10.14763\/2020.4.1534"},{"key":"555_CR36","doi-asserted-by":"crossref","unstructured":"Rieder, G., Simon, J., & Wong, P. H. (2021) Mapping the stony road towards trustworthy AI: Expectations, problems, conundrums. In M. Pelillo, & T. Scantamburlo (Eds.), Machines we trust: Perspectives on dependable AI (pp. 27\u201340). MIT Press","DOI":"10.7551\/mitpress\/12186.003.0007"},{"key":"555_CR26","doi-asserted-by":"publisher","first-page":"102","DOI":"10.1111\/theo.12177","volume":"85","author":"J Tallant","year":"2019","unstructured":"Tallant, J. (2019). You can trust the ladder, but you shouldn\u2019t. Theoria, 85, 102\u2013118. https:\/\/doi.org\/10.1111\/theo.12177","journal-title":"Theoria"},{"key":"555_CR27","doi-asserted-by":"publisher","first-page":"253","DOI":"10.1007\/s43681-023-00327-z","volume":"5","author":"A Tartaro","year":"2023","unstructured":"Tartaro, A. (2023). When things go wrong: The recall of AI systems as a last resort for ethical and lawful AI. AI and Ethics, 5, 253\u2013262. https:\/\/doi.org\/10.1007\/s43681-023-00327-z","journal-title":"AI and Ethics"},{"key":"555_CR28","unstructured":"UNESCO (United Nations Educational, Scientific and Cultural Organization) (2022). Recommendation on the ethics of artificial intelligence. Retrieved February 19, 2025, from https:\/\/unesdoc.unesco.org\/ark:\/48223\/pf0000381137"},{"key":"555_CR29","doi-asserted-by":"publisher","unstructured":"Vallor, S., & Ganesh, B. (2023). Artificial intelligence and the imperative of responsibility: Reconceiving AI governance as social care. In M. Kiener (Ed.), The Routledge handbook of philosophy of responsibility (pp. 395\u2013406). Routledge. 
https:\/\/doi.org\/10.4324\/9781003282242-43","DOI":"10.4324\/9781003282242-43"},{"issue":"4","key":"555_CR30","doi-asserted-by":"publisher","first-page":"732","DOI":"10.1017\/beq.2022.6","volume":"33","author":"J Vives-Gabriel","year":"2023","unstructured":"Vives-Gabriel, J., Van Lent, W., & Wettstein, F. (2023). Moral repair: Toward a two-level conceptualization. Business Ethics Quarterly, 33(4), 732\u2013762.","journal-title":"Business Ethics Quarterly"},{"key":"555_CR31","doi-asserted-by":"crossref","unstructured":"Walker, M. U. (2006). Moral repair. Cambridge University Press.","DOI":"10.1017\/CBO9780511618024"},{"key":"555_CR32","unstructured":"Weale, S., & Stewart, H. (2020, August 17). A-level and GCSE results in England to be based on teacher assessments in u-turn. The Guardian. Retrieved February 19, 2025, from https:\/\/www.theguardian.com\/education\/2020\/aug\/17\/a-levels-gcse-results-england-based-teacher-assessments-government-u-turn"},{"key":"555_CR33","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1613\/jair.1.13196","volume":"74","author":"L Weinberg","year":"2022","unstructured":"Weinberg, L. (2022). Rethinking fairness: An interdisciplinary survey of critiques of hegemonic ML fairness approaches. Journal of Artificial Intelligence Research, 74, 75\u2013109. https:\/\/doi.org\/10.1613\/jair.1.13196","journal-title":"Journal of Artificial Intelligence Research"},{"key":"555_CR34","doi-asserted-by":"crossref","unstructured":"Zweig, K. (2022). Awkward intelligence: Where AI goes wrong, why it matters, and what we can do about it. MIT Press.","DOI":"10.7551\/mitpress\/13915.001.0001"}],"container-title":["Science and Engineering Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-025-00555-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11948-025-00555-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-025-00555-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,23]],"date-time":"2025-10-23T13:11:48Z","timestamp":1761225108000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11948-025-00555-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,18]]},"references-count":36,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2025,10]]}},"alternative-id":["555"],"URL":"https:\/\/doi.org\/10.1007\/s11948-025-00555-y","relation":{},"ISSN":["1471-5546"],"issn-type":[{"type":"electronic","value":"1471-5546"}],"subject":[],"published":{"date-parts":[[2025,9,18]]},"assertion":[{"value":"19 April 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 August 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 September 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The corresponding author declares that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing 
interests"}}],"article-number":"26"}}