{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T07:43:40Z","timestamp":1774511020379,"version":"3.50.1"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T00:00:00Z","timestamp":1764720000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T00:00:00Z","timestamp":1764720000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005722","name":"Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100005722","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2026,3]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Some authors argue that responsibility gaps can open up when no one has sufficient control over negative outcomes. With recent developments in Artificial Intelligence, the responsibility gap is thought to have grown since AI technologies can produce negative outcomes over which people do not have sufficient control. This paper aims to close the responsibility gap by recommending allocating responsibility according to a strategy constituted by two conditions: in scenarios where no one seems to be responsible for negative outcomes, responsibility should be primarily allocated to people who have (1) intentionally and voluntarily exercised prerequisite control that causally makes the current situation uncontrolled, and (2) expected personal benefits when exercising the prerequisite control. 
Theoretically speaking, every case of the responsibility gap contains at least one agent meeting these two conditions, and the responsibility gap could thus be closed.<\/jats:p>","DOI":"10.1007\/s10676-025-09883-w","type":"journal-article","created":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T04:25:03Z","timestamp":1764735903000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Closing the responsibility gap: allocating responsibility according to prerequisite control and expectations for personal benefits"],"prefix":"10.1007","volume":"28","author":[{"given":"Dilin","family":"Gong","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,3]]},"reference":[{"key":"9883_CR1","doi-asserted-by":"crossref","unstructured":"Albareda, J. L. (2024). Uncovering the Gap: Challenging the Agential Nature of AI Responsibility Problems. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society (Vol. 7, pp. 878\u2013878).","DOI":"10.1609\/aies.v7i1.31688"},{"key":"9883_CR2","doi-asserted-by":"publisher","first-page":"187","DOI":"10.1007\/s43154-020-00024-3","volume":"1","author":"D Amoroso","year":"2020","unstructured":"Amoroso, D., & Tamburrini, G. (2020). Autonomous weapons systems and meaningful human control: Ethical and legal issues. Current Robotics Reports, 1, 187\u2013194.","journal-title":"Current Robotics Reports"},{"key":"9883_CR3","doi-asserted-by":"publisher","first-page":"777","DOI":"10.1007\/s10677-012-9387-y","volume":"16","author":"D Archard","year":"2013","unstructured":"Archard, D. (2013). Dirty hands and the complicity of the democratic public. Ethical Theory and Moral Practice, 16, 777\u2013790.","journal-title":"Ethical Theory and Moral Practice"},{"key":"9883_CR4","unstructured":"Arvan, M. (2022). 
Varieties of Artificial Moral Agency and the New Control Problem."},{"key":"9883_CR5","doi-asserted-by":"publisher","first-page":"125","DOI":"10.1007\/s13347-013-0138-3","volume":"28","author":"M Champagne","year":"2015","unstructured":"Champagne, M., & Tonkens, R. (2015). Bridging the responsibility gap in automated warfare. Philosophy & Technology, 28, 125\u2013137.","journal-title":"Philosophy & Technology"},{"issue":"4","key":"9883_CR6","doi-asserted-by":"publisher","first-page":"299","DOI":"10.1007\/s10676-016-9403-3","volume":"18","author":"J Danaher","year":"2016","unstructured":"Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299\u2013309.","journal-title":"Ethics and Information Technology"},{"issue":"3","key":"9883_CR7","doi-asserted-by":"publisher","first-page":"227","DOI":"10.1007\/s43681-020-00028-x","volume":"1","author":"J Danaher","year":"2021","unstructured":"Danaher, J., & Nyholm, S. (2021). Automation, work and the achievement gap. AI and Ethics, 1(3), 227\u2013237.","journal-title":"AI and Ethics"},{"issue":"4","key":"9883_CR8","doi-asserted-by":"publisher","first-page":"1057","DOI":"10.1007\/s13347-021-00450-x","volume":"34","author":"F Santoni de Sio","year":"2021","unstructured":"Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057\u20131084.","journal-title":"Philosophy & Technology"},{"key":"9883_CR9","doi-asserted-by":"crossref","unstructured":"Di Nucci, E. (2020). The control paradox: From AI to populism. Rowman & Littlefield.","DOI":"10.5040\/9798881816193"},{"key":"9883_CR10","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.744590","volume":"8","author":"\u00c1E Eiben","year":"2021","unstructured":"Eiben, \u00c1. E., Ellers, J., Meynen, G., & Nyholm, S. (2021). Robot evolution: Ethical concerns. 
Frontiers in Robotics and AI, 8, Article 744590.","journal-title":"Frontiers in Robotics and AI"},{"issue":"4","key":"9883_CR11","doi-asserted-by":"publisher","first-page":"307","DOI":"10.1007\/s10676-017-9428-2","volume":"22","author":"DJ Gunkel","year":"2020","unstructured":"Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22(4), 307\u2013320.","journal-title":"Ethics and Information Technology"},{"issue":"1","key":"9883_CR12","doi-asserted-by":"publisher","DOI":"10.1007\/s11229-022-04001-5","volume":"201","author":"F Hindriks","year":"2023","unstructured":"Hindriks, F., & Veluwenkamp, H. (2023). The risks of autonomous machines: From responsibility gaps to control gaps. Synthese, 201(1), Article 21.","journal-title":"Synthese"},{"issue":"4","key":"9883_CR13","doi-asserted-by":"publisher","first-page":"385","DOI":"10.1017\/S0007123400003033","volume":"12","author":"M Hollis","year":"1982","unstructured":"Hollis, M. (1982). Dirty hands. British Journal of Political Science, 12(4), 385\u2013398.","journal-title":"British Journal of Political Science"},{"key":"9883_CR14","unstructured":"Horowitz, M. C., & Scharre, P. (2015).\u00a0Meaningful Human Control in Weapon Systems."},{"key":"9883_CR15","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1075\/nlp.8.11bry","volume":"8","author":"JJ Bryson","year":"2010","unstructured":"Bryson, J. J. (2010). Robots should be slaves. Close Engagements with Artificial Companions: Key Social Psychological Ethical and Design Issues, 8, 63\u201374.","journal-title":"Close Engagements with Artificial Companions: Key Social Psychological Ethical and Design Issues"},{"key":"9883_CR16","doi-asserted-by":"crossref","unstructured":"Joffe, S., & Truog, R. D. (2010). Consent to medical care: the importance of fiduciary context. 
The ethics of consent: theory and practice, 347(7).","DOI":"10.1093\/acprof:oso\/9780195335149.003.0014"},{"issue":"4","key":"9883_CR17","doi-asserted-by":"publisher","first-page":"575","DOI":"10.1007\/s10677-022-10313-9","volume":"25","author":"M Kiener","year":"2022","unstructured":"Kiener, M. (2022). Can we bridge AI\u2019s responsibility gap at will? Ethical Theory and Moral Practice, 25(4), 575\u2013593.","journal-title":"Ethical Theory and Moral Practice"},{"key":"9883_CR18","doi-asserted-by":"crossref","unstructured":"Kleinig, J. (2010). The nature of consent. The ethics of consent: Theory and practice, 3\u201324.","DOI":"10.1093\/acprof:oso\/9780195335149.003.0001"},{"key":"9883_CR19","doi-asserted-by":"crossref","unstructured":"K\u00f6hler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability? : Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In C. Ulbert, P. Finkenbusch, E. Sondermann, & T. Debiel (Eds.), Moral agency and the politics of responsibility (pp. 51\u201368). Routledge.","DOI":"10.4324\/9781315201399-4"},{"issue":"3","key":"9883_CR20","doi-asserted-by":"publisher","DOI":"10.1007\/s44206-023-00073-z","volume":"2","author":"BH Lang","year":"2023","unstructured":"Lang, B. H., Nyholm, S., & Blumenthal-Barby, J. (2023). Responsibility gaps and black box healthcare AI: Shared responsibilization as a solution. Digital Society, 2(3), Article 52.","journal-title":"Digital Society"},{"issue":"4","key":"9883_CR21","doi-asserted-by":"publisher","first-page":"1213","DOI":"10.1007\/s13347-021-00454-7","volume":"34","author":"C List","year":"2021","unstructured":"List, C. (2021). Group agency and artificial intelligence. Philosophy & Technology, 34(4), 1213\u20131242.","journal-title":"Philosophy & Technology"},{"key":"9883_CR22","doi-asserted-by":"publisher","first-page":"175","DOI":"10.1007\/s10676-004-3422-1","volume":"6","author":"A Matthias","year":"2004","unstructured":"Matthias, A. (2004). 
The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175\u2013183.","journal-title":"Ethics and Information Technology"},{"key":"9883_CR23","doi-asserted-by":"crossref","unstructured":"Miller, F. G. (2010). Consent to clinical research. In F. Miller, & A. Wertheimer (Eds.), The Ethics of Informed Consent (pp. 375\u2013404). Oxford University Press.","DOI":"10.1093\/acprof:oso\/9780195335149.003.0015"},{"issue":"3","key":"9883_CR24","doi-asserted-by":"publisher","first-page":"134","DOI":"10.4103\/2231-4040.116779","volume":"4","author":"LP Nijhawan","year":"2013","unstructured":"Nijhawan, L. P., Janodia, M. D., Muddukrishna, B. S., Bhat, K. M., Bairy, K. L., Udupa, N., & Musmade, P. B. (2013). Informed consent: Issues and challenges. Journal of Advanced Pharmaceutical Technology & Research, 4(3), 134\u2013140.","journal-title":"Journal of Advanced Pharmaceutical Technology & Research"},{"issue":"4","key":"9883_CR26","doi-asserted-by":"publisher","first-page":"1229","DOI":"10.1007\/s43681-022-00231-y","volume":"3","author":"S Nyholm","year":"2023","unstructured":"Nyholm, S. (2023a). A new control problem? Humanoid robots, artificial intelligence, and the value of control. AI and Ethics, 3(4), 1229\u20131239.","journal-title":"AI and Ethics"},{"key":"9883_CR27","doi-asserted-by":"publisher","first-page":"191","DOI":"10.4324\/9781003276029-14","volume-title":"Risk and responsibility in context","author":"S Nyholm","year":"2023","unstructured":"Nyholm, S. (2023b). Responsibility gaps, value alignment, and meaningful human control over artificial intelligence. In A. Placani & S. Broadhead (Eds.), Risk and responsibility in context (pp. 191\u2013213). Routledge."},{"issue":"1","key":"9883_CR28","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-024-09661-5","volume":"34","author":"S Nyholm","year":"2024","unstructured":"Nyholm, S. (2024). 
Gamification, side effects, and praise and blame for outcomes. Minds and Machines, 34(1), Article 4.","journal-title":"Minds and Machines"},{"issue":"1","key":"9883_CR29","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1007\/s10676-017-9430-8","volume":"20","author":"I Rahwan","year":"2018","unstructured":"Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5\u201314.","journal-title":"Ethics and Information Technology"},{"key":"9883_CR30","doi-asserted-by":"publisher","DOI":"10.2307\/797066","author":"PH Schuck","year":"1994","unstructured":"Schuck, P. H. (1994). Rethinking informed consent. Yale Law Journal. https:\/\/doi.org\/10.2307\/797066","journal-title":"Yale Law Journal"},{"issue":"2","key":"9883_CR31","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1111\/japp.12107","volume":"32","author":"AM Smith","year":"2015","unstructured":"Smith, A. M. (2015). Attitudes, tracing, and control. Journal of Applied Philosophy, 32(2), 115\u2013132.","journal-title":"Journal of Applied Philosophy"},{"issue":"1","key":"9883_CR32","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1111\/j.1468-5930.2007.00346.x","volume":"24","author":"R Sparrow","year":"2007","unstructured":"Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62\u201377.","journal-title":"Journal of Applied Philosophy"},{"issue":"4","key":"9883_CR33","doi-asserted-by":"publisher","first-page":"905","DOI":"10.2307\/1954312","volume":"74","author":"DF Thompson","year":"1980","unstructured":"Thompson, D. F. (1980). Moral responsibility of public officials: The problem of many hands. American Political Science Review, 74(4), 905\u2013916.","journal-title":"American Political Science Review"},{"issue":"3","key":"9883_CR34","doi-asserted-by":"publisher","first-page":"589","DOI":"10.1007\/s13347-020-00414-7","volume":"34","author":"DW Tigard","year":"2021","unstructured":"Tigard, D. W. 
(2021). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589\u2013607.","journal-title":"Philosophy & Technology"},{"issue":"1","key":"9883_CR35","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10676-025-09823-8","volume":"27","author":"H Veluwenkamp","year":"2025","unstructured":"Veluwenkamp, H. (2025). What responsibility gaps are and what they should be. Ethics and Information Technology, 27(1), 1\u201313.","journal-title":"Ethics and Information Technology"},{"issue":"1","key":"9883_CR36","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1007\/s11023-020-09532-9","volume":"31","author":"I Verdiesen","year":"2021","unstructured":"Verdiesen, I., Santoni de Sio, F., & Dignum, V. (2021). Accountability and control over autonomous weapon systems: A framework for comprehensive human oversight. Minds and Machines, 31(1), 137\u2013163.","journal-title":"Minds and Machines"},{"key":"9883_CR37","unstructured":"Walzer, M. (2006). Just and unjust wars: A moral argument with historical illustrations. 
Basic Books."}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09883-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09883-w","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09883-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T05:50:36Z","timestamp":1774504236000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09883-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,3]]},"references-count":36,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,3]]}},"alternative-id":["9883"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09883-w","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,3]]},"assertion":[{"value":"3 December 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"8"}}