{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T09:25:27Z","timestamp":1766568327292,"version":"3.48.0"},"reference-count":40,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T00:00:00Z","timestamp":1764115200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T00:00:00Z","timestamp":1764115200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001734","name":"Copenhagen University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100001734","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Sci Eng Ethics"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Should developers be held responsible for the predictions of their neural networks\u2014and if not, does that introduce a responsibility gap? The claim that neural networks introduce a responsibility gap has seen significant pushback, with philosophers arguing that the gap can be bridged, or did not exist in the first place. We show how the responsibility gap turns on\n                    <jats:italic>whether we can distinguish between foreseeable and unforeseeable<\/jats:italic>\n                    neural network predictions. Empirical facts about neural networks tell us we cannot, which seems to force developers to either assume full responsibility or no responsibility at all, introducing a responsibility gap\u2014unless, of course, the same empirical facts hold true of humans, in which case there is no gap, but the trouble is simply with the classical notion of responsibility. 
We revisit and revise Mele\u2019s Zygote, as well as the famous Palsgraf case, and argue that in fact, what complicates responsibility assignment for neural networks also complicates responsibility assignment for humans, and humans seem to confront us with the same all-or-nothing dilemma. Thus, we agree there is no technology-induced responsibility gap (there was no gap in the first place), but for slightly different reasons than our predecessors.\n                  <\/jats:p>","DOI":"10.1007\/s11948-025-00566-9","type":"journal-article","created":{"date-parts":[[2025,11,26]],"date-time":"2025-11-26T08:33:12Z","timestamp":1764145992000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Mele\u2019s Digital Zygote: Developer Responsibility for Neural Networks"],"prefix":"10.1007","volume":"31","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5250-4276","authenticated-orcid":false,"given":"Anders","family":"S\u00f8gaard","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0517-8428","authenticated-orcid":false,"given":"Filippos","family":"Stamatiou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,11,26]]},"reference":[{"key":"566_CR1","unstructured":"Albareda, J. L. (2025). Uncovering the gap: Challenging the agential nature of AI responsibility problems. AI and Ethics, 1\u201314."},{"key":"566_CR2","volume-title":"Free will and luck","author":"A. R. Mele","year":"2006","unstructured":"Mele, A. R. (2006). Free will and luck. Oxford University Press."},{"issue":"9","key":"566_CR3","doi-asserted-by":"publisher","first-page":"694","DOI":"10.1016\/j.tics.2020.06.008","volume":"24","author":"R. A. Anderson","year":"2020","unstructured":"Anderson, R. A., Crockett, M. J., & Pizarro, D. A. (2020). A theory of moral praise. 
Trends in Cognitive Sciences, 24(9), 694\u2013703.","journal-title":"Trends in Cognitive Sciences"},{"key":"566_CR5","volume-title":"Advances in neural information processing systems","author":"P. Auer","year":"1995","unstructured":"Auer, P., Herbster, M., & Warmuth, M. K. K. (1995). Exponentially many local minima for single neurons. In D. Touretzky, M. C. Mozer, & M. Hasselmo (Eds.), Advances in neural information processing systems (Vol. 8). MIT Press."},{"key":"566_CR6","doi-asserted-by":"crossref","unstructured":"Burrell, J. (2016). How the machine thinks: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1).","DOI":"10.1177\/2053951715622512"},{"issue":"8","key":"566_CR7","doi-asserted-by":"publisher","first-page":"2057","DOI":"10.1093\/molbev\/msx161","volume":"34","author":"Y. Chen","year":"2017","unstructured":"Chen, Y., Tong, D., & Wu, C.-I. (2017). A new formulation of random genetic drift and its application to the evolution of cell populations. Molecular Biology and Evolution, 34(8), 2057\u20132064. https:\/\/academic.oup.com\/mbe\/articlepdf\/34\/8\/2057\/19418175\/msx161.pdf.","journal-title":"Molecular Biology and Evolution"},{"key":"566_CR8","doi-asserted-by":"crossref","unstructured":"Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051\u20132068.","DOI":"10.1007\/s11948-019-00146-8"},{"key":"566_CR11","first-page":"338","volume-title":"Responsibility, character, and the emotions: New essays in moral psychology","author":"G. Dworkin","year":"1987","unstructured":"Dworkin, G. (1987). Intention, foreseeability, and responsibility. In F. Schoeman (Ed.), Responsibility, character, and the emotions: New essays in moral psychology (pp. 338\u2013354)."},{"key":"566_CR12","doi-asserted-by":"publisher","first-page":"267","DOI":"10.1093\/analys\/anr008","volume":"71","author":"J. M. 
Fischer","year":"2011","unstructured":"Fischer, J. M. (2011). The zygote argument remixed. Analysis, 71, 267\u2013272.","journal-title":"Analysis"},{"key":"566_CR4","doi-asserted-by":"crossref","unstructured":"Goodhart, A. L. (1930). The unforeseeable consequences of a negligent act. The Yale Law Journal, 39(4), 449\u2013467.","DOI":"10.2307\/789964"},{"key":"566_CR13","volume-title":"Rejecting retributivism: Free will, punishment, and criminal justice","author":"G. D. Caruso","year":"2021","unstructured":"Caruso, G. D. (2021). Rejecting retributivism: Free will, punishment, and criminal justice. Cambridge University Press."},{"key":"566_CR14","doi-asserted-by":"publisher","unstructured":"Hedlund, M., & Persson, E. (2022). Expert responsibility in AI development. AI & Society, 1\u201312. https:\/\/doi.org\/10.1007\/s00146-022-01498-9","DOI":"10.1007\/s00146-022-01498-9"},{"issue":"233","key":"566_CR15","doi-asserted-by":"publisher","first-page":"630","DOI":"10.1111\/j.1467-9213.2007.551.x","volume":"58","author":"F. Hindriks","year":"2008","unstructured":"Hindriks, F. (2008). Intentional action and the praise-blame asymmetry. Philosophical Quarterly, 58(233), 630\u2013641.","journal-title":"Philosophical Quarterly"},{"issue":"1","key":"566_CR16","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/s11229-022-04001-5","volume":"201","author":"F. Hindriks","year":"2023","unstructured":"Hindriks, F., & Veluwenkamp, H. (2023). The risks of autonomous machines: From responsibility gaps to control gaps. Synthese, 201(1), 21.","journal-title":"Synthese"},{"key":"566_CR17","doi-asserted-by":"crossref","unstructured":"Kiener, M. (2025). AI and responsibility: No gap, but abundance. Journal of Applied Philosophy, 42(1), 357\u2013374.","DOI":"10.1111\/japp.12765"},{"issue":"3","key":"566_CR18","doi-asserted-by":"publisher","first-page":"36","DOI":"10.1007\/s10676-022-09643-0","volume":"24","author":"P. K\u00f8nigs","year":"2022","unstructured":"K\u00f8nigs, P. (2022). 
Artificial intelligence and responsibility gaps: What is the problem? Ethics and Information Technology, 24(3), 36.","journal-title":"Ethics and Information Technology"},{"key":"566_CR19","volume-title":"The Oxford handbook of AI governance","author":"T. Lechterman","year":"2023","unstructured":"Lechterman, T. (2023). The concept of accountability in AI ethics and governance. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford handbook of AI governance. Oxford University Press."},{"key":"566_CR20","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:oso\/9780198704638.001.0001","volume-title":"Consciousness and moral responsibility","author":"N. Levy","year":"2014","unstructured":"Levy, N. (2014). Consciousness and moral responsibility. Oxford University Press."},{"issue":"3","key":"566_CR40","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1145\/3236386.3241340","volume":"16","author":"Z. C. Lipton","year":"2018","unstructured":"Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31\u201357.","journal-title":"Queue"},{"issue":"5","key":"566_CR21","doi-asserted-by":"publisher","first-page":"472","DOI":"10.1038\/s42256-023-00653-1","volume":"5","author":"S. P. Mann","year":"2023","unstructured":"Mann, S. P., Earp, B. D., Nyholm, S., Danaher, J., M\u00f8ller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D., Treit, P. V., Renard, G., McMillan, J., & Savulescu, J. (2023). Generative AI entails a credit\u2013blame asymmetry. Nature Machine Intelligence, 5(5), 472\u2013475.","journal-title":"Nature Machine Intelligence"},{"key":"566_CR22","doi-asserted-by":"publisher","first-page":"313","DOI":"10.1348\/014466600164499","volume":"39","author":"K. Markman","year":"2000","unstructured":"Markman, K., & Tetlock, P. (2000). 
\u201cI couldn\u2019t have known\u201d: Accountability, foreseeability, and counterfactual denials of responsibility. British Journal of Social Psychology, 39, 313\u2013325.","journal-title":"British Journal of Social Psychology"},{"issue":"3","key":"566_CR23","doi-asserted-by":"publisher","first-page":"175","DOI":"10.1007\/s10676-004-3422-1","volume":"6","author":"A. Matthias","year":"2004","unstructured":"Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175\u2013183. https:\/\/doi.org\/10.1007\/s10676-004-3422-1","journal-title":"Ethics and Information Technology"},{"issue":"6","key":"566_CR9","doi-asserted-by":"publisher","first-page":"1561","DOI":"10.1007\/s11098-016-0772-6","volume":"174","author":"D. Miller","year":"2017","unstructured":"Miller, D. (2017). Reasonable foreseeability and blameless ignorance. Philosophical Studies, 174(6), 1561\u20131581. https:\/\/doi.org\/10.1007\/s11098-016-0772-6","journal-title":"Philosophical Studies"},{"issue":"2","key":"566_CR24","doi-asserted-by":"publisher","first-page":"1601","DOI":"10.1007\/s43681-024-00503-9","volume":"5","author":"S. MirzaeiGhazi","year":"2025","unstructured":"MirzaeiGhazi, S., & Stenseke, J. (2025). Responsibility before freedom: Closing the responsibility gaps for autonomous machines. AI and Ethics, 5(2), 1601\u20131613.","journal-title":"AI and Ethics"},{"key":"566_CR25","doi-asserted-by":"crossref","unstructured":"Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human\u2013robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201\u20131219.","DOI":"10.1007\/s11948-017-9943-x"},{"key":"566_CR26","doi-asserted-by":"publisher","first-page":"191","DOI":"10.4324\/9781003276029-14","volume-title":"Risk and responsibility in context.","author":"S. Nyholm","year":"2023","unstructured":"Nyholm, S. (2023). 
Responsibility gaps, value alignment, and meaningful human control over artificial intelligence. In Risk and responsibility in context (pp. 191\u2013213). Routledge."},{"key":"566_CR27","doi-asserted-by":"crossref","unstructured":"Nyholm, S. (2025). The future of human responsibility: AI, responsibility gaps, and asymmetries between praise and blame. A Companion to Applied Philosophy of AI, 399\u2013414.","DOI":"10.1002\/9781394238651.ch28"},{"issue":"1","key":"566_CR28","doi-asserted-by":"publisher","first-page":"337","DOI":"10.1111\/japp.12763","volume":"42","author":"A.-K. Oimann","year":"2025","unstructured":"Oimann, A.-K., & Tollon, F. (2025). Responsibility gaps and technology: Old wine in new bottles? Journal of Applied Philosophy, 42(1), 337\u2013356.","journal-title":"Journal of Applied Philosophy"},{"key":"566_CR29","doi-asserted-by":"crossref","unstructured":"Raz, J. (2010). Responsibility and the negligence standard. Oxford Journal of Legal Studies, 30(1), 1\u201318.","DOI":"10.1093\/ojls\/gqq002"},{"key":"566_CR30","doi-asserted-by":"publisher","unstructured":"Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1. https:\/\/doi.org\/10.1038\/s42256-019-0048-x","DOI":"10.1038\/s42256-019-0048-x"},{"key":"566_CR31","first-page":"323","volume":"53","author":"A. Sarch","year":"2019","unstructured":"Sarch, A., & Abbott, R. (2019). Punishing artificial intelligence: Legal fiction or science fiction. UC Davis Law Review, 53, 323\u2013384.","journal-title":"UC Davis Law Review"},{"issue":"1","key":"566_CR32","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1111\/j.1468-5930.2007.00346.x","volume":"24","author":"R. Sparrow","year":"2007","unstructured":"Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62\u201377. 
https:\/\/doi.org\/10.1111\/j.1468-5930.2007.00346.x","journal-title":"Journal of Applied Philosophy"},{"key":"566_CR33","doi-asserted-by":"crossref","unstructured":"Strasser, A. (2021). Distributed responsibility in human\u2013machine interactions. AI and Ethics.","DOI":"10.1007\/s43681-021-00109-5"},{"key":"566_CR34","doi-asserted-by":"crossref","unstructured":"Tigard, D. W. (2021a). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30(3), 435\u2013447.","DOI":"10.1017\/S0963180120000985"},{"issue":"3","key":"566_CR10","doi-asserted-by":"publisher","first-page":"589","DOI":"10.1007\/s13347-020-00414-7","volume":"34","author":"D. W. Tigard","year":"2021","unstructured":"Tigard, D. W. (2021b). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589\u2013607. https:\/\/doi.org\/10.1007\/s13347-020-00414-7","journal-title":"Philosophy & Technology"},{"key":"566_CR35","doi-asserted-by":"crossref","unstructured":"Tollon, F. (2023). Responsibility gaps and the reactive attitudes. AI and Ethics, 3(1), 295\u2013302.","DOI":"10.1007\/s43681-022-00172-6"},{"issue":"3","key":"566_CR36","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1007\/s11023-024-09674-0","volume":"34","author":"S. Vallor","year":"2024","unstructured":"Vallor, S., & Vierkant, T. (2024). Find the gap: AI, responsible agency and vulnerability. Minds and Machines, 34(3), 20.","journal-title":"Minds and Machines"},{"key":"566_CR37","doi-asserted-by":"crossref","unstructured":"Veluwenkamp, H. (2025). What responsibility gaps are and what they should be. Ethics and Information Technology, 27(1), 1\u201313.","DOI":"10.1007\/s10676-025-09823-8"},{"key":"566_CR38","doi-asserted-by":"crossref","unstructured":"Veluwenkamp, H., & Hindriks, F. (2024). Artificial agents: Responsibility & control gaps. 
Inquiry, 1\u201325.","DOI":"10.1080\/0020174X.2024.2410995"},{"key":"566_CR39","unstructured":"Wolf, Y., Wies, N., Avnery, O., Levine, Y., & Shashua, A. (2023). Fundamental limitations of alignment in large language models. arXiv:2304.11082 [cs.CL]."}],"container-title":["Science and Engineering Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-025-00566-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11948-025-00566-9","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-025-00566-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T09:24:32Z","timestamp":1766568272000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11948-025-00566-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,26]]},"references-count":40,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["566"],"URL":"https:\/\/doi.org\/10.1007\/s11948-025-00566-9","relation":{},"ISSN":["1471-5546"],"issn-type":[{"type":"electronic","value":"1471-5546"}],"subject":[],"published":{"date-parts":[[2025,11,26]]},"assertion":[{"value":"15 February 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 October 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 November 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors have no conflicts of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}}],"article-number":"40"}}