{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:38:39Z","timestamp":1763782719823,"version":"3.45.0"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100019180","name":"HORIZON EUROPE European Research Council","doi-asserted-by":"publisher","award":["852123"],"award-info":[{"award-number":["852123"]}],"id":[{"id":"10.13039\/100019180","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The use of AI technologies in weapons systems has triggered a decade-long international debate, especially with regard to human control, responsibility, and accountability around autonomous and intelligent systems (AIS) in defence. However, most of these ethical and legal discussions have revolved primarily around the point of use of a hypothetical AIS, and in doing so, one critical component still remains under-appreciated: human decision-making across the full timeline of the AIS lifecycle. 
When discussions around human involvement start at the point at which a hypothetical AIS has taken some undesirable action, they typically prompt the question: \u201cwhat happens next?\u201d This approach primarily concerns the technology at the time of use and may be appropriate for conventional weapons systems, for which humans have clear lines of control and therefore accountability at the time of use. However, this is not precisely the case for AIS. Rather than focusing first on the system in its comparatively most autonomous state, it is more helpful to consider when, along the lifecycle, humans have clearer, more direct control over the system (e.g. through research, design, testing, or procurement) and how, at those earlier times, human decision-makers can take steps to decrease the likelihood that an AIS will perform \u2018inappropriately\u2019 or take incorrect actions. In this paper, we therefore argue that addressing many of these concerns requires a shift in how and when participants in the international debate on AI in the military domain think about, talk about, and plan for human involvement across the full lifecycle of AIS in defence. This shift includes a willingness to hold human decision-makers accountable, even if their roles occurred at much earlier stages of the lifecycle. 
Of course, this raises another question: \u201cHow?\u201d We close by formulating a number of recommendations, including the adoption of the IEEE-SA Lifecycle Framework, the consideration of policy knots, and the adoption of Human Readiness Levels.<\/jats:p>","DOI":"10.1007\/s10676-025-09862-1","type":"journal-article","created":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T04:40:54Z","timestamp":1759725654000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Establishing human responsibility and accountability at early stages of the lifecycle for AI-based defence systems"],"prefix":"10.1007","volume":"27","author":[{"given":"Ariel","family":"Conn","sequence":"first","affiliation":[]},{"given":"Ingvild","family":"Bode","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,6]]},"reference":[{"key":"9862_CR1","unstructured":"Article 36 Legal (2025). Lawful By Design Initiative. https:\/\/www.article36legal.com\/lawful-by-design"},{"key":"9862_CR2","doi-asserted-by":"publisher","unstructured":"Blanchard, A., & Bruun, L. (2024, December). Bias in military artificial intelligence. Stockholm International Peace Research Institute. https:\/\/doi.org\/10.55163\/CJFT9557","DOI":"10.55163\/CJFT9557"},{"key":"9862_CR3","doi-asserted-by":"publisher","unstructured":"Blanchard, A., Thomas, C., & Taddeo, M. (2024). Ethical governance of artificial intelligence for defence: Normative tradeoffs for principle to practice guidance. AI & SOCIETY. https:\/\/doi.org\/10.1007\/s00146-024-01866-7. https:\/\/link.springer.com\/","DOI":"10.1007\/s00146-024-01866-7"},{"key":"9862_CR4","unstructured":"Bloch, E., Garcia, D., Conn, A., Garcia, D., Gill, A., Llorens, A., Noorma, M., & Roff, H. (2022). Ethical and technical challenges in the development, use, and governance of autonomous weapon systems. 
By an independent group of experts convened by the IEEE Standards Association. https:\/\/standards.ieee.org\/wp-content\/uploads\/import\/documents\/other\/ethical-technical-challenges-autonomous-weapons-systems.pdf"},{"key":"9862_CR5","doi-asserted-by":"crossref","unstructured":"Bo, M., Bruun, L., & Boulanin, V. (2022). Retaining human responsibility in the development and use of autonomous weapon Systems. On accountability for violations of humanitarian law involving AWS. Stockholm International Peace Research Institute.","DOI":"10.55163\/AHBC1664"},{"issue":"4","key":"9862_CR6","doi-asserted-by":"publisher","first-page":"990","DOI":"10.1177\/13540661231163392","volume":"29","author":"I Bode","year":"2023","unstructured":"Bode, I. (2023). Practice-based and public-deliberative normativity: Retaining human control over the use of force. European Journal of International Relations, 29(4), 990\u20131016. https:\/\/doi.org\/10.1177\/13540661231163392","journal-title":"European Journal of International Relations"},{"issue":"1","key":"9862_CR7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1093\/isagsq\/ksad073","volume":"4","author":"I Bode","year":"2024","unstructured":"Bode, I. (2024). Emergent normativity: Communities of practice, technology, and lethal autonomous weapon systems. Global Studies Quarterly, 4(1), 1\u201311. https:\/\/doi.org\/10.1093\/isagsq\/ksad073","journal-title":"Global Studies Quarterly"},{"key":"9862_CR49","unstructured":"Bode, I. (2025). Emerging norms around military applications of AI: The case of human control (GC REAIM Expert Policy Note Series). Global Commission on Responsible AI in the Military Domain."},{"key":"9862_CR8","doi-asserted-by":"crossref","unstructured":"Bode, I., & Huelss, H. (2022). Autonomous weapon systems and international norms. McGill-Queen\u2019s University.","DOI":"10.1515\/9780228009245"},{"key":"9862_CR9","unstructured":"Bode, I., & Nadibaidze, A. (2024, April 4). 
Human-Machine Interaction in the Military Domain and the Responsible AI Framework. Opinio Juris. https:\/\/opiniojuris.org\/2024\/04\/04\/symposium-on-military-ai-and-the-law-of-armed-conflict-human-machine-interaction-in-the-military-domain-and-the-responsible-ai-framework\/"},{"key":"9862_CR10","unstructured":"Bode, I., & Watts, T. (2021). Meaning-less human Control. The consequences of automation and autonomy in air defence systems. Drone Wars UK & Center for War Studies."},{"key":"9862_CR11","unstructured":"Boulanin, V., & Verbruggen, M. (2017). Mapping the development of autonomy in weapon systems. SIPRI."},{"key":"9862_CR44","doi-asserted-by":"crossref","unstructured":"Boutin, B., & Woodcock, T. (2024). Aspects of realizing (meaningful) human control: A legal perspective. In R. Gei\u00df, & H. Lahmann (Eds.), Research handbook on warfare and artificial intelligence (pp. 179\u2013196). Edward Elgar Publishing.","DOI":"10.4337\/9781800377400.00016"},{"key":"9862_CR12","doi-asserted-by":"crossref","unstructured":"Bruun, L., & Bo, M. (2025). Bias in military artificial intelligence and compliance with international humanitarian law. Stockholm International Peace Research Institute.","DOI":"10.55163\/NLWV5347"},{"key":"9862_CR13","doi-asserted-by":"crossref","unstructured":"Bruun, L., Bo, M., & Goussac, N. (2023). Compliance with International Humanitarian Law in the Development and Use of Autonomous Weapon Systems. What Does IHL Permit, Prohibit and Require? SIPRI. https:\/\/www.sipri.org\/sites\/default\/files\/2023-03\/ihl_and_aws.pdf","DOI":"10.55163\/DFXR3984"},{"key":"9862_CR14","doi-asserted-by":"crossref","unstructured":"Castelvecchi, D. (2016, October 5). Can we open the black box of AI? Nature. https:\/\/www.nature.com\/news\/can-we-open-the-black-box-of-ai-1.20731","DOI":"10.1038\/538020a"},{"key":"9862_CR45","unstructured":"Chengeta, T. (2016). 
Are autonomous weapon systems the subject of Article 36 of Additional Protocol I to the Geneva Conventions. U.C. Davis Journal of International Law & Policy, 23(1), 65\u2013100."},{"key":"9862_CR46","doi-asserted-by":"crossref","unstructured":"Copeland, D., Liivoja, R., & Sanders, L. (2023). The utility of weapons reviews in addressing concerns raised by autonomous weapon systems. Journal of Conflict and Security Law, 28(2), 285\u2013316.","DOI":"10.1093\/jcsl\/krac035"},{"key":"9862_CR15","unstructured":"Crootof, R. (2022, November 28). AI and the Actual IHL Accountability Gap. Centre for International Governance Innovation. https:\/\/www.cigionline.org\/articles\/ai-and-the-actual-ihl-accountability-gap\/"},{"key":"9862_CR47","doi-asserted-by":"publisher","unstructured":"Dorsey, J., & Bo, M. (2025). AI-enabled decision-support systems in the joint targeting cycle: Legal challenges, risks, and the human(e) dimension. International Law Studies 106, https:\/\/doi.org\/10.2139\/ssrn.5327115","DOI":"10.2139\/ssrn.5327115"},{"key":"9862_CR16","unstructured":"Dorsey, J., & Bonacquisti, G. (2017). Towards an EU Common Position on the Use of Armed Drones (No. EP\/EXPO\/B\/COMMITTEE\/FWC\/2013-08\/Lot8\/11). European Parliament, Directorate-General for External Policies, Policy Department. https:\/\/www.europarl.europa.eu\/RegData\/etudes\/STUD\/2017\/578032\/EXPO_STU(2017)578032_EN.pdf"},{"key":"9862_CR17","doi-asserted-by":"publisher","unstructured":"Elsayed, G. F., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., & Sohl-Dickstein, J. (2018). Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. https:\/\/doi.org\/10.48550\/ARXIV.1802.08195","DOI":"10.48550\/ARXIV.1802.08195"},{"key":"9862_CR18","doi-asserted-by":"crossref","unstructured":"Etzioni, A., & Etzioni, O. (2017). Pros and cons of autonomous weapons systems. 
Military Review (May-June), 72\u201381.","DOI":"10.1007\/978-3-319-69623-2_16"},{"key":"9862_CR19","unstructured":"GCSP (2024). General statement by the Geneva Centre for Security Policy at the First Session of the 2024 CCW Group of Governmental Experts on emerging technologies in the area of LAWS. Geneva Centre for Security Policy. https:\/\/www.gcsp.ch\/global-insights\/general-statement-gcsp-first-session-2024-ccw-group-governmental-experts-emerging"},{"key":"9862_CR20","doi-asserted-by":"crossref","unstructured":"Gillespie, T. (2019). Systems engineering for ethical autonomous systems. Institution of Engineering & Technology.","DOI":"10.1049\/SBRA517E"},{"key":"9862_CR21","unstructured":"Grand-Cl\u00e9ment, S. (2023). Artificial intelligence beyond Weapons. Application and impact of AI in the military domain. United Nations Institute for Disarmament Research."},{"key":"9862_CR22","doi-asserted-by":"crossref","unstructured":"Holland Michel, A. (2020). The black Box, Unlocked. Predictability and understandability in military AI. UNIDIR.","DOI":"10.37559\/SecTec\/20\/AI1"},{"key":"9862_CR23","doi-asserted-by":"publisher","DOI":"10.1017\/eis.2024.21","author":"H Huelss","year":"2024","unstructured":"Huelss, H. (2024). Transcending the fog of war? US military \u2018AI\u2019, vision, and the emergent post-scopic regime. European Journal of International Security. https:\/\/doi.org\/10.1017\/eis.2024.21","journal-title":"European Journal of International Security"},{"key":"9862_CR24","unstructured":"Human Factors and Ergonomics Society (2021). Human Readiness Level Scale in the System Development Process. ANSI\/HFES 400\u20132021."},{"key":"9862_CR25","unstructured":"ICRC (2021). ICRC Position on Autonomous Weapon Systems and Background Paper. https:\/\/www.icrc.org\/en\/document\/icrc-position-autonomous-weapon-systems"},{"key":"9862_CR26","unstructured":"IEEE Standards Association (2025). IEEE GET Program for AI Ethics and Governance Standards. 
https:\/\/ieeexplore.ieee.org\/browse\/standards\/get-program\/page\/series?id=93"},{"key":"9862_CR27","unstructured":"IEEE Research Group on Issues of AI and Autonomy in Defence Systems. (2024). A framework for human decision making through the lifecycle of autonomous and intelligent systems in defense applications. IEEE SA."},{"key":"9862_CR28","doi-asserted-by":"publisher","unstructured":"Jackson, S. J., Gillespie, T., & Payette, S. (2014). The policy knot: Re-integrating policy, practice and design in CSCW studies of social computing. Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, 588\u2013602. https:\/\/doi.org\/10.1145\/2531602.2531674","DOI":"10.1145\/2531602.2531674"},{"issue":"1","key":"9862_CR29","first-page":"77","volume":"18","author":"ML Jones","year":"2015","unstructured":"Jones, M. L. (2015). The ironies of automation law: Tying policy knots with fair automation practices principles. Vanderbilt Journal of Entertainment & Technology Law, 18(1), 77\u2013134.","journal-title":"Vanderbilt Journal of Entertainment & Technology Law"},{"key":"9862_CR48","doi-asserted-by":"crossref","unstructured":"Klonowska, K. (2022). Article 36: Review of AI decision-support systems and other emerging technologies of warfare. In T. D. Gill, R. Geiss, H. Krieger, & R. Mignot-Mahdavi (Eds.), Yearbook of International Humanitarian Law (Vol. 23, pp. 123\u2013153). T.M.C. Asser Press.","DOI":"10.1007\/978-94-6265-491-4_6"},{"key":"9862_CR30","unstructured":"Nadibaidze, A., Bode, I., & Zhang, Q. (2024, November). AI in military decision support Systems. A review of developments and debates. Center for War Studies."},{"key":"9862_CR31","unstructured":"Office of the Under Secretary of Defense for Research and Engineering (2025, August 1). DoD Adopts Standard for Human Readiness Levels. Office of the Under Secretary of War for Research and Engineering. 
https:\/\/www.cto.mil\/news\/dod-hrl"},{"key":"9862_CR32","doi-asserted-by":"crossref","unstructured":"Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The Fallacy of AI Functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency, 959\u2013972.","DOI":"10.1145\/3531146.3533158"},{"key":"9862_CR33","unstructured":"Roff, H. M. (2016, September 28). Weapons Autonomy is Rocketing. Foreign Policy. https:\/\/foreignpolicy.com\/2016\/09\/28\/weapons-autonomy-is-rocketing\/"},{"key":"9862_CR34","doi-asserted-by":"crossref","unstructured":"Salazar, G., Lee, J. E., Handley, H. A. H., & Craft, R. (2020). Understanding Human Readiness Levels. https:\/\/www.osti.gov\/servlets\/purl\/1807329","DOI":"10.1177\/1071181320641427"},{"key":"9862_CR35","unstructured":"See, J. E., Craft, R., & Morris, J. D. (2019). Human Readiness Levels in the Systems Engineering Process at Sandia National Laboratories (Nos. SAND2019-3123; Sandia Report). Sandia National Laboratories. https:\/\/www.dau.edu\/sites\/default\/files\/Migrated\/CopDocuments\/Human%20Readiness%20Levels%20-%20Sandia%20National%20Labs.pdf"},{"key":"9862_CR36","doi-asserted-by":"crossref","unstructured":"Sharkey, N. (2016). Staying in the loop: Human supervisory control of weapons. In N. Bhuta, S. Beck, R. Geiss, H. Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 23\u201338). Cambridge University Press.","DOI":"10.1017\/CBO9781316597873.002"},{"issue":"5","key":"9862_CR37","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1007\/s11948-022-00392-3","volume":"28","author":"M Taddeo","year":"2022","unstructured":"Taddeo, M., & Blanchard, A. (2022). A comparative analysis of the definitions of autonomous weapons systems. Science and Engineering Ethics, 28(5), 37.","journal-title":"Science and Engineering Ethics"},{"key":"9862_CR38","unstructured":"UN-CCW (2019, December 13). Final Report. UN Document No. 
CCW\/MSP\/2019\/9."},{"key":"9862_CR39","unstructured":"UNIDIR (2020). The human element in decisions about the use of force. https:\/\/unidir.org\/sites\/default\/files\/2020-03\/UNIDIR_Iceberg_SinglePages_web.pdf"},{"key":"9862_CR40","unstructured":"UNODA, & SIPRI. (2025). Handbook on responsible innovation in AI for international peace and security. UN Office for Disarmament Affairs."},{"key":"9862_CR41","unstructured":"Viveros \u00c1lvarez, J. S. (2024, September 4). The risks and inefficacies of AI systems in military targeting support. ICRC Humanitarian Law & Policy Blog. https:\/\/blogs.icrc.org\/law-and-policy\/wp-content\/uploads\/sites\/102\/2024\/09\/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support-1.pdf"},{"issue":"1","key":"9862_CR42","first-page":"1","volume":"8","author":"B Walker-Munro","year":"2023","unstructured":"Walker-Munro, B., & Assaad, Z. (2023). The guilty (Silicon) mind: Blameworthiness and liability in human-machine teaming. Cambridge Law Review, 8(1), 1\u201324.","journal-title":"Cambridge Law Review"},{"key":"9862_CR43","doi-asserted-by":"publisher","unstructured":"Watts, T., & Bode, I. (2021, February). Autonomy and Automation in Air Defence Systems Catalogue. 
https:\/\/doi.org\/10.5281\/zenodo.4485695","DOI":"10.5281\/zenodo.4485695"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09862-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09862-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09862-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:34:18Z","timestamp":1763782458000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09862-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,6]]},"references-count":49,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9862"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09862-1","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"type":"print","value":"1388-1957"},{"type":"electronic","value":"1572-8439"}],"subject":[],"published":{"date-parts":[[2025,10,6]]},"assertion":[{"value":"6 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 November 2025","order":3,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Update","order":4,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Correction in table 1 and corresponding author affiliation.","order":5,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"51"}}