{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T21:13:06Z","timestamp":1772831586433,"version":"3.50.1"},"reference-count":62,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100006356","name":"University of Southern Denmark","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100006356","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Over the past years, the number of governance initiatives on applications of artificial intelligence (AI) in the military domain has expanded. As actors across the governance landscape turn towards implementing these initiatives, principles will need to be spelled out in practical terms, marking a decisive phase in the governance process. This includes the exercise of human agency in the context of using AI-based systems in the military domain. This paper considers what the notion of exercising human agency means across the lifecycle of AI systems. A lifecycle framework acknowledges that ensuring a qualitatively high exercise of human agency in AI-based systems cannot rely exclusively on the tail-end of the targeting decision-making process. 
Rather, it needs to be built into the lifecycle of AI-based systems from before the potential development of such systems all the way to post-use review. Each of the lifecycle stages raises manifold questions and challenges that various stakeholders need to address in their efforts to sustain and strengthen human agency. The paper highlights twelve key technical, ethical, legal, and strategic concerns across different stages of the lifecycle. These sets of concerns illustrate the value of developing more fine-grained thinking around applied lifecycle models. We conclude that ensuring the exercise of human agency in the use of AI-based systems in military contexts will require careful and reflective decision-making around questions and challenges among the stakeholders involved.<\/jats:p>","DOI":"10.1007\/s10676-025-09861-2","type":"journal-article","created":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T04:32:29Z","timestamp":1759725149000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Ensuring the exercise of human agency in AI-based military systems: concerns across the lifecycle"],"prefix":"10.1007","volume":"27","author":[{"given":"Ingvild","family":"Bode","sequence":"first","affiliation":[]},{"given":"Anna","family":"Nadibaidze","sequence":"additional","affiliation":[]},{"given":"Tom","family":"Watts","sequence":"additional","affiliation":[]},{"given":"Qiaochu","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,6]]},"reference":[{"key":"9861_CR1","unstructured":"Afina, Y., & Grand-Cl\u00e9ment, S. (2024). Bytes and Battles: Inclusion of Data Governance in Responsible Military AI (CIGI Papers 308). Centre for International Governance Innovation. 
https:\/\/www.cigionline.org\/static\/documents\/Afina-Grand_Clement.pdf"},{"issue":"4","key":"9861_CR2","doi-asserted-by":"publisher","first-page":"187","DOI":"10.1007\/s43154-020-00024-3","volume":"1","author":"D Amoroso","year":"2020","unstructured":"Amoroso, D., & Tamburrini, G. (2020). Autonomous weapons systems and meaningful human control: Ethical and legal issues. Current Robotics Reports, 1(4), 187\u2013194. https:\/\/doi.org\/10.1007\/s43154-020-00024-3","journal-title":"Current Robotics Reports"},{"key":"9861_CR3","unstructured":"Anand, A., & Deng, H. (2023). Towards responsible AI in defence: A mapping and comparative analysis of AI principles adopted by States. United Nations Institute for Disarmament Research. https:\/\/unidir.org\/publication\/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states\/"},{"issue":"5","key":"9861_CR4","doi-asserted-by":"publisher","first-page":"855","DOI":"10.1177\/01622439211030007","volume":"47","author":"J Bareis","year":"2022","unstructured":"Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of National AI strategies and their performative politics. Science Technology & Human Values, 47(5), 855\u2013881. https:\/\/doi.org\/10.1177\/01622439211030007","journal-title":"Science Technology & Human Values"},{"issue":"3","key":"9861_CR5","doi-asserted-by":"publisher","first-page":"201","DOI":"10.1057\/s42984-024-00094-z","volume":"5","author":"I Bhila","year":"2024","unstructured":"Bhila, I. (2024). Putting algorithmic bias on top of the agenda in the discussions on autonomous weapons systems. Digital War, 5(3), 201\u2013212. https:\/\/doi.org\/10.1057\/s42984-024-00094-z","journal-title":"Digital War"},{"key":"9861_CR6","unstructured":"Bhuta, N., Beck, S., Gei\u00df, R., Liu, H. Y., & Kre\u00df, C. (Eds.). (2016). Autonomous weapons systems: Law, Ethics, Policy. 
Cambridge University Press."},{"key":"9861_CR7","doi-asserted-by":"publisher","unstructured":"Blanchard, A., & Bruun, L. (2024, December). Bias in military artificial intelligence. Stockholm International Peace Research Institute. https:\/\/doi.org\/10.55163\/CJFT9557","DOI":"10.55163\/CJFT9557"},{"key":"9861_CR8","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-024-01866-7","author":"A Blanchard","year":"2024","unstructured":"Blanchard, A., Thomas, C., & Taddeo, M. (2024). Ethical governance of artificial intelligence for defence: Normative tradeoffs for principle to practice guidance. AI & SOCIETY 40, 185-198. https:\/\/doi.org\/10.1007\/s00146-024-01866-7","journal-title":"AI &amp; SOCIETY"},{"key":"9861_CR9","unstructured":"Bo, M., & Dorsey, J. (2024, April 4). Symposium on Military AI and the Law of Armed Conflict: The \u2018Need\u2019 for Speed \u2013 The Cost of Unregulated AI Decision-Support Systems to Civilians. Opinio Juris. https:\/\/opiniojuris.org\/2024\/04\/04\/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians\/"},{"key":"9861_CR10","unstructured":"Bode, I. (2025). Emerging norms around military applications of AI: The case of human control. Global Commission on Responsible AI in the Military Domain. GC REAIM Expert Policy Note Series. https:\/\/hcss.nl\/wp-content\/uploads\/2025\/05\/Bode-2.pdf"},{"key":"9861_CR11","unstructured":"Bode, I., & Bhila, I. (2024, September 3). The problem of algorithmic bias in AI-based military decision support systems. ICRC Humanitarian Law & Policy Blog. https:\/\/blogs.icrc.org\/law-and-policy\/2024\/09\/03\/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems\/"},{"key":"9861_CR55","doi-asserted-by":"publisher","unstructured":"Bode, I., Huelss, H., Nadibaidze, A., Qiao-Franco, G., & Watts, T.F.A. (2023). 
Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices. Ethics and Information Technology, 25(5), 1-15. https:\/\/doi.org\/10.1007\/s10676-023-09678-x","DOI":"10.1007\/s10676-023-09678-x"},{"key":"9861_CR57","unstructured":"Bode, I. & Watts, T.F.A. (2021). Meaning-less human control: Lessons from air defence systems on meaningful human control for the debate on AWS. Drone Wars UK & Center for War Studies. https:\/\/dronewars.net\/2021\/02\/19\/meaning-less-human-control-lessons-from-air-defence-systems-for-lethal-autonomous-weapons\/"},{"key":"9861_CR58","unstructured":"Bode, I. & Watts, T.F.A. (2023). Loitering munitions and unpredictability: Autonomy in weapon systems and challenges to human control. Center for War Studies & Royal Holloway Centre for International Security. https:\/\/www.autonorms.eu\/loitering-munitions-and-unpredictability-autonomy-in-weapon-systems-and-challenges-to-human-control\/"},{"key":"9861_CR12","doi-asserted-by":"publisher","unstructured":"Brehm, M. (2017). Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapon Systems Under International Humanitarian and Human Rights Law. In The Geneva Academy of International Humanitarian Law and Human rights (Issue 9). Geneva Academy. https:\/\/doi.org\/10.2139\/ssrn.2972071","DOI":"10.2139\/ssrn.2972071"},{"key":"9861_CR13","unstructured":"Cambridge Dictionary (2024). Agency. https:\/\/dictionary.cambridge.org\/dictionary\/english\/agency"},{"key":"9861_CR14","doi-asserted-by":"crossref","unstructured":"Canca, C. (2023). AI ethics and governance in defence innovation: Implementing AI ethics framework. In M. Raska, & R. A. Bitzinger (Eds.), The AI wave in defence innovation (pp. 59\u201379). Routledge.","DOI":"10.4324\/9781003218326-4"},{"key":"9861_CR15","doi-asserted-by":"crossref","unstructured":"Chandler, K. (2021). Does military AI have gender? 
Understanding bias and promoting ethical approaches in military applications of AI. United Nations Institute for Disarmament Research. https:\/\/unidir.org\/files\/2021-12\/UNIDIR_Does_Military_AI_Have_Gender.pdf","DOI":"10.37559\/GEN\/2021\/04"},{"issue":"2","key":"9861_CR16","doi-asserted-by":"publisher","first-page":"229","DOI":"10.1007\/s43681-023-00261-0","volume":"4","author":"EH Christie","year":"2024","unstructured":"Christie, E. H., Ertan, A., Adomaitis, L., & Klaus, M. (2024). Regulating lethal autonomous weapon systems: Exploring the challenges of explainability and traceability. AI and Ethics, 4(2), 229\u2013245. https:\/\/doi.org\/10.1007\/s43681-023-00261-0","journal-title":"AI and Ethics"},{"key":"9861_CR17","doi-asserted-by":"publisher","unstructured":"Coco, A., & Dias, T. (2024). \u2018Handle with care\u2019: Due diligence obligations in the employment of AI technologies. In R. Gei\u00df & H. Lahmann (Eds.), Research Handbook on Warfare and Artificial Intelligence (pp. 234\u2013260). Edward Elgar Publishing. https:\/\/doi.org\/10.4337\/9781800377400.00019","DOI":"10.4337\/9781800377400.00019"},{"issue":"1","key":"9861_CR18","doi-asserted-by":"publisher","first-page":"6","DOI":"10.1002\/j.2371-9621.2021.tb00005.x","volume":"42","author":"ML Cummings","year":"2021","unstructured":"Cummings, M. L. (2021). Rethinking the maturity of artificial intelligence in safety-critical settings. AI Magazine, 42(1), 6\u201315. https:\/\/doi.org\/10.1002\/j.2371-9621.2021.tb00005.x","journal-title":"AI Magazine"},{"key":"9861_CR59","doi-asserted-by":"publisher","unstructured":"Dorsey, J. & Moffett, L. (2025). The Warification of International Humanitarian Law and the Artifice of Artificial Intelligence in Decision-Support Systems: Restoring Balance through the Legitimacy of Military Operations. SSRN. 
https:\/\/doi.org\/10.2139\/ssrn.5239131","DOI":"10.2139\/ssrn.5239131"},{"issue":"2","key":"9861_CR19","doi-asserted-by":"publisher","first-page":"311","DOI":"10.1093\/jcsl\/krw029","volume":"22","author":"MAC Ekelhof","year":"2017","unstructured":"Ekelhof, M. A. C. (2017). Complications of a common language: Why it is so hard to talk about autonomous weapons. Journal of Conflict and Security Law, 22(2), 311\u2013331. https:\/\/doi.org\/10.1093\/jcsl\/krw029","journal-title":"Journal of Conflict and Security Law"},{"key":"9861_CR60","unstructured":"Elbaum, S. & Panter, J. (2024). AI weapons and the dangerous illusion of human control. Foreign Affairs. https:\/\/www.foreignaffairs.com\/united-states\/ai-weapons-and-dangerous-illusion-human-contro"},{"key":"9861_CR20","doi-asserted-by":"publisher","unstructured":"French, S. E., & Lindsay, L. N. (2022). Artificial Intelligence in Military Decision-Making: Avoiding Ethical and Strategic Perils with an Option-Generator Model. In B. Koch & R. Schoonhoven (Eds.), Emerging Military Technologies (pp. 53\u201374). Brill | Nijhoff. https:\/\/doi.org\/10.1163\/9789004507951_007","DOI":"10.1163\/9789004507951_007"},{"key":"9861_CR56","doi-asserted-by":"crossref","unstructured":"Garcia, D. (2023). The AI Military Race: Common Good Governance in the Age of Artificial Intelligence. Oxford University Press.","DOI":"10.1093\/oso\/9780192864604.001.0001"},{"key":"9861_CR21","unstructured":"Holland Michel, A. (2020). The black Box, unlocked. United Nations Institute for Disarmament Research. https:\/\/unidir.org\/publication\/the-black-box-unlocked\/"},{"key":"9861_CR22","doi-asserted-by":"crossref","unstructured":"Holland Michel, A. (2021). Known unknowns: Data issues and military autonomous systems. United Nations Institute for Disarmament Research. https:\/\/unidir.org\/known-unknowns","DOI":"10.37559\/SecTec\/21\/AI1"},{"key":"9861_CR23","unstructured":"Holland Michel, A. (2024). 
The Accountability Surface of Militaries Using Automated Technologies (Policy Brief 188). Centre for International Governance Innovation. https:\/\/www.cigionline.org\/publications\/the-accountability-surface-of-militaries-using-automated-technologies\/"},{"issue":"3","key":"9861_CR24","doi-asserted-by":"publisher","first-page":"36","DOI":"10.15781\/T2639KP49","volume":"1","author":"MC Horowitz","year":"2018","unstructured":"Horowitz, M. C. (2018). Artificial Intelligence, international Competition, and the balance of power. Texas National Security Review, 1(3), 36\u201357. https:\/\/doi.org\/10.15781\/T2639KP49","journal-title":"Texas National Security Review"},{"issue":"6","key":"9861_CR25","doi-asserted-by":"publisher","first-page":"764","DOI":"10.1080\/01402390.2019.1621174","volume":"42","author":"MC Horowitz","year":"2019","unstructured":"Horowitz, M. C. (2019). When speed kills: Lethal autonomous weapon systems, deterrence and stability. Journal Of Strategic Studies, 42(6), 764\u2013788. https:\/\/doi.org\/10.1080\/01402390.2019.1621174","journal-title":"Journal Of Strategic Studies"},{"issue":"1","key":"9861_CR26","doi-asserted-by":"publisher","first-page":"116","DOI":"10.1080\/01402390.2023.2241648","volume":"47","author":"C Hunter","year":"2024","unstructured":"Hunter, C., & Bowen, B. E. (2024). We\u2019ll never have a model of an AI major-general: Artificial intelligence, command decisions, and kitsch visions of war. Journal of Strategic Studies, 47(1), 116\u2013146. https:\/\/doi.org\/10.1080\/01402390.2023.2241648","journal-title":"Journal of Strategic Studies"},{"key":"9861_CR27","unstructured":"IEEE SA Research Group on Issues of Autonomy and AI in Defense Systems. (2024). A framework for human decision making through the lifecycle of autonomous and intelligent systems in defense applications. IEEE SA. https:\/\/ieeexplore.ieee.org\/document\/10707139"},{"key":"9861_CR28","unstructured":"International Committee of the Red Cross (2021, May 12). 
ICRC Position on Autonomous Weapon Systems. https:\/\/www.icrc.org\/en\/document\/icrc-position-autonomous-weapon-systems"},{"key":"9861_CR29","doi-asserted-by":"crossref","unstructured":"Jervis, R. (2017). Perception and Misperception in International Politics (New paperback edition). Princeton University Press.","DOI":"10.2307\/j.ctvc77bx3"},{"issue":"9","key":"9861_CR30","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1038\/s42256-019-0088-2","volume":"1","author":"A Jobin","year":"2019","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389\u2013399. https:\/\/doi.org\/10.1038\/s42256-019-0088-2","journal-title":"Nature Machine Intelligence"},{"issue":"1","key":"9861_CR31","first-page":"16","volume":"14","author":"J Johnson","year":"2020","unstructured":"Johnson, J. (2020). Artificial intelligence: A threat to strategic stability. Strategic Studies Quarterly, 14(1), 16\u201339.","journal-title":"Strategic Studies Quarterly"},{"key":"9861_CR32","unstructured":"Kahn, L., Probasco, E., & Kinoshita, R. (2024). AI Safety and Automation Bias: The Downside of Human-in-the-Loop. Center for Security and Emerging Technology. https:\/\/cset.georgetown.edu\/publication\/ai-safety-and-automation-bias\/"},{"issue":"2","key":"9861_CR33","doi-asserted-by":"publisher","first-page":"185","DOI":"10.1080\/01402390.2024.2302585","volume":"47","author":"A King","year":"2024","unstructured":"King, A. (2024). Robot wars: Autonomous drone swarms and the battlefield of the future. Journal of Strategic Studies, 47(2), 185\u2013213. https:\/\/doi.org\/10.1080\/01402390.2024.2302585","journal-title":"Journal of Strategic Studies"},{"key":"9861_CR34","doi-asserted-by":"publisher","unstructured":"Klonowska, K. (2022). Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare. In T. D. Gill, R. Gei\u00df, H. Krieger, & R. 
Mignot-Mahdavi (Eds.), Yearbook of International Humanitarian Law, Volume 23 (2020) (Vol. 23, pp. 123\u2013153). T.M.C. Asser Press. https:\/\/doi.org\/10.1007\/978-94-6265-491-4_6","DOI":"10.1007\/978-94-6265-491-4_6"},{"key":"9861_CR61","doi-asserted-by":"publisher","unstructured":"Kwik, J. (2025). Digital Yes-Men: How to Deal with Sycophantic Military AI? Global Policy, 16(3), 467-473. https:\/\/doi.org\/10.1111\/1758-5899.70042","DOI":"10.1111\/1758-5899.70042"},{"key":"9861_CR35","doi-asserted-by":"publisher","unstructured":"Kwik, J., & van Engers, T. (2023). Performance or Explainability? A Law of Armed Conflict Perspective. In A. Kornilakis, G. Nouskalis, V. Pergantis, & T. Tzimas (Eds.), Artificial Intelligence and Normative Challenges: International and Comparative Legal Perspectives (pp. 255\u2013279). Springer International Publishing. https:\/\/doi.org\/10.1007\/978-3-031-41081-9_14","DOI":"10.1007\/978-3-031-41081-9_14"},{"issue":"4","key":"9861_CR36","doi-asserted-by":"publisher","first-page":"468","DOI":"10.1177\/0162243913509493","volume":"39","author":"A M\u2019charek","year":"2014","unstructured":"M\u2019charek, A., Schramm, K., & Skinner, D. (2014). Topologies of race: Doing territory, population and identity in Europe. Science Technology & Human Values, 39(4), 468\u2013487. https:\/\/doi.org\/10.1177\/0162243913509493","journal-title":"Science Technology & Human Values"},{"issue":"1","key":"9861_CR37","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1007\/s10676-023-09683-0","volume":"25","author":"HW Meerveld","year":"2023","unstructured":"Meerveld, H. W., Lindelauf, R. H. A., Postma, E. O., & Postma, M. (2023). The irresponsibility of not using AI in the military. Ethics and Information Technology, 25(1), 14. https:\/\/doi.org\/10.1007\/s10676-023-09683-0","journal-title":"Ethics and Information Technology"},{"key":"9861_CR38","unstructured":"Nadibaidze, A., Bode, I., & Zhang, Q. (2024, November). 
AI in military decision support Systems. A review of developments and debates. Center for War Studies. https:\/\/www.autonorms.eu\/ai-in-military-decision-support-systems-a-review-of-developments-and-debates\/"},{"key":"9861_CR39","doi-asserted-by":"crossref","unstructured":"Ollino, A. (2022). Due diligence obligations in international law. Cambridge University Press.","DOI":"10.1017\/9781009053082"},{"issue":"3","key":"9861_CR40","doi-asserted-by":"publisher","first-page":"398","DOI":"10.1287\/orsc.3.3.398","volume":"3","author":"WJ Orlikowski","year":"1992","unstructured":"Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398\u2013427.","journal-title":"Organization Science"},{"issue":"5","key":"9861_CR41","doi-asserted-by":"publisher","first-page":"7","DOI":"10.1080\/00396338.2018.1518374","volume":"60","author":"K Payne","year":"2018","unstructured":"Payne, K. (2018). Artificial intelligence: A revolution in strategic affairs? Survival, 60(5), 7\u201332. https:\/\/doi.org\/10.1080\/00396338.2018.1518374","journal-title":"Survival"},{"key":"9861_CR42","unstructured":"Government of the Netherlands (2023, February 16). REAIM Call for Action. https:\/\/www.government.nl\/documents\/publications\/2023\/02\/16\/reaim-2023-call-to-action"},{"issue":"3","key":"9861_CR43","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1017\/S0892679423000291","volume":"37","author":"NC Renic","year":"2023","unstructured":"Renic, N. C., & Schwarz, E. (2023). Crimes of dispassion: Autonomous weapons and the moral challenge of systematic killing. Ethics & International Affairs, 37(3), 321\u2013343. https:\/\/doi.org\/10.1017\/S0892679423000291","journal-title":"Ethics & International Affairs"},{"key":"9861_CR44","unstructured":"Roff, H. M., & Moyes, R. (2016, April). Meaningful Human Control, Artificial Intelligence and Autonomous Weapons. Article 36. 
http:\/\/www.article36.org\/wp-content\/uploads\/2016\/04\/MHC-AI-and-AWS-FINAL.pdf"},{"issue":"2","key":"9861_CR45","doi-asserted-by":"publisher","DOI":"10.3390\/en15020507","volume":"15","author":"P Sarajcev","year":"2022","unstructured":"Sarajcev, P., Kunac, A., Petrovic, G., & Despalatovic, M. (2022). Artificial intelligence techniques for power system transient stability assessment. Energies, 15(2), Article 507. https:\/\/doi.org\/10.3390\/en15020507","journal-title":"Energies"},{"key":"9861_CR46","unstructured":"Scharre, P. (2023). Four battlegrounds: Power in the age of artificial intelligence. W. W. Norton."},{"issue":"2","key":"9861_CR47","doi-asserted-by":"publisher","first-page":"288","DOI":"10.1162\/daed_a_01916","volume":"151","author":"E Schmidt","year":"2022","unstructured":"Schmidt, E. (2022). AI, great power competition & National security. Daedalus, 151(2), 288\u2013298. https:\/\/doi.org\/10.1162\/daed_a_01916","journal-title":"Daedalus"},{"issue":"2","key":"9861_CR48","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1177\/20539517231206794","volume":"10","author":"L Suchman","year":"2023","unstructured":"Suchman, L. (2023). The uncontroversial \u2018thingness\u2019 of AI. Big Data & Society, 10(2), 1\u20135. https:\/\/doi.org\/10.1177\/20539517231206794","journal-title":"Big Data & Society"},{"issue":"5","key":"9861_CR49","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1007\/s11948-022-00392-3","volume":"28","author":"M Taddeo","year":"2022","unstructured":"Taddeo, M., & Blanchard, A. (2022). A comparative analysis of the definitions of autonomous weapons systems. Science and Engineering Ethics, 28(5), 37. https:\/\/doi.org\/10.1007\/s11948-022-00392-3","journal-title":"Science and Engineering Ethics"},{"key":"9861_CR50","unstructured":"UN CCW (2019, September 25). Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. UN Document No. 
CCW\/GGE.1\/2019\/3. https:\/\/documents.unoda.org\/wp-content\/uploads\/2020\/09\/CCW_GGE.1_2019_3_E.pdf"},{"key":"9861_CR51","unstructured":"UNIDIR (2020). The human element in decisions about the use of force. https:\/\/unidir.org\/sites\/default\/files\/2020-03\/UNIDIR_Iceberg_SinglePages_web.pdf"},{"key":"9861_CR52","unstructured":"United States (2023). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. https:\/\/www.state.gov\/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy\/"},{"key":"9861_CR53","unstructured":"United States Marine Corps (2023). Force Design 2030 Annual Update. https:\/\/www.marines.mil\/Portals\/1\/Docs\/Force_Design_2030_Annual_Update_June_2023.pdf"},{"key":"9861_CR62","doi-asserted-by":"publisher","unstructured":"Watts, T.F.A. (2024). Stay Sceptical: Why International Relations Scholars Should Approach AI with Caution. The RUSI Journal, 169(5), 56-58. https:\/\/doi.org\/10.1080\/03071847.2024.2413275","DOI":"10.1080\/03071847.2024.2413275"},{"key":"9861_CR54","unstructured":"Zhang, Q. (2024, October 21). Navigating the Complexities of Exercising Human Agency in Human-Machine Interaction Across the AI Lifecycle. The AutoNorms Blog. 
https:\/\/www.autonorms.eu\/navigating-the-complexities-of-exercising-human-agency-in-human-machine-interaction-across-the-ai-lifecycle\/"}],"updated-by":[{"DOI":"10.1007\/s10676-025-09887-6","type":"correction","label":"Correction","source":"publisher","updated":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T00:00:00Z","timestamp":1766534400000}}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09861-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09861-2","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09861-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T08:00:47Z","timestamp":1766563247000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09861-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,6]]},"references-count":62,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9861"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09861-2","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,6]]},"assertion":[{"value":"6 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 December 2025","order":3,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"Correction","order":4,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"A Correction to this paper has been published:","order":5,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"https:\/\/doi.org\/10.1007\/s10676-025-09887-6","URL":"https:\/\/doi.org\/10.1007\/s10676-025-09887-6","order":6,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"50"}}