{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:38:35Z","timestamp":1763782715720,"version":"3.45.0"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Rapid integration of Artificial Intelligence (AI) into the military domain necessitates actionable strategies for translating high-level principles of responsible use into practical guidelines. However, there remains a problematic gap between these principles and the norms that govern the use of AI in military operations. Moreover, these norms are highly dependent on the particular context in which military AI is deployed. This leads to normative uncertainty; what\n                    <jats:italic>is<\/jats:italic>\n                    responsible use of AI in a specific military operation? Unclear practical guidelines pose challenges for technology developers and military operators involved in the deployment of military AI. This paper emphasises the need for a context-specific assessment of responsible use of military AI. Moving beyond a one-size-fits-all standard, we propose the Military AI Responsibility Contextualisation (MARC) framework; a structured approach that facilitates a context-specific assessment. 
In that way, this paper aims to contribute to bridging the gap between abstract principles and practical guidelines. We furthermore emphasise the need for interdisciplinary collaboration in further operationalising responsible military AI to work towards the ethical and effective development and deployment of AI in military operations.\n                  <\/jats:p>","DOI":"10.1007\/s10676-025-09865-y","type":"journal-article","created":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T04:26:11Z","timestamp":1759724771000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Operationalising responsible AI in the military domain: a context-specific assessment"],"prefix":"10.1007","volume":"27","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0064-6293","authenticated-orcid":false,"given":"Herwin","family":"Meerveld","sequence":"first","affiliation":[]},{"given":"Lonneke","family":"Peperkamp","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8082-4883","authenticated-orcid":false,"given":"Marie","family":"\u0160af\u00e1\u0159 Postma","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7688-3645","authenticated-orcid":false,"given":"Roy","family":"Lindelauf","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,6]]},"reference":[{"issue":"5","key":"9865_CR1","doi-asserted-by":"publisher","first-page":"117","DOI":"10.1080\/00396338.2017.1368660","volume":"59","author":"J Altmann","year":"2017","unstructured":"Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and strategic stability. Survival, 59(5), 117\u2013142. 
https:\/\/doi.org\/10.1080\/00396338.2017.1368660","journal-title":"Survival"},{"issue":"5\u20136","key":"9865_CR2","doi-asserted-by":"publisher","first-page":"793","DOI":"10.1080\/01402390.2016.1232642","volume":"39","author":"K Ayoub","year":"2016","unstructured":"Ayoub, K., & Payne, K. (2016). Strategy in the age of artificial intelligence. Journal of Strategic Studies, 39(5\u20136), 793\u2013819. https:\/\/doi.org\/10.1080\/01402390.2016.1232642","journal-title":"Journal of Strategic Studies"},{"unstructured":"Backes, A., & Swab, A. (2019). Cognitive warfare: The Russian threat to election integrity in the Baltic States. Belfer Center for Science and International Affairs.","key":"9865_CR3"},{"unstructured":"Bartlett, H. C., & Holman Jr., G. P. (1996, Winter). The spectrum of conflict: What can it do for force planners? Naval War College Review, 49(1), 119\u2013129.","key":"9865_CR4"},{"doi-asserted-by":"crossref","unstructured":"Blanchard, A., & Bruun, L. (2024). Bias in Military Artificial Intelligence. Available at SSRN 5432596.","key":"9865_CR5","DOI":"10.55163\/CJFT9557"},{"doi-asserted-by":"crossref","unstructured":"Braun, C. N. (2023). Limited force and the fight for the just war tradition. Georgetown University Press.","key":"9865_CR52","DOI":"10.1353\/book.113333"},{"unstructured":"Braw, E. (2023). AI and Gray-Zone aggression: Risks and opportunities. American Enterprise Institute.","key":"9865_CR6"},{"doi-asserted-by":"crossref","unstructured":"Brunstetter, D. R. (2021). Just and unjust uses of limited force: A moral argument with contemporary illustrations. Oxford University Press.","key":"9865_CR51","DOI":"10.1093\/oso\/9780192897008.001.0001"},{"issue":"1","key":"9865_CR7","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1017\/S0892679412000792","volume":"27","author":"D Brunstetter","year":"2013","unstructured":"Brunstetter, D., & Braun, M. (2013). From jus ad bellum to jus ad vim: Recalibrating our understanding of the moral use of force. 
Ethics & International Affairs, 27(1), 87\u2013106. https:\/\/doi.org\/10.1017\/S0892679412000792","journal-title":"Ethics & International Affairs"},{"unstructured":"Byman, D. L., Gao, C., Meserole, C., & Subrahmanian, V. S. (2023). Deepfakes and international conflict. Brookings Institution.","key":"9865_CR8"},{"unstructured":"Department of Defense (2022). US Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway. US Department of Defense. https:\/\/media.defense.gov\/2022\/Jun\/22\/2003022604\/-1\/-1\/0\/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF","key":"9865_CR9"},{"unstructured":"Department of the Army (2019). Army Doctrine Publication (ADP) 3\u2009\u2013\u20090, Operations. U.S. Government Publishing Office. https:\/\/armypubs.army.mil\/epubs\/DR_pubs\/DR_a\/ARN18010-ADP_3-0-000-WEB-2.pdf","key":"9865_CR10"},{"doi-asserted-by":"crossref","unstructured":"Devitt, S. K. (2024). Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams. In J. M. Schraagen (Ed.), Responsible use of AI in military systems (pp. 248\u2013277). CRC.","key":"9865_CR11","DOI":"10.1201\/9781003410379-16"},{"key":"9865_CR12","doi-asserted-by":"publisher","DOI":"10.1080\/00396338.2019.1614782","author":"M Fitzpatrick","year":"2019","unstructured":"Fitzpatrick, M. (2019). Artificial intelligence and nuclear command and control. Survival. https:\/\/doi.org\/10.1080\/00396338.2019.1614782","journal-title":"Survival"},{"issue":"6","key":"9865_CR13","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3638552","volume":"56","author":"MG Gaber","year":"2024","unstructured":"Gaber, M. G., Ahmed, M., & Janicke, H. (2024). Malware detection with artificial intelligence: A systematic literature review. 
ACM Computing Surveys, 56(6), 1\u201333.","journal-title":"ACM Computing Surveys"},{"key":"9865_CR14","doi-asserted-by":"publisher","DOI":"10.34190\/eccws.23.1.2158","author":"H Grahn","year":"2024","unstructured":"Grahn, H., H\u00e4kkinen, T., & Taipalus, T. (2024). Cognitive security in a changing world: Citizen perceptions during Finland\u2019s NATO joining process. European Conference on Cyber Warfare and Security. https:\/\/doi.org\/10.34190\/eccws.23.1.2158","journal-title":"European Conference on Cyber Warfare and Security"},{"key":"9865_CR15","first-page":"80","volume":"104","author":"OA Hathaway","year":"2025","unstructured":"Hathaway, O. A., & Shapiro, S. J. (2025). Might unmakes right: The catastrophic collapse of norms against the use of force. Foreign Affairs, 104, 80.","journal-title":"Foreign Affairs"},{"issue":"3","key":"9865_CR16","first-page":"50","volume":"9","author":"A Husain","year":"2021","unstructured":"Husain, A. (2021). AI is shaping the future of war. Prism, 9(3), 50\u201361.","journal-title":"Prism"},{"key":"9865_CR55","first-page":"1","volume":"27","author":"Agata Kleczkowska","year":"2025","unstructured":"Kleczkowska, A. (2025). The Russian Disinformation Campaign During the Romanian Presidential Elections: The Perfect Example of a Violation of International Law? Opinio Juris, 27 January 2025. https:\/\/opiniojuris.org\/2025\/01\/27\/the-russian-disinformation-campaign-during-the-romanian-presidential-elections-the-perfect-example-of-a-violation-of-international-law\/","journal-title":"Opinio Juris"},{"doi-asserted-by":"crossref","unstructured":"Lazar, S. (2017). Just war theory: Revisionists versus traditionalists. Annual Review of Political Science, 20, 37\u201354.","key":"9865_CR17","DOI":"10.1146\/annurev-polisci-060314-112706"},{"unstructured":"Lee, R., & Clay, M. (2022, May 9). Don\u2019t Call it a Gray Zone: China\u2019s Use-of-Force Spectrum. War on the Rocks. 
https:\/\/warontherocks.com\/2022\/05\/dont-call-it-a-gray-zone-chinas-use-of-force-spectrum\/","key":"9865_CR18"},{"doi-asserted-by":"publisher","unstructured":"Lupton, D., & Morkevicius, V. (2019). The Fog of War: Violence, Coercion, and Jus ad Vim. https:\/\/doi.org\/10.1515\/9781474444231-005","key":"9865_CR19","DOI":"10.1515\/9781474444231-005"},{"doi-asserted-by":"crossref","unstructured":"McGregor, S. (2021). Preventing repeated real world AI failures by cataloging incidents: The AI incident database. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17).","key":"9865_CR20","DOI":"10.1609\/aaai.v35i17.17817"},{"doi-asserted-by":"publisher","unstructured":"Meerveld, H., & Lindelauf, R. (2024). Data science in military decision-making: Foci and gaps. Global Society(2). https:\/\/doi.org\/10.1080\/13600826.2024.2353657","key":"9865_CR21","DOI":"10.1080\/13600826.2024.2353657"},{"doi-asserted-by":"publisher","unstructured":"Melgaard, N., & Whetham, D. (2021). Jus ad vim: War, peace and the ethical status of the in-between. In S. Miller, M. C. Regan, & P. F. Walsh (Eds.), National Security Intelligence and Ethics (pp. 168\u2013198). Routledge. https:\/\/doi.org\/10.4324\/9781003164197-16","key":"9865_CR22","DOI":"10.4324\/9781003164197-16"},{"doi-asserted-by":"crossref","unstructured":"Miller, S. R. M. (2019). Jus ad vim: The morality of military and police use of force in armed conflicts short of war. In J. Galliott (Ed.), Force short of war in modern conflict: Jus ad vim. Edinburgh University Press.","key":"9865_CR23","DOI":"10.1515\/9781474444231-009"},{"issue":"3","key":"9865_CR54","doi-asserted-by":"publisher","first-page":"46","DOI":"10.1007\/s10676-023-09717-7","volume":"25","author":"Seumas Miller","year":"2023","unstructured":"Miller, S. (2023). Cognitive warfare: an ethical analysis. Ethics and Information Technology, 25(3), 46. 
https:\/\/doi.org\/10.1007\/s10676-023-09717-7","journal-title":"Ethics and Information Technology"},{"unstructured":"Minist\u00e8re des Arm\u00e9es (2019). Artificial Intelligence in Support of Defence. https:\/\/www.defense.gouv.fr\/sites\/default\/files\/aid\/Report%20of%20the%20AI%20Task%20Force%20September%202019.pdf","key":"9865_CR24"},{"unstructured":"Ministry of Defence (2022, June). Ambitious, Safe, Responsible. Our approach to the delivery of AI-enabled capability in Defence. UK Ministry of Defence. https:\/\/assets.publishing.service.gov.uk\/media\/62a9b1d1e90e07039e31b8cb\/20220614-Ambitious_Safe_and_Responsible.pdf","key":"9865_CR25"},{"unstructured":"Ministry of Foreign Affairs (2023). REAIM 2023 | Ministry of Foreign Affairs. Government.nl. Retrieved November 20, 2024, from https:\/\/www.government.nl\/ministries\/ministry-of-foreign-affairs\/activiteiten\/reaim","key":"9865_CR26"},{"unstructured":"Ministry of Foreign Affairs (2024, November 7). United Nations adopts Dutch AI resolution. Government.nl. Retrieved November 20, 2024, from https:\/\/www.government.nl\/latest\/news\/2024\/11\/07\/united-nations-adopts-dutch-ai-resolution","key":"9865_CR27"},{"doi-asserted-by":"publisher","unstructured":"Morkevicius, V. (2017). Introduction: The roles of international law and just war theory. Ethics & International Affairs, 431\u2013432. https:\/\/doi.org\/10.1017\/S0892679417000417","key":"9865_CR28","DOI":"10.1017\/S0892679417000417"},{"unstructured":"Nadibaidze, A., Bode, I., & Zhang, Q. (2024). AI in military decision support systems: A review of developments and debates. Center for War Studies.","key":"9865_CR29"},{"unstructured":"NATO (2017). AJP-01 Allied Joint Doctrine (E, version 1 ed.). NATO Standardization Office.","key":"9865_CR30"},{"unstructured":"NATO. (2021, October 22). Summary of the NATO Artificial Intelligence Strategy. NATO.int. 
Retrieved November 20, 2024, from https:\/\/www.nato.int\/cps\/en\/natohq\/official_texts_187617.htm","key":"9865_CR31"},{"unstructured":"NATO. (2024, July 10). Summary of NATO\u2019s revised Artificial Intelligence (AI) strategy. NATO. Retrieved November 21, 2024, from https:\/\/www.nato.int\/cps\/en\/natohq\/official_texts_227237.htm","key":"9865_CR32"},{"unstructured":"Nikoula, D., & McMahon, D. (2024). Cognitive warfare: Securing hearts and minds. University of Ottawa.","key":"9865_CR33"},{"key":"9865_CR34","doi-asserted-by":"publisher","DOI":"10.1177\/17550882211034704","author":"L Peperkamp","year":"2022","unstructured":"Peperkamp, L. (2022). Restraining the fox: Minimalism in the ethics of war and peace. Journal of International Political Theory. https:\/\/doi.org\/10.1177\/17550882211034704","journal-title":"Journal of International Political Theory"},{"key":"9865_CR35","doi-asserted-by":"publisher","first-page":"851","DOI":"10.1007\/s10677-015-9563-y","volume":"18","author":"D Purves","year":"2015","unstructured":"Purves, D., Jenkins, R., & Strawser, B. J. (2015). Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 18, 851\u2013872.","journal-title":"Ethical Theory and Moral Practice"},{"issue":"4","key":"9865_CR36","doi-asserted-by":"publisher","first-page":"456","DOI":"10.1080\/01402390.2020.1773039","volume":"44","author":"M Raska","year":"2021","unstructured":"Raska, M. (2021). The sixth RMA wave: Disruption in military affairs? Journal of Strategic Studies, 44(4), 456\u2013479. https:\/\/doi.org\/10.1080\/01402390.2020.1773039","journal-title":"Journal of Strategic Studies"},{"issue":"1","key":"9865_CR37","doi-asserted-by":"publisher","first-page":"185","DOI":"10.1515\/kbo-2018-0027","volume":"24","author":"A Ratiu","year":"2018","unstructured":"Ratiu, A. (2018). Comprehensive approach in the full spectrum of conflict. International Conference Knowledge-Based Organization, 24(1), 185\u2013191. 
https:\/\/doi.org\/10.1515\/kbo-2018-0027","journal-title":"International Conference Knowledge-Based Organization"},{"issue":"3","key":"9865_CR38","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1017\/S0892679423000291","volume":"37","author":"N Renic","year":"2023","unstructured":"Renic, N., & Schwarz, E. (2023). Crimes of dispassion: Autonomous weapons and the moral challenge of systematic killing. Ethics & International Affairs, 37(3), 321\u2013343.","journal-title":"Ethics & International Affairs"},{"issue":"1","key":"9865_CR39","doi-asserted-by":"publisher","first-page":"2","DOI":"10.1080\/15027570.2018.1481907","volume":"17","author":"HM Roff","year":"2018","unstructured":"Roff, H. M., & Danks, D. (2018). Trust but verify: The difficulty of trusting autonomous weapons systems. Journal of Military Ethics, 17(1), 2\u201320. https:\/\/doi.org\/10.1080\/15027570.2018.1481907","journal-title":"Journal of Military Ethics"},{"doi-asserted-by":"crossref","unstructured":"Schraagen, J. M. (Ed.). (2024). Responsible use of AI in military systems. CRC Press, Taylor & Francis Group.","key":"9865_CR40","DOI":"10.1201\/9781003410379"},{"issue":"2","key":"9865_CR41","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1007\/s10676-018-9494-0","volume":"21","author":"A Sharkey","year":"2019","unstructured":"Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75\u201387.","journal-title":"Ethics and Information Technology"},{"key":"9865_CR42","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1016\/j.iotcps.2023.02.004","volume":"3","author":"R Singh","year":"2023","unstructured":"Singh, R., & Gill, S. S. (2023). Edge AI: A survey. 
Internet of Things and Cyber-Physical Systems, 3, 71\u201392.","journal-title":"Internet of Things and Cyber-Physical Systems"},{"key":"9865_CR56","doi-asserted-by":"publisher","DOI":"10.1093\/oxfordhb\/9780197579329.013.69","volume-title":"The Oxford Handbook of AI Governance","author":"Zoe Stanley-Lockman","year":"2022","unstructured":"Stanley-Lockman, Z., & Trabucco, L. (2022). NATO\u2019s role in responsible AI governance in military affairs. In J. B. Bullock et al. (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https:\/\/doi.org\/10.1093\/oxfordhb\/9780197579329.013.69"},{"doi-asserted-by":"crossref","unstructured":"Taddeo, M. (2024). The ethics of artificial intelligence in defence. Oxford University Press.","key":"9865_CR43","DOI":"10.1093\/oso\/9780197745441.001.0001"},{"doi-asserted-by":"publisher","unstructured":"Taddeo, M., McNeish, D., Blanchard, A., & Edgar, E. (2023). Ethical principles for artificial intelligence in the defence domain. In F. Cristiano, D. Broeders, F. Delerue, F. Douzet, & A. G\u00e9ry (Eds.), Artificial intelligence and international conflict in cyberspace (p. 159). Routledge. https:\/\/doi.org\/10.4324\/9781003284093-10","key":"9865_CR44","DOI":"10.4324\/9781003284093-10"},{"key":"9865_CR45","doi-asserted-by":"publisher","DOI":"10.1007\/s10551-022-05050-z","author":"Z T\u00f3th","year":"2022","unstructured":"T\u00f3th, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The dawn of the AI robots: Towards a new framework of AI robot accountability. Journal of Business Ethics. https:\/\/doi.org\/10.1007\/s10551-022-05050-z","journal-title":"Journal of Business Ethics"},{"unstructured":"US Air Force. (2020). Air force doctrine publication 3\u201322. Curtis E. Lemay Center for Doctrine Development and Education.","key":"9865_CR46"},{"unstructured":"US Air Force. (2021). Air force doctrine publication 1. 
US Air Force.","key":"9865_CR47"},{"issue":"1","key":"9865_CR48","first-page":"101","volume":"80","author":"JL Votel","year":"2016","unstructured":"Votel, J. L., Cleveland, C. T., Connett, C. T., & Irwin, W. (2016). Unconventional warfare in the Gray zone. Joint Forces Quarterly, 80(1), 101\u2013109.","journal-title":"Joint Forces Quarterly"},{"unstructured":"Walzer, M. (2006). Just and unjust wars. Basic books.","key":"9865_CR53"},{"issue":"2","key":"9865_CR49","first-page":"4","volume":"7","author":"L Wells","year":"2017","unstructured":"Wells, L. (2017). Cognitive-emotional conflict: Adversary will and social resilience. Prism (University Park, Pa), 7(2), 4\u201317.","journal-title":"Prism (University Park, Pa)"},{"issue":"1","key":"9865_CR50","first-page":"134","volume":"59","author":"M Zaj\u0105c","year":"2020","unstructured":"Zaj\u0105c, M. (2020). No right to mercy - Making sense of arguments from dignity in the lethal autonomous weapons debate. Etyka, 59(1), 134\u2013155.","journal-title":"Etyka"}],"container-title":["Ethics and Information 
Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09865-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09865-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09865-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:34:18Z","timestamp":1763782458000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09865-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,6]]},"references-count":56,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9865"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09865-y","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"type":"print","value":"1388-1957"},{"type":"electronic","value":"1572-8439"}],"subject":[],"published":{"date-parts":[[2025,10,6]]},"assertion":[{"value":"6 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"48"}}