{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:38:29Z","timestamp":1763782709744,"version":"3.45.0"},"reference-count":83,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T00:00:00Z","timestamp":1760486400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T00:00:00Z","timestamp":1760486400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001711","name":"Schweizerischer Nationalfonds zur F\u00f6rderung der Wissenschaftlichen Forschung","doi-asserted-by":"publisher","award":["213975"],"award-info":[{"award-number":["213975"]}],"id":[{"id":"10.13039\/501100001711","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The rapid proliferation of AI systems has raised many concerns about safety and responsibility in their design and use. The philosophical framework of Meaningful Human Control (MHC) was developed in response to these concerns, and tries to provide a standard for designing and evaluating such systems. While promising, the framework still requires further theoretical and practical refinement. This paper contributes to that effort by drawing on research in axiology and rational decision theory to identify a critical gap in the framework. 
Specifically, it argues that while \u2018reasons\u2019 play a central role in MHC, there has been little discussion of the possibility that, when weighed against each other, reasons may not always point to a single, rationally preferable course of action. I refer to these cases as instances of reasons underdetermination, and this paper discusses the need to address this issue within the MHC framework. The paper begins by providing an overview of the key concepts of the MHC framework and then examines the role of \u2018reasons\u2019 in the framework\u2019s two main conditions - Tracking and Tracing. It then discusses the phenomenon of reasons underdetermination and shows how it poses a challenge for the achievement of both Tracking and Tracing.<\/jats:p>","DOI":"10.1007\/s10676-025-09858-x","type":"journal-article","created":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T05:22:37Z","timestamp":1760505757000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Reasons underdetermination in meaningful human control"],"prefix":"10.1007","volume":"27","author":[{"given":"Atay","family":"Kozlovski","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,15]]},"reference":[{"key":"9858_CR1","unstructured":"Amnesty International (2021). Xenophobic Machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. https:\/\/www.amnesty.org\/en\/documents\/eur35\/4686\/2021\/en\/. Accessed 27 Aug 2025."},{"key":"9858_CR2","doi-asserted-by":"crossref","unstructured":"van de Poel, I. (2015). Conflicting values in design for values. In J. van den Hoven, P. Vermaas, & I. van de Poel (Eds.), Handbook of ethics, values, and technological design, pp 89\u2013116. Springer.","DOI":"10.1007\/978-94-007-6970-0_5"},{"key":"9858_CR3","unstructured":"Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. 
ProPublica https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 27 Aug 2025."},{"issue":"2","key":"9858_CR4","doi-asserted-by":"publisher","first-page":"67","DOI":"10.1057\/palgrave.jit.2000035","volume":"20","author":"D Arnott","year":"2005","unstructured":"Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67\u201387.","journal-title":"Journal of Information Technology"},{"issue":"Summer","key":"9858_CR5","first-page":"886","volume":"94","author":"P Asaro","year":"2012","unstructured":"Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(Summer), 886.","journal-title":"International Review of the Red Cross"},{"key":"9858_CR6","unstructured":"Barez, F., Wu, T. Y., Arcuschin, I., Lan, M., Wang, V., Siegel, N., Collignon, N., Neo, C., Lee, I., Paren, A., Bibi, A., Trager, R., Fornasiere, D., Yan, J., Elazar, Y., & Bengio, Y. (2025). Chain-of-Thought Is Not Explainability, Preprint, alphaXiv. https:\/\/www.alphaxiv.org\/abs\/2025.02v1"},{"issue":"4","key":"9858_CR7","doi-asserted-by":"publisher","first-page":"601","DOI":"10.1111\/j.1468-0068.2010.00762.x","volume":"44","author":"E Barnes","year":"2010","unstructured":"Barnes, E. (2010). Ontic vagueness: A guide for the perplexed. No\u00fbs, 44(4), 601\u2013627.","journal-title":"No\u00fbs"},{"key":"9858_CR8","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1007\/s13347-022-00510-w","volume":"35","author":"K Baum","year":"2022","unstructured":"Baum, K., Mantel, S., Schmidt, E., et al. (2022). From responsibility to reason-giving explainable artificial intelligence. Philosophy & Technology, 35, 12. 
https:\/\/doi.org\/10.1007\/s13347-022-00510-w","journal-title":"Philosophy & Technology"},{"key":"9858_CR9","doi-asserted-by":"publisher","unstructured":"Bell, A., Solano-Kamaiko, I., Nov, O., & Stoyanovich, J. (2022). It\u2019s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT \u201822). Association for Computing Machinery, New York, NY, USA, 248\u2013266. https:\/\/doi.org\/10.1145\/3531146.3533090","DOI":"10.1145\/3531146.3533090"},{"key":"9858_CR10","unstructured":"Bostrom, N. (2016). Superintelligence: Paths, Dangers, Strategies. (Reprint Ed). Oxford University Press."},{"key":"9858_CR11","unstructured":"Broome, J. (1997). Is incommensurability vagueness? In R. Chang (Ed.), Incommensurability, Incomparability, and practical reason, 67\u201389. Harvard University Press."},{"key":"9858_CR12","doi-asserted-by":"publisher","unstructured":"Burri, S. (2018). \u2018What Is the Moral Problem with Killer Robots?\u2018, in Bradley Jay Strawser, Ryan Jenkins, and Michael Robillard (Eds.), Who Should Die? The Ethics of Killing in War, New York. https:\/\/doi.org\/10.1093\/oso\/9780190495657.003.0009","DOI":"10.1093\/oso\/9780190495657.003.0009"},{"key":"9858_CR13","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1007\/s10609-023-09454-y","volume":"34","author":"FJ Castro-Toledo","year":"2023","unstructured":"Castro-Toledo, F. J., Mir\u00f3-Llinares, F., & Aguerri, J. C. (2023). Data-driven criminal justice in the age of algorithms: Epistemic challenges and practical implications. Criminal Law Forum, 34, 295\u2013316. https:\/\/doi.org\/10.1007\/s10609-023-09454-y","journal-title":"Criminal Law Forum"},{"key":"9858_CR14","unstructured":"Chang, R. (2001). Making comparisons count. 
Routledge."},{"key":"9858_CR15","doi-asserted-by":"publisher","first-page":"659","DOI":"10.1086\/339673","volume":"112","author":"R Chang","year":"2002","unstructured":"Chang, R. (2002). The possibility of parity. Ethics, 112, 659\u2013688.","journal-title":"Ethics"},{"key":"9858_CR16","doi-asserted-by":"crossref","unstructured":"Chang, R. (2015). Incommensurability and incomparability. In I. Hirose, & J. Olson (Eds.), Oxford handbook in value theory, pp. 205\u2013224. Oxford University Press.","DOI":"10.1093\/oxfordhb\/9780199959303.013.0012"},{"key":"9858_CR17","doi-asserted-by":"crossref","unstructured":"Chang, R. (2016a). Comparativism: The grounds of rational choice\u2019. In E. Lord, & B. Maguire (Eds.), Weighing values, 213\u2013240. Oxford University Press.","DOI":"10.1093\/acprof:oso\/9780199315192.003.0011"},{"key":"9858_CR18","doi-asserted-by":"publisher","first-page":"395","DOI":"10.1111\/rati.12148","volume":"29","author":"R Chang","year":"2016","unstructured":"Chang, R. (2016b). Parity: An intuitive case. Ratio, 29, 395\u2013411.","journal-title":"Ratio"},{"key":"9858_CR19","first-page":"586","volume":"92","author":"R Chang","year":"2017","unstructured":"Chang, R. (2017). Hard choices. Journal of the American Philosophical Association, 92, 586\u2013620.","journal-title":"Journal of the American Philosophical Association"},{"key":"9858_CR20","doi-asserted-by":"crossref","unstructured":"Chang, R. (2024). Human in the Loop! AI Morality. ed. Edmonds, David, 222\u2013234. Oxford University Press.","DOI":"10.1093\/oso\/9780198876434.003.0021"},{"key":"9858_CR21","unstructured":"Christian, B. (2020). The alignment problem: How can artificial intelligence learn human values? Atlantic Books."},{"key":"9858_CR22","doi-asserted-by":"publisher","DOI":"10.1038\/s41746-023-00906-8","volume":"6","author":"IG Cohen","year":"2023","unstructured":"Cohen, I. G., Babic, B., Gerke, S., Xia, Q., Evgeniou, T., & Wertenbroch, K. (2023). 
How AI can learn from the law: Putting humans in the loop only on appeal. Npj Digital Medicine, 6, Article 160. https:\/\/doi.org\/10.1038\/s41746-023-00906-8","journal-title":"Npj Digital Medicine"},{"key":"9858_CR23","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1007\/s13347-022-00519-1","volume":"35","author":"J Danaher","year":"2022","unstructured":"Danaher, J. (2022). Tragic choices and the virtue of techno-responsibility gaps. Philosophy & Technology, 35, 26. https:\/\/doi.org\/10.1007\/s13347-022-00519-1","journal-title":"Philosophy & Technology"},{"key":"9858_CR24","doi-asserted-by":"publisher","unstructured":"Dancy, J. (2000). Practical reality. Oxford University Press. https:\/\/doi.org\/10.1093\/0199253056.001.0001","DOI":"10.1093\/0199253056.001.0001"},{"key":"9858_CR25","unstructured":"Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https:\/\/www.reuters.com\/article\/world\/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG\/ Accessed 28 Oct 2024."},{"key":"9858_CR26","doi-asserted-by":"publisher","first-page":"1017677","DOI":"10.3389\/fdata.2022.1017677","volume":"5","author":"J Davidovic","year":"2023","unstructured":"Davidovic, J. (2023). On the purpose of meaningful human control of AI. Frontiers in Big Data, 5, 1017677. https:\/\/doi.org\/10.3389\/fdata.2022.1017677. PMID: 36700136; PMCID: PMC9868906.","journal-title":"Frontiers in Big Data"},{"key":"9858_CR27","doi-asserted-by":"publisher","unstructured":"de Santoni, F. (2024). Human Freedom in the Age of AI (1st ed.). Routledge. https:\/\/doi.org\/10.4324\/9781003303244","DOI":"10.4324\/9781003303244"},{"key":"9858_CR28","unstructured":"de Santoni, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philos. Technol. pp. 
1\u201328."},{"key":"9858_CR29","doi-asserted-by":"crossref","first-page":"1","DOI":"10.3389\/frobt.2018.00001","volume":"5","author":"F de Santoni","year":"2018","unstructured":"de Santoni, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Front Robot AI, 5, 1\u201314.","journal-title":"Front Robot AI"},{"issue":"332","key":"9858_CR30","doi-asserted-by":"publisher","first-page":"534","DOI":"10.1093\/mind\/LXXXIII.332.534","volume":"83","author":"RB De Sousa","year":"1974","unstructured":"De Sousa, R. B. (1974). The good and the true. Mind, 83(332), 534\u2013551.","journal-title":"Mind"},{"key":"9858_CR31","doi-asserted-by":"publisher","first-page":"103555","DOI":"10.1016\/j.artint.2021.103555","volume":"300","author":"R Dobbe","year":"2021","unstructured":"Dobbe, R., Gilbert, T., & Mintz, Y. (2021). Hard choices in artificial intelligence. Artificial Intelligence, 300, 103555.","journal-title":"Artificial Intelligence"},{"key":"9858_CR32","doi-asserted-by":"crossref","unstructured":"Eggert, L. (2024). Rethinking \u2018Meaningful human control\u2019. In J. M. Schraagen (Ed.), Responsible use of AI in military systems (1st ed.) pp. 213\u2013231. Chapman and Hall\/CRC.","DOI":"10.1201\/9781003410379-14"},{"issue":"3","key":"9858_CR33","doi-asserted-by":"publisher","first-page":"343","DOI":"10.1111\/1758-5899.12665","volume":"10","author":"M Ekelhof","year":"2019","unstructured":"Ekelhof, M. (2019). Moving beyond semantics on autonomous weapons: Meaningful human control in operation. Global Policy, 10(3), 343\u2013348.","journal-title":"Global Policy"},{"key":"9858_CR34","doi-asserted-by":"publisher","first-page":"30","DOI":"10.1515\/pjbr-2019-0002","volume":"10","author":"F Ficuciello","year":"2019","unstructured":"Ficuciello, F., Tamburrini, G., Arezzo, A., Villani, L., & Siciliano, B. (2019). Autonomy in surgical robots and its meaningful human control. 
Paladyn, Journal of Behavioral Robotics, 10, 30\u201343.","journal-title":"Paladyn, Journal of Behavioral Robotics"},{"issue":"3","key":"9858_CR35","doi-asserted-by":"publisher","first-page":"265","DOI":"10.1007\/BF00485047","volume":"30","author":"K Fine","year":"1975","unstructured":"Fine, K. (1975). Vagueness, truth and logic. Synthese, 30(3), 265\u2013300.","journal-title":"Synthese"},{"key":"9858_CR36","doi-asserted-by":"crossref","unstructured":"Fischer, J., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge University Press.","DOI":"10.1017\/CBO9780511814594"},{"issue":"2","key":"9858_CR37","doi-asserted-by":"publisher","first-page":"199","DOI":"10.1017\/S0266267118000019","volume":"34","author":"E Flanigan","year":"2018","unstructured":"Flanigan, E., & Halstead, J. (2018). The small improvement argument, epistemicism and incomparability. Economics and Philosophy, 34(2), 199\u2013219.","journal-title":"Economics and Philosophy"},{"issue":"4","key":"9858_CR38","doi-asserted-by":"publisher","first-page":"689","DOI":"10.1007\/s11023-018-9482-5","volume":"28","author":"L Floridi","year":"2018","unstructured":"Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People\u2014An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689\u2013707. https:\/\/doi.org\/10.1007\/s11023-018-9482-5","journal-title":"Minds and Machines"},{"key":"9858_CR39","doi-asserted-by":"publisher","unstructured":"French, S. E., & Lindsay, L. N. (2022). Artificial intelligence in military decision-making: avoiding ethical and strategic perils with an option-generator model. In Emerging Military Technologies, eds. Koch, B. and R. Schoonhoven, 53\u201374. Brill Nijhoff. 
https:\/\/doi.org\/10.1163\/9789004507951_007","DOI":"10.1163\/9789004507951_007"},{"issue":"3","key":"9858_CR40","doi-asserted-by":"publisher","first-page":"411","DOI":"10.1007\/s11023-020-09539-2","volume":"30","author":"I Gabriel","year":"2020","unstructured":"Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411\u2013437.","journal-title":"Minds and Machines"},{"key":"9858_CR41","doi-asserted-by":"crossref","unstructured":"Goodman, B. (2021). Hard Choices and Hard Limits for Artificial Intelligence. In Proceedings of 2021 AAAI\/ACM Conference on AI, Ethics, and Society (AIES\u201921), May 19\u201321, 2021, Virtual Event. ACM, New York, NY, USA, 9 pages.","DOI":"10.1145\/3461702.3462539"},{"key":"9858_CR42","doi-asserted-by":"publisher","first-page":"711","DOI":"10.1080\/1463922X.2019.1574931","volume":"20","author":"DD Heikoop","year":"2019","unstructured":"Heikoop, D. D., Hagenzieker, M. P., Mecacci, G., Calvert, S. C., Santoni de Sio, F., & van Arem, B. (2019). Human behaviour with automated driving systems: A quantitative framework for meaningful human control. Theoretical Issues in Ergonomics Science, 20, 711\u2013730. https:\/\/doi.org\/10.1080\/1463922X.2019.1574931","journal-title":"Theoretical Issues in Ergonomics Science"},{"key":"9858_CR43","doi-asserted-by":"publisher","unstructured":"Hille, E. M., Hummel, P., & Braun, M. (2023). Meaningful human control over AI for health? A review. Journal of Medical Ethics. https:\/\/doi.org\/10.1136\/jme-2023-109095","DOI":"10.1136\/jme-2023-109095"},{"key":"9858_CR44","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/s11229-022-04001-5","volume":"201","author":"F Hindriks","year":"2023","unstructured":"Hindriks, F., & Veluwenkamp, H. (2023). The risks of autonomous machines: From responsibility gaps to control gaps. Synthese, 201, 21. 
https:\/\/doi.org\/10.1007\/s11229-022-04001-5","journal-title":"Synthese"},{"key":"9858_CR45","unstructured":"Horowitz, M., & Scharre, P. (2015). Meaningful human control in weapon systems: A primer. Center for a New American Security."},{"key":"9858_CR46","doi-asserted-by":"crossref","unstructured":"Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. Little, Brown Spark.","DOI":"10.53776\/playbooks-judgment"},{"key":"9858_CR47","doi-asserted-by":"publisher","DOI":"10.1111\/japp.12765","author":"M Kiener","year":"2024","unstructured":"Kiener, M. (2024). AI and responsibility: No gap, but abundance. Journal of Applied Philosophy. https:\/\/doi.org\/10.1111\/japp.12765","journal-title":"Journal of Applied Philosophy"},{"key":"9858_CR48","unstructured":"Kozlovski, A. (2024). When Algorithms Decide Who is a Target: IDF\u2019s use of AI in Gaza. Tech Policy Press, 2024. https:\/\/www.techpolicy.press\/when-algorithms-decide-who-is-a-target-idfs-use-of-ai-in-gaza\/"},{"issue":"2","key":"9858_CR49","doi-asserted-by":"publisher","first-page":"201","DOI":"10.1215\/00318108-2009-037","volume":"119","author":"J Markovits","year":"2010","unstructured":"Markovits, J. (2010). Acting for the right reasons. The Philosophical Review, 119(2), 201\u2013242. http:\/\/www.jstor.org\/stable\/41684374","journal-title":"The Philosophical Review"},{"issue":"3","key":"9858_CR50","doi-asserted-by":"publisher","first-page":"175","DOI":"10.1007\/s10676-004-3422-1","volume":"6","author":"A Matthias","year":"2004","unstructured":"Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175\u2013183.","journal-title":"Ethics and Information Technology"},{"key":"9858_CR51","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1007\/s10676-019-09519-w","volume":"22","author":"G Mecacci","year":"2020","unstructured":"Mecacci, G., & de Santoni, F. (2020). 
Meaningful human control as reason-responsiveness: The case of dual-mode vehicles. Ethics and Information Technology, 22, 103\u2013115.","journal-title":"Ethics and Information Technology"},{"key":"9858_CR52","doi-asserted-by":"crossref","unstructured":"Mecacci, G., Amoroso, D., Cavalcante Siebert, L., Abbink, D. A., van den Hoven, M. J., & de Sio, S., F. (Eds.). (2024). (Accepted\/In press). Research Handbook on Meaningful Human Control of Artificial Intelligence Systems. Edward Elgar Publishing. https:\/\/www.e-elgar.com\/shop\/gbp\/research-handbook-on-meaningful-human-control-of-artificial-intelligence-systems-9781802204124.html","DOI":"10.4337\/9781802204131"},{"key":"9858_CR53","doi-asserted-by":"publisher","unstructured":"Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining Explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* \u201819). Association for Computing Machinery, New York, NY, USA, 279\u2013288. https:\/\/doi.org\/10.1145\/3287560.3287574","DOI":"10.1145\/3287560.3287574"},{"issue":"1","key":"9858_CR54","doi-asserted-by":"publisher","first-page":"143","DOI":"10.1007\/s11948-011-9277-z","volume":"18","author":"J van den Hoven","year":"2012","unstructured":"van den Hoven, J., Lokhorst, G-J., & van de Poel, I. (2012). Engineering and the problem of moral overload. Science and Engineering Ethics, 18(1), 143\u2013155. https:\/\/doi.org\/10.1007\/s11948-011-9277-z","journal-title":"Science and Engineering Ethics"},{"key":"9858_CR55","doi-asserted-by":"publisher","unstructured":"Nyholm, S. (2023). Responsibility Gaps, value Alignment, and meaningful human control over artificial intelligence. https:\/\/doi.org\/10.4324\/9781003276029-14","DOI":"10.4324\/9781003276029-14"},{"key":"9858_CR56","doi-asserted-by":"crossref","unstructured":"Parfit, D. (1986). Reasons and persons. Clarendon.","DOI":"10.1093\/019824908X.001.0001"},{"key":"9858_CR57","doi-asserted-by":"crossref","unstructured":"Raz, J. 
(1988). The morality of freedom. Oxford University Press.","DOI":"10.1093\/0198248075.001.0001"},{"key":"9858_CR58","unstructured":"Raz, J. (1999). Engaging reason. Clarendon."},{"key":"9858_CR59","doi-asserted-by":"crossref","unstructured":"Raz, J. (2011). From normativity to responsibility. Oxford University Press.","DOI":"10.1093\/acprof:oso\/9780199693818.001.0001"},{"key":"9858_CR60","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-023-00320-6","author":"S Robbins","year":"2023","unstructured":"Robbins, S. (2023). The many meanings of meaningful human control. AI Ethics. https:\/\/doi.org\/10.1007\/s43681-023-00320-6","journal-title":"AI Ethics"},{"key":"9858_CR61","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206\u2013215. https:\/\/doi.org\/10.1038\/s42256-019-0048-x","journal-title":"Nature Machine Intelligence"},{"key":"9858_CR62","unstructured":"Russell, S. (2019). Human compatible: AI and the problem of control. Viking."},{"key":"9858_CR63","doi-asserted-by":"publisher","first-page":"587","DOI":"10.1007\/s11023-022-09608-8","volume":"33","author":"F Santoni de Sio","year":"2023","unstructured":"Santoni de Sio, F., Mecacci, G., Calvert, S., Heikoop, D., Hagenzieker, M., & van Arem, B. (2023). Realising meaningful human control over automated driving systems: A multidisciplinary approach. Minds & Machines, 33, 587\u2013611. https:\/\/doi.org\/10.1007\/s11023-022-09608-8","journal-title":"Minds & Machines"},{"issue":"3","key":"9858_CR64","doi-asserted-by":"publisher","first-page":"602","DOI":"10.1086\/659003","volume":"121","author":"D Shoemaker","year":"2011","unstructured":"Shoemaker, D. (2011). Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. 
Ethics, 121(3), 602\u2013632.","journal-title":"Ethics"},{"key":"9858_CR65","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1007\/s43681-022-00167-3","volume":"3","author":"LC Siebert","year":"2022","unstructured":"Siebert, L. C., Lupetti, M. L., Aizenberg, E., et al. (2022). Meaningful human control: Actionable properties for AI system development. AI Ethics, 3, 241\u2013255.","journal-title":"AI Ethics"},{"issue":"4","key":"9858_CR66","first-page":"321","volume":"22","author":"W Sinnott-Armstrong","year":"1985","unstructured":"Sinnott-Armstrong, W. (1985). Moral dilemmas and incomparability. American Philosophical Quarterly, 22(4), 321\u2013329.","journal-title":"American Philosophical Quarterly"},{"key":"9858_CR67","doi-asserted-by":"publisher","unstructured":"Sinnott-Armstrong, W., & Skorburg, J. A. (2021). How AI can aid bioethics. Journal of Practical Ethics, 9(1). https:\/\/doi.org\/10.3998\/jpe.1175","DOI":"10.3998\/jpe.1175"},{"issue":"1","key":"9858_CR68","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1111\/j.1468-5930.2007.00346.x","volume":"24","author":"R Sparrow","year":"2007","unstructured":"Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62\u201377.","journal-title":"Journal of Applied Philosophy"},{"key":"9858_CR69","doi-asserted-by":"publisher","first-page":"281","DOI":"10.1007\/s43681-022-00168-2","volume":"3","author":"M Steen","year":"2023","unstructured":"Steen, M., van Diggelen, J., Timan, T., & van der Stap, N. (2023). Meaningful human control of drones: Exploring human\u2013machine teaming, informed by four different ethical perspectives. AI Ethics, 3, 281\u2013293. https:\/\/doi.org\/10.1007\/s43681-022-00168-2","journal-title":"AI Ethics"},{"key":"9858_CR70","unstructured":"Struik, A. (2021). Meaningful human control over automated driving systems: Driver intentions and ADS behaviour. 
Utrecht University."},{"key":"9858_CR71","doi-asserted-by":"publisher","DOI":"10.1057\/s41599-024-03428-0","volume":"11","author":"CR Sunstein","year":"2024","unstructured":"Sunstein, C. R. (2024). Choice engines and paternalistic AI. Humanities and Social Sciences Communications, 11, Article 888. https:\/\/doi.org\/10.1057\/s41599-024-03428-0","journal-title":"Humanities and Social Sciences Communications"},{"key":"9858_CR72","unstructured":"Talbert, M. (2022). Moral responsibility. In E. N. Zalta, & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy (Fall 2025 Edition). Metaphysics Research Lab, Stanford University. https:\/\/plato.stanford.edu\/archives\/fall2025\/entries\/moral-responsibility\/"},{"issue":"3","key":"9858_CR73","doi-asserted-by":"publisher","first-page":"589","DOI":"10.1007\/s13347-020-00414-7","volume":"34","author":"DW Tigard","year":"2020","unstructured":"Tigard, D. W. (2020). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589\u2013607.","journal-title":"Philosophy & Technology"},{"issue":"4","key":"9858_CR74","first-page":"757","volume":"44","author":"E Ullmann-Margalit","year":"1977","unstructured":"Ullmann-Margalit, E., & Morgenbesser, S. (1977). Picking and choosing. Social Research, 44(4), 757\u2013785.","journal-title":"Social Research"},{"issue":"37","key":"9858_CR75","first-page":"40","volume":"13","author":"S Umbrello","year":"2020","unstructured":"Umbrello, S. (2020). Meaningful human control over smart home systems: A value sensitive design approach. Humana Mente Journal of Philosophical Studies, 13(37), 40\u201365.","journal-title":"Humana Mente Journal of Philosophical Studies"},{"key":"9858_CR76","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1007\/s13347-014-0156-9","volume":"28","author":"S Vallor","year":"2015","unstructured":"Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. 
Philosophy & Technology, 28, 107\u2013124. https:\/\/doi.org\/10.1007\/s13347-014-0156-9","journal-title":"Philosophy & Technology"},{"key":"9858_CR77","doi-asserted-by":"crossref","unstructured":"van Diggelen, J., Neerincx, M., & Steen, M. (2024). Designing for meaningful human control in military human-Machine Teams, in research handbook on meaningful human control of artificial intelligence systems. Edward Elgar Publishing.","DOI":"10.4337\/9781802204131.00021"},{"key":"9858_CR78","doi-asserted-by":"publisher","first-page":"51","DOI":"10.1007\/s10676-022-09673-8","volume":"24","author":"H Veluwenkamp","year":"2022","unstructured":"Veluwenkamp, H. (2022). Reasons for meaningful human control. Ethics and Information Technology, 24, 51. https:\/\/doi.org\/10.1007\/s10676-022-09673-8","journal-title":"Ethics and Information Technology"},{"key":"9858_CR79","doi-asserted-by":"publisher","unstructured":"Wachter, S. (2022). The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law. Tulane Law Review. Available at SSRN: https:\/\/ssrn.com\/abstract=4099100 https:\/\/doi.org\/10.2139\/ssrn.4099100","DOI":"10.2139\/ssrn.4099100"},{"issue":"1","key":"9858_CR80","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1002\/poi3.198","volume":"11","author":"B Wagner","year":"2019","unstructured":"Wagner, B. (2019). Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy & Internet, 11(1), 104\u2013122.","journal-title":"Policy & Internet"},{"issue":"1","key":"9858_CR81","doi-asserted-by":"publisher","first-page":"391","DOI":"10.1111\/j.1520-8583.2004.00034.x","volume":"18","author":"R Wasserman","year":"2004","unstructured":"Wasserman, R. (2004). Indeterminacy, ignorance and the possibility of parity. 
Philosophical Perspectives, 18(1), 391\u2013403.","journal-title":"Philosophical Perspectives"},{"issue":"2","key":"9858_CR82","doi-asserted-by":"publisher","first-page":"227","DOI":"10.5840\/philtopics199624222","volume":"24","author":"G Watson","year":"1996","unstructured":"Watson, G. (1996). Two faces of responsibility. Philosophical Topics, 24(2), 227\u2013248.","journal-title":"Philosophical Topics"},{"key":"9858_CR83","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1007\/s11948-024-00485-1","volume":"30","author":"J Zeiser","year":"2024","unstructured":"Zeiser, J. (2024). Owning decisions: AI decision-support and the attributability-gap. Science and Engineering Ethics, 30, 27. https:\/\/doi.org\/10.1007\/s11948-024-00485-1","journal-title":"Science and Engineering Ethics"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09858-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09858-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09858-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T03:34:05Z","timestamp":1763782445000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09858-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,15]]},"references-count":83,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["9858"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09858-x","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"type":"print","value":"1388-1957"},{"
type":"electronic","value":"1572-8439"}],"subject":[],"published":{"date-parts":[[2025,10,15]]},"assertion":[{"value":"15 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"59"}}