{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,17]],"date-time":"2026-04-17T01:01:28Z","timestamp":1776387688335,"version":"3.51.2"},"reference-count":111,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,7,13]],"date-time":"2024-07-13T00:00:00Z","timestamp":1720828800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,7,13]],"date-time":"2024-07-13T00:00:00Z","timestamp":1720828800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100007343","name":"Universit\u00e0 degli Studi di Brescia","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100007343","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Auton Agent Multi-Agent Syst"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Constraining the actions of AI systems is one promising way to ensure that these systems behave in a way that is morally acceptable to humans. But constraints alone come with drawbacks: in many AI systems they are not flexible. If these constraints are too rigid, they can preclude actions that are actually acceptable in certain contextual situations. Humans, on the other hand, can often decide when a simple and seemingly inflexible rule should actually be overridden based on the context. In this paper, we empirically investigate the way humans make these contextual moral judgements, with the goal of building AI systems that understand when to follow and when to override constraints. We propose a novel and general preference-based graphical model that captures a modification of standard <jats:italic>dual process<\/jats:italic> theories of moral judgment. 
We then detail the design, implementation, and results of a study of human participants who judge whether it is acceptable to break a well-established rule: <jats:italic>no cutting in line<\/jats:italic>. We then develop an instance of our model and compare its performance to that of standard machine learning approaches on the task of predicting the behavior of human participants in the study, showing that our preference-based approach more accurately captures the judgments of human decision-makers. It also provides a flexible method to model the relationship between variables for moral decision-making tasks that can be generalized to other settings.<\/jats:p>","DOI":"10.1007\/s10458-024-09667-4","type":"journal-article","created":{"date-parts":[[2024,7,13]],"date-time":"2024-07-13T07:02:24Z","timestamp":1720854144000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["When is it acceptable to break the rules? 
Knowledge representation of moral judgements based on empirical data"],"prefix":"10.1007","volume":"38","author":[{"given":"Edmond","family":"Awad","sequence":"first","affiliation":[]},{"given":"Sydney","family":"Levine","sequence":"additional","affiliation":[]},{"given":"Andrea","family":"Loreggia","sequence":"additional","affiliation":[]},{"given":"Nicholas","family":"Mattei","sequence":"additional","affiliation":[]},{"given":"Iyad","family":"Rahwan","sequence":"additional","affiliation":[]},{"given":"Francesca","family":"Rossi","sequence":"additional","affiliation":[]},{"given":"Kartik","family":"Talamadupula","sequence":"additional","affiliation":[]},{"given":"Joshua","family":"Tenenbaum","sequence":"additional","affiliation":[]},{"given":"Max","family":"Kleiman-Weiner","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,7,13]]},"reference":[{"issue":"7553","key":"9667_CR1","doi-asserted-by":"crossref","first-page":"415","DOI":"10.1038\/521415a","volume":"521","author":"S Russell","year":"2015","unstructured":"Russell, S., Hauert, S., Altman, R., & Veloso, M. (2015). Ethics of artificial intelligence. Nature, 521(7553), 415\u2013416.","journal-title":"Nature"},{"key":"9667_CR2","volume-title":"Weapons of math destruction: How big data increases inequality and threatens democracy","author":"C O\u2019Neil","year":"2017","unstructured":"O\u2019Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown."},{"key":"9667_CR3","volume-title":"The ethical algorithm: The science of socially aware algorithm design","author":"M Kearns","year":"2019","unstructured":"Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press."},{"key":"9667_CR4","doi-asserted-by":"crossref","unstructured":"Rossi, F., & Mattei, N. (2019). Building ethically bounded AI. In: Proc. 
of the\u00a033rd\u00a0AAAI(Blue Sky Track).","DOI":"10.1609\/aaai.v33i01.33019785"},{"key":"9667_CR5","unstructured":"Amodei, D., Olah, C., Steinhardt, J., Christiano, P.F., Schulman, J., & Man\u00e9, D. (2016). Concrete problems in AI safety. arXiv:1606.06565"},{"key":"9667_CR6","doi-asserted-by":"crossref","first-page":"593","DOI":"10.2307\/1338225","volume":"71","author":"H Hart","year":"1958","unstructured":"Hart, H. (1958). Positivism and the separation of law and morals. Harvard Law Review, 71, 593\u2013607.","journal-title":"Harvard Law Review"},{"key":"9667_CR7","unstructured":"Clark, J., & Amodei, D. (2016). Faulty reward functions in the wild. Retrieved 1 Aug 2023 from https:\/\/blog.openai.com\/faulty-reward-functions"},{"key":"9667_CR8","unstructured":"Branwen, G. (2023). The Neural Net Tank Urban Legend. Retrieved 1 Aug 2023 from https:\/\/gwern.net\/tank#alternative-examples"},{"key":"9667_CR9","unstructured":"ACM US Public Policy Working Group: Statement on algorithmic transparency and accountability. Retrieved 1 Aug 2023 from https:\/\/www.acm.org\/binaries\/content\/assets\/public-policy\/2017_usacm_statement_algorithms.pdf"},{"key":"9667_CR10","unstructured":"National Institute of Standards and Technology (NIST): AI Risk Management Framework: Second Draft. Retrieved 1 Aug 2023 from https:\/\/www.nist.gov\/system\/files\/documents\/2022\/08\/18\/AI_RMF_2nd_draft.pdf"},{"key":"9667_CR11","volume-title":"Moral machines: Teaching robots right from wrong","author":"W Wallach","year":"2008","unstructured":"Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press."},{"issue":"4","key":"9667_CR12","doi-asserted-by":"crossref","first-page":"105","DOI":"10.1609\/aimag.v36i4.2577","volume":"36","author":"S Russell","year":"2015","unstructured":"Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. 
AI Magazine, 36(4), 105\u2013114.","journal-title":"AI Magazine"},{"issue":"3","key":"9667_CR13","doi-asserted-by":"crossref","first-page":"149","DOI":"10.1007\/s10676-006-0004-4","volume":"7","author":"C Allen","year":"2005","unstructured":"Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3), 149\u2013155.","journal-title":"Ethics and Information Technology"},{"key":"9667_CR14","doi-asserted-by":"crossref","unstructured":"Balakrishnan, A., Bouneffouf, D., Mattei, N., & Rossi, F. (2019). Incorporating behavioral constraints in online AI systems. In Proc. of the\u00a033rd\u00a0AAAI.","DOI":"10.1609\/aaai.v33i01.33013"},{"key":"9667_CR15","doi-asserted-by":"publisher","unstructured":"Loreggia, A., Mattei, N., Rahgooy, T., Rossi, F., Srivastava, B., & Venable, K.B. (2022). Making human-like moral decisions. In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society. AIES \u201922, pp. 447\u2013454. Association for Computing Machinery, New York, NY, USA. https:\/\/doi.org\/10.1145\/3514094.3534174","DOI":"10.1145\/3514094.3534174"},{"key":"9667_CR16","doi-asserted-by":"crossref","unstructured":"Svegliato, J., Nashed, S.B., & Zilberstein, S. (2021). Ethically compliant sequential decision making. In Proceedings of the 35th AAAI International Conference on Artificial Intelligence (AAAI).","DOI":"10.1609\/aaai.v35i13.17386"},{"key":"9667_CR17","doi-asserted-by":"crossref","unstructured":"Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: bottom-up and top-down approaches for modelling human moral faculties.,22, 565\u2013582.","DOI":"10.1007\/s00146-007-0099-0"},{"key":"9667_CR18","doi-asserted-by":"publisher","unstructured":"Loreggia, A., Mattei, N., Rossi, F., & Venable, K.B. (2018). Preferences and ethical principles in decision making. In Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society. AIES \u201918, p. 222. 
Association for Computing Machinery, New York, NY, USA. https:\/\/doi.org\/10.1145\/3278721.3278723","DOI":"10.1145\/3278721.3278723"},{"key":"9667_CR19","doi-asserted-by":"crossref","unstructured":"Hansson, S.O. (2001). The Structure of Values and Norms. Cambridge Studies in Probability, Induction and Decision Theory. Cambridge University Press.","DOI":"10.1017\/CBO9780511498466"},{"key":"9667_CR20","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1613\/jair.1234","volume":"21","author":"C Boutilier","year":"2004","unstructured":"Boutilier, C., Brafman, R., Domshlak, C., Hoos, H. H., & Poole, D. (2004). CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. Journal of Artificial Intelligence Research, 21, 135\u2013191.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"9667_CR21","doi-asserted-by":"crossref","first-page":"131214","DOI":"10.1109\/ACCESS.2021.3114637","volume":"9","author":"A Alashaikh","year":"2021","unstructured":"Alashaikh, A., & Alanazi, E. (2021). Conditional preference networks for cloud service selection and ranking with many irrelevant attributes. IEEE Access, 9, 131214\u2013131222.","journal-title":"IEEE Access"},{"key":"9667_CR22","unstructured":"Mohajeriparizi, M., Sileno, G., & Engers, T. (2022). Preference-based goal refinement in bdi agents. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, pp. 917\u2013925."},{"key":"9667_CR23","doi-asserted-by":"crossref","first-page":"1103","DOI":"10.1613\/jair.1.13009","volume":"72","author":"C Cornelio","year":"2021","unstructured":"Cornelio, C., Goldsmith, J., Grandi, U., Mattei, N., Rossi, F., & Venable, K. B. (2021). Reasoning with pcp-nets. Journal of Artificial Intelligence Research, 72, 1103\u20131161.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"9667_CR24","unstructured":"Kahneman, D. (2011). Thinking, Fast and Slow. 
Farrar, Straus and Giroux, New York."},{"issue":"1","key":"9667_CR25","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1037\/0033-2909.119.1.3","volume":"119","author":"SA Sloman","year":"1996","unstructured":"Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3.","journal-title":"Psychological Bulletin"},{"key":"9667_CR26","volume-title":"Moral tribes: Emotion, reason, and the gap between us and them","author":"JD Greene","year":"2014","unstructured":"Greene, J. D. (2014). Moral tribes: Emotion, reason, and the gap between us and them. Penguin."},{"issue":"3","key":"9667_CR27","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1177\/1088868313495594","volume":"17","author":"F Cushman","year":"2013","unstructured":"Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273\u2013292.","journal-title":"Personality and Social Psychology Review"},{"issue":"7729","key":"9667_CR28","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1038\/s41586-018-0637-6","volume":"563","author":"E Awad","year":"2018","unstructured":"Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59.","journal-title":"Nature"},{"key":"9667_CR29","doi-asserted-by":"crossref","unstructured":"Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 785\u2013794.","DOI":"10.1145\/2939672.2939785"},{"issue":"12","key":"9667_CR30","doi-asserted-by":"crossref","first-page":"1565","DOI":"10.1038\/nbt1206-1565","volume":"24","author":"WS Noble","year":"2006","unstructured":"Noble, W. S. (2006). What is a support vector machine? 
Nature Biotechnology, 24(12), 1565\u20131567.","journal-title":"Nature Biotechnology"},{"key":"9667_CR31","doi-asserted-by":"crossref","unstructured":"Doris, J.M., Group, M.P.R., et al. (2010). The moral psychology handbook. OUP Oxford.","DOI":"10.1093\/acprof:oso\/9780199582143.001.0001"},{"issue":"5827","key":"9667_CR32","doi-asserted-by":"crossref","first-page":"998","DOI":"10.1126\/science.1137651","volume":"316","author":"J Haidt","year":"2007","unstructured":"Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998\u20131002.","journal-title":"Science"},{"issue":"1","key":"9667_CR33","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1111\/j.1747-9991.2006.00050.x","volume":"2","author":"J Knobe","year":"2007","unstructured":"Knobe, J. (2007). Experimental philosophy. Philosophy Compass, 2(1), 81\u201392.","journal-title":"Philosophy Compass"},{"key":"9667_CR34","volume-title":"Experimental philosophy: An introduction","author":"J Alexander","year":"2012","unstructured":"Alexander, J. (2012). Experimental philosophy: An introduction. Polity Press."},{"issue":"5","key":"9667_CR35","doi-asserted-by":"publisher","first-page":"388","DOI":"10.1016\/j.tics.2022.02.009","volume":"26","author":"E Awad","year":"2022","unstructured":"Awad, E., Levine, S., Anderson, M., Anderson, S. L., Conitzer, V., Crockett, M. J., Everett, J. A. C., Evgeniou, T., Gopnik, A., Jamison, J. C., Kim, T. W., Liao, S. M., Meyer, M. N., Mikhail, J., Opoku-Agyemang, K., Borg, J. S., Schroeder, J., Sinnott-Armstrong, W., Slavkovik, M., & Tenenbaum, J. B. (2022). Computational ethics. Trends in Cognitive Sciences, 26(5), 388\u2013405. 
https:\/\/doi.org\/10.1016\/j.tics.2022.02.009","journal-title":"Trends in Cognitive Sciences"},{"issue":"42","key":"9667_CR36","doi-asserted-by":"crossref","first-page":"26158","DOI":"10.1073\/pnas.2014505117","volume":"117","author":"S Levine","year":"2020","unstructured":"Levine, S., Kleiman-Weiner, M., Schulz, L., Tenenbaum, J., & Cushman, F. (2020). The logic of universalization guides moral judgment. Proceedings of the National Academy of Sciences, 117(42), 26158\u201326169.","journal-title":"Proceedings of the National Academy of Sciences"},{"key":"9667_CR37","doi-asserted-by":"crossref","first-page":"107","DOI":"10.1016\/j.cognition.2017.03.005","volume":"167","author":"M Kleiman-Weiner","year":"2017","unstructured":"Kleiman-Weiner, M., Saxe, R., & Tenenbaum, J. B. (2017). Learning a commonsense moral theory. Cognition, 167, 107\u2013123.","journal-title":"Cognition"},{"key":"9667_CR38","doi-asserted-by":"crossref","unstructured":"Kim, R., Kleiman-Weiner, M., Abeliuk, A., Awad, E., Dsouza, S., Tenenbaum, J.B., & Rahwan, I. (2018). A computational model of commonsense moral decision making. In Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, pp. 197\u2013203.","DOI":"10.1145\/3278721.3278770"},{"issue":"1","key":"9667_CR39","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41467-018-07882-8","volume":"10","author":"JM Baar","year":"2019","unstructured":"Baar, J. M., Chang, L. J., & Sanfey, A. G. (2019). The computational and neural substrates of moral strategies in social decision-making. Nature Communications, 10(1), 1\u201314.","journal-title":"Nature Communications"},{"key":"9667_CR40","doi-asserted-by":"crossref","DOI":"10.1016\/j.cognition.2021.104910","volume":"218","author":"N Engelmann","year":"2022","unstructured":"Engelmann, N., & Waldmann, M. R. (2022). How to weigh lives. a computational model of moral judgment in multiple-outcome structures. 
Cognition, 218, 104910.","journal-title":"Cognition"},{"key":"9667_CR41","doi-asserted-by":"publisher","unstructured":"Jiang, L., Hwang, J.D., Bhagavatula, C., Bras, R.L., Liang, J., Dodge, J., Sakaguchi, K., Forbes, M., Borchardt, J., Gabriel, S., Tsvetkov, Y., Etzioni, O., Sap, M., Rini, R., & Choi, Y. (2021). Can Machines Learn Morality? The Delphi Experiment. arXiv. https:\/\/doi.org\/10.48550\/ARXIV.2110.07574 . arXiv:2110.07574","DOI":"10.48550\/ARXIV.2110.07574"},{"key":"9667_CR42","doi-asserted-by":"crossref","DOI":"10.1016\/j.artint.2020.103349","volume":"287","author":"E Awad","year":"2020","unstructured":"Awad, E., Anderson, M., Anderson, S. L., & Liao, B. (2020). An approach for combining ethical principles with public opinion to guide public policy. Artificial Intelligence, 287, 103349.","journal-title":"Artificial Intelligence"},{"key":"9667_CR43","volume-title":"The moral psychology of AI and the ethical opt-out problem","author":"J-F Bonnefon","year":"2020","unstructured":"Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2020). The moral psychology of AI and the ethical opt-out problem. Oxford University Press."},{"key":"9667_CR44","volume-title":"Human compatible: Artificial intelligence and the problem of control","author":"S Russell","year":"2019","unstructured":"Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin."},{"key":"9667_CR45","unstructured":"Theodorou, A., Wortham, R.H., & Bryson, J.J. (2016). Why is my robot behaving like that? designing transparency for real time inspection of autonomous robots. In: AISB Workshop on Principles of Robotics. University of Bath."},{"issue":"6293","key":"9667_CR46","doi-asserted-by":"crossref","first-page":"1573","DOI":"10.1126\/science.aaf2654","volume":"352","author":"J-F Bonnefon","year":"2016","unstructured":"Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. 
Science, 352(6293), 1573\u20131576.","journal-title":"Science"},{"key":"9667_CR47","doi-asserted-by":"crossref","unstructured":"Noothigattu, R., Gaikwad, S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., & Procaccia, A.D. (2017). A voting-based system for ethical decision making. In: Proc. of the\u00a032nd\u00a0AAAI.","DOI":"10.1609\/aaai.v32i1.11512"},{"key":"9667_CR48","doi-asserted-by":"publisher","unstructured":"Iacca, G., Lagioia, F., Loreggia, A., & Sartor, G. (2020). A genetic approach to the ethical knob. In: Legal Knowledge and Information Systems: JURIX 2020, vol. 334, pp. 103\u2013112. IOS Press, Amsterdam. https:\/\/doi.org\/10.3233\/FAIA200854","DOI":"10.3233\/FAIA200854"},{"key":"9667_CR49","unstructured":"Grandi, U., Loreggia, A., Rossi, F., & Saraswat, V.A. (2014). From sentiment analysis to preference aggregation. In International Symposium on Artificial Intelligence and Mathematics, ISAIM 2014, Fort Lauderdale, FL, USA, January 6\u20138, 2014."},{"key":"9667_CR50","unstructured":"Loreggia, A., Mattei, N., Rossi, F., & Venable, K.B. (2019). Metric learning for value alignment. In AISafety@IJCAI. CEUR Workshop Proceedings, vol. 2419. CEUR-WS.org, Aachen."},{"issue":"7","key":"9667_CR51","doi-asserted-by":"crossref","first-page":"1037","DOI":"10.1016\/j.artint.2011.03.004","volume":"175","author":"C Domshlak","year":"2011","unstructured":"Domshlak, C., H\u00fcllermeier, E., Kaci, S., & Prade, H. (2011). Preferences in AI: An overview. Artificial Intelligence, 175(7), 1037\u20131052.","journal-title":"Artificial Intelligence"},{"key":"9667_CR52","unstructured":"Rossi, F., & Loreggia, A. (2019). Preferences and ethical priorities: Thinking fast and slow in AI. In Proceedings of the 18th international conference on autonomous agents and multiagent systems. AAMAS \u201919, pp. 3\u20134. 
International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC."},{"key":"9667_CR53","volume-title":"Practical Reason","author":"A Sen","year":"1974","unstructured":"Sen, A. (1974). Choice, ordering and morality. In S. K\u00f6rner (Ed.), Practical Reason. Blackwell."},{"issue":"4","key":"9667_CR54","first-page":"623","volume":"44","author":"JC Harsanyi","year":"1977","unstructured":"Harsanyi, J. C. (1977). Morality and the theory of rational behavior. Social Research, 44(4), 623.","journal-title":"Social Research"},{"issue":"2","key":"9667_CR55","doi-asserted-by":"publisher","first-page":"347","DOI":"10.1093\/logcom\/exab088","volume":"32","author":"A Loreggia","year":"2022","unstructured":"Loreggia, A., Lorini, E., & Sartor, G. (2022). Modelling ceteris paribus preferences with deontic logic. Journal of Logic and Computation, 32(2), 347\u2013368. https:\/\/doi.org\/10.1093\/logcom\/exab088","journal-title":"Journal of Logic and Computation"},{"key":"9667_CR56","doi-asserted-by":"crossref","DOI":"10.1016\/j.artint.2020.103261","volume":"283","author":"R Freedman","year":"2020","unstructured":"Freedman, R., Borg, J. S., Sinnott-Armstrong, W., Dickerson, J. P., & Conitzer, V. (2020). Adapting a kidney exchange algorithm to align with human values. Artificial Intelligence, 283, 103261.","journal-title":"Artificial Intelligence"},{"key":"9667_CR57","doi-asserted-by":"crossref","unstructured":"Lee, M.K., Kusbit, D., Kahng, A., Kim, J.T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas, A., et al. (2019). Webuildai: Participatory framework for algorithmic governance. In Proceedings of the ACM on Human-Computer Interaction 3(CSCW), 1\u201335.","DOI":"10.1145\/3359283"},{"key":"9667_CR58","doi-asserted-by":"crossref","unstructured":"Rossi, F., Venable, K.B., & Walsh, T. (2011). A short introduction to preferences: Between artificial intelligence and social choice, pp. 1\u2013102. 
Morgan and Claypool, San Rafael, California (USA).","DOI":"10.1007\/978-3-031-01556-4"},{"key":"9667_CR59","doi-asserted-by":"crossref","unstructured":"Brandt, F., Conitzer, V., Endriss, U., Lang, J., Procaccia, A.D. (eds.): Handbook of Computational Social Choice. Cambridge University Press, Pennsylvania (2016). http:\/\/dblp.uni-trier.de\/db\/reference\/choice\/choice2016.html","DOI":"10.1017\/CBO9781107446984.002"},{"key":"9667_CR60","doi-asserted-by":"crossref","unstructured":"Wang, H., Shao, S., Zhou, X., Wan, C., & Bouguettaya, A. (2009). Web service selection with incomplete or inconsistent user preferences. In Proc. 7th International Conference on Service-Oriented Computing, pp. 83\u201398. Springer, Berlin, Heidelberg.","DOI":"10.1007\/978-3-642-10383-4_6"},{"key":"9667_CR61","doi-asserted-by":"crossref","first-page":"511","DOI":"10.1007\/978-0-387-85820-3_16","volume-title":"Recommender Systems Handbook","author":"P Pu","year":"2011","unstructured":"Pu, P., Faltings, B., Chen, L., Zhang, J., & Viappiani, P. (2011). Usability guidelines for product recommenders based on example critiquing research. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender Systems Handbook (pp. 511\u2013545). Springer."},{"key":"9667_CR62","doi-asserted-by":"crossref","unstructured":"Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V.R., & Yang, Q. (2018). Building ethics into artificial intelligence. In: Proc. 27th IJCAI, pp. 5527\u20135533.","DOI":"10.24963\/ijcai.2018\/779"},{"key":"9667_CR63","doi-asserted-by":"crossref","unstructured":"Noothigattu, R., Bouneffouf, D., Mattei, N., Chandra, R., Madan, P., Varshney, K., Campbell, M., Singh, M., & Rossi, F. (2019). Teaching AI agents ethical values using reinforcement learning and policy orchestration. In: Proc. of the\u00a028th\u00a0IJCAI.","DOI":"10.24963\/ijcai.2019\/891"},{"key":"9667_CR64","unstructured":"Alkoby, S., Rath, A., & Stone, P. (2019). 
Teaching social behavior through human reinforcement for ad hoc teamwork-the STAR framework. In: Proc. of The\u00a018th AAMAS."},{"key":"9667_CR65","unstructured":"Arnold, T., Thomas, Kasenberg, D., & Scheutzs, M. (2017). Value alignment or misalignment - what will keep systems accountable? In: AI, Ethics, and Society, Papers from the 2017 AAAI Workshop."},{"key":"9667_CR66","doi-asserted-by":"publisher","unstructured":"Loreggia, A., Mattei, N., Rossi, F., & Venable, K.B. (2020). Modeling and reasoning with preferences and ethical priorities in AI systems. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence, New York, pp. 127\u2013154. Chap. 4. https:\/\/doi.org\/10.1093\/oso\/9780190905033.003.0005","DOI":"10.1093\/oso\/9780190905033.003.0005"},{"issue":"2","key":"9667_CR67","doi-asserted-by":"crossref","first-page":"185","DOI":"10.3233\/IA-221057","volume":"16","author":"A Loreggia","year":"2022","unstructured":"Loreggia, A., Calegari, R., Lorini, E., Rossi, F., & Sartor, G. (2022). How to model contrary-to-duty with gcp-nets. Intelligenza Artificiale, 16(2), 185\u2013198.","journal-title":"Intelligenza Artificiale"},{"key":"9667_CR68","doi-asserted-by":"crossref","unstructured":"Loreggia, A., Mattei, N., Rossi, F., & Venable, K.B. (2020). CPMetric: Deep siamese networks for metric learning on structured preferences. In: El\u00a0Fallah\u00a0Seghrouchni, A., Sarne, D. (eds.) Artificial Intelligence. IJCAI 2019 International Workshops, pp. 217\u2013234. Springer, Cham.","DOI":"10.1007\/978-3-030-56150-5_11"},{"key":"9667_CR69","unstructured":"Loreggia, A., Mattei, N., Rossi, F., & Venable, K.B. (2018). On the distance between cp-nets. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. AAMAS \u201918, pp. 955\u2013963. 
International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC."},{"key":"9667_CR70","unstructured":"Jiang, L., Hwang, J.D., Bhagavatula, C., Bras, R.L., Liang, J., Dodge, J., Sakaguchi, K., Forbes, M., Borchardt, J., Gabriel, S., et al. (2021). Can machines learn morality? the delphi experiment. arXiv preprint arXiv:2110.07574"},{"key":"9667_CR71","doi-asserted-by":"crossref","unstructured":"Lieder, F., & Griffiths, T.L. (2020). Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences. 43.","DOI":"10.1017\/S0140525X1900061X"},{"issue":"1","key":"9667_CR72","doi-asserted-by":"crossref","first-page":"99","DOI":"10.2307\/1884852","volume":"69","author":"HA Simon","year":"1955","unstructured":"Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99\u2013118.","journal-title":"The Quarterly Journal of Economics"},{"issue":"2","key":"9667_CR73","doi-asserted-by":"crossref","first-page":"129","DOI":"10.1037\/h0042769","volume":"63","author":"HA Simon","year":"1956","unstructured":"Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129.","journal-title":"Psychological Review"},{"key":"9667_CR74","unstructured":"Kleiman-Weiner, M., Gerstenberg, T., Levine, S., & Tenenbaum, J.B. (2015). Inference of intention and permissibility in moral decision making. In: CogSci. Citeseer."},{"issue":"11","key":"9667_CR75","doi-asserted-by":"crossref","first-page":"1179","DOI":"10.1037\/bul0000075","volume":"142","author":"KJ Holyoak","year":"2016","unstructured":"Holyoak, K. J., & Powell, D. (2016). Deontological coherence: A framework for commonsense moral reasoning. Psychological Bulletin, 142(11), 1179.","journal-title":"Psychological Bulletin"},{"key":"9667_CR76","volume-title":"Morals by Agreement","author":"D Gauthier","year":"1986","unstructured":"Gauthier, D. (1986). 
Morals by Agreement. Oxford University Press on Demand."},{"key":"9667_CR77","doi-asserted-by":"crossref","DOI":"10.4159\/9780674042605","volume-title":"A theory of justice","author":"J Rawls","year":"1971","unstructured":"Rawls, J. (1971). A theory of justice. Harvard University Press."},{"key":"9667_CR78","volume-title":"What we owe to each other","author":"T Scanlon","year":"1998","unstructured":"Scanlon, T., et al. (1998). What we owe to each other. Harvard University Press."},{"key":"9667_CR79","volume-title":"Moral consciousness and communicative action","author":"J Habermas","year":"1990","unstructured":"Habermas, J. (1990). Moral consciousness and communicative action. MIT press."},{"key":"9667_CR80","doi-asserted-by":"crossref","unstructured":"Levine, S., Kleiman-Weiner, M., Chater, N., Cushman, F., & Tenenbaum, J.B. (2022). When rules are over-ruled: Virtual bargaining as a contractualist method of moral judgment.","DOI":"10.31234\/osf.io\/k5pu8"},{"issue":"1","key":"9667_CR81","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1017\/S0140525X11002202","volume":"36","author":"N Baumard","year":"2013","unstructured":"Baumard, N., Andr\u00e9, J.-B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36(1), 59\u201378.","journal-title":"Behavioral and Brain Sciences"},{"key":"9667_CR82","doi-asserted-by":"publisher","unstructured":"Andr\u00e9, J.-B., Debove, S., Fitouchi, L., & Baumard, N. (2022). Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality. PsyArXiv. https:\/\/doi.org\/10.31234\/osf.io\/2hxgu","DOI":"10.31234\/osf.io\/2hxgu"},{"issue":"6","key":"9667_CR83","doi-asserted-by":"crossref","first-page":"772","DOI":"10.1037\/xge0000165","volume":"145","author":"JA Everett","year":"2016","unstructured":"Everett, J. A., Pizarro, D. A., & Crockett, M. J. (2016). Inference of trustworthiness from intuitive moral judgments. 
Journal of Experimental Psychology: General, 145(6), 772.","journal-title":"Journal of Experimental Psychology: General"},{"key":"9667_CR84","volume-title":"Groundwork for the Metaphysics of Morals","author":"I Kant","year":"2002","unstructured":"Kant, I., & Schneewind, J. B. (2002). Groundwork for the Metaphysics of Morals. Yale University Press."},{"key":"9667_CR85","doi-asserted-by":"crossref","unstructured":"Levine, S., Chater, N., Tenenbaum, J., & Cushman, F. (2023). Resource-rational contractualism: A triple theory of moral cognition.","DOI":"10.31234\/osf.io\/p48t7"},{"key":"9667_CR86","doi-asserted-by":"crossref","DOI":"10.1093\/0198246609.001.0001","volume-title":"Moral thinking: Its levels, method, and point","author":"RM Hare","year":"1981","unstructured":"Hare, R. M. (1981). Moral thinking: Its levels, method, and point. Oxford University Press."},{"key":"9667_CR87","doi-asserted-by":"crossref","DOI":"10.1017\/CBO9780511780578","volume-title":"Elements of moral cognition: Rawls\u2019 linguistic analogy and the cognitive science of moral and legal judgment","author":"J Mikhail","year":"2011","unstructured":"Mikhail, J. (2011). Elements of moral cognition: Rawls\u2019 linguistic analogy and the cognitive science of moral and legal judgment. Cambridge University Press."},{"key":"9667_CR88","unstructured":"Levine, S., Kleiman-Weiner, M., Chater, N., Cushman, F., & Tenenbaum, J.B. (2018). The cognitive mechanisms of contractualist moral decision-making. In: CogSci. Citeseer."},{"key":"9667_CR89","doi-asserted-by":"crossref","unstructured":"Stich, S. (2018). The quest for the boundaries of morality. In: The Routledge handbook of moral epistemology, pp. 15\u201337. 
Routledge, New York.","DOI":"10.4324\/9781315719696-2"},{"issue":"1","key":"9667_CR90","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1521\/soco.2021.39.1.139","volume":"39","author":"S Levine","year":"2021","unstructured":"Levine, S., Rottman, J., Davis, T., O\u2019Neill, E., Stich, S., & Machery, E. (2021). Religious affiliation and conceptions of the moral domain. Social Cognition, 39(1), 139\u2013165.","journal-title":"Social Cognition"},{"key":"9667_CR91","unstructured":"Kwon, J., Tenenbaum, J., & Levine, S. (2022). Flexibility in moral cognition: When is it okay to break the rules? In Proceedings of the 44th annual conference of the cognitive science society."},{"key":"9667_CR92","unstructured":"Kwon, J., Zhi-Xuan, T., Tenenbaum, J., & Levine, S. When it is not out of line to get out of line: The role of universalization and outcome-based reasoning in rule-breaking judgments."},{"key":"9667_CR93","doi-asserted-by":"crossref","unstructured":"Allen, T.E. (2013). CP-nets with indifference. In: 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1488\u20131495. IEEE.","DOI":"10.1109\/Allerton.2013.6736703"},{"issue":"1","key":"9667_CR94","doi-asserted-by":"crossref","first-page":"403","DOI":"10.1613\/jair.2627","volume":"33","author":"J Goldsmith","year":"2008","unstructured":"Goldsmith, J., Lang, J., Truszczy\u0144ski, M., & Wilson, N. (2008). The computational complexity of dominance and consistency in CP-nets. Journal of Artificial Intelligence Research, 33(1), 403\u2013432.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"9667_CR95","doi-asserted-by":"crossref","unstructured":"Booch, G., Fabiano, F., Horesh, L., Kate, K., Lenchner, J., Linck, N., Loreggia, A., Murgesan, K., Mattei, N., Rossi, F., et al. (2021). Thinking fast and slow in AI. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 
15042\u201315046.","DOI":"10.1609\/aaai.v35i17.17765"},{"key":"9667_CR96","doi-asserted-by":"crossref","unstructured":"Difallah, D., Filatova, E., & Ipeirotis, P. (2018) Demographics and dynamics of mechanical turk workers. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 135\u2013143.","DOI":"10.1145\/3159652.3159661"},{"issue":"9","key":"9667_CR97","doi-asserted-by":"crossref","first-page":"536","DOI":"10.1111\/spc3.12131","volume":"8","author":"CW Bauman","year":"2014","unstructured":"Bauman, C. W., McGraw, A. P., Bartels, D. M., & Warren, C. (2014). Revisiting external validity: Concerns about trolley problems and other sacrificial dilemmas in moral psychology. Social and Personality Psychology Compass, 8(9), 536\u2013554.","journal-title":"Social and Personality Psychology Compass"},{"issue":"1","key":"9667_CR98","doi-asserted-by":"crossref","first-page":"50","DOI":"10.1214\/aoms\/1177730491","volume":"18","author":"H Mann","year":"1947","unstructured":"Mann, H., Whitney, D., et al. (1947). On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18(1), 50\u201360.","journal-title":"Annals of Mathematical Statistics"},{"key":"9667_CR99","doi-asserted-by":"crossref","unstructured":"Cornelio, C., Goldsmith, J., Mattei, N., Rossi, F., & Venable, K.B. (2013). Updates and uncertainty in CP-nets. In: Proc. of the\u00a026th\u00a0AUSAI.","DOI":"10.1007\/978-3-319-03680-9_32"},{"issue":"4","key":"9667_CR100","doi-asserted-by":"crossref","first-page":"367","DOI":"10.1016\/S0167-9473(01)00065-2","volume":"38","author":"JH Friedman","year":"2002","unstructured":"Friedman, J. H. (2002). Stochastic gradient boosting. 
Computational Statistics & Data Analysis, 38(4), 367\u2013378.","journal-title":"Computational Statistics & Data Analysis"},{"key":"9667_CR101","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203451","volume":"29","author":"JH Friedman","year":"2001","unstructured":"Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29, 1189\u20131232.","journal-title":"Annals of Statistics"},{"issue":"2","key":"9667_CR102","doi-asserted-by":"publisher","first-page":"22","DOI":"10.1007\/s10458-021-09504-y","volume":"35","author":"C Cornelio","year":"2021","unstructured":"Cornelio, C., Donini, M., Loreggia, A., Pini, M. S., & Rossi, F. (2021). Voting with random classifiers (VORACE): Theoretical and experimental analysis. Autonomous Agents and Multi-Agent Systems, 35(2), 22. https:\/\/doi.org\/10.1007\/s10458-021-09504-y","journal-title":"Autonomous Agents and Multi-Agent Systems"},{"issue":"1","key":"9667_CR103","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1010933404324","volume":"45","author":"L Breiman","year":"2001","unstructured":"Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5\u201332.","journal-title":"Machine Learning"},{"issue":"3","key":"9667_CR104","doi-asserted-by":"crossref","first-page":"530","DOI":"10.1016\/j.cognition.2005.07.005","volume":"100","author":"S Nichols","year":"2006","unstructured":"Nichols, S., & Mallon, R. (2006). Moral dilemmas and moral rules. Cognition, 100(3), 530\u2013542.","journal-title":"Cognition"},{"key":"9667_CR105","doi-asserted-by":"crossref","unstructured":"Levine, S., & Leslie, A. (2021). Preschoolers use the means principle to make moral judgments.","DOI":"10.31234\/osf.io\/np9a5"},{"key":"9667_CR106","volume-title":"On what matters","author":"D Parfit","year":"2011","unstructured":"Parfit, D. (2011). On what matters (Vol. 1). 
Oxford University Press."},{"key":"9667_CR107","unstructured":"Azari\u00a0Soufiani, H., Diao, H., Lai, Z., & Parkes, D.C. (2013). Generalized random utility models with multiple types. Advances in Neural Information Processing Systems. 26."},{"key":"9667_CR108","unstructured":"Brafman, R.I., & Chernyavsky, Y. (2005). Planning with goal preferences and constraints. In: ICAPS, pp. 182\u2013191."},{"key":"9667_CR109","doi-asserted-by":"crossref","unstructured":"Benton, J., Coles, A., & Coles, A. (2012). Temporal planning with preferences and time-dependent continuous costs. In: Proc. 22nd ICAPS.","DOI":"10.1609\/icaps.v22i1.13509"},{"key":"9667_CR110","unstructured":"Gerevini, A., & Long, D. (2005). Plan constraints and preferences in pddl3. In: Technical Report 2005-08-07, Department of Electronics for Automation."},{"key":"9667_CR111","doi-asserted-by":"crossref","unstructured":"Pallagani, V., Muppasani, B., Srivastava, B., Rossi, F., Horesh, L., Murugesan, K., Loreggia, A., Fabiano, F., Joseph, R., Kethepalli, Y., et al. (2023). Plansformer tool: demonstrating generation of symbolic plans using transformers. In: IJCAI, vol. 2023, pp. 7158\u20137162. 
In International Joint Conferences on Artificial Intelligence.","DOI":"10.24963\/ijcai.2023\/839"}],"container-title":["Autonomous Agents and Multi-Agent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10458-024-09667-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10458-024-09667-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10458-024-09667-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,13]],"date-time":"2024-11-13T15:23:52Z","timestamp":1731511432000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10458-024-09667-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,13]]},"references-count":111,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["9667"],"URL":"https:\/\/doi.org\/10.1007\/s10458-024-09667-4","relation":{},"ISSN":["1387-2532","1573-7454"],"issn-type":[{"value":"1387-2532","type":"print"},{"value":"1573-7454","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,13]]},"assertion":[{"value":"7 July 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 July 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this 
paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"The study has been approved by the Massachusetts Institute of Technology Institutional Review Board (Protocol 0812003014). The review process ensures compliance with university-mandated ethical guidelines for research conducted with human subjects. All subjects provided informed consent prior to participating. See  for details of the review process.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"Not applicable.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to publish"}}],"article-number":"35"}}