{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T02:29:24Z","timestamp":1768271364987,"version":"3.49.0"},"reference-count":33,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T00:00:00Z","timestamp":1665964800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T00:00:00Z","timestamp":1665964800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>If we could build artificial persons (APs) with a moral status comparable to this of a typical human being, how should we design those APs in the right way? This question has been addressed mainly in terms of designing APs devoted to being servants (AP servants) and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, then developing such entities would still be unethical due to the manipulative attitude of their designers. I make two contributions to this discussion. First, I claim that the argument about manipulative attitude significantly shifts the perspective of the whole discussion on APs and that it refers to a much wider range of types of APs than has been acknowledged. Second, I investigate the possibilities of developing APs without a manipulative attitude. 
I proceed in the following manner: (1) I examine the argument about manipulativeness; (2) show the important novelty it brings to a discussion about APs; (3) analyze how the argument can be extrapolated to designing other kinds of APs; and (4) discuss cases in which APs can be designed without manipulativeness.<\/jats:p>","DOI":"10.1007\/s00146-022-01575-z","type":"journal-article","created":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T12:08:23Z","timestamp":1666008503000},"page":"1251-1260","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Can we design artificial persons without being manipulative?"],"prefix":"10.1007","volume":"39","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9519-7429","authenticated-orcid":false,"given":"Maciej","family":"Musia\u0142","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,10,17]]},"reference":[{"key":"1575_CR1","first-page":"36","volume":"77","author":"M Baron","year":"2003","unstructured":"Baron M (2003) Manipulativeness. Proc Address Am Philos Assoc 77:36\u201354","journal-title":"Proc Address Am Philos Assoc"},{"key":"1575_CR2","unstructured":"Bloom P, Harris S (2018) It\u2019s Westworld. What\u2019s Wrong With Cruelty to Robots? The New York Times, https:\/\/www.nytimes.com\/2018\/04\/23\/opinion\/westworld-conscious-robots-morality.html Accessed 20 November 2021"},{"key":"1575_CR3","unstructured":"Bringsjord S, Naveen SG (2018) Artificial Intelligence. The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Zalta EN, Nodelman U (eds.), https:\/\/plato.stanford.edu\/archives\/fall2022\/entries\/artificial-intelligence. Accessed 2 September 2022."},{"key":"1575_CR4","doi-asserted-by":"publisher","first-page":"993","DOI":"10.1007\/s10677-019-10029-3","volume":"22","author":"B Chomanski","year":"2019","unstructured":"Chomanski B (2019) What\u2019s wrong with designing people to serve?
Ethic Theory Moral Prac 22:993\u20131015. https:\/\/doi.org\/10.1007\/s10677-019-10029-3","journal-title":"Ethic Theory Moral Prac"},{"key":"1575_CR5","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1007\/s43681-020-00023-2","volume":"1","author":"B Chomanski","year":"2021","unstructured":"Chomanski B (2021) If robots are people, can they be made for profit? Commer Implicat Robot Personhood AI Ethics 1:183\u2013193. https:\/\/doi.org\/10.1007\/s43681-020-00023-2","journal-title":"Commer Implicat Robot Personhood AI Ethics"},{"key":"1575_CR6","doi-asserted-by":"publisher","DOI":"10.3998\/mpub.356801","volume-title":"A Legal Theory for Autonomous Artificial Agents","author":"S Chopra","year":"2011","unstructured":"Chopra S, White LF (2011) A Legal Theory for Autonomous Artificial Agents. MI, University of Michigan Press, Ann Arbor"},{"key":"1575_CR7","unstructured":"Columbus S (Director) (1999), Bicentennial Man [Film]. Columbia Pictures"},{"key":"1575_CR8","doi-asserted-by":"publisher","first-page":"79","DOI":"10.7551\/mitpress\/9780262036689.001.0001","volume-title":"Sex robots: philosophical, ethical, and social implications","author":"J Danaher","year":"2017","unstructured":"Danaher J (2017a) The symbolic-consequences argument in the sex-robot debate. In: Danaher J, McArthur N (eds) Sex robots: philosophical, ethical, and social implications. MIT Press, Cambridge MA, pp 79\u201399"},{"key":"1575_CR9","unstructured":"Danaher J (2013) Is there a case for robot slaves? Philosophical Disquisitions (entry on blog on 22 April, 2013). https:\/\/philosophicaldisquisitions.blogspot.com\/2013\/04\/is-there-case-for-robot-slaves.html. Accessed 20 October 2021"},{"key":"1575_CR10","doi-asserted-by":"publisher","unstructured":"Danaher J (2017b) Why we should create artificial offspring: meaning and the collective afterlife. Sci Eng Ethics 24: 1097\u20131118.
https:\/\/doi.org\/10.1007\/s11948-017-9932-0","DOI":"10.1007\/s11948-017-9932-0"},{"key":"1575_CR11","unstructured":"Danaher J (2019) The Ethics of Designing People: The Habermasian Critique. Philosophical Disquisitions (entry on blog on 19 April 2019). https:\/\/philosophicaldisquisitions.blogspot.com\/2019\/04\/the-ethics-of-designing-people.html. Accessed 20 October 2021"},{"key":"1575_CR12","doi-asserted-by":"crossref","unstructured":"Gellers J (2021) Rights for Robots: Artificial Intelligence, Animal and Environmental Law. Routledge, Abingdon and New York","DOI":"10.4324\/9780429288159"},{"key":"1575_CR13","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1007\/s00146-018-0844-6","volume":"35","author":"JS Gordon","year":"2020","unstructured":"Gordon JS (2020) What do we owe to intelligent robots? AI & Soc 35:209\u2013223. https:\/\/doi.org\/10.1007\/s00146-018-0844-6","journal-title":"AI & Soc"},{"key":"1575_CR14","doi-asserted-by":"publisher","first-page":"457","DOI":"10.1007\/s00146-020-01063-2","volume":"36","author":"JS Gordon","year":"2021","unstructured":"Gordon JS (2021) Artificial moral and legal personhood. AI & Soc 36:457\u2013471. https:\/\/doi.org\/10.1007\/s00146-020-01063-2","journal-title":"AI & Soc"},{"issue":"3","key":"1575_CR15","doi-asserted-by":"publisher","first-page":"181","DOI":"10.1111\/rati.12346","volume":"35","author":"JS Gordon","year":"2022","unstructured":"Gordon JS (2022) Are superintelligent robots entitled to human rights? Ratio 35(3):181\u2013193. https:\/\/doi.org\/10.1111\/rati.12346","journal-title":"Ratio"},{"issue":"1","key":"1575_CR16","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1111\/sjp.12450","volume":"60","author":"JS Gordon","year":"2021","unstructured":"Gordon JS, Gunkel DJ (2021) Moral Status and Intelligent Robots. South J Philos 60(1):88\u2013117.
https:\/\/doi.org\/10.1111\/sjp.12450","journal-title":"South J Philos"},{"key":"1575_CR17","doi-asserted-by":"publisher","DOI":"10.7551\/mitpress\/11444.001.0001","volume-title":"Robot rights","author":"DJ Gunkel","year":"2018","unstructured":"Gunkel DJ (2018) Robot rights. MIT Press, Cambridge"},{"key":"1575_CR18","doi-asserted-by":"publisher","first-page":"473","DOI":"10.1007\/s00146-020-01129-1","volume":"36","author":"DJ Gunkel","year":"2021","unstructured":"Gunkel DJ, Wales JJ (2021) Debate: what is personhood in the age of AI? AI & Soc 36:473\u2013486. https:\/\/doi.org\/10.1007\/s00146-020-01129-1","journal-title":"AI & Soc"},{"key":"1575_CR19","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1007\/s10676-022-09644-z","volume":"24","author":"K Mamak","year":"2022","unstructured":"Mamak K (2022) Humans, Neanderthals, robots and rights. Ethics Inf Technol 24:33. https:\/\/doi.org\/10.1007\/s10676-022-09644-z","journal-title":"Ethics Inf Technol"},{"issue":"5","key":"1575_CR20","doi-asserted-by":"publisher","first-page":"1087","DOI":"10.1080\/0952813X.2017.1309691","volume":"29","author":"M Musia\u0142","year":"2017","unstructured":"Musia\u0142 M (2017) Designing (artificial) people to serve\u2013the other side of the coin. J Exp Theor Artif Intell 29(5):1087\u20131097. https:\/\/doi.org\/10.1080\/0952813X.2017.1309691","journal-title":"J Exp Theor Artif Intell"},{"key":"1575_CR21","doi-asserted-by":"publisher","unstructured":"Neely EL (2014) Machines and the moral community. Philos Technol 27(1):97\u2013111. https:\/\/doi.org\/10.1007\/s13347-013-0114-y","DOI":"10.1007\/s13347-013-0114-y"},{"key":"1575_CR22","doi-asserted-by":"crossref","unstructured":"Nyholm S (2020) Humans and robots: Ethics, agency, and anthropomorphism.
Rowman & Littlefield, London; New York","DOI":"10.5771\/9781786612281"},{"key":"1575_CR23","doi-asserted-by":"publisher","first-page":"43","DOI":"10.1080\/09528130601116139","volume":"19","author":"S Petersen","year":"2007","unstructured":"Petersen S (2007) The ethics of robot servitude. J Exp Theor Artif Intell 19:43\u201354. https:\/\/doi.org\/10.1080\/09528130601116139","journal-title":"J Exp Theor Artif Intell"},{"key":"1575_CR24","first-page":"283","volume-title":"Robot ethics","author":"S Petersen","year":"2011","unstructured":"Petersen S (2011) Designing people to serve. In: Lin P, Bekey G, Abney K (eds) Robot ethics. MIT Press, Cambridge MA, pp 283\u2013298"},{"key":"1575_CR25","first-page":"155","volume-title":"Sex robots: philosophical, ethical, and social implications","author":"S Petersen","year":"2017","unstructured":"Petersen S (2017) Is it good for them too? Ethical concern for the sexbots. In: Danaher J, McArthur N (eds) Sex robots: philosophical, ethical, and social implications. MIT Press, Cambridge MA, pp 155\u2013171"},{"key":"1575_CR26","unstructured":"Rini R (2017) Raising good robots. Aeon. https:\/\/aeon.co\/essays\/creating-robots-capable-of-moral-reasoning-is-like-parenting. Accessed 19 December 2021"},{"issue":"1","key":"1575_CR27","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1111\/misp.12032","volume":"39","author":"E Schwitzgebel","year":"2015","unstructured":"Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Stud Philos 39(1):98\u2013119. https:\/\/doi.org\/10.1111\/misp.12032","journal-title":"Midwest Stud Philos"},{"issue":"4","key":"1575_CR28","doi-asserted-by":"publisher","first-page":"465","DOI":"10.1007\/s12369-017-0413-z","volume":"9","author":"R Sparrow","year":"2017","unstructured":"Sparrow R (2017) Robots, rape and representation. Int J Soc Robot 9(4):465\u2013477. 
https:\/\/doi.org\/10.1007\/s12369-017-0413-z","journal-title":"Int J Soc Robot"},{"key":"1575_CR29","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1007\/s12369-020-00631-2","volume":"13","author":"R Sparrow","year":"2020","unstructured":"Sparrow R (2020) Virtue and vice in our relationships with robots: Is there an asymmetry and how might it be explained? Int J Soc Robot 13:23\u201329. https:\/\/doi.org\/10.1007\/s12369-020-00631-2","journal-title":"Int J Soc Robot"},{"issue":"4","key":"1575_CR30","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/info9040073","volume":"9","author":"HT Tavani","year":"2018","unstructured":"Tavani HT (2018) Can social robots qualify for moral consideration? Reframing the question about robot rights. Information 9(4):1\u201316. https:\/\/doi.org\/10.3390\/info9040073","journal-title":"Information"},{"key":"1575_CR31","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-96235-1","volume-title":"Robot rules: regulating artificial intelligence","author":"J Turner","year":"2019","unstructured":"Turner J (2019) Robot rules: regulating artificial intelligence. Palgrave Macmillan, Cham"},{"key":"1575_CR32","first-page":"23","volume-title":"Human implications of human-robot interaction: papers from the AAAI workshop","author":"M Walker","year":"2006","unstructured":"Walker M (2006) A moral paradox in the creation of artificial intelligence: Mary Poppins 3000s of the world unite! In: Metzler T (ed) Human implications of human-robot interaction: papers from the AAAI workshop. AAAI Press, Menlo Park, pp 23\u201328"},{"key":"1575_CR33","doi-asserted-by":"publisher","DOI":"10.1057\/9781137471338","volume-title":"Free money for all: a basic income guarantee solution for the twenty-first century","author":"M Walker","year":"2016","unstructured":"Walker M (2016) Free money for all: a basic income guarantee solution for the twenty-first century. 
Palgrave Macmillan, New York"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01575-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-022-01575-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01575-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,5]],"date-time":"2024-10-05T22:55:39Z","timestamp":1728168939000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-022-01575-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,17]]},"references-count":33,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1575"],"URL":"https:\/\/doi.org\/10.1007\/s00146-022-01575-z","relation":{},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,17]]},"assertion":[{"value":"25 April 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 September 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 October 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}