{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T09:31:54Z","timestamp":1771061514239,"version":"3.50.1"},"reference-count":22,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2022,3,18]],"date-time":"2022-03-18T00:00:00Z","timestamp":1647561600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,18]],"date-time":"2022-03-18T00:00:00Z","timestamp":1647561600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"University of Turku (UTU) including Turku University Central Hospital"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The problem of controlling an artificial general intelligence (AGI) has fascinated both scientists and science-fiction writers for centuries. Today that problem is becoming more important because the time when we may have a superhuman intelligence among us is within the foreseeable future. Current average estimates place that moment to before 2060. Some estimates place it as early as 2040, which is quite soon. The arrival of the first AGI might lead to a series of events that we have not seen before: rapid development of an even more powerful AGI developed by the AGIs themselves. This has wide-ranging implications to the society and therefore it is something that must be studied well before it happens. In this paper we will discuss the problem of limiting the risks posed by the advent of AGIs. In a thought experiment, we propose an AGI which has enough human-like properties to act in a democratic society, while still retaining its essential artificial general intelligence properties. 
We discuss ways of arranging the coexistence of humans and such AGIs within a democratic system of coordination. If successful, such a system could be used to manage a society consisting of both AGIs and humans. A democratic system in which each member of society is represented at the highest level of decision-making guarantees that even minorities can have their voices heard. The unpredictability of the AGI era makes it necessary to consider the possibility that a population of autonomous AGIs could make us humans a minority.<\/jats:p>","DOI":"10.1007\/s00146-022-01426-x","type":"journal-article","created":{"date-parts":[[2022,3,18]],"date-time":"2022-03-18T03:02:35Z","timestamp":1647572555000},"page":"1785-1791","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["A democratic way of controlling artificial general intelligence"],"prefix":"10.1007","volume":"38","author":[{"given":"Jussi","family":"Salmi","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,18]]},"reference":[{"key":"1426_CR1","first-page":"149","volume-title":"Ethics and information technology","author":"C Allen","year":"2005","unstructured":"Allen C, Smit I, Wallach W (2005) Artificial morality: Top\u2013down, bottom\u2013up, and hybrid approaches. Ethics and information technology. Springer, pp 149\u2013155"},{"key":"1426_CR2","unstructured":"Azulay D (2019) When will we reach the singularity?\u2014A timeline consensus from AI researchers. https:\/\/emerj.com\/ai-future-outlook\/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers\/. Fetched 21 Jan 2021"},{"key":"1426_CR3","doi-asserted-by":"publisher","first-page":"829","DOI":"10.1038\/nrn1201","volume":"4","author":"A Baddeley","year":"2003","unstructured":"Baddeley A (2003) Working memory: looking back and looking forward. 
Nat Rev Neurosci 4:829\u2013839","journal-title":"Nat Rev Neurosci"},{"key":"1426_CR4","volume-title":"Superintelligence: paths, dangers, strategies","author":"N Bostrom","year":"2014","unstructured":"Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press"},{"key":"1426_CR5","first-page":"1","volume":"103","author":"DS Brown","year":"2004","unstructured":"Brown DS, Mobarak AM (2004) The transforming power of democracy: regime type and the distribution of electricity. Am Polit Sci Rev 103:1\u201335","journal-title":"Am Polit Sci Rev"},{"key":"1426_CR6","first-page":"172","volume":"127","author":"F Burgat","year":"2015","unstructured":"Burgat F, Freccero Y (2015) Facing the Animal in Sartre and Levinas. Yale Fr Stud 127:172\u2013189","journal-title":"Yale Fr Stud"},{"key":"1426_CR7","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1007\/s00146-021-01170-8","volume":"37","author":"P Burgess","year":"2021","unstructured":"Burgess P (2021) Algorithmic augmentation of democracy: considering whether technology can enhance the concepts of democracy and the rule of law through four hypotheticals. AI Soc 37:97\u2013112","journal-title":"AI Soc"},{"key":"1426_CR8","unstructured":"Dilmegani C (2021) 995 experts opinion: AGI\/singularity by 2060. https:\/\/research.aimultiple.com\/artificial-general-intelligence-singularity-timing\/. Fetched 21 Jan 2021"},{"key":"1426_CR9","unstructured":"Freud S (1999) In Strachey J (ed) The standard edition of the complete psychological works of Sigmund Freud, vol XIX"},{"key":"1426_CR10","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1023\/A:1010028211269","volume":"1","author":"BS Frey","year":"2000","unstructured":"Frey BS, Stutzer A (2000) Happiness prospers in democracy. 
J Happiness Stud 1:79\u2013102","journal-title":"J Happiness Stud"},{"issue":"1","key":"1426_CR11","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1177\/0093854896023001004","volume":"23","author":"RD Hare","year":"1996","unstructured":"Hare RD (1996) Psychopathy: a clinical construct whose time has come. Crim Justice Behav 23(1):25\u201354","journal-title":"Crim Justice Behav"},{"key":"1426_CR12","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-020-01106-8","author":"O Hrudka","year":"2020","unstructured":"Hrudka O (2020) \u2018Pretending to favour the public\u2019: how Facebook\u2019s declared democratising ideals are reversed by its practices. AI Soc. https:\/\/doi.org\/10.1007\/s00146-020-01106-8","journal-title":"AI Soc"},{"key":"1426_CR13","volume-title":"Oxford companion to philosophy","author":"M Klein","year":"2005","unstructured":"Klein M (2005) Responsibility. In: Honderich T (ed) Oxford companion to philosophy. Oxford University Press"},{"key":"1426_CR14","volume-title":"The visual brain in action","author":"AD Milner","year":"1995","unstructured":"Milner AD, Goodale MA (1995) The visual brain in action. Oxford University Press"},{"key":"1426_CR15","volume-title":"The essential rousseau: the social contract, discourse on the origin of inequality, discourse on the arts and sciences, the creed of a savoyard priest","author":"J-J Rousseau","year":"1974","unstructured":"Rousseau J-J (1974) The essential rousseau: the social contract, discourse on the origin of inequality, discourse on the arts and sciences, the creed of a savoyard priest. New American Library, New York"},{"key":"1426_CR16","unstructured":"Shulman C (2010) Omohundro's basic AI drives and catastrophic risks. http:\/\/intelligence.org\/files\/BasicAIDrives.pdf. 
Fetched 21 Jan 2021"},{"issue":"1","key":"1426_CR17","doi-asserted-by":"publisher","first-page":"018001","DOI":"10.1088\/0031-8949\/90\/1\/018001","volume":"90","author":"K Sotala","year":"2014","unstructured":"Sotala K, Yampolskiy RV (2014) Responses to catastrophic AGI risk: a survey. Phys Scr 90(1):018001","journal-title":"Phys Scr"},{"key":"1426_CR18","doi-asserted-by":"publisher","first-page":"125","DOI":"10.1007\/s00146-017-0779-3","volume":"33","author":"M Teli","year":"2018","unstructured":"Teli M, De Angeli A, Men\u00e9ndez-Blanco M (2018) The positioning cards: on affect, public design, and the common. AI Soc 33:125\u2013132","journal-title":"AI Soc"},{"issue":"4","key":"1426_CR19","doi-asserted-by":"publisher","first-page":"158","DOI":"10.1016\/j.tics.2007.01.005","volume":"11","author":"N Tsuchiya","year":"2007","unstructured":"Tsuchiya N, Adolphs R (2007) Emotion and consciousness. Trends Cogn Sci 11(4):158\u2013167","journal-title":"Trends Cogn Sci"},{"key":"1426_CR20","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1007\/s00146-021-01147-7","volume":"37","author":"S Wojtczak","year":"2022","unstructured":"Wojtczak S (2022) Endowing artificial intelligence with legal subjectivity. AI Soc 37:205\u2013213","journal-title":"AI Soc"},{"key":"1426_CR21","unstructured":"Yudkowsky E (2001) Creating friendly AI 1.0: the analysis and design of benevolent goal architectures. https:\/\/intelligence.org\/files\/CFAI.pdf. Fetched 21 Jan 2021"},{"key":"1426_CR22","volume-title":"Global catastrophic risks","author":"E Yudkowsky","year":"2008","unstructured":"Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. In: Bostr\u00f6m N, Cirkovic MM (eds) Global catastrophic risks. 
Oxford University Press, Oxford"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01426-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-022-01426-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01426-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,13]],"date-time":"2023-07-13T18:13:48Z","timestamp":1689272028000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-022-01426-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,18]]},"references-count":22,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["1426"],"URL":"https:\/\/doi.org\/10.1007\/s00146-022-01426-x","relation":{},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,18]]},"assertion":[{"value":"9 March 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 February 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}