{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,3]],"date-time":"2026-02-03T17:27:29Z","timestamp":1770139649674,"version":"3.49.0"},"reference-count":109,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T00:00:00Z","timestamp":1738022400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T00:00:00Z","timestamp":1738022400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Omidyar Network"},{"DOI":"10.13039\/100004318","name":"Microsoft","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100004318","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100004439","name":"William and Flora Hewlett Foundation","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100004439","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100001099","name":"Ernst & Young Foundation","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100001099","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should be not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology\u2014to advance human flourishing. Advancing human flourishing in turn requires democratic\/political stability and economic empowerment. 
To accomplish this, we build on a new normative framework that will give humanity its best chance to reap the full benefits, while avoiding the dangers, of AI. This new framework of \u201cPower-Sharing Liberalism\u201d is a philosophy that restores protections of positive liberties to liberalism. As we deploy it here, it helps shape a more comprehensive (and we would contend, more accurate) understanding of both risk and opportunity introduced by AI. To lay out how Power-Sharing Liberalism can be applied to AI governance, we take four steps. First, we define central concepts in the field of AI governance, disambiguating between forms of technological harms and risks. Second, we review current normative frameworks around the globe and argue that Power-Sharing Liberalism is a better fit for governing AI. Third, we walk through the six governance tasks that should be accomplished by any governance framework and analyze them through the Power-Sharing Liberalism framework. Based on that analysis, we make 17 recommendations for the governance of AI\u2014including transformative investments in public goods, personnel, and the sustainability of democracy itself. 
Finally, we discuss concrete proposals for implementing those recommendations.<\/jats:p>","DOI":"10.1007\/s43681-024-00635-y","type":"journal-article","created":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T13:53:11Z","timestamp":1738072391000},"page":"3355-3377","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["A roadmap for governing AI: technology governance and power-sharing liberalism"],"prefix":"10.1007","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4279-7954","authenticated-orcid":false,"given":"Danielle","family":"Allen","sequence":"first","affiliation":[]},{"given":"Sarah","family":"Hubbard","sequence":"additional","affiliation":[]},{"given":"Woojin","family":"Lim","sequence":"additional","affiliation":[]},{"given":"Allison","family":"Stanger","sequence":"additional","affiliation":[]},{"given":"Shlomit","family":"Wagman","sequence":"additional","affiliation":[]},{"given":"Kinney","family":"Zalesne","sequence":"additional","affiliation":[]},{"given":"Omoaholo","family":"Omoakhalen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,28]]},"reference":[{"key":"635_CR1","unstructured":"A.I. Verify Foundation, Infocomm Media Development Authority.: Model AI Governance Framework for Generative AI: Fostering a Trusted Ecosystem. https:\/\/aiverifyfoundation.sg\/wp-content\/uploads\/2024\/05\/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf. (2024)"},{"key":"635_CR2","unstructured":"ACHPR. Resolution on the need to undertake a Study on human and peoples\u2019 rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa - ACHPR\/Res. 473 (EXT.OS\/ XXXI) 2021. African Commission on Human and Peoples\u2019 Rights. https:\/\/achpr.au.int\/en\/adopted-resolutions\/473-resolution-need-undertake-study-human-and-peoples-rights-and-art. 
(2023)"},{"key":"635_CR3","unstructured":"A Pro-Innovation Approach to AI Regulation. UK Secretary of State for Science, Innovation and Technology. GOV.UK. Retrieved December 29, 2023, from https:\/\/www.gov.uk\/government\/publications\/ai-regulation-a-pro-innovation-approach\/white-paper\/. (2023)"},{"key":"635_CR4","doi-asserted-by":"crossref","unstructured":"Acemoglu D., Lensman T.: Regulating transformative technologies. National Bureau of Economic Research Working Paper Series. https:\/\/ssrn.com\/abstract=4512495. (2023)","DOI":"10.2139\/ssrn.4512495"},{"key":"635_CR5","unstructured":"African Union. Continental artificial intelligence strategy: Harnessing AI for Africa\u2019s development and prosperity. https:\/\/au.int\/sites\/default\/files\/documents\/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf. (2024)"},{"key":"635_CR6","unstructured":"AfriLabs: Ecosystem insights. https:\/\/www.afrilabs.com\/ecosystem-insights\/. (2024)"},{"key":"635_CR7","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226014685.001.0001","volume-title":"Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education","author":"DS Allen","year":"2004","unstructured":"Allen, D.S.: Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education. University of Chicago Press, Chicago (2004)"},{"key":"635_CR8","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226777122.001.0001","volume-title":"Justice by Means of Democracy","author":"DS Allen","year":"2023","unstructured":"Allen, D.S.: Justice by Means of Democracy. The University of Chicago Press, Chicago (2023)"},{"key":"635_CR9","unstructured":"Allen, D. S., Weyl, E. G.: (2024). AI and democracy. Forthcoming in J Democr. 
(2024)"},{"key":"635_CR10","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226818436.001.0001","volume-title":"A Political Economy of Justice","author":"DS Allen","year":"2022","unstructured":"Allen, D.S., Benkler, Y., Downey, L., Henderson, R., Simons, J.: A Political Economy of Justice. University of Chicago Press, Chicago (2022)"},{"key":"635_CR11","unstructured":"Allen, D. S., Frankel E., Lim W., Siddarth D., Simons J., Weyl. E. G. (2023). Ethics of Decentralized Social Technologies: Lessons from Web3, the Fediverse, and Beyond. Edmond & Lily Safra Center for Ethics."},{"key":"635_CR12","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-021-09593-z","author":"PG Almeida","year":"2021","unstructured":"Almeida, P.G., Santos, C.D., Farias, J.S.: Artificial Intelligence Regulation: a framework for governance. Ethics Inf. Technol. (2021). https:\/\/doi.org\/10.1007\/s10676-021-09593-z","journal-title":"Ethics Inf. Technol."},{"key":"635_CR13","unstructured":"Amnesty International: EU: Bloc\u2019s decision to not ban public mass surveillance in AI Act sets a devastating global precedent. Amnesty International. https:\/\/www.amnesty.org\/en\/latest\/news\/2023\/12\/eu-blocs-decision-to-not-ban-public-mass-surveillance-in-ai-act-sets-a-devastating-global-precedent\/. (2023)"},{"key":"635_CR14","unstructured":"Angwin J., Larson J., Mattu S., Kirchner L.: Machine bias. ProPublica. https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. (2016)"},{"key":"635_CR16","unstructured":"Brazilian Artificial Intelligence Act, no. 2338, Brazilian Federal Senate. https:\/\/mcusercontent.com\/af97527c75cf28e5d17467eaa\/files\/248d109f-eeef-7496-4df1-12d29affb522\/PL_23382023_Senado_ENG_VF.pdf. (2023)"},{"key":"635_CR17","unstructured":"Baughman, J.: Translation: interim measures for the management of generative Artificial Intelligence services. China Aerospace Studies Institute. 
https:\/\/www.airuniversity.af.edu\/Portals\/10\/CASI\/documents\/Translations\/2023-08-07%20ITOW%20Interim%20Measures%20for%20the%20Management%20of%20Generative%20Artificial%20Intelligence%20Services.pdf. (2023)"},{"key":"635_CR18","volume-title":"Empire of Cotton: A Global History","author":"S Beckert","year":"2014","unstructured":"Beckert, S.: Empire of Cotton: A Global History, 1st edn. Knopf A. Alfred, New York (2014)","edition":"1"},{"key":"635_CR19","volume-title":"Race after Technology: Abolitionist Tools for the New Jim Code","author":"R Benjamin","year":"2019","unstructured":"Benjamin, R.: Race after Technology: Abolitionist Tools for the New Jim Code. Polity, Cambridge (2019)"},{"key":"635_CR20","unstructured":"Black, J., Murray, A.D. Regulating AI and machine learning: setting the regulatory agenda. Eur. J. Law Technol. Retrieved August 28, 2024. https:\/\/eprints.lse.ac.uk\/102953\/4\/722_3282_1_PB.pdf. (2019)"},{"key":"635_CR21","doi-asserted-by":"publisher","unstructured":"Bommasani, R., Klyman, K., Longpre, S., Kapoor, S., Maslej, N., Xiong, B., Zhang, D., Liang, P.: The foundation model transparency index. arXiv. https:\/\/doi.org\/10.48550\/arXiv.2310.12941. (2023)","DOI":"10.48550\/arXiv.2310.12941"},{"key":"635_CR22","volume-title":"Superintelligence: Paths, Dangers, Strategies","author":"N Bostrom","year":"2014","unstructured":"Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)"},{"key":"635_CR23","doi-asserted-by":"publisher","DOI":"10.1093\/oxrep\/grab029","author":"S Bowles","year":"2021","unstructured":"Bowles, S., Carlin, W.: Shrinking capitalism: components of a new political economy paradigm. Oxf. Rev. Econ. Policy (2021). https:\/\/doi.org\/10.1093\/oxrep\/grab029","journal-title":"Oxf. Rev. Econ. 
Policy"},{"key":"635_CR24","unstructured":"Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G.C., Steinhardt, J., Flynn, C., H\u00e9igeartaigh, S.\u00d3., Beard, S., Belfield, H., Farquhar, S., Amodei, D.: The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv. http:\/\/arxiv.org\/abs\/1802.07228. (2018)"},{"issue":"1","key":"635_CR25","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1017\/err.2019.8","volume":"10","author":"MC Buiten","year":"2019","unstructured":"Buiten, M.C.: Towards intelligent regulation of Artificial Intelligence. Eur. J. Risk Regul. 10(1), 41\u201359 (2019)","journal-title":"Eur. J. Risk Regul."},{"key":"635_CR26","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1145\/3430368","volume":"63","author":"C Canca","year":"2020","unstructured":"Canca, C.: Operationalizing AI ethics principles. Commun. ACM 63, 18 (2020)","journal-title":"Commun. ACM"},{"key":"635_CR27","unstructured":"Castillo C., Chouldechova A., De-Arteaga M., Ekstrand M., Lazar S. Statement on AI harms and policy. ACM FAccT Conference. Retrieved December 20, 2023, from https:\/\/facctconference.org\/2023\/harm-policy. (2023)"},{"key":"635_CR28","doi-asserted-by":"publisher","unstructured":"Cihon, P., Maas, M.M., Kemp, L.: Should Artificial Intelligence governance be centralised?: Design lessons from history, (February 8, 2020). In: Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society, 228\u201334. New York NY USA: ACM, 2020. https:\/\/doi.org\/10.1145\/3375627.3375857, Available at SSRN: https:\/\/ssrn.com\/abstract=3761636 or https:\/\/doi.org\/10.2139\/ssrn.3761636. 
(2020)","DOI":"10.1145\/3375627.3375857"},{"key":"635_CR29","volume-title":"Why AI Undermines Democracy and What To Do About It","author":"M Coeckelbergh","year":"2024","unstructured":"Coeckelbergh, M.: Why AI Undermines Democracy and What To Do About It. Wiley, Hoboken (2024)"},{"key":"635_CR30","unstructured":"Creemer, R., Webster, G., Toner, H.: Translation: internet information service algorithmic recommendation management provisions. DigiChina. Retrieved December 20, 2023, from https:\/\/digichina.stanford.edu\/work\/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022\/. (2022)"},{"key":"635_CR31","unstructured":"Creemers, R., Webster, G. Translation: internet information service deep synthesis management provisions (draft for comment). DigiChina. Retrieved December 20, 2023, from https:\/\/digichina.stanford.edu\/work\/translation-internet-information-service-deep-synthesis-management-provisions-draft-for-comment-jan-2022\/. (2022)"},{"key":"635_CR32","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3664124","author":"R Crootof","year":"2021","unstructured":"Crootof, R., Ard, B.J.: Structuring Techlaw. Harvard J. Law Technol. (2021). https:\/\/doi.org\/10.2139\/ssrn.3664124","journal-title":"Harvard J. Law Technol."},{"key":"635_CR33","unstructured":"CSTI. AI Strategy 2022. Council of Science Technology and Innovation (CSTI). https:\/\/www8.cao.go.jp\/cstp\/ai\/aistratagy2022en.pdf. (2022)"},{"key":"635_CR34","unstructured":"Dafoe, A. AI Governance: a research agenda. Centre for the Governance of AI, University of Oxford. https:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/GovAI-Agenda.pdf. (2018)"},{"key":"635_CR113","doi-asserted-by":"publisher","unstructured":"Dan\u00edelsson, J., Macrae, R., & Uthemann, A.: Artificial intelligence and systemic risk. Journal of Banking & Finance, 140, Article 106290. https:\/\/doi.org\/10.1016\/j.bankfin.2021.106290. 
(2022)","DOI":"10.1016\/j.bankfin.2021.106290"},{"key":"635_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbankfin.2021.106290","volume":"140","author":"J Dan\u00edelsson","year":"2022","unstructured":"Dan\u00edelsson, J., Macrae, R., Uthemann, A.: Artificial intelligence and systemic risk. J. Bank. Finance 140, 106290 (2022). https:\/\/doi.org\/10.1016\/j.jbankfin.2021.106290","journal-title":"J. Bank. Finance"},{"key":"635_CR36","unstructured":"Demirci, S. (n.d.). Empowering small businesses: The impact of AI on leveling the playing field. Retrieved September 11, 2024, from https:\/\/www.orionpolicy.org\/orionforum\/256\/empowering-small-businesses-the-impact-of-ai-on-leveling-the-playing-field"},{"key":"635_CR37","unstructured":"De Vynck, G.: The debate over whether AI will destroy us is dividing Silicon Valley. The Washington Post.https:\/\/www.washingtonpost.com\/technology\/2023\/05\/20\/ai-existential-risk-debate. (2023)"},{"key":"635_CR38","doi-asserted-by":"publisher","unstructured":"Erd\u00e9lyi, O.J., Goldsmith, J. Regulating Artificial Intelligence: proposal for a global solution. In: Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society. https:\/\/doi.org\/10.1145\/3278721.3278731 (2018)","DOI":"10.1145\/3278721.3278731"},{"key":"635_CR39","unstructured":"Regulation 2024\/1689. Annex III: High Risk AI Systems Referred to in Article 6(2). European Parliament, Council of the European Union. https:\/\/www.euaiact.com\/annex\/3"},{"key":"635_CR41","unstructured":"European Commission. G1 & G2: AI economic players and AI player intensity. AI Watch; European Commission. https:\/\/ai-watch.ec.europa.eu\/ai-watch-index-2021\/g-global-view-ai-landscape\/g1-g2-ai-economic-players-and-ai-player-intensity_en. (2023)"},{"key":"635_CR42","unstructured":"Europol: Facing reality? Law enforcement and the challenge of deepfakes, an observatory report from the Europol Innovation Lab, Publications Office of the European Union. 
https:\/\/www.europol.europa.eu\/publications-events\/publications\/facing-reality-law-enforcement-and-challenge-of-deepfakes. (2022)"},{"key":"635_CR43","unstructured":"Executive Order No. 14110 of Oct. 30, 2023, 88 FR 75191-75226 (2023)."},{"key":"635_CR44","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3518482","author":"J Fjeld","year":"2020","unstructured":"Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled Artificial Intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electron. J. (2020). https:\/\/doi.org\/10.2139\/ssrn.3518482","journal-title":"SSRN Electron. J."},{"key":"635_CR45","doi-asserted-by":"crossref","unstructured":"Floridi, L., Cowls, J. A unified framework of five principles for AI in society. Harvard Data Science Review. https:\/\/hdsr.mitpress.mit.edu\/pub\/l0jsh9d1\/release\/8. (2019)","DOI":"10.1162\/99608f92.8cd550d1"},{"key":"635_CR47","unstructured":"General Data Protection Regulation (GDPR)\u2013 official legal text. General Data Protection Regulation (GDPR). Retrieved December 29, 2023, from https:\/\/gdpr-info.eu\/"},{"key":"635_CR48","unstructured":"Governance Guidelines for Implementation of AI Principles, Ver 1.1. Ministry of Economy, Trade, and Industry (METI) Expert Group on How AI Principles Should be Implemented (Japan), Retrieved from https:\/\/www.meti.go.jp\/shingikai\/mono_info_service\/ai_shakai_jisso\/pdf\/20220128_2.pdf"},{"key":"635_CR49","unstructured":"Habuka H.: Japan\u2019s approach to AI regulation and its impact on the 2023 G7 presidency. Center for Strategic and International Studies. https:\/\/www.csis.org\/analysis\/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency. (2023)"},{"key":"635_CR50","unstructured":"Hadfield, G. K., Clark, J. Regulatory markets: the future of AI governance. arXiv. https:\/\/arxiv.org\/pdf\/2304.04914. 
(2023)"},{"key":"635_CR51","unstructured":"Heikkil\u00e4 M.: To avoid AI doom, learn from nuclear safety. MIT Technology Review.https:\/\/www.technologyreview.com\/2023\/06\/06\/1074077\/to-avoid-ai-doom-learn-from-nuclear-safety\/. (2023"},{"key":"635_CR52","unstructured":"Henshall, W.: How China\u2019s new AI rules could affect U.S. companies. TIME. https:\/\/time.com\/6314790\/china-ai-regulation-us\/. (2023)"},{"key":"635_CR53","doi-asserted-by":"crossref","unstructured":"Hornyak T.: Why Japan is building its own version of ChatGPT. Nature. https:\/\/www.nature.com\/articles\/d41586-023-02868-z. (2023)","DOI":"10.1038\/d41586-023-02868-z"},{"key":"635_CR54","unstructured":"Huang, S., Toner, H., Haluza, Z., Creemers, R., Webster, G.: Translation: measures for the management of generative artificial intelligence services (Draft for comment). DigiChina. Retrieved December 19, 2023, from https:\/\/digichina.stanford.edu\/work\/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023\/. (2023)"},{"key":"635_CR55","doi-asserted-by":"publisher","first-page":"389","DOI":"10.1038\/s42256-019-0088-2","volume":"1","author":"A Jobin","year":"2019","unstructured":"Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389 (2019)","journal-title":"Nat. Mach. Intell."},{"key":"635_CR56","unstructured":"Joint Statement on AI Safety and Openness. Mozilla. https:\/\/open.mozilla.org\/letter\/. (2023)"},{"key":"635_CR57","unstructured":"Jones, E.: Explainer: What is a foundation model? Ada Lovelace Institute (blog post). https:\/\/www.adalovelaceinstitute.org\/resource\/foundation-models-explainer\/. (2023)"},{"key":"635_CR58","doi-asserted-by":"crossref","unstructured":"Jones, K. AI governance and human rights: resetting the relationship. International Law Programme, Chatham House. 
https:\/\/www.chathamhouse.org\/sites\/default\/files\/2023-01\/2023-01-10-AI-governance-human-rights-jones.pdf. (2023)","DOI":"10.55317\/9781784135492"},{"key":"635_CR59","doi-asserted-by":"publisher","DOI":"10.1080\/22041451.2024.2346428","author":"D Joshi","year":"2024","unstructured":"Joshi, D.: AI governance in India\u2014law, policy and political economy. Commun. Res. Pract. (2024). https:\/\/doi.org\/10.1080\/22041451.2024.2346428","journal-title":"Commun. Res. Pract."},{"key":"635_CR60","unstructured":"Kelley, D.: WormGPT\u2014The generative AI tool cybercriminals are using to launch business email compromise attacks. SlashNext. https:\/\/slashnext.com\/blog\/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks\/. (2023)"},{"key":"635_CR61","unstructured":"Kemp, L., Cihon, P., Maas, M.M., Belfield, H., Cremer, Z., Leung, J., & \u00d3 h\u00c9igeartaigh, S.: UN high-level panel on digital cooperation: a proposal for international AI governance. UN High-Level Panel on Digital Cooperation. https:\/\/digitalcooperation.org\/wp-content\/uploads\/2019\/02\/Luke_Kemp_Submission-to-the-UN-High-Level-Panel-on-Digital-Cooperation-2019-Kemp-et-al.pdf (2019)"},{"key":"635_CR62","doi-asserted-by":"publisher","unstructured":"Khan, A. A., Badshah, S., Liang, P., et al.: Ethics of AI: A systematic literature review of principles and challenges. In: Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering. https:\/\/doi.org\/10.1145\/3530019.3531329 (2022)","DOI":"10.1145\/3530019.3531329"},{"key":"635_CR63","doi-asserted-by":"publisher","DOI":"10.1007\/s10796-022-10300-6","author":"V Koniakou","year":"2022","unstructured":"Koniakou, V.: From the \u201crush to ethics\u201d to the \u201crace for governance\u201d in Artificial Intelligence. Inf. Syst. Front. (2022). https:\/\/doi.org\/10.1007\/s10796-022-10300-6","journal-title":"Inf. Syst. 
Front."},{"issue":"3","key":"635_CR64","doi-asserted-by":"publisher","first-page":"730","DOI":"10.1111\/nous.12477","volume":"58","author":"S Lazar","year":"2023","unstructured":"Lazar, S., Stone, J.: On the site of predictive justice. No\u00fbs 58(3), 730\u2013754 (2023). https:\/\/doi.org\/10.1111\/nous.12477","journal-title":"No\u00fbs"},{"key":"635_CR65","doi-asserted-by":"publisher","unstructured":"Lazar, S., Stone, J. On the site of predictive justice. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 687. https:\/\/doi.org\/10.1145\/3593013.3594035. (2023)","DOI":"10.1145\/3593013.3594035"},{"key":"635_CR66","doi-asserted-by":"publisher","unstructured":"Luccioni A.S., Jernite Y., Strubell E. Power hungry processing: Watts driving the cost of AI deployment? arXiv. https:\/\/doi.org\/10.48550\/arXiv.2311.16863. (2023)","DOI":"10.48550\/arXiv.2311.16863"},{"key":"635_CR67","unstructured":"Lynch, S.: Analyzing the European Union AI Act: What works, what needs improvement. Stanford University Human-Centered Artificial Intelligence. https:\/\/hai.stanford.edu\/news\/analyzing-european-union-ai-act-what-works-what-needs-improvement. (2023)"},{"key":"635_CR68","unstructured":"Mattu, J. A., Larson, J., Kirchner, L.S.: Machine Bias. ProPublica. Retrieved December 19, 2023, from https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. (2016)."},{"key":"635_CR69","doi-asserted-by":"publisher","first-page":"2141","DOI":"10.1007\/s11948-019-00165-5","volume":"26","author":"J Morley","year":"2020","unstructured":"Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics 26, 2141 (2020)","journal-title":"Sci. Eng. 
Ethics"},{"key":"635_CR70","volume-title":"Geography is Destiny: Britain and the World: A 10,000-Year History","author":"I Morris","year":"2022","unstructured":"Morris, I.: Geography is Destiny: Britain and the World: A 10,000-Year History. Straus and Giroux, New York (2022)"},{"key":"635_CR71","volume-title":"One Month, 500,000 Face Scans: How China is Using A.I. to Profile a Minority","author":"P Mozur","year":"2019","unstructured":"Mozur, P.: One Month, 500,000 Face Scans: How China is Using A.I. to Profile a Minority. The New York Times, New York (2019). https:\/\/www.nytimes.com\/2019\/04\/14\/technology\/china-surveillance-artificial-intelligence-racial-profiling.html"},{"key":"635_CR72","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.4597080","author":"TN Narechania","year":"2023","unstructured":"Narechania, T.N., Sitaraman, G.: An antimonopoly approach to governing Artificial Intelligence. Vand. Policy Accel. Pol. Econ. Regul. (2023). https:\/\/doi.org\/10.2139\/ssrn.4597080","journal-title":"Vand. Policy Accel. Pol. Econ. Regul."},{"key":"635_CR73","unstructured":"National Strategy for Artificial Intelligence. NITI Aayog. https:\/\/www.niti.gov.in\/sites\/default\/files\/2023-03\/National-Strategy-for-Artificial-Intelligence.pdf. (2018)"},{"key":"635_CR74","doi-asserted-by":"publisher","DOI":"10.18574\/nyu\/9781479833641.001.0001","volume-title":"Algorithms of Oppression: How Search Engines Reinforce Racism","author":"SU Noble","year":"2018","unstructured":"Noble, S.U.: Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York (2018)"},{"key":"635_CR75","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-023-01723-z","author":"C Novelli","year":"2023","unstructured":"Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., Floridi, L.: Taking AI risks seriously: a new assessment model for the AI Act. AI Soc. (2023). 
https:\/\/doi.org\/10.1007\/s00146-023-01723-z","journal-title":"AI Soc."},{"key":"635_CR76","unstructured":"Office, U.S.G.A. (n.d.). Science, Technology Assessment, and Analytics (StAA), US GAO. Retrieved December 29, 2023, from https:\/\/www.gao.gov\/about\/careers\/our-teams\/STAA"},{"key":"635_CR77","unstructured":"Ovadya, A., Thorburn, L.: Bridging systems: open problems for countering destructive divisiveness across ranking, recommenders, and governance. Knight First Amendment Institute at Harvard University. https:\/\/knightcolumbia.org\/content\/bridging-systems. (2023)"},{"key":"635_CR78","unstructured":"Oxford Insights. Government AI Readiness Index 2023. https:\/\/oxfordinsights.com\/wp-content\/uploads\/2023\/12\/2023-Government-AI-Readiness-Index-2.pdf. (2023)"},{"key":"635_CR79","doi-asserted-by":"publisher","DOI":"10.1017\/9781108890960","volume-title":"Social Media and Democracy","author":"N Persily","year":"2020","unstructured":"Guess, A.M., Lyons, B.A.: Misinformation, Disinformation, and Online Propaganda. In N. Persily & J.A. Tucker (Eds.),Social Media and Democracy (pp. 10-33). Cambridge, Cambridge University Press. https:\/\/doi.org\/10.1017\/9781108890960. (2020)"},{"key":"635_CR80","unstructured":"Personal Data Protection Commission (PDPC) Singapore. Singapore\u2019s Approach to AI Governance. Retrieved August 27, 2024, from https:\/\/www.pdpc.gov.sg\/Help-and-Resources\/2020\/01\/Model-AI-Governance-Framework. (2020)"},{"key":"635_CR81","volume-title":"Just Freedom: A Moral Compass for a Complex World","author":"P Pettit","year":"2014","unstructured":"Pettit, P.: Just Freedom: A Moral Compass for a Complex World, 1st edn. W.W. Norton & Company, New York (2014)","edition":"1"},{"key":"635_CR82","doi-asserted-by":"publisher","DOI":"10.1080\/14494035.2021.1929728","author":"R Radu","year":"2021","unstructured":"Radu, R.: Steering the governance of artificial intelligence: national strategies in perspective. Policy Soc. (2021). 
https:\/\/doi.org\/10.1080\/14494035.2021.1929728","journal-title":"Policy Soc."},{"key":"635_CR83","unstructured":"Regulatory framework proposal on artificial intelligence EU Commission Online. https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/regulatory-framework-ai. (2023)"},{"key":"635_CR84","unstructured":"Responsible AI| Adopting the Framework: A Use Case Approach on Facial Recognition Technology. NITI Aayog. https:\/\/www.niti.gov.in\/sites\/default\/files\/2022-11\/Ai_for_All_2022_02112022_0.pdf. (2022)"},{"key":"635_CR85","unstructured":"Responsible Innovation: Israel\u2019s Policy on Artificial Intelligence Regulation and Ethics. https:\/\/www.gov.il\/BlobFolder\/policy\/ai_2023\/en\/Israels%20AI%20Policy%202023.pdf. (2023)"},{"key":"635_CR86","unstructured":"Sanders, N. E., & Schneier, B.: Build ai by the people, for the people. Foreign Policy. https:\/\/foreignpolicy.com\/2023\/06\/12\/ai-regulation-technology-us-china-eu-governance\/. (2023)"},{"key":"635_CR87","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.2609777","author":"MU Scherer","year":"2015","unstructured":"Scherer, M.U.: Regulating Artificial Intelligence systems: risks, challenges, competencies, and strategies. Harvard J. Law Technol. (2015). https:\/\/doi.org\/10.2139\/ssrn.2609777","journal-title":"Harvard J. Law Technol."},{"key":"635_CR88","doi-asserted-by":"publisher","unstructured":"Shevlane, T., Farquhar, S., Garfinkel, B., Phuong, M., Whittlestone, J., Leung, J., Kokotajlo, D., Marchal, N., Anderljung, M., Kolt, N., Ho, L., Siddarth, D., Shahar Avin, Hawkins, W., Been, K., Iason Gabriel, Bolina, V., Clark, J., Bengio, Y., Dafoe, A. Model evaluation for extreme risks. arXiv.org. https:\/\/doi.org\/10.48550\/arxiv.2305.15324. (2023)","DOI":"10.48550\/arxiv.2305.15324"},{"key":"635_CR89","unstructured":"Siddarth, D., Acemoglu, D., Allen, D., Crawford, K., Evans, J., Jordan, M., Weyl, E.G.: How AI fails us. Harvard Kennedy School. 
https:\/\/www.hks.harvard.edu\/centers\/carr\/publications\/how-ai-fails-us. (2022)"},{"key":"635_CR90","doi-asserted-by":"publisher","DOI":"10.2307\/j.ctv2vjrj0m","volume-title":"Algorithms for the People: Democracy in the Age of AI","author":"J Simons","year":"2023","unstructured":"Simons, J.: Algorithms for the People: Democracy in the Age of AI. Princeton University Press, Princeton (2023)"},{"key":"635_CR91","doi-asserted-by":"publisher","DOI":"10.1080\/17579961.2021.1898300","author":"NA Smuha","year":"2021","unstructured":"Smuha, N.A.: From a \u2018race to AI\u2019 to a \u2018race to AI regulation\u2019: Regulatory competition for artificial intelligence. Law Innov. Technol. (2021). https:\/\/doi.org\/10.1080\/17579961.2021.1898300","journal-title":"Law Innov. Technol."},{"key":"635_CR92","unstructured":"Cabinet Secretariat, Japan.: Social principles of human-centric AI. https:\/\/www.cas.go.jp\/jp\/seisaku\/jinkouchinou\/pdf\/humancentricai.pdf. (2022)"},{"key":"635_CR93","unstructured":"Government of Japan.: Society 5.0. Cabinet Office Home Page. https:\/\/www8.cao.go.jp\/cstp\/english\/society5_0\/index.html. (2016)"},{"key":"635_CR94","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-polisci-041322-042247","author":"A Stanger","year":"2024","unstructured":"Stanger, A., Kraus, J., Lim, W., Millman-Perlah, G., Schroeder, M.: Terra incognita: The governance of Artificial Intelligence in global perspective. Annu. Rev. Polit. Sci. (2024). https:\/\/doi.org\/10.1146\/annurev-polisci-041322-042247","journal-title":"Annu. Rev. Polit. Sci."},{"key":"635_CR95","unstructured":"Statement on AI Risk. CAIS. Retrieved December 29, 2023, from https:\/\/www.safe.ai\/statement-on-ai-risk. (2023)"},{"key":"635_CR112","doi-asserted-by":"publisher","unstructured":"Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. 
https:\/\/doi.org\/10.6028\/NIST.AI.100-1","DOI":"10.6028\/NIST.AI.100-1"},{"key":"635_CR96","doi-asserted-by":"publisher","DOI":"10.1080\/14494035.2021.1928377","author":"A Taeihagh","year":"2021","unstructured":"Taeihagh, A.: Governance of Artificial Intelligence. Policy Soc. (2021). https:\/\/doi.org\/10.1080\/14494035.2021.1928377","journal-title":"Policy Soc."},{"key":"635_CR97","unstructured":"The Global Partnership on Artificial Intelligence: Community. (n.d.). GPAI. https:\/\/gpai.ai\/community\/"},{"key":"635_CR98","unstructured":"Understanding and Managing the AI Lifecycle. GSA (n.d.). Retrieved December 19, 2023, https:\/\/coe.gsa.gov\/coe\/ai-guide-for-government\/understanding-managing-ai-lifecycle\/"},{"key":"635_CR100","unstructured":"United Nations. Global Digital Compact (Third Revision). https:\/\/documents.un.org\/doc\/undoc\/ltd\/n24\/065\/92\/pdf\/n2406592.pdf. (2024)"},{"key":"635_CR101","unstructured":"United Nations AI Advisory Body. Governing AI for Humanity. https:\/\/www.un.org\/sites\/un2.un.org\/files\/governing_ai_for_humanity_final_report_en.pdf. (2024)"},{"key":"635_CR102","unstructured":"United Nations General Assembly Resolution A\/78\/L.49. https:\/\/documents.un.org\/doc\/undoc\/ltd\/n24\/065\/92\/pdf\/n2406592.pdf. (2024)"},{"key":"635_CR103","doi-asserted-by":"publisher","unstructured":"Valdivia, A., Tazzioli, M.: Datafication genealogies beyond algorithmic fairness: making up racialised subjects. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 840\u2013850. https:\/\/doi.org\/10.1145\/3593013.3594047. (2023)","DOI":"10.1145\/3593013.3594047"},{"key":"635_CR104","unstructured":"White, Case: AI Watch: Global Regulatory Tracker - Brazil. www.whitecase.com. https:\/\/www.whitecase.com\/insight-our-thinking\/ai-watch-global-regulatory-tracker-brazil#:~:text=Brazil (2024)"},{"key":"635_CR105","unstructured":"Wong, M.: AI Doomerism is a decoy. The Atlantic. 
https:\/\/www.theatlantic.com\/technology\/archive\/2023\/06\/ai-regulation-sam-altman-bill-gates\/674278\/. (2023)"},{"key":"635_CR106","volume-title":"The Oxford Handbook of AI Ethics","author":"K Yeung","year":"2019","unstructured":"Yeung, K., Howes, A., Pogrebna, G.: AI governance by human rights-centred design, deliberation and oversight: an end to ethics washing. In: Dubber, M., Pasquale, F. (eds.) The Oxford Handbook of AI Ethics. Oxford University Press, Oxford (2019)"},{"key":"635_CR107","unstructured":"Zalesne, E. K., Pyati, N.: Putting Flourishing First: Applying Democratic Values to Technology. Edmond and Lily Safra Center for Ethics. (2023)"},{"key":"635_CR108","unstructured":"Zeng, Y., Huangfu, C., Lu, E., Ruan, Z., et al.: Linking Artificial Intelligence principles. Institute of Automation, Chin. Acad. Sci. https:\/\/www.linking-ai-principles.org\/. (2018)"},{"key":"635_CR109","unstructured":"Zeng, Y., Lu, E., Huangfu, C.: Linking Artificial Intelligence principles. arXiv. https:\/\/arxiv.org\/abs\/1812.04814. (2018)"},{"key":"635_CR110","unstructured":"Zhang, L.: China: generative AI measures finalized. Library of Congress. https:\/\/www.loc.gov\/item\/global-legal-monitor\/2023-07-18\/china-generative-ai-measures-finalized\/. (2023)"},{"key":"635_CR111","unstructured":"Zwetsloot, R., Dafoe, A.: Thinking about risks from AI: accidents, misuse and structure. Lawfare. Retrieved December 19, 2023, from https:\/\/www.lawfaremedia.org\/article\/thinking-about-risks-ai-accidents-misuse-and-structure. 
(2019)"}],"container-title":["AI and Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-024-00635-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43681-024-00635-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-024-00635-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,5,24]],"date-time":"2025-05-24T09:09:42Z","timestamp":1748077782000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43681-024-00635-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,28]]},"references-count":109,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["635"],"URL":"https:\/\/doi.org\/10.1007\/s43681-024-00635-y","relation":{},"ISSN":["2730-5953","2730-5961"],"issn-type":[{"value":"2730-5953","type":"print"},{"value":"2730-5961","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,28]]},"assertion":[{"value":"16 January 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 November 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"No human subjects in research. 
Ethics approval not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"All authors consent and all materials used with permissions from authors and rights-holders.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}]}}