{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T21:11:52Z","timestamp":1773954712880,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":97,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T00:00:00Z","timestamp":1686528000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,6,12]]},"DOI":"10.1145\/3593013.3593977","type":"proceedings-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T14:40:46Z","timestamp":1686580846000},"page":"64-76","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Welfarist Moral Grounding for Transparent AI"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4201-1421","authenticated-orcid":false,"given":"Devesh","family":"Narayanan","sequence":"first","affiliation":[{"name":"National University of Singapore, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1186\/s12911-020-01332-6"},{"key":"e_1_3_2_1_2_1","first-page":"979","article-title":"Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability","volume":"20","author":"Ananny Mike","year":"2018","unstructured":"Mike Ananny and Kate Crawford . 2018 . Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability . New Media & Society 20 , 3 (2018), 979 \u2013 989 . DOI:https:\/\/doi.org\/10.1177\/1461444816676645 10.1177\/1461444816676645 Mike Ananny and Kate Crawford. 2018. 
Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20, 3 (2018), 979\u2013989. DOI:https:\/\/doi.org\/10.1177\/1461444816676645","journal-title":"New Media & Society"},{"key":"e_1_3_2_1_3_1","volume-title":"ProPublica. Retrieved","author":"Angwin Julia","year":"2016","unstructured":"Julia Angwin , Jeff Larson , Surya Mattu , and Laura Kirchner . 2016 . Machine Bias . ProPublica. Retrieved February 16, 2022 from https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing Julia Angwin, Jeff Larson, Surya Mattu, and Laura Kirchner. 2016. Machine Bias. ProPublica. Retrieved February 16, 2022 from https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"e_1_3_2_1_4_1","unstructured":"Jef Ausloos Pierre Dewitte David Geerts Peggy Valcke and Bieke Zaman. 2018. Algorithmic transparency and accountability in practice. Jef Ausloos Pierre Dewitte David Geerts Peggy Valcke and Bieke Zaman. 2018. Algorithmic transparency and accountability in practice."},{"key":"e_1_3_2_1_5_1","volume-title":"Notre Dame Law Review 94","author":"Bambauer Jane","year":"2018","unstructured":"Jane Bambauer and Tal Zarsky . 2018 . The Algorithm Game . Notre Dame Law Review 94 , (2018), 49. Jane Bambauer and Tal Zarsky. 2018. The Algorithm Game. Notre Dame Law Review 94, (2018), 49."},{"key":"e_1_3_2_1_6_1","unstructured":"Ruha Benjamin. 2020. Race after technology: Abolitionist tools for the new Jim code. (2020). Ruha Benjamin. 2020. Race after technology: Abolitionist tools for the new Jim code. (2020)."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-017-0263-5"},{"key":"e_1_3_2_1_8_1","first-page":"1","article-title":"Emerging challenges in AI and the need for AI ethics education","volume":"1","author":"Borenstein Jason","year":"2021","unstructured":"Jason Borenstein and Ayanna Howard . 2021 . 
Emerging challenges in AI and the need for AI ethics education . AI Ethics 1 , 1 (February 2021), 61\u201365. DOI:https:\/\/doi.org\/10.1007\/s43681-020-00002-7 10.1007\/s43681-020-00002-7 Jason Borenstein and Ayanna Howard. 2021. Emerging challenges in AI and the need for AI ethics education. AI Ethics 1, 1 (February 2021), 61\u201365. DOI:https:\/\/doi.org\/10.1007\/s43681-020-00002-7","journal-title":"AI Ethics"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951720983865"},{"key":"e_1_3_2_1_10_1","article-title":"The AI Transparency Paradox","author":"Burt Andrew","year":"2019","unstructured":"Andrew Burt . 2019 . The AI Transparency Paradox . Harvard Business Review. Retrieved April 30, 2023 from https:\/\/hbr.org\/2019\/12\/the-ai-transparency-paradox Andrew Burt. 2019. The AI Transparency Paradox. Harvard Business Review. Retrieved April 30, 2023 from https:\/\/hbr.org\/2019\/12\/the-ai-transparency-paradox","journal-title":"Harvard Business Review. Retrieved"},{"key":"e_1_3_2_1_11_1","volume-title":"The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems. In 2015 International Conference on Healthcare Informatics, IEEE","author":"Bussone Adrian","year":"2015","unstructured":"Adrian Bussone , Simone Stumpf , and Dympna O'Sullivan . 2015 . The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems. In 2015 International Conference on Healthcare Informatics, IEEE , Dallas, TX, USA, 160\u2013169. DOI:https:\/\/doi.org\/10.1109\/ICHI. 2015.26 10.1109\/ICHI.2015.26 Adrian Bussone, Simone Stumpf, and Dympna O'Sullivan. 2015. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems. In 2015 International Conference on Healthcare Informatics, IEEE, Dallas, TX, USA, 160\u2013169. DOI:https:\/\/doi.org\/10.1109\/ICHI.2015.26"},{"key":"e_1_3_2_1_12_1","volume-title":"Is Tricking a Robot Hacking? 
Tech Policy Lab (January","author":"Calo Ryan","year":"2018","unstructured":"Ryan Calo , Ivan Evtimov , Earlence Fernandes , Tadayoshi Kohno , and David O'Hair . 2018. Is Tricking a Robot Hacking? Tech Policy Lab (January 2018 ). Retrieved from https:\/\/digitalcommons.law.uw.edu\/techlab\/5 Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno, and David O'Hair. 2018. Is Tricking a Robot Hacking? Tech Policy Lab (January 2018). Retrieved from https:\/\/digitalcommons.law.uw.edu\/techlab\/5"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-018-0003-2"},{"key":"e_1_3_2_1_14_1","volume-title":"Reflections on Artificial Intelligence for Humanity","author":"Chatila Raja","unstructured":"Raja Chatila , Virginia Dignum , Michael Fisher , Fosca Giannotti , Katharina Morik , Stuart Russell , and Karen Yeung . 2021. Trustworthy AI . In Reflections on Artificial Intelligence for Humanity , Bertrand Braunschweig and Malik Ghallab (eds.). Springer International Publishing , Cham , 13\u201339. DOI:https:\/\/doi.org\/10.1007\/978-3-030-69128-8_2 10.1007\/978-3-030-69128-8_2 Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, Stuart Russell, and Karen Yeung. 2021. Trustworthy AI. In Reflections on Artificial Intelligence for Humanity, Bertrand Braunschweig and Malik Ghallab (eds.). Springer International Publishing, Cham, 13\u201339. DOI:https:\/\/doi.org\/10.1007\/978-3-030-69128-8_2"},{"key":"e_1_3_2_1_15_1","volume-title":"The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810","author":"Chouldechova Alexandra","year":"2018","unstructured":"Alexandra Chouldechova and Aaron Roth . 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810 ( 2018 ). Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. 
arXiv preprint arXiv:1810.08810 (2018)."},{"key":"e_1_3_2_1_16_1","volume-title":"Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4","author":"Coeckelbergh Mark","year":"2020","unstructured":"Mark Coeckelbergh . 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4 ( 2020 ), 2051\u20132068. Mark Coeckelbergh. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4 (2020), 2051\u20132068."},{"key":"e_1_3_2_1_17_1","volume-title":"Strandburg","author":"Cofone Ignacio","year":"2019","unstructured":"Ignacio Cofone and Katherine J . Strandburg . 2019 . Strategic Games and Algorithmic Secrecy . DOI:https:\/\/doi.org\/10.2139\/ssrn.3440878 10.2139\/ssrn.3440878 Ignacio Cofone and Katherine J. Strandburg. 2019. Strategic Games and Algorithmic Secrecy. DOI:https:\/\/doi.org\/10.2139\/ssrn.3440878"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11257-008-9051-3"},{"key":"e_1_3_2_1_19_1","volume-title":"Philosophy of Science 87","author":"Creel Kathleen A","year":"2020","unstructured":"Kathleen A Creel . 2020 . Transparency in Complex Computational Systems . Philosophy of Science 87 , (January 2020), 568\u2013589. Kathleen A Creel. 2020. Transparency in Complex Computational Systems. Philosophy of Science 87, (January 2020), 568\u2013589."},{"key":"e_1_3_2_1_20_1","volume-title":"Amazon scraps secret AI recruiting tool that showed bias against women","author":"Dastin Jeffrey","year":"2022","unstructured":"Jeffrey Dastin . 2018. Amazon scraps secret AI recruiting tool that showed bias against women . Reuters . Retrieved August 2, 2022 from https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G Jeffrey Dastin. 2018. 
Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved August 2, 2022 from https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1037\/xge0000033"},{"key":"e_1_3_2_1_22_1","volume-title":"The Economist. Retrieved","author":"Doctorow Cory","year":"2019","unstructured":"Cory Doctorow . 2019 . Regulating Big Tech makes them stronger, so they need competition instead . The Economist. Retrieved August 17, 2022 from https:\/\/www.economist.com\/open-future\/2019\/06\/06\/regulating-big-tech-makes-them-stronger-so-they-need-competition-instead Cory Doctorow. 2019. Regulating Big Tech makes them stronger, so they need competition instead. The Economist. Retrieved August 17, 2022 from https:\/\/www.economist.com\/open-future\/2019\/06\/06\/regulating-big-tech-makes-them-stronger-so-they-need-competition-instead"},{"key":"e_1_3_2_1_23_1","volume-title":"Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim . 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 ( 2017 ). Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)."},{"key":"e_1_3_2_1_24_1","volume-title":"Duke L. & Tech. Rev. 16","author":"Edwards Lilian","year":"2017","unstructured":"Lilian Edwards and Michael Veale . 2017 . Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for . Duke L. & Tech. Rev. 16 , (2017), 18. Lilian Edwards and Michael Veale. 2017. Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke L. & Tech. Rev. 
16, (2017), 18."},{"key":"e_1_3_2_1_25_1","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1\u201319","author":"Ehsan Upol","unstructured":"Upol Ehsan , Q. Vera Liao , Michael Muller , Mark O. Riedl , and Justin D. Weisz . 2021. Expanding Explainability: Towards Social Transparency in AI systems . In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1\u201319 . DOI:https:\/\/doi.org\/10.1145\/3411764.3445188 10.1145\/3411764.3445188 Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. 2021. Expanding Explainability: Towards Social Transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1\u201319. DOI:https:\/\/doi.org\/10.1145\/3411764.3445188"},{"key":"e_1_3_2_1_26_1","volume-title":"Riedl","author":"Ehsan Upol","year":"2020","unstructured":"Upol Ehsan and Mark O . Riedl . 2020 . Human-Centered Explainable AI : Towards a Reflective Sociotechnical Approach. In HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence (Lecture Notes in Computer Science), Springer International Publishing, Cham, 449\u2013466. DOI:https:\/\/doi.org\/10.1007\/978-3-030-60117-1_33 10.1007\/978-3-030-60117-1_33 Upol Ehsan and Mark O. Riedl. 2020. Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence (Lecture Notes in Computer Science), Springer International Publishing, Cham, 449\u2013466. DOI:https:\/\/doi.org\/10.1007\/978-3-030-60117-1_33"},{"key":"e_1_3_2_1_27_1","first-page":"4","article-title":"Transparency and the Black Box Problem","volume":"34","author":"von Eschenbach Warren J.","year":"2021","unstructured":"Warren J. von Eschenbach . 2021 . Transparency and the Black Box Problem : Why We Do Not Trust AI. Philos. Technol. 
34 , 4 (December 2021), 1607\u20131622. DOI:https:\/\/doi.org\/10.1007\/s13347-021-00477-0 10.1007\/s13347-021-00477-0 Warren J. von Eschenbach. 2021. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 34, 4 (December 2021), 1607\u20131622. DOI:https:\/\/doi.org\/10.1007\/s13347-021-00477-0","journal-title":"Why We Do Not Trust AI. Philos. Technol."},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3328778.3366825"},{"key":"e_1_3_2_1_29_1","volume-title":"Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI","author":"Fjeld Jessica","year":"2020","unstructured":"Jessica Fjeld , Nele Achten , Hannah Hilligoss , Adam Nagy , and Madhulika Srikumar . 2020. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI . Berkman Klein Center Research Publication 2020 \u20131 (2020). Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication 2020\u20131 (2020)."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0055-y"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-018-9482-5"},{"key":"e_1_3_2_1_32_1","volume-title":"For truly ethical AI, its research must be independent from big tech. The Guardian. Retrieved","author":"Gebru Timnit","year":"2022","unstructured":"Timnit Gebru . 2021. For truly ethical AI, its research must be independent from big tech. The Guardian. Retrieved May 3, 2022 from https:\/\/www.theguardian.com\/commentisfree\/2021\/dec\/06\/google-silicon-valley-ai-timnit-gebru Timnit Gebru. 2021. For truly ethical AI, its research must be independent from big tech. The Guardian. 
Retrieved May 3, 2022 from https:\/\/www.theguardian.com\/commentisfree\/2021\/dec\/06\/google-silicon-valley-ai-timnit-gebru"},{"key":"e_1_3_2_1_33_1","volume-title":"Public Trust in Artificial Intelligence Starts With Institutional Reform | News & Commentary","author":"Grant Crystal","year":"2022","unstructured":"Crystal Grant and Kath Xu. 2021. Public Trust in Artificial Intelligence Starts With Institutional Reform | News & Commentary . American Civil Liberties Union . Retrieved August 5, 2022 from https:\/\/www.aclu.org\/news\/national-security\/public-trust-in-artificial-intelligence-starts-with-institutional-reform Crystal Grant and Kath Xu. 2021. Public Trust in Artificial Intelligence Starts With Institutional Reform | News & Commentary. American Civil Liberties Union. Retrieved August 5, 2022 from https:\/\/www.aclu.org\/news\/national-security\/public-trust-in-artificial-intelligence-starts-with-institutional-reform"},{"key":"e_1_3_2_1_34_1","volume-title":"Proc. ACM Hum.-Comput. Interact. 3, CSCW (November","author":"Green Ben","year":"2019","unstructured":"Ben Green and Yiling Chen . 2019 . The Principles and Limits of Algorithm-in-the-Loop Decision Making . Proc. ACM Hum.-Comput. Interact. 3, CSCW (November 2019), 1\u201324. DOI:https:\/\/doi.org\/10.1145\/3359152 10.1145\/3359152 Ben Green and Yiling Chen. 2019. The Principles and Limits of Algorithm-in-the-Loop Decision Making. Proc. ACM Hum.-Comput. Interact. 3, CSCW (November 2019), 1\u201324. DOI:https:\/\/doi.org\/10.1145\/3359152"},{"key":"e_1_3_2_1_35_1","volume-title":"Radical technologies: The design of everyday life","author":"Greenfield Adam","unstructured":"Adam Greenfield . 2017. Radical technologies: The design of everyday life . Verso Books . Adam Greenfield. 2017. Radical technologies: The design of everyday life. Verso Books."},{"key":"e_1_3_2_1_36_1","unstructured":"David Gunning. 2017. Explainable Artificial Intelligence (XAI). In DARPA\/I20 Project. David Gunning. 2017. 
Explainable Artificial Intelligence (XAI). In DARPA\/I20 Project."},{"key":"e_1_3_2_1_37_1","volume-title":"Accenture Insights. Retrieved","author":"Haenen Axel","year":"2020","unstructured":"Axel Haenen . 2020 . AI transparency in financial services . Accenture Insights. Retrieved September 26, 2022 from https:\/\/www.accenture.com\/nl-en\/blogs\/insights\/ai-transparency-requirements Axel Haenen. 2020. AI transparency in financial services. Accenture Insights. Retrieved September 26, 2022 from https:\/\/www.accenture.com\/nl-en\/blogs\/insights\/ai-transparency-requirements"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-020-09517-8"},{"key":"e_1_3_2_1_39_1","article-title":"The messy, secretive reality behind OpenAI's bid to save the world","author":"Hao Karen","year":"2020","unstructured":"Karen Hao . 2020 . The messy, secretive reality behind OpenAI's bid to save the world . MIT Technology Review. Retrieved April 30, 2023 from https:\/\/www.technologyreview.com\/2020\/02\/17\/844721\/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality\/ Karen Hao. 2020. The messy, secretive reality behind OpenAI's bid to save the world. MIT Technology Review. Retrieved April 30, 2023 from https:\/\/www.technologyreview.com\/2020\/02\/17\/844721\/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality\/","journal-title":"MIT Technology Review. Retrieved"},{"key":"e_1_3_2_1_40_1","article-title":"We read the paper that forced Timnit Gebru out of Google. Here's what it says","author":"Hao Karen","year":"2020","unstructured":"Karen Hao . 2020 . We read the paper that forced Timnit Gebru out of Google. Here's what it says . MIT Technology Review. Retrieved September 21, 2022 from https:\/\/www.technologyreview.com\/2020\/12\/04\/1013294\/google-ai-ethics-research-paper-forced-out-timnit-gebru\/ Karen Hao. 2020. We read the paper that forced Timnit Gebru out of Google. Here's what it says. 
MIT Technology Review. Retrieved September 21, 2022 from https:\/\/www.technologyreview.com\/2020\/12\/04\/1013294\/google-ai-ethics-research-paper-forced-out-timnit-gebru\/","journal-title":"MIT Technology Review. Retrieved"},{"key":"e_1_3_2_1_41_1","article-title":"Inside the fight to reclaim AI from Big Tech's control","author":"Hao Karen","year":"2021","unstructured":"Karen Hao . 2021 . Inside the fight to reclaim AI from Big Tech's control . MIT Technology Review. Retrieved August 17, 2022 from https:\/\/www.technologyreview.com\/2021\/06\/14\/1026148\/ai-big-tech-timnit-gebru-paper-ethics\/ Karen Hao. 2021. Inside the fight to reclaim AI from Big Tech's control. MIT Technology Review. Retrieved August 17, 2022 from https:\/\/www.technologyreview.com\/2021\/06\/14\/1026148\/ai-big-tech-timnit-gebru-paper-ethics\/","journal-title":"MIT Technology Review. Retrieved"},{"key":"e_1_3_2_1_42_1","volume-title":"AI & SOCIETY 2020","author":"Hollanek Tomasz","year":"2020","unstructured":"Tomasz Hollanek . 2020 . AI transparency: a matter of reconciling design with critique . AI & SOCIETY 2020 (2020), 1\u20139. DOI:https:\/\/doi.org\/10.1007\/s00146-020-01110-y 10.1007\/s00146-020-01110-y Tomasz Hollanek. 2020. AI transparency: a matter of reconciling design with critique. AI & SOCIETY 2020 (2020), 1\u20139. DOI:https:\/\/doi.org\/10.1007\/s00146-020-01110-y"},{"key":"e_1_3_2_1_43_1","first-page":"3","article-title":"Tech Ethics: Speaking Ethics to Power, or Power Speaking Ethics","volume":"2","author":"Hu Lily","year":"2021","unstructured":"Lily Hu . 2021 . Tech Ethics: Speaking Ethics to Power, or Power Speaking Ethics ? Journal of Social Computing 2 , 3 (September 2021), 238\u2013248. DOI:https:\/\/doi.org\/10.23919\/JSC.2021.0033 10.23919\/JSC.2021.0033 Lily Hu. 2021. Tech Ethics: Speaking Ethics to Power, or Power Speaking Ethics? Journal of Social Computing 2, 3 (September 2021), 238\u2013248. 
DOI:https:\/\/doi.org\/10.23919\/JSC.2021.0033","journal-title":"Journal of Social Computing"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0088-2"},{"key":"e_1_3_2_1_45_1","first-page":"1","article-title":"Generic Moral Grounding","volume":"23","author":"Jonker Julian","year":"2020","unstructured":"Julian Jonker . 2020 . Generic Moral Grounding . Ethic Theory Moral Prac 23 , 1 (February 2020), 23\u201338. DOI:https:\/\/doi.org\/10.1007\/s10677-020-10074-3 10.1007\/s10677-020-10074-3 Julian Jonker. 2020. Generic Moral Grounding. Ethic Theory Moral Prac 23, 1 (February 2020), 23\u201338. DOI:https:\/\/doi.org\/10.1007\/s10677-020-10074-3","journal-title":"Ethic Theory Moral Prac"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Shelly Kagan. 1992. The structure of normative ethics. Philosophical perspectives 6 (1992) 223\u2013242. Shelly Kagan. 1992. The structure of normative ethics. Philosophical perspectives 6 (1992) 223\u2013242.","DOI":"10.2307\/2214246"},{"key":"e_1_3_2_1_47_1","volume-title":"The right to explanation, explained","author":"Kaminski Margot E","unstructured":"Margot E Kaminski . 2021. The right to explanation, explained . In Sharon Sandeen, Christopher Rademacher and Ansgar Ohly (eds.). Edward Elgar Publishing , 22. Margot E Kaminski. 2021. The right to explanation, explained. In Sharon Sandeen, Christopher Rademacher and Ansgar Ohly (eds.). Edward Elgar Publishing, 22."},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1747-9991.2008.00196.x"},{"key":"e_1_3_2_1_50_1","volume-title":"Routledge","author":"Kim Tae Wan","year":"2021","unstructured":"Tae Wan Kim and Bryan R . Routledge . 2021 . Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach. Business Ethics Quarterly ( 2021), 1\u201328. 
DOI:https:\/\/doi.org\/10.2139\/ssrn.3716519 10.2139\/ssrn.3716519 Tae Wan Kim and Bryan R. Routledge. 2021. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach. Business Ethics Quarterly (2021), 1\u201328. DOI:https:\/\/doi.org\/10.2139\/ssrn.3716519"},{"key":"e_1_3_2_1_51_1","volume-title":"Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, Virtual Event Canada, 262\u2013271","author":"Knowles Bran","unstructured":"Bran Knowles and John T. Richards . 2021. The Sanction of Authority: Promoting Public Trust in AI . In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, Virtual Event Canada, 262\u2013271 . DOI:https:\/\/doi.org\/10.1145\/3442188.3445890 10.1145\/3442188.3445890 Bran Knowles and John T. Richards. 2021. The Sanction of Authority: Promoting Public Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, Virtual Event Canada, 262\u2013271. DOI:https:\/\/doi.org\/10.1145\/3442188.3445890"},{"key":"e_1_3_2_1_52_1","volume-title":"ML and Associated Algorithms. Social Science Research Network","author":"Koshiyama Adriano","unstructured":"Adriano Koshiyama , Emre Kazim , Philip Treleaven , Pete Rai , Lukasz Szpruch , Giles Pavey , Ghazi Ahamat , Franziska Leutner , Randy Goebel , Andrew Knight , Janet Adams , Christina Hitrova , Jeremy Barnett , Parashkev Nachev , David Barber , Tomas Chamorro-Premuzic , Konstantin Klemmer , Miro Gregorovic , Shakeel Khan , and Elizabeth Lomas . 2021. Towards Algorithm Auditing: A Survey on Managing Legal, Ethical and Technological Risks of AI , ML and Associated Algorithms. Social Science Research Network , Rochester, NY . 
DOI:https:\/\/doi.org\/10.2139\/ssrn.3778998 10.2139\/ssrn.3778998 Adriano Koshiyama, Emre Kazim, Philip Treleaven, Pete Rai, Lukasz Szpruch, Giles Pavey, Ghazi Ahamat, Franziska Leutner, Randy Goebel, Andrew Knight, Janet Adams, Christina Hitrova, Jeremy Barnett, Parashkev Nachev, David Barber, Tomas Chamorro-Premuzic, Konstantin Klemmer, Miro Gregorovic, Shakeel Khan, and Elizabeth Lomas. 2021. Towards Algorithm Auditing: A Survey on Managing Legal, Ethical and Technological Risks of AI, ML and Associated Algorithms. Social Science Research Network, Rochester, NY. DOI:https:\/\/doi.org\/10.2139\/ssrn.3778998"},{"key":"e_1_3_2_1_53_1","volume-title":"arXiv","author":"Lazar Seth","year":"2022","unstructured":"Seth Lazar . 2022. Legitimacy, Authority, and the Political Value of Explanations . arXiv ( 2022 ), 21. Seth Lazar. 2022. Legitimacy, Authority, and the Political Value of Explanations. arXiv (2022), 21."},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"crossref","first-page":"e12813","DOI":"10.1111\/phc3.12813","article-title":"Well-being, part 2: Theories of well-being","volume":"17","author":"Lin Eden","year":"2022","unstructured":"Eden Lin . 2022 . Well-being, part 2: Theories of well-being . Philosophy Compass 17 , 2 (2022), e12813 . DOI:https:\/\/doi.org\/10.1111\/phc3.12813 10.1111\/phc3.12813 Eden Lin. 2022. Well-being, part 2: Theories of well-being. Philosophy Compass 17, 2 (2022), e12813. DOI:https:\/\/doi.org\/10.1111\/phc3.12813","journal-title":"Philosophy Compass"},{"key":"e_1_3_2_1_55_1","first-page":"1","article-title":"Explainable AI","volume":"23","author":"Linardatos Pantelis","year":"2020","unstructured":"Pantelis Linardatos , Vasilis Papastefanopoulos , and Sotiris Kotsiantis . 2020 . Explainable AI : A Review of Machine Learning Interpretability Methods. Entropy 23 , 1 (December 2020), 18. DOI:https:\/\/doi.org\/10.3390\/e23010018 10.3390\/e23010018 Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2020. 
Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 23, 1 (December 2020), 18. DOI:https:\/\/doi.org\/10.3390\/e23010018","journal-title":"A Review of Machine Learning Interpretability Methods. Entropy"},{"key":"e_1_3_2_1_56_1","volume-title":"The Mythos of Model Interpretability. ACMQueue 16, 3","author":"Lipton Zachary","year":"2019","unstructured":"Zachary Lipton . 2019. The Mythos of Model Interpretability. ACMQueue 16, 3 ( 2019 ). Retrieved June 1, 2021 from https:\/\/queue.acm.org\/detail.cfm?id=3241340 Zachary Lipton. 2019. The Mythos of Model Interpretability. ACMQueue 16, 3 (2019). Retrieved June 1, 2021 from https:\/\/queue.acm.org\/detail.cfm?id=3241340"},{"key":"e_1_3_2_1_57_1","volume-title":"Tae Wan Kim, and David Danks","author":"Lu Joy","year":"2019","unstructured":"Joy Lu , Dokyun Lee , Tae Wan Kim, and David Danks . 2019 . Good Explanation for Algorithmic Transparency . Available at SSRN 3503603 (2019). Joy Lu, Dokyun Lee, Tae Wan Kim, and David Danks. 2019. Good Explanation for Algorithmic Transparency. Available at SSRN 3503603 (2019)."},{"key":"e_1_3_2_1_58_1","volume-title":"Proc. ACM Hum.-Comput. Interact. 5, CSCW1 (April","author":"Lyons Henrietta","year":"2021","unstructured":"Henrietta Lyons , Eduardo Velloso , and Tim Miller . 2021 . Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions . Proc. ACM Hum.-Comput. Interact. 5, CSCW1 (April 2021), 106:1-106:25. DOI:https:\/\/doi.org\/10.1145\/3449180 10.1145\/3449180 Henrietta Lyons, Eduardo Velloso, and Tim Miller. 2021. Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions. Proc. ACM Hum.-Comput. Interact. 5, CSCW1 (April 2021), 106:1-106:25. DOI:https:\/\/doi.org\/10.1145\/3449180"},{"key":"e_1_3_2_1_59_1","volume-title":"Retrieved","year":"2018","unstructured":"McKinsey. 2018 . Adoption of AI advances, but foundational barriers remain | McKinsey . 
Retrieved September 23, 2021 from https:\/\/www.mckinsey.com\/featured-insights\/artificial-intelligence\/ai-adoption-advances-but-foundational-barriers-remain McKinsey. 2018. Adoption of AI advances, but foundational barriers remain | McKinsey. Retrieved September 23, 2021 from https:\/\/www.mckinsey.com\/featured-insights\/artificial-intelligence\/ai-adoption-advances-but-foundational-barriers-remain"},{"key":"e_1_3_2_1_60_1","first-page":"11","article-title":"Security by obscurity","volume":"46","author":"Mercuri Rebecca T.","year":"2003","unstructured":"Rebecca T. Mercuri and Peter G. Neumann . 2003 . Security by obscurity . Commun. ACM 46 , 11 (November 2003), 160. DOI:https:\/\/doi.org\/10.1145\/948383.948413 10.1145\/948383.948413 Rebecca T. Mercuri and Peter G. Neumann. 2003. Security by obscurity. Commun. ACM 46, 11 (November 2003), 160. DOI:https:\/\/doi.org\/10.1145\/948383.948413","journal-title":"Commun. ACM"},{"key":"e_1_3_2_1_61_1","volume-title":"Rohit Kumar Singh, and Claire Hannibal","author":"Modgil Sachin","year":"2021","unstructured":"Sachin Modgil , Rohit Kumar Singh, and Claire Hannibal . 2021 . Artificial intelligence for supply chain resilience: learning from Covid-19. The International Journal of Logistics Management ( 2021). Sachin Modgil, Rohit Kumar Singh, and Claire Hannibal. 2021. Artificial intelligence for supply chain resilience: learning from Covid-19. The International Journal of Logistics Management (2021)."},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"crossref","first-page":"598","DOI":"10.1080\/00048409612347551","article-title":"Welfarism in Moral Theory","volume":"74","author":"Moore Andrew","year":"1996","unstructured":"Andrew Moore and Roger Crisp . 1996 . Welfarism in Moral Theory . Australasian Journal of Philosophy 74 , 4 (1996), 598 \u2013 613 . DOI:https:\/\/doi.org\/10.1080\/00048409612347551 10.1080\/00048409612347551 Andrew Moore and Roger Crisp. 1996. Welfarism in Moral Theory. 
Australasian Journal of Philosophy 74, 4 (1996), 598\u2013613. DOI:https:\/\/doi.org\/10.1080\/00048409612347551","journal-title":"Australasian Journal of Philosophy"},{"key":"e_1_3_2_1_63_1","volume-title":"Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Social Science Research Network","author":"Mulligan Deirdre K.","unstructured":"Deirdre K. Mulligan , Daniel Kluttz , and Nitin Kohli . 2019. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Social Science Research Network , Rochester, NY . DOI:https:\/\/doi.org\/10.2139\/ssrn.3311894 10.2139\/ssrn.3311894 Deirdre K. Mulligan, Daniel Kluttz, and Nitin Kohli. 2019. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. Social Science Research Network, Rochester, NY. DOI:https:\/\/doi.org\/10.2139\/ssrn.3311894"},{"key":"e_1_3_2_1_64_1","volume-title":"The uselessness of AI ethics. AI and Ethics","author":"Munn Luke","year":"2022","unstructured":"Luke Munn . 2022. The uselessness of AI ethics. AI and Ethics ( 2022 ), 1\u20139. Luke Munn. 2022. The uselessness of AI ethics. AI and Ethics (2022), 1\u20139."},{"key":"e_1_3_2_1_65_1","volume-title":"Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Minds and Machines","author":"Narayanan Devesh","year":"2023","unstructured":"Devesh Narayanan and Zhi Ming Tan . 2023. Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Minds and Machines ( 2023 ), 1\u201328. Devesh Narayanan and Zhi Ming Tan. 2023. Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Minds and Machines (2023), 1\u201328."},{"key":"#cr-split#-e_1_3_2_1_66_1.1","doi-asserted-by":"crossref","unstructured":"Claudio Novelli Mariarosaria Taddeo and Luciano Floridi. 2022. Accountability in Artificial Intelligence: What It Is and How It Works. 
DOI:https:\/\/doi.org\/10.2139\/ssrn.4180366","DOI":"10.2139\/ssrn.4180366"},{"key":"#cr-split#-e_1_3_2_1_66_1.2","doi-asserted-by":"crossref","unstructured":"Claudio Novelli Mariarosaria Taddeo and Luciano Floridi. 2022. Accountability in Artificial Intelligence: What It Is and How It Works. DOI:https:\/\/doi.org\/10.2139\/ssrn.4180366","DOI":"10.2139\/ssrn.4180366"},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"crossref","unstructured":"Onora O'neill. 2006. Transparency and ethics of communication. In Transparency: The Key to Better Governance? British Academy.","DOI":"10.5871\/bacad\/9780197263839.003.0005"},{"key":"e_1_3_2_1_68_1","volume-title":"Wired. Retrieved","author":"Pardes Arielle","year":"2022","unstructured":"Arielle Pardes. 2022. How Job Applicants Try to Hack R\u00e9sum\u00e9-Reading Software. Wired. Retrieved August 24, 2022 from https:\/\/www.wired.com\/story\/job-applicants-hack-resume-reading-software\/"},{"key":"e_1_3_2_1_69_1","volume-title":"The black box society","author":"Pasquale Frank","unstructured":"Frank Pasquale. 2015. The black box society. Harvard University Press."},{"key":"e_1_3_2_1_70_1","volume-title":"Brooke Erin Duffy, and Emily Hund","author":"Petre Caitlin","year":"2019","unstructured":"Caitlin Petre, Brooke Erin Duffy, and Emily Hund. 2019. \u201cGaming the System\u201d: Platform Paternalism and the Politics of Algorithmic Visibility. Social Media + Society 5, 4 (October 2019), 2056305119879995. DOI:https:\/\/doi.org\/10.1177\/2056305119879995"},{"key":"e_1_3_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-010-9253-3"},{"key":"e_1_3_2_1_72_1","volume-title":"There are two factions working to prevent AI dangers. Here's why they're deeply divided. Vox. Retrieved","author":"Piper Kelsey","year":"2022","unstructured":"Kelsey Piper. 2022. There are two factions working to prevent AI dangers. Here's why they're deeply divided. Vox. Retrieved September 3, 2022 from https:\/\/www.vox.com\/future-perfect\/2022\/8\/10\/23298108\/ai-dangers-ethics-alignment-present-future-risk"},{"key":"e_1_3_2_1_73_1","volume-title":"Artificial intelligence for law enforcement: challenges and opportunities","author":"Raaijmakers Stephan","year":"2019","unstructured":"Stephan Raaijmakers. 2019. Artificial intelligence for law enforcement: challenges and opportunities. IEEE security & privacy 17, 5 (2019), 74\u201377."},{"key":"e_1_3_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445914"},{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_3_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-021-00123-7"},{"key":"#cr-split#-e_1_3_2_1_77_1.1","doi-asserted-by":"crossref","unstructured":"J. 
Schoeffer and N. Kuehl. 2021. Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems. 153-157. DOI:https:\/\/doi.org\/10.1145\/3462204.3481742","DOI":"10.1145\/3462204.3481742"},{"key":"#cr-split#-e_1_3_2_1_77_1.2","doi-asserted-by":"crossref","unstructured":"J. Schoeffer and N. Kuehl. 2021. Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems. 153-157. DOI:https:\/\/doi.org\/10.1145\/3462204.3481742","DOI":"10.1145\/3462204.3481742"},{"key":"e_1_3_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.2307\/2025934"},{"key":"#cr-split#-e_1_3_2_1_79_1.1","doi-asserted-by":"crossref","unstructured":"Reza Shokri Martin Strobel and Yair Zick. 2021. On the Privacy Risks of Model Explanations. DOI:https:\/\/doi.org\/10.48550\/arXiv.1907.00164","DOI":"10.1145\/3461702.3462533"},{"key":"#cr-split#-e_1_3_2_1_79_1.2","doi-asserted-by":"crossref","unstructured":"Reza Shokri Martin Strobel and Yair Zick. 2021. On the Privacy Risks of Model Explanations. DOI:https:\/\/doi.org\/10.48550\/arXiv.1907.00164","DOI":"10.1145\/3461702.3462533"},{"key":"e_1_3_2_1_80_1","volume-title":"Wired. Retrieved","author":"Simonite Tom","year":"2020","unstructured":"Tom Simonite. 2020. The Dark Side of Big Tech's Funding for AI Research. Wired. Retrieved August 17, 2022 from https:\/\/www.wired.com\/story\/dark-side-big-tech-funding-ai-research\/"},{"key":"e_1_3_2_1_81_1","doi-asserted-by":"crossref","first-page":"3131","DOI":"10.1007\/s11098-017-0998-y","article-title":"Scalar consequentialism the right way","volume":"175","author":"Sinhababu Neil","year":"2018","unstructured":"Neil Sinhababu. 2018. Scalar consequentialism the right way. Philosophical Studies 175, 12 (2018), 3131\u20133144.","journal-title":"Philosophical Studies"},{"key":"e_1_3_2_1_82_1","volume-title":"Oxford University Press","author":"Sumner L. W.","unstructured":"L. W. Sumner. 1999. Welfare, Happiness, and Ethics. Oxford University Press, Oxford. DOI:https:\/\/doi.org\/10.1093\/acprof:oso\/9780198238782.001.0001"},{"key":"e_1_3_2_1_83_1","first-page":"6","article-title":"Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: the moderating role of high-performance work systems","volume":"33","author":"Suseno Yuliani","year":"2022","unstructured":"Yuliani Suseno, Chiachi Chang, Marek Hudik, and Eddy S. Fang. 2022. Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: the moderating role of high-performance work systems. The International Journal of Human Resource Management 33, 6 (March 2022), 1209\u20131236. DOI:https:\/\/doi.org\/10.1080\/09585192.2021.1931408","journal-title":"The International Journal of Human Resource Management"},{"key":"e_1_3_2_1_84_1","volume-title":"The New York Times. Retrieved","author":"Tabuchi Hiroko","year":"2017","unstructured":"Hiroko Tabuchi. 2017. How Climate Change Deniers Rise to the Top in Google Searches. The New York Times. Retrieved August 24, 2022 from https:\/\/www.nytimes.com\/2017\/12\/29\/climate\/google-search-climate-change.html"},{"key":"e_1_3_2_1_85_1","doi-asserted-by":"crossref","unstructured":"Niels Van Berkel Jorge Goncalves Daniel Russo Simo Hosio and Mikael B Skov. 2021. Effect of information presentation on fairness perceptions of machine learning predictors. 1\u201313.","DOI":"10.1145\/3411764.3445365"},{"key":"e_1_3_2_1_86_1","volume-title":"The Verge. Retrieved","author":"Vincent James","year":"2023","unstructured":"James Vincent. 2023. OpenAI co-founder on company's past approach to openly sharing research: \u201cWe were wrong.\u201d The Verge. Retrieved April 30, 2023 from https:\/\/www.theverge.com\/2023\/3\/15\/23640180\/openai-gpt-4-launch-closed-research-ilya-sutskever-interview"},{"key":"e_1_3_2_1_88_1","volume-title":"The Right to Explanation. 
Journal of Political Philosophy","author":"Vredenburgh Kaitlyn","year":"2021","unstructured":"Kaitlyn Vredenburgh. 2021. The Right to Explanation. Journal of Political Philosophy (2021). DOI:https:\/\/doi.org\/10.1111\/jopp.12262"},{"key":"e_1_3_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1093\/idpl\/ipx005"},{"key":"e_1_3_2_1_90_1","volume-title":"Weld and Gagan Bansal","author":"Daniel","year":"2018","unstructured":"Daniel S. Weld and Gagan Bansal. 2018. Intelligible Artificial Intelligence. arXiv:1803.04263 [cs] (October 2018). Retrieved March 2, 2022 from http:\/\/arxiv.org\/abs\/1803.04263"},{"key":"e_1_3_2_1_91_1","series-title":"Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11700 LNCS, (July","volume-title":"Transparency: Motivations and Challenges","author":"Weller Adrian","year":"2017","unstructured":"Adrian Weller. 2017. Transparency: Motivations and Challenges. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11700 LNCS, (July 2017), 23\u201340."},{"key":"e_1_3_2_1_92_1","volume-title":"The Guardian. Retrieved","author":"Wong Julia Carrie","year":"2020","unstructured":"Julia Carrie Wong. 2020. More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru. The Guardian. Retrieved September 21, 2022 from https:\/\/www.theguardian.com\/technology\/2020\/dec\/04\/timnit-gebru-google-ai-fired-diversity-ethics"},{"key":"e_1_3_2_1_93_1","volume-title":"Appropriate Trust in Machine Learning?","author":"Yang Fumeng","year":"2017","unstructured":"Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L Arendt. 2017. How Do Visual Explanations Foster End Users\u2019 Appropriate Trust in Machine Learning? (2017), 13."},{"key":"e_1_3_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1017\/psa.2021.13"},{"key":"e_1_3_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-018-0330-6"}],"event":{"name":"FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency","location":"Chicago IL USA","acronym":"FAccT '23"},"container-title":["2023 ACM Conference on Fairness, Accountability, and 
Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3593977","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3593013.3593977","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:48:02Z","timestamp":1750178882000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3593977"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":97,"alternative-id":["10.1145\/3593013.3593977","10.1145\/3593013"],"URL":"https:\/\/doi.org\/10.1145\/3593013.3593977","relation":{},"subject":[],"published":{"date-parts":[[2023,6,12]]},"assertion":[{"value":"2023-06-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}