{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,24]],"date-time":"2025-06-24T11:40:05Z","timestamp":1750765205183,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":67,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,23]]},"DOI":"10.1145\/3715275.3732161","type":"proceedings-article","created":{"date-parts":[[2025,6,23]],"date-time":"2025-06-23T17:01:18Z","timestamp":1750698078000},"page":"2437-2450","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Breaking Down Bias: On The Limits of Generalizable Pruning Strategies"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-3386-3683","authenticated-orcid":false,"given":"Sibo","family":"Ma","sequence":"first","affiliation":[{"name":"Stanford University, Stanford, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-2518-5263","authenticated-orcid":false,"given":"Alejandro","family":"Salinas","sequence":"additional","affiliation":[{"name":"Stanford University, Stanford, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7121-5696","authenticated-orcid":false,"given":"Julian","family":"Nyarko","sequence":"additional","affiliation":[{"name":"Stanford University, Stanford, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3938-0541","authenticated-orcid":false,"given":"Peter","family":"Henderson","sequence":"additional","affiliation":[{"name":"Princeton University, Princeton, New Jersey, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,6,23]]},"reference":[{"key":"e_1_3_3_2_2_2","unstructured":"Rishabh Adiga Besmira Nushi and Varun Chandrasekaran. 2024. Attention Speaks Volumes: Localizing and Mitigating Bias in Language Models. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2410.22517 (2024)."},{"key":"e_1_3_3_2_3_2","doi-asserted-by":"publisher","unstructured":"Chittaranjan Andrade. 2020. Mean Difference Standardized Mean Difference (SMD) and Their Use in Meta-Analysis: As Simple as It Gets. Primary Care Companion for CNS Disorders 81 5 (2020). https:\/\/doi.org\/10.4088\/JCP.20f13681","DOI":"10.4088\/JCP.20f13681"},{"key":"e_1_3_3_2_4_2","unstructured":"Xuechunzi Bai Angelina Wang Ilia Sucholutsky and Thomas\u00a0L. Griffiths. 2024. Measuring Implicit Bias in Explicitly Unbiased Large Language Models. arxiv:https:\/\/arXiv.org\/abs\/2402.04105\u00a0[cs.CY] https:\/\/arxiv.org\/abs\/2402.04105"},{"key":"e_1_3_3_2_5_2","doi-asserted-by":"crossref","unstructured":"Arne Bewersdorff Christian Hartmann Marie Hornberger Kathrin Se\u00dfler Maria Bannert Enkelejda Kasneci Gjergji Kasneci Xiaoming Zhai and Claudia Nerdel. 2025. Taking the next step with generative artificial intelligence: The transformative role of multimodal large language models in science education. Learning and Individual Differences 118 (2025) 102601.","DOI":"10.1016\/j.lindif.2024.102601"},{"key":"e_1_3_3_2_6_2","doi-asserted-by":"crossref","unstructured":"Emily Black John\u00a0Logan Koepke Pauline\u00a0T Kim Solon Barocas and Mingwei Hsu. 2024. Less discriminatory algorithms. Geo. LJ 113 (2024) 53.","DOI":"10.2139\/ssrn.4590481"},{"key":"e_1_3_3_2_7_2","unstructured":"Rishi Bommasani Drew\u00a0A Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael\u00a0S Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill et\u00a0al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2108.07258 (2021)."},{"key":"e_1_3_3_2_8_2","doi-asserted-by":"crossref","unstructured":"Angana Borah and Rada Mihalcea. 2024. Towards Implicit Bias Detection and Mitigation in Multi-Agent LLM Interactions. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2410.02584 (2024).","DOI":"10.18653\/v1\/2024.findings-emnlp.545"},{"key":"e_1_3_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Timothy Bresnahan. 2010. General purpose technologies. Handbook of the Economics of Innovation 2 (2010) 761\u2013791.","DOI":"10.1016\/S0169-7218(10)02002-2"},{"key":"e_1_3_3_2_10_2","doi-asserted-by":"crossref","unstructured":"William Cain. 2024. Prompting change: exploring prompt engineering in large language model AI and its potential to transform education. TechTrends 68 1 (2024) 47\u201357.","DOI":"10.1007\/s11528-023-00896-0"},{"key":"e_1_3_3_2_11_2","unstructured":"Ting-Yun Chang Jesse Thomason and Robin Jia. 2024. Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks. arxiv:https:\/\/arXiv.org\/abs\/2311.09060\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2311.09060"},{"key":"e_1_3_3_2_12_2","unstructured":"Ruchika Chavhan Ondrej Bohdal Yongshuo Zong Da Li and Timothy Hospedales. 2024. Memorized Images in Diffusion Models share a Subspace that can be Located and Deleted. arxiv:https:\/\/arXiv.org\/abs\/2406.18566\u00a0[cs.CV] https:\/\/arxiv.org\/abs\/2406.18566"},{"key":"e_1_3_3_2_13_2","unstructured":"Haolong Chen Hanzhi Chen Zijian Zhao Kaifeng Han Guangxu Zhu Yichen Zhao Ying Du Wei Xu and Qingjiang Shi. 2024. An overview of domain-specific foundation model: key technologies applications and challenges. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2409.04267 (2024)."},{"key":"e_1_3_3_2_14_2","doi-asserted-by":"publisher","unstructured":"Travis\u00a0L. Dixon. 2008. Crime News and Racialized Beliefs: Understanding the Relationship Between Local News Viewing and Perceptions of African Americans and Crime. Journal of Communication 58 1 (2008) 106\u2013125. 
https:\/\/doi.org\/10.1111\/j.1460-2466.2007.00376.x arXiv:https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/j.1460-2466.2007.00376.x","DOI":"10.1111\/j.1460-2466.2007.00376.x"},{"key":"e_1_3_3_2_15_2","volume-title":"Evaluating Feature Steering: A Case Study in Mitigating Social Biases","author":"Durmus Esin","year":"2024","unstructured":"Esin Durmus, Alex Tamkin, Jack Clark, Jerry Wei, Jonathan Marcus, Joshua Batson, Kunal Handa, Liane Lovitt, Meg Tong, Miles McCain, Oliver Rausch, Saffron Huang, Sam Bowman, Stuart Ritchie, Tom Henighan, and Deep Ganguli. 2024. Evaluating Feature Steering: A Case Study in Mitigating Social Biases. https:\/\/anthropic.com\/research\/evaluating-feature-steering"},{"key":"e_1_3_3_2_16_2","unstructured":"Tyna Eloundou Alex Beutel David\u00a0G. Robinson Keren Gu-Lemberg Anna-Luisa Brakman Pamela Mishkin Meghan Shah Johannes Heidecke Lilian Weng and Adam\u00a0Tauman Kalai. 2024. First-Person Fairness in Chatbots. arxiv:https:\/\/arXiv.org\/abs\/2410.19803\u00a0[cs.CY] https:\/\/arxiv.org\/abs\/2410.19803"},{"key":"e_1_3_3_2_17_2","doi-asserted-by":"crossref","unstructured":"Eva Erman and Markus Furendal. 2024. The democratization of global AI governance and the role of tech companies. Nature Machine Intelligence 6 3 (2024) 246\u2013248.","DOI":"10.1038\/s42256-024-00811-z"},{"key":"e_1_3_3_2_18_2","unstructured":"European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Brussels 21.4.2021 COM(2021) 206 final. https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=COM%3A2021%3A206%3AFIN."},{"key":"e_1_3_3_2_19_2","unstructured":"European Commission. 2024. Directive (EU) 2024\/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85\/374\/EEC (Text with EEA relevance). Strasbourg 23.10.2024 PE\/7\/2024\/REV\/1. 
https:\/\/eur-lex.europa.eu\/eli\/dir\/2024\/2853\/oj\/eng."},{"key":"e_1_3_3_2_20_2","doi-asserted-by":"crossref","unstructured":"Emilio Ferrara. 2023. Fairness and bias in artificial intelligence: A brief survey of sources impacts and mitigation strategies. Sci 6 1 (2023) 3.","DOI":"10.3390\/sci6010003"},{"key":"e_1_3_3_2_21_2","doi-asserted-by":"crossref","unstructured":"S\u00a0Michael Gaddis. 2017. How black are Lakisha and Jamal? Racial perceptions from names used in correspondence audit studies. Sociological Science 4 (2017) 469\u2013489.","DOI":"10.15195\/v4.a19"},{"key":"e_1_3_3_2_22_2","unstructured":"Deep Ganguli Amanda Askell Nicholas Schiefer Thomas\u00a0I Liao Kamil\u0117 Luko\u0161i\u016bt\u0117 Anna Chen Anna Goldie Azalia Mirhoseini Catherine Olsson Danny Hernandez et\u00a0al. 2023. The capacity for moral self-correction in large language models. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2302.07459 (2023)."},{"key":"e_1_3_3_2_23_2","unstructured":"Sanjana Gautam Pranav\u00a0Narayanan Venkit and Sourojit Ghosh. 2024. From melting pots to misrepresentations: Exploring harms in generative ai. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2403.10776 (2024)."},{"key":"e_1_3_3_2_24_2","doi-asserted-by":"crossref","unstructured":"Franklin\u00a0D. Gilliam and Shanto Iyengar. 2000. Prime Suspects: The Influence of Local Television News on the Viewing Public. American Journal of Political Science 44 3 (2000) 560\u2013573. http:\/\/www.jstor.org\/stable\/2669264","DOI":"10.2307\/2669264"},{"key":"e_1_3_3_2_25_2","doi-asserted-by":"crossref","unstructured":"Franklin\u00a0D. Gilliam Nicholas\u00a0A. Valentino and Matthew\u00a0N. Beckmann. 2002. Where You Live and What You Watch: The Impact of Racial Proximity and Local Television News on Attitudes about Race and Crime. Political Research Quarterly 55 4 (2002) 755\u2013780. 
http:\/\/www.jstor.org\/stable\/3088078","DOI":"10.1177\/106591290205500402"},{"key":"e_1_3_3_2_26_2","doi-asserted-by":"publisher","unstructured":"Ben Grunwald Julian Nyarko and John Rappaport. 2022. Police agencies on Facebook overreport on Black suspects. Proceedings of the National Academy of Sciences 119 45 (2022) e2203089119. https:\/\/doi.org\/10.1073\/pnas.2203089119 arXiv:https:\/\/www.pnas.org\/doi\/pdf\/10.1073\/pnas.2203089119","DOI":"10.1073\/pnas.2203089119"},{"key":"e_1_3_3_2_27_2","doi-asserted-by":"crossref","unstructured":"Philipp Hacker. 2023. The European AI liability directives\u2013Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review 51 (2023) 105871.","DOI":"10.1016\/j.clsr.2023.105871"},{"key":"e_1_3_3_2_28_2","doi-asserted-by":"crossref","unstructured":"Philipp Hacker Brent Mittelstadt Frederik\u00a0Zuiderveen Borgesius and Sandra Wachter. 2024. Generative discrimination: What happens when generative AI exhibits bias and what can be done about it. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2407.10329 (2024).","DOI":"10.2139\/ssrn.4877398"},{"key":"e_1_3_3_2_29_2","unstructured":"Amit Haim Alejandro Salinas and Julian Nyarko. 2024. What\u2019s in a Name? Auditing Large Language Models for Race and Gender Bias. arxiv:https:\/\/arXiv.org\/abs\/2402.14875\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2402.14875"},{"key":"e_1_3_3_2_30_2","unstructured":"Kunal Handa Alex Tamkin Miles McCain Saffron Huang Esin Durmus Sarah Heck Jared Mueller Jerry Hong Stuart Ritchie Tim Belonax Kevin\u00a0K. Troy Dario Amodei Jared Kaplan Jack Clark and Deep Ganguli. 2025. Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations. arxiv:https:\/\/arXiv.org\/abs\/2503.04761\u00a0[cs.CY] https:\/\/arxiv.org\/abs\/2503.04761"},{"key":"e_1_3_3_2_31_2","unstructured":"Anuj Jain. 2024. Ethical AI: A Policy Framework to Regulate Bias in Large Language Models. Ph.\u00a0D. 
Dissertation."},{"key":"e_1_3_3_2_32_2","doi-asserted-by":"crossref","unstructured":"Alex Kim Maximilian Muhn and Valeri Nikolaev. 2024. Financial statement analysis with large language models. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2407.17866 (2024).","DOI":"10.2139\/ssrn.4835311"},{"key":"e_1_3_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.4324\/9780429497896-16"},{"key":"e_1_3_3_2_34_2","first-page":"752","volume-title":"Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society","volume":"7","author":"Klyman Kevin","year":"2024","unstructured":"Kevin Klyman. 2024. Acceptable Use Policies for Foundation Models. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society, Vol.\u00a07. 752\u2013767."},{"key":"e_1_3_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3582269.3615599"},{"key":"e_1_3_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645366"},{"key":"e_1_3_3_2_37_2","doi-asserted-by":"crossref","unstructured":"Masab\u00a0A Mansoor Andrew\u00a0F Ibrahim David\u00a0J Grindem and Asad Baig. 2024. Evaluating the accuracy and reliability of large language models in assisting with pediatric differential diagnoses: A multicenter diagnostic study. medRxiv (2024) 2024\u201308.","DOI":"10.1101\/2024.08.09.24311777"},{"key":"e_1_3_3_2_38_2","unstructured":"Rohin Manvi Samar Khanna Marshall Burke David Lobell and Stefano Ermon. 2024. Large Language Models are Geographically Biased. arxiv:https:\/\/arXiv.org\/abs\/2402.02680\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2402.02680"},{"key":"e_1_3_3_2_39_2","doi-asserted-by":"crossref","unstructured":"Bertalan Mesk\u00f3 and Eric\u00a0J Topol. 2023. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ digital medicine 6 1 (2023) 120.","DOI":"10.1038\/s41746-023-00873-0"},{"key":"e_1_3_3_2_40_2","unstructured":"Le Monde. 2024. 
How AI is Shaking Up the Mental Health Community: \"Rather Than Pay for Another Session I\u2019d Go on ChatGPT\". https:\/\/www.lemonde.fr\/en\/pixels\/article\/2024\/08\/18\/how-ai-is-shaking-up-the-mental-health-community-rather-than-pay-for-another-session-i-d-go-on-chatgpt_6717874_13.html Accessed: 2025-01-22."},{"key":"e_1_3_3_2_41_2","unstructured":"Gon\u00e7alo Paulo Alex Mallen Caden Juang and Nora Belrose. 2024. Automatically Interpreting Millions of Features in Large Language Models. arxiv:https:\/\/arXiv.org\/abs\/2410.13928\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/2410.13928"},{"key":"e_1_3_3_2_42_2","doi-asserted-by":"publisher","unstructured":"David\u00a0S. Pedulla and Devah Pager. 2019. Race and Networks in the Job Search Process. American Sociological Review 84 6 (2019) 983\u20131012. https:\/\/doi.org\/10.1177\/0003122419883255 arXiv:10.1177\/0003122419883255","DOI":"10.1177\/0003122419883255"},{"key":"e_1_3_3_2_43_2","unstructured":"Pew Research Center. 2025. About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork Double the Share in 2023. https:\/\/www.pewresearch.org\/short-reads\/2025\/01\/15\/about-a-quarter-of-us-teens-have-used-chatgpt-for-schoolwork-double-the-share-in-2023\/ Accessed: 2025-01-22."},{"key":"e_1_3_3_2_44_2","doi-asserted-by":"crossref","unstructured":"Emma Pierson Camelia Simoiu Jan Overgoor Sam Corbett-Davies Daniel Jenson Amy Shoemaker Vignesh Ramachandran Phoebe Barghouty Cheryl Phillips Ravi Shroff et\u00a0al. 2020. A large-scale analysis of racial disparities in police stops across the United States. Nature human behaviour 4 7 (2020) 736\u2013745.","DOI":"10.1038\/s41562-020-0858-1"},{"key":"e_1_3_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.1017\/cfl.2024.14"},{"key":"e_1_3_3_2_46_2","doi-asserted-by":"publisher","unstructured":"Travis Riddle and Stacey Sinclair. 2019. Racial disparities in school-based disciplinary actions are associated with county-level rates of racial bias. 
Proceedings of the National Academy of Sciences 116 17 (2019) 8255\u20138260. https:\/\/doi.org\/10.1073\/pnas.1808307116 arXiv:https:\/\/www.pnas.org\/doi\/pdf\/10.1073\/pnas.1808307116","DOI":"10.1073\/pnas.1808307116"},{"key":"e_1_3_3_2_47_2","doi-asserted-by":"publisher","unstructured":"Vincent\u00a0J. Roscigno and Kayla Preito-Hodge. 2021. Racist Cops Vested \u201cBlue\u201d Interests or Both? Evidence from Four Decades of the General Social Survey. Socius 7 (2021) 2378023120980913. https:\/\/doi.org\/10.1177\/2378023120980913 arXiv:10.1177\/2378023120980913","DOI":"10.1177\/2378023120980913"},{"key":"e_1_3_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1145\/3617694.3623257"},{"key":"e_1_3_3_2_49_2","doi-asserted-by":"publisher","unstructured":"Bani Saluja and Zenobia Bryant. 2021. How Implicit Bias Contributes to Racial Disparities in Maternal Morbidity and Mortality in the United States. Journal of Women\u2019s Health 30 2 (2021) 270\u2013273. https:\/\/doi.org\/10.1089\/jwh.2020.8874 arXiv:10.1089\/jwh.2020.8874 PMID: 33237843.","DOI":"10.1089\/jwh.2020.8874"},{"key":"e_1_3_3_2_50_2","unstructured":"Preethi Seshadri and Seraphina Goldfarb-Tarrant. 2025. Who Does the Giant Number Pile Like Best: Analyzing Fairness in Hiring Contexts. arxiv:https:\/\/arXiv.org\/abs\/2501.04316\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2501.04316"},{"key":"e_1_3_3_2_51_2","unstructured":"Karan Singhal Tao Tu Juraj Gottweis Rory Sayres Ellery Wulczyn Mohamed Amin Le Hou Kevin Clark Stephen\u00a0R Pfohl Heather Cole-Lewis et\u00a0al. 2025. Toward expert-level medical question answering with large language models. Nature Medicine (2025) 1\u20138."},{"key":"e_1_3_3_2_52_2","unstructured":"Mingjie Sun Zhuang Liu Anna Bair and J.\u00a0Zico Kolter. 2024. A Simple and Effective Pruning Approach for Large Language Models. 
arxiv:https:\/\/arXiv.org\/abs\/2306.11695\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2306.11695"},{"key":"e_1_3_3_2_53_2","unstructured":"Alex Tamkin Miles McCain Kunal Handa Esin Durmus Liane Lovitt Ankur Rathi Saffron Huang Alfred Mountfield Jerry Hong Stuart Ritchie Michael Stern Brian Clarke Landon Goldberg Theodore\u00a0R. Sumers Jared Mueller William McEachen Wes Mitchell Shan Carter Jack Clark Jared Kaplan and Deep Ganguli. 2024. Clio: Privacy-Preserving Insights into Real-World AI Use. arxiv:https:\/\/arXiv.org\/abs\/2412.13678\u00a0[cs.CY] https:\/\/arxiv.org\/abs\/2412.13678"},{"key":"e_1_3_3_2_54_2","unstructured":"Adly Templeton Tom Conerly Jonathan Marcus Jack Lindsey Trenton Bricken Brian Chen Adam Pearce Craig Citro Emmanuel Ameisen Andy Jones Hoagy Cunningham Nicholas\u00a0L Turner Callum McDougall Monte MacDiarmid C.\u00a0Daniel Freeman Theodore\u00a0R. Sumers Edward Rees Joshua Batson Adam Jermyn Shan Carter Chris Olah and Tom Henighan. 2024. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. Transformer Circuits Thread (2024). https:\/\/transformer-circuits.pub\/2024\/scaling-monosemanticity\/index.html"},{"key":"e_1_3_3_2_55_2","unstructured":"The White House. 2023. Executive Order on Safe Secure and Trustworthy Artificial Intelligence. https:\/\/www.whitehouse.gov\/. Issued October 30 2023."},{"key":"e_1_3_3_2_56_2","doi-asserted-by":"crossref","unstructured":"Elena Voita David Talbot Fedor Moiseev Rico Sennrich and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting the rest can be pruned. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1905.09418 (2019).","DOI":"10.18653\/v1\/P19-1580"},{"key":"e_1_3_3_2_57_2","doi-asserted-by":"crossref","unstructured":"Gerhard Wagner. 2023. Liability Rules for the Digital Age: \u2013Aiming for the Brussels Effect\u2013. 
Journal of European Tort Law 13 3 (2023) 191\u2013243.","DOI":"10.1515\/jetl-2022-0012"},{"key":"e_1_3_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/hpca51647.2021.00018"},{"key":"e_1_3_3_2_59_2","unstructured":"Boyi Wei Kaixuan Huang Yangsibo Huang Tinghao Xie Xiangyu Qi Mengzhou Xia Prateek Mittal Mengdi Wang and Peter Henderson. 2024. Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. arxiv:https:\/\/arXiv.org\/abs\/2402.05162\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/2402.05162"},{"key":"e_1_3_3_2_60_2","doi-asserted-by":"publisher","unstructured":"Alice Xiang. 2024. Fairness & Privacy in an Age of Generative AI. Science and Technology Law Review 25 2 (Jun. 2024). https:\/\/doi.org\/10.52214\/stlr.v25i2.12765","DOI":"10.52214\/stlr.v25i2.12765"},{"key":"e_1_3_3_2_61_2","unstructured":"Fasheng Xu Xiaoyu Wang Wei Chen and Karen Xie. 2024. The Economics of AI Foundation Models: Openness Competition and Governance. Competition and Governance (August 11 2024) (2024)."},{"key":"e_1_3_3_2_62_2","unstructured":"Nakyeong Yang Taegwan Kang Jungkyu Choi Honglak Lee and Kyomin Jung. 2024. Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination. arxiv:https:\/\/arXiv.org\/abs\/2311.09627\u00a0[cs.AI] https:\/\/arxiv.org\/abs\/2311.09627"},{"key":"e_1_3_3_2_63_2","unstructured":"Abdelrahman Zayed Goncalo Mordido Samira Shabanian Ioana Baldini and Sarath Chandar. 2023. Fairness-Aware Structured Pruning in Transformers. arxiv:https:\/\/arXiv.org\/abs\/2312.15398\u00a0[cs.CL] https:\/\/arxiv.org\/abs\/2312.15398"},{"key":"e_1_3_3_2_64_2","doi-asserted-by":"crossref","unstructured":"Yi Zeng Kevin Klyman Andy Zhou Yu Yang Minzhou Pan Ruoxi Jia Dawn Song Percy Liang and Bo Li. 2024. AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies. 
arxiv:https:\/\/arXiv.org\/abs\/2406.17864\u00a0[cs.CY] https:\/\/arxiv.org\/abs\/2406.17864","DOI":"10.70777\/si.v1i1.10603"},{"key":"e_1_3_3_2_65_2","doi-asserted-by":"crossref","unstructured":"Yi Zeng Yu Yang Andy Zhou Jeffrey\u00a0Ziwei Tan Yuheng Tu Yifan Mai Kevin Klyman Minzhou Pan Ruoxi Jia Dawn Song et\u00a0al. 2024. Air-bench 2024: A safety benchmark based on risk categories from regulations and policies. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2407.17436 (2024).","DOI":"10.70777\/si.v1i1.10863"},{"key":"e_1_3_3_2_66_2","doi-asserted-by":"crossref","unstructured":"Zhengyan Zhang Fanchao Qi Zhiyuan Liu Qun Liu and Maosong Sun. 2021. Know what you don\u2019t need: Single-Shot Meta-Pruning for attention heads. AI Open 2 (2021) 36\u201342.","DOI":"10.1016\/j.aiopen.2021.05.003"},{"key":"e_1_3_3_2_67_2","unstructured":"Huaqin Zhao Zhengliang Liu Zihao Wu Yiwei Li Tianze Yang Peng Shu Shaochen Xu Haixing Dai Lin Zhao Gengchen Mai et\u00a0al. 2024. Revolutionizing finance with llms: An overview of applications and insights. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2401.11641 (2024)."},{"key":"e_1_3_3_2_68_2","unstructured":"Thomas\u00a0P. Zollo Nikita Rajaneesh Richard Zemel Talia\u00a0B. Gillis and Emily Black. 2024. Towards Effective Discrimination Testing for Generative AI. 
arxiv:https:\/\/arXiv.org\/abs\/2412.21052\u00a0[cs.LG] https:\/\/arxiv.org\/abs\/2412.21052"}],"event":{"name":"FAccT '25: The 2025 ACM Conference on Fairness, Accountability, and Transparency","acronym":"FAccT '25","location":"Athens Greece"},"container-title":["Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3715275.3732161","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,24]],"date-time":"2025-06-24T11:02:45Z","timestamp":1750762965000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715275.3732161"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,23]]},"references-count":67,"alternative-id":["10.1145\/3715275.3732161","10.1145\/3715275"],"URL":"https:\/\/doi.org\/10.1145\/3715275.3732161","relation":{},"subject":[],"published":{"date-parts":[[2025,6,23]]},"assertion":[{"value":"2025-06-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}