{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T05:12:26Z","timestamp":1755839546835,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":71,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T00:00:00Z","timestamp":1686528000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,6,12]]},"DOI":"10.1145\/3593013.3594004","type":"proceedings-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T14:40:46Z","timestamp":1686580846000},"page":"370-378","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["On the Independence of Association Bias and Empirical Fairness in Language Models"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4276-7114","authenticated-orcid":false,"given":"Laura","family":"Cabello","sequence":"first","affiliation":[{"name":"University of Copenhagen, Denmark"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2568-4360","authenticated-orcid":false,"given":"Anna Katrine","family":"J\u00f8rgensen","sequence":"additional","affiliation":[{"name":"University of Copenhagen, Denmark"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5250-4276","authenticated-orcid":false,"given":"Anders","family":"S\u00f8gaard","sequence":"additional","affiliation":[{"name":"University of Copenhagen, Denmark"}]}],"member":"320","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"International Conference on Machine Learning, ICML 2022","volume":"451","author":"Ali Ameen","year":"2022","unstructured":"Ameen Ali , Thomas Schnake , Oliver Eberle , Gr\u00e9goire Montavon , 
Klaus-Robert M\u00fcller , and Lior Wolf . 2022 . XAI for Transformers: Better Explanations through Conservative Propagation . In International Conference on Machine Learning, ICML 2022 , 17-23 July 2022, Baltimore, Maryland, USA(Proceedings of Machine Learning Research , Vol. 162), Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesv\u00e1ri, Gang Niu, and Sivan Sabato (Eds.). PMLR, 435\u2013 451 . https:\/\/proceedings.mlr.press\/v162\/ali22a.html Ameen Ali, Thomas Schnake, Oliver Eberle, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Lior Wolf. 2022. XAI for Transformers: Better Explanations through Conservative Propagation. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA(Proceedings of Machine Learning Research, Vol. 162), Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesv\u00e1ri, Gang Niu, and Sivan Sabato (Eds.). PMLR, 435\u2013451. https:\/\/proceedings.mlr.press\/v162\/ali22a.html"},{"key":"#cr-split#-e_1_3_2_1_2_1.1","unstructured":"Ameen Ali Thomas Schnake Oliver Eberle Gr\u00e9goire Montavon Klaus-Robert M\u00fcller and Lior Wolf. 2022. XAI for Transformers: Better Explanations through Conservative Propagation. https:\/\/doi.org\/10.48550\/ARXIV.2202.07304 10.48550\/ARXIV.2202.07304"},{"key":"#cr-split#-e_1_3_2_1_2_1.2","unstructured":"Ameen Ali Thomas Schnake Oliver Eberle Gr\u00e9goire Montavon Klaus-Robert M\u00fcller and Lior Wolf. 2022. XAI for Transformers: Better Explanations through Conservative Propagation. https:\/\/doi.org\/10.48550\/ARXIV.2202.07304"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"crossref","first-page":"623","DOI":"10.1017\/S0008197309990171","article-title":"Equality Law and Experimentation: The Positive Action Challenge","volume":"68","author":"Barmes Lizzie","year":"2009","unstructured":"Lizzie Barmes . 2009 . Equality Law and Experimentation: The Positive Action Challenge . The Cambridge Law Journal 68 , 3 (2009), 623 \u2013 654 . 
http:\/\/www.jstor.org\/stable\/40388838 Lizzie Barmes. 2009. Equality Law and Experimentation: The Positive Action Challenge. The Cambridge Law Journal 68, 3 (2009), 623\u2013654. http:\/\/www.jstor.org\/stable\/40388838","journal-title":"The Cambridge Law Journal"},{"key":"e_1_3_2_1_4_1","unstructured":"Solon Barocas Moritz Hardt and Arvind Narayanan. 2019. Fairness and Machine Learning. fairmlbook.org. http:\/\/www.fairmlbook.org.  Solon Barocas Moritz Hardt and Arvind Narayanan. 2019. Fairness and Machine Learning. fairmlbook.org. http:\/\/www.fairmlbook.org."},{"key":"e_1_3_2_1_5_1","volume-title":"Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Barcelona, Spain (Online), 1\u201316","author":"Bartl Marion","year":"2020","unstructured":"Marion Bartl , Malvina Nissim , and Albert Gatt . 2020 . Unmasking Contextual Stereotypes: Measuring and Mitigating BERT\u2019s Gender Bias . In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Barcelona, Spain (Online), 1\u201316 . https:\/\/aclanthology.org\/2020.gebnlp-1.1 Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking Contextual Stereotypes: Measuring and Mitigating BERT\u2019s Gender Bias. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Barcelona, Spain (Online), 1\u201316. https:\/\/aclanthology.org\/2020.gebnlp-1.1"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"crossref","first-page":"46","DOI":"10.1016\/j.cognition.2017.03.016","article-title":"The semantic representation of prejudice and stereotypes","volume":"164","author":"Bhatia Sudeep","year":"2017","unstructured":"Sudeep Bhatia . 2017 . The semantic representation of prejudice and stereotypes . Cognition 164 (2017), 46 \u2013 60 . 
https:\/\/doi.org\/10.1016\/j.cognition.2017.03.016 10.1016\/j.cognition.2017.03.016 Sudeep Bhatia. 2017. The semantic representation of prejudice and stereotypes. Cognition 164 (2017), 46\u201360. https:\/\/doi.org\/10.1016\/j.cognition.2017.03.016","journal-title":"Cognition"},{"key":"e_1_3_2_1_7_1","unstructured":"Felix Biessmann. 2016. Automating political bias prediction. arXiv preprint. arXiv:1608.02195.  Felix Biessmann. 2016. Automating political bias prediction. arXiv preprint. arXiv:1608.02195."},{"key":"e_1_3_2_1_8_1","volume-title":"NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5454\u20135476","author":"Blodgett Su Lin","year":"2020","unstructured":"Su Lin Blodgett , Solon Barocas , Hal Daum\u00e9 III, and Hanna Wallach . 2020 . Language (Technology) is Power: A Critical Survey of \u201cBias \u201d in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5454\u20135476 . https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.485 10.18653\/v1 Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of \u201cBias\u201d in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5454\u20135476. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.485"},{"key":"e_1_3_2_1_9_1","volume-title":"Advances in Neural Information Processing Systems","author":"Bolukbasi Tolga","year":"2016","unstructured":"Tolga Bolukbasi , Kai-Wei Chang , James Y Zou , Venkatesh Saligrama , and Adam T Kalai . 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings . In Advances in Neural Information Processing Systems , D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.). Vol. 29 . 
Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/ 2016 \/file\/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.). Vol. 29. Curran Associates, Inc.https:\/\/proceedings.neurips.cc\/paper\/2016\/file\/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf"},{"key":"e_1_3_2_1_10_1","volume-title":"Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research","volume":"811","author":"Brunet Marc-Etienne","year":"2019","unstructured":"Marc-Etienne Brunet , Colleen Alkalay-Houlihan , Ashton Anderson , and Richard Zemel . 2019 . Understanding the Origins of Bias in Word Embeddings . In Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research , Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 803\u2013 811 . https:\/\/proceedings.mlr.press\/v97\/brunet19a.html Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the Origins of Bias in Word Embeddings. In Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 803\u2013811. https:\/\/proceedings.mlr.press\/v97\/brunet19a.html"},{"volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society","author":"Caliskan Aylin","key":"e_1_3_2_1_11_1","unstructured":"Aylin Caliskan , Pimparkar Parth Ajay , Tessa Charlesworth , Robert Wolfe , and Mahzarin R. Banaji . 2022. Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics . 
In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society ( Oxford, United Kingdom) (AIES \u201922). Association for Computing Machinery, New York, NY, USA, 156\u2013170. https:\/\/doi.org\/10.1145\/3514094.3534162 10.1145\/3514094.3534162 Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, and Mahzarin R. Banaji. 2022. Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics. In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society (Oxford, United Kingdom) (AIES \u201922). Association for Computing Machinery, New York, NY, USA, 156\u2013170. https:\/\/doi.org\/10.1145\/3514094.3534162"},{"key":"e_1_3_2_1_12_1","volume-title":"Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334","author":"Caliskan Aylin","year":"2017","unstructured":"Aylin Caliskan , Joanna J. Bryson , and Arvind Narayanan . 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 ( 2017 ), 183\u2013186. https:\/\/doi.org\/10.1126\/science.aal4230 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/science.aal4230 10.1126\/science.aal4230 Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183\u2013186. https:\/\/doi.org\/10.1126\/science.aal4230 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/science.aal4230"},{"key":"e_1_3_2_1_13_1","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics","author":"Cao Yang","year":"2022","unstructured":"Yang Cao , Yada Pruksachatkun , Kai-Wei Chang , Rahul Gupta , Varun Kumar , Jwala Dhamala , and Aram Galstyan . 2022 . On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations . 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics , Dublin, Ireland, 561\u2013570. https:\/\/doi.org\/10. 18653\/v1\/2022.acl-short.62 10.18653\/v1 Yang Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, and Aram Galstyan. 2022. On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Dublin, Ireland, 561\u2013570. https:\/\/doi.org\/10.18653\/v1\/2022.acl-short.62"},{"key":"e_1_3_2_1_14_1","first-page":"1","article-title":"A clarification of the nuances in the fairness metrics landscape","volume":"12","author":"Castelnovo Alessandro","year":"2022","unstructured":"Alessandro Castelnovo , Riccardo Crupi , Greta Greco , Daniele Regoli , Ilaria Giuseppina Penco , and Andrea Claudio Cosentini . 2022 . A clarification of the nuances in the fairness metrics landscape . Scientific Reports 12 , 1 (March 2022). https:\/\/doi.org\/10.1038\/s41598-022-07939-1 10.1038\/s41598-022-07939-1 Alessandro Castelnovo, Riccardo Crupi, Greta Greco, Daniele Regoli, Ilaria Giuseppina Penco, and Andrea Claudio Cosentini. 2022. A clarification of the nuances in the fairness metrics landscape. Scientific Reports 12, 1 (March 2022). https:\/\/doi.org\/10.1038\/s41598-022-07939-1","journal-title":"Scientific Reports"},{"key":"e_1_3_2_1_15_1","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics","author":"Chalkidis Ilias","year":"2022","unstructured":"Ilias Chalkidis , Tommaso Pasini , Sheng Zhang , Letizia Tomada , Sebastian Schwemer , and Anders S\u00f8gaard . 2022 . 
FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing . In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics , Dublin, Ireland, 4389\u20134406. https:\/\/doi.org\/10. 18653\/v1\/2022.acl-long.301 10.18653\/v1 Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Schwemer, and Anders S\u00f8gaard. 2022. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 4389\u20134406. https:\/\/doi.org\/10.18653\/v1\/2022.acl-long.301"},{"key":"e_1_3_2_1_16_1","first-page":"19","volume-title":"Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics","author":"Chaloner Kaytlin","year":"2019","unstructured":"Kaytlin Chaloner and Alfredo Maldonado . 2019 . Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories . In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics , Florence, Italy, 25\u201332. https:\/\/doi.org\/10. 18653\/v1\/W 19 - 3804 10.18653\/v1 Kaytlin Chaloner and Alfredo Maldonado. 2019. Measuring Gender Bias in Word Embeddings across Domains and Discovering New Gender Bias Word Categories. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Florence, Italy, 25\u201332. 
https:\/\/doi.org\/10.18653\/v1\/W19-3804"},{"key":"e_1_3_2_1_17_1","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts","author":"Chang Kai-Wei","year":"2004","unstructured":"Kai-Wei Chang , Vinodkumar Prabhakaran , and Vicente Ordonez . 2019. Bias and Fairness in Natural Language Processing . In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts . Association for Computational Linguistics , Hong Kong , China. https:\/\/aclanthology.org\/D19- 2004 Kai-Wei Chang, Vinodkumar Prabhakaran, and Vicente Ordonez. 2019. Bias and Fairness in Natural Language Processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts. Association for Computational Linguistics, Hong Kong, China. https:\/\/aclanthology.org\/D19-2004"},{"key":"e_1_3_2_1_18_1","volume-title":"Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. Association for Computational Linguistics, Online, 149\u2013154","author":"Chen Wei-Fan","year":"2020","unstructured":"Wei-Fan Chen , Khalid Al Khatib , Henning Wachsmuth , and Benno Stein . 2020 . Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity . In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. Association for Computational Linguistics, Online, 149\u2013154 . https:\/\/doi.org\/10.18653\/v1\/2020.nlpcss-1.16 10.18653\/v1 Wei-Fan Chen, Khalid Al Khatib, Henning Wachsmuth, and Benno Stein. 2020. 
Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. Association for Computational Linguistics, Online, 149\u2013154. https:\/\/doi.org\/10.18653\/v1\/2020.nlpcss-1.16"},{"key":"e_1_3_2_1_19_1","volume-title":"Conference on Neural Information Processing Systems, invited speaker.","author":"Crawford Kate","year":"2017","unstructured":"Kate Crawford . 2017 . The trouble with bias . In Conference on Neural Information Processing Systems, invited speaker. Kate Crawford. 2017. The trouble with bias. In Conference on Neural Information Processing Systems, invited speaker."},{"key":"e_1_3_2_1_20_1","volume-title":"Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Transactions of the Association for Computational Linguistics 9 (11","author":"Czarnowska Paula","year":"2021","unstructured":"Paula Czarnowska , Yogarshi Vyas , and Kashif Shah . 2021. Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Transactions of the Association for Computational Linguistics 9 (11 2021 ), 1249\u20131267. https:\/\/doi.org\/10.1162\/tacl_a_00425 arXiv:https:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00425\/1972677\/tacl_a_00425.pdf 10.1162\/tacl_a_00425 Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Transactions of the Association for Computational Linguistics 9 (11 2021), 1249\u20131267. https:\/\/doi.org\/10.1162\/tacl_a_00425 arXiv:https:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00425\/1972677\/tacl_a_00425.pdf"},{"key":"e_1_3_2_1_21_1","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 
Association for Computational Linguistics, Online, 4385\u20134391","author":"Dayanik Erenay","year":"2020","unstructured":"Erenay Dayanik and Sebastian Pad\u00f3 . 2020 . Masking Actor Information Leads to Fairer Political Claims Detection . In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 4385\u20134391 . https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.404 10.18653\/v1 Erenay Dayanik and Sebastian Pad\u00f3. 2020. Masking Actor Information Leads to Fairer Political Claims Detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 4385\u20134391. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.404"},{"key":"e_1_3_2_1_22_1","volume-title":"Toon Calders, and Bettina Berendt.","author":"Delobelle Pieter","year":"2021","unstructured":"Pieter Delobelle , Ewoenam Kwaku Tokpo , Toon Calders, and Bettina Berendt. 2021 . Measuring Fairness with Biased Rulers : A Survey on Quantifying Biases in Pretrained Language Models. CoRR abs\/2112.07447 (2021). https:\/\/arxiv.org\/abs\/2112.07447 Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, and Bettina Berendt. 2021. Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models. CoRR abs\/2112.07447 (2021). https:\/\/arxiv.org\/abs\/2112.07447"},{"volume-title":"Predicting political preference through content- and stylistic text features and distant labeling. Master\u2019s thesis","author":"Duijnhoven Coen Van","key":"e_1_3_2_1_23_1","unstructured":"Coen Van Duijnhoven . 2018. Predicting political preference through content- and stylistic text features and distant labeling. Master\u2019s thesis . Tilburg University . Coen Van Duijnhoven. 2018. Predicting political preference through content- and stylistic text features and distant labeling. Master\u2019s thesis. 
Tilburg University."},{"key":"#cr-split#-e_1_3_2_1_24_1.1","unstructured":"Sorelle A. Friedler Carlos Scheidegger and Suresh Venkatasubramanian. 2016. On the (im)possibility of fairness. https:\/\/doi.org\/10.48550\/ARXIV.1609.07236 10.48550\/ARXIV.1609.07236"},{"key":"#cr-split#-e_1_3_2_1_24_1.2","unstructured":"Sorelle A. Friedler Carlos Scheidegger and Suresh Venkatasubramanian. 2016. On the (im)possibility of fairness. https:\/\/doi.org\/10.48550\/ARXIV.1609.07236"},{"key":"e_1_3_2_1_25_1","volume-title":"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Online, 91\u201398","author":"Friedrich Niklas","year":"2021","unstructured":"Niklas Friedrich , Anne Lauscher , Simone Paolo Ponzetto , and Goran Glava\u0161 . 2021 . DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces . In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Online, 91\u201398 . https:\/\/doi.org\/10.18653\/v1\/2021.eacl-demos.11 10.18653\/v1 Niklas Friedrich, Anne Lauscher, Simone Paolo Ponzetto, and Goran Glava\u0161. 2021. DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Online, 91\u201398. https:\/\/doi.org\/10.18653\/v1\/2021.eacl-demos.11"},{"key":"e_1_3_2_1_26_1","volume-title":"Mugdha Pandya, and Adam Lopez.","author":"Goldfarb-Tarrant Seraphina","year":"2021","unstructured":"Seraphina Goldfarb-Tarrant , Rebecca Marchant , Ricardo Mu\u00f1oz S\u00e1nchez , Mugdha Pandya, and Adam Lopez. 2021 . Intrinsic Bias Metrics Do Not Correlate with Application Bias. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 1926\u20131940. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.150 10.18653\/v1 Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Mu\u00f1oz S\u00e1nchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic Bias Metrics Do Not Correlate with Application Bias. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 1926\u20131940. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.150"},{"key":"e_1_3_2_1_27_1","first-page":"19","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Gonen Hila","year":"2019","unstructured":"Hila Gonen and Yoav Goldberg . 2019 . Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 609\u2013614. https:\/\/doi.org\/10. 18653\/v1\/N 19 - 1061 10.18653\/v1 Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 609\u2013614. 
https:\/\/doi.org\/10.18653\/v1\/N19-1061"},{"key":"e_1_3_2_1_28_1","volume-title":"Findings of the Association for Computational Linguistics: ACL-IJCNLP","author":"Bach Hansen Victor Petr\u00e9n","year":"2021","unstructured":"Victor Petr\u00e9n Bach Hansen and Anders S\u00f8gaard . 2021. Is the Lottery Fair? Evaluating Winning Tickets Across Demographics . In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 . Association for Computational Linguistics , Online , 3214\u20133224. https:\/\/doi.org\/10.18653\/v1\/2021.findings-acl.284 10.18653\/v1 Victor Petr\u00e9n Bach Hansen and Anders S\u00f8gaard. 2021. Is the Lottery Fair? Evaluating Winning Tickets Across Demographics. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics, Online, 3214\u20133224. https:\/\/doi.org\/10.18653\/v1\/2021.findings-acl.284"},{"key":"e_1_3_2_1_29_1","volume-title":"Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan","author":"Hashimoto Tatsunori B.","year":"2018","unstructured":"Tatsunori B. Hashimoto , Megha Srivastava , Hongseok Namkoong , and Percy Liang . 2018 . Fairness Without Demographics in Repeated Loss Minimization . In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan , Stockholm, Sweden , July 10-15, 2018(Proceedings of Machine Learning Research, Vol. 80), Jennifer G. Dy and Andreas Krause (Eds.). PMLR, 1934\u20131943. http:\/\/proceedings.mlr.press\/v80\/hashimoto18a.html Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. 2018. Fairness Without Demographics in Repeated Loss Minimization. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, July 10-15, 2018(Proceedings of Machine Learning Research, Vol. 80), Jennifer G. Dy and Andreas Krause (Eds.). PMLR, 1934\u20131943. 
http:\/\/proceedings.mlr.press\/v80\/hashimoto18a.html"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1111\/papa.12189","article-title":"On Statistical Criteria of Algorithmic Fairness","volume":"49","author":"Hedden Brian","year":"2021","unstructured":"Brian Hedden . 2021 . On Statistical Criteria of Algorithmic Fairness . Philosophy and Public Affairs 49 , 2 (2021), 209 \u2013 231 . https:\/\/doi.org\/10.1111\/papa.12189 10.1111\/papa.12189 Brian Hedden. 2021. On Statistical Criteria of Algorithmic Fairness. Philosophy and Public Affairs 49, 2 (2021), 209\u2013231. https:\/\/doi.org\/10.1111\/papa.12189","journal-title":"Philosophy and Public Affairs"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1016\/j.jesp.2014.03.012","article-title":"The influence of target group status on the perception of the offensiveness of group-based slurs","volume":"53","author":"Henry P.J.","year":"2014","unstructured":"P.J. Henry , Sarah E. Butler , and Mark J. Brandt . 2014 . The influence of target group status on the perception of the offensiveness of group-based slurs . Journal of Experimental Social Psychology 53 (2014), 185 \u2013 192 . https:\/\/doi.org\/10.1016\/j.jesp.2014.03.012 10.1016\/j.jesp.2014.03.012 P.J. Henry, Sarah E. Butler, and Mark J. Brandt. 2014. The influence of target group status on the perception of the offensiveness of group-based slurs. Journal of Experimental Social Psychology 53 (2014), 185\u2013192. https:\/\/doi.org\/10.1016\/j.jesp.2014.03.012","journal-title":"Journal of Experimental Social Psychology"},{"key":"e_1_3_2_1_32_1","first-page":"3","article-title":"Assessing Affirmative Action","volume":"38","author":"Holzer Harry","year":"2000","unstructured":"Harry Holzer and David Neumark . 2000 . Assessing Affirmative Action . Journal of Economic Literature 38 , 3 (September 2000), 483\u2013568. 
https:\/\/doi.org\/10.1257\/jel.38.3.483 10.1257\/jel.38.3.483 Harry Holzer and David Neumark. 2000. Assessing Affirmative Action. Journal of Economic Literature 38, 3 (September 2000), 483\u2013568. https:\/\/doi.org\/10.1257\/jel.38.3.483","journal-title":"Journal of Economic Literature"},{"key":"e_1_3_2_1_33_1","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5491\u20135501","author":"Hutchinson Ben","year":"2020","unstructured":"Ben Hutchinson , Vinodkumar Prabhakaran , Emily Denton , Kellie Webster , Yu Zhong , and Stephen Denuyl . 2020 . Social Biases in NLP Models as Barriers for Persons with Disabilities . In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5491\u20135501 . https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.487 10.18653\/v1 Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5491\u20135501. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.487"},{"key":"e_1_3_2_1_34_1","first-page":"2","article-title":"Political Polarization and the Dynamics of Political Language: Evidence from 130 Years of Partisan Speech","volume":"43","author":"Jensen Jacob","year":"2012","unstructured":"Jacob Jensen , Ethan Kaplan , Suresh Naidu , and Laurence Wilse-Samson . 2012 . Political Polarization and the Dynamics of Political Language: Evidence from 130 Years of Partisan Speech . Brookings Papers on Economic Activity 43 , 2 (Fall) (2012), 1\u201381. https:\/\/ideas.repec.org\/a\/bin\/bpeajo\/v43y2012i2012-02p1-81.html Jacob Jensen, Ethan Kaplan, Suresh Naidu, and Laurence Wilse-Samson. 2012. 
Political Polarization and the Dynamics of Political Language: Evidence from 130 Years of Partisan Speech. Brookings Papers on Economic Activity 43, 2 (Fall) (2012), 1\u201381. https:\/\/ideas.repec.org\/a\/bin\/bpeajo\/v43y2012i2012-02p1-81.html","journal-title":"Brookings Papers on Economic Activity"},{"key":"e_1_3_2_1_35_1","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 1299\u20131310","author":"Kaneko Masahiro","year":"2022","unstructured":"Masahiro Kaneko , Danushka Bollegala , and Naoaki Okazaki . 2022 . Debiasing Isn\u2019t Enough! \u2013 on the Effectiveness of Debiasing MLMs and Their Social Biases in Downstream Tasks . In Proceedings of the 29th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 1299\u20131310 . https:\/\/aclanthology.org\/2022.coling-1.111 Masahiro Kaneko, Danushka Bollegala, and Naoaki Okazaki. 2022. Debiasing Isn\u2019t Enough! \u2013 on the Effectiveness of Debiasing MLMs and Their Social Biases in Downstream Tasks. In Proceedings of the 29th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 1299\u20131310. https:\/\/aclanthology.org\/2022.coling-1.111"},{"key":"#cr-split#-e_1_3_2_1_36_1.1","unstructured":"Jon Kleinberg Sendhil Mullainathan and Manish Raghavan. 2016. Inherent Trade-Offs in the Fair Determination of Risk Scores. https:\/\/doi.org\/10.48550\/ARXIV.1609.05807 10.48550\/ARXIV.1609.05807"},{"key":"#cr-split#-e_1_3_2_1_36_1.2","unstructured":"Jon Kleinberg Sendhil Mullainathan and Manish Raghavan. 2016. Inherent Trade-Offs in the Fair Determination of Risk Scores. 
https:\/\/doi.org\/10.48550\/ARXIV.1609.05807"},{"key":"e_1_3_2_1_37_1","first-page":"166","volume-title":"Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics","author":"Kurita Keita","year":"2019","unstructured":"Keita Kurita , Nidhi Vyas , Ayush Pareek , Alan W Black , and Yulia Tsvetkov . 2019 . Measuring Bias in Contextualized Word Representations . In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics , Florence, Italy, 166\u2013172. https:\/\/doi.org\/10.18653\/v1\/W19-3823 10.18653\/v1 Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Contextualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Florence, Italy, 166\u2013172. https:\/\/doi.org\/10.18653\/v1\/W19-3823"},{"key":"e_1_3_2_1_38_1","first-page":"4782","volume-title":"Sustainable Modular Debiasing of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021","author":"Lauscher Anne","year":"2021","unstructured":"Anne Lauscher , Tobias Lueken , and Goran Glava\u0161 . 2021 . Sustainable Modular Debiasing of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021 . Association for Computational Linguistics, Punta Cana, Dominican Republic, 4782\u20134797. https:\/\/doi.org\/10.18653\/v1\/2021.findings-emnlp.411 10.18653\/v1 Anne Lauscher, Tobias Lueken, and Goran Glava\u0161. 2021. Sustainable Modular Debiasing of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics, Punta Cana, Dominican Republic, 4782\u20134797. 
https:\/\/doi.org\/10.18653\/v1\/2021.findings-emnlp.411"},{"key":"e_1_3_2_1_39_1","volume-title":"Proceedings of Recent Advances in Natural Language Processing.","author":"Li Wen","year":"2017","unstructured":"Wen Li and Markus Dickinson . 2017 . Gender Prediction for Chinese Social Media Data . In Proceedings of Recent Advances in Natural Language Processing. Wen Li and Markus Dickinson. 2017. Gender Prediction for Chinese Social Media Data. In Proceedings of Recent Advances in Natural Language Processing."},{"key":"e_1_3_2_1_40_1","volume-title":"Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Barcelona, Spain (Online), 4403\u20134416","author":"Liu Haochen","year":"2020","unstructured":"Haochen Liu , Jamell Dacon , Wenqi Fan , Hui Liu , Zitao Liu , and Jiliang Tang . 2020 . Does Gender Matter? Towards Fairness in Dialogue Systems . In Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Barcelona, Spain (Online), 4403\u20134416 . https:\/\/doi.org\/10.18653\/v1\/2020.coling-main.390 10.18653\/v1 Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Does Gender Matter? Towards Fairness in Dialogue Systems. In Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Barcelona, Spain (Online), 4403\u20134416. https:\/\/doi.org\/10.18653\/v1\/2020.coling-main.390"},{"key":"#cr-split#-e_1_3_2_1_41_1.1","unstructured":"Subha Maity Debarghya Mukherjee Mikhail Yurochkin and Yuekai Sun. 2020. Does enforcing fairness mitigate biases caused by subpopulation shift?https:\/\/doi.org\/10.48550\/ARXIV.2011.03173 10.48550\/ARXIV.2011.03173"},{"key":"#cr-split#-e_1_3_2_1_41_1.2","unstructured":"Subha Maity Debarghya Mukherjee Mikhail Yurochkin and Yuekai Sun. 2020. 
Does enforcing fairness mitigate biases caused by subpopulation shift?https:\/\/doi.org\/10.48550\/ARXIV.2011.03173"},{"key":"e_1_3_2_1_42_1","first-page":"622","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"May Chandler","year":"2019","unstructured":"Chandler May , Alex Wang , Shikha Bordia , Samuel R. Bowman , and Rachel Rudinger . 2019 . On Measuring Social Biases in Sentence Encoders . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 622\u2013628. https:\/\/doi.org\/10.18653\/v1\/N19-1063 10.18653\/v1 Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On Measuring Social Biases in Sentence Encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 622\u2013628. https:\/\/doi.org\/10.18653\/v1\/N19-1063"},{"key":"e_1_3_2_1_43_1","volume-title":"Article 115 (jul","author":"Mehrabi Ninareh","year":"2021","unstructured":"Ninareh Mehrabi , Fred Morstatter , Nripsuta Saxena , Kristina Lerman , and Aram Galstyan . 2021. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 54, 6 , Article 115 (jul 2021 ), 35 pages. https:\/\/doi.org\/10.1145\/3457607 10.1145\/3457607 Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 54, 6, Article 115 (jul 2021), 35 pages. 
https:\/\/doi.org\/10.1145\/3457607"},{"key":"e_1_3_2_1_44_1","volume-title":"The impossibility of \"fairness\": a generalized impossibility result for decisions. arXiv: Applications","author":"Miconi Thomas","year":"2017","unstructured":"Thomas Miconi . 2017. The impossibility of \"fairness\": a generalized impossibility result for decisions. arXiv: Applications ( 2017 ). https:\/\/doi.org\/10.48550\/ARXIV.1707.01195 10.48550\/ARXIV.1707.01195 Thomas Miconi. 2017. The impossibility of \"fairness\": a generalized impossibility result for decisions. arXiv: Applications (2017). https:\/\/doi.org\/10.48550\/ARXIV.1707.01195"},{"key":"e_1_3_2_1_45_1","volume-title":"Predicting age groups of Twitter users based on language and metadata features. PLoS ONE 12","author":"Morgan-Lopez Antonio Alexander","year":"2017","unstructured":"Antonio Alexander Morgan-Lopez , Annice E Kim , Robert F. Chew , and Paul Ruddle . 2017. Predicting age groups of Twitter users based on language and metadata features. PLoS ONE 12 ( 2017 ). Antonio Alexander Morgan-Lopez, Annice E Kim, Robert F. Chew, and Paul Ruddle. 2017. Predicting age groups of Twitter users based on language and metadata features. PLoS ONE 12 (2017)."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"crossref","first-page":"728","DOI":"10.1177\/0950017010380648","article-title":"The shackled runner: time to rethink positive discrimination?Work","volume":"24","author":"Noon Mike","year":"2010","unstructured":"Mike Noon . 2010 . The shackled runner: time to rethink positive discrimination?Work , Employment and Society 24 , 4 (2010), 728 \u2013 739 . https:\/\/doi.org\/10.1177\/0950017010380648 arXiv:https:\/\/doi.org\/10.1177\/0950017010380648 10.1177\/0950017010380648 Mike Noon. 2010. The shackled runner: time to rethink positive discrimination?Work, Employment and Society 24, 4 (2010), 728\u2013739. 
https:\/\/doi.org\/10.1177\/0950017010380648 arXiv:https:\/\/doi.org\/10.1177\/0950017010380648","journal-title":"Employment and Society"},{"key":"#cr-split#-e_1_3_2_1_47_1.1","unstructured":"Rebecca Qian Candace Ross Jude Fernandes Eric Smith Douwe Kiela and Adina Williams. 2022. Perturbation Augmentation for Fairer NLP. https:\/\/doi.org\/10.48550\/ARXIV.2205.12586 10.48550\/ARXIV.2205.12586"},{"key":"#cr-split#-e_1_3_2_1_47_1.2","unstructured":"Rebecca Qian Candace Ross Jude Fernandes Eric Smith Douwe Kiela and Adina Williams. 2022. Perturbation Augmentation for Fairer NLP. https:\/\/doi.org\/10.48550\/ARXIV.2205.12586"},{"volume-title":"A Theory of Justice (1 ed.)","author":"Rawls John","key":"e_1_3_2_1_48_1","unstructured":"John Rawls . 1971. A Theory of Justice (1 ed.) . Belknap Press of Harvard University Press , Cambridge, Massachusetts . John Rawls. 1971. A Theory of Justice (1 ed.). Belknap Press of Harvard University Press, Cambridge, Massachusetts."},{"key":"e_1_3_2_1_49_1","volume-title":"Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, J. Vanschoren and S. Yeung (Eds.).","volume":"1","author":"Reddy Charan","year":"2021","unstructured":"Charan Reddy , Deepak Sharma , Soroush Mehri , Adriana Romero Soriano , Samira Shabanian , and Sina Honari . 2021 . Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics . In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, J. Vanschoren and S. Yeung (Eds.). Vol. 1 . https:\/\/datasets-benchmarks-proceedings.neurips.cc\/paper\/2021\/file\/2723d092b63885e0d7c260cc007e8b9d-Paper-round1.pdf Charan Reddy, Deepak Sharma, Soroush Mehri, Adriana Romero Soriano, Samira Shabanian, and Sina Honari. 2021. Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, J. Vanschoren and S. 
Yeung (Eds.). Vol. 1. https:\/\/datasets-benchmarks-proceedings.neurips.cc\/paper\/2021\/file\/2723d092b63885e0d7c260cc007e8b9d-Paper-round1.pdf"},{"key":"e_1_3_2_1_50_1","first-page":"155","article-title":"Social Identity, Indexicality, and the Appropriation of Slurs","volume":"17","author":"Ritchie Katherine","year":"2017","unstructured":"Katherine Ritchie . 2017 . Social Identity, Indexicality, and the Appropriation of Slurs . Croatian Journal of Philosophy 17 , 2 (2017), 155 \u2013 180 . Katherine Ritchie. 2017. Social Identity, Indexicality, and the Appropriation of Slurs. Croatian Journal of Philosophy 17, 2 (2017), 155\u2013180.","journal-title":"Croatian Journal of Philosophy"},{"key":"#cr-split#-e_1_3_2_1_51_1.1","doi-asserted-by":"crossref","unstructured":"Alexey Romanov Maria De-Arteaga Hanna Wallach Jennifer Chayes Christian Borgs Alexandra Chouldechova Sahin Geyik Krishnaram Kenthapadi Anna Rumshisky and Adam Tauman Kalai. 2019. What's in a Name? Reducing Bias in Bios without Access to Protected Attributes. https:\/\/doi.org\/10.48550\/ARXIV.1904.05233 10.48550\/ARXIV.1904.05233","DOI":"10.18653\/v1\/N19-1424"},{"key":"#cr-split#-e_1_3_2_1_51_1.2","doi-asserted-by":"crossref","unstructured":"Alexey Romanov Maria De-Arteaga Hanna Wallach Jennifer Chayes Christian Borgs Alexandra Chouldechova Sahin Geyik Krishnaram Kenthapadi Anna Rumshisky and Adam Tauman Kalai. 2019. What's in a Name? Reducing Bias in Bios without Access to Protected Attributes. https:\/\/doi.org\/10.48550\/ARXIV.1904.05233","DOI":"10.18653\/v1\/N19-1424"},{"key":"e_1_3_2_1_52_1","volume-title":"Proceedings of the 2021 Conference of the North American","author":"Ross Candace","year":"2021","unstructured":"Candace Ross , Boris Katz , and Andrei Barbu . 2021. Measuring Social Biases in Grounded Vision and Language Embeddings . In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics , Online , 998\u20131008. https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.78 10.18653\/v1 Candace Ross, Boris Katz, and Andrei Barbu. 2021. Measuring Social Biases in Grounded Vision and Language Embeddings. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 998\u20131008. https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.78"},{"key":"e_1_3_2_1_53_1","volume-title":"Findings of the Association for Computational Linguistics: ACL 2022","author":"Ruder Sebastian","year":"2022","unstructured":"Sebastian Ruder , Ivan Vuli\u0107 , and Anders S\u00f8gaard . 2022 . Square One Bias in NLP: Towards a Multi-Dimensional Exploration of the Research Manifold . In Findings of the Association for Computational Linguistics: ACL 2022 . Association for Computational Linguistics, Dublin, Ireland, 2340\u20132354. https:\/\/doi.org\/10.18653\/v1\/2022.findings-acl.184 10.18653\/v1 Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2022. Square One Bias in NLP: Towards a Multi-Dimensional Exploration of the Research Manifold. In Findings of the Association for Computational Linguistics: ACL 2022. Association for Computational Linguistics, Dublin, Ireland, 2340\u20132354. https:\/\/doi.org\/10.18653\/v1\/2022.findings-acl.184"},{"key":"e_1_3_2_1_54_1","first-page":"1668","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics","author":"Sap Maarten","year":"2019","unstructured":"Maarten Sap , Dallas Card , Saadia Gabriel , Yejin Choi , and Noah A. Smith . 2019. The Risk of Racial Bias in Hate Speech Detection . In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics , Florence, Italy, 1668\u20131678. https:\/\/doi.org\/10.18653\/v1\/P19-1163 10.18653\/v1 
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 1668\u20131678. https:\/\/doi.org\/10.18653\/v1\/P19-1163"},{"key":"e_1_3_2_1_55_1","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5248\u20135264","author":"Shah Deven Santosh","year":"2020","unstructured":"Deven Santosh Shah , H. Andrew Schwartz , and Dirk Hovy . 2020 . Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview . In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5248\u20135264 . https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.468 10.18653\/v1 Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5248\u20135264. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.468"},{"key":"e_1_3_2_1_56_1","volume-title":"Findings of the Association for Computational Linguistics: AACL-IJCNLP","author":"Shen Aili","year":"2022","unstructured":"Aili Shen , Xudong Han , Trevor Cohn , Timothy Baldwin , and Lea Frermann . 2022. Does Representational Fairness Imply Empirical Fairness? . In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 . Association for Computational Linguistics , Online only, 81\u201395. https:\/\/aclanthology.org\/2022.findings-aacl.8 Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2022. 
Does Representational Fairness Imply Empirical Fairness?. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022. Association for Computational Linguistics, Online only, 81\u201395. https:\/\/aclanthology.org\/2022.findings-aacl.8"},{"key":"#cr-split#-e_1_3_2_1_57_1.1","unstructured":"Karolina Stanczak and Isabelle Augenstein. 2021. A Survey on Gender Bias in Natural Language Processing. https:\/\/doi.org\/10.48550\/ARXIV.2112.14168 10.48550\/ARXIV.2112.14168"},{"key":"#cr-split#-e_1_3_2_1_57_1.2","unstructured":"Karolina Stanczak and Isabelle Augenstein. 2021. A Survey on Gender Bias in Natural Language Processing. https:\/\/doi.org\/10.48550\/ARXIV.2112.14168"},{"key":"e_1_3_2_1_58_1","first-page":"1630","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics","author":"Sun Tony","year":"2019","unstructured":"Tony Sun , Andrew Gaut , Shirlyn Tang , Yuxin Huang , Mai ElSherief , Jieyu Zhao , Diba Mirza , Elizabeth Belding , Kai-Wei Chang , and William Yang Wang . 2019 . Mitigating Gender Bias in Natural Language Processing: Literature Review . In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics , Florence, Italy, 1630\u20131640. https:\/\/doi.org\/10.18653\/v1\/P19-1159 10.18653\/v1 Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Language Processing: Literature Review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 1630\u20131640. 
https:\/\/doi.org\/10.18653\/v1\/P19-1159"},{"volume-title":"Assessing Social and Intersectional Biases in Contextualized Word Representations","author":"Tan Yi Chern","key":"e_1_3_2_1_59_1","unstructured":"Yi Chern Tan and L. Elisa Celis . 2019. Assessing Social and Intersectional Biases in Contextualized Word Representations . Curran Associates Inc., Red Hook, NY, USA. Yi Chern Tan and L. Elisa Celis. 2019. Assessing Social and Intersectional Biases in Contextualized Word Representations. Curran Associates Inc., Red Hook, NY, USA."},{"key":"e_1_3_2_1_60_1","volume-title":"Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen.","author":"Vuli\u0107 Ivan","year":"2020","unstructured":"Ivan Vuli\u0107 , Simon Baker , Edoardo Maria Ponti , Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen. 2020 . Multi-SimLex: A Large-Scale Evaluation of Multilingual and Crosslingual Lexical Semantic Similarity. Computational Linguistics 46, 4 (02 2020), 847\u2013897. https:\/\/doi.org\/10.1162\/coli_a_00391 arXiv:https:\/\/direct.mit.edu\/coli\/article-pdf\/46\/4\/847\/1888287\/coli_a_00391.pdf 10.1162\/coli_a_00391 Ivan Vuli\u0107, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen. 2020. Multi-SimLex: A Large-Scale Evaluation of Multilingual and Crosslingual Lexical Semantic Similarity. Computational Linguistics 46, 4 (02 2020), 847\u2013897. https:\/\/doi.org\/10.1162\/coli_a_00391 arXiv:https:\/\/direct.mit.edu\/coli\/article-pdf\/46\/4\/847\/1888287\/coli_a_00391.pdf"},{"volume-title":"Narrative Origin Classification of Israeli-Palestinian Conflict Texts. In The Thirty-Third International FLAIRS Conference.","author":"Wei Jason","key":"e_1_3_2_1_61_1","unstructured":"Jason Wei and Eugene Santos Jr .2020. 
Narrative Origin Classification of Israeli-Palestinian Conflict Texts. In The Thirty-Third International FLAIRS Conference. Jason Wei and Eugene Santos Jr.2020. Narrative Origin Classification of Israeli-Palestinian Conflict Texts. In The Thirty-Third International FLAIRS Conference."},{"key":"e_1_3_2_1_62_1","volume-title":"Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research","volume":"6797","author":"Williamson Robert","year":"2019","unstructured":"Robert Williamson and Aditya Menon . 2019 . Fairness risk measures . In Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research , Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 6786\u2013 6797 . https:\/\/proceedings.mlr.press\/v97\/williamson19a.html Robert Williamson and Aditya Menon. 2019. Fairness risk measures. In Proceedings of the 36th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, 6786\u20136797. https:\/\/proceedings.mlr.press\/v97\/williamson19a.html"},{"key":"e_1_3_2_1_63_1","volume-title":"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 4581\u20134588","author":"Zhang Sheng","year":"2021","unstructured":"Sheng Zhang , Xin Zhang , Weiming Zhang , and Anders S\u00f8gaard . 2021 . Sociolectal Analysis of Pretrained Language Models . In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 4581\u20134588 . https:\/\/doi.org\/10.18653\/v1\/2021.emnlp-main.375 10.18653\/v1 Sheng Zhang, Xin Zhang, Weiming Zhang, and Anders S\u00f8gaard. 2021. Sociolectal Analysis of Pretrained Language Models. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 4581\u20134588. https:\/\/doi.org\/10.18653\/v1\/2021.emnlp-main.375"},{"key":"e_1_3_2_1_64_1","first-page":"4847","volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics","author":"Zhao Jieyu","year":"2018","unstructured":"Jieyu Zhao , Yichao Zhou , Zeyu Li , Wei Wang , and Kai-Wei Chang . 2018 . Learning Gender-Neutral Word Embeddings . In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics , Brussels, Belgium, 4847\u20134853. https:\/\/doi.org\/10.18653\/v1\/D18-1521 10.18653\/v1 Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018. Learning Gender-Neutral Word Embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium, 4847\u20134853. 
https:\/\/doi.org\/10.18653\/v1\/D18-1521"}],"event":{"name":"FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency","acronym":"FAccT '23","location":"Chicago IL USA"},"container-title":["2023 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3594004","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3593013.3594004","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:48:03Z","timestamp":1750178883000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3594004"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":71,"alternative-id":["10.1145\/3593013.3594004","10.1145\/3593013"],"URL":"https:\/\/doi.org\/10.1145\/3593013.3594004","relation":{},"subject":[],"published":{"date-parts":[[2023,6,12]]},"assertion":[{"value":"2023-06-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}