{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T00:41:32Z","timestamp":1776127292623,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":121,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,6,20]],"date-time":"2022-06-20T00:00:00Z","timestamp":1655683200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,6,21]]},"DOI":"10.1145\/3531146.3533179","type":"proceedings-article","created":{"date-parts":[[2022,6,20]],"date-time":"2022-06-20T14:27:10Z","timestamp":1655735230000},"page":"1194-1206","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":52,"title":["The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations"],"prefix":"10.1145","author":[{"given":"Aparna","family":"Balagopalan","sequence":"first","affiliation":[{"name":"Massachusetts Institute of Technology, USA"}]},{"given":"Haoran","family":"Zhang","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, USA"}]},{"given":"Kimia","family":"Hamidieh","sequence":"additional","affiliation":[{"name":"University of Toronto\/Vector Institute, Canada"}]},{"given":"Thomas","family":"Hartvigsen","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, USA"}]},{"given":"Frank","family":"Rudzicz","sequence":"additional","affiliation":[{"name":"University of Toronto\/Vector Institute, Canada and Unity Health Toronto, Canada"}]},{"given":"Marzyeh","family":"Ghassemi","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, USA and Vector Institute, Canada"}]}],"member":"320","published-online":{"date-parts":[[2022,6,20]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"Robert Adragna Elliot Creager David Madras and Richard Zemel. 2020. Fairness and robustness in invariant learning: A case study in toxicity classification. arXiv preprint arXiv:2011.06485(2020)."},{"key":"e_1_3_2_1_2_1","volume-title":"International Conference on Machine Learning (ICML). 60\u201369","author":"Agarwal Alekh","year":"2018","unstructured":"Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. 2018. A Reductions Approach to Fair Classification. In International Conference on Machine Learning (ICML). 60\u201369."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3233547.3233667"},{"key":"e_1_3_2_1_4_1","volume-title":"International Conference on Machine Learning. PMLR, 161\u2013170","author":"A\u00efvodji Ulrich","year":"2019","unstructured":"Ulrich A\u00efvodji, Hiromi Arai, Olivier Fortineau, S\u00e9bastien Gambs, Satoshi Hara, and Alain Tapp. 2019. Fairwashing: the risk of rationalization. In International Conference on Machine Learning. PMLR, 161\u2013170."},{"key":"e_1_3_2_1_5_1","unstructured":"Ulrich A\u00efvodji Hiromi Arai S\u00e9bastien Gambs and Satoshi Hara. 2021. Characterizing the risk of fairwashing. arXiv preprint arXiv:2106.07504(2021)."},{"key":"e_1_3_2_1_6_1","first-page":"231","article-title":"Using machine learning to predict outcomes in tax law","volume":"58","author":"Alarie Benjamin","year":"2016","unstructured":"Benjamin Alarie, Anthony Niblett, and Albert\u00a0H Yoon. 2016. Using machine learning to predict outcomes in tax law. Can. Bus. LJ 58(2016), 231.","journal-title":"Can. Bus. LJ"},{"key":"e_1_3_2_1_7_1","unstructured":"Hadis Anahideh Abolfazl Asudeh and Saravanan Thirumuruganathan. 2020. Fair active learning. arXiv preprint arXiv:2001.01796(2020)."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"crossref","unstructured":"Gagan Bansal Besmira Nushi Ece Kamar Eric Horvitz and Daniel\u00a0S Weld. 2021. Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork. (2021).","DOI":"10.1609\/aaai.v35i13.17359"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445717"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11292-017-9286-2"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3375624"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10506-020-09270-4"},{"key":"e_1_3_2_1_14_1","unstructured":"Tiago Botari Frederik Hvilsh\u00f8j Rafael Izbicki and Andre\u00a0CPLF de Carvalho. 2020. MeLIME: meaningful local explanation for machine learning models. arXiv preprint arXiv:2009.05818(2020)."},{"key":"e_1_3_2_1_15_1","volume-title":"Verification of forecasts expressed in terms of probability. Monthly weather review 78, 1","author":"W Brier","year":"1950","unstructured":"Glenn\u00a0W Brier 1950. Verification of forecasts expressed in terms of probability. Monthly weather review 78, 1 (1950), 1\u20133."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377325.3377498"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3449287"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.12228"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10614-020-10042-0"},{"key":"e_1_3_2_1_20_1","unstructured":"Simon Caton and Christian Haas. 2020. Fairness in machine learning: A survey. arXiv preprint arXiv:2010.04053(2020)."},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467453"},{"key":"e_1_3_2_1_22_1","unstructured":"Irene Chen Fredrik\u00a0D Johansson and David Sontag. 2018. Why is my classifier discriminatory?arXiv preprint arXiv:1805.12002(2018)."},{"key":"e_1_3_2_1_23_1","volume-title":"Ethical Machine Learning in Healthcare. Annual Review of Biomedical Data Science 4","author":"Chen Y","year":"2020","unstructured":"Irene\u00a0Y Chen, Emma Pierson, Sherri Rose, Shalmali Joshi, Kadija Ferryman, and Marzyeh Ghassemi. 2020. Ethical Machine Learning in Healthcare. Annual Review of Biomedical Data Science 4 (2020)."},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"crossref","unstructured":"John Chen Ian Berlot-Attwell Safwan Hossain Xindi Wang and Frank Rudzicz. 2020. Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP.","DOI":"10.18653\/v1\/2020.clinicalnlp-1.33"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3511299"},{"key":"e_1_3_2_1_26_1","volume-title":"Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data 5, 2","author":"Chouldechova Alexandra","year":"2017","unstructured":"Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data 5, 2 (2017), 153\u2013163."},{"key":"e_1_3_2_1_27_1","unstructured":"Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810(2018)."},{"key":"e_1_3_2_1_28_1","unstructured":"Sam Corbett-Davies and Sharad Goel. 2018. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023(2018)."},{"key":"e_1_3_2_1_29_1","unstructured":"The Law School\u00a0Admission Council. 2018. Legal Education Data Library. https:\/\/www.lsac.org\/data-research\/data\/current-volume-summaries-region-raceethnicity-gender-identity-lsat-score"},{"key":"e_1_3_2_1_30_1","volume-title":"Extracting tree-structured representations of trained networks. Advances in neural information processing systems 8","author":"Craven Mark","year":"1995","unstructured":"Mark Craven and Jude Shavlik. 1995. Extracting tree-structured representations of trained networks. Advances in neural information processing systems 8 (1995), 24\u201330."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514094.3534159"},{"key":"e_1_3_2_1_32_1","unstructured":"Jessica Dai Sohini Upadhyay Stephen\u00a0H Bach and Himabindu Lakkaraju. 2021. What will it take to generate fairness-preserving explanations?arXiv preprint arXiv:2106.13346(2021)."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3461702.3462523"},{"key":"e_1_3_2_1_34_1","first-page":"114","article-title":"Algorithm aversion: People erroneously avoid algorithms after seeing them err.Journal of Experimental Psychology","volume":"144","author":"Dietvorst J","year":"2015","unstructured":"Berkeley\u00a0J Dietvorst, Joseph\u00a0P Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err.Journal of Experimental Psychology: General 144, 1 (2015), 114.","journal-title":"General"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1080\/014492998119526"},{"key":"e_1_3_2_1_36_1","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608(2017)."},{"key":"e_1_3_2_1_37_1","volume-title":"41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 0210\u20130215","author":"Do\u0161ilovi\u0107 Filip\u00a0Karlo","year":"2018","unstructured":"Filip\u00a0Karlo Do\u0161ilovi\u0107, Mario Br\u010di\u0107, and Nikica Hlupi\u0107. 2018. Explainable artificial intelligence: A survey. In 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 0210\u20130215."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.gebnlp-1.5"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359786"},{"key":"e_1_3_2_1_40_1","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository. http:\/\/archive.ics.uci.edu\/ml"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_2_1_42_1","volume-title":"International Conference on Learning Representations.","author":"Geirhos Robert","year":"2018","unstructured":"Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix\u00a0A Wichmann, and Wieland Brendel. 2018. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1016\/S2589-7500(21)00208-9"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33013681"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482380"},{"key":"e_1_3_2_1_46_1","volume-title":"NIPS symposium on machine learning and the law, Vol.\u00a01. 2.","author":"Grgic-Hlaca Nina","year":"2016","unstructured":"Nina Grgic-Hlaca, Muhammad\u00a0Bilal Zafar, Krishna\u00a0P Gummadi, and Adrian Weller. 2016. The case for process fairness in learning: Feature selection for fair decision making. In NIPS symposium on machine learning and the law, Vol.\u00a01. 2."},{"key":"e_1_3_2_1_47_1","unstructured":"Xudong Han Timothy Baldwin and Trevor Cohn. 2021. Balancing out Bias: Achieving Fairness Through Training Reweighting. arXiv preprint arXiv:2109.08253(2021)."},{"key":"e_1_3_2_1_48_1","unstructured":"Moritz Hardt Eric Price and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. arxiv:1610.02413\u00a0[cs.LG]"},{"key":"e_1_3_2_1_49_1","volume-title":"Multitask learning and benchmarking with clinical time series data. Scientific data 6, 1","author":"Harutyunyan Hrayr","year":"2019","unstructured":"Hrayr Harutyunyan, Hrant Khachatrian, David\u00a0C Kale, Greg Ver\u00a0Steeg, and Aram Galstyan. 2019. Multitask learning and benchmarking with clinical time series data. Scientific data 6, 1 (2019), 1\u201318."},{"key":"e_1_3_2_1_50_1","volume-title":"Generalized additive models","author":"Hastie J","unstructured":"Trevor\u00a0J Hastie and Robert\u00a0J Tibshirani. 2017. Generalized additive models. Routledge."},{"key":"e_1_3_2_1_51_1","volume-title":"From machine learning to explainable AI. In 2018 world symposium on digital intelligence for systems and machines (DISA)","author":"Holzinger Andreas","unstructured":"Andreas Holzinger. 2018. From machine learning to explainable AI. In 2018 world symposium on digital intelligence for systems and machines (DISA). IEEE, 55\u201366."},{"key":"e_1_3_2_1_52_1","unstructured":"Sara Hooker Nyalleng Moorosi Gregory Clark Samy Bengio and Emily Denton. 2020. Characterising bias in compressed models. arXiv preprint arXiv:2010.03058(2020)."},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372857"},{"key":"e_1_3_2_1_54_1","volume-title":"Technical Report","author":"Imbens Guido","unstructured":"Guido Imbens and Konrad Menzel. 2018. A Causal Bootstrap. Technical Report. National Bureau of Economic Research, Inc."},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3017680.3022468"},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372829"},{"key":"e_1_3_2_1_57_1","unstructured":"Andrej Karpathy Justin Johnson and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078(2015)."},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287592"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/3077257.3077271"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375833"},{"key":"e_1_3_2_1_61_1","unstructured":"Himabindu Lakkaraju Ece Kamar Rich Caruana and Jure Leskovec. 2017. Interpretable & explorable approximations of black box models. arXiv preprint arXiv:1707.01154(2017)."},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314229"},{"key":"e_1_3_2_1_63_1","unstructured":"Jiwei Li Xinlei Chen Eduard Hovy and Dan Jurafsky. 2015. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066(2015)."},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300325"},{"key":"e_1_3_2_1_65_1","volume-title":"The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery.Queue 16, 3","author":"Lipton C","year":"2018","unstructured":"Zachary\u00a0C Lipton. 2018. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery.Queue 16, 3 (2018), 31\u201357."},{"key":"e_1_3_2_1_66_1","volume-title":"International Conference on Machine Learning. PMLR, 6781\u20136792","author":"Liu Z","year":"2021","unstructured":"Evan\u00a0Z Liu, Behzad Haghgoo, Annie\u00a0S Chen, Aditi Raghunathan, Pang\u00a0Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. 2021. Just train twice: Improving group robustness without training group information. In International Conference on Machine Learning. PMLR, 6781\u20136792."},{"key":"e_1_3_2_1_67_1","volume-title":"international conference on machine learning. PMLR, 4114\u20134124","author":"Locatello Francesco","year":"2019","unstructured":"Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Sch\u00f6lkopf, and Olivier Bachem. 2019. Challenging common assumptions in the unsupervised learning of disentangled representations. In international conference on machine learning. PMLR, 4114\u20134124."},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.obhdp.2018.12.005"},{"key":"e_1_3_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/2339530.2339556"},{"key":"e_1_3_2_1_70_1","volume-title":"From local explanations to global understanding with explainable AI for trees. Nature machine intelligence 2, 1","author":"Lundberg M","year":"2020","unstructured":"Scott\u00a0M Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan\u00a0M Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nature machine intelligence 2, 1 (2020), 56\u201367."},{"key":"e_1_3_2_1_71_1","first-page":"4765","article-title":"A Unified Approach to Interpreting Model Predictions","volume":"30","author":"Lundberg M","year":"2017","unstructured":"Scott\u00a0M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems 30 (2017), 4765\u20134774.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_72_1","volume-title":"International Conference on Machine Learning. PMLR, 3384\u20133393","author":"Madras David","year":"2018","unstructured":"David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. 2018. Learning adversarially fair and transferable representations. In International Conference on Machine Learning. PMLR, 3384\u20133393."},{"key":"e_1_3_2_1_73_1","volume-title":"International Conference on Machine Learning. PMLR, 6755\u20136764","author":"Martinez Natalia","year":"2020","unstructured":"Natalia Martinez, Martin Bertran, and Guillermo Sapiro. 2020. Minimax pareto fairness: A multi objective perspective. In International Conference on Machine Learning. PMLR, 6755\u20136764."},{"key":"e_1_3_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1145\/3408877.3439664"},{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3457607"},{"key":"e_1_3_2_1_76_1","volume-title":"International Conference on Learning Representations.","author":"Menon Aditya\u00a0Krishna","year":"2020","unstructured":"Aditya\u00a0Krishna Menon, Ankit\u00a0Singh Rawat, and Sanjiv Kumar. 2020. Overparameterisation and worst-case generalisation: friend or foe?. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287574"},{"key":"e_1_3_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.5555\/3524938.3525594"},{"key":"e_1_3_2_1_79_1","unstructured":"DJ Pangburn. 2019. Schools are using software to help pick who gets in. what could go wrong?https:\/\/www.fastcompany.com\/90342596\/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong"},{"key":"e_1_3_2_1_80_1","volume-title":"IJCAI Workshop on Explainable Artificial Intelligence (XAI)","author":"Papenmeier Andrea","year":"2019","unstructured":"Andrea Papenmeier, Gwenn Englebienne, and Christin Seifert. 2019. How model accuracy and explanation fidelity influence user trust in AI. In IJCAI Workshop on Explainable Artificial Intelligence (XAI) 2019."},{"key":"e_1_3_2_1_81_1","volume-title":"Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive Attributes. arXiv preprint arXiv:2107.13625(2021).","author":"Paul William","year":"2021","unstructured":"William Paul and Philippe Burlina. 2021. Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive Attributes. arXiv preprint arXiv:2107.13625(2021)."},{"key":"e_1_3_2_1_82_1","volume-title":"Scikit-learn: Machine learning in Python. the Journal of machine Learning research 12","author":"Pedregosa Fabian","year":"2011","unstructured":"Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, 2011. Scikit-learn: Machine learning in Python. the Journal of machine Learning research 12 (2011), 2825\u20132830."},{"key":"e_1_3_2_1_83_1","unstructured":"Geoff Pleiss Manish Raghavan Felix Wu Jon Kleinberg and Kilian\u00a0Q. Weinberger. 2017. On Fairness and Calibration. In Advances in Neural Information Processing Systems (NeurIPS). 5684\u20135693."},{"key":"e_1_3_2_1_84_1","unstructured":"Gregory Plumb Denali Molitor and Ameet Talwalkar. 2018. Model agnostic supervised local explanations. arXiv preprint arXiv:1807.02910(2018)."},{"key":"e_1_3_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445315"},{"key":"e_1_3_2_1_86_1","doi-asserted-by":"crossref","unstructured":"Romila Pradhan Jiongli Zhu Boris Glavic and Babak Salimi. 2021. Interpretable Data-Based Explanations for Fairness Debugging. arXiv preprint arXiv:2112.09745(2021).","DOI":"10.1145\/3514221.3517886"},{"key":"e_1_3_2_1_87_1","unstructured":"ProPublica. 2019. Compas recidivism risk score data and analysis."},{"key":"e_1_3_2_1_88_1","volume-title":"Magix: Model agnostic globally interpretable explanations. arXiv preprint arXiv:1706.07160(2017).","author":"Puri Nikaash","year":"2017","unstructured":"Nikaash Puri, Piyush Gupta, Pratiksha Agarwal, Sukriti Verma, and Balaji Krishnamurthy. 2017. Magix: Model agnostic globally interpretable explanations. arXiv preprint arXiv:1706.07160(2017)."},{"key":"e_1_3_2_1_89_1","unstructured":"Hamed Rahimian and Sanjay Mehrotra. 2019. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659(2019)."},{"key":"e_1_3_2_1_90_1","doi-asserted-by":"publisher","DOI":"10.7326\/M18-1990"},{"key":"e_1_3_2_1_91_1","unstructured":"Shubham Rathi. 2019. Generating counterfactual and contrastive explanations using SHAP. arXiv preprint arXiv:1906.09293(2019)."},{"key":"e_1_3_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_93_1","unstructured":"Marco\u00a0Tulio Ribeiro Sameer Singh and Carlos Guestrin. 2016. Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386(2016)."},{"key":"e_1_3_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"e_1_3_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2976199"},{"key":"e_1_3_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/371"},{"key":"e_1_3_2_1_97_1","volume-title":"The Shapley value: essays in honor of Lloyd S. Shapley","author":"Roth E","unstructured":"Alvin\u00a0E Roth. 1988. The Shapley value: essays in honor of Lloyd S. Shapley. Cambridge University Press."},{"key":"e_1_3_2_1_98_1","doi-asserted-by":"publisher","DOI":"10.1145\/2623330.2630823"},{"key":"e_1_3_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_3_2_1_100_1","unstructured":"Shiori Sagawa Pang\u00a0Wei Koh Tatsunori\u00a0B Hashimoto and Percy Liang. 2019. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731(2019)."},{"key":"e_1_3_2_1_101_1","unstructured":"Shiori Sagawa Pang\u00a0Wei Koh Tatsunori\u00a0B. Hashimoto and Percy Liang. 2020. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. arxiv:1911.08731\u00a0[cs.LG]"},{"key":"e_1_3_2_1_102_1","volume-title":"Proceedings of the 32nd International Conference on Neural Information Processing Systems. 10999\u201311010","author":"Samadi Samira","year":"2018","unstructured":"Samira Samadi, Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, and Santosh Vempala. 2018. The price of fair PCA: one extra dimension. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. 10999\u201311010."},{"key":"e_1_3_2_1_103_1","volume-title":"International Conference on Machine Learning. PMLR, 3145\u20133153","author":"Shrikumar Avanti","year":"2017","unstructured":"Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning. PMLR, 3145\u20133153."},{"key":"e_1_3_2_1_104_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445934"},{"key":"e_1_3_2_1_105_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278725"},{"key":"e_1_3_2_1_106_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41591-020-0942-0"},{"key":"e_1_3_2_1_107_1","doi-asserted-by":"publisher","DOI":"10.1109\/SDS.2019.00-11"},{"key":"e_1_3_2_1_108_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-015-5528-6"},{"key":"e_1_3_2_1_109_1","unstructured":"Christina Wadsworth Francesca Vera and Chris Piech. 2018. Achieving fairness through adversarial learning: an application to recidivism prediction. arXiv preprint arXiv:1807.00199(2018)."},{"key":"e_1_3_2_1_110_1","unstructured":"Caroline Wang Bin Han Bhrij Patel Feroze Mohideen and Cynthia Rudin. 2020. In pursuit of interpretable fair and accurate machine learning for criminal recidivism prediction. arXiv preprint arXiv:2005.04176(2020)."},{"key":"e_1_3_2_1_111_1","doi-asserted-by":"publisher","DOI":"10.1145\/3334480.3381069"},{"key":"e_1_3_2_1_112_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0067863"},{"key":"e_1_3_2_1_113_1","unstructured":"Michael Wick Swetasudha Panda and Jean-Baptiste Tristan. 2019. Unlocking fairness: a trade-off revisited. (2019)."},{"key":"e_1_3_2_1_114_1","volume-title":"LSAC national longitudinal bar passage study","author":"Wightman F","unstructured":"Linda\u00a0F Wightman. 1998. LSAC national longitudinal bar passage study. Law School Admission Council."},{"key":"e_1_3_2_1_115_1","unstructured":"Eric Wong Shibani Santurkar and Aleksander M\u0105dry. 2021. Leveraging Sparse Linear Layers for Debuggable Deep Networks. arXiv preprint arXiv:2105.04857(2021)."},{"key":"e_1_3_2_1_116_1","unstructured":"Muhammad\u00a0Bilal Zafar Isabel Valera Manuel\u00a0Gomez Rogriguez and Krishna\u00a0P Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. PMLR 962\u2013970."},{"key":"e_1_3_2_1_117_1","volume-title":"International conference on machine learning. PMLR, 325\u2013333","author":"Zemel Rich","year":"2013","unstructured":"Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International conference on machine learning. PMLR, 325\u2013333."},{"key":"e_1_3_2_1_118_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278779"},{"key":"e_1_3_2_1_119_1","volume-title":"Improving the Fairness of Chest X-ray Classifiers. In Conference on Health, Inference, and Learning. PMLR, 204\u2013233","author":"Zhang Haoran","year":"2022","unstructured":"Haoran Zhang, Natalie Dullerud, Karsten Roth, Lauren Oakden-Rayner, Stephen Pfohl, and Marzyeh Ghassemi. 2022. Improving the Fairness of Chest X-ray Classifiers. In Conference on Health, Inference, and Learning. PMLR, 204\u2013233."},{"key":"e_1_3_2_1_120_1","volume-title":"Learning Optimal Predictive Checklists. Advances in Neural Information Processing Systems 34","author":"Zhang Haoran","year":"2021","unstructured":"Haoran Zhang, Quaid Morris, Berk Ustun, and Marzyeh Ghassemi. 2021. Learning Optimal Predictive Checklists. Advances in Neural Information Processing Systems 34 (2021)."},{"key":"e_1_3_2_1_121_1","unstructured":"Indre Zliobaite. 2015. On the relation between accuracy and fairness in binary classification. arXiv preprint arXiv:1505.05723(2015)."}],"event":{"name":"FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency","location":"Seoul Republic of Korea","acronym":"FAccT '22","sponsor":["ACM Association for Computing Machinery"]},"container-title":["2022 ACM Conference on Fairness Accountability and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531146.3533179","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3531146.3533179","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:02:09Z","timestamp":1750186929000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531146.3533179"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,20]]},"references-count":121,"alternative-id":["10.1145\/3531146.3533179","10.1145\/3531146"],"URL":"https:\/\/doi.org\/10.1145\/3531146.3533179","relation":{},"subject":[],"published":{"date-parts":[[2022,6,20]]},"assertion":[{"value":"2022-06-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}