{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,29]],"date-time":"2025-10-29T19:42:43Z","timestamp":1761766963513,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,7,21]],"date-time":"2021-07-21T00:00:00Z","timestamp":1626825600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000161","name":"National Institute of Standards and Technology","doi-asserted-by":"publisher","award":["60NANB18D227"],"award-info":[{"award-number":["60NANB18D227"]}],"id":[{"id":"10.13039\/100000161","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"NSF (National Science Foundation)","doi-asserted-by":"publisher","award":["IIS2046381, IIS1850023, IIS1927486"],"award-info":[{"award-number":["IIS2046381, IIS1850023, IIS1927486"]}],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,7,21]]},"DOI":"10.1145\/3461702.3462614","type":"proceedings-article","created":{"date-parts":[[2021,7,31]],"date-time":"2021-07-31T01:21:32Z","timestamp":1627694492000},"page":"586-596","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":17,"title":["Can We Obtain Fairness For Free?"],"prefix":"10.1145","author":[{"given":"Rashidul","family":"Islam","sequence":"first","affiliation":[{"name":"University of Maryland, Baltimore County, Baltimore, MD, USA"}]},{"given":"Shimei","family":"Pan","sequence":"additional","affiliation":[{"name":"University of Maryland, Baltimore County, Baltimore, MD, USA"}]},{"given":"James R.","family":"Foulds","sequence":"additional","affiliation":[{"name":"University of Maryland, 
Baltimore County, Baltimore, MD, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,7,30]]},"reference":[
{"key":"e_1_3_2_1_1_1","volume-title":"ProPublica","volume":"23","author":"Angwin Julia","year":"2016","unstructured":"Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, May, Vol. 23 (2016)."},
{"key":"e_1_3_2_1_2_1","first-page":"671","article-title":"Big data's disparate impact","volume":"104","author":"Barocas Solon","year":"2016","unstructured":"Solon Barocas and Andrew D Selbst. 2016. Big data's disparate impact. Cal. L. Rev., Vol. 104 (2016), 671.","journal-title":"Cal. L. Rev."},
{"key":"e_1_3_2_1_3_1","volume-title":"4th Annual Workshop on Fairness, Accountability, and Transparency in Machine Learning. ArXiv preprint arXiv:1706","author":"Berk Richard","year":"2017","unstructured":"Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2017. A convex framework for fair regression. 4th Annual Workshop on Fairness, Accountability, and Transparency in Machine Learning. ArXiv preprint arXiv:1706.02409 [cs.LG] (2017)."},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1177\/0049124118782533"},
{"volume-title":"Adverse impact and test validation: A practitioner's guide to valid and defensible employment testing","author":"Biddle Dan","key":"e_1_3_2_1_5_1","unstructured":"Dan Biddle. 2006. Adverse impact and test validation: A practitioner's guide to valid and defensible employment testing. Gower Publishing, Ltd."},
{"key":"e_1_3_2_1_6_1","volume-title":"Conference on Fairness, Accountability and Transparency. PMLR, 149--159","author":"Binns Reuben","year":"2018","unstructured":"Reuben Binns. 2018. Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency. PMLR, 149--159."},
{"key":"e_1_3_2_1_7_1","volume-title":"1st Symposium on Foundations of Responsible Computing. Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr Informatik.","author":"Blum Avrim","year":"2020","unstructured":"Avrim Blum and Kevin Stangl. 2020. Recovering from biased data: Can fairness constraints improve accuracy? In 1st Symposium on Foundations of Responsible Computing. Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr Informatik."},
{"key":"e_1_3_2_1_8_1","unstructured":"Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing Word Embeddings. In Advances in Neural Information Processing Systems."},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-7908-2604-3_16"},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"crossref","unstructured":"Leo Breiman et al. 2001. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, Vol. 16, 3 (2001), 199--231.","DOI":"10.1214\/ss\/1009213726"},
{"key":"e_1_3_2_1_11_1","unstructured":"Irene Chen, Fredrik D Johansson, and David Sontag. 2018. Why is my classifier discriminatory? In Advances in Neural Information Processing Systems. 3539--3550."},
{"key":"e_1_3_2_1_12_1","volume-title":"Artificial intelligence's white guy problem. The New York Times","author":"Crawford Kate","year":"2016","unstructured":"Kate Crawford. 2016. Artificial intelligence's white guy problem. The New York Times (2016)."},
{"key":"e_1_3_2_1_13_1","volume-title":"International Conference on Machine Learning. PMLR, 1436--1445","author":"Creager Elliot","year":"2019","unstructured":"Elliot Creager, David Madras, J\u00f6rn-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. 2019. Flexibly fair representation learning by disentanglement. In International Conference on Machine Learning. PMLR, 1436--1445."},
{"key":"e_1_3_2_1_14_1","unstructured":"Kimberl\u00e9 Crenshaw. 1989. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. U. Chi. Legal F. (1989), 139."},
{"key":"e_1_3_2_1_15_1","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository. http:\/\/archive.ics.uci.edu\/ml"},
{"key":"e_1_3_2_1_16_1","volume-title":"International Conference on Machine Learning. PMLR, 2803--2813","author":"Dutta Sanghamitra","year":"2020","unstructured":"Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, and Kush Varshney. 2020. Is there a trade-off between fairness and accuracy? A perspective using mismatched hypothesis testing. In International Conference on Machine Learning. PMLR, 2803--2813."},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},
{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611976236.48"},
{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00203"},
{"key":"e_1_3_2_1_20_1","first-page":"51","article-title":"Are Parity-Based Notions of AI Fairness Desirable","volume":"43","author":"Foulds James R","year":"2020","unstructured":"James R Foulds and Shimei Pan. 2020. Are Parity-Based Notions of AI Fairness Desirable? Bulletin of the IEEE Technical Committee on Data Engineering, Vol. 43, 4 (2020), 51--73.","journal-title":"Bulletin of the IEEE Technical Committee on Data Engineering"},
{"key":"e_1_3_2_1_21_1","unstructured":"Moritz Hardt, Eric Price, Nati Srebro, et al. 2016. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems. 3315--3323."},
{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1037\/0033-2909.83.6.1053"},
{"key":"e_1_3_2_1_23_1","volume-title":"the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Social Impact Track)","author":"Islam Rashidul","year":"2019","unstructured":"Rashidul Islam, Kamrun Naher Keya, Shimei Pan, and James R Foulds. 2019. Mitigating demographic biases in social media-based recommender systems. In the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Social Impact Track) (2019)."},
{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3449904"},
{"volume-title":"Evaluating learning algorithms: a classification perspective","author":"Japkowicz Nathalie","key":"e_1_3_2_1_25_1","unstructured":"Nathalie Japkowicz and Mohak Shah. 2011. Evaluating learning algorithms: a classification perspective. Cambridge University Press."},
{"key":"e_1_3_2_1_26_1","volume-title":"International Conference on Machine Learning. PMLR, 2564--2572","author":"Kearns Michael","year":"2018","unstructured":"Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. 2018. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning. PMLR, 2564--2572."},
{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611976700.22"},
{"key":"e_1_3_2_1_28_1","volume-title":"ICLR: International Conference on Learning Representations. 1--15","author":"Kingma Diederik P","year":"2015","unstructured":"Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic gradient descent. In ICLR: International Conference on Learning Representations. 1--15."},
{"key":"e_1_3_2_1_29_1","unstructured":"Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems."},
{"key":"e_1_3_2_1_30_1","volume-title":"Nature","volume":"521","author":"LeCun Yann","year":"2015","unstructured":"Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, Vol. 521, 7553 (2015), 436--444."},
{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.5555\/3294771.3294865"},
{"key":"e_1_3_2_1_32_1","volume-title":"There is no trade-off: enforcing fairness can improve accuracy. arXiv preprint arXiv:2011.03173","author":"Maity Subha","year":"2020","unstructured":"Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, and Yuekai Sun. 2020. There is no trade-off: enforcing fairness can improve accuracy. arXiv preprint arXiv:2011.03173 (2020)."},
{"key":"e_1_3_2_1_33_1","volume-title":"Conference on Fairness, Accountability and Transparency. 107--118","author":"Menon Aditya Krishna","year":"2018","unstructured":"Aditya Krishna Menon and Robert C Williamson. 2018. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency. 107--118."},
{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2014.03.001"},
{"key":"e_1_3_2_1_35_1","volume-title":"Equitable Allocation of Healthcare Resources with Fair Cox Models. In AAAI Fall Symposium on AI in Government and Public Sector. AAAI FSS.","author":"Keya Kamrun Naher","year":"2020","unstructured":"Kamrun Naher Keya, Rashidul Islam, Shimei Pan, Ian Stockwell, and James R Foulds. 2020. Equitable Allocation of Healthcare Resources with Fair Cox Models. In AAAI Fall Symposium on AI in Government and Public Sector. AAAI FSS."},
{"volume-title":"Algorithms of oppression: How search engines reinforce racism","author":"Noble Safiya Umoja","key":"e_1_3_2_1_36_1","unstructured":"Safiya Umoja Noble. 2018. Algorithms of oppression: How search engines reinforce racism. NYU Press."},
{"volume-title":"Weapons of math destruction: How big data increases inequality and threatens democracy","author":"O'Neil Cathy","key":"e_1_3_2_1_37_1","unstructured":"Cathy O'Neil. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books."},
{"key":"e_1_3_2_1_38_1","volume-title":"Advances in Neural Information Processing Systems (Autodiff Workshop).","author":"Paszke Adam","year":"2017","unstructured":"Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Advances in Neural Information Processing Systems (Autodiff Workshop)."},
{"key":"e_1_3_2_1_39_1","first-page":"8026","article-title":"PyTorch: An Imperative Style, High-Performance Deep Learning Library","volume":"32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems, Vol. 32 (2019), 8026--8037.","journal-title":"Advances in Neural Information Processing Systems"},
{"key":"e_1_3_2_1_40_1","volume-title":"International Conference on Machine Learning (Automated Machine Learning Workshop)","author":"Perrone Valerio","year":"2020","unstructured":"Valerio Perrone, Michele Donini, Krishnaram Kenthapadi, and C\u00e9dric Archambeau. 2020. Bayesian optimization with fairness constraints. International Conference on Machine Learning (Automated Machine Learning Workshop) (2020)."},
{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1017\/S0269888913000039"},
{"key":"e_1_3_2_1_42_1","volume-title":"Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems","author":"Snoek Jasper","year":"2012","unstructured":"Jasper Snoek, Hugo Larochelle, and Ryan Prescott Adams. 2012. Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems (2012)."},
{"key":"e_1_3_2_1_43_1","volume-title":"The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2164--2173","author":"Song Jiaming","year":"2019","unstructured":"Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, and Stefano Ermon. 2019. Learning controllable fair representations. In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2164--2173."},
{"key":"e_1_3_2_1_44_1","volume-title":"International Conference on Machine Learning. 6373--6382","author":"Ustun Berk","year":"2019","unstructured":"Berk Ustun, Yang Liu, and David Parkes. 2019. Fairness without harm: Decoupled classifiers with preference guarantees. In International Conference on Machine Learning. 6373--6382."},
{"key":"e_1_3_2_1_45_1","volume-title":"Achieving fairness through adversarial learning: an application to recidivism prediction. arXiv preprint arXiv:1807.00199","author":"Wadsworth Christina","year":"2018","unstructured":"Christina Wadsworth, Francesca Vera, and Chris Piech. 2018. Achieving fairness through adversarial learning: an application to recidivism prediction. arXiv preprint arXiv:1807.00199 (2018)."},
{"key":"e_1_3_2_1_46_1","first-page":"1","article-title":"Fairness Constraints: A Flexible Approach for Fair Classification","volume":"20","author":"Zafar Muhammad Bilal","year":"2019","unstructured":"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P Gummadi. 2019. Fairness Constraints: A Flexible Approach for Fair Classification. J. Mach. Learn. Res., Vol. 20, 75 (2019), 1--42.","journal-title":"J. Mach. Learn. Res."},
{"key":"e_1_3_2_1_47_1","volume-title":"Manuel Gomez Rogriguez, and Krishna P Gummadi","author":"Zafar Muhammad Bilal","year":"2017","unstructured":"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. 962--970."},
{"key":"e_1_3_2_1_48_1","volume-title":"International Conference on Machine Learning. 325--333","author":"Zemel Rich","year":"2013","unstructured":"Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning. 325--333."},
{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278779"},
{"key":"e_1_3_2_1_50_1","unstructured":"Han Zhao and Geoff Gordon. 2019. Inherent tradeoffs in learning fair representations. In Advances in Neural Information Processing Systems. 15675--15685."}
],"event":{"name":"AIES '21: AAAI\/ACM Conference on AI, Ethics, and Society","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","AAAI"],"location":"Virtual Event USA","acronym":"AIES '21"},"container-title":["Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3461702.3462614","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3461702.3462614","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3461702.3462614","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:17:06Z","timestamp":1750191426000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3461702.3462614"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,21]]},"references-count":50,"alternative-id":["10.1145\/3461702.3462614","10.1145\/3461702"],"URL":"https:\/\/doi.org\/10.1145\/3461702.3462614","relation":{},"subject":[],"published":{"date-parts":[[2021,7,21]]},"assertion":[{"value":"2021-07-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}