{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,20]],"date-time":"2026-01-20T03:47:09Z","timestamp":1768880829112,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,6,9]],"date-time":"2021-06-09T00:00:00Z","timestamp":1623196800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,6,9]]},"DOI":"10.1145\/3448016.3452787","type":"proceedings-article","created":{"date-parts":[[2021,6,18]],"date-time":"2021-06-18T17:22:30Z","timestamp":1624036950000},"page":"2076-2088","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":22,"title":["OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning"],"prefix":"10.1145","author":[{"given":"Hantian","family":"Zhang","sequence":"first","affiliation":[{"name":"Georgia Institute of Technology, Atlanta, GA, USA"}]},{"given":"Xu","family":"Chu","sequence":"additional","affiliation":[{"name":"Georgia Institute of Technology, Atlanta, GA, USA"}]},{"given":"Abolfazl","family":"Asudeh","sequence":"additional","affiliation":[{"name":"University of Illinois at Chicago, Chicago, IL, USA"}]},{"given":"Shamkant B.","family":"Navathe","sequence":"additional","affiliation":[{"name":"Georgia Institute of Technology, Atlanta, GA, USA"}]}],"member":"320","published-online":{"date-parts":[[2021,6,18]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"US Court","year":"2016","unstructured":"False positives, false negatives, and false analyses: A rejoinder to \"machine bias: There's software used across the country to predict future criminals. and it's biased against blacks.\". US Court, 2016."},{"key":"e_1_3_2_2_2_1","volume-title":"Equivant","year":"2018","unstructured":"Response to propublica: Demonstrating accuracy equity and predictive parity. Equivant, 2018."},{"key":"e_1_3_2_2_3_1","volume-title":"A reductions approach to fair classification. arXiv preprint arXiv:1803.02453","author":"Agarwal A.","year":"2018","unstructured":"A. Agarwal, A. Beygelzimer, M. Dud\u00edk, J. Langford, and H. Wallach. A reductions approach to fair classification. arXiv preprint arXiv:1803.02453, 2018."},{"key":"e_1_3_2_2_4_1","volume-title":"ProPublica","author":"Angwin J.","year":"2016","unstructured":"J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias: Risk assessments in criminal sentencing. ProPublica, 2016."},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2019.00056"},{"key":"e_1_3_2_2_6_1","volume-title":"Infrastructure for usable machine learning: The stanford dawn project. arXiv preprint arXiv:1705.07538","author":"Bailis P.","year":"2017","unstructured":"P. Bailis, K. Olukotun, C. R\u00e9, and M. Zaharia. Infrastructure for usable machine learning: The stanford dawn project. 
arXiv preprint arXiv:1705.07538, 2017."},{"key":"e_1_3_2_2_7_1","volume-title":"Fairness and machine learning: Limitations and opportunities. fairmlbook.org","author":"Barocas S.","year":"2019","unstructured":"S. Barocas, M. Hardt, and A. Narayanan. Fairness and machine learning: Limitations and opportunities. fairmlbook.org, 2019."},{"key":"e_1_3_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1177\/0049124118782533"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.2200\/S00895ED1V01Y201901DTM057"},{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511804441"},{"key":"e_1_3_2_2_11_1","first-page":"3992","volume-title":"Advances in Neural Information Processing Systems","author":"Calmon F.","year":"2017","unstructured":"F. Calmon, D. Wei, B. Vinzamuri, K. N. Ramamurthy, and K. R. Varshney. Optimized pre-processing for discrimination prevention. In Advances in Neural Information Processing Systems, pages 3992--4001, 2017."},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287586"},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939785"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3380592"},{"key":"e_1_3_2_2_15_1","volume-title":"Amazon scraps secret ai recruiting tool that showed bias against women. reuters","author":"Dastin J.","year":"2018","unstructured":"J. Dastin. Amazon scraps secret ai recruiting tool that showed bias against women. 
reuters (2018), 2018."},{"key":"e_1_3_2_2_16_1","volume-title":"Compas risk scales: Demonstrating accuracy equity and predictive parity","author":"Dieterich W.","year":"2016","unstructured":"W. Dieterich, C. Mendoza, and T. Brennan. Compas risk scales: Demonstrating accuracy equity and predictive parity. Northpointe Inc, 2016."},{"key":"e_1_3_2_2_17_1","volume-title":"The accuracy, fairness, and limits of predicting recidivism. Science advances, 4(1):eaao5580","author":"Dressel J.","year":"2018","unstructured":"J. Dressel and H. Farid. The accuracy, fairness, and limits of predicting recidivism. Science advances, 4(1):eaao5580, 2018."},{"key":"e_1_3_2_2_18_1","volume-title":"UCI machine learning repository","author":"Dua D.","year":"2017","unstructured":"D. Dua and C. Graff. UCI machine learning repository, 2017."},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_2_2_20_1","volume-title":"FATML","author":"Dwork C.","year":"2018","unstructured":"C. Dwork and C. Ilvento. Group fairness under composition. FATML, 2018."},{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2783311"},{"key":"e_1_3_2_2_22_1","first-page":"15","article-title":"Study finds disparities in mortgages by race","author":"Fernandez M.","year":"2007","unstructured":"M. Fernandez. Study finds disparities in mortgages by race. 
New York Times, 15, 2007.","journal-title":"New York Times"},{"key":"e_1_3_2_2_23_1","first-page":"38","article-title":"False positives, false negatives, and false analyses: A rejoinder to machine bias: There's software used across the country to predict future criminals. and it's biased against blacks","volume":"80","author":"Flores A. W.","year":"2016","unstructured":"A. W. Flores, K. Bechtel, and C. T. Lowenkamp. False positives, false negatives, and false analyses: A rejoinder to machine bias: There's software used across the country to predict future criminals. and it's biased against blacks. Fed. Probation, 80:38, 2016.","journal-title":"Fed. Probation"},{"key":"e_1_3_2_2_24_1","volume-title":"An intersectional definition of fairness. CoRR:1807.08362","author":"Foulds J.","year":"2018","unstructured":"J. Foulds and S. Pan. An intersectional definition of fairness. CoRR:1807.08362, 2018."},{"key":"e_1_3_2_2_25_1","volume-title":"On the (im) possibility of fairness. arXiv preprint arXiv:1609.07236","author":"Friedler S. A.","year":"2016","unstructured":"S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian. On the (im) possibility of fairness. arXiv preprint arXiv:1609.07236, 2016."},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1214\/15-AOAS897"},{"key":"e_1_3_2_2_27_1","first-page":"3315","volume-title":"Advances in neural information processing systems","author":"Hardt M.","year":"2016","unstructured":"M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315--3323, 2016."},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-011-0463-8"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287592"},{"key":"e_1_3_2_2_30_1","volume-title":"Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807","author":"Kleinberg J.","year":"2016","unstructured":"J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016."},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.14778\/2994509.2994514"},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2014.03.001"},{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3318464.3389709"},{"key":"e_1_3_2_2_34_1","volume-title":"21 fairness definitions and their politics","author":"Narayanan A.","year":"2018","unstructured":"A. Narayanan. Tutorial: 21 fairness definitions and their politics, 2018. URL https:\/\/www.youtube.com\/watch."},{"key":"e_1_3_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2019.00013"},{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3035918.3054782"},{"key":"e_1_3_2_2_37_1","first-page":"269","volume-title":"Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases","volume":"11","author":"Ratner A.","unstructured":"A. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. R\u00e9. Snorkel: Rapid training data creation with weak supervision. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 11, page 269. NIH Public Access, 2017."},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.14778\/3352063.3352108"},{"key":"e_1_3_2_2_39_1","first-page":"6414","volume-title":"Advances in Neural Information Processing Systems","author":"Russell C.","year":"2017","unstructured":"C. Russell, M. J. Kusner, J. Loftus, and R. Silva. When worlds collide: integrating different counterfactual assumptions in fairness. In Advances in Neural Information Processing Systems, pages 6414--6423, 2017."},{"key":"e_1_3_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3299869.3319901"},{"key":"e_1_3_2_2_41_1","volume-title":"Proceedings of the 23rd International Conference on Extending Database Technology (EDBT)","author":"Schelter S.","year":"2020","unstructured":"S. Schelter, Y. He, J. Khilnani, and J. Stoyanovich. Fairprep: Promoting data to a first-class citizen in studies on fairness-enhancing interventions. 
In Proceedings of the 23rd International Conference on Extending Database Technology (EDBT), 2020."},{"key":"e_1_3_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2460276.2460278"},{"key":"e_1_3_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aag3311"},{"key":"e_1_3_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3194770.3194776"},{"key":"e_1_3_2_2_45_1","volume-title":"LSAC national longitudinal bar passage study","author":"Wightman L. F.","year":"1998","unstructured":"L. F. Wightman and H. Ramsey. LSAC national longitudinal bar passage study. Law School Admission Council, 1998."},{"key":"e_1_3_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.14778\/3297753.3297763"},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052660"},{"issue":"4","key":"e_1_3_2_2_48_1","first-page":"39","article-title":"Accelerating the machine learning lifecycle with mlflow","volume":"41","author":"Zaharia M.","year":"2018","unstructured":"M. Zaharia, A. Chen, A. Davidson, A. Ghodsi, S. A. Hong, A. Konwinski, S. Murching, T. Nykodym, P. Ogilvie, M. Parkhe, et al. Accelerating the machine learning lifecycle with mlflow. IEEE Data Eng. Bull., 41(4):39--45, 2018.","journal-title":"IEEE Data Eng. Bull."},{"key":"e_1_3_2_2_49_1","first-page":"325","volume-title":"International Conference on Machine Learning","author":"Zemel R.","year":"2013","unstructured":"R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. 
Learning fair representations. In International Conference on Machine Learning, pages 325--333, 2013."},{"key":"e_1_3_2_2_50_1","volume-title":"Omnifair: A declarative system for model-agnostic group fairness in machine learning. arXiv preprint","author":"Zhang H.","year":"2021","unstructured":"H. Zhang, X. Chu, A. Asudeh, and S. B. Navathe. Omnifair: A declarative system for model-agnostic group fairness in machine learning. arXiv preprint, 2021."}],"event":{"name":"SIGMOD\/PODS '21: International Conference on Management of Data","location":"Virtual Event China","acronym":"SIGMOD\/PODS '21","sponsor":["SIGMOD ACM Special Interest Group on Management of Data"]},"container-title":["Proceedings of the 2021 International Conference on Management of Data"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3448016.3452787","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3448016.3452787","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:28:05Z","timestamp":1750195685000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3448016.3452787"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,9]]},"references-count":50,"alternative-id":["10.1145\/3448016.3452787","10.1145\/3448016"],"URL":"https:\/\/doi.org\/10.1145\/3448016.3452787","relation":{},"subject":[],"published":{"date-parts":[[2021,6,9]]},"assertion":[{"value":"2021-06-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}