{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T13:32:59Z","timestamp":1769002379766,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":41,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,12,21]],"date-time":"2020-12-21T00:00:00Z","timestamp":1608508800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,12,21]]},"DOI":"10.1145\/3324884.3418932","type":"proceedings-article","created":{"date-parts":[[2021,1,27]],"date-time":"2021-01-27T23:39:02Z","timestamp":1611790742000},"page":"1229-1233","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":26,"title":["Making fair ML software using trustworthy explanation"],"prefix":"10.1145","author":[{"given":"Joymallya","family":"Chakraborty","sequence":"first","affiliation":[{"name":"North Carolina State University"}]},{"given":"Kewen","family":"Peng","sequence":"additional","affiliation":[{"name":"North Carolina State University"}]},{"given":"Tim","family":"Menzies","sequence":"additional","affiliation":[{"name":"North Carolina State University"}]}],"member":"320","published-online":{"date-parts":[[2021,1,27]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"1994. UCI:Adult Data Set. (1994). http:\/\/mlr.cs.umass.edu\/ml\/datasets\/Adult"},{"key":"e_1_3_2_1_2_1","volume-title":"Machine Bias. www.propublica.org (May","year":"2016","unstructured":"2016. Machine Bias. www.propublica.org (May 2016). https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"e_1_3_2_1_3_1","volume-title":"Google's Sentiment Analyzer Thinks Being Gay Is Bad. Motherboard (Oct","year":"2017","unstructured":"2017. Google's Sentiment Analyzer Thinks Being Gay Is Bad. Motherboard (Oct 2017). https:\/\/bit.ly\/2yMax8V"},{"key":"e_1_3_2_1_4_1","volume-title":"AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. (10","year":"2018","unstructured":"2018. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. (10 2018). https:\/\/github.com\/IBM\/AIF360"},{"key":"e_1_3_2_1_5_1","volume-title":"Amazon scraps secret AI recruiting tool that showed bias against women. (Oct","year":"2018","unstructured":"2018. Amazon scraps secret AI recruiting tool that showed bias against women. (Oct 2018). https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight\/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G"},{"key":"e_1_3_2_1_6_1","unstructured":"2018. Ethics Guidelines for Trustworthy Artificial Intelligence. https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai"},{"key":"e_1_3_2_1_7_1","unstructured":"2018. Facebook says it has a tool to detect bias in its artificial intelligence. (2018). https:\/\/qz.com\/1268520\/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence\/"},{"key":"e_1_3_2_1_8_1","volume-title":"FAIRWARE 2018: International Workshop on Software Fairness.","year":"2018","unstructured":"2018. FAIRWARE 2018: International Workshop on Software Fairness. (2018). http:\/\/fairware.cs.umass.edu\/"},{"key":"e_1_3_2_1_9_1","volume-title":"Accountability, Transparency, and Ethics in AI.","year":"2018","unstructured":"2018. FATE: Fairness, Accountability, Transparency, and Ethics in AI. (2018). https:\/\/www.microsoft.com\/en-us\/research\/group\/fate\/"},{"key":"e_1_3_2_1_10_1","unstructured":"2019. Ethically-Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems."},{"key":"e_1_3_2_1_11_1","volume-title":"EXPLAIN 2019. (2019","year":"2019","unstructured":"2019. EXPLAIN 2019. (2019). https:\/\/2019.ase-conferences.org\/home\/explain-2019"},{"key":"e_1_3_2_1_12_1","volume-title":"ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*).","year":"2020","unstructured":"2020. ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*). (2020). https:\/\/fatconference.org\/"},{"key":"e_1_3_2_1_13_1","unstructured":"2020. aif360.metrics.ClassificationMetric. https:\/\/aif360.readthedocs.io\/en\/latest\/modules\/generated\/aif360.metrics.ClassificationMetric.html#aif360.metrics.ClassificationMetric"},{"key":"e_1_3_2_1_14_1","volume-title":"International Workshop on Fair and Interpretable Learning Algorithms.","year":"2020","unstructured":"2020. International Workshop on Fair and Interpretable Learning Algorithms. (2020). http:\/\/tiny.cc\/FILA"},{"key":"e_1_3_2_1_15_1","unstructured":"2020. LIME - Local Interpretable Model-Agnostic Explanations can be used with models learnt with the AIF 360 toolkit to generate explanations for model predictions. (2020). https:\/\/github.com\/IBM\/AIF360\/blob\/master\/examples\/demo_lime.ipynb"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3338906.3338937"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","unstructured":"Rico Angell Brittany Johnson Yuriy Brun and Alexandra Meliou. [n.d.]. Themis: Automatically Testing Software for Discrimination (ESEC\/FSE 18). 5. 10.1145\/3236024.3264590","DOI":"10.1145\/3236024.3264590"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"crossref","unstructured":"Richard Berk Hoda Heidari Shahin Jabbari Michael Kearns and Aaron Roth. 2017. Fairness in Criminal Justice Risk Assessments: The State of the Art. arXiv:1703.09207 [stat.ML]","DOI":"10.1177\/0049124118782533"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1177\/0049124118782533"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Reuben Binns. 2019. On the Apparent Conflict Between Individual and Group Fairness. arXiv:1912.06883 [cs.LG]","DOI":"10.1145\/3351095.3372864"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aal4230"},{"key":"e_1_3_2_1_22_1","first-page":"I","article-title":"Optimized Pre-Processing for Discrimination Prevention","volume":"30","author":"Calmon Flavio","year":"2017","unstructured":"Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. 2017. Optimized Pre-Processing for Discrimination Prevention. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 3992--4001. http:\/\/papers.nips.cc\/paper\/6988-optimized-pre-processing-for-discrimination-prevention.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"crossref","unstructured":"Joymallya Chakraborty Suvodeep Majumder Zhe Yu and Tim Menzies. 2020. Fairway: A Way to Build Fair ML Software. arXiv:2003.10354 [cs.SE]","DOI":"10.1145\/3368089.3409697"},{"key":"e_1_3_2_1_24_1","unstructured":"Botty Dimanov Umang Bhatt Mateja Jamnik and Adrian Weller. 2020. You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods. In SafeAI@AAAI."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"crossref","unstructured":"Cynthia Dwork Moritz Hardt Toniann Pitassi Omer Reingold and Rich Zemel. 2011. Fairness Through Awareness. arXiv:1104.3913 [cs.CC]","DOI":"10.1145\/2090236.2090255"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3106237.3106277"},{"key":"e_1_3_2_1_27_1","unstructured":"Stephen Gillen Christopher Jung Michael Kearns and Aaron Roth. 2018. Online Learning with an Unknown Fairness Metric. arXiv:1802.06936 [cs.LG]"},{"key":"e_1_3_2_1_28_1","unstructured":"Moritz Hardt Eric Price and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. arXiv:1610.02413 [cs.LG]"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-011-0463-8"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","unstructured":"Faisal Kamiran Sameen Mansha Asim Karim and Xiangliang Zhang. 2018. Exploiting Reject Option in Classification for Social Discrimination Control. Inf. Sci. (2018). 10.1016\/j.ins.2017.09.064","DOI":"10.1016\/j.ins.2017.09.064"},{"key":"e_1_3_2_1_31_1","volume-title":"Machine Learning and Knowledge Discovery in Databases","author":"Kamishima Toshihiro","unstructured":"Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2012. Fairness-Aware Classifier with Prejudice Remover Regularizer. In Machine Learning and Knowledge Discovery in Databases, Peter A. Flach, Tijl De Bie, and Nello Cristianini (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 35--50."},{"key":"e_1_3_2_1_32_1","unstructured":"Jon Kleinberg Sendhil Mullainathan and Manish Raghavan. 2016. Inherent Trade-Offs in the Fair Determination of Risk Scores. arXiv:1609.05807 [cs.LG]"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"crossref","unstructured":"Preethi Lahoti Krishna P. Gummadi and Gerhard Weikum. 2018. iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making. arXiv:1806.01059 [cs.LG]","DOI":"10.1109\/ICDE.2019.00121"},{"key":"e_1_3_2_1_34_1","first-page":"I","article-title":"A Unified Approach to Interpreting Model Predictions","volume":"30","author":"Lundberg Scott M","year":"2017","unstructured":"Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4765--4774. http:\/\/papers.nips.cc\/paper\/7062-a-unified-approach-to-interpreting-model-predictions.pdf","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_35_1","volume-title":"Weinberger","author":"Pleiss Geoff","year":"2017","unstructured":"Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. On Fairness and Calibration. arXiv:1709.02012 [cs.LG]"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_37_1","volume-title":"Rothblum and Gal Yona","author":"Guy","year":"2018","unstructured":"Guy N. Rothblum and Gal Yona. 2018. Probably Approximately Metric-Fair Learning. arXiv:1803.03242 [cs.LG]"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"crossref","unstructured":"Dylan Slack Sophie Hilgard Emily Jia Sameer Singh and Himabindu Lakkaraju. 2019. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. arXiv:1911.02508 [cs.LG]","DOI":"10.1145\/3375627.3375830"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3238147.3238165"},{"key":"e_1_3_2_1_40_1","volume-title":"Proceedings of the 30th International Conference on Machine Learning (Proceedings of Machine Learning Research), Sanjoy Dasgupta and David McAllester (Eds.)","volume":"28","author":"Zemel Rich","year":"2013","unstructured":"Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning Fair Representations. In Proceedings of the 30th International Conference on Machine Learning (Proceedings of Machine Learning Research), Sanjoy Dasgupta and David McAllester (Eds.), Vol. 28. PMLR, Atlanta, Georgia, USA, 325--333. http:\/\/proceedings.mlr.press\/v28\/zemel13.html"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"crossref","unstructured":"Brian Hu Zhang Blake Lemoine and Margaret Mitchell. 2018. Mitigating Unwanted Biases with Adversarial Learning. arXiv:1801.07593 [cs.LG]","DOI":"10.1145\/3278721.3278779"}],"event":{"name":"ASE '20: 35th IEEE\/ACM International Conference on Automated Software Engineering","location":"Virtual Event Australia","acronym":"ASE '20","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","SIGSOFT ACM Special Interest Group on Software Engineering","IEEE CS"]},"container-title":["Proceedings of the 35th IEEE\/ACM International Conference on Automated Software Engineering"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3324884.3418932","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3324884.3418932","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:47:23Z","timestamp":1750193243000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3324884.3418932"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,12,21]]},"references-count":41,"alternative-id":["10.1145\/3324884.3418932","10.1145\/3324884"],"URL":"https:\/\/doi.org\/10.1145\/3324884.3418932","relation":{},"subject":[],"published":{"date-parts":[[2020,12,21]]},"assertion":[{"value":"2021-01-27","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}