{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T07:42:01Z","timestamp":1773992521988,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":70,"publisher":"ACM","license":[{"start":{"date-parts":[[2024,6,3]],"date-time":"2024-06-03T00:00:00Z","timestamp":1717372800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/https:\/\/doi.org\/10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1845487"],"award-info":[{"award-number":["1845487"]}],"id":[{"id":"10.13039\/https:\/\/doi.org\/10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2024,6,3]]},"DOI":"10.1145\/3630106.3659043","type":"proceedings-article","created":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T13:14:21Z","timestamp":1717593261000},"page":"2374-2388","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":9,"title":["Understanding Disparities in Post Hoc Machine Learning Explanation"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1269-7071","authenticated-orcid":false,"given":"Vishwali","family":"Mhasawade","sequence":"first","affiliation":[{"name":"New York University, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0944-4313","authenticated-orcid":false,"given":"Salman","family":"Rahman","sequence":"additional","affiliation":[{"name":"New York University, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-1741-566X","authenticated-orcid":false,"given":"Zo\u00e9","family":"Haskell-Craig","sequence":"additional","affiliation":[{"name":"New York University, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5346-7259","authenticated-orcid":false,"given":"Rumi","family":"Chunara","sequence":"additional","affiliation":[{"name":"New York University, USA"}]}],"member":"320","published-online":{"date-parts":[[2024,6,5]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Sanity checks for saliency maps. Advances in neural information processing systems 31","author":"Adebayo Julius","year":"2018","unstructured":"Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. Advances in neural information processing systems 31 (2018)."},{"key":"e_1_3_2_1_2_1","volume-title":"Post hoc explanations may be ineffective for detecting unknown spurious correlation. arXiv preprint arXiv:2212.04629","author":"Adebayo Julius","year":"2022","unstructured":"Julius Adebayo, Michael Muelly, Hal Abelson, and Been Kim. 2022. Post hoc explanations may be ineffective for detecting unknown spurious correlation. arXiv preprint arXiv:2212.04629 (2022)."},{"key":"e_1_3_2_1_3_1","volume-title":"The opportunities and challenges of ChatGPT in education. Interactive Learning Environments","author":"Adeshola Ibrahim","year":"2023","unstructured":"Ibrahim Adeshola and Adeola\u00a0Praise Adepoju. 2023. The opportunities and challenges of ChatGPT in education. Interactive Learning Environments (2023), 1\u201314."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3233547.3233667"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2023.102616"},{"key":"e_1_3_2_1_6_1","volume-title":"Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 1194\u20131206","author":"Balagopalan Aparna","year":"2022","unstructured":"Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, and Marzyeh Ghassemi. 2022. The road to explainability is paved with bias: Measuring the fairness of explanations. 
In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 1194\u20131206."},{"key":"e_1_3_2_1_7_1","volume-title":"Fairness and Machine Learning: Limitations and Opportunities","author":"Barocas Solon","unstructured":"Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2023. Fairness and Machine Learning: Limitations and Opportunities. MIT Press."},{"key":"e_1_3_2_1_8_1","volume-title":"Explainability for fair machine learning. arXiv preprint arXiv:2010.07389","author":"Begley Tom","year":"2020","unstructured":"Tom Begley, Tobias Schwedes, Christopher Frye, and Ilya Feige. 2020. Explainability for fair machine learning. arXiv preprint arXiv:2010.07389 (2020)."},{"key":"e_1_3_2_1_9_1","volume-title":"Classification by set cover: The prototype vector machine. arXiv preprint arXiv:0908.2284","author":"Bien Jacob","year":"2009","unstructured":"Jacob Bien and Robert Tibshirani. 2009. Classification by set cover: The prototype vector machine. arXiv preprint arXiv:0908.2284 (2009)."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.12228"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10614-020-10042-0"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2788613"},{"key":"e_1_3_2_1_13_1","volume-title":"Ethical machine learning in healthcare. Annual review of biomedical data science 4","author":"Chen Y","year":"2021","unstructured":"Irene\u00a0Y Chen, Emma Pierson, Sherri Rose, Shalmali Joshi, Kadija Ferryman, and Marzyeh Ghassemi. 2021. Ethical machine learning in healthcare. Annual review of biomedical data science 4 (2021), 123\u2013144."},{"key":"e_1_3_2_1_14_1","volume-title":"Algorithmic fairness in artificial intelligence for medicine and healthcare. 
Nature biomedical engineering 7, 6","author":"Chen J","year":"2023","unstructured":"Richard\u00a0J Chen, Judy\u00a0J Wang, Drew\u00a0FK Williamson, Tiffany\u00a0Y Chen, Jana Lipkova, Ming\u00a0Y Lu, Sharifa Sahai, and Faisal Mahmood. 2023. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nature biomedical engineering 7, 6 (2023), 719\u2013742."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33017801"},{"key":"e_1_3_2_1_16_1","volume-title":"Extracting tree-structured representations of trained networks. Advances in neural information processing systems 8","author":"Craven Mark","year":"1995","unstructured":"Mark Craven and Jude Shavlik. 1995. Extracting tree-structured representations of trained networks. Advances in neural information processing systems 8 (1995)."},{"key":"e_1_3_2_1_17_1","volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society. 203\u2013214","author":"Dai Jessica","year":"2022","unstructured":"Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen\u00a0H Bach, and Himabindu Lakkaraju. 2022. Fairness via explanation quality: Evaluating disparities in the quality of post hoc explanations. In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society. 203\u2013214."},{"key":"e_1_3_2_1_18_1","volume-title":"What will it take to generate fairness-preserving explanations?arXiv preprint arXiv:2106.13346","author":"Dai Jessica","year":"2021","unstructured":"Jessica Dai, Sohini Upadhyay, Stephen\u00a0H Bach, and Himabindu Lakkaraju. 2021. What will it take to generate fairness-preserving explanations?arXiv preprint arXiv:2106.13346 (2021)."},{"key":"e_1_3_2_1_19_1","first-page":"114","article-title":"Algorithm aversion: people erroneously avoid algorithms after seeing them err.Journal of Experimental Psychology","volume":"144","author":"Dietvorst J","year":"2015","unstructured":"Berkeley\u00a0J Dietvorst, Joseph\u00a0P Simmons, and Cade Massey. 2015. 
Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144, 1 (2015), 114.","journal-title":"General"},{"key":"e_1_3_2_1_20_1","volume-title":"41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 0210\u20130215","author":"Do\u0161ilovi\u0107 Filip\u00a0Karlo","year":"2018","unstructured":"Filip\u00a0Karlo Do\u0161ilovi\u0107, Mario Br\u010di\u0107, and Nikica Hlupi\u0107. 2018. Explainable artificial intelligence: A survey. In 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, 0210\u20130215."},{"key":"e_1_3_2_1_21_1","volume-title":"The accuracy, fairness, and limits of predicting recidivism. Science advances 4, 1","author":"Dressel Julia","year":"2018","unstructured":"Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. Science advances 4, 1 (2018), eaao5580."},{"key":"e_1_3_2_1_22_1","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI machine learning repository. (2017)."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-022-00854-z"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3458723"},{"key":"e_1_3_2_1_25_1","first-page":"191","article-title":"A review of challenges and opportunities in machine learning for health","volume":"2020","author":"Ghassemi Marzyeh","year":"2020","unstructured":"Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew\u00a0L Beam, Irene\u00a0Y Chen, and Rajesh Ranganath. 2020. A review of challenges and opportunities in machine learning for health. 
AMIA Summits on Translational Science Proceedings 2020 (2020), 191.","journal-title":"AMIA Summits on Translational Science Proceedings"},{"key":"e_1_3_2_1_26_1","volume-title":"Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? arXiv preprint arXiv:2005.01831","author":"Hase Peter","year":"2020","unstructured":"Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? arXiv preprint arXiv:2005.01831 (2020)."},{"key":"e_1_3_2_1_27_1","volume-title":"International Conference on Machine Learning. PMLR, 2439\u20132448","author":"Kallus Nathan","year":"2018","unstructured":"Nathan Kallus and Angela Zhou. 2018. Residual unfairness in fair machine learning from prejudiced data. In International Conference on Machine Learning. PMLR, 2439\u20132448."},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33486-3_3"},{"key":"e_1_3_2_1_29_1","volume-title":"International Conference on Artificial Intelligence and Statistics. PMLR, 895\u2013905","author":"Karimi Amir-Hossein","year":"2020","unstructured":"Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. Model-agnostic counterfactual explanations for consequential decisions. In International Conference on Artificial Intelligence and Statistics. PMLR, 895\u2013905."},{"key":"e_1_3_2_1_30_1","first-page":"42","article-title":"Racial underrepresentation in dermatological datasets leads to biased machine learning models and inequitable healthcare","volume":"3","author":"Kleinberg Giona","year":"2022","unstructured":"Giona Kleinberg, Michael\u00a0J Diaz, Sai Batchu, and Brandon Lucke-Wold. 2022. Racial underrepresentation in dermatological datasets leads to biased machine learning models and inequitable healthcare. 
Journal of biomed research 3, 1 (2022), 42.","journal-title":"Journal of biomed research"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939874"},{"key":"e_1_3_2_1_32_1","volume-title":"The dangers of post-hoc interpretability: Unjustified counterfactual explanations. arXiv preprint arXiv:1907.09294","author":"Laugel Thibault","year":"2019","unstructured":"Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2019. The dangers of post-hoc interpretability: Unjustified counterfactual explanations. arXiv preprint arXiv:1907.09294 (2019)."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"crossref","unstructured":"Benjamin Letham, Cynthia Rudin, Tyler\u00a0H McCormick, and David Madigan. 2015. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. (2015).","DOI":"10.1214\/15-AOAS848"},{"key":"e_1_3_2_1_34_1","unstructured":"Moshe Lichman. 2013. UCI machine learning repository."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.3390\/e23010018"},{"key":"e_1_3_2_1_36_1","volume-title":"International Conference on Machine Learning. PMLR, 6781\u20136792","author":"Liu Z","year":"2021","unstructured":"Evan\u00a0Z Liu, Behzad Haghgoo, Annie\u00a0S Chen, Aditi Raghunathan, Pang\u00a0Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. 2021. Just train twice: Improving group robustness without training group information. In International Conference on Machine Learning. PMLR, 6781\u20136792."},{"key":"e_1_3_2_1_37_1","volume-title":"A unified approach to interpreting model predictions. Advances in neural information processing systems 30","author":"Lundberg M","year":"2017","unstructured":"Scott\u00a0M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_3_2_1_38_1","volume-title":"Fairness and missing values. 
arXiv preprint arXiv:1905.12728","author":"Mart\u00ednez-Plumed Fernando","year":"2019","unstructured":"Fernando Mart\u00ednez-Plumed, C\u00e8sar Ferri, David Nieves, and Jos\u00e9 Hern\u00e1ndez-Orallo. 2019. Fairness and missing values. arXiv preprint arXiv:1905.12728 (2019)."},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3461702.3462587"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287596"},{"key":"e_1_3_2_1_41_1","volume-title":"A unifying view on dataset shift in classification. Pattern recognition 45, 1","author":"Moreno-Torres G","year":"2012","unstructured":"Jose\u00a0G Moreno-Torres, Troy Raeder, Roc\u00edo Alaiz-Rodr\u00edguez, Nitesh\u00a0V Chawla, and Francisco Herrera. 2012. A unifying view on dataset shift in classification. Pattern recognition 45, 1 (2012), 521\u2013530."},{"key":"e_1_3_2_1_42_1","volume-title":"Diagnosing Model Performance Under Distribution Shift. arXiv preprint arXiv:2303.02011","author":"Namkoong Hongseok","year":"2023","unstructured":"Hongseok Namkoong, Steve Yadlowsky, 2023. Diagnosing Model Performance Under Distribution Shift. arXiv preprint arXiv:2303.02011 (2023)."},{"key":"e_1_3_2_1_43_1","volume-title":"Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook","author":"Pessach Dana","unstructured":"Dana Pessach and Erez Shmueli. 2023. Algorithmic fairness. In Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook. Springer, 867\u2013886."},{"key":"e_1_3_2_1_44_1","volume-title":"NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models.","author":"Pfohl Stephen\u00a0Robert","year":"2023","unstructured":"Stephen\u00a0Robert Pfohl, Natalie Harris, Chirag Nagpal, David Madras, Vishwali Mhasawade, Olawale\u00a0Elijah Salaudeen, Katherine\u00a0A Heller, Sanmi Koyejo, and Alexander\u00a0Nicholas D\u2019Amour. 2023. 
Understanding subgroup performance differences of fair predictors using causal models. In NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models."},{"key":"e_1_3_2_1_45_1","volume-title":"Model agnostic supervised local explanations. Advances in neural information processing systems 31","author":"Plumb Gregory","year":"2018","unstructured":"Gregory Plumb, Denali Molitor, and Ameet\u00a0S Talwalkar. 2018. Model agnostic supervised local explanations. Advances in neural information processing systems 31 (2018)."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"e_1_3_2_1_48_1","volume-title":"Towards Unraveling Calibration Biases in Medical Image Analysis. In Workshop on Clinical Image-Based Procedures. Springer, 132\u2013141","author":"Ricci\u00a0Lara Mar\u00eda\u00a0Agustina","year":"2023","unstructured":"Mar\u00eda\u00a0Agustina Ricci\u00a0Lara, Candelaria Mosquera, Enzo Ferrante, and Rodrigo Echeveste. 2023. Towards Unraveling Calibration Biases in Medical Image Analysis. In Workshop on Clinical Image-Based Procedures. Springer, 132\u2013141."},{"key":"e_1_3_2_1_49_1","volume-title":"Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence 1, 5","author":"Rudin Cynthia","year":"2019","unstructured":"Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence 1, 5 (2019), 206\u2013215."},{"key":"e_1_3_2_1_50_1","unstructured":"Amit Sangroya, Mouli Rastogi, C Anantaram, and Lovekesh Vig. 2020. Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models. 
In CIKM (Workshops)."},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445865"},{"key":"e_1_3_2_1_53_1","volume-title":"Reliable post hoc explanations: Modeling uncertainty in explainability. Advances in neural information processing systems 34","author":"Slack Dylan","year":"2021","unstructured":"Dylan Slack, Anna Hilgard, Sameer Singh, and Himabindu Lakkaraju. 2021. Reliable post hoc explanations: Modeling uncertainty in explainability. Advances in neural information processing systems 34 (2021), 9391\u20139404."},{"key":"e_1_3_2_1_54_1","volume-title":"Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825","author":"Smilkov Daniel","year":"2017","unstructured":"Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\u00e9gas, and Martin Wattenberg. 2017. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017)."},{"key":"e_1_3_2_1_55_1","volume-title":"Counterfactual Normalization: Proactively Addressing Dataset Shift Using Causal Mechanisms.. In UAI. 947\u2013957.","author":"Subbaswamy Adarsh","year":"2018","unstructured":"Adarsh Subbaswamy and Suchi Saria. 2018. Counterfactual Normalization: Proactively Addressing Dataset Shift Using Causal Mechanisms.. In UAI. 947\u2013957."},{"key":"e_1_3_2_1_56_1","first-page":"345","article-title":"From development to deployment: dataset shift, causality, and shift-stable models in health AI","volume":"21","author":"Subbaswamy Adarsh","year":"2020","unstructured":"Adarsh Subbaswamy and Suchi Saria. 2020. From development to deployment: dataset shift, causality, and shift-stable models in health AI. 
Biostatistics 21, 2 (2020), 345\u2013352.","journal-title":"Biostatistics"},{"key":"e_1_3_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/3278721.3278725"},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/SDS.2019.00-11"},{"key":"e_1_3_2_1_59_1","volume-title":"Developing a fidelity evaluation approach for interpretable machine learning. arXiv preprint arXiv:2106.08492","author":"Velmurugan Mythreyi","year":"2021","unstructured":"Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira, and Renuka Sindhgatta. 2021. Developing a fidelity evaluation approach for interpretable machine learning. arXiv preprint arXiv:2106.08492 (2021)."},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"crossref","unstructured":"Darshali\u00a0A Vyas, Leo\u00a0G Eisenstein, and David\u00a0S Jones. 2020. Hidden in plain sight\u2014reconsidering the use of race correction in clinical algorithms. 874\u2013882\u00a0pages.","DOI":"10.1056\/NEJMms2004740"},{"key":"e_1_3_2_1_61_1","first-page":"841","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the GDPR","volume":"31","author":"Wachter Sandra","year":"2017","unstructured":"Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.","journal-title":"Harv. JL & Tech."},{"key":"e_1_3_2_1_62_1","unstructured":"Fulton Wang and Cynthia Rudin. 2015. Falling rule lists. In Artificial intelligence and statistics. PMLR, 1013\u20131022."},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/s41060-021-00259-z"},{"key":"e_1_3_2_1_64_1","volume-title":"Bloomberggpt: A large language model for finance. arXiv preprint arXiv:2303.17564","author":"Wu Shijie","year":"2023","unstructured":"Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. 
Bloomberggpt: A large language model for finance. arXiv preprint arXiv:2303.17564 (2023)."},{"key":"e_1_3_2_1_65_1","volume-title":"Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579","author":"Yosinski Jason","year":"2015","unstructured":"Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. 2015. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579 (2015)."},{"key":"e_1_3_2_1_66_1","unstructured":"Muhammad\u00a0Bilal Zafar, Isabel Valera, Manuel\u00a0Gomez Rodriguez, and Krishna\u00a0P Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial intelligence and statistics. PMLR, 962\u2013970."},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1111\/rssa.12227"},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0291107"},{"key":"e_1_3_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.3390\/electronics10050593"},{"key":"e_1_3_2_1_70_1","volume-title":"Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. 2429\u20132438","author":"Zhou Zhengze","year":"2021","unstructured":"Zhengze Zhou, Giles Hooker, and Fei Wang. 2021. S-lime: Stabilized-lime for model explanation. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. 
2429\u20132438."}],"event":{"name":"FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency","location":"Rio de Janeiro Brazil","acronym":"FAccT '24"},"container-title":["The 2024 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3630106.3659043","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3630106.3659043","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T23:57:07Z","timestamp":1750291027000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3630106.3659043"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,3]]},"references-count":70,"alternative-id":["10.1145\/3630106.3659043","10.1145\/3630106"],"URL":"https:\/\/doi.org\/10.1145\/3630106.3659043","relation":{},"subject":[],"published":{"date-parts":[[2024,6,3]]},"assertion":[{"value":"2024-06-05","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}