{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,1]],"date-time":"2026-05-01T17:51:07Z","timestamp":1777657867237,"version":"3.51.4"},"reference-count":61,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T00:00:00Z","timestamp":1691712000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Research Foundation\u2013Flanders","award":["11N7723N"],"award-info":[{"award-number":["11N7723N"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2023,10,31]]},"abstract":"<jats:p>\n            Black-box machine learning models are used in an increasing number of high-stakes domains, and this creates a growing need for Explainable AI (XAI). However, the use of XAI in machine learning introduces privacy risks, which currently remain largely unnoticed. Therefore, we explore the possibility of an\n            <jats:italic>explanation linkage attack<\/jats:italic>\n            , which can occur when deploying instance-based strategies to find counterfactual explanations. To counter such an attack, we propose\n            <jats:italic>k<\/jats:italic>\n            -anonymous counterfactual explanations and introduce\n            <jats:italic>pureness<\/jats:italic>\n            as a metric to evaluate the\n            <jats:italic>validity<\/jats:italic>\n            of these\n            <jats:italic>k<\/jats:italic>\n            -anonymous counterfactual explanations. 
Our results show that making the explanations, rather than the whole dataset,\n            <jats:italic>k<\/jats:italic>\n            -anonymous, is beneficial for the quality of the explanations.\n          <\/jats:p>","DOI":"10.1145\/3608482","type":"journal-article","created":{"date-parts":[[2023,7,11]],"date-time":"2023-07-11T11:55:15Z","timestamp":1689076515000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":18,"title":["The Privacy Issue of Counterfactual Explanations: Explanation Linkage Attacks"],"prefix":"10.1145","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3784-826X","authenticated-orcid":false,"given":"Sofie","family":"Goethals","sequence":"first","affiliation":[{"name":"University of Antwerp, Belgium"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1429-2836","authenticated-orcid":false,"given":"Kenneth","family":"S\u00f6rensen","sequence":"additional","affiliation":[{"name":"University of Antwerp, Belgium"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8397-2937","authenticated-orcid":false,"given":"David","family":"Martens","sequence":"additional","affiliation":[{"name":"University of Antwerp, Belgium"}]}],"member":"320","published-online":{"date-parts":[[2023,8,11]]},"reference":[{"key":"e_1_3_3_2_2","article-title":"Model extraction from counterfactual explanations","author":"A\u00efvodji Ulrich","year":"2020","unstructured":"Ulrich A\u00efvodji, Alexandre Bolot, and S\u00e9bastien Gambs. 2020. Model extraction from counterfactual explanations. arXiv preprint arXiv:2009.01884 (2020).","journal-title":"arXiv preprint arXiv:2009.01884"},{"key":"e_1_3_3_3_2","first-page":"01","volume-title":"2021 IEEE Symposium Series on Computational Intelligence (SSCI\u201921)","author":"Artelt Andr\u00e9","year":"2021","unstructured":"Andr\u00e9 Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, and Barbara Hammer. 2021. 
Evaluating robustness of counterfactual explanations. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI\u201921). IEEE, 01\u201309."},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.5555\/2870614.2870620"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2005.42"},{"key":"e_1_3_3_6_2","first-page":"1","article-title":"NICE: An algorithm for nearest instance counterfactual explanations","author":"Brughmans Dieter","year":"2023","unstructured":"Dieter Brughmans, Pieter Leyman, and David Martens. 2023. NICE: An algorithm for nearest instance counterfactual explanations. Data Mining and Knowledge Discovery (2023), 1\u201339.","journal-title":"Data Mining and Knowledge Discovery"},{"key":"e_1_3_3_7_2","first-page":"59","article-title":"Trade-offs between privacy-preserving and explainable machine learning in healthcare","author":"Budig Tobias","year":"2021","unstructured":"Tobias Budig, Selina Herrmann, and Alexander Dietz. 2021. Trade-offs between privacy-preserving and explainable machine learning in healthcare. cii Student Papers-2021 (2021), 59.","journal-title":"cii Student Papers-2021"},{"key":"e_1_3_3_8_2","volume-title":"Generating Collective Counterfactual Explanations in Score-based Classification via Mathematical Optimization","author":"Carrizosa Emilio","year":"2021","unstructured":"Emilio Carrizosa, Jasone Ram\u00edrez-Ayerbe, and Dolores Romero. 2021. Generating Collective Counterfactual Explanations in Score-based Classification via Mathematical Optimization. Technical Report. IMUS, Sevilla, Spain. https:\/\/www.researchgate.net"},{"key":"e_1_3_3_9_2","first-page":"292","unstructured":"Hongyan Chang and Reza Shokri. 2021. On the privacy risks of algorithmic fairness. In 2021 IEEE European Symposium on Security and Privacy (EuroS&P).
IEEE, 292-303.","journal-title":"2021 IEEE European Symposium on Security and Privacy (EuroS&P)"},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58112-1_31"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.3390\/app11167274"},{"key":"e_1_3_3_12_2","first-page":"1","volume-title":"International Colloquium on Automata, Languages, and Programming","author":"Dwork Cynthia","year":"2006","unstructured":"Cynthia Dwork. 2006. Differential privacy. In International Colloquium on Automata, Languages, and Programming. Springer, 1\u201312."},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1197\/jamia.M2716"},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1197\/jamia.M3144"},{"key":"e_1_3_3_15_2","first-page":"6","article-title":"Scenarios of attack: The data intruder\u2019s perspective on statistical disclosure risk","volume":"14","author":"Elliot Mark","year":"1999","unstructured":"Mark Elliot and Angela Dale. 1999. Scenarios of attack: The data intruder\u2019s perspective on statistical disclosure risk. Netherlands Official Statistics 14, Spring (1999), 6\u201310.","journal-title":"Netherlands Official Statistics"},{"key":"e_1_3_3_16_2","unstructured":"European Commission. 2021. Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. https:\/\/digital-strategy.ec.europa.eu\/en\/library\/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence. 
European Commission Online accessed February 24 2022."},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF01096763"},{"key":"e_1_3_3_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.5555\/1325851.1325938"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/1538909.1538911"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2008.129"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-022-00579-2"},{"key":"e_1_3_3_23_2","first-page":"1","article-title":"Counterfactual explanations and how to find them: Literature review and benchmarking","author":"Guidotti Riccardo","year":"2022","unstructured":"Riccardo Guidotti. 2022. Counterfactual explanations and how to find them: Literature review and benchmarking. Data Mining and Knowledge Discovery (2022), 1\u201355.","journal-title":"Data Mining and Knowledge Discovery"},{"key":"e_1_3_3_24_2","volume-title":"Race, Ethnicity, Gender, & Class: The Sociology of Group Conflict and Change (Eighth ed.)","author":"Healey Joseph F.","year":"2019","unstructured":"Joseph F. Healey, Andi Stepnick, and Eileen O\u2019Brien. 2019. Race, Ethnicity, Gender, & Class: The Sociology of Group Conflict and Change (Eighth ed.). SAGE."},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/775047.775089"},{"key":"e_1_3_3_26_2","first-page":"163","volume-title":"International Conference on Case-Based Reasoning","author":"Keane Mark T.","year":"2020","unstructured":"Mark T. Keane and Barry Smyth. 2020. Good counterfactuals and where to find them: A case-based technique for generating counterfactuals for explainable AI (XAI). In International Conference on Case-Based Reasoning. 
Springer, 163\u2013178."},{"key":"e_1_3_3_27_2","first-page":"63","volume-title":"Annual Workshop on Information Privacy and National Security","author":"Kisilevich Slava","year":"2008","unstructured":"Slava Kisilevich, Yuval Elovici, Bracha Shapira, and Lior Rokach. 2008. kACTUS 2: Privacy preserving in classification tasks using k-Anonymity. In Annual Workshop on Information Privacy and National Security. Springer, 63\u201381."},{"key":"e_1_3_3_28_2","article-title":"The dangers of post-hoc interpretability: Unjustified counterfactual explanations","author":"Laugel Thibault","year":"2019","unstructured":"Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2019. The dangers of post-hoc interpretability: Unjustified counterfactual explanations. arXiv preprint arXiv:1907.09294 (2019).","journal-title":"arXiv preprint arXiv:1907.09294"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/1066157.1066164"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE.2006.101"},{"key":"e_1_3_3_31_2","first-page":"106","volume-title":"2007 IEEE 23rd International Conference on Data Engineering","author":"Li Ninghui","year":"2007","unstructured":"Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. 2007. t-closeness: Privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd International Conference on Data Engineering.
IEEE, 106\u2013115."},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3436755"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3546872"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2012.02.179"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/1217299.1217302"},{"key":"e_1_3_3_36_2","doi-asserted-by":"publisher","DOI":"10.2307\/2983043"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1093\/oso\/9780192847263.001.0001"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.25300\/MISQ\/2014\/38.1.04"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/1055558.1055591"},{"key":"e_1_3_3_40_2","volume-title":"Interpretable Machine Learning","author":"Molnar Christoph","year":"2020","unstructured":"Christoph Molnar. 2020. Interpretable Machine Learning. Lulu.com."},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/CogMI56440.2022.00012"},{"key":"e_1_3_3_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533235"},{"key":"e_1_3_3_43_2","first-page":"809","volume-title":"Conference on Uncertainty in Artificial Intelligence","author":"Pawelczyk Martin","year":"2020","unstructured":"Martin Pawelczyk, Klaus Broelemann, and Gjergji Kasneci. 2020. On counterfactual explanations under predictive multiplicity. In Conference on Uncertainty in Artificial Intelligence. PMLR, 809\u2013818."},{"key":"e_1_3_3_44_2","article-title":"On the privacy risks of algorithmic recourse","author":"Pawelczyk Martin","year":"2022","unstructured":"Martin Pawelczyk, Himabindu Lakkaraju, and Seth Neel. 2022. On the privacy risks of algorithmic recourse. 
arXiv preprint arXiv:2211.05427 (2022).","journal-title":"arXiv preprint arXiv:2211.05427"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375850"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICEBE.2016.024"},{"key":"e_1_3_3_47_2","article-title":"On the amplification of security and privacy risks by post-hoc explanations in machine learning models","author":"Quan Pengrui","year":"2022","unstructured":"Pengrui Quan, Supriyo Chakraborty, Jeya Vikranth Jeyakumar, and Mani Srivastava. 2022. On the amplification of security and privacy risks by post-hoc explanations in machine learning models. arXiv preprint arXiv:2206.14004 (2022).","journal-title":"arXiv preprint arXiv:2206.14004"},{"key":"e_1_3_3_48_2","article-title":"A survey of privacy attacks in machine learning","author":"Rigaki Maria","year":"2020","unstructured":"Maria Rigaki and Sebastian Garcia. 2020. A survey of privacy attacks in machine learning. arXiv preprint arXiv:2007.07646 (2020).","journal-title":"arXiv preprint arXiv:2007.07646"},{"key":"e_1_3_3_49_2","article-title":"Privacy risks of explaining machine learning models","volume":"1907","author":"Shokri Reza","year":"2019","unstructured":"Reza Shokri, Martin Strobel, and Yair Zick. 2019. Privacy risks of explaining machine learning models. CoRR abs\/1907.00164 (2019). http:\/\/arxiv.org\/abs\/1907.00164.","journal-title":"CoRR"},{"key":"e_1_3_3_50_2","volume-title":"The AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI). AAAI","author":"Shokri Reza","year":"2020","unstructured":"Reza Shokri, Martin Strobel, and Yair Zick. 2020. Exploiting transparency measures for membership inference: A cautionary tale. In The AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI). AAAI, Vol. 
13."},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1088\/1757-899X\/225\/1\/012279"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2021.102488"},{"key":"e_1_3_3_53_2","volume-title":"SafeAI@ AAAI","author":"Sokol Kacper","year":"2019","unstructured":"Kacper Sokol and Peter A. Flach. 2019. Counterfactual explanations of machine learning predictions: Opportunities and challenges for AI safety. In SafeAI@ AAAI."},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.5120\/1113-1457"},{"issue":"2000","key":"e_1_3_3_55_2","first-page":"1","article-title":"Simple demographics often identify people uniquely","volume":"671","author":"Sweeney Latanya","year":"2000","unstructured":"Latanya Sweeney. 2000. Simple demographics often identify people uniquely. Health (San Francisco) 671, 2000 (2000), 1\u201334.","journal-title":"Health (San Francisco)"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1142\/S021848850200165X"},{"key":"e_1_3_3_57_2","doi-asserted-by":"publisher","DOI":"10.1142\/S0218488502001648"},{"key":"e_1_3_3_58_2","volume-title":"Encyclopedia of Cryptography and Security","author":"Tilborg Henk C. A. Van","year":"2014","unstructured":"Henk C. A. Van Tilborg and Sushil Jajodia. 2014. Encyclopedia of Cryptography and Security. Springer Science & Business Media."},{"key":"e_1_3_3_59_2","first-page":"841","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the GDPR","volume":"31","author":"Wachter Sandra","year":"2017","unstructured":"Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.","journal-title":"Harv. 
JL & Tech."},{"issue":"1","key":"e_1_3_3_60_2","first-page":"56","article-title":"The what-if tool: Interactive probing of machine learning models","volume":"26","author":"Wexler James","year":"2019","unstructured":"James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Vi\u00e9gas, and Jimbo Wilson. 2019. The what-if tool: Interactive probing of machine learning models. IEEE Transactions on Visualization and Computer Graphics 26, 1 (2019), 56\u201365.","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/1233321.1233324"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/1150402.1150504"}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3608482","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3608482","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:29:46Z","timestamp":1750285786000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3608482"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,11]]},"references-count":61,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2023,10,31]]}},"alternative-id":["10.1145\/3608482"],"URL":"https:\/\/doi.org\/10.1145\/3608482","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,11]]},"assertion":[{"value":"2022-12-13","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-07-02","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-11","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}