{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T10:23:04Z","timestamp":1771064584714,"version":"3.50.1"},"reference-count":97,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2023,8,21]],"date-time":"2023-08-21T00:00:00Z","timestamp":1692576000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"NSF","award":["IIS-2007398"],"award-info":[{"award-number":["IIS-2007398"]}]},{"name":"NSF","award":["IIS-2205418"],"award-info":[{"award-number":["IIS-2205418"]}]},{"name":"NSF","award":["DMS-2134223"],"award-info":[{"award-number":["DMS-2134223"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Inf. Syst."],"published-print":{"date-parts":[[2024,1,31]]},"abstract":"<jats:p>State-of-the-art industrial-level recommender system applications mostly adopt complicated model structures such as deep neural networks. While this helps with the model performance, the lack of system explainability caused by these nearly blackbox models also raises concerns and potentially weakens the users\u2019 trust in the system. Existing work on explainable recommendation mostly focuses on designing interpretable model structures to generate model-intrinsic explanations. However, most of them have complex structures, and it is difficult to directly apply these designs onto existing recommendation applications due to the effectiveness and efficiency concerns. However, while there have been some studies on explaining recommendation models without knowing their internal structures (i.e., model-agnostic explanations), these methods have been criticized for not reflecting the actual reasoning process of the recommendation model or, in other words,<jats:italic>faithfulness<\/jats:italic>. 
How to develop model-agnostic explanation methods and evaluate them in terms of faithfulness is mostly unknown. In this work, we propose a reusable evaluation pipeline for model-agnostic explainable recommendation. Our pipeline evaluates the quality of model-agnostic explanations from the perspectives of faithfulness and scrutability. We further propose a model-agnostic explanation framework for recommendation and verify it with the proposed evaluation pipeline. Extensive experiments on public datasets demonstrate that our model-agnostic framework is able to generate explanations that are faithful to the recommendation model. We additionally provide quantitative and qualitative studies to show that our explanation framework could enhance the scrutability of blackbox recommendation models. With proper modification, our evaluation pipeline and model-agnostic explanation framework could be easily migrated to existing applications. Through this work, we hope to encourage the community to focus more on the faithfulness evaluation of explainable recommender systems.<\/jats:p>","DOI":"10.1145\/3605357","type":"journal-article","created":{"date-parts":[[2023,6,18]],"date-time":"2023-06-18T08:07:26Z","timestamp":1687075646000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability"],"prefix":"10.1145","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2370-4487","authenticated-orcid":false,"given":"Zhichao","family":"Xu","sequence":"first","affiliation":[{"name":"University of Utah, United States"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-2699-8460","authenticated-orcid":false,"given":"Hansi","family":"Zeng","sequence":"additional","affiliation":[{"name":"University of Massachusetts Amherst, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3646-933X","authenticated-orcid":false,"given":"Juntao","family":"Tan","sequence":"additional","affiliation":[{"name":"Rutgers University, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3881-7935","authenticated-orcid":false,"given":"Zuohui","family":"Fu","sequence":"additional","affiliation":[{"name":"Rutgers University, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2633-8555","authenticated-orcid":false,"given":"Yongfeng","family":"Zhang","sequence":"additional","affiliation":[{"name":"Rutgers University, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5030-709X","authenticated-orcid":false,"given":"Qingyao","family":"Ai","sequence":"additional","affiliation":[{"name":"Tsinghua University, Zhongguancun Laboratory, China"}]}],"member":"320","published-online":{"date-parts":[[2023,8,21]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.5555\/2931100"},{"key":"e_1_3_3_3_2","doi-asserted-by":"crossref","unstructured":"Qingyao Ai Vahid Azizi Xu Chen and Yongfeng Zhang. 2018. Learning heterogeneous knowledge base embeddings for explainable recommendation. Algorithms 11 9 (2018) 137.","DOI":"10.3390\/a11090137"},{"key":"e_1_3_3_4_2","unstructured":"Qingyao Ai and Lakshmi Narayanan Ramasamy. 2021. Model-agnostic vs. Model-intrinsic interpretability for explainable product search. arXiv:2108.05317. Retrieved from https:\/\/arxiv.org\/abs\/2108.05317."},{"key":"e_1_3_3_5_2","unstructured":"Marco Ancona Enea Ceolini Cengiz \u00d6ztireli and Markus Gross. 2017. Towards better understanding of gradient-based attribution methods for deep neural networks. arXiv:1711.06104. Retrieved from https:\/\/arxiv.org\/abs\/1711.06104."},{"key":"e_1_3_3_6_2","doi-asserted-by":"crossref","unstructured":"Pepa Atanasova Jakob Grue Simonsen Christina Lioma and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. arXiv:2009.13295. 
Retrieved from https:\/\/arxiv.org\/abs\/2009.13295.","DOI":"10.18653\/v1\/2020.emnlp-main.263"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3523227.3547374"},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401032"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331211"},{"key":"e_1_3_3_10_2","unstructured":"Osbert Bastani Carolyn Kim and Hamsa Bastani. 2017. Interpretability via model extraction. arXiv:1706.09773. Retrieved from https:\/\/arxiv.org\/abs\/1706.09773."},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098170"},{"key":"e_1_3_3_12_2","first-page":"35","volume-title":"Proceedings of KDD Cup and Workshop","volume":"2007","author":"Bennett James","year":"2007","unstructured":"James Bennett, Stan Lanning, et\u00a0al. 2007. The netflix prize. In Proceedings of KDD Cup and Workshop, Vol. 2007. 35."},{"key":"e_1_3_3_13_2","doi-asserted-by":"crossref","unstructured":"Robin Burke. 2002. Hybrid recommender systems: Survey and experiments. User Model. User-adapt. Interact. 12 4 (2002) 331\u2013370.","DOI":"10.1023\/A:1021240730564"},{"key":"e_1_3_3_14_2","unstructured":"Oana-Maria Camburu Eleonora Giunchiglia Jakob Foerster Thomas Lukasiewicz and Phil Blunsom. 2019. Can I trust the explainer? Verifying post-hoc explanatory methods. arXiv:1910.02065. Retrieved from https:\/\/arxiv.org\/abs\/1910.02065."},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186070"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401042"},{"key":"e_1_3_3_17_2","doi-asserted-by":"crossref","unstructured":"Henriette Cramer Vanessa Evers Satyan Ramlal Maarten Van Someren Lloyd Rutledge Natalia Stash Lora Aroyo and Bob Wielinga. 2008. The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User-adapt. Interact. 
18 5 (2008) 455\u2013496.","DOI":"10.1007\/s11257-008-9051-3"},{"key":"e_1_3_3_18_2","doi-asserted-by":"crossref","unstructured":"Yashar Deldjoo Tommaso Di Noia and Felice Antonio Merra. 2021. A survey on adversarial recommender systems: from attack\/defense strategies to generative adversarial networks. ACM Comput. Surv. 54 2 (2021) 1\u201338.","DOI":"10.1145\/3439729"},{"key":"e_1_3_3_19_2","doi-asserted-by":"crossref","unstructured":"Mengnan Du Ninghao Liu and Xia Hu. 2019. Techniques for interpretable machine learning. Commun. ACM 63 1 (2019) 68\u201377.","DOI":"10.1145\/3359786"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-75390-2_4"},{"key":"e_1_3_3_21_2","first-page":"1607","volume-title":"International Conference on Machine Learning","author":"Furlanello Tommaso","year":"2018","unstructured":"Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. In International Conference on Machine Learning. PMLR, 1607\u20131616."},{"key":"e_1_3_3_22_2","doi-asserted-by":"crossref","unstructured":"Fatih Gedikli Dietmar Jannach and Mouzhi Ge. 2014. How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum.-Comput. Stud. 72 4 (2014) 367\u2013382.","DOI":"10.1016\/j.ijhcs.2013.12.007"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485447.3511937"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3336191.3371824"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3449848"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/DSAA.2018.00018"},{"key":"e_1_3_3_27_2","doi-asserted-by":"crossref","unstructured":"Jianping Gou Baosheng Yu Stephen J Maybank and Dacheng Tao. 2021. Knowledge distillation: A survey. Int. J. Comput. Vis. 
129 6 (2021) 1789\u20131819.","DOI":"10.1007\/s11263-021-01453-z"},{"key":"e_1_3_3_28_2","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1145\/1639714.1639768","volume-title":"Proceedings of the 3rd ACM Conference on Recommender Systems","author":"Green Stephen J","year":"2009","unstructured":"Stephen J Green, Paul Lamere, Jeffrey Alexander, Fran\u00e7ois Maillet, Susanna Kirk, Jessica Holt, Jackie Bourque, and Xiao-Wen Mak. 2009. Generating transparent, steerable recommendations from textual descriptions of items. In Proceedings of the 3rd ACM Conference on Recommender Systems. 281\u2013284."},{"key":"e_1_3_3_29_2","unstructured":"Riccardo Guidotti Anna Monreale Salvatore Ruggieri Dino Pedreschi Franco Turini and Fosca Giannotti. 2018. Local rule-based explanations of black box decision systems. arXiv:1805.10820. Retrieved from https:\/\/arxiv.org\/abs\/1805.10820."},{"key":"e_1_3_3_30_2","volume-title":"Harvey Friedman\u2019s Research on the Foundations of Mathematics","author":"Harrington Leo A.","year":"1985","unstructured":"Leo A. Harrington, Michael D. Morley, A. \u0160cedrov, and Stephen G. Simpson. 1985. Harvey Friedman\u2019s Research on the Foundations of Mathematics. Elsevier."},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/2806416.2806504"},{"key":"e_1_3_3_32_2","unstructured":"Bernease Herman. 2017. The promise and peril of human evaluation for model interpretability. arXiv:1711.07414 (2017). Retrieved from https:\/\/arxiv.org\/abs\/1711.07414."},{"key":"e_1_3_3_33_2","unstructured":"Geoffrey Hinton Oriol Vinyals and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv:1503.02531. Retrieved from https:\/\/arxiv.org\/abs\/1503.02531."},{"key":"e_1_3_3_34_2","unstructured":"Sebastian Hofst\u00e4tter Sophia Althammer Michael Schr\u00f6der Mete Sertkan and Allan Hanbury. 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation. arXiv:2010.02666. 
Retrieved from https:\/\/arxiv.org\/abs\/2010.02666."},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2022.3221847"},{"key":"e_1_3_3_36_2","doi-asserted-by":"crossref","unstructured":"Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? arXiv:2004.03685. Retrieved from https:\/\/arxiv.org\/abs\/2004.03685.","DOI":"10.18653\/v1\/2020.acl-main.386"},{"key":"e_1_3_3_37_2","unstructured":"Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. arXiv:1902.10186. Retrieved from https:\/\/arxiv.org\/abs\/1902.10186."},{"key":"e_1_3_3_38_2","doi-asserted-by":"crossref","unstructured":"Leslie Pack Kaelbling Michael L Littman and Andrew W Moore. 1996. Reinforcement learning: A survey. J. Artif. Intell. Res. 4 (1996) 237\u2013285.","DOI":"10.1613\/jair.301"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450613.3456846"},{"key":"e_1_3_3_40_2","first-page":"895","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Karimi Amir-Hossein","year":"2020","unstructured":"Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. Model-agnostic counterfactual explanations for consequential decisions. In International Conference on Artificial Intelligence and Statistics. PMLR, 895\u2013905."},{"key":"e_1_3_3_41_2","doi-asserted-by":"crossref","unstructured":"Maurice G. Kendall. 1938. A new measure of rank correlation. Biometrika 30 1\/2 (1938) 81\u201393.","DOI":"10.1093\/biomet\/30.1-2.81"},{"key":"e_1_3_3_42_2","unstructured":"Maurice G. Kendall et\u00a0al. 1948. The Advanced Theory of Statistics. Vol. 1 (1948)."},{"key":"e_1_3_3_43_2","first-page":"2668","volume-title":"International Conference on Machine Learning","author":"Kim Been","year":"2018","unstructured":"Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et\u00a0al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International Conference on Machine Learning. PMLR, 2668\u20132677."},{"key":"e_1_3_3_44_2","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster) Yoshua Bengio and Yann LeCun (Eds.). http:\/\/dblp.uni-trier.de\/db\/conf\/iclr\/iclr2015.html#KingmaB14."},{"key":"e_1_3_3_45_2","doi-asserted-by":"crossref","unstructured":"Yehuda Koren Robert Bell and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42 8 (2009) 30\u201337.","DOI":"10.1109\/MC.2009.263"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314229"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3411992"},{"key":"e_1_3_3_48_2","first-page":"684","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920)","author":"Li Manling","year":"2020","unstructured":"Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920). 684\u2013695."},{"key":"e_1_3_3_49_2","doi-asserted-by":"crossref","unstructured":"Sen Li Fuyu Lv Taiwei Jin Guli Lin Keping Yang Xiaoyi Zeng Xiao-Ming Wu and Qianli Ma. 2021. Embedding-based product retrieval in taobao search. arXiv:2106.09297. 
Retrieved from https:\/\/arxiv.org\/abs\/2106.09297.","DOI":"10.1145\/3447548.3467101"},{"key":"e_1_3_3_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401083"},{"key":"e_1_3_3_51_2","unstructured":"Andreas Madsen Siva Reddy and Sarath Chandar. 2021. Post-hoc interpretability for neural NLP: A survey. arXiv:2108.04840. Retrieved from https:\/\/arxiv.org\/abs\/2108.04840."},{"key":"e_1_3_3_52_2","doi-asserted-by":"crossref","unstructured":"Ana Marasovi\u0107 Chandra Bhagavatula Jae Sung Park Ronan Le Bras Noah A Smith and Yejin Choi. 2020. Natural language rationales with full-stack visual reasoning: From pixels to semantic frames to commonsense graphs. arXiv:2010.07526. Retrieved from https:\/\/arxiv.org\/abs\/2010.07526.","DOI":"10.18653\/v1\/2020.findings-emnlp.253"},{"key":"e_1_3_3_53_2","doi-asserted-by":"crossref","unstructured":"Andres Marzal and Enrique Vidal. 1993. Computation of normalized edit distance and applications. IEEE Trans. Pattern Anal. Mach. Intell. 15 9 (1993) 926\u2013932.","DOI":"10.1109\/34.232078"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/2507157.2507163"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3297280.3297443"},{"key":"e_1_3_3_56_2","doi-asserted-by":"crossref","unstructured":"Ingrid Nunes and Dietmar Jannach. 2017. A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User-Adapt. Interact. 27 3 (2017) 393\u2013444.","DOI":"10.1007\/s11257-017-9195-0"},{"key":"e_1_3_3_57_2","first-page":"2311","volume-title":"Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining","author":"Pal Aditya","year":"2020","unstructured":"Aditya Pal, Chantat Eksombatchai, Yitong Zhou, Bo Zhao, Charles Rosenberg, and Jure Leskovec. 2020. PinnerSage: Multi-modal user embedding framework for recommendations at pinterest. 
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2311\u20132320."},{"key":"e_1_3_3_58_2","doi-asserted-by":"crossref","unstructured":"Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22 10 (2009) 1345\u20131359.","DOI":"10.1109\/TKDE.2009.191"},{"key":"e_1_3_3_59_2","doi-asserted-by":"crossref","first-page":"2060","DOI":"10.1145\/3219819.3220072","volume-title":"Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining","author":"Peake Georgina","year":"2018","unstructured":"Georgina Peake and Jun Wang. 2018. Explanation mining: Post hoc interpretability of latent factor models for recommendation systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2060\u20132069."},{"key":"e_1_3_3_60_2","unstructured":"Zhen Qin Le Yan Yi Tay Honglei Zhuang Xuanhui Wang Michael Bendersky and Marc Najork. 2021. Born again neural rankers. arXiv:2109.15285. Retrieved from https:\/\/arxiv.org\/abs\/2109.15285."},{"key":"e_1_3_3_61_2","unstructured":"Steffen Rendle Christoph Freudenthaler Zeno Gantner and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv:1205.2618. Retrieved from https:\/\/arxiv.org\/abs\/1205.2618."},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-85820-3_1"},{"key":"e_1_3_3_65_2","doi-asserted-by":"crossref","unstructured":"Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. Ann. Math. Stat. (1951) 400\u2013407.","DOI":"10.1214\/aoms\/1177729586"},{"key":"e_1_3_3_66_2","doi-asserted-by":"crossref","unstructured":"Alexis Ross Ana Marasovi\u0107 and Matthew E. Peters. 2020. 
Explaining nlp models via minimal contrastive editing (mice). arXiv:2012.13985. Retrieved from https:\/\/arxiv.org\/abs\/2012.13985.","DOI":"10.18653\/v1\/2021.findings-acl.336"},{"key":"e_1_3_3_67_2","unstructured":"Cynthia Rudin. 2018. Please stop explaining black box models for high stakes decisions. Stat 1050 (2018) 26."},{"key":"e_1_3_3_68_2","unstructured":"Ivan Sanchez Tim Rocktaschel Sebastian Riedel and Sameer Singh. 2015. Towards extracting faithful and descriptive representations of latent variable models. In AAAI Spring Symposium on Knowledge Representation and Reasoning (KRR): Integrating Symbolic and Neural Approaches 4\u20131."},{"key":"e_1_3_3_69_2","unstructured":"Victor Sanh Lysandre Debut Julien Chaumond and Thomas Wolf. 2019. DistilBERT a distilled version of BERT: Smaller faster cheaper and lighter. arXiv:1910.01108. Retrieved from https:\/\/arxiv.org\/abs\/1910.01108."},{"key":"e_1_3_3_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/336992.337035"},{"key":"e_1_3_3_71_2","doi-asserted-by":"crossref","first-page":"618","DOI":"10.1145\/3351095.3375234","volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency","author":"Singh Jaspreet","year":"2020","unstructured":"Jaspreet Singh and Avishek Anand. 2020. Model agnostic interpretability of rankers via intent modelling. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 618\u2013628."},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1145\/3460231.3474273"},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482420"},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220021"},{"key":"e_1_3_3_75_2","unstructured":"Maartje ter Hoeve Anne Schuth Daan Odijk and Maarten de Rijke. 2018. Faithfully explaining rankings in a news recommender system. arXiv:1805.05447. 
Retrieved from https:\/\/arxiv.org\/abs\/1805.05447."},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDEW.2007.4401070"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4899-7637-6_10"},{"key":"e_1_3_3_78_2","doi-asserted-by":"crossref","unstructured":"Khanh Hiep Tran Azin Ghazimatin and Rishiraj Saha Roy. 2021. Counterfactual explanations for neural recommenders. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 1627\u20131631.","DOI":"10.1145\/3506804"},{"key":"e_1_3_3_79_2","doi-asserted-by":"crossref","unstructured":"Sandra Wachter Brent Mittelstadt and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. J. Law Technol. 31 (2017) 841.","DOI":"10.2139\/ssrn.3063289"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1145\/3269206.3271739"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3209978.3210010"},{"key":"e_1_3_3_82_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33015329"},{"key":"e_1_3_3_83_2","unstructured":"Sarah Wiegreffe and Ana Marasovi\u0107. 2021. Teach me to explain: A review of datasets for explainable nlp. arXiv: 2102.12060. Retrieved from https:\/\/arxiv.org\/abs\/2102.12060."},{"key":"e_1_3_3_84_2","doi-asserted-by":"crossref","unstructured":"Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. arXiv:1908.04626. Retrieved from https:\/\/arxiv.org\/abs\/1908.04626.","DOI":"10.18653\/v1\/D19-1002"},{"key":"e_1_3_3_85_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331203"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3412038"},{"key":"e_1_3_3_87_2","unstructured":"Zhichao Xu and Daniel Cohen. 2023. A lightweight constrained generation alternative for query-focused summarization. arXiv:2304.11721. 
Retrieved from https:\/\/arxiv.org\/abs\/2304.11721."},{"key":"e_1_3_3_88_2","unstructured":"Zhichao Xu Yi Han Tao Yang Anh Tran and Qingyao Ai. 2022. Learning to rank rationales for explainable recommendation. arXiv:2206.05368. Retrieved from https:\/\/arxiv.org\/abs\/2206.05368."},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3411993"},{"key":"e_1_3_3_90_2","unstructured":"Zhichao Xu Hemank Lamba Qingyao Ai Joel Tetreault and Alex Jaimes. 2023. Counterfactual editing for search result explanation. arXiv:2301.10389. Retrieved from https:\/\/arxiv.org\/abs\/2301.10389."},{"key":"e_1_3_3_91_2","unstructured":"Zhichao Xu Hansi Zeng and Qingyao Ai. 2021. Understanding the effectiveness of reviews in e-commerce top-n recommendation. arXiv:2106.09665. Retrieved from https:\/\/arxiv.org\/abs\/2106.09665."},{"key":"e_1_3_3_92_2","unstructured":"Tao Yang Zhichao Xu and Qingyao Ai. 2022. Effective exposure amortizing for fair top-k recommendation. arXiv:2204.03046. Retrieved from https:\/\/arxiv.org\/abs\/2204.03046."},{"key":"e_1_3_3_93_2","doi-asserted-by":"crossref","unstructured":"Li Yujian and Liu Bo. 2007. A normalized levenshtein distance metric. IEEE Trans. Pattern Anal. Mach. Intell. 29 6 (2007) 1091\u20131095.","DOI":"10.1109\/TPAMI.2007.1078"},{"key":"e_1_3_3_94_2","unstructured":"Hansi Zeng Zhichao Xu and Qingyao Ai. 2021. A zero attentive relevance matching network for review modeling in recommendation system. arxiv:2101.06387 [cs.IR]. Retrieved from https:\/\/arxiv.org\/abs\/2101.06387."},{"key":"e_1_3_3_95_2","unstructured":"Yongfeng Zhang and Xu Chen. 2018. Explainable recommendation: A survey and new perspectives. arXiv:1804.11192. 
Retrieved from https:\/\/arxiv.org\/abs\/1804.11192."},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.1145\/2600428.2609579"},{"key":"e_1_3_3_97_2","doi-asserted-by":"publisher","DOI":"10.1145\/3336191.3371790"},{"key":"e_1_3_3_98_2","doi-asserted-by":"crossref","unstructured":"Yaxin Zhu Yikun Xian Zuohui Fu Gerard de Melo and Yongfeng Zhang. 2021. Faithfully explainable recommendation via neural logic reasoning. arXiv:2104.07869. Retrieved from https:\/\/arxiv.org\/abs\/2104.07869.","DOI":"10.18653\/v1\/2021.naacl-main.245"}],"container-title":["ACM Transactions on Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3605357","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3605357","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:03:55Z","timestamp":1750291435000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3605357"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,21]]},"references-count":97,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,1,31]]}},"alternative-id":["10.1145\/3605357"],"URL":"https:\/\/doi.org\/10.1145\/3605357","relation":{},"ISSN":["1046-8188","1558-2868"],"issn-type":[{"value":"1046-8188","type":"print"},{"value":"1558-2868","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,21]]},"assertion":[{"value":"2022-07-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-06","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-08-21","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}