{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T12:00:06Z","timestamp":1771329606558,"version":"3.50.1"},"reference-count":101,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2023,12,8]],"date-time":"2023-12-08T00:00:00Z","timestamp":1701993600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2023,12,31]]},"abstract":"<jats:p>In Explainable Artificial Intelligence (XAI) research, various local model-agnostic methods have been proposed to explain individual predictions to users in order to increase the transparency of the underlying Artificial Intelligence (AI) systems. However, the user perspective has received less attention in XAI research, leading to a (1) lack of involvement of users in the design process of local model-agnostic explanations representations and (2) a limited understanding of how users visually attend them. Against this backdrop, we refined representations of local explanations from four well-established model-agnostic XAI methods in an iterative design process with users. Moreover, we evaluated the refined explanation representations in a laboratory experiment using eye-tracking technology as well as self-reports and interviews. Our results show that users do not necessarily prefer simple explanations and that their individual characteristics, such as gender and previous experience with AI systems, strongly influence their preferences. In addition, users find that some explanations are only useful in certain scenarios making the selection of an appropriate explanation highly dependent on context. With our work, we contribute to ongoing research to improve transparency in AI.<\/jats:p>","DOI":"10.1145\/3607145","type":"journal-article","created":{"date-parts":[[2023,7,13]],"date-time":"2023-07-13T12:05:39Z","timestamp":1689249939000},"page":"1-47","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology"],"prefix":"10.1145","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6299-7115","authenticated-orcid":false,"given":"Miguel Angel","family":"Meza Mart\u00ednez","sequence":"first","affiliation":[{"name":"Karlsruhe Institute of Technology (KIT), Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6901-9450","authenticated-orcid":false,"given":"Mario","family":"Nadj","sequence":"additional","affiliation":[{"name":"TU Dortmund University, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7860-7118","authenticated-orcid":false,"given":"Moritz","family":"Langner","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology (KIT), Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2468-1715","authenticated-orcid":false,"given":"Peyman","family":"Toreini","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology (KIT), Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6546-4816","authenticated-orcid":false,"given":"Alexander","family":"Maedche","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology (KIT), Germany"}]}],"member":"320","published-online":{"date-parts":[[2023,12,8]]},"reference":[{"key":"e_1_3_3_2_2","first-page":"1","volume-title":"Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems","author":"Abdul Ashraf","year":"2020","unstructured":"Ashraf Abdul, Christian Von Der Weth, Mohan Kankanhalli, and Brian Y. Lim. 2020. COGAM: Measuring and moderating cognitive load in machine learning model explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 1\u201314. DOI:10.1145\/3313831.3376615"},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_3_4_2","doi-asserted-by":"crossref","first-page":"625","DOI":"10.1145\/3338906.3338937","volume-title":"Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering","author":"Aggarwal Aniya","year":"2019","unstructured":"Aniya Aggarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha. 2019. Black box fairness testing of machine learning models. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ACM, New York, NY, 625\u2013635. DOI:10.1145\/3338906.3338937"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.3233\/AIC-1994-7104"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1111\/rssb.12377"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1080\/713827254"},{"key":"e_1_3_3_8_2","article-title":"One Explanation does not fit all: A toolkit and taxonomy of ai explainability techniques","author":"Arya Vijay","year":"2019","unstructured":"Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovi\u0107, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019. One Explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012 (2019). https:\/\/arxiv.org\/abs\/1909.03012","journal-title":"arXiv preprint arXiv:1909.03012"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-78292-4_6\/TABLES\/2"},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.3390\/APP9204244"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173951"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1010933404324"},{"key":"e_1_3_3_13_2","first-page":"454","volume-title":"Proceedings of the International Conference on Intelligent User Interfaces, Proceedings IUI","author":"Bu\u00e7inca Zana","year":"2020","unstructured":"Zana Bu\u00e7inca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman. 2020. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In Proceedings of the International Conference on Intelligent User Interfaces, Proceedings IUI. ACM, 454\u2013464. DOI:10.1145\/3377325.3377498"},{"key":"e_1_3_3_14_2","doi-asserted-by":"crossref","first-page":"160","DOI":"10.1109\/ICHI.2015.26","volume-title":"Proceedings of the 2015 International Conference on Healthcare Informatics","author":"Bussone Adrian","year":"2015","unstructured":"Adrian Bussone, Simone Stumpf, and Dympna O\u2019Sullivan. 2015. The role of explanations on trust and reliance in clinical decision support systems. In Proceedings of the 2015 International Conference on Healthcare Informatics. IEEE, 160\u2013169. DOI:10.1109\/ICHI.2015.26"},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8080832"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409697"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.953"},{"key":"e_1_3_3_18_2","first-page":"1","volume-title":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","author":"Cheng Hao-Fei","year":"2019","unstructured":"Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O\u2019Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1\u201312. DOI:10.1145\/3290605.3300789"},{"key":"e_1_3_3_19_2","doi-asserted-by":"crossref","first-page":"291","DOI":"10.1145\/3301275.3302304","volume-title":"Proceedings of the 24th International Conference on Intelligent User Interfaces","author":"Coba Ludovik","year":"2019","unstructured":"Ludovik Coba, Markus Zanker, Laurens Rook, and Panagiotis Symeonidis. 2019. Decision-making strategies differ in the presence of collaborative explanations: Two conjoint studies. In Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM, New York, NY, 291\u2013302. DOI:10.1145\/3301275"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.ARTINT.2021.103503"},{"key":"e_1_3_3_21_2","doi-asserted-by":"crossref","first-page":"598","DOI":"10.1109\/SP.2016.42","volume-title":"Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP)","author":"Datta Anupam","year":"2016","unstructured":"Anupam Datta, Shayak Sen, and Yair Zick. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 598\u2013617. DOI:10.1109\/SP.2016.42"},{"key":"e_1_3_3_22_2","volume-title":"Proceedings of the EuroVis 2021","author":"Deng J.","year":"2021","unstructured":"J. Deng and E. T. Brown. 2021. RISSAD: Rule-based interactive semi-supervised anomaly detection. In Proceedings of the EuroVis 2021. he Eurographics Association. DOI:10.2312\/evs.20211050"},{"key":"e_1_3_3_23_2","unstructured":"Nicholas Diakopoulos Sorelle Friedler Marcelo Arenas Solon Barocas Michael Hay Bill Howe H. V. Jagadish Kris Unsworth Arnaud Sahuguet Suresh Venkatasubramanian Christo Wilson Cong Yu and Bendert Zevenbergen. 2017. Principles for Accountable Algorithms and a Social Impact Statement for Algorithms. Retrieved from https:\/\/www.fatml.org\/resources\/principles-for-accountable-algorithms"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3301275.3302310"},{"key":"e_1_3_3_25_2","article-title":"Towards a rigorous science of interpretable machine learning","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv:1702.08608 (2017). https:\/\/arxiv.org\/abs\/1702.08608","journal-title":"arXiv:1702.08608"},{"key":"e_1_3_3_26_2","first-page":"210","volume-title":"Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO","author":"Do\u0161ilovi\u0107 Filip Karlo","year":"2018","unstructured":"Filip Karlo Do\u0161ilovi\u0107, Mario Br\u010di\u0107, and Nikica Hlupi\u0107. 2018. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO. 210\u2013215."},{"key":"e_1_3_3_27_2","volume-title":"A Review of Explanation and Explanation in Case-Based Reasoning","author":"Doyle D\u00f3nal","year":"2003","unstructured":"D\u00f3nal Doyle, Alexey Tsymbal, and P\u00e1draig Cunningham. 2003. A Review of Explanation and Explanation in Case-Based Reasoning. Technical Report. Trinity College Dublin, Department of Computer Science, Dublin."},{"key":"e_1_3_3_28_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository\u2014German Credit Data. Retrieved from https:\/\/archive.ics.uci.edu\/ml\/datasets\/statlog+(german+credit+data)"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03195475"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-57883-5\/COVER"},{"key":"e_1_3_3_31_2","first-page":"35","volume-title":"Proceedings of the International Workshop on Soft Computing Models in Industrial and Environmental Applications","volume":"950","author":"Bekri Nadia El","year":"2019","unstructured":"Nadia El Bekri, Jasmin Kling, and Marco F. Huber. 2019. A study on trust in black box models and post-hoc explanations. In Proceedings of the International Workshop on Soft Computing Models in Industrial and Environmental Applications, Vol. 950. Springer, 35\u201346. DOI:10.1007\/978-3-030-20055-8"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.FUTURE.2022.03.009"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1214\/aos\/1013203451"},{"key":"e_1_3_3_34_2","first-page":"1229","article-title":"Asymmetric shapley values: Incorporating causal knowledge into model-agnostic explainability","volume":"33","author":"Frye Christopher","year":"2020","unstructured":"Christopher Frye, Chris F@faculty Ai, Colin Rowat, Ilya Feige, and Ilya@faculty Ai Faculty. 2020. Asymmetric shapley values: Incorporating causal knowledge into model-agnostic explainability. Advances in Neural Information Processing Systems 33 (2020), 1229\u20131239.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1080\/10618600.2014.907095"},{"key":"e_1_3_3_36_2","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1109\/VIS49827.2021.9623271","volume-title":"Proceedings of the 2021 IEEE Visualization Conference (VIS)","author":"Gomez Oscar","year":"2021","unstructured":"Oscar Gomez, Steffen Holter, Jun Yuan, and Enrico Bertini. 2021. AdViCE: Aggregated visual counterfactual explanations for machine learning model validation. In Proceedings of the 2021 IEEE Visualization Conference (VIS). IEEE, 31\u201335. DOI:10.1109\/VIS49827.2021.9623271"},{"issue":"4","key":"e_1_3_3_37_2","doi-asserted-by":"crossref","first-page":"857","DOI":"10.2307\/2528823","article-title":"A general coefficient of similarity and some of its properties","volume":"27","author":"Gower John C.","year":"1971","unstructured":"John C. Gower. 1971. A general coefficient of similarity and some of its properties. Biometrics 27, 4 (1971), 857\u2013871. DOI:https:\/\/doi.org\/2528823","journal-title":"Biometrics"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2019.2957223"},{"key":"e_1_3_3_39_2","article-title":"Local rule-based explanations of black box decision systems","author":"Guidotti Riccardo","year":"2018","unstructured":"Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. 2018. Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820 (2018). https:\/\/arxiv.org\/abs\/1805.10820","journal-title":"arXiv preprint arXiv:1805.10820"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11077-020-09414-y"},{"key":"e_1_3_3_42_2","doi-asserted-by":"crossref","first-page":"5540","DOI":"10.18653\/v1\/2020.acl-main.491","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics","author":"Hase Peter","year":"2020","unstructured":"Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL, 5540\u20135552."},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.TICS.2005.02.009"},{"key":"e_1_3_3_44_2","first-page":"4778","article-title":"Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models","volume":"33","author":"Heskes Tom","year":"2020","unstructured":"Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen. 2020. Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models. Advances in Neural Information Processing Systems 33 (2020), 4778\u20134789.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_45_2","volume-title":"Proceedings of the Encyclopedia of Philosophy","author":"Hitchcock Christopher","year":"2018","unstructured":"Christopher Hitchcock. 2018. Causal Models. In Proceedings of the Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University."},{"key":"e_1_3_3_46_2","article-title":"Metrics for explainable AI: Challenges and prospects","author":"Hoffman Robert R.","year":"2018","unstructured":"Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608 (2018). https:\/\/arxiv.org\/abs\/1812.04608","journal-title":"arXiv preprint arXiv:1812.04608"},{"issue":"2","key":"e_1_3_3_47_2","first-page":"65","article-title":"A simple sequentially rejective multiple test procedure","volume":"6","author":"Holm Sture","year":"1979","unstructured":"Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 2 (1979), 65\u201370. Retrieved from https:\/\/www.jstor.org\/stable\/pdf\/4615733.pdf","journal-title":"Scandinavian Journal of Statistics"},{"key":"e_1_3_3_48_2","volume-title":"Eye Tracking: A Comprehensive Guide to Methods and Measures","author":"Holmqvist Kenneth","year":"2011","unstructured":"Kenneth Holmqvist, Marcus Nystr\u00f6m, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van de Weijer. 2011. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press, Oxford."},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1057\/ejis.2013.35"},{"key":"e_1_3_3_50_2","doi-asserted-by":"crossref","first-page":"805","DOI":"10.1145\/3442188.3445941","volume-title":"Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency","author":"Jesus S\u00e9rgio","year":"2021","unstructured":"S\u00e9rgio Jesus, Catarina Bel\u00e9m, Vladimir Balayan, Jo\u00e3o Bento, Pedro Saleiro, Pedro Bizarro, and Jo\u00e3o Gama. 2021. How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, 805\u2013815. DOI:10.1145\/3442188.3445941"},{"key":"e_1_3_3_51_2","first-page":"159","volume-title":"Proceedings of the International Conference on Intelligent User Interfaces IUI","volume":"1275","author":"Johnson Hilary","year":"1993","unstructured":"Hilary Johnson and Peter Johnson. 1993. Explanation facilities and interactive systems. In Proceedings of the International Conference on Intelligent User Interfaces IUI, Vol. Part F1275. ACM, 159\u2013166. DOI:10.1145\/169891.169951"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1186\/S40537-019-0192-5\/TABLES\/18"},{"key":"e_1_3_3_53_2","doi-asserted-by":"publisher","DOI":"10.3389\/FNINS.2022.883385"},{"key":"e_1_3_3_54_2","volume-title":"Finding Groups in Data: An Introduction to Cluster Analysis","author":"Kaufman Leonard","year":"2009","unstructured":"Leonard Kaufman and Peter J. Rousseeuw. 2009. Finding Groups in Data: An Introduction to Cluster Analysis. Vol. 344. John Wiley & Sons."},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},{"key":"e_1_3_3_56_2","first-page":"2280","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Kim Been","year":"2016","unstructured":"Been Kim, Rajiv Khanna, and Oluwasanmi Koyejo. 2016. Examples are not enough, learn to criticize! Criticism for interpretability. In Proceedings of the Advances in Neural Information Processing Systems. Curran Associates, Inc., 2280\u20132288."},{"key":"e_1_3_3_57_2","first-page":"1885","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning. PMLR, 1885\u20131894."},{"key":"e_1_3_3_58_2","doi-asserted-by":"publisher","DOI":"10.1177\/1473871615609787"},{"key":"e_1_3_3_59_2","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1145\/3375627.3375833","volume-title":"Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society","author":"Lakkaraju Himabindu","year":"2020","unstructured":"Himabindu Lakkaraju and Osbert Bastani. 2020. \u201cHow do I fool you?\u201d: Manipulating user trust via misleading black box explanations. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society. ACM, New York, New York, 79\u201385. DOI:10.1145\/3375627.3375833"},{"key":"e_1_3_3_60_2","first-page":"32","volume-title":"Proceedings of the 2013 IEEE Symposium on Computational Intelligence and Data Mining, CIDM 2013","author":"Landecker Will","year":"2013","unstructured":"Will Landecker, Michael D. Thomure, Luis M.A. Bettencourt, Melanie Mitchell, Garrett T. Kenyon, and Steven P. Brumby. 2013. Interpreting individual classifications of hierarchical networks. In Proceedings of the 2013 IEEE Symposium on Computational Intelligence and Data Mining, CIDM 2013. IEEE, 32\u201338. DOI:10.1109\/CIDM.2013.6597214"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1016\/S1364-6613(99)01418-7"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1002\/asi.20794"},{"key":"e_1_3_3_63_2","first-page":"4765","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","author":"Lundberg Scott M.","year":"2017","unstructured":"Scott M. Lundberg, Paul G. Allen, and Su-In Lee. 2017. A Unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems. Curran Associates, Inc., 4765\u20134774."},{"key":"e_1_3_3_64_2","first-page":"6","volume-title":"Proceedings of the 11th Australasian Conference on Information Systems","author":"Madsen Maria","year":"2000","unstructured":"Maria Madsen and Shirley Gregor. 2000. Measuring human-computer trust. In Proceedings of the 11th Australasian Conference on Information Systems, Citeseer (Ed.), 6\u20138."},{"key":"e_1_3_3_65_2","volume-title":"Proceedings of the 27th European Conference on Information Systems (ECIS)","author":"Mart\u00ednez Miguel Angel Meza","year":"2019","unstructured":"Miguel Angel Meza Mart\u00ednez, Mario Nadj, and Alexander Maedche. 2019. Towards an integrative theoretical framework of interactive machine learning systems. In Proceedings of the 27th European Conference on Information Systems (ECIS). Stockholm & Uppsala, Sweden."},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2018.2864812"},{"key":"e_1_3_3_68_2","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1145\/3287560.3287574","volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency","author":"Mittelstadt Brent","year":"2019","unstructured":"Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining explanations in AI. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 279\u2013288."},{"key":"e_1_3_3_69_2","volume-title":"Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.","author":"Molnar Christoph","year":"2020","unstructured":"Christoph Molnar. 2020. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Retrieved from https:\/\/christophm.github.io\/interpretable-ml-book\/"},{"key":"e_1_3_3_70_2","doi-asserted-by":"crossref","first-page":"607","DOI":"10.1145\/3351095.3372850","volume-title":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","author":"Mothilal Ramaravind Kommiya","year":"2020","unstructured":"Ramaravind Kommiya Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, 607\u2013617. DOI:10.1145\/3351095.3372850"},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.PATCOG.2022.108604"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.2139\/SSRN.4098528"},{"key":"e_1_3_3_73_2","article-title":"InterpretML: A unified framework for machine learning interpretability","author":"Nori Harsha","year":"2019","unstructured":"Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. 2019. InterpretML: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223 (2019). https:\/\/arxiv.org\/abs\/1806.07421","journal-title":"arXiv preprint arXiv:1909.09223"},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1016\/0030-5073(76)90022-2"},{"key":"e_1_3_3_75_2","article-title":"RISE: Randomized input sampling for explanation of black-box models","author":"Petsiuk Vitali","year":"2018","unstructured":"Vitali Petsiuk, Abir Das, and Kate Saenko. 2018. RISE: Randomized input sampling for explanation of black-box models. arXiv preprint arXiv:1806.07421 (6 2018). https:\/\/arxiv.org\/abs\/1806.07421","journal-title":"arXiv preprint arXiv:1806.07421"},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462799"},{"key":"e_1_3_3_77_2","doi-asserted-by":"crossref","first-page":"211","DOI":"10.4018\/978-1-59140-562-7.ch034","volume-title":"Proceedings of the Encyclopedia of Human\u2013Computer Interaction","author":"Poole Alex","year":"2006","unstructured":"Alex Poole and Linden J. Ball. 2006. Eye tracking in HCI and usability research. In Proceedings of the Encyclopedia of Human\u2013Computer Interaction. IGI Global, 211\u2013219. Retrieved from DOI:10.4018\/978-1-59140-562-7.CH034"},{"key":"e_1_3_3_78_2","first-page":"1","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","author":"Poursabzi-Sangdeh Forough","year":"2021","unstructured":"Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Wortman Vaughan, and Hanna Wallach. 2021. Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1\u201352. DOI:10.1145\/3411764.3445315"},{"key":"e_1_3_3_79_2","first-page":"254","volume-title":"Proceedings of the BIOSIGNALS","author":"Rasmussen Peter M.","year":"2012","unstructured":"Peter M. Rasmussen, Tanya Schmah, Kristoffer H. Madsen, Torben E. Lund, Stephen C. Strother, and Lars K. Hansen. 2012. Visualization of nonlinear classification models in neuroimaging\u2019signed sensitivity maps. In Proceedings of the BIOSIGNALS. Citeseer, 254\u2013263."},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1037\/0033-2909.124.3.372"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.48550\/arxiv.1606.05386"},{"key":"e_1_3_3_82_2","doi-asserted-by":"crossref","first-page":"1135","DOI":"10.1145\/2939672.2939778","volume-title":"Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining","author":"Ribeiro Marco Tulio","year":"2016","unstructured":"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \u201cWhy should I trust you?\u201d Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, 1135\u20131144. DOI:10.1145\/2939672.2939778"},{"key":"e_1_3_3_83_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"e_1_3_3_84_2","first-page":"38","volume-title":"Proceedings of the IUI Workshops","volume":"2327","author":"Ribera Mireia","year":"2019","unstructured":"Mireia Ribera and Agata Lapedriza. 2019. Can we do better explanations? A proposal of user-centered explainable AI. In Proceedings of the IUI Workshops, Vol. 2327. 38."},{"key":"e_1_3_3_85_2","doi-asserted-by":"crossref","first-page":"159","DOI":"10.1007\/978-3-319-90403-0_9","volume-title":"Proceedings of the Human and Machine Learning","author":"Robnik-\u0160ikonja Marko","year":"2018","unstructured":"Marko Robnik-\u0160ikonja and Marko Bohanec. 2018. Perturbation-based explanations of prediction models. In Proceedings of the Human and Machine Learning, J. Zhou and F. Chen (Eds.). Springer, Cham, 159\u2013175. DOI:10.1007\/978-3-319-90403-0_9"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.48550\/arxiv.1901.00770"},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_3_88_2","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372870"},{"key":"e_1_3_3_89_2","first-page":"1","volume-title":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","author":"Stowell Elizabeth","year":"2018","unstructured":"Elizabeth Stowell, Mercedes C. Lyson, Herman Saksono, Rene\u00e9 C. Wurth, Holly Jimison, Misha Pavel, and Andrea G. Parker. 2018. Designing and evaluating mhealth interventions for vulnerable populations: A systematic review. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1\u201317. DOI:10.1145\/3173574.3173589"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.16910\/jemr.9.4.4"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.5555\/3305890.3306024"},{"key":"e_1_3_3_92_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4612-5108-8_15"},{"key":"e_1_3_3_93_2","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1109\/MLUI54255.2021.00008","volume-title":"Proceedings of the 2021 IEEE Workshop on Machine Learning from User Interactions (MLUI)","author":"Thomas Lebna V.","year":"2021","unstructured":"Lebna V. Thomas, Jiahao Deng, and Eli T. Brown. 2021. FacetRules: Discovering and describing related groups. In Proceedings of the 2021 IEEE Workshop on Machine Learning from User Interactions (MLUI). IEEE, 21\u201326. DOI:10.1109\/MLUI54255.2021.00008"},{"key":"e_1_3_3_94_2","article-title":"A survey on explainable artificial intelligence (XAI): Towards medical XAI","author":"Tjoa Erico","year":"2019","unstructured":"Erico Tjoa and Cuntai Guan. 2019. A survey on explainable artificial intelligence (XAI): Towards medical XAI. IEEE Transactions on Neural Networks and Learning Systems 32, 11 (2019), 4793--4813.","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"key":"e_1_3_3_95_2","unstructured":"Matt Turek. 2018. Explainable artificial intelligence (XAI). Retrieved from https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence"},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3063289"},{"key":"e_1_3_3_97_2","first-page":"1","volume-title":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems","author":"Wang Danding","year":"2019","unstructured":"Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 1\u201315. DOI:10.1145\/3290605.3300831"},{"key":"e_1_3_3_98_2","first-page":"2048","volume-title":"Proceedings of the 32nd International Conference on Machine Learning","author":"Xu Kelvin","year":"2015","unstructured":"Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning. PMLR, 2048\u20132057."},{"key":"e_1_3_3_99_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-4585-18-7_2\/COVER"},{"key":"e_1_3_3_100_2","doi-asserted-by":"publisher","DOI":"10.1109\/VIS49827.2021.9623303"},{"key":"e_1_3_3_101_2","article-title":"\u201cWhy should you trust my explanation?\u201d Understanding uncertainty in LIME explanations","author":"Zhang Yujia","year":"2019","unstructured":"Yujia Zhang, Kuangyan Song, Yiming Sun, Sarah Tan, and Madeleine Udell. 2019. \u201cWhy should you trust my explanation?\u201d Understanding uncertainty in LIME explanations. arXiv preprint arXiv:1904.12991 (2019). https:\/\/arxiv.org\/abs\/1904.12991","journal-title":"arXiv preprint arXiv:1904.12991"},{"key":"e_1_3_3_102_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01237-3_8"}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3607145","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3607145","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:33Z","timestamp":1750178253000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3607145"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,8]]},"references-count":101,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,12,31]]}},"alternative-id":["10.1145\/3607145"],"URL":"https:\/\/doi.org\/10.1145\/3607145","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"value":"2160-6455","type":"print"},{"value":"2160-6463","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,12,8]]},"assertion":[{"value":"2022-02-21","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-22","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-12-08","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}