{"status":"ok","message-type":"work","message-version":"1.0.0","message":{
"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T20:23:32Z","timestamp":1776111812399,"version":"3.50.1"},
"publisher-location":"New York, NY, USA",
"reference-count":37,
"publisher":"ACM",
"license":[{"start":{"date-parts":[[2022,10,8]],"date-time":"2022-10-08T00:00:00Z","timestamp":1665187200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],
"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},
"short-container-title":[],
"published-print":{"date-parts":[[2022,10,8]]},
"DOI":"10.1145\/3546155.3546670",
"type":"proceedings-article",
"created":{"date-parts":[[2022,9,26]],"date-time":"2022-09-26T15:50:12Z","timestamp":1664207412000},
"page":"1-12",
"update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy",
"source":"Crossref",
"is-referenced-by-count":3,
"title":["Characterizing Data Scientists\u2019 Mental Models of Local Feature Importance"],
"prefix":"10.1145",
"author":[
{"given":"Dennis","family":"Collaris","sequence":"first","affiliation":[{"name":"Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands"}]},
{"given":"Hilde J.P.","family":"Weerts","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands"}]},
{"given":"Daphne","family":"Miedema","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands"}]},
{"given":"Jarke J.","family":"van Wijk","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands"}]},
{"given":"Mykola","family":"Pechenizkiy","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands"}]}],
"member":"320",
"published-online":{"date-parts":[[2022,10,8]]},
"reference":[
{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/btq134"},
{"key":"e_1_3_2_2_2_1","volume-title":"ICML Workshop on Human Interpretability in Machine Learning","author":"Alvarez-Melis David","year":"2018","unstructured":"David Alvarez-Melis and Tommi\u00a0S Jaakkola. 2018. On the robustness of interpretability methods. ICML Workshop on Human Interpretability in Machine Learning (2018), 66\u201371."},
{"key":"e_1_3_2_2_3_1","first-page":"1803","article-title":"How to explain individual classification decisions","author":"Baehrens David","year":"2010","unstructured":"David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert M\u00fcller. 2010. How to explain individual classification decisions. Journal of Machine Learning Research 11, Jun (2010), 1803\u20131831.","journal-title":"Journal of Machine Learning Research 11"},
{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1191\/1478088706qp063oa"},
{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8080832"},
{"key":"e_1_3_2_2_6_1","volume-title":"Thematic analysis of qualitative research data: is it as easy as it sounds? Currents in Pharmacy Teaching and Learning 10, 6","author":"Castleberry Ashley","year":"2018","unstructured":"Ashley Castleberry and Amanda Nolen. 2018. Thematic analysis of qualitative research data: is it as easy as it sounds? Currents in Pharmacy Teaching and Learning 10, 6 (2018), 807\u2013815."},
{"key":"e_1_3_2_2_7_1","volume-title":"True to the model or true to the data? ICML Workshop on Human Interpretability in Machine Learning","author":"Chen Hugh","year":"2020","unstructured":"Hugh Chen, Joseph\u00a0D Janizek, Scott Lundberg, and Su-In Lee. 2020. True to the model or true to the data? ICML Workshop on Human Interpretability in Machine Learning (2020), 123\u2013129."},
{"key":"e_1_3_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2021.3114836"},
{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/PacificVis48177.2020.7090"},
{"key":"e_1_3_2_2_10_1","volume-title":"ICML Workshop on Human Interpretability in Machine Learning","author":"Collaris Dennis","year":"2018","unstructured":"Dennis Collaris, Leo\u00a0M Vink, and Jarke\u00a0J van Wijk. 2018. Instance-level explanations for fraud detection: A case study. ICML Workshop on Human Interpretability in Machine Learning (2018), 28\u201333."},
{"key":"e_1_3_2_2_11_1","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)."},
{"key":"e_1_3_2_2_12_1","volume-title":"International Conference on Artificial Intelligence and Statistics. PMLR, 1287\u20131296","author":"Garreau Damien","year":"2020","unstructured":"Damien Garreau and Ulrike Luxburg. 2020. Explaining the explainer: A first theoretical analysis of LIME. In International Conference on Artificial Intelligence and Statistics. PMLR, 1287\u20131296."},
{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},
{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v40i2.2850"},
{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445943"},
{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300809"},
{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3392878"},
{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11222-021-10057-z"},
{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445941"},
{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},
{"key":"e_1_3_2_2_21_1","first-page":"1","article-title":"An efficient explanation of individual classifications using game theory","author":"Kononenko Igor","year":"2010","unstructured":"Igor Kononenko 2010. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research 11, Jan (2010), 1\u201318.","journal-title":"Journal of Machine Learning Research 11"},
{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/VLHCC.2013.6645235"},
{"key":"e_1_3_2_2_23_1","volume-title":"International Conference on Machine Learning. PMLR, 5491\u20135500","author":"Kumar Elizabeth","year":"2020","unstructured":"I.\u00a0Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. 2020. Problems with Shapley-value-based explanations as feature importance measures. In International Conference on Machine Learning. PMLR, 5491\u20135500."},
{"key":"e_1_3_2_2_24_1","volume-title":"The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3","author":"Lipton C","year":"2018","unstructured":"Zachary\u00a0C Lipton. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3 (2018), 31\u201357."},
{"key":"e_1_3_2_2_25_1","volume-title":"Proceedings of the 31st International Conference on Neural Information Processing Systems. 4768\u20134777","author":"Lundberg M","year":"2017","unstructured":"Scott\u00a0M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems. 4768\u20134777."},
{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41551-018-0304-0"},
{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.17763\/haer.62.3.8323320856251826"},
{"key":"e_1_3_2_2_28_1","volume-title":"Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267","author":"Miller Tim","year":"2019","unstructured":"Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267 (2019), 1\u201338."},
{"key":"e_1_3_2_2_29_1","volume-title":"Mental Models","author":"Norman A","unstructured":"Donald\u00a0A Norman. 1983. Some observations on mental models. In Mental Models. Psychology Press."},
{"key":"e_1_3_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},
{"key":"e_1_3_2_2_31_1","volume-title":"Human and Machine Learning","author":"Robnik-\u0160ikonja Marko","unstructured":"Marko Robnik-\u0160ikonja and Marko Bohanec. 2018. Perturbation-based explanations of prediction models. In Human and Machine Learning. Springer, 159\u2013175."},
{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},
{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.datak.2009.01.004"},
{"key":"e_1_3_2_2_34_1","volume-title":"KDD Workshop on Explainable AI (2019)","author":"Weerts JP","year":"2019","unstructured":"Hilde\u00a0JP Weerts, Werner van Ipenburg, and Mykola Pechenizkiy. 2019. A human-grounded evaluation of SHAP for alert processing. KDD Workshop on Explainable AI (2019)."},
{"key":"e_1_3_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3282486"},
{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10590-1_53"},
{"key":"e_1_3_2_2_37_1","volume-title":"ICML AI for Social Good Workshop (2019)","author":"Zhang Yujia","year":"2019","unstructured":"Yujia Zhang, Kuangyan Song, Yiming Sun, Sarah Tan, and Madeleine Udell. 2019. \u201cWhy should you trust my explanation?\u201d Understanding uncertainty in LIME explanations. ICML AI for Social Good Workshop (2019)."}],
"event":{"name":"NordiCHI '22: Nordic Human-Computer Interaction Conference","location":"Aarhus Denmark","acronym":"NordiCHI '22"},
"container-title":["Nordic Human-Computer Interaction Conference"],
"original-title":[],
"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3546155.3546670","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3546155.3546670","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:10:29Z","timestamp":1750183829000},
"score":1,
"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3546155.3546670"}},
"subtitle":[],
"short-title":[],
"issued":{"date-parts":[[2022,10,8]]},
"references-count":37,
"alternative-id":["10.1145\/3546155.3546670","10.1145\/3546155"],
"URL":"https:\/\/doi.org\/10.1145\/3546155.3546670",
"relation":{},
"subject":[],
"published":{"date-parts":[[2022,10,8]]},
"assertion":[{"value":"2022-10-08","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}