{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,17]],"date-time":"2026-04-17T04:56:12Z","timestamp":1776401772851,"version":"3.51.2"},"reference-count":203,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T00:00:00Z","timestamp":1677110400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Artif. Intell."],"abstract":"<jats:p>Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities in order to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication as a mechanism to elicit user control, because once users understand, they can then provide feedback. The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state-of-the-art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, highlighting similarities and differences between them. 
We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blooming research topic.<\/jats:p>","DOI":"10.3389\/frai.2023.1066049","type":"journal-article","created":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T09:14:48Z","timestamp":1677143688000},"update-policy":"https:\/\/doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":41,"title":["Leveraging explanations in interactive machine learning: An overview"],"prefix":"10.3389","volume":"6","author":[{"given":"Stefano","family":"Teso","sequence":"first","affiliation":[]},{"given":"\u00d6znur","family":"Alkan","sequence":"additional","affiliation":[]},{"given":"Wolfgang","family":"Stammer","sequence":"additional","affiliation":[]},{"given":"Elizabeth","family":"Daly","sequence":"additional","affiliation":[]}],"member":"1965","published-online":{"date-parts":[[2023,2,23]]},"reference":[{"key":"B1","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"IEEE Access"},{"key":"B2","first-page":"9525","article-title":"\u201cSanity checks for saliency maps,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Adebayo","year":"2018"},{"key":"B3","first-page":"700","article-title":"\u201cDebugging tests for model explanations,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Adebayo","year":"2020"},{"key":"B4","doi-asserted-by":"publisher","first-page":"695","DOI":"10.1007\/s10994-020-05941-0","article-title":"Beneficial and harmful explanatory machine learning","volume":"110","author":"Ai","year":"2021","journal-title":"Mach. 
Learn"},{"key":"B5","article-title":"\u201cDemystifying black-box models with symbolic metamodels,\u201d","volume-title":"International Conference on Neural Information Processing Systems, Vol. 32","author":"Alaa","year":"2019"},{"key":"B6","first-page":"603","article-title":"\u201cWhere can my career take me? harnessing dialogue for interactive career goal recommendations,\u201d","author":"Alkan","year":"2019","journal-title":"International Conference on Intelligent User Interfaces"},{"key":"B7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3449237","article-title":"IRF: A Framework for Enabling Users to Interact with Recommenders through Dialogue","volume":"5","author":"Alkan","year":"2021","journal-title":"ACM Human Comput. Interact"},{"key":"B8","first-page":"276","article-title":"FROTE: feedback rule-driven oversampling for editing models","volume":"4","author":"Alkan","year":"2022","journal-title":"Mach. Learn. Syst"},{"key":"B9","first-page":"7786","article-title":"\u201cTowards robust interpretability with self-explaining neural networks,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Alvarez-Melis","year":"2018"},{"key":"B10","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1609\/aimag.v35i4.2513","article-title":"Power to the people: the role of humans in interactive machine learning","volume":"35","author":"Amershi","year":"2014","journal-title":"AI Mag"},{"key":"B11","first-page":"314","article-title":"\u201cFairwashing explanations with off-manifold detergent,\u201d","author":"Anders","year":"2020","journal-title":"International Conference on Machine Learning"},{"key":"B12","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3097983.3098047","article-title":"Learning certifiably optimal rule lists for categorical data","volume":"18","author":"Angelino","year":"2017","journal-title":"J. Mach. Learn. 
Res"},{"key":"B13","doi-asserted-by":"publisher","first-page":"556","DOI":"10.3390\/make4020026","article-title":"Fairness and explanation in AI-informed decision making","volume":"4","author":"Angerschmid","year":"2022","journal-title":"Mach. Learn. Knowl. Extract"},{"key":"B14","first-page":"515","article-title":"\u201cInteracting with explanations through critiquing,\u201d","author":"Antognini","year":"2021","journal-title":"International Joint Conference on Artificial Intelligence"},{"key":"B15","doi-asserted-by":"crossref","first-page":"01","DOI":"10.1109\/SSCI50451.2021.9660058","article-title":"\u201cEvaluating robustness of counterfactual explanations,\u201d","volume-title":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","author":"Artelt","year":"2021"},{"key":"B16","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1007\/978-3-642-15880-3_9","article-title":"\u201cA unified approach to active dual supervision for labeling features and examples,\u201d","volume-title":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Attenberg","year":"2010"},{"key":"B17","doi-asserted-by":"publisher","first-page":"e0130140","DOI":"10.1371\/journal.pone.0130140","article-title":"On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation","volume":"10","author":"Bach","year":"2015","journal-title":"PLoS ONE"},{"key":"B18","first-page":"1803","article-title":"How to explain individual classification decisions","volume":"11","author":"Baehrens","year":"2010","journal-title":"J. Mach. Learn. 
Res"},{"key":"B19","article-title":"\u201cDebiasing concept-based explanations with causal analysis,\u201d","author":"Bahadori","year":"2021","journal-title":"International Conference on Learning Representations"},{"key":"B20","article-title":"\u201cNeural machine translation by jointly learning to align and translate,\u201d","author":"Bahdanau","year":"2015","journal-title":"International Conference on Learning Representations"},{"key":"B21","doi-asserted-by":"publisher","first-page":"1061","DOI":"10.1038\/s42256-021-00423-x","article-title":"A case-based interpretable deep learning model for classification of mass lesions in digital mammography","volume":"3","author":"Barnett","year":"2021","journal-title":"Nat. Mach. Intell"},{"key":"B22","doi-asserted-by":"crossref","first-page":"149","DOI":"10.18653\/v1\/2020.blackboxnlp-1.14","article-title":"\u201cThe elephant in the interpretability room: why use attention as explanation when we have saliency methods?\u201d","volume-title":"BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP","author":"Bastings","year":"2020"},{"key":"B23","article-title":"\u201cInfluence functions in deep learning are fragile,\u201d","author":"Basu","year":"2021","journal-title":"International Conference on Learning Representations"},{"key":"B24","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2105.10172","article-title":"Explainable machine learning with prior knowledge: an overview","author":"Beckh","year":"2021","journal-title":"arXiv preprint"},{"key":"B25","doi-asserted-by":"publisher","first-page":"688969","DOI":"10.3389\/fdata.2021.688969","article-title":"Principles and practice of explainable machine learning","volume":"4","author":"Belle","year":"2021","journal-title":"Front. 
Big Data"},{"key":"B26","doi-asserted-by":"publisher","first-page":"2403","DOI":"10.1214\/11-AOAS495","article-title":"Prototype selection for interpretable classification","volume":"5","author":"Bien","year":"2011","journal-title":"Ann. Appl. Stat"},{"key":"B27","first-page":"644","article-title":"\u201cSimultaneous active learning of classifiers and attributes via relative feedback,\u201d","volume-title":"Conference on Computer Vision and Pattern Recognition","author":"Biswas","year":"2013"},{"key":"B28","article-title":"\u201cToward a unified framework for debugging gray-box models,\u201d","author":"Bontempelli","year":"2021","journal-title":"The AAAI-22 Workshop on Interactive Machine Learning"},{"key":"B29","article-title":"\u201cLearning in the wild with incremental skeptical gaussian processes,\u201d","author":"Bontempelli","year":"2020","journal-title":"International Joint Conference on Artificial Intelligence"},{"key":"B30","article-title":"\u201cConcept-level debugging of part-prototype networks,\u201d","author":"Bontempelli","year":"2022","journal-title":"International Conference on Learning Representations"},{"key":"B31","first-page":"6276","article-title":"\u201cCounterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning,\u201d","author":"Byrne","year":"2019","journal-title":"International Joint Conference on Artificial Intelligence"},{"key":"B32","doi-asserted-by":"publisher","first-page":"198","DOI":"10.1016\/j.artint.2014.08.005","article-title":"Eliciting good teaching from humans for machine learners","volume":"217","author":"Cakmak","year":"2014","journal-title":"Artif. Intell"},{"key":"B33","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2009.11023","article-title":"The struggles of feature-based explanations: shapley values vs. 
minimal sufficient subsets","author":"Camburu","year":"2020","journal-title":"arXiv preprint"},{"key":"B34","first-page":"9560","article-title":"\u201ce-SNLI: natural language inference with natural language explanations,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Camburu","year":"2018"},{"key":"B35","doi-asserted-by":"publisher","first-page":"832","DOI":"10.3390\/electronics8080832","article-title":"Machine learning interpretability: a survey on methods and metrics","volume":"8","author":"Carvalho","year":"2019","journal-title":"Electronics"},{"key":"B36","first-page":"258","article-title":"\u201cPlan explanations as model reconciliation-an empirical study,\u201d","author":"Chakraborti","year":"2019","journal-title":"International Conference on Human-Robot Interaction"},{"key":"B37","first-page":"981","article-title":"\u201cNeural network attributions: a causal perspective,\u201d","author":"Chattopadhyay","year":"2019","journal-title":"International Conference on Machine Learning"},{"key":"B38","unstructured":"Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., Su, J. K. (2019). This looks like that: deep learning for interpretable image recognition. Adv. Neural Inf. Process. Syst. 32, 112."},{"key":"B39","doi-asserted-by":"publisher","first-page":"125","DOI":"10.1007\/s11257-011-9108-6","article-title":"Critiquing-based recommenders: survey and emerging trends","volume":"22","author":"Chen","year":"2012","journal-title":"User Model User-adapt Interact"},{"key":"B40","doi-asserted-by":"publisher","first-page":"772","DOI":"10.1038\/s42256-020-00265-z","article-title":"Concept whitening for interpretable image recognition","volume":"2","author":"Chen","year":"2020","journal-title":"Nat. Mach. 
Intell"},{"key":"B41","first-page":"2234","article-title":"\u201cHuman-driven fol explanations of deep learning,\u201d","author":"Ciravegna","year":"2020","journal-title":"International Joint Conference on Artificial Intelligence"},{"key":"B42","first-page":"24","article-title":"\u201cExtracting tree-structured representations of trained networks,\u201d","author":"Craven","year":"1995","journal-title":"International Conference on Neural Information Processing Systems, Vol. 8"},{"key":"B43","doi-asserted-by":"publisher","first-page":"5896","DOI":"10.1609\/aaai.v35i7.16737","article-title":"User driven model adjustment via boolean rule explanations","volume":"35","author":"Daly","year":"2021","journal-title":"AAAI Conf. Artif. Intell"},{"key":"B44","first-page":"4655","article-title":"\u201cBoolean decision rules via column generation,\u201d","volume-title":"International Conference on Neural Information Processing Systems, Vol. 31","author":"Dash","year":"2018"},{"key":"B45","article-title":"\u201cNeural-symbolic computing: an effective methodology for principled integration of machine learning and reasoning,\u201d","author":"d'Avila Garcez","year":"2019","journal-title":"FLAP, Vol. 6"},{"key":"B46","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2205.13743","article-title":"Generating personalized counterfactual interventions for algorithmic recourse by eliciting user preferences","author":"De Toni","year":"2022","journal-title":"arXiv preprint"},{"key":"B47","doi-asserted-by":"publisher","first-page":"161","DOI":"10.1023\/A:1012454411458","article-title":"Training invariant support vector machines","volume":"46","author":"DeCoste","year":"2002","journal-title":"Mach. Learn"},{"key":"B48","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s42256-021-00338-7","article-title":"AI for radiographic COVID-19 detection selects shortcuts over signal","volume":"2021","author":"DeGrave","year":"2021","journal-title":"Nat. Mach. 
Intell"},{"key":"B49","doi-asserted-by":"publisher","first-page":"145","DOI":"10.1007\/BF00114116","article-title":"Explanation-based learning: an alternative view","volume":"1","author":"DeJong","year":"1986","journal-title":"Mach. Learn"},{"key":"B50","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1007\/s41060-018-0144-8","article-title":"Interpreting tree ensembles with intrees","volume":"7","author":"Deng","year":"2019","journal-title":"Int. J. Data Sci. Anal"},{"key":"B51","article-title":"Explanations can be manipulated and geometry is to blame","author":"Dombrowski","year":"2019"},{"key":"B52","first-page":"595","article-title":"\u201cLearning from labeled features using generalized expectation criteria,\u201d","volume-title":"Annual International ACM SIGIR Conference on Research and Development in Information Retrieval","author":"Druck","year":"2008"},{"key":"B53","first-page":"81","article-title":"\u201cActive learning by labeling features,\u201d","author":"Druck","year":"2009","journal-title":"Conference on Empirical Methods in Natural Language Processing"},{"key":"B54","first-page":"39","article-title":"\u201cInteractive machine learning,\u201d","author":"Fails","year":"2003","journal-title":"International Conference on Intelligent User Interfaces"},{"key":"B55","first-page":"1457","article-title":"\u201cHow explainability contributes to trust in AI,\u201d","volume-title":"ACM Conference on Fairness, Accountability, and Transparency","author":"Ferrario","year":""},{"key":"B56","doi-asserted-by":"publisher","first-page":"82736","DOI":"10.1109\/ACCESS.2022.3196917","article-title":"The robustness of counterfactual explanations over time","volume":"10","author":"Ferrario","year":"","journal-title":"IEEE Access"},{"key":"B57","first-page":"80","article-title":"\u201cExplanation as a process: user-centric construction of multi-level and multi-modal explanations,\u201d","volume-title":"K\u00fcnstliche 
Intelligenz","author":"Finzel","year":"2021"},{"key":"B58","doi-asserted-by":"publisher","first-page":"845","DOI":"10.1109\/TNNLS.2013.2292894","article-title":"Classification in the presence of label noise: a survey","volume":"25","author":"Fr\u00e9nay","year":"2014","journal-title":"Trans. Neural Netw. Learn. Syst"},{"key":"B59","article-title":"A typology to explore and guide explanatory interactive machine learning","author":"Friedrich","year":"2022","journal-title":"arXiv preprint"},{"key":"B60","volume-title":"Neural-Symbolic Learning Systems: Foundations and Applications","author":"Garcez","year":"2012"},{"key":"B61","first-page":"1287","article-title":"\u201cExplaining the explainer: a first theoretical analysis of LIME,\u201d","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Garreau","year":"2020"},{"key":"B62","first-page":"9574","article-title":"\u201cCausal abstractions of neural networks,\u201d","author":"Geiger","year":"2021","journal-title":"International Conference on Neural Information Processing Systems"},{"key":"B63","doi-asserted-by":"publisher","first-page":"665","DOI":"10.1038\/s42256-020-00257-z","article-title":"Shortcut learning in deep neural networks","volume":"2","author":"Geirhos","year":"2020","journal-title":"Nat. Mach. Intell"},{"key":"B64","first-page":"4016","article-title":"\u201cSaliency learning: teaching the model where to pay attention,\u201d","volume-title":"Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","author":"Ghaeini","year":"2019"},{"key":"B65","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3432934","article-title":"Explainable active learning (XAL) toward ai explanations as interfaces for machine teachers","volume":"4","author":"Ghai","year":"2021","journal-title":"ACM Human Comput. 
Interact"},{"key":"B66","first-page":"80","article-title":"\u201cExplaining explanations: an overview of interpretability of machine learning,\u201d","author":"Gilpin","year":"2018","journal-title":"International Conference on Data Science and Advanced Analytics"},{"key":"B67","article-title":"\u201cWidening the pipeline in human-guided reinforcement learning with explanation and context-aware data augmentation,\u201d","author":"Guan","year":"2021","journal-title":"International Conference on Neural Information Processing Systems"},{"key":"B68","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1805.10820","article-title":"Local rule-based explanations of black box decision systems","author":"Guidotti","year":"","journal-title":"arXiv preprint"},{"key":"B69","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3236009","article-title":"A survey of methods for explaining black box models","volume":"51","author":"Guidotti","year":"","journal-title":"ACM Comput. Surveys"},{"key":"B70","first-page":"10333","article-title":"\u201cFastIF: scalable influence functions for efficient model interpretation and debugging,\u201d","author":"Guo","year":"2021","journal-title":"Conference on Empirical Methods in Natural Language Processing"},{"key":"B71","article-title":"\u201cBuilding trust in interactive machine learning via user contributed interpretable rules,\u201d","author":"Guo","year":"2022","journal-title":"International Conference on Intelligent User Interfaces"},{"key":"B72","doi-asserted-by":"crossref","first-page":"29","DOI":"10.18653\/v1\/2022.lnls-1.4","article-title":"\u201cWhen can models learn from explanations? 
a formal framework for understanding the roles of explanation data,\u201d","author":"Hase","year":"2022","journal-title":"Proceedings of the First Workshop on Learning with Natural Language Supervision"},{"key":"B73","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1609\/hcomp.v7i1.5265","article-title":"Interpretable image recognition with hierarchical prototypes","volume":"7","author":"Hase","year":"2019","journal-title":"Conf. Hum. Comput. Crowdsourcing"},{"key":"B74","doi-asserted-by":"publisher","first-page":"9","DOI":"10.1016\/j.eswa.2016.02.013","article-title":"Interactive recommender systems: a survey of the state of the art and future research challenges and opportunities","volume":"56","author":"He","year":"2016","journal-title":"Expert. Syst. Appl"},{"key":"B75","first-page":"2925","article-title":"\u201cFooling neural network interpretations via adversarial model manipulation,\u201d","volume-title":"International Conference on Neural Information Processing Systems, Vol. 32","author":"Heo","year":"2019"},{"key":"B76","first-page":"4228","article-title":"\u201cCost-effective interactive attention learning with neural attention processes,\u201d","author":"Heo","year":"2020","journal-title":"International Conference on Machine Learning"},{"key":"B77","doi-asserted-by":"publisher","first-page":"166970","DOI":"10.1109\/ACCESS.2021.3135514","article-title":"A survey on cost types, interaction schemes, and annotator performance models in selection algorithms for active learning in classification","volume":"9","author":"Herde","year":"2021","journal-title":"IEEE Access"},{"key":"B78","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1109\/MIS.2013.24","article-title":"Trust in automation","volume":"28","author":"Hoffman","year":"2013","journal-title":"IEEE Intell Syst"},{"key":"B79","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2105.02968","article-title":"This looks like that... does it? 
shortcomings of latent space prototype interpretability in deep networks","author":"Hoffmann","year":"2021","journal-title":"arXiv preprint"},{"key":"B80","first-page":"427","article-title":"\u201cThe next frontier: Ai we can really trust,\u201d","volume-title":"Machine Learning and Principles and Practice of Knowledge Discovery in Databases-International Workshops of ECML PKDD 2021, Proceedings, Communications in Computer and Information Science","author":"Holzinger","year":"2021"},{"key":"B81","doi-asserted-by":"publisher","DOI":"10.1002\/widm.1312","article-title":"Causability and explainabilty of artificial intelligence in medicine","volume":"9","author":"Holzinger","year":"2019","journal-title":"Wiley Interdisc. Rev. Data Min. Knowl. Disc"},{"key":"B82","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-04083-2_2","volume-title":"Explainable AI Methods - A Brief Overview","author":"Holzinger","year":"2022"},{"key":"B83","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1609\/hcomp.v8i1.7464","article-title":"Soliciting human-in-the-loop user feedback for interactive machine learning reduces user trust and impressions of model accuracy","volume":"8","author":"Honeycutt","year":"2020","journal-title":"Conf. Hum. Comput. 
Crowdsourcing"},{"key":"B84","first-page":"9734","article-title":"\u201cA benchmark for interpretability methods in deep neural networks,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Hooker","year":"2019"},{"key":"B85","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2010.11034","article-title":"On explaining decision trees","author":"Izza","year":"2020","journal-title":"arXiv preprint"},{"key":"B86","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v36i11.21488","article-title":"\u201cSymbols as a lingua franca for bridging human-ai chasm for explainable and advisable AI systems,\u201d","author":"Kambhampati","year":"2022","journal-title":"Proceedings of Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI)"},{"key":"B87","first-page":"353","article-title":"\u201cAlgorithmic recourse: from counterfactual explanations to interventions,\u201d","author":"Karimi","year":"2021","journal-title":"Conference on Fairness, Accountability, and Transparency"},{"key":"B88","first-page":"4401","article-title":"\u201cA style-based generator architecture for generative adversarial networks,\u201d","author":"Karras","year":"2019","journal-title":"Conference on Computer Vision and Pattern Recognition"},{"key":"B89","article-title":"\u201cLearning the difference that makes a difference with counterfactually-augmented data,\u201d","author":"Kaushik","year":"2019","journal-title":"International Conference on Learning Representations"},{"key":"B90","first-page":"3382","article-title":"\u201cInterpreting black box predictions using fisher kernels,\u201d","volume-title":"International Conference on Artificial Intelligence and Statistics","author":"Khanna","year":"2019"},{"key":"B91","first-page":"1952","article-title":"\u201cThe bayesian case model: a generative approach for case-based reasoning and prototype classification,\u201d","author":"Kim","year":"2014","journal-title":"International Conference on Neural 
Information Processing Systems"},{"key":"B92","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1007\/978-3-030-28954-6_14","article-title":"\u201cThe (un) reliability of saliency methods,\u201d","volume-title":"Explainable AI: Interpreting, Explaining and Visualizing Deep Learning","author":"Kindermans","year":"2019"},{"key":"B93","first-page":"1885","article-title":"\u201cUnderstanding black-box predictions via influence functions,\u201d","volume-title":"International Conference on Machine Learning","author":"Koh","year":"2017"},{"key":"B94","first-page":"5338","article-title":"\u201cConcept bottleneck models,\u201d","author":"Koh","year":"2020","journal-title":"International Conference on Machine Learning"},{"key":"B95","article-title":"\u201cSPARROW: Semantically coherent prototypes for image classification,\u201d","author":"Kraft","year":"2021","journal-title":"The 32nd British Machine Vision Conference (BMVC)."},{"key":"B96","first-page":"126","article-title":"\u201cPrinciples of explanatory debugging to personalize interactive machine learning,\u201d","volume-title":"International Conference on Intelligent User Interfaces","author":"Kulesza","year":"2015"},{"key":"B97","first-page":"41","article-title":"\u201cExplanatory debugging: supporting end-user debugging of machine-learned programs,\u201d","volume-title":"Symposium on Visual Languages and Human-Centric Computing","author":"Kulesza","year":"2010"},{"key":"B98","first-page":"5491","article-title":"\u201cProblems with shapley-value-based explanations as feature importance measures,\u201d","author":"Kumar","year":"2020","journal-title":"International Conference on Machine Learning"},{"key":"B99","doi-asserted-by":"publisher","first-page":"59","DOI":"10.1609\/hcomp.v7i1.5280","article-title":"Human evaluation of models built for interpretability","volume":"7","author":"Lage","year":"2019","journal-title":"AAAI Conf. Hum. Comput. 
Crowdsourcing"},{"key":"B100","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2012.02898","article-title":"Learning interpretable concept-based models with human feedback","author":"Lage","year":"2020","journal-title":"arXiv preprint"},{"key":"B101","first-page":"1675","article-title":"\u201cInterpretable decision sets: ajoint framework for description and prediction,\u201d","volume-title":"International Conference on Knowledge Discovery and Data Mining","author":"Lakkaraju","year":"2016"},{"key":"B102","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41467-019-08987-4","article-title":"Unmasking clever hans predictors and assessing what machines really learn","volume":"10","author":"Lapuschkin","year":"2019","journal-title":"Nat. Commun"},{"key":"B103","first-page":"332","article-title":"\u201cFind: human-in-the-loop debugging deep text classifiers,\u201d","author":"Lertvittayakumjorn","year":"2020","journal-title":"Conference on Empirical Methods in Natural Language Processing"},{"key":"B104","doi-asserted-by":"publisher","first-page":"1508","DOI":"10.1162\/tacl_a_00440","article-title":"Explanation-based human debugging of nlp models: a survey","volume":"9","author":"Lertvittayakumjorn","year":"2021","journal-title":"Trans. Assoc. Comput. Linguist"},{"key":"B105","first-page":"4380","article-title":"\u201cAlice: active learning with contrastive natural language explanations,\u201d","volume-title":"Conference on Empirical Methods in Natural Language Processing","author":"Liang","year":"2020"},{"key":"B106","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2110.10790","article-title":"Human-centered explainable ai (xai): from algorithms to user experiences","author":"Liao","year":"2021","journal-title":"arXiv preprint"},{"key":"B107","unstructured":"LimB. 
Y. Improving understanding and trust with intelligibility in context-aware applications2012"},
{"key":"B108","doi-asserted-by":"publisher","first-page":"464","DOI":"10.1016\/j.tics.2006.08.004","article-title":"The structure and function of explanations","volume":"10","author":"Lombrozo","year":"2006","journal-title":"Trends Cogn. Sci"},
{"key":"B109","doi-asserted-by":"publisher","first-page":"56","DOI":"10.1038\/s42256-019-0138-9","article-title":"From local explanations to global understanding with explainable ai for trees","volume":"2","author":"Lundberg","year":"2020","journal-title":"Nat. Mach. Intell"},
{"key":"B110","first-page":"4768","article-title":"\u201cA unified approach to interpreting model predictions,\u201d","author":"Lundberg","year":"2017","journal-title":"International Conference on Neural Information Processing Systems"},
{"key":"B111","first-page":"3820","article-title":"\u201cTeaching categories to human learners with visual explanations,\u201d","author":"Mac Aodha","year":"2018","journal-title":"Conference on Computer Vision and Pattern Recognition"},
{"key":"B112","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2106.13314","article-title":"Promises and pitfalls of black-box concept learning models","author":"Mahinpei","year":"2021","journal-title":"arXiv preprint"},
{"key":"B113","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2105.04289","article-title":"Do concept bottleneck models learn as intended?","author":"Margeloiu","year":"2021","journal-title":"arXiv preprint"},
{"key":"B114","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2003.10365","article-title":"On interactive machine learning and the potential of cognitive feedback","author":"Michael","year":"2020","journal-title":"arXiv preprint"},
{"key":"B115","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","article-title":"Explanation in artificial intelligence: Insights from the social sciences","volume":"267","author":"Miller","year":"2019","journal-title":"Artif. Intell"},
{"key":"B116","first-page":"1","article-title":"\u201cModel reconstruction from model explanations,\u201d","author":"Milli","year":"2019","journal-title":"Conference on Fairness, Accountability, and Transparency"},
{"key":"B117","doi-asserted-by":"publisher","first-page":"47","DOI":"10.1007\/BF00116250","article-title":"Explanation-based generalization: a unifying view","volume":"1","author":"Mitchell","year":"1986","journal-title":"Mach. Learn"},
{"key":"B118","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1905.03540","article-title":"Embedding human knowledge into deep neural network via attention map","author":"Mitsuhara","year":"2019","journal-title":"arXiv preprint"},
{"key":"B119","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.dsp.2017.10.011","article-title":"Methods for interpreting and understanding deep neural networks","volume":"73","author":"Montavon","year":"2018","journal-title":"Digit. Signal Process"},
{"key":"B120","first-page":"589","article-title":"\u201cGlobal explanations with decision rules: a co-learning approach,\u201d","author":"Nanfack","year":"2021","journal-title":"Conference on Uncertainty in Artificial Intelligence"},
{"key":"B121","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1802.00682","article-title":"How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation","author":"Narayanan","year":"2018","journal-title":"arXiv preprint"},
{"key":"B122","first-page":"441","article-title":"\u201cThis looks like that, because... explaining prototypes for interpretable image recognition,\u201d","volume-title":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Nauta","year":""},
{"key":"B123","first-page":"14933","article-title":"\u201cNeural prototype trees for interpretable fine-grained image recognition,\u201d","volume-title":"Conference on Computer Vision and Pattern Recognition","author":"Nauta","year":""},
{"key":"B124","first-page":"354","article-title":"\u201cAttributes for classifier feedback,\u201d","volume-title":"European Conference on Computer Vision","author":"Parkash","year":"2012"},
{"key":"B125","doi-asserted-by":"crossref","DOI":"10.1017\/CBO9780511803161","volume-title":"Causality","author":"Pearl","year":"2009"},
{"key":"B126","first-page":"7599","article-title":"\u201cPerformative prediction,\u201d","author":"Perdomo","year":"2020","journal-title":"International Conference on Machine Learning"},
{"key":"B127","article-title":"\u201cRegularizing black-box models for improved interpretability,\u201d","volume-title":"International Conference on Neural Information Processing Systems, Vol. 33","author":"Plumb","year":"2020"},
{"key":"B128","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2009.09723","article-title":"Machine guides, human supervises: interactive learning with global explanations","author":"Popordanoska","year":"2020","journal-title":"arXiv preprint"},
{"key":"B129","first-page":"79","article-title":"\u201cAn interactive algorithm for asking and incorporating feature feedback into support vector machines,\u201d","volume-title":"Annual International ACM SIGIR Conference on Research and Development in Information Retrieval","author":"Raghavan","year":"2007"},
{"key":"B130","first-page":"1655","article-title":"Active learning with feedback on features and instances","volume":"7","author":"Raghavan","year":"2006","journal-title":"J. Mach. Learn. Res"},
{"key":"B131","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1613\/jair.1.13200","article-title":"Explainable deep learning: a field guide for the uninitiated","volume":"73","author":"Ras","year":"2022","journal-title":"J. Artif. Intell. Res"},
{"key":"B132","first-page":"269","article-title":"\u201cSnorkel: rapid training data creation with weak supervision,\u201d","volume-title":"Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, Vol. 11","author":"Ratner","year":"2017"},
{"key":"B133","first-page":"1135","article-title":"\u201c\u201cwhy should I trust you?\u201d: explaining the predictions of any classifier,\u201d","volume-title":"International Conference on Knowledge Discovery and Data Mining","author":"Ribeiro","year":"2016"},
{"key":"B134","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11491","article-title":"Anchors: high-precision model-agnostic explanations","volume":"32","author":"Ribeiro","year":"2018","journal-title":"Conf. Artif. Intell"},
{"key":"B135","first-page":"8116","article-title":"\u201cInterpretations are useful: penalizing explanations to align neural networks with prior knowledge,\u201d","author":"Rieger","year":"2020","journal-title":"International Conference on Machine Learning"},
{"key":"B136","first-page":"2662","article-title":"\u201cRight for the right reasons: training differentiable models by constraining their explanations,\u201d","author":"Ross","year":"2017","journal-title":"International Joint Conference on Artificial Intelligence"},
{"key":"B137","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","article-title":"Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead","volume":"1","author":"Rudin","year":"2019","journal-title":"Nat. Mach. Intell"},
{"key":"B138","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1214\/21-SS133","article-title":"Interpretable machine learning: fundamental principles and 10 grand challenges","volume":"16","author":"Rudin","year":"2022","journal-title":"Stat. Surv"},
{"key":"B139","doi-asserted-by":"crossref","first-page":"1420","DOI":"10.1145\/3447548.3467245","article-title":"\u201cProtopshare: prototypical parts sharing for similarity discovery in interpretable image classification,\u201d","volume-title":"in ACM SIGKDD Conference on Knowledge Discovery and Data Mining","author":"Rymarczyk","year":"2021"},
{"key":"B140","article-title":"\u201cEditing a classifier by rewriting its prediction rules,\u201d","author":"Santurkar","year":"2021","journal-title":"International Conference on Neural Information Processing Systems, Vol. 34"},
{"key":"B141","doi-asserted-by":"publisher","first-page":"197","DOI":"10.3233\/AIC-210084","article-title":"Neuro-Symbolic Artificial Intelligence","volume":"34","author":"Sarker","year":"2021","journal-title":"AI Communications"},
{"key":"B142","doi-asserted-by":"publisher","first-page":"612","DOI":"10.1109\/JPROC.2021.3058954","article-title":"Toward causal representation learning","volume":"109","author":"Sch\u00f6lkopf","year":"2021","journal-title":"Proc. IEEE"},
{"key":"B143","doi-asserted-by":"publisher","first-page":"476","DOI":"10.1038\/s42256-020-0212-3","article-title":"Making deep neural networks right for the right scientific reasons by interacting with their explanations","volume":"2","author":"Schramowski","year":"2020","journal-title":"Nat. Mach. Intell"},
{"key":"B144","first-page":"618","article-title":"\u201cGrad-CAM: visual explanations from deep networks via gradient-based localization,\u201d","author":"Selvaraju","year":"2017","journal-title":"International Conference on Computer Vision"},
{"key":"B145","first-page":"2591","article-title":"\u201cTaking a hint: leveraging explanations to make vision and language models more grounded,\u201d","volume-title":"International Conference on Computer Vision","author":"Selvaraju","year":"2019"},
{"key":"B146","first-page":"1467","article-title":"\u201cClosing the loop: fast, interactive semi-supervised annotation with queries on features and instances,\u201d","volume-title":"Conference on Empirical Methods in Natural Language Processing","author":"Settles","year":"2011"},
{"key":"B147","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-01560-1","volume-title":"Active learning: Synthesis Lectures on Artificial Intelligence and Machine Learning","author":"Settles","year":"2012"},
{"key":"B148","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103457","article-title":"Glocalx-from local to global explanations of black box ai models","volume":"294","author":"Setzu","year":"2021","journal-title":"Artif. Intell"},
{"key":"B149","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i11.17148","article-title":"Right for better reasons: training differentiable models by constraining their influence function","author":"Shao","year":"2021","journal-title":"Conf. Artif. Intell"},
{"key":"B150","article-title":"Right for the right latent factors: debiasing generative models via disentanglement","author":"Shao","year":"2022","journal-title":"arXiv preprint"},
{"key":"B151","first-page":"5103","article-title":"\u201cA symbolic approach to explaining bayesian network classifiers,\u201d","author":"Shih","year":"2018","journal-title":"International Joint Conference on Artificial Intelligence"},
{"key":"B152","article-title":"\u201cDeep inside convolutional networks: Visualising image classification models and saliency maps,\u201d","author":"Simonyan","year":"2014","journal-title":"International Conference on Learning Representations"},
{"key":"B153","article-title":"\u201cHierarchical interpretations for neural network predictions,\u201d","author":"Singh","year":"2018","journal-title":"International Conference on Learning Representations"},
{"key":"B154","first-page":"830","article-title":"\u201cThe role of transparency in recommender systems,\u201d","author":"Sinha","year":"2002","journal-title":"Conference on Human Factors in Computing Systems"},
{"key":"B155","first-page":"9046","article-title":"\u201cWhen explanations lie: why many modified bp attributions fail,\u201d","author":"Sixt","year":"2020","journal-title":"International Conference on Machine Learning"},
{"key":"B156","first-page":"389","article-title":"\u201cCAIPI in practice: towards explainable interactive medical image classification,\u201d","author":"Slany","year":"2022","journal-title":"Artificial Intelligence Applications and Innovations. IFIP WG 12.5 International Workshops"},
{"key":"B157","first-page":"865","article-title":"\u201cThe constrained weight space svm: learning with ranked features,\u201d","author":"Small","year":"2011","journal-title":"International Conference on International Conference on Machine Learning"},
{"key":"B158","first-page":"56","article-title":"\u201cExplainability fact sheets: a framework for systematic assessment of explainable approaches,\u201d","volume-title":"Conference on Fairness, Accountability, and Transparency","author":"Sokol","year":""},
{"key":"B159","first-page":"1","article-title":"\u201cOne explanation does not fit all,\u201d","volume-title":"KI-K\u00fcnstliche Intelligenz","author":"Sokol","year":""},
{"key":"B160","first-page":"10317","article-title":"\u201cInteractive disentanglement: Learning concepts by interacting with their prototype representations,\u201d","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Stammer","year":"2022"},
{"key":"B161","first-page":"3619","article-title":"\u201cRight for the right concept: Revising neuro-symbolic concepts by interacting with their explanations,\u201d","author":"Stammer","year":"2021","journal-title":"Conference on Computer Vision and Pattern Recognition"},
{"key":"B162","doi-asserted-by":"publisher","first-page":"647","DOI":"10.1007\/s10115-013-0679-x","article-title":"Explaining prediction models and individual predictions with feature contributions","volume":"41","author":"\u0160trumbelj","year":"2014","journal-title":"Knowl. Inf. Syst"},
{"key":"B163","first-page":"82","article-title":"\u201cToward harnessing user feedback for machine learning,\u201d","volume-title":"International Conference on Intelligent User Interfaces","author":"Stumpf","year":"2007"},
{"key":"B164","first-page":"3319","article-title":"\u201cAxiomatic attribution for deep networks,\u201d","volume-title":"International Conference on Machine Learning","author":"Sundararajan","year":"2017"},
{"key":"B165","first-page":"4","article-title":"\u201cToward faithful explanatory active learning with self-explainable neural nets,\u201d","author":"Teso","year":"2019","journal-title":"Workshop on Interactive Adaptive Learning"},
{"key":"B166","article-title":"\u201cInteractive label cleaning with example-based explanations,\u201d","author":"Teso","year":"2021","journal-title":"International Conference on Neural Information Processing Systems"},
{"key":"B167","first-page":"239","article-title":"\u201cExplanatory interactive machine learning,\u201d","author":"Teso","year":"2019","journal-title":"Conference on AI, Ethics, and Society"},
{"key":"B168","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1145\/1297231.1297259","article-title":"\u201cEffective explanations of recommendations: user-centered design,\u201d","volume-title":"Proceedings of the 2007 ACM conference on Recommender Systems","author":"Tintarev","year":"2007"},
{"key":"B169","volume-title":"Explaining Recommendations: Design and Evaluation","author":"Tintarev","year":"2015"},
{"key":"B170","doi-asserted-by":"publisher","first-page":"349","DOI":"10.1007\/s10994-015-5528-6","article-title":"Supersparse linear integer models for optimized medical scoring systems","volume":"102","author":"Ustun","year":"2016","journal-title":"Mach. Learn"},
{"key":"B171","doi-asserted-by":"publisher","first-page":"6505","DOI":"10.1609\/aaai.v35i7.16806","article-title":"On the tractability of SHAP explanations","volume":"35","author":"Van den Broeck","year":"2021","journal-title":"Proc. AAAI Conf. Artif. Intell"},
{"key":"B172","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1145\/3313109","article-title":"Trustworthy machine learning and artificial intelligence","volume":"25","author":"Varshney","year":"2019","journal-title":"XRDS Crossroads ACM Mag. Stud"},
{"key":"B173","first-page":"6000","article-title":"\u201cAttention is all you need,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Vaswani","year":"2017"},
{"key":"B174","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1016\/j.inffus.2021.05.009","article-title":"Notions of explainability and evaluation approaches for explainable artificial intelligence","volume":"76","author":"Vilone","year":"2021","journal-title":"Inf. Fusion"},
{"key":"B175","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2022.103840","article-title":"On the robustness of sparse counterfactual explanations to adverse perturbations","volume":"316","author":"Virgolin","year":"2023","journal-title":"Artif. Intell"},
{"key":"B176","article-title":"\u201cSaliency is a possible red herring when diagnosing poor generalization,\u201d","author":"Viviano","year":"2021","journal-title":"International Conference on Learning Representations"},
{"key":"B177","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2021.3079836","article-title":"Informed machine learning - a taxonomy and survey of integrating prior knowledge into learning systems","author":"von Rueden","year":"2021","journal-title":"IEEE Trans. Knowl. Data Eng"},
{"key":"B178","doi-asserted-by":"publisher","first-page":"841","DOI":"10.2139\/ssrn.3063289","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the gdpr","volume":"31","author":"Wachter","year":"2017","journal-title":"Harv. JL & Tech"},
{"key":"B179","article-title":"\u201cNeural-symbolic integration for fairness in AI,\u201d","author":"Wagner","year":"2021","journal-title":"CEUR Workshop, Vol. 2846"},
{"key":"B180","article-title":"\u201cTowards probabilistic sufficient explanations,\u201d","author":"Wang","year":"2020","journal-title":"Extending Explainable AI Beyond Deep Models and Classifiers Workshop at ICML (XXAI)"},
{"key":"B181","doi-asserted-by":"publisher","first-page":"243","DOI":"10.1613\/jair.1.11345","article-title":"Humans in the loop: the design of interactive AI systems","volume":"64","author":"Wang","year":"2019","journal-title":"J. Artif. Intell. Res"},
{"key":"B182","doi-asserted-by":"crossref","first-page":"109","DOI":"10.1109\/HRI.2016.7451741","article-title":"\u201cTrust calibration within a human-robot team: comparing automatically generated explanations,\u201d","volume-title":"2016 11th ACM\/IEEE International Conference on Human-Robot Interaction (HRI)","author":"Wang","year":"2016"},
{"key":"B183","doi-asserted-by":"crossref","first-page":"56","DOI":"10.1007\/978-3-319-78978-1_5","article-title":"\u201cIs it my looks? or something i said? the impact of explanations, embodiment, and expectations on trust and performance in human-robot teams,\u201d","author":"Wang","year":"2018","journal-title":"International Conference on Persuasive Technology"},
{"key":"B184","doi-asserted-by":"publisher","first-page":"281","DOI":"10.1006\/ijhc.2001.0499","article-title":"Interactive machine learning: letting users build classifiers","volume":"55","author":"Ware","year":"2001","journal-title":"Int. J. Hum. Comput. Stud"},
{"key":"B185","doi-asserted-by":"publisher","first-page":"113","DOI":"10.1016\/j.jesp.2014.01.005","article-title":"The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle","volume":"52","author":"Waytz","year":"2014","journal-title":"J. Exp. Soc. Psychol"},
{"key":"B186","doi-asserted-by":"publisher","first-page":"67","DOI":"10.1109\/4235.585893","article-title":"No free lunch theorems for optimization","volume":"1","author":"Wolpert","year":"1997","journal-title":"IEEE Trans. Evolut. Comput"},
{"key":"B187","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1145\/3298689.3347009","article-title":"\u201cDeep language-based critiquing for recommender systems,\u201d","author":"Wu","year":"2019","journal-title":"Conference on Recommender Systems"},
{"key":"B188","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11501","article-title":"Beyond sparsity: tree regularization of deep models for interpretability","volume":"32","author":"Wu","year":"2018","journal-title":"Conf. Artif. Intell"},
{"key":"B189","doi-asserted-by":"publisher","first-page":"6413","DOI":"10.1609\/aaai.v34i04.6112","article-title":"Regional tree regularization for interpretability in deep neural networks","volume":"34","author":"Wu","year":"2020","journal-title":"Conf. Artif. Intell"},
{"key":"B190","article-title":"\u201cPolyjuice: generating counterfactuals for explaining, evaluating, and improving models,\u201d","author":"Wu","year":"2021","journal-title":"Annual Meeting of the Association for Computational Linguistics"},
{"key":"B191","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3319616","article-title":"Local decision pitfalls in interactive machine learning: an investigation into feature selection in sentiment analysis","volume":"26","author":"Wu","year":"2019","journal-title":"Trans. Comput. Hum. Interact"},
{"key":"B192","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2006.16789","article-title":"Causality learning: a new perspective for interpretable machine learning","author":"Xu","year":"2020","journal-title":"arXiv preprint"},
{"key":"B193","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2103.10415","article-title":"Refining neural networks with compositional explanations","author":"Yao","year":"2021","journal-title":"arXiv preprint"},
{"key":"B194","doi-asserted-by":"crossref","first-page":"337","DOI":"10.3233\/FAIA210362","article-title":"\u201cHuman-centered concept explanations for neural networks,\u201d","volume-title":"Neuro-Symbolic Artificial Intelligence: The State of the Art, volume 342 of Frontiers in Artificial Intelligence and Applications","author":"Yeh","year":"2021"},
{"key":"B195","first-page":"9311","article-title":"\u201cRepresenter point selection for explaining deep neural networks,\u201d","author":"Yeh","year":"2018","journal-title":"International Conference on Neural Information Processing Systems"},
{"key":"B196","first-page":"1039","article-title":"\u201cNeural-symbolic VQA: disentangling reasoning from vision and language understanding,\u201d","volume-title":"International Conference on Neural Information Processing Systems","author":"Yi","year":"2018"},
{"key":"B197","first-page":"260","article-title":"\u201cUsing \u201cannotator rationales\u201d to improve machine learning for text categorization,\u201d","volume-title":"Conference of the North American Chapter of the Association for Computational Linguistics","author":"Zaidan","year":"2007"},
{"key":"B198","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1145\/3314419","article-title":"Fixing mislabeling by human annotators leveraging conflict resolution and prior knowledge","volume":"3","author":"Zeni","year":"2019","journal-title":"Interact. Mobile Wearable Ubiquitous Technol"},
{"key":"B199","article-title":"Learning from ambiguous demonstrations with self-explanation guided reinforcement learning","author":"Zha","year":"2021","journal-title":"arXiv preprint arXiv"},
{"key":"B200","doi-asserted-by":"publisher","first-page":"421","DOI":"10.1080\/10447318.2017.1357904","article-title":"Exploring explanation effects on consumers' trust in online recommender agents","volume":"34","author":"Zhang","year":"2018","journal-title":"Int. J. Hum. Comput. Interact"},
{"key":"B201","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1561\/9781680836592","article-title":"Explainable recommendation: A survey and new perspectives","volume":"14","author":"Zhang","year":"2020","journal-title":"Foundat. Trends"},
{"key":"B202","article-title":"\u201cWhy should you trust my explanation?\u201d understanding uncertainty in LIME explanations,\u201d","author":"Zhang","year":"2019","journal-title":"AI for Social Good Workshop at ICML'19"},
{"key":"B203","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2021.internlp-1.1","article-title":"\u201cHILDIF: interactive debugging of nli models using influence functions,\u201d","volume":"1","author":"Zylberajch","year":"2021","journal-title":"Workshop on Interactive Learning for Natural Language Processing"}],
"container-title":["Frontiers in Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2023.1066049\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T09:16:14Z","timestamp":1677143774000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2023.1066049\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,23]]},"references-count":203,"alternative-id":["10.3389\/frai.2023.1066049"],"URL":"https:\/\/doi.org\/10.3389\/frai.2023.1066049","relation":{},"ISSN":["2624-8212"],"issn-type":[{"value":"2624-8212","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,23]]},"article-number":"1066049"}}