{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T21:11:37Z","timestamp":1773954697291,"version":"3.50.1"},"reference-count":83,"publisher":"Springer Science and Business Media LLC","issue":"2-4","license":[{"start":{"date-parts":[[2023,7,17]],"date-time":"2023-07-17T00:00:00Z","timestamp":1689552000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,17]],"date-time":"2023-07-17T00:00:00Z","timestamp":1689552000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100014047","name":"711th Human Performance Wing","doi-asserted-by":"publisher","award":["FA8650-17-2-7711"],"award-info":[{"award-number":["FA8650-17-2-7711"]}],"id":[{"id":"10.13039\/100014047","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000923","name":"Australian Research Council","doi-asserted-by":"publisher","award":["DP190103414"],"award-info":[{"award-number":["DP190103414"]}],"id":[{"id":"10.13039\/501100000923","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["K\u00fcnstl Intell"],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>This paper summarizes the psychological insights and related design challenges that have emerged in the field of Explainable AI (XAI). This summary is organized as a set of principles, some of which have recently been instantiated in XAI research. The primary aspects of implementation to which the principles refer are the design and evaluation stages of XAI system development, that is, principles concerning the design of explanations and the design of experiments for evaluating the performance of XAI systems. 
The principles can serve as guidance, to ensure that AI systems are human-centered and effectively assist people in solving difficult problems.<\/jats:p>","DOI":"10.1007\/s13218-023-00806-9","type":"journal-article","created":{"date-parts":[[2023,7,17]],"date-time":"2023-07-17T13:01:37Z","timestamp":1689598897000},"page":"237-247","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["Increasing the Value of XAI for Users: A Psychological Perspective"],"prefix":"10.1007","volume":"37","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1387-7659","authenticated-orcid":false,"given":"Robert R.","family":"Hoffman","sequence":"first","affiliation":[]},{"given":"Timothy","family":"Miller","sequence":"additional","affiliation":[]},{"given":"Gary","family":"Klein","sequence":"additional","affiliation":[]},{"given":"Shane T.","family":"Mueller","sequence":"additional","affiliation":[]},{"given":"William J.","family":"Clancey","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,17]]},"reference":[{"key":"806_CR1","unstructured":"Abdollahi B, Nasraoui O (2016) Explainable restricted Boltzmann machines for collaborative filtering. [arXiv:1606.07129v1]"},{"key":"806_CR2","doi-asserted-by":"publisher","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","volume":"6","author":"A Adadi","year":"2018","unstructured":"Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence. IEEE Access 6:52138\u201352160 [https:\/\/doi.org\/10.1109\/ACCESS.2018.2870052]","journal-title":"IEEE Access"},{"issue":"3","key":"806_CR3","first-page":"2594","volume":"34","author":"A Akula","year":"2020","unstructured":"Akula A, Wang S, Zhu S-C (2020) CoCoX: Generating conceptual and counterfactual Explanations via Fault-Lines. 
Proc AAAI Conf Artif Intell 34(3):2594\u20132601","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"806_CR4","unstructured":"Amarasinghe K, Rodolfa KT, Jesus S, Chen V, Balayan V, Saleiro P, Bizarro P, Talwalkar A, Ghani R (2022) On the importance of application-grounded experimental design for evaluating explainable ML methods. [downloaded 29 January 2023 from arXiv:2206.13503]."},{"key":"806_CR5","doi-asserted-by":"publisher","unstructured":"Anderson A, Dodge J, Sadarangani A, Juozapaitis Z, Newman E, Irvine J, Chattopadhyay S, Fern A, Burnett M (2020) Mental models of mere mortals with explanations of reinforcement learning. ACM Transactions on Interactive Intelligent Systems (TiiS). [https:\/\/doi.org\/10.1145\/3366485]","DOI":"10.1145\/3366485"},{"key":"806_CR6","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta AB, D\u00edaz-Rodr\u00edguez N, Del Ser J, Bennetot A, Tabik M, Barbado A, Garcia S, Gil-Lopez, Molina D (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform Fusion 58:82\u2013115","journal-title":"Inform Fusion"},{"key":"806_CR7","unstructured":"Arya V, Bellamy RKE, and 18 others (2019) One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. [arXiv:1909.03012v2]"},{"key":"806_CR8","unstructured":"Bojarski M, Yeres P, Choromanska A, Choromanski K, Firner B, Jackel LD, Muller U (2017) Explaining how a deep neural network trained with end-to-end learning steers a car. [arXiv:1704.07911]"},{"key":"806_CR9","doi-asserted-by":"publisher","unstructured":"Bu\u00e7inca Z, Lin P, Gajos ZJ, Glassman EL (2020) Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI \u201820). Association for Computing Machinery, New York, NY. 
[downloaded 29 March 2023 at https:\/\/doi.org\/10.1145\/3377325.3377498]","DOI":"10.1145\/3377325.3377498"},{"issue":"9","key":"806_CR10","first-page":"1046","volume":"31","author":"JM Carroll","year":"1988","unstructured":"Carroll JM, Aaronson P (1988) Learning by doing with simulated intelligent help. Commun Assoc Comput Mach 31(9):1046\u20131079","journal-title":"Commun Assoc Comput Mach"},{"issue":"1","key":"806_CR11","first-page":"14","volume":"30","author":"JMN Carroll","year":"1987","unstructured":"Carroll JMN, McKendree J (1987) Interface design issues for advice-giving expert systems. Commun Assoc Comput Mach 30(1):14\u201331","journal-title":"Commun Assoc Comput Mach"},{"key":"806_CR12","unstructured":"Chari S, Gruen DM, Seneviratne O, McGuiness DL (2020) Foundations of knowledge-enabled systems [downloaded 29 March 2023 at arXiv:2003.07520v1]"},{"issue":"1","key":"806_CR13","doi-asserted-by":"publisher","first-page":"69","DOI":"10.1207\/s15327809jls0101_4","volume":"1","author":"MTH Chi","year":"1991","unstructured":"Chi MTH, Van Lehn KA (1991) The content of physics self-explanations. J Learn Sci 1(1):69\u2013105","journal-title":"J Learn Sci"},{"issue":"10","key":"806_CR14","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1109\/MSPEC.2021.9563958","volume":"58","author":"CQ Choi","year":"2021","unstructured":"Choi CQ (2021) 7 revealing ways AIs fail: neural networks can be disastrously brittle, forgetful, and surprisingly bad at math. IEEE Spectr 58(10):42\u201347 [https:\/\/doi.org\/10.1109\/MSPEC.2021.9563958]","journal-title":"IEEE Spectr"},{"key":"806_CR15","unstructured":"Chromik M, Schuessler M (2020) A taxonomy for human subject evaluation of black-box explanations in XAI. 
In Proceedings of the IUI workshop on Explainable Smart Systems and Algorithmic Transparency in Emerging Technologies (ExSS-ATEC\u201920) [arXiv:2011.07130v2]"},{"issue":"3","key":"806_CR16","first-page":"40","volume":"7","author":"WJ Clancey","year":"1986","unstructured":"Clancey WJ (1986) From GUIDON to NEOMYCIN and HERACLES in twenty short lessons: ONR Final Report 1979\u20131985. The AI Magazine 7(3):40\u201360","journal-title":"The AI Magazine"},{"key":"806_CR17","unstructured":"Clancey WJ (2020) Designing agents for people: Case studies of the Brahms Work Practice Simulation Framework Kindle Print Replica e-Book. [https:\/\/www.researchgate.net\/publication\/343224286_Designing_Agents_for_People_Case_Studies_of_the_Brahms_Work_Practice_Simulation_Framework_Excerpt_Contents_Preface_Reader%27s_Guide_Index]"},{"key":"806_CR18","doi-asserted-by":"publisher","DOI":"10.1002\/ail2.53","author":"WJ Clancey","year":"2022","unstructured":"Clancey WJ, Hoffman RR (2022) Methods and standards for research on explainable artificial intelligence: Lessons from Intelligent Tutoring Systems. Appl AI Lett. [https:\/\/doi.org\/10.1002\/ail2.53]","journal-title":"Appl AI Lett"},{"key":"806_CR19","first-page":"1","volume":"22","author":"IS Covert","year":"2021","unstructured":"Covert IS, Lundberg S, Lee S-I (2021) Explaining by removing: a unified framework for model explanation. J Mach Learn Res 22:1\u201330","journal-title":"J Mach Learn Res"},{"key":"806_CR20","doi-asserted-by":"crossref","unstructured":"Deal SV, Hoffman RR (2010), September\/October The Practitioner\u2019s Cycles part 3: Implementation problems. IEEE Intelligent Systems, pp. 77\u201381","DOI":"10.1109\/MIS.2010.129"},{"key":"806_CR21","doi-asserted-by":"crossref","unstructured":"Deal SV, Hoffman RR (2010), March\/April The Practitioner\u2019s Cycles, Part 1: The Actual World Problem. 
IEEE Intelligent Systems, pp.\u00a04\u20139","DOI":"10.1109\/MIS.2010.54"},{"key":"806_CR22","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1080\/07370008.1985.9649008","volume":"10","author":"AA diSessa","year":"1993","unstructured":"diSessa AA (1993) Toward an epistemology of physics. Cognition and Instruction 10:105\u2013225. [https:\/\/doi.org\/10.1080\/07370008.1985.9649008]","journal-title":"Cognition and Instruction"},{"key":"806_CR23","doi-asserted-by":"publisher","unstructured":"Dodge J, Anderson A, Khanna R, Irvine J, Dikkala R, Lam HK-H, Tababai D, Ruangrotsakun A, Shureih Z, Khang M, Fern A, Burnett M (2021) From \u201cno clear winner\u201d to an effective explainable Artificial Intelligence process: an empirical journey. Appl AI Lett 2. [https:\/\/doi.org\/10.1002\/ail2.36]","DOI":"10.1002\/ail2.36"},{"key":"806_CR24","doi-asserted-by":"crossref","unstructured":"Dodge J (2021) (with 13 others). After-Action Review for AI. ACM Transactions on Interactive Intelligent Systems, 11(3\u20134), Article 29, 1\u201335","DOI":"10.1145\/3453173"},{"key":"806_CR25","unstructured":"Druce J, Niehaus M, Moody V, Harradon M, Daniels-Koch O, Voshell M (2021) \u201cXAI Final Evaluation Reporting Request.\u201d Technical Report, Task Area 1, DARPA Explainable AI Program. Arlington, VA: DARPA"},{"issue":"4","key":"806_CR26","doi-asserted-by":"publisher","first-page":"e44","DOI":"10.1002\/ail2.44","volume":"2","author":"S Ebrahimi","year":"2021","unstructured":"Ebrahimi S, Petryk S, Gokul A, Gan J, Gonzalez JE, Rohrbach M, Darrell T (2021) Remembering for the right reasons: explanations reduce catastrophic forgetting. Appl AI Lett 2(4):e44. [https:\/\/doi.org\/10.1002\/ail2.44]","journal-title":"Appl AI Lett"},{"key":"806_CR27","doi-asserted-by":"crossref","unstructured":"Gajos KZ, Mamykina L (2022, March) Do people engage cognitively with AI? Impact of AI assistance on incidental learning. 
In 27th International Conference on Intelligent User Interfaces (pp.\u00a0794\u2013806). [https:\/\/arxiv.org\/pdf\/2202.05402.pdf]","DOI":"10.1145\/3490099.3511138"},{"key":"806_CR28","unstructured":"Goyal Y, Wu Z, Ernst J, Batra D, Parikh D, Lee S (2019) Counterfactual visual explanations. [arXiv:1904.07451]"},{"key":"806_CR29","volume-title":"Design at work: Cooperative design of computer systems","year":"1991","unstructured":"Greenbaum J, Kyng M (eds) (1991) Design at work: Cooperative design of computer systems. Erlbaum, Mahwah, NJ"},{"key":"806_CR30","unstructured":"Grosz BJ (1975) Establishing context in task-oriented dialogs. In Proceedings of the 13th Annual ACL Meeting on Computational linguistics. American Journal of Computational Linguistics (T.C. Diller, ed.), pp.\u00a04\u201318. New York: Association for Computing Machinery"},{"key":"806_CR31","doi-asserted-by":"publisher","DOI":"10.1002\/ail2.61","author":"D Gunning","year":"2021","unstructured":"Gunning D, Vorm E, Wang JY, Turek M (2021) DARPA\u2019s explainable AI program: a retrospective. Appl AI Lett [https:\/\/doi.org\/10.1002\/ail2.61]","journal-title":"Appl AI Lett"},{"key":"806_CR32","unstructured":"Hamidi-Haines M, Qi Z, Fern A, Li F, Tadepalli P (2019) Interactive naming for explaining deep neural networks: A Formative Study. IUI Workshop on EXplainable Smart Systems (EXSS). [arXiv:2006\/00093v4]"},{"key":"806_CR33","doi-asserted-by":"publisher","first-page":"273","DOI":"10.1080\/135467896394447","volume":"2","author":"DJ Hilton","year":"1996","unstructured":"Hilton DJ, Erb H-P (1996) Mental models and causal explanation: judgments of probable cause and explanatory relevance. 
Think Reasoning 2:273\u2013308","journal-title":"Think Reasoning"},{"issue":"6","key":"806_CR34","doi-asserted-by":"publisher","first-page":"1232","DOI":"10.1037\/0021-9010.86.6.1232","volume":"86","author":"PM Hinds","year":"2001","unstructured":"Hinds PM, Patterson M, Pfeffer J (2001) Bothered by abstraction: the effect of expertise on knowledge transfer and subsequent novice performance. J Appl Psychol 86(6):1232\u20131243","journal-title":"J Appl Psychol"},{"key":"806_CR35","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1201\/9781315572529-8","volume-title":"Cognitive systems engineering: the future for a changing world","author":"RR Hoffman","year":"2017","unstructured":"Hoffman RR (2017) A taxonomy of emergent trusting in the human-machine relationship. In: Smith P, Hoffman RR (eds) Cognitive systems engineering: the future for a changing world. Taylor and Francis, Boca Raton, FL, pp 137\u2013164"},{"key":"806_CR36","doi-asserted-by":"crossref","unstructured":"Hoffman RR, Deal SV, Potter S, Roth EM (2010) May\/June). The Practitioner\u2019s Cycles, part 2: Solving Envisioned World Problems. IEEE Intelligent Systems, pp. 6\u201311","DOI":"10.1109\/MIS.2010.89"},{"key":"806_CR37","unstructured":"Hoffman RR, Jalaeian M, Tate C, Klein G, Mueller ST (in review). Metrics for Explainable AI: The Explanation Scorecard. A method in AI measurement science. [https:\/\/www.ihmc.us\/wp-content\/uploads\/2021\/11\/The-Self-Explanation-Scorecard-2021.pdf]"},{"key":"806_CR38","doi-asserted-by":"crossref","unstructured":"Hoffman RR, Klein G, Jalaeian M, Tate C, Mueller ST (2023) Explainable AI: Roles, stakeholders, desirements and challenges. In Press, Frontiers in Computer Science. 
downloaded 28 March 2023 at [https:\/\/www.ihmc.us\/rgoups\/hoffman]","DOI":"10.3389\/fcomp.2023.1117848"},{"key":"806_CR39","doi-asserted-by":"crossref","unstructured":"Hoffman RR, Lee JD, Woods DD, Shadbolt N, Miller J, Bradshaw JM (2009, November\/December) The dynamics of trust in cyberdomains. IEEE Intelligent Systems, pp.\u00a05\u201311","DOI":"10.1109\/MIS.2009.124"},{"key":"806_CR40","doi-asserted-by":"crossref","unstructured":"Hoffman RR, Mueller ST, Klein G, Litman J (2023) Measures for explainable AI: explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Front Comput Sci. [downloaded 29 March 2023 at https:\/\/www.frontiersin.org\/articles\/10.3389\/fcomp.2023.1096257\/full]","DOI":"10.3389\/fcomp.2023.1096257"},{"key":"806_CR41","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1126\/science.361.6399.215","volume":"361","author":"M Hutson","year":"2018","unstructured":"Hutson M (2018) Hackers easily fool artificial intelligences. Science 361:215","journal-title":"Science"},{"key":"806_CR42","doi-asserted-by":"crossref","unstructured":"Jesus S, Belem C, Balayan V, Bento J, Saleiro P, Bizarro P, Gama J (2021) How can I choose an explainer? An application-grounded evaluation of post-hoc explanations. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery. [downloaded 30 January 2023 at arXiv:2101.08758v2]","DOI":"10.1145\/3442188.3445941"},{"key":"806_CR43","unstructured":"Johnson M, Vera AH (2021) No AI is an island. The AI Magazine, pp. 17\u201328"},{"key":"806_CR44","unstructured":"Kalyanam K, Stefik M, de Kleer J (2020, March). 
\u201cPartnering with Autonomous Systems to reduce unintended behaviors,\u201d presentation to the Air Force Science Board"},{"issue":"4","key":"806_CR45","first-page":"345","volume":"1","author":"R Kass","year":"1988","unstructured":"Kass R, Finin T (1988) The need for user models in generating expert system explanations. Int J Expert Syst 1(4):345\u2013375","journal-title":"Int J Expert Syst"},{"key":"806_CR46","doi-asserted-by":"crossref","unstructured":"Kaur H, Nori H, Jenkins S, Caruana R, Wallach H, Vaughan JW (2020, April) Interpreting interpretability: understanding data scientists\u2019 use of interpretability tools for machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp.\u00a01\u201314)","DOI":"10.1145\/3313831.3376219"},{"key":"806_CR47","doi-asserted-by":"crossref","unstructured":"Kenny E, Ford C, Quinn M, Keane M (2021) Explaining black-box classifiers using post-hoc explanations by example: the effect of explanations and error-rates in XAI user studies. Artificial Intelligence, 294, (C):103459","DOI":"10.1016\/j.artint.2021.103459"},{"key":"806_CR48","doi-asserted-by":"crossref","unstructured":"Kenny EM, Keane MT (2020) On generating plausible counterfactual and semi-factual explanations for deep learning. [arXiv:2009.06399v1]","DOI":"10.1609\/aaai.v35i13.17377"},{"key":"806_CR49","doi-asserted-by":"crossref","unstructured":"Kim J, Canny J (2017) Interpretable learning for self-driving cars by visualizing causal attention. In Proceedings of International Conference on Computer Vision (pp.\u00a02942\u20132950). New York: Springer","DOI":"10.1109\/ICCV.2017.320"},{"key":"806_CR50","doi-asserted-by":"crossref","unstructured":"Klein G, Hoffman RR, Clancey WJ, Mueller ST, Jentsch F (2023) Minimum Necessary Rigor in empirically evaluating human-AI work systems. 
The AI Magazine, in press","DOI":"10.1002\/aaai.12108"},{"key":"806_CR51","unstructured":"Klein G, Hoffman RR, Mueller ST (2019) \u201cThe Plausibility Cycle: A Model of Self-explaining How AI Systems Work.\u201d Report on Award No. FA8650-17-2-7711, DARPA XAI Program. DTIC accession number AD1073994. [https:\/\/psyarxiv.com\/rpw6e\/]"},{"key":"806_CR52","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1177\/15553434211045154","volume":"15","author":"G Klein","year":"2021","unstructured":"Klein G, Hoffman RR, Mueller ST, Newsome E (2021) Modeling the process by which people try to explain complex things to other people. J Cogn Eng Decis Mak 15:213\u2013232","journal-title":"J Cogn Eng Decis Mak"},{"key":"806_CR53","unstructured":"Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. [arXiv:1703.04730]"},{"key":"806_CR54","unstructured":"Lage I, Chen E, He J, Narayanan M, Kim B, Gershman S, Doshi-Velez F (2019) An evaluation of the human-interpretability of explanation. [downloaded 29 January 2023 at arXiv:1902.00006]"},{"key":"806_CR55","doi-asserted-by":"crossref","unstructured":"Lakkaraju H, Bastani O (2020) \u201cHow do I fool you?\u201d Manipulating user trust via misleading black box explanations. In Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society. New York: Association for Computing Machinery. downloaded 29 March 2023 at [https:\/\/www.aies-conference.com\/2020\/wp-content\/papers\/182.pdf]","DOI":"10.1145\/3375627.3375833"},{"key":"806_CR56","doi-asserted-by":"crossref","unstructured":"Lim BY, Dey AK (2010) Toolkit to support intelligibility in context-aware applications. In Proceedings of the 12th International Conference on Ubiquitous Computing (pp.\u00a013\u201322). 
New York: Association for Computing Machinery","DOI":"10.1145\/1864349.1864353"},{"key":"806_CR57","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1145\/3236386.3241340","volume":"16","author":"ZC Lipton","year":"2016","unstructured":"Lipton ZC (2016) The mythos of model interpretability. Queue 16:31\u201357","journal-title":"Queue"},{"key":"806_CR58","doi-asserted-by":"publisher","first-page":"147","DOI":"10.1207\/s15327752jpa8202_3","volume":"82","author":"JA Litman","year":"2004","unstructured":"Litman JA, Jimerson TL (2004) The measurement of curiosity as a feeling-of-deprivation. J Pers Assess 82:147\u2013157. [https:\/\/doi.org\/10.1207\/s15327752jpa8202_3]","journal-title":"J Pers Assess"},{"key":"806_CR59","doi-asserted-by":"crossref","unstructured":"Mai T, Khanna R, Dodge J, Irvine J, Lam K-H, Lin Z, Kiddle N, Newman E, Raja S, Matthews C, Perdriau C, Burnett M, Fern A (2020) Keeping It \u201cOrganized and Logical\u201d: After-Action Review for AI (AAR\/AI). Proceedings of the ACM International Conference on Intelligent User Interfaces (pp.\u00a0465\u2013476). New York: Association for Computing Machinery. [http:\/\/www.ftp.cs.orst.edu\/pub\/burnett\/iui20-AARAI.pdf]","DOI":"10.1145\/3377325.3377525"},{"key":"806_CR60","unstructured":"Miller T (2017) Explanation in Artificial Intelligence: Insights from the social sciences. [arXiv:1706.07269]"},{"key":"806_CR61","unstructured":"Mohseni S, Zarei N, Ragan ED (2020) A multidisciplinary survey and framework for design and evaluation of explainable AI Systems. [arXiv:1811.11839v5]"},{"key":"806_CR62","unstructured":"Mueller ST, Hoffman R, Clancey WJ, Emrey A, Klein G (2019) \u201cExplanation in Human-AI Systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for Explainable AI.\u201d Technical Report, Explainable AI Program, Defense Advanced Research Projects Agency, Washington, DC. 
[arXiv:1902.01876]"},{"key":"806_CR63","unstructured":"Mueller ST, Nelson B (2018) A computational model of sensemaking in a hurricane prediction task. Proceedings of ICCM 2018, the 16th International Conference on Cognitive Modeling (pp\u00a084\u201389). [https:\/\/acs.ist.psu.edu\/iccm2018\/ICCM%202018%20Proceedings.pdf]"},{"key":"806_CR64","doi-asserted-by":"crossref","unstructured":"Mueller ST, Veinott ES, Hoffman RR, Klein G, Alam L, Mamun T, Clancey WJ (2020) Principles of explanation in human-AI systems. In Proceedings of the AAAI Workshop on Explainable Agency in Artificial Intelligence (AAAI-2020) [arXiv:2102.04972]","DOI":"10.22541\/au.162316928.89726114\/v1"},{"key":"806_CR65","doi-asserted-by":"publisher","unstructured":"Nourani M, Honeycutt D, Block J, Roy C, Rahman T, Ragan E, Gogate V (2020) Investigating the importance of first impressions and Explainable AI with interactive video analysis. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (ACM CHI 2020), pp.\u00a01\u20138. [https:\/\/doi.org\/10.1145\/3334480.3382967]","DOI":"10.1145\/3334480.3382967"},{"key":"806_CR66","unstructured":"Pollack ME, Hirschberg J, Weber B (1982) User participation in the reasoning processes of expert systems. In Proceedings of AAAI-82 (pp.\u00a0358\u2013361). Menlo Park, CA: Association for the Advancement of Artificial Intelligence"},{"key":"806_CR67","unstructured":"Rosenfeld A (2021) Better metrics for evaluating explainable Artificial Intelligence. In U. Endriss, A. Now\u00e9, F. Dignum, A. Lomuscio (eds.), Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021). Downloaded 28 March 2023 at [https:\/\/www.ifaamas.org\/Proceedings\/aamas2021\/pdfs\/p45.pdf]"},{"key":"806_CR68","doi-asserted-by":"publisher","unstructured":"Russell C (2019) Efficient search for diverse coherent explanations. 
In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp.\u00a020\u201328). New York: Association for Computing Machinery. [https:\/\/doi.org\/10.1145\/3287560.3287569]","DOI":"10.1145\/3287560.3287569"},{"key":"806_CR69","unstructured":"Samek W, Wiegand T, M\u00fcller K-R (2017) Explaining artificial intelligence: understanding, visualizing and interpreting deep learning models. International Telecommunications Union Journal: ICT Discoveries, Special Issue No. 1. [arXiv:1708.08296v1]"},{"key":"806_CR70","unstructured":"Schank R (1996) Information is surprises. [www.edge.org\/conversation\/roger_schank-chapter-9-information-is-surprises]"},{"key":"806_CR71","volume-title":"Educating the reflective practitioner","author":"DA Sch\u00f6n","year":"1987","unstructured":"Sch\u00f6n DA (1987) Educating the reflective practitioner. Jossey-Bass, San Francisco"},{"key":"806_CR72","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Lee S, Shen Y, Jin H (2019) Taking a HINT: Leveraging explanations to make vision and language models more grounded. Proceedings of the International Conference on Computer Vision (pp.\u00a02591\u20132600). New York: IEEE","DOI":"10.1109\/ICCV.2019.00268"},{"key":"806_CR73","doi-asserted-by":"publisher","unstructured":"Sokol K, Flach P (2020) Explainability fact sheets: A framework for systematic assessment of explainable approaches. [https:\/\/doi.org\/10.1145\/3351095.3372870]","DOI":"10.1145\/3351095.3372870"},{"key":"806_CR74","volume-title":"The think aloud method","author":"MW van Someren","year":"1994","unstructured":"van Someren MW, Barnard YF, Sandberg JAC (1994) The think aloud method. Academic Press, London"},{"key":"806_CR75","unstructured":"Somers S, Mitsopoulos K, Thomson R, Lebiere C (2018) Cognitive-level salience for explainable artificial intelligence. 
Proceedings of the 17th International Conference on Cognitive Modeling (ICCM2018) (pp.\u00a0235\u2013240), Madison, WI"},{"key":"806_CR76","doi-asserted-by":"crossref","unstructured":"Stefik M, Youngblood M, Pirolli P, Lebiere C, Thomson R, Price R, Nelson LD, Krivacic R, Le J, Mitsopoulos K, Somers S, Schooler J (2021) Explaining autonomous drones: an XAI journey. Applied AI Letters, 2(4)","DOI":"10.1002\/ail2.54"},{"key":"806_CR77","unstructured":"Swartout WR (1981) Producing explanations and justifications of expert consulting programs. Technical Report, Massachusetts Institute of Technology. [http:\/\/dl.acm.org\/citation.cfm?id=889859]"},{"key":"806_CR78","doi-asserted-by":"crossref","unstructured":"Thomson R, Schoenherr JR (2020) Knowledge-to-Information Translation Training (KITT): An Adaptive Approach to Explainable Artificial Intelligence. In R A Sottilare and J Schwarz (Eds.) International Conference on Human-Computer Interaction: Track on Adaptive Instructional Systems LNCS 12214 (pp.\u00a0187\u2013204). Cham, Switzerland: Springer","DOI":"10.1007\/978-3-030-50788-6_14"},{"key":"806_CR79","unstructured":"Wang P, Givchi A, Shafto P (2020) Manifold learning from a teacher\u2019s demonstrations. [arXiv:1910.04615]"},{"key":"806_CR80","doi-asserted-by":"publisher","unstructured":"Wang D, Yang Q, Abdul A, Lim BY (2019) Designing theory-driven user-centric explainable AI. In Proceedings of CHI 2019 (Paper 601). New York: Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3290605.3300831","DOI":"10.1145\/3290605.3300831"},{"key":"806_CR81","unstructured":"White A, d\u2019Avila Garcez A (2021) Counterfactual instances explain little. [arXiv:2109.09809v1]"},{"issue":"1\u20132","key":"806_CR82","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1016\/0004-3702(92)90087-E","volume":"54","author":"MR Wick","year":"1992","unstructured":"Wick MR, Thompson WB (1992) Reconstructive expert system explanation. 
Artif Intell 54(1\u20132):33\u201370","journal-title":"Artif Intell"},{"key":"806_CR83","unstructured":"Yeh C-K et al (2019) On the (in)fidelity and sensitivity of explanations. [arXiv:1901.09392v4]"}],"container-title":["KI - K\u00fcnstliche Intelligenz"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13218-023-00806-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s13218-023-00806-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13218-023-00806-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,27]],"date-time":"2024-05-27T07:03:21Z","timestamp":1716793401000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s13218-023-00806-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,17]]},"references-count":83,"journal-issue":{"issue":"2-4","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["806"],"URL":"https:\/\/doi.org\/10.1007\/s13218-023-00806-9","relation":{},"ISSN":["0933-1875","1610-1987"],"issn-type":[{"value":"0933-1875","type":"print"},{"value":"1610-1987","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,17]]},"assertion":[{"value":"17 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 May 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}