{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T16:45:44Z","timestamp":1776357944990,"version":"3.51.2"},"reference-count":153,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2019,7,26]],"date-time":"2019-07-26T00:00:00Z","timestamp":1564099200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Electronics"],"abstract":"<jats:p>Machine learning systems are becoming increasingly ubiquitous. These systems\u2019s adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden to the user and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess the explanation quality. Which are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work on this field.<\/jats:p>","DOI":"10.3390\/electronics8080832","type":"journal-article","created":{"date-parts":[[2019,7,26]],"date-time":"2019-07-26T08:45:39Z","timestamp":1564130739000},"page":"832","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1362,"title":["Machine Learning Interpretability: A Survey on Methods and Metrics"],"prefix":"10.3390","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2349-4117","authenticated-orcid":false,"given":"Diogo V.","family":"Carvalho","sequence":"first","affiliation":[{"name":"Deloitte Portugal, Manuel Bandeira Street, 43, 4150-479 Porto, Portugal"},{"name":"Faculty of Engineering, University of Porto, Dr. Roberto Frias Street, 4200-465 Porto, Portugal"}]},{"given":"Eduardo M.","family":"Pereira","sequence":"additional","affiliation":[{"name":"Deloitte Portugal, Manuel Bandeira Street, 43, 4150-479 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3760-2473","authenticated-orcid":false,"given":"Jaime S.","family":"Cardoso","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, University of Porto, Dr. Roberto Frias Street, 4200-465 Porto, Portugal"},{"name":"INESC TEC, Dr. Roberto Frias Street, 4200-465 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2019,7,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). 
Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"IEEE Access"},{"key":"ref_3","unstructured":"International Data Corporation (2019, January 22). Worldwide Spending on Cognitive and Artificial Intelligence Systems Forecast to Reach $77.6 Billion in 2022, According to New IDC Spending Guide. Available online: https:\/\/www.idc.com\/getdoc.jsp?containerId=prUS44291818."},{"key":"ref_4","unstructured":"Tractica (2019, January 22). Artificial Intelligence Software Market to Reach $105.8 Billion in Annual Worldwide Revenue by 2025. Available online: https:\/\/www.tractica.com\/newsroom\/press-releases\/artificial-intelligence-software-market-to-reach-105-8-billion-in-annual-worldwide-revenue-by-2025\/."},{"key":"ref_5","unstructured":"Gartner (2019, January 22). Gartner Top 10 Strategic Technology Trends for 2019. Available online: https:\/\/www.gartner.com\/smarterwithgartner\/gartner-top-10-strategic-technology-trends-for-2019\/."},{"key":"ref_6","unstructured":"Du, M., Liu, N., and Hu, X. (2018). Techniques for Interpretable Machine Learning. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1016\/j.patcog.2016.11.008","article-title":"Explaining nonlinear classification decisions with deep Taylor decomposition","volume":"65","author":"Montavon","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., and Sculley, D. (2017, January 13\u201317). Google vizier: A service for black-box optimization. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada.","DOI":"10.1145\/3097983.3098043"},{"key":"ref_9","unstructured":"Rudin, C. (2018). Please Stop Explaining Black Box Models for High Stakes Decisions. arXiv."},{"key":"ref_10","unstructured":"Van Lent, M., Fisher, W., and Mancuso, M. (2004, January 25\u201329). An explainable artificial intelligence system for small-unit tactical behavior. Proceedings of the National Conference on Artificial Intelligence, San Jose, CA, USA."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Swartout, W.R. (1983). Xplain: A System for Creating and Explaining Expert Consulting Programs, University of Southern California, Information Sciences Institute. Technical Report.","DOI":"10.1016\/0167-7136(83)90280-9"},{"key":"ref_12","unstructured":"Van Melle, W., Shortliffe, E.H., and Buchanan, B.G. (1984). EMYCIN: A knowledge engineer\u2019s tool for constructing rule-based expert systems. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley Reading."},{"key":"ref_13","unstructured":"Moore, J.D., and Swartout, W.R. (1988). Explanation in Expert Systems: A Survey, University of Southern California, Information Sciences Institute. 
Technical Report."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"373","DOI":"10.1016\/0950-7051(96)81920-4","article-title":"Survey and critique of techniques for extracting rules from trained artificial neural networks","volume":"8","author":"Andrews","year":"1995","journal-title":"Knowl. Based Syst."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"455","DOI":"10.1007\/s11257-008-9051-3","article-title":"The effects of transparency on trust in and acceptance of a content-based art recommender","volume":"18","author":"Cramer","year":"2008","journal-title":"User Model. User Adapt. Interact."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Herlocker, J.L., Konstan, J.A., and Riedl, J. (2000, January 2\u20136). Explaining collaborative filtering recommendations. Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, PA, USA.","DOI":"10.1145\/358916.358995"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Abdul, A., Vermeulen, J., Wang, D., Lim, B.Y., and Kankanhalli, M. (2018, January 21\u201326). Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada.","DOI":"10.1145\/3173574.3174156"},{"key":"ref_18","unstructured":"Gunning, D. (2017). Explainable Artificial Intelligence (XAI)."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Gunning, D. (2019, January 22). Explainable Artificial Intelligence (XAI). Available online: https:\/\/www.darpa.mil\/program\/explainable-artificial-intelligence.","DOI":"10.1145\/3301275.3308446"},{"key":"ref_20","unstructured":"Committee on Technology National Science and Technology Council and Penny Hill Press (2016). Preparing for the Future of Artificial Intelligence, CreateSpace Independent Publishing Platform."},{"key":"ref_21","unstructured":"ACM US Public Council (2019, January 22). Statement on Algorithmic Transparency and Accountability. Available online: https:\/\/www.acm.org\/binaries\/content\/assets\/public-policy\/2017_usacm_statement_algorithms.pdf."},{"key":"ref_22","unstructured":"IPN SIG AI (2019, January 22). Dutch Artificial Intelligence Manifesto. Available online: http:\/\/ii.tudelft.nl\/bnvki\/wp-content\/uploads\/2018\/09\/Dutch-AI-Manifesto.pdf."},{"key":"ref_23","unstructured":"C\u00e9dric Villani (2019, January 22). AI for Humanity\u2014French National Strategy for Artificial intelligence. Available online: https:\/\/www.aiforhumanity.fr\/en\/."},{"key":"ref_24","unstructured":"Royal Society (2019, May 03). Machine Learning: The Power and Promise of Computers that Learn by Example. Available online: https:\/\/royalsociety.org\/topics-policy\/projects\/machine-learning\/."},{"key":"ref_25","unstructured":"Portuguese National Initiative on Digital Skills (2019, May 03). AI Portugal 2030, Available online: https:\/\/www.incode2030.gov.pt\/sites\/default\/files\/draft_ai_portugal_2030v_18mar2019.pdf."},{"key":"ref_26","unstructured":"European Commission (2019, May 03). Artificial Intelligence for Europe. Available online: https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/communication-artificial-intelligence-europe."},{"key":"ref_27","unstructured":"European Commission (2019, May 03). Algorithmic Awareness-Building. Available online: https:\/\/ec.europa.eu\/digital-single-market\/en\/algorithmic-awareness-building."},{"key":"ref_28","unstructured":"Rao, A.S. (2019, January 22). 
Responsible AI & National AI Strategies. Available online: https:\/\/ec.europa.eu\/growth\/tools-databases\/dem\/monitor\/sites\/default\/files\/4%20International%20initiatives%20v3_0.pdf."},{"key":"ref_29","unstructured":"High-Level Expert Group on Artificial Intelligence (AI HLEG) (2019, May 03). Ethics Guidelines for Trustworthy Artificial Intelligence. Available online: https:\/\/ec.europa.eu\/futurium\/en\/ai-alliance-consultation\/guidelines."},{"key":"ref_30","unstructured":"Google (2019, January 18). Responsible AI Practices\u2014Interpretability. Available online: https:\/\/ai.google\/education\/responsible-ai-practices?category=interpretability."},{"key":"ref_31","unstructured":"H2O.ai (2019, January 18). H2O Driverless AI. Available online: https:\/\/www.h2o.ai\/products\/h2o-driverless-ai\/."},{"key":"ref_32","unstructured":"DataRobot (2019, January 18). Model Interpretability. Available online: https:\/\/www.datarobot.com\/wiki\/interpretability\/."},{"key":"ref_33","unstructured":"IBM (2019, January 18). Trust and Transparency in AI. Available online: https:\/\/www.ibm.com\/watson\/trust-transparency."},{"key":"ref_34","unstructured":"Kyndi (2019, January 18). Kyndi AI Platform. Available online: https:\/\/kyndi.com\/products\/."},{"key":"ref_35","unstructured":"Flint, A., Nourian, A., and Koister, J. (2019, January 18). xAI Toolkit: Practical, Explainable Machine Learning. Available online: https:\/\/www.fico.com\/en\/latest-thinking\/white-paper\/xai-toolkit-practical-explainable-machine-learning."},{"key":"ref_36","unstructured":"FICO (2019, January 18). FICO Makes Artificial Intelligence Explainable. Available online: https:\/\/www.fico.com\/en\/newsroom\/fico-makes-artificial-intelligence-explainable-with-latest-release-of-its-analytics-workbench."},{"key":"ref_37","first-page":"17","article-title":"Developing Transparent Credit Risk Scorecards More Effectively: An Explainable Artificial Intelligence Approach","volume":"2018","author":"Fahner","year":"2018","journal-title":"Data Anal."},{"key":"ref_38","unstructured":"FICO (2019, February 05). FICO Score Research: Explainable AI for Credit Scoring. Available online: https:\/\/www.fico.com\/blogs\/analytics-optimization\/fico-score-research-explainable-ai-and-machine-learning-for-credit-scoring\/."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1109\/TVCG.2017.2744718","article-title":"ActiVis: Visual exploration of industry-scale deep neural network models","volume":"24","author":"Kahng","year":"2018","journal-title":"IEEE Trans. Vis. Comput. Gr."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1109\/TVCG.2018.2864499","article-title":"Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models","volume":"25","author":"Zhang","year":"2019","journal-title":"IEEE Trans. Vis. Comput. Gr."},{"key":"ref_41","unstructured":"Doshi-Velez, F., and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv."},{"key":"ref_42","unstructured":"FAT\/ML (2019, January 22). Fairness, Accountability, and Transparency in Machine Learning. Available online: http:\/\/www.fatml.org\/."},{"key":"ref_43","unstructured":"Kim, B., Malioutov, D.M., and Varshney, K.R. (2016). Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). arXiv."},{"key":"ref_44","unstructured":"Kim, B., Malioutov, D.M., Varshney, K.R., and Weller, A. (2017). 
Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017). arXiv."},{"key":"ref_45","unstructured":"Kim, B., Varshney, K.R., and Weller, A. (2018). Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018). arXiv."},{"key":"ref_46","unstructured":"Wilson, A.G., Kim, B., and Herlands, W. (2016). Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems. arXiv."},{"key":"ref_47","unstructured":"Caruana, R., Herlands, W., Simard, P., Wilson, A.G., and Yosinski, J. (2017). Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning. arXiv."},{"key":"ref_48","unstructured":"Pereira-Fari\u00f1a, M., and Reed, C. (2017). Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017), Association for Computational Linguistics (ACL)."},{"key":"ref_49","unstructured":"IJCNN (2019, January 22). IJCNN 2017 Explainability of Learning Machines. Available online: http:\/\/gesture.chalearn.org\/ijcnn17_explainability_of_learning_machines."},{"key":"ref_50","unstructured":"IJCAI (2019, July 12). IJCAI 2017\u2014Workshop on Explainable Artificial Intelligence (XAI). Available online: http:\/\/home.earthlink.net\/~dwaha\/research\/meetings\/ijcai17-xai\/."},{"key":"ref_51","unstructured":"IJCAI (2019, July 12). IJCAI 2018\u2014Workshop on Explainable Artificial Intelligence (XAI). Available online: http:\/\/home.earthlink.net\/~dwaha\/research\/meetings\/faim18-xai\/."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Stoyanov, D., Taylor, Z., Kia, S.M., Oguz, I., Reyes, M., Martel, A., Maier-Hein, L., Marquand, A.F., Duchesnay, E., and L\u00f6fstedt, T. (2018). Understanding and Interpreting Machine Learning in Medical Image Computing Applications. First International Workshops, MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16\u201320, 2018, Springer.","DOI":"10.1007\/978-3-030-02628-8"},{"key":"ref_53","unstructured":"IPMU (2019, July 12). IPMU 2018\u2014Advances on Explainable Artificial Intelligence. Available online: http:\/\/ipmu2018.uca.es\/submission\/cfspecial-sessions\/special-sessions\/#explainable."},{"key":"ref_54","unstructured":"Holzinger, A., Kieseberg, P., Tjoa, A.M., and Weippl, E. (2018). Machine Learning and Knowledge Extraction: Second IFIP TC 5, TC 8\/WG 8.4, 8.9, TC 12\/WG 12.9 International Cross-Domain Conference, CD-MAKE 2018, Hamburg, Germany, August 27\u201330, 2018, Proceedings, Springer."},{"key":"ref_55","unstructured":"CD-MAKE (2019, July 12). CD-MAKE 2019 Workshop on explainable Artificial Intelligence. Available online: https:\/\/cd-make.net\/special-sessions\/make-explainable-ai\/."},{"key":"ref_56","unstructured":"Lim, B., Smith, A., and Stumpf, S. (2018). ExSS 2018: Workshop on Explainable Smart Systems. CEUR Workshop Proceedings, City, University of London Institutional Repository. Available online: http:\/\/openaccess.city.ac.uk\/20037\/."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Lim, B., Sarkar, A., Smith-Renner, A., and Stumpf, S. (2019, January 16\u201320). ExSS: Explainable smart systems 2019. Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, Marina del Ray, CA, USA.","DOI":"10.1145\/3308557.3313112"},{"key":"ref_58","unstructured":"ICAPS (2019, July 12). ICAPS 2018\u2014Workshop on Explainable AI Planning (XAIP). 
Available online: http:\/\/icaps18.icaps-conference.org\/xaip\/."},{"key":"ref_59","unstructured":"ICAPS (2019, July 12). ICAPS 2019\u2014Workshop on Explainable AI Planning (XAIP). Available online: https:\/\/kcl-planning.github.io\/XAIP-Workshops\/ICAPS_2019."},{"key":"ref_60","unstructured":"Zhang, Q., Fan, L., and Zhou, B. (2019, January 22). Network Interpretability for Deep Learning. Available online: http:\/\/networkinterpretability.org\/."},{"key":"ref_61","unstructured":"CVPR (2019, July 12). CVPR 19\u2014Workshop on Explainable AI. Available online: https:\/\/explainai.net\/."},{"key":"ref_62","unstructured":"FICO (2019, January 18). Explainable Machine Learning Challenge. Available online: https:\/\/community.fico.com\/s\/explainable-machine-learning-challenge."},{"key":"ref_63","unstructured":"Institute for Ethical AI & Machine Learning (2019, February 05). The Responsible Machine Learning Principles. Available online: https:\/\/ethical.institute\/principles.html#commitment-3."},{"key":"ref_64","unstructured":"Lipton, Z.C. (2016). The mythos of model interpretability. arXiv."},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Silva, W., Fernandes, K., Cardoso, M.J., and Cardoso, J.S. (2018). Towards Complementary Explanations Using Deep Neural Networks. Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Springer.","DOI":"10.1007\/978-3-030-02628-8_15"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., and Kagal, L. (2018). Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning. arXiv.","DOI":"10.1109\/DSAA.2018.00018"},{"key":"ref_67","unstructured":"Doran, D., Schulz, S., and Besold, T.R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv."},{"key":"ref_68","unstructured":"UK Government House of Lords (2019, January 18). AI in the UK: Ready, Willing and Able?. Available online: https:\/\/publications.parliament.uk\/pa\/ld201719\/ldselect\/ldai\/100\/10007.htm."},{"key":"ref_69","unstructured":"Kirsch, A. (2017, January 16\u201317). Explain to whom? Putting the user in the center of explainable AI. Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 Co-Located with 16th International Conference of the Italian Association for Artificial Intelligence (AI* IA 2017), Bari, Italy."},{"key":"ref_70","unstructured":"Molnar, C. (2019, January 22). Interpretable Machine Learning. Available online: https:\/\/christophm.github.io\/interpretable-ml-book\/."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Temizer, S., Kochenderfer, M., Kaelbling, L., Lozano-P\u00e9rez, T., and Kuchar, J. (2010, January 2\u20135). Collision avoidance for unmanned aircraft using Markov decision processes. Proceedings of the AIAA Guidance, Navigation, and Control Conference, Toronto, ON, Canada.","DOI":"10.2514\/6.2010-8040"},{"key":"ref_72","unstructured":"Wexler, R. (New York Times, 2017). When a computer program keeps you in jail: How computers are harming criminal justice, New York Times."},{"key":"ref_73","unstructured":"McGough, M. (2019, January 18). How Bad Is Sacramento\u2019s Air, Exactly? Google Results Appear at Odds with Reality, Some Say. 
Available online: https:\/\/www.sacbee.com\/news\/state\/california\/fires\/article216227775.html."},{"key":"ref_74","doi-asserted-by":"crossref","first-page":"246","DOI":"10.1089\/big.2016.0051","article-title":"On the safety of machine learning: Cyber-physical systems, decision sciences, and data products","volume":"5","author":"Varshney","year":"2017","journal-title":"Big Data"},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"1","DOI":"10.2143\/AST.40.1.2049222","article-title":"The devil is in the tails: Actuarial mathematics and the subprime mortgage crisis","volume":"40","author":"Donnelly","year":"2010","journal-title":"ASTIN Bull. J. IAA"},{"key":"ref_76","unstructured":"Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2019, January 18). Machine Bias. Available online: https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing."},{"key":"ref_77","unstructured":"Tan, S., Caruana, R., Hooker, G., and Lou, Y. (2017). Detecting bias in black-box models using transparent model distillation. arXiv."},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O\u2019Brien, D., Schieber, S., Waldo, J., Weinberger, D., and Wood, A. (2017). Accountability of AI under the law: The role of explanation. arXiv.","DOI":"10.2139\/ssrn.3064761"},{"key":"ref_79","unstructured":"Honegger, M. (2018). Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions. arXiv."},{"key":"ref_80","unstructured":"O\u2019Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books."},{"key":"ref_81","unstructured":"Keil, F., Rozenblit, L., and Mills, C. (2004). What lies beneath? Understanding the limits of understanding. Thinking and Seeing: Visual Metacognition in Adults and Children, MIT Press."},{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Holzinger, A., Langs, G., Denk, H., Zatloukal, K., and M\u00fcller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, Wiley.","DOI":"10.1002\/widm.1312"},{"key":"ref_83","unstructured":"Mueller, H., and Holzinger, A. (2019). Kandinsky Patterns. arXiv."},{"key":"ref_84","unstructured":"European Commission (2019, January 18). General Data Protection Regulation. Available online: https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/PDF\/?uri=CELEX:32016R0679."},{"key":"ref_85","unstructured":"Weller, A. (2017). Challenges for transparency. arXiv."},{"key":"ref_86","unstructured":"Holzinger, A., Biemann, C., Pattichis, C.S., and Kell, D.B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv."},{"key":"ref_87","first-page":"841","article-title":"Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR","volume":"31","author":"Wachter","year":"2017","journal-title":"Harv. J. Law Technol."},{"key":"ref_88","unstructured":"Goodman, B., and Flaxman, S. (2016). EU regulations on algorithmic decision-making and a \u201cright to explanation\u201d. 
arXiv."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"76","DOI":"10.1093\/idpl\/ipx005","article-title":"Why a right to explanation of automated decision-making does not exist in the general data protection regulation","volume":"7","author":"Wachter","year":"2017","journal-title":"Int. Data Priv. Law"},{"key":"ref_90","unstructured":"Hardt, M., Price, E., and Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_91","unstructured":"R\u00fcping, S. (2006). Learning Interpretable Models. [Ph.D. Thesis, University of Dortmund]."},{"key":"ref_92","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2594473.2594475","article-title":"Comprehensible classification models: a position paper","volume":"15","author":"Freitas","year":"2014","journal-title":"ACM SIGKDD Explor. Newslett."},{"key":"ref_93","doi-asserted-by":"crossref","unstructured":"Case, N. (2018). How To Become A Centaur. J. Design Sci.","DOI":"10.21428\/61b2215c"},{"key":"ref_94","unstructured":"Varshney, K.R., Khanduri, P., Sharma, P., Zhang, S., and Varshney, P.K. (2018). Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory. arXiv."},{"key":"ref_95","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","article-title":"Explanation in Artificial Intelligence: Insights from the social sciences","volume":"267","author":"Miller","year":"2018","journal-title":"Artif. Intell."},{"key":"ref_96","unstructured":"Kim, B., Khanna, R., and Koyejo, O.O. (2016). Examples are not enough, learn to criticize! Criticism for interpretability. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_97","doi-asserted-by":"crossref","first-page":"243","DOI":"10.2307\/1416950","article-title":"An experimental study of apparent behavior","volume":"57","author":"Heider","year":"1944","journal-title":"Am. J. Psychol."},{"key":"ref_98","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016, January 13\u201317). \u201cWhy Should I Trust You?\u201d: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_99","unstructured":"Kim, B., and Doshi-Velez, F. (2018, January 18). Introduction to Interpretable Machine Learning. Proceedings of the CVPR 2018 Tutorial on Interpretable Machine Learning for Computer Vision, Salt Lake City, UT, USA."},{"key":"ref_100","unstructured":"Tukey, J.W. (1977). Exploratory Data Analysis, Pearson."},{"key":"ref_101","doi-asserted-by":"crossref","unstructured":"Jolliffe, I. (2011). Principal component analysis. International Encyclopedia of Statistical Science, Springer.","DOI":"10.1007\/978-3-642-04898-2_455"},{"key":"ref_102","first-page":"2579","article-title":"Visualizing data using t-SNE","volume":"9","author":"Maaten","year":"2008","journal-title":"J. Mach. Learn. Res."},{"key":"ref_103","first-page":"100","article-title":"Algorithm AS 136: A k-means clustering algorithm","volume":"28","author":"Hartigan","year":"1979","journal-title":"J. R. Stat. Soc. Ser. C (Appl. Stat.)"},{"key":"ref_104","unstructured":"Google People + AI Research (PAIR) (2019, July 12). Facets\u2014Visualization for ML Datasets. 
Available online: https:\/\/pair-code.github.io\/facets\/."},{"key":"ref_105","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1177\/0963721409359277","article-title":"The magical mystery four: How is working memory capacity limited, and why?","volume":"19","author":"Cowan","year":"2010","journal-title":"Curr. Dir. Psychol. Sci."},{"key":"ref_106","doi-asserted-by":"crossref","unstructured":"Lou, Y., Caruana, R., and Gehrke, J. (2012, January 12\u201316). Intelligible models for classification and regression. Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China.","DOI":"10.1145\/2339530.2339556"},{"key":"ref_107","unstructured":"Kim, T.W. (2018). Explainable Artificial Intelligence (XAI), the goodness criteria and the grasp-ability test. arXiv."},{"key":"ref_108","doi-asserted-by":"crossref","first-page":"199","DOI":"10.1214\/ss\/1009213726","article-title":"Statistical modeling: The two cultures (with comments and a rejoinder by the author)","volume":"16","author":"Breiman","year":"2001","journal-title":"Stat. Sci."},{"key":"ref_109","doi-asserted-by":"crossref","unstructured":"Robnik-\u0160ikonja, M., and Bohanec, M. (2018). Perturbation-Based Explanations of Prediction Models. Human and Machine Learning, Springer.","DOI":"10.1007\/978-3-319-90403-0_9"},{"key":"ref_110","doi-asserted-by":"crossref","first-page":"247","DOI":"10.1017\/S1358246100005130","article-title":"Contrastive explanation","volume":"27","author":"Lipton","year":"1990","journal-title":"R. Inst. Philos. Suppl."},{"key":"ref_111","doi-asserted-by":"crossref","unstructured":"Kahneman, D., and Tversky, A. (1981). The Simulation Heuristic, Department of Psychology, Stanford University. Technical Report.","DOI":"10.1017\/CBO9780511809477.015"},{"key":"ref_112","doi-asserted-by":"crossref","first-page":"175","DOI":"10.1037\/1089-2680.2.2.175","article-title":"Confirmation bias: A ubiquitous phenomenon in many guises","volume":"2","author":"Nickerson","year":"1998","journal-title":"Rev. Gen. Psychol."},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Lakkaraju, H., Bach, S.H., and Leskovec, J. (2016, January 13\u201317). Interpretable decision sets: A joint framework for description and prediction. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939874"},{"key":"ref_114","doi-asserted-by":"crossref","first-page":"118","DOI":"10.1016\/j.asoc.2015.09.038","article-title":"A multi-objective genetic optimization of interpretability-oriented fuzzy rule-based classifiers","volume":"38","year":"2016","journal-title":"Appl. Soft Comput."},{"key":"ref_115","doi-asserted-by":"crossref","unstructured":"Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., and Rudin, C. (2017, January 13\u201317). Learning certifiably optimal rule lists. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada.","DOI":"10.1145\/3097983.3098047"},{"key":"ref_116","unstructured":"Dash, S., G\u00fcnl\u00fck, O., and Wei, D. (2018). Boolean Decision Rules via Column Generation. arXiv."},{"key":"ref_117","doi-asserted-by":"crossref","unstructured":"Yang, H., Rudin, C., and Seltzer, M. (2017, January 6\u201311). Scalable Bayesian rule lists. 
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia.","DOI":"10.32614\/CRAN.package.sbrl"},{"key":"ref_118","doi-asserted-by":"crossref","first-page":"449","DOI":"10.1287\/inte.2018.0957","article-title":"Optimized Scoring Systems: Toward Trust in Machine Learning for Healthcare and Criminal Justice","volume":"48","author":"Rudin","year":"2018","journal-title":"Interfaces"},{"key":"ref_119","first-page":"2357","article-title":"A bayesian framework for learning rule sets for interpretable classification","volume":"18","author":"Wang","year":"2017","journal-title":"J. Mach. Learn. Res."},{"key":"ref_120","unstructured":"Kim, B., Rudin, C., and Shah, J.A. (2014). The bayesian case model: A generative approach for case-based reasoning and prototype classification. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_121","unstructured":"Ross, A., Lage, I., and Doshi-Velez, F. (2017, January 4\u20139). The neural lasso: Local linear sparsity for interpretable explanations. Proceedings of the Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments, 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_122","unstructured":"Lage, I., Ross, A.S., Kim, B., Gershman, S.J., and Doshi-Velez, F. (2018). Human-in-the-Loop Interpretability Prior. arXiv."},{"key":"ref_123","unstructured":"Lee, M., He, X., Yih, W.t., Gao, J., Deng, L., and Smolensky, P. (2015). Reasoning in vector space: An exploratory study of question answering. arXiv."},{"key":"ref_124","doi-asserted-by":"crossref","unstructured":"Palangi, H., Smolensky, P., He, X., and Deng, L. (2018, January 2\u20137). Question-answering with grammatically-interpretable representations. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.12004"},{"key":"ref_125","unstructured":"Kindermans, P.J., Sch\u00fctt, K.T., Alber, M., M\u00fcller, K.R., Erhan, D., Kim, B., and D\u00e4hne, S. (2017). Learning how to explain neural networks: PatternNet and PatternAttribution. arXiv."},{"key":"ref_126","unstructured":"Springenberg, J.T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv."},{"key":"ref_127","unstructured":"Sundararajan, M., Taly, A., and Yan, Q. (2017). Axiomatic attribution for deep networks. arXiv."},{"key":"ref_128","unstructured":"Smilkov, D., Thorat, N., Kim, B., Vi\u00e9gas, F., and Wattenberg, M. (2017). Smoothgrad: removing noise by adding noise. arXiv."},{"key":"ref_129","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_130","unstructured":"Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., and Sayres, R. (2018, January 10\u201315). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_131","unstructured":"Polino, A., Pascanu, R., and Alistarh, D. (2018). Model compression via distillation and quantization. 
arXiv."},{"key":"ref_132","doi-asserted-by":"crossref","unstructured":"Wu, M., Hughes, M.C., Parbhoo, S., Zazzi, M., Roth, V., and Doshi-Velez, F. (2018, January 2\u20137). Beyond sparsity: Tree regularization of deep models for interpretability. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11501"},{"key":"ref_133","unstructured":"Xu, K., Park, D.H., Yi, C., and Sutton, C. (2018). Interpreting Deep Classifier by Visual Distillation of Dark Knowledge. arXiv."},{"key":"ref_134","unstructured":"Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv."},{"key":"ref_135","unstructured":"Murdoch, W.J., and Szlam, A. (2017). Automatic rule extraction from long short term memory networks. arXiv."},{"key":"ref_136","unstructured":"Frosst, N., and Hinton, G. (2017). Distilling a neural network into a soft decision tree. arXiv."},{"key":"ref_137","unstructured":"Bastani, O., Kim, C., and Bastani, H. (2017). Interpreting blackbox models via model extraction. arXiv."},{"key":"ref_138","doi-asserted-by":"crossref","unstructured":"Pope, P.E., Kolouri, S., Rostami, M., Martin, C.E., and Hoffmann, H. (2019, January 16\u201320). Explainability Methods for Graph Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01103"},{"key":"ref_139","doi-asserted-by":"crossref","unstructured":"Wagner, J., Kohler, J.M., Gindele, T., Hetzel, L., Wiedemer, J.T., and Behnke, S. (2019, January 16\u201320). Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00931"},{"key":"ref_140","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Yang, Y., Ma, H., and Wu, Y.N. (2019, January 16\u201320). Interpreting CNNs via Decision Trees. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00642"},{"key":"ref_141","doi-asserted-by":"crossref","unstructured":"Kanehira, A., and Harada, T. (2019, January 16\u201320). Learning to Explain With Complemental Examples. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00880"},{"key":"ref_142","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203451","article-title":"Greedy function approximation: A gradient boosting machine","volume":"29","author":"Friedman","year":"2001","journal-title":"Ann. Stat."},{"key":"ref_143","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1080\/10618600.2014.907095","article-title":"Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation","volume":"24","author":"Goldstein","year":"2015","journal-title":"J. Comput. Gr. Stat."},{"key":"ref_144","unstructured":"Apley, D.W. (2016). Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models. arXiv."},{"key":"ref_145","doi-asserted-by":"crossref","first-page":"916","DOI":"10.1214\/07-AOAS148","article-title":"Predictive learning via rule ensembles","volume":"2","author":"Friedman","year":"2008","journal-title":"Ann. Appl. Stat."},{"key":"ref_146","unstructured":"Fisher, A., Rudin, C., and Dominici, F. (2018). 
Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the \u201cRashomon\u201d Perspective. arXiv."},{"key":"ref_147","unstructured":"Lundberg, S.M., and Lee, S.I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_148","doi-asserted-by":"crossref","unstructured":"Staniak, M., and Biecek, P. (2018). Explanations of model predictions with live and breakDown packages. arXiv.","DOI":"10.32614\/RJ-2018-072"},{"key":"ref_149","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2018, January 2\u20137). Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_150","unstructured":"Koh, P.W., and Liang, P. (2017). Understanding black-box predictions via influence functions. arXiv."},{"key":"ref_151","unstructured":"Bibal, A., and Fr\u00e9nay, B. (2016, January 27\u201329). Interpretability of machine learning models and representations: An introduction. Proceedings of the 24th European Symposium on Artificial Neural Networks ESANN, Bruges, Belgium."},{"key":"ref_152","doi-asserted-by":"crossref","unstructured":"Doshi-Velez, F., and Kim, B. (2018). Considerations for Evaluation and Generalization in Interpretable Machine Learning. Explainable and Interpretable Models in Computer Vision and Machine Learning, Springer.","DOI":"10.1007\/978-3-319-98131-4_1"},{"key":"ref_153","unstructured":"Bonnans, J.F., and Shapiro, A. (2013). Perturbation Analysis of Optimization Problems, Springer Science & Business Media."}],"container-title":["Electronics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2079-9292\/8\/8\/832\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:09:58Z","timestamp":1760188198000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2079-9292\/8\/8\/832"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,26]]},"references-count":153,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2019,8]]}},"alternative-id":["electronics8080832"],"URL":"https:\/\/doi.org\/10.3390\/electronics8080832","relation":{},"ISSN":["2079-9292"],"issn-type":[{"value":"2079-9292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,7,26]]}}}
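
The record above is a single Crossref "work" message for DOI 10.3390/electronics8080832. As a minimal sketch of how such a record can be retrieved and the most useful fields read, the snippet below assumes the public Crossref REST API works endpoint (https://api.crossref.org/works/{doi}) and the third-party Python requests library; every field name it touches (status, message, title, container-title, is-referenced-by-count, reference-count, author) appears in the record itself.

```python
# Minimal sketch: fetch and inspect a Crossref "work" record like the one above.
# Assumes the public Crossref REST API and the third-party `requests` library;
# all field names are taken from the record shown in this document.
import requests

DOI = "10.3390/electronics8080832"  # DOI from the record above

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
envelope = resp.json()

# Crossref wraps the bibliographic payload in a status envelope.
assert envelope["status"] == "ok" and envelope["message-type"] == "work"
work = envelope["message"]

# Titles and container titles arrive as lists, hence the [0] indexing.
print(work["title"][0])
print(work["DOI"], "-", work["container-title"][0])
print("cited by:", work.get("is-referenced-by-count", 0))
print("references deposited:", work.get("reference-count", 0))

# Authors carry given/family names plus optional ORCID and affiliations,
# so .get() defaults guard against missing keys.
for a in work.get("author", []):
    name = f'{a.get("given", "")} {a.get("family", "")}'.strip()
    print(name, a.get("ORCID", ""))
```

Run against this DOI, the sketch should print the survey's title, journal, citation count, and the three authors; the same envelope/message shape applies to any work the API serves.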