{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T17:19:37Z","timestamp":1778087977339,"version":"3.51.4"},"reference-count":240,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2021,8,4]],"date-time":"2021-08-04T00:00:00Z","timestamp":1628035200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MAKE"],"abstract":"<jats:p>Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension\u2014the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords \u201cexplainable artificial intelligence\u201d; \u201cexplainable machine learning\u201d; and \u201cinterpretable machine learning\u201d. A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them to select the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet the requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). 
The task of identifying the most appropriate explanation can be daunting, thus the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and the new regulations.<\/jats:p>","DOI":"10.3390\/make3030032","type":"journal-article","created":{"date-parts":[[2021,8,4]],"date-time":"2021-08-04T21:45:06Z","timestamp":1628113506000},"page":"615-661","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":135,"title":["Classification of Explainable Artificial Intelligence Methods through Their Output Formats"],"prefix":"10.3390","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4401-5664","authenticated-orcid":false,"given":"Giulia","family":"Vilone","sequence":"first","affiliation":[{"name":"Applied Intelligence Research Centre, Technological University Dublin, D08 X622 Dublin, Ireland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2718-5426","authenticated-orcid":false,"given":"Luca","family":"Longo","sequence":"additional","affiliation":[{"name":"Applied Intelligence Research Centre, Technological University Dublin, D08 X622 Dublin, Ireland"}]}],"member":"1968","published-online":{"date-parts":[[2021,8,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"Access"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Kim, J., Rohrbach, A., Darrell, T., Canny, J., and Akata, Z. (2018). 
Textual Explanations for Self-Driving Vehicles, ECCV.","DOI":"10.1007\/978-3-030-01216-8_35"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1096","DOI":"10.1038\/s41467-019-08987-4","article-title":"Unmasking Clever Hans predictors and assessing what machines really learn","volume":"10","author":"Lapuschkin","year":"2019","journal-title":"Nat. Commun."},{"key":"ref_4","unstructured":"Fox, M., Long, D., and Magazzeni, D. (2017). Explainable planning. IJCAI Workshop on Explainable Artificial Intelligence (XAI), International Joint Conferences on Artificial Intelligence, Inc."},{"key":"ref_5","first-page":"93:1","article-title":"A survey of methods for explaining black box models","volume":"51","author":"Guidotti","year":"2018","journal-title":"Comput. Surv. (CSUR)"},{"key":"ref_6","unstructured":"de Graaf, M., and Malle, B.F. (2017). How People Explain Action (and Autonomous Intelligent Systems Should Too). Fall Symposium on Artificial Intelligence for Human-Robot Interaction, AAAI Press."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Harbers, M., van den Bosch, K., and Meyer, J.J.C. (2009). A study into preferred explanations of virtual agent behavior. International Workshop on Intelligent Virtual Agents, Springer.","DOI":"10.1007\/978-3-642-04380-2_17"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1016\/j.inffus.2021.05.009","article-title":"Notions of explainability and evaluation approaches for explainable artificial intelligence","volume":"76","author":"Vilone","year":"2021","journal-title":"Inf. Fusion"},{"key":"ref_9","unstructured":"Wick, M.R., and Thompson, W.B. (1989, January 20\u201325). Reconstructive Explanation: Explanation as Complex Problem Solving. 
Proceedings of the 11th International Joint Conference on Artificial Intelligence, Detroit, MI, USA."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"974","DOI":"10.2991\/ijcis.d.200715.003","article-title":"Teaching Explainable Artificial Intelligence to High School Students","volume":"13","author":"Alonso","year":"2020","journal-title":"Int. J. Comput. Intell. Syst."},{"key":"ref_11","first-page":"143","article-title":"Working in contexts for which transparency is important: A recordkeeping view of Explainable Artificial Intelligence (XAI)","volume":"30","author":"Bunn","year":"2020","journal-title":"Rec. Manag. J."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Gade, K., Geyik, S.C., Kenthapadi, K., Mithal, V., and Taly, A. (2020, January 27\u201330). Explainable AI in industry: Practical challenges and lessons learned: Implications tutorial. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.","DOI":"10.1145\/3351095.3375664"},{"key":"ref_13","first-page":"103","article-title":"Report on the 2019 International Joint Conferences on Artificial Intelligence Explainable Artificial Intelligence Workshop","volume":"41","author":"Miller","year":"2020","journal-title":"AI Mag."},{"key":"ref_14","unstructured":"Dam, H.K., Tran, T., and Ghose, A. (June, January 27). Explainable software analytics. Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, Gothenburg, Sweden."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"36","DOI":"10.1145\/3233231","article-title":"The mythos of model interpretability","volume":"61","author":"Lipton","year":"2018","journal-title":"Commun. ACM"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Do\u0161ilovi\u0107, F.K., Br\u010di\u0107, M., and Hlupi\u0107, N. (2018, January 21\u201325). Explainable artificial intelligence: A survey. 
Proceedings of the 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia.","DOI":"10.23919\/MIPRO.2018.8400040"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Lou, Y., Caruana, R., and Gehrke, J. (2012, January 12\u201316). Intelligible models for classification and regression. Proceedings of the 18th SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China.","DOI":"10.1145\/2339530.2339556"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Lou, Y., Caruana, R., Gehrke, J., and Hooker, G. (2013, January 11\u201314). Accurate intelligible models with pairwise interactions. Proceedings of the 19th SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA.","DOI":"10.1145\/2487575.2487579"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.dsp.2017.10.011","article-title":"Methods for interpreting and understanding deep neural networks","volume":"73","author":"Montavon","year":"2017","journal-title":"Digit. Signal Process."},{"key":"ref_20","first-page":"1","article-title":"The Pragmatic Turn in Explainable Artificial Intelligence (XAI)","volume":"29","year":"2019","journal-title":"Minds Mach."},{"key":"ref_21","unstructured":"Vilone, G., and Longo, L. (2020). Explainable artificial intelligence: A systematic review. arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016, January 13\u201317). Why should I trust you?: Explaining the predictions of any classifier. 
Proceedings of the 22nd SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"667","DOI":"10.1109\/TVCG.2017.2744158","article-title":"Lstmvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks","volume":"24","author":"Strobelt","year":"2018","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TVCG.2017.2744878","article-title":"Visualizing dataflow graphs of deep learning models in TensorFlow","volume":"24","author":"Wongsuphasawat","year":"2018","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Hendricks, L.A., Hu, R., Darrell, T., and Akata, Z. (2018). Grounding visual explanations. Computer Vision\u2014ECCV\u201415th European Conference, Proceedings, Part II, Springer.","DOI":"10.1007\/978-3-030-01216-8_17"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Fung, G., Sandilya, S., and Rao, R.B. (2005, January 21\u201324). Rule extraction from linear support vector machines. Proceedings of the 11th SIGKDD International Conference on Knowledge Discovery in Data Mining, Chicago, IL, USA.","DOI":"10.1145\/1081870.1081878"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"265","DOI":"10.1515\/jaiscr-2017-0019","article-title":"Characterization of symbolic rules embedded in deep DIMLP networks: A challenge to transparency of deep learning","volume":"7","author":"Bologna","year":"2017","journal-title":"J. Artif. Intell. Soft Comput. Res."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2018, January 2\u20137). Anchors: High-precision model-agnostic explanations. 
Proceedings of the 32nd Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"426","DOI":"10.1109\/91.928739","article-title":"Designing fuzzy inference systems from data: An interpretability-oriented review","volume":"9","author":"Guillaume","year":"2001","journal-title":"Trans. Fuzzy Syst."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Palade, V., Neagu, D.C., and Patton, R.J. (2001, January 1\u20133). Interpretation of trained neural networks by rule extraction. Proceedings of the International Conference on Computational Intelligence, Dortmund, Germany.","DOI":"10.1007\/3-540-45493-4_20"},{"key":"ref_31","unstructured":"Rizzo, L., and Longo, L. (2018, January 20\u201323). Inferential models of mental workload with defeasible argumentation and non-monotonic fuzzy reasoning: A comparative study. Proceedings of the 2nd Workshop on Advances in Argumentation in Artificial Intelligence, Trento, Italy."},{"key":"ref_32","unstructured":"Rizzo, L., and Longo, L. (2018, January 6\u20137). A Qualitative Investigation of the Explainability of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning. Proceedings of the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science Trinity College Dublin, Dublin, Ireland."},{"key":"ref_33","unstructured":"Alain, G., and Bengio, Y. (2017, January 23\u201326). Understanding intermediate layers using linear classifier probes. Proceedings of the 5th International Conference on Learning Representations, Toulon, France."},{"key":"ref_34","unstructured":"Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., and Sayres, R. (2018, January 10\u201315). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). 
Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_35","first-page":"2057","article-title":"Show, attend and tell: Neural image caption generation with visual attention","volume":"2048","author":"Xu","year":"2015","journal-title":"Int. Conf. Mach. Learn."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Tolomei, G., Silvestri, F., Haines, A., and Lalmas, M. (2017, January 13\u201317). Interpretable predictions of tree-based ensembles via actionable feature tweaking. Proceedings of the 23rd SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada.","DOI":"10.1145\/3097983.3098039"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Tan, S., Caruana, R., Hooker, G., and Lou, Y. (2018, January 2\u20133). Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. Proceedings of the Conference on AI, Ethics, and Society, New Orleans, LA, USA.","DOI":"10.1145\/3278721.3278725"},{"key":"ref_38","unstructured":"Lundberg, S.M., and Lee, S.I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, Inc."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"2522","DOI":"10.1038\/s42256-019-0138-9","article-title":"From local explanations to global understanding with explainable AI for trees","volume":"2","author":"Lundberg","year":"2020","journal-title":"Nat. Mach. Intell."},{"key":"ref_40","unstructured":"Janzing, D., Minorics, L., and Bl\u00f6baum, P. (2020, January 3\u20135). Feature relevance quantification in explainable AI: A causal problem. 
Proceedings of the International Conference on Artificial Intelligence and Statistics, Palermo, Italy."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"114104","DOI":"10.1016\/j.eswa.2020.114104","article-title":"Shapley-Lorenz eXplainable artificial intelligence","volume":"167","author":"Giudici","year":"2020","journal-title":"Expert Syst. Appl."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"589","DOI":"10.1109\/TKDE.2007.190734","article-title":"Explaining classifications for individual instances","volume":"20","author":"Kononenko","year":"2008","journal-title":"Trans. Knowl. Data Eng."},{"key":"ref_43","first-page":"13","article-title":"Explanation of Prediction Models with Explain Prediction","volume":"42","year":"2018","journal-title":"Informatica"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Cortez, P., and Embrechts, M.J. (2011, January 11\u201315). Opening black box data mining models using sensitivity analysis. Proceedings of the Symposium on Computational Intelligence and Data Mining (CIDM), Paris, France.","DOI":"10.1109\/CIDM.2011.5949423"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.ins.2012.10.039","article-title":"Using sensitivity analysis and visualization techniques to open black box data mining models","volume":"225","author":"Cortez","year":"2013","journal-title":"Inf. Sci."},{"key":"ref_46","first-page":"1","article-title":"An Efficient Explanation of Individual Classifications Using Game Theory","volume":"11","author":"Strumbelj","year":"2010","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_47","first-page":"41","article-title":"Explanation and reliability of individual predictions","volume":"37","author":"Kononenko","year":"2013","journal-title":"Informatica"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"886","DOI":"10.1016\/j.datak.2009.01.004","article-title":"Explaining instance classifications with interactions of subsets of feature values","volume":"68","author":"Kononenko","year":"2009","journal-title":"Data Knowl. Eng."},{"key":"ref_49","unstructured":"\u0160trumbelj, E., and Kononenko, I. (2008, January 1\u20135). Towards a model independent method for explaining classification for individual instances. Proceedings of the International Conference on Data Warehousing and Knowledge Discovery, Turin, Italy."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"305","DOI":"10.1007\/s10115-009-0244-9","article-title":"Explanation and reliability of prediction models: The case of breast cancer recurrence","volume":"24","author":"Kononenko","year":"2010","journal-title":"Knowl. Inf. Syst."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Datta, A., Sen, S., and Zick, Y. (2016, January 23\u201325). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. Proceedings of the Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2016.42"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1007\/s10115-017-1116-3","article-title":"Auditing black-box models for indirect influence","volume":"54","author":"Adler","year":"2018","journal-title":"Knowl. Inf. Syst."},{"key":"ref_53","unstructured":"Koh, P.W., and Liang, P. (2017, January 6\u201311). Understanding black-box predictions via influence functions. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_54","unstructured":"Sliwinski, J., Strobel, M., and Zick, Y. (2017, January 19\u201325). 
A Characterization of Monotone Influence Measures for Data Classification. Proceedings of the Workshop on Explainable AI (XAI); International Joint Conferences on Artificial Intelligence (IJCAI), Melbourne, Australia."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"1503","DOI":"10.1007\/s10618-014-0368-8","article-title":"A peek into the black box: Exploring classifiers by randomization","volume":"28","author":"Henelius","year":"2014","journal-title":"Data Min. Knowl. Discov."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"647","DOI":"10.1007\/s10115-013-0679-x","article-title":"Explaining prediction models and individual predictions with feature contributions","volume":"41","author":"Kononenko","year":"2014","journal-title":"Knowl. Inf. Syst."},{"key":"ref_57","unstructured":"Raghu, M., Gilmer, J., Yosinski, J., and Sohl-Dickstein, J. (2017, January 4\u20139). Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"237","DOI":"10.1016\/S0893-6080(01)00127-7","article-title":"A methodology to explain neural network classification","volume":"15","year":"2002","journal-title":"Neural Netw."},{"key":"ref_59","unstructured":"Fr\u00e4mling, K. (1996). Explaining results of neural networks by contextual importance and utility. Rule Extraction from Trained Artificial Neural Networks Workshop, Citeseer."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Hsieh, T.Y., Wang, S., Sun, Y., and Honavar, V. (2021, January 8\u201312). Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns to Attend to Important Variables as Well as Time Intervals. 
Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Jerusalem, Israel.","DOI":"10.1145\/3437963.3441815"},{"key":"ref_61","unstructured":"Clos, J., Wiratunga, N., and Massie, S. (2017, January 19\u201325). Towards Explainable Text Classification by Jointly Learning Lexicon and Modifier Terms. Proceedings of the Workshop on Explainable AI (XAI); International Joint Conferences on Artificial Intelligence (IJCAI), Melbourne, Australia."},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Petkovic, D., Alavi, A., Cai, D., and Wong, M. (2021, January 10\u201315). Random Forest Model and Sample Explainer for Non-experts in Machine Learning\u2014Two Case Studies. Pattern Recognition. Proceedings of the ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_5"},{"key":"ref_63","unstructured":"Barbella, D., Benzaid, S., Christensen, J.M., Jackson, B., Qin, X.V., and Musicant, D.R. (2009). Understanding Support Vector Machine Classifications via a Recommender System-Like Approach, CSREA Press. DMIN."},{"key":"ref_64","unstructured":"Caragea, D., Cook, D., and Honavar, V. (2003, January 19\u201322). Towards simple, easy-to-understand, yet accurate classifiers. Proceedings of the 3rd International Conference on Data Mining, San Francisco, CA, USA."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"647","DOI":"10.3389\/fnhum.2016.00647","article-title":"Gaussian process regression for predictive but interpretable machine learning models: An example of predicting mental workload across tasks","volume":"10","author":"Caywood","year":"2017","journal-title":"Front. Hum. Neurosci."},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Wang, J., Fujimaki, R., and Motohashi, Y. (2015, January 10\u201313). Trading interpretability for accuracy: Oblique treed sparse additive models. 
Proceedings of the 21st SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia.","DOI":"10.1145\/2783258.2783407"},{"key":"ref_67","first-page":"11","article-title":"Supersparse Linear Integer Models for Interpretable Classification","volume":"1050","author":"Ustun","year":"2014","journal-title":"Stat"},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Bride, H., Dong, J., Dong, J.S., and H\u00f3u, Z. (2018, January 12\u201316). Towards Dependable and Explainable Machine Learning Using Automated Reasoning. Proceedings of the International Conference on Formal Engineering Methods, Gold Coast, Australia.","DOI":"10.1007\/978-3-030-02450-5_25"},{"key":"ref_69","unstructured":"Johansson, U., Niklasson, L., and K\u00f6nig, R. (July, January 28). Accuracy vs. comprehensibility in data mining models. Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden."},{"key":"ref_70","unstructured":"Johansson, U., K\u00f6nig, R., and Niklasson, L. (2004, January 12\u201314). The Truth is In There-Rule Extraction from Opaque Models Using Genetic Programming. Proceedings of the FLAIRS Conference, Miami Beach, FL, USA."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1109\/MIS.2019.2957223","article-title":"Factual and counterfactual explanations for black box decision making","volume":"34","author":"Guidotti","year":"2019","journal-title":"IEEE Intell. Syst."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"103457","DOI":"10.1016\/j.artint.2021.103457","article-title":"GLocalX-From Local to Global Explanations of Black Box AI Models","volume":"294","author":"Setzu","year":"2021","journal-title":"Artif. Intell."},{"key":"ref_73","unstructured":"Bastani, O., Kim, C., and Bastani, H. (2017, January 14). Interpretability via model extraction. 
Proceedings of the Fairness, Accountability, and Transparency in Machine Learning Workshop, Halifax, NS, Canada."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Krishnan, S., and Wu, E. (2017, January 14\u201319). Palm: Machine learning explanations for iterative debugging. Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, Chicago, IL, USA.","DOI":"10.1145\/3077257.3077271"},{"key":"ref_75","doi-asserted-by":"crossref","unstructured":"Asano, K., and Chun, J. (2021, January 4\u20136). Post-hoc Explanation using a Mimic Rule for Numerical Data. Proceedings of the 13th International Conference on Agents and Artificial Intelligence\u2014Volume 2: ICAART, Setubal, Portugal.","DOI":"10.5220\/0010238907680774"},{"key":"ref_76","first-page":"376","article-title":"Rule extraction algorithm for deep neural networks: A review","volume":"14","author":"Hailesilassie","year":"2016","journal-title":"Int. J. Comput. Sci. Inf. Secur."},{"key":"ref_77","first-page":"4084850","article-title":"A comparison study on rule extraction from neural network ensembles, boosted shallow trees, and SVMs","volume":"2018","author":"Bologna","year":"2018","journal-title":"Appl. Comput. Intell. Soft Comput."},{"key":"ref_78","unstructured":"Setiono, R., and Liu, H. (1995, January 20\u201325). Understanding neural networks via rule extraction. Proceedings of the International Joint Conferences on Artificial Intelligence, Montr\u00e9al, QC, Canada."},{"key":"ref_79","doi-asserted-by":"crossref","first-page":"556","DOI":"10.1016\/j.procs.2017.01.172","article-title":"Classification Tree Extraction from Trained Artificial Neural Networks","volume":"104","author":"Bondarenko","year":"2017","journal-title":"Procedia Comput. Sci."},{"key":"ref_80","unstructured":"Thrun, S. (1995). Extracting rules from artificial neural networks with distributed representations. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_81","unstructured":"Bologna, G. 
(1998, January 3\u20136). Symbolic rule extraction from the DIMLP neural network. Proceedings of the International Workshop on Hybrid Neural Systems, Denver, CO, USA."},{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Bologna, G. (2018, January 27\u201330). A Rule Extraction Study Based on a Convolutional Neural Network. Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Hamburg, Germany.","DOI":"10.1007\/978-3-319-99740-7_22"},{"key":"ref_83","doi-asserted-by":"crossref","first-page":"131","DOI":"10.1007\/s11063-011-9207-8","article-title":"Reverse engineering the neural networks for rule extraction in classification problems","volume":"35","author":"Augasta","year":"2012","journal-title":"Neural Process. Lett."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"1750006","DOI":"10.1142\/S0218213017500063","article-title":"Rule extraction from training data using neural network","volume":"26","author":"Biswas","year":"2017","journal-title":"Int. J. Artif. Intell. Tools"},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"155","DOI":"10.1016\/S0004-3702(00)00077-1","article-title":"Symbolic knowledge extraction from trained neural networks: A sound approach","volume":"125","author":"Garcez","year":"2001","journal-title":"Artif. Intell."},{"key":"ref_86","unstructured":"Frosst, N., and Hinton, G. (2017, January 16\u201317). Distilling a neural network into a soft decision tree. Proceedings of the 16th International Conference of the Italian Association of Artificial Intelligence. Workshop on Comprehensibility and Explanation in AI and ML, Bari, Italy."},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Yang, Y., Ma, H., and Wu, Y.N. (2019, January 16\u201320). Interpreting cnns via decision trees. 
Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00642"},{"key":"ref_88","first-page":"3","article-title":"Extracting symbolic rules from trained neural network ensembles","volume":"16","author":"Zhou","year":"2003","journal-title":"AI Commun."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1109\/TITB.2003.808498","article-title":"Medical diagnosis with C4.5 rule preceded by artificial neural network ensemble","volume":"7","author":"Zhou","year":"2003","journal-title":"Trans. Inf. Technol. Biomed."},{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Boz, O. (2002, January 23\u201326). Extracting decision trees from trained neural networks. Proceedings of the 8th SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, AB, Canada.","DOI":"10.1145\/775047.775113"},{"key":"ref_91","doi-asserted-by":"crossref","unstructured":"Craven, M.W., and Shavlik, J.W. (1994). Using sampling and queries to extract rules from trained neural networks. Machine Learning Proceedings, Elsevier.","DOI":"10.1016\/B978-1-55860-335-6.50013-1"},{"key":"ref_92","unstructured":"Craven, M., and Shavlik, J.W. (1996, January 2\u20135). Extracting tree-structured representations of trained networks. Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA."},{"key":"ref_93","doi-asserted-by":"crossref","unstructured":"Wu, M., Hughes, M.C., Parbhoo, S., Zazzi, M., Roth, V., and Doshi-Velez, F. (2018, January 2\u20137). Beyond sparsity: Tree regularization of deep models for interpretability. Proceedings of the 32nd Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11501"},{"key":"ref_94","unstructured":"Murdoch, W.J., and Szlam, A. (2017, January 23\u201326). Automatic rule extraction from long short term memory networks. 
Proceedings of the 5th International Conference on Learning Representations, Conference Track Proceedings, Toulon, France."},{"key":"ref_95","doi-asserted-by":"crossref","unstructured":"Hu, Z., Ma, X., Liu, Z., Hovy, E., and Xing, E. (2016, January 7\u201312). Harnessing deep neural networks with logic rules. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.","DOI":"10.18653\/v1\/P16-1228"},{"key":"ref_96","unstructured":"Tran, S.N. (2017, January 19\u201325). Unsupervised Neural-Symbolic Integration. Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI), Melbourne, Australia."},{"key":"ref_97","doi-asserted-by":"crossref","first-page":"385","DOI":"10.1162\/EVCO_a_00155","article-title":"Improving the interpretability of classification rules discovered by an ant colony algorithm: Extended results","volume":"24","author":"Otero","year":"2016","journal-title":"Evol. Comput."},{"key":"ref_98","doi-asserted-by":"crossref","first-page":"2354","DOI":"10.1016\/j.eswa.2010.08.023","article-title":"Building comprehensible customer churn prediction models with advanced rule induction techniques","volume":"38","author":"Verbeke","year":"2011","journal-title":"Expert Syst. Appl."},{"key":"ref_99","doi-asserted-by":"crossref","unstructured":"Lakkaraju, H., Bach, S.H., and Leskovec, J. (2016, January 13\u201317). Interpretable decision sets: A joint framework for description and prediction. Proceedings of the 22nd SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939874"},{"key":"ref_100","first-page":"1350","article-title":"Building interpretable classifiers with rules using Bayesian analysis","volume":"9","author":"Letham","year":"2012","journal-title":"Dep. Stat. Tech. Rep. Tr609, Univ. Wash."},{"key":"ref_101","unstructured":"Letham, B., Rudin, C., McCormick, T.H., and Madigan, D. (2013, January 1). 
An Interpretable Stroke Prediction Model Using Rules and Bayesian Analysis. Proceedings of the 17th Conference on Late-Breaking Developments in the Field of Artificial Intelligence, Palo Alto, CA, USA."},{"key":"ref_102","doi-asserted-by":"crossref","first-page":"1350","DOI":"10.1214\/15-AOAS848","article-title":"Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model","volume":"9","author":"Letham","year":"2015","journal-title":"Ann. Appl. Stat."},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Wang, T., Rudin, C., Velez-Doshi, F., Liu, Y., Klampfl, E., and MacNeille, P. (2016, January 12\u201316). Bayesian rule sets for interpretable classification. Proceedings of the 16th International Conference on Data Mining (ICDM), Barcelona, Spain.","DOI":"10.1109\/ICDM.2016.0171"},{"key":"ref_104","first-page":"2357","article-title":"A bayesian framework for learning rule sets for interpretable classification","volume":"18","author":"Wang","year":"2017","journal-title":"J. Mach. Learn. Res."},{"key":"ref_105","unstructured":"Pazzani, M. (1997, January 14\u201317). Comprehensible knowledge discovery: Gaining insight from data. Proceedings of the First Federal Data Mining Conference and Exposition, London, UK."},{"key":"ref_106","doi-asserted-by":"crossref","unstructured":"Zeng, Z., Miao, C., Leung, C., and Chin, J.J. (2018, January 2\u20137). Building More Explainable Artificial Intelligence With Argumentation. Proceedings of the 32nd Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11353"},{"key":"ref_107","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1016\/j.ijar.2006.01.004","article-title":"Analysis of interpretability-accuracy tradeoff of fuzzy systems by multiobjective fuzzy genetics-based machine learning","volume":"44","author":"Ishibuchi","year":"2007","journal-title":"Int. J. Approx. 
Reason."},{"key":"ref_108","doi-asserted-by":"crossref","first-page":"212","DOI":"10.1109\/91.842154","article-title":"Fuzzy modeling of high-dimensional systems: Complexity reduction and interpretability improvement","volume":"8","author":"Jin","year":"2000","journal-title":"Trans. Fuzzy Syst."},{"key":"ref_109","doi-asserted-by":"crossref","unstructured":"Pierrard, R., Poli, J.P., and Hudelot, C. (2018, January 8\u201313). Learning Fuzzy Relations and Properties for Explainable Artificial Intelligence. Proceedings of the International Conference on Fuzzy Systems (FUZZ-IEEE), Rio de Janeiro, Brazil.","DOI":"10.1109\/FUZZ-IEEE.2018.8491538"},{"key":"ref_110","doi-asserted-by":"crossref","first-page":"S5:1","DOI":"10.1186\/1471-2164-12-S2-S5","article-title":"Building interpretable fuzzy models for high dimensional data analysis in cancer diagnosis","volume":"12","author":"Wang","year":"2011","journal-title":"BMC Genom."},{"key":"ref_111","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.ins.2013.03.038","article-title":"An interpretable classification rule mining algorithm","volume":"240","author":"Cano","year":"2013","journal-title":"Inf. Sci."},{"key":"ref_112","doi-asserted-by":"crossref","unstructured":"Malioutov, D.M., Varshney, K.R., Emad, A., and Dash, S. (2017). Learning interpretable classification rules with boolean compressed sensing. Transparent Data Mining for Big and Small Data, Springer.","DOI":"10.1007\/978-3-319-54024-5_5"},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Su, G., Wei, D., Varshney, K.R., and Malioutov, D.M. (2016, January 23). Interpretable two-level Boolean rule learning for classification. Proceedings of the ICML Workshop Human Interpretability in Machine Learning, New York, NY, USA.","DOI":"10.1109\/MLSP.2016.7738856"},{"key":"ref_114","doi-asserted-by":"crossref","unstructured":"D\u2019Alterio, P., Garibaldi, J.M., and John, R.I. (2020, January 19\u201324). 
Constrained interval type-2 fuzzy classification systems for explainable AI (XAI). Proceedings of the International Conference on Fuzzy Systems, Scotland, UK.","DOI":"10.1109\/FUZZ48607.2020.9177671"},{"key":"ref_115","unstructured":"Fahner, G. (2018, January 18\u201322). Developing Transparent Credit Risk Scorecards More Effectively: An Explainable Artificial Intelligence Approach. Proceedings of the 7th International Conference on Data Analytics, Athens, Greece."},{"key":"ref_116","unstructured":"Liang, Y., and Van den Broeck, G. (2017, January 19\u201325). Towards Compact Interpretable Models: Shrinking of Learned Probabilistic Sentential Decision Diagrams. Proceedings of the Workshop on Explainable AI (XAI); International Joint Conferences on Artificial Intelligence (IJCAI), Melbourne, Australia."},{"key":"ref_117","doi-asserted-by":"crossref","first-page":"17001","DOI":"10.1109\/ACCESS.2019.2893141","article-title":"Evolving Rule-Based Explainable Artificial Intelligence for Unmanned Aerial Vehicles","volume":"7","author":"Keneni","year":"2019","journal-title":"Access"},{"key":"ref_118","doi-asserted-by":"crossref","unstructured":"Andrzejak, A., Langner, F., and Zabala, S. (2013, January 16\u201319). Interpretable models from distributed data via merging of decision trees. Proceedings of the Symposium on Computational Intelligence and Data Mining (CIDM), Singapore.","DOI":"10.1109\/CIDM.2013.6597210"},{"key":"ref_119","first-page":"1","article-title":"Interpreting tree ensembles with intrees","volume":"7","author":"Deng","year":"2018","journal-title":"Int. J. Data Sci. Anal."},{"key":"ref_120","doi-asserted-by":"crossref","unstructured":"Ferri, C., Hern\u00e1ndez-Orallo, J., and Ram\u00edrez-Quintana, M.J. (2002, January 24\u201326). From ensemble methods to comprehensible models. 
Proceedings of the International Conference on Discovery Science, L\u00fcbeck, Germany.","DOI":"10.1007\/3-540-36182-0_16"},{"key":"ref_121","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1016\/j.inffus.2020.03.013","article-title":"Explainable Decision Forest: Transforming a decision forest into an interpretable tree","volume":"61","author":"Sagi","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_122","doi-asserted-by":"crossref","unstructured":"Van Assche, A., and Blockeel, H. (2007, January 17\u201321). Seeing the forest through the trees: Learning a comprehensible model from an ensemble. Proceedings of the European Conference on Machine Learning, Warsaw, Poland.","DOI":"10.1007\/978-3-540-74958-5_39"},{"key":"ref_123","unstructured":"Hara, S., and Hayashi, K. (2018, January 9\u201311). Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. Proceedings of the International Conference on Artificial Intelligence and Statistics, AISTATS, Canary Islands, Spain."},{"key":"ref_124","doi-asserted-by":"crossref","first-page":"263","DOI":"10.1007\/s10489-007-0093-8","article-title":"Explaining inferences in Bayesian networks","volume":"29","author":"Yap","year":"2008","journal-title":"Appl. Intell."},{"key":"ref_125","unstructured":"Barratt, S. (2017, January 7). Interpnet: Neural introspection for interpretable deep learning. Proceedings of the Symposium on Interpretable Machine Learning, Long Beach, CA, USA."},{"key":"ref_126","doi-asserted-by":"crossref","first-page":"125562","DOI":"10.1109\/ACCESS.2019.2937521","article-title":"Human-Centric AI for Trustworthy IoT Systems With Explainable Multilayer Perceptrons","volume":"7","author":"Muttukrishnan","year":"2019","journal-title":"Access"},{"key":"ref_127","unstructured":"Bennetot, A., Laurent, J.L., Chatila, R., and D\u00edaz-Rodr\u00edguez, N. (2019, January 10\u201316). Towards explainable neural-symbolic visual reasoning. 
Proceedings of the NeSy Workshop; International Joint Conferences on Artificial Intelligence (IJCAI), Macao, China."},{"key":"ref_128","doi-asserted-by":"crossref","unstructured":"Lei, T., Barzilay, R., and Jaakkola, T. (2016, January 1\u20135). Rationalizing neural predictions. Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA.","DOI":"10.18653\/v1\/D16-1011"},{"key":"ref_129","doi-asserted-by":"crossref","unstructured":"Hendricks, L.A., Akata, Z., Rohrbach, M., Donahue, J., Schiele, B., and Darrell, T. (2016, January 11\u201314). Generating visual explanations. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46493-0_1"},{"key":"ref_130","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1016\/0010-4809(75)90009-9","article-title":"Computer-based consultations in clinical therapeutics: Explanation and rule acquisition capabilities of the MYCIN system","volume":"8","author":"Shortliffe","year":"1975","journal-title":"Comput. Biomed. Res."},{"key":"ref_131","unstructured":"Alonso, J.M., Ramos-Soto, A., Castiello, C., and Mencar, C. (2018, January 27). Explainable AI Beer Style Classifier. Proceedings of the SICSA Workshop on Reasoning, Learning and Explainability, Scotland, UK."},{"key":"ref_132","doi-asserted-by":"crossref","first-page":"798","DOI":"10.1155\/2017\/2460174","article-title":"An interpretable classification framework for information extraction from online healthcare forums","volume":"2017","author":"Gao","year":"2017","journal-title":"J. Healthc. Eng."},{"key":"ref_133","doi-asserted-by":"crossref","first-page":"285","DOI":"10.1007\/s10506-016-9183-4","article-title":"A method for explaining Bayesian networks for legal evidence with scenarios","volume":"24","author":"Vlek","year":"2016","journal-title":"Artif. Intell. 
Law"},{"key":"ref_134","doi-asserted-by":"crossref","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10.","DOI":"10.1371\/journal.pone.0130140"},{"key":"ref_135","doi-asserted-by":"crossref","unstructured":"Apicella, A., Giugliano, S., Isgr\u00f2, F., and Prevete, R. (2021, January 10\u201315). A general approach to compute the relevance of middle-level input features. Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_14"},{"key":"ref_136","doi-asserted-by":"crossref","unstructured":"Fong, R.C., and Vedaldi, A. (2017, January 22\u201329). Interpretable explanations of black boxes by meaningful perturbation. Proceedings of the International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.371"},{"key":"ref_137","unstructured":"Liu, L., and Wang, L. (2012, January 16\u201321). What has my classifier learned? Visualizing the classification rules of bag-of-feature model by support region detection. Proceedings of the 2012 Conference on Computer Vision and Pattern Recognition, Providence, RI, USA."},{"key":"ref_138","doi-asserted-by":"crossref","unstructured":"Choo, J., Lee, H., Kihm, J., and Park, H. (2010, January 25\u201326). iVisClassifier: An interactive visual analytics system for classification based on supervised dimension reduction. Proceedings of the Symposium on Visual Analytics Science and Technology, Salt Lake City, UT, USA.","DOI":"10.1109\/VAST.2010.5652443"},{"key":"ref_139","unstructured":"Dabkowski, P., and Gal, Y. (2017, January 4\u20139). Real time image saliency for black box classifiers. 
Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_140","first-page":"1803","article-title":"How to explain individual classification decisions","volume":"11","author":"Baehrens","year":"2010","journal-title":"J. Mach. Learn. Res."},{"key":"ref_141","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1080\/10618600.2014.907095","article-title":"Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation","volume":"24","author":"Goldstein","year":"2015","journal-title":"J. Comput. Graph. Stat."},{"key":"ref_142","doi-asserted-by":"crossref","unstructured":"Casalicchio, G., Molnar, C., and Bischl, B. (2018, January 10\u201314). Visualizing the feature importance for black box models. Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Dublin, Ireland.","DOI":"10.1007\/978-3-030-10925-7_40"},{"key":"ref_143","doi-asserted-by":"crossref","unstructured":"Alvarez-Melis, D., and Jaakkola, T.S. (2017, January 7\u201311). A causal framework for explaining the predictions of black-box sequence-to-sequence models. Proceedings of the Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark.","DOI":"10.18653\/v1\/D17-1042"},{"key":"ref_144","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2015, January 7\u20139). Explaining and harnessing adversarial examples. Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA."},{"key":"ref_145","unstructured":"Krause, J., Perer, A., and Bertini, E. (2016, January 23). Using visual analytics to interpret predictive machine learning models. Proceedings of the ICML Workshop on Human Interpretability in Machine Learning, New York, NY, USA."},{"key":"ref_146","unstructured":"Poulin, B., Eisner, R., Szafron, D., Lu, P., Greiner, R., Wishart, D.S., Fyshe, A., Pearcy, B., MacDonell, C., and Anvik, J. 
(2006, January 16\u201320). Visual explanation of evidence with additive classifiers. Proceedings of the The National Conference On Artificial Intelligence, Boston, MA, USA."},{"key":"ref_147","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1109\/TVCG.2018.2864499","article-title":"Manifold: A Model-Agnostic Framework for Interpretation and Diagnosis of Machine Learning Models","volume":"25","author":"Zhang","year":"2019","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_148","doi-asserted-by":"crossref","unstructured":"Kahng, M., Fang, D., and Chau, D.H.P. (2016, January 26). Visual exploration of machine learning results using data cube analysis. Proceedings of the Workshop on Human-In-the-Loop Data Analytics, San Francisco, CA, USA.","DOI":"10.1145\/2939502.2939503"},{"key":"ref_149","doi-asserted-by":"crossref","unstructured":"Kumar, D., Wong, A., and Taylor, G.W. (2017, January 21\u201326). Explaining the unexplained: A class-enhanced attentive response (clear) approach to understanding deep neural networks. Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.215"},{"key":"ref_150","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_151","unstructured":"Liu, G., and Gifford, D. (2017, January 10). Visualizing Feature Maps in Deep Neural Networks using DeepResolve. A Genomics Case Study. Proceedings of the International Conference on Machine Learning\u2014Workshop on Visualization for Deep Learning, Sydney, Australia."},{"key":"ref_152","unstructured":"Sundararajan, M., Taly, A., and Yan, Q. (2017, January 6\u201311). Axiomatic attribution for deep networks. 
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_153","unstructured":"Smilkov, D., Thorat, N., Kim, B., Vi\u00e9gas, F., and Wattenberg, M. (2017, January 10). Smoothgrad: Removing noise by adding noise. Proceedings of the International Conference on Machine Learning\u2014Workshop on Visualization for Deep Learning, Sydney, Australia."},{"key":"ref_154","doi-asserted-by":"crossref","unstructured":"Jung, H., Oh, Y., Park, J., and Kim, M.S. (2021, January 10\u201315). Jointly Optimize Positive and Negative Saliencies for Black Box Classifiers. Pattern Recognition. Proceedings of the ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_6"},{"key":"ref_155","unstructured":"Mogrovejo, O., Antonio, J., Wang, K., and Tuytelaars, T. (2019, January 6\u20139). Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks. Proceedings of the 7th International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_156","first-page":"248","article-title":"Using explanations to improve ensembling of visual question answering systems","volume":"82","author":"Rajani","year":"2017","journal-title":"Training"},{"key":"ref_157","unstructured":"Goyal, Y., Mohapatra, A., Parikh, D., and Batra, D. (2016, January 23). Towards transparent ai systems: Interpreting visual question answering models. Proceedings of the ICML Workshop on Visualization for Deep Learning, New York, NY, USA."},{"key":"ref_158","doi-asserted-by":"crossref","unstructured":"Zeiler, M.D., and Fergus, R. (2014, January 6\u201312). Visualizing and understanding convolutional networks. Proceedings of the European conference on computer vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"ref_159","doi-asserted-by":"crossref","unstructured":"Fong, R., and Vedaldi, A. (2018, January 18\u201322). 
Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00910"},{"key":"ref_160","unstructured":"Ghorbani, A., Wexler, J., Zou, J.Y., and Kim, B. (2019). Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, Morgan Kaufmann Publishers Inc."},{"key":"ref_161","doi-asserted-by":"crossref","unstructured":"Mahendran, A., and Vedaldi, A. (2015, January 7\u201312). Understanding deep image representations by inverting them. Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7299155"},{"key":"ref_162","doi-asserted-by":"crossref","unstructured":"Du, M., Liu, N., Song, Q., and Hu, X. (2018, January 19\u201323). Towards Explanation of DNN-based Prediction with Guided Feature Inversion. Proceedings of the 24th SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK.","DOI":"10.1145\/3219819.3220099"},{"key":"ref_163","unstructured":"Shrikumar, A., Greenside, P., and Kundaje, A. (2017, January 6\u201311). Learning important features through propagating activation differences. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_164","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Deep inside convolutional networks: Visualising image classification models and saliency maps. ICLR Workshop, ICLR."},{"key":"ref_165","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1016\/j.patcog.2016.11.008","article-title":"Explaining nonlinear classification decisions with deep Taylor decomposition","volume":"65","author":"Montavon","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_166","unstructured":"He, S., and Pugeault, N. (2017, January 10). 
Deep saliency: What is learnt by a deep network about saliency?. Proceedings of the International Conference on Machine Learning\u2014Workshop on Visualization for Deep Learning, Sydney, Australia."},{"key":"ref_167","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Wu, Y.N., and Zhu, S.C. (2018, January 18\u201322). Interpretable convolutional neural networks. Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00920"},{"key":"ref_168","unstructured":"Zintgraf, L.M., Cohen, T.S., Adel, T., and Welling, M. (2017, January 23\u201326). Visualizing deep neural network decisions: Prediction difference analysis. Proceedings of the 5th International Conference on Learning Representations, Toulon, France."},{"key":"ref_169","unstructured":"Kindermans, P.J., Sch\u00fctt, K.T., Alber, M., M\u00fcller, K.R., Erhan, D., Kim, B., and D\u00e4hne, S. (May, January 30). Learning how to explain neural networks: PatternNet and PatternAttribution. Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada."},{"key":"ref_170","doi-asserted-by":"crossref","unstructured":"Davis, B., Bhatt, U., Bhardwaj, K., Marculescu, R., and Moura, J.M. (2020, January 4\u20138). On network science and mutual information for explaining deep neural networks. Proceedings of the International Conference on Acoustics, Speech and Signal Processing ICASSP, Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053078"},{"key":"ref_171","doi-asserted-by":"crossref","unstructured":"Kenny, E.M., and Keane, M.T. (2019, January 10\u201316). Twin-systems to explain artificial neural networks using case-based reasoning: Comparative tests of feature-weighting methods in ANN-CBR twins for XAI. 
Proceedings of the 28th International Joint Conferences on Artificial Intelligence, Macao, China.","DOI":"10.24963\/ijcai.2019\/376"},{"key":"ref_172","doi-asserted-by":"crossref","unstructured":"Kenny, E.M., Delaney, E.D., Greene, D., and Keane, M.T. (2021, January 10\u201315). Post-Hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective. Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_2"},{"key":"ref_173","doi-asserted-by":"crossref","unstructured":"Chu, L., Hu, X., Hu, J., Wang, L., and Pei, J. (2018, January 19\u201323). Exact and consistent interpretation for piecewise linear neural networks: A closed form solution. Proceedings of the 24th SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK.","DOI":"10.1145\/3219819.3220063"},{"key":"ref_174","unstructured":"Arras, L., Montavon, G., M\u00fcller, K.R., and Samek, W. (, January September). Explaining recurrent neural network predictions in sentiment analysis. Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Association for Computational Linguistics, Copenhagen, Denmark."},{"key":"ref_175","doi-asserted-by":"crossref","unstructured":"Binder, A., Montavon, G., Lapuschkin, S., M\u00fcller, K.R., and Samek, W. (2016, January 6\u20139). Layer-wise relevance propagation for neural networks with local renormalization layers. Proceedings of the International Conference on Artificial Neural Networks, Barcelona, Spain.","DOI":"10.1007\/978-3-319-44781-0_8"},{"key":"ref_176","doi-asserted-by":"crossref","unstructured":"Li, J., Chen, X., Hovy, E., and Jurafsky, D. (2016, January 12\u201317). Visualizing and understanding neural models in NLP. 
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA.","DOI":"10.18653\/v1\/N16-1082"},{"key":"ref_177","doi-asserted-by":"crossref","unstructured":"Aubry, M., and Russell, B.C. (2015, January 7\u201313). Understanding deep features with computer-generated imagery. Proceedings of the International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.329"},{"key":"ref_178","unstructured":"Zahavy, T., Ben-Zrihem, N., and Mannor, S. (2016, January 19\u201324). Graying the black box: Understanding DQNs. Proceedings of the International Conference on Machine Learning, New York, NY, USA."},{"key":"ref_179","doi-asserted-by":"crossref","unstructured":"Liu, X., Wang, X., and Matwin, S. (2018, January 8\u201313). Interpretable deep convolutional neural networks via meta-learning. Proceedings of the International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil.","DOI":"10.1109\/IJCNN.2018.8489172"},{"key":"ref_180","doi-asserted-by":"crossref","first-page":"101","DOI":"10.1109\/TVCG.2016.2598838","article-title":"Visualizing the hidden activity of artificial neural networks","volume":"23","author":"Rauber","year":"2017","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_181","unstructured":"Thiagarajan, J.J., Kailkhura, B., Sattigeri, P., and Ramamurthy, K.N. (2016, January 9). TreeView: Peeking into deep neural networks via feature-space partitioning. Proceedings of the Interpretability Workshop, Barcelona, Spain."},{"key":"ref_182","unstructured":"Bau, D., Zhu, J.Y., Strobelt, H., Bolei, Z., Tenenbaum, J.B., Freeman, W.T., and Torralba, A. (2019, January 6\u20139). GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. 
Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_183","doi-asserted-by":"crossref","unstructured":"L\u00f3pez-Cifuentes, A., Escudero-Vi\u00f1olo, M., Gaji\u0107, A., and Besc\u00f3s, J. (2021, January 10\u201315). Visualizing the Effect of Semantic Classes in the Attribution of Scene Recognition Models. Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_9"},{"key":"ref_184","doi-asserted-by":"crossref","unstructured":"Gorokhovatskyi, O., and Peredrii, O. (2021, January 10\u201315). Recursive Division of Image for Explanation of Shallow CNN Models. Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_20"},{"key":"ref_185","unstructured":"Lengerich, B.J., Konam, S., Xing, E.P., Rosenthal, S., and Veloso, M. (2017, January 10). Towards visual explanations for convolutional neural networks via input resampling. Proceedings of the International Conference on Machine Learning\u2014Workshop on Visualization for Deep Learning, Sydney, Australia."},{"key":"ref_186","unstructured":"Erhan, D., Courville, A., and Bengio, Y. (2010). Understanding representations learned in deep architectures. Tech. Rep., 1355."},{"key":"ref_187","unstructured":"Nguyen, A., Yosinski, J., and Clune, J. (2016, January 23). Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. Proceedings of the Visualization for Deep Learning workshop. International Conference on Machine Learning, New York, NY, USA."},{"key":"ref_188","unstructured":"Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., and Clune, J. (2016, January 5\u201310). Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. 
Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_189","unstructured":"Hamidi-Haines, M., Qi, Z., Fern, A., Li, F., and Tadepalli, P. (2019, January 16\u201320). Interactive Naming for Explaining Deep Neural Networks: A Formative Study. Proceedings of the Workshops Co-Located with the 24th Conference on Intelligent User Interfaces, Los Angeles, CA, USA."},{"key":"ref_190","doi-asserted-by":"crossref","unstructured":"Zhu, P., Zhu, R., Mishra, S., and Saligrama, V. (2021, January 10\u201315). Low Dimensional Visual Attributes: An Interpretable Image Encoding. Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Online. Part III.","DOI":"10.1007\/978-3-030-68796-0_7"},{"key":"ref_191","doi-asserted-by":"crossref","unstructured":"Stano, M., Benesova, W., and Martak, L.S. (2020, January 15\u201317). Explainable 3D convolutional neural network using GMM encoding. Proceedings of the 12th International Conference on Machine Vision (ICMV), Amsterdam, The Netherlands.","DOI":"10.1117\/12.2557314"},{"key":"ref_192","doi-asserted-by":"crossref","unstructured":"Halnaut, A., Giot, R., Bourqui, R., and Auber, D. (2021, January 11). Samples Classification Analysis Across DNN Layers with Fractal Curves. Proceedings of the ICPR 2020\u2019s Workshop Explainable Deep Learning for AI, Milan, Italy.","DOI":"10.1007\/978-3-030-68796-0_4"},{"key":"ref_193","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Cao, R., Shi, F., Wu, Y.N., and Zhu, S.C. (2018, January 2\u20137). Interpreting cnn knowledge via an explanatory graph. Proceedings of the 32nd Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11819"},{"key":"ref_194","unstructured":"Liang, X., Hu, Z., Zhang, H., Lin, L., and Xing, E.P. (2018, January 2\u20138). Symbolic graph reasoning meets convolutions. 
Proceedings of the Advances in Neural Information Processing Systems, Montr\u00e9al, QC, Canada."},{"key":"ref_195","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Cao, R., Wu, Y.N., and Zhu, S.C. (2017, January 4\u201310). Growing interpretable part graphs on convnets via multi-shot learning. Proceedings of the 31st Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.10924"},{"key":"ref_196","doi-asserted-by":"crossref","first-page":"e10","DOI":"10.23915\/distill.00010","article-title":"The building blocks of interpretability","volume":"3","author":"Olah","year":"2018","journal-title":"Distill"},{"key":"ref_197","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1109\/TVCG.2017.2744718","article-title":"Activis: Visual exploration of industry-scale deep neural network models","volume":"24","author":"Kahng","year":"2018","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_198","unstructured":"Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., and Lipson, H. (2015, January 15). Understanding neural networks through deep visualization. Proceedings of the ICML Workshop on Deep Learning, Poster Presentation, Lille, France."},{"key":"ref_199","unstructured":"Zhong, W., Xie, C., Zhong, Y., Wang, Y., Xu, W., Cheng, S., and Mueller, K. (2017, January 10). Evolutionary visual analysis of deep neural networks. Proceedings of the International Conference on Machine Learning\u2014Workshop on Visualization for Deep Learning, Sydney, Australia."},{"key":"ref_200","first-page":"1","article-title":"iNNvestigate neural networks","volume":"20","author":"Alber","year":"2019","journal-title":"J. Mach. Learn. Res."},{"key":"ref_201","doi-asserted-by":"crossref","unstructured":"Streeter, M.J., Ward, M.O., and Alvarez, S.A. (2001, January 22\u201323). Nvis: An interactive visualization tool for neural networks. 
Proceedings of the Visual Data Exploration and Analysis VIII, San Jose, CA, USA.","DOI":"10.1117\/12.424934"},{"key":"ref_202","unstructured":"Karpathy, A., Johnson, J., and Fei-Fei, L. (2016, January 2\u20134). Visualizing and understanding recurrent networks. Proceedings of the ICLR Workshops, San Juan, Puerto Rico."},{"key":"ref_203","doi-asserted-by":"crossref","first-page":"353","DOI":"10.1109\/TVCG.2018.2865044","article-title":"Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models","volume":"25","author":"Strobelt","year":"2018","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_204","doi-asserted-by":"crossref","first-page":"1133","DOI":"10.1109\/TFUZZ.2013.2245130","article-title":"FINGRAMS: Visual representations of fuzzy rule-based inference for expert analysis of comprehensibility","volume":"21","author":"Pancho","year":"2013","journal-title":"Trans. Fuzzy Syst."},{"key":"ref_205","doi-asserted-by":"crossref","unstructured":"Hamel, L. (2006, January 28\u201329). Visualization of support vector machines with unsupervised learning. Proceedings of the Symposium on Computational Intelligence and Bioinformatics and Computational Biology, Toronto, ON, Canada.","DOI":"10.1109\/CIBCB.2006.330984"},{"key":"ref_206","doi-asserted-by":"crossref","unstructured":"Jakulin, A., Mo\u017eina, M., Dem\u0161ar, J., Bratko, I., and Zupan, B. (2005, January 21\u201324). Nomograms for visualizing support vector machines. Proceedings of the 11th SIGKDD International Conference on Knowledge Discovery in Data Mining, Chicago, IL, USA.","DOI":"10.1145\/1081870.1081886"},{"key":"ref_207","doi-asserted-by":"crossref","first-page":"247","DOI":"10.1109\/TITB.2007.902300","article-title":"Nonlinear support vector machine visualization for risk factor analysis using nomograms and localized radial basis function kernels","volume":"12","author":"Cho","year":"2008","journal-title":"Trans. Inf. Technol. 
Biomed."},{"key":"ref_208","doi-asserted-by":"crossref","unstructured":"Mo\u017eina, M., Dem\u0161ar, J., Kattan, M., and Zupan, B. (2004, January 20\u201324). Nomograms for visualization of naive Bayesian classifier. Proceedings of the European Conference on Principles of Data Mining and Knowledge Discovery, Pisa, Italy.","DOI":"10.1007\/978-3-540-30116-5_32"},{"key":"ref_209","doi-asserted-by":"crossref","unstructured":"Landecker, W., Thomure, M.D., Bettencourt, L.M.A., Mitchell, M., Kenyon, G.T., and Brumby, S.P. (2013, January 16\u201319). Interpreting individual classifications of hierarchical networks. Proceedings of the Symposium on Computational Intelligence and Data Mining (CIDM), Singapore.","DOI":"10.1109\/CIDM.2013.6597214"},{"key":"ref_210","doi-asserted-by":"crossref","unstructured":"Panchenko, A., Ruppert, E., Faralli, S., Ponzetto, S.P., and Biemann, C. (2017, January 3\u20137). Unsupervised does not mean uninterpretable: The case for word sense induction and disambiguation. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain.","DOI":"10.18653\/v1\/E17-1009"},{"key":"ref_211","doi-asserted-by":"crossref","unstructured":"Hooker, G. (2004, January 22\u201325). Discovering additive structure in black box functions. Proceedings of the 10th SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA.","DOI":"10.1145\/1014052.1014122"},{"key":"ref_212","doi-asserted-by":"crossref","unstructured":"Kuhn, D.R., Kacker, R.N., Lei, Y., and Simos, D.E. (2020, January 24\u201328). Combinatorial Methods for Explainable AI. Proceedings of the International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Porto, Portugal.","DOI":"10.1109\/ICSTW50294.2020.00037"},{"key":"ref_213","unstructured":"Biran, O., and McKeown, K. (2014, January 26). Justification narratives for individual classifications. 
Proceedings of the AutoML Workshop at ICML, Beijing, China."},{"key":"ref_214","first-page":"1064","article-title":"explAIner: A visual analytics framework for interactive and explainable machine learning","volume":"26","author":"Spinner","year":"2019","journal-title":"Trans. Vis. Comput. Graph."},{"key":"ref_215","doi-asserted-by":"crossref","unstructured":"Tamagnini, P., Krause, J., Dasgupta, A., and Bertini, E. (2017, January 14\u201319). Interpreting black-box classifiers using instance-level visual explanations. Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, Chicago, IL, USA.","DOI":"10.1145\/3077257.3077260"},{"key":"ref_216","unstructured":"Yang, S.C.H., and Shafto, P. (2017, January 9). Explainable Artificial Intelligence via Bayesian Teaching. Proceedings of the Workshop on Teaching Machines, Robots, and Humans, Long Beach, CA, USA."},{"key":"ref_217","unstructured":"Khanna, R., Kim, B., Ghosh, J., and Koyejo, S. (2019, January 16\u201318). Interpreting Black Box Predictions using Fisher Kernels. Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Okinawa, Japan."},{"key":"ref_218","doi-asserted-by":"crossref","first-page":"2403","DOI":"10.1214\/11-AOAS495","article-title":"Prototype selection for interpretable classification","volume":"5","author":"Bien","year":"2011","journal-title":"Ann. Appl. Stat."},{"key":"ref_219","unstructured":"Caruana, R., Kangarloo, H., Dionisio, J., Sinha, U., and Johnson, D. (1999, January 6\u201310). Case-based explanation of non-case-based learning methods. Proceedings of the AMIA Symposium, Washington, DC, USA."},{"key":"ref_220","doi-asserted-by":"crossref","unstructured":"Pawelczyk, M., Broelemann, K., and Kasneci, G. (2020, January 20\u201324). Learning Model-Agnostic Counterfactual Explanations for Tabular Data. 
Proceedings of the Web Conference, Taipei, Taiwan.","DOI":"10.1145\/3366423.3380087"},{"key":"ref_221","doi-asserted-by":"crossref","unstructured":"Mothilal, R.K., Sharma, A., and Tan, C. (2020, January 27\u201330). Explaining machine learning classifiers through diverse counterfactual explanations. Proceedings of the Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.","DOI":"10.1145\/3351095.3372850"},{"key":"ref_222","doi-asserted-by":"crossref","unstructured":"Liu, N., Yang, H., and Hu, X. (2018, January 19\u201323). Adversarial detection with model interpretation. Proceedings of the 24th SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK.","DOI":"10.1145\/3219819.3220027"},{"key":"ref_223","unstructured":"Kim, B., Khanna, R., and Koyejo, O.O. (2016, January 5\u201310). Examples are not enough, learn to criticize! criticism for interpretability. Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_224","unstructured":"Dhurandhar, A., Chen, P.Y., Luss, R., Tu, C.C., Ting, P., Shanmugam, K., and Das, P. (2018, January 2\u20138). Explanations based on the missing: Towards contrastive explanations with pertinent negatives. Proceedings of the Advances in Neural Information Processing Systems 31 (NIPS), Montr\u00e9al, QC, Canada."},{"key":"ref_225","doi-asserted-by":"crossref","unstructured":"Park, D.H., Hendricks, L.A., Akata, Z., Schiele, B., Darrell, T., and Rohrbach, M. (2018, January 18\u201322). Multimodal explanations: Justifying decisions and pointing to the evidence. Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00915"},{"key":"ref_226","doi-asserted-by":"crossref","unstructured":"Mayr, F., and Yovine, S. (2018, January 27\u201330). Regular Inference on Artificial Neural Networks. 
Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Hamburg, Germany.","DOI":"10.1007\/978-3-319-99740-7_25"},{"key":"ref_227","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/0893-6080(95)00086-0","article-title":"Extraction of rules from discrete-time recurrent neural networks","volume":"9","author":"Omlin","year":"1996","journal-title":"Neural Netw."},{"key":"ref_228","doi-asserted-by":"crossref","unstructured":"Tamajka, M., Benesova, W., and Kompanek, M. (2019, January 5\u20137). Transforming Convolutional Neural Network to an Interpretable Classifier. Proceedings of the International Conference on Systems, Signals and Image Processing (IWSSIP), Osijek, Croatia.","DOI":"10.1109\/IWSSIP.2019.8787211"},{"key":"ref_229","unstructured":"Yeh, C.K., Kim, J., Yen, I.E.H., and Ravikumar, P.K. (2018, January 2\u20138). Representer point selection for explaining deep neural networks. Proceedings of the Advances in Neural Information Processing Systems, Montr\u00e9al, QC, Canada."},{"key":"ref_230","doi-asserted-by":"crossref","unstructured":"Alonso, J.M. (2019, January 9\u201313). Explainable Artificial Intelligence for kids. Proceedings of the Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (EUSFLAT), Prague, Czech Republic.","DOI":"10.2991\/eusflat-19.2019.21"},{"key":"ref_231","unstructured":"Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P., and Uthurusamy, R. (1996). Transforming rules and trees into comprehensible knowledge structures. Advances in Knowledge Discovery and Data Mining, American Association for Artificial Intelligence."},{"key":"ref_232","unstructured":"Tan, H.F., Hooker, G., and Wells, M.T. (2016, January 9). Tree space prototypes: Another look at making tree ensembles interpretable. 
Proceedings of the Interpretability Workshop, Barcelona, Spain."},{"key":"ref_233","unstructured":"N\u00fa\u00f1ez, H., Angulo, C., and Catal\u00e0, A. (2002, January 24\u201326). Rule extraction from support vector machines. Proceedings of the European Symposium on Artificial Neural Networks, Bruges, Belgium."},{"key":"ref_234","doi-asserted-by":"crossref","first-page":"475","DOI":"10.1016\/j.ijar.2016.09.002","article-title":"A two-phase method for extracting explanatory arguments from Bayesian networks","volume":"80","author":"Timmer","year":"2017","journal-title":"Int. J. Approx. Reason."},{"key":"ref_235","unstructured":"Kim, B., Rudin, C., and Shah, J.A. (2014, January 8\u201313). The bayesian case model: A generative approach for case-based reasoning and prototype classification. Proceedings of the Advances in Neural Information Processing Systems. Neural Information Processing Systems Foundation, Montr\u00e9al, QC, Canada."},{"key":"ref_236","doi-asserted-by":"crossref","unstructured":"Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. (2015, January 10\u201313). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21th SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia.","DOI":"10.1145\/2783258.2788613"},{"key":"ref_237","doi-asserted-by":"crossref","unstructured":"Howard, D., and Edwards, M.A. (2018, January 3\u20137). Explainable AI: The Promise of Genetic Programming Multi-run Subtree Encapsulation. Proceedings of the International Conference on Machine Learning and Data Engineering (iCMLDE), Dallas, TX, USA.","DOI":"10.1109\/iCMLDE.2018.00037"},{"key":"ref_238","unstructured":"Kim, B., Shah, J.A., and Doshi-Velez, F. (2015, January 7\u201312). Mind the gap: A generative approach to interpretable feature selection and extraction. Proceedings of the Advances in Neural Information Processing Systems. 
Neural Information Processing Systems Foundation, Montr\u00e9al, QC, Canada."},{"key":"ref_239","doi-asserted-by":"crossref","unstructured":"Campagner, A., and Cabitza, F. (2020, January 25\u201328). Back to the Feature: A Neural-Symbolic Perspective on Explainable AI. Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Dublin, Ireland.","DOI":"10.1007\/978-3-030-57321-8_3"},{"key":"ref_240","doi-asserted-by":"crossref","unstructured":"Belle, V. (2017, January 19\u201325). Logic meets probability: Towards explainable AI systems for uncertain worlds. Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia.","DOI":"10.24963\/ijcai.2017\/733"}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/3\/3\/32\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:40:30Z","timestamp":1760164830000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/3\/3\/32"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,4]]},"references-count":240,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2021,9]]}},"alternative-id":["make3030032"],"URL":"https:\/\/doi.org\/10.3390\/make3030032","relation":{},"ISSN":["2504-4990"],"issn-type":[{"value":"2504-4990","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,8,4]]}}}