{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T05:40:39Z","timestamp":1764740439200,"version":"3.46.0"},"reference-count":192,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,12,1]],"date-time":"2025-12-01T00:00:00Z","timestamp":1764547200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Scientific Grant Agency of the Ministry of Education, Research, Development and Youth of the Slovak Republic and the Slovak Academy of Sciences","award":["1\/0259\/24"],"award-info":[{"award-number":["1\/0259\/24"]}]},{"DOI":"10.13039\/501100005357","name":"Slovak Research and Development Agency","doi-asserted-by":"publisher","award":["APVV-22-0414","APVV-24-0454"],"award-info":[{"award-number":["APVV-22-0414","APVV-24-0454"]}],"id":[{"id":"10.13039\/501100005357","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MAKE"],"abstract":"<jats:p>The growing trend of using artificial intelligence models in many areas increases the need for a proper understanding of their functioning and decision-making. Although these models achieve high predictive accuracy, their lack of transparency poses major obstacles to trust. Explainable artificial intelligence (XAI) has emerged as a key discipline that offers a wide range of methods to explain the decisions of models. Selecting the most appropriate XAI method for a given application is a non-trivial problem that requires careful consideration of the nature of the method and other aspects. This paper proposes a systematic approach to solving this problem using multi-criteria decision-making (MCDM) techniques: ARAS, CODAS, EDAS, MABAC, MARCOS, PROMETHEE II, TOPSIS, VIKOR, WASPAS, and WSM. The resulting score is an aggregation of the results of these methods using Borda Count. 
We present a framework that integrates objective and subjective criteria for selecting XAI methods. The proposed methodology includes two main phases. In the first phase, methods that meet the specified parameters are filtered, and in the second phase, the most suitable alternative is selected based on the weights using multi-criteria decision-making and sensitivity analysis. Metric weights can be entered directly, using pairwise comparisons, or calculated objectively using the CRITIC method. The framework is demonstrated on concrete use cases where we compare several popular XAI methods on tasks in different domains. The results show that the proposed approach provides a transparent and robust mechanism for objectively selecting the most appropriate XAI method, thereby helping researchers and practitioners make more informed decisions when deploying explainable AI systems. Sensitivity analysis confirmed the robustness of our XAI method selection: LIME dominated 98.5% of tests in the first use case, and Tree SHAP dominated 94.3% in the second.<\/jats:p>","DOI":"10.3390\/make7040158","type":"journal-article","created":{"date-parts":[[2025,12,2]],"date-time":"2025-12-02T08:49:22Z","timestamp":1764665362000},"page":"158","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["A Multi-Criteria Decision-Making Approach for the Selection of Explainable AI Methods"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7871-460X","authenticated-orcid":false,"given":"Miroslava","family":"Matejov\u00e1","sequence":"first","affiliation":[{"name":"Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Kosice, Letna 9, 040 01 Ko\u0161ice, 
Slovakia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4603-0411","authenticated-orcid":false,"given":"J\u00e1n","family":"Parali\u010d","sequence":"additional","affiliation":[{"name":"Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Kosice, Letna 9, 040 01 Ko\u0161ice, Slovakia"}]}],"member":"1968","published-online":{"date-parts":[[2025,12,1]]},"reference":[{"key":"ref_1","unstructured":"Gunning, D. (2017). Explainable Artificial Intelligence (XAI), Defense Advanced Research Projects Agency (DARPA)."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Linardatos, P., Papastefanopoulos, V., and Kotsiantis, S. (2021). Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 23.","DOI":"10.3390\/e23010018"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","article-title":"Explanation in Artificial Intelligence: Insights from the Social Sciences","volume":"267","author":"Miller","year":"2017","journal-title":"Artif. Intell."},{"key":"ref_4","first-page":"2288","article-title":"Examples Are Not Enough, Learn to Criticize! Criticism for Interpretability","volume":"29","author":"Kim","year":"2016","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_5","unstructured":"Doshi-Velez, F., and Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., and Kagal, L. (2018, January 1\u20134). Explaining Explanations: An Overview of Interpretability of Machine Learning. 
Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics, Turin, Italy.","DOI":"10.1109\/DSAA.2018.00018"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","article-title":"Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI","volume":"58","author":"Bennetot","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"52138","DOI":"10.1109\/ACCESS.2018.2870052","article-title":"Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)","volume":"6","author":"Adadi","year":"2018","journal-title":"IEEE Access"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3236009","article-title":"A Survey of Methods for Explaining Black Box Models","volume":"51","author":"Guidotti","year":"2019","journal-title":"ACM Comput. Surv."},{"key":"ref_10","first-page":"1","article-title":"Explainable Artificial Intelligence: Importance, Use Domains, Stages, Output Shapes, and Challenges","volume":"57","author":"Ullah","year":"2024","journal-title":"ACM Comput. Surv."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"521","DOI":"10.1007\/978-3-030-93736-2_39","article-title":"How to Choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice","volume":"Volume 1524","author":"Vermeire","year":"2021","journal-title":"Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Molnar, C. (2024, January 23). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd ed. 
Available online: https:\/\/christophm.github.io\/interpretable-ml-book\/.","DOI":"10.1177\/09726225241252009"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Carvalho, D.V., Pereira, E.M., and Cardoso, J.S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8.","DOI":"10.3390\/electronics8080832"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1145\/3233231","article-title":"The Mythos of Model Interpretability","volume":"61","author":"Lipton","year":"2018","journal-title":"Commun. ACM"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-Cam: Visual Explanations from Deep Networks via Gradient-Based Localization. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., and Samek, W. (2015). On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE, 10.","DOI":"10.1371\/journal.pone.0130140"},{"key":"ref_17","unstructured":"Shrikumar, A., Greenside, P., and Kundaje, A. (2017, January 6\u201311). Learning Important Features Through Propagating Activation Differences. Proceedings of the 34th International Conference on Machine Learning, Centre, Sydney."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016, January 13\u201317). \u201cWhy Should I Trust You?\u201d: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA. KDD \u201916.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_19","unstructured":"Lundberg, S.M., and Lee, S.I. (2017, January 4\u20139). 
A Unified Approach to Interpreting Model Predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA. NIPS\u201917."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2018, January 2\u20137). Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_21","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2014, January 14\u201316). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014\u2014Workshop Track Proceedings, Banff, AB, Canada."},{"key":"ref_22","unstructured":"Sundararajan, M., Taly, A., and Yan, Q. (2017, January 6\u201311). Axiomatic Attribution for Deep Networks. Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, Australia."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1080\/10618600.2014.907095","article-title":"Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation","volume":"24","author":"Goldstein","year":"2015","journal-title":"J. Comput. Graph. Stat."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203451","article-title":"Greedy Function Approximation: A Gradient Boosting Machine","volume":"29","author":"Friedman","year":"2001","journal-title":"Ann. 
Statist."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1059","DOI":"10.1111\/rssb.12377","article-title":"Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models","volume":"82","author":"Apley","year":"2016","journal-title":"J. R. Stat. Soc. Ser. B Stat. Methodol."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"615","DOI":"10.3390\/make3030032","article-title":"Classification of Explainable Artificial Intelligence Methods through Their Output Formats","volume":"3","author":"Vilone","year":"2021","journal-title":"Mach. Learn. Knowl. Extr."},{"key":"ref_27","first-page":"841","article-title":"Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR","volume":"31","author":"Wachter","year":"2017","journal-title":"Harv. J. Law Technol."},{"key":"ref_28","unstructured":"Vilone, G., and Longo, L. (2020). Explainable Artificial Intelligence: A Systematic Review. arXiv."},{"key":"ref_29","first-page":"321","article-title":"Scenario-Based Requirements Elicitation for User-Centric Explainable AI: A Case in Fraud Detection","volume":"Volume 12279","author":"Cirqueira","year":"2020","journal-title":"Machine Learning and Knowledge Extraction. CD-MAKE 2020"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wolf, C.T. (2019, January 17\u201320). Explainability Scenarios: Towards Scenario-Based XAI Design. Proceedings of the International Conference on Intelligent User Interfaces, Proceedings IUI, Marina del Ray, CA, USA. Part. F147615.","DOI":"10.1145\/3301275.3302317"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., Ghosh, J., Puri, R., Moura, J.M.F., and Eckersley, P. (2020, January 27\u201330). Explainable Machine Learning in Deployment. 
Proceedings of the FAT* 2020\u2014Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.","DOI":"10.1145\/3351095.3375624"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"103473","DOI":"10.1016\/j.artint.2021.103473","article-title":"What Do We Want from Explainable Artificial Intelligence (XAI)?\u2014A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research","volume":"296","author":"Langer","year":"2021","journal-title":"Artif. Intell."},{"key":"ref_33","unstructured":"Arya, V., Bellamy, R.K.E., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S.C., Houde, S., Liao, Q.V., Luss, R., and Mojsilovi\u0107, A. (2019). One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sokol, K., and Flach, P. (2020, January 27\u201330). Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches. Proceedings of the FAT* 2020\u2014Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.","DOI":"10.1145\/3351095.3372870"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Cugny, R., Aligon, J., Chevalier, M., Jimenez, G.R., and Teste, O. (2022). Why Should I Choose You? AutoXAI: A Framework for Selecting and Tuning EXplainable AI Solutions. International Conference on Information and Knowledge Management, Proceedings, Association for Computing Machinery.","DOI":"10.1145\/3511808.3557247"},{"key":"ref_36","unstructured":"Hall, M., Harborne, D., Tomsett, R., Galetic, V., Quintana-Amate, S., Nottle, A., and Preece, A. (2019, January 11). A Systematic Method to Understand Requirements for Explainable AI (XAI) Systems. 
Proceedings of the IJCAI Workshop on eXplainable Artificial Intelligence (XAI 2019), Macau, China."},{"key":"ref_37","first-page":"67","article-title":"Context-Aware Recommender Systems","volume":"32","author":"Adomavicius","year":"2011","journal-title":"AI Mag."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"106622","DOI":"10.1016\/j.knosys.2020.106622","article-title":"AutoML: A Survey of the State-of-the-Art","volume":"212","author":"He","year":"2021","journal-title":"Knowl. Based Syst."},{"key":"ref_39","unstructured":"Jullum, M., Sj\u00f8din, J., Prabhu, R., and L\u00f8land, A. (2023, January 26\u201328). EXplego: An Interactive Tool That Helps You Select Appropriate XAI-Methods for Your Explainability Needs. Proceedings of the xAI (Late-breaking Work, Demos, Doctoral Consortium), CEUR Workshop Proceedings, Aachen, Germany."},{"key":"ref_40","first-page":"1","article-title":"From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI","volume":"55","author":"Nauta","year":"2023","journal-title":"ACM J."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"338","DOI":"10.1007\/s10462-024-10952-7","article-title":"Clarity in Complexity: How Aggregating Explanations Resolves the Disagreement Problem","volume":"57","author":"Moise","year":"2024","journal-title":"Artif. Intell. Rev."},{"key":"ref_42","unstructured":"Rieger, L., and Hansen, L.K. (2019). Aggregating Explanation Methods for Stable and Robust Explainability. arXiv."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Chatterjee, S., Colombo, E.R., and Raimundo, M.M. (2025). Multi-Criteria Rank-Based Aggregation for Explainable AI. 
arXiv.","DOI":"10.1109\/IJCNN64981.2025.11228222"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"435","DOI":"10.15388\/Informatica.2015.57","article-title":"Multi-Criteria Inventory Classification Using a New Method of Evaluation Based on Distance from Average Solution (EDAS)","volume":"26","author":"Ghorabaee","year":"2015","journal-title":"Informatica"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Hwang, C.-L., and Yoon, K. (1981). Methods for Multiple Attribute Decision Making, Springer.","DOI":"10.1007\/978-3-642-48318-9"},{"key":"ref_46","first-page":"131","article-title":"The New Method of Multicriteria Complex Proportional Assessment of Projects","volume":"1","author":"Zavadskas","year":"1994","journal-title":"Technol. Econ. Dev. Econ."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Brans, J.P., and Mareschal, B. (1990). The Promethee Methods for MCDM; The Promcalc, Gaia and Bankadviser Software. Readings in Multiple Criteria Decision Aid, Springer.","DOI":"10.1007\/978-3-642-75935-2_10"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"159","DOI":"10.3846\/tede.2010.10","article-title":"A New Additive Ratio Assessment (ARAS) Method in Multicriteria Decision-Making","volume":"16","author":"Zavadskas","year":"2010","journal-title":"Technol. Econ. Dev. Econ."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"2501","DOI":"10.1108\/MD-05-2017-0458","article-title":"A Combined Compromise Solution (CoCoSo) Method for Multi-Criteria Decision-Making Problems","volume":"57","author":"Yazdani","year":"2019","journal-title":"Manag. Decis."},{"key":"ref_50","first-page":"25","article-title":"A New Combinative Distance-Based Assessment (Codas) Method for Multi-Criteria Decision-Making","volume":"50","author":"Ghorabaee","year":"2016","journal-title":"Econ. Comput. Econ. Cybern. Stud. 
Res."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"3016","DOI":"10.1016\/j.eswa.2014.11.057","article-title":"The Selection of Transport and Handling Resources in Logistics Centers Using Multi-Attributive Border Approximation Area Comparison (MABAC)","volume":"42","year":"2015","journal-title":"Expert. Syst. Appl."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Anderkov\u00e1, V., Babi\u010d, F., Parali\u010dov\u00e1, Z., and Javorsk\u00e1, D. (2025). Intelligent System Using Data to Support Decision-Making. Appl. Sci., 15.","DOI":"10.3390\/app15147724"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Behnke, J. (2004). Bordas Text \u201eM\u00e9moire Sur Les \u00c9lections Au Scrutin \u201cvon 1784: Einige Einf\u00fchrende Bemerkungen. Jahrbuch f\u00fcr Handlungs-und Entscheidungstheorie, VS Verlag f\u00fcr Sozialwissenschaften.","DOI":"10.1007\/978-3-322-80613-0_7"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1719","DOI":"10.1007\/s10618-023-00933-9","article-title":"Benchmarking and Survey of Explanation Methods for Black Box Models","volume":"37","author":"Bodria","year":"2023","journal-title":"Data Min. Knowl. Discov."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"245","DOI":"10.1613\/jair.1.12228","article-title":"A Survey on the Explainability of Supervised Machine Learning","volume":"70","author":"Burkart","year":"2021","journal-title":"J. Artif. Intell. Res."},{"key":"ref_56","unstructured":"Alvarez-Melis, D., and Jaakkola, T.S. (2018). Towards Robust Interpretability with Self-Explaining Neural Networks. Proceedings of the 32nd International Conference on Neural Information Processing Systems, Curran Associates Inc.. 
NIPS\u201918."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1145\/3400051.3400058","article-title":"Causal Interpretability for Machine Learning-Problems, Methods and Evaluation","volume":"22","author":"Moraffah","year":"2020","journal-title":"ACM SIGKDD Explor. Newsl."},{"key":"ref_58","first-page":"10967","article-title":"On the (In)Fidelity and Sensitivity for Explanations","volume":"32","author":"Yeh","year":"2019","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_59","unstructured":"Chuang, Y.-N., Wang, G., Yang, F., Liu, Z., Cai, X., Du, M., and Hu, X. (2023). Efficient XAI Techniques: A Taxonomic Survey. arXiv."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"247","DOI":"10.1109\/JPROC.2021.3060483","article-title":"Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications","volume":"109","author":"Samek","year":"2021","journal-title":"Proc. IEEE"},{"key":"ref_61","unstructured":"Samek, W., Wiegand, T., and M\u00fcller, K.-R. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv."},{"key":"ref_62","first-page":"96","article-title":"An Empirical Analysis of User Preferences Regarding XAI Metrics","volume":"Volume 14775","author":"Darias","year":"2024","journal-title":"Case-Based Reasoning Research and Development. ICCBR 2024"},{"key":"ref_63","unstructured":"Li, X., Du, M., Chen, J., Chai, Y., Lakkaraju, H., and Xiong, H. (2023, January 10\u201316). M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models. Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA. NIPS \u201923."},{"key":"ref_64","first-page":"1","article-title":"Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond","volume":"24","author":"Weber","year":"2022","journal-title":"J. Mach. 
Learn. Res."},{"key":"ref_65","first-page":"445","article-title":"BEExAI: Benchmark to Evaluate Explainable AI","volume":"2153","author":"Sithakoul","year":"2024","journal-title":"Commun. Comput. Inf. Sci."},{"key":"ref_66","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1145\/3737445","article-title":"A Functionally-Grounded Benchmark Framework for XAI Methods: Insights and Foundations from a Systematic Literature Review","volume":"57","author":"Canha","year":"2025","journal-title":"ACM Comput. Surv."},{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Lee, J.R., Emami, S., Hollins, M.D., Wong, T.C.H., Villalobos S\u00e1nchez, C.I., Toni, F., Zhang, D., and Dejl, A. (2025). XAI-Units: Benchmarking Explainability Methods with Unit Tests. ACMF AccT 2025\u2014Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, Athens, Greece, 23\u201326 June 2025, ACM.","DOI":"10.1145\/3715275.3732186"},{"key":"ref_68","unstructured":"Belaid, M.K., H\u00fcllermeier, E., Rabus, M., and Krestel, R. (2022). Do We Need Another Explainable AI Method? Toward Unifying Post-Hoc XAI Evaluation Methods into an Interactive and Multi-Dimensional Benchmark. arXiv."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Moiseev, I., Balabaeva, K., and Kovalchuk, S. (2025). Open and Extensible Benchmark for Explainable Artificial Intelligence Methods. Algorithms, 18.","DOI":"10.3390\/a18020085"},{"key":"ref_70","unstructured":"Liu, Y., Khandagale, S., White, C., and Neiswanger, W. (2021). Synthetic Benchmarks for Scientific Research in Explainable Machine Learning. arXiv."},{"key":"ref_71","first-page":"15784","article-title":"OpenXAI: Towards a Transparent Evaluation of Model Explanations","volume":"35","author":"Agarwal","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_72","unstructured":"Zhang, Y., Song, J., Gu, S., Jiang, T., Pan, B., Bai, G., and Zhao, L. (2023, January 3\u20137). 
Saliency-Bench: A Comprehensive Benchmark for Evaluating Visual Explanations. Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2, Toronto, ON, Canada."},{"key":"ref_73","unstructured":"Ma, J., Lai, V., Zhang, Y., Chen, C., Hamilton, P., Ljubenkov, D., Lakkaraju, H., and Tan, C. (2024). OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning. arXiv."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Aechtner, J., Cabrera, L., Katwal, D., Onghena, P., Valenzuela, D.P., and Wilbik, A. (2022, January 18\u201323). Comparing User Perception of Explanations Developed with XAI Methods. Proceedings of the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Padua, Italy.","DOI":"10.1109\/FUZZ-IEEE55066.2022.9882743"},{"key":"ref_75","first-page":"31","article-title":"A Survey on Multi Criteria Decision Making Methods and Its Applications","volume":"1","author":"Aruldoss","year":"2013","journal-title":"Am. J. Inf. Syst."},{"key":"ref_76","first-page":"55","article-title":"An Analysis of Multi-Criteria Decision Making Methods","volume":"10","author":"Velasquez","year":"2013","journal-title":"Int. J. Oper. Res."},{"key":"ref_77","doi-asserted-by":"crossref","unstructured":"Ryciuk, U., Kiryluk, H., and Hajduk, S. (2021). Multi-Criteria Analysis in the Decision-Making Approach for the Linear Ordering of Urban Transport Based on TOPSIS Technique. Energies, 15.","DOI":"10.3390\/en15010274"},{"key":"ref_78","unstructured":"Baczkiewicz, A., Atr\u00f3bski, J.W., Kizielewicz, B., and Sa\u0142abun, W. (2021, January 2\u20135). Towards Objectification of Multi-Criteria Assessments: A Comparative Study on MCDA Methods. Proceedings of the 2021 16th Conference on Computer Science and Intelligence Systems, Sofia, Bulgaria."},{"key":"ref_79","first-page":"1725","article-title":"Multi-Criteria Decision Support Systems. 
Comparative Analysis","volume":"16","author":"Baizyldayeva","year":"2013","journal-title":"Middle-East. J. Sci. Res."},{"key":"ref_80","first-page":"257","article-title":"Multiple-Criteria Decision Making","volume":"4","author":"Habenicht","year":"2002","journal-title":"Optim. Oper. Res."},{"key":"ref_81","doi-asserted-by":"crossref","first-page":"100232","DOI":"10.1016\/j.health.2023.100232","article-title":"A Comprehensive and Systematic Review of Multi-Criteria Decision-Making Methods and Applications in Healthcare","volume":"4","author":"Chakraborty","year":"2023","journal-title":"Healthc. Anal."},{"key":"ref_82","doi-asserted-by":"crossref","first-page":"365","DOI":"10.1016\/j.rser.2003.12.007","article-title":"Application of Multi-Criteria Decision Making to Sustainable Energy Planning\u2014A Review","volume":"8","author":"Pohekar","year":"2004","journal-title":"Renew. Sustain. Energy Rev."},{"key":"ref_83","unstructured":"Saaty, T.L. (1980). Analytic Hierarchy Process Planning, Priority Setting, Resource Allocation, McGraw-Hill, Inc."},{"key":"ref_84","unstructured":"Saaty, T.L. (1992). Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World, RWS Publications."},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"763","DOI":"10.1016\/0305-0548(94)00059-H","article-title":"Determining Objective Weights in Multiple Criteria Problems: The Critic Method","volume":"22","author":"Diakoulaki","year":"1995","journal-title":"Comput. Ops Res."},{"key":"ref_86","doi-asserted-by":"crossref","first-page":"106231","DOI":"10.1016\/j.cie.2019.106231","article-title":"Sustainable Supplier Selection in Healthcare Industries Using a New MCDM Method: Measurement of Alternatives and Ranking According to COmpromise Solution (MARCOS)","volume":"140","author":"Chatterjee","year":"2020","journal-title":"Comput. Ind. 
Eng."},{"key":"ref_87","first-page":"83","article-title":"The Robustness of TOPSIS Results Using Sensitivity Analysis Based on Weight Tuning","volume":"68","year":"2018","journal-title":"IFMBE Proc."},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1029\/WR016i001p00014","article-title":"Multiobjective Optimization in River Basin Development","volume":"16","author":"Duckstein","year":"1980","journal-title":"Water Resour. Res."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"3","DOI":"10.5755\/j01.eee.122.6.1810","article-title":"Optimization of Weighted Aggregated Sum Product Assessment","volume":"122","author":"Zavadskas","year":"2012","journal-title":"Elektron. Ir Elektrotechnika"},{"key":"ref_90","doi-asserted-by":"crossref","first-page":"254","DOI":"10.1287\/opre.16.2.254","article-title":"Sensitivity of Decisions to Probability Estimation Errors: A Reexamination","volume":"16","author":"Fishburn","year":"1968","journal-title":"Oper. Res."},{"key":"ref_91","first-page":"647","article-title":"Explaining Prediction Models and Individual Predictions with Feature Contributions","volume":"41","author":"Kononenko","year":"2013","journal-title":"Knowl. Inf. Syst."},{"key":"ref_92","unstructured":"Koh, P.W., and Liang, P. (2017, January 6\u201311). Understanding Black-Box Predictions via Influence Functions. Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, Australia."},{"key":"ref_93","unstructured":"Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., and Sayres, R. (2018, January 10\u201315). Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). 
Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden."},{"key":"ref_94","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1016\/j.patcog.2016.11.008","article-title":"Explaining NonLinear Classification Decisions with Deep Taylor Decomposition","volume":"65","author":"Montavon","year":"2015","journal-title":"Pattern Recognit."},{"key":"ref_95","doi-asserted-by":"crossref","first-page":"818","DOI":"10.1007\/978-3-319-10590-1_53","article-title":"Visualizing and Understanding Convolutional Networks","volume":"Volume 8689","author":"Zeiler","year":"2013","journal-title":"Computer Vision\u2013ECCV 2014. ECCV 2014"},{"key":"ref_96","unstructured":"Smilkov, D., Thorat, N., Kim, B., Vi\u00e9gas, F., and Wattenberg, M. (2017). SmoothGrad: Removing Noise by Adding Noise. arXiv."},{"key":"ref_97","unstructured":"Kindermans, P.J., Sch\u00fctt, K.T., Alber, M., M\u00fcller, K.R., Erhan, D., Kim, B., and D\u00e4hne, S. (May, January 30). Learning How to Explain Neural Networks: PatternNet and PatternAttribution. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018\u2014Conference Track Proceedings, Vancouver, BC, Canada."},{"key":"ref_98","unstructured":"Springenberg, J.T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2015, January 7\u20139). Striving for Simplicity: The All Convolutional Net. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015\u2014Workshop Track Proceedings, San Diego, CA, USA."},{"key":"ref_99","doi-asserted-by":"crossref","unstructured":"Fong, R.C., and Vedaldi, A. (2017, January 22\u201329). Interpretable Explanations of Black Boxes by Meaningful Perturbation. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.371"},{"key":"ref_100","unstructured":"Xu, K., Ba, J.L., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R.S., and Bengio, Y. (2015, January 6\u201311). 
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France."},{"key":"ref_101","unstructured":"Erhan, D., Courville, A., and Bengio, Y. (2010). Understanding Representations Learned in Deep Architectures, Department Dinformatique et Recherche Operationnelle, University of Montreal."},{"key":"ref_102","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. (2024, January 23). Learning Deep Features for Discriminative Localization. 2016, pp 2921\u20132929. Available online: http:\/\/cnnlocalization.csail.mit.edu.","DOI":"10.1109\/CVPR.2016.319"},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Lei, T., Barzilay, R., and Jaakkola, T. (2016). Rationalizing Neural Predictions. EMNLP 2016\u2014Proceedings of the Conference on Empirical Methods in Natural Language Processing, Proceedings, Austin, TX, USA, 1\u20135 November 2016, Curran Associates Inc.","DOI":"10.18653\/v1\/D16-1011"},{"key":"ref_104","unstructured":"Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., and Giannotti, F. (2018). Local Rule-Based Explanations of Black Box Decision Systems. arXiv."},{"key":"ref_105","unstructured":"Zintgraf, L.M., Cohen, T.S., Adel, T., and Welling, M. (2017, January 24\u201326). Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. Proceedings of the 5th International Conference on Learning Representations, ICLR 2017\u2014Conference Track Proceedings, Toulon, France."},{"key":"ref_106","unstructured":"Petsiuk, V., Das, A., and Saenko, K. (2018, January 3\u20136). RISE: Randomized Input Sampling for Explanation of Black-Box Models. Proceedings of the British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK."},{"key":"ref_107","unstructured":"Dhurandhar, A., Chen, P.-Y., Luss, R., Tu, C.-C., Ting, P., Shanmugam, K., and Das, P. (2018, January 3\u20138). 
Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada. NIPS\u201918."},{"key":"ref_108","first-page":"650","article-title":"Interpretable Counterfactual Explanations Guided by Prototypes","volume":"Volume 12976","author":"Klaise","year":"2019","journal-title":"Machine Learning and Knowledge Discovery in Databases"},{"key":"ref_109","doi-asserted-by":"crossref","unstructured":"Mothilal, R.K., Sharma, A., and Tan, C. (2020). Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations. FAT* 2020\u2014Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27\u201330 January 2020, Association for Computing Machinery.","DOI":"10.1145\/3351095.3372850"},{"key":"ref_110","unstructured":"Ghorbani, A., Wexler, J., Zou, J., and Kim, B. (2019, January 8\u201314). Towards Automatic Concept-Based Explanations. Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_111","unstructured":"Yeh, C.K., Kim, B., Arik, S., Li, C.L., Pfister, T., and Ravikumar, P. (2020, January 6\u201312). On Completeness-Aware Concept-Based Explanations in Deep Neural Networks. Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada. NIPS \u201920."},{"key":"ref_112","unstructured":"Frosst, N., and Hinton, G. (2017, January 16\u201317). Distilling a Neural Network into a Soft Decision Tree. Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 Co-Located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017), CEUR Workshop Proceedings, Bari, Italy."},{"key":"ref_113","unstructured":"Fr\u00e4mling, K., and Graillot, D. (1995, January 9\u201313). 
Extracting Explanations from Neural Networks. Proceedings of the ICANN\u201995 Conference, Paris, France."},{"key":"ref_114","unstructured":"Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., and Lipson, H. (2015). Understanding Neural Networks Through Deep Visualization. arXiv."},{"key":"ref_115","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1007\/s10115-017-1116-3","article-title":"Auditing Black-Box Models for Indirect Influence","volume":"54","author":"Adler","year":"2016","journal-title":"Knowl. Inf. Syst."},{"key":"ref_116","unstructured":"Bau, D., Zhu, J.Y., Strobelt, H., Zhou, B., Tenenbaum, J.B., Freeman, W.T., and Torralba, A. (2019, January 6\u20139). GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_117","doi-asserted-by":"crossref","unstructured":"Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., and Flach, P. (2020). FACE: Feasible and Actionable Counterfactual Explanations. AIES 2020\u2014Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7\u20138 February 2020, Association for Computing Machinery.","DOI":"10.1145\/3375627.3375850"},{"key":"ref_118","first-page":"4699","article-title":"Neural Additive Models: Interpretable Machine Learning with Neural Nets","volume":"6","author":"Agarwal","year":"2020","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_119","doi-asserted-by":"crossref","first-page":"342","DOI":"10.1109\/TVCG.2018.2864812","article-title":"RuleMatrix: Visualizing and Understanding Classifiers with Rules","volume":"25","author":"Ming","year":"2019","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_120","doi-asserted-by":"crossref","unstructured":"Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V.N. (2018, January 12\u201315). 
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018, Lake Tahoe, NV, USA.","DOI":"10.1109\/WACV.2018.00097"},{"key":"ref_121","unstructured":"Chen, J., Song, L., Wainwright, M.J., and Jordan, M.I. (2018, January 10\u201315). Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden."},{"key":"ref_122","first-page":"2515","article-title":"Model Agnostic Supervised Local Explanations","volume":"31","author":"Plumb","year":"2018","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_123","doi-asserted-by":"crossref","first-page":"1340","DOI":"10.1093\/bioinformatics\/btq134","article-title":"Permutation Importance: A Corrected Feature Importance Measure","volume":"26","author":"Altmann","year":"2010","journal-title":"Bioinformatics"},{"key":"ref_124","doi-asserted-by":"crossref","first-page":"395","DOI":"10.32614\/RJ-2018-072","article-title":"Explanations of Model Predictions with Live and BreakDown Packages","volume":"10","author":"Staniak","year":"2018","journal-title":"R. 
J."},{"key":"ref_125","doi-asserted-by":"crossref","first-page":"13665","DOI":"10.1609\/aaai.v34i09.7116","article-title":"Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations","volume":"Volume 34","author":"Guidotti","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"ref_126","doi-asserted-by":"crossref","first-page":"448","DOI":"10.1007\/978-3-030-58112-1_31","article-title":"Multi-Objective Counterfactual Explanations","volume":"Volume 12269","author":"Dandl","year":"2020","journal-title":"Parallel Problem Solving from Nature \u2013PPSN XVI"},{"key":"ref_127","doi-asserted-by":"crossref","first-page":"73","DOI":"10.25300\/MISQ\/2014\/38.1.04","article-title":"Explaining Data-Driven Document Classifications","volume":"38","author":"Martens","year":"2014","journal-title":"Manag. Inf. Syst. Q."},{"key":"ref_128","doi-asserted-by":"crossref","first-page":"6968","DOI":"10.1109\/TKDE.2022.3187455","article-title":"GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks","volume":"35","author":"Huang","year":"2020","journal-title":"IEEE Trans. Knowl. Data Eng."},{"key":"ref_129","first-page":"159","article-title":"Global Explanations with Local Scoring","volume":"1167","author":"Setzu","year":"2020","journal-title":"Commun. Comput. Inf. Sci."},{"key":"ref_130","doi-asserted-by":"crossref","first-page":"105532","DOI":"10.1016\/j.knosys.2020.105532","article-title":"Machine Learning Explainability via Microaggregation and Shallow Decision Trees","volume":"194","year":"2020","journal-title":"Knowl. Based Syst."},{"key":"ref_131","first-page":"655","article-title":"Visualizing the Feature Importance for Black Box Models","volume":"Volume 11051","author":"Casalicchio","year":"2019","journal-title":"Machine Learning and Knowledge Discovery in Databases"},{"key":"ref_132","doi-asserted-by":"crossref","unstructured":"Datta, A., Sen, S., and Zick, Y. 
(2016, January 23\u201325). Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. Proceedings of the 2016 IEEE Symposium on Security and Privacy, SP 2016, San Jose, CA, USA.","DOI":"10.1109\/SP.2016.42"},{"key":"ref_133","doi-asserted-by":"crossref","unstructured":"Lucic, A., Oosterhuis, H., Haned, H., and de Rijke, M. (March, January 22). FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles. Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, Online.","DOI":"10.1609\/aaai.v36i5.20468"},{"key":"ref_134","unstructured":"Mahajan, D., Tan, C., and Sharma, A. (2019). Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers. arXiv."},{"key":"ref_135","doi-asserted-by":"crossref","unstructured":"Russell, C. (2019). Efficient Search for Diverse Coherent Explanations. FAT* 2019\u2014Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29\u201331 January 2019, Association for Computing Machinery.","DOI":"10.1145\/3287560.3287569"},{"key":"ref_136","doi-asserted-by":"crossref","unstructured":"Ustun, B., Spangher, A., and Liu, Y. (2019). Actionable Recourse in Linear Classification. FAT* 2019\u2014Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29\u201331 January 2019, Association for Computing Machinery.","DOI":"10.1145\/3287560.3287566"},{"key":"ref_137","first-page":"2855","article-title":"DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization","volume":"3","author":"Kanamori","year":"2020","journal-title":"IJCAI Int. Jt. Conf. Artif. Intell."},{"key":"ref_138","first-page":"895","article-title":"Model-Agnostic Counterfactual Explanations for Consequential Decisions","volume":"108","author":"Karimi","year":"2019","journal-title":"Proc. Mach. Learn. 
Res."},{"key":"ref_139","doi-asserted-by":"crossref","unstructured":"Pawelczyk, M., Broelemann, K., and Kasneci, G. (2020). Learning Model-Agnostic Counterfactual Explanations for Tabular Data. Web Conference 2020\u2014Proceedings of the World Wide Web Conference, WWW 2020, Taipei, Taiwan, 20\u201324 April 2020, Association for Computing Machinery.","DOI":"10.1145\/3366423.3380087"},{"key":"ref_140","doi-asserted-by":"crossref","first-page":"5462","DOI":"10.1609\/aaai.v34i04.5996","article-title":"Synthesizing Action Sequences for Modifying Model Decisions","volume":"Volume 34","author":"Ramakrishnan","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"ref_141","doi-asserted-by":"crossref","first-page":"1438","DOI":"10.1109\/TVCG.2020.3030342","article-title":"DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models","volume":"27","author":"Cheng","year":"2020","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_142","doi-asserted-by":"crossref","unstructured":"Karimi, A.H., Sch\u00f6lkopf, B., and Valera, I. Algorithmic Recourse: From Counterfactual Explanations to Interventions. FAccT 2021\u2014Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event\/Toronto, Canada, 3\u201310 March 2021, Association for Computing Machinery.","DOI":"10.1145\/3442188.3445899"},{"key":"ref_143","doi-asserted-by":"crossref","unstructured":"Laugel, T., Lesot, M.-J., Marsala, C., Renard, X., and Detyniecki, M. (2017). Inverse Classification for Comparison-Based Interpretability in Machine Learning. arXiv.","DOI":"10.1007\/978-3-319-91473-2_9"},{"key":"ref_144","doi-asserted-by":"crossref","unstructured":"Sharma, S., Henderson, J., and Ghosh, J. (2020). CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-Box Models. 
AIES 2020\u2014Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7\u20138 February 2020, Association for Computing Machinery.","DOI":"10.1145\/3375627.3375812"},{"key":"ref_145","doi-asserted-by":"crossref","unstructured":"Gomez, O., Holter, S., Yuan, J., and Bertini, E. (2020, January 24\u201327). ViCE: Visual counterfactual explanations for machine learning models. Proceedings of the International Conference on Intelligent User Interfaces, Proceedings IUI 2020, Cagliari, Italy.","DOI":"10.1145\/3377325.3377536"},{"key":"ref_146","doi-asserted-by":"crossref","unstructured":"Lucic, A., Haned, H., and de Rijke, M. (2020). Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting. FAT* 2020\u2014Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27\u201330 January 2020, Association for Computing Machinery.","DOI":"10.1145\/3351095.3372824"},{"key":"ref_147","doi-asserted-by":"crossref","first-page":"801","DOI":"10.1007\/s11634-020-00418-3","article-title":"A Comparison of Instance-Level Counterfactual Explanation Algorithms for Behavioral and Textual Data: SEDC, LIME-C and SHAP-C","volume":"14","author":"Ramon","year":"2020","journal-title":"Adv. Data Anal. Classif."},{"key":"ref_148","first-page":"2529","article-title":"Measurable Counterfactual Local Explanations for Any Classifier","volume":"325","author":"White","year":"2019","journal-title":"Front. Artif. Intell. Appl."},{"key":"ref_149","unstructured":"Ying, R., Bourgeois, D., You, J., Zitnik, M., and Leskovec, J. (2019, January 8\u201314). GNNExplainer: Generating Explanations for Graph Neural Networks. Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_150","first-page":"721","article-title":"Shapley Flow: A Graph-Based Approach to Interpreting Model Predictions","volume":"130","author":"Wang","year":"2020","journal-title":"Proc. 
Mach. Learn. Res."},{"key":"ref_151","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1016\/j.inffus.2020.03.013","article-title":"Explainable Decision Forest: Transforming a Decision Forest into an Interpretable Tree","volume":"61","author":"Sagi","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_152","doi-asserted-by":"crossref","first-page":"5747","DOI":"10.1007\/s10462-020-09833-6","article-title":"CHIRPS: Explaining Random Forest Classification","volume":"53","author":"Hatwell","year":"2020","journal-title":"Artif. Intell. Rev."},{"key":"ref_153","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1016\/j.ins.2020.05.126","article-title":"LoRMIkA: Local Rule-Based Model Interpretability with k-Optimal Associations","volume":"540","author":"Rajapaksha","year":"2020","journal-title":"Inf. Sci. (N. Y.)"},{"key":"ref_154","doi-asserted-by":"crossref","first-page":"1483","DOI":"10.2991\/ijcis.d.200910.002","article-title":"Contextualizing Support Vector Machine Predictions","volume":"13","author":"Loor","year":"2020","journal-title":"Int. J. Comput. Intell. Syst."},{"key":"ref_155","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1016\/j.imavis.2019.02.005","article-title":"Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-Wise Relevance Propagation","volume":"83\u201384","author":"Li","year":"2019","journal-title":"Image Vis. Comput."},{"key":"ref_156","unstructured":"Zafar, M.R., and Khan, N.M. (2019). DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems. arXiv."},{"key":"ref_157","first-page":"265","article-title":"LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding","volume":"1167","author":"Mollas","year":"2020","journal-title":"Commun. Comput. Inf. Sci."},{"key":"ref_158","unstructured":"Kapishnikov, A., Bolukbasi, T., Viegas, F., and Terry, M. (November, January 27). 
XRAI: Better Attributions Through Regions. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_159","first-page":"357","article-title":"Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars","volume":"Volume 12323","author":"Lampridis","year":"2020","journal-title":"Discovery Science. DS 2020"},{"key":"ref_160","doi-asserted-by":"crossref","unstructured":"Hoover, B., Strobelt, H., and Gehrmann, S. (2020, January 5\u201310). exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Online.","DOI":"10.18653\/v1\/2020.acl-demos.22"},{"key":"ref_161","doi-asserted-by":"crossref","unstructured":"Jacovi, A., Shalom, O.S., and Goldberg, Y. (2018, January 1). Understanding Convolutional Neural Networks for Text Classification. Proceedings of the EMNLP 2018\u20142018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 1st Workshop, Brussels, Belgium.","DOI":"10.18653\/v1\/W18-5408"},{"key":"ref_162","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Zhu, Y., Ye, Q., Qiu, Q., and Jiao, J. (2018, January 18\u201320). Weakly Supervised Instance Segmentation Using Class Peak Response. 
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00399"},{"key":"ref_163","first-page":"603","article-title":"Autofocus Layer for Semantic Segmentation","volume":"Volume 11072","author":"Qin","year":"2018","journal-title":"Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2018"},{"key":"ref_164","doi-asserted-by":"crossref","first-page":"556","DOI":"10.1016\/j.procs.2017.01.172","article-title":"Classification Tree Extraction from Trained Artificial Neural Networks","volume":"104","author":"Bondarenko","year":"2017","journal-title":"Procedia Comput. Sci."},{"key":"ref_165","doi-asserted-by":"crossref","unstructured":"Burns, C., Thomason, J., and Tansey, W. (2020). Interpreting Black Box Models via Hypothesis Testing. FODS 2020\u2014Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference, Virtual Event, 19\u201320 October 2020, Association for Computing Machinery.","DOI":"10.1145\/3412815.3416889"},{"key":"ref_166","doi-asserted-by":"crossref","unstructured":"Ibrahim, M., Modarres, C., Louie, M., and Paisley, J. (2019). Global Explanations of Neural Network: Mapping the Landscape of Predictions. AIES 2019\u2014Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery.","DOI":"10.1145\/3306618.3314230"},{"key":"ref_167","unstructured":"Lengerich, B.J., Konam, S., Xing, E.P., Rosenthal, S., and Veloso, M. (2017). Towards Visual Explanations for Convolutional Neural Networks via Input Resampling. arXiv."},{"key":"ref_168","unstructured":"Barratt, S. (2017). InterpNET: Neural Introspection for Interpretable Deep Learning. arXiv."},{"key":"ref_169","unstructured":"Chattopadhyay, A., Manupriya, P., Sarkar, A., and Balasubramanian, V.N. (2019, January 9\u201315). Neural Network Attributions: A Causal Perspective. 
Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA."},{"key":"ref_170","doi-asserted-by":"crossref","unstructured":"Panigutti, C., Perotti, A., and Pedreschi, D. (2020). Doctor XAI: An Ontology-Based Approach to Black-Box Sequential Data Classification Explanations. FAT* 2020\u2014Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27\u201330 January 2020, Association for Computing Machinery.","DOI":"10.1145\/3351095.3372855"},{"key":"ref_171","first-page":"11564","article-title":"Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization","volume":"35","author":"Kanamori","year":"2021","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_172","first-page":"11575","article-title":"On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning","volume":"35","author":"Kenny","year":"2021","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_173","first-page":"6707","article-title":"Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models","volume":"Volume 1","author":"Wu","year":"2021","journal-title":"ACL-IJCNLP 2021\u201459th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference, Online, 1\u20136 August 2021"},{"key":"ref_174","doi-asserted-by":"crossref","first-page":"1681","DOI":"10.14778\/3461535.3461555","article-title":"GeCo: Quality Counterfactual Explanations in Real Time","volume":"14","author":"Schleich","year":"2021","journal-title":"Proc. VLDB Endow."},{"key":"ref_175","doi-asserted-by":"crossref","first-page":"196","DOI":"10.1016\/j.inffus.2020.07.001","article-title":"Random Forest Explainability Using Counterfactual Sets","volume":"63","author":"Moguerza","year":"2020","journal-title":"Inf. 
Fusion"},{"key":"ref_176","first-page":"56","article-title":"The What-If Tool: Interactive Probing of Machine Learning Models","volume":"26","author":"Wexler","year":"2020","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_177","doi-asserted-by":"crossref","unstructured":"Ghazimatin, A., Balalau, O., Roy, R.S., and Weikum, G. (2020). Prince: Provider-Side Interpretability with Counterfactual Explanations in Recommender Systems. WSDM 2020\u2014Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 3\u20137 February 2020, Association for Computing Machinery.","DOI":"10.1145\/3336191.3371824"},{"key":"ref_178","doi-asserted-by":"crossref","first-page":"137574","DOI":"10.1109\/ACCESS.2020.3012032","article-title":"Cold-Start Promotional Sales Forecasting through Gradient Boosted-Based Contrastive Explanations","volume":"8","year":"2020","journal-title":"IEEE Access"},{"key":"ref_179","doi-asserted-by":"crossref","unstructured":"Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., and Hu, X. (2020, January 14\u201319). Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00020"},{"key":"ref_180","unstructured":"Amoukou, S.I., Brunel, N.J.-B., and Sala\u00fcn, T. (2021). The Shapley Value of Coalition of Variables Provides Better Explanations. arXiv."},{"key":"ref_181","unstructured":"Mishra, S., Sturm, B.L., and Dixon, S. (2017, January 23\u201327). Local Interpretable Model-Agnostic Explanations for Music Content Analysis. Proceedings of the 18th International Society for Music Information Retrieval Conference, Suzhou, China."},{"key":"ref_182","unstructured":"Welling, S.H., Refsgaard, H.H.F., Brockhoff, P.B., and Clemmensen, L.H. (2016). Forest Floor Visualizations of Random Forests. 
arXiv."},{"key":"ref_183","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1186\/1758-2946-3-11","article-title":"Interpreting Linear Support Vector Machine Models with Heat Map Molecule Coloring","volume":"3","author":"Rosenbaum","year":"2011","journal-title":"J. Cheminform."},{"key":"ref_184","doi-asserted-by":"crossref","first-page":"2594","DOI":"10.1609\/aaai.v34i03.5643","article-title":"CoCoX: Generating Conceptual and Counterfactual Explanations via Fault-Lines","volume":"Volume 34","author":"Akula","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"ref_185","doi-asserted-by":"crossref","first-page":"122588","DOI":"10.1016\/j.eswa.2023.122588","article-title":"A Model-Agnostic, Network Theory-Based Framework for Supporting XAI on Classifiers","volume":"241","author":"Bonifazi","year":"2024","journal-title":"Expert. Syst. Appl."},{"key":"ref_186","first-page":"435","article-title":"XAI for Transformers: Better Explanations through Conservative Propagation","volume":"162","author":"Ali","year":"2022","journal-title":"Proc. Mach. Learn. Res."},{"key":"ref_187","doi-asserted-by":"crossref","first-page":"113941","DOI":"10.1016\/j.eswa.2020.113941","article-title":"Post-Hoc Explanation of Black-Box Classifiers Using Confident Itemsets","volume":"165","author":"Moradi","year":"2021","journal-title":"Expert. Syst. Appl."},{"key":"ref_188","unstructured":"Bousselham, W., Boggust, A., Chaybouti, S., Strobelt, H., and Kuehne, H. (2024, January 1\u20136). LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France."},{"key":"ref_189","doi-asserted-by":"crossref","unstructured":"Amara, K., Sevastjanova, R., and El-Assady, M. (2024, January 11\u201316). SyntaxShap: Syntax-Aware Explainability Method for Text Generation. 
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand.","DOI":"10.18653\/v1\/2024.findings-acl.270"},{"key":"ref_190","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1007\/978-3-030-05366-6_15","article-title":"A Sensitivity Analysis on Weight Sum Method MCDM Approach for Product Recommendation","volume":"Volume 11319","author":"Kumar","year":"2019","journal-title":"Distributed Computing and Internet Technology. ICDCIT 2019"},{"key":"ref_191","first-page":"2611","article-title":"The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes","volume":"33","author":"Kiela","year":"2020","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_192","unstructured":"(2025, October 31). Statlog (German Credit Data)\u2014UCI Machine Learning Repository. Available online: https:\/\/archive.ics.uci.edu\/dataset\/144\/statlog+german+credit+data."}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/158\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T05:36:10Z","timestamp":1764740170000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/158"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,1]]},"references-count":192,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["make7040158"],"URL":"https:\/\/doi.org\/10.3390\/make7040158","relation":{},"ISSN":["2504-4990"],"issn-type":[{"type":"electronic","value":"2504-4990"}],"subject":[],"published":{"date-parts":[[2025,12,1]]}}}