{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T10:14:22Z","timestamp":1775211262439,"version":"3.50.1"},"publisher-location":"Cham","reference-count":45,"publisher":"Springer International Publishing","isbn-type":[{"value":"9783031040825","type":"print"},{"value":"9783031040832","type":"electronic"}],"license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,17]],"date-time":"2022-04-17T00:00:00Z","timestamp":1650153600000},"content-version":"vor","delay-in-days":106,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>AI explainability is becoming indispensable to allow users to gain insights into the AI system\u2019s decision-making process. Meanwhile, fairness is another rising concern that algorithmic predictions may be misaligned to the designer\u2019s intent or social expectations such as discrimination to specific groups. In this work, we provide a state-of-the-art overview on the relations between explanation and AI fairness and especially the roles of explanation on human\u2019s fairness judgement. The investigations demonstrate that fair decision making requires extensive contextual understanding, and AI explanations help identify potential variables that are driving the unfair outcomes. It is found that different types of AI explanations affect human\u2019s fairness judgements differently. Some properties of features and social science theories need to be considered in making senses of fairness with explanations. 
Different challenges in building responsible AI for trustworthy decision making are identified from the perspective of explainability and fairness.<\/jats:p>","DOI":"10.1007\/978-3-031-04083-2_18","type":"book-chapter","created":{"date-parts":[[2022,4,16]],"date-time":"2022-04-16T17:03:23Z","timestamp":1650128603000},"page":"375-386","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":31,"title":["Towards Explainability for AI Fairness"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6034-644X","authenticated-orcid":false,"given":"Jianlong","family":"Zhou","sequence":"first","affiliation":[]},{"given":"Fang","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6786-5194","authenticated-orcid":false,"given":"Andreas","family":"Holzinger","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,17]]},"reference":[{"key":"18_CR1","unstructured":"Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv:1909.03012 [cs, stat] (2019)"},{"key":"18_CR2","unstructured":"Asuncion, A., Newman, D.: UCI machine learning repository (2007). https:\/\/archive.ics.uci.edu\/ml\/index.php"},{"key":"18_CR3","unstructured":"Baleis, J., Keller, B., Starke, C., Marcinkowski, F.: Cognitive and emotional response to fairness in AI - a systematic review (2019). https:\/\/www.semanticscholar.org\/paper\/Implications-of-AI-(un-)fairness-in-higher-the-of-Marcinkowski-Kieslich\/231929b1086badcbd149debb0abefc84cdb85665"},{"key":"18_CR4","doi-asserted-by":"crossref","unstructured":"Barocas, S., Selbst, A.D., Raghavan, M.: The hidden assumptions behind counterfactual explanations and principal reasons. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* 2020, pp. 
80\u201389 (2020)","DOI":"10.1145\/3351095.3372830"},{"key":"18_CR5","unstructured":"Begley, T., Schwedes, T., Frye, C., Feige, I.: Explainability for fair machine learning. CoRR abs\/2010.07389 (2020). https:\/\/arxiv.org\/abs\/2010.07389"},{"key":"18_CR6","unstructured":"Bellamy, R.K.E., et al.: AI fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. CoRR abs\/1810.01943 (2018). http:\/\/arxiv.org\/abs\/1810.01943"},{"issue":"8","key":"18_CR7","doi-asserted-by":"publisher","first-page":"832","DOI":"10.3390\/electronics8080832","volume":"8","author":"DV Carvalho","year":"2019","unstructured":"Carvalho, D.V., Pereira, E.M., Cardoso, J.S.: Machine learning interpretability: a survey on methods and metrics. Electronics 8(8), 832 (2019)","journal-title":"Electronics"},{"issue":"7623","key":"18_CR8","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1038\/538020a","volume":"538","author":"D Castelvecchi","year":"2016","unstructured":"Castelvecchi, D.: Can we open the black box of AI? Nat. News 538(7623), 20 (2016)","journal-title":"Nat. News"},{"issue":"5","key":"18_CR9","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1145\/3376898","volume":"63","author":"A Chouldechova","year":"2020","unstructured":"Chouldechova, A., Roth, A.: The frontiers of fairness in machine learning. Commun. ACM 63(5), 82\u201389 (2020). https:\/\/doi.org\/10.1145\/3376898","journal-title":"Commun. ACM"},{"key":"18_CR10","unstructured":"Corbett-Davies, S., Goel, S.: The measure and mismeasure of fairness: a critical review of fair machine learning. CoRR abs\/1808.00023 (2018). http:\/\/arxiv.org\/abs\/1808.00023"},{"key":"18_CR11","doi-asserted-by":"publisher","unstructured":"Coston, A., Mishler, A., Kennedy, E.H., Chouldechova, A.: Counterfactual risk assessments, evaluation, and fairness. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT 2020), pp. 582\u2013593 (2020). 
https:\/\/doi.org\/10.1145\/3351095.3372851","DOI":"10.1145\/3351095.3372851"},{"key":"18_CR12","doi-asserted-by":"crossref","unstructured":"Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K.E., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI 2019, pp. 275\u2013285 (2019)","DOI":"10.1145\/3301275.3302310"},{"key":"18_CR13","unstructured":"Ferreira, J.J., de Souza Monteiro, M.: Evidence-based explanation to promote fairness in AI systems. In: CHI2020 Fair and Responsible AI Workshop (2020)"},{"key":"18_CR14","doi-asserted-by":"crossref","unstructured":"Grgic-Hlaca, N., Redmiles, E.M., Gummadi, K.P., Weller, A.: Human perceptions of fairness in algorithmic decision making: a case study of criminal risk prediction. In: Proceedings of the 2018 World Wide Web Conference, WWW 2018, pp. 903\u2013912 (2018)","DOI":"10.1145\/3178876.3186138"},{"key":"18_CR15","doi-asserted-by":"crossref","unstructured":"Grgic-Hlaca, N., Zafar, M.B., Gummadi, K.P., Weller, A.: Beyond distributive fairness in algorithmic decision making: feature selection for procedurally fair learning. In: Proceedings of the Thirty-Second AAAI Conferenceon Artificial Intelligence (AAAI-18), pp. 51\u201360 (2018)","DOI":"10.1145\/3178876.3186138"},{"issue":"2","key":"18_CR16","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1007\/s40708-016-0042-6","volume":"3","author":"A Holzinger","year":"2016","unstructured":"Holzinger, A.: Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Inform. 3(2), 119\u2013131 (2016). 
https:\/\/doi.org\/10.1007\/s40708-016-0042-6","journal-title":"Brain Inform."},{"issue":"2","key":"18_CR17","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/s13218-020-00636-z","volume":"34","author":"A Holzinger","year":"2020","unstructured":"Holzinger, A., Carrington, A., Mueller, H.: Measuring the quality of explanations: the system causability scale (SCS). KI - Kuenstliche Intell. 34(2), 193\u2013198 (2020)","journal-title":"KI - Kuenstliche Intell."},{"issue":"7","key":"18_CR18","doi-asserted-by":"publisher","first-page":"28","DOI":"10.1016\/j.inffus.2021.01.008","volume":"71","author":"A Holzinger","year":"2021","unstructured":"Holzinger, A., Malle, B., Saranti, A., Pfeifer, B.: Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI. Inf. 71(7), 28\u201337 (2021). https:\/\/doi.org\/10.1016\/j.inffus.2021.01.008","journal-title":"Inf."},{"key":"18_CR19","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/978-3-030-84060-0_1","volume-title":"Machine Learning and Knowledge Extraction","author":"A Holzinger","year":"2021","unstructured":"Holzinger, A., Weippl, E., Tjoa, A.M., Kieseberg, P.: Digital transformation for sustainable development goals (SDGs) - a security, safety and privacy perspective on AI. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) CD-MAKE 2021. LNCS, vol. 12844, pp. 1\u201320. Springer, Cham (2021). https:\/\/doi.org\/10.1007\/978-3-030-84060-0_1"},{"issue":"1","key":"18_CR20","first-page":"42","volume":"112","author":"K Holzinger","year":"2018","unstructured":"Holzinger, K., Mak, K., Kieseberg, P., Holzinger, A.: Can we trust machine learning results? artificial intelligence in safety-critical decision support. 
ERCIM News 112(1), 42\u201343 (2018)","journal-title":"ERCIM News"},{"key":"18_CR21","doi-asserted-by":"crossref","unstructured":"Hutchinson, B., Mitchell, M.: 50 years of test (un)fairness: Lessons for machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, pp. 49\u201358 (2019)","DOI":"10.1145\/3287560.3287600"},{"key":"18_CR22","series-title":"Studies in Applied Philosophy, Epistemology and Rational Ethics","doi-asserted-by":"publisher","first-page":"155","DOI":"10.1007\/978-3-642-30487-3_8","volume-title":"Discrimination and Privacy in the Information Society","author":"F Kamiran","year":"2013","unstructured":"Kamiran, F., \u017dliobait\u0117, I.: Explainable and non-explainable discrimination in classification. In: Custers, B., Calders, T., Schermer, B., Zarsky, T. (eds.) Discrimination and Privacy in the Information Society. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 3, pp. 155\u2013170. Springer, Heidelberg (2013). https:\/\/doi.org\/10.1007\/978-3-642-30487-3_8"},{"key":"18_CR23","doi-asserted-by":"crossref","unstructured":"Kasirzadeh, A., Smart, A.: The use and misuse of counterfactuals in ethical machine learning. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2021), pp. 228\u2013236 (2021)","DOI":"10.1145\/3442188.3445886"},{"key":"18_CR24","unstructured":"Lee, M.S.A., Floridi, L., Singh, J.: Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. SSRN Scholarly Paper ID 3679975, Social Science Research Network, July 2020. https:\/\/papers.ssrn.com\/abstract=3679975"},{"key":"18_CR25","unstructured":"Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 
4768\u20134777 (2017)"},{"key":"18_CR26","unstructured":"McGrath, R., et al.: Interpretable credit application predictions with counterfactual explanations. CoRR abs\/1811.05245 (2018). http:\/\/arxiv.org\/abs\/1811.05245"},{"key":"18_CR27","unstructured":"Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. CoRR abs\/1908.09635 (2019). http:\/\/arxiv.org\/abs\/1908.09635"},{"key":"18_CR28","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.artint.2018.07.007","volume":"267","author":"T Miller","year":"2019","unstructured":"Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1\u201338 (2019)","journal-title":"Artif. Intell."},{"key":"18_CR29","doi-asserted-by":"crossref","unstructured":"Molnar, C., Casalicchio, G., Bischl, B.: Interpretable machine learning - a brief history, state-of-the-art and challenges. arXiv:2010.09337 [cs, stat], October 2020","DOI":"10.1007\/978-3-030-65965-3_28"},{"issue":"1","key":"18_CR30","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1057\/s41599-020-0501-9","volume":"7","author":"SL Piano","year":"2020","unstructured":"Piano, S.L.: Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanit. Soc. Sci. Commun. 7(1), 1\u20137 (2020). https:\/\/doi.org\/10.1057\/s41599-020-0501-9","journal-title":"Humanit. Soc. Sci. Commun."},{"key":"18_CR31","doi-asserted-by":"publisher","unstructured":"Robert Jr., L.P., Bansal, G., Melville, N., Stafford, T.: Introduction to the special issue on AI fairness, trust, and ethics. AIS Trans. Hum.-Comput. Interact. 12(4), 172\u2013178 (2020). https:\/\/doi.org\/10.17705\/1thci.00134","DOI":"10.17705\/1thci.00134"},{"key":"18_CR32","doi-asserted-by":"publisher","unstructured":"Rudin, C., Wang, C., Coker, B.: The age of secrecy and unfairness in recidivism prediction. Harv. Data Sci. Rev. 2(1) (2020). 
https:\/\/doi.org\/10.1162\/99608f92.6ed64b30, https:\/\/hdsr.mitpress.mit.edu\/pub\/7z10o269","DOI":"10.1162\/99608f92.6ed64b30"},{"key":"18_CR33","doi-asserted-by":"crossref","unstructured":"Saxena, N.A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D.C., Liu, Y.: How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. In: Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society, AIES 2019, pp. 99\u2013106 (2019)","DOI":"10.1145\/3306618.3314248"},{"key":"18_CR34","unstructured":"Schmidt, P., Biessmann, F.: Quantifying interpretability and trust in machine learning systems. In: Proceedings of AAAI Workshop on Network Interpretability for Deep Learning 2019 (2019)"},{"key":"18_CR35","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1007\/978-3-030-57321-8_12","volume-title":"Machine Learning and Knowledge Extraction","author":"D Schneeberger","year":"2020","unstructured":"Schneeberger, D., St\u00f6ger, K., Holzinger, A.: The European legal framework for medical AI. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) CD-MAKE 2020. LNCS, vol. 12279, pp. 209\u2013226. Springer, Cham (2020). https:\/\/doi.org\/10.1007\/978-3-030-57321-8_12"},{"key":"18_CR36","unstructured":"Schumann, C., Foster, J.S., Mattei, N., Dickerson, J.P.: We need fairness and explainability in algorithmic hiring. In: Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2020, pp. 1716\u20131720 (2020)"},{"key":"18_CR37","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1016\/j.chb.2019.04.019","volume":"98","author":"D Shin","year":"2019","unstructured":"Shin, D., Park, Y.J.: Role of fairness, accountability, and transparency in algorithmic affordance. Comput. Hum. Behav. 98, 277\u2013284 (2019)","journal-title":"Comput. Hum. 
Behav."},{"key":"18_CR38","doi-asserted-by":"crossref","unstructured":"Starke, C., Baleis, J., Keller, B., Marcinkowski, F.: Fairness perceptions of algorithmic decision-making: a systematic review of the empirical literature (2021)","DOI":"10.1177\/20539517221115189"},{"key":"18_CR39","doi-asserted-by":"crossref","unstructured":"Wang, X., Yin, M.: Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making, pp. 318\u2013328. ACM (2021)","DOI":"10.1145\/3397481.3450650"},{"issue":"1","key":"18_CR40","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1080\/0731129X.2021.1893932","volume":"40","author":"R Warner","year":"2021","unstructured":"Warner, R., Sloan, R.H.: Making artificial intelligence transparent: fairness and the problem of proxy variables. Crim. Just. Ethics 40(1), 23\u201339 (2021)","journal-title":"Crim. Just. Ethics"},{"key":"18_CR41","doi-asserted-by":"crossref","unstructured":"Zhao, J., Wang, T., Yatskar, M., Ordonez, V., Chang, K.W.: Men also like shopping: reducing gender bias amplification using corpus-level constraints. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2979\u20132989. Copenhagen, Denmark, September 2017","DOI":"10.18653\/v1\/D17-1323"},{"key":"18_CR42","series-title":"Human\u2013Computer Interaction Series","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/978-3-319-90403-0_1","volume-title":"Human and Machine Learning","author":"J Zhou","year":"2018","unstructured":"Zhou, J., Chen, F.: 2D transparency space\u2014bring domain users and machine learning experts together. In: Zhou, J., Chen, F. (eds.) Human and Machine Learning. HIS, pp. 3\u201319. Springer, Cham (2018). 
https:\/\/doi.org\/10.1007\/978-3-319-90403-0_1"},{"key":"18_CR43","series-title":"Human-Computer Interaction Series","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-90403-0","volume-title":"Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent","year":"2018","unstructured":"Zhou, J., Chen, F. (eds.): Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent. Human-Computer Interaction Series, Springer, Cham (2018). https:\/\/doi.org\/10.1007\/978-3-319-90403-0"},{"issue":"5","key":"18_CR44","doi-asserted-by":"publisher","first-page":"593","DOI":"10.3390\/electronics10050593","volume":"10","author":"J Zhou","year":"2021","unstructured":"Zhou, J., Gandomi, A.H., Chen, F., Holzinger, A.: Evaluating the quality of machine learning explanations: a survey on methods and metrics. Electronics 10(5), 593 (2021)","journal-title":"Electronics"},{"issue":"4","key":"18_CR45","first-page":"378","volume":"13","author":"J Zhou","year":"2016","unstructured":"Zhou, J., Khawaja, M.A., Li, Z., Sun, J., Wang, Y., Chen, F.: Making machine learning useable by revealing internal states update\u2014a transparent approach. Int. J. Comput. Sci. Eng. 13(4), 378\u2013389 (2016)","journal-title":"Int. J. Comput. Sci. 
Eng."}],"container-title":["Lecture Notes in Computer Science","xxAI - Beyond Explainable AI"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-04083-2_18","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T08:12:17Z","timestamp":1675325537000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-04083-2_18"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022]]},"ISBN":["9783031040825","9783031040832"],"references-count":45,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-04083-2_18","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022]]},"assertion":[{"value":"17 April 2022","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"xxAI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Vienna","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Austria","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2020","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 July 2020","order":7,"name":"conference_start_date","label":"Conference Start 
Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 July 2020","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"xxai2020","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/human-centered.ai\/xxai-icml-2020\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}