{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T21:39:45Z","timestamp":1772660385868,"version":"3.50.1"},"publisher-location":"Cham","reference-count":16,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783032083166","type":"print"},{"value":"9783032083173","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T00:00:00Z","timestamp":1760227200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2026]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Artificial Intelligence models are increasingly used for classification tasks in healthcare. However, many healthcare professionals and machine learning engineers are still unaware of how these models contribute to and amplify biases. This work introduces a new framework (FAIR-MED) for bias detection and fairness evaluation in healthcare AI models, with a particular emphasis on intersectional fairness, which accounts for the compounded effects of multiple demographic attributes rather than assessing bias in isolation. Current methods often focus solely on data biases and overlook the compounded impact of multiple demographic attributes (e.g., age, gender, socioeconomic status) leading to unequal outcomes across diverse patient populations. To bridge this gap, a comprehensive, model-agnostic framework that incorporates a <jats:italic>Compound Fairness Score<\/jats:italic> is proposed. 
This approach to fairness goes beyond traditional methods by providing insights into the compounded impact of biases across different groups. Additionally, entropy-based weighting is introduced to quantify and aggregate bias metrics in a data-driven manner, ensuring that fairness evaluations prioritize the most impactful sources of bias. The proposed framework is evaluated on widely adopted families of AI models (linear, non-linear, and neural network-based approaches) on an open-source breast cancer dataset. The results suggest that Neural Networks may be more prone to amplifying existing biases, while Random Forest models tend to exhibit a better fairness balance in this evaluation. Logistic Regression, the most interpretable among the evaluated models, demonstrated overall stability but exhibited noticeable accuracy disparities across age groups. By aligning with transparency and accountability principles outlined in the EU AI Act, FAIR-MED offers a systematic, interpretable, and reproducible approach to bias analysis and fairness assessment, contributing to the development of ethical, equitable, and trustworthy AI-driven healthcare solutions.\n<\/jats:p>","DOI":"10.1007\/978-3-032-08317-3_18","type":"book-chapter","created":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T03:36:44Z","timestamp":1760153804000},"page":"380-401","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["FAIR-MED: Bias Detection and\u00a0Fairness Evaluation in\u00a0Healthcare Focused 
XAI"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6645-5276","authenticated-orcid":false,"given":"Katsiaryna","family":"Bahamazava","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7990-3461","authenticated-orcid":false,"given":"Ruairi","family":"O\u2019Reilly","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,12]]},"reference":[{"issue":"1","key":"18_CR1","doi-asserted-by":"publisher","first-page":"267","DOI":"10.1007\/s43681-022-00147-7","volume":"3","author":"A Agarwal","year":"2023","unstructured":"Agarwal, A., Agarwal, H., Agarwal, N.: Fairness score and process standardization: framework for fairness certification in artificial intelligence systems. AI Ethics 3(1), 267\u2013279 (2023)","journal-title":"AI Ethics"},{"issue":"2","key":"18_CR2","doi-asserted-by":"publisher","first-page":"163","DOI":"10.3390\/biomimetics8020163","volume":"8","author":"AA Alhussan","year":"2023","unstructured":"Alhussan, A.A., Eid, M.M., Towfek, S., Khafaga, D.S.: Breast cancer classification depends on the dynamic dipper throated optimization algorithm. Biomimetics 8(2), 163 (2023)","journal-title":"Biomimetics"},{"key":"18_CR3","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1023\/A:1010933404324","volume":"45","author":"L Breiman","year":"2001","unstructured":"Breiman, L.: Random forests. Mach. Learn. 45, 5\u201332 (2001)","journal-title":"Mach. Learn."},{"key":"18_CR4","unstructured":"Chen, R.J., et al.: Algorithm fairness in AI for medicine and healthcare. arXiv preprint arXiv:2110.00603 (2021)"},{"key":"18_CR5","doi-asserted-by":"publisher","unstructured":"d\u2019Aloisio, G., Lisi, F.A., Lenzerini, M., Giacomo, G.D., Calvanese, D.: How fair are we? from conceptualization to automated assessment of fairness definitions. Softw. Syst. Model., 1\u201327 (2025). 
https:\/\/doi.org\/10.1007\/s10270-025-01277-2","DOI":"10.1007\/s10270-025-01277-2"},{"issue":"1","key":"18_CR6","doi-asserted-by":"publisher","first-page":"3","DOI":"10.3390\/sci6010003","volume":"6","author":"E Ferrara","year":"2023","unstructured":"Ferrara, E.: Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Science 6(1), 3 (2023)","journal-title":"Science"},{"issue":"246","key":"18_CR7","first-page":"1","volume":"25","author":"S Hu","year":"2024","unstructured":"Hu, S., Chen, G.H.: Fairness in survival analysis with distributionally robust optimization. J. Mach. Learn. Res. 25(246), 1\u201385 (2024)","journal-title":"J. Mach. Learn. Res."},{"key":"18_CR8","unstructured":"Istituto Nazionale di Statistica: Popolazione residente al 1 gennaio (2025). https:\/\/esploradati.istat.it\/databrowser\/#\/it\/dw\/categories\/IT1,POP,1.0\/POP_POPULATION\/DCIS_POPRES1. Accessed 28 Mar 2025"},{"key":"18_CR9","doi-asserted-by":"crossref","unstructured":"Li, Y., Chen, H., Zhang, L., Zhang, Y.: Fairness in survival outcome prediction for medical treatments. In: 2024 58th Annual Conference on Information Sciences and Systems (CISS), pp.\u00a01\u20136. IEEE (2024)","DOI":"10.1109\/CISS59072.2024.10480160"},{"key":"18_CR10","doi-asserted-by":"crossref","unstructured":"Park, J.I., Bozkurt, S., Park, J.W., Lee, S.: Evaluation of race\/ethnicity-specific survival machine learning models for hispanic and black patients with breast cancer. BMJ Health Care Inform. 30(1) (2023)","DOI":"10.1136\/bmjhci-2022-100666"},{"key":"18_CR11","doi-asserted-by":"crossref","unstructured":"Pfob, A., et\u00a0al.: 147p racial bias in pretreatment MRI radiomics features to predict response to neoadjuvant systemic treatment in breast cancer: a multicenter study in China, Germany, and the US. ESMO Open 9 (2024)","DOI":"10.1016\/j.esmoop.2024.103134"},{"key":"18_CR12","unstructured":"Saleiro, P., et al.: Aequitas: a bias and fairness audit toolkit. 
arXiv preprint arXiv:1811.05577 (2018)"},{"key":"18_CR13","doi-asserted-by":"publisher","unstructured":"Teng, J.: Seer breast cancer data (2019). https:\/\/doi.org\/10.21227\/a9qy-ph35. https:\/\/dx.doi.org\/10.21227\/a9qy-ph35","DOI":"10.21227\/a9qy-ph35"},{"key":"18_CR14","unstructured":"U.S. Food & Drug Administration: Artificial Intelligence and Machine Learning (AI\/ML)-Enabled Medical Devices (2024). https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices"},{"key":"18_CR15","doi-asserted-by":"crossref","unstructured":"Walshe, D., O\u2019Reilly, R.: Fair skin lesion classification workflows using transfer learning. In: 2022 33rd Irish Signals and Systems Conference (ISSC), pp.\u00a01\u20136. IEEE (2022)","DOI":"10.1109\/ISSC55427.2022.9826212"},{"key":"18_CR16","unstructured":"Weerts, H., Dud\u00edk, M., Edgar, R., Jalali, A., Lutz, R., Madaio, M.: FairLearn: assessing and improving fairness of AI systems (2023). 
http:\/\/jmlr.org\/papers\/v24\/23-0389.html"}],"container-title":["Communications in Computer and Information Science","Explainable Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-032-08317-3_18","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T03:36:49Z","timestamp":1760153809000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-032-08317-3_18"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,12]]},"ISBN":["9783032083166","9783032083173"],"references-count":16,"URL":"https:\/\/doi.org\/10.1007\/978-3-032-08317-3_18","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"value":"1865-0929","type":"print"},{"value":"1865-0937","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,12]]},"assertion":[{"value":"12 October 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"xAI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"World Conference on Explainable Artificial Intelligence","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Istanbul","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"T\u00fcrkiye","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2025","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"9 July 
2025","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"11 July 2025","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"3","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"xai2025","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/xaiworldconference.com\/2025\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}