{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T10:12:14Z","timestamp":1775038334292,"version":"3.50.1"},"reference-count":105,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,5,8]],"date-time":"2023-05-08T00:00:00Z","timestamp":1683504000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,8]],"date-time":"2023-05-08T00:00:00Z","timestamp":1683504000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100016377","name":"Ministerium f\u00fcr Wirtschaft, Innovation, Digitalisierung und Energie des Landes Nordrhein-Westfalen","doi-asserted-by":"publisher","award":["005-2011-0050"],"award-info":[{"award-number":["005-2011-0050"]}],"id":[{"id":"10.13039\/501100016377","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100008131","name":"Rheinische Friedrich-Wilhelms-Universit\u00e4t Bonn","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100008131","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2024,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In the current European debate on the regulation of Artificial Intelligence there is a consensus that Artificial Intelligence (AI) systems should be fair. However, the multitude of existing indicators\u00a0allowing an AI system to be labeled as \u201c(un)fair\u201d and the lack of standardized, application field specific criteria to choose among the various fairness-evaluation methods makes it difficult for potential auditors to arrive at a final, consistent judgment. 
Focusing on a concrete use case in the application field of finance, the main goal of this paper is to define standardizable minimal ethical requirements for AI fairness-evaluation. For the applied case of creditworthiness assessment for small personal loans, we highlighted specific distributive and procedural fairness issues inherent either to the computing process or to the system\u2019s use in a real-world scenario: (1) the unjustified unequal distribution of predictive outcome; (2) the perpetuation of existing bias and discrimination practices; (3) the lack of transparency concerning the processed data and of an explanation of the algorithmic outcome for credit applicants. We addressed these issues proposing minimal ethical requirements for this specific application field: (1) regularly checking algorithmic outcome through the conditional demographic parity metric; (2) excluding from the group of processed parameters those that could lead to discriminatory outcome; (3) guaranteeing transparency about the processed data, in addition to counterfactual explainability of algorithmic decisions. Defining these minimal ethical requirements represents the main contribution of this paper and a starting point toward standards specifically addressing fairness issues in AI systems for creditworthiness assessments aiming at preventing unfair algorithmic outcomes, in addition to unfair practices related to the use of these systems. 
As a final result, we indicate the next steps that can be taken to begin the standardization of the three use case-specific fairness requirements we propose.<\/jats:p>","DOI":"10.1007\/s43681-023-00291-8","type":"journal-article","created":{"date-parts":[[2023,5,8]],"date-time":"2023-05-08T11:01:57Z","timestamp":1683543717000},"page":"537-553","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":13,"title":["Standardizing fairness-evaluation procedures: interdisciplinary insights on machine learning algorithms in creditworthiness assessments for small personal loans"],"prefix":"10.1007","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6803-5029","authenticated-orcid":false,"given":"Sergio","family":"Genovesi","sequence":"first","affiliation":[]},{"given":"Julia Maria","family":"M\u00f6nig","sequence":"additional","affiliation":[]},{"given":"Anna","family":"Schmitz","sequence":"additional","affiliation":[]},{"given":"Maximilian","family":"Poretschkin","sequence":"additional","affiliation":[]},{"given":"Maram","family":"Akila","sequence":"additional","affiliation":[]},{"given":"Manoj","family":"Kahdan","sequence":"additional","affiliation":[]},{"given":"Romina","family":"Kleiner","sequence":"additional","affiliation":[]},{"given":"Lena","family":"Krieger","sequence":"additional","affiliation":[]},{"given":"Alexander","family":"Zimmermann","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,8]]},"reference":[{"key":"291_CR1","doi-asserted-by":"publisher","first-page":"267","DOI":"10.1016\/S0065-2601(08)60108-2","volume":"2","author":"JS Adams","year":"1965","unstructured":"Adams, J.S.: Inequity in social exchange. Adv. Exp. Soc. Psychol. 2, 267\u2013299 (1965)","journal-title":"Adv. Exp. Soc. 
Psychol."},{"key":"291_CR2","first-page":"37","volume-title":"Autonomous Systems and the Law","author":"N Aggarwal","year":"2018","unstructured":"Aggarwal, N.: Machine learning, big data and the regulation of consumer credit markets: the case of algorithmic credit scoring. In: Aggarwal, N., Eidenm\u00fcller, H., Enriques, L., Payne, J., van Zwieten, K. (eds.) Autonomous Systems and the Law, pp. 37\u201344. Beck, M\u00fcnchen (2018)"},{"key":"291_CR3","first-page":"1086","volume":"26","author":"Y Ahn","year":"2020","unstructured":"Ahn, Y., Lin, Y.R.: Fairsight: visual analytics for fairness in decision making. IEEE Trans. Vis. Comput. Graph. 26, 1086\u20131095 (2020)","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"291_CR4","doi-asserted-by":"publisher","first-page":"1054","DOI":"10.1002\/smj.846","volume":"31","author":"A Ari\u00f1o","year":"2010","unstructured":"Ari\u00f1o, A., Ring, P.S.: The role of fairness in alliance formation. Strat. Manag. J. 31, 1054\u20131087 (2010)","journal-title":"Strat. Manag. J."},{"key":"291_CR5","unstructured":"BaFin, Big data and artificial intelligence: Principles for the use of algorithms in decision-making processes, https:\/\/www.bafin.de\/SharedDocs\/Downloads\/EN\/Aufsichtsrecht\/dl_Prinzipienpapier_BDAI_en.html (2021). Accessed 3 March 2023."},{"key":"291_CR6","unstructured":"Balayn, A., G\u00fcrses, S., Beyond Debiasing: Regulating AI and its Inequalities. Accessed 16 December 2022"},{"key":"291_CR7","unstructured":"Barocas, S., Hardt, M., Narayanan, A.: Fairness in machine learning. Nips Tutorial. 1\/2 (2017)"},{"key":"291_CR8","unstructured":"Barocas, S., Hardt, M., Narayanan, A. Fairness and machine learning. https:\/\/fairmlbook.org\/ (2019). 
Accessed 30 August 2022"},{"key":"291_CR9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1147\/JRD.2019.2942287","volume":"63","author":"RKE Bellamy","year":"2019","unstructured":"Bellamy, R.K.E., Dey, K., Hind, M., Hoffman, S.C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K.N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K.R., Zhang, Y.: AI Fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias. IBM J. Res. Dev. 63, 1\u201315 (2019)","journal-title":"IBM J. Res. Dev."},{"key":"291_CR10","first-page":"43","volume-title":"Interactional Justice: Communication Criteria of Fairness Research on Negotiation in Organizations","author":"RJ Bies","year":"1986","unstructured":"Bies, R.J., Moag, J.F.: Interactional Justice: Communication Criteria of Fairness Research on Negotiation in Organizations, 1st edn., pp. 43\u201355. JAI Press, Greenwich (1986)","edition":"1"},{"key":"291_CR11","first-page":"1","volume":"81","author":"R Binns","year":"2018","unstructured":"Binns, R.: Fairness in machine learning: lessons from political philosophy. Proc. Mach. Learn. Res. 81, 1\u201311 (2018)","journal-title":"Proc. Mach. Learn. Res."},{"key":"291_CR12","unstructured":"Breck, E., Polyzotis, N., Roy, S., Whang, S., Zinkevich, M.: Data validation for machine learning. Proceedings of the 2nd SysML conference, Palo Alto, CA, USA (2019)"},{"key":"291_CR13","first-page":"1877","volume":"33","author":"TB Brown","year":"2020","unstructured":"Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., et al.: Language models are few\u2013shot learners. Adv. Neural Inf. Process. Syst. 33, 1877\u20131901 (2020)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"291_CR14","unstructured":"Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. 
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77\u201391 (2018)"},{"key":"291_CR15","doi-asserted-by":"crossref","unstructured":"Chakraborty, J., Majumder, S., Menzies, T.: Bias in machine learning software: why? how? what to do? In: Proceedings of the 29th ACM joint meeting on European software engineering Conference and Symposium on the. foundations of software engineering, 429\u2013440 (2021)","DOI":"10.1145\/3468264.3468537"},{"key":"291_CR16","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1613\/jair.953","volume":"16","author":"NV Chawla","year":"2002","unstructured":"Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over\u2013sampling technique. J. Artif. Intell. Res. 16, 321\u2013357 (2002)","journal-title":"J. Artif. Intell. Res."},{"key":"291_CR17","doi-asserted-by":"publisher","first-page":"8211","DOI":"10.3390\/su12198211","volume":"12","author":"ZM \u00c7\u0131nar","year":"2020","unstructured":"\u00c7\u0131nar, Z.M., Abdussalam Nuhu, A., Zeeshan, Q., Korhan, O., Asmael, M., Safaei, B.: Machine learning in predictive maintenance toward sustainable smart manufacturing in industry 4.0. Sustainability. 12, 8211 (2020)","journal-title":"Sustainability."},{"key":"291_CR18","doi-asserted-by":"publisher","first-page":"67","DOI":"10.1007\/s43681-020-00007-2","volume":"1","author":"M Coeckelbergh","year":"2021","unstructured":"Coeckelbergh, M.: AI for climate: freedom, justice, and other ethical and political challenges. AI Ethics 1, 67\u201372 (2021). https:\/\/doi.org\/10.1007\/s43681-020-00007-2","journal-title":"AI Ethics"},{"key":"291_CR19","doi-asserted-by":"publisher","first-page":"906","DOI":"10.1086\/293126","volume":"99","author":"GA Cohen","year":"1989","unstructured":"Cohen, G.A.: On the currency of egalitarian justice. 
Ethics 99, 906\u2013944 (1989)","journal-title":"Ethics"},{"key":"291_CR20","doi-asserted-by":"publisher","first-page":"386","DOI":"10.1037\/0021-9010.86.3.386","volume":"86","author":"JA Colquitt","year":"2001","unstructured":"Colquitt, J.A.: On the dimensionality of organizational justice: a construct validation of a measure. J. Appl. Psychol. 86, 386\u2013400 (2001)","journal-title":"J. Appl. Psychol."},{"key":"291_CR21","doi-asserted-by":"publisher","first-page":"1183","DOI":"10.5465\/amj.2007.0572","volume":"54","author":"JA Colquitt","year":"2011","unstructured":"Colquitt, J.A., Rodell, J.B.: Justice, trust, and trustworthiness: a longitudinal analysis integrating three theoretical perspectives. Acad. Manag. J. 54, 1183\u20131206 (2011)","journal-title":"Acad. Manag. J."},{"key":"291_CR22","first-page":"187","volume-title":"The Oxford Handbook of Justice in the Workplace","author":"JA Colquitt","year":"2015","unstructured":"Colquitt, J.A., Rodell, J.B.: Measuring justice and fairness. In: Cropanzano, R.S., Ambrose, M.L. (eds.) The Oxford Handbook of Justice in the Workplace, pp. 187\u2013202. OUP, Oxford (2015)"},{"key":"291_CR23","unstructured":"Cremers, A.B. et al.. Trustworthy Use of Artificial Intelligence. Priorities from a philosophical, ethical, legal, and technological viewpoint as a basis for certification of Artificial Intelligence, https:\/\/www.ki.nrw\/wp-content\/uploads\/2020\/03\/Whitepaper_Thrustworthy_AI.pdf (2019). Accessed 28 January 2022"},{"key":"291_CR24","unstructured":"Datenetikkommission der Bundesregierung. Gutachten der Datenethikkommission der Bundesregierung, https:\/\/www.bmi.bund.de\/SharedDocs\/downloads\/DE\/publikationen\/themen\/it-digitalpolitik\/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=6 (2019). 
Accessed 28 January 2022"},{"key":"291_CR25","doi-asserted-by":"publisher","first-page":"849","DOI":"10.1136\/jech.2006.052969","volume":"61","author":"FG De Maio","year":"2007","unstructured":"De Maio, F.G.: Income inequality measures. J. Epidemiol. Community Health. 61, 849\u2013852 (2007)","journal-title":"J. Epidemiol. Community Health."},{"key":"291_CR26","doi-asserted-by":"publisher","first-page":"9","DOI":"10.1134\/S1054661816010065","volume":"26","author":"PN Druzhkov","year":"2016","unstructured":"Druzhkov, P.N., Kustikova, V.D.: A survey of deep learning methods and software tools for image classification and object detection. Pattern Recognit. Image Anal. 26, 9\u201315 (2016)","journal-title":"Pattern Recognit. Image Anal."},{"key":"291_CR27","doi-asserted-by":"crossref","unstructured":"Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd innovations in theoretical computer science conference, 214\u2013226 (2012)","DOI":"10.1145\/2090236.2090255"},{"key":"291_CR28","first-page":"185","volume":"2","author":"R Dworkin","year":"1981","unstructured":"Dworkin, R.: What is equality? Part 1: Equality of welfare. Philos. Publ. Aff. 2, 185\u2013246 (1981)","journal-title":"Philos. Publ. Aff."},{"key":"291_CR29","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:oso\/9780198732877.001.0001","volume-title":"Discrimination and Disrespect","author":"B Eidelson","year":"2015","unstructured":"Eidelson, B.: Discrimination and Disrespect. OUP, Oxford (2015)"},{"key":"291_CR30","unstructured":"European Banking Authority, Final Report \u2013 Guidelines on Loan Origination and Monitoring, https:\/\/www.eba.europa.eu\/sites\/default\/documents\/files\/document_library\/Publications\/Guidelines\/2020\/Guidelines%20on%20loan%20origination%20and%20monitoring\/884283\/EBA%20GL%202020%2006%20Final%20Report%20on%20GL%20on%20loan%20origination%20and%20monitoring.pdf (2020). 
Accessed 02 March 2023"},{"key":"291_CR31","unstructured":"European Commission. Proposal for A regulation of the European Parliament and of the council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts, https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=celex%3A52021PC0206 (2021). Accessed 18 March 2022"},{"key":"291_CR32","doi-asserted-by":"publisher","first-page":"217","DOI":"10.1007\/s10551-013-1694-2","volume":"121","author":"JL Ferguson","year":"2014","unstructured":"Ferguson, J.L., Ellen, P.S., Bearden, W.O.: Procedural and distributive fairness: Determinants of overall price fairness. J. Bus. Ethics. 121, 217\u2013231 (2014)","journal-title":"J. Bus. Ethics."},{"key":"291_CR33","volume-title":"Reconsidering Value and Labor in the Digital Age","author":"E Fisher","year":"2015","unstructured":"Fisher, E., Fuchs, C.: Reconsidering Value and Labor in the Digital Age. Palgrave Macmillan, Basingstoke (2015)"},{"key":"291_CR34","doi-asserted-by":"publisher","first-page":"115","DOI":"10.2307\/256422","volume":"32","author":"R Folger","year":"1989","unstructured":"Folger, R., Konovsky, M.A.: Effects of procedural and distributive justice on reactions to pay raise decisions. Acad. Manag. J. 32, 115\u2013130 (1989)","journal-title":"Acad. Manag. J."},{"key":"291_CR35","doi-asserted-by":"crossref","unstructured":"Friedler, S.A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E.P., Roth, D.: A comparative study of fairness\u2013enhancing interventions in machine learning. In: Proceedings of the conference on fairness, accountability, and transparency, 329\u2013338 (2019)","DOI":"10.1145\/3287560.3287589"},{"key":"291_CR36","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1145\/3458723","volume":"64","author":"T Gebru","year":"2021","unstructured":"Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Iii, I.: Datasheets for datasets. Commun. 
ACM. 64, 86\u201392 (2021)","journal-title":"Commun. ACM."},{"key":"291_CR37","doi-asserted-by":"publisher","first-page":"4157","DOI":"10.3390\/su14074157","volume":"14","author":"S Genovesi","year":"2022","unstructured":"Genovesi, S., M\u00f6nig, J.M.: Acknowledging sustainability in the framework of ethical certification for AI. Sustainability. 14, 4157 (2022)","journal-title":"Sustainability."},{"key":"291_CR38","doi-asserted-by":"publisher","first-page":"694","DOI":"10.2307\/258595","volume":"18","author":"SW Gilliland","year":"1993","unstructured":"Gilliland, S.W.: The perceived fairness of selection systems \u2013 an organizational justice perspective. Acad. Manag. Rev. 18, 694\u2013734 (1993)","journal-title":"Acad. Manag. Rev."},{"key":"291_CR39","doi-asserted-by":"crossref","unstructured":"Giovanola, B., Tiribelli, S.: Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine learning algorithms. AI & Soc, 1\u201315 (2022)","DOI":"10.1007\/s00146-023-01722-0"},{"key":"291_CR40","doi-asserted-by":"crossref","unstructured":"Gomez, O., Holter, S., Yuan, J., Bertini, E.: ViCE: visual counterfactual explanations for machine learning models. In: Proceedings of the 25th international conference on intelligent user interfaces, 531\u2013535 (2020)","DOI":"10.1145\/3377325.3377536"},{"key":"291_CR41","volume-title":"Deep Learning","author":"I Goodfellow","year":"2016","unstructured":"Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT press, Cambridge (2016)"},{"key":"291_CR42","unstructured":"Gould, E., Schieder, J., Geier, K., What is the gender pay gap and is it real? Economic Policy Institute. https:\/\/files.epi.org\/pdf\/112962.pdf (2016). 
Accessed 05 September 2022"},{"key":"291_CR43","first-page":"79","volume-title":"Justice in the Workplace: Approaching Fairness in Human Resource Management","author":"J Greenberg","year":"1993","unstructured":"Greenberg, J., Cropanzano, R.: The social side of fairness: interpersonal and informational classes of organizational justice. In: Cropanzano, R. (ed.) Justice in the Workplace: Approaching Fairness in Human Resource Management, pp. 79\u2013103. Lawrence Erlbaum Associates, Hillsdale (1993)"},{"key":"291_CR44","unstructured":"Grgi\u0107\u2013Hla\u010da, N., Zafar, M.B., Gummadi, K.P., Weller, A.: The case for process fairness in learning: feature selection for fair decision making. NIPS symposium on machine learning and the law (2016)"},{"key":"291_CR45","doi-asserted-by":"crossref","unstructured":"Grgi\u0107-Hla\u010da, N., Zafar, M.B., Gummadi, K.P., Weller, A.: Beyond distributive fairness in algorithmic decision making: feature selection for procedurally fair learning. AAAI.\u00a0Proceedings of the AAAI conference on artificial intelligence\u00a032. 32 (2018)","DOI":"10.1609\/aaai.v32i1.11296"},{"key":"291_CR46","first-page":"1","volume":"10","author":"V Gudivada","year":"2017","unstructured":"Gudivada, V., Apon, A., Ding, J.: Data quality considerations for big data and machine learning: Going beyond data cleaning and transformations. Int. J. Adv. Softw. 10, 1\u201320 (2017)","journal-title":"Int. J. Adv. Softw."},{"key":"291_CR47","unstructured":"H\u00e4berlein, L., M\u00f6nig, J.M., H\u00f6vel, P.: Mapping stakeholders and scoping involvement. A guide for HEFRCs, Deliverable 3.1 of the H2020\u2013project ETHNA System. 
https:\/\/ethnasystem.eu\/wp-content\/uploads\/2021\/10\/ETHNA_2021_d3.1-stakeholdermapping_2110011.pdf (2021) Accessed 10 May 2021"},{"key":"291_CR48","doi-asserted-by":"publisher","first-page":"1143","DOI":"10.54648\/COLA2018095","volume":"55","author":"P Hacker","year":"2018","unstructured":"Hacker, P.: Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law (April 18, 2018). Common Mark. Law Rev. 55, 1143\u20131186 (2018)","journal-title":"Common Mark. Law Rev."},{"key":"291_CR49","first-page":"2","volume":"29","author":"M Hardt","year":"2016","unstructured":"Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 29, 2 (2016)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"291_CR50","doi-asserted-by":"publisher","first-page":"304","DOI":"10.1086\/508435","volume":"33","author":"KL Haws","year":"2006","unstructured":"Haws, K.L., Bearden, W.O.: Dynamic pricing and consumer fairness perceptions. J. Consum. Res. 33, 304\u2013311 (2006)","journal-title":"J. Consum. Res."},{"key":"291_CR51","doi-asserted-by":"publisher","DOI":"10.1093\/oxfordhb\/9780198857815.013.21","volume-title":"The Oxford Handbook of Digital Ethics","author":"L Herzog","year":"2021","unstructured":"Herzog, L.: Algorithmic bias and access to opportunities. In: V\u00e9liz, C. (ed.) The Oxford Handbook of Digital Ethics. Oxford Academic, Oxford (2021). https:\/\/doi.org\/10.1093\/oxfordhb\/9780198857815.013.21"},{"key":"291_CR52","unstructured":"HLEG. HLEG on AI. Ethics guidelines for trustworthy AI. https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai (2019). Accessed 10 May 2022"},{"key":"291_CR53","unstructured":"HLEG. Assessment List for Trustworthy AI (Altai) for self\u2013assessment. https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment (2020). 
Accessed 01 December 2022"},{"key":"291_CR54","unstructured":"ISO\/IEC\u00a0TR\u00a024027:2021, Information technology \u2014 Artificial intelligence (AI) \u2014 Bias in AI systems and AI aided decision making. https:\/\/www.iso.org\/standard\/77607.html (2021). Accessed 22 December 2022"},{"key":"291_CR55","unstructured":"ISO\/IEC\u00a0TR.\u00a024028:2020, Information technology \u2014 Artificial intelligence \u2014 Overview of trustworthiness. https:\/\/www.iso.org\/standard\/77608.html (2020). Accessed 22 December 2022"},{"key":"291_CR56","unstructured":"ISO\/IEC\u00a0TR\u00a024368:2022, Information technology \u2014 Artificial intelligence \u2014 Overview of ethical and societal concerns. https:\/\/www.iso.org\/standard\/78507.html (2022). Accessed 22 December 2022"},{"key":"291_CR57","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10115-011-0463-8","volume":"33","author":"F Kamiran","year":"2012","unstructured":"Kamiran, F., Calders, T.: Data pre\u2013processing techniques for classification without discrimination. Knowl. Inf. Syst. 33, 1\u201333 (2012)","journal-title":"Knowl. Inf. Syst."},{"key":"291_CR58","doi-asserted-by":"publisher","first-page":"613","DOI":"10.1007\/s10115-012-0584-8","volume":"35","author":"F Kamiran","year":"2013","unstructured":"Kamiran, F., \u017dliobait\u0117, I., Calders, T.: Quantifying explainable discrimination and removing illegal discrimination in automated decision making. Knowl. Inf. Syst. 35, 613\u2013644 (2013)","journal-title":"Knowl. Inf. Syst."},{"key":"291_CR59","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1007\/978-3-642-33486-3_3","volume-title":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"T Kamishima","year":"2012","unstructured":"Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness\u2013aware classifier with prejudice remover regularizer. In: Flach, P.A., Bie, T., Cristianini, N. (eds.) 
Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 35\u201350. Springer, Berlin (2012)"},{"key":"291_CR60","doi-asserted-by":"crossref","unstructured":"Kasirzadeh, A. Algorithmic fairness and structural injustice. Insights from Feminist Political Philosophy. Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society 8, 349\u2013356 (2022)","DOI":"10.1145\/3514094.3534188"},{"key":"291_CR61","first-page":"656","volume":"30","author":"N Kilbertus","year":"2017","unstructured":"Kilbertus, N., Rojas, C.M., Parascandolo, G., Hardt, M., Janzing, D., Sch\u00f6lkopf, B.: Avoiding discrimination through causal reasoning. Adv. Neural Inf. Process. Syst. 30, 656\u2013666 (2017)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"291_CR62","first-page":"489","volume":"26","author":"MA Konovsky","year":"2000","unstructured":"Konovsky, M.A.: Understanding procedural justice and its impact on business organizations. J. Manag. 26, 489\u2013511 (2000)","journal-title":"J. Manag."},{"key":"291_CR63","first-page":"2","volume":"30","author":"MJ Kusner","year":"2017","unstructured":"Kusner, M.J., Loftus, J., Russell, C., Silva, R.: Counterfactual fairness. Adv. Neural Inf. Process. Syst. 30, 2 (2017)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"291_CR64","doi-asserted-by":"crossref","unstructured":"Lahoti, P., Gummadi, K.P., Weikum, G.: ifair: learning individually fair data representations for algorithmic decision making. 35th international conference on data engineering (icde), IEEE Publications, 1334\u20131345 (2019)","DOI":"10.1109\/ICDE.2019.00121"},{"key":"291_CR65","doi-asserted-by":"publisher","first-page":"165","DOI":"10.1007\/s11023-020-09529-4","volume":"31","author":"MSA Lee","year":"2021","unstructured":"Lee, M.S.A., Floridi, L.: Algorithmic fairness in mortgage lending: from absolute conditions to relational trade\u2013offs. Minds Mach. 
31, 165\u2013191 (2021)","journal-title":"Minds Mach."},{"key":"291_CR66","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1016\/S0065-2601(08)60059-3","volume":"9","author":"GS Leventhal","year":"1976","unstructured":"Leventhal, G.S.: The distribution of rewards and resources in groups and organizations. Adv. Exp. Soc. Psychol. 9, 91\u2013131 (1976)","journal-title":"Adv. Exp. Soc. Psychol."},{"key":"291_CR67","doi-asserted-by":"crossref","unstructured":"Leventhal, G.S.: What should be done with equity theory? In: Gergen, K.J., Greenberg, M.S., Willis, R.H. (eds.) Social exchange, pp 27\u201356, Springer, Boston (1980)","DOI":"10.1007\/978-1-4613-3087-5_2"},{"key":"291_CR68","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:oso\/9780199796113.001.0001","volume-title":"Born Free and Equal? A Philosophical Inquiry into the Nature of Discrimination","author":"K Lippert-Rasmussen","year":"2013","unstructured":"Lippert-Rasmussen, K.: Born Free and Equal? A Philosophical Inquiry into the Nature of Discrimination. OUP, New York (2013)"},{"key":"291_CR69","doi-asserted-by":"crossref","unstructured":"Madaio, M.A., Stark, L., Wortman Vaughan, J.W., Wallach, H.: Co\u2013 designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI conference on human factors in computing systems, 1\u201314 (2020)","DOI":"10.1145\/3313831.3376445"},{"key":"291_CR70","doi-asserted-by":"publisher","first-page":"357","DOI":"10.1016\/j.jbusres.2020.06.006","volume":"117","author":"P Mathur","year":"2020","unstructured":"Mathur, P., Sarin Jain, S.S.: Not all that glitters is golden: the impact of procedural fairness perceptions on firm evaluations and customer satisfaction with favorable outcomes. J. Bus. Res. 117, 357\u2013367 (2020)","journal-title":"J. Bus. 
Res."},{"key":"291_CR71","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3457607","volume":"54","author":"N Mehrabi","year":"2021","unstructured":"Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54, 1\u201335 (2021)","journal-title":"ACM Comput. Surv."},{"key":"291_CR72","doi-asserted-by":"publisher","DOI":"10.1177\/2053951716679679","author":"BD Mittelstadt","year":"2016","unstructured":"Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. (2016). https:\/\/doi.org\/10.1177\/2053951716679679","journal-title":"Big Data Soc."},{"key":"291_CR73","unstructured":"Molnar, C. Interpretable machine learning. A guide for making black box models explainable, https:\/\/christophm.github.io\/interpretable-ml-book\/ (2022). Accessed 22 December 2022"},{"key":"291_CR74","first-page":"105","volume-title":"Demokratie und Digitalisierung","author":"JM M\u00f6nig","year":"2020","unstructured":"M\u00f6nig, J.M.: Privatheit als Luxusgut in der Demokratie? In: Grimm, P., Z\u00f6llner, O. (eds.) Demokratie und Digitalisierung, pp. 105\u2013114. Steiner, Stuttgart (2020)"},{"key":"291_CR75","doi-asserted-by":"publisher","DOI":"10.28968\/cftt.v6i2.33043","author":"S Myers West","year":"2020","unstructured":"Myers West, S.: Redistribution and recognition. A feminist critique of algorithmic fairness. Catalyst. (2020). https:\/\/doi.org\/10.28968\/cftt.v6i2.33043","journal-title":"Catalyst."},{"key":"291_CR76","doi-asserted-by":"publisher","DOI":"10.18574\/nyu\/9781479833641.001.0001","volume-title":"Algorithms of oppression: how search engines reinforce racism","author":"SU Noble","year":"2018","unstructured":"Noble, S.U.: Algorithms of oppression: how search engines reinforce racism. 
New York University Press, New York (2018)"},{"key":"291_CR77","first-page":"897","volume":"25","author":"R Pillai","year":"1999","unstructured":"Pillai, R., Schriesheim, C.A., Williams, E.S.: Fairness perceptions and trust as mediators for transformational and transactional leadership: A two\u2013sample study. J. Manag. 25, 897\u2013933 (1999)","journal-title":"J. Manag."},{"key":"291_CR78","first-page":"2","volume":"30","author":"G Pleiss","year":"2017","unstructured":"Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., Weinberger, K.Q.: On fairness and calibration. Adv. Neural Inf. Process. Syst. 30, 2 (2017)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"291_CR79","doi-asserted-by":"publisher","first-page":"1508","DOI":"10.1002\/smj.2175","volume":"35","author":"L Poppo","year":"2014","unstructured":"Poppo, L., Zhou, K.Z.: Managing contracts for fairness in buyer\u2013supplier exchanges. Strat. Manag. J. 35, 1508\u20131527 (2014)","journal-title":"Strat. Manag. J."},{"key":"291_CR80","unstructured":"Poretschkin, M., Schmitz, A., Akila, M., Adilova, L., Becker, D., Cremers, A.B., Hecker, D., Houben, S., Mock, M., Rosenzweig, J. et al. Leitfaden zur Gestaltung vertrauensw\u00fcrdiger K\u00fcnstlicher Intelligenz. https:\/\/www.iais.fraunhofer.de\/content\/dam\/iais\/fb\/Kuenstliche_intelligenz\/ki-pruefkatalog\/202107_KI-Pruefkatalog.pdf (2021). Accessed 10 March 2021"},{"key":"291_CR81","doi-asserted-by":"publisher","first-page":"866","DOI":"10.7326\/M18-1990","volume":"169","author":"A Rajkomar","year":"2018","unstructured":"Rajkomar, A., Hardt, M., Howell, M.D., Corrado, G., Chin, M.H.: Ensuring fairness in machine learning to advance health equity. Ann. Intern. Med. 169, 866\u2013872 (2018)","journal-title":"Ann. Intern. Med."},{"key":"291_CR82","doi-asserted-by":"publisher","DOI":"10.4159\/9780674042582","volume-title":"A Theory of Justice","author":"J Rawls","year":"1999","unstructured":"Rawls, J.: A Theory of Justice. 
Harvard University Press, Cambridge MA (1999)"},{"key":"291_CR83","doi-asserted-by":"publisher","DOI":"10.2307\/j.ctv31xf5v0","volume-title":"Justice as Fairness: A Restatement","author":"J Rawls","year":"2001","unstructured":"Rawls, J.: Justice as Fairness: A Restatement. Harvard University Press, Cambridge MA (2001)"},{"key":"291_CR84","unstructured":"Rohde, F. et al.: Nachhaltigkeitskriterien f\u00fcr k\u00fcnstliche Intelligenz. Entwicklung eines Kriterien\u2013 und Indikatorensets f\u00fcr die Nachhaltigkeitsbewertung von KI\u2013Systemen entlang des Lebenszyklus. Schriftenr. I\u00d6W. 220\/21. https:\/\/www.ioew.de\/fileadmin\/user_upload\/BILDER_und_Downloaddateien\/Publikationen\/2021\/IOEW_SR_220_Nachhaltigkeitskriterien_fuer_Kuenstliche_Intelligenz.pdf (2021). Accessed 18 January 2022"},{"key":"291_CR85","doi-asserted-by":"publisher","DOI":"10.1628\/978-3-16-151202-5","volume-title":"Organschaft im Recht der privaten Verb\u00e4nde","author":"J Sch\u00fcrnbrand","year":"2007","unstructured":"Sch\u00fcrnbrand, J.: Organschaft im Recht der privaten Verb\u00e4nde. Mohr Siebeck, T\u00fcbingen (2007)"},{"key":"291_CR86","doi-asserted-by":"crossref","unstructured":"Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., Hall, P. Toward a standard for identifying and managing bias in artificial intelligence, National Institute of Standards and Technology. https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.1270.pdf (2022). Accessed 21 December 2022","DOI":"10.6028\/NIST.SP.1270"},{"key":"291_CR87","volume-title":"Inequality Reexamined","author":"A Sen","year":"1992","unstructured":"Sen, A.: Inequality Reexamined. Clarendon Press, Oxford (1992)"},{"key":"291_CR88","doi-asserted-by":"publisher","first-page":"100","DOI":"10.2307\/256877","volume":"42","author":"DP Skarlicki","year":"1999","unstructured":"Skarlicki, D.P., Folger, R., Tesluk, P.: Personality as a moderator in the relationship between fairness and retaliation. Acad. Manag. J. 42, 100\u2013108 (1999)","journal-title":"Acad. Manag. J."},{"key":"291_CR89","doi-asserted-by":"crossref","unstructured":"Speicher, T., Heidari, H., Grgi\u0107\u2013Hla\u010da, N., Gummadi, K.P., Singla, A., Weller, A., Zafar, M.B.: A unified approach to quantifying algorithmic unfairness: measuring individual & group unfairness via inequality indices. In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 2239\u20132248 (2018)","DOI":"10.1145\/3219819.3220046"},{"key":"291_CR90","first-page":"1","volume":"17","author":"H Suresh","year":"2021","unstructured":"Suresh, H., Guttag, J.: A framework for understanding sources of harm throughout the machine learning life cycle. Equity Access Algor. Mech. Optim. 17, 1\u20139 (2021)","journal-title":"Equity Access Algor. Mech. Optim."},{"key":"291_CR91","doi-asserted-by":"publisher","DOI":"10.1177\/2053951717736335","author":"L Taylor","year":"2017","unstructured":"Taylor, L.: What is data justice? The case for connecting digital rights and freedoms globally. Big Data Soc. (2017). https:\/\/doi.org\/10.1177\/2053951717736335","journal-title":"Big Data Soc."},{"key":"291_CR92","volume-title":"Procedural Justice: A Psychological Analysis","author":"JW Thibaut","year":"1975","unstructured":"Thibaut, J.W., Walker, L.: Procedural Justice: A Psychological Analysis. Erlbaum Associates, Hillsdale (1975)"},{"key":"291_CR93","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1007\/s43681-021-00043-6","volume":"1","author":"A van Wynsberghe","year":"2021","unstructured":"van Wynsberghe, A.: Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics. 1, 213\u2013218 (2021)","journal-title":"AI Ethics."},{"key":"291_CR94","unstructured":"VDE SPEC 90012:2022 VCIO based description of systems for AI trustworthiness characterization. https:\/\/www.vde.com\/resource\/blob\/2176686\/a24b13db01773747e6b7bba4ce20ea60\/vde-spec-vcio-based-description-of-systems-for-ai-trustworthiness-characterisation-data.pdf (2022). Accessed 01 December 2022"},{"key":"291_CR95","volume-title":"Privacy is Power: Why and How You Should Take Back Control of Your Data","author":"C V\u00e9liz","year":"2020","unstructured":"V\u00e9liz, C.: Privacy is Power: Why and How You Should Take Back Control of Your Data. Bantam Press, London (2020)"},{"key":"291_CR96","doi-asserted-by":"crossref","unstructured":"Verma, S., Rubin, J.: Fairness definitions explained. Proceedings of the international workshop on software fairness, 1\u20137 (2018)","DOI":"10.1145\/3194770.3194776"},{"key":"291_CR97","first-page":"841","volume":"31","author":"S Wachter","year":"2018","unstructured":"Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: Automated decisions and the GDPR. SSRN J. 31, 841\u2013888 (2018)","journal-title":"SSRN J."},{"key":"291_CR98","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2021.105567","author":"S Wachter","year":"2021","unstructured":"Wachter, S., Mittelstadt, B., Russell, C.: Why fairness cannot be automated: bridging the gap between EU non\u2013discrimination law and AI. Comput. Law Secur. Rev. (2021). https:\/\/doi.org\/10.1016\/j.clsr.2021.105567","journal-title":"Comput. Law Secur. Rev."},{"key":"291_CR99","unstructured":"Wahlster, W., Winterhalter, C.: Deutsche Normungsroadmap. K\u00fcnstliche Intelligenz. DIN. https:\/\/www.dke.de\/resource\/blob\/2019482\/0c29125fa99ac4c897e2809c8ab343ff\/nr-ki-deutsch---download-data.pdf (2020). Accessed 21 December 2022"},{"key":"291_CR100","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1145\/240455.240479","volume":"39","author":"Y Wand","year":"1996","unstructured":"Wand, Y., Wang, R.Y.: Anchoring data quality dimensions in ontological foundations. Commun. ACM. 39, 86\u201395 (1996)","journal-title":"Commun. ACM."},{"key":"291_CR101","volume-title":"Justice and the Politics of Difference","author":"IM Young","year":"1990","unstructured":"Young, I.M.: Justice and the Politics of Difference. Princeton University Press, Princeton (1990)"},{"key":"291_CR102","doi-asserted-by":"publisher","first-page":"719","DOI":"10.1038\/s41551-018-0305-z","volume":"2","author":"KH Yu","year":"2018","unstructured":"Yu, K.H., Beam, A.L., Kohane, I.S.: Artificial intelligence in healthcare. Nat. Biomed. Eng. 2, 719\u2013731 (2018)","journal-title":"Nat. Biomed. Eng."},{"key":"291_CR103","doi-asserted-by":"crossref","unstructured":"Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. Proceedings of the 26th international conference on world wide web, 1171\u20131180 (2017)","DOI":"10.1145\/3038912.3052660"},{"key":"291_CR104","unstructured":"Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. International Conference on Machine Learning, 325\u2013333 (2013)"},{"key":"291_CR105","doi-asserted-by":"crossref","unstructured":"Zhang, B.H., Lemoine, B., Mitchell, M.: Mitigating unwanted biases with adversarial learning. Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society, 335\u2013340 (2018)","DOI":"10.1145\/3278721.3278779"}],"container-title":["AI and Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-023-00291-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43681-023-00291-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-023-00291-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,8]],"date-time":"2024-05-08T07:22:24Z","timestamp":1715152944000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43681-023-00291-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,8]]},"references-count":105,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,5]]}},"alternative-id":["291"],"URL":"https:\/\/doi.org\/10.1007\/s43681-023-00291-8","relation":{},"ISSN":["2730-5953","2730-5961"],"issn-type":[{"value":"2730-5953","type":"print"},{"value":"2730-5961","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,8]]},"assertion":[{"value":"22 December 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 April 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no relevant financial or non-financial interests to disclose. On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}