{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T03:06:04Z","timestamp":1772766364335,"version":"3.50.1"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T00:00:00Z","timestamp":1651104000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T00:00:00Z","timestamp":1651104000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI &amp; Soc"],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications\/decisions between groups can be achieved thorough group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented. Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals.<\/jats:p>","DOI":"10.1007\/s00146-022-01441-y","type":"journal-article","created":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T19:04:38Z","timestamp":1651172678000},"page":"459-478","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":32,"title":["Algorithmic fairness through group parities? 
The case of COMPAS-SAPMOC"],"prefix":"10.1007","volume":"38","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7083-3487","authenticated-orcid":false,"given":"Francesca","family":"Lagioia","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4731-7860","authenticated-orcid":false,"given":"Riccardo","family":"Rovatti","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2210-0398","authenticated-orcid":false,"given":"Giovanni","family":"Sartor","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,28]]},"reference":[{"key":"1441_CR1","volume-title":"Prediction machines","author":"A Agrawal","year":"2018","unstructured":"Agrawal A, Gans J, Goldfarb A (2018) Prediction machines. Harvard Business Review Press, Cambridge"},{"key":"1441_CR2","unstructured":"Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias: there\u2019s software used across the country to predict future criminals and it\u2019s biased against blacks. ProPublica, May 23. https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 27 Jul 2021"},{"key":"1441_CR3","unstructured":"Barabas C, Dinakar H, Ito J, Virza M, Zittrain J (2018) Interventions over predictions: reframing the ethical debate for actuarial risk assessment. In: FAT 2018 proceedings, p 62\u201376"},{"key":"1441_CR4","unstructured":"Barocas S, Crawford K, Shapiro A, Wallach H (2017) The problem with bias: allocative versus representational harms in machine learning. In: 9th Annual conference of the special interest group for computing, information and society"},{"key":"1441_CR5","unstructured":"Barocas S, Hardt M, Narayanan A (2021) Fairness and machine learning. fairmlbook.org"},{"key":"1441_CR6","doi-asserted-by":"publisher","first-page":"671","DOI":"10.15779\/Z38BG31","volume":"104","author":"S Barocas","year":"2016","unstructured":"Barocas S, Selbst AD (2016) Big data\u2019s disparate impact. Calif Law Rev 104:671. https:\/\/doi.org\/10.15779\/Z38BG31","journal-title":"Calif Law Rev"},{"issue":"1","key":"1441_CR7","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1177\/0049124118782533","volume":"50","author":"R Berk","year":"2018","unstructured":"Berk R, Heidari H, Jabbari S, Kearns M, Roth A (2018) Fairness in criminal justice risk assessments: the state of the art. Sociol Methods Res 50(1):3\u201344. https:\/\/doi.org\/10.1177\/0049124118782533","journal-title":"Sociol Methods Res"},{"key":"1441_CR8","doi-asserted-by":"crossref","unstructured":"Binns, R. (2020). On the apparent conflict between individual and group fairness. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, p 514\u2013524","DOI":"10.1145\/3351095.3372864"},{"issue":"1","key":"1441_CR9","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1177\/0093854808326545","volume":"36","author":"T Brennan","year":"2009","unstructured":"Brennan T, Dieterich W, Ehret B (2009) Evaluating the predictive validity of the COMPAS risk and needs assessment system. Crim Justice Behav 36(1):21\u201340. https:\/\/doi.org\/10.1177\/0093854808326545","journal-title":"Crim Justice Behav"},{"issue":"2","key":"1441_CR10","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1089\/big.2016.0047","volume":"5","author":"A Chouldechova","year":"2017","unstructured":"Chouldechova A (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2):153\u2013163. 
https:\/\/doi.org\/10.1089\/big.2016.0047","journal-title":"Big Data"},{"key":"1441_CR11","first-page":"1","volume":"89","author":"DK Citron","year":"2014","unstructured":"Citron DK, Pasquale F (2014) The scored society: due process for automated predictions. Wash l Rev 89:1","journal-title":"Wash l Rev"},{"issue":"1","key":"1441_CR12","doi-asserted-by":"publisher","first-page":"62","DOI":"10.1177\/1358229120927947","volume":"20","author":"M De Vos","year":"2020","unstructured":"De Vos M (2020) The European court of justice and the march towards substantive equality in European Union anti-discrimination law. Int J Discrim Law 20(1):62\u201387. https:\/\/doi.org\/10.1177\/1358229120927947","journal-title":"Int J Discrim Law"},{"key":"1441_CR13","unstructured":"Dieterich W, Mendoza C, Brennan T (2016) COMPAS risk scales: demonstrating accuracy equity and predictive parity. Northpoint Inc 7 (7.4), 1."},{"key":"1441_CR14","unstructured":"Flores AW, Bechtel K, Lowenkamp CT (2016) False positives, false negatives, and false analyses: a rejoinder to machine bias: there\u2019s software used across the country to predict future criminals. and it\u2019s biased against blacks. Fed. Probation 80, 38"},{"issue":"3","key":"1441_CR15","doi-asserted-by":"publisher","first-page":"330","DOI":"10.1145\/230538.230561","volume":"14","author":"B Friedman","year":"1996","unstructured":"Friedman B, Nissenbaum H (1996) Bias in computer systems. ACM Trans Inf Syst (TOIS) 14(3):330\u2013347. https:\/\/doi.org\/10.1145\/230538.230561","journal-title":"ACM Trans Inf Syst (TOIS)"},{"issue":"7","key":"1441_CR16","doi-asserted-by":"publisher","first-page":"1445","DOI":"10.1109\/TKDE.2012.72","volume":"25","author":"S Hajian","year":"2012","unstructured":"Hajian S, Domingo-Ferrer J (2012) A methodology for direct and indirect discrimination prevention in data mining. IEEE Trans Knowl Data Eng 25(7):1445\u20131459. https:\/\/doi.org\/10.1109\/TKDE.2012.72","journal-title":"IEEE Trans Knowl Data Eng"},{"key":"1441_CR17","volume-title":"Against prediction profiling, policing, and punishing in an actuarial age","author":"BE Harcourt","year":"2008","unstructured":"Harcourt BE (2008) Against prediction profiling, policing, and punishing in an actuarial age. University of Chicago Press, Chicago"},{"key":"1441_CR18","unstructured":"Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413"},{"key":"1441_CR19","first-page":"811","volume":"106","author":"D Hellman","year":"2020","unstructured":"Hellman D (2020) Measuring algorithmic fairness. Va Law Rev 106:811","journal-title":"Va Law Rev"},{"key":"1441_CR20","volume-title":"Machine learning and society: impact, trust, transparency","author":"M Hildebrandt","year":"2020","unstructured":"Hildebrandt M (2020) The issue of bias. The framing powers of ML. In: Pelillo M, Scantamburlo T (eds) Machine learning and society: impact, trust, transparency. MIT Press, Cambridge"},{"key":"1441_CR21","unstructured":"Inc. W. R. (2020) Mathematica. Version 12.2. Champaign, IL"},{"key":"1441_CR22","unstructured":"Joseph M, Kearns M, Morgenstern J, Neel S, Roth A (2016) Rawlsian fairness for machine learning. arXiv preprint arXiv:1610.09559. 1(2)"},{"key":"1441_CR23","unstructured":"Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807"},{"key":"1441_CR24","unstructured":"Kusner MJ, Loftus JR, Russell C, Silva R (2017) Counterfactual fairness. 
arXiv preprint arXiv:1703.06856"},{"key":"1441_CR25","unstructured":"Larson J, Mattu S, Kirchner L, Angwin J (2018) How we analyzed the COMPAS recidivism algorithm, ProPublica, May 23. https:\/\/www.propublica.org\/article\/how-we-analyzed-the-compas-recidivism-algorithm. Accessed 27 July 2021"},{"key":"1441_CR26","unstructured":"Liptak A (2017) Sent to prison by a software program\u2019s secret algorithms, New York Times, May 1. https:\/\/www.nytimes.com\/2017\/05\/01\/us\/politics\/sent-to-prison-by-a-software-programs-secret-algorithms.html. Accessed 27 Jul 2021"},{"key":"1441_CR27","volume-title":"Reinventing capitalism in the age of big data","author":"V Mayer-Sch\u00f6nberger","year":"2018","unstructured":"Mayer-Sch\u00f6nberger V, Ramge T (2018) Reinventing capitalism in the age of big data. Basic Books, New York"},{"key":"1441_CR28","volume-title":"Weapons of math destruction: how big data increases inequality and threatens democracy","author":"C O\u2019Neil","year":"2016","unstructured":"O\u2019Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown, New York"},{"key":"1441_CR29","unstructured":"Oswald M, Babuta A (2019) Data analytics and algorithmic bias in policing, Royal United Services Institute for Defence and Security Studies. https:\/\/assets.publishing.service.gov.uk\/government\/uploads\/system\/uploads\/attachment_data\/file\/831750\/RUSI_Report_-_Algorithms_and_Bias_in_Policing.pdf"},{"key":"1441_CR30","doi-asserted-by":"publisher","DOI":"10.2307\/j.ctv31xf5v0","volume-title":"Justice as fairness: a restatement","author":"J Rawls","year":"2001","unstructured":"Rawls J (2001) Justice as fairness: a restatement. Harvard University Press, Cambridge"},{"issue":"3","key":"1441_CR31","doi-asserted-by":"publisher","first-page":"167","DOI":"10.1007\/s10676-018-9492-2","volume":"21","author":"PM Regan","year":"2019","unstructured":"Regan PM, Jesse J (2019) Ethical challenges of EdTech, big data and personalized learning: twenty-first century student sorting and tracking. Ethics Inf Technol 21(3):167\u2013179. https:\/\/doi.org\/10.1007\/s10676-018-9492-2","journal-title":"Ethics Inf Technol"},{"key":"1441_CR32","volume-title":"Fairness: theory and practice of distributive justice","author":"N Rescher","year":"2002","unstructured":"Rescher N (2002) Fairness: theory and practice of distributive justice. Transaction Publishers, Piscataway"},{"issue":"5","key":"1441_CR33","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206\u2013215. https:\/\/doi.org\/10.1038\/s42256-019-0048-x","journal-title":"Nat Mach Intell"},{"key":"1441_CR34","unstructured":"Tashea J (2017) Courts are using AI to sentence criminals. That must stop now. Wired, March 17. https:\/\/www.wired.com\/2017\/04\/courts-using-ai-sentence-criminals-must-stop-now\/. Accessed 27 Jul 2021"},{"key":"1441_CR35","doi-asserted-by":"publisher","first-page":"1080","DOI":"10.1093\/bjc\/azaa012","volume":"60","author":"G van Eijk","year":"2020","unstructured":"van Eijk G (2020) Inclusion and exclusion through risk-based justice: analysing combinations of risk assessment from pretrial detention to release. Br J Criminol 60:1080\u20131097. 
https:\/\/doi.org\/10.1093\/bjc\/azaa012","journal-title":"Br J Criminol"},{"issue":"1","key":"1441_CR36","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41467-019-14108-y","volume":"11","author":"R Vinuesa","year":"2020","unstructured":"Vinuesa R, Hossein Azizpour H, Leite I, Balaam M, Dignum V, Domisch S, Fell\u00e4nder A, Langhans SD, Tegmark M, Fuso Nerini F (2020) The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun 11(1):1\u201310. https:\/\/doi.org\/10.1038\/s41467-019-14108-y","journal-title":"Nat Commun"},{"issue":"3","key":"1441_CR37","first-page":"735","volume":"123","author":"S Wachter","year":"2021","unstructured":"Wachter, S., B. Mittelstadt, and C. Russell (2021) Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law. West Va Law Rev 123(3): 735-790","journal-title":"West Va Law Rev"},{"key":"1441_CR38","unstructured":"Yong E (2018) A popular algorithm is no better at predicting crimes than random people. The Atlantic. January 17. https:\/\/www.theatlantic.com\/technology\/archive\/2018\/01\/equivant-compas-algorithm\/550646\/. Accessed 27 Jul 2021"},{"key":"1441_CR39","doi-asserted-by":"crossref","unstructured":"Zafar MB, Valera I, Gomez Rodriguez M, Gummadi KP (2017) Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In: Proceedings of the 26th international conference on world wide web, p 1171\u20131180","DOI":"10.1145\/3038912.3052660"},{"issue":"2","key":"1441_CR40","doi-asserted-by":"publisher","first-page":"164","DOI":"10.1089\/big.2016.0061","volume":"5","author":"E Zeide","year":"2017","unstructured":"Zeide E (2017) The structural consequences of big data-driven education. Big Data 5(2):164\u2013172. https:\/\/doi.org\/10.1089\/big.2016.0061","journal-title":"Big Data"},{"issue":"4","key":"1441_CR41","doi-asserted-by":"publisher","first-page":"1060","DOI":"10.1007\/s10618-017-0506-1","volume":"31","author":"I \u017dliobait\u0117","year":"2017","unstructured":"\u017dliobait\u0117 I (2017) Measuring discrimination in algorithmic decision making. Data Min Knowl Disc 31(4):1060\u20131089. 
https:\/\/doi.org\/10.1007\/s10618-017-0506-1","journal-title":"Data Min Knowl Disc"}],"container-title":["AI &amp; SOCIETY"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01441-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00146-022-01441-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00146-022-01441-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,6,30]],"date-time":"2023-06-30T06:04:33Z","timestamp":1688105073000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00146-022-01441-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,28]]},"references-count":41,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["1441"],"URL":"https:\/\/doi.org\/10.1007\/s00146-022-01441-y","relation":{},"ISSN":["0951-5666","1435-5655"],"issn-type":[{"value":"0951-5666","type":"print"},{"value":"1435-5655","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,28]]},"assertion":[{"value":"30 July 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 April 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 August 2022","order":4,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Update","order":5,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Missing Open Access funding information has been added in the Funding Note.","order":6,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}}]}}
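
Editorial note: the abstract above makes a quantitative claim that a tiny simulation clarifies: a classifier that is equally accurate for two groups can still violate demographic parity when the groups' base rates differ, and group-dependent thresholds can equalise positive-classification rates. The Python sketch below is a minimal illustration of that claim under stated assumptions (synthetic Gaussian scores, base rates of 0.5 and 0.2); the helpers simulate_group and positive_rate and all numbers are illustrative inventions, not the paper's SAPMOC I/II models.

# Illustrative sketch (not the paper's models): equal per-group accuracy,
# different base rates, and group-dependent thresholding.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Draw binary outcomes and scores; the score distributions conditional
    on the outcome are identical for every group, so the classifier is
    equally accurate for each group at any shared threshold."""
    y = rng.random(n) < base_rate                      # true outcomes
    scores = np.clip(np.where(y, 0.65, 0.35) + rng.normal(0, 0.15, n), 0, 1)
    return y, scores

def positive_rate(scores, threshold):
    """Fraction classified positive at the given threshold."""
    return float((scores >= threshold).mean())

y_a, s_a = simulate_group(100_000, base_rate=0.5)      # group A: high base rate
y_b, s_b = simulate_group(100_000, base_rate=0.2)      # group B: low base rate

shared = 0.5
print("Shared threshold 0.5:")
print(f"  accuracy A:      {((s_a >= shared) == y_a).mean():.3f}")  # ~0.84
print(f"  accuracy B:      {((s_b >= shared) == y_b).mean():.3f}")  # ~0.84
print(f"  positive rate A: {positive_rate(s_a, shared):.3f}")       # ~0.50
print(f"  positive rate B: {positive_rate(s_b, shared):.3f}")       # ~0.30

# Group-dependent thresholding: raise A's threshold until A's positive
# rate matches B's. As the abstract argues, choosing to do this is a
# discretionary policy decision, not a purely statistical one.
target = positive_rate(s_b, shared)
t_a = np.quantile(s_a, 1 - target)   # threshold giving A the target rate
print(f"Group-dependent threshold for A: {t_a:.3f}")
print(f"  positive rate A: {positive_rate(s_a, t_a):.3f}")          # ~= B's

Running this shows both groups at roughly 0.84 accuracy under the shared threshold while their positive-classification rates track their differing base rates (about 0.50 vs 0.30, a demographic-parity gap); the final block equalises the rates by moving only group A's threshold, at the cost of unequal treatment of individuals across groups.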