{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,11]],"date-time":"2026-04-11T04:04:02Z","timestamp":1775880242506,"version":"3.50.1"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2023,7,2]],"date-time":"2023-07-02T00:00:00Z","timestamp":1688256000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,2]],"date-time":"2023-07-02T00:00:00Z","timestamp":1688256000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Royal Library, Copenhagen University Library"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2023,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic referral decisions and I argue that in the context of referral decisions the legitimacy of an algorithmic decision procedure can be fully accounted for in terms of\u00a0the instrumental values of accuracy and fairness. I end by considering how my discussion of procedural algorithmic legitimacy relates to the debate on algorithmic fairness.<\/jats:p>","DOI":"10.1007\/s10676-023-09709-7","type":"journal-article","created":{"date-parts":[[2023,7,2]],"date-time":"2023-07-02T04:01:31Z","timestamp":1688270491000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Algorithmic legitimacy in clinical decision-making"],"prefix":"10.1007","volume":"25","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3812-7942","authenticated-orcid":false,"given":"Sune","family":"Holm","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,2]]},"reference":[{"key":"9709_CR2","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1111\/jopp.12122","volume":"26","author":"NP Adams","year":"2018","unstructured":"Adams, N. P. (2018). Institutional legitimacy. Journal of Political Philosophy, 26, 84\u2013102. https:\/\/doi.org\/10.1111\/jopp.12122","journal-title":"Journal of Political Philosophy"},{"key":"9709_CR3","doi-asserted-by":"publisher","first-page":"e2","DOI":"10.1016\/S2589-7500(20)30286-7","volume":"3","author":"M Alam","year":"2021","unstructured":"Alam, M., & Hallak, J. A. (2021). AI-automated referral for patients with visual impairment. The Lancet Digital Health, 3, e2\u2013e3. https:\/\/doi.org\/10.1016\/S2589-7500(20)30286-7","journal-title":"The Lancet Digital Health"},{"issue":"2","key":"9709_CR4","doi-asserted-by":"publisher","first-page":"e0000016","DOI":"10.1371\/journal.pdig.0000016","volume":"1","author":"J Amann","year":"2022","unstructured":"Amann, J., Vetter, D., Blomberg, S. N., Christensen, H. C., Coffee, M., Gerke, S., Gilbert, T. K., Hagendorff, T., Holm, S., Livne, M., & Spezzatti, A. (2022). To explain or not to explain?\u2014Artificial intelligence explainability in clinical decision support systems. PLoS Digital Health, 1(2), e0000016. https:\/\/doi.org\/10.1371\/journal.pdig.0000016","journal-title":"PLoS Digital Health"},{"key":"9709_CR5","unstructured":"Barocas, S., Hardt, M., & Narayanan, A. (2022). Fairness and machine learning. fairmlbook.org. Retrieved September 19, 2022, from https:\/\/fairmlbook.org\/"},{"key":"9709_CR7","doi-asserted-by":"publisher","first-page":"118","DOI":"10.1038\/s41746-020-00324-0","volume":"3","author":"S Benjamens","year":"2020","unstructured":"Benjamens, S., Dhunnoo, P., & Mesk\u00f3, B. (2020). The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digital Medicine, 3, 118.","journal-title":"NPJ Digital Medicine"},{"key":"9709_CR42","doi-asserted-by":"publisher","unstructured":"Biddle, J. (2022). On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Canadian Journal of Philosophy, 52, 321-341. https:\/\/doi.org\/10.1017\/can.2020.27","DOI":"10.1017\/can.2020.27"},{"issue":"4","key":"9709_CR8","doi-asserted-by":"publisher","first-page":"543","DOI":"10.1007\/s13347-017-0263-5","volume":"31","author":"R Binns","year":"2018","unstructured":"Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543\u2013556.","journal-title":"Philosophy & Technology"},{"key":"9709_CR9","doi-asserted-by":"crossref","unstructured":"Binns, R. (2020). On the apparent conflict between individual and group fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency.","DOI":"10.1145\/3351095.3372864"},{"key":"9709_CR10","doi-asserted-by":"publisher","first-page":"349","DOI":"10.1007\/s13347-019-00391-6","volume":"34","author":"JC Bjerring","year":"2020","unstructured":"Bjerring, J. C., & Busch, J. (2020). Artificial intelligence and patient-centered decision-making. Philosophy & Technology, 34, 349\u2013371.","journal-title":"Philosophy & Technology"},{"key":"9709_CR37","doi-asserted-by":"publisher","unstructured":"Brownsword, R. (2022). Rethinking Law, Regulation, and Technology. Cheltenham, UK: Edward Elgar Publishing. Retrieved Jul 1, 2023, from https:\/\/doi.org\/10.4337\/9781800886476","DOI":"10.4337\/9781800886476"},{"key":"9709_CR38","doi-asserted-by":"publisher","unstructured":"Chomanski, B. (2022). Legitimacy and automated decisions: the moral limits of algocracy. Ethics Inf Technol, 24, 34. https:\/\/doi.org\/10.1007\/s10676-022-09647-w","DOI":"10.1007\/s10676-022-09647-w"},{"key":"9709_CR11","unstructured":"Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning (pp. 1\u201325). ArXiv. https:\/\/arxiv.org\/abs\/1808.00023"},{"key":"9709_CR12","doi-asserted-by":"crossref","unstructured":"Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. Z. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining.","DOI":"10.1145\/3097983.3098095"},{"issue":"3","key":"9709_CR13","doi-asserted-by":"publisher","first-page":"245","DOI":"10.1007\/s13347-015-0211-1","volume":"29","author":"J Danaher","year":"2016","unstructured":"Danaher, J. (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology, 29(3), 245\u2013268.","journal-title":"Philosophy and Technology"},{"key":"9709_CR14","doi-asserted-by":"publisher","DOI":"10.1136\/bmj.c5146","volume":"341","author":"G Elwyn","year":"2010","unstructured":"Elwyn, G., Coulter, A., Laitner, S., Walker, E., Watson, P., & Thomson, R. (2010). Implementing shared decision making in the NHS. BMJ, 341, c5146.","journal-title":"BMJ"},{"key":"9709_CR15","volume-title":"Democratic authority","author":"D Estlund","year":"2008","unstructured":"Estlund, D. (2008). Democratic authority. Princeton University Press."},{"key":"9709_CR17","doi-asserted-by":"publisher","first-page":"i4803","DOI":"10.1136\/bmj.i4803","volume":"354","author":"G Greenfield","year":"2016","unstructured":"Greenfield, G., Foley, K., & Majeed, A. (2016). Rethinking primary care\u2019s gatekeeper role. BMJ (Clinical Research Edition), 354, i4803. https:\/\/doi.org\/10.1136\/bmj.i4803","journal-title":"BMJ (Clinical Research Edition)"},{"key":"9709_CR39","doi-asserted-by":"publisher","unstructured":"Grimmelikhuijsen, S. & Meijer, A. (2022). Legitimacy of Algorithmic Decision-Making: Six Threats and the Need for a Calibrated Institutional Response. Perspectives on Public Management and Governance, 5, 232\u2013242. https:\/\/doi.org\/10.1093\/ppmgov\/gvac008","DOI":"10.1093\/ppmgov\/gvac008"},{"key":"9709_CR18","unstructured":"Grgi\u0107-Hla\u010da, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2016). The case for process fairness in learning: Feature selection for fair decision making. In Symposium on machine learning and the law at the 29th conference on neural information processing systems."},{"key":"9709_CR20","unstructured":"Hardt, M., Price, E., Srebro, N. (2016). Equality of opportunity in supervised learning. In: Proceedings of the international on advances in neural information processing systems (NIPS) (pp. 3315\u20133323)."},{"key":"9709_CR36","doi-asserted-by":"publisher","unstructured":"Holm, S. (2023a). The Fairness in Algorithmic Fairness. Res Publica, 29, 265\u2013281. https:\/\/doi.org\/10.1007\/s11158-022-09546-3","DOI":"10.1007\/s11158-022-09546-3"},{"key":"9709_CR35","doi-asserted-by":"publisher","unstructured":"Holm, S. (2023b). Egalitarianism and Algorithmic Fairness. Philos. Technol. 36, 6. https:\/\/doi.org\/10.1007\/s13347-023-00607-w","DOI":"10.1007\/s13347-023-00607-w"},{"key":"9709_CR34","doi-asserted-by":"publisher","unstructured":"Holm, S. (2023c). On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility. Cambridge Quarterly of Healthcare Ethics, 1-7. https:\/\/doi.org\/10.1017\/S0963180123000294","DOI":"10.1017\/S0963180123000294"},{"key":"9709_CR21","doi-asserted-by":"publisher","first-page":"e466","DOI":"10.1016\/S2589-7500(22)00023-1","volume":"4","author":"OT Jones","year":"2022","unstructured":"Jones, O. T., Matin, R. N., van der Schaar, M., Prathivadi Bhayankaram, K., Ranmuthu, C. K. I., Islam, M. S., Behiyat, D., Boscott, R., Calanzani, N., Emery, J., Williams, H. C., & Walter, F. M. (2022). Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: A systematic review. The Lancet Digital Health, 4, e466\u2013e476. https:\/\/doi.org\/10.1016\/S2589-7500(22)00023-1","journal-title":"The Lancet Digital Health"},{"key":"9709_CR22","doi-asserted-by":"crossref","unstructured":"Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In: Proceedings of the 8th innovations in theoretical computer science conference. ACM.","DOI":"10.1145\/3219617.3219634"},{"issue":"10","key":"9709_CR24","doi-asserted-by":"publisher","first-page":"36","DOI":"10.1145\/3233231","volume":"61","author":"Z Lipton","year":"2018","unstructured":"Lipton, Z. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36\u201343.","journal-title":"Communications of the ACM"},{"issue":"1","key":"9709_CR25","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1002\/hast.973","volume":"49","author":"AJ London","year":"2019","unstructured":"London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15\u201321. https:\/\/doi.org\/10.1002\/hast.973","journal-title":"Hastings Center Report"},{"key":"9709_CR41","unstructured":"Mayson, S. (2019). Bias in, bias out. Yale Law Journal, 128(8), 2122\u20132473."},{"key":"9709_CR26","doi-asserted-by":"publisher","DOI":"10.1146\/annurev-statistics-042720-125902","author":"S Mitchell","year":"2021","unstructured":"Mitchell, S., Potash, E., Barocas, S., D\u2019Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application. https:\/\/doi.org\/10.1146\/annurev-statistics-042720-125902","journal-title":"Annual Review of Statistics and Its Application"},{"issue":"1","key":"9709_CR27","doi-asserted-by":"publisher","first-page":"109","DOI":"10.26556\/jesp.v22i1.1518","volume":"22","author":"J Monaghan","year":"2022","unstructured":"Monaghan, J. (2022). The limits of instrumental proceduralism. Journal of Ethics and Social Philosophy, 22(1), 109.","journal-title":"Journal of Ethics and Social Philosophy"},{"key":"9709_CR28","doi-asserted-by":"publisher","first-page":"33","DOI":"10.3366\/E1742360008000221","volume":"5","author":"F Peter","year":"2008","unstructured":"Peter, F. (2008). Pure epistemic proceduralism. Episteme: A Journal of Social Epistemology, 5, 33\u201355.","journal-title":"Episteme: A Journal of Social Epistemology"},{"key":"9709_CR29","unstructured":"Peter, F. (2017). Political legitimacy. In E. N. Zalta (Ed.) The Stanford encyclopedia of philosophy (Summer 2017 ed.). https:\/\/plato.stanford.edu\/archives\/sum2017\/entries\/legitimacy\/"},{"key":"9709_CR30","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206\u2013215.","journal-title":"Nature Machine Intelligence"},{"key":"9709_CR32","doi-asserted-by":"publisher","unstructured":"Verma, S., & Rubin, J. (2018). Fairness definitions explained. In Proceedings of the international workshop on software fairness\u2014FairWare \u201918 (pp. 1\u20137). ACM Press. https:\/\/doi.org\/10.1145\/3194770.3194776","DOI":"10.1145\/3194770.3194776"},{"key":"9709_CR43","doi-asserted-by":"crossref","unstructured":"Wachter, S., Mittelstadt, B., & Russell, C. (2021). Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law, W. Va. L. Rev, 123, 735\u2013790. West Virginia Law Review. https:\/\/researchrepository.wvu.edu\/wvlr\/vol123\/iss3\/4","DOI":"10.2139\/ssrn.3792772"},{"key":"9709_CR33","doi-asserted-by":"publisher","unstructured":"Waldman, A. (2020). Algorithmic legitimacy. In W. Barfield (Ed.), The Cambridge handbook of the law of algorithms (Cambridge law handbooks, pp. 107\u2013120). Cambridge University Press. https:\/\/doi.org\/10.1017\/9781108680844.005","DOI":"10.1017\/9781108680844.005"},{"key":"9709_CR40","doi-asserted-by":"publisher","unstructured":"Wang, A., Kapoor, S., Barocas, S., & Narayanan, A. (2023). Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). Association for Computing Machinery, New York, NY, USA, 626. https:\/\/doi.org\/10.1145\/3593013.3594030","DOI":"10.1145\/3593013.3594030"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-023-09709-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-023-09709-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-023-09709-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,9,20]],"date-time":"2023-09-20T03:36:38Z","timestamp":1695180998000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-023-09709-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,2]]},"references-count":37,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,9]]}},"alternative-id":["9709"],"URL":"https:\/\/doi.org\/10.1007\/s10676-023-09709-7","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,2]]},"assertion":[{"value":"22 June 2023","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 July 2023","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"35"}}