{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T03:38:03Z","timestamp":1771299483272,"version":"3.50.1"},"reference-count":66,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,8,31]],"date-time":"2022-08-31T00:00:00Z","timestamp":1661904000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,8,31]],"date-time":"2022-08-31T00:00:00Z","timestamp":1661904000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001659","name":"Deutsche Forschungsgemeinschaft","doi-asserted-by":"publisher","award":["BE5601\/4-1; Cluster of Excellence \u201cMachine Learning\u2014New Perspectives for Science\u201d, EXC 2064, project number 390727645"],"award-info":[{"award-number":["BE5601\/4-1; Cluster of Excellence \u201cMachine Learning\u2014New Perspectives for Science\u201d, EXC 2064, project number 390727645"]}],"id":[{"id":"10.13039\/501100001659","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002345","name":"Eberhard Karls Universit\u00e4t T\u00fcbingen","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002345","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2022,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. 
One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.<\/jats:p>","DOI":"10.1007\/s10676-022-09658-7","type":"journal-article","created":{"date-parts":[[2022,8,31]],"date-time":"2022-08-31T08:04:03Z","timestamp":1661933043000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":34,"title":["Enabling Fairness in Healthcare Through Machine Learning"],"prefix":"10.1007","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9832-6046","authenticated-orcid":false,"given":"Thomas","family":"Grote","sequence":"first","affiliation":[]},{"given":"Geoff","family":"Keeling","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,8,31]]},"reference":[{"issue":"11","key":"9658_CR3","doi-asserted-by":"publisher","first-page":"1247","DOI":"10.1001\/jamadermatol.2018.2348","volume":"154","author":"AS Adamson","year":"2018","unstructured":"Adamson, A. S., & Smith, A. (2018). Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatol, 154(11), 1247\u20131248. 
DOI: https:\/\/doi.org\/10.1001\/jamadermatol.2018.2348","journal-title":"JAMA Dermatol"},{"issue":"12","key":"9658_CR4","doi-asserted-by":"publisher","first-page":"1187","DOI":"10.1016\/j.jpain.2009.10.002","volume":"10","author":"KO Anderson","year":"2009","unstructured":"Anderson, K. O., Green, C. R., & Payne, R. (2009). Racial and ethnic disparities in pain: causes and consequences of unequal care. The journal of pain, 10(12), 1187\u20131204. DOI: https:\/\/doi.org\/10.1016\/j.jpain.2009.10.002","journal-title":"The journal of pain"},{"key":"9658_CR70","unstructured":"Angwin, J., Larson, J., Mattu, S., & Kirchner, L. 2016. Machine Bias. Technical Report. ProPublica. https:\/\/propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"issue":"1","key":"9658_CR6","doi-asserted-by":"publisher","first-page":"15013","DOI":"10.1038\/s41598-021-94487-9","volume":"11","author":"A Baghdadi","year":"2021","unstructured":"Baghdadi, A., Lama, S., Singh, R., Hoshyarmanesh, H., Razmi, M., & Sutherland, G. R. (2021). A data-driven performance dashboard for surgical dissection. Scientific Reports, 11(1), 15013. DOI: https:\/\/doi.org\/10.1038\/s41598-021-94487-9","journal-title":"Scientific Reports"},{"issue":"8","key":"9658_CR7","doi-asserted-by":"publisher","first-page":"1116","DOI":"10.1080\/00140139.2018.1442936","volume":"61","author":"A Baghdadi","year":"2018","unstructured":"Baghdadi, A., Megahed, F. M., Esfahani, E. T., & Cavuoto, L. A. (2018). A machine learning approach to detect changes in gait parameters following a fatiguing occupational task. Ergonomics, 61(8), 1116\u20131129. DOI: https:\/\/doi.org\/10.1080\/00140139.2018.1442936","journal-title":"Ergonomics"},{"key":"9658_CR8","doi-asserted-by":"crossref","unstructured":"Bansal, G., Nushi, B., Kamar, E., Horvitz, E., & Weld, D. S. (2021, May). Is the most accurate AI the best teammate? Optimizing AI for teamwork. 
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol.\u00a035, No. 13, pp.\u00a011405\u201311414)","DOI":"10.1609\/aaai.v35i13.17359"},{"key":"9658_CR9","unstructured":"Barocas, S., Hardt, M., & Narayanan, A. (2019). : Fairness and Machine Learning: Limitations and Opportunities: https:\/\/fairmlbook.org\/"},{"key":"9658_CR67","doi-asserted-by":"publisher","unstructured":"Beutel, A., Chen, J., Doshi, T., Qian, H., Woodruff, A., Luu, C., Bischof, J., & Chi, E. 2019. Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements. Proceedings of the 2019 AAAI\/ACM Conference on AI, Ethics, and Society (AIES '2019). Association for Computing Machinery, New York, 453-459. DOI: https:\/\/doi.org\/10.1145\/3306618.3314234","DOI":"10.1145\/3306618.3314234"},{"key":"9658_CR66","doi-asserted-by":"publisher","unstructured":"Biddle, J. (2020). On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Canadian Journal of Philosophy, 1-21. DOI: https:\/\/doi.org\/10.1017\/can.2020.27","DOI":"10.1017\/can.2020.27"},{"issue":"2","key":"9658_CR10","doi-asserted-by":"publisher","first-page":"349","DOI":"10.1007\/s13347-019-00391-6","volume":"34","author":"JC Bjerring","year":"2021","unstructured":"Bjerring, J. C., & Busch, J. (2021). Artificial Intelligence and Patient-Centered Decision-Making. Philosophy & Technology, 34(2), 349\u2013371. DOI: https:\/\/doi.org\/10.1007\/s13347-019-00391-6","journal-title":"Philosophy & Technology"},{"key":"9658_CR69","unstructured":"Buolamwini, J., & Gebru, T., 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability, and Transparency. PMLR 81, 77-91."},{"key":"9658_CR11","doi-asserted-by":"publisher","DOI":"10.1093\/acprof:osobl\/9780199841608.001.0001","volume-title":"Evidence-Based Policy. 
A Practical Guide to Doing It Better","author":"N Cartwright","year":"2012","unstructured":"Cartwright, N., & Hardie, J. (2012). Evidence-Based Policy. A Practical Guide to Doing It Better. Oxford: Oxford University Press"},{"issue":"7840","key":"9658_CR12","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1038\/s41586-020-2923-3","volume":"589","author":"S Chang","year":"2021","unstructured":"Chang, S., Pierson, E., Koh, P. W., Gerardin, J., Redbird, B., Grusky, D., & Leskovec, J. (2021). Mobility network models of COVID-19 explain inequities and inform reopening. Nature, 589(7840), 82\u201387. DOI: https:\/\/doi.org\/10.1038\/s41586-020-2923-3","journal-title":"Nature"},{"key":"9658_CR13","unstructured":"Chaudhuri, K., & Salakhutdinov, R. (Eds.). (2019). : Proceedings of the 36th International Conference on Machine Learning: PMLR (Proceedings of Machine Learning Research)"},{"key":"9658_CR14","doi-asserted-by":"publisher","unstructured":"Chouldechova, A. (2017). : Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. In Big Data 5 (2), pp.\u00a0153\u2013163. DOI: https:\/\/doi.org\/10.1089\/big.2016.0047","DOI":"10.1089\/big.2016.0047"},{"issue":"4","key":"9658_CR65","doi-asserted-by":"publisher","first-page":"568","DOI":"10.1086\/709729","volume":"87","author":"Kathleen A. Creel","year":"2020","unstructured":"Creel, K. (2020). Transparency in Complex Computational Systems. Philosophy of Science, 87(4), 568-598. DOI: https:\/\/doi.org\/10.1086\/709729","journal-title":"Philosophy of Science"},{"key":"9658_CR15","doi-asserted-by":"publisher","first-page":"2","DOI":"10.1016\/j.socscimed.2017.12.005","volume":"210","author":"A Deaton","year":"2018","unstructured":"Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2\u201321. 
DOI: https:\/\/doi.org\/10.1016\/j.socscimed.2017.12.005","journal-title":"Social Science & Medicine"},{"issue":"7639","key":"9658_CR17","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115\u2013118. DOI: https:\/\/doi.org\/10.1038\/nature21056","journal-title":"Nature"},{"issue":"9","key":"9658_CR18","doi-asserted-by":"publisher","first-page":"1342","DOI":"10.1038\/s41591-018-0107-6","volume":"24","author":"J Fauw","year":"2018","unstructured":"Fauw, J., Ledsam, J. R., Romera-Paredes, B., Nikolov, S., Tomasev, N., Blackwell, S., et al. (2018). Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine, 24(9), 1342\u20131350. DOI: https:\/\/doi.org\/10.1038\/s41591-018-0107-6","journal-title":"Nature Medicine"},{"issue":"8","key":"9658_CR19","doi-asserted-by":"publisher","first-page":"e12760","DOI":"10.1111\/phc3.12760","volume":"16","author":"S Fazelpour","year":"2021","unstructured":"Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760. DOI: https:\/\/doi.org\/10.1111\/phc3.12760","journal-title":"Philosophy Compass"},{"key":"9658_CR20","doi-asserted-by":"publisher","DOI":"10.7551\/mitpress\/7585.001.0001","volume-title":"Value Sensitive Design: Shaping Technology with Moral Imagination","author":"B Friedman","year":"2019","unstructured":"Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge\/Ma.: MIT Press"},{"key":"9658_CR68","doi-asserted-by":"publisher","unstructured":"Gaube, S., Suresh, H., Raue, M., et al. 2021. Do As AI Say: Susceptibility in Deployment of Clinical Decision-Aids. npj Digital Medicine, 4(31). 
DOI: https:\/\/doi.org\/10.1038\/s41746-021-00385-9","DOI":"10.1038\/s41746-021-00385-9"},{"key":"9658_CR22","doi-asserted-by":"publisher","unstructured":"Genin, K., & Grote, T. (2021). : Randomized Controlled Trials in Medical AI: A Methodological Critique. In Philosophy of Medicine 2 (1). DOI: https:\/\/doi.org\/10.5195\/philmed.2021.27","DOI":"10.5195\/philmed.2021.27"},{"issue":"3","key":"9658_CR23","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1046\/j.1526-4637.2003.03034.x","volume":"4","author":"CR Green","year":"2003","unstructured":"Green, C. R., Anderson, K. O., Baker, T. A., Campbell, L. C., Decker, S., Fillingim, R. B., et al. (2003). The Unequal Burden of Pain: Confronting Racial and Ethnic Disparities in Pain. Pain Medicine (Malden, Mass.), 4(3), 277\u2013294. DOI: https:\/\/doi.org\/10.1046\/j.1526-4637.2003.03034.x","journal-title":"Pain Medicine (Malden, Mass.)"},{"key":"9658_CR24","doi-asserted-by":"publisher","unstructured":"Green, B., & Chen, Y. (2019). : The Principles and Limits of Algorithm-in-the-Loop Decision Making. In Proc. ACM Hum.-Comput. Interact. 3 (CSCW). DOI: https:\/\/doi.org\/10.1145\/3359152","DOI":"10.1145\/3359152"},{"issue":"3","key":"9658_CR25","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1136\/medethics-2019-105586","volume":"46","author":"T Grote","year":"2020","unstructured":"Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205\u2013211. DOI: https:\/\/doi.org\/10.1136\/medethics-2019-105586","journal-title":"Journal of Medical Ethics"},{"key":"9658_CR26","doi-asserted-by":"publisher","unstructured":"Grote, T., & Berens, P. (2021). How competitors become collaborators\u2014Bridging the gap(s) between machine learning algorithms and clinicians. Bioethics, 1\u20139. https:\/\/doi.org\/10.1111\/bioe.12957","DOI":"10.1111\/bioe.12957"},{"key":"9658_CR2","doi-asserted-by":"crossref","unstructured":"Grote, T., & Keeling, G. 
(2022). On Algorithmic Fairness in Medical Practice. Cambridge Quarterly of Healthcare Ethics, 31(1), 83-94. doi:10.1017\/S0963180121000839","DOI":"10.1017\/S0963180121000839"},{"issue":"22","key":"9658_CR27","doi-asserted-by":"publisher","first-page":"2402","DOI":"10.1001\/jama.2016.17216","volume":"316","author":"V Gulshan","year":"2016","unstructured":"Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016). Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. Journal Of The American Medical Association, 316(22), 2402\u20132410. DOI: https:\/\/doi.org\/10.1001\/jama.2016.17216","journal-title":"Journal Of The American Medical Association"},{"key":"9658_CR28","unstructured":"Hardt, M., & Recht, B. (2021). : Patterns, Predictions, and Actions: A Story About Machine Learning: https:\/\/mlstory.org\/"},{"issue":"2","key":"9658_CR29","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1111\/papa.12189","volume":"49","author":"B Hedden","year":"2021","unstructured":"Hedden, B. (2021). On statistical criteria of algorithmic fairness. Philos Public Aff, 49(2), 209\u2013231. DOI: https:\/\/doi.org\/10.1111\/papa.12189","journal-title":"Philos Public Aff"},{"key":"9658_CR21","doi-asserted-by":"crossref","unstructured":"Hernandez, G., Valles, D., Wierschem, D. C., Koldenhoven, R. M., Koutitas, G., Mendez, F. A., et al. (2020). : Machine Learning Techniques for Motion Analysis of Fatigue from Manual Material Handling Operations Using 3D Motion Capture Data. In: 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), pp.\u00a0300\u2013305","DOI":"10.1109\/CCWC47524.2020.9031222"},{"key":"9658_CR30","doi-asserted-by":"publisher","unstructured":"Hoffman, K. M., Trawalter, S., Axt, J. R., & Oliver, M. N. (2016). 
: Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. In Proceedings of the National Academy of Sciences 113 (16), p.\u00a04296. DOI: https:\/\/doi.org\/10.1073\/pnas.1516047113","DOI":"10.1073\/pnas.1516047113"},{"issue":"3","key":"9658_CR31","doi-asserted-by":"publisher","first-page":"274","DOI":"10.1111\/j.1467-9833.2012.01565.x","volume":"43","author":"J Holroyd","year":"2012","unstructured":"Holroyd, J. (2012). Responsibility for Implicit Bias. Journal of Social Philosophy, 43(3), 274\u2013306. DOI: https:\/\/doi.org\/10.1111\/j.1467-9833.2012.01565.x","journal-title":"Journal of Social Philosophy"},{"issue":"3","key":"9658_CR32","doi-asserted-by":"publisher","first-page":"e12410","DOI":"10.1111\/phc3.12410","volume":"12","author":"J Holroyd","year":"2017","unstructured":"Holroyd, J., Scaife, R., & Stafford, T. (2017). Responsibility for implicit bias. Philosophy Compass, 12(3), e12410. DOI: https:\/\/doi.org\/10.1111\/phc3.12410","journal-title":"Philosophy Compass"},{"key":"9658_CR33","doi-asserted-by":"crossref","unstructured":"Holstein, K., Wortman Vaughan, J., Daum\u00e9, H. III, Dudik, M., & Wallach, H. (2019, May). Improving fairness in machine learning systems: What do industry practitioners need?. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp.\u00a01\u201316)","DOI":"10.1145\/3290605.3300830"},{"issue":"1","key":"9658_CR34","doi-asserted-by":"publisher","first-page":"108","DOI":"10.1038\/s41398-021-01224-x","volume":"11","author":"M Jacobs","year":"2021","unstructured":"Jacobs, M., Pradier, M. F., McCoy, T. H., Perlis, R. H., Doshi-Velez, F., & Gajos, K. Z. (2021). How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection. Translational Psychiatry, 11(1), 108. 
DOI: https:\/\/doi.org\/10.1038\/s41398-021-01224-x","journal-title":"Translational Psychiatry"},{"key":"9658_CR35","doi-asserted-by":"publisher","unstructured":"Johnson, G. M. (2020). : Algorithmic bias: on the implicit biases of social technology. In Synthese. DOI: https:\/\/doi.org\/10.1007\/s11229-020-02696-y","DOI":"10.1007\/s11229-020-02696-y"},{"key":"9658_CR1","unstructured":"Keeling, G., & Nyrup, R. (forthcoming). Explainable Machine Learning, Patient Autonomy and Clinical Reasoning. V\u00e9liz, C. (Ed.) Oxford Handbook of Digital Ethics. Oxford: Oxford University Press."},{"key":"9658_CR36","doi-asserted-by":"crossref","unstructured":"Kempt, H., & Nagel, S. K. (2021). Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. Journal of Medical Ethics","DOI":"10.1136\/medethics-2021-107440"},{"key":"9658_CR37","doi-asserted-by":"publisher","unstructured":"Khairat, S., Marc, D., Crosby, D., & Al Sanousi, A. (2018). : Reasons For Physicians Not Adopting Clinical Decision Support Systems: Critical Analysis. In JMIR Med Inform 2018;6(2):e24 6 (2). Available online at https:\/\/doi.org\/10.2196\/medinform.8912","DOI":"10.2196\/medinform.8912"},{"key":"9658_CR38","doi-asserted-by":"publisher","first-page":"163","DOI":"10.1016\/j.jcrc.2019.09.024","volume":"55","author":"J Kim","year":"2020","unstructured":"Kim, J., HyungLan, C., Kim, D., Jang, D. H., Park, I., & Kim, K. (2020). Machine learning for prediction of septic shock at initial triage in emergency department. Journal of Critical Care, 55, 163\u2013170. DOI: https:\/\/doi.org\/10.1016\/j.jcrc.2019.09.024","journal-title":"Journal of Critical Care"},{"key":"9658_CR39","unstructured":"Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). : Inherent Trade-Offs in the Fair Determination of Risk Scores. 
In arXiv preprint arXiv:1609.05807"},{"issue":"1","key":"9658_CR40","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1038\/s41746-020-00367-3","volume":"4","author":"B Kompa","year":"2021","unstructured":"Kompa, B., Snoek, J., & Beam, A. L. (2021). Second opinion needed: communicating uncertainty in medical machine learning. npj Digital Medicine, 4(1), 4. DOI: https:\/\/doi.org\/10.1038\/s41746-020-00367-3","journal-title":"npj Digital Medicine"},{"issue":"1","key":"9658_CR41","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1080\/17579961.2021.1898299","volume":"13","author":"BJ Koops","year":"2021","unstructured":"Koops, B. J. (2021). The concept of function creep. Law Innovation and Technology, 13(1), 29\u201356. DOI: https:\/\/doi.org\/10.1080\/17579961.2021.1898299","journal-title":"Law Innovation and Technology"},{"issue":"3","key":"9658_CR43","doi-asserted-by":"publisher","first-page":"156","DOI":"10.1136\/medethics-2018-105118","volume":"45","author":"RJ McDougall","year":"2019","unstructured":"McDougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156\u2013160. DOI: https:\/\/doi.org\/10.1136\/medethics-2018-105118","journal-title":"Journal of Medical Ethics"},{"issue":"7788","key":"9658_CR44","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1038\/s41586-019-1799-6","volume":"577","author":"SM McKinney","year":"2020","unstructured":"McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H. \u2026 Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89\u201394","journal-title":"Nature"},{"key":"9658_CR45","unstructured":"Miconi, T. (2017). : The impossibility of \u201cfairness\u201d: a generalized impossibility result for decisions. 
In arXiv preprint arXiv:1707.01195 [stat.AP]"},{"key":"9658_CR46","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1146\/annurev-statistics-042720-125902","volume":"8","author":"S Mitchell","year":"2021","unstructured":"Mitchell, S., Potash, E., Barocas, S., D\u2019Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141\u2013163","journal-title":"Annual Review of Statistics and Its Application"},{"key":"9658_CR47","doi-asserted-by":"publisher","first-page":"348","DOI":"10.3389\/fmed.2021.607952","volume":"8","author":"M Moor","year":"2021","unstructured":"Moor, M., Rieck, B., Horn, M., Jutzeler, C. R., & Borgwardt, K. (2021). Early Prediction of Sepsis in the ICU Using Machine Learning: A Systematic Review. Frontiers in Medicine, 8, 348. DOI: https:\/\/doi.org\/10.3389\/fmed.2021.607952","journal-title":"Frontiers in Medicine"},{"key":"9658_CR48","volume-title":"To Save Everything, Click Here: Technology, Solutions and the Urge to Fix Problems That Don't Exist","author":"E Morozov","year":"2013","unstructured":"Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutions and the Urge to Fix Problems That Don't Exist. Public Affairs"},{"key":"9658_CR49","doi-asserted-by":"publisher","DOI":"10.2307\/j.ctt1pwt9w5","volume-title":"Algorithms of Oppression: How Search Engines Reinforce Racism","author":"S Noble","year":"2018","unstructured":"Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY, USA: NYU Press"},{"key":"9658_CR50","doi-asserted-by":"publisher","unstructured":"Noor, P. (2020). : Can we trust AI not to further embed racial bias and prejudice? In BMJ (Clinical research ed.) 368, m363. 
DOI: https:\/\/doi.org\/10.1136\/bmj.m363","DOI":"10.1136\/bmj.m363"},{"issue":"6464","key":"9658_CR51","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1126\/science.aax2342","volume":"366","author":"Z Obermeyer","year":"2019","unstructured":"Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447. DOI: https:\/\/doi.org\/10.1126\/science.aax2342","journal-title":"Science"},{"key":"9658_CR52","unstructured":"O\u2019Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Books"},{"issue":"9","key":"9658_CR53","doi-asserted-by":"publisher","first-page":"1327","DOI":"10.1038\/s41591-020-1020-3","volume":"26","author":"K Owens","year":"2020","unstructured":"Owens, K., & Walker, A. (2020). Those designing healthcare algorithms must become actively anti-racist. Nature Medicine, 26(9), 1327\u20131328. DOI: https:\/\/doi.org\/10.1038\/s41591-020-1020-3","journal-title":"Nature Medicine"},{"issue":"1","key":"9658_CR54","doi-asserted-by":"publisher","first-page":"136","DOI":"10.1038\/s41591-020-01192-7","volume":"27","author":"E Pierson","year":"2021","unstructured":"Pierson, E., Cutler, D. M., Leskovec, J., Mullainathan, S., & Obermeyer, Z. (2021). An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nature Medicine, 27(1), 136\u2013140. DOI: https:\/\/doi.org\/10.1038\/s41591-020-01192-7","journal-title":"Nature Medicine"},{"issue":"3","key":"9658_CR55","doi-asserted-by":"publisher","first-page":"158","DOI":"10.1038\/s41551-018-0195-0","volume":"2","author":"R Poplin","year":"2018","unstructured":"Poplin, R., Varadarajan, A. V., Blumer, K., Liu, Y., McConnell, M. V., Corrado, G. S., et al. (2018). Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering, 2(3), 158\u2013164. 
DOI: https:\/\/doi.org\/10.1038\/s41551-018-0195-0","journal-title":"Nature Biomedical Engineering"},{"key":"9658_CR57","unstructured":"Raghu, M., Blumer, K., Corrado, G., Kleinberg, J., Obermeyer, Z., & Mullainathan, S. (2019). : The Algorithmic Automation Problem: Prediction, Triage, and Human Effort. In arXiv preprint arXiv:1903.12220 [cs.CV]"},{"key":"9658_CR56","unstructured":"Raghu, M., Blumer, K., Sayres, R., Obermeyer, Z., Kleinberg, B., Mullainathan, S., & Kleinberg, J. (2019). : Direct Uncertainty Prediction for Medical Second Opinions. In Kamalika Chaudhuri, Ruslan Salakhutdinov (Eds.): Proceedings of the 36th International Conference on Machine Learning, vol. 97: PMLR (Proceedings of Machine Learning Research), pp.\u00a05281\u20135290. Available online at https:\/\/proceedings.mlr.press\/v97\/raghu19a.html"},{"issue":"12","key":"9658_CR58","doi-asserted-by":"publisher","first-page":"866","DOI":"10.7326\/M18-1990","volume":"169","author":"A Rajkomar","year":"2018","unstructured":"Rajkomar, A., Hardt, M., Howell, M. D., Corrado, G., & Chin, M. H. (2018). Ensuring fairness in machine learning to advance health equity. Annals of internal medicine, 169(12), 866\u2013872","journal-title":"Annals of internal medicine"},{"key":"9658_CR59","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1186\/1477-7525-1-64","volume":"1","author":"EM Roos","year":"2003","unstructured":"Roos, E. M., & Lohmander, L. S. (2003). The Knee injury and Osteoarthritis Outcome Score (KOOS): from joint injury to osteoarthritis. Health and quality of life outcomes, 1, 64. https:\/\/doi.org\/10.1186\/1477-7525-1-64","journal-title":"Health and quality of life outcomes"},{"issue":"8","key":"9658_CR60","doi-asserted-by":"publisher","first-page":"1229","DOI":"10.1038\/s41591-020-0942-0","volume":"26","author":"P Tschandl","year":"2020","unstructured":"Tschandl, P., Rinner, C., Apalla, Z., Argenziano, G., Codella, N., Halpern, A., et al. (2020). 
Human\u2013computer collaboration for skin cancer recognition. Nature Medicine, 26(8), 1229\u20131234. DOI: https:\/\/doi.org\/10.1038\/s41591-020-0942-0","journal-title":"Nature Medicine"},{"key":"9658_CR61","doi-asserted-by":"crossref","unstructured":"Wilder, B., Horvitz, E., & Kamar, E. (2020). : Learning to Complement Humans. In arXiv preprint arXiv:2005.00582 [cs.AI]","DOI":"10.24963\/ijcai.2020\/212"},{"key":"9658_CR63","doi-asserted-by":"crossref","unstructured":"Zicari, R. V., Ahmed, S., Amann, J., Braun, S. A., Brodersen, J., Bruneault, F. \u2026 Wurth, R. (2021). Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier. Frontiers in Human Dynamics, 40","DOI":"10.3389\/fhumd.2021.688152"},{"key":"9658_CR64","doi-asserted-by":"publisher","unstructured":"Zimmermann, A., & Lee-Stronach, C. (2021). Proceed with Caution. Canadian Journal of Philosophy, 1\u201320. DOI: https:\/\/doi.org\/10.1017\/can.2021.17","DOI":"10.1017\/can.2021.17"}],"container-title":["Ethics and Information 
Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09658-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-022-09658-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-022-09658-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T16:18:38Z","timestamp":1672762718000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-022-09658-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,31]]},"references-count":66,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,9]]}},"alternative-id":["9658"],"URL":"https:\/\/doi.org\/10.1007\/s10676-022-09658-7","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,31]]},"assertion":[{"value":"27 June 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 August 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"European Commission: Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Legal Documents"}}],"article-number":"39"}}