{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T13:52:06Z","timestamp":1774446726186,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,12,17]],"date-time":"2024-12-17T00:00:00Z","timestamp":1734393600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,12,17]],"date-time":"2024-12-17T00:00:00Z","timestamp":1734393600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["DOCTORADO BECAS CHILE 2018"],"award-info":[{"award-number":["DOCTORADO BECAS CHILE 2018"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["72190086"],"award-info":[{"award-number":["72190086"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["Centro Nacional de Inteligencia Artificial (CENIA)"],"award-info":[{"award-number":["Centro Nacional de Inteligencia Artificial (CENIA)"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["BASAL"],"award-info":[{"award-number":["BASAL"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y 
Desarrollo","doi-asserted-by":"publisher","award":["FB210017"],"award-info":[{"award-number":["FB210017"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["Centro Nacional de Inteligencia Artificial (CENIA)"],"award-info":[{"award-number":["Centro Nacional de Inteligencia Artificial (CENIA)"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["BASAL"],"award-info":[{"award-number":["BASAL"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["FB210017"],"award-info":[{"award-number":["FB210017"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["Millenium Nucleus FAIR NCS2022_065"],"award-info":[{"award-number":["Millenium Nucleus FAIR NCS2022_065"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["Millenium Nucleus FAIR NCS2022_065"],"award-info":[{"award-number":["Millenium Nucleus FAIR NCS2022_065"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["MAGISTER BECAS CHILE 2023"],"award-info":[{"award-number":["MAGISTER BECAS CHILE 
2023"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100020884","name":"Agencia Nacional de Investigaci\u00f3n y Desarrollo","doi-asserted-by":"publisher","award":["73230353"],"award-info":[{"award-number":["73230353"]}],"id":[{"id":"10.13039\/501100020884","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Sci Eng Ethics"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the \"isolationist approach to AI bias,\" a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the adoption of an uncritical approach to understanding the influence of biases in developers\u2019 decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict biased connections. To test the BNA, we conducted a pilot case study on the \u201cwaiting list\u201d project, involving a small AI developer team creating a healthcare waiting list NLP model in Chile. 
The analysis showed promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is necessary on professional biases and material limitations as sources of bias in AI development.<\/jats:p>","DOI":"10.1007\/s11948-024-00526-9","type":"journal-article","created":{"date-parts":[[2024,12,17]],"date-time":"2024-12-17T12:01:04Z","timestamp":1734436864000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers"],"prefix":"10.1007","volume":"31","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0006-7024","authenticated-orcid":false,"given":"Gabriela","family":"Arriagada-Bruneau","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1699-1999","authenticated-orcid":false,"given":"Claudia","family":"L\u00f3pez","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0001-9846-8845","authenticated-orcid":false,"given":"Alexandra","family":"Davidoff","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,12,17]]},"reference":[{"key":"526_CR100","unstructured":"Adasme, S., Arriagada., G., Davidoff, A., Lopez., C., Pertuze, J. (2023). APEC Digital Economic Steering Group (DESG), Comparative study on best practices to detect and avoid harmful biases in Artificial Intelligence systems. Asia-Pacific Economic Cooperation (APEC)"},{"key":"526_CR1","doi-asserted-by":"publisher","first-page":"102387","DOI":"10.1016\/j.ijinfomgt.2021.102387","volume":"60","author":"S Akter","year":"2021","unstructured":"Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D\u2019Ambra, J., & Shen, K. N. (2021). 
Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387. https:\/\/doi.org\/10.1016\/j.ijinfomgt.2021.102387","journal-title":"International Journal of Information Management"},{"issue":"3","key":"526_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3498324","volume":"3","author":"P B\u00e1ez","year":"2022","unstructured":"B\u00e1ez, P., Bravo-Marquez, F., Dunstan, J., Rojas, M., & Villena, F. (2022). Automatic extraction of nested entities in clinical referrals in Spanish. ACM Transactions on Computing for Healthcare, 3(3), 1\u201322. https:\/\/doi.org\/10.1145\/3498324","journal-title":"ACM Transactions on Computing for Healthcare"},{"key":"526_CR3","doi-asserted-by":"publisher","unstructured":"Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. EdArXiv. https:\/\/doi.org\/10.35542\/osf.io\/pbmvz.","DOI":"10.35542\/osf.io\/pbmvz"},{"key":"526_CR4","doi-asserted-by":"publisher","first-page":"671","DOI":"10.2139\/ssrn.2477899","volume":"104","author":"S Barocas","year":"2016","unstructured":"Barocas, S., & Selbst, A. D. (2016). Big data\u2019s disparate impact. California Law Review, 104, 671. https:\/\/doi.org\/10.2139\/ssrn.2477899","journal-title":"California Law Review"},{"key":"526_CR5","doi-asserted-by":"publisher","unstructured":"Beer, D. (2019). The infrastructural dimensions of the data gaze: The analytical spaces of the codified clinic. In The data gaze: Capitalism, power and perception (pp. 55\u201395). SAGE Publications Ltd. https:\/\/doi.org\/10.4135\/9781526463210.","DOI":"10.4135\/9781526463210"},{"key":"526_CR6","unstructured":"Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st conference on fairness, accountability and transparency, (pp. 77\u201391). 
https:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html."},{"issue":"11","key":"526_CR7","doi-asserted-by":"publisher","first-page":"7","DOI":"10.1080\/15265161.2020.1819469","volume":"20","author":"DS Char","year":"2020","unstructured":"Char, D. S., Abr\u00e0moff, M. D., & Feudtner, C. (2020). Identifying ethical considerations for machine learning healthcare applications. The American Journal of Bioethics: AJOB, 20(11), 7\u201317. https:\/\/doi.org\/10.1080\/15265161.2020.1819469","journal-title":"The American Journal of Bioethics: AJOB"},{"issue":"5","key":"526_CR8","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1145\/3376898","volume":"63","author":"A Chouldechova","year":"2020","unstructured":"Chouldechova, A., & Roth, A. (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM, 63(5), 82\u201389. https:\/\/doi.org\/10.1145\/3376898","journal-title":"Communications of the ACM"},{"issue":"6","key":"526_CR9","doi-asserted-by":"publisher","first-page":"58","DOI":"10.1145\/3278156","volume":"25","author":"H Cramer","year":"2018","unstructured":"Cramer, H., Garcia-Gathright, J., Springer, A., & Reddy, S. (2018). Assessing and addressing algorithmic bias in practice. Interactions, 25(6), 58\u201363. https:\/\/doi.org\/10.1145\/3278156","journal-title":"Interactions"},{"key":"526_CR101","doi-asserted-by":"crossref","unstructured":"Danks, D., & London, A. J. (2017). Algorithmic Bias in Autonomous Systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4691\u20134697. http:\/\/dl.acm.org\/citation.cfm?id=3171837.3171944","DOI":"10.24963\/ijcai.2017\/654"},{"key":"526_CR10","volume-title":"Thinking like an engineer: Studies in the ethics of a profession","author":"M Davis","year":"1998","unstructured":"Davis, M. (1998). Thinking like an engineer: Studies in the ethics of a profession. 
Oxford University Press."},{"key":"526_CR11","unstructured":"Dobbe, R., Dean, S., Gilbert, T., & Kohli, N. (2018). A broader view on bias in automated decision-making: Reflecting on epistemology and dynamics. In arXiv e-prints. https:\/\/ui.adsabs.harvard.edu\/abs\/2018arXiv180700553D."},{"issue":"2","key":"526_CR12","doi-asserted-by":"publisher","first-page":"325","DOI":"10.1108\/OIR-10-2018-0332","volume":"44","author":"C Draude","year":"2019","unstructured":"Draude, C., Klumbyte, G., L\u00fccking, P., & Treusch, P. (2019). Situated algorithms: A sociotechnical systemic approach to bias. Online Information Review, 44(2), 325\u2013342. https:\/\/doi.org\/10.1108\/OIR-10-2018-0332","journal-title":"Online Information Review"},{"issue":"1","key":"526_CR13","first-page":"1","volume":"57","author":"R Estay","year":"2017","unstructured":"Estay, R., Cuadrado, C., Crispi, F., Gonz\u00e1lez, F., Alvarado, F., & Cabrera, N. (2017). Desde el conflicto de listas de espera, hacia el fortalecimiento de los prestadores p\u00fablicos de salud: Una propuesta para Chile. Cuadernos M\u00e9dico Sociales, 57(1), 1.","journal-title":"Cuadernos M\u00e9dico Sociales"},{"issue":"8","key":"526_CR14","doi-asserted-by":"publisher","first-page":"e12760","DOI":"10.1111\/phc3.12760","volume":"16","author":"S Fazelpour","year":"2021","unstructured":"Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760. https:\/\/doi.org\/10.1111\/phc3.12760","journal-title":"Philosophy Compass"},{"key":"526_CR15","doi-asserted-by":"publisher","unstructured":"Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies (arXiv:2304.07683). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2304.07683.","DOI":"10.48550\/arXiv.2304.07683"},{"issue":"4","key":"526_CR16","doi-asserted-by":"publisher","first-page":"379","DOI":"10.1007\/s12599-020-00650-3","volume":"62","author":"S Feuerriegel","year":"2020","unstructured":"Feuerriegel, S., Dolata, M., & Schwabe, G. (2020). Fair AI. Business & Information Systems Engineering, 62(4), 379\u2013384. https:\/\/doi.org\/10.1007\/s12599-020-00650-3","journal-title":"Business & Information Systems Engineering"},{"key":"526_CR102","doi-asserted-by":"publisher","unstructured":"Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (SSRN Scholarly Paper 3518482). Social Science Research Network. https:\/\/doi.org\/10.2139\/ssrn.3518482","DOI":"10.2139\/ssrn.3518482"},{"issue":"4","key":"526_CR17","doi-asserted-by":"publisher","first-page":"100241","DOI":"10.1016\/j.patter.2021.100241","volume":"2","author":"S Hooker","year":"2021","unstructured":"Hooker, S. (2021). Moving beyond \u201calgorithmic bias is a data problem.\u201d Patterns, 2(4), 100241. https:\/\/doi.org\/10.1016\/j.patter.2021.100241","journal-title":"Patterns"},{"key":"526_CR103","doi-asserted-by":"publisher","unstructured":"Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), Article 9. https:\/\/doi.org\/10.1038\/s42256-019-0088-2","DOI":"10.1038\/s42256-019-0088-2"},{"key":"526_CR18","doi-asserted-by":"publisher","DOI":"10.1093\/mind\/fzaa011","author":"GM Johnson","year":"2020","unstructured":"Johnson, G. M. (2020). The structure of bias. Mind. https:\/\/doi.org\/10.1093\/mind\/fzaa011","journal-title":"Mind"},{"key":"526_CR104","doi-asserted-by":"publisher","unstructured":"Kasy, M., & Abebe, R. (2021). Fairness, Equality, and Power in Algorithmic Decision-Making. 
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 576\u2013586. https:\/\/doi.org\/10.1145\/3442188.3445919","DOI":"10.1145\/3442188.3445919"},{"key":"526_CR19","doi-asserted-by":"crossref","unstructured":"Kizilcec, R. F., & Lee, H. (2022). Algorithmic fairness in education. In The ethics of artificial intelligence in education. Routledge.","DOI":"10.4324\/9780429329067-10"},{"key":"526_CR20","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-023-00381-7","author":"B Lange","year":"2023","unstructured":"Lange, B., Keeling, G., McCroskery, A., et al. (2023). Engaging engineering teams through moral imagination: A bottom\u2013up approach for responsible innovation and ethical culture change in technology companies. AI Ethics. https:\/\/doi.org\/10.1007\/s43681-023-00381-7","journal-title":"AI Ethics"},{"issue":"6","key":"526_CR21","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3457607","volume":"54","author":"N Mehrabi","year":"2021","unstructured":"Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1\u201335. https:\/\/doi.org\/10.1145\/3457607","journal-title":"ACM Computing Surveys"},{"issue":"1","key":"526_CR22","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1146\/annurev-statistics-042720-125902","volume":"8","author":"S Mitchell","year":"2021","unstructured":"Mitchell, S., Potash, E., Barocas, S., D\u2019Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8(1), 141\u2013163. https:\/\/doi.org\/10.1146\/annurev-statistics-042720-125902","journal-title":"Annual Review of Statistics and Its Application"},{"key":"526_CR23","unstructured":"Niehaus, F., & Wiesche, M. (2021). A socio-technical perspective on organizational interaction with AI: A literature review. ECIS 2021 Research Papers. 
https:\/\/aisel.aisnet.org\/ecis2021_rp\/156."},{"issue":"3","key":"526_CR24","doi-asserted-by":"publisher","first-page":"e1356","DOI":"10.1002\/widm.1356","volume":"10","author":"E Ntoutsi","year":"2020","unstructured":"Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.-E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., & Staab, S. (2020). Bias in data-driven artificial intelligence systems\u2014An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3), e1356. https:\/\/doi.org\/10.1002\/widm.1356","journal-title":"WIREs Data Mining and Knowledge Discovery"},{"key":"526_CR25","doi-asserted-by":"publisher","first-page":"13","DOI":"10.3389\/fdata.2019.00013","volume":"2","author":"A Olteanu","year":"2019","unstructured":"Olteanu, A., Castillo, C., Diaz, F., & K\u0131c\u0131man, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2, 13. https:\/\/doi.org\/10.3389\/fdata.2019.00013","journal-title":"Frontiers in Big Data"},{"issue":"24","key":"526_CR26","doi-asserted-by":"publisher","first-page":"2377","DOI":"10.1001\/jama.2019.18058","volume":"322","author":"RB Parikh","year":"2019","unstructured":"Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing bias in artificial intelligence in health care. JAMA, 322(24), 2377\u20132378. https:\/\/doi.org\/10.1001\/jama.2019.18058","journal-title":"JAMA"},{"issue":"11","key":"526_CR27","doi-asserted-by":"publisher","first-page":"100336","DOI":"10.1016\/j.patter.2021.100336","volume":"2","author":"A Paullada","year":"2021","unstructured":"Paullada, A., Raji, I. D., Bender, E. M., Denton, E., & Hanna, A. (2021). Data and its (DIS)contents: A survey of dataset development and use in machine learning research. Patterns, 2(11), 100336. 
https:\/\/doi.org\/10.1016\/j.patter.2021.100336","journal-title":"Patterns"},{"issue":"1","key":"526_CR28","doi-asserted-by":"publisher","first-page":"217","DOI":"10.1007\/s00146-020-01005-y","volume":"36","author":"MMM Peeters","year":"2021","unstructured":"Peeters, M. M. M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human\u2013AI society. AI & Society, 36(1), 217\u2013238. https:\/\/doi.org\/10.1007\/s00146-020-01005-y","journal-title":"AI & Society"},{"key":"526_CR29","doi-asserted-by":"publisher","first-page":"420","DOI":"10.1016\/j.sbspro.2014.08.148","volume":"146","author":"O Polyakova","year":"2014","unstructured":"Polyakova, O. (2014). The structure of professional deformation. Procedia - Social and Behavioral Sciences, 146, 420\u2013425. https:\/\/doi.org\/10.1016\/j.sbspro.2014.08.148","journal-title":"Procedia - Social and Behavioral Sciences"},{"issue":"1","key":"526_CR30","doi-asserted-by":"publisher","first-page":"31","DOI":"10.1038\/s41591-021-01614-0","volume":"28","author":"P Rajpurkar","year":"2022","unstructured":"Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31\u201338. https:\/\/doi.org\/10.1038\/s41591-021-01614-0","journal-title":"Nature Medicine"},{"key":"526_CR105","doi-asserted-by":"publisher","unstructured":"Richardson, B., & Gilbert, J. (2021). A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions. https:\/\/doi.org\/10.48550\/arXiv.2112.05700","DOI":"10.48550\/arXiv.2112.05700"},{"key":"526_CR31","doi-asserted-by":"publisher","unstructured":"Roselli, D., Matthews, J., & Talagala, N. (2019). Managing bias in AI. In Companion proceedings of The 2019 World Wide Web conference, (pp 539\u2013544). 
https:\/\/doi.org\/10.1145\/3308560.3317590.","DOI":"10.1145\/3308560.3317590"},{"key":"526_CR32","unstructured":"Rovatsos, M., Mittelstadt, B., & Koene, A. (2019). Landscape summary: Bias in algorithmic decision-making: What is bias in algorithmic decision-making, how can we identify it, and how can we mitigate it? UK Government. https:\/\/www.gov.uk\/government\/publications\/landscape-summaries-commissionedby-the-centre-for-data-ethics-and-innovation."},{"key":"526_CR106","unstructured":"Sangokoya, D. (2017). Algorithmic accountability \u2013 Applying the concept to different country contexts. World Wide Web Foundation. https:\/\/datapopalliance.org\/publications\/algorithmic-accountability-applying-the-concept-to-different-countrycontexts\/"},{"key":"526_CR33","unstructured":"Sangokoya, D. (2020). Algorithmic accountability\u2014Applying the concept to different country contexts. World Wide Web Foundation and Data Pop Alliance. https:\/\/datapopalliance.org\/publications\/algorithmic-accountability-applying-the-concept-to-different-country-contexts\/."},{"key":"526_CR34","unstructured":"Smith, G., & Rustagi, I. (2020). Mitigating bias in artificial intelligence. An equity fluent leadership playbook [Playbook]. https:\/\/haas.berkeley.edu\/equity\/industry\/playbooks\/mitigating-bias-in-ai\/."},{"issue":"1","key":"526_CR35","doi-asserted-by":"publisher","first-page":"1","DOI":"10.4018\/IJKM.290022","volume":"18","author":"M Soleimani","year":"2022","unstructured":"Soleimani, M., Intezari, A., & Pauleen, D. J. (2022). Mitigating cognitive biases in developing ai-assisted recruitment systems: A knowledge-sharing approach. International Journal of Knowledge Management, 18(1), 1\u201318. 
https:\/\/doi.org\/10.4018\/IJKM.290022","journal-title":"International Journal of Knowledge Management (IJKM)"},{"issue":"8","key":"526_CR36","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1145\/3464903","volume":"64","author":"R Srinivasan","year":"2021","unstructured":"Srinivasan, R., & Chander, A. (2021). Biases in AI systems. Communications of the ACM, 64(8), 44\u201349. https:\/\/doi.org\/10.1145\/3464903","journal-title":"Communications of the ACM"},{"key":"526_CR37","doi-asserted-by":"publisher","unstructured":"Stevens, H. (2013). Following data. In Life out of sequence (pp. 107\u2013136). University of Chicago Press. https:\/\/doi.org\/10.7208\/chicago\/9780226080345.003.0005.","DOI":"10.7208\/chicago\/9780226080345.003.0005"},{"key":"526_CR38","doi-asserted-by":"publisher","DOI":"10.1145\/3465416.3483305","author":"H Suresh","year":"2021","unstructured":"Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. Equity and Access in Algorithms, Mechanisms, and Optimization. https:\/\/doi.org\/10.1145\/3465416.3483305","journal-title":"Equity and Access in Algorithms, Mechanisms, and Optimization"},{"key":"526_CR39","unstructured":"United States Home Office. (2016). Big data: A report on algorithmic systems, opportunity, and civil rights. The White House. https:\/\/purl.fdlp.gov\/GPO\/gpo90618."},{"key":"526_CR40","unstructured":"West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https:\/\/ainowinstitute.org\/publication\/discriminating-systems-gender-race-and-power-in-ai-2."},{"issue":"3","key":"526_CR41","doi-asserted-by":"publisher","first-page":"1047","DOI":"10.1007\/s00146-021-01153-9","volume":"36","author":"M Zajko","year":"2021","unstructured":"Zajko, M. (2021). Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory. AI & Society, 36(3), 1047\u20131056. 
https:\/\/doi.org\/10.1007\/s00146-021-01153-9","journal-title":"AI & Society"},{"issue":"5","key":"526_CR42","doi-asserted-by":"publisher","first-page":"623","DOI":"10.1177\/1477370819876762","volume":"18","author":"A Zavr\u0161nik","year":"2021","unstructured":"Zavr\u0161nik, A. (2021). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18(5), 623\u2013642. https:\/\/doi.org\/10.1177\/1477370819876762","journal-title":"European Journal of Criminology"},{"key":"526_CR43","doi-asserted-by":"publisher","unstructured":"Zhou, Y., Kantarcioglu, M., & Clifton, C. (2023). On improving fairness of AI models with synthetic minority oversampling techniques. In Proceedings of the 2023 SIAM international conference on data mining (SDM) (pp. 874\u2013882). Society for Industrial and Applied Mathematics. https:\/\/doi.org\/10.1137\/1.9781611977653.ch98.","DOI":"10.1137\/1.9781611977653.ch98"}],"container-title":["Science and Engineering Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-024-00526-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11948-024-00526-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11948-024-00526-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,2,24]],"date-time":"2025-02-24T15:52:17Z","timestamp":1740412337000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11948-024-00526-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,17]]},"references-count":50,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,2]]}},"alternative-id":["526"],"URL":"ht
tps:\/\/doi.org\/10.1007\/s11948-024-00526-9","relation":{},"ISSN":["1471-5546"],"issn-type":[{"value":"1471-5546","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,17]]},"assertion":[{"value":"4 January 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 November 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 December 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"1"}}