{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T11:07:43Z","timestamp":1774264063242,"version":"3.50.1"},"reference-count":62,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:00:00Z","timestamp":1764892800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T00:00:00Z","timestamp":1764892800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Research Foundation\u2014Flanders","award":["11N7723N"],"award-info":[{"award-number":["11N7723N"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2026,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>This paper examines how fairness principles differ when evaluating large language model (LLM) outputs in fact-based versus opinion-based contexts, focusing on gender disparities in responses related to notable individuals. Using prompts designed to elicit either factual information (identifying Nobel Prize winners) or subjective judgments (identifying the most accomplished figures in a field), we analyze responses from GPT-4, Claude, and Llama-3. For fact-based tasks, fairness is assessed through correctness and refusal rates, revealing minimal gender disparities when models achieve high accuracy, although refusal patterns can vary by model and gender. For opinion-based tasks, where no single correct answer exists, fairness is operationalized through representational metrics such as demographic parity and disparate impact. Results show substantial gender disparities in opinion-based outputs across all models, with representation shaped by prompt wording (e.g., \u201cimportant\u201d vs. \u201cprestigious\u201d), subject domain, and inclusion of secondary answers. However, the highly skewed context makes the final assessment about fairness challenging. Our findings highlight that fairness metrics and interpretations must be contextualized by output type. Performance parity is an appropriate goal for fact-based outputs, whereas representational inclusivity is central for opinion-based outputs. Representational inclusivity alone may not be sufficient when the context for the LLM\u2019s task differs from the population. We discuss theoretical implications for fairness evaluation, noting that high performance can mitigate disparities in factual contexts but that opinion-based contexts require more nuanced, values-driven approaches.<\/jats:p>","DOI":"10.1007\/s43681-025-00876-5","type":"journal-article","created":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T03:36:34Z","timestamp":1764905794000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Fairness principles across contexts: evaluating gender disparities of facts and opinions in large language models"],"prefix":"10.1007","volume":"6","author":[{"given":"Sofie","family":"Goethals","sequence":"first","affiliation":[]},{"given":"Lauren","family":"Rhue","sequence":"additional","affiliation":[]},{"given":"Arun","family":"Sundararajan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,5]]},"reference":[{"key":"876_CR1","doi-asserted-by":"crossref","unstructured":"Yu, H., Hatzivassiloglou, V.: Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. in Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, p. 129\u2013136 (2003). https:\/\/aclanthology.org\/W03-1017\/","DOI":"10.3115\/1119355.1119372"},{"key":"876_CR2","unstructured":"Metz, C.: OpenAI Folds A.I.-Powered Search Engine Into ChatGPT (2024). https:\/\/www.nytimes.com\/2024\/10\/31\/technology\/chatgpt-openai-search-engine.html"},{"issue":"2","key":"876_CR3","doi-asserted-by":"publisher","first-page":"791","DOI":"10.1007\/s43681-024-00435-4","volume":"5","author":"MA Haque","year":"2025","unstructured":"Haque, M.A., Li, S.: Exploring chatgpt and its impact on society. AI and Ethics 5(2), 791\u2013803 (2025)","journal-title":"AI and Ethics"},{"key":"876_CR4","doi-asserted-by":"publisher","first-page":"809","DOI":"10.2307\/1122648","volume":"88","author":"E Finan","year":"1988","unstructured":"Finan, E.: The fact-opinion determination in defamation. Colum. L. Rev. 88, 809 (1988)","journal-title":"Colum. L. Rev."},{"key":"876_CR5","doi-asserted-by":"crossref","unstructured":"Blodgett, S.L., Barocas, S., Daum\u00e9\u00a0III, H., Wallach, H.: Language (technology) is power: A critical survey of \u201dbias\u201d in nlp. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5454\u20135476 (2020)","DOI":"10.18653\/v1\/2020.acl-main.485"},{"key":"876_CR6","doi-asserted-by":"publisher","unstructured":"Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles, C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane, A., Hendricks, L.A., Rimell, L., Isaac, W., Haas, J., Legassick, S., Irving, G., Gabriel, I.: Taxonomy of risks posed by language models. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. FAccT \u201922, pp. 214\u2013229. Association for Computing Machinery, New York, NY, USA (2022). https:\/\/doi.org\/10.1145\/3531146.3533088","DOI":"10.1145\/3531146.3533088"},{"key":"876_CR7","doi-asserted-by":"crossref","unstructured":"Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y., Denuyl, S.: Social biases in nlp models as barriers for persons with disabilities. arXiv preprint arXiv:2005.00813 (2020)","DOI":"10.18653\/v1\/2020.acl-main.487"},{"issue":"1","key":"876_CR8","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1146\/annurev-statistics-042720-125902","volume":"8","author":"S Mitchell","year":"2021","unstructured":"Mitchell, S., Potash, E., Barocas, S., D\u2019Amour, A., Lum, K.: Algorithmic fairness: choices, assumptions, and definitions. Annual Rev. Statist. Appl. 8(1), 141\u2013163 (2021)","journal-title":"Annual Rev. Statist. Appl."},{"key":"876_CR9","doi-asserted-by":"publisher","DOI":"10.1145\/3641289","author":"Y Chang","year":"2024","unstructured":"Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Chen, H., Yi, X., Wang, C., Wang, Y., Ye, W., Zhang, Y., Chang, Y., Yu, P.S., Yang, Q., Xie, X.: A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. (2024). https:\/\/doi.org\/10.1145\/3641289. (Just Accepted)","journal-title":"ACM Trans. Intell. Syst. Technol."},{"key":"876_CR10","doi-asserted-by":"publisher","unstructured":"Dai, S., Xu, C., Xu, S., Pang, L., Dong, Z., Xu, J.: Bias and unfairness in information retrieval systems: New challenges in the llm era. In: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. KDD \u201924, pp. 6437\u20136447. Association for Computing Machinery, New York, NY, USA (2024). https:\/\/doi.org\/10.1145\/3637528.3671458","DOI":"10.1145\/3637528.3671458"},{"key":"876_CR11","doi-asserted-by":"publisher","DOI":"10.1145\/3631326","author":"M Hort","year":"2024","unstructured":"Hort, M., Chen, Z., Zhang, J.M., Harman, M., Sarro, F.: Bias mitigation for machine learning classifiers: a comprehensive survey. ACM J. Responsib. Comput. (2024). https:\/\/doi.org\/10.1145\/3631326","journal-title":"ACM J. Responsib. Comput."},{"issue":"8","key":"876_CR12","doi-asserted-by":"publisher","first-page":"12432","DOI":"10.1111\/lnc3.12432","volume":"15","author":"D Hovy","year":"2021","unstructured":"Hovy, D., Prabhumoye, S.: Five sources of bias in natural language processing. Lang. Linguist. Compass 15(8), 12432 (2021). https:\/\/doi.org\/10.1111\/lnc3.12432","journal-title":"Lang. Linguist. Compass"},{"key":"876_CR13","doi-asserted-by":"crossref","unstructured":"Bigdeli, A., Arabzadeh, N., SeyedSalehi, S., Zihayat, M., Bagheri, E.: Gender fairness in information retrieval systems. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3436\u20133439 (2022)","DOI":"10.1145\/3477495.3532680"},{"key":"876_CR14","unstructured":"Barocas, S., Hardt, M., Narayanan, A.: Fairness and Machine Learning: Limitations and Opportunities. MIT press (2023)"},{"key":"876_CR15","doi-asserted-by":"publisher","unstructured":"Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: Can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT \u201921, pp. 610\u2013623. Association for Computing Machinery, New York, NY, USA (2021). https:\/\/doi.org\/10.1145\/3442188.3445922","DOI":"10.1145\/3442188.3445922"},{"key":"876_CR16","doi-asserted-by":"publisher","unstructured":"Blodgett, S.L., Barocas, S., Daum\u00e9\u00a0III, H., Wallach, H.: Language (technology) is power: A critical survey of \u201cbias\u201d in NLP. In: Jurafsky, D., Chai, J., Schluter, N., Tetreault, J. (eds.) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5454\u20135476. Association for Computational Linguistics, Online (2020). https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.485. https:\/\/aclanthology.org\/2020.acl-main.485","DOI":"10.18653\/v1\/2020.acl-main.485"},{"key":"876_CR17","first-page":"2017","volume":"1","author":"S Barocas","year":"2017","unstructured":"Barocas, S., Hardt, M., Narayanan, A.: Fairness in machine learning. Nips Tutorial 1, 2017 (2017)","journal-title":"Nips Tutorial"},{"key":"876_CR18","unstructured":"Chouldechova, A., Roth, A.: The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810 (2018)"},{"key":"876_CR19","unstructured":"Kearns, M., Neel, S., Roth, A., Wu, Z.S.: Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In: International Conference on Machine Learning, pp. 2564\u20132572 (2018). PMLR"},{"issue":"6","key":"876_CR20","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3457607","volume":"54","author":"N Mehrabi","year":"2021","unstructured":"Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1\u201335 (2021)","journal-title":"ACM Comput. Surv. (CSUR)"},{"key":"876_CR21","unstructured":"Dong, X., Wang, Y., Yu, P.S., Caverlee, J.: Probing explicit and implicit gender bias through llm conditional text generation. arXiv preprint arXiv:2311.00306 (2023)"},{"key":"876_CR22","doi-asserted-by":"crossref","unstructured":"Wan, Y., Pu, G., Sun, J., Garimella, A., Chang, K.-W., Peng, N.: \u201d Kelly is a warm person, Joseph is a role model\u201d: gender biases in llm-generated reference letters. arXiv preprint arXiv:2310.09219 (2023)","DOI":"10.18653\/v1\/2023.findings-emnlp.243"},{"key":"876_CR23","doi-asserted-by":"crossref","unstructured":"Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., Hershcovich, D.: Assessing cross-cultural alignment between ChatGPT and human societies: an empirical study (2023)","DOI":"10.18653\/v1\/2023.c3nlp-1.7"},{"key":"876_CR24","doi-asserted-by":"crossref","unstructured":"Hartmann, J., Schwenzow, J., Witte, M.: The political ideology of conversational AI: Converging evidence on ChatGPT\u2019s pro-environmental, left-libertarian orientation (2023)","DOI":"10.2139\/ssrn.4316084"},{"key":"876_CR25","doi-asserted-by":"publisher","unstructured":"Dhamala, J., Sun, T., Kumar, V., Krishna, S., Pruksachatkun, Y., Chang, K.-W., Gupta, R.: Bold: Dataset and metrics for measuring biases in open-ended language generation. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT \u201921. ACM (2021). https:\/\/doi.org\/10.1145\/3442188.3445924","DOI":"10.1145\/3442188.3445924"},{"key":"876_CR26","unstructured":"Zhuo, T.Y., Huang, Y., Chen, C., Xing, Z.: Exploring ai ethics of chatgpt: A diagnostic analysis. arXiv preprint arXiv:2301.12867 (2023)"},{"key":"876_CR27","doi-asserted-by":"crossref","unstructured":"Wyer, S., Black, S.: Algorithmic bias: sexualized violence against women in gpt-3 models. AI and Ethics, 1\u201318 (2025)","DOI":"10.1007\/s43681-024-00641-0"},{"key":"876_CR28","first-page":"21","volume":"5","author":"J Reagle","year":"2011","unstructured":"Reagle, J., Rhue, L.: Gender bias in wikipedia and britannica. Int. J. Commun. 5, 21 (2011)","journal-title":"Int. J. Commun."},{"key":"876_CR29","unstructured":"Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.: Language Models are Few-Shot Learners (2020). https:\/\/arxiv.org\/abs\/2005.14165"},{"key":"876_CR30","unstructured":"Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozi\u00e8re, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., Lample, G.: LLaMA: Open and Efficient Foundation Language Models (2023). https:\/\/arxiv.org\/abs\/2302.13971"},{"key":"876_CR31","doi-asserted-by":"publisher","unstructured":"Kotek, H., Dockum, R., Sun, D.: Gender bias and stereotypes in large language models. In: Proceedings of The ACM Collective Intelligence Conference. CI \u201923, pp. 12\u201324. Association for Computing Machinery, New York, NY, USA (2023). https:\/\/doi.org\/10.1145\/3582269.3615599","DOI":"10.1145\/3582269.3615599"},{"key":"876_CR32","doi-asserted-by":"crossref","unstructured":"Manakul, P., Liusie, A., Gales, M.J.F.: SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (2023)","DOI":"10.18653\/v1\/2023.emnlp-main.557"},{"key":"876_CR33","doi-asserted-by":"publisher","first-page":"178","DOI":"10.1162\/tacl_a_00454","volume":"10","author":"Z Guo","year":"2022","unstructured":"Guo, Z., Schlichtkrull, M., Vlachos, A.: A survey on automated fact-checking. Trans. Assoc. Comput. Linguist. 10, 178\u2013206 (2022). https:\/\/doi.org\/10.1162\/tacl_a_00454","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"876_CR34","doi-asserted-by":"crossref","unstructured":"Honovich, O., Aharoni, R., Herzig, J., Taitelbaum, H., Kukliansy, D., Cohen, V., Scialom, T., Szpektor, I., Hassidim, A., Matias, Y.: TRUE: re-evaluating factual consistency evaluation (2022)","DOI":"10.18653\/v1\/2022.dialdoc-1.19"},{"key":"876_CR35","doi-asserted-by":"crossref","unstructured":"Pezeshkpour, P.: Measuring and modifying factual knowledge in large language models (2023)","DOI":"10.1109\/ICMLA58977.2023.00122"},{"key":"876_CR36","doi-asserted-by":"crossref","unstructured":"Lin, S., Hilton, J., Evans, O.: TruthfulQA: Measuring how models mimic human falsehoods (2022)","DOI":"10.18653\/v1\/2022.acl-long.229"},{"key":"876_CR37","doi-asserted-by":"publisher","first-page":"423","DOI":"10.1162\/tacl_a_00324","volume":"8","author":"Z Jiang","year":"2020","unstructured":"Jiang, Z., Xu, F.F., Araki, J., Neubig, G.: How can we know what language models know? Trans. Assoc. Comput. Linguist. 8, 423\u2013438 (2020)","journal-title":"Trans. Assoc. Comput. Linguist."},{"issue":"6334","key":"876_CR38","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1126\/science.aal4230","volume":"356","author":"A Caliskan","year":"2017","unstructured":"Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science 356(6334), 183\u2013186 (2017)","journal-title":"Science"},{"key":"876_CR39","doi-asserted-by":"crossref","unstructured":"Nozza, D., Bianchi, F., Hovy, D., et al.: Pipelines for social bias testing of large language models. In: Proceedings of BigScience Episode# 5\u2013Workshop on Challenges & Perspectives in Creating Large Language Models (2022). Association for Computational Linguistics","DOI":"10.18653\/v1\/2022.bigscience-1.6"},{"issue":"4","key":"876_CR40","doi-asserted-by":"publisher","first-page":"991","DOI":"10.1257\/0002828042002561","volume":"94","author":"M Bertrand","year":"2004","unstructured":"Bertrand, M., Mullainathan, S.: Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Am. Econ. Rev. 94(4), 991\u20131013 (2004)","journal-title":"Am. Econ. Rev."},{"issue":"3","key":"876_CR41","doi-asserted-by":"publisher","first-page":"1071","DOI":"10.1287\/mnsc.2018.3273","volume":"66","author":"R Cui","year":"2020","unstructured":"Cui, R., Li, J., Zhang, D.J.: Reducing discrimination with reviews in the sharing economy: evidence from field experiments on airbnb. Manage. Sci. 66(3), 1071\u20131094 (2020)","journal-title":"Manage. Sci."},{"issue":"2","key":"876_CR42","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1257\/app.20160213","volume":"9","author":"B Edelman","year":"2017","unstructured":"Edelman, B., Luca, M., Svirsky, D.: Racial discrimination in the sharing economy: evidence from a field experiment. Am. Econ. J. Appl. Econ. 9(2), 1\u201322 (2017)","journal-title":"Am. Econ. J. Appl. Econ."},{"key":"876_CR43","doi-asserted-by":"publisher","DOI":"10.3386\/w22776","volume-title":"Racial and gender discrimination in transportation network companies","author":"Y Ge","year":"2016","unstructured":"Ge, Y., Knittel, C.R., MacKenzie, D., Zoepf, S.: Racial and gender discrimination in transportation network companies. Technical report, National Bureau of Economic Research (2016)"},{"key":"876_CR44","doi-asserted-by":"crossref","unstructured":"Rhue, L., Clark, J.: Who are you and what are you selling? Creatorbased and product-based racial cues in crowdfunding. MIS Quarterly 46(4) (2022)","DOI":"10.25300\/MISQ\/2022\/15214"},{"key":"876_CR45","doi-asserted-by":"crossref","unstructured":"Abid, A., Farooqi, M., Zou, J.: Persistent anti-Muslim bias in large language models. In: Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society, p. 298\u2013306 (2021)","DOI":"10.1145\/3461702.3462624"},{"key":"876_CR46","unstructured":"Nadeem, M., Bethke, A., Reddy, S.: Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456 (2020)"},{"key":"876_CR47","unstructured":"Ko, C., Shin, J., Song, H., Seo, J., Park, J.C.: Different bias under different criteria: Assessing bias in llms with a fact-based approach. arXiv preprint arXiv:2411.17338 (2024)"},{"key":"876_CR48","doi-asserted-by":"crossref","unstructured":"Wan, Y., Wu, D., Wang, H., Chang, K.-W.: The factuality tax of diversity-intervened text-to-image generation: Benchmark and fact-augmented intervention. In: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, p. 9082\u20139100 (2024)","DOI":"10.18653\/v1\/2024.emnlp-main.513"},{"key":"876_CR49","unstructured":"Huang, J.-t., Yan, Y., Liu, L., Wan, Y., Wang, W., Chang, K.-W., Lyu, M.R.: Fact-or-fair: A checklist for behavioral testing of ai models on fairness-related queries. arXiv preprint arXiv:2502.05849 (2025)"},{"key":"876_CR50","unstructured":"Bouchard, D.: An actionable framework for assessing bias and fairness in large language model use cases. arXiv preprint arXiv:2407.10853 (2024)"},{"issue":"3","key":"876_CR51","doi-asserted-by":"publisher","first-page":"1097","DOI":"10.1162\/coli_a_00524","volume":"50","author":"IO Gallegos","year":"2024","unstructured":"Gallegos, I.O., Rossi, R.A., Barrow, J., Tanjim, M.M., Kim, S., Dernoncourt, F., Yu, T., Zhang, R., Ahmed, N.K.: Bias and fairness in large language models: a survey. Comput. Linguist. 50(3), 1097\u20131179 (2024)","journal-title":"Comput. Linguist."},{"key":"876_CR52","doi-asserted-by":"crossref","unstructured":"Ferrara, E.: Should chatgpt be biased? challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738 (2023)","DOI":"10.2139\/ssrn.4627814"},{"key":"876_CR53","first-page":"12388","volume":"33","author":"J Vig","year":"2020","unstructured":"Vig, J., Gehrmann, S., Belinkov, Y., Qian, S., Nevo, D., Singer, Y., Shieber, S.: Investigating gender bias in language models using causal mediation analysis. Adv. Neural. Inf. Process. Syst. 33, 12388\u201312401 (2020)","journal-title":"Adv. Neural. Inf. Process. Syst."},{"key":"876_CR54","doi-asserted-by":"crossref","unstructured":"Benthall, S., Haynes, B.D.: Racial categories in machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, p. 289\u2013298 (2019)","DOI":"10.1145\/3287560.3287575"},{"key":"876_CR55","unstructured":"Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent Trade-Offs in the Fair Determination of Risk Scores (2016). https:\/\/arxiv.org\/abs\/1609.05807"},{"key":"876_CR56","unstructured":"Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al.: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)"},{"key":"876_CR57","volume-title":"Edsl: Expected parrot domain specific language for ai powered social science","author":"J Horton","year":"2024","unstructured":"Horton, J., Filippas, A., Horton, R.: Edsl: Expected parrot domain specific language for ai powered social science. Whitepaper, Expected Parrot (2024)"},{"key":"876_CR58","doi-asserted-by":"publisher","DOI":"10.1145\/3457607","author":"N Mehrabi","year":"2021","unstructured":"Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (2021). https:\/\/doi.org\/10.1145\/3457607","journal-title":"ACM Comput. Surv."},{"key":"876_CR59","unstructured":"Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. Adv. Neural Inform. Process. Syst. 29 (2016)"},{"key":"876_CR60","unstructured":"Agarwal, A., Beygelzimer, A., Dudik, M., Langford, J., Wallach, H.: A reductions approach to fair classification. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 60\u201369. PMLR. (2018). https:\/\/proceedings.mlr.press\/v80\/agarwal18a.html"},{"key":"876_CR61","unstructured":"Hendrycks, D., Burns, C., Basart, S., Critch, A., Li, J., Song, D., Steinhardt, J.: Aligning AI With Shared Human Values (2023)"},{"key":"876_CR62","doi-asserted-by":"publisher","DOI":"10.1145\/3678988","author":"SS Chivukula","year":"2024","unstructured":"Chivukula, S.S., Gray, C., Li, Z., Pivonka, A.C., Chen, J.: Surveying a landscape of ethics-focused design methods. ACM J. Responsib. Comput. (2024). https:\/\/doi.org\/10.1145\/3678988","journal-title":"ACM J. Responsib. Comput."}],"container-title":["AI and Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00876-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43681-025-00876-5","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00876-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,23]],"date-time":"2026-03-23T10:24:23Z","timestamp":1774261463000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43681-025-00876-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,5]]},"references-count":62,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,2]]}},"alternative-id":["876"],"URL":"https:\/\/doi.org\/10.1007\/s43681-025-00876-5","relation":{},"ISSN":["2730-5953","2730-5961"],"issn-type":[{"value":"2730-5953","type":"print"},{"value":"2730-5961","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,5]]},"assertion":[{"value":"11 September 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 October 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}],"article-number":"41"}}