{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T14:51:57Z","timestamp":1773154317449,"version":"3.50.1"},"reference-count":54,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100007537","name":"Freie Universit\u00e4t Berlin","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100007537","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2025,8]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>With the increasing presence of adolescents and children online, it is crucial to evaluate algorithms designed to protect them from physical and mental harm. This study measures the bias introduced by emerging slurs found in youth language on existing BERT-based hate speech detection models. The research establishes a novel framework to identify language bias within trained networks, introducing a technique to detect emerging hate phrases and evaluate the unintended bias associated with them. As a result, three bias test sets are constructed: one for emerging hate speech terms, another for established hate terms, and one to test for overfitting. Based on these test sets, three scientific and one commercial hate speech detection models are assessed and compared. For comprehensive evaluation, the research introduces a novel Youth Language Bias Score. 
Finally, the study applies fine-tuning as a mitigation strategy for youth language bias, rigorously testing and evaluating the newly trained classifier. To summarize, the research introduces a novel framework for bias detection, highlights the influence of adolescent language on classifier performance in hate speech classification, and presents the first-ever hate speech classifier specifically trained for online youth language. This study focuses only on slurs in hateful speech, offering a foundational perspective for the field.<\/jats:p>","DOI":"10.1007\/s43681-025-00701-z","type":"journal-article","created":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T05:06:40Z","timestamp":1741756000000},"page":"3953-3965","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Youth language and emerging slurs: tackling bias in BERT-based hate speech detection"],"prefix":"10.1007","volume":"5","author":[{"given":"Jan","family":"Fillies","sequence":"first","affiliation":[]},{"given":"Adrian","family":"Paschke","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,12]]},"reference":[{"key":"701_CR1","unstructured":"Rohleder, B.: Kinder- & Jugendstudie 2022 (2022)"},{"key":"701_CR2","unstructured":"McCarthy, N.: Facebook Removes Record Number Of Hate Speech Posts [Infographic]. https:\/\/www.forbes.com\/sites\/niallmccarthy\/2020\/05\/13\/facebook-removes-record-number-of-hate-speech-posts-infographic\/?sh=20c0ef983035"},{"key":"701_CR3","doi-asserted-by":"crossref","unstructured":"Dietrich, F.: Ai-based removal of hate speech from digital social networks: chances and risks for freedom of expression. AI and Ethics, 1\u201311 (2024)","DOI":"10.1007\/s43681-024-00610-7"},{"key":"701_CR4","doi-asserted-by":"crossref","unstructured":"Rabonato, R.T., Berton, L.: A systematic review of fairness in machine learning. 
AI and Ethics, 1\u201312 (2024)","DOI":"10.1007\/s43681-024-00577-5"},{"key":"701_CR5","unstructured":"Blodgett, S.L., O\u2019Connor, B.: Racial disparity in natural language processing: a case study of social media African-American English (2017)"},{"key":"701_CR6","doi-asserted-by":"publisher","unstructured":"Dixon, L., Li, J., Sorensen, J., Thain, N., Vasserman, L.: Measuring and mitigating unintended bias in text classification. In: Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society. AIES \u201918, pp. 67\u201373. Association for Computing Machinery, New York, NY, USA (2018). https:\/\/doi.org\/10.1145\/3278721.3278729","DOI":"10.1145\/3278721.3278729"},{"key":"701_CR7","doi-asserted-by":"publisher","unstructured":"R\u00f6ttger, P., Vidgen, B., Nguyen, D., Waseem, Z., Margetts, H., Pierrehumbert, J.: HateCheck: Functional tests for hate speech detection models. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 41\u201358. Association for Computational Linguistics, Online (2021). https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.4","DOI":"10.18653\/v1\/2021.acl-long.4"},{"key":"701_CR8","doi-asserted-by":"publisher","DOI":"10.3390\/APP10124180","author":"K Florio","year":"2020","unstructured":"Florio, K., Basile, V., Polignano, M., Basile, P., Patti, V.: Time of your hate: The challenge of time in hate speech detection on social media. Appl. Sci. (2020). https:\/\/doi.org\/10.3390\/APP10124180","journal-title":"Appl. 
Sci."},{"key":"701_CR9","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0073791","author":"HA Schwartz","year":"2013","unstructured":"Schwartz, H.A., Eichstaedt, J.C., Kern, M.L., Dziurzynski, L., Ramones, S.M., Agrawal, M., Shah, A., Kosinski, M., Stillwell, D., Seligman, M.E.P., Ungar, L.H.: Personality, gender, and age in the language of social media: The open-vocabulary approach. PLoS One (2013). https:\/\/doi.org\/10.1371\/journal.pone.0073791","journal-title":"PLoS One"},{"key":"701_CR10","doi-asserted-by":"publisher","unstructured":"Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Burstein, J., Doran, C., Solorio, T. (eds.) Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171\u20134186. Association for Computational Linguistics, Minneapolis, Minnesota (2019). https:\/\/doi.org\/10.18653\/v1\/N19-1423","DOI":"10.18653\/v1\/N19-1423"},{"key":"701_CR11","unstructured":"Saleh, H., Alhothali, A., Moria, K.: Detection of Hate Speech using BERT and Hate Speech Word Embedding with Deep Model (2021). arXiv:https:\/\/arxiv.org\/abs\/2111.01515"},{"key":"701_CR12","doi-asserted-by":"publisher","unstructured":"Nwaiwu, S., Jongsawat, N., Tungkasthan, A., Thaloey, J.: Fine-tuned bert model for hate speech detection in political discourse. In: 2024 22nd International Conference on ICT and Knowledge Engineering (ICTandKE), pp. 1\u20138 (2024). 
https:\/\/doi.org\/10.1109\/ICTKE62841.2024.10787162","DOI":"10.1109\/ICTKE62841.2024.10787162"},{"key":"701_CR13","unstructured":"OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., Avila, R., Babuschkin, I., more: GPT-4 Technical Report (2024)"},{"key":"701_CR14","doi-asserted-by":"publisher","unstructured":"Sprugnoli, R., Menini, S., Tonelli, S., Oncini, F., Piras, E.: Creating a WhatsApp dataset to study pre-teen cyberbullying. In: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pp. 51\u201359. Association for Computational Linguistics, Brussels, Belgium (2018). https:\/\/doi.org\/10.18653\/v1\/W18-5107","DOI":"10.18653\/v1\/W18-5107"},{"key":"701_CR15","doi-asserted-by":"publisher","unstructured":"Menini, S., Moretti, G., Corazza, M., Cabrio, E., Tonelli, S., Villata, S.: A system to monitor cyberbullying based on message classification and social network analysis. In: Proceedings of the Third Workshop on Abusive Language Online, pp. 105\u2013110. Association for Computational Linguistics, Florence, Italy (2019). https:\/\/doi.org\/10.18653\/v1\/W19-3511","DOI":"10.18653\/v1\/W19-3511"},{"key":"701_CR16","doi-asserted-by":"publisher","unstructured":"Wijesiriwardene, T., Inan, H., Kursuncu, U., Gaur, M., Shalin, V.L., Thirunarayan, K., Sheth, A., Arpinar, I.B.: Alone: A dataset for toxic behavior among adolescents on twitter. In: Social Informatics: 12th International Conference, SocInfo 2020, Pisa, Italy, October 6-9, 2020, Proceedings, pp. 427\u2013439. Springer, Berlin, Heidelberg (2020). https:\/\/doi.org\/10.1007\/978-3-030-60975-7_31","DOI":"10.1007\/978-3-030-60975-7_31"},{"key":"701_CR17","unstructured":"Bayzick, J., Kontostathis, A., Edwards, L.: Detecting the presence of cyberbullying using computer software. 
(2011)"},{"key":"701_CR18","doi-asserted-by":"crossref","unstructured":"Fillies, J., Peikert, S., Paschke, A.: Hateful Messages: A Conversational Data Set of Hate Speech produced by Adolescents on Discord (2023)","DOI":"10.1007\/978-3-031-42171-6_5"},{"key":"701_CR19","doi-asserted-by":"publisher","unstructured":"Waseem, Z., Hovy, D.: Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In: Andreas, J., Choi, E., Lazaridou, A. (eds.) Proceedings of the NAACL Student Research Workshop, pp. 88\u201393. Association for Computational Linguistics, San Diego, California (2016). https:\/\/doi.org\/10.18653\/v1\/N16-2013","DOI":"10.18653\/v1\/N16-2013"},{"key":"701_CR20","unstructured":"Gao, L., Kuppersmith, A., Huang, R.: Recognizing explicit and implicit hate speech using a weakly supervised two-path bootstrapping approach. In: Kondrak, G., Watanabe, T. (eds.) Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 774\u2013782. Asian Federation of Natural Language Processing, Taipei, Taiwan (2017)"},{"key":"701_CR21","doi-asserted-by":"publisher","unstructured":"Vidgen, B., Thrush, T., Waseem, Z., Kiela, D.: Learning from the worst: Dynamically generated datasets to improve online hate detection. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1667\u20131682. Association for Computational Linguistics, Online (2021). https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.132","DOI":"10.18653\/v1\/2021.acl-long.132"},{"key":"701_CR22","doi-asserted-by":"publisher","unstructured":"Kurita, K., Vyas, N., Pareek, A., Black, A.W., Tsvetkov, Y.: Measuring bias in contextualized word representations. In: Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pp. 166\u2013172. 
Association for Computational Linguistics, Florence, Italy (2019). https:\/\/doi.org\/10.18653\/v1\/W19-3823","DOI":"10.18653\/v1\/W19-3823"},{"key":"701_CR23","doi-asserted-by":"publisher","unstructured":"Kennedy, B., Jin, X., Mostafazadeh\u00a0Davani, A., Dehghani, M., Ren, X.: Contextualizing hate speech classifiers with post-hoc explanation. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5435\u20135442. Association for Computational Linguistics, Online (2020). https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.483","DOI":"10.18653\/v1\/2020.acl-main.483"},{"issue":"8","key":"701_CR24","doi-asserted-by":"publisher","first-page":"12432","DOI":"10.1111\/lnc3.12432","volume":"15","author":"D Hovy","year":"2021","unstructured":"Hovy, D., Prabhumoye, S.: Five sources of bias in natural language processing. Language Linguistics Compass 15(8), 12432 (2021). https:\/\/doi.org\/10.1111\/lnc3.12432","journal-title":"Language Linguistics Compass"},{"key":"701_CR25","unstructured":"Wiegand, M., Siegel, M., Ruppenhofer, J.: Overview of the germeval 2018 shared task on the identification of offensive language. Proceedings of GermEval 2018, 14th Conference on Natural Language Processing (KONVENS 2018), Vienna, Austria - September 21, 2018, pp. 1\u201310. Austrian Academy of Sciences, Vienna, Austria (2019). https:\/\/nbn-resolving.org\/urn:nbn:de:bsz:mh39-84935"},{"key":"701_CR26","doi-asserted-by":"publisher","unstructured":"Justen, L., M\u00fcller, K., Niemann, M., Becker, J.: No time like the present: Effects of language change on automated comment moderation. In: 2022 IEEE 24th Conference on Business Informatics (CBI), vol. 01, pp. 40\u201349 (2022). https:\/\/doi.org\/10.1109\/CBI54897.2022.00012","DOI":"10.1109\/CBI54897.2022.00012"},{"key":"701_CR27","doi-asserted-by":"publisher","unstructured":"Nejadgholi, I., Kiritchenko, S.: On cross-dataset generalization in automatic detection of online abuse. 
In: Proceedings of the Fourth Workshop on Online Abuse and Harms, pp. 173\u2013183. Association for Computational Linguistics, Online (2020). https:\/\/doi.org\/10.18653\/v1\/2020.alw-1.20","DOI":"10.18653\/v1\/2020.alw-1.20"},{"key":"701_CR28","unstructured":"Beutel, A., Chen, J., Zhao, Z., Chi, E.H.: Data decisions and theoretical implications when adversarially learning fair representations. ArXiv abs\/1707.00075 (2017)"},{"key":"701_CR29","doi-asserted-by":"publisher","unstructured":"Park, J.H., Shin, J., Fung, P.: Reducing gender bias in abusive language detection. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2799\u20132804. Association for Computational Linguistics, Brussels, Belgium (2018). https:\/\/doi.org\/10.18653\/v1\/D18-1302","DOI":"10.18653\/v1\/D18-1302"},{"key":"701_CR30","doi-asserted-by":"publisher","unstructured":"Zhang, B.H., Lemoine, B., Mitchell, M.: Mitigating unwanted biases with adversarial learning. In: Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society. AIES \u201918, pp. 335\u2013340. Association for Computing Machinery, New York, NY, USA (2018). https:\/\/doi.org\/10.1145\/3278721.3278779","DOI":"10.1145\/3278721.3278779"},{"key":"701_CR31","doi-asserted-by":"crossref","unstructured":"Cai, Y., Zimek, A., Wunder, G., Ntoutsi, E.: Power of Explanations: Towards automatic debiasing in hate speech detection (2022)","DOI":"10.1109\/DSAA54385.2022.10032325"},{"key":"701_CR32","doi-asserted-by":"publisher","first-page":"2787","DOI":"10.1109\/TASLP.2023.3294715","volume":"31","author":"J Lu","year":"2023","unstructured":"Lu, J., Lin, H., Zhang, X., Li, Z., Zhang, T., Zong, L., Ma, F., Xu, B.: Hate speech detection via dual contrastive learning. IEEE\/ACM Trans. Audio Speech Lang. Process. 31, 2787\u20132795 (2023)","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. 
Process."},{"key":"701_CR33","doi-asserted-by":"publisher","first-page":"214","DOI":"10.1016\/j.inffus.2023.03.015","volume":"96","author":"C Min","year":"2023","unstructured":"Min, C., Lin, H., Li, X., Zhao, H., Lu, J., Yang, L., Xu, B.: Finding hate speech with auxiliary emotion detection from self-training multi-label learning perspective. Inform. Fusion 96, 214\u2013223 (2023)","journal-title":"Inform. Fusion"},{"key":"701_CR34","doi-asserted-by":"publisher","first-page":"315","DOI":"10.1162\/tacl_a_00141","volume":"3","author":"X Ling","year":"2015","unstructured":"Ling, X., Singh, S., Weld, D.S.: Design challenges for entity linking. Trans. Ass. Comput. Linguist. 3, 315\u2013328 (2015). https:\/\/doi.org\/10.1162\/tacl_a_00141","journal-title":"Trans. Ass. Comput. Linguist."},{"key":"701_CR35","doi-asserted-by":"publisher","unstructured":"Hoffart, J., Altun, Y., Weikum, G.: Discovering emerging entities with ambiguous names. In: Proceedings of the 23rd International Conference on World Wide Web. WWW \u201914, pp. 385\u2013396. Association for Computing Machinery, New York, NY, USA (2014)https:\/\/doi.org\/10.1145\/2566486.2568003","DOI":"10.1145\/2566486.2568003"},{"key":"701_CR36","unstructured":"Heist, N., Paulheim, H.: Transformer-based Subject Entity Detection in Wikipedia Listings (2022)"},{"key":"701_CR37","doi-asserted-by":"crossref","unstructured":"F\u00e4rber, M., Rettinger, A., Asmar, B.: On emerging entity detection. In: 20th International Conference on Knowledge Engineering and Knowledge Management -. EKAW 2016, vol. 10024, pp. 223\u2013238. 
Springer, Berlin, Heidelberg (2016)","DOI":"10.1007\/978-3-319-49004-5_15"},{"key":"701_CR38","doi-asserted-by":"crossref","unstructured":"Akasaki, S., Yoshinaga, N., Toyoda, M.: Early Discovery of Emerging Entities in Microblogs (2019)","DOI":"10.24963\/ijcai.2019\/678"},{"key":"701_CR39","doi-asserted-by":"publisher","unstructured":"Derczynski, L., Nichols, E., Erp, M., Limsopatham, N.: Results of the WNUT2017 shared task on novel and emerging entity recognition. In: Proceedings of the 3rd Workshop on Noisy User-generated Text, pp. 140\u2013147. Association for Computational Linguistics, Copenhagen, Denmark (2017). https:\/\/doi.org\/10.18653\/v1\/W17-4418","DOI":"10.18653\/v1\/W17-4418"},{"key":"701_CR40","unstructured":"Nakashole, N., Tylenda, T., Weikum, G.: Fine-grained semantic typing of emerging entities. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1488\u20131497. Association for Computational Linguistics, Sofia, Bulgaria (2013)"},{"key":"701_CR41","doi-asserted-by":"crossref","unstructured":"Davidson, T., Warmsley, D., Macy, M., Weber, I.: Automated Hate Speech Detection and the Problem of Offensive Language (2017)","DOI":"10.1609\/icwsm.v11i1.14955"},{"key":"701_CR42","doi-asserted-by":"publisher","unstructured":"Founta, A., Djouvas, C., Chatzakou, D., Leontiadis, I., Blackburn, J., Stringhini, G., Vakali, A., Sirivianos, M., Kourtellis, N.: Large scale crowdsourcing and characterization of twitter abusive behavior. Proceedings of the International AAAI Conference on Web and Social Media 12(1) (2018) https:\/\/doi.org\/10.1609\/icwsm.v12i1.14991","DOI":"10.1609\/icwsm.v12i1.14991"},{"key":"701_CR43","doi-asserted-by":"publisher","unstructured":"Zhou, X., Sap, M., Swayamdipta, S., Choi, Y., Smith, N.: Challenges in automated debiasing for toxic language detection. In: Merlo, P., Tiedemann, J., Tsarfaty, R. (eds.) 
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pp. 3143\u20133155. Association for Computational Linguistics, Online (2021). https:\/\/doi.org\/10.18653\/v1\/2021.eacl-main.274","DOI":"10.18653\/v1\/2021.eacl-main.274"},{"key":"701_CR44","doi-asserted-by":"publisher","unstructured":"R\u00f6ttger, P., Seelawi, H., Nozza, D., Talat, Z., Vidgen, B.: Multilingual HateCheck: Functional tests for multilingual hate speech detection models. In: Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pp. 154\u2013169. Association for Computational Linguistics, Seattle, Washington (Hybrid) (2022). https:\/\/doi.org\/10.18653\/v1\/2022.woah-1.15","DOI":"10.18653\/v1\/2022.woah-1.15"},{"key":"701_CR45","doi-asserted-by":"publisher","unstructured":"Ramponi, A., Tonelli, S.: Features or spurious artifacts? data-centric baselines for fair and robust hate speech detection. In: Carpuat, M., Marneffe, M.-C., Meza\u00a0Ruiz, I.V. (eds.) Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3027\u20133040. Association for Computational Linguistics, Seattle, United States (2022). https:\/\/doi.org\/10.18653\/v1\/2022.naacl-main.221","DOI":"10.18653\/v1\/2022.naacl-main.221"},{"issue":"1","key":"701_CR46","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1007\/s43681-021-00081-0","volume":"2","author":"M Wich","year":"2022","unstructured":"Wich, M., Eder, T., Al Kuwatly, H., Groh, G.: Bias and comparison framework for abusive language datasets. AI Ethics 2(1), 79\u2013101 (2022)","journal-title":"AI Ethics"},{"key":"701_CR47","doi-asserted-by":"crossref","unstructured":"Fillies, J., Paschke, A.: Simple llm based approach to counter algospeak. In: Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024), pp. 
136\u2013145 (2024)","DOI":"10.18653\/v1\/2024.woah-1.10"},{"key":"701_CR48","doi-asserted-by":"publisher","unstructured":"Bahlo, N., Becker, T., Kalkavan-Ayd\u0131n, Z., Lotze, N., Marx, K., Schwarz, C., \u1e62im\u1e63ek, Y.: Jugendsprache vol. 1, 1st edn. J.B. Metzler, Europaplatz 3, 69115 Heidelberg (2019). https:\/\/doi.org\/10.1007\/978-3-476-04767-0","DOI":"10.1007\/978-3-476-04767-0"},{"key":"701_CR49","unstructured":"Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning (2016)"},{"key":"701_CR50","unstructured":"Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding (2019)"},{"key":"701_CR51","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach (2019)"},{"key":"701_CR52","unstructured":"Binny, M., Saha, P., Yimam, S.M., Biemann, C., Goyal, P., Mukherjee, A.: Hatexplain: A benchmark dataset for explainable hate speech detection. arXiv preprint arXiv:2012.10289 (2020)"},{"key":"701_CR53","unstructured":"Ljube\u0161i\u0107, N., Mozeti\u010d, I., Cinelli, M., Kralj\u00a0Novak, P.: English YouTube Hate Speech Corpus. Slovenian language resource repository CLARIN.SI (2021). http:\/\/hdl.handle.net\/11356\/1454"},{"key":"701_CR54","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-024-00641-0","author":"S Wyer","year":"2025","unstructured":"Wyer, S., Black, S.: Algorithmic bias: sexualized violence against women in gpt-3 models. AI Ethics (2025). 
https:\/\/doi.org\/10.1007\/s43681-024-00641-0","journal-title":"AI Ethics"}],"container-title":["AI and Ethics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00701-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43681-025-00701-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43681-025-00701-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,30]],"date-time":"2025-06-30T20:47:21Z","timestamp":1751316441000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43681-025-00701-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,12]]},"references-count":54,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,8]]}},"alternative-id":["701"],"URL":"https:\/\/doi.org\/10.1007\/s43681-025-00701-z","relation":{},"ISSN":["2730-5953","2730-5961"],"issn-type":[{"value":"2730-5953","type":"print"},{"value":"2730-5961","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,12]]},"assertion":[{"value":"16 January 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}}]}}