{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,4]],"date-time":"2026-02-04T18:09:52Z","timestamp":1770228592720,"version":"3.49.0"},"publisher-location":"Cham","reference-count":24,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783031746291","type":"print"},{"value":"9783031746307","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025]]},"DOI":"10.1007\/978-3-031-74630-7_20","type":"book-chapter","created":{"date-parts":[[2025,2,7]],"date-time":"2025-02-07T12:16:03Z","timestamp":1738930563000},"page":"293-309","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["How Prevalent Is Gender Bias in\u00a0ChatGPT? 
- Exploring German and\u00a0English ChatGPT Responses"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1118-4330","authenticated-orcid":false,"given":"Stefanie","family":"Urchs","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9116-390X","authenticated-orcid":false,"given":"Veronika","family":"Thurner","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2154-5774","authenticated-orcid":false,"given":"Matthias","family":"A\u00dfenmacher","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4718-595X","authenticated-orcid":false,"given":"Christian","family":"Heumann","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0001-8146-9438","authenticated-orcid":false,"given":"Stephanie","family":"Thiemichen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,2,8]]},"reference":[{"key":"20_CR1","doi-asserted-by":"publisher","first-page":"682","DOI":"10.1007\/s11199-016-0648-4","volume":"76","author":"AH Bailey","year":"2017","unstructured":"Bailey, A.H., LaFrance, M.: Who counts as human? Antecedents to androcentric behavior. Sex Roles 76, 682\u2013693 (2017)","journal-title":"Sex Roles"},{"key":"20_CR2","unstructured":"Brown, T., et al.: Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33, 1877\u20131901 (2020). https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"key":"20_CR3","doi-asserted-by":"publisher","unstructured":"Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171\u20134186. 
Association for Computational Linguistics (2019). https:\/\/doi.org\/10.18653\/v1\/N19-1423. https:\/\/aclanthology.org\/N19-1423","DOI":"10.18653\/v1\/N19-1423"},{"key":"20_CR4","doi-asserted-by":"crossref","unstructured":"D\u2019ignazio, C., Klein, L.F.: Data Feminism. MIT Press (2020)","DOI":"10.7551\/mitpress\/11805.001.0001"},{"key":"20_CR5","unstructured":"Dutz, R., Rehbock, S., Peus, C.: F\u00fchrmint gender decoder: Subtile geschlechtskodierung in stellenanzeigen erkennen und aufl\u00f6sen [f\u00fchrmint gender decoder: Identifying and resolving subtle gender coding in job advertisements]. Personal in Hochschule und Wissenschaft entwickeln (2020). no DOI available"},{"key":"20_CR6","doi-asserted-by":"publisher","unstructured":"Fast, E., Vachovsky, T., Bernstein, M.: Shirtless and dangerous: quantifying linguistic signals of gender bias in an online fiction writing community. In: Proceedings of the International AAAI Conference on Web and Social Media, August 2021, vol. 10, no. 1, pp. 112\u2013120 (2021). https:\/\/doi.org\/10.1609\/icwsm.v10i1.14744. https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/14744","DOI":"10.1609\/icwsm.v10i1.14744"},{"key":"20_CR7","doi-asserted-by":"crossref","unstructured":"Field, A., Tsvetkov, Y.: Unsupervised discovery of implicit gender bias. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 596\u2013608 (2020)","DOI":"10.18653\/v1\/2020.emnlp-main.44"},{"key":"20_CR8","unstructured":"Fu, L., Danescu-Niculescu-Mizil, C., Lee, L.: Tie-breaker: using language models to quantify gender bias in sports journalism. 
In: Proceedings of the IJCAI Workshop on NLP meets Journalism (2016)"},{"issue":"1","key":"20_CR9","doi-asserted-by":"publisher","first-page":"109","DOI":"10.1037\/a0022530","volume":"101","author":"D Gaucher","year":"2011","unstructured":"Gaucher, D., Friesen, J., Kay, A.C.: Evidence that gendered wording in job advertisements exists and sustains gender inequality. J. Pers. Soc. Psychol. 101(1), 109 (2011)","journal-title":"J. Pers. Soc. Psychol."},{"key":"20_CR10","doi-asserted-by":"publisher","unstructured":"Graells-Garrido, E., Lalmas, M., Menczer, F.: First women, second sex: gender bias in Wikipedia. In: Proceedings of the 26th ACM Conference on Hypertext and Social Media, HT 2015, pp. 165\u2013174. Association for Computing Machinery, New York (2015). https:\/\/doi.org\/10.1145\/2700171.2791036","DOI":"10.1145\/2700171.2791036"},{"issue":"2","key":"20_CR11","doi-asserted-by":"publisher","first-page":"316","DOI":"10.1080\/1359432X.2015.1067611","volume":"25","author":"LK Horvath","year":"2016","unstructured":"Horvath, L.K., Sczesny, S.: Reducing women\u2019s lack of fit with leadership positions? Effects of the wording of job advertisements. Eur. J. Work Organ. Psy. 25(2), 316\u2013328 (2016)","journal-title":"Eur. J. Work Organ. Psy."},{"key":"20_CR12","doi-asserted-by":"publisher","unstructured":"Kulesza, T., Stumpf, S., Burnett, M., Kwan, I.: Tell me more? The effects of mental model soundness on personalizing an intelligent agent. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2012, pp. 1\u201310, Association for Computing Machinery, New York, NY, USA (2012). https:\/\/doi.org\/10.1145\/2207676.2207678","DOI":"10.1145\/2207676.2207678"},{"issue":"1","key":"20_CR13","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1017\/S0047404500000051","volume":"2","author":"R Lakoff","year":"1973","unstructured":"Lakoff, R.: Language and woman\u2019s place. Lang. Soc. 2(1), 45\u201379 (1973). 
https:\/\/doi.org\/10.1017\/S0047404500000051","journal-title":"Lang. Soc."},{"key":"20_CR14","unstructured":"Madaan, N., et al.: Analyze, detect and remove gender stereotyping from Bollywood movies. In: Conference on Fairness, Accountability and Transparency, pp. 92\u2013105. PMLR (2018)"},{"issue":"12S","key":"20_CR15","doi-asserted-by":"publisher","first-page":"S169","DOI":"10.1097\/ACM.0000000000003684","volume":"95","author":"CM Mateo","year":"2020","unstructured":"Mateo, C.M., Williams, D.R.: More than words: a vision to address bias and reduce discrimination in the health professions learning environment. Acad. Med. 95(12S), S169\u2013S177 (2020)","journal-title":"Acad. Med."},{"key":"20_CR16","doi-asserted-by":"publisher","unstructured":"Nakandala, S., Ciampaglia, G., Su, N., Ahn, Y.Y.: Gendered conversation in a social game-streaming platform. In: Proceedings of the International AAAI Conference on Web and Social Media, May 2017, vol. 11, no. 1, pp. 162\u2013171 (2017). https:\/\/doi.org\/10.1609\/icwsm.v11i1.14885. https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/14885","DOI":"10.1609\/icwsm.v11i1.14885"},{"key":"20_CR17","unstructured":"OpenAI: ChatGPT: Optimizing language models for dialogue (2022). https:\/\/openai.com\/blog\/chatgpt\/"},{"key":"20_CR18","unstructured":"OpenAI: ChatGPT (May 24 version). [Large Language Model] (2023). https:\/\/chat.openai.com\/chat"},{"key":"20_CR19","unstructured":"Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog (2019)"},{"issue":"1","key":"20_CR20","first-page":"5485","volume":"21","author":"C Raffel","year":"2020","unstructured":"Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21(1), 5485\u20135551 (2020)","journal-title":"J. Mach. Learn. 
Res."},{"key":"20_CR21","doi-asserted-by":"crossref","unstructured":"Sczesny, S., Formanowicz, M., Moser, F.: Can gender-fair language reduce gender stereotyping and discrimination? Front. Psychol., 25 (2016)","DOI":"10.3389\/fpsyg.2016.00025"},{"issue":"2","key":"20_CR22","doi-asserted-by":"publisher","first-page":"191","DOI":"10.1177\/0957926503014002277","volume":"14","author":"F Trix","year":"2003","unstructured":"Trix, F., Psenka, C.: Exploring the color of glass: letters of recommendation for female and male medical faculty. Discourse Soc. 14(2), 191\u2013220 (2003)","journal-title":"Discourse Soc."},{"key":"20_CR23","unstructured":"Vaswani, A., et al.: Attention is all you need. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, vol.\u00a030. Curran Associates, Inc. (2017). https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"20_CR24","doi-asserted-by":"publisher","unstructured":"Wagner, C., Garcia, D., Jadidi, M., Strohmaier, M.: It\u2019s a man\u2019s Wikipedia? Assessing gender inequality in an online encyclopedia. In: Proceedings of the International AAAI Conference on Web and Social Media, August 2021, vol. 9, no. 1, pp. 454\u2013463 (2021). https:\/\/doi.org\/10.1609\/icwsm.v9i1.14628. 
https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/14628","DOI":"10.1609\/icwsm.v9i1.14628"}],"container-title":["Communications in Computer and Information Science","Machine Learning and Principles and Practice of Knowledge Discovery in Databases"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-74630-7_20","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,2,7]],"date-time":"2025-02-07T12:16:09Z","timestamp":1738930569000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-74630-7_20"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025]]},"ISBN":["9783031746291","9783031746307"],"references-count":24,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-74630-7_20","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"value":"1865-0929","type":"print"},{"value":"1865-0937","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025]]},"assertion":[{"value":"8 February 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"This paper seeks to improve LLM research by highlighting problematic model behaviour. The structural changes in responses after unannounced framework updates, which we have observed, as well as the errors in grammar and spelling, can increase the workload of users. However, many of these issues are still quite obvious. When it comes to (gender) biases, even rarely occurring, subtle differences can become a huge issue. Given the tremendous user base and the increasing number of use cases, the inherent biases are potentially multiplied by the system. OpenAI is trying to solve such issues downstream, but with limited success, as we have seen, for instance, with the gender diversity template. 
It is important that these systems are challenged from a variety of perspectives to uncover all sorts of potential problems. This is an important first step towards mitigating them. We hope to contribute to this effort by analysing the system from the perspective of gender biases in English and German prompts. After all, LLM systems and research have to keep the users in mind. It is crucial to develop tools that make work easier for users. Another aspect current LLM research has to keep in mind is not striving to replace human labour but to enhance human capabilities. The human must be kept in the loop and not be replaced.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical Implications"}},{"value":"ECML PKDD","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Turin","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Italy","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2023","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 September 2023","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"22 September 2023","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"23","order":9,"name":"conference_number","label":"Conference 
Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"ecml2023","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/2023.ecmlpkdd.org\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}