{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T14:13:23Z","timestamp":1771337603227,"version":"3.50.1"},"reference-count":16,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,6,25]],"date-time":"2024-06-25T00:00:00Z","timestamp":1719273600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,25]],"date-time":"2024-06-25T00:00:00Z","timestamp":1719273600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100008047","name":"Carnegie Mellon University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100008047","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Inf Technol Manag"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Large language models (LLMs) such as ChatGPT play a crucial role in guiding critical decisions nowadays, such as in choosing a college major. Therefore, it is essential to assess the limitations of these models\u2019 recommendations and understand any potential biases that may mislead human decisions. In this study, I investigate bias in terms of GPT-3.5 Turbo\u2019s college major recommendations for students with various profiles, looking at demographic disparities in factors such as race, gender, and socioeconomic status, as well as educational disparities such as score percentiles. To conduct this analysis, I sourced public data for California seniors who have taken standardized tests like the California Standard Test (CAST) in 2023. 
By constructing prompts for the ChatGPT API, allowing the model to recommend majors based on high school student profiles, I evaluate bias using various metrics, including the Jaccard Coefficient, Wasserstein Metric, and STEM Disparity Score. The results of this study reveal a significant disparity in the set of recommended college majors, irrespective of the bias metric applied. Notably, the most pronounced disparities are observed for students who fall into minority categories, such as LGBTQ\u2009+\u2009, Hispanic, or the socioeconomically disadvantaged. Within these groups, ChatGPT demonstrates a lower likelihood of recommending STEM majors compared to a baseline scenario where these criteria are unspecified. For example, when employing the STEM Disparity Score metric, an LGBTQ\u2009+\u2009student scoring at the 50th percentile faces a 50% reduced chance of receiving a STEM major recommendation in comparison to a male student, with all other factors held constant. Additionally, an average Asian student is three times more likely to receive a STEM major recommendation than an African-American student. Meanwhile, students facing socioeconomic disadvantages have a 30% lower chance of being recommended a STEM major compared to their more privileged counterparts. These findings highlight the pressing need to acknowledge and rectify biases within language models, especially when they play a critical role in shaping personalized decisions. 
Addressing these disparities is essential to foster a more equitable educational and career environment for all students.<\/jats:p>","DOI":"10.1007\/s10799-024-00430-5","type":"journal-article","created":{"date-parts":[[2024,6,25]],"date-time":"2024-06-25T04:02:44Z","timestamp":1719288164000},"page":"625-636","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Dissecting bias of ChatGPT in college major recommendations"],"prefix":"10.1007","volume":"26","author":[{"given":"Alex","family":"Zheng","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,25]]},"reference":[{"key":"430_CR1","doi-asserted-by":"publisher","first-page":"100702","DOI":"10.1016\/j.hlpt.2022.100702","volume":"12","author":"R Agarwal","year":"2022","unstructured":"Agarwal R, Bjarnadottir M, Rhue L, Dugas M, Crowley K, Clark J, Gao G (2022) Addressing algorithmic bias and the perpetuation of health inequities: an AI bias aware framework. Health Policy Technol 12:100702","journal-title":"Health Policy Technol"},{"key":"430_CR2","unstructured":"Alwahaidi K (2023) Students are using AI in their university applications. CBC Radio. Available at https:\/\/www.cbc.ca\/radio\/asithappens\/chatgpt-college-admissions-1.6960787"},{"issue":"5","key":"430_CR3","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1016\/j.ins.2019.01.023","volume":"483","author":"S Bag","year":"2019","unstructured":"Bag S, Kumar S, Tiwari M (2019) An efficient recommendation generation using relevant jaccard similarity. Inf Sci 483(5):53\u201364","journal-title":"Inf Sci"},{"issue":"4","key":"430_CR4","first-page":"1","volume":"32","author":"RS Baker","year":"2021","unstructured":"Baker RS, Hawn A (2021) Algorithmic bias in education. 
Int J Artif Intell Educ 32(4):1\u201341","journal-title":"Int J Artif Intell Educ"},{"key":"430_CR5","volume-title":"Fairness and machine learning: limitations and opportunities","author":"S Barocas","year":"2023","unstructured":"Barocas S, Hardt M, Narayanan A (2023) Fairness and machine learning: limitations and opportunities. The MIT Press, Cambridge"},{"issue":"3","key":"430_CR6","first-page":"1","volume":"4","author":"J Chen","year":"2023","unstructured":"Chen J, Dong H, Wang X, Feng F, Wang M, He X (2023) Bias and debias in recommendation systems: a survey and future directions. ACM Trans Inf Syst 4(3):1\u201339","journal-title":"ACM Trans Inf Syst"},{"key":"430_CR7","unstructured":"CollegeData (2023). Five things college applicants should know about using ChatGPT. Available at https:\/\/www.collegedata.com\/resources\/getting-in\/5-things-college-applicants-should-know-about-using-chatgpt"},{"key":"430_CR8","unstructured":"Liang P, Wu C, Morency P, Salakhutdinov R (2021) Towards understanding and mitigating social biases in language models. In Proceedings of international conference on machine learning (ICML), pp. 6565\u20136576"},{"key":"430_CR9","unstructured":"Luo N, Zheng A, Samtani S (2023) SmartRD: leveraging GPT4.0 prompting strategies for reasoning and decision\u2014the case of smart contract vulnerability assessment, In INFORMS workshop on data science, Phoenix, October 14, 2023"},{"issue":"6464","key":"430_CR10","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1126\/science.aax2342","volume":"366","author":"Z Obermeyer","year":"2019","unstructured":"Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447\u2013453","journal-title":"Science"},{"key":"430_CR11","volume-title":"The adaptive web","author":"MJ Pazzani","year":"2007","unstructured":"Pazzani MJ, Billsus D (2007) Content-based recommendation systems. The adaptive web. 
Springer, Heidelberg"},{"key":"430_CR12","unstructured":"Raja A (2023) Exploring OpenAI\u2019s GPT-3.5 turbo: a comprehensive guide. Medium, available at https:\/\/alizahidraja.medium.com\/exploring-openais-gpt-3-5turbo-a-comprehensive-guide-ca48b2f155fb."},{"key":"430_CR13","doi-asserted-by":"crossref","unstructured":"Stein S, Weiss G, Chen Y, Leeds D (2020) A College Major Recommendation System. In: Proceedings of the 14th ACM conference on recommendation systems (RecSys 20), pp. 640\u2013644","DOI":"10.1145\/3383313.3418488"},{"key":"430_CR14","unstructured":"Tsintzou V, Pitoura E, Tsaparas P (2018) Bias disparity in recommendation systems. Available at https:\/\/arxiv.org\/abs\/1811.01461"},{"key":"430_CR15","unstructured":"Wang Y, Wang L, Liu J (2013) A Theoretical Analysis of NDCG Ranking Measures. In Proceedings of the 26th international conference on neural information processing systems (NIPS), Volume 1 (p. 1776\u20131784)"},{"key":"430_CR16","unstructured":"West D (2023) Senate hearing highlights ai harms and need for tougher regulation. Brookings. 
Available at https:\/\/www.brookings.edu\/articles\/senate-hearing-highlights-ai-harms-and-need-for-tougher-regulation\/"}],"container-title":["Information Technology and Management"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10799-024-00430-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10799-024-00430-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10799-024-00430-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T08:23:42Z","timestamp":1759911822000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10799-024-00430-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,25]]},"references-count":16,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["430"],"URL":"https:\/\/doi.org\/10.1007\/s10799-024-00430-5","relation":{},"ISSN":["1385-951X","1573-7667"],"issn-type":[{"value":"1385-951X","type":"print"},{"value":"1573-7667","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,25]]},"assertion":[{"value":"30 May 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 June 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The author certifies that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this 
manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}