{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,27]],"date-time":"2026-04-27T07:36:20Z","timestamp":1777275380440,"version":"3.51.4"},"reference-count":29,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,6,1]],"date-time":"2024-06-01T00:00:00Z","timestamp":1717200000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,8]],"date-time":"2024-06-08T00:00:00Z","timestamp":1717804800000},"content-version":"vor","delay-in-days":7,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called \u201cAI hallucinations\u201d. We argue that these falsehoods, and the overall activity of large language models, are better understood as <jats:italic>bullshit<\/jats:italic> in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions.
We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.<\/jats:p>","DOI":"10.1007\/s10676-024-09775-5","type":"journal-article","created":{"date-parts":[[2024,6,8]],"date-time":"2024-06-08T04:01:31Z","timestamp":1717819291000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":298,"title":["ChatGPT is bullshit"],"prefix":"10.1007","volume":"26","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1304-5668","authenticated-orcid":false,"given":"Michael Townsen","family":"Hicks","sequence":"first","affiliation":[]},{"given":"James","family":"Humphries","sequence":"additional","affiliation":[]},{"given":"Joe","family":"Slater","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,8]]},"reference":[{"key":"9775_CR1","doi-asserted-by":"publisher","unstructured":"Alkaissi, H., & McFarlane, S. I., (2023, February 19). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https:\/\/doi.org\/10.7759\/cureus.35179.","DOI":"10.7759\/cureus.35179"},{"key":"9775_CR2","doi-asserted-by":"crossref","unstructured":"Bacin, S. (2021). My duties and the morality of others: Lying, truth and the good example in Fichte\u2019s normative perfectionism. In S. Bacin, & O. Ware (Eds.), Fichte\u2019s system of Ethics: A critical guide. Cambridge University Press.","DOI":"10.1017\/9781108635820.011"},{"key":"9775_CR3","doi-asserted-by":"crossref","unstructured":"Cassam, Q. (2019). Vices of the mind. Oxford University Press.","DOI":"10.1093\/oso\/9780198826903.001.0001"},{"key":"9775_CR5","doi-asserted-by":"crossref","unstructured":"Cohen, G. A. (2002). Deeper into bullshit. In S. Buss, & L. Overton (Eds.), The contours of Agency: Essays on themes from Harry Frankfurt. 
MIT Press.","DOI":"10.7551\/mitpress\/2143.003.0015"},{"key":"9775_CR6","unstructured":"Davis, E., & Aaronson, S. (2023). Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems. ArXiv Preprint: arXiv, 2308, 05713v2."},{"key":"9775_CR8","doi-asserted-by":"publisher","first-page":"343","DOI":"10.1017\/S0140525X00016393","volume":"6","author":"DC Dennett","year":"1983","unstructured":"Dennett, D. C. (1983). Intentional systems in cognitive ethology: The Panglossian paradigm defended. Behavioral and Brain Sciences, 6, 343\u2013390.","journal-title":"Behavioral and Brain Sciences"},{"key":"9775_CR7","doi-asserted-by":"crossref","unstructured":"Dennett, D. C. (1987). The intentional stance. MIT Press.","DOI":"10.1017\/S0140525X00058611"},{"key":"9775_CR29","doi-asserted-by":"crossref","unstructured":"Whitcomb, D. (2023). Bullshit questions. Analysis, 83(2), 299\u2013304.","DOI":"10.1093\/analys\/anad002"},{"key":"9775_CR9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1111\/phib.12328","volume":"00","author":"K Easwaran","year":"2023","unstructured":"Easwaran, K. (2023). Bullshit activities. Analytic Philosophy, 00, 1\u201323. https:\/\/doi.org\/10.1111\/phib.12328.","journal-title":"Analytic Philosophy"},{"key":"9775_CR10","unstructured":"Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https:\/\/arstechnica.com\/information-technology\/2023\/04\/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them\/, accessed 19th April, 2024."},{"key":"9775_CR11","unstructured":"Frankfurt, H. (2002). Reply to Cohen. In S. Buss, & L. Overton (Eds.), The contours of agency: Essays on themes from Harry Frankfurt. MIT Press."},{"key":"9775_CR12","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2005). On Bullshit. Princeton University Press.","DOI":"10.1515\/9781400826537"},{"key":"9775_CR13","unstructured":"Knight, W. (2023). Some glimpse AGI in ChatGPT. Others call it a mirage.
Wired, August 18, 2023, accessed via https:\/\/www.wired.com\/story\/chatgpt-agi-intelligence\/."},{"key":"9775_CR14","unstructured":"Levinstein, B. A., & Herrmann, D. A. (forthcoming). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies, 1\u201327."},{"key":"9775_CR15","doi-asserted-by":"crossref","unstructured":"Levy, N. (2023). Philosophy, Bullshit, and peer review. Cambridge University Press.","DOI":"10.1017\/9781009256315"},{"key":"9775_CR16","unstructured":"Lightman, H., et al. (2023). Let\u2019s verify step by step. ArXiv Preprint: arXiv, 2305, 20050."},{"key":"9775_CR17","unstructured":"Lysandrou (2023). Comparative analysis of drug-GPT and ChatGPT LLMs for healthcare insights: Evaluating accuracy and relevance in patient and HCP contexts. ArXiv Preprint: arXiv, 2307, 16850v1."},{"key":"9775_CR19","doi-asserted-by":"crossref","unstructured":"Macpherson, F. (2013). The philosophy and psychology of hallucination: an introduction, in Hallucination, Macpherson and Platchias (Eds.), London: MIT Press.","DOI":"10.7551\/mitpress\/9780262019200.001.0001"},{"key":"9775_CR18","unstructured":"Mahon, J. E. (2015). The definition of lying and deception. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (Ed.), https:\/\/plato.stanford.edu\/archives\/win2016\/entries\/lying-definition\/."},{"issue":"38","key":"9775_CR21","first-page":"1082","volume":"10","author":"F Mallory","year":"2023","unstructured":"Mallory, F. (2023). Fictionalism about chatbots. Ergo, 10(38), 1082\u20131100.","journal-title":"Ergo"},{"key":"9775_CR20","doi-asserted-by":"crossref","unstructured":"Mandelkern, M., & Linzen, T. (2023). Do Language Models\u2019 Words Refer? ArXiv Preprint: arXiv, 2308, 05576.","DOI":"10.1162\/coli_a_00522"},{"key":"9775_CR23","unstructured":"OpenAI (2023). GPT-4 technical report.
ArXiv Preprint: arXiv, 2303, 08774v3."},{"key":"9775_CR24","doi-asserted-by":"publisher","DOI":"10.1111\/papq.12442","author":"I Proops","year":"2023","unstructured":"Proops, I., & Sorensen, R. (2023). Destigmatizing the exegetical attribution of lies: the case of Kant. Pacific Philosophical Quarterly. https:\/\/doi.org\/10.1111\/papq.12442.","journal-title":"Pacific Philosophical Quarterly"},{"key":"9775_CR25","unstructured":"Sarkar, A. (2023). ChatGPT 5 is on track to attain artificial general intelligence. The Statesman, April 12, 2023. Accessed via https:\/\/www.thestatesman.com\/supplements\/science_supplements\/chatgpt-5-is-on-track-to-attain-artificial-general-intelligence-1503171366.html."},{"key":"9775_CR26","doi-asserted-by":"publisher","unstructured":"Shah, C., & Bender, E. M. (2022). Situating search. CHIIR \u201822: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval, March 2022, pp. 221\u2013232. https:\/\/doi.org\/10.1145\/3498366.3505816.","DOI":"10.1145\/3498366.3505816"},{"key":"9775_CR28","unstructured":"Weise, K., & Metz, C. (2023). When AI chatbots hallucinate. New York Times, May 9, 2023. Accessed via https:\/\/www.nytimes.com\/2023\/05\/01\/business\/ai-chatbots-hallucination.html."},{"key":"9775_CR27","unstructured":"Weiser, B. (2023). Here\u2019s what happens when your lawyer uses ChatGPT. New York Times, May 23, 2023. Accessed via https:\/\/www.nytimes.com\/2023\/05\/27\/nyregion\/avianca-airline-lawsuit-chatgpt.html."},{"key":"9775_CR30","unstructured":"Zhang (2023). How language model hallucinations can snowball. ArXiv Preprint: arXiv, 2305, 13534v1."},{"key":"9775_CR31","unstructured":"Zhu, T., et al. (2023). Large language models for information retrieval: A survey.
Arxiv Preprint: arXiv, 2308, 17107v2."}],"updated-by":[{"DOI":"10.1007\/s10676-024-09785-3","type":"correction","label":"Correction","source":"publisher","updated":{"date-parts":[[2024,7,11]],"date-time":"2024-07-11T00:00:00Z","timestamp":1720656000000}}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09775-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-024-09775-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09775-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,12]],"date-time":"2024-07-12T06:14:13Z","timestamp":1720764853000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-024-09775-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6]]},"references-count":29,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["9775"],"URL":"https:\/\/doi.org\/10.1007\/s10676-024-09775-5","relation":{"correction":[{"id-type":"doi","id":"10.1007\/s10676-024-09785-3","asserted-by":"object"}]},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6]]},"assertion":[{"value":"8 June 2024","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 July 2024","order":2,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Correction","order":3,"name":"change_type","label":"Change 
Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"A Correction to this paper has been published:","order":4,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"https:\/\/doi.org\/10.1007\/s10676-024-09785-3","URL":"https:\/\/doi.org\/10.1007\/s10676-024-09785-3","order":5,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"38"}}