{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T21:29:57Z","timestamp":1775597397593,"version":"3.50.1"},"reference-count":38,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,4,18]],"date-time":"2025-04-18T00:00:00Z","timestamp":1744934400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,4,18]],"date-time":"2025-04-18T00:00:00Z","timestamp":1744934400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001782","name":"University of Melbourne","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100001782","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>In a recent thought-provoking essay called \u201cChatGPT is Bullshit,\u201d Hicks, Humphries and Slater call such large language models (LLMs) \u201cbullshitters\u201d and \u201cbullshit machines.\u201d Unlike the term \u201cbullshit,\u201d they argue, commonly used anthropomorphic terms such as \u201challucination\u201d and \u201cconfabulation\u201d mispresent LLMs and sow confusion that could be socially harmful. This paper criticizes their essay in two steps. First, its reliance on Harry Frankfurt\u2019s classic characterization of bullshit as indifference to truth, though understandable and compelling in one sense, risks misrepresenting LLMs. Second, the argument is too quick to jettison anthropomorphic terms like hallucination and confabulation, which might prove useful metaphors for understanding generative AI. Exploring language to articulate good ways of understanding LLMs is indeed a socially important task, one benefitting from critical open-mindedness, some historical awareness, and a nuanced approach to how various words used to describe AI can operate. This paper attempts to contribute to this task by questioning the wisdom of categorically calling bullshit on ChatGPT.<\/jats:p>","DOI":"10.1007\/s10676-025-09828-3","type":"journal-article","created":{"date-parts":[[2025,4,18]],"date-time":"2025-04-18T01:42:32Z","timestamp":1744940552000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Cut the crap: a critical response to \u201cChatGPT is bullshit\u201d"],"prefix":"10.1007","volume":"27","author":[{"given":"David","family":"Gunkel","sequence":"first","affiliation":[]},{"given":"Simon","family":"Coghlan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,18]]},"reference":[{"issue":"2","key":"9828_CR1","doi-asserted-by":"publisher","first-page":"287","DOI":"10.1080\/00048402.2014.984312","volume":"93","author":"K Allen","year":"2014","unstructured":"Allen, K. (2014). Hallucination and imagination. Australasian Journal of Philosophy, 93(2), 287\u2013302. https:\/\/doi.org\/10.1080\/00048402.2014.984312","journal-title":"Australasian Journal of Philosophy"},{"key":"9828_CR2","doi-asserted-by":"publisher","unstructured":"Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 
In Conference on Fairness, Accountability, and Transparency (FAccT \u201921), March 3\u201310, 2021, Virtual Event, Canada. ACM. https:\/\/doi.org\/10.1145\/3442188.3445922","DOI":"10.1145\/3442188.3445922"},{"key":"9828_CR3","unstructured":"Coeckelbergh, M., & Gunkel, D. J. (2025). Communicative AI: A critical introduction to large language models. Polity Books."},{"issue":"3","key":"9828_CR4","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1007\/s11023-024-09686-w","volume":"34","author":"S Coghlan","year":"2024","unstructured":"Coghlan, S. (2024). Anthropomorphizing machines: Reality or popular myth? Minds and Machines, 34(3), 25. https:\/\/doi.org\/10.1007\/s11023-024-09686-w","journal-title":"Minds and Machines"},{"key":"9828_CR5","doi-asserted-by":"crossref","unstructured":"Cohen, G. A. (2002). Deeper into bullshit. In S. Buss & L. Overton (Eds.), Contours of agency: Essays on themes from Harry Frankfurt (pp. 321\u2013339). MIT Press.","DOI":"10.7551\/mitpress\/2143.003.0015"},{"key":"9828_CR6","doi-asserted-by":"crossref","unstructured":"Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.","DOI":"10.12987\/9780300252392"},{"key":"9828_CR7","unstructured":"Dennett, D. C. (1989). The intentional stance. MIT Press."},{"key":"9828_CR8","unstructured":"Derrida, J. (1992). Acts of Literature. Ed. by Derek Attridge. New York: Routledge."},{"key":"9828_CR9","unstructured":"Edwards, B. (2023). Why ChatGPT and Bing Chat are so Good at Making Things Up. Ars Technica. https:\/\/arstechnica.com\/information-technology\/2023\/04\/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them\/"},{"key":"9828_CR10","doi-asserted-by":"publisher","unstructured":"Emsley, R. (2023). ChatGPT: These are not Hallucinations\u2013They\u2019re Fabrications and Falsifications. Schizophrenia 9(52). https:\/\/doi.org\/10.1038\/s41537-023-00379-4","DOI":"10.1038\/s41537-023-00379-4"},{"issue":"2","key":"9828_CR12","first-page":"81","volume":"6","author":"H Frankfurt","year":"1986","unstructured":"Frankfurt, H. (1986). On bullshit. Raritan, 6(2), 81\u2013100. https:\/\/raritanquarterly.rutgers.edu\/issue-index\/all-volumes-issues\/volume-06\/volume-06-number-2","journal-title":"Raritan"},{"key":"9828_CR13","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2002). Reply to G. A. Cohen. In S. Buss & L. Overton (Eds.), Contours of agency: Essays on themes from Harry Frankfurt (pp. 339\u2013344). MIT Press.","DOI":"10.7551\/mitpress\/2143.003.0031"},{"key":"9828_CR14","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2005). On bullshit. Princeton University Press.","DOI":"10.1515\/9781400826537"},{"key":"9828_CR16","doi-asserted-by":"publisher","unstructured":"Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38, 1\u201310. https:\/\/doi.org\/10.1007\/s10676-024-09775-5","DOI":"10.1007\/s10676-024-09775-5"},{"key":"9828_CR17","unstructured":"Hirstein, W. (Ed.). (2009). Confabulation: Views from neuroscience, psychiatry, psychology, and philosophy. Oxford University Press."},{"issue":"1\/4","key":"9828_CR18","doi-asserted-by":"publisher","first-page":"5","DOI":"10.5840\/bpej2010291\/43","volume":"29","author":"A Johnson","year":"2010","unstructured":"Johnson, A. (2010). A new take on deceptive advertising: Beyond Frankfurt\u2019s analysis of BS. Business & Professional Ethics Journal, 29(1\/4), 5\u201332. 
http:\/\/www.jstor.org\/stable\/41340837","journal-title":"Business & Professional Ethics Journal"},{"key":"9828_CR19","unstructured":"Kirschenbaum, M. (2023). Prepare for the Textpocalypse. The Atlantic. 8 March. https:\/\/www.theatlantic.com\/technology\/archive\/2023\/03\/ai-chatgpt-writing-language-models\/673318\/"},{"key":"9828_CR20","unstructured":"Kolassa, S. (2023). ChatGPT as a Generator of Mindless Bullshit. LinkedIn. 14 October. https:\/\/www.linkedin.com\/pulse\/chatgpt-generator-mindless-bullshit-stephan-kolassa\/"},{"key":"9828_CR21","unstructured":"Lakshmanan, V. (2022). Why Large Language Models like ChatGPT are Bullshit Artists, and How to Use Them Effectively Anyway. LinkedIn. 15 December. https:\/\/www.linkedin.com\/pulse\/why-large-language-models-like-chatgpt-bullshit-how-use-lakshmanan\/"},{"key":"9828_CR22","doi-asserted-by":"publisher","DOI":"10.1007\/s11098-023-02094-3","author":"BA Levinstein","year":"2024","unstructured":"Levinstein, B. A., & Herrmann, D. A. (2024). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies. https:\/\/doi.org\/10.1007\/s11098-023-02094-3","journal-title":"Philosophical Studies"},{"key":"9828_CR23","doi-asserted-by":"crossref","unstructured":"Levy, N. (2023). Philosophy, bullshit, and peer review: Elements in epistemology. Cambridge University Press.","DOI":"10.1017\/9781009256315"},{"key":"9828_CR24","unstructured":"Mahon, J. E. (2016). The Definition of Lying and Deception. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https:\/\/plato.stanford.edu\/archives\/win2016\/entries\/lying-definition\/"},{"key":"9828_CR25","doi-asserted-by":"crossref","unstructured":"Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI Hallucinations: A Misnomer Worth Clarifying. Cornell University, ArXiv, Computer Science, January 9. https:\/\/arxiv.org\/html\/2401.06796v1.","DOI":"10.1109\/CAI59869.2024.00033"},{"key":"9828_CR26","unstructured":"Marcus, G. (2024). Evidence that LLMs are reaching a point of diminishing returns \u2014 and what that might mean. Marcus on AI. 13 April. https:\/\/garymarcus.substack.com\/p\/evidence-that-llms-are-reaching-a"},{"key":"9828_CR27","unstructured":"Marcus, G., & Davis, E. (2020). GPT-3, Bloviator: OpenAI\u2019s Language Generator has no Idea What it\u2019s Talking About. MIT Technology Review. 22 August. https:\/\/www.technologyreview.com\/2020\/08\/22\/1007539\/gpt3-openai-language-generator-artificial-intelligence-ai-opinion\/"},{"key":"9828_CR29","unstructured":"McQuillan, D. (2023). ChatGPT is a Bullshit Generator Waging Class War. Vice. 10 February. https:\/\/www.vice.com\/en\/article\/akex34\/chatgpt-is-a-bullshit-generator-waging-class-war"},{"key":"9828_CR28","unstructured":"Milli\u00e8re, R., & Buckner, C. (2024). A Philosophical Introduction to Language Models - Part II: The Way Forward. arXiv preprint arXiv:2405.03207."},{"key":"9828_CR30","unstructured":"Narayanan, A., & Kapoor, S. (2022). ChatGPT is a Bullshit Generator. But it Can Still be Amazingly Useful. AI Snake Oil. 6 December. https:\/\/www.aisnakeoil.com\/p\/chatgpt-is-a-bullshit-generator-but"},{"key":"9828_CR31","doi-asserted-by":"crossref","unstructured":"Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can\u2019t, and how to tell the difference. 
Princeton University Press.","DOI":"10.1515\/9780691249643"},{"issue":"6","key":"9828_CR32","doi-asserted-by":"publisher","first-page":"549","DOI":"10.1017\/S1930297500006999","volume":"10","author":"G Pennycook","year":"2015","unstructured":"Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549\u2013563. https:\/\/doi.org\/10.1017\/S1930297500006999","journal-title":"Judgment and Decision Making"},{"key":"9828_CR33","doi-asserted-by":"crossref","unstructured":"Pepperberg, I. M. (2012). Symbolic communication in the grey parrot. In J. Vonk & T. K. Shackelford (Eds.), The Oxford handbook of comparative evolutionary psychology (pp. 297\u2013319). Oxford University Press.","DOI":"10.1093\/oxfordhb\/9780199738182.013.0016"},{"key":"9828_CR34","doi-asserted-by":"crossref","unstructured":"Ric\u0153ur, P. (1984). Time and Narrative, vol. 1. Trans. by Kathleen McLaughlin and David Pellauer. Chicago, IL: University of Chicago Press.","DOI":"10.7208\/chicago\/9780226713519.001.0001"},{"key":"9828_CR35","unstructured":"Romero, A. (2023). ChatGPT: A Bullshit Tool For Bullshit Jobs. The Algorithmic Bridge. 12 July. https:\/\/www.thealgorithmicbridge.com\/p\/chatgpt-a-bullshit-tool-for-bullshit"},{"issue":"1","key":"9828_CR36","doi-asserted-by":"publisher","first-page":"342","DOI":"10.37074\/jalt.2023.6.1.9","volume":"6","author":"J Rudolph","year":"2023","unstructured":"Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342\u2013363. https:\/\/doi.org\/10.37074\/jalt.2023.6.1.9","journal-title":"Journal of Applied Learning and Teaching"},{"key":"9828_CR37","unstructured":"Slater, J., Humphries, J., & Hicks, M. T. (2024, July 17). ChatGPT Isn\u2019t \u2018Hallucinating\u2019\u2014It\u2019s Bullshitting! Scientific American. https:\/\/www.scientificamerican.com\/article\/chatgpt-isnt-hallucinating-its-bullshitting\/. Accessed on 10 December 2024."},{"issue":"11","key":"9828_CR38","doi-asserted-by":"publisher","first-page":"e0000388","DOI":"10.1371\/journal.pdig.0000388","volume":"2","author":"A Smith","year":"2023","unstructured":"Smith, A., Greaves, F., & Panch, T. (2023). Hallucination or confabulation? Neuroanatomy as metaphor in large language models. PLOS Digit Health, 2(11), e0000388. https:\/\/doi.org\/10.1371\/journal.pdig.0000388","journal-title":"PLOS Digit Health"},{"key":"9828_CR40","doi-asserted-by":"crossref","unstructured":"Sui, P., Duede, E., Wu, S., & So, R. J. (2024). Confabulation: The Surprising Value of Large Language Model Hallucinations. arXiv. https:\/\/arxiv.org\/abs\/2406.04175","DOI":"10.18653\/v1\/2024.acl-long.770"},{"key":"9828_CR42","unstructured":"Zhang, M., Press, O., Merrill, W., Liu, A., & Smith, N. A. (2023). How Language Model Hallucinations Can Snowball. 
arXiv preprint arXiv:2305.13534."}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09828-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09828-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09828-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,6]],"date-time":"2025-09-06T11:41:15Z","timestamp":1757158875000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09828-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,18]]},"references-count":38,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["9828"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09828-3","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,18]]},"assertion":[{"value":"18 April 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"23"}}
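
A minimal sketch, separate from the deposited record above, of how a work record like this one can be retrieved and summarized through the public Crossref REST API works route; the only field names used (message, title, author, container-title, volume, issue, DOI) are those that appear in the JSON above, and the script is illustrative rather than an official client.

import json
import urllib.request

# Illustrative sketch: fetch the Crossref work record for this article and
# print a one-line citation built from the fields shown in the record above.
DOI = "10.1007/s10676-025-09828-3"
with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    work = json.load(resp)["message"]  # top-level "message" object, as above

# "author" is a list of {"given": ..., "family": ...} dicts; "title" and
# "container-title" are single-element lists in this record.
authors = " and ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
print(f'{authors}. "{work["title"][0]}." {work["container-title"][0]} '
      f'{work["volume"]}({work["issue"]}). https://doi.org/{work["DOI"]}')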