{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,22]],"date-time":"2026-04-22T13:01:13Z","timestamp":1776862873670,"version":"3.51.2"},"reference-count":34,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,10,4]],"date-time":"2024-10-04T00:00:00Z","timestamp":1728000000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,10,4]],"date-time":"2024-10-04T00:00:00Z","timestamp":1728000000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100014013","name":"UK Research and Innovation","doi-asserted-by":"publisher","award":["MR\/V025600\/1"],"award-info":[{"award-number":["MR\/V025600\/1"]}],"id":[{"id":"10.13039\/100014013","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are <jats:italic>bullshitting<\/jats:italic>, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? 
I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they <jats:italic>need not<\/jats:italic> bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.<\/jats:p>","DOI":"10.1007\/s10676-024-09802-5","type":"journal-article","created":{"date-parts":[[2024,10,4]],"date-time":"2024-10-04T04:11:42Z","timestamp":1728015102000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["Large language models and their big bullshit potential"],"prefix":"10.1007","volume":"26","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1115-6134","authenticated-orcid":false,"given":"Sarah A.","family":"Fisher","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,10,4]]},"reference":[{"issue":"2","key":"9802_CR1","first-page":"e35179","volume":"15","author":"H Alkaissi","year":"2023","unstructured":"Alkaissi, H., & McFarlane, S. I. (2023). Artificial Hallucinations in ChatGPT: Implications in Scientific writing. Cureus, 15(2), e35179.","journal-title":"Cureus"},{"key":"9802_CR2","unstructured":"Borg, E. (forthcoming) (Ed.). LLMs, turing tests and Chinese rooms: The prospects for meaning in large Language models. Inquiry."},{"key":"9802_CR3","doi-asserted-by":"crossref","unstructured":"Borg, E., & Fisher, S. (2021). Semantic content and utterance context: A spectrum of approaches. In P. Stalmaszczyk (Ed.), The Cambridge Handbook of the Philosophy of Language (pp. 174\u2013193). Cambridge University Press. 
Cambridge Handbooks in Language and Linguistics.","DOI":"10.1017\/9781108698283.010"},{"key":"9802_CR4","doi-asserted-by":"publisher","first-page":"54","DOI":"10.1075\/pc.23.1.03car","volume":"23","author":"T Carson","year":"2016","unstructured":"Carson, T. (2016). Frankfurt and Cohen on bullshit, bullshitting, deception, lying, and concern with the truth of what one says. Pragmatics & Cognition, 23, 54\u201368.","journal-title":"Pragmatics & Cognition"},{"key":"9802_CR5","doi-asserted-by":"crossref","unstructured":"Cohen, G. (2002). Deeper into bullshit. In S. Buss, & L. Overton (Eds.), Contours of agency: Essays on themes from Harry Frankfurt (pp. 321\u2013339). MIT Press.","DOI":"10.7551\/mitpress\/2143.003.0015"},{"key":"9802_CR6","unstructured":"Davis, E., & Aaronson, D. (2023). Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems [version 2]. Arxiv: arXiv:2308.05713v2."},{"key":"9802_CR7","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1163\/187731011X597497","volume":"3","author":"M Dynel","year":"2011","unstructured":"Dynel, M. (2011). A web of deceit: A neo-gricean view on types of verbal deception. International Review of Pragmatics, 3, 139\u2013167.","journal-title":"International Review of Pragmatics"},{"key":"9802_CR8","doi-asserted-by":"crossref","unstructured":"Emsley, R. (2023). ChatGPT: these are not hallucinations \u2013 they\u2019re fabrications and falsifications. Schizophrenia 9(52).","DOI":"10.1038\/s41537-023-00379-4"},{"key":"9802_CR9","doi-asserted-by":"publisher","first-page":"29","DOI":"10.5840\/jphil200910612","volume":"106","author":"D Fallis","year":"2009","unstructured":"Fallis, D. (2009). What is lying? Journal of Philosophy, 106, 29\u201356.","journal-title":"Journal of Philosophy"},{"key":"9802_CR10","doi-asserted-by":"publisher","first-page":"563","DOI":"10.1111\/1746-8361.12007","volume":"66","author":"D Fallis","year":"2012","unstructured":"Fallis, D. (2012). 
Lying as a violation of Grice\u2019s first maxim of quality. Dialectica, 66, 563\u2013581.","journal-title":"Dialectica"},{"key":"9802_CR11","first-page":"11","volume":"37","author":"D Fallis","year":"2015","unstructured":"Fallis, D. (2015). Frankfurt wasn\u2019t bullshitting! Southwest Philosophical Studies, 37, 11\u201320.","journal-title":"Southwest Philosophical Studies"},{"key":"9802_CR12","doi-asserted-by":"publisher","first-page":"625","DOI":"10.1038\/s41586-024-07421-0","volume":"630","author":"S Farquhar","year":"2024","unstructured":"Farquhar, S., Kossen, J., Kuhn, L., & Gal, Y. (2024). Detecting hallucinations in large language models using semantic entropy. Nature, 630, 625\u2013630.","journal-title":"Nature"},{"key":"9802_CR13","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2002). Reply to G. A. Cohen. In S. Buss, & L. Overton (Eds.), Contours of Agency: Essays on themes from Harry Frankfurt (pp. 340\u2013344). MIT Press.","DOI":"10.7551\/mitpress\/2143.003.0031"},{"key":"9802_CR14","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2005 [1986]). On bullshit. Princeton University.","DOI":"10.1515\/9781400826537"},{"key":"9802_CR15","unstructured":"Grice, H. P. (1989). Studies in the way of words. Harvard University Press."},{"key":"9802_CR16","first-page":"3929","volume":"119","author":"K Guu","year":"2020","unstructured":"Guu, K., Lee, K., Tung, Z., Pasupat, P., & Chang, M. (2020). Retrieval Augmented Language Model Pre-training. Proceedings of the 37th International Conference on Machine Learning, 119, 3929\u20133938.","journal-title":"Proceedings of the 37th International Conference on Machine Learning"},{"key":"9802_CR17","doi-asserted-by":"publisher","unstructured":"Hadi, M. U., Al-Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Shaikh, M. B., Akhtar, N., Al-Garadi, M. A., Wu, J., Mirjalili, S., & Shah, M. (2024). LLMs: A comprehensive survey of applications, challenges, datasets, limitations, and future prospects [version 6]. 
TechRxiv preprint. https:\/\/doi.org\/10.36227\/techrxiv.23589741.v6","DOI":"10.36227\/techrxiv.23589741.v6"},{"key":"9802_CR18","doi-asserted-by":"crossref","unstructured":"Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology 26: Article number 38.","DOI":"10.1007\/s10676-024-09775-5"},{"key":"9802_CR19","unstructured":"Kaddour, J., Harris, J., Mozes, M., Bradley, H., Raileanu, R., & McHardy, R. (2023). Challenges and applications of large language models. arXiv: arXiv:2307.10169."},{"key":"9802_CR20","doi-asserted-by":"crossref","unstructured":"Kenyon, T., & Saul, J. (2022). Bald-Faced Bullshit and Authoritarian Political Speech: Making sense of Johnson and Trump. In L. Horn (Ed.), From lying to perjury: Linguistic and legal perspectives on lies and other falsehoods (pp. 165\u2013194). De Gruyter Mouton.","DOI":"10.1515\/9783110733730-008"},{"key":"9802_CR21","unstructured":"Lee, T. B., & Trott, S. (2023). A jargon-free explanation of how AI large language models work. ArsTechnica. https:\/\/arstechnica.com\/science\/2023\/07\/a-jargon-free-explanation-of-how-ai-large-language-models-work\/"},{"key":"9802_CR22","unstructured":"Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., K\u00fcttler, H., Lewis, M., Yih, W., Rockt\u00e4schel, T., Riedel, S., & Kiela, D. (2021). Retrieval-augmented generation for knowledge-intensive NLP tasks [version 4]. arXiv: arXiv:2005.11401."},{"key":"9802_CR23","doi-asserted-by":"publisher","first-page":"Article number 38","DOI":"10.3998\/ergo.4668","volume":"10","author":"F Mallory","year":"2023","unstructured":"Mallory, F. (2023). Fictionalism about chatbots. 
Ergo an Open Access Journal of Philosophy, 10, Article number 38.","journal-title":"Ergo an Open Access Journal of Philosophy"},{"key":"9802_CR24","doi-asserted-by":"crossref","unstructured":"Mandelkern, M., & Linzen, T. (2023). Do Language Models\u2019 Words Refer? [version 3]. arXiv: arXiv:2308.05576v3.","DOI":"10.1162\/coli_a_00522"},{"key":"9802_CR25","unstructured":"Milli\u00e8re, R., & Bruckner, C. (2024). A philosophical introduction to language models \u2013 Part 1: Continuity with classic debates [version 1]. arXiv: 2401.03910v1"},{"key":"9802_CR26","unstructured":"Mollick, E. (2022). ChatGPT is a Tipping Point for AI. Harvard Business Review. https:\/\/hbr.org\/2022\/12\/chatgpt-is-a-tipping-point-for-ai"},{"key":"9802_CR27","doi-asserted-by":"publisher","first-page":"493","DOI":"10.1038\/s41586-023-06647-8","volume":"623","author":"M Shanahan","year":"2023","unstructured":"Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role-play with large language models. Nature, 623, 493\u2013498.","journal-title":"Nature"},{"key":"9802_CR28","doi-asserted-by":"crossref","unstructured":"Stokke, A. (2018a). Bullshitting. In J. Meibauer (Ed.), The Oxford Handbook of lying (pp. 264\u2013276). Oxford University Press.","DOI":"10.1093\/oxfordhb\/9780198736578.013.20"},{"key":"9802_CR29","doi-asserted-by":"crossref","unstructured":"Stokke, A. (2018b). Lying and insincerity. Oxford University Press.","DOI":"10.1093\/oso\/9780198825968.001.0001"},{"key":"9802_CR30","doi-asserted-by":"publisher","first-page":"569","DOI":"10.1038\/d41586-024-01641-0","volume":"630","author":"K Verspoor","year":"2024","unstructured":"Verspoor, K. (2024). Fighting fire with fire. Nature, 630, 569\u2013570.","journal-title":"Nature"},{"key":"9802_CR31","unstructured":"Wolfram, S. (2023a). Wolfram|Alpha as the way to bring computational knowledge superpowers to ChatGPT. 
Stephen Wolfram Writings. writings.stephenwolfram.com\/2023\/01\/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt."},{"key":"9802_CR32","unstructured":"Wolfram, S. (2023b). What is ChatGPT doing \u2026 and why does it work? Stephen Wolfram Writings. writings.stephenwolfram.com\/2023\/02\/what-is-chatgpt-doing-and-why-does-it-work"},{"key":"9802_CR33","unstructured":"Wolfram, S. (2023c). ChatGPT Gets Its \u2018Wolfram Superpowers\u2019! Stephen Wolfram Writings. writings.stephenwolfram.com\/2023\/03\/chatgpt-gets-its-wolfram-superpowers."},{"key":"9802_CR34","doi-asserted-by":"crossref","unstructured":"Yang, C., & Fujita, S. (2024). Adaptive control of retrieval-augmented generation for LLMs through reflective tags [version 1]. Preprints: 2024082152.","DOI":"10.20944\/preprints202408.2152.v1"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09802-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-024-09802-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-024-09802-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,14]],"date-time":"2024-11-14T04:51:13Z","timestamp":1731559873000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-024-09802-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,4]]},"references-count":34,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["9802"],"URL":"https:\/\/doi.org\/10.1007\/s10676-024-09802-5","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,4]]},"assertion":[{"value":"4 October 2024","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 October 2024","order":2,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Update","order":3,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Minor correction in reference was done.","order":4,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"N\/A.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"N\/A.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The author has no competing interests to declare.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"67"}}