{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T20:03:29Z","timestamp":1776110609866,"version":"3.50.1"},"reference-count":41,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,1,25]],"date-time":"2024-01-25T00:00:00Z","timestamp":1706140800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Commun. ACM"],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:p>Interacting with a contemporary LLM-based conversational agent can create an illusion of being in the presence of a thinking creature. Yet, in their very nature, such systems are fundamentally not like us.<\/jats:p>","DOI":"10.1145\/3624724","type":"journal-article","created":{"date-parts":[[2024,1,25]],"date-time":"2024-01-25T16:39:18Z","timestamp":1706200758000},"page":"68-79","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":240,"title":["Talking about Large Language Models"],"prefix":"10.1145","volume":"67","author":[{"given":"Murray","family":"Shanahan","sequence":"first","affiliation":[{"name":"Imperial College London, U.K"}]}],"member":"320","published-online":{"date-parts":[[2024,1,25]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"Ahn M. et al. Do as I can not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691. (2022)."},{"key":"e_1_2_1_2_1","volume-title":"Advances in Neural Information Processing Systems","author":"Alayrac J.-B.","year":"2022","unstructured":"Alayrac, J.-B. et al. Flamingo: A visual language model for few-shot learning. In Advances in Neural Information Processing Systems (2022)."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.463"},{"key":"e_1_2_1_4_1","volume-title":"Proceedings of the 2021 ACM Conf. on Fairness, Accountability, and Transparency, 610--623","author":"Bender E.","unstructured":"Bender, E., Gebru, T., McMillan-Major, A., and Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conf. on Fairness, Accountability, and Transparency, 610--623."},{"key":"e_1_2_1_5_1","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown T.","year":"2020","unstructured":"Brown, T. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33 (2020), 1877--1901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_6_1","volume-title":"Advances in Neural Information Processing Systems","author":"Chan S.C.","year":"2022","unstructured":"Chan, S.C. et al. Data distributional properties drive emergent in-context learning in transformers. In Advances in Neural Information Processing Systems (2022)."},{"key":"e_1_2_1_7_1","volume-title":"et al. PaLM: Scaling language modeling with pathways. arXiv preprint arxiv:2204.02311","author":"Chowdhery S.","year":"2022","unstructured":"Chowdhery, S. et al. PaLM: Scaling language modeling with pathways. arXiv preprint arxiv:2204.02311 (2022)."},{"key":"e_1_2_1_8_1","volume-title":"Faithful reasoning using large language models. arXiv preprint arXiv:2208.14271","author":"Creswell A.","year":"2022","unstructured":"Creswell, A. 
and Shanahan, M. Faithful reasoning using large language models. arXiv preprint arXiv:2208.14271 (2022)."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of the Intern. Conf. on Learning Representations","author":"Creswell A.","year":"2023","unstructured":"Creswell, A., Shanahan, M., and Higgins, I. Selection-inference: Exploiting large language models for interpretable logical reasoning. In Proceedings of the Intern. Conf. on Learning Representations (2023)."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1746-8361.1982.tb01546.x"},{"key":"e_1_2_1_11_1","volume-title":"Intentional systems theory. The Oxford Handbook of Philosophy of Mind","author":"Dennett D.","year":"2009","unstructured":"Dennett, D. Intentional systems theory. The Oxford Handbook of Philosophy of Mind. Oxford University Press (2009), 339--350."},{"key":"e_1_2_1_12_1","volume-title":"BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin J.","year":"2018","unstructured":"Devlin, J., Chang, M-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_13_1","unstructured":"Elhage N. et al. A mathematical framework for transformer circuits. Transformer Circuits Thread (2021); https:\/\/bit.ly\/3NFliBA."},{"key":"e_1_2_1_14_1","volume-title":"et al. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375","author":"Glaese A.","year":"2022","unstructured":"Glaese, A. et al. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375 (2022)."},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2009.36"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/0167-2789(90)90087-6"},{"key":"e_1_2_1_17_1","volume-title":"et al. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916","author":"Kojima T.","year":"2022","unstructured":"Kojima, T. et al. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916 (2022)."},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.143"},{"key":"e_1_2_1_19_1","volume-title":"Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265","author":"Lu J.","year":"2019","unstructured":"Lu, J., Batra, D., Parikh, D., and Lee, S. Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265 (2019)."},{"key":"e_1_2_1_20_1","volume-title":"GPT-3, bloviator: OpenAI's language generator has no idea what it's talking about. MIT Technology Rev. (Aug","author":"Marcus G.","year":"2020","unstructured":"Marcus, G. and Davis, E. GPT-3, bloviator: OpenAI's language generator has no idea what it's talking about. MIT Technology Rev. (Aug. 2020)."},{"key":"e_1_2_1_21_1","volume-title":"Advances in Neural Information Processing Sys.","author":"Meng K.","year":"2022","unstructured":"Meng, K., Bau, D., Andonian, A.J., and Belinkov, Y. Locating and editing factual associations in GPT. In Advances in Neural Information Processing Sys. (2022)."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1037\/rev0000297"},{"key":"e_1_2_1_23_1","volume-title":"et al. Show your work: Scratchpads for intermediate computation with language models. 
arXiv preprint arXiv:2112.00114","author":"Nye M.","year":"2021","unstructured":"Nye, M. et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114 (2021)."},{"key":"e_1_2_1_24_1","volume-title":"et al. In-context learning and induction heads. Transformer Circuits Thread (2022)","author":"Olsson N.","year":"2022","unstructured":"Olsson, N. et al. In-context learning and induction heads. Transformer Circuits Thread (2022); https:\/\/transformercircuits.pub\/2022\/in-context-learning-andinduction-heads\/index.html."},{"key":"e_1_2_1_25_1","volume-title":"GPT-4 technical report. arXiv preprint arXiv:2303.08774","author":"Open AI.","year":"2023","unstructured":"OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)."},{"key":"e_1_2_1_26_1","volume-title":"Advances in Neural Information Processing Systems","author":"Ouyang L.","year":"2022","unstructured":"Ouyang, L. et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems (2022)."},{"key":"e_1_2_1_27_1","volume-title":"Meaning without reference in large language models. arXiv preprint arXiv:2208.02957","author":"Piantadosi S.T.","year":"2022","unstructured":"Piantadosi, S.T. and Hill, F. Meaning without reference in large language models. arXiv preprint arXiv:2208.02957 (2022)."},{"key":"e_1_2_1_28_1","unstructured":"Radford A. et al. Language models are unsupervised multitask learners. (2019)."},{"key":"e_1_2_1_29_1","volume-title":"et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446","author":"Rae J.W.","year":"2021","unstructured":"Rae, J.W. et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446 (2021)."},{"key":"e_1_2_1_30_1","volume-title":"Proceedings of the 27th AIAI Irish Conf. on Artificial Intelligence and Cognitive Science","author":"Ruane E.","year":"2019","unstructured":"Ruane, E., Birhane, A. and Ventresque, A. Conversational AI: Social and ethical considerations. In Proceedings of the 27th AIAI Irish Conf. on Artificial Intelligence and Cognitive Science (2019), 104--115."},{"key":"e_1_2_1_31_1","volume-title":"et al. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761","author":"Schick T.","year":"2023","unstructured":"Schick, T. et al. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761 (2023)."},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/780"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.7551\/mitpress\/12385.001.0001"},{"key":"e_1_2_1_34_1","volume-title":"Advances in Neural Information Processing Systems","author":"Stiennon N.","year":"2020","unstructured":"Stiennon, N. et al. Learning to summarize from human feedback. In Advances in Neural Information Processing Systems (2020), 3008--3021."},{"key":"e_1_2_1_35_1","volume-title":"et al. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239","author":"Thoppilan R.","year":"2022","unstructured":"Thoppilan, R. et al. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239 (2022)."},{"key":"e_1_2_1_36_1","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani A.","year":"2017","unstructured":"Vaswani, A. et al. Attention is all you need. 
In Advances in Neural Information Processing Systems (2017), 5998--6008."},{"key":"e_1_2_1_37_1","volume-title":"et al. Emergent abilities of large language models. Transactions on Machine Learning Research","author":"Wei J.","year":"2022","unstructured":"Wei, J. et al. Emergent abilities of large language models. Transactions on Machine Learning Research (2022)."},{"key":"e_1_2_1_38_1","volume-title":"Advances in Neural Information Processing Systems","author":"Wei J.","year":"2022","unstructured":"Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems (2022)."},{"key":"e_1_2_1_39_1","volume-title":"et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359","author":"Weidinger L.","year":"2021","unstructured":"Weidinger, L. et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359 (2021)."},{"key":"e_1_2_1_40_1","unstructured":"Wittgenstein L. Philosophical Investigations. Basil Blackwell (1953)."},{"key":"e_1_2_1_41_1","volume-title":"Proceedings of the Intern. Conf. on Learning Representations","author":"Yao S.","year":"2023","unstructured":"Yao, S. et al. ReAct: Synergizing reasoning and acting in language models. In Proceedings of the Intern. Conf. on Learning Representations (2023)."}],"container-title":["Communications of the ACM"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3624724","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3624724","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:35:44Z","timestamp":1750178144000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3624724"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,25]]},"references-count":41,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["10.1145\/3624724"],"URL":"https:\/\/doi.org\/10.1145\/3624724","relation":{},"ISSN":["0001-0782","1557-7317"],"issn-type":[{"value":"0001-0782","type":"print"},{"value":"1557-7317","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,25]]},"assertion":[{"value":"2024-01-25","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
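The record above is a Crossref work envelope: "status" and "message-type" wrap the bibliographic payload under "message". As a minimal sketch of how such a record can be re-fetched and its fields read, assuming the public Crossref REST API at api.crossref.org (the record itself only states "source":"Crossref", so the exact endpoint is an assumption):

    import json
    import urllib.request

    # Fetch the same work record by its DOI (taken from the record above).
    DOI = "10.1145/3624724"
    with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
        envelope = json.load(resp)

    # The bibliographic payload sits under "message", mirroring the record above.
    work = envelope["message"]
    print(work["title"][0])              # Talking about Large Language Models
    print(work["author"][0]["family"])   # Shanahan
    print(work["container-title"][0])    # Communications of the ACM
    print(work["references-count"])      # 41

    # Cited works appear under "reference"; entries are heterogeneous --
    # some carry only a DOI, others only an "unstructured" citation string.
    for ref in work.get("reference", []):
        print(ref.get("DOI") or ref.get("unstructured"))

Note that fields such as "is-referenced-by-count", "indexed", and "deposited" are maintained by Crossref and change over time, so a fresh fetch may differ from the snapshot above.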