{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:37:05Z","timestamp":1760060225204,"version":"build-2065373602"},"reference-count":18,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2025,8,15]],"date-time":"2025-08-15T00:00:00Z","timestamp":1755216000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004733","name":"University of Macau","doi-asserted-by":"publisher","award":["SRG2023-00062-ICI","MYRG-GRG2024-00071-IC"],"award-info":[{"award-number":["SRG2023-00062-ICI","MYRG-GRG2024-00071-IC"]}],"id":[{"id":"10.13039\/501100004733","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Informatics"],"abstract":"<jats:p>Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI\u2013human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations\u2014where LLMs achieved approximately 25% accuracy compared to just 1% for humans\u2014suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system.<\/jats:p>","DOI":"10.3390\/informatics12030083","type":"journal-article","created":{"date-parts":[[2025,8,15]],"date-time":"2025-08-15T16:09:55Z","timestamp":1755274195000},"page":"83","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies"],"prefix":"10.3390","volume":"12","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9688-6867","authenticated-orcid":false,"given":"Yifan","family":"Zhang","sequence":"first","affiliation":[{"name":"Independent Researcher, 49100 Angers, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6613-2300","authenticated-orcid":false,"given":"Kuzma","family":"Strelnikov","sequence":"additional","affiliation":[{"name":"Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR 999078, China"},{"name":"Department of Public Health and Medicinal Administration, Faculty of Health Sciences, University of Macau, Macao SAR 999078, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,8,15]]},"reference":[{"key":"ref_1","unstructured":"d\u2019Arcais, G.B.F., and Jarvella, R.J. (1983). 
The Process of Language Understanding, John Wiley & Sons Ltd."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1162\/002438903763255922","article-title":"Linear Order and Constituency","volume":"34","author":"Phillips","year":"2003","journal-title":"Linguist. Inq."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"430","DOI":"10.1038\/s41562-022-01516-2","article-title":"Evidence of a Predictive Coding Hierarchy in the Human Brain Listening to Speech","volume":"7","author":"Caucheteux","year":"2023","journal-title":"Nat. Hum. Behav."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"443","DOI":"10.1177\/0963721418794491","article-title":"Integration and Prediction in Language Processing: A Synthesis of Old and New","volume":"27","author":"Ferreira","year":"2018","journal-title":"Curr. Dir. Psychol. Sci."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1032","DOI":"10.1016\/j.tics.2023.08.003","article-title":"Prediction during Language Comprehension: What Is Next?","volume":"27","author":"Ryskin","year":"2023","journal-title":"Trends Cogn. Sci."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.jneuroling.2007.06.001","article-title":"Activation-Verification in Continuous Speech Processing. Interaction of Cognitive Strategies as a Possible Theoretical Approach","volume":"21","author":"Strelnikov","year":"2008","journal-title":"J. Neurolinguistics"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"26839","DOI":"10.1109\/ACCESS.2024.3365742","article-title":"A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges","volume":"12","author":"Raiaan","year":"2023","journal-title":"IEEE Access"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"54608","DOI":"10.1109\/ACCESS.2024.3389497","article-title":"GPT (Generative Pre-Trained Transformer)\u2014A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions","volume":"12","author":"Yenduri","year":"2024","journal-title":"IEEE Access"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"104174","DOI":"10.1016\/j.jml.2020.104174","article-title":"Word Predictability Effects Are Linear, Not Logarithmic: Implications for Probabilistic Models of Sentence Comprehension","volume":"116","author":"Brothers","year":"2021","journal-title":"J. Mem. Lang."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"794","DOI":"10.1057\/s41599-025-04912-x","article-title":"Does GPT-4 Surpass Human Performance in Linguistic Pragmatics?","volume":"12","year":"2025","journal-title":"Humanit. Soc. Sci. Commun."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Boji\u0107, L., Zagovora, O., Zelenkauskaite, A., Vukovi\u0107, V., \u010cabarkapa, M., Veseljevi\u0107 Jerkovi\u0107, S., and Jovan\u010devi\u0107, A. (2025). Comparing Large Language Models and Human Annotators in Latent Content Analysis of Sentiment, Political Leaning, Emotional Intensity and Sarcasm. Sci. Rep., 15.","DOI":"10.1038\/s41598-025-96508-3"},{"key":"ref_12","unstructured":"Luo, X., Ramscar, M., and Love, B.C. (2024). Beyond Human-Like Processing: Large Language Models Perform Equivalently on Forward and Backward Scientific Text. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Xu, Q., Peng, Y., Nastase, S.A., Chodorow, M., Wu, M., and Li, P. (2025). 
Large Language Models without Grounding Recover Non-Sensorimotor but Not Sensorimotor Features of Human Concepts. Nat. Hum. Behav.","DOI":"10.1038\/s41562-025-02203-8"},{"key":"ref_14","unstructured":"Chiruzzo, L., Ritter, A., and Wang, L. (May, January 29). Text Annotation via Inductive Coding: Comparing Human Experts to LLMs in Qualitative Data Analysis. Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, NM, USA."},{"key":"ref_15","unstructured":"Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., and Gao, J. (2025, August 05). Large Language Models: A Survey 2024. Available online: https:\/\/arxiv.org\/abs\/2402.06196."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Rambelli, G., Chersoni, E., Testa, D., Blache, P., and Lenci, A. (2024). Neural Generative Models and the Parallel Architecture of Language: A Critical Review and Outlook. Top. Cogn. Sci.","DOI":"10.1111\/tops.12733"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Tomasello, M. (2005). Constructing a Language: A Usage-Based Theory of Language Acquisition, Harvard University Press. [Revised ed.].","DOI":"10.2307\/j.ctv26070v8"},{"key":"ref_18","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention Is All You Need. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA."}],"container-title":["Informatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2227-9709\/12\/3\/83\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:28:38Z","timestamp":1760034518000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2227-9709\/12\/3\/83"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,15]]},"references-count":18,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2025,9]]}},"alternative-id":["informatics12030083"],"URL":"https:\/\/doi.org\/10.3390\/informatics12030083","relation":{},"ISSN":["2227-9709"],"issn-type":[{"type":"electronic","value":"2227-9709"}],"subject":[],"published":{"date-parts":[[2025,8,15]]}}}
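The record above can be re-fetched and unpacked from the public Crossref REST API (the standard api.crossref.org/works/{doi} route). A minimal sketch follows; it uses only field names visible in the JSON record itself and assumes network access to the Crossref endpoint.

# Minimal sketch: retrieve this work record from the public Crossref REST
# API and print the fields present in the JSON above. Uses only the Python
# standard library; the api.crossref.org endpoint is assumed reachable.
import json
import urllib.request

DOI = "10.3390/informatics12030083"
with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    work = json.load(resp)["message"]

print(work["title"][0])                                    # article title
print(work["container-title"][0], work["volume"], work["page"])
for author in work["author"]:
    print(" ", author["given"], author["family"])
print("cited references:", work["references-count"])
for ref in work["reference"]:
    # some entries carry a DOI, others only an unstructured citation string
    print(" ", ref["key"], ref.get("DOI", ref.get("unstructured", ""))[:70])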
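The abstract describes scoring GPT-3.5-turbo and GPT-4 against human norms on a phrase-completion task binned by context level (high, medium, low). The sketch below shows one way such a comparison could be scored; the items, prompt wording, model name, and openai client usage are illustrative assumptions, not the authors' published materials.

# Hypothetical sketch of a phrase-completion comparison like the one the
# abstract describes: prompt a chat model for a one-word completion and
# compute exact-match accuracy per context bin. Toy items only; not the
# authors' stimuli or procedure.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# (phrase stem, human-normed target word, context level) -- toy examples
ITEMS = [
    ("She locked the door and hid the", "key", "high"),
    ("He glanced quickly at the", "clock", "medium"),
    ("They talked for a while about the", "weather", "low"),
]

hits, totals = defaultdict(int), defaultdict(int)
for stem, target, level in ITEMS:
    reply = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption for illustration
        messages=[{"role": "user",
                   "content": f"Complete with a single word: {stem}"}],
        max_tokens=3,
        temperature=0,
    )
    word = reply.choices[0].message.content.strip(" .,").lower()
    hits[level] += int(word == target)
    totals[level] += 1

for level in ("high", "medium", "low"):
    print(level, hits[level] / totals[level])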