{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T21:30:00Z","timestamp":1775597400217,"version":"3.50.1"},"reference-count":129,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,7,18]],"date-time":"2025-07-18T00:00:00Z","timestamp":1752796800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,18]],"date-time":"2025-07-18T00:00:00Z","timestamp":1752796800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Norwegian Business School"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Ethics Inf Technol"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>In this article, we examine a peculiar issue apropos large language models (LLMs) and generative AI more broadly: the frequently overlooked phenomenon of output homogenization. It describes the tendency of chatbots to structure their outputs in a highly recognizable manner, which often amounts to the aggregation of verbal, visual, and narrative clich\u00e9s, trivialities, truisms, predictable argumentations, and similar. We argue that the most appropriate conceptual lens through which said phenomenon can be framed is that of Frankfurtian bullshit. In this respect, existing attempts at applying the BS framework to LLMs are insufficient, as those are chiefly presented in opposition to the so-called algorithmic hallucinations. Here, we contend that further conceptual rupture from the original metaphor of Frankfurt (1986) is needed, distinguishing between the what-BS, which manifests in falsehoods and factual inconsistencies of LLMs, and the how-BS, which reifies in the dynamics of output homogenization. 
We also discuss how issues of algorithmic biases and model collapse can be framed as critical instances of the how-BS. The homogenization problem, then, is more significant than it initially appears, potentially exhibiting a powerful structuring effect on individuals, organizations, institutions, and society at large. We discuss this in the concluding section of the article.<\/jats:p>","DOI":"10.1007\/s10676-025-09845-2","type":"journal-article","created":{"date-parts":[[2025,7,18]],"date-time":"2025-07-18T14:00:58Z","timestamp":1752847258000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["ChatGPT is incredible (at being average)"],"prefix":"10.1007","volume":"27","author":[{"given":"Ihor","family":"Rudko","sequence":"first","affiliation":[]},{"given":"Aysan","family":"Bashirpour Bonab","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,18]]},"reference":[{"issue":"7","key":"9845_CR1","doi-asserted-by":"publisher","first-page":"pgae245","DOI":"10.1093\/pnasnexus\/pgae245","volume":"3","author":"S Abdurahman","year":"2024","unstructured":"Abdurahman, S., Atari, M., Karimi-Malekabadi, F., Xue, M. J., Trager, J., Park, P. S., Golazizian, P., Omrani, A., & Dehghani, M. (2024). Perils and opportunities in using large Language models in psychological research. PNAS Nexus, 3(7), pgae245. https:\/\/doi.org\/10.1093\/pnasnexus\/pgae245","journal-title":"PNAS Nexus"},{"key":"9845_CR2","unstructured":"Accomplished_Tank184 (2024, November 22). Anyone know why it cannot do this? [Reddit Post]. R\/OpenAI. www.reddit.com\/r\/OpenAI\/comments\/1gxgns9\/anyone_know_why_it_cannot_do_this\/"},{"key":"9845_CR3","doi-asserted-by":"publisher","unstructured":"Agarwal, D., Naaman, M., & Vashistha, A. (2024). AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances (No. arXiv:2409.11360). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2409.11360","DOI":"10.48550\/arXiv.2409.11360"},{"key":"9845_CR4","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.2746078","author":"I Ajunwa","year":"2016","unstructured":"Ajunwa, I. (2016). The paradox of automation as Anti-Bias intervention (SSRN scholarly paper 2746078). Social Science Research Network. https:\/\/doi.org\/10.2139\/ssrn.2746078","journal-title":"Social Science Research Network"},{"key":"9845_CR5","doi-asserted-by":"publisher","unstructured":"AlAfnan, M. A., & MohdZuki, S. F. (2023). Do artificial intelligence chatbots have a writing style? An investigation into the stylistic features of ChatGPT-4. Journal of Artificial Intelligence and Technology, 3(3). https:\/\/doi.org\/10.37965\/jait.2023.0267","DOI":"10.37965\/jait.2023.0267"},{"issue":"1","key":"9845_CR6","doi-asserted-by":"publisher","first-page":"138","DOI":"10.1186\/s40537-024-00986-7","volume":"11","author":"AJ Alvero","year":"2024","unstructured":"Alvero, A. J., Lee, J., Regla-Vargas, A., Kizilcec, R. F., Joachims, T., & Antonio, A. L. (2024). Large Language models, social demography, and hegemony: Comparing authorship in human and synthetic text. Journal of Big Data, 11(1), 138. https:\/\/doi.org\/10.1186\/s40537-024-00986-7","journal-title":"Journal of Big Data"},{"key":"9845_CR7","doi-asserted-by":"publisher","unstructured":"Anderson, B. R., Shah, J. H., & Kreminski, M. (2024). Homogenization Effects of Large Language Models on Human Creative Ideation. Proceedings of the 16th Conference on Creativity & Cognition, 413\u2013425. https:\/\/doi.org\/10.1145\/3635636.3656204","DOI":"10.1145\/3635636.3656204"},{"key":"9845_CR8","doi-asserted-by":"publisher","unstructured":"Attanasio, G., Nozza, D., Hovy, D., & Baralis, E. (2022). Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists (No. arXiv:2203.09192). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2203.09192","DOI":"10.48550\/arXiv.2203.09192"},{"key":"9845_CR9","doi-asserted-by":"publisher","unstructured":"Bai, Z., Wang, P., Xiao, T., He, T., Han, Z., Zhang, Z., & Shou, M. Z. (2024). Hallucination of Multimodal Large Language Models: A Survey (No. arXiv:2404.18930). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2404.18930","DOI":"10.48550\/arXiv.2404.18930"},{"key":"9845_CR10","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2407.10735","author":"XE Barandiaran","year":"2024","unstructured":"Barandiaran, X. E., & Almendros, L. S. (2024). Transforming agency. On the Mode of Existence of Large Language Models. https:\/\/doi.org\/10.48550\/arXiv.2407.10735.","journal-title":"On the Mode of Existence of Large Language Models"},{"key":"9845_CR12","unstructured":"Bauman, Z. (2000). Modernity and the Holocaust. Cornell University Press."},{"key":"9845_CR13","unstructured":"Bergstrom, C. T., & Ogbunu, C. B. (2023, April 6). ChatGPT Isn\u2019t \u2018Hallucinating.\u2019 It\u2019s Bullshitting. Undark Magazine. https:\/\/undark.org\/2023\/04\/06\/chatgpt-isnt-hallucinating-its-bullshitting\/"},{"key":"9845_CR14","doi-asserted-by":"publisher","first-page":"3663","DOI":"10.48550\/arXiv.2211.13972","volume":"35","author":"R Bommasani","year":"2022","unstructured":"Bommasani, R., Creel, K. A., Kumar, A., Jurafsky, D., & Liang, P. S. (2022). Picking on the same person: Does algorithmic monoculture lead to outcome homogenization? Advances in Neural Information Processing Systems, 35, 3663\u20133678. https:\/\/doi.org\/10.48550\/arXiv.2211.13972","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"3","key":"9845_CR15","doi-asserted-by":"publisher","first-page":"1565","DOI":"10.1007\/s00405-023-08337-7","volume":"281","author":"G Briganti","year":"2024","unstructured":"Briganti, G. (2024). How ChatGPT works: A mini review. European Archives of Oto-Rhino-Laryngology, 281(3), 1565\u20131569. 
https:\/\/doi.org\/10.1007\/s00405-023-08337-7","journal-title":"European Archives of Oto-Rhino-Laryngology"},{"key":"9845_CR16","doi-asserted-by":"publisher","unstructured":"Castro, F., Gao, J., & Martin, S. (2023). Human-AI Interactions and Societal Pitfalls (No. arXiv:2309.10448). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2309.10448","DOI":"10.48550\/arXiv.2309.10448"},{"key":"9845_CR17","doi-asserted-by":"publisher","unstructured":"Chen, Y., Fu, Q., Yuan, Y., Wen, Z., Fan, G., Liu, D., Zhang, D., Li, Z., & Xiao, Y. (2023). Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 245\u2013255. https:\/\/doi.org\/10.1145\/3583780.3614905","DOI":"10.1145\/3583780.3614905"},{"key":"9845_CR18","doi-asserted-by":"publisher","unstructured":"Cheng, S., Tsai, S., Bai, Y., Ko, C., Hsu, C., Yang, F., Tsai, C., Tu, Y., Yang, S., Tseng, P., Hsu, T., Liang, C., & Su, K. (2023). Comparisons of quality, correctness, and similarity between ChatGPT-Generated and Human-Written abstracts for basic research: Cross-Sectional study. Journal of Medical Internet Research, 25. https:\/\/doi.org\/10.2196\/51229","DOI":"10.2196\/51229"},{"key":"9845_CR19","doi-asserted-by":"crossref","unstructured":"Creel, K., & Hellman, D. (2021). The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems (SSRN Scholarly Paper No. 3786377). Social Science Research Network. https:\/\/papers.ssrn.com\/abstract=3786377","DOI":"10.1145\/3442188.3445942"},{"issue":"1","key":"9845_CR20","doi-asserted-by":"publisher","first-page":"e53559","DOI":"10.2196\/53559","volume":"11","author":"J Davis","year":"2024","unstructured":"Davis, J., Bulck, L. V., Durieux, B. N., & Lindvall, C. (2024). The temperature feature of ChatGPT: Modifying creativity for clinical research. JMIR Human Factors, 11(1), e53559. 
https:\/\/doi.org\/10.2196\/53559","journal-title":"JMIR Human Factors"},{"key":"9845_CR21","doi-asserted-by":"publisher","unstructured":"Dohmatob, E., Feng, Y., Subramonian, A., & Kempe, J. (2024). Strong Model Collapse (No. arXiv:2410.04840). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2410.04840","DOI":"10.48550\/arXiv.2410.04840"},{"key":"9845_CR22","doi-asserted-by":"publisher","unstructured":"Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28). https:\/\/doi.org\/10.1126\/sciadv.adn5290. Scopus.","DOI":"10.1126\/sciadv.adn5290"},{"key":"9845_CR23","unstructured":"Edwards, K. (2023, July 5). Why Does All AI Art Look Like That? Medium. https:\/\/medium.com\/@keithkisser\/why-does-all-ai-art-look-like-that-f74e2a9e1c87"},{"key":"9845_CR24","doi-asserted-by":"crossref","unstructured":"Endacott, C. G., & Leonardi, P. M. (2024). Chapter 19: Artificial intelligence as a mechanism of algorithmic isomorphism. https:\/\/www.elgaronline.com\/edcollchap\/book\/9781803926216\/book-part-9781803926216-29.xml","DOI":"10.4337\/9781803926216.00029"},{"issue":"4","key":"9845_CR25","doi-asserted-by":"publisher","first-page":"67","DOI":"10.1007\/s10676-024-09802-5","volume":"26","author":"SA Fisher","year":"2024","unstructured":"Fisher, S. A. (2024). Large Language models and their big bullshit potential. Ethics and Information Technology, 26(4), 67. https:\/\/doi.org\/10.1007\/s10676-024-09802-5","journal-title":"Ethics and Information Technology"},{"key":"9845_CR26","doi-asserted-by":"publisher","unstructured":"Fishman, N., & Hancox-Li, L. (2022). Should attention be all we need? The epistemic and ethical implications of unification in machine learning. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1516\u20131527. 
https:\/\/doi.org\/10.1145\/3531146.3533206","DOI":"10.1145\/3531146.3533206"},{"key":"9845_CR27","unstructured":"Frankfurt, H. (1986). On bullshit. Raritan, 6(2), 81\u2013100."},{"key":"9845_CR28","doi-asserted-by":"crossref","unstructured":"Frankfurt, H. (2005). On bullshit. Princeton University Press.","DOI":"10.1515\/9781400826537"},{"key":"9845_CR29","unstructured":"Galleta, C. (2024, July 30). Big Techs Are Lying About LLMs Benchmarks\u2026 Again. Medium. https:\/\/medium.com\/@thcookieh\/big-techs-are-lying-about-llms-benchmarks-again-ef4e041d6396"},{"key":"9845_CR30","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2502.03688","author":"T Gao","year":"2025","unstructured":"Gao, T., Jin, J., Ke, Z. T., & Moryoussef, G. (2025). A comparison of DeepSeek and other LLMs (No. arXiv:2502.03688). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2502.03688.","journal-title":"ArXiv"},{"key":"9845_CR31","doi-asserted-by":"publisher","unstructured":"Geng, M., & Trotta, R. (2024). Is ChatGPT Transforming Academics\u2019 Writing Style? (No. arXiv:2404.08627). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2404.08627","DOI":"10.48550\/arXiv.2404.08627"},{"key":"9845_CR32","doi-asserted-by":"publisher","unstructured":"Georgiou, G. P. (2024). Differentiating between human-written and AI-generated texts using linguistic features automatically extracted from an online computational tool (No. arXiv:2407.03646). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2407.03646","DOI":"10.48550\/arXiv.2407.03646"},{"key":"9845_CR33","doi-asserted-by":"publisher","unstructured":"Gerstgrasser, M., Schaeffer, R., Dey, A., Rafailov, R., Sleight, H., Hughes, J., Korbak, T., Agrawal, R., Pai, D., Gromov, A., Roberts, D. A., Yang, D., Donoho, D. L., & Koyejo, S. (2024). Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data (No. arXiv:2404.01413). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2404.01413","DOI":"10.48550\/arXiv.2404.01413"},{"issue":"1150","key":"9845_CR34","doi-asserted-by":"publisher","first-page":"20230023","DOI":"10.1259\/bjr.20230023","volume":"96","author":"JW Gichoya","year":"2023","unstructured":"Gichoya, J. W., Thomas, K., Celi, L. A., Safdar, N., Banerjee, I., Banja, J. D., Seyyed-Kalantari, L., Trivedi, H., & Purkayastha, S. (2023). AI pitfalls and what not to do: Mitigating bias in AI. British Journal of Radiology, 96(1150), 20230023. https:\/\/doi.org\/10.1259\/bjr.20230023","journal-title":"British Journal of Radiology"},{"key":"9845_CR35","doi-asserted-by":"publisher","unstructured":"Gorrieri, L. (2024). Is ChatGPT full of bullshit? Journal of Ethics and Emerging Technologies, 34(1). https:\/\/doi.org\/10.55613\/jeet.v34i1.149","DOI":"10.55613\/jeet.v34i1.149"},{"issue":"3","key":"9845_CR36","doi-asserted-by":"publisher","first-page":"226","DOI":"10.1016\/j.mcpdig.2023.05.004","volume":"1","author":"J Gravel","year":"2023","unstructured":"Gravel, J., D\u2019Amours-Gravel, M., & Osmanlliu, E. (2023). Learning to fake it: Limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clinic Proceedings: Digital Health, 1(3), 226\u2013234. https:\/\/doi.org\/10.1016\/j.mcpdig.2023.05.004","journal-title":"Mayo Clinic Proceedings: Digital Health"},{"issue":"3","key":"9845_CR37","doi-asserted-by":"publisher","first-page":"232","DOI":"10.1093\/ppmgov\/gvac008","volume":"5","author":"S Grimmelikhuijsen","year":"2022","unstructured":"Grimmelikhuijsen, S., & Meijer, A. (2022). Legitimacy of algorithmic Decision-Making: Six threats and the need for a calibrated institutional response. Perspectives on Public Management and Governance, 5(3), 232\u2013242. 
https:\/\/doi.org\/10.1093\/ppmgov\/gvac008","journal-title":"Perspectives on Public Management and Governance"},{"key":"9845_CR38","doi-asserted-by":"publisher","unstructured":"Gritsai, G., Voznyuk, A., Grabovoy, A., & Chekhovich, Y. (2025). Are AI Detectors Good Enough? A Survey on Quality of Datasets With Machine-Generated Texts (No. arXiv:2410.14677). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2410.14677","DOI":"10.48550\/arXiv.2410.14677"},{"issue":"2","key":"9845_CR39","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1007\/s10676-025-09828-3","volume":"27","author":"D Gunkel","year":"2025","unstructured":"Gunkel, D., & Coghlan, S. (2025). Cut the crap: A critical response to ChatGPT is bullshit. Ethics and Information Technology, 27(2), 23. https:\/\/doi.org\/10.1007\/s10676-025-09828-3","journal-title":"Ethics and Information Technology"},{"key":"9845_CR40","doi-asserted-by":"publisher","unstructured":"Guo, Y., Shang, G., Vazirgiannis, M., & Clavel, C. (2024). The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text (No. arXiv:2311.09807). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2311.09807","DOI":"10.48550\/arXiv.2311.09807"},{"issue":"5","key":"9845_CR41","doi-asserted-by":"publisher","first-page":"471","DOI":"10.1016\/j.bushor.2024.03.001","volume":"67","author":"TR Hannigan","year":"2024","unstructured":"Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons, 67(5), 471\u2013486. https:\/\/doi.org\/10.1016\/j.bushor.2024.03.001","journal-title":"Business Horizons"},{"key":"9845_CR42","doi-asserted-by":"publisher","DOI":"10.1007\/s10506-024-09422-w","author":"J Harasta","year":"2024","unstructured":"Harasta, J., Novotn\u00e1, T., & Savelka, J. (2024). It cannot be right if it was written by AI: On lawyers\u2019 preferences of documents perceived as authored by an LLM vs a human. Artificial Intelligence and Law. 
https:\/\/doi.org\/10.1007\/s10506-024-09422-w","journal-title":"Artificial Intelligence and Law"},{"key":"9845_CR130","doi-asserted-by":"publisher","unstructured":"Hawkins, W., & Mittelstadt, B. (2023). The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 261\u2013270. https:\/\/doi.org\/10.1145\/3593013.3593995","DOI":"10.1145\/3593013.3593995"},{"issue":"2","key":"9845_CR43","doi-asserted-by":"publisher","first-page":"118","DOI":"10.1177\/0735275117709046","volume":"35","author":"K Healy","year":"2017","unstructured":"Healy, K. (2017). Fuck nuance. Sociological Theory, 35(2), 118\u2013127. https:\/\/doi.org\/10.1177\/0735275117709046","journal-title":"Sociological Theory"},{"issue":"3","key":"9845_CR44","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1007\/s10676-024-09777-3","volume":"26","author":"R Heersmink","year":"2024","unstructured":"Heersmink, R., de Rooij, B., Clavel V\u00e1zquez, M. J., & Colombo, M. (2024). A phenomenology and epistemology of large Language models: Transparency, trust, and trustworthiness. Ethics and Information Technology, 26(3), 41. https:\/\/doi.org\/10.1007\/s10676-024-09777-3","journal-title":"Ethics and Information Technology"},{"issue":"1","key":"9845_CR45","doi-asserted-by":"publisher","first-page":"18617","DOI":"10.1038\/s41598-023-45644-9","volume":"13","author":"S Herbold","year":"2023","unstructured":"Herbold, S., Hautli-Janisz, A., Heuer, U., Kikteva, Z., & Trautsch, A. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports, 13(1), 18617. https:\/\/doi.org\/10.1038\/s41598-023-45644-9","journal-title":"Scientific Reports"},{"issue":"2","key":"9845_CR46","doi-asserted-by":"publisher","first-page":"38","DOI":"10.1007\/s10676-024-09775-5","volume":"26","author":"MT Hicks","year":"2024","unstructured":"Hicks, M. 
T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https:\/\/doi.org\/10.1007\/s10676-024-09775-5","journal-title":"Ethics and Information Technology"},{"key":"9845_CR47","unstructured":"Hoel, E. (2024, March 29). A.I.-Generated Garbage Is Polluting Our Culture. The New York Times. https:\/\/www.nytimes.com\/2024\/03\/29\/opinion\/ai-internet-x-youtube.html"},{"key":"9845_CR48","doi-asserted-by":"publisher","unstructured":"Jain, S., Suriyakumar, V., Creel, K., & Wilson, A. (2024). Algorithmic pluralism: A structural approach to equal opportunity. The 2024 ACM Conference on Fairness Accountability and Transparency, 197\u2013206. https:\/\/doi.org\/10.1145\/3630106.3658899","DOI":"10.1145\/3630106.3658899"},{"key":"9845_CR49","unstructured":"Jelinek, E. (1995). Sturm und Zwang: Schreiben als Geschlechterkampf. Hamburg: Ingrid Klein."},{"issue":"12","key":"9845_CR50","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3571730","volume":"55","author":"Z Ji","year":"2023","unstructured":"Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural Language generation. ACM Comput Surv, 55(12), 248:1\u2013248. https:\/\/doi.org\/10.1145\/3571730","journal-title":"ACM Comput Surv"},{"key":"9845_CR51","doi-asserted-by":"crossref","unstructured":"Jiang, C., Xu, H., Dong, M., Chen, J., Ye, W., Yan, M., Ye, Q., Zhang, J., Huang, F., & Zhang, S. (2024a). Hallucination Augmented Contrastive Learning for Multimodal Large Language Model. 27036\u201327046. https:\/\/openaccess.thecvf.com\/content\/CVPR2024\/html\/Jiang_Hallucination_Augmented_Contrastive_Learning_for_Multimodal_Large_Language_Model_CVPR_2024_paper.html","DOI":"10.1109\/CVPR52733.2024.02553"},{"key":"9845_CR52","doi-asserted-by":"publisher","unstructured":"Jiang, F. (Kevin), & Hyland, K. (2024b). Does ChatGPT Argue Like Students? Bundles in Argumentative Essays. 
Applied Linguistics, amae052. https:\/\/doi.org\/10.1093\/applin\/amae052","DOI":"10.1093\/applin\/amae052"},{"key":"9845_CR53","unstructured":"Judkis, M. (2024, June 30). The deluge of bonkers AI art is literally surreal. The Washington Post. https:\/\/www.washingtonpost.com\/style\/of-interest\/2024\/06\/30\/ai-art-facebook-slop-artificial-intelligence\/"},{"issue":"5","key":"9845_CR54","doi-asserted-by":"publisher","first-page":"1145","DOI":"10.1086\/227128","volume":"85","author":"S Kalberg","year":"1980","unstructured":"Kalberg, S. (1980). Max Weber\u2019s types of rationality: Cornerstones for the analysis of rationalization processes in history. American Journal of Sociology, 85(5), 1145\u20131179. https:\/\/doi.org\/10.1086\/227128","journal-title":"American Journal of Sociology"},{"key":"9845_CR55","unstructured":"Le Bronnec, F., Verine, A., Negrevergne, B., Chevaleyre, Y., & Allauzen, A. (2024). Exploring Precision and Recall to assess the quality and diversity of LLMs. In Ku L.-W., Martins A.F.T., & Srikumar V. (Eds.), Proc. Annu. Meet. Assoc. Comput Linguist. (Vol. 1, pp. 11418\u201311441). Association for Computational Linguistics (ACL); Scopus. https:\/\/www.scopus.com\/inward\/record.uri?eid=2-s2.0-85204425008&partnerID=40&md5=25cc1d8b81796132e978e3f9d334cdfa"},{"key":"9845_CR56","doi-asserted-by":"publisher","unstructured":"Lee, M. H. J. (2025). Examining the Robustness of Homogeneity Bias to Hyperparameter Adjustments in GPT-4 (No. arXiv:2501.02211). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2501.02211","DOI":"10.48550\/arXiv.2501.02211"},{"key":"9845_CR57","doi-asserted-by":"crossref","unstructured":"Lee, H. P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025, April 1). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems. 
https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers\/","DOI":"10.1145\/3706598.3713778"},{"key":"9845_CR58","doi-asserted-by":"publisher","unstructured":"Lee, M. H. J., & Jeon, S. (2025). Homogeneity Bias as Differential Sampling Uncertainty in Language Models (No. arXiv:2501.19337). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2501.19337","DOI":"10.48550\/arXiv.2501.19337"},{"key":"9845_CR59","doi-asserted-by":"publisher","unstructured":"Lee, M. H. J., & Lai, C. K. (2024). Probability of Differentiation Reveals Brittleness of Homogeneity Bias in GPT-4 (No. arXiv:2407.07329). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2407.07329","DOI":"10.48550\/arXiv.2407.07329"},{"key":"9845_CR60","doi-asserted-by":"publisher","unstructured":"Lee, M. H. J., Montgomery, J. M., & Lai, C. K. (2024). Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans. The 2024 ACM Conference on Fairness, Accountability, and Transparency, 1321\u20131340. https:\/\/doi.org\/10.1145\/3630106.3658975","DOI":"10.1145\/3630106.3658975"},{"key":"9845_CR61","doi-asserted-by":"publisher","unstructured":"Liu, H., Xue, W., Chen, Y., Chen, D., Zhao, X., Wang, K., Hou, L., Li, R., & Peng, W. (2024a). A Survey on Hallucination in Large Vision-Language Models (No. arXiv:2402.00253). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2402.00253","DOI":"10.48550\/arXiv.2402.00253"},{"key":"9845_CR62","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2401.06816","author":"Q Liu","year":"2024","unstructured":"Liu, Q., Zhou, Y., Huang, J., & Li, G. (2024b). When ChatGPT is gone: Creativity reverts and homogeneity persists (No. arXiv:2401.06816). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2401.06816.","journal-title":"ArXiv"},{"key":"9845_CR63","unstructured":"Mahdawi, A. (2025, January 8). 
AI-generated \u2018slop\u2019 is slowly killing the Internet, so why is nobody trying to stop it? The Guardian. https:\/\/www.theguardian.com\/global\/commentisfree\/2025\/jan\/08\/ai-generated-slop-slowly-killing-internet-nobody-trying-to-stop-it"},{"key":"9845_CR64","doi-asserted-by":"publisher","unstructured":"Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI Hallucinations: A Misnomer Worth Clarifying. 2024 IEEE Conference on Artificial Intelligence (CAI), 133\u2013138. https:\/\/doi.org\/10.1109\/CAI59869.2024.00033","DOI":"10.1109\/CAI59869.2024.00033"},{"key":"9845_CR65","doi-asserted-by":"publisher","unstructured":"Marco, G., Gonzalo, J., Castillo, R., & Girona, M. T. M. (2024). Pron vs Prompt: Can Large Language Models already Challenge a World-Class Fiction Author at Creative Text Writing? (No. arXiv:2407.01119). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2407.01119","DOI":"10.48550\/arXiv.2407.01119"},{"key":"9845_CR66","unstructured":"Marcus, G., & Davis, E. (2020, August 22). GPT-3, Bloviator: OpenAI\u2019s language generator has no idea what it\u2019s talking about. MIT Technology Review. https:\/\/www.technologyreview.com\/2020\/08\/22\/1007539\/gpt3-openai-language-generator-artificial-intelligence-ai-opinion\/"},{"key":"9845_CR67","doi-asserted-by":"publisher","DOI":"10.1007\/s43681-024-00461-2","author":"A Markelius","year":"2024","unstructured":"Markelius, A., Wright, C., Kuiper, J., Delille, N., & Kuo, Y. T. (2024). The mechanisms of AI hype and its planetary and social costs. AI and Ethics. https:\/\/doi.org\/10.1007\/s43681-024-00461-2","journal-title":"AI and Ethics"},{"key":"9845_CR68","unstructured":"McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill."},{"key":"9845_CR69","doi-asserted-by":"crossref","unstructured":"McQuillan, D. (2022). Resisting AI: An Anti-fascist approach to artificial intelligence. Policy Press.","DOI":"10.1332\/policypress\/9781529213492.001.0001"},{"key":"9845_CR70","unstructured":"McQuillan, D. 
(2023, February 10). ChatGPT: The world\u2019s largest bullshit machine. Transforming Society. https:\/\/www.transformingsociety.co.uk\/2023\/02\/10\/chatgpt-the-worlds-largest-bullshit-machine\/"},{"key":"9845_CR71","unstructured":"Milmo, D. (2023, June 23). Two US lawyers fined for submitting fake court citations from ChatGPT. The Guardian. https:\/\/www.theguardian.com\/technology\/2023\/jun\/23\/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt"},{"key":"9845_CR72","doi-asserted-by":"publisher","unstructured":"Moon, K., Green, A., & Kushlev, K. (2024). Homogenizing Effect of a Large Language Model (LLM) on Creative Diversity: An Empirical Comparison of Human and ChatGPT Writing. OSF. https:\/\/doi.org\/10.31234\/osf.io\/8p9wu","DOI":"10.31234\/osf.io\/8p9wu"},{"key":"9845_CR73","unstructured":"Mukhopadhyay, M. (2025, March 2). Why Large Language Models Would Not Draw a Full Glass of Wine? Medium. https:\/\/mayukhdifferent.medium.com\/why-large-language-models-would-not-draw-a-full-glass-of-wine-87827949d373"},{"key":"9845_CR74","doi-asserted-by":"publisher","unstructured":"Nguyen, M., Baker, A., Neo, C., Roush, A., Kirsch, A., & Shwartz-Ziv, R. (2024). Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs (No. arXiv:2407.01082). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2407.01082","DOI":"10.48550\/arXiv.2407.01082"},{"key":"9845_CR75","doi-asserted-by":"publisher","unstructured":"Okerlund, J., Klasky, E., Middha, A., Kim, S., Rosenfeld, H., Kleinman, M., & Parthasarathy, S. (2022). What\u2019s in the chatterbox? Large language models, why they matter, and what we should do about them. https:\/\/doi.org\/10.7302\/21898","DOI":"10.7302\/21898"},{"key":"9845_CR76","unstructured":"Oxford Word of the Year 2024. (n.d.). Oxford University Press. Retrieved February 10, 2025, 
from https:\/\/corp.oup.com\/word-of-the-year\/"},{"key":"9845_CR77","doi-asserted-by":"publisher","unstructured":"Padmakumar, V., & He, H. (2024). Does Writing with Language Models Reduce Content Diversity? (No. arXiv:2309.05196). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2309.05196","DOI":"10.48550\/arXiv.2309.05196"},{"key":"9845_CR78","doi-asserted-by":"publisher","unstructured":"Palmini, M. T. D. R., & Cetinic, E. (2024). Patterns of Creativity: How User Input Shapes AI-Generated Visual Diversity (No. arXiv:2410.06768). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2410.06768","DOI":"10.48550\/arXiv.2410.06768"},{"key":"9845_CR79","doi-asserted-by":"publisher","unstructured":"Peeperkorn, M., Kouwenhoven, T., Brown, D., & Jordanous, A. (2024). Is Temperature the Creativity Parameter of Large Language Models? (No. arXiv:2405.00492). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2405.00492","DOI":"10.48550\/arXiv.2405.00492"},{"issue":"2","key":"9845_CR80","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1007\/s10780-007-9019-y","volume":"38","author":"RJ Perla","year":"2007","unstructured":"Perla, R. J., & Carifio, J. (2007). Psychological, philosophical, and educational criticisms of Harry frankfurt\u2019s concept of and views about bullshit in human discourse, discussions, and exchanges. Interchange, 38(2), 119\u2013136. https:\/\/doi.org\/10.1007\/s10780-007-9019-y","journal-title":"Interchange"},{"key":"9845_CR81","doi-asserted-by":"publisher","unstructured":"Petiska, E. (2023). ChatGPT cites the most-cited articles and journals, relying solely on Google Scholar\u2019s citation counts. As a result, AI May amplify the Matthew effect in environmental science (No. arXiv:2304.06794). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2304.06794","DOI":"10.48550\/arXiv.2304.06794"},{"issue":"1","key":"9845_CR82","doi-asserted-by":"publisher","first-page":"26133","DOI":"10.1038\/s41598-024-76900-1","volume":"14","author":"B Porter","year":"2024","unstructured":"Porter, B., & Machery, E. (2024). AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably. Scientific Reports, 14(1), 26133. https:\/\/doi.org\/10.1038\/s41598-024-76900-1","journal-title":"Scientific Reports"},{"key":"9845_CR83","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-024-02097-6","author":"A Pragya","year":"2024","unstructured":"Pragya, A. (2024). Generative AI and epistemic diversity of its inputs and outputs: Call for further scrutiny. AI and Society Scopus. https:\/\/doi.org\/10.1007\/s00146-024-02097-6","journal-title":"AI and Society Scopus"},{"key":"9845_CR84","doi-asserted-by":"publisher","unstructured":"Ragot, M., Martin, N., & Cojean, S. (2020). AI-generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence? Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1\u201310. https:\/\/doi.org\/10.1145\/3334480.3382892","DOI":"10.1145\/3334480.3382892"},{"key":"9845_CR85","doi-asserted-by":"publisher","unstructured":"Rastogi, C., Zhang, Y., Wei, D., Varshney, K. R., Dhurandhar, A., & Tomsett, R. (2022). Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact., 6(CSCW1), 83:1\u201383:22. https:\/\/doi.org\/10.1145\/3512930","DOI":"10.1145\/3512930"},{"key":"9845_CR86","doi-asserted-by":"publisher","unstructured":"Reif, E., Kahng, M., & Petridis, S. (2023). Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models. Proc. - IEEE Vis. Conf. - Short Pap., VIS, 236\u2013240. Scopus. 
https:\/\/doi.org\/10.1109\/VIS54172.2023.00056","DOI":"10.1109\/VIS54172.2023.00056"},{"key":"9845_CR87","doi-asserted-by":"publisher","unstructured":"Reviriego, P., Conde, J., Merino-G\u00f3mez, E., Mart\u00ednez, G., & Hern\u00e1ndez, J. (2024). Playing with words: Comparing the vocabulary and lexical diversity of ChatGPT and humans. Machine Learning with Applications, 18. https:\/\/doi.org\/10.1016\/j.mlwa.2024.100602","DOI":"10.1016\/j.mlwa.2024.100602"},{"key":"9845_CR88","unstructured":"Richardson, J. (2025, March 30). How ChatGPT Fixed the Wine Glass Problem. Medium. https:\/\/medium.com\/@joe.richardson.iii\/how-chatgpt-fixed-the-wine-glass-problem-6e75c4851293"},{"key":"9845_CR89","unstructured":"Romero, A. (2023, July 12). ChatGPT: A Bullshit Tool For Bullshit Jobs. https:\/\/www.thealgorithmicbridge.com\/p\/chatgpt-a-bullshit-tool-for-bullshit"},{"key":"9845_CR90","doi-asserted-by":"publisher","unstructured":"Roselli, D., Matthews, J., & Talagala, N. (2019). Managing Bias in AI. Companion Proceedings of The 2019 World Wide Web Conference, 539\u2013544. https:\/\/doi.org\/10.1145\/3308560.3317590","DOI":"10.1145\/3308560.3317590"},{"key":"9845_CR91","unstructured":"Roth, E. (2023, December 29). Former Trump lawyer Michael Cohen accidentally cited fake court cases generated by AI. The Verge. https:\/\/www.theverge.com\/2023\/12\/29\/24019067\/michael-cohen-former-trump-lawyer-google-bard-ai"},{"key":"9845_CR92","doi-asserted-by":"publisher","unstructured":"Routray, S. K., Javali, A., Sharmila, K. P., Jha, M. K., Pappa, M., & Singh, M. (2023). Large Language Models (LLMs): Hypes and Realities. Int. Conf. Comput. Sci. Emerg. Technol., CSET. 2023 International Conference on Computer Science and Emerging Technologies, CSET 2023. Scopus. 
https:\/\/doi.org\/10.1109\/CSET58993.2023.10346621","DOI":"10.1109\/CSET58993.2023.10346621"},{"issue":"2","key":"9845_CR93","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1108\/JMH-09-2023-0097","volume":"31","author":"I Rudko","year":"2024","unstructured":"Rudko, I., Bashirpour Bonab, A., Fedele, M., & Formisano, A. V. (2024). New institutional theory and AI: Toward rethinking of artificial intelligence in organizations. Journal of Management History, 31(2), 261\u2013284. https:\/\/doi.org\/10.1108\/JMH-09-2023-0097","journal-title":"Journal of Management History"},{"key":"9845_CR94","doi-asserted-by":"publisher","unstructured":"Rueda-Arango, Y. D., Rojas-Velazquez, D., Gorelova, A. V., & Lopez-Rincon, A. (2024). Exploring Human Perception of AI-Generated Artworks. 2024 IEEE International Symposium on Technology and Society (ISTAS), 1\u20136. https:\/\/doi.org\/10.1109\/ISTAS61960.2024.10732054","DOI":"10.1109\/ISTAS61960.2024.10732054"},{"issue":"0","key":"9845_CR95","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1080\/20445911.2024.2436362","volume":"0","author":"MA Runco","year":"2025","unstructured":"Runco, M. A. (2025). The discovery and innovation of AI does not qualify as creativity. Journal of Cognitive Psychology, 0(0), 1\u201310. https:\/\/doi.org\/10.1080\/20445911.2024.2436362","journal-title":"Journal of Cognitive Psychology"},{"issue":"1","key":"9845_CR96","doi-asserted-by":"publisher","first-page":"180","DOI":"10.1186\/s13054-023-04473-y","volume":"27","author":"M Salvagno","year":"2023","unstructured":"Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Artificial intelligence hallucinations. Critical Care, 27(1), 180. https:\/\/doi.org\/10.1186\/s13054-023-04473-y","journal-title":"Critical Care"},{"key":"9845_CR97","doi-asserted-by":"publisher","unstructured":"Sang, L., Dobolyi, D., & Larsen, K. R. (2024). Assessing Public Sentiment on AI-Generated Art Across the United States Using Deep Learning. 
Academy of Management Proceedings, 2024(1), 19516. https:\/\/doi.org\/10.5465\/AMPROC.2024.19516abstract","DOI":"10.5465\/AMPROC.2024.19516abstract"},{"key":"9845_CR98","doi-asserted-by":"publisher","unstructured":"Sastre, A., Iglesias, A., Morato, J., & Sanchez-Cuadrado, S. (2024). Is ChatGPT Able to Generate Texts that Are Easy to Understand and Read? In \u00c1. Rocha, H. Adeli, G. Dzemyda, F. Moreira, & A. Poniszewska-Mara\u0144da (Eds.), Good Practices and New Perspectives in Information Systems and Technologies (pp. 138\u2013147). Springer Nature Switzerland. https:\/\/doi.org\/10.1007\/978-3-031-60221-4_14","DOI":"10.1007\/978-3-031-60221-4_14"},{"issue":"13s","key":"9845_CR99","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3588433","volume":"55","author":"N Shahbazi","year":"2023","unstructured":"Shahbazi, N., Lin, Y., Asudeh, A., & Jagadish, H. V. (2023). Representation Bias in data: A survey on identification and resolution techniques. ACM Comput Surv, 55(13s), 293:1\u2013293. https:\/\/doi.org\/10.1145\/3588433","journal-title":"ACM Comput Surv"},{"issue":"8022","key":"9845_CR100","doi-asserted-by":"publisher","first-page":"755","DOI":"10.1038\/s41586-024-07566-y","volume":"631","author":"I Shumailov","year":"2024","unstructured":"Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., & Gal, Y. (2024). AI models collapse when trained on recursively generated data. Nature, 631(8022), 755\u2013759. https:\/\/doi.org\/10.1038\/s41586-024-07566-y","journal-title":"Nature"},{"key":"9845_CR101","doi-asserted-by":"publisher","unstructured":"Shur-Ofry, M. (2023). Multiplicity as an AI Governance Principle (SSRN Scholarly Paper No. 4444354). Social Science Research Network. https:\/\/doi.org\/10.2139\/ssrn.4444354","DOI":"10.2139\/ssrn.4444354"},{"key":"9845_CR102","doi-asserted-by":"publisher","unstructured":"Shur-Ofry, M., Horowitz-Amsalem, B., Rahamim, A., & Belinkov, Y. (2024). 
Growing a Tail: Increasing Output Diversity in Large Language Models (No. arXiv:2411.02989). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2411.02989","DOI":"10.48550\/arXiv.2411.02989"},{"key":"9845_CR103","doi-asserted-by":"publisher","unstructured":"Slater, J., & Humphries, J. (2025). Another reason to call bullshit on AI hallucinations. AI & SOCIETY. https:\/\/doi.org\/10.1007\/s00146-025-02346-2","DOI":"10.1007\/s00146-025-02346-2"},{"issue":"11","key":"9845_CR104","doi-asserted-by":"publisher","first-page":"e0000388","DOI":"10.1371\/journal.pdig.0000388","volume":"2","author":"AL Smith","year":"2023","unstructured":"Smith, A. L., Greaves, F., & Panch, T. (2023). Hallucination or confabulation? Neuroanatomy as metaphor in large language models. PLOS Digital Health, 2(11), e0000388. https:\/\/doi.org\/10.1371\/journal.pdig.0000388","journal-title":"PLOS Digital Health"},{"key":"9845_CR105","doi-asserted-by":"publisher","unstructured":"Sourati, Z., Karimi-Malekabadi, F., Ozcan, M., McDaniel, C., Ziabari, A., Trager, J., Tak, A., Chen, M., Morstatter, F., & Dehghani, M. (2025). The Shrinking Landscape of Linguistic Diversity in the Age of Large Language Models (No. arXiv:2502.11266). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2502.11266","DOI":"10.48550\/arXiv.2502.11266"},{"key":"9845_CR106","doi-asserted-by":"publisher","unstructured":"Sui, P., Duede, E., Wu, S., & So, R. J. (2024). Confabulation: The Surprising Value of Large Language Model Hallucinations (No. arXiv:2406.04175). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2406.04175","DOI":"10.48550\/arXiv.2406.04175"},{"key":"9845_CR107","unstructured":"Taylor, J. (2025, February 9). Fake cases, judges\u2019 headaches and new limits: Australian courts grapple with lawyers using AI. The Guardian. 
https:\/\/www.theguardian.com\/law\/2025\/feb\/10\/fake-cases-judges-headaches-and-new-limits-australian-courts-grappling-with-lawyers-using-ai-ntwnfb"},{"key":"9845_CR108","doi-asserted-by":"publisher","unstructured":"Thompson, B., Dhaliwal, M. P., Frisch, P., Domhan, T., & Federico, M. (2024). A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism (No. arXiv:2401.05749). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2401.05749","DOI":"10.48550\/arXiv.2401.05749"},{"key":"9845_CR109","doi-asserted-by":"publisher","unstructured":"Thrampoulidis, C. (2024). Implicit Optimization Bias of Next-Token Prediction in Linear Models (No. arXiv:2402.18551). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2402.18551","DOI":"10.48550\/arXiv.2402.18551"},{"key":"9845_CR110","doi-asserted-by":"publisher","unstructured":"Tonmoy, S. M. T. I., Zaman, S. M. M., Jain, V., Rani, A., Rawte, V., Chadha, A., & Das, A. (2024). A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models (No. arXiv:2401.01313). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2401.01313","DOI":"10.48550\/arXiv.2401.01313"},{"key":"9845_CR111","first-page":"51178","volume":"36","author":"C Toups","year":"2023","unstructured":"Toups, C., Bommasani, R., Creel, K., Bana, S., Jurafsky, D., & Liang, P. S. (2023). Ecosystem-level analysis of deployed machine learning reveals homogeneous outcomes. Advances in Neural Information Processing Systems, 36, 51178\u201351201.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"9845_CR112","doi-asserted-by":"crossref","unstructured":"Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.","DOI":"10.1093\/oso\/9780197759066.001.0001"},{"key":"9845_CR113","doi-asserted-by":"publisher","DOI":"10.5465\/amr.2022.0041","author":"BS Vanneste","year":"2024","unstructured":"Vanneste, B. S., & Puranam, P. (2024). 
Artificial intelligence, trust, and perceptions of agency. Academy of Management Review. https:\/\/doi.org\/10.5465\/amr.2022.0041","journal-title":"Academy of Management Review"},{"issue":"8","key":"9845_CR114","doi-asserted-by":"publisher","first-page":"240197","DOI":"10.1098\/rsos.240197","volume":"11","author":"S Wachter","year":"2024","unstructured":"Wachter, S., Mittelstadt, B., & Russell, C. (2024). Do large language models have a legal duty to tell the truth? Royal Society Open Science, 11(8), 240197. https:\/\/doi.org\/10.1098\/rsos.240197","journal-title":"Royal Society Open Science"},{"key":"9845_CR115","doi-asserted-by":"publisher","unstructured":"Wang, A., & Russakovsky, O. (2023). Overwriting Pretrained Bias with Fine-tuning Data (No. arXiv:2303.06167). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2303.06167","DOI":"10.48550\/arXiv.2303.06167"},{"key":"9845_CR116","doi-asserted-by":"publisher","unstructured":"Wang, J., Zhou, Y., Xu, G., Shi, P., Zhao, C., Xu, H., Ye, Q., Yan, M., Zhang, J., Zhu, J., Sang, J., & Tang, H. (2023). Evaluation and Analysis of Hallucination in Large Vision-Language Models (No. arXiv:2308.15126). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2308.15126","DOI":"10.48550\/arXiv.2308.15126"},{"issue":"1","key":"9845_CR117","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1086\/731729","volume":"51","author":"A Wasielewski","year":"2024","unstructured":"Wasielewski, A. (2024). Unnatural images: On AI-Generated photographs. Critical Inquiry, 51(1), 1\u201329. https:\/\/doi.org\/10.1086\/731729","journal-title":"Critical Inquiry"},{"key":"9845_CR118","doi-asserted-by":"publisher","unstructured":"Wendler, C., Veselovsky, V., Monea, G., & West, R. (2024). Do Llamas Work in English? On the Latent Language of Multilingual Transformers. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 15366\u201315394). 
Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2024.acl-long.820","DOI":"10.18653\/v1\/2024.acl-long.820"},{"issue":"8022","key":"9845_CR119","doi-asserted-by":"publisher","first-page":"742","DOI":"10.1038\/d41586-024-02355-z","volume":"631","author":"E Wenger","year":"2024","unstructured":"Wenger, E. (2024). AI produces gibberish when trained on too much AI-generated data. Nature, 631(8022), 742\u2013743. https:\/\/doi.org\/10.1038\/d41586-024-02355-z","journal-title":"Nature"},{"key":"9845_CR120","doi-asserted-by":"publisher","unstructured":"Wolfe, R., & Caliskan, A. (2021). Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models (No. arXiv:2110.00672). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2110.00672","DOI":"10.48550\/arXiv.2110.00672"},{"key":"9845_CR121","doi-asserted-by":"publisher","unstructured":"Wu, H., & Flanagan, T. (2023). The limits of AI content detectors. Journal of Student Research, 12(3). https:\/\/doi.org\/10.47611\/jsrhs.v12i3.5064","DOI":"10.47611\/jsrhs.v12i3.5064"},{"key":"9845_CR122","doi-asserted-by":"publisher","unstructured":"Wu, F., Black, E., & Chandrasekaran, V. (2024). Generative Monoculture in Large Language Models (No. arXiv:2407.02209). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2407.02209","DOI":"10.48550\/arXiv.2407.02209"},{"key":"9845_CR123","doi-asserted-by":"publisher","unstructured":"Xu, Z., Jain, S., & Kankanhalli, M. (2024). Hallucination is Inevitable: An Innate Limitation of Large Language Models (No. arXiv:2401.11817). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2401.11817","DOI":"10.48550\/arXiv.2401.11817"},{"key":"9845_CR124","doi-asserted-by":"publisher","unstructured":"Ye, H., Liu, T., Zhang, A., Hua, W., & Jia, W. (2023). Cognitive Mirage: A Review of Hallucinations in Large Language Models (No. arXiv:2309.06794). arXiv. 
https:\/\/doi.org\/10.48550\/arXiv.2309.06794","DOI":"10.48550\/arXiv.2309.06794"},{"key":"9845_CR125","doi-asserted-by":"publisher","unstructured":"Zayed, A., Mordido, G., Shabanian, S., & Chandar, S. (2024). Should We Attend More or Less? Modulating Attention for Fairness (No. arXiv:2305.13088). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2305.13088","DOI":"10.48550\/arXiv.2305.13088"},{"key":"9845_CR126","doi-asserted-by":"publisher","unstructured":"Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., Huang, X., Zhao, E., Zhang, Y., Chen, Y., Wang, L., Luu, A. T., Bi, W., Shi, F., & Shi, S. (2023). Siren\u2019s Song in the AI Ocean: A Survey on Hallucination in Large Language Models (No. arXiv:2309.01219). arXiv. https:\/\/doi.org\/10.48550\/arXiv.2309.01219","DOI":"10.48550\/arXiv.2309.01219"},{"key":"9845_CR127","doi-asserted-by":"publisher","unstructured":"Zhang, S., Xu, J., & Alvero, A. J. (2024). Generative AI Meets Open-Ended Survey Responses: Participant Use of AI and Homogenization. OSF. https:\/\/doi.org\/10.31235\/osf.io\/4esdp","DOI":"10.31235\/osf.io\/4esdp"},{"issue":"6","key":"9845_CR128","first-page":"563","volume":"203","author":"S \u017di\u017eek","year":"2014","unstructured":"\u017di\u017eek, S. (2014). The poetic Torture-House of Language. Poetry, 203(6), 563\u2013566. https:\/\/www.jstor.org\/stable\/43591384","journal-title":"Poetry"},{"key":"9845_CR129","doi-asserted-by":"crossref","unstructured":"\u017di\u017eek, S. (2024). Christian atheism: How to be a real materialist. 
Bloomsbury Academic.","DOI":"10.5040\/9781350409347"}],"container-title":["Ethics and Information Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09845-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10676-025-09845-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10676-025-09845-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T05:45:48Z","timestamp":1758260748000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10676-025-09845-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,18]]},"references-count":129,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["9845"],"URL":"https:\/\/doi.org\/10.1007\/s10676-025-09845-2","relation":{},"ISSN":["1388-1957","1572-8439"],"issn-type":[{"value":"1388-1957","type":"print"},{"value":"1572-8439","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,18]]},"assertion":[{"value":"18 July 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"36"}}