{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T05:08:39Z","timestamp":1771045719139,"version":"3.50.1"},"reference-count":65,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,2,8]],"date-time":"2025-02-08T00:00:00Z","timestamp":1738972800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"funder":[{"name":"National Artificial Intelligence Research Resource (NAIRR) Pilot and the San Diego Supercomputer Center, University of California, San Diego"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Manage. Inf. Syst."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>More effort is being put into improving the capabilities of Large Language Models (LLMs) than into dealing with their implications. Current LLMs are able to generate high-quality texts seemingly indistinguishable from those written by human experts. While offering great potential, such breakthroughs also pose new challenges for safe and ethical uses of LLMs in education, science, and a multitude of other areas. However, the majority of current approaches to LLM text detection are either computationally expensive or require access to the LLMs\u2019 internal computations, both of which hinder their public accessibility. With such motivation, this article presents a novel metric learning paradigm for detection of LLM-generated texts that is able to balance computational costs, accessibility, and performance. Specifically, the detection is based on learning a similarity function between a given text and an equivalent example generated by LLMs that outputs high values for LLM-LLM text pairs and low values for LLM-human text pairs. 
In terms of architecture, the detection framework includes a pre-trained language model for the text embedding task and a newly designed deep metric model. The metric component can be trained on triplets or pairs of same-context instances to widen the distances between human and LLM texts while reducing those among LLM texts. For benchmarking, we develop five datasets totaling more than 95,000 contexts and triplets of responses, in which one response is from humans and two are from GPT-3.5 TURBO or GPT-4 TURBO. Experimental studies show that our best architectures maintain F1 scores between 0.87 and 0.95 across the tested corpora in multiple experimental settings. The metric framework also demands significantly less time in training and inference compared to RoBERTa, LLaMA 3, Mistral v0.3, and Ghostbuster, while retaining 90% to 150% of the best benchmark's performance.<\/jats:p>","DOI":"10.1145\/3704739","type":"journal-article","created":{"date-parts":[[2024,11,16]],"date-time":"2024-11-16T07:30:57Z","timestamp":1731742257000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["A Metric-Based Detection System for Large Language Model Texts"],"prefix":"10.1145","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0087-3448","authenticated-orcid":false,"given":"Linh","family":"Le","sequence":"first","affiliation":[{"name":"Information Technology, Kennesaw State University, Marietta, United States"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-7304-1358","authenticated-orcid":false,"given":"Dung","family":"Tran","sequence":"additional","affiliation":[{"name":"Channing Division of Network Medicine, Brigham and Women's Hospital, Boston, United States"}]}],"member":"320","published-online":{"date-parts":[[2025,2,8]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"Mart\u00edn Abadi Ashish Agarwal Paul Barham Eugene Brevdo Zhifeng Chen Craig Citro Greg S. 
Corrado Andy Davis Jeffrey Dean Matthieu Devin et\u00a0al. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Retrieved November 17 2024 from https:\/\/www.tensorflow.org\/. Software available from tensorflow.org"},{"key":"e_1_3_2_3_2","article-title":"Real or fake? Learning to discriminate machine from human generated text","author":"Bakhtin Anton","year":"2019","unstructured":"Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc\u2019Aurelio Ranzato, and Arthur Szlam. 2019. Real or fake? Learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351 (2019).","journal-title":"arXiv preprint arXiv:1906.03351"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.2008.11-07-647"},{"key":"e_1_3_2_5_2","article-title":"Signature verification using a \u201cSiamese\u201d time delay neural network","volume":"6","author":"Bromley Jane","year":"1993","unstructured":"Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1993. Signature verification using a \u201cSiamese\u201d time delay neural network. Advances in Neural Information Processing Systems 6 (1993), 733\u2013744.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_6_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et\u00a0al. 2020. Language models are few-shot learners. 
Advances in Neural Information Processing Systems 33 (2020), 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_7_2","article-title":"PaLM: Scaling language modeling with pathways","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et\u00a0al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022).","journal-title":"arXiv preprint arXiv:2204.02311"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN54540.2023.10191322"},{"key":"e_1_3_2_9_2","unstructured":"Prithiviraj Damodaran. 2021. Parrot: Paraphrase generation for NLU. Retrieved November 21 2024 from https:\/\/github.com\/PrithivirajDamodaran\/Parrot_Paraphraser"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0251415"},{"key":"e_1_3_2_11_2","article-title":"Hierarchical neural story generation","author":"Fan Angela","year":"2018","unstructured":"Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833 (2018).","journal-title":"arXiv preprint arXiv:1805.04833"},{"key":"e_1_3_2_12_2","unstructured":"Google. 2023. Google Colaboratory. Retrieved November 17 2024 from https:\/\/research.google.com\/colaboratory\/"},{"key":"e_1_3_2_13_2","article-title":"Generating sequences with recurrent neural networks","author":"Graves Alex","year":"2013","unstructured":"Alex Graves. 2013. Generating sequences with recurrent neural networks. 
arXiv preprint arXiv:1308.0850 (2013).","journal-title":"arXiv preprint arXiv:1308.0850"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-020-2649-2"},{"key":"e_1_3_2_15_2","article-title":"The curious case of neural text degeneration","author":"Holtzman Ari","year":"2019","unstructured":"Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751 (2019).","journal-title":"arXiv preprint arXiv:1904.09751"},{"key":"e_1_3_2_16_2","unstructured":"HuggingFace. 2024. Llama 3 8B Instruct. Retrieved November 17 2024 from https:\/\/huggingface.co\/meta-llama\/Meta-Llama-3-8B-Instruct"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/MCSE.2007.55"},{"key":"e_1_3_2_18_2","article-title":"Automatic detection of generated text is easiest when humans are fooled","author":"Ippolito Daphne","year":"2019","unstructured":"Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. 2019. Automatic detection of generated text is easiest when humans are fooled. arXiv preprint arXiv:1911.00650 (2019).","journal-title":"arXiv preprint arXiv:1911.00650"},{"key":"e_1_3_2_19_2","article-title":"Crowdsourcing multiple choice science questions","author":"Welbl Johannes","year":"2017","unstructured":"Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. arXiv:1707.06209v1.","journal-title":"arXiv:1707.06209v1"},{"key":"e_1_3_2_20_2","first-page":"18661","article-title":"Supervised contrastive learning","volume":"33","author":"Khosla Prannay","year":"2020","unstructured":"Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. 
Advances in Neural Information Processing Systems 33 (2020), 18661\u201318673.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_21_2","article-title":"Adam: A method for stochastic optimization","author":"Kingma Diederik P.","year":"2014","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).","journal-title":"arXiv preprint arXiv:1412.6980"},{"key":"e_1_3_2_22_2","article-title":"A watermark for large language models","author":"Kirchenbauer John","year":"2023","unstructured":"John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. 2023. A watermark for large language models. arXiv preprint arXiv:2301.10226 (2023).","journal-title":"arXiv preprint arXiv:2301.10226"},{"key":"e_1_3_2_23_2","article-title":"On the reliability of watermarks for large language models","author":"Kirchenbauer John","year":"2023","unstructured":"John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, and Tom Goldstein. 2023. On the reliability of watermarks for large language models. arXiv preprint arXiv:2306.04634 (2023).","journal-title":"arXiv preprint arXiv:2306.04634"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00276"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature14539"},{"key":"e_1_3_2_26_2","article-title":"Origin tracing and detecting of LLMs","author":"Li Linyang","year":"2023","unstructured":"Linyang Li, Pengyu Wang, Ke Ren, Tianxiang Sun, and Xipeng Qiu. 2023. Origin tracing and detecting of LLMs. 
arXiv preprint arXiv:2304.14072 (2023).","journal-title":"arXiv preprint arXiv:2304.14072"},{"key":"e_1_3_2_27_2","article-title":"A private watermark for large language models","author":"Liu Aiwei","year":"2023","unstructured":"Aiwei Liu, Leyi Pan, Xuming Hu, Shu\u2019ang Li, Lijie Wen, Irwin King, and Philip S. Yu. 2023. A private watermark for large language models. arXiv preprint arXiv:2307.16230 (2023).","journal-title":"arXiv preprint arXiv:2307.16230"},{"key":"e_1_3_2_28_2","article-title":"Raidar: Generative AI Detection viA Rewriting","author":"Mao Chengzhi","year":"2024","unstructured":"Chengzhi Mao, Carl Vondrick, Hao Wang, and Junfeng Yang. 2024. Raidar: Generative AI Detection viA Rewriting. arXiv preprint arXiv:2401.12970 (2024).","journal-title":"arXiv preprint arXiv:2401.12970"},{"key":"e_1_3_2_29_2","unstructured":"Meta. 2024. Meta Llama 3. Retrieved November 17 2024 from https:\/\/llama.meta.com\/llama3\/"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-99-7947-9_12"},{"key":"e_1_3_2_31_2","unstructured":"MistralAI. 2024. Mistral AI Instruct v0.3. Retrieved November 17 2024 from https:\/\/huggingface.co\/mistralai\/Mistral-7B-v0.3"},{"key":"e_1_3_2_32_2","article-title":"DetectGPT: Zero-shot machine-generated text detection using probability curvature","author":"Mitchell Eric","year":"2023","unstructured":"Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. 2023. DetectGPT: Zero-shot machine-generated text detection using probability curvature. arXiv preprint arXiv:2301.11305 (2023).","journal-title":"arXiv preprint arXiv:2301.11305"},{"key":"e_1_3_2_33_2","unstructured":"OpenAI. 2022. Introducing ChatGPT. Retrieved November 17 2024 from https:\/\/openai.com\/blog\/chatgpt"},{"key":"e_1_3_2_34_2","unstructured":"OpenAI. 2023. GPT-4. Retrieved November 17 2024 from https:\/\/openai.com\/gpt-4"},{"key":"e_1_3_2_35_2","unstructured":"OpenAI. 2024. GPT Base. 
Retrieved November 17 2024 from https:\/\/platform.openai.com\/docs\/models\/gpt-base"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.3509134"},{"key":"e_1_3_2_37_2","article-title":"Beyond black box AI-generated plagiarism detection: From sentence to document level","author":"Quidwai Mujahid Ali","year":"2023","unstructured":"Mujahid Ali Quidwai, Chunhui Li, and Parijat Dube. 2023. Beyond black box AI-generated plagiarism detection: From sentence to document level. arXiv preprint arXiv:2306.08122 (2023).","journal-title":"arXiv preprint arXiv:2306.08122"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455856"},{"key":"e_1_3_2_39_2","article-title":"Squad: 100,000+ questions for machine comprehension of text","author":"Rajpurkar Pranav","year":"2016","unstructured":"Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016).","journal-title":"arXiv preprint arXiv:1606.05250"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_3_2_41_2","article-title":"Classification of human- and AI-generated texts for English, French, German, and Spanish","author":"Schaaff Kristina","year":"2023","unstructured":"Kristina Schaaff, Tim Schlippe, and Lorenz Mindner. 2023. Classification of human- and AI-generated texts for English, French, German, and Spanish. 
arXiv preprint arXiv:2312.04882 (2023).","journal-title":"arXiv preprint arXiv:2312.04882"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.14569\/IJACSA.2023.01410110"},{"key":"e_1_3_2_44_2","article-title":"Release strategies and the social impacts of language models","author":"Solaiman Irene","year":"2019","unstructured":"Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et\u00a0al. 2019. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203 (2019).","journal-title":"arXiv preprint arXiv:1908.09203"},{"key":"e_1_3_2_45_2","first-page":"16857","article-title":"MPNet: Masked and permuted pre-training for language understanding","volume":"33","author":"Song Kaitao","year":"2020","unstructured":"Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and permuted pre-training for language understanding. Advances in Neural Information Processing Systems 33 (2020), 16857\u201316867.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_46_2","article-title":"DetectLLM: Leveraging log rank information for zero-shot detection of machine-generated text","author":"Su Jinyan","year":"2023","unstructured":"Jinyan Su, Terry Yue Zhuo, Di Wang, and Preslav Nakov. 2023. DetectLLM: Leveraging log rank information for zero-shot detection of machine-generated text. arXiv preprint arXiv:2306.05540 (2023).","journal-title":"arXiv preprint arXiv:2306.05540"},{"key":"e_1_3_2_47_2","article-title":"Sequence to sequence learning with neural networks","volume":"27","author":"Sutskever Ilya","year":"2014","unstructured":"Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
Advances in Neural Information Processing Systems 27 (2014), 3104\u20133112.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1145\/3606274.3606279"},{"key":"e_1_3_2_49_2","article-title":"Llama: Open and efficient foundation language models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et\u00a0al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).","journal-title":"arXiv preprint arXiv:2302.13971"},{"key":"e_1_3_2_50_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017), 1\u201311.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_51_2","article-title":"Ghostbuster: Detecting text ghostwritten by large language models","author":"Verma Vivek","year":"2023","unstructured":"Vivek Verma, Eve Fleisig, Nicholas Tomlin, and Dan Klein. 2023. Ghostbuster: Detecting text ghostwritten by large language models. arXiv preprint arXiv:2305.15047 (2023).","journal-title":"arXiv preprint arXiv:2305.15047"},{"key":"e_1_3_2_52_2","article-title":"SeqXGPT: Sentence-level AI-generated text detection","author":"Wang Pengyu","year":"2023","unstructured":"Pengyu Wang, Linyang Li, Ke Ren, Botian Jiang, Dong Zhang, and Xipeng Qiu. 2023. SeqXGPT: Sentence-level AI-generated text detection. 
arXiv preprint arXiv:2310.08903 (2023).","journal-title":"arXiv preprint arXiv:2310.08903"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.21105\/joss.03021"},{"key":"e_1_3_2_54_2","article-title":"Consistency of a recurrent language model with respect to incomplete decoding","author":"Welleck Sean","year":"2020","unstructured":"Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. 2020. Consistency of a recurrent language model with respect to incomplete decoding. arXiv preprint arXiv:2002.02492 (2020).","journal-title":"arXiv preprint arXiv:2002.02492"},{"key":"e_1_3_2_55_2","article-title":"Neural text generation with unlikelihood training","author":"Welleck Sean","year":"2019","unstructured":"Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319 (2019).","journal-title":"arXiv preprint arXiv:1908.04319"},{"key":"e_1_3_2_56_2","unstructured":"Wikipedia. 2017. Category: Glossaries of Science. Retrieved November 17 2024 from https:\/\/en.wikipedia.org\/wiki\/Category:Glossaries_of_science"},{"key":"e_1_3_2_57_2","article-title":"Sequence-to-sequence learning as beam-search optimization","author":"Wiseman Sam","year":"2016","unstructured":"Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 (2016).","journal-title":"arXiv preprint arXiv:1606.02960"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-demos.6"},{"key":"e_1_3_2_59_2","article-title":"A survey on LLM-generated text detection: Necessity, methods, and future directions","author":"Wu Junchao","year":"2023","unstructured":"Junchao Wu, Shu Yang, Runzhe Zhan, Yulin Yuan, Derek F. Wong, and Lidia S. Chao. 2023. A survey on LLM-generated text detection: Necessity, methods, and future directions. 
arXiv preprint arXiv:2310.14724 (2023).","journal-title":"arXiv preprint arXiv:2310.14724"},{"key":"e_1_3_2_60_2","article-title":"LLMDet: A third party large language models generated text detection tool","author":"Wu Kangxi","year":"2023","unstructured":"Kangxi Wu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-Seng Chua. 2023. LLMDet: A third party large language models generated text detection tool. arXiv preprint arXiv:2305.15004 (2023).","journal-title":"arXiv preprint arXiv:2305.15004"},{"key":"e_1_3_2_61_2","unstructured":"Zhendong Wu and Hui Xiang. 2023. MFD: Multi-feature detection of LLM-generated text. Preprint."},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4419-9878-1_4"},{"key":"e_1_3_2_63_2","article-title":"Watermarking text generated by black-box language models","author":"Yang Xi","year":"2023","unstructured":"Xi Yang, Kejiang Chen, Weiming Zhang, Chang Liu, Yuang Qi, Jie Zhang, Han Fang, and Nenghai Yu. 2023. Watermarking text generated by black-box language models. arXiv preprint arXiv:2305.08883 (2023).","journal-title":"arXiv preprint arXiv:2305.08883"},{"key":"e_1_3_2_64_2","article-title":"DNA-GPT: Divergent n-gram analysis for training-free detection of GPT-generated text","author":"Yang Xianjun","year":"2023","unstructured":"Xianjun Yang, Wei Cheng, Yue Wu, Linda Petzold, William Yang Wang, and Haifeng Chen. 2023. DNA-GPT: Divergent n-gram analysis for training-free detection of GPT-generated text. arXiv preprint arXiv:2305.17359 (2023).","journal-title":"arXiv preprint arXiv:2305.17359"},{"key":"e_1_3_2_65_2","unstructured":"Xiao Yu Yuang Qi Kejiang Chen Guoqiang Chen Xi Yang Pengyuan Zhu Weiming Zhang and Nenghai Yu. 2023. GPT Paternity Test: GPT Generated Text Detection with GPT Genetic Inheritance. 
Retrieved November 17 2024 from https:\/\/ar5iv.labs.arxiv.org\/html\/2305.12519"},{"key":"e_1_3_2_66_2","article-title":"A survey of large language models","author":"Zhao Wayne Xin","year":"2023","unstructured":"Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et\u00a0al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).","journal-title":"arXiv preprint arXiv:2303.18223"}],"container-title":["ACM Transactions on Management Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3704739","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3704739","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:17:58Z","timestamp":1750295878000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3704739"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,8]]},"references-count":65,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3704739"],"URL":"https:\/\/doi.org\/10.1145\/3704739","relation":{},"ISSN":["2158-656X","2158-6578"],"issn-type":[{"value":"2158-656X","type":"print"},{"value":"2158-6578","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,8]]},"assertion":[{"value":"2023-12-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-08","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}