{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T20:26:09Z","timestamp":1776198369984,"version":"3.50.1"},"reference-count":267,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,3,29]],"date-time":"2024-03-29T00:00:00Z","timestamp":1711670400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"NSF","award":["III-2106758"],"award-info":[{"award-number":["III-2106758"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>\n            Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions:\n            <jats:italic>what to evaluate<\/jats:italic>\n            ,\n            <jats:italic>where to evaluate<\/jats:italic>\n            , and\n            <jats:italic>how to evaluate<\/jats:italic>\n            . Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. 
Secondly, we answer the \u2018where\u2019 and \u2018how\u2019 questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at:\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"url\" xlink:href=\"https:\/\/github.com\/MLGroupJLU\/LLM-eval-survey\">https:\/\/github.com\/MLGroupJLU\/LLM-eval-survey<\/jats:ext-link>\n          <\/jats:p>","DOI":"10.1145\/3641289","type":"journal-article","created":{"date-parts":[[2024,1,23]],"date-time":"2024-01-23T12:28:28Z","timestamp":1706012908000},"page":"1-45","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2225,"title":["A Survey on Evaluation of Large Language Models"],"prefix":"10.1145","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7178-6088","authenticated-orcid":false,"given":"Yupeng","family":"Chang","sequence":"first","affiliation":[{"name":"School of Artificial Intelligence, Jilin University, Changchun, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5904-5313","authenticated-orcid":false,"given":"Xu","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence, Jilin University, Changchun, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4833-0880","authenticated-orcid":false,"given":"Jindong","family":"Wang","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6289-5872","authenticated-orcid":false,"given":"Yuan","family":"Wu","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence, Jilin University, Changchun, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0667-7349","authenticated-orcid":false,"given":"Linyi","family":"Yang","sequence":"additional","affiliation":[{"name":"Westlake University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6220-1476","authenticated-orcid":false,"given":"Kaijie","family":"Zhu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1960-4803","authenticated-orcid":false,"given":"Hao","family":"Chen","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2710-1613","authenticated-orcid":false,"given":"Xiaoyuan","family":"Yi","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3023-8082","authenticated-orcid":false,"given":"Cunxiang","family":"Wang","sequence":"additional","affiliation":[{"name":"Westlake University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-9969-8259","authenticated-orcid":false,"given":"Yidong","family":"Wang","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9331-4716","authenticated-orcid":false,"given":"Wei","family":"Ye","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5214-2268","authenticated-orcid":false,"given":"Yue","family":"Zhang","sequence":"additional","affiliation":[{"name":"Westlake University, Hangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2697-8093","authenticated-orcid":false,"given":"Yi","family":"Chang","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence, Jilin University, Changchun, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3491-5968","authenticated-orcid":false,"given":"Philip S.","family":"Yu","sequence":"additional","affiliation":[{"name":"University of Illinois at Chicago, Chicago, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5059-8360","authenticated-orcid":false,"given":"Qiang","family":"Yang","sequence":"additional","affiliation":[{"name":"Hong Kong University of Science and Technology, Kowloon, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8608-8482","authenticated-orcid":false,"given":"Xing","family":"Xie","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,3,29]]},"reference":[{"key":"e_1_3_2_2_2","article-title":"Benchmarking Arabic AI with large language models","author":"Abdelali Ahmed","year":"2023","unstructured":"Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram Hasanain, Basel Mousi, Sabri Boughorbel, Yassine El Kheir, Daniel Izham, Fahim Dalvi, Majd Hawasly, et\u00a0al. 2023. Benchmarking Arabic AI with large language models. arXiv preprint arXiv:2305.14982 (2023).","journal-title":"arXiv preprint arXiv:2305.14982"},{"key":"e_1_3_2_3_2","article-title":"MEGA: Multilingual evaluation of generative AI","author":"Ahuja Kabir","year":"2023","unstructured":"Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Samuel Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, et\u00a0al. 2023. MEGA: Multilingual evaluation of generative AI. arXiv preprint arXiv:2303.12528 (2023).","journal-title":"arXiv preprint arXiv:2303.12528"},{"key":"e_1_3_2_4_2","article-title":"Have LLMs advanced enough? 
A challenging problem solving benchmark for large language models","author":"Arora Daman","year":"2023","unstructured":"Daman Arora, Himanshu Gaurav Singh, et\u00a0al. 2023. Have LLMs advanced enough? A challenging problem solving benchmark for large language models. arXiv preprint arXiv:2305.15074 (2023).","journal-title":"arXiv preprint arXiv:2305.15074"},{"key":"e_1_3_2_5_2","article-title":"A general language assistant as a laboratory for alignment","author":"Askell Amanda","year":"2021","unstructured":"Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et\u00a0al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861 (2021).","journal-title":"arXiv preprint arXiv:2112.00861"},{"key":"e_1_3_2_6_2","article-title":"Benchmarking foundation models with language-model-as-an-examiner","author":"Bai Yushi","year":"2023","unstructured":"Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et\u00a0al. 2023. Benchmarking foundation models with language-model-as-an-examiner. arXiv preprint arXiv:2306.04181 (2023).","journal-title":"arXiv preprint arXiv:2306.04181"},{"key":"e_1_3_2_7_2","article-title":"A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity","author":"Bang Yejin","year":"2023","unstructured":"Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et\u00a0al. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. 
arXiv preprint arXiv:2302.04023 (2023).","journal-title":"arXiv preprint arXiv:2302.04023"},{"key":"e_1_3_2_8_2","first-page":"313","volume-title":"11th Conference of the European Chapter of the Association for Computational Linguistics","author":"Belz Anja","year":"2006","unstructured":"Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics. 313\u2013320."},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Daniel Berrar. 2019. Cross-Validation. (2019).","DOI":"10.1016\/B978-0-12-809633-8.20349-X"},{"key":"e_1_3_2_10_2","article-title":"ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models","author":"Bian Ning","year":"2023","unstructured":"Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, and Ben He. 2023. ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models. arXiv preprint arXiv:2303.16421 (2023).","journal-title":"arXiv preprint arXiv:2303.16421"},{"key":"e_1_3_2_11_2","article-title":"Personality testing of GPT-3: Limited temporal reliability, but highlighted social desirability of GPT-3\u2019s personality instruments results","author":"Bodroza Bojana","year":"2023","unstructured":"Bojana Bodroza, Bojana M. Dinic, and Ljubisa Bojic. 2023. Personality testing of GPT-3: Limited temporal reliability, but highlighted social desirability of GPT-3\u2019s personality instruments results. arXiv preprint arXiv:2306.04308 (2023).","journal-title":"arXiv preprint arXiv:2306.04308"},{"key":"e_1_3_2_12_2","article-title":"On the opportunities and risks of foundation models","author":"Bommasani Rishi","year":"2021","unstructured":"Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et\u00a0al. 
2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).","journal-title":"arXiv preprint arXiv:2108.07258"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1080\/09540269974483"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.5555\/176313.176316"},{"key":"e_1_3_2_15_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et\u00a0al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_16_2","article-title":"Sparks of artificial general intelligence: Early experiments with GPT-4","author":"Bubeck S\u00e9bastien","year":"2023","unstructured":"S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et\u00a0al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023).","journal-title":"arXiv preprint arXiv:2303.12712"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.c3nlp-1.7"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10916-023-01925-4"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1021\/acs.jcim.3c00285"},{"key":"e_1_3_2_20_2","article-title":"Evaluating large language models trained on code","author":"Chen Mark","year":"2021","unstructured":"Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et\u00a0al. 2021. Evaluating large language models trained on code. 
arXiv preprint arXiv:2107.03374 (2021).","journal-title":"arXiv preprint arXiv:2107.03374"},{"key":"e_1_3_2_21_2","article-title":"Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study","author":"Chen Yi","year":"2023","unstructured":"Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study. arXiv preprint arXiv:2304.00723 (2023).","journal-title":"arXiv preprint arXiv:2304.00723"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.fertnstert.2023.05.151"},{"key":"e_1_3_2_23_2","article-title":"INSTRUCTEVAL: Towards holistic evaluation of instruction-tuned large language models","author":"Chia Yew Ken","year":"2023","unstructured":"Yew Ken Chia, Pengfei Hong, Lidong Bing, and Soujanya Poria. 2023. INSTRUCTEVAL: Towards holistic evaluation of instruction-tuned large language models. arXiv preprint arXiv:2306.04757 (2023).","journal-title":"arXiv preprint arXiv:2306.04757"},{"key":"e_1_3_2_24_2","article-title":"Do LLMs understand social knowledge? Evaluating the sociability of large language models with SocKET benchmark","author":"Choi Minje","year":"2023","unstructured":"Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, and David Jurgens. 2023. Do LLMs understand social knowledge? Evaluating the sociability of large language models with SocKET benchmark. arXiv preprint arXiv:2305.14938 (2023).","journal-title":"arXiv preprint arXiv:2305.14938"},{"key":"e_1_3_2_25_2","article-title":"PaLM: Scaling language modeling with pathways","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et\u00a0al. 2022. PaLM: Scaling language modeling with pathways. 
arXiv preprint arXiv:2204.02311 (2022).","journal-title":"arXiv preprint arXiv:2204.02311"},{"key":"e_1_3_2_26_2","article-title":"Deep reinforcement learning from human preferences","volume":"30","author":"Christiano Paul F.","year":"2017","unstructured":"Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-35320-8_1"},{"key":"e_1_3_2_28_2","article-title":"Evaluating language models for mathematics through interactions","author":"Collins Katherine M.","year":"2023","unstructured":"Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, et\u00a0al. 2023. Evaluating language models for mathematics through interactions. arXiv preprint arXiv:2306.01694 (2023).","journal-title":"arXiv preprint arXiv:2306.01694"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00994018"},{"key":"e_1_3_2_30_2","article-title":"Uncovering ChatGPT\u2019s capabilities in recommender systems","author":"Dai Sunhao","year":"2023","unstructured":"Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. 2023. Uncovering ChatGPT\u2019s capabilities in recommender systems. arXiv preprint arXiv:2305.02182 (2023).","journal-title":"arXiv preprint arXiv:2305.02182"},{"key":"e_1_3_2_31_2","doi-asserted-by":"crossref","unstructured":"Wei Dai Jionghao Lin Flora Jin Tongguang Li Yi-Shan Tsai Dragan Gasevic and Guanliang Chen. 2023. Can large language models provide feedback to students? A case study on ChatGPT. 
(2023).","DOI":"10.1109\/ICALT58122.2023.00100"},{"key":"e_1_3_2_32_2","article-title":"Investigating the effectiveness of ChatGPT in mathematical reasoning and problem solving: Evidence from the Vietnamese national high school graduation examination","author":"Dao Xuan-Quy","year":"2023","unstructured":"Xuan-Quy Dao and Ngoc-Bich Le. 2023. Investigating the effectiveness of ChatGPT in mathematical reasoning and problem solving: Evidence from the Vietnamese national high school graduation examination. arXiv preprint arXiv:2306.06331 (2023).","journal-title":"arXiv preprint arXiv:2306.06331"},{"key":"e_1_3_2_33_2","article-title":"Can ChatGPT pass high school exams on English language comprehension","author":"Winter Joost C. F. de","year":"2023","unstructured":"Joost C. F. de Winter. 2023. Can ChatGPT pass high school exams on English language comprehension. Researchgate. Preprint (2023).","journal-title":"Researchgate. Preprint"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_35_2","article-title":"How ready are pre-trained abstractive models and LLMs for legal case judgement summarization?","author":"Deroy Aniket","year":"2023","unstructured":"Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. 2023. How ready are pre-trained abstractive models and LLMs for legal case judgement summarization? arXiv preprint arXiv:2306.01248 (2023).","journal-title":"arXiv preprint arXiv:2306.01248"},{"key":"e_1_3_2_36_2","article-title":"Toxicity in ChatGPT: Analyzing persona-assigned language models","author":"Deshpande Ameet","year":"2023","unstructured":"Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in ChatGPT: Analyzing persona-assigned language models. 
arXiv preprint arXiv:2304.05335 (2023).","journal-title":"arXiv preprint arXiv:2304.05335"},{"key":"e_1_3_2_37_2","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445924"},{"key":"e_1_3_2_39_2","article-title":"AlpacaFarm: A simulation framework for methods that learn from human feedback","author":"Dubois Yann","year":"2023","unstructured":"Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaFarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387 (2023).","journal-title":"arXiv preprint arXiv:2305.14387"},{"key":"e_1_3_2_40_2","first-page":"1","article-title":"Analysis of large-language model versus human performance for genetics questions","author":"Duong Dat","year":"2023","unstructured":"Dat Duong and Benjamin D. Solomon. 2023. Analysis of large-language model versus human performance for genetics questions. European Journal of Human Genetics (2023), 1\u20133.","journal-title":"European Journal of Human Genetics"},{"key":"e_1_3_2_41_2","unstructured":"Wenqi Fan Zihuai Zhao Jiatong Li Yunqing Liu Xiaowei Mei Yiqi Wang Jiliang Tang and Qing Li. 2023. Recommender Systems in the Era of Large Language Models (LLMs). (2023). 
arxiv:cs.IR\/2307.02046"},{"key":"e_1_3_2_42_2","first-page":"31306","article-title":"DDXPlus: A new dataset for automatic medical diagnosis","volume":"35","author":"Tchango Arsene Fansi","year":"2022","unstructured":"Arsene Fansi Tchango, Rishab Goel, Zhi Wen, Julien Martel, and Joumana Ghosn. 2022. DDXPlus: A new dataset for automatic medical diagnosis. Advances in Neural Information Processing Systems 35 (2022), 31306\u201331318.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_43_2","article-title":"Should ChatGPT be biased? Challenges and risks of bias in large language models","author":"Ferrara Emilio","year":"2023","unstructured":"Emilio Ferrara. 2023. Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738 (2023).","journal-title":"arXiv preprint arXiv:2304.03738"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-020-09548-1"},{"key":"e_1_3_2_45_2","first-page":"1","article-title":"Baby steps in evaluating the capacities of large language models","author":"Frank Michael C.","year":"2023","unstructured":"Michael C. Frank. 2023. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology (2023), 1\u20132.","journal-title":"Nature Reviews Psychology"},{"key":"e_1_3_2_46_2","article-title":"Mathematical capabilities of ChatGPT","author":"Frieder Simon","year":"2023","unstructured":"Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of ChatGPT. 
arXiv preprint arXiv:2301.13867 (2023).","journal-title":"arXiv preprint arXiv:2301.13867"},{"key":"e_1_3_2_47_2","article-title":"MME: A comprehensive evaluation benchmark for multimodal large language models","author":"Fu Chaoyou","year":"2023","unstructured":"Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et\u00a0al. 2023. MME: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394 (2023).","journal-title":"arXiv preprint arXiv:2306.13394"},{"key":"e_1_3_2_48_2","article-title":"Chain-of-thought hub: A continuous effort to measure large language models\u2019 reasoning performance","author":"Fu Yao","year":"2023","unstructured":"Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. 2023. Chain-of-thought hub: A continuous effort to measure large language models\u2019 reasoning performance. arXiv preprint arXiv:2305.17306 (2023).","journal-title":"arXiv preprint arXiv:2305.17306"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11222-009-9153-8"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/72.80230"},{"key":"e_1_3_2_51_2","article-title":"Adaptive testing of computer vision models","author":"Gao Irena","year":"2022","unstructured":"Irena Gao, Gabriel Ilharco, Scott Lundberg, and Marco Tulio Ribeiro. 2022. Adaptive testing of computer vision models. arXiv preprint arXiv:2212.02774 (2022).","journal-title":"arXiv preprint arXiv:2212.02774"},{"key":"e_1_3_2_52_2","doi-asserted-by":"crossref","unstructured":"Jianfeng Gao and Chin-Yew Lin. 2004. Introduction to the special issue on statistical language modeling. (2004) 87\u201393.","DOI":"10.1145\/1034780.1034781"},{"key":"e_1_3_2_53_2","article-title":"Making pre-trained language models better few-shot learners","author":"Gao Tianyu","year":"2020","unstructured":"Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. 
Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723 (2020).","journal-title":"arXiv preprint arXiv:2012.15723"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.301"},{"key":"e_1_3_2_55_2","article-title":"Selective classification for deep neural networks","volume":"30","author":"Geifman Yonatan","year":"2017","unstructured":"Yonatan Geifman and Ran El-Yaniv. 2017. Selective classification for deep neural networks. Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_56_2","article-title":"TrueTeacher: Learning factual consistency evaluation with large language models","author":"Gekhman Zorik","year":"2023","unstructured":"Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. 2023. TrueTeacher: Learning factual consistency evaluation with large language models. arXiv preprint arXiv:2305.11171 (2023).","journal-title":"arXiv preprint arXiv:2305.11171"},{"key":"e_1_3_2_57_2","article-title":"Large language models are not abstract reasoners","author":"Gendron Ga\u00ebl","year":"2023","unstructured":"Ga\u00ebl Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. 2023. Large language models are not abstract reasoners. arXiv preprint arXiv:2305.19555 (2023).","journal-title":"arXiv preprint arXiv:2305.19555"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.2196\/45312"},{"key":"e_1_3_2_59_2","first-page":"55","volume-title":"Advances in Experimental Social Psychology","author":"Graham Jesse","year":"2013","unstructured":"Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P. Wojcik, and Peter H. Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in Experimental Social Psychology. Vol. 47. 
Elsevier, 55\u2013130."},{"key":"e_1_3_2_60_2","article-title":"Xiezhi: An ever-updating benchmark for holistic domain knowledge evaluation","author":"Gu Zhouhong","year":"2023","unstructured":"Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, et\u00a0al. 2023. Xiezhi: An ever-updating benchmark for holistic domain knowledge evaluation. arXiv preprint arXiv:2306.05783 (2023).","journal-title":"arXiv preprint arXiv:2306.05783"},{"key":"e_1_3_2_61_2","first-page":"1321","volume-title":"International Conference on Machine Learning","author":"Guo Chuan","year":"2017","unstructured":"Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning. PMLR, 1321\u20131330."},{"key":"e_1_3_2_62_2","article-title":"What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks","author":"Guo Taicheng","year":"2023","unstructured":"Taicheng Guo, Kehan Guo, Zhengwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang, et\u00a0al. 2023. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365 (2023).","journal-title":"arXiv preprint arXiv:2305.18365"},{"key":"e_1_3_2_63_2","unstructured":"Thilo Hagendorff and Sarah Fabi. 2023. Human-like Intuitive Behavior and Reasoning Biases Emerged in Language Models \u2013 and Disappeared in GPT-4. (2023). arxiv:cs.CL\/2306.07622"},{"key":"e_1_3_2_64_2","article-title":"Evaluation of AI chatbots for patient-specific EHR questions","author":"Hamidi Alaleh","year":"2023","unstructured":"Alaleh Hamidi and Kirk Roberts. 2023. Evaluation of AI chatbots for patient-specific EHR questions. 
arXiv preprint arXiv:2306.02549 (2023).","journal-title":"arXiv preprint arXiv:2306.02549"},{"key":"e_1_3_2_65_2","article-title":"Equality of opportunity in supervised learning","volume":"29","author":"Hardt Moritz","year":"2016","unstructured":"Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems 29 (2016).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_66_2","article-title":"The political ideology of conversational AI: Converging evidence on ChatGPT\u2019s pro-environmental, left-libertarian orientation","author":"Hartmann Jochen","year":"2023","unstructured":"Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. 2023. The political ideology of conversational AI: Converging evidence on ChatGPT\u2019s pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768 (2023).","journal-title":"arXiv preprint arXiv:2301.01768"},{"key":"e_1_3_2_67_2","article-title":"Can large language models understand real-world complex instructions?","author":"He Qianyu","year":"2023","unstructured":"Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, et\u00a0al. 2023. Can large language models understand real-world complex instructions? arXiv preprint arXiv:2309.09150 (2023).","journal-title":"arXiv preprint arXiv:2309.09150"},{"key":"e_1_3_2_68_2","article-title":"Exploring the responses of large language models to beginner programmers\u2019 help requests","author":"Hellas Arto","year":"2023","unstructured":"Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanp\u00e4\u00e4, and Juha Sorva. 2023. Exploring the responses of large language models to beginner programmers\u2019 help requests. 
arXiv preprint arXiv:2306.05715 (2023).","journal-title":"arXiv preprint arXiv:2306.05715"},{"key":"e_1_3_2_69_2","article-title":"Measuring coding challenge competence with apps","author":"Hendrycks Dan","year":"2021","unstructured":"Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et\u00a0al. 2021. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938 (2021).","journal-title":"arXiv preprint arXiv:2105.09938"},{"key":"e_1_3_2_70_2","article-title":"Aligning AI with shared human values","author":"Hendrycks Dan","year":"2020","unstructured":"Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2020. Aligning AI with shared human values. arXiv preprint arXiv:2008.02275 (2020).","journal-title":"arXiv preprint arXiv:2008.02275"},{"key":"e_1_3_2_71_2","article-title":"Measuring massive multitask language understanding","author":"Hendrycks Dan","year":"2020","unstructured":"Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020).","journal-title":"arXiv preprint arXiv:2009.03300"},{"key":"e_1_3_2_72_2","article-title":"CUAD: An expert-annotated NLP dataset for legal contract review","author":"Hendrycks Dan","year":"2021","unstructured":"Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An expert-annotated NLP dataset for legal contract review. arXiv preprint arXiv:2103.06268 (2021).","journal-title":"arXiv preprint arXiv:2103.06268"},{"key":"e_1_3_2_73_2","article-title":"Measuring mathematical problem solving with the math dataset","author":"Hendrycks Dan","year":"2021","unstructured":"Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. 
Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874 (2021).","journal-title":"arXiv preprint arXiv:2103.03874"},{"key":"e_1_3_2_74_2","article-title":"Evaluating large language models on a highly-specialized topic, radiation oncology physics","author":"Holmes Jason","year":"2023","unstructured":"Jason Holmes, Zhengliang Liu, Lian Zhang, Yuzhen Ding, Terence T. Sio, Lisa A. McGee, Jonathan B. Ashman, Xiang Li, Tianming Liu, Jiajian Shen, et\u00a0al. 2023. Evaluating large language models on a highly-specialized topic, radiation oncology physics. arXiv preprint arXiv:2304.01938 (2023).","journal-title":"arXiv preprint arXiv:2304.01938"},{"key":"e_1_3_2_75_2","article-title":"TRUE: Re-evaluating factual consistency evaluation","author":"Honovich Or","year":"2022","unstructured":"Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. arXiv preprint arXiv:2204.04991 (2022).","journal-title":"arXiv preprint arXiv:2204.04991"},{"key":"e_1_3_2_76_2","article-title":"Choice-75: A dataset on decision branching in script learning","author":"Hou Zhaoyi Joey","year":"2023","unstructured":"Zhaoyi Joey Hou, Li Zhang, and Chris Callison-Burch. 2023. Choice-75: A dataset on decision branching in script learning. arXiv preprint arXiv:2309.11737 (2023).","journal-title":"arXiv preprint arXiv:2309.11737"},{"key":"e_1_3_2_77_2","article-title":"Emotionally numb or empathetic? Evaluating how LLMs feel using EmotionBench","author":"Huang Jen-tse","year":"2023","unstructured":"Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, and Michael R. Lyu. 2023. Emotionally numb or empathetic? Evaluating how LLMs feel using EmotionBench. 
arXiv preprint arXiv:2308.03656 (2023).","journal-title":"arXiv preprint arXiv:2308.03656"},{"key":"e_1_3_2_78_2","article-title":"Language is not all you need: Aligning perception with language models","author":"Huang Shaohan","year":"2023","unstructured":"Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et\u00a0al. 2023. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045 (2023).","journal-title":"arXiv preprint arXiv:2302.14045"},{"key":"e_1_3_2_79_2","article-title":"C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models","author":"Huang Yuzhen","year":"2023","unstructured":"Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et\u00a0al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322 (2023).","journal-title":"arXiv preprint arXiv:2305.08322"},{"key":"e_1_3_2_80_2","unstructured":"Yue Huang Qihui Zhang Philip S. Yu and Lichao Sun. 2023. TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models. (2023). arxiv:cs.CL\/2306.11507"},{"key":"e_1_3_2_81_2","unstructured":"HuggingFace. 2023. Open-source Large Language Models Leaderboard. https:\/\/huggingface.co\/spaces\/Hugging-FaceH4\/open_llm_leaderboard (2023)."},{"key":"e_1_3_2_82_2","article-title":"Evaluation of ChatGPT on biomedical tasks: A zero-shot comparison with fine-tuned generative transformers","author":"Jahan Israt","year":"2023","unstructured":"Israt Jahan, Md. Tahmid Rahman Laskar, Chun Peng, and Jimmy Huang. 2023. Evaluation of ChatGPT on biomedical tasks: A zero-shot comparison with fine-tuned generative transformers. arXiv preprint arXiv:2306.04504 (2023).","journal-title":"arXiv preprint arXiv:2306.04504"},{"key":"e_1_3_2_83_2","article-title":"Bring your own data! 
Self-supervised evaluation for large language models","author":"Jain Neel","year":"2023","unstructured":"Neel Jain, Khalid Saifullah, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2023. Bring your own data! Self-supervised evaluation for large language models. arXiv preprint arXiv:2306.13651 (2023).","journal-title":"arXiv preprint arXiv:2306.13651"},{"key":"e_1_3_2_84_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.iheduc.2021.100817"},{"key":"e_1_3_2_85_2","article-title":"ChatGPT is fun, but it is not funny! Humor is still challenging large language models","author":"Jentzsch Sophie","year":"2023","unstructured":"Sophie Jentzsch and Kristian Kersting. 2023. ChatGPT is fun, but it is not funny! Humor is still challenging large language models. arXiv preprint arXiv:2306.04563 (2023).","journal-title":"arXiv preprint arXiv:2306.04563"},{"key":"e_1_3_2_86_2","article-title":"BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset","author":"Ji Jiaming","year":"2023","unstructured":"Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2023. BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset. arXiv preprint arXiv:2307.04657 (2023).","journal-title":"arXiv preprint arXiv:2307.04657"},{"key":"e_1_3_2_87_2","article-title":"StructGPT: A general framework for large language model to reason over structured data","author":"Jiang Jinhao","year":"2023","unstructured":"Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645 (2023).","journal-title":"arXiv preprint arXiv:2305.09645"},{"key":"e_1_3_2_88_2","doi-asserted-by":"crossref","unstructured":"Douglas Johnson Rachel Goodman J. 
Patrinely Cosby Stone Eli Zimmerman Rebecca Donald Sam Chang Sean Berkowitz Avni Finn Eiman Jahangir et\u00a0al. 2023. Assessing the accuracy and reliability of AI-generated medical responses: An evaluation of the Chat-GPT model. (2023).","DOI":"10.21203\/rs.3.rs-2566942\/v1"},{"key":"e_1_3_2_89_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P17-1147"},{"key":"e_1_3_2_90_2","article-title":"Language models (mostly) know what they know","volume":"2207","author":"Kadavath Saurav","year":"2022","unstructured":"Saurav Kadavath, Tom Conerly, Amanda Askell, T. J. Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zachary Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, John Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom B. Brown, Jack Clark, Nicholas Joseph, Benjamin Mann, Sam McCandlish, Christopher Olah, and Jared Kaplan. 2022. Language models (mostly) know what they know. ArXiv abs\/2207.05221 (2022).","journal-title":"ArXiv"},{"key":"e_1_3_2_91_2","article-title":"MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning","author":"Karpas Ehud","year":"2022","unstructured":"Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et\u00a0al. 2022. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445 (2022).","journal-title":"arXiv preprint arXiv:2205.00445"},{"key":"e_1_3_2_92_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.lindif.2023.102274"},{"key":"e_1_3_2_93_2","unstructured":"Jean Khalfa. 1994. What is intelligence? 
(1994)."},{"key":"e_1_3_2_94_2","article-title":"covLLM: Large language models for COVID-19 biomedical literature","author":"Khan Yousuf A.","year":"2023","unstructured":"Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, and Ben Ehlert. 2023. covLLM: Large language models for COVID-19 biomedical literature. arXiv preprint arXiv:2306.04926 (2023).","journal-title":"arXiv preprint arXiv:2306.04926"},{"key":"e_1_3_2_95_2","article-title":"Dynabench: Rethinking benchmarking in NLP","author":"Kiela Douwe","year":"2021","unstructured":"Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, et\u00a0al. 2021. Dynabench: Rethinking benchmarking in NLP. arXiv preprint arXiv:2104.14337 (2021).","journal-title":"arXiv preprint arXiv:2104.14337"},{"key":"e_1_3_2_96_2","first-page":"1137","volume-title":"IJCAI","author":"Kohavi Ron","year":"1995","unstructured":"Ron Kohavi et\u00a0al. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, Vol. 14. Montreal, Canada, 1137\u20131145."},{"key":"e_1_3_2_97_2","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2011-720"},{"key":"e_1_3_2_98_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pdig.0000198"},{"key":"e_1_3_2_99_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00276"},{"key":"e_1_3_2_100_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-023-31412-2"},{"key":"e_1_3_2_101_2","article-title":"ChatGPT beyond english: Towards a comprehensive evaluation of large language models in multilingual learning","author":"Lai Viet Dac","year":"2023","unstructured":"Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. ChatGPT beyond english: Towards a comprehensive evaluation of large language models in multilingual learning. 
arXiv preprint arXiv:2304.05613 (2023).","journal-title":"arXiv preprint arXiv:2304.05613"},{"key":"e_1_3_2_102_2","article-title":"ChatGPT and other large language models as evolutionary engines for online interactive collaborative game design","author":"Lanzi Pier Luca","year":"2023","unstructured":"Pier Luca Lanzi and Daniele Loiacono. 2023. ChatGPT and other large language models as evolutionary engines for online interactive collaborative game design. arXiv preprint arXiv:2303.02155 (2023).","journal-title":"arXiv preprint arXiv:2303.02155"},{"key":"e_1_3_2_103_2","article-title":"A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets","author":"Laskar Md. Tahmid Rahman","year":"2023","unstructured":"Md. Tahmid Rahman Laskar, M. Saiful Bari, Mizanur Rahman, Md. Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Xiangji Huang. 2023. A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets. arXiv preprint arXiv:2305.18486 (2023).","journal-title":"arXiv preprint arXiv:2305.18486"},{"key":"e_1_3_2_104_2","article-title":"An evaluation of log parsing with ChatGPT","author":"Le Van-Hoang","year":"2023","unstructured":"Van-Hoang Le and Hongyu Zhang. 2023. An evaluation of log parsing with ChatGPT. arXiv preprint arXiv:2306.01590 (2023).","journal-title":"arXiv preprint arXiv:2306.01590"},{"key":"e_1_3_2_105_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature14539"},{"key":"e_1_3_2_106_2","article-title":"Can large language models infer and disagree like humans?","author":"Lee Noah","year":"2023","unstructured":"Noah Lee, Na Min An, and James Thorne. 2023. Can large language models infer and disagree like humans? 
arXiv preprint arXiv:2305.13788 (2023).","journal-title":"arXiv preprint arXiv:2305.13788"},{"key":"e_1_3_2_107_2","article-title":"BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension","author":"Lewis Mike","year":"2019","unstructured":"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019).","journal-title":"arXiv preprint arXiv:1910.13461"},{"key":"e_1_3_2_108_2","article-title":"Seed-bench: Benchmarking multimodal LLMs with generative comprehension","author":"Li Bohao","year":"2023","unstructured":"Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023. Seed-bench: Benchmarking multimodal LLMs with generative comprehension. arXiv preprint arXiv:2307.16125 (2023).","journal-title":"arXiv preprint arXiv:2307.16125"},{"key":"e_1_3_2_109_2","article-title":"CMMLU: Measuring massive multitask language understanding in Chinese","author":"Li Haonan","year":"2023","unstructured":"Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212 (2023).","journal-title":"arXiv preprint arXiv:2306.09212"},{"key":"e_1_3_2_110_2","unstructured":"Minghao Li Feifan Song Bowen Yu Haiyang Yu Zhoujun Li Fei Huang and Yongbin Li. 2023. API-Bank: A Benchmark for Tool-Augmented LLMs. (2023). arxiv:cs.CL\/2304.08244"},{"key":"e_1_3_2_111_2","article-title":"Exploring the upper limits of text-based collaborative filtering using large language models: Discoveries and insights","author":"Li Ruyu","year":"2023","unstructured":"Ruyu Li, Wenhao Deng, Yu Cheng, Zheng Yuan, Jiaqi Zhang, and Fajie Yuan. 2023. 
Exploring the upper limits of text-based collaborative filtering using large language models: Discoveries and insights. arXiv preprint arXiv:2305.11700 (2023).","journal-title":"arXiv preprint arXiv:2305.11700"},{"key":"e_1_3_2_112_2","unstructured":"Xinzhe Li Ming Liu Shang Gao and Wray Buntine. 2023. A Survey on Out-of-Distribution Evaluation of Neural NLP Models. (2023). arxiv:cs.CL\/2306.15261"},{"key":"e_1_3_2_113_2","unstructured":"Xuechen Li Tianyi Zhang Yann Dubois Rohan Taori Ishaan Gulrajani Carlos Guestrin Percy Liang and Tatsunori B. Hashimoto. 2023. AlpacaEval: An Automatic Evaluator of Instruction-following Models. https:\/\/github.com\/tatsu-lab\/alpaca_eval (2023)."},{"key":"e_1_3_2_114_2","article-title":"Evaluating object hallucination in large vision-language models","author":"Li Yifan","year":"2023","unstructured":"Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355 (2023).","journal-title":"arXiv preprint arXiv:2305.10355"},{"key":"e_1_3_2_115_2","article-title":"Holistic evaluation of language models","author":"Liang Percy","year":"2022","unstructured":"Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et\u00a0al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022).","journal-title":"arXiv preprint arXiv:2211.09110"},{"key":"e_1_3_2_116_2","article-title":"Leveraging word guessing games to assess the intelligence of large language models","author":"Liang Tian","year":"2023","unstructured":"Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Leveraging word guessing games to assess the intelligence of large language models. 
arXiv preprint arXiv:2310.20499 (2023).","journal-title":"arXiv preprint arXiv:2310.20499"},{"key":"e_1_3_2_117_2","article-title":"UHGEval: Benchmarking the hallucination of Chinese large language models via unconstrained generation","author":"Liang Xun","year":"2023","unstructured":"Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, et\u00a0al. 2023. UHGEval: Benchmarking the hallucination of Chinese large language models via unconstrained generation. arXiv preprint arXiv:2311.15296 (2023).","journal-title":"arXiv preprint arXiv:2311.15296"},{"key":"e_1_3_2_118_2","article-title":"Can large language models reason about medical questions?","author":"Li\u00e9vin Valentin","year":"2022","unstructured":"Valentin Li\u00e9vin, Christoffer Egeberg Hother, and Ole Winther. 2022. Can large language models reason about medical questions? arXiv preprint arXiv:2207.08143 (2022).","journal-title":"arXiv preprint arXiv:2207.08143"},{"key":"e_1_3_2_119_2","first-page":"74","volume-title":"Text Summarization Branches Out","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74\u201381. https:\/\/aclanthology.org\/W04-1013"},{"key":"e_1_3_2_120_2","article-title":"TruthfulQA: Measuring how models mimic human falsehoods","author":"Lin Stephanie","year":"2021","unstructured":"Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. TruthfulQA: Measuring how models mimic human falsehoods. 
arXiv preprint arXiv:2109.07958 (2021).","journal-title":"arXiv preprint arXiv:2109.07958"},{"key":"e_1_3_2_121_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"e_1_3_2_122_2","article-title":"LLM-Eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models","author":"Lin Yen-Ting","year":"2023","unstructured":"Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-Eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. arXiv preprint arXiv:2305.13711 (2023).","journal-title":"arXiv preprint arXiv:2305.13711"},{"key":"e_1_3_2_123_2","unstructured":"Chuang Liu Renren Jin Yuqi Ren Linhao Yu Tianyu Dong Xiaohan Peng Shuting Zhang Jianxiang Peng Peiyi Zhang Qingqing Lyu Xiaowen Su Qun Liu and Deyi Xiong. 2023. M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models. (2023). arxiv:cs.CL\/2305.10263"},{"key":"e_1_3_2_124_2","unstructured":"Fuxiao Liu Kevin Lin Linjie Li Jianfeng Wang Yaser Yacoob and Lijuan Wang. 2023. Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. (2023). arxiv:cs.CV\/2306.14565"},{"key":"e_1_3_2_125_2","unstructured":"Hanmeng Liu Ruoxi Ning Zhiyang Teng Jian Liu Qiji Zhou and Yue Zhang. 2023. Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4. (2023). arxiv:cs.CL\/2304.03439"},{"key":"e_1_3_2_126_2","article-title":"Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation","author":"Liu Jiawei","year":"2023","unstructured":"Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. 
arXiv preprint arXiv:2305.01210 (2023).","journal-title":"arXiv preprint arXiv:2305.01210"},{"key":"e_1_3_2_127_2","unstructured":"Yuan Liu Haodong Duan Yuanhan Zhang Bo Li Songyang Zhang Wangbo Zhao Yike Yuan Jiaqi Wang Conghui He Ziwei Liu Kai Chen and Dahua Lin. 2023. MMBench: Is Your Multi-modal Model an All-around Player? (2023). arxiv:cs.CV\/2307.06281"},{"key":"e_1_3_2_128_2","article-title":"Summary of ChatGPT\/GPT-4 research and perspective towards the future of large language models","author":"Liu Yiheng","year":"2023","unstructured":"Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, et\u00a0al. 2023. Summary of ChatGPT\/GPT-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852 (2023).","journal-title":"arXiv preprint arXiv:2304.01852"},{"key":"e_1_3_2_129_2","unstructured":"LMSYS. 2023. Chatbot Arena: Benchmarking LLMs in the Wild with Elo Ratings. https:\/\/lmsys.org (2023)."},{"key":"e_1_3_2_130_2","article-title":"Can ChatGPT forecast stock price movements? Return predictability and large language models","author":"Lopez-Lira Alejandro","year":"2023","unstructured":"Alejandro Lopez-Lira and Yuehua Tang. 2023. Can ChatGPT forecast stock price movements? Return predictability and large language models. arXiv preprint arXiv:2304.07619 (2023).","journal-title":"arXiv preprint arXiv:2304.07619"},{"key":"e_1_3_2_131_2","article-title":"New trends in machine translation using large language models: Case examples with ChatGPT","author":"Lyu Chenyang","year":"2023","unstructured":"Chenyang Lyu, Jitao Xu, and Longyue Wang. 2023. New trends in machine translation using large language models: Case examples with ChatGPT. 
arXiv preprint arXiv:2305.01181 (2023).","journal-title":"arXiv preprint arXiv:2305.01181"},{"key":"e_1_3_2_132_2","article-title":"Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: Promising results, limitations, and potential","author":"Lyu Qing","year":"2023","unstructured":"Qing Lyu, Josh Tan, Mike E. Zapadka, Janardhana Ponnatapuram, Chuang Niu, Ge Wang, and Christopher T. Whitlow. 2023. Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: Promising results, limitations, and potential. arXiv preprint arXiv:2303.09038 (2023).","journal-title":"arXiv preprint arXiv:2303.09038"},{"key":"e_1_3_2_133_2","first-page":"10351","article-title":"Dynaboard: An evaluation-as-a-service platform for holistic next-generation benchmarking","volume":"34","author":"Ma Zhiyi","year":"2021","unstructured":"Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Adina Williams, and Douwe Kiela. 2021. Dynaboard: An evaluation-as-a-service platform for holistic next-generation benchmarking. Advances in Neural Information Processing Systems 34 (2021), 10351\u201310367.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_134_2","article-title":"SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models","author":"Manakul Potsawee","year":"2023","unstructured":"Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896 (2023).","journal-title":"arXiv preprint arXiv:2303.08896"},{"key":"e_1_3_2_135_2","doi-asserted-by":"crossref","unstructured":"Potsawee Manakul Adian Liusie and Mark J. F. Gales. 2023. MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization. (2023). 
arxiv:cs.CL\/2301.12307","DOI":"10.18653\/v1\/2023.ijcnlp-main.4"},{"key":"e_1_3_2_136_2","article-title":"Dynamic benchmarking of masked language models on temporal concept drift with multiple views","author":"Margatina Katerina","year":"2023","unstructured":"Katerina Margatina, Shuai Wang, Yogarshi Vyas, Neha Anna John, Yassine Benajiba, and Miguel Ballesteros. 2023. Dynamic benchmarking of masked language models on temporal concept drift with multiple views. arXiv preprint arXiv:2302.12297 (2023).","journal-title":"arXiv preprint arXiv:2302.12297"},{"key":"e_1_3_2_137_2","doi-asserted-by":"crossref","unstructured":"John McCarthy. 2007. What is artificial intelligence. (2007).","DOI":"10.1145\/1283920.1283926"},{"key":"e_1_3_2_138_2","article-title":"Bing chat","year":"2023","unstructured":"Microsoft. 2023. Bing chat. https:\/\/www.bing.com\/new (2023).","journal-title":"https:\/\/www.bing.com\/new"},{"key":"e_1_3_2_139_2","article-title":"FActScore: Fine-grained atomic evaluation of factual precision in long form text generation","author":"Min Sewon","year":"2023","unstructured":"Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251 (2023).","journal-title":"arXiv preprint arXiv:2305.14251"},{"key":"e_1_3_2_140_2","article-title":"Large language models as tax attorneys: A case study in legal capabilities emergence","author":"Nay John J.","year":"2023","unstructured":"John J. Nay, David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, and Jungo Kasai. 2023. Large language models as tax attorneys: A case study in legal capabilities emergence. 
arXiv preprint arXiv:2306.07075 (2023).","journal-title":"arXiv preprint arXiv:2306.07075"},{"key":"e_1_3_2_141_2","article-title":"Adversarial NLI: A new benchmark for natural language understanding","author":"Nie Yixin","year":"2019","unstructured":"Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599 (2019).","journal-title":"arXiv preprint arXiv:1910.14599"},{"key":"e_1_3_2_142_2","article-title":"CodeGen: An open large language model for code with multi-turn program synthesis","author":"Nijkamp Erik","year":"2022","unstructured":"Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 (2022).","journal-title":"arXiv preprint arXiv:2203.13474"},{"key":"e_1_3_2_143_2","article-title":"Why we need new evaluation metrics for NLG","author":"Novikova Jekaterina","year":"2017","unstructured":"Jekaterina Novikova, Ond\u0159ej Du\u0161ek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. arXiv preprint arXiv:1707.06875 (2017).","journal-title":"arXiv preprint arXiv:1707.06875"},{"key":"e_1_3_2_144_2","doi-asserted-by":"publisher","DOI":"10.4174\/astr.2023.104.5.269"},{"key":"e_1_3_2_145_2","volume-title":"AIED Workshops","author":"Olney Andrew M.","year":"2023","unstructured":"Andrew M. Olney. 2023. Generating multiple choice questions from a textbook: LLMs match human performance on most metrics. In AIED Workshops."},{"key":"e_1_3_2_146_2","unstructured":"OpenAI. 2023. https:\/\/chat.openai.com.chat (2023)."},{"key":"e_1_3_2_147_2","unstructured":"OpenAI. 2023. GPT-4 Technical Report. (2023). 
arxiv:cs.CL\/2303.08774"},{"key":"e_1_3_2_148_2","doi-asserted-by":"publisher","DOI":"10.3389\/frai.2023.1199350"},{"key":"e_1_3_2_149_2","article-title":"ThoughtSource: A central hub for large language model reasoning data","author":"Ott Simon","year":"2023","unstructured":"Simon Ott, Konstantin Hebenstreit, Valentin Li\u00e9vin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, and Matthias Samwald. 2023. ThoughtSource: A central hub for large language model reasoning data. arXiv preprint arXiv:2301.11596 (2023).","journal-title":"arXiv preprint arXiv:2301.11596"},{"key":"e_1_3_2_150_2","first-page":"27730","article-title":"Training language models to follow instructions with human feedback","volume":"35","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et\u00a0al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730\u201327744.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_151_2","article-title":"Understanding the capabilities of large language models for automated planning","author":"Pallagani Vishal","year":"2023","unstructured":"Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava, Lior Horesh, Francesco Fabiano, and Andrea Loreggia. 2023. Understanding the capabilities of large language models for automated planning. arXiv preprint arXiv:2305.16151 (2023).","journal-title":"arXiv preprint arXiv:2305.16151"},{"key":"e_1_3_2_152_2","unstructured":"Shirui Pan Linhao Luo Yufei Wang Chen Chen Jiapu Wang and Xindong Wu. 2023. Unifying Large Language Models and Knowledge Graphs: A Roadmap. (2023). 
arxiv:cs.CL\/2306.08302"},{"key":"e_1_3_2_153_2","article-title":"TALM: Tool augmented language models","author":"Parisi Aaron","year":"2022","unstructured":"Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. TALM: Tool augmented language models. arXiv preprint arXiv:2205.12255 (2022).","journal-title":"arXiv preprint arXiv:2205.12255"},{"key":"e_1_3_2_154_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.165"},{"key":"e_1_3_2_155_2","article-title":"Leveraging large language models for topic classification in the domain of public affairs","author":"Pe\u00f1a Alejandro","year":"2023","unstructured":"Alejandro Pe\u00f1a, Aythami Morales, Julian Fierrez, Ignacio Serna, Javier Ortega-Garcia, I\u00f1igo Puente, Jorge Cordova, and Gonzalo Cordova. 2023. Leveraging large language models for topic classification in the domain of public affairs. arXiv preprint arXiv:2306.02864 (2023).","journal-title":"arXiv preprint arXiv:2306.02864"},{"key":"e_1_3_2_156_2","doi-asserted-by":"publisher","DOI":"10.1037\/1082-989X.2.4.329"},{"key":"e_1_3_2_157_2","article-title":"Measuring and modifying factual knowledge in large language models","author":"Pezeshkpour Pouya","year":"2023","unstructured":"Pouya Pezeshkpour. 2023. Measuring and modifying factual knowledge in large language models. arXiv preprint arXiv:2306.06264 (2023).","journal-title":"arXiv preprint arXiv:2306.06264"},{"key":"e_1_3_2_158_2","article-title":"Adversarially constructed evaluation sets are more challenging, but may not be fair","author":"Phang Jason","year":"2021","unstructured":"Jason Phang, Angelica Chen, William Huang, and Samuel R. Bowman. 2021. Adversarially constructed evaluation sets are more challenging, but may not be fair. arXiv preprint arXiv:2111.08181 (2021).","journal-title":"arXiv preprint arXiv:2111.08181"},{"key":"e_1_3_2_159_2","unstructured":"Dongqi Pu and Vera Demberg. 2023. 
ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer. (2023). arxiv:cs.CL\/2306.07799"},{"key":"e_1_3_2_160_2","article-title":"Is ChatGPT a general-purpose natural language processing task solver?","author":"Qin Chengwei","year":"2023","unstructured":"Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476 (2023).","journal-title":"arXiv preprint arXiv:2302.06476"},{"key":"e_1_3_2_161_2","unstructured":"Yujia Qin Shengding Hu Yankai Lin Weize Chen Ning Ding Ganqu Cui Zheni Zeng Yufei Huang Chaojun Xiao Chi Han Yi Ren Fung Yusheng Su Huadong Wang Cheng Qian Runchu Tian Kunlun Zhu Shihao Liang Xingyu Shen Bokai Xu Zhen Zhang Yining Ye Bowen Li Ziwei Tang Jing Yi Yuzhang Zhu Zhenning Dai Lan Yan Xin Cong Yaxi Lu Weilin Zhao Yuxiang Huang Junxi Yan Xu Han Xian Sun Dahai Li Jason Phang Cheng Yang Tongshuang Wu Heng Ji Zhiyuan Liu and Maosong Sun. 2023. Tool Learning with Foundation Models. (2023). arxiv:cs.CL\/2304.08354"},{"key":"e_1_3_2_162_2","unstructured":"Yujia Qin Shihao Liang Yining Ye Kunlun Zhu Lan Yan Yaxi Lu Yankai Lin Xin Cong Xiangru Tang Bill Qian Sihan Zhao Runchu Tian Ruobing Xie Jie Zhou Mark Gerstein Dahai Li Zhiyuan Liu and Maosong Sun. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. (2023). arxiv:cs.AI\/2307.16789"},{"key":"e_1_3_2_163_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans Ilya Sutskever et\u00a0al. 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_2_164_2","article-title":"A survey of hallucination in large foundation models","author":"Rawte Vipula","year":"2023","unstructured":"Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. 
arXiv preprint arXiv:2309.05922 (2023).","journal-title":"arXiv preprint arXiv:2309.05922"},{"key":"e_1_3_2_165_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.230"},{"key":"e_1_3_2_166_2","article-title":"Beyond accuracy: Behavioral testing of NLP models with CheckList","author":"Ribeiro Marco Tulio","year":"2020","unstructured":"Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. arXiv preprint arXiv:2005.04118 (2020).","journal-title":"arXiv preprint arXiv:2005.04118"},{"key":"e_1_3_2_167_2","article-title":"The two word test: A semantic benchmark for large language models","author":"Riccardi Nicholas","year":"2023","unstructured":"Nicholas Riccardi and Rutvik H. Desai. 2023. The two word test: A semantic benchmark for large language models. arXiv preprint arXiv:2306.04610 (2023).","journal-title":"arXiv preprint arXiv:2306.04610"},{"key":"e_1_3_2_168_2","article-title":"The self-perception and political biases of ChatGPT","author":"Rutinowski J\u00e9r\u00f4me","year":"2023","unstructured":"J\u00e9r\u00f4me Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. 2023. The self-perception and political biases of ChatGPT. arXiv preprint arXiv:2304.07333 (2023).","journal-title":"arXiv preprint arXiv:2304.07333"},{"key":"e_1_3_2_169_2","article-title":"Personality traits in large language models","author":"Safdari Mustafa","year":"2023","unstructured":"Mustafa Safdari, Greg Serapio-Garc\u00eda, Cl\u00e9ment Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matari\u0107. 2023. Personality traits in large language models. 
arXiv preprint arXiv:2307.00184 (2023).","journal-title":"arXiv preprint arXiv:2307.00184"},{"key":"e_1_3_2_170_2","first-page":"1","article-title":"Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery","author":"Samaan Jamil S.","year":"2023","unstructured":"Jamil S. Samaan, Yee Hui Yeo, Nithya Rajeev, Lauren Hawley, Stuart Abel, Wee Han Ng, Nitin Srinivasan, Justin Park, Miguel Burch, Rabindra Watson, et\u00a0al. 2023. Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery. Obesity Surgery (2023), 1\u20137.","journal-title":"Obesity Surgery"},{"key":"e_1_3_2_171_2","article-title":"Testing the general deductive reasoning capacity of large language models using OOD examples","author":"Saparov Abulhair","year":"2023","unstructured":"Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, and He He. 2023. Testing the general deductive reasoning capacity of large language models using OOD examples. arXiv preprint arXiv:2305.15269 (2023).","journal-title":"arXiv preprint arXiv:2305.15269"},{"key":"e_1_3_2_172_2","unstructured":"Tomohiro Sawada Daniel Paleka Alexander Havrilla Pranav Tadepalli Paula Vidas Alexander Kranias John J. Nay Kshitij Gupta and Aran Komatsuzaki. 2023. ARB: Advanced Reasoning Benchmark for Large Language Models. (2023). arxiv:cs.CL\/2307.13692"},{"key":"e_1_3_2_173_2","article-title":"Toolformer: Language models can teach themselves to use tools","author":"Schick Timo","year":"2023","unstructured":"Timo Schick, Jane Dwivedi-Yu, Roberto Dess\u00ec, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. 
arXiv preprint arXiv:2302.04761 (2023).","journal-title":"arXiv preprint arXiv:2302.04761"},{"key":"e_1_3_2_174_2","article-title":"Performance of ChatGPT on USMLE: Unlocking the potential of large language models for AI-assisted medical education","author":"Sharma Prabin","year":"2023","unstructured":"Prabin Sharma, Kisan Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, and Salik Ram Khanal. 2023. Performance of ChatGPT on USMLE: Unlocking the potential of large language models for AI-assisted medical education. arXiv preprint arXiv:2307.00112 (2023).","journal-title":"arXiv preprint arXiv:2307.00112"},{"key":"e_1_3_2_175_2","article-title":"HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace","author":"Shen Yongliang","year":"2023","unstructured":"Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580 (2023).","journal-title":"arXiv preprint arXiv:2303.17580"},{"key":"e_1_3_2_176_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.330"},{"key":"e_1_3_2_177_2","article-title":"Moral mimicry: Large language models produce moral rationalizations tailored to political identity","author":"Simmons Gabriel","year":"2022","unstructured":"Gabriel Simmons. 2022. Moral mimicry: Large language models produce moral rationalizations tailored to political identity. arXiv preprint arXiv:2209.12106 (2022).","journal-title":"arXiv preprint arXiv:2209.12106"},{"key":"e_1_3_2_178_2","article-title":"Large language models encode clinical knowledge","author":"Singhal Karan","year":"2022","unstructured":"Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et\u00a0al. 2022. Large language models encode clinical knowledge. 
arXiv preprint arXiv:2212.13138 (2022).","journal-title":"arXiv preprint arXiv:2212.13138"},{"key":"e_1_3_2_179_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-023-06291-2"},{"key":"e_1_3_2_180_2","article-title":"Using DeepSpeed and Megatron to train Megatron-Turing NLG 530b, a large-scale generative language model","author":"Smith Shaden","year":"2022","unstructured":"Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et\u00a0al. 2022. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990 (2022).","journal-title":"arXiv preprint arXiv:2201.11990"},{"key":"e_1_3_2_181_2","article-title":"Have large language models developed a personality?: Applicability of self-assessment tests in measuring personality in LLMs","author":"Song Xiaoyang","year":"2023","unstructured":"Xiaoyang Song, Akshat Gupta, Kiyan Mohebbizadeh, Shujie Hu, and Anant Singh. 2023. Have large language models developed a personality?: Applicability of self-assessment tests in measuring personality in LLMs. arXiv preprint arXiv:2305.14693 (2023).","journal-title":"arXiv preprint arXiv:2305.14693"},{"key":"e_1_3_2_182_2","article-title":"ChatGPT: A study on its utility for ubiquitous software engineering tasks","author":"Sridhara Giriprasad","year":"2023","unstructured":"Giriprasad Sridhara, Sourav Mazumdar, et\u00a0al. 2023. ChatGPT: A study on its utility for ubiquitous software engineering tasks. arXiv preprint arXiv:2305.16837 (2023).","journal-title":"arXiv preprint arXiv:2305.16837"},{"key":"e_1_3_2_183_2","article-title":"Beyond the imitation game: Quantifying and extrapolating the capabilities of language models","author":"Srivastava Aarohi","year":"2022","unstructured":"Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md. Shoeb, Abubakar Abid, Adam Fisch, Adam R. 
Brown, Adam Santoro, Aditya Gupta, Adri\u00e0 Garriga-Alonso, et\u00a0al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615 (2022).","journal-title":"arXiv preprint arXiv:2206.04615"},{"key":"e_1_3_2_184_2","article-title":"Is ChatGPT good at search? Investigating large language models as re-ranking agent","author":"Sun Weiwei","year":"2023","unstructured":"Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542 (2023).","journal-title":"arXiv preprint arXiv:2304.09542"},{"key":"e_1_3_2_185_2","article-title":"EvEval: A comprehensive evaluation of event semantics for large language models","author":"Tao Zhengwei","year":"2023","unstructured":"Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, and Wenpeng Hu. 2023. EvEval: A comprehensive evaluation of event semantics for large language models. arXiv preprint arXiv:2305.15268 (2023).","journal-title":"arXiv preprint arXiv:2305.15268"},{"key":"e_1_3_2_186_2","article-title":"BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models","author":"Thakur Nandan","year":"2021","unstructured":"Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663 (2021).","journal-title":"arXiv preprint arXiv:2104.08663"},{"key":"e_1_3_2_187_2","doi-asserted-by":"publisher","DOI":"10.2196\/46599"},{"key":"e_1_3_2_188_2","article-title":"Lamda: Language models for dialog applications","author":"Thoppilan Romal","year":"2022","unstructured":"Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et\u00a0al. 2022. 
Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239 (2022).","journal-title":"arXiv preprint arXiv:2201.08239"},{"key":"e_1_3_2_189_2","article-title":"Dynatask: A framework for creating dynamic AI benchmark tasks","author":"Thrush Tristan","year":"2022","unstructured":"Tristan Thrush, Kushal Tirumala, Anmol Gupta, Max Bartolo, Pedro Rodriguez, Tariq Kane, William Gaviria Rojas, Peter Mattson, Adina Williams, and Douwe Kiela. 2022. Dynatask: A framework for creating dynamic AI benchmark tasks. arXiv preprint arXiv:2204.01906 (2022).","journal-title":"arXiv preprint arXiv:2204.01906"},{"key":"e_1_3_2_190_2","article-title":"Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback","author":"Tian Katherine","year":"2023","unstructured":"Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D. Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975 (2023).","journal-title":"arXiv preprint arXiv:2305.14975"},{"key":"e_1_3_2_191_2","doi-asserted-by":"publisher","DOI":"10.1145\/3180155.3180220"},{"key":"e_1_3_2_192_2","unstructured":"ToolBench. 2023. Open-source tools learning benchmarks. https:\/\/github.com\/sambanova\/toolbench (2023)."},{"key":"e_1_3_2_193_2","article-title":"LLaMA: Open and efficient foundation language models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et\u00a0al. 2023. LLaMA: Open and efficient foundation language models. 
arXiv preprint arXiv:2302.13971 (2023).","journal-title":"arXiv preprint arXiv:2302.13971"},{"key":"e_1_3_2_194_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4020-6710-5_3"},{"key":"e_1_3_2_195_2","article-title":"On the planning abilities of large language models\u2013a critical investigation","author":"Valmeekam Karthik","year":"2023","unstructured":"Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. On the planning abilities of large language models\u2013a critical investigation. arXiv preprint arXiv:2305.15771 (2023).","journal-title":"arXiv preprint arXiv:2305.15771"},{"key":"e_1_3_2_196_2","article-title":"Large language models still can\u2019t plan (a benchmark for LLMs on planning and reasoning about change)","author":"Valmeekam Karthik","year":"2022","unstructured":"Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large language models still can\u2019t plan (a benchmark for LLMs on planning and reasoning about change). arXiv preprint arXiv:2206.10498 (2022).","journal-title":"arXiv preprint arXiv:2206.10498"},{"key":"e_1_3_2_197_2","doi-asserted-by":"crossref","first-page":"355","DOI":"10.18653\/v1\/W19-8643","volume-title":"Proceedings of the 12th International Conference on Natural Language Generation","author":"Lee Chris Van Der","year":"2019","unstructured":"Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation. 355\u2013368."},{"key":"e_1_3_2_198_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_199_2","unstructured":"Tu Vu Mohit Iyyer Xuezhi Wang Noah Constant Jerry Wei Jason Wei Chris Tar Yun-Hsuan Sung Denny Zhou Quoc Le and Thang Luong. 2023. FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. (2023). arxiv:cs.CL\/2310.03214"},{"key":"e_1_3_2_200_2","article-title":"SuperGLUE: A stickier benchmark for general-purpose language understanding systems","volume":"32","author":"Wang Alex","year":"2019","unstructured":"Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_201_2","article-title":"GLUE: A multi-task benchmark and analysis platform for natural language understanding","author":"Wang Alex","year":"2018","unstructured":"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018).","journal-title":"arXiv preprint arXiv:1804.07461"},{"key":"e_1_3_2_202_2","unstructured":"Boxin Wang Weixin Chen Hengzhi Pei Chulin Xie Mintong Kang Chenhui Zhang Chejian Xu Zidi Xiong Ritik Dutta Rylan Schaeffer Sang T. Truong Simran Arora Mantas Mazeika Dan Hendrycks Zinan Lin Yu Cheng Sanmi Koyejo Dawn Song and Bo Li. 2023. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. (2023). arxiv:cs.CL\/2306.11698"},{"key":"e_1_3_2_203_2","unstructured":"Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model. 
(2021)."},{"key":"e_1_3_2_204_2","article-title":"Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models","author":"Wang Boxin","year":"2021","unstructured":"Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021. Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models. arXiv preprint arXiv:2111.02840 (2021).","journal-title":"arXiv preprint arXiv:2111.02840"},{"key":"e_1_3_2_205_2","article-title":"Evaluating open question answering evaluation","author":"Wang Cunxiang","year":"2023","unstructured":"Cunxiang Wang, Sirui Cheng, Zhikun Xu, Bowen Ding, Yidong Wang, and Yue Zhang. 2023. Evaluating open question answering evaluation. arXiv preprint arXiv:2305.12421 (2023).","journal-title":"arXiv preprint arXiv:2305.12421"},{"key":"e_1_3_2_206_2","doi-asserted-by":"crossref","unstructured":"Hongru Wang Rui Wang Fei Mi Zezhong Wang Ruifeng Xu and Kam-Fai Wong. 2023. Chain-of-thought prompting for responding to in-depth dialogue questions with LLM. (2023). arxiv:cs.CL\/2305.11792","DOI":"10.18653\/v1\/2023.findings-emnlp.806"},{"key":"e_1_3_2_207_2","volume-title":"ICLR Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models","author":"Wang Jindong","year":"2023","unstructured":"Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, et\u00a0al. 2023. On the robustness of ChatGPT: An adversarial and out-of-distribution perspective. In ICLR Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models."},{"key":"e_1_3_2_208_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2022.3178128"},{"key":"e_1_3_2_209_2","article-title":"Document-level machine translation with large language models","author":"Wang Longyue","year":"2023","unstructured":"Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023. 
Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210 (2023).","journal-title":"arXiv preprint arXiv:2304.02210"},{"key":"e_1_3_2_210_2","article-title":"Large language models are not fair evaluators","author":"Wang Peiyi","year":"2023","unstructured":"Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926 (2023).","journal-title":"arXiv preprint arXiv:2305.17926"},{"key":"e_1_3_2_211_2","article-title":"Is ChatGPT a good teacher coach? Measuring zero-shot performance for scoring and providing actionable insights on classroom instruction","author":"Wang Rose E.","year":"2023","unstructured":"Rose E. Wang and Dorottya Demszky. 2023. Is ChatGPT a good teacher coach? Measuring zero-shot performance for scoring and providing actionable insights on classroom instruction. arXiv preprint arXiv:2306.03090 (2023).","journal-title":"arXiv preprint arXiv:2306.03090"},{"key":"e_1_3_2_212_2","article-title":"CMB: A comprehensive medical benchmark in Chinese","author":"Wang Xidong","year":"2023","unstructured":"Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, et\u00a0al. 2023. CMB: A comprehensive medical benchmark in Chinese. arXiv preprint arXiv:2308.08833 (2023).","journal-title":"arXiv preprint arXiv:2308.08833"},{"key":"e_1_3_2_213_2","doi-asserted-by":"crossref","unstructured":"Xuena Wang Xueting Li Zi Yin Yue Wu and Liu Jia. 2023. Emotional Intelligence of Large Language Models. (2023). arxiv:cs.AI\/2307.09042","DOI":"10.1177\/18344909231213958"},{"key":"e_1_3_2_214_2","article-title":"MINT: Evaluating LLMs in multi-turn interaction with tools and language feedback","author":"Wang Xingyao","year":"2023","unstructured":"Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. 2023. 
MINT: Evaluating LLMs in multi-turn interaction with tools and language feedback. arXiv preprint arXiv:2309.10691 (2023).","journal-title":"arXiv preprint arXiv:2309.10691"},{"key":"e_1_3_2_215_2","article-title":"CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation","author":"Wang Yue","year":"2021","unstructured":"Yue Wang, Weishi Wang, Shafiq Joty, and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859 (2021).","journal-title":"arXiv preprint arXiv:2109.00859"},{"key":"e_1_3_2_216_2","article-title":"Exploring vision-language models for imbalanced learning","author":"Wang Yidong","year":"2023","unstructured":"Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, and Shikun Zhang. 2023. Exploring vision-language models for imbalanced learning. arXiv preprint arXiv:2304.01457 (2023).","journal-title":"arXiv preprint arXiv:2304.01457"},{"key":"e_1_3_2_217_2","article-title":"PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization","author":"Wang Yidong","year":"2023","unstructured":"Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et\u00a0al. 2023. PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization. arXiv preprint arXiv:2306.05087 (2023).","journal-title":"arXiv preprint arXiv:2306.05087"},{"key":"e_1_3_2_218_2","article-title":"Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today","author":"Wang Zhuo","year":"2023","unstructured":"Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing Gao, et\u00a0al. 2023. Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today. 
arXiv preprint arXiv:2306.01499 (2023).","journal-title":"arXiv preprint arXiv:2306.01499"},{"key":"e_1_3_2_219_2","unstructured":"Zengzhi Wang Qiming Xie Zixiang Ding Yi Feng and Rui Xia. 2023. Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. (2023). arxiv:cs.CL\/2304.04339"},{"key":"e_1_3_2_220_2","article-title":"Emergent abilities of large language models","volume":"2022","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai Hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Trans. Mach. Learn. Res. 2022 (2022).","journal-title":"Trans. Mach. Learn. Res."},{"key":"e_1_3_2_221_2","unstructured":"Tianwen Wei Jian Luan Wei Liu Shuang Dong and Bin Wang. 2023. CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? (2023). arxiv:cs.CL\/2306.16636"},{"key":"e_1_3_2_222_2","article-title":"A prompt pattern catalog to enhance prompt engineering with ChatGPT","author":"White Jules","year":"2023","unstructured":"Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. 2023. A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv preprint arXiv:2302.11382 (2023).","journal-title":"arXiv preprint arXiv:2302.11382"},{"key":"e_1_3_2_223_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2015.03.009"},{"key":"e_1_3_2_224_2","article-title":"Large language models can be used to estimate the ideologies of politicians in a zero-shot learning setting","author":"Wu Patrick Y.","year":"2023","unstructured":"Patrick Y. Wu, Joshua A. Tucker, Jonathan Nagler, and Solomon Messing. 2023. Large language models can be used to estimate the ideologies of politicians in a zero-shot learning setting. 
arXiv preprint arXiv:2303.12057 (2023).","journal-title":"arXiv preprint arXiv:2303.12057"},{"key":"e_1_3_2_225_2","article-title":"An empirical study on challenging math problem solving with GPT-4","author":"Wu Yiran","year":"2023","unstructured":"Yiran Wu, Feiran Jia, Shaokun Zhang, Qingyun Wu, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, and Chi Wang. 2023. An empirical study on challenging math problem solving with GPT-4. arXiv preprint arXiv:2306.01337 (2023).","journal-title":"arXiv preprint arXiv:2306.01337"},{"key":"e_1_3_2_226_2","first-page":"32353","article-title":"Autoformalization with large language models","volume":"35","author":"Wu Yuhuai","year":"2022","unstructured":"Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. 2022. Autoformalization with large language models. Advances in Neural Information Processing Systems 35 (2022), 32353\u201332368.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_227_2","article-title":"Reasoning or reciting? Exploring the capabilities and limitations of language models through counterfactual tasks","author":"Wu Zhaofeng","year":"2023","unstructured":"Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Aky\u00fcrek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. 2023. Reasoning or reciting? Exploring the capabilities and limitations of language models through counterfactual tasks. arXiv preprint arXiv:2307.02477 (2023).","journal-title":"arXiv preprint arXiv:2307.02477"},{"key":"e_1_3_2_228_2","unstructured":"Qiming Xie Zengzhi Wang Yi Feng and Rui Xia. 2023. Ask Again Then Fail: Large Language Models\u2019 Vacillations in Judgement. (2023). arxiv:cs.CL\/2310.02174"},{"key":"e_1_3_2_229_2","article-title":"Are large language models really good logical reasoners? 
A comprehensive evaluation from deductive, inductive and abductive views","author":"Xu Fangzhi","year":"2023","unstructured":"Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. 2023. Are large language models really good logical reasoners? A comprehensive evaluation from deductive, inductive and abductive views. arXiv preprint arXiv:2306.09841 (2023).","journal-title":"arXiv preprint arXiv:2306.09841"},{"key":"e_1_3_2_230_2","unstructured":"Guohai Xu Jiayi Liu Ming Yan Haotian Xu Jinghui Si Zhuoran Zhou Peng Yi Xing Gao Jitao Sang Rong Zhang Ji Zhang Chao Peng Fei Huang and Jingren Zhou. 2023. CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility. (2023). arxiv:cs.CL\/2307.09705"},{"key":"e_1_3_2_231_2","unstructured":"Peng Xu Wenqi Shao Kaipeng Zhang Peng Gao Shuo Liu Meng Lei Fanqing Meng Siyuan Huang Yu Qiao and Ping Luo. 2023. LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models. (2023). arxiv:cs.CV\/2306.09265"},{"key":"e_1_3_2_232_2","article-title":"ChatGPT vs. Google: A comparative study of search performance and user experience","author":"Xu Ruiyun","year":"2023","unstructured":"Ruiyun Xu, Yue Feng, and Hailiang Chen. 2023. ChatGPT vs. Google: A comparative study of search performance and user experience. arXiv preprint arXiv:2307.01135 (2023).","journal-title":"arXiv preprint arXiv:2307.01135"},{"key":"e_1_3_2_233_2","article-title":"Large language models can rate news outlet credibility","author":"Yang Kai-Cheng","year":"2023","unstructured":"Kai-Cheng Yang and Filippo Menczer. 2023. Large language models can rate news outlet credibility. 
arXiv preprint arXiv:2304.00228 (2023).","journal-title":"arXiv preprint arXiv:2304.00228"},{"key":"e_1_3_2_234_2","article-title":"GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective","author":"Yang Linyi","year":"2022","unstructured":"Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, and Yue Zhang. 2022. GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective. arXiv preprint arXiv:2211.08073 (2022).","journal-title":"arXiv preprint arXiv:2211.08073"},{"key":"e_1_3_2_235_2","article-title":"LAMM: Language-assisted multi-modal instruction-tuning dataset, framework, and benchmark","author":"Yin Zhenfei","year":"2023","unstructured":"Zhenfei Yin, Jiong Wang, Jianjian Cao, Zhelun Shi, Dingning Liu, Mukai Li, Lu Sheng, Lei Bai, Xiaoshui Huang, Zhiyong Wang, et\u00a0al. 2023. LAMM: Language-assisted multi-modal instruction-tuning dataset, framework, and benchmark. arXiv preprint arXiv:2306.06687 (2023).","journal-title":"arXiv preprint arXiv:2306.06687"},{"key":"e_1_3_2_236_2","article-title":"KoLA: Carefully benchmarking world knowledge of large language models","author":"Yu Jifan","year":"2023","unstructured":"Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, et\u00a0al. 2023. KoLA: Carefully benchmarking world knowledge of large language models. arXiv preprint arXiv:2306.09296 (2023).","journal-title":"arXiv preprint arXiv:2306.09296"},{"key":"e_1_3_2_237_2","article-title":"MetaMath: Bootstrap your own mathematical questions for large language models","author":"Yu Longhui","year":"2023","unstructured":"Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. MetaMath: Bootstrap your own mathematical questions for large language models. 
arXiv preprint arXiv:2309.12284 (2023).","journal-title":"arXiv preprint arXiv:2309.12284"},{"key":"e_1_3_2_238_2","article-title":"MM-Vet: Evaluating large multimodal models for integrated capabilities","author":"Yu Weihao","year":"2023","unstructured":"Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2023. MM-Vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490 (2023).","journal-title":"arXiv preprint arXiv:2308.02490"},{"key":"e_1_3_2_239_2","unstructured":"Lifan Yuan Yangyi Chen Ganqu Cui Hongcheng Gao Fangyuan Zou Xingyi Cheng Heng Ji Zhiyuan Liu and Maosong Sun. 2023. Revisiting Out-of-distribution Robustness in NLP: Benchmark Analysis and LLMs Evaluations. (2023). arxiv:cs.CL\/2306.04618"},{"key":"e_1_3_2_240_2","doi-asserted-by":"crossref","unstructured":"Zheng Yuan Fajie Yuan Yu Song Youhua Li Junchen Fu Fei Yang Yunzhu Pan and Yongxin Ni. 2023. Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited. (2023). arxiv:cs.IR\/2303.13835","DOI":"10.1145\/3539618.3591932"},{"key":"e_1_3_2_241_2","article-title":"How well do large language models perform in arithmetic tasks?","author":"Yuan Zheng","year":"2023","unstructured":"Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. 2023. How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015 (2023).","journal-title":"arXiv preprint arXiv:2304.02015"},{"key":"e_1_3_2_242_2","first-page":"325","volume-title":"International Conference on Machine Learning","author":"Zemel Rich","year":"2013","unstructured":"Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning. 
PMLR, 325\u2013333."},{"key":"e_1_3_2_243_2","article-title":"GLM-130B: An open bilingual pre-trained model","author":"Zeng Aohan","year":"2022","unstructured":"Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et\u00a0al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414 (2022).","journal-title":"arXiv preprint arXiv:2210.02414"},{"key":"e_1_3_2_244_2","article-title":"Evaluating and improving tool-augmented computation-intensive math reasoning","author":"Zhang Beichen","year":"2023","unstructured":"Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen. 2023. Evaluating and improving tool-augmented computation-intensive math reasoning. arXiv preprint arXiv:2306.02408 (2023).","journal-title":"arXiv preprint arXiv:2306.02408"},{"key":"e_1_3_2_245_2","article-title":"Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation","author":"Zhang Jizhi","year":"2023","unstructured":"Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation. arXiv preprint arXiv:2305.07609 (2023).","journal-title":"arXiv preprint arXiv:2305.07609"},{"key":"e_1_3_2_246_2","article-title":"OPT: Open pre-trained transformer language models","author":"Zhang Susan","year":"2022","unstructured":"Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et\u00a0al. 2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 (2022).","journal-title":"arXiv preprint arXiv:2205.01068"},{"key":"e_1_3_2_247_2","article-title":"Exploring the MIT mathematics and EECS curriculum using large language models","author":"Zhang Sarah J.","year":"2023","unstructured":"Sarah J. Zhang, Samuel Florin, Ariel N. 
Lee, Eamon Niknafs, Andrei Marginean, Annie Wang, Keith Tyser, Zad Chin, Yann Hicke, Nikhil Singh, et\u00a0al. 2023. Exploring the MIT mathematics and EECS curriculum using large language models. arXiv preprint arXiv:2306.08997 (2023).","journal-title":"arXiv preprint arXiv:2306.08997"},{"key":"e_1_3_2_248_2","article-title":"BERTScore: Evaluating text generation with BERT","author":"Zhang Tianyi","year":"2019","unstructured":"Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675 (2019).","journal-title":"arXiv preprint arXiv:1904.09675"},{"key":"e_1_3_2_249_2","article-title":"M3Exam: A multilingual, multimodal, multilevel benchmark for examining large language models","author":"Zhang Wenxuan","year":"2023","unstructured":"Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, and Lidong Bing. 2023. M3Exam: A multilingual, multimodal, multilevel benchmark for examining large language models. arXiv preprint arXiv:2306.05179 (2023).","journal-title":"arXiv preprint arXiv:2306.05179"},{"key":"e_1_3_2_250_2","article-title":"Sentiment analysis in the era of large language models: A reality check","author":"Zhang Wenxuan","year":"2023","unstructured":"Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv preprint arXiv:2305.15005 (2023).","journal-title":"arXiv preprint arXiv:2305.15005"},{"key":"e_1_3_2_251_2","article-title":"Wider and deeper LLM networks are fairer LLM evaluators","author":"Zhang Xinghua","year":"2023","unstructured":"Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. 2023. Wider and deeper LLM networks are fairer LLM evaluators. 
arXiv preprint arXiv:2308.01862 (2023).","journal-title":"arXiv preprint arXiv:2308.01862"},{"key":"e_1_3_2_252_2","unstructured":"Yue Zhang Yafu Li Leyang Cui Deng Cai Lemao Liu Tingchen Fu Xinting Huang Enbo Zhao Yu Zhang Yulong Chen Longyue Wang Anh Tuan Luu Wei Bi Freda Shi and Shuming Shi. 2023. Siren\u2019s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. (2023). arxiv:cs.CL\/2309.01219"},{"key":"e_1_3_2_253_2","article-title":"SafetyBench: Evaluating the safety of large language models with multiple choice questions","author":"Zhang Zhexin","year":"2023","unstructured":"Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, and Minlie Huang. 2023. SafetyBench: Evaluating the safety of large language models with multiple choice questions. arXiv preprint arXiv:2309.07045 (2023).","journal-title":"arXiv preprint arXiv:2309.07045"},{"key":"e_1_3_2_254_2","article-title":"MMICL: Empowering vision-language model with multi-modal in-context learning","author":"Zhao Haozhe","year":"2023","unstructured":"Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, and Baobao Chang. 2023. MMICL: Empowering vision-language model with multi-modal in-context learning. arXiv preprint arXiv:2309.07915 (2023).","journal-title":"arXiv preprint arXiv:2309.07915"},{"key":"e_1_3_2_255_2","doi-asserted-by":"crossref","unstructured":"Jiaxu Zhao Meng Fang Zijing Shi Yitong Li Ling Chen and Mykola Pechenizkiy. 2023. CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models. (2023). arxiv:cs.CL\/2305.11262","DOI":"10.18653\/v1\/2023.acl-long.757"},{"key":"e_1_3_2_256_2","article-title":"A survey of large language models","author":"Zhao Wayne Xin","year":"2023","unstructured":"Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et\u00a0al. 2023. 
A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).","journal-title":"arXiv preprint arXiv:2303.18223"},{"key":"e_1_3_2_257_2","article-title":"On evaluating adversarial robustness of large vision-language models","author":"Zhao Yunqing","year":"2023","unstructured":"Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, and Min Lin. 2023. On evaluating adversarial robustness of large vision-language models. arXiv preprint arXiv:2305.16934 (2023).","journal-title":"arXiv preprint arXiv:2305.16934"},{"key":"e_1_3_2_258_2","article-title":"LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset","author":"Zheng Lianmin","year":"2023","unstructured":"Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric Xing, et\u00a0al. 2023. LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset. arXiv preprint arXiv:2309.11998 (2023).","journal-title":"arXiv preprint arXiv:2309.11998"},{"key":"e_1_3_2_259_2","unstructured":"Lianmin Zheng Wei-Lin Chiang Ying Sheng Siyuan Zhuang Zhanghao Wu Yonghao Zhuang Zi Lin Zhuohan Li Dacheng Li Eric P. Xing Hao Zhang Joseph E. Gonzalez and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. (2023). arxiv:cs.CL\/2306.05685"},{"key":"e_1_3_2_260_2","article-title":"Towards a unified multi-dimensional evaluator for text generation","author":"Zhong Ming","year":"2022","unstructured":"Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. 
arXiv preprint arXiv:2210.07197 (2022).","journal-title":"arXiv preprint arXiv:2210.07197"},{"key":"e_1_3_2_261_2","article-title":"AGIEval: A human-centric benchmark for evaluating foundation models","author":"Zhong Wanjun","year":"2023","unstructured":"Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364 (2023).","journal-title":"arXiv preprint arXiv:2304.06364"},{"key":"e_1_3_2_262_2","article-title":"Large language models are human-level prompt engineers","author":"Zhou Yongchao","year":"2022","unstructured":"Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910 (2022).","journal-title":"arXiv preprint arXiv:2211.01910"},{"key":"e_1_3_2_263_2","article-title":"PromptBench: Towards evaluating the robustness of large language models on adversarial prompts","author":"Zhu Kaijie","year":"2023","unstructured":"Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et\u00a0al. 2023. PromptBench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528 (2023).","journal-title":"arXiv preprint arXiv:2306.04528"},{"key":"e_1_3_2_264_2","article-title":"Efficiently measuring the cognitive ability of LLMs: An adaptive testing perspective","author":"Zhuang Yan","year":"2023","unstructured":"Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, et\u00a0al. 2023. Efficiently measuring the cognitive ability of LLMs: An adaptive testing perspective. 
arXiv preprint arXiv:2306.10512 (2023).","journal-title":"arXiv preprint arXiv:2306.10512"},{"key":"e_1_3_2_265_2","article-title":"Exploring AI ethics of ChatGPT: A diagnostic analysis","author":"Zhuo Terry Yue","year":"2023","unstructured":"Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Exploring AI ethics of ChatGPT: A diagnostic analysis. arXiv preprint arXiv:2301.12867 (2023).","journal-title":"arXiv preprint arXiv:2301.12867"},{"key":"e_1_3_2_266_2","article-title":"On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex","author":"Zhuo Terry Yue","year":"2023","unstructured":"Terry Yue Zhuo, Zhuang Li, Yujin Huang, Yuan-Fang Li, Weiqing Wang, Gholamreza Haffari, and Fatemeh Shiri. 2023. On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex. arXiv preprint arXiv:2301.12868 (2023).","journal-title":"arXiv preprint arXiv:2301.12868"},{"key":"e_1_3_2_267_2","article-title":"Fine-tuning language models from human preferences","author":"Ziegler Daniel M.","year":"2019","unstructured":"Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 (2019).","journal-title":"arXiv preprint arXiv:1909.08593"},{"key":"e_1_3_2_268_2","article-title":"Can large language models transform computational social science?","author":"Ziems Caleb","year":"2023","unstructured":"Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. 2023. Can large language models transform computational social science? 
arXiv preprint arXiv:2305.03514 (2023).","journal-title":"arXiv preprint arXiv:2305.03514"}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3641289","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3641289","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T23:57:14Z","timestamp":1750291034000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3641289"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,29]]},"references-count":267,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3641289"],"URL":"https:\/\/doi.org\/10.1145\/3641289","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,29]]},"assertion":[{"value":"2023-07-22","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-12-28","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-29","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}