{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T18:00:22Z","timestamp":1772906422093,"version":"3.50.1"},"reference-count":178,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T00:00:00Z","timestamp":1749168000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["92470205 and 62222215"],"award-info":[{"award-number":["92470205 and 62222215"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Beijing Natural Science Foundation","award":["L233008"],"award-info":[{"award-number":["L233008"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2025,6,30]]},"abstract":"<jats:p>Recently, Large Language Models (LLMs) such as ChatGPT have showcased remarkable abilities in solving general tasks, demonstrating the potential for applications in recommender systems. To assess how effectively LLMs can be used in recommendation tasks, our study primarily focuses on employing LLMs as recommender systems through prompt engineering. We propose a general framework for leveraging LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders. To conduct our analysis, we formalize the input of LLMs for recommendation into natural language prompts with two key aspects and explain how our framework can be generalized to various recommendation scenarios. 
As for the use of LLMs as recommenders, we analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results based on the classification of LLMs. As for prompt engineering, we further analyze the impact of four important components of prompts, i.e., task descriptions, user interest modeling, candidate items construction, and prompting strategies. In each section, we first define and categorize concepts in line with the existing literature. Then, we propose inspiring research questions followed by detailed experiments on two public datasets, in order to systematically analyze the impact of different factors on recommendation performance. Based on our empirical analysis, we finally summarize promising directions to shed light on future research.<\/jats:p>","DOI":"10.1145\/3726871","type":"journal-article","created":{"date-parts":[[2025,3,28]],"date-time":"2025-03-28T18:49:32Z","timestamp":1743187772000},"page":"1-51","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":13,"title":["Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7464-3776","authenticated-orcid":false,"given":"Lanling","family":"Xu","sequence":"first","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-8864-915X","authenticated-orcid":false,"given":"Junjie","family":"Zhang","sequence":"additional","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-5103-0750","authenticated-orcid":false,"given":"Bingqian","family":"Li","sequence":"additional","affiliation":[{"name":"Gaoling School of Artificial Intelligence, 
Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0080-8988","authenticated-orcid":false,"given":"Jinpeng","family":"Wang","sequence":"additional","affiliation":[{"name":"Meituan Group, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-7186-4960","authenticated-orcid":false,"given":"Sheng","family":"Chen","sequence":"additional","affiliation":[{"name":"Meituan Group, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8333-6196","authenticated-orcid":false,"given":"Wayne Xin","family":"Zhao","sequence":"additional","affiliation":[{"name":"Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9777-9676","authenticated-orcid":false,"given":"Ji-Rong","family":"Wen","sequence":"additional","affiliation":[{"name":"Renmin University of China, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,6,6]]},"reference":[{"key":"e_1_3_3_2_2","first-page":"1","volume-title":"RecSys","author":"Agrawal Saurabh","year":"2023","unstructured":"Saurabh Agrawal, John Trenkle, and Jaya Kawale. 2023. Beyond labels: Leveraging deep learning and LLMs for content metadata. In RecSys. ACM, 1."},{"key":"e_1_3_3_3_2","unstructured":"Zhipu AI. 2024. ChatGLM: A family of large language models from GLM-130B to GLM-4 all tools. arXiv:2406.12793. Retrieved from https:\/\/arxiv.org\/abs\/2406.12793"},{"key":"e_1_3_3_4_2","unstructured":"Keqin Bao Jizhi Zhang Wenjie Wang Yang Zhang Zhengyi Yang Yancheng Luo Fuli Feng Xiangnan He and Qi Tian. 2023. A bi-step grounding paradigm for large language models in recommendation systems. arXiv:2308.08434. Retrieved from https:\/\/arxiv.org\/abs\/2308.08434"},{"key":"e_1_3_3_5_2","first-page":"1007","volume-title":"RecSys","author":"Bao Keqin","year":"2023","unstructured":"Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. 
TALLRec: An effective and efficient tuning framework to align large language model with recommendation. In RecSys. ACM, 1007\u20131014."},{"key":"e_1_3_3_6_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom B.","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS, Vol. 33, 1877\u20131901.","journal-title":"NeurIPS"},{"issue":"1","key":"e_1_3_3_7_2","doi-asserted-by":"crossref","first-page":"12","DOI":"10.1002\/(SICI)1097-4571(199401)45:1<12::AID-ASI2>3.0.CO;2-L","article-title":"The relationship between recall and precision","volume":"45","author":"Buckland Michael K.","year":"1994","unstructured":"Michael K. Buckland and Fredric C. Gey. 1994. The relationship between recall and precision. J. Am. Soc. Inf. Sci. 45, 1 (1994), 12\u201319.","journal-title":"J. Am. Soc. Inf. Sci"},{"key":"e_1_3_3_8_2","unstructured":"Aldo Gael Carranza Rezsa Farahani Natalia Ponomareva Alex Kurakin Matthew Jagielski and Milad Nasr. 2023. Privacy-preserving recommender systems with synthetic query generation using differentially private large language models. arXiv:2305.05973. Retrieved from https:\/\/arxiv.org\/abs\/2305.05973"},{"key":"e_1_3_3_9_2","article-title":"Enhancing recommendation diversity by Re-ranking with large language models","author":"Carraro Diego","year":"2024","unstructured":"Diego Carraro and Derek Bridge. 2024. Enhancing recommendation diversity by Re-ranking with large language models. ACM Trans. Recomm. Syst. (2024)","journal-title":"ACM Trans. Recomm. 
Syst"},{"issue":"4","key":"e_1_3_3_10_2","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1007\/s11280-024-01276-1","article-title":"When large language models meet personalization: Perspectives of challenges and opportunities","volume":"27","author":"Chen Jin","year":"2024","unstructured":"Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, et al. 2024. When large language models meet personalization: Perspectives of challenges and opportunities. World Wide Web 27, 4 (2024), 42.","journal-title":"World Wide Web"},{"key":"e_1_3_3_11_2","unstructured":"Runjin Chen Mingxuan Ju Ngoc Bui Dimosthenis Antypas Stanley Cai Xiaopeng Wu Leonardo Neves Zhangyang Wang Neil Shah and Tong Zhao. 2024. Enhancing item tokenization for generative recommendation through self-improvement. arXiv:2412.17171. Retrieved from https:\/\/arxiv.org\/abs\/2412.17171"},{"key":"e_1_3_3_12_2","unstructured":"Yuxin Chen Junfei Tan An Zhang Zhengyi Yang Leheng Sheng Enzhi Zhang Xiang Wang and Tat-Seng Chua. 2024. On softmax direct preference optimization for recommendation. arXiv:2406.09215. Retrieved from https:\/\/arxiv.org\/abs\/2406.09215"},{"key":"e_1_3_3_13_2","first-page":"7","volume-title":"DLRS@RecSys","author":"Cheng Heng-Tze","year":"2016","unstructured":"Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide and deep learning for recommender systems. In DLRS@RecSys. ACM, 7\u201310."},{"key":"e_1_3_3_14_2","unstructured":"Wei-Lin Chiang Zhuohan Li Zi Lin Ying Sheng Zhanghao Wu Hao Zhang Lianmin Zheng Siyuan Zhuang Yonghao Zhuang Joseph E. Gonzalez et al. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. 
Retrieved from https:\/\/lmsys.org\/blog\/2023-03-30-vicuna\/"},{"key":"e_1_3_3_15_2","unstructured":"Zhixuan Chu Hongyan Hao Xin Ouyang Simeng Wang Yan Wang Yue Shen Jinjie Gu Qing Cui Longfei Li Siqiao Xue et al. 2023. Leveraging large language models for pre-trained recommender systems. arXiv:2308.10837. Retrieved from https:\/\/arxiv.org\/abs\/2308.10837"},{"key":"e_1_3_3_16_2","first-page":"1","article-title":"Scaling Instruction-Finetuned language models","volume":"25","author":"Won Chung Hyung","year":"2024","unstructured":"Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2024. Scaling Instruction-Finetuned language models. J. Mach. Learn. Res. 25 (2024), Article 70, 1\u201353.","journal-title":"J. Mach. Learn. Res"},{"key":"e_1_3_3_17_2","first-page":"1","volume-title":"BlackboxNLP@EMNLP","author":"Colas Anthony M.","year":"2023","unstructured":"Anthony M. Colas, Jun Araki, Zhengyu Zhou, Bingqing Wang, and Zhe Feng. 2023. Knowledge-grounded natural language recommendation explanation. In BlackboxNLP@EMNLP. Association for Computational Linguistics, 1\u201315."},{"key":"e_1_3_3_18_2","first-page":"1126","volume-title":"RecSys","author":"Dai Sunhao","year":"2023","unstructured":"Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. 2023. Uncovering ChatGPT\u2019s capabilities in recommender systems. In RecSys. ACM, 1126\u20131132."},{"key":"e_1_3_3_19_2","unstructured":"Sunhao Dai Yuqi Zhou Liang Pang Weihao Liu Xiaolin Hu Yong Liu Xiao Zhang and Jun Xu. 2023. LLMs may dominate information access: Neural retrievers are biased towards LLM-generated texts. arXiv:2310.20501. Retrieved from https:\/\/arxiv.org\/abs\/2310.20501"},{"key":"e_1_3_3_20_2","unstructured":"DeepSeek-AI. 2024. DeepSeek-V3 technical report. arXiv:2412.19437. 
Retrieved from https:\/\/arxiv.org\/abs\/2412.19437"},{"key":"e_1_3_3_21_2","doi-asserted-by":"crossref","DOI":"10.1145\/3690655","article-title":"Understanding biases in ChatGPT-based recommender systems: Provider fairness, temporal stability, and recency","author":"Deldjoo Yashar","year":"2024","unstructured":"Yashar Deldjoo. 2024. Understanding biases in ChatGPT-based recommender systems: Provider fairness, temporal stability, and recency. ACM Trans. Recomm. Syst. (2024).","journal-title":"ACM Trans. Recomm. Syst"},{"key":"e_1_3_3_22_2","first-page":"4171","volume-title":"NAACL-HLT (1)","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1). Association for Computational Linguistics, 4171\u20134186."},{"key":"e_1_3_3_23_2","unstructured":"Dario Di Palma Giovanni Maria Biancofiore Vito Walter Anelli Fedelucio Narducci Tommaso Di Noia and Eugenio Di Sciascio. 2023. Evaluating ChatGPT as a recommender system: A rigorous approach. arXiv:2309.03613. Retrieved from https:\/\/arxiv.org\/abs\/2309.03613"},{"key":"e_1_3_3_24_2","first-page":"1330","volume-title":"ACL (1)","author":"Diao Shizhe","year":"2024","unstructured":"Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, and Tong Zhang. 2024. Active prompting with chain-of-thought for large language models. In ACL (1). Association for Computational Linguistics, 1330\u20131350."},{"key":"e_1_3_3_25_2","unstructured":"Yijie Ding Yupeng Hou Jiacheng Li and Julian J. McAuley. 2024. Inductive generative recommendation via retrieval-based speculation. arXiv:2410.02939. Retrieved from https:\/\/arxiv.org\/abs\/2410.02939"},{"key":"e_1_3_3_26_2","unstructured":"Yingpeng Du Di Luo Rui Yan Hongzhi Liu Yang Song Hengshu Zhu and Jie Zhang. 2023. Enhancing job recommendation through LLM-based generative adversarial networks. arXiv:2307.10747. 
Retrieved from https:\/\/arxiv.org\/abs\/2307.10747"},{"key":"e_1_3_3_27_2","unstructured":"Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Amy Yang Angela Fan et al. 2024. The llama 3 herd of models. arXiv:2407.21783. Retrieved from https:\/\/arxiv.org\/abs\/2407.21783"},{"key":"e_1_3_3_28_2","unstructured":"Yue Feng Shuchang Liu Zhenghai Xue Qingpeng Cai Lantao Hu Peng Jiang Kun Gai and Fei Sun. 2023. A large language model enhanced conversational recommender system. arXiv:2308.06212. Retrieved from https:\/\/arxiv.org\/abs\/2308.06212"},{"key":"e_1_3_3_29_2","unstructured":"Luke Friedman Sameer Ahuja David Allen Terry Tan Hakim Sidahmed Changbo Long Jun Xie Gabriel Schubiner Ajay Patel Harsh Lara et al. 2023. Leveraging large language models in conversational recommender systems. arXiv:2305.07961. Retrieved from https:\/\/arxiv.org\/abs\/2305.07961"},{"key":"e_1_3_3_30_2","article-title":"A unified framework for multi-domain ctr prediction via large language models","author":"Fu Zichuan","year":"2023","unstructured":"Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, and Ruiming Tang. 2023. A unified framework for multi-domain ctr prediction via large language models. ACM Trans. Inf. Syst. (2023)","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_3_31_2","unstructured":"Chongming Gao Ruijun Chen Shuai Yuan Kexin Huang Yuanqing Yu and Xiangnan He. 2024. SPRec: Leveraging self-play to debias preference alignment for large language model-based recommendations. arXiv:2412.09243. Retrieved from https:\/\/arxiv.org\/abs\/2412.09243"},{"key":"e_1_3_3_32_2","unstructured":"Yunfan Gao Tao Sheng Youlin Xiang Yun Xiong Haofen Wang and Jiawei Zhang. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv:2303.14524. 
Retrieved from https:\/\/arxiv.org\/abs\/2303.14524"},{"key":"e_1_3_3_33_2","first-page":"299","volume-title":"RecSys","author":"Geng Shijie","year":"2022","unstructured":"Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt and predict paradigm (P5). In RecSys. ACM, 299\u2013315."},{"key":"e_1_3_3_34_2","first-page":"9606","volume-title":"EMNLP (Findings)","author":"Geng Shijie","year":"2023","unstructured":"Shijie Geng, Juntao Tan, Shuchang Liu, Zuohui Fu, and Yongfeng Zhang. 2023. VIP5: Towards multimodal foundation models for recommendation. In EMNLP (Findings). Association for Computational Linguistics, 9606\u20139620."},{"key":"e_1_3_3_35_2","first-page":"30","volume-title":"EMNLP","author":"Geva Mor","year":"2022","unstructured":"Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In EMNLP. Association for Computational Linguistics, 30\u201345."},{"key":"e_1_3_3_36_2","first-page":"1725","volume-title":"IJCAI","author":"Guo Huifeng","year":"2017","unstructured":"Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. In IJCAI. ijcai.org, 1725\u20131731."},{"issue":"4","key":"e_1_3_3_37_2","first-page":"100089","article-title":"An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges","volume":"2","author":"Haleem Abid","year":"2022","unstructured":"Abid Haleem, Mohd Javaid, and Ravi Pratap Singh. 2022. An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. 
TBench 2, 4 (2022), 100089.","journal-title":"TBench"},{"issue":"4","key":"e_1_3_3_38_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2827872","article-title":"The MovieLens datasets: History and context","volume":"5","author":"Harper F. Maxwell","year":"2016","unstructured":"F. Maxwell Harper and Joseph A. Konstan. 2016. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. 5, 4 (2016), Article 19, 1\u201319.","journal-title":"ACM Trans. Interact. Intell. Syst"},{"key":"e_1_3_3_39_2","first-page":"1535","volume-title":"ICDM (Workshops)","author":"Harrison Rachel M.","year":"2023","unstructured":"Rachel M. Harrison, Anton Dereventsov, and Anton Bibin. 2023. Zero-shot recommendations with pre-trained large language models for multimodal nudging. In ICDM (Workshops). IEEE, 1535\u20131542."},{"key":"e_1_3_3_40_2","first-page":"1096","volume-title":"RecSys","author":"Harte Jesse","year":"2023","unstructured":"Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging large language models for sequential recommendation. In RecSys. ACM, 1096\u20131102."},{"key":"e_1_3_3_41_2","first-page":"639","volume-title":"SIGIR","author":"He Xiangnan","year":"2020","unstructured":"Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In SIGIR. ACM, 639\u2013648."},{"key":"e_1_3_3_42_2","first-page":"173","article-title":"Neural collaborative filtering","author":"He Xiangnan","year":"2017","unstructured":"Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. 
In World Wide Web, 173\u2013182.","journal-title":"World Wide Web"},{"key":"e_1_3_3_43_2","first-page":"720","volume-title":"CIKM","author":"He Zhankui","year":"2023","unstructured":"Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian J. McAuley. 2023. Large language models as zero-shot conversational recommenders. In CIKM. ACM, 720\u2013730."},{"key":"e_1_3_3_44_2","first-page":"585","volume-title":"KDD","author":"Hou Yupeng","year":"2022","unstructured":"Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, and Ji-Rong Wen. 2022. Towards universal sequence representation learning for recommender systems. In KDD. ACM, 585\u2013593."},{"key":"e_1_3_3_45_2","first-page":"364","article-title":"Large language models are zero-shot rankers for recommender systems","volume":"14609","author":"Hou Yupeng","year":"2024","unstructured":"Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian J. McAuley, and Wayne Xin Zhao. 2024. Large language models are zero-shot rankers for recommender systems. In ECIR (2). Nazli Goharian, Nicola Tonellotto, Yulan He, Aldo Lipani, Graham McDonald, Craig Macdonald, and Iadh Ounis (Eds.), Lecture Notes in Computer Science, Vol. 14609. Springer, 364\u2013381. Retrieved from https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-56060-6_24","journal-title":"ECIR (2)"},{"key":"e_1_3_3_46_2","unstructured":"Edward J. Hu Yelong Shen Phillip Wallis Zeyuan Allen-Zhu Yuanzhi Li Shean Wang Lu Wang and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv:2106.09685. Retrieved from https:\/\/arxiv.org\/abs\/2106.09685"},{"key":"e_1_3_3_47_2","first-page":"195","volume-title":"SIGIR-AP","author":"Hua Wenyue","year":"2023","unstructured":"Wenyue Hua, Shuyuan Xu, Yingqiang Ge, and Yongfeng Zhang. 2023. How to index item IDs for recommendation foundation models. In SIGIR-AP. 
ACM, 195\u2013204."},{"key":"e_1_3_3_48_2","unstructured":"Xu Huang Jianxun Lian Yuxuan Lei Jing Yao Defu Lian and Xing Xie. 2023. Recommender AI agent: Integrating large language models for interactive recommendations. arXiv:2308.16505. Retrieved from https:\/\/arxiv.org\/abs\/2308.16505"},{"issue":"4","key":"e_1_3_3_49_2","doi-asserted-by":"crossref","first-page":"422","DOI":"10.1145\/582415.582418","article-title":"Cumulated gain-based evaluation of IR techniques","volume":"20","author":"J\u00e4rvelin Kalervo","year":"2002","unstructured":"Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst. 20, 4 (2002), 422\u2013446.","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_3_50_2","unstructured":"Jianchao Ji Zelong Li Shuyuan Xu Wenyue Hua Yingqiang Ge Juntao Tan and Yongfeng Zhang. 2023. Genrec: Large language model for generative recommendation. arXiv:2307.00457. Retrieved from https:\/\/arxiv.org\/abs\/2307.00457"},{"key":"e_1_3_3_51_2","first-page":"27951","article-title":"Lending interaction wings to recommender systems with conversational agents","volume":"36","author":"Jin Jiarui","year":"2023","unstructured":"Jiarui Jin, Xianyu Chen, Fanghua Ye, Mengyue Yang, Yue Feng, Weinan Zhang, Yong Yu, and Jun Wang. 2023. Lending interaction wings to recommender systems with conversational agents. In NeurIPS, Vol. 36, 27951\u201327979.","journal-title":"NeurIPS"},{"key":"e_1_3_3_52_2","first-page":"197","volume-title":"ICDM","author":"Kang Wang-Cheng","year":"2018","unstructured":"Wang-Cheng Kang and Julian J. McAuley. 2018. Self-attentive sequential recommendation. In ICDM. IEEE Computer Society, 197\u2013206."},{"key":"e_1_3_3_53_2","unstructured":"Wang-Cheng Kang Jianmo Ni Nikhil Mehta Maheswaran Sathiamoorthy Lichan Hong Ed. H. Chi and Derek Zhiyuan Cheng. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv:2305.06474. 
Retrieved from https:\/\/arxiv.org\/abs\/2305.06474"},{"key":"e_1_3_3_54_2","unstructured":"Tongyoung Kim Soojin Yoon Seongku Kang Jinyoung Yeo and Dongha Lee. 2024. SC-Rec: Enhancing generative retrieval with self-consistent reranking for sequential recommendation. arXiv:2408.08686. Retrieved from https:\/\/arxiv.org\/abs\/2408.08686"},{"key":"e_1_3_3_55_2","first-page":"22199","article-title":"Large language models are zero-shot reasoners","volume":"35","author":"Kojima Takeshi","year":"2022","unstructured":"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In NeurIPS, Vol. 35, 22199\u201322213.","journal-title":"NeurIPS"},{"key":"e_1_3_3_56_2","unstructured":"Yuxuan Lei Jianxun Lian Jing Yao Xu Huang Defu Lian and Xing Xie. 2023. RecExplainer: Aligning large language models for recommendation model interpretability. arXiv:2311.10947. Retrieved from https:\/\/arxiv.org\/abs\/2311.10947"},{"key":"e_1_3_3_57_2","first-page":"9459","article-title":"Retrieval-augmented generation for knowledge-intensive NLP tasks","volume":"33","author":"Lewis Patrick S. H.","year":"2020","unstructured":"Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-Tau Yih, Tim Rockt\u00e4schel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS, Vol. 33, 9459\u20139474.","journal-title":"NeurIPS"},{"key":"e_1_3_3_58_2","unstructured":"Guanghan Li Xun Zhang Yufei Zhang Yifan Yin Guojun Yin and Wei Lin. 2024. Semantic convergence: Harmonizing recommender systems via two-stage alignment and behavioral semantic tokenization. arXiv:2412.13771. 
Retrieved from https:\/\/arxiv.org\/abs\/2412.13771"},{"key":"e_1_3_3_59_2","first-page":"1258","volume-title":"KDD","author":"Li Jiacheng","year":"2023","unstructured":"Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, and Julian J. McAuley. 2023. Text is all you need: Learning language representations for sequential recommendation. In KDD. ACM, 1258\u20131267."},{"key":"e_1_3_3_60_2","unstructured":"Jinming Li Wentao Zhang Tian Wang Guanglei Xiong Alan Lu and Gerard Medioni. 2023. GPT4Rec: A generative framework for personalized recommendation and user interests interpretation. arXiv:2304.03879. Retrieved from https:\/\/arxiv.org\/abs\/2304.03879"},{"issue":"4","key":"e_1_3_3_61_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3580488","article-title":"Personalized prompt learning for explainable recommendation","volume":"41","author":"Li Lei","year":"2023","unstructured":"Lei Li, Yongfeng Zhang, and Li Chen. 2023. Personalized prompt learning for explainable recommendation. ACM Trans. Inf. Syst. 41, 4 (2023), 1\u201326.","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_3_62_2","first-page":"1348","volume-title":"CIKM","author":"Li Lei","year":"2023","unstructured":"Lei Li, Yongfeng Zhang, and Li Chen. 2023. Prompt distillation for efficient LLM-based recommendation. In CIKM. ACM, 1348\u20131357."},{"key":"e_1_3_3_63_2","first-page":"10146","volume-title":"LREC\/COLING","author":"Li Lei","year":"2024","unstructured":"Lei Li, Yongfeng Zhang, Dugang Liu, and Li Chen. 2024. Large language models for generative recommendation: A survey and visionary discussions. In LREC\/COLING. ELRA and ICCL, 10146\u201310159."},{"key":"e_1_3_3_64_2","unstructured":"Ruyu Li Wenhao Deng Yu Cheng Zheng Yuan Jiaqi Zhang and Fajie Yuan. 2023. Exploring the upper limits of text-based collaborative filtering using large language models: Discoveries and insights. arXiv:2305.11700. 
Retrieved from https:\/\/arxiv.org\/abs\/2305.11700"},{"key":"e_1_3_3_65_2","unstructured":"Xiangyang Li Bo Chen Lu Hou and Ruiming Tang. 2023. CTRL: Connect tabular and language model for CTR prediction. arXiv:2306.02841. Retrieved from https:\/\/arxiv.org\/abs\/2306.02841"},{"key":"e_1_3_3_66_2","unstructured":"Xinhang Li Chong Chen Xiangyu Zhao Yong Zhang and Chunxiao Xing. 2023. E4SRec: An elegant effective efficient extensible solution of large language models for sequential recommendation. arXiv:2312.02443. Retrieved from https:\/\/arxiv.org\/abs\/2312.02443"},{"key":"e_1_3_3_67_2","unstructured":"Xinyi Li Yongfeng Zhang and Edward C. Malthouse. 2023. Exploring fine-tuning ChatGPT for news recommendation. arXiv:2311.05850. Retrieved from https:\/\/arxiv.org\/abs\/2311.05850"},{"key":"e_1_3_3_68_2","unstructured":"Xinyi Li Yongfeng Zhang and Edward C. Malthouse. 2023. PBNR: Prompt-based news recommender system. arXiv:2304.07862. Retrieved from https:\/\/arxiv.org\/abs\/2304.07862"},{"key":"e_1_3_3_69_2","article-title":"A preliminary study of ChatGPT on news recommendation: Personalization, provider fairness, and fake news","volume":"3561","author":"Li Xinyi","year":"2023","unstructured":"Xinyi Li, Yongfeng Zhang, and Edward C. Malthouse. 2023. A preliminary study of ChatGPT on news recommendation: Personalization, provider fairness, and fake news. In INRA@RecSys, Vol. 3561, CEUR-WS.org.","journal-title":"INRA@RecSys"},{"key":"e_1_3_3_70_2","unstructured":"Yongqi Li Xinyu Lin Wenjie Wang Fuli Feng Liang Pang Wenjie Li Liqiang Nie Xiangnan He and Tat-Seng Chua. 2024. A survey of generative search and recommendation in the era of large language models. arXiv:2404.16924. Retrieved from https:\/\/arxiv.org\/abs\/2404.16924"},{"key":"e_1_3_3_71_2","first-page":"422","volume-title":"RecSys","author":"Li Yaoyiran","year":"2024","unstructured":"Yaoyiran Li, Xiang Zhai, Moustafa Alzantot, Keyi Yu, Ivan Vulic, Anna Korhonen, and Mohamed Hammad. 2024. 
CALRec: Contrastive alignment of generative LLMs for sequential recommendation. In RecSys. ACM, 422\u2013432."},{"issue":"22","key":"e_1_3_3_72_2","doi-asserted-by":"crossref","first-page":"4654","DOI":"10.3390\/electronics12224654","article-title":"Bookgpt: A general framework for book recommendation empowered by large language model","volume":"12","author":"Li Zhiyu","year":"2023","unstructured":"Zhiyu Li, Yanfang Chen, Xuan Zhang, and Xun Liang. 2023. Bookgpt: A general framework for book recommendation empowered by large language model. Electronics 12, 22 (2023), 4654.","journal-title":"Electronics"},{"key":"e_1_3_3_73_2","unstructured":"Zelong Li Jianchao Ji Yingqiang Ge Wenyue Hua and Yongfeng Zhang. 2024. PAP-REC: Personalized automatic prompt for recommendation language model. arXiv:2402.00284. Retrieved from https:\/\/arxiv.org\/abs\/2402.00284"},{"key":"e_1_3_3_74_2","unstructured":"Jiayi Liao Xiangnan He Ruobing Xie Jiancan Wu Yancheng Yuan Xingwu Sun Zhanhui Kang and Xiang Wang. 2024. RosePO: Aligning LLM-based recommenders with human values. arXiv:2410.12519. Retrieved from https:\/\/arxiv.org\/abs\/2410.12519"},{"key":"e_1_3_3_75_2","unstructured":"Jiayi Liao Sihang Li Zhengyi Yang Jiancan Wu Yancheng Yuan and Xiang Wang. 2023. LLaRA: Aligning large language models with sequential recommenders. arXiv:2312.02445. Retrieved from https:\/\/arxiv.org\/abs\/2312.02445"},{"key":"e_1_3_3_76_2","unstructured":"Jianghao Lin Xinyi Dai Yunjia Xi Weiwen Liu Bo Chen Xiangyang Li Chenxu Zhu Huifeng Guo Yong Yu Ruiming Tang et al. 2023. How can recommender systems benefit from large language models: A survey. arXiv:2306.05817. Retrieved from https:\/\/arxiv.org\/abs\/2306.05817"},{"key":"e_1_3_3_77_2","first-page":"3497","volume-title":"WWW","author":"Lin Jianghao","year":"2024","unstructured":"Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, and Weinan Zhang. 2024. 
ReLLa: Retrieval-enhanced large language models for lifelong sequential behavior comprehension in recommendation. In WWW. ACM, 3497\u20133508."},{"key":"e_1_3_3_78_2","unstructured":"Xinyu Lin Wenjie Wang Yongqi Li Fuli Feng See-Kiong Ng and Tat-Seng Chua. 2023. A multi-facet paradigm to bridge large language model and recommendation. arXiv:2310.06491. Retrieved from https:\/\/arxiv.org\/abs\/2310.06491"},{"key":"e_1_3_3_79_2","first-page":"365","volume-title":"SIGIR","author":"Lin Xinyu","year":"2024","unstructured":"Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, and Tat-Seng Chua. 2024. Data-efficient fine-tuning for LLM-based recommendation. In SIGIR. ACM, 365\u2013374."},{"key":"e_1_3_3_80_2","unstructured":"Dugang Liu Shenxian Xian Xiaolin Lin Xiaolian Zhang Hong Zhu Yuan Fang Zhen Chen and Zhong Ming. 2024. A practice-friendly two-stage LLM-enhanced paradigm in sequential recommendation. arXiv:2406.00333. Retrieved from https:\/\/arxiv.org\/abs\/2406.00333"},{"key":"e_1_3_3_81_2","first-page":"3902","volume-title":"CIKM","author":"Liu Dairui","year":"2024","unstructured":"Dairui Liu, Boming Yang, Honghui Du, Derek Greene, Neil Hurley, Aonghus Lawlor, Ruihai Dong, and Irene Li. 2024. RecPrompt: A self-tuning prompting framework for news recommendation using large language models. In CIKM. ACM, 3902\u20133906."},{"issue":"2","key":"e_1_3_3_82_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3704999","article-title":"Understanding before recommendation: Semantic aspect-aware review exploitation via large language models","volume":"43","author":"Liu Fan","year":"2025","unstructured":"Fan Liu, Yaqi Liu, Huilin Chen, Zhiyong Cheng, Liqiang Nie, and Mohan Kankanhalli. 2025. Understanding before recommendation: Semantic aspect-aware review exploitation via large language models. ACM Trans. Inf. Syst. 43, 2 (2025), 1\u201326.","journal-title":"ACM Trans. Inf. 
Syst"},{"key":"e_1_3_3_83_2","first-page":"3154","volume-title":"ACL (1)","author":"Liu Jiacheng","year":"2022","unstructured":"Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2022. Generated knowledge prompting for commonsense reasoning. In ACL (1). Association for Computational Linguistics, 3154\u20133169."},{"key":"e_1_3_3_84_2","unstructured":"Junling Liu Chao Liu Renjie Lv Kang Zhou and Yan Zhang. 2023. Is chatgpt a good recommender? A preliminary study. arXiv:2304.10149. Retrieved from https:\/\/arxiv.org\/abs\/2304.10149"},{"key":"e_1_3_3_85_2","unstructured":"Junling Liu Chao Liu Peilin Zhou Qichen Ye Dading Chong Kang Zhou Yueqi Xie Yuwei Cao Shoujin Wang Chenyu You et al. 2023. LLMRec: Benchmarking large language models on recommendation task. arXiv:2308.12241. Retrieved from https:\/\/arxiv.org\/abs\/2308.12241"},{"issue":"9","key":"e_1_3_3_86_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3560815","article-title":"Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing","volume":"55","author":"Liu Pengfei","year":"2023","unstructured":"Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv. 55, 9 (2023), 1\u201335.","journal-title":"ACM Comput. Surv"},{"key":"e_1_3_3_87_2","unstructured":"Qijiong Liu Nuo Chen Tetsuya Sakai and Xiao-Ming Wu. 2023. A first look at LLM-powered generative news recommendation. arXiv:2305.06566. Retrieved from https:\/\/arxiv.org\/abs\/2305.06566"},{"key":"e_1_3_3_88_2","first-page":"452","volume-title":"WSDM","author":"Liu Qijiong","year":"2024","unstructured":"Qijiong Liu, Nuo Chen, Tetsuya Sakai, and Xiao-Ming Wu. 2024. ONCE: Boosting content-based recommendation with both open- and closed-source large language models. In WSDM. 
ACM, 452\u2013461."},{"key":"e_1_3_3_89_2","unstructured":"Yuqing Liu Yu Wang Lichao Sun and Philip S. Yu. 2024. Rec-GPT4V: Multimodal recommendation with large vision-language models. arXiv:2402.08670. Retrieved from https:\/\/arxiv.org\/abs\/2402.08670"},{"key":"e_1_3_3_90_2","article-title":"Recranker: Instruction tuning large language model as ranker for top-k recommendation","author":"Luo Sichun","year":"2023","unstructured":"Sichun Luo, Bowei He, Haohan Zhao, Wei Shao, Yanlin Qi, Yinya Huang, Aojun Zhou, Yuxuan Yao, Zongpeng Li, Yuanzhang Xiao, et al. 2023. Recranker: Instruction tuning large language model as ranker for top-k recommendation. ACM Trans. Inf. Syst. (2023)","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-97-5569-1_18"},{"key":"e_1_3_3_92_2","unstructured":"Tianhui Ma Yuan Cheng Hengshu Zhu and Hui Xiong. 2023. Large language models are not stable recommender systems. arXiv:2312.15746. Retrieved from https:\/\/arxiv.org\/abs\/2312.15746"},{"key":"e_1_3_3_93_2","first-page":"1160","volume-title":"ACL (2)","author":"Mao Zhiming","year":"2023","unstructured":"Zhiming Mao, Huimin Wang, Yiming Du, and Kam-Fai Wong. 2023. UniTRec: A unified text-to-text transformer and joint contrastive learning framework for text-based recommendation. In ACL (2). Association for Computational Linguistics, 1160\u20131170."},{"key":"e_1_3_3_94_2","first-page":"777","volume-title":"RecSys","author":"Mysore Sheshera","year":"2023","unstructured":"Sheshera Mysore, Andrew McCallum, and Hamed Zamani. 2023. Large language model augmented narrative driven recommendations. In RecSys. ACM, 777\u2013783."},{"key":"e_1_3_3_95_2","first-page":"188","volume-title":"EMNLP\/IJCNLP (1)","author":"Ni Jianmo","year":"2019","unstructured":"Jianmo Ni, Jiacheng Li, and Julian J. McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP\/IJCNLP (1). 
Association for Computational Linguistics, 188\u2013197."},{"key":"e_1_3_3_96_2","unstructured":"OpenAI. 2024. GPT-4o system card. arXiv:2410.21276. Retrieved from https:\/\/arxiv.org\/abs\/2410.21276"},{"key":"e_1_3_3_97_2","unstructured":"OpenAI. 2023. GPT-4 technical report. arXiv:2303.08774. Retrieved from https:\/\/arxiv.org\/abs\/2303.08774"},{"key":"e_1_3_3_98_2","unstructured":"Fabian Paischer Liu Yang Linfeng Liu Shuai Shao Kaveh Hassani Jiacheng Li Ricky Chen Zhang Gabriel Li Xialo Gao Wei Shao et al. 2024. Preference discerning with LLM-Enhanced generative retrieval. arXiv:2412.08604. Retrieved from https:\/\/arxiv.org\/abs\/2412.08604"},{"key":"e_1_3_3_99_2","first-page":"3421","volume-title":"CIKM","author":"Pan Xingyu","year":"2022","unstructured":"Xingyu Pan, Yushuo Chen, Changxin Tian, Zihan Lin, Jinpeng Wang, He Hu, and Wayne Xin Zhao. 2022. Multimodal meta-learning for cold-start sequential recommendation. In CIKM. ACM, 3421\u20133430."},{"key":"e_1_3_3_100_2","first-page":"340","volume-title":"RecSys","author":"Penha Gustavo","year":"2024","unstructured":"Gustavo Penha, Ali Vardasbi, Enrico Palumbo, Marco De Nadai, and Hugues Bouchard. 2024. Bridging search and recommendation in generative retrieval: Does one task help the other? In RecSys. ACM, 340\u2013349."},{"key":"e_1_3_3_101_2","unstructured":"Aleksandr V. Petrov and Craig Macdonald. 2023. Generative sequential recommendation with GPTRec. arXiv:2306.11114. Retrieved from https:\/\/arxiv.org\/abs\/2306.11114"},{"key":"e_1_3_3_102_2","first-page":"1504","volume-title":"NAACL-HLT (Findings)","author":"Qin Zhen","year":"2024","unstructured":"Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, et al. 2024. Large language models are effective text rankers with pairwise ranking prompting. In NAACL-HLT (Findings).
Association for Computational Linguistics, 1504\u20131518."},{"key":"e_1_3_3_103_2","doi-asserted-by":"crossref","unstructured":"Junyan Qiu Haitao Wang Zhaolin Hong Yiping Yang Qiang Liu and Xingxing Wang. 2023. ControlRec: Bridging the semantic gap between language model and personalized recommendation. arXiv:2311.16441. Retrieved from https:\/\/arxiv.org\/abs\/2311.16441","DOI":"10.2139\/ssrn.4749402"},{"key":"e_1_3_3_104_2","first-page":"4320","article-title":"U-BERT: Pre-training user representations for improved recommendation","author":"Qiu Zhaopeng","year":"2021","unstructured":"Zhaopeng Qiu, Xian Wu, Jingyue Gao, and Wei Fan. 2021. U-BERT: Pre-training user representations for improved recommendation. In AAAI, 4320\u20134327.","journal-title":"AAAI"},{"key":"e_1_3_3_105_2","first-page":"53728","article-title":"Direct preference optimization: Your language model is secretly a reward model","volume":"36","author":"Rafailov Rafael","year":"2023","unstructured":"Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS, Vol. 36, 53728\u201353741.","journal-title":"NeurIPS"},{"key":"e_1_3_3_106_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21 (2020), Article 140, 1\u201367.","journal-title":"J. Mach. Learn. 
Res"},{"key":"e_1_3_3_107_2","first-page":"10299","article-title":"Recommender systems with generative retrieval","volume":"36","author":"Rajput Shashank","year":"2023","unstructured":"Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan Hulikal Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Q. Tran, Jonah Samost, et al. 2023. Recommender systems with generative retrieval. In NeurIPS, Vol. 36, 10299\u201310315.","journal-title":"NeurIPS"},{"key":"e_1_3_3_108_2","unstructured":"Jerome Ramos Bin Wu and Aldo Lipani. 2024. Preference distillation for personalized generative recommendation. arXiv:2407.05033. Retrieved from https:\/\/arxiv.org\/abs\/2407.05033"},{"key":"e_1_3_3_109_2","first-page":"3464","volume-title":"WWW","author":"Ren Xubin","year":"2024","unstructured":"Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. Representation learning with large language models for recommendation. In WWW. ACM, 3464\u20133475."},{"key":"e_1_3_3_110_2","first-page":"452","volume-title":"UAI","author":"Rendle Steffen","year":"2009","unstructured":"Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In UAI. AUAI Press, 452\u2013461."},{"issue":"4","key":"e_1_3_3_111_2","doi-asserted-by":"crossref","first-page":"333","DOI":"10.1561\/1500000019","article-title":"The probabilistic relevance framework: BM25 and beyond","volume":"3","author":"Robertson Stephen E.","year":"2009","unstructured":"Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr. 3, 4 (2009), 333\u2013389.","journal-title":"Found. Trends Inf. Retr"},{"key":"e_1_3_3_112_2","first-page":"890","volume-title":"RecSys","author":"Sanner Scott","year":"2023","unstructured":"Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023. 
Large language models are competitive near cold-start recommenders for language- and item-based preferences. In RecSys. ACM, 890\u2013896."},{"key":"e_1_3_3_113_2","first-page":"2627","volume-title":"NAACL-HLT","author":"Scao Teven Le","year":"2021","unstructured":"Teven Le Scao and Alexander M. Rush. 2021. How many data points is a prompt worth?. In NAACL-HLT. Association for Computational Linguistics, 2627\u20132636."},{"key":"e_1_3_3_114_2","unstructured":"Minglai Shao Hua Huang Qiyao Peng and Hongtao Liu. 2024. ULMRec: User-centric large language model for sequential recommendation. arXiv:2412.05543. Retrieved from https:\/\/arxiv.org\/abs\/2412.05543"},{"key":"e_1_3_3_115_2","unstructured":"Xinyue Shen Zeyuan Chen Michael Backes and Yang Zhang. 2023. In ChatGPT we trust? Measuring and characterizing the reliability of ChatGPT. arXiv:2304.08979. Retrieved from https:\/\/arxiv.org\/abs\/2304.08979"},{"key":"e_1_3_3_116_2","first-page":"4051","volume-title":"CIKM","author":"Shi Tianhao","year":"2024","unstructured":"Tianhao Shi, Yang Zhang, Zhijian Xu, Chong Chen, Fuli Feng, Xiangnan He, and Qi Tian. 2024. Preliminary study on incremental learning for large language model-based recommender systems. In CIKM. ACM, 4051\u20134055."},{"key":"e_1_3_3_117_2","unstructured":"Yubo Shu Hansu Gu Peng Zhang Haonan Zhang Tun Lu Dongsheng Li and Ning Gu. 2023. RAH! RecSys-assistant-human: A human-central recommendation framework with large language models. arXiv:2308.09904. Retrieved from https:\/\/arxiv.org\/abs\/2308.09904"},{"key":"e_1_3_3_118_2","unstructured":"Kyle Dylan Spurlock Cagla Acun Esin Saka and Olfa Nasraoui. 2024. ChatGPT for conversational recommendation: Refining recommendations by reprompting with feedback. arXiv:2401.03605. Retrieved from https:\/\/arxiv.org\/abs\/2401.03605"},{"key":"e_1_3_3_119_2","unstructured":"Wenqi Sun Ruobing Xie Junjie Zhang Wayne Xin Zhao Leyu Lin and Ji-Rong Wen. 2024. 
Distillation is all you need for practically using different pre-trained recommendation models. arXiv:2401.00797. Retrieved from https:\/\/arxiv.org\/abs\/2401.00797"},{"key":"e_1_3_3_120_2","first-page":"355","volume-title":"SIGIR","author":"Tan Juntao","year":"2024","unstructured":"Juntao Tan, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Zelong Li, and Yongfeng Zhang. 2024. IDGenRec: LLM-RecSys alignment with textual ID learning. In SIGIR. ACM, 355\u2013364."},{"key":"e_1_3_3_121_2","unstructured":"Rohan Taori Ishaan Gulrajani Tianyi Zhang Yann Dubois Xuechen Li Carlos Guestrin Percy Liang and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An Instruction-following LLaMA Model. Retrieved from https:\/\/github.com\/tatsu-lab\/stanford_alpaca"},{"key":"e_1_3_3_122_2","unstructured":"The Qwen Team. 2024. Qwen2.5 technical report. arXiv:2412.15115. Retrieved from https:\/\/arxiv.org\/abs\/2412.15115"},{"key":"e_1_3_3_123_2","first-page":"715","volume-title":"WSDM","author":"Tian Zhen","year":"2023","unstructured":"Zhen Tian, Ting Bai, Zibin Zhang, Zhiyuan Xu, Kangyi Lin, Ji-Rong Wen, and Wayne Xin Zhao. 2023. Directed acyclic graph factorization machines for CTR prediction via knowledge distillation. In WSDM. ACM, 715\u2013723."},{"key":"e_1_3_3_124_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et al. 2023. LLaMA: Open and efficient foundation language models. arXiv:2302.13971. Retrieved from https:\/\/arxiv.org\/abs\/2302.13971"},{"key":"e_1_3_3_125_2","unstructured":"Hugo Touvron Louis Martin Kevin Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288. 
Retrieved from https:\/\/arxiv.org\/abs\/2307.09288"},{"key":"e_1_3_3_126_2","first-page":"5998","article-title":"Attention is all you need","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, 5998\u20136008.","journal-title":"NIPS"},{"key":"e_1_3_3_127_2","unstructured":"Dui Wang Xiangyu Hou Xiaohui Yang Bo Zhang Renbing Chen and Daiyue Xue. 2023. Multiple key-value strategy in recommendation systems incorporating large language model. arXiv:2310.16409. Retrieved from https:\/\/arxiv.org\/abs\/2310.16409"},{"key":"e_1_3_3_128_2","unstructured":"Lei Wang and Ee-Peng Lim. 2023. Zero-shot next-item recommendation using large pretrained language models. arXiv:2304.03153. Retrieved from https:\/\/arxiv.org\/abs\/2304.03153"},{"key":"e_1_3_3_129_2","unstructured":"Lei Wang Chen Ma Xueyang Feng Zeyu Zhang Hao Yang Jingsen Zhang Zhiyuan Chen Jiakai Tang Xu Chen Yankai Lin et al. 2023. A survey on large language model based autonomous agents. arXiv:2308.11432. Retrieved from https:\/\/arxiv.org\/abs\/2308.11432"},{"key":"e_1_3_3_130_2","unstructured":"Lei Wang Jingsen Zhang Xu Chen Yankai Lin Ruihua Song Wayne Xin Zhao and Ji-Rong Wen. 2023. RecAgent: A novel simulation paradigm for recommender systems. arXiv:2306.02552. Retrieved from https:\/\/arxiv.org\/abs\/2306.02552"},{"key":"e_1_3_3_131_2","first-page":"9440","volume-title":"ACL (1)","author":"Wang Peiyi","year":"2024","unstructured":"Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Lingpeng Kong, Qi Liu, Tianyu Liu, et al. 2024. Large language models are not fair evaluators. In ACL (1). 
Association for Computational Linguistics, 9440\u20139450."},{"key":"e_1_3_3_132_2","first-page":"2400","volume-title":"CIKM","author":"Wang Wenjie","year":"2024","unstructured":"Wenjie Wang, Honghui Bao, Xinyu Lin, Jizhi Zhang, Yongqi Li, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. 2024. Learnable item tokenization for generative recommendation. In CIKM. ACM, 2400\u20132409."},{"key":"e_1_3_3_133_2","unstructured":"Wenjie Wang Xinyu Lin Fuli Feng Xiangnan He and Tat-Seng Chua. 2023. Generative recommendation: Towards next-generation recommender paradigm. arXiv:2304.03516. Retrieved from https:\/\/arxiv.org\/abs\/2304.03516"},{"key":"e_1_3_3_134_2","first-page":"65","volume-title":"ACL (Short Papers)","author":"Wang Xinfeng","year":"2024","unstructured":"Xinfeng Wang, Jin Cui, Yoshimi Suzuki, and Fumiyo Fukumoto. 2024. RDRec: Rationale distillation for LLM-based recommendation. In ACL (Short Papers). Association for Computational Linguistics, 65\u201374."},{"key":"e_1_3_3_135_2","first-page":"10052","volume-title":"EMNLP","author":"Wang Xiaolei","year":"2023","unstructured":"Xiaolei Wang, Xinyu Tang, Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. 2023. Rethinking the evaluation for conversational recommendation in the era of large language models. In EMNLP. Association for Computational Linguistics, 10052\u201310065."},{"key":"e_1_3_3_136_2","volume-title":"ICLR","author":"Wang Xuezhi","year":"2023","unstructured":"Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed. H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In ICLR. OpenReview.net."},{"key":"e_1_3_3_137_2","unstructured":"Yan Wang Zhixuan Chu Xin Ouyang Simeng Wang Hongyan Hao Yue Shen Jinjie Gu Siqiao Xue James Y. Zhang Qing Cui et al. 2023. Enhancing recommender systems with large language model reasoning graphs. arXiv:2308.10835. 
Retrieved from https:\/\/arxiv.org\/abs\/2308.10835"},{"key":"e_1_3_3_138_2","first-page":"4351","volume-title":"NAACL-HLT (Findings)","author":"Wang Yancheng","year":"2024","unstructured":"Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Yanbin Lu, Xiaojiang Huang, et al. 2024. RecMind: Large language model powered agent for recommendation. In NAACL-HLT (Findings). Association for Computational Linguistics, 4351\u20134364."},{"key":"e_1_3_3_139_2","unstructured":"Yu Wang Zhiwei Liu Jianguo Zhang Weiran Yao Shelby Heinecke and Philip S. Yu. 2023. DRDT: Dynamic reflection with divergent thinking for LLM-based sequential recommendation. arXiv:2312.11336. Retrieved from https:\/\/arxiv.org\/abs\/2312.11336"},{"key":"e_1_3_3_140_2","doi-asserted-by":"crossref","first-page":"29144","DOI":"10.1109\/ACCESS.2024.3368027","article-title":"Empowering few-shot recommender systems with large language models-enhanced representations","volume":"12","author":"Wang Zhoumeng","year":"2024","unstructured":"Zhoumeng Wang. 2024. Empowering few-shot recommender systems with large language models-enhanced representations. IEEE Access 12 (2024), 29144\u201329153.","journal-title":"IEEE Access"},{"key":"e_1_3_3_141_2","first-page":"24824","article-title":"Chain-of-thought prompting elicits reasoning in large language models","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed. H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, Vol. 35, 24824\u201324837.","journal-title":"NeurIPS"},{"key":"e_1_3_3_142_2","volume-title":"ICLR","author":"Wei Tianxin","year":"2024","unstructured":"Tianxin Wei, Bowen Jin, Ruirui Li, Hansi Zeng, Zhengyang Wang, Jianhui Sun, Qingyu Yin, Hanqing Lu, Suhang Wang, Jingrui He, et al. 2024.
Towards unified multi-modal personalization: Large vision-language models for generative recommendation and beyond. In ICLR. OpenReview.net."},{"key":"e_1_3_3_143_2","first-page":"806","volume-title":"WSDM","author":"Wei Wei","year":"2024","unstructured":"Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. LLMRec: Large language models with graph augmentation for recommendation. In WSDM. ACM, 806\u2013815."},{"key":"e_1_3_3_144_2","unstructured":"Laura Weidinger John Mellor Maribeth Rauh Conor Griffin Jonathan Uesato Po-Sen Huang Myra Cheng Mia Glaese Borja Balle Atoosa Kasirzadeh et al. 2021. Ethical and social risks of harm from language models. arXiv:2112.04359. Retrieved from https:\/\/arxiv.org\/abs\/2112.04359"},{"key":"e_1_3_3_145_2","first-page":"1652","article-title":"Empowering news recommendation with pre-trained language models","author":"Wu Chuhan","year":"2021","unstructured":"Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021. Empowering news recommendation with pre-trained language models. In SIGIR, 1652\u20131656.","journal-title":"SIGIR"},{"key":"e_1_3_3_146_2","first-page":"726","volume-title":"SIGIR","author":"Wu Jiancan","year":"2021","unstructured":"Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised graph learning for recommendation. In SIGIR. ACM, 726\u2013735."},{"key":"e_1_3_3_147_2","first-page":"9178","volume-title":"AAAI","author":"Wu Likang","year":"2024","unstructured":"Likang Wu, Zhaopeng Qiu, Zhi Zheng, Hengshu Zhu, and Enhong Chen. 2024. Exploring large language model for graph data understanding in online job recommendations. In AAAI. 
AAAI Press, 9178\u20139186."},{"issue":"5","key":"e_1_3_3_148_2","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1007\/s11280-024-01291-2","article-title":"A survey on large language models for recommendation","volume":"27","author":"Wu Likang","year":"2024","unstructured":"Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, et al. 2024. A survey on large language models for recommendation. World Wide Web 27, 5 (2024), 60.","journal-title":"World Wide Web"},{"issue":"5","key":"e_1_3_3_149_2","doi-asserted-by":"crossref","first-page":"1122","DOI":"10.1109\/JAS.2023.123618","article-title":"A brief overview of ChatGPT: The history, status quo and potential future development","volume":"10","author":"Wu Tianyu","year":"2023","unstructured":"Tianyu Wu, Shizhu He, Jingping Liu, Siqi Sun, Kang Liu, Qing-Long Han, and Yang Tang. 2023. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE CAA J. Autom. Sinica 10, 5 (2023), 1122\u20131136.","journal-title":"IEEE CAA J. Autom. Sinica"},{"key":"e_1_3_3_150_2","unstructured":"Yunjia Xi Weiwen Liu Jianghao Lin Jieming Zhu Bo Chen Ruiming Tang Weinan Zhang Rui Zhang and Yong Yu. 2023. Towards open-world recommendation with knowledge augmentation from large language models. arXiv:2306.10933. Retrieved from https:\/\/arxiv.org\/abs\/2306.10933"},{"key":"e_1_3_3_151_2","volume-title":"ICLR","author":"Xiao Guangxuan","year":"2024","unstructured":"Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks. In ICLR. OpenReview.net."},{"key":"e_1_3_3_152_2","first-page":"2837","volume-title":"SIGIR","author":"Xu Lanling","year":"2023","unstructured":"Lanling Xu, Zhen Tian, Gaowei Zhang, Junjie Zhang, Lei Wang, Bowen Zheng, Yifan Li, Jiakai Tang, Zeyu Zhang, Yupeng Hou, et al. 2023.
Towards a more user-friendly and easy-to-use benchmark library for recommender systems. In SIGIR. ACM, 2837\u20132847."},{"key":"e_1_3_3_153_2","first-page":"38","volume-title":"NAACL-HLT (Findings)","author":"Yang Bowen","year":"2022","unstructured":"Bowen Yang, Cong Han, Yu Li, Lei Zuo, and Zhou Yu. 2022. Improving conversational recommendation systems\u2019 quality with context-aware item meta-information. In NAACL-HLT (Findings). Association for Computational Linguistics, 38\u201348."},{"key":"e_1_3_3_154_2","unstructured":"Fan Yang Zheng Chen Ziyan Jiang Eunah Cho Xiaojiang Huang and Yanbin Lu. 2023. PALR: Personalization aware LLMs for recommendation. arXiv:2305.07622. Retrieved from https:\/\/arxiv.org\/abs\/2305.07622"},{"key":"e_1_3_3_155_2","unstructured":"Jianwei Yang Hao Zhang Feng Li Xueyan Zou Chunyuan Li and Jianfeng Gao. 2023. Set-of-mark prompting unleashes extraordinary visual grounding in GPT-4V. arXiv:2310.11441. Retrieved from https:\/\/arxiv.org\/abs\/2310.11441"},{"key":"e_1_3_3_156_2","unstructured":"Liu Yang Fabian Paischer Kaveh Hassani Jiacheng Li Shuai Shao Gabriel Zhang Yun Li Xue He Nima Feng Sem Noorshams et al. 2024. Unifying generative and dense retrieval for sequential recommendation. arXiv:2411.18814. Retrieved from https:\/\/arxiv.org\/abs\/2411.18814"},{"key":"e_1_3_3_157_2","unstructured":"Zhengyi Yang Jiancan Wu Yanchen Luo Jizhi Zhang Yancheng Yuan An Zhang Xiang Wang and Xiangnan He. 2023. Large language model can interpret latent space of sequential recommender. arXiv:2310.20487. Retrieved from https:\/\/arxiv.org\/abs\/2310.20487"},{"key":"e_1_3_3_158_2","unstructured":"Jing Yao Wei Xu Jianxun Lian Xiting Wang Xiaoyuan Yi and Xing Xie. 2023. Knowledge plugins: Enhancing large language models for domain-specific recommendations. arXiv:2311.10779. 
Retrieved from https:\/\/arxiv.org\/abs\/2311.10779"},{"key":"e_1_3_3_159_2","unstructured":"Jun Yin Zhengxin Zeng Mingzheng Li Hao Yan Chaozhuo Li Weihao Han Jianjin Zhang Ruochen Liu Allen Sun Denvy Deng et al. 2024. Unleash LLMs potential for recommendation by coordinating twin-tower dynamic semantic token generator. arXiv:2409.09253. Retrieved from https:\/\/arxiv.org\/abs\/2409.09253"},{"key":"e_1_3_3_160_2","first-page":"2639","volume-title":"SIGIR","author":"Yuan Zheng","year":"2023","unstructured":"Zheng Yuan, Fajie Yuan, Yu Song, Youhua Li, Junchen Fu, Fei Yang, Yunzhu Pan, and Yongxin Ni. 2023. Where to go next for recommender systems? ID- vs. modality-based recommender models revisited. In SIGIR. ACM, 2639\u20132649."},{"key":"e_1_3_3_161_2","unstructured":"Zhenrui Yue Sara Rabhi Gabriel de Souza Pereira Moreira Dong Wang and Even Oldridge. 2023. LlamaRec: Two-stage recommendation using large language models for ranking. arXiv:2311.02089. Retrieved from https:\/\/arxiv.org\/abs\/2311.02089"},{"key":"e_1_3_3_162_2","volume-title":"ICLR","author":"Zeng Aohan","year":"2023","unstructured":"Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2023. GLM-130B: An open bilingual pre-trained model. In ICLR. OpenReview.net."},{"key":"e_1_3_3_163_2","unstructured":"An Zhang Leheng Sheng Yuxin Chen Hao Li Yang Deng Xiang Wang and Tat-Seng Chua. 2023. On generative agents in recommendation. arXiv:2310.10108. Retrieved from https:\/\/arxiv.org\/abs\/2310.10108"},{"key":"e_1_3_3_164_2","first-page":"993","volume-title":"RecSys","author":"Zhang Jizhi","year":"2023","unstructured":"Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation. In RecSys.
ACM, 993\u2013999."},{"key":"e_1_3_3_165_2","first-page":"3679","volume-title":"WWW","author":"Zhang Junjie","year":"2024","unstructured":"Junjie Zhang, Yupeng Hou, Ruobing Xie, Wenqi Sun, Julian J. McAuley, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2024. AgentCF: Collaborative learning with autonomous language agents for recommender systems. In WWW. ACM, 3679\u20133689."},{"key":"e_1_3_3_166_2","article-title":"Recommendation as instruction following: A large language model empowered recommendation approach","author":"Zhang Junjie","year":"2023","unstructured":"Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2023. Recommendation as instruction following: A large language model empowered recommendation approach. ACM Trans. Inf. Syst.","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_3_167_2","unstructured":"Wenxuan Zhang Hongzhi Liu Yingpeng Du Chen Zhu Yang Song Hengshu Zhu and Zhonghai Wu. 2023. Bridging the information gap between domain-specific model and general LLM for personalized recommendation. arXiv:2311.03778. Retrieved from https:\/\/arxiv.org\/abs\/2311.03778"},{"key":"e_1_3_3_168_2","article-title":"Language models as recommender systems: Evaluations and limitations","author":"Zhang Yuhui","year":"2021","unstructured":"Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, and Hao Wang. 2021. Language models as recommender systems: Evaluations and limitations. In NeurIPS.","journal-title":"NeurIPS"},{"key":"e_1_3_3_169_2","unstructured":"Yang Zhang Fuli Feng Jizhi Zhang Keqin Bao Qifan Wang and Xiangnan He. 2023. CoLLM: Integrating collaborative embeddings into large language models for recommendation. arXiv:2310.19488. Retrieved from https:\/\/arxiv.org\/abs\/2310.19488"},{"key":"e_1_3_3_170_2","unstructured":"Yang Zhang Juntao You Yimeng Bai Jizhi Zhang Keqin Bao Wenjie Wang and Tat-Seng Chua. 2024. Causality-enhanced behavior sequence modeling in LLMs for personalized recommendation.
arXiv:2410.22809. Retrieved from https:\/\/arxiv.org\/abs\/2410.22809"},{"key":"e_1_3_3_171_2","article-title":"Multimodal chain-of-thought reasoning in language models","volume":"2024","author":"Zhang Zhuosheng","year":"2024","unstructured":"Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2024. Multimodal chain-of-thought reasoning in language models. Trans. Mach. Learn. Res 2024 (2024).","journal-title":"Trans. Mach. Learn. Res"},{"key":"e_1_3_3_172_2","first-page":"4722","volume-title":"CIKM","author":"Zhao Wayne Xin","year":"2022","unstructured":"Wayne Xin Zhao, Yupeng Hou, Xingyu Pan, Chen Yang, Zeyu Zhang, Zihan Lin, Jingsen Zhang, Shuqing Bian, Jiakai Tang, Wenqi Sun, et al. 2022. RecBole 2.0: Towards a more up-to-date recommendation library. In CIKM. ACM, 4722\u20134726."},{"key":"e_1_3_3_173_2","first-page":"4653","volume-title":"CIKM","author":"Zhao Wayne Xin","year":"2021","unstructured":"Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, et al. 2021. RecBole: Towards a unified, comprehensive and efficient framework for recommendation algorithms. In CIKM. ACM, 4653\u20134664."},{"key":"e_1_3_3_174_2","unstructured":"Wayne Xin Zhao Kun Zhou Junyi Li Tianyi Tang Xiaolei Wang Yupeng Hou Yingqian Min Beichen Zhang Junjie Zhang Zican Dong et al. 2023. A survey of large language models. arXiv:2303.18223. Retrieved from https:\/\/arxiv.org\/abs\/2303.18223"},{"issue":"11","key":"e_1_3_3_175_2","doi-asserted-by":"crossref","first-page":"6889","DOI":"10.1109\/TKDE.2024.3392335","article-title":"Recommender systems in the era of large language models (LLMs)","volume":"36","author":"Zhao Zihuai","year":"2024","unstructured":"Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, et al. 2024. Recommender systems in the era of large language models (LLMs). IEEE Trans. Knowl. Data Eng. 
36, 11 (2024), 6889\u20136907.","journal-title":"IEEE Trans. Knowl. Data Eng"},{"key":"e_1_3_3_176_2","first-page":"1435","volume-title":"ICDE","author":"Zheng Bowen","year":"2024","unstructured":"Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ming Chen, and Ji-Rong Wen. 2024. Adapting large language models by integrating collaborative semantics for recommendation. In ICDE. IEEE, 1435\u20131448."},{"key":"e_1_3_3_177_2","volume-title":"ICLR","author":"Zhou Denny","year":"2023","unstructured":"Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed. H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In ICLR. OpenReview.net."},{"key":"e_1_3_3_178_2","first-page":"1893","volume-title":"CIKM","author":"Zhou Kun","year":"2020","unstructured":"Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In CIKM. ACM, 1893\u20131902."},{"key":"e_1_3_3_179_2","first-page":"3162","volume-title":"WWW","author":"Zhu Yaochen","year":"2024","unstructured":"Yaochen Zhu, Liang Wu, Qi Guo, Liangjie Hong, and Jundong Li. 2024. Collaborative large language model for recommender systems. In WWW. 
ACM, 3162\u20133172."}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3726871","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3726871","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:39Z","timestamp":1750295919000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3726871"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,6]]},"references-count":178,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,6,30]]}},"alternative-id":["10.1145\/3726871"],"URL":"https:\/\/doi.org\/10.1145\/3726871","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,6]]},"assertion":[{"value":"2024-06-07","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-16","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-06","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}