{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,2]],"date-time":"2025-08-02T19:10:49Z","timestamp":1754161849097,"version":"3.41.2"},"reference-count":115,"publisher":"Association for Computing Machinery (ACM)","issue":"1","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62322603, 62177033"],"award-info":[{"award-number":["62322603, 62177033"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Shanghai Municipal Science and Technology Major Project","award":["2021SHZDZX0102"],"award-info":[{"award-number":["2021SHZDZX0102"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Recomm. Syst."],"published-print":{"date-parts":[[2026,3,31]]},"abstract":"<jats:p>\n            Recommender system plays a pervasive role in today\u2019s online services, yet its closed-loop nature, i.e., training and deploying within a specific closed domain, constrains its access to open-world knowledge. Recently, the emergence of large language models (LLMs) has shown promise in bridging this gap by encoding extensive world knowledge and demonstrating advanced reasoning capabilities. However, previous attempts to directly implement LLMs as recommenders fall short in meeting the demanding requirements of industrial recommender systems, particularly in terms of online inference latency and offline resource efficiency. In this work, we propose an Open-World\n            <jats:underline>R<\/jats:underline>\n            ecommendation Framework with\n            <jats:underline>E<\/jats:underline>\n            fficient and Deployable\n            <jats:underline>K<\/jats:underline>\n            nowledge\n            <jats:underline>I<\/jats:underline>\n            nfusion from Large Language Models, dubbed\n            <jats:italic toggle=\"yes\">REKI<\/jats:italic>\n            , to acquire two types of external knowledge about users and items from LLMs. Specifically, we introduce\n            <jats:italic toggle=\"yes\">factorization prompting<\/jats:italic>\n            to elicit accurate knowledge reasoning on user preferences and items. With factorization prompting, we develop\n            <jats:italic toggle=\"yes\">individual knowledge extraction<\/jats:italic>\n            and\n            <jats:italic toggle=\"yes\">collective knowledge extraction<\/jats:italic>\n            tailored for different scales of recommendation scenarios, effectively reducing offline resource consumption. Subsequently, the generated user and item knowledge undergoes efficient transformation and condensation into augmented vectors through a\n            <jats:italic toggle=\"yes\">hybridized expert-integrated network<\/jats:italic>\n            , ensuring its compatibility with the recommendation task. The obtained vectors can then be directly used to enhance the performance of any conventional recommendation model. We also ensure efficient inference by preprocessing and prestoring the knowledge from the LLM. Extensive experiments demonstrate that REKI significantly outperforms the state-of-the-art baselines and is compatible with a diverse array of recommendation algorithms and tasks. 
REKI has been deployed on Huawei\u2019s news and music recommendation platforms, achieving improvements of 7% and 1.99%, respectively, in online A\/B tests.\n          <\/jats:p>","DOI":"10.1145\/3725894","type":"journal-article","created":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T09:17:53Z","timestamp":1742894273000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Efficient and Deployable Knowledge Infusion for Open-World Recommendations via Large Language Models"],"prefix":"10.1145","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6883-881X","authenticated-orcid":false,"given":"Yunjia","family":"Xi","sequence":"first","affiliation":[{"name":"Shanghai Jiao Tong University","place":["Shanghai, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9148-3997","authenticated-orcid":false,"given":"Weiwen","family":"Liu","sequence":"additional","affiliation":[{"name":"Huawei Noah\u2019s Ark Lab","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8953-3203","authenticated-orcid":false,"given":"Jianghao","family":"Lin","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University","place":["Shanghai, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-8723-0689","authenticated-orcid":false,"given":"Muyan","family":"Weng","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University","place":["Shanghai, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-3366-5437","authenticated-orcid":false,"given":"Xiaoling","family":"Cai","sequence":"additional","affiliation":[{"name":"Consumer Business Group, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2943-7997","authenticated-orcid":false,"given":"Hong","family":"Zhu","sequence":"additional","affiliation":[{"name":"Consumer Business Group, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5666-8320","authenticated-orcid":false,"given":"Jieming","family":"Zhu","sequence":"additional","affiliation":[{"name":"Huawei Noah\u2019s Ark Lab","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3750-2533","authenticated-orcid":false,"given":"Bo","family":"Chen","sequence":"additional","affiliation":[{"name":"Huawei Noah\u2019s Ark Lab, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9224-2431","authenticated-orcid":false,"given":"Ruiming","family":"Tang","sequence":"additional","affiliation":[{"name":"Huawei Noah\u2019s Ark Lab, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0281-8271","authenticated-orcid":false,"given":"Yong","family":"Yu","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University","place":["Shanghai, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0127-2425","authenticated-orcid":false,"given":"Weinan","family":"Zhang","sequence":"additional","affiliation":[{"name":"Computer Science, Shanghai Jiao Tong University","place":["Shanghai, China"]}]}],"member":"320","published-online":{"date-parts":[[2025,7,29]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"2020. MindSpore. 
Retrieved from https:\/\/www.mindspore.cn\/"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics9081295"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3209978.3209985"},{"key":"e_1_3_2_5_2","doi-asserted-by":"crossref","unstructured":"Keqin Bao Jizhi Zhang Yang Zhang Wenjie Wang Fuli Feng and Xiangnan He. 2023. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. arXiv:2305.00447. Retrieved from https:\/\/arxiv.org\/abs\/2305.00447","DOI":"10.1145\/3604915.3608857"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3534642"},{"key":"e_1_3_2_7_2","unstructured":"S\u00e9bastien Bubeck Varun Chandrasekaran Ronen Eldan Johannes Gehrke Eric Horvitz Ece Kamar Peter Lee Yin Tat Lee Yuanzhi Li Scott Lundberg et\u00a0al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv:2303.12712. Retrieved from https:\/\/arxiv.org\/abs\/2303.12712"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313705"},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Jin Chen Zheng Liu Xu Huang Chenwang Wu Qi Liu Gangwei Jiang Yuanhao Pu Yuxuan Lei Xiaolong Chen Xingmei Wang et\u00a0al. 2023. When large language models meet personalization: Perspectives of challenges and opportunities. arXiv:2307.16376. Retrieved from https:\/\/arxiv.org\/abs\/2307.16376","DOI":"10.1007\/s11280-024-01276-1"},{"key":"e_1_3_2_10_2","unstructured":"Zeyu Cui Jianxin Ma Chang Zhou Jingren Zhou and Hongxia Yang. 2022. M6-Rec: Generative pretrained language models are open-ended recommender systems. arXiv:2205.08084. Retrieved from https:\/\/arxiv.org\/abs\/2205.08084"},{"key":"e_1_3_2_11_2","unstructured":"Sunhao Dai Ninglu Shao Haiyuan Zhao Weijie Yu Zihua Si Chen Xu Zhongxiang Sun Xiao Zhang and Jun Xu. 2023. Uncovering ChatGPT\u2019s capabilities in recommender systems. arXiv:2305.02182. Retrieved from https:\/\/arxiv.org\/abs\/2305.02182"},{"key":"e_1_3_2_12_2","article-title":"SPRank: Semantic path-based ranking for Top-N recommendations using linked open data","author":"Di Noia Tommaso","year":"2016","unstructured":"Tommaso Di Noia, Vito Claudio Ostuni, Paolo Tomeo, and Eugenio Di Sciascio. 2016. SPRank: Semantic path-based ranking for Top-N recommendations using linked open data. ACM Transactions on Intelligent Systems and Technology 8, 1, Article 9 (2017), 1\u201334.","journal-title":"ACM Transactions on Intelligent Systems and Technology"},{"key":"e_1_3_2_13_2","unstructured":"Hao Ding Yifei Ma Anoop Deoras Yuyang Wang and Hao Wang. 2021. Zero-shot recommender systems. arXiv:2105.08318. Retrieved from https:\/\/arxiv.org\/abs\/2105.08318"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3672008"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.26"},{"key":"e_1_3_2_16_2","unstructured":"Wenqi Fan Zihuai Zhao Jiatong Li Yunqing Liu Xiaowei Mei Yiqi Wang Jiliang Tang and Qing Li. 2023. Recommender systems in the era of large language models (llms). arXiv:2307.02046. Retrieved from https:\/\/arxiv.org\/abs\/2307.02046"},{"key":"e_1_3_2_17_2","unstructured":"Luke Friedman Sameer Ahuja David Allen Terry Tan Hakim Sidahmed Changbo Long Jun Xie Gabriel Schubiner Ajay Patel Harsh Lara et\u00a0al. 2023. Leveraging large language models in conversational recommender systems. arXiv:2305.07961. 
Retrieved from https:\/\/arxiv.org\/abs\/2305.07961"},{"key":"e_1_3_2_18_2","unstructured":"Yunfan Gao Tao Sheng Youlin Xiang Yun Xiong Haofen Wang and Jiawei Zhang. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv:2303.14524. Retrieved from https:\/\/arxiv.org\/abs\/2303.14524"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3523227.3546767"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3583780.3614657"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/239"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2020.3028705"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3572835"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539323"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3604915.3610639"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/2872427.2883037"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3583780.3614949"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3543507.3583434"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539381"},{"key":"e_1_3_2_30_2","unstructured":"Yupeng Hou Junjie Zhang Zihan Lin Hongyu Lu Ruobing Xie Julian McAuley and Wayne Xin Zhao. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845. Retrieved from https:\/\/arxiv.org\/abs\/2305.08845"},{"key":"e_1_3_2_31_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Hu Edward J.","year":"2021","unstructured":"Edward J. Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et\u00a0al. 2021. LoRA: Low-rank adaptation of large language models. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_2_32_2","unstructured":"Jie Huang and Kevin Chen-Chuan Chang. 2022. Towards Reasoning in Large Language Models: A Survey. arXiv preprint arXiv:2212.10403. Retrieved from https:\/\/arxiv.org\/abs\/2212.10403"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298689.3347043"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1991.3.1.79"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/582415.582418"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11704-022-1250-2"},{"key":"e_1_3_2_38_2","doi-asserted-by":"crossref","unstructured":"Xiaoqi Jiao Yichun Yin Lifeng Shang Xin Jiang Xiao Chen Linlin Li Fang Wang and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Retrieved from https:\/\/arxiv.org\/abs\/1909.10351","DOI":"10.18653\/v1\/2020.findings-emnlp.372"},{"key":"e_1_3_2_39_2","unstructured":"Wang-Cheng Kang Jianmo Ni Nikhil Mehta Maheswaran Sathiamoorthy Lichan Hong Ed Chi and Derek Zhiyuan Cheng. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474. Retrieved from https:\/\/arxiv.org\/abs\/2305.06474"},{"key":"e_1_3_2_40_2","first-page":"4171","volume-title":"Proceedings of the NAACL-HLT","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL-HLT. 4171\u20134186."},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3671931"},{"key":"e_1_3_2_42_2","first-page":"341","volume-title":"Proceedings of the 2021 3rd International Conference on Literature, Art and Human Development.","author":"Kong Zhiyu","year":"2021","unstructured":"Zhiyu Kong, Xiaoru Zhang, and Ruilin Wang. 2021. Review of the research on the relationship between algorithmic news recommendation and information cocoons. In Proceedings of the 2021 3rd International Conference on Literature, Art and Human Development. Atlantis Press, 341\u2013345."},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2009.263"},{"key":"e_1_3_2_44_2","unstructured":"Chen Li Yixiao Ge Jiayong Mao Dian Li and Ying Shan. 2023. TagGPT: Large language models are zero-shot multimodal taggers. arXiv preprint arXiv:2304.03022. Retrieved from https:\/\/arxiv.org\/abs\/2304.03022"},{"key":"e_1_3_2_45_2","unstructured":"Jiacheng Li Ming Wang Jin Li Jinmiao Fu Xin Shen Jingbo Shang and Julian McAuley. 2023. Text is all you need: Learning language representations for sequential recommendation. arXiv:2305.13731. Retrieved from https:\/\/arxiv.org\/abs\/2305.13731"},{"key":"e_1_3_2_46_2","unstructured":"Lei Li Yongfeng Zhang Dugang Liu and Li Chen. 2023. Large language models for generative recommendation: A survey and visionary discussions. arXiv:2309.01157. Retrieved from https:\/\/arxiv.org\/abs\/2309.01157"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357951"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220023"},{"key":"e_1_3_2_49_2","doi-asserted-by":"crossref","unstructured":"Guo Lin and Yongfeng Zhang. 2023. Sparks of artificial general recommender (AGR): Early experiments with ChatGPT. arXiv:2305.04518. Retrieved from https:\/\/arxiv.org\/abs\/2305.04518","DOI":"10.3390\/a16090432"},{"key":"e_1_3_2_50_2","unstructured":"Jianghao Lin Bo Chen Hangyu Wang Yunjia Xi Yanru Qu Xinyi Dai Kangning Zhang Ruiming Tang Yong Yu and Weinan Zhang. 2023. ClickPrompt: CTR models are strong prompt generators for adapting language models to CTR prediction. arXiv:2310.09234. Retrieved from https:\/\/arxiv.org\/abs\/2310.09234"},{"key":"e_1_3_2_51_2","unstructured":"Jianghao Lin Xinyi Dai Yunjia Xi Weiwen Liu Bo Chen Xiangyang Li Chenxu Zhu Huifeng Guo Yong Yu Ruiming Tang et\u00a0al. 2023. How can recommender systems benefit from large language models: A survey. arXiv:2306.05817. Retrieved from https:\/\/arxiv.org\/abs\/2306.05817"},{"key":"e_1_3_2_52_2","unstructured":"Junyang Lin Rui Men An Yang Chang Zhou Ming Ding Yichang Zhang Peng Wang Ang Wang Le Jiang Xianyan Jia et\u00a0al. 2021. M6: A chinese multimodal pretrainer. arXiv:2103.00823. Retrieved from https:\/\/arxiv.org\/abs\/2103.00823"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599422"},{"key":"e_1_3_2_54_2","unstructured":"Jianghao Lin Rong Shan Chenxu Zhu Kounianhua Du Bo Chen Shigang Quan Ruiming Tang Yong Yu and Weinan Zhang. 2023. ReLLa: Retrieval-enhanced large language models for lifelong sequential behavior comprehension in recommendation. arXiv:2308.11131. Retrieved from https:\/\/arxiv.org\/abs\/2308.11131"},{"key":"e_1_3_2_55_2","unstructured":"Junling Liu Chao Liu Renjie Lv Kang Zhou and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv:2304.10149. 
Retrieved from https:\/\/arxiv.org\/abs\/2304.10149"},{"key":"e_1_3_2_56_2","doi-asserted-by":"crossref","unstructured":"Peng Liu Lemei Zhang and Jon Atle Gulla. 2023. Pre-train prompt and recommendation: A comprehensive survey of language modelling paradigm adaptations in recommender systems. arXiv:2302.03735. Retrieved from https:\/\/arxiv.org\/abs\/2302.03735","DOI":"10.1162\/tacl_a_00619"},{"key":"e_1_3_2_57_2","unstructured":"Weiwen Liu Yunjia Xi Jiarui Qin Fei Sun Bo Chen Weinan Zhang Rui Zhang and Ruiming Tang. 2022. Neural re-ranking in multi-stage recommender systems: A review. arXiv:2202.06602. Retrieved from https:\/\/arxiv.org\/abs\/2202.06602"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11704-022-1437-6"},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589335.3648305"},{"key":"e_1_3_2_60_2","unstructured":"Yucong Luo Mingyue Cheng Hao Zhang Junyu Lu and Enhong Chen. 2023. Unlocking the potential of large language models for explainable recommendations. arXiv:2312.15661. Retrieved from https:\/\/arxiv.org\/abs\/2312.15661"},{"key":"e_1_3_2_61_2","doi-asserted-by":"crossref","unstructured":"Hanjia Lyu Song Jiang Hanqing Zeng Yinglong Xia and Jiebo Luo. 2023. Llm-rec: Personalized recommendation via prompting large language models. arXiv:2307.15780. Retrieved from https:\/\/arxiv.org\/abs\/2307.15780","DOI":"10.18653\/v1\/2024.findings-naacl.39"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1002\/widm.53"},{"key":"e_1_3_2_63_2","doi-asserted-by":"crossref","unstructured":"Sheshera Mysore Andrew McCallum and Hamed Zamani. 2023. Large language model augmented narrative driven recommendations. arXiv:2306.02250. Retrieved from https:\/\/arxiv.org\/abs\/2306.02250","DOI":"10.1145\/3604915.3608829"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1018"},{"key":"e_1_3_2_65_2","unstructured":"OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774."},{"key":"e_1_3_2_66_2","unstructured":"Long Ouyang Jeffrey Wu Xu Jiang Diogo Almeida Carroll Wainwright Pamela Mishkin Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray et\u00a0al. 2022. Training language models to follow instructions with human feedback. In Proceedings of the 36th International Conference on Advances in Neural Information Processing Systems. 27730\u201327744."},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP40000.2020.00095"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401104"},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298689.3347000"},{"key":"e_1_3_2_70_2","doi-asserted-by":"crossref","unstructured":"Ofir Press Muru Zhang Sewon Min Ludwig Schmidt Noah A. Smith and Mike Lewis. 2022. Measuring and Narrowing the Compositionality Gap in Language Models. arXiv preprint arXiv:2210.03350. Retrieved from https:\/\/arxiv.org\/abs\/2210.03350","DOI":"10.18653\/v1\/2023.findings-emnlp.378"},{"key":"e_1_3_2_71_2","doi-asserted-by":"crossref","unstructured":"Shuofei Qiao Yixin Ou Ningyu Zhang Xiang Chen Yunzhi Yao Shumin Deng Chuanqi Tan Fei Huang and Huajun Chen. 2023. Reasoning with Language Model Prompting: A Survey. arXiv preprint arXiv:2212.09597. 
Retrieved from https:\/\/arxiv.org\/abs\/2212.09597","DOI":"10.18653\/v1\/2023.acl-long.294"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i5.16557"},{"issue":"8","key":"e_1_3_2_73_2","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.","journal-title":"OpenAI Blog"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455856"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645458"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657782"},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2010.127"},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1145\/3604915.3608845"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3481941"},{"key":"e_1_3_2_80_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657683"},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357925"},{"key":"e_1_3_2_82_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657821"},{"key":"e_1_3_2_83_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et\u00a0al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Retrieved from https:\/\/arxiv.org\/abs\/2302.13971"},{"key":"e_1_3_2_84_2","unstructured":"Aaron Van den Oord Sander Dieleman and Benjamin Schrauwen. 2013. Deep content-based music recommendation. In Proceedings of the 27th International Conference on Neural Information Processing Systems Curran Associates Inc. Red Hook NY USA 2 (2013) 2643\u20132651."},{"key":"e_1_3_2_85_2","first-page":"arXiv\u20132310","article-title":"FLIP: Towards fine-grained alignment between ID-based models and pretrained language models for CTR prediction","author":"Wang Hangyu","year":"2023","unstructured":"Hangyu Wang, Jianghao Lin, Xiangyang Li, Bo Chen, Chenxu Zhu, Ruiming Tang, Weinan Zhang, and Yong Yu. 2023. FLIP: Towards fine-grained alignment between ID-based models and pretrained language models for CTR prediction. arXiv e-prints (2023), arXiv\u20132310.","journal-title":"arXiv e-prints"},{"key":"e_1_3_2_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313411"},{"key":"e_1_3_2_87_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313417"},{"key":"e_1_3_2_88_2","unstructured":"Lei Wang and Ee-Peng Lim. 2023. Zero-shot next-item recommendation using large pretrained language models. arXiv:2304.03153. Retrieved from https:\/\/arxiv.org\/abs\/2304.03153"},{"key":"e_1_3_2_89_2","doi-asserted-by":"publisher","DOI":"10.1145\/3124749.3124754"},{"key":"e_1_3_2_90_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3450078"},{"key":"e_1_3_2_91_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-naacl.271"},{"key":"e_1_3_2_92_2","unstructured":"Jason Wei Xuezhi Wang Dale Schuurmans Maarten Bosma Ed Chi Quoc Le and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903. 
Retrieved from https:\/\/arxiv.org\/abs\/2201.11903"},{"key":"e_1_3_2_93_2","unstructured":"Laura Weidinger John Mellor Maribeth Rauh Conor Griffin Jonathan Uesato Po-Sen Huang Myra Cheng Mia Glaese Borja Balle Atoosa Kasirzadeh et\u00a0al. 2021. Ethical and social risks of harm from language models. arXiv:2112.04359. Retrieved from https:\/\/arxiv.org\/abs\/2112.04359"},{"key":"e_1_3_2_94_2","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3671901"},{"key":"e_1_3_2_95_2","unstructured":"Likang Wu Zhi Zheng Zhaopeng Qiu Hao Wang Hongchao Gu Tingjia Shen Chuan Qin Chen Zhu Hengshu Zhu Qi Liu et\u00a0al. 2023. A survey on large language models for recommendation. arXiv:2305.19860. Retrieved from https:\/\/arxiv.org\/abs\/2305.19860"},{"key":"e_1_3_2_96_2","unstructured":"Yunjia Xi Weiwen Liu Jianghao Lin Xiaoling Cai Hong Zhu Jieming Zhu Bo Chen Ruiming Tang Weinan Zhang Rui Zhang et\u00a0al. 2023. Towards open-world recommendation with knowledge augmentation from large language models. arXiv:2306.10933. Retrieved from https:\/\/arxiv.org\/abs\/2306.10933"},{"key":"e_1_3_2_97_2","unstructured":"Yunjia Xi Weiwen Liu Jianghao Lin Bo Chen Ruiming Tang Weinan Zhang and Yong Yu. 2024. MemoCRS: Memory-enhanced sequential conversational recommender systems with large language models. arXiv:2407.04960. Retrieved from https:\/\/arxiv.org\/abs\/2407.04960"},{"key":"e_1_3_2_98_2","unstructured":"Yunjia Xi Weiwen Liu Jianghao Lin Chuhan Wu Bo Chen Ruiming Tang Weinan Zhang and Yong Yu. 2024. Play to your strengths: Collaborative intelligence of conventional recommender models and large language models. arXiv:2403.16378. Retrieved from https:\/\/arxiv.org\/abs\/2403.16378"},{"key":"e_1_3_2_99_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599878"},{"key":"e_1_3_2_100_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3532026"},{"key":"e_1_3_2_101_2","unstructured":"Yunjia Xi Hangyu Wang Bo Chen Jianghao Lin Menghui Zhu Weiwen Liu Ruiming Tang Weinan Zhang and Yong Yu. 2024. A decoding acceleration framework for industrial deployable LLM-based recommender systems. arXiv preprint arXiv:2408.05676. Retrieved from https:\/\/arxiv.org\/abs\/2408.05676"},{"key":"e_1_3_2_102_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2023.3282907"},{"key":"e_1_3_2_103_2","doi-asserted-by":"publisher","DOI":"10.1145\/1277741.1277790"},{"key":"e_1_3_2_104_2","unstructured":"Wei Zeng Xiaozhe Ren Teng Su Hui Wang Yi Liao Zhiwei Wang Xin Jiang ZhenZhang Yang Kaisheng Wang Xiaoda Zhang et\u00a0al. 2021. Pangu-\\(\\alpha\\): Large-scale autoregressive pretrained Chinese language models with auto-parallel computation. arXiv:2104.12369. Retrieved from https:\/\/arxiv.org\/abs\/2104.12369"},{"key":"e_1_3_2_105_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645537"},{"key":"e_1_3_2_106_2","doi-asserted-by":"publisher","DOI":"10.1145\/2600428.2609599"},{"key":"e_1_3_2_107_2","volume-title":"I (Still) Can\u2019t Believe It\u2019s Not Better! NeurIPS 2021 Workshop","author":"Zhang Yuhui","year":"2021","unstructured":"Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, and Hao Wang. 2021. Language models as recommender systems: Evaluations and limitations. In I (Still) Can\u2019t Believe It\u2019s Not Better! NeurIPS 2021 Workshop."},{"key":"e_1_3_2_108_2","unstructured":"Wayne Xin Zhao Kun Zhou Junyi Li Tianyi Tang Xiaolei Wang Yupeng Hou Yingqian Min Beichen Zhang Junjie Zhang Zican Dong et\u00a0al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. 
Retrieved from https:\/\/arxiv.org\/abs\/2303.18223"},{"key":"e_1_3_2_109_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657828"},{"key":"e_1_3_2_110_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE60146.2024.00118"},{"key":"e_1_3_2_111_2","volume-title":"Proceedings of the 11th International Conference on Learning Representations","author":"Zhou Denny","year":"2023","unstructured":"Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In Proceedings of the 11th International Conference on Learning Representations."},{"key":"e_1_3_2_112_2","doi-asserted-by":"crossref","unstructured":"Guorui Zhou Na Mou Ying Fan Qi Pi Weijie Bian Chang Zhou Xiaoqiang Zhu and Kun Gai. 2019. Deep interest evolution network for click-through rate prediction. In Proceedings of the AAAI Conference on Artificial Intelligence. Article 729 8 pages.","DOI":"10.1609\/aaai.v33i01.33015941"},{"key":"e_1_3_2_113_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219823"},{"key":"e_1_3_2_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645482"},{"key":"e_1_3_2_115_2","doi-asserted-by":"publisher","DOI":"10.1145\/3589334.3645347"},{"key":"e_1_3_2_116_2","unstructured":"Yutao Zhu Huaying Yuan Shuting Wang Jiongnan Liu Wenhan Liu Chenlong Deng Zhicheng Dou and Ji-Rong Wen. 2023. Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.07107. Retrieved from https:\/\/arxiv.org\/abs\/2308.07107"}],"container-title":["ACM Transactions on Recommender Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3725894","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,29]],"date-time":"2025-07-29T12:47:29Z","timestamp":1753793249000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3725894"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,29]]},"references-count":115,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,3,31]]}},"alternative-id":["10.1145\/3725894"],"URL":"https:\/\/doi.org\/10.1145\/3725894","relation":{},"ISSN":["2770-6699"],"issn-type":[{"type":"electronic","value":"2770-6699"}],"subject":[],"published":{"date-parts":[[2025,7,29]]},"assertion":[{"value":"2024-08-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-03-17","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-07-29","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}