{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T00:56:03Z","timestamp":1772499363747,"version":"3.50.1"},"reference-count":66,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,3,28]],"date-time":"2025-03-28T00:00:00Z","timestamp":1743120000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["92270114, 62302321, U24B20180"],"award-info":[{"award-number":["92270114, 62302321, U24B20180"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Supercomputing Center of the USTC"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Inf. Syst."],"published-print":{"date-parts":[[2025,5,31]]},"abstract":"<jats:p>\n            Designing effective prompts can empower LLMs to understand user preferences and provide recommendations with intent comprehension and knowledge utilization capabilities. Nevertheless, recent studies predominantly concentrate on task-wise prompting, developing fixed prompt templates shared across all users in a given recommendation task (e.g., rating or ranking). Although convenient, task-wise prompting overlooks individual user differences, leading to inaccurate analysis of user interests. In this work, we introduce the concept of instance-wise prompting, aiming at personalizing discrete prompts for individual users. Toward this end, we propose Reinforced Prompt Personalization (RPP) to realize it automatically. To improve efficiency and quality, RPP personalizes prompts at the sentence level rather than searching in the vast vocabulary word-by-word. 
Specifically, RPP breaks down the prompt into four patterns, tailoring each pattern with a dedicated agent in a multi-agent framework and then combining them. The personalized prompts then interact with LLMs (the environment) iteratively to boost LLMs\u2019 recommendation performance (the reward). Beyond RPP, to improve the scalability of the action space, we propose RPP+, which dynamically refines the selected actions with LLMs throughout the iterative process. Extensive experiments on various datasets demonstrate the superiority of RPP\/RPP+ over traditional recommender models, few-shot methods, and other prompt-based methods, underscoring the significance of instance-wise prompting in LLMs for recommendation. Our code is available at\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/maowenyu-11\/RPP\">https:\/\/github.com\/maowenyu-11\/RPP<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3716320","type":"journal-article","created":{"date-parts":[[2025,2,4]],"date-time":"2025-02-04T16:04:47Z","timestamp":1738685087000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Reinforced Prompt Personalization for Recommendation with Large Language Models"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0009-0003-1348-8412","authenticated-orcid":false,"given":"Wenyu","family":"Mao","sequence":"first","affiliation":[{"name":"School of Cyber Science and Technology, University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6941-5218","authenticated-orcid":false,"given":"Jiancan","family":"Wu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4752-2629","authenticated-orcid":false,"given":"Weijian","family":"Chen","sequence":"additional","affiliation":[{"name":"Hefei Comprehensive National Science Center, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5187-9196","authenticated-orcid":false,"given":"Chongming","family":"Gao","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6148-6329","authenticated-orcid":false,"given":"Xiang","family":"Wang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8472-7992","authenticated-orcid":false,"given":"Xiangnan","family":"He","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, University of Science and Technology of China, Hefei, China"}]}],"member":"320","published-online":{"date-parts":[[2025,3,28]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"1007","volume-title":"Proceedings of the Conference on Recommender Systems (RecSys)","author":"Bao Keqin","year":"2023","unstructured":"Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. In Proceedings of the Conference on Recommender Systems (RecSys). ACM, New York, NY, 1007\u20131014."},{"key":"e_1_3_1_3_2","first-page":"1877","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)","author":"Brown Tom B.","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. 
In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 1877\u20131901."},{"key":"e_1_3_1_4_2","doi-asserted-by":"crossref","first-page":"387","DOI":"10.1145\/2043932.2044016","volume-title":"Proceedings of the Conference on Recommender Systems (RecSys)","author":"Cantador Iv\u00e1n","year":"2011","unstructured":"Iv\u00e1n Cantador, Peter Brusilovsky, and Tsvi Kuflik. 2011. Second workshop on information heterogeneity and fusion in recommender systems (HetRec2011). In Proceedings of the Conference on Recommender Systems (RecSys). ACM, New York, NY, 387\u2013388."},{"key":"e_1_3_1_5_2","first-page":"1126","volume-title":"Proceedings of the Conference on Recommender Systems (RecSys)","author":"Dai Sunhao","year":"2023","unstructured":"Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. 2023. Uncovering ChatGPT\u2019s capabilities in recommender systems. In Proceedings of the Conference on Recommender Systems (RecSys). ACM, New York, NY, 1126\u20131132."},{"key":"e_1_3_1_6_2","first-page":"3369","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Deng Mingkai","year":"2022","unstructured":"Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, and Zhiting Hu. 2022. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 3369\u20133391."},{"key":"e_1_3_1_7_2","first-page":"4171","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT (1))","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT (1)). Association for Computational Linguistics, 4171\u20134186."},{"key":"e_1_3_1_8_2","unstructured":"Chenlu Ding Jiancan Wu Yancheng Yuan Jinda Lu Kai Zhang Alex Su Xiang Wang and Xiangnan He. 2024. Unified parameter-efficient unlearning for LLMs. arXiv:2412.00383. Retrieved from https:\/\/arxiv.org\/abs\/2412.00383"},{"issue":"1","key":"e_1_3_1_9_2","first-page":"14:1","article-title":"CIRS: Bursting filter bubbles by counterfactual interactive recommender system","volume":"42","author":"Gao Chongming","year":"2024","unstructured":"Chongming Gao, Shiqi Wang, Shijun Li, Jiawei Chen, Xiangnan He, Wenqiang Lei, Biao Li, Yuan Zhang, and Peng Jiang. 2024. CIRS: Bursting filter bubbles by counterfactual interactive recommender system. ACM Trans. Inf. Syst. 42, 1 (2024), 14:1\u201314:27.","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_1_10_2","first-page":"3816","volume-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL\/IJCNLP (1))","author":"Gao Tianyu","year":"2021","unstructured":"Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL\/IJCNLP (1)). Association for Computational Linguistics, 3816\u20133830."},{"key":"e_1_3_1_11_2","unstructured":"Yunfan Gao Tao Sheng Youlin Xiang Yun Xiong Haofen Wang and Jiawei Zhang. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv:2303.14524. 
Retrieved from https:\/\/arxiv.org\/abs\/2303.14524"},{"key":"e_1_3_1_12_2","doi-asserted-by":"crossref","first-page":"299","DOI":"10.1145\/3523227.3546767","volume-title":"Proceedings of the Conference on Recommender Systems (RecSys)","author":"Geng Shijie","year":"2022","unstructured":"Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the Conference on Recommender Systems (RecSys). ACM, New York, NY, 299\u2013315."},{"issue":"4","key":"e_1_3_1_13_2","first-page":"19:1","article-title":"The MovieLens datasets: History and context","volume":"5","author":"Harper F. Maxwell","year":"2016","unstructured":"F. Maxwell Harper and Joseph A. Konstan. 2016. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. 5, 4 (2016), 19:1\u201319:19.","journal-title":"ACM Trans. Interact. Intell. Syst"},{"key":"e_1_3_1_14_2","first-page":"639","volume-title":"Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)","author":"He Xiangnan","year":"2020","unstructured":"Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). ACM, New York, NY, 639\u2013648."},{"key":"e_1_3_1_15_2","first-page":"1162","volume-title":"Proceedings of the ACM Web Conference (WWW)","author":"Hou Yupeng","year":"2023","unstructured":"Yupeng Hou, Zhankui He, Julian J. McAuley, and Wayne Xin Zhao. 2023. Learning vector-quantized item representation for transferable sequential recommenders. In Proceedings of the ACM Web Conference (WWW). 
ACM, New York, NY, 1162\u20131171."},{"key":"e_1_3_1_16_2","first-page":"585","volume-title":"Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)","author":"Hou Yupeng","year":"2022","unstructured":"Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, and Ji-Rong Wen. 2022. Towards universal sequence representation learning for recommender systems. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 585\u2013593."},{"key":"e_1_3_1_17_2","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1007\/978-3-031-56060-6_24","volume-title":"Proceedings of the European Conference on Information Retrieval (ECIR (2))","volume":"14609","author":"Hou Yupeng","year":"2024","unstructured":"Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian J. McAuley, and Wayne Xin Zhao. 2024. Large language models are zero-shot rankers for recommender systems. In Proceedings of the European Conference on Information Retrieval (ECIR (2)). Lecture Notes in Computer Science, Vol. 14609, Springer, 364\u2013381."},{"key":"e_1_3_1_18_2","first-page":"3","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","volume":"1","author":"Hu Edward J.","year":"2022","unstructured":"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In Proceedings of the International Conference on Learning Representations (ICLR). Vol. 1, OpenReview.net, 3 pages."},{"key":"e_1_3_1_19_2","unstructured":"Yuezihan Jiang Hao Yang Junyang Lin Hanyu Zhao An Yang Chang Zhou Hongxia Yang Zhi Yang and Bin Cui. 2022. Instance-wise prompt tuning for pretrained language models. arXiv:2206.01958. 
Retrieved from https:\/\/arxiv.org\/abs\/2206.01958"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00324"},{"issue":"7","key":"e_1_3_1_21_2","first-page":"199:1","article-title":"Instance-aware prompt learning for language understanding and generation","volume":"22","author":"Jin Feihu","year":"2023","unstructured":"Feihu Jin, Jinliang Lu, Jiajun Zhang, and Chengqing Zong. 2023. Instance-aware prompt learning for language understanding and generation. ACM Trans. Asian Low Resour. Lang. Inf. Process. 22, 7 (2023), 199:1\u2013199:18.","journal-title":"ACM Trans. Asian Low Resour. Lang. Inf. Process"},{"key":"e_1_3_1_22_2","first-page":"796","volume-title":"Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)","author":"Joko Hideaki","year":"2024","unstructured":"Hideaki Joko, Shubham Chatterjee, Andrew Ramsay, Arjen P. de Vries, Jeff Dalton, and Faegheh Hasibi. 2024. Doing personal LAPS: LLM-augmented dialogue construction for personalized multi-session conversational search. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). ACM, New York, NY, 796\u2013806."},{"key":"e_1_3_1_23_2","first-page":"197","volume-title":"Proceedings of the IEEE International Conference on Data Mining (ICDM)","author":"Kang Wang-Cheng","year":"2018","unstructured":"Wang-Cheng Kang and Julian J. McAuley. 2018. Self-attentive sequential recommendation. In Proceedings of the IEEE International Conference on Data Mining (ICDM). IEEE Computer Society, 197\u2013206."},{"key":"e_1_3_1_24_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Kuang Yufei","year":"2024","unstructured":"Yufei Kuang, Jie Wang, Haoyang Liu, Fangzhou Zhu, Xijun Li, Jia Zeng, Jianye Hao, Bin Li, and Feng Wu. 2024. 
Rethinking branching on exact combinatorial optimization solver: The first deep symbolic discovery framework. In Proceedings of the 12th International Conference on Learning Representations."},{"key":"e_1_3_1_25_2","series-title":"CEUR Workshop Proceedings","volume-title":"Proceedings of INRA@RecSys","volume":"3561","author":"Li Xinyi","year":"2023","unstructured":"Xinyi Li, Yongfeng Zhang, and Edward C. Malthouse. 2023. A preliminary study of ChatGPT on news recommendation: Personalization, provider fairness, and fake news. In Proceedings of INRA@RecSys. CEUR Workshop Proceedings, Vol. 3561, CEUR-WS.org."},{"key":"e_1_3_1_26_2","first-page":"1785","volume-title":"Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)","author":"Liao Jiayi","year":"2024","unstructured":"Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang, and Xiangnan He. 2024. LLaRA: Large language-recommendation assistant. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). ACM, New York, NY, 1785\u20131795."},{"key":"e_1_3_1_27_2","unstructured":"Haoyang Liu Yufei Kuang Jie Wang Xijun Li Yongdong Zhang and Feng Wu. 2023. Promoting generalization for exact solvers via adversarial instance augmentation. arXiv:2310.14161. Retrieved from https:\/\/arxiv.org\/abs\/2310.14161"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3638530.3664128"},{"key":"e_1_3_1_29_2","volume-title":"Proceedings of the 13th International Conference on Learning Representations","author":"Liu Haoyang","year":"2025","unstructured":"Haoyang Liu, Jie Wang, Zijie Geng, Xijun Li, Yuxuan Zong, Fangzhou Zhu, Jianye Hao, and Feng Wu. 2025. Apollo-MILP: An alternating prediction-correction neural solving framework for mixed-integer linear programming. In Proceedings of the 13th International Conference on Learning Representations. 
Retrieved from https:\/\/openreview.net\/forum?id=mFY0tPDWK8"},{"key":"e_1_3_1_30_2","volume-title":"Proceedings of the 38th Annual Conference on Neural Information Processing Systems","author":"Liu Haoyang","year":"2024","unstructured":"Haoyang Liu, Jie Wang, Wanbo Zhang, Zijie Geng, Yufei Kuang, Xijun Li, Bin Li, Yongdong Zhang, and Feng Wu. 2024. MILP-StuDio: MILP instance generation via block structure decomposition. In Proceedings of the 38th Annual Conference on Neural Information Processing Systems. Retrieved from https:\/\/openreview.net\/forum?id=W433RI0VU4"},{"key":"e_1_3_1_31_2","unstructured":"Junling Liu Chao Liu Renjie Lv Kang Zhou and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv:2304.10149. Retrieved from https:\/\/arxiv.org\/abs\/2304.10149"},{"key":"e_1_3_1_32_2","first-page":"100","volume-title":"Proceedings of the DeeLIO@ACL","author":"Liu Jiachang","year":"2022","unstructured":"Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of the DeeLIO@ACL. Association for Computational Linguistics, 100\u2013114."},{"key":"e_1_3_1_33_2","first-page":"61","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL (2))","author":"Liu Xiao","year":"2022","unstructured":"Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL (2)). Association for Computational Linguistics, 61\u201368."},{"key":"e_1_3_1_34_2","first-page":"6379","volume-title":"Proceedings of the Neural Information Processing Systems (NIPS)","author":"Lowe Ryan","year":"2017","unstructured":"Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. 
Multi-agent actor-critic for mixed cooperative-competitive environments. In Proceedings of the Neural Information Processing Systems (NIPS), 6379\u20136390."},{"key":"e_1_3_1_35_2","first-page":"8086","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL (1))","author":"Lu Yao","year":"2022","unstructured":"Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL (1)). Association for Computational Linguistics, 8086\u20138098."},{"key":"e_1_3_1_36_2","first-page":"46534","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)","author":"Madaan Aman","year":"2023","unstructured":"Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 46534\u201346594."},{"key":"e_1_3_1_37_2","unstructured":"Wenyu Mao Shuchang Liu Haoyang Liu Haozhe Liu Xiang Li and Lantao Hu. 2025. Distinguished quantized guidance for diffusion-based sequence recommendation. arXiv:2501.17670. Retrieved from https:\/\/arxiv.org\/abs\/2501.17670"},{"key":"e_1_3_1_38_2","unstructured":"Wenyu Mao Jiancan Wu Haoyang Liu Yongduo Sui and Xiang Wang. 2024. Invariant graph learning meets information bottleneck for out-of-distribution generalization. arXiv:2408.01697. 
Retrieved from https:\/\/arxiv.org\/abs\/2408.01697"},{"key":"e_1_3_1_39_2","first-page":"188","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP\/IJCNLP (1))","author":"Ni Jianmo","year":"2019","unstructured":"Jianmo Ni, Jiacheng Li, and Julian J. McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP\/IJCNLP (1)). Association for Computational Linguistics, 188\u2013197."},{"key":"e_1_3_1_40_2","first-page":"6149","volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Peng Baolin","year":"2018","unstructured":"Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, and Kam-Fai Wong. 2018. Adversarial advantage actor-critic model for task-completion dialogue policy learning. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 6149\u20136153."},{"key":"e_1_3_1_41_2","doi-asserted-by":"crossref","first-page":"2463","DOI":"10.18653\/v1\/D19-1250","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP\/IJCNLP (1))","author":"Petroni Fabio","year":"2019","unstructured":"Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP\/IJCNLP (1)). 
Association for Computational Linguistics, 2463\u20132473."},{"key":"e_1_3_1_42_2","first-page":"3827","volume-title":"Proceedings of the European Chapter of the Association for Computational Linguistics (EACL)","author":"Prasad Archiki","year":"2023","unstructured":"Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. 2023. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). Association for Computational Linguistics, 3827\u20133846."},{"key":"e_1_3_1_43_2","first-page":"452","volume-title":"Proceedings of the Uncertainty in Artificial Intelligence (UAI)","author":"Rendle Steffen","year":"2009","unstructured":"Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Uncertainty in Artificial Intelligence (UAI). AUAI Press, 452\u2013461."},{"issue":"4","key":"e_1_3_1_44_2","doi-asserted-by":"crossref","first-page":"333","DOI":"10.1561\/1500000019","article-title":"The probabilistic relevance framework: BM25 and beyond","volume":"3","author":"Robertson Stephen E.","year":"2009","unstructured":"Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr. 3, 4 (2009), 333\u2013389.","journal-title":"Found. Trends Inf. Retr"},{"key":"e_1_3_1_45_2","first-page":"2655","volume-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)","author":"Rubin Ohad","year":"2022","unstructured":"Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). 
Association for Computational Linguistics, 2655\u20132671."},{"key":"e_1_3_1_46_2","first-page":"255","volume-title":"Proceedings of the European Chapter of the Association for Computational Linguistics","author":"Schick Timo","year":"2021","unstructured":"Timo Schick and Hinrich Sch\u00fctze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 255\u2013269."},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-023-06647-8"},{"key":"e_1_3_1_48_2","first-page":"13153","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Shao Yunfan","year":"2023","unstructured":"Yunfan Shao, Linyang Li, Junqi Dai, and Xipeng Qiu. 2023. Character-LLM: A trainable agent for role-playing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 13153\u201313187."},{"key":"e_1_3_1_49_2","doi-asserted-by":"crossref","first-page":"4222","DOI":"10.18653\/v1\/2020.emnlp-main.346","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP (1))","author":"Shin Taylor","year":"2020","unstructured":"Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP (1)). 
Association for Computational Linguistics, 4222\u20134235."},{"key":"e_1_3_1_50_2","first-page":"8634","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)","volume":"36","author":"Shinn Noah","year":"2023","unstructured":"Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), Vol. 36, 8634\u20138652."},{"issue":"5","key":"e_1_3_1_51_2","first-page":"127:1","article-title":"Enhancing out-of-distribution generalization on graphs via causal attention learning","volume":"18","author":"Sui Yongduo","year":"2024","unstructured":"Yongduo Sui, Wenyu Mao, Shuyao Wang, Xiang Wang, Jiancan Wu, Xiangnan He, and Tat-Seng Chua. 2024. Enhancing out-of-distribution generalization on graphs via causal attention learning. ACM Trans. Knowl. Discov. Data 18, 5 (2024), 127:1\u2013127:24.","journal-title":"ACM Trans. Knowl. Discov. Data"},{"key":"e_1_3_1_52_2","first-page":"324","volume-title":"Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)","author":"Sun Zhu","year":"2024","unstructured":"Zhu Sun, Hongyang Liu, Xinghua Qu, Kaidong Feng, Yan Wang, and Yew Soon Ong. 2024. Large language models for intent-driven session recommendations. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). ACM, New York, NY, 324\u2013334."},{"key":"e_1_3_1_53_2","first-page":"2085","volume-title":"Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)","author":"Sunehag Peter","year":"2018","unstructured":"Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vin\u00edcius Flores Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, et al. 2018. 
Value-decomposition networks for cooperative multi-agent learning based on team reward. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC\/ACM, 2085\u20132087."},{"key":"e_1_3_1_54_2","unstructured":"Lei Wang and Ee-Peng Lim. 2023. Zero-shot next-item recommendation using large pretrained language models. arXiv:2304.03153. Retrieved from https:\/\/arxiv.org\/abs\/2304.03153"},{"key":"e_1_3_1_55_2","doi-asserted-by":"crossref","first-page":"5905","DOI":"10.1145\/3637528.3671506","volume-title":"Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)","author":"Wang Xiaobei","year":"2024","unstructured":"Xiaobei Wang, Shuchang Liu, Xueliang Wang, Qingpeng Cai, Lantao Hu, Han Li, Peng Jiang, Kun Gai, and Guangming Xie. 2024. Future impact decomposition in request-level recommendations. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD). ACM, New York, NY, 5905\u20135916."},{"key":"e_1_3_1_56_2","unstructured":"Ziyan Wang Yingpeng Du Zhu Sun Haoyan Chua Kaidong Feng Wenya Wang and Jie Zhang. 2024. Re2LLM: Reflective reinforcement large language model for session-based recommendation. arXiv:2403.16427. Retrieved from https:\/\/arxiv.org\/abs\/2403.16427"},{"key":"e_1_3_1_57_2","first-page":"24824","volume-title":"Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), Vol. 
35, 24824\u201324837."},{"issue":"6","key":"e_1_3_1_58_2","doi-asserted-by":"crossref","first-page":"166614","DOI":"10.1007\/s11704-021-0261-8","article-title":"Graph convolution machine for context-aware recommender system","volume":"16","author":"Wu Jiancan","year":"2022","unstructured":"Jiancan Wu, Xiangnan He, Xiang Wang, Qifan Wang, Weijian Chen, Jianxun Lian, and Xing Xie. 2022. Graph convolution machine for context-aware recommender system. Front. Comput. Sci. 16, 6 (2022), 166614.","journal-title":"Front. Comput. Sci"},{"issue":"4","key":"e_1_3_1_59_2","first-page":"98:1","article-title":"On the effectiveness of sampled Softmax loss for item recommendation","volume":"42","author":"Wu Jiancan","year":"2024","unstructured":"Jiancan Wu, Xiang Wang, Xingyu Gao, Jiawei Chen, Hongcheng Fu, and Tianyu Qiu. 2024. On the effectiveness of sampled Softmax loss for item recommendation. ACM Trans. Inf. Syst. 42, 4 (2024), 98:1\u201398:26.","journal-title":"ACM Trans. Inf. Syst"},{"key":"e_1_3_1_60_2","first-page":"651","volume-title":"Proceedings of the ACM Web Conference (WWW)","author":"Wu Jiancan","year":"2023","unstructured":"Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, and Xiangnan He. 2023. GIF: A general graph unlearning strategy via influence function. In Proceedings of the ACM Web Conference (WWW). ACM, New York, NY, 651\u2013661."},{"key":"e_1_3_1_61_2","unstructured":"Likang Wu Zhi Zheng Zhaopeng Qiu Hao Wang Hongchao Gu Tingjia Shen Chuan Qin Chen Zhu Hengshu Zhu Qi Liu et al. 2023. A survey on large language models for recommendation. arXiv:2305.19860. Retrieved from https:\/\/arxiv.org\/abs\/2305.19860"},{"key":"e_1_3_1_62_2","first-page":"17941","volume-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Yu Yakun","year":"2024","unstructured":"Yakun Yu, Shiang Qi, Baochun Li, and Di Niu. 2024. PepRec: Progressive enhancement of prompting for recommendation. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 17941\u201317953."},{"key":"e_1_3_1_63_2","doi-asserted-by":"crossref","unstructured":"Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2023. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv:2305.07001. Retrieved from https:\/\/arxiv.org\/abs\/2305.07001","DOI":"10.1145\/3708882"},{"key":"e_1_3_1_64_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"Zhang Tianjun","year":"2023","unstructured":"Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E. Gonzalez. 2023. TEMPERA: Test-time prompt editing via reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR). OpenReview.net."},{"key":"e_1_3_1_65_2","unstructured":"Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv:2303.18223. Retrieved from https:\/\/arxiv.org\/abs\/2303.18223"},{"key":"e_1_3_1_66_2","first-page":"1796","volume-title":"Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)","author":"Zhao Yuyue","year":"2024","unstructured":"Yuyue Zhao, Jiancan Wu, Xiang Wang, Wei Tang, Dingxian Wang, and Maarten de Rijke. 2024. Let me do it for you: Towards LLM empowered recommendation via tool learning. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). 
ACM, New York, NY, 1796\u20131806."},{"key":"e_1_3_1_67_2","series-title":"Proceedings of Machine Learning Research","first-page":"12697","volume-title":"Proceedings of the International Conference on Machine Learning (ICML)","volume":"139","author":"Zhao Zihao","year":"2021","unstructured":"Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. Proceedings of the International Conference on Machine Learning (ICML). Proceedings of Machine Learning Research, Vol. 139, PMLR, 12697\u201312706."}],"container-title":["ACM Transactions on Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3716320","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3716320","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T18:43:43Z","timestamp":1750272223000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3716320"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,28]]},"references-count":66,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,5,31]]}},"alternative-id":["10.1145\/3716320"],"URL":"https:\/\/doi.org\/10.1145\/3716320","relation":{},"ISSN":["1046-8188","1558-2868"],"issn-type":[{"value":"1046-8188","type":"print"},{"value":"1558-2868","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,28]]},"assertion":[{"value":"2024-09-05","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-01-23","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-03-28","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}