{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,23]],"date-time":"2026-01-23T11:42:20Z","timestamp":1769168540539,"version":"3.49.0"},"reference-count":48,"publisher":"SAGE Publications","issue":"1","license":[{"start":{"date-parts":[[2025,4,16]],"date-time":"2025-04-16T00:00:00Z","timestamp":1744761600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Intelligent Data Analysis: An International Journal"],"published-print":{"date-parts":[[2026,1]]},"abstract":"<jats:p>Personalized recommendation aims to address the information overload problem by finding items of interest for users among massive amounts of information. The research paradigm of personalized recommendation has evolved from deep neural networks to pre-trained language models (PLMs) such as BERT and, more recently, to large language models (LLMs). However, locating the target item within massive amounts of data remains difficult: the search is not only time-consuming but also often inaccurate. In this paper, we propose a Personalized Recommendation method with Clustering via Prompt-tuning (PRCP), in which a candidate item set is built and a prompt-tuning model with a designed verbalizer is constructed for recommendation. Specifically, target users are first selected by similarity calculation, and items are then clustered according to the preferences of similar users to form a candidate item set. A prompt-tuning model is then introduced to predict the masked label for candidate items, and three different strategies are designed to expand the label word space for verbalizer optimization. 
Extensive experiments on three datasets validate the effectiveness of the proposed method against state-of-the-art baselines, including LLMs.<\/jats:p>","DOI":"10.1177\/1088467x251331821","type":"journal-article","created":{"date-parts":[[2025,4,16]],"date-time":"2025-04-16T16:06:15Z","timestamp":1744819575000},"page":"24-40","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":0,"title":["Personalized recommendation with clustering via prompt-tuning"],"prefix":"10.1177","volume":"30","author":[{"given":"Tongyu","family":"Wu","sequence":"first","affiliation":[{"name":"School of Information Engineering, Yangzhou University, Jiangsu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8042-780X","authenticated-orcid":false,"given":"Xiaojian","family":"Liu","sequence":"additional","affiliation":[{"name":"State Grid Shanghai Municipal Electric Power Company, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3045-2588","authenticated-orcid":false,"given":"Yi","family":"Zhu","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Yangzhou University, Jiangsu, China"},{"name":"Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology, Ministry of Education, China"},{"name":"School of Computer Science and Information Engineering, Hefei University of Technology, Anhui, China"}]},{"given":"Yun","family":"Li","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Yangzhou University, Jiangsu, China"}]},{"given":"Yunhao","family":"Yuan","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Yangzhou University, Jiangsu, China"}]},{"given":"Jipeng","family":"Qiang","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Yangzhou University, Jiangsu, 
China"}]}],"member":"179","published-online":{"date-parts":[[2025,4,16]]},"reference":[{"key":"e_1_3_4_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00354-024-00241-w"},{"key":"e_1_3_4_3_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00354-023-00238-x"},{"key":"e_1_3_4_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2021.115825"},{"key":"e_1_3_4_5_2","doi-asserted-by":"crossref","unstructured":"Penha G Hauff C. What does bert know about books movies and music? probing bert for conversational recommendation. In: Proceedings of the 14th ACM conference on recommender systems pp.388\u2013397.","DOI":"10.1145\/3383313.3412249"},{"key":"e_1_3_4_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/TETCI.2023.3300740"},{"key":"e_1_3_4_7_2","doi-asserted-by":"crossref","unstructured":"Sun F Liu J Wu J et\u00a0al. Bert4rec: sequential recommendation with bidirectional encoder representations from transformer. In: Proceedings of the 28th ACM international conference on information and knowledge management pp.1441\u20131450.","DOI":"10.1145\/3357384.3357895"},{"key":"e_1_3_4_8_2","unstructured":"Wang L Lim EP. Zero-shot next-item recommendation using large pretrained language models. arXiv preprint arXiv:230403153 2023."},{"key":"e_1_3_4_9_2","doi-asserted-by":"crossref","unstructured":"Schick T Schmid H Sch\u00fctze H. Automatically identifying words that can serve as labels for few-shot text classification. arXiv preprint arXiv:201013641 2020.","DOI":"10.18653\/v1\/2020.coling-main.488"},{"key":"e_1_3_4_10_2","doi-asserted-by":"crossref","unstructured":"Schick T Sch\u00fctze H. Exploiting cloze questions for few shot text classification and natural language inference. arXiv preprint arXiv:200107676 2020.","DOI":"10.18653\/v1\/2021.eacl-main.20"},{"key":"e_1_3_4_11_2","doi-asserted-by":"crossref","unstructured":"Teppalwar V Sahoo KC Jaiswal R et\u00a0al. A survey on personalized movie recommendation system using machine learning. 
In: International conference on smart computing and communication pp.305\u2013314. Springer.","DOI":"10.1007\/978-981-97-1320-2_25"},{"key":"e_1_3_4_12_2","doi-asserted-by":"crossref","unstructured":"Peng S Park DS Kim DY et\u00a0al. A modern recommendation system survey in the big data era. In: International conference on computer science and its applications and the international conference on ubiquitous information technologies and applications pp.577\u2013582. Springer.","DOI":"10.1007\/978-981-99-1252-0_77"},{"key":"e_1_3_4_13_2","first-page":"91","article-title":"Advances in collaborative filtering","author":"Koren Y","year":"2021","unstructured":"Koren Y, Rendle S, Bell R. Advances in collaborative filtering. Recomm Syst Handb 2021; 91\u2013142.","journal-title":"Recomm Syst Handb"},{"key":"e_1_3_4_14_2","doi-asserted-by":"publisher","DOI":"10.3390\/app14031155"},{"key":"e_1_3_4_15_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-021-02647-1"},{"key":"e_1_3_4_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2984665"},{"key":"e_1_3_4_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11704-023-2441-1"},{"key":"e_1_3_4_18_2","unstructured":"Wu X Zhou H Yao W et\u00a0al. Towards personalized cold-start recommendation with prompts. arXiv preprint arXiv:230617256 2023."},{"key":"e_1_3_4_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2023.3332787"},{"key":"e_1_3_4_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3439726"},{"key":"e_1_3_4_21_2","unstructured":"Yang W Xie Y Tan L et\u00a0al. Data augmentation for bert fine-tuning in open-domain question answering. arXiv preprint arXiv:190406652 2019."},{"key":"e_1_3_4_22_2","doi-asserted-by":"crossref","unstructured":"Weng R Yu H Huang S et\u00a0al. Acquiring knowledge from pre-trained model to neural machine translation. In: Proceedings of the AAAI conference on artificial intelligence volume 34. 
pp.9266\u20139273.","DOI":"10.1609\/aaai.v34i05.6465"},{"key":"e_1_3_4_23_2","doi-asserted-by":"crossref","unstructured":"Qiang J Li Y Zhu Y et\u00a0al. Lexical simplification with pretrained encoders. In: Proceedings of the AAAI conference on artificial intelligence volume 34 pp.8649\u20138656.","DOI":"10.1609\/aaai.v34i05.6389"},{"key":"e_1_3_4_24_2","unstructured":"Goldberg Y. Assessing bert\u2019s syntactic abilities. arXiv preprint arXiv:190105287 2019."},{"key":"e_1_3_4_25_2","unstructured":"Ma X Wang Z Ng P et\u00a0al. Universal text representation from bert: an empirical study. arXiv preprint arXiv:191007973 2019."},{"key":"e_1_3_4_26_2","doi-asserted-by":"crossref","unstructured":"Jawahar G Sagot B Seddah D. What does bert learn about the structure of language? In: ACL 2019-57th Annual meeting of the association for computational linguistics.","DOI":"10.18653\/v1\/P19-1356"},{"key":"e_1_3_4_27_2","doi-asserted-by":"crossref","unstructured":"Yuan F He X Karatzoglou A et\u00a0al. Parameter-efficient transfer from sequential behaviors for user modeling and recommendation. In: Proceedings of the 43rd International ACM SIGIR conference on research and development in information retrieval pp.1469\u20131478.","DOI":"10.1145\/3397271.3401156"},{"key":"e_1_3_4_28_2","doi-asserted-by":"crossref","unstructured":"Yao T Yi X Cheng DZ et\u00a0al. Self-supervised learning for large-scale item recommendations. In: Proceedings of the 30th ACM international conference on information & knowledge management pp.4321\u20134330.","DOI":"10.1145\/3459637.3481952"},{"key":"e_1_3_4_29_2","doi-asserted-by":"crossref","unstructured":"Wu J Wang X Feng F et\u00a0al. Self-supervised graph learning for recommendation. In: Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval pp.726\u2013735.","DOI":"10.1145\/3404835.3462862"},{"key":"e_1_3_4_30_2","doi-asserted-by":"crossref","unstructured":"Hou Y Mu S Zhao WX et\u00a0al. 
Towards universal sequence representation learning for recommender systems. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining pp.585\u2013593.","DOI":"10.1145\/3534678.3539381"},{"key":"e_1_3_4_31_2","doi-asserted-by":"crossref","unstructured":"Zhou K Wang H Zhao WX et\u00a0al. S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization. In: Proceedings of the 29th ACM international conference on information & knowledge management pp.1893\u20131902.","DOI":"10.1145\/3340531.3411954"},{"key":"e_1_3_4_32_2","unstructured":"Xiao C Xie R Yao Y et\u00a0al. UPREC: user-aware pre-training for recommender systems. arXiv preprint arXiv:210210989 2021."},{"key":"e_1_3_4_33_2","unstructured":"Li XL Liang P. Prefix-tuning: optimizing continuous prompts for generation. arXiv preprint arXiv:210100190 2021."},{"key":"e_1_3_4_34_2","doi-asserted-by":"crossref","unstructured":"Liu X Ji K Fu Y et\u00a0al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:211007602 2021.","DOI":"10.18653\/v1\/2022.acl-short.8"},{"key":"e_1_3_4_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TETCI.2024.3412998"},{"key":"e_1_3_4_36_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown T","year":"2020","unstructured":"Brown T, Mann B, Ryder N, et\u00a0al. Language models are few-shot learners. Adv Neural Inf Process Syst 2020; 33: 1877\u20131901.","journal-title":"Adv Neural Inf Process Syst"},{"key":"e_1_3_4_37_2","unstructured":"Gao T Fisch A Chen D. Making pre-trained language models better few-shot learners. 
arXiv preprint arXiv:201215723 2020."},{"key":"e_1_3_4_38_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00324"},{"key":"e_1_3_4_39_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3580488","article-title":"Personalized prompt learning for explainable recommendation","volume":"41","author":"Li L","year":"2023","unstructured":"Li L, Zhang Y, Chen L. Personalized prompt learning for explainable recommendation. ACM Trans Inform Syst 2023; 41: 1\u201326.","journal-title":"ACM Trans Inform Syst"},{"key":"e_1_3_4_40_2","doi-asserted-by":"crossref","unstructured":"Lester B Al-Rfou R Constant N. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:210408691 2021.","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"e_1_3_4_41_2","doi-asserted-by":"crossref","unstructured":"Deng M Wang J Hsieh CP et\u00a0al. Rlprompt: optimizing discrete text prompts with reinforcement learning. arXiv preprint arXiv:220512548 2022.","DOI":"10.18653\/v1\/2022.emnlp-main.222"},{"key":"e_1_3_4_42_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2022.11.003"},{"key":"e_1_3_4_43_2","doi-asserted-by":"crossref","unstructured":"Wei Y Mo T Jiang Y et\u00a0al. Eliciting knowledge from pretrained language models for prototypical prompt verbalizer. In: International conference on artificial neural networks pp.222\u2013233. Springer.","DOI":"10.1007\/978-3-031-15931-2_19"},{"key":"e_1_3_4_44_2","doi-asserted-by":"crossref","unstructured":"Hu S Ding N Wang H et\u00a0al. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. 
arXiv preprint arXiv:210802035 2021.","DOI":"10.18653\/v1\/2022.acl-long.158"},{"key":"e_1_3_4_45_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.110647"},{"key":"e_1_3_4_46_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3588767","article-title":"Knowledge-enhanced prompt-tuning for stance detection","volume":"22","author":"Huang H","year":"2023","unstructured":"Huang H, Zhang B, Li Y, et\u00a0al. Knowledge-enhanced prompt-tuning for stance detection. ACM Trans Asian Low-Resource Language Inform Process 2023; 22: 1\u201320.","journal-title":"ACM Trans Asian Low-Resource Language Inform Process"},{"key":"e_1_3_4_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2012.51"},{"key":"e_1_3_4_48_2","doi-asserted-by":"crossref","unstructured":"Koren Y. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining pp.426\u2013434.","DOI":"10.1145\/1401890.1401944"},{"key":"e_1_3_4_49_2","unstructured":"Touvron H Martin L Stone K et\u00a0al. Llama 2: open foundation and fine-tuned chat models. 
arXiv preprint arXiv:230709288 2023."}],"container-title":["Intelligent Data Analysis: An International Journal"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/1088467X251331821","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.1177\/1088467X251331821","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/1088467X251331821","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T19:34:52Z","timestamp":1769110492000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.1177\/1088467X251331821"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,16]]},"references-count":48,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,1]]}},"alternative-id":["10.1177\/1088467X251331821"],"URL":"https:\/\/doi.org\/10.1177\/1088467x251331821","relation":{},"ISSN":["1088-467X","1571-4128"],"issn-type":[{"value":"1088-467X","type":"print"},{"value":"1571-4128","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,16]]}}}