{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T08:59:31Z","timestamp":1771577971449,"version":"3.50.1"},"reference-count":84,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2022,12,21]],"date-time":"2022-12-21T00:00:00Z","timestamp":1671580800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Science Foundation","award":["IIS1714741, CNS1815636, IIS1845081, IIS1907704, IIS1928278, IIS1955285, IOS2107215 and IOS2035472"],"award-info":[{"award-number":["IIS1714741, CNS1815636, IIS1845081, IIS1907704, IIS1928278, IIS1955285, IOS2107215 and IOS2035472"]}]},{"name":"SNAP Inc, Amazon Faculty Award"},{"name":"Cisco Faculty Award"},{"name":"The Home Depot"},{"DOI":"10.13039\/100004331","name":"Johnson&Johnson","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100004331","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Inf. Syst."],"published-print":{"date-parts":[[2023,4,30]]},"abstract":"<jats:p>\n            <jats:bold>Graph neural networks (GNNs)<\/jats:bold>\n            have been widely applied in the recommendation tasks and have achieved very appealing performance. However, most GNN-based recommendation methods suffer from the problem of data sparsity in practice. Meanwhile, pre-training techniques have achieved great success in mitigating data sparsity in various domains such as\n            <jats:bold>natural language processing (NLP)<\/jats:bold>\n            and\n            <jats:bold>computer vision (CV)<\/jats:bold>\n            . Thus, graph pre-training has the great potential to alleviate data sparsity in GNN-based recommendations. However, pre-training GNNs for recommendations faces unique challenges. 
For example, user-item interaction graphs in different recommendation tasks have distinct sets of users and items, and they often present different properties. Therefore, the mechanisms commonly used in NLP and CV to transfer knowledge from pre-training tasks to downstream tasks, such as sharing learned embeddings or feature extractors, are not directly applicable to existing GNN-based recommendation models. To tackle these challenges, we carefully design an\n            <jats:bold>adaptive graph pre-training framework for localized collaborative filtering (ADAPT)<\/jats:bold>\n            . It does not require transferring user\/item embeddings, and is able to capture both the common knowledge across different graphs and the uniqueness of each graph simultaneously. Extensive experimental results have demonstrated the effectiveness and superiority of ADAPT.\n          <\/jats:p>","DOI":"10.1145\/3555372","type":"journal-article","created":{"date-parts":[[2022,8,10]],"date-time":"2022-08-10T12:14:38Z","timestamp":1660133678000},"page":"1-27","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering"],"prefix":"10.1145","volume":"41","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9594-1919","authenticated-orcid":false,"given":"Yiqi","family":"Wang","sequence":"first","affiliation":[{"name":"Michigan State University, East Lansing, MI, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9867-1712","authenticated-orcid":false,"given":"Chaozhuo","family":"Li","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Haidian District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7765-8466","authenticated-orcid":false,"given":"Zheng","family":"Liu","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Haidian District, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5615-9545","authenticated-orcid":false,"given":"Mingzheng","family":"Li","sequence":"additional","affiliation":[{"name":"Microsoft, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7125-3898","authenticated-orcid":false,"given":"Jiliang","family":"Tang","sequence":"additional","affiliation":[{"name":"Michigan State University, East Lansing, MI, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8608-8482","authenticated-orcid":false,"given":"Xing","family":"Xie","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Haidian District, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8257-5806","authenticated-orcid":false,"given":"Lei","family":"Chen","sequence":"additional","affiliation":[{"name":"Hong Kong University of Science and Technology, Kowloon, Hong Kong, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3491-5968","authenticated-orcid":false,"given":"Philip S.","family":"Yu","sequence":"additional","affiliation":[{"name":"University of Illinois at Chicago, Chicago, IL, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,12,21]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"MovieLens. Accessed May 23 2021.https:\/\/grouplens.org\/datasets\/movielens\/."},{"key":"e_1_3_2_3_2","unstructured":"User Behavior Data from Taobao for Recommendation. Accessed May 23 2021.https:\/\/tianchi.aliyun.com\/dataset\/dataDetail?dataId=649&userId=1."},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2005.99"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/2020408.2020504"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-73078-1_44"},{"key":"e_1_3_2_7_2","article-title":"Spectral networks and locally connected networks on graphs","author":"Bruna Joan","year":"2013","unstructured":"Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2013. Spectral networks and locally connected networks on graphs. 
arXiv preprint arXiv:1312.6203 (2013).","journal-title":"arXiv preprint arXiv:1312.6203"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6243"},{"key":"e_1_3_2_9_2","first-page":"3844","volume-title":"Advances in Neural Information Processing Systems","author":"Defferrard Micha\u00ebl","year":"2016","unstructured":"Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems. 3844\u20133852."},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_11_2","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_2_12_2","first-page":"2224","volume-title":"Advances in Neural Information Processing Systems","author":"Duvenaud David K.","year":"2015","unstructured":"David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al\u00e1n Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems. 2224\u20132232."},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313488"},{"key":"e_1_3_2_14_2","first-page":"2083","volume-title":"International Conference on Machine Learning","author":"Gao Hongyang","year":"2019","unstructured":"Hongyang Gao and Shuiwang Ji. 2019. Graph U-Nets. In International Conference on Machine Learning. 
PMLR, 2083\u20132092."},{"key":"e_1_3_2_15_2","article-title":"DeepFM: A factorization-machine based neural network for CTR prediction","author":"Guo Huifeng","year":"2017","unstructured":"Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247 (2017).","journal-title":"arXiv preprint arXiv:1703.04247"},{"key":"e_1_3_2_16_2","volume-title":"Exploring Network Structure, Dynamics, and Function Using NetworkX","author":"Hagberg Aric","year":"2008","unstructured":"Aric Hagberg, Pieter Swart, and Daniel S. Chult. 2008. Exploring Network Structure, Dynamics, and Function Using NetworkX. Technical Report. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)."},{"key":"e_1_3_2_17_2","first-page":"1024","volume-title":"Advances in Neural Information Processing Systems","author":"Hamilton Will","year":"2017","unstructured":"Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 1024\u20131034."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3437963.3441738"},{"key":"e_1_3_2_19_2","first-page":"4116","volume-title":"International Conference on Machine Learning","author":"Hassani Kaveh","year":"2020","unstructured":"Kaveh Hassani and Amir Hosein Khasahmadi. 2020. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning. 
PMLR, 4116\u20134126."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00502"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/3159652.3159675"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401063"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052569"},{"key":"e_1_3_2_24_2","article-title":"Strategies for pre-training graph neural networks","author":"Hu Weihua","year":"2019","unstructured":"Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. 2019. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265 (2019).","journal-title":"arXiv preprint arXiv:1905.12265"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403237"},{"key":"e_1_3_2_26_2","article-title":"Text level graph neural network for text classification","author":"Huang Lianzhe","year":"2019","unstructured":"Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2019. Text level graph neural network for text classification. arXiv preprint arXiv:1910.02356 (2019).","journal-title":"arXiv preprint arXiv:1910.02356"},{"key":"e_1_3_2_27_2","article-title":"What makes ImageNet good for transfer learning?","author":"Huh Minyoung","year":"2016","unstructured":"Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. 2016. What makes ImageNet good for transfer learning? 
arXiv preprint arXiv:1608.08614 (2016).","journal-title":"arXiv preprint arXiv:1608.08614"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3029762"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00020"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/2487575.2487589"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-15719-7_3"},{"key":"e_1_3_2_32_2","article-title":"Adam: A method for stochastic optimization","author":"Kingma Diederik P.","year":"2014","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).","journal-title":"arXiv preprint arXiv:1412.6980"},{"key":"e_1_3_2_33_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2016","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).","journal-title":"arXiv preprint arXiv:1609.02907"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2009.263"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/CIDM.2014.7008659"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM50108.2020.00039"},{"key":"e_1_3_2_37_2","article-title":"Gated graph sequence neural networks","author":"Li Yujia","year":"2015","unstructured":"Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. 
arXiv preprint arXiv:1511.05493 (2015).","journal-title":"arXiv preprint arXiv:1511.05493"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3412012"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/3240323.3240365"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i5.16552"},{"key":"e_1_3_2_41_2","article-title":"A unified view on graph neural networks as graph signal denoising","author":"Ma Yao","year":"2020","unstructured":"Yao Ma, Xiaorui Liu, Tong Zhao, Yozen Liu, Jiliang Tang, and Neil Shah. 2020. A unified view on graph neural networks as graph signal denoising. arXiv preprint arXiv:2010.01777 (2020).","journal-title":"arXiv preprint arXiv:2010.01777"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1017\/9781108924184"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330982"},{"key":"e_1_3_2_44_2","first-page":"2579","article-title":"Visualizing data using t-SNE","volume":"9","author":"Maaten Laurens van der","year":"2008","unstructured":"Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9 (2008), 2579\u20132605.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_45_2","article-title":"Efficient estimation of word representations in vector space","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781 (2013).","journal-title":"arXiv preprint arXiv:1301.3781"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/2396761.2396817"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1038\/072342a0"},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11671"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/2623330.2623732"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01240-3_25"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403168"},{"key":"e_1_3_2_52_2","first-page":"1","article-title":"Pre-trained models for natural language processing: A survey","author":"Qiu Xipeng","year":"2020","unstructured":"Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences (2020), 1\u201326.","journal-title":"Science China Technological Sciences"},{"key":"e_1_3_2_53_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_2_54_2","unstructured":"Rahul Ragesh Sundararajan Sellamanickam Vijay Lingam Arun Iyer and Ramakrishna Bairi. 2021. User embedding based neighborhood aggregation method for inductive recommendation. (2021)."},{"key":"e_1_3_2_55_2","article-title":"BPR: Bayesian personalized ranking from implicit feedback","author":"Rendle Steffen","year":"2012","unstructured":"Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. 
arXiv preprint arXiv:1205.2618 (2012).","journal-title":"arXiv preprint arXiv:1205.2618"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/371920.372071"},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2008.2005605"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3383313.3418477"},{"key":"e_1_3_2_59_2","article-title":"Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization","author":"Sun Fan-Yun","year":"2019","unstructured":"Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. 2019. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000 (2019).","journal-title":"arXiv preprint arXiv:1908.01000"},{"key":"e_1_3_2_60_2","first-page":"9229","volume-title":"International Conference on Machine Learning","author":"Sun Yu","year":"2020","unstructured":"Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. 2020. Test-time training with self-supervision for generalization under distribution shifts. In International Conference on Machine Learning. PMLR, 9229\u20139248."},{"key":"e_1_3_2_61_2","article-title":"Graph attention networks","author":"Veli\u010dkovi\u0107 Petar","year":"2017","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. 
arXiv preprint arXiv:1710.10903 (2017).","journal-title":"arXiv preprint arXiv:1710.10903"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/2783258.2783273"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330989"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331267"},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611977172.61"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3326362"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462862"},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442381.3450015"},{"key":"e_1_3_2_69_2","unstructured":"Yunfan Wu Qi Cao Huawei Shen Shuchang Tao and Xueqi Cheng. 2021. Inductive representation based graph convolution network for collaborative filtering. (2021)."},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2978386"},{"key":"e_1_3_2_71_2","article-title":"How powerful are graph neural networks?","author":"Xu Keyulu","year":"2018","unstructured":"Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018).","journal-title":"arXiv preprint arXiv:1810.00826"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1145\/3240323.3240381"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33017370"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219890"},{"key":"e_1_3_2_75_2","first-page":"4800","volume-title":"Advances in Neural Information Processing Systems","author":"Ying Zhitao","year":"2018","unstructured":"Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. 2018. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems. 
4800\u20134810."},{"key":"e_1_3_2_76_2","article-title":"DARec: Deep domain adaptation for cross-domain recommendation via transferring rating patterns","author":"Yuan Feng","year":"2019","unstructured":"Feng Yuan, Lina Yao, and Boualem Benatallah. 2019. DARec: Deep domain adaptation for cross-domain recommendation via transferring rating patterns. arXiv preprint arXiv:1905.10760 (2019).","journal-title":"arXiv preprint arXiv:1905.10760"},{"key":"e_1_3_2_77_2","first-page":"5165","article-title":"Link prediction based on graph neural networks","volume":"31","author":"Zhang Muhan","year":"2018","unstructured":"Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems 31 (2018), 5165\u20135175.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8852049"},{"key":"e_1_3_2_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2875144"},{"key":"e_1_3_2_80_2","article-title":"When do you need billions of words of pretraining data?","author":"Zhang Yian","year":"2020","unstructured":"Yian Zhang, Alex Warstadt, Haau-Sing Li, and Samuel R. Bowman. 2020. When do you need billions of words of pretraining data? arXiv preprint arXiv:2011.04946 (2020).","journal-title":"arXiv preprint arXiv:2011.04946"},{"key":"e_1_3_2_81_2","article-title":"Deep learning on graphs: A survey","author":"Zhang Ziwei","year":"2020","unstructured":"Ziwei Zhang, Peng Cui, and Wenwu Zhu. 2020. Deep learning on graphs: A survey. 
IEEE Transactions on Knowledge and Data Engineering (2020).","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"e_1_3_2_82_2","doi-asserted-by":"publisher","DOI":"10.5555\/2891460.2891628"},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE48307.2020.00019"},{"key":"e_1_3_2_84_2","first-page":"3001","volume-title":"IJCAI","author":"Zhu Feng","year":"2020","unstructured":"Feng Zhu, Yan Wang, Chaochao Chen, Guanfeng Liu, and Xiaolin Zheng. 2020. A graphical and attentional framework for dual-target cross-domain recommendation. In IJCAI. 3001\u20133008."},{"key":"e_1_3_2_85_2","article-title":"Cross-domain recommendation: Challenges, progress, and prospects","author":"Zhu Feng","year":"2021","unstructured":"Feng Zhu, Yan Wang, Chaochao Chen, Jun Zhou, Longfei Li, and Guanfeng Liu. 2021. Cross-domain recommendation: Challenges, progress, and prospects. arXiv preprint arXiv:2103.01696 (2021).","journal-title":"arXiv preprint arXiv:2103.01696"}],"container-title":["ACM Transactions on Information 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3555372","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3555372","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:02Z","timestamp":1750182542000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3555372"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,21]]},"references-count":84,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,4,30]]}},"alternative-id":["10.1145\/3555372"],"URL":"https:\/\/doi.org\/10.1145\/3555372","relation":{},"ISSN":["1046-8188","1558-2868"],"issn-type":[{"value":"1046-8188","type":"print"},{"value":"1558-2868","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,21]]},"assertion":[{"value":"2021-11-15","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-07-16","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-21","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}