{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T15:33:45Z","timestamp":1774539225909,"version":"3.50.1"},"reference-count":75,"publisher":"Association for Computing Machinery (ACM)","issue":"2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Recomm. Syst."],"published-print":{"date-parts":[[2026,6,30]]},"abstract":"<jats:p>\n                    Traditional click-through rate (CTR) prediction models convert the tabular data into one-hot vectors and leverage the collaborative relations among features for inferring the user\u2019s preference over items. This modeling paradigm discards essential semantic information. Though some works like P5 and KAR have explored the potential of using Pre-trained Language Models (PLMs) to extract semantic signals for CTR prediction, they are computationally expensive and suffer from low efficiency. Besides, the beneficial collaborative relations are not considered, hindering the recommendation performance. To solve these problems, in this article, we propose a novel framework\n                    <jats:bold>CTRL<\/jats:bold>\n                    , which is industrial-friendly and model-agnostic with superior inference efficiency. Specifically, the original tabular data is first converted into textual data. Both tabular data and converted textual data are regarded as two different modalities and are separately fed into the collaborative CTR model and PLM. A cross-modal knowledge alignment procedure is performed to fine-grained align and integrate the collaborative and semantic signals, and the lightweight collaborative model can be deployed online for efficient serving after fine-tuning with supervised signals. Experimental results on three public datasets show that CTRL outperforms the state-of-the-art (SOTA) CTR models significantly. Moreover, we further verify its effectiveness on a large-scale industrial recommender system.\n                  <\/jats:p>","DOI":"10.1145\/3713080","type":"journal-article","created":{"date-parts":[[2025,2,3]],"date-time":"2025-02-03T04:50:41Z","timestamp":1738558241000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["CTRL: Connect Collaborative and Language Model for CTR Prediction"],"prefix":"10.1145","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2862-0239","authenticated-orcid":false,"given":"Xiangyang","family":"Li","sequence":"first","affiliation":[{"name":"Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3750-2533","authenticated-orcid":false,"given":"Bo","family":"Chen","sequence":"additional","affiliation":[{"name":"Huawei Noah's Ark Lab, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4694-1821","authenticated-orcid":false,"given":"Lu","family":"Hou","sequence":"additional","affiliation":[{"name":"Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9224-2431","authenticated-orcid":false,"given":"Ruiming","family":"Tang","sequence":"additional","affiliation":[{"name":"Huawei Noah's Ark Lab, Huawei Technologies Co Ltd","place":["Shenzhen, China"]}]}],"member":"320","published-online":{"date-parts":[[2025,11,22]]},"reference":[{"key":"e_1_3_2_2_2","article-title":"TALLRec: An effective and efficient tuning framework to align large language model with recommendation","author":"Bao Keqin","year":"2023","unstructured":"Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447 (2023).","journal-title":"arXiv preprint arXiv:2305.00447"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","unstructured":"Tom B. Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al.2020. Language Models Are Few-shot Learners. DOI:10.48550\/ARXIV.2005.14165","DOI":"10.48550\/ARXIV.2005.14165"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3481915"},{"key":"e_1_3_2_5_2","first-page":"1597","volume-title":"Proceedings of ICML","author":"Chen Ting","year":"2020","unstructured":"Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of ICML. PMLR, 1597\u20131607."},{"key":"e_1_3_2_6_2","article-title":"PALR: Personalization aware LLMs for recommendation","author":"Chen Zheng","year":"2023","unstructured":"Zheng Chen. 2023. PALR: Personalization aware LLMs for recommendation. arXiv preprint arXiv:2305.07622 (2023).","journal-title":"arXiv preprint arXiv:2305.07622"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/2988450.2988454"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1958.tb00292.x"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","unstructured":"Zeyu Cui Jianxin Ma Chang Zhou Jingren Zhou and Hongxia Yang. 2022. M6-Rec: Generative Pretrained Language Models Are Open-ended Recommender Systems. DOI:10.48550\/ARXIV.2205.08084","DOI":"10.48550\/ARXIV.2205.08084"},{"key":"e_1_3_2_10_2","article-title":"Bert: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.26"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISIT.2004.1365067"},{"key":"e_1_3_2_13_2","article-title":"Learning piece-wise linear models from large scale data for ad click prediction","author":"Gai Kun","year":"2017","unstructured":"Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, and Zhe Wang. 2017. Learning piece-wise linear models from large scale data for ad click prediction. arXiv preprint arXiv:1704.05194 (2017).","journal-title":"arXiv preprint arXiv:1704.05194"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.552"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3523227.3546767"},{"key":"e_1_3_2_16_2","unstructured":"Thore Graepel Joaquin Quinonero Candela Thomas Borchert and Ralf Herbrich. 2010. Web-scale Bayesian Click-through Rate Prediction for Sponsored Search Advertising in Microsoft\u2019s Bing Search Engine. Omnipress."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467077"},{"key":"e_1_3_2_18_2","article-title":"DeepFM: A factorization-machine based neural network for CTR prediction","author":"Guo Huifeng","year":"2017","unstructured":"Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247 (2017).","journal-title":"arXiv preprint arXiv:1703.04247"},{"key":"e_1_3_2_19_2","first-page":"297","volume-title":"Proceedings of AISTATS","author":"Gutmann Michael","year":"2010","unstructured":"Michael Gutmann and Aapo Hyv\u00e4rinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of AISTATS. JMLR Workshop and Conference Proceedings, 297\u2013304."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591669"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/2648584.2648589"},{"key":"e_1_3_2_23_2","first-page":"187","volume-title":"Proceedings of the 22nd Nordic Conference on Computational Linguistics","author":"Hoang Mickel","year":"2019","unstructured":"Mickel Hoang, Oskar Alija Bihorac, and Jacobo Rouces. 2019. Aspect-based sentiment analysis using Bert. In Proceedings of the 22nd Nordic Conference on Computational Linguistics. 187\u2013196."},{"key":"e_1_3_2_24_2","article-title":"Large language models are zero-shot rankers for recommender systems","author":"Hou Yupeng","year":"2023","unstructured":"Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845 (2023).","journal-title":"arXiv preprint arXiv:2305.08845"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3544903.3544906"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298689.3347043"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298689.3347043"},{"key":"e_1_3_2_28_2","first-page":"448","volume-title":"ICML","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. PMLR, 448\u2013456."},{"key":"e_1_3_2_29_2","article-title":"Tinybert: Distilling Bert for natural language understanding","author":"Jiao Xiaoqi","year":"2019","unstructured":"Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling Bert for natural language understanding. arXiv preprint arXiv:1909.10351 (2019).","journal-title":"arXiv preprint arXiv:1909.10351"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3411912"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. DOI:10.48550\/ARXIV.1412.6980","DOI":"10.48550\/ARXIV.1412.6980"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1038\/44565"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557072"},{"key":"e_1_3_2_34_2","article-title":"Low resource style transfer via domain adaptive meta learning","author":"Li Xiangyang","year":"2022","unstructured":"Xiangyang Li, Xiang Long, Yu Xia, and Sujian Li. 2022. Low resource style transfer via domain adaptive meta learning. arXiv preprint arXiv:2205.12475 (2022).","journal-title":"arXiv preprint arXiv:2205.12475"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-73696-5_11"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220023"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313497"},{"key":"e_1_3_2_38_2","article-title":"Disentangled multimodal representation learning for recommendation","author":"Liu Fan","year":"2022","unstructured":"Fan Liu, Huilin Chen, Zhiyong Cheng, Anan Liu, Liqiang Nie, and Mohan Kankanhalli. 2022. Disentangled multimodal representation learning for recommendation. IEEE Transactions on Multimedia (2022).","journal-title":"IEEE Transactions on Multimedia"},{"key":"e_1_3_2_39_2","article-title":"PTab: Using the pre-trained language model for modeling tabular data","author":"Liu Guang","year":"2022","unstructured":"Guang Liu, Jie Yang, and Ledell Wu. 2022. PTab: Using the pre-trained language model for modeling tabular data. arXiv preprint arXiv:2209.08060 (2022).","journal-title":"arXiv preprint arXiv:2209.08060"},{"key":"e_1_3_2_40_2","article-title":"Is ChatGPT a good recommender? A preliminary study","author":"Liu Junling","year":"2023","unstructured":"Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149 (2023).","journal-title":"arXiv preprint arXiv:2304.10149"},{"key":"e_1_3_2_41_2","article-title":"Roberta: A robustly optimized bert pretraining approach","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).","journal-title":"arXiv preprint arXiv:1907.11692"},{"key":"e_1_3_2_42_2","article-title":"Decoupled weight decay regularization","author":"Loshchilov Ilya","year":"2017","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017).","journal-title":"arXiv preprint arXiv:1711.05101"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403207"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/2487575.2488200"},{"key":"e_1_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2018.03.058"},{"key":"e_1_3_2_46_2","volume-title":"NeurIPS Efficient Natural Language and Speech Processing Workshop","author":"Muhamed Aashiq","year":"2021","unstructured":"Aashiq Muhamed, Iman Keivanloo, Sujan Perera, James Mracek, Yi Xu, Qingjun Cui, Santosh Rajagopalan, Belinda Zeng, and Trishul Chilimbi. 2021. CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models. In NeurIPS Efficient Natural Language and Speech Processing Workshop."},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1018"},{"key":"e_1_3_2_48_2","article-title":"Training language models to follow instructions with human feedback","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et\u00a0al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 (2022).","journal-title":"arXiv preprint arXiv:2203.02155"},{"key":"e_1_3_2_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3532050"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2016.0151"},{"key":"e_1_3_2_51_2","first-page":"8748","volume-title":"International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning. PMLR, 8748\u20138763."},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","unstructured":"Colin Raffel Noam Shazeer Adam Roberts Katherine Lee Sharan Narang Michael Matena Yanqi Zhou Wei Li and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-text Transformer. DOI:10.48550\/ARXIV.1910.10683","DOI":"10.48550\/ARXIV.1910.10683"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDM.2010.127"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357925"},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2670313"},{"key":"e_1_3_2_56_2","article-title":"Is ChatGPT good at search? Investigating large language models as re-ranking agent","author":"Sun Weiwei","year":"2023","unstructured":"Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542 (2023).","journal-title":"arXiv preprint arXiv:2304.09542"},{"key":"e_1_3_2_57_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et al. 2023. LLaMA: Open and Efficient Foundation Language Models. arxiv:2302.13971 [cs.CL]"},{"issue":"11","key":"e_1_3_2_58_2","article-title":"Visualizing data using t-SNE.","volume":"9","author":"Maaten Laurens Van der","year":"2008","unstructured":"Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9, 11 (2008).","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_59_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_60_2","unstructured":"Alex Wang Yada Pruksachatkun Nikita Nangia Amanpreet Singh Julian Michael Felix Hill Omer Levy and Samuel R. Bowman. 2020. SuperGLUE: A Stickier Benchmark for General-purpose Language Understanding Systems. arxiv:1905.00537 [cs.CL]"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3124749.3124754"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330989"},{"key":"e_1_3_2_63_2","unstructured":"Yunjia Xi Weiwen Liu Jianghao Lin Xiaoling Cai Hong Zhu Jieming Zhu Bo Chen Ruiming Tang Weinan Zhang Rui Zhang and Yong Yu. 2023. Towards Open-world Recommendation with Knowledge Augmentation from Large Language Models. arxiv:2306.10933 [cs.IR]"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/545"},{"key":"e_1_3_2_65_2","article-title":"BERT post-training for review reading comprehension and aspect-based sentiment analysis","author":"Xu Hu","year":"2019","unstructured":"Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. arXiv preprint arXiv:1904.02232 (2019).","journal-title":"arXiv preprint arXiv:1904.02232"},{"key":"e_1_3_2_66_2","article-title":"FILIP: Fine-grained interactive language-image pre-training","author":"Yao Lewei","year":"2021","unstructured":"Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2021. FILIP: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783 (2021).","journal-title":"arXiv preprint arXiv:2111.07783"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1145\/3580305.3599277"},{"key":"e_1_3_2_68_2","unstructured":"Yantao Yu Weipeng Wang Zhoutian Feng and Daiyue Xue. 2021. A dual augmented two-tower model for online large-scale recommendation. (2021)."},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/585"},{"key":"e_1_3_2_70_2","article-title":"Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation","author":"Zhang Jizhi","year":"2023","unstructured":"Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation. arXiv preprint arXiv:2305.07609 (2023).","journal-title":"arXiv preprint arXiv:2305.07609"},{"key":"e_1_3_2_71_2","article-title":"Recommendation as instruction following: A large language model empowered recommendation approach","author":"Zhang Junjie","year":"2023","unstructured":"Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2023. Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001 (2023).","journal-title":"arXiv preprint arXiv:2305.07001"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-30671-1_4"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/2623330.2623633"},{"key":"e_1_3_2_74_2","unstructured":"Yuhui Zhang Hao Ding Zeren Shui Yifei Ma James Zou Anoop Deoras and Hao Wang. 2021. Language models as recommender systems: Evaluations and limitations. (2021)."},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3219823"},{"key":"e_1_3_2_76_2","article-title":"AIM: Automatic interaction machine for click-through rate prediction","author":"Zhu Chenxu","year":"2021","unstructured":"Chenxu Zhu, Bo Chen, Weinan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, and Yong Yu. 2021. AIM: Automatic interaction machine for click-through rate prediction. IEEE Transactions on Knowledge and Data Engineering (2021).","journal-title":"IEEE Transactions on Knowledge and Data Engineering"}],"container-title":["ACM Transactions on Recommender Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3713080","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T16:50:51Z","timestamp":1763830251000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3713080"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,22]]},"references-count":75,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,6,30]]}},"alternative-id":["10.1145\/3713080"],"URL":"https:\/\/doi.org\/10.1145\/3713080","relation":{},"ISSN":["2770-6699"],"issn-type":[{"value":"2770-6699","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,22]]},"assertion":[{"value":"2024-03-30","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-12-17","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-11-22","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}