{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,4]],"date-time":"2026-05-04T15:06:17Z","timestamp":1777907177957,"version":"3.51.4"},"reference-count":66,"publisher":"Association for Computing Machinery (ACM)","issue":"10","license":[{"start":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T00:00:00Z","timestamp":1675296000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2023,10,31]]},"abstract":"<jats:p>Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various downstream tasks. These pretraining methods are frequently extended with recurrence, adversarial, or linguistic property masking. Recently, contrastive self-supervised training objectives have enabled successes in image representation pretraining by learning to contrast input-input pairs of augmented images as either similar or dissimilar. In NLP however, a single token augmentation can invert the meaning of a sentence during input-input contrastive learning, which led to input-output contrastive approaches that avoid the issue by instead contrasting over input-label pairs. In this primer, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, zero to few-shot learning, pretraining data-efficiency, and specific NLP tasks. We overview key contrastive learning concepts with lessons learned from prior research and structure works by applications. Finally, we point to open challenges and future directions for contrastive NLP to encourage bringing contrastive NLP pretraining closer to recent successes in image representation pretraining.<\/jats:p>","DOI":"10.1145\/3561970","type":"journal-article","created":{"date-parts":[[2022,9,7]],"date-time":"2022-09-07T11:27:02Z","timestamp":1662550022000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":63,"title":["A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned, and Perspectives"],"prefix":"10.1145","volume":"55","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4496-7307","authenticated-orcid":false,"given":"Nils","family":"Rethmeier","sequence":"first","affiliation":[{"name":"German Research Center for AI, Berlin, Germany, University of Copenhagen, Denmark, Berlin, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1562-7909","authenticated-orcid":false,"given":"Isabelle","family":"Augenstein","sequence":"additional","affiliation":[{"name":"University of Copenhagen, Copenhagen, Denmark"}]}],"member":"320","published-online":{"date-parts":[[2023,2,2]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.586"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.403"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58539-6_33"},{"key":"e_1_3_2_5_2","unstructured":"Tiffany Tianhui Cai Jonathan Frankle David J. Schwab and Ari S. Morcos. 2020. Are All Negatives Created Equal in Contrastive Instance Discrimination? 
Retrieved from https:\/\/arxiv.org\/abs\/2010.06682."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.194"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.5555\/3524938.3525087"},{"key":"e_1_3_2_8_2","volume-title":"Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920)","author":"Chen Ting","year":"2020","unstructured":"Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. Big self-supervised models are strong semi-supervised learners. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920). Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/fcbc95ccdd551da181207c0c1400c655-Abstract.html."},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.280"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.20"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.207"},{"key":"e_1_3_2_12_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201920)","author":"Deng Yuntian","year":"2020","unstructured":"Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc\u2019Aurelio Ranzato. 2020. Residual energy-based models for text generation. In Proceedings of the International Conference on Learning Representations (ICLR\u201920). Retrieved from https:\/\/openreview.net\/forum?id=B1l4SgHKDH."},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1301"},{"key":"e_1_3_2_14_2","doi-asserted-by":"crossref","unstructured":"Hongchao Fang Sicheng Wang Meng Zhou Jiayuan Ding and Pengtao Xie. 2020. CERT: Contrastive Self-supervised Learning for Language Understanding. Retrieved from https:\/\/arxiv.org\/abs\/2005.12766.","DOI":"10.36227\/techrxiv.12308378.v1"},{"key":"e_1_3_2_15_2","unstructured":"Tianyu Gao Xingcheng Yao and Danqi Chen. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. Retrieved from https:\/\/arxiv.org\/abs\/2104.08821."},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.72"},{"key":"e_1_3_2_17_2","unstructured":"Florian Graf, Christoph Hofer, Marc Niethammer, and Roland Kwitt. 2021. Dissecting supervised contrastive learning. In Proceedings of the International Conference on Machine Learning (ICML\u201921), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR."},{"key":"e_1_3_2_18_2","volume-title":"Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920)","volume":"33","author":"Grill Jean-Bastien","year":"2020","unstructured":"Jean-Bastien Grill, Florian Strub, Florent Altch\u00e9, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, koray kavukcuoglu, Remi Munos, and Michal Valko. 2020. Bootstrap your own latent\u2014A new approach to self-supervised learning. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920), H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc.
Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/f3ada80d5c4ee70142b17b8192b2958e-Paper.pdf."},{"key":"e_1_3_2_19_2","volume-title":"Proceedings of the 9th International Conference on Learning Representations (ICLR\u201921)","author":"Gunel Beliz","year":"2021","unstructured":"Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In Proceedings of the 9th International Conference on Learning Representations (ICLR\u201921). OpenReview.net. Retrieved from https:\/\/openreview.net\/forum?id=cu7IUiOhujH."},{"key":"e_1_3_2_20_2","doi-asserted-by":"crossref","unstructured":"Momchil Hardalov Arnav Arora Preslav Nakov and Isabelle Augenstein. 2021. Few-Shot Cross-Lingual Stance Detection with Sentiment-based Pre-Training. Retrieved from https:\/\/arxiv.org\/abs\/2109.06050.","DOI":"10.1609\/aaai.v36i10.21318"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"e_1_3_2_22_2","volume-title":"Proceedings of the 7th International Conference on Learning Representations (ICLR\u201919)","author":"Hjelm R. Devon","year":"2019","unstructured":"R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In Proceedings of the 7th International Conference on Learning Representations (ICLR\u201919). Retrieved from https:\/\/openreview.net\/forum?id=Bklr3j0cKX."},{"key":"e_1_3_2_23_2","unstructured":"Sara Hooker Aaron Courville Gregory Clark Yann Dauphin and Andrea Frome. 2020. What Do Compressed Deep Neural Networks Forget? Retrieved from https:\/\/arxiv.org\/pdf\/1911.05248.pdf."},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.439"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.3390\/technologies9010002"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.3390\/technologies9010002"},{"key":"e_1_3_2_27_2","unstructured":"Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the International Conference on Machine Learning (ICML\u201921), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR."},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00986"},{"key":"e_1_3_2_29_2","volume-title":"Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920)","volume":"33","author":"Jiang Ziyu","year":"2020","unstructured":"Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. 2020. Robust pre-training by adversarial contrastive learning. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920), H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/ba7e36c43aff315c00ec2b8625e3b719-Paper.pdf."},{"key":"e_1_3_2_30_2","unstructured":"Ziyu Jiang, Tianlong Chen, Bobak J. Mortazavi, and Zhangyang Wang. 2021. Self-damaging contrastive learning. In Proceedings of the International Conference on Machine Learning (ICML\u201921), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR."},{"key":"e_1_3_2_31_2","volume-title":"Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920)","author":"Khosla Prannay","year":"2020","unstructured":"Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS\u201920). Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/d89a66c7c80a29b1bdbab0f2a1a94af8-Abstract.html."},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.671"},{"key":"e_1_3_2_33_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920)","author":"Kong Lingpeng","year":"2020","unstructured":"Lingpeng Kong, Cyprien de Masson d\u2019Autume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. 2020. A mutual information maximization perspective of language representation learning. In Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920). Retrieved from https:\/\/openreview.net\/forum?id=Syx79eBKwr."},{"key":"e_1_3_2_34_2","volume-title":"A Tutorial on Energy-based Learning","author":"LeCun Yann","year":"2006","unstructured":"Yann LeCun, Sumit Chopra, Raia Hadsell, Marc Aurelio Ranzato, and Fu Jie Huang. 2006. A Tutorial on Energy-based Learning. MIT Press."},{"key":"e_1_3_2_35_2","volume-title":"Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AIStats\u201905)","author":"LeCun Yann","year":"2005","unstructured":"Yann LeCun and Fu Jie Huang. 2005. Loss functions for discriminative training of energy-based models. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AIStats\u201905). Retrieved from http:\/\/yann.lecun.com\/exdb\/publis\/pdf\/lecun-huang-05.pdf."},{"key":"e_1_3_2_36_2","volume-title":"Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AIStats\u201905)","author":"LeCun Yann","year":"2005","unstructured":"Yann LeCun and Fu Jie Huang. 2005. Loss functions for discriminative training of energy-based models. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AIStats\u201905). Retrieved from http:\/\/www.gatsby.ucl.ac.uk\/aistats\/fullpapers\/207.pdf."},{"key":"e_1_3_2_37_2","unstructured":"Junnan Li Ramprasaath R. Selvaraju Akhilesh Deepak Gotmare Shafiq R. Joty Caiming Xiong and Steven C. H. Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Retrieved from https:\/\/arxiv.org\/abs\/2107.07651."},{"key":"e_1_3_2_38_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. Retrieved from http:\/\/arxiv.org\/abs\/1907.11692."},{"key":"e_1_3_2_39_2","volume-title":"Proceedings of the (ICLR\u201918)","author":"Logeswaran Lajanugen","year":"2018","unstructured":"Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In Proceedings of the (ICLR\u201918).
Retrieved from https:\/\/openreview.net\/forum?id=rJvJXZb0W."},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467265"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/d18-1405"},{"key":"e_1_3_2_42_2","unstructured":"Yu Meng Chenyan Xiong Payal Bajaj Saurabh Tiwary Paul Bennett Jiawei Han and Xia Song. 2021. COCO-LM: Correcting and contrasting text sequences for language model pretraining. Retrieved from https:\/\/arxiv.org\/abs\/2102.08473."},{"key":"e_1_3_2_43_2","volume-title":"ICLR Workshop Track Proceedings","author":"Mikolov Tom\u00e1s","year":"2013","unstructured":"Tom\u00e1s Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR Workshop Track Proceedings. Retrieved from http:\/\/arxiv.org\/abs\/1301.3781."},{"key":"e_1_3_2_44_2","volume-title":"NeurIPS","author":"Mikolov Tom\u00e1s","year":"2013","unstructured":"Tom\u00e1s Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2013\/hash\/9aa42b31882ec039965f3c4923ce901b-Abstract.html."},{"key":"e_1_3_2_45_2","first-page":"8","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201912)","author":"Mnih Andriy","year":"2012","unstructured":"Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the International Conference on Machine Learning (ICML\u201912). Omnipress, Madison, WI, 8 pages."},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58595-2_41"},{"key":"e_1_3_2_47_2","doi-asserted-by":"crossref","unstructured":"Malte Ostendorff Nils Rethmeier Isabelle Augenstein Bela Gipp and Georg Rehm. 2022. Neighborhood contrastive learning for scientific document representations with citation embeddings. Retrieved from https:\/\/arxiv.org\/abs\/2202.06671.","DOI":"10.18653\/v1\/2022.emnlp-main.802"},{"key":"e_1_3_2_48_2","article-title":"GILE: A generalized input-label embedding for text classification","volume":"7","author":"Pappas Nikolaos","year":"2019","unstructured":"Nikolaos Pappas and James Henderson. 2019. GILE: A generalized input-label embedding for text classification. Trans. Assoc. Comput. Linguistics 7 (2019). Retrieved from https:\/\/transacl.org\/ojs\/index.php\/tacl\/article\/view\/1550.","journal-title":"Trans. Assoc. Comput. Linguistics"},{"key":"e_1_3_2_49_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201921)","author":"Qu Yanru","year":"2021","unstructured":"Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Weizhu Chen, and Jiawei Han. 2021. CoDA: Contrast-enhanced and diversity-promoting data augmentation for natural language understanding. In Proceedings of the International Conference on Learning Representations (ICLR\u201921). Retrieved from https:\/\/openreview.net\/forum?id=Ozk9MrX1hvA."},{"key":"e_1_3_2_50_2","unstructured":"Alec Radford Jong Wook Kim Chris Hallacy Aditya Ramesh Gabriel Goh Sandhini Agarwal Girish Sastry Amanda Askell Pamela Mishkin Jack Clark Gretchen Krueger and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. 
Retrieved from https:\/\/cdn.openai.com\/papers\/Learning_Transferable_Visual_Models_From_Natural_Language.pdf."},{"issue":"140","key":"e_1_3_2_51_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 140 (2020), 1\u201367. Retrieved from http:\/\/jmlr.org\/papers\/v21\/20-074.html.","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W19-5354"},{"key":"e_1_3_2_53_2","unstructured":"Xuanchi Ren Tao Yang Yuwang Wang and Wenjun Zeng. 2021. Do generative models know disentanglement? Contrastive learning is all you need. Retrieved from https:\/\/arxiv.org\/abs\/2102.10543."},{"key":"e_1_3_2_54_2","unstructured":"Nils Rethmeier and Isabelle Augenstein. 2020. Long-Tail Zero and Few-Shot Learning via Contrastive Pretraining on and for Small Data. Retrieved from https:\/\/arxiv.org\/abs\/2010.01061."},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-acl.336"},{"key":"e_1_3_2_56_2","unstructured":"Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. 2019. A theoretical analysis of contrastive unsupervised representation learning. In Proceedings of the International Conference on Machine Learning (ICML\u201919), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.), Vol. 97. PMLR."},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3475637"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.eacl-srw.11"},{"key":"e_1_3_2_59_2","unstructured":"YuSheng Su Xu Han Yankai Lin Zhengyan Zhang Zhiyuan Liu Peng Li and Maosong Sun. 2021. CSS-LM: A contrastive framework for semi-supervised fine-tuning of pre-trained language models. Retrieved from https:\/\/arxiv.org\/abs\/2102.03752."},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.36"},{"key":"e_1_3_2_61_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Tschannen Michael","year":"2020","unstructured":"Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, and Mario Lucic. 2020. On mutual information maximization for representation learning. In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=rkxoh24FPH."},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.213"},{"key":"e_1_3_2_63_2","unstructured":"A\u00e4ron van den Oord Yazhe Li and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. Retrieved from http:\/\/arxiv.org\/abs\/1807.03748."},{"key":"e_1_3_2_64_2","unstructured":"Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proceedings of the International Conference on Machine Learning (ICML\u201920), Hal Daum\u00e9 III and Aarti Singh (Eds.), Vol. 119. PMLR."},{"key":"e_1_3_2_65_2","unstructured":"Zhuofeng Wu Sinong Wang Jiatao Gu Madian Khabsa Fei Sun and Hao Ma. 2020. CLEAR: Contrastive Learning for Sentence Representation.
Retrieved from https:\/\/arxiv.org\/abs\/2012.15466."},{"key":"e_1_3_2_66_2","unstructured":"Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stephane Deny. 2021. Barlow twins: Self-supervised learning via redundancy reduction. In Proceedings of the International Conference on Machine Learning (ICML\u201921), Marina Meila and Tong Zhang (Eds.), Vol. 139. PMLR."},{"key":"e_1_3_2_67_2","unstructured":"Roland S. Zimmermann Yash Sharma Steffen Schneider Matthias Bethge and Wieland Brendel. 2021. Contrastive Learning Inverts the Data Generating Process. Retrieved from https:\/\/arxiv.org\/abs\/2102.08850."}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3561970","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3561970","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:07Z","timestamp":1750182547000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3561970"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,2]]},"references-count":66,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2023,10,31]]}},"alternative-id":["10.1145\/3561970"],"URL":"https:\/\/doi.org\/10.1145\/3561970","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,2]]},"assertion":[{"value":"2021-11-11","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-08-23","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-02-02","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}