{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T00:18:42Z","timestamp":1759969122442,"version":"build-2065373602"},"publisher-location":"New York, NY, USA","reference-count":55,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T00:00:00Z","timestamp":1746662400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,5,8]]},"DOI":"10.1145\/3701716.3715233","type":"proceedings-article","created":{"date-parts":[[2025,5,23]],"date-time":"2025-05-23T16:20:01Z","timestamp":1748017201000},"page":"520-529","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Hierarchical Prompt Decision Transformer: Improving Few-Shot Policy Generalization with Global and Adaptive Guidance"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-3444-5872","authenticated-orcid":false,"given":"Zhe","family":"Wang","sequence":"first","affiliation":[{"name":"University of Virginia, Charlottesville, Virginia, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9679-0144","authenticated-orcid":false,"given":"Haozhu","family":"Wang","sequence":"additional","affiliation":[{"name":"Amazon, Santa Clara, California, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5796-7453","authenticated-orcid":false,"given":"Yanjun","family":"Qi","sequence":"additional","affiliation":[{"name":"Amazon, Arlington, Virginia, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,5,23]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"Recent advances in hierarchical reinforcement learning. Discrete event dynamic systems","author":"Barto Andrew G","year":"2003","unstructured":"Andrew G Barto and Sridhar Mahadevan. 2003. Recent advances in hierarchical reinforcement learning. Discrete event dynamic systems, Vol. 13, 1--2 (2003), 41--77."},{"key":"e_1_3_2_2_2_1","volume-title":"A meta-transfer objective for learning to disentangle causal mechanisms. arXiv preprint arXiv:1901.10912","author":"Bengio Yoshua","year":"2019","unstructured":"Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Rosemary Ke, S\u00e9bastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. 2019. A meta-transfer objective for learning to disentangle causal mechanisms. arXiv preprint arXiv:1901.10912 (2019)."},{"key":"e_1_3_2_2_3_1","volume-title":"Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al.","author":"Borgeaud Sebastian","year":"2022","unstructured":"Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In ICML. PMLR, 2206--2240."},{"key":"e_1_3_2_2_4_1","first-page":"1542","article-title":"When does return-conditioned supervised learning work for offline reinforcement learning","volume":"35","author":"Brandfonbrener David","year":"2022","unstructured":"David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. 2022. When does return-conditioned supervised learning work for offline reinforcement learning? NeurIPS, Vol. 
35 (2022), 1542--1553.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_5_1","first-page":"15084","article-title":"Decision transformer: Reinforcement learning via sequence modeling","volume":"34","author":"Chen Lili","year":"2021","unstructured":"Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. 2021. Decision transformer: Reinforcement learning via sequence modeling. NeurIPS, Vol. 34 (2021), 15084--15097.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS55552.2023.10342230"},{"key":"e_1_3_2_2_7_1","volume-title":"NeurIPS","volume":"32","author":"Eysenbach Ben","year":"2019","unstructured":"Ben Eysenbach, Russ R Salakhutdinov, and Sergey Levine. 2019. Search on the replay buffer: Bridging planning and reinforcement learning. NeurIPS, Vol. 32 (2019)."},{"key":"e_1_3_2_2_8_1","volume-title":"Generalized decision transformer for offline hindsight information matching. arXiv preprint arXiv:2111.10364","author":"Furuta Hiroki","year":"2021","unstructured":"Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. 2021. Generalized decision transformer for offline hindsight information matching. arXiv preprint arXiv:2111.10364 (2021)."},{"key":"e_1_3_2_2_9_1","volume-title":"Adria Puigdomenech Badia, Arthur Guez, Mehdi Mirza, Peter C Humphreys, Ksenia Konyushova, et al.","author":"Goyal Anirudh","year":"2022","unstructured":"Anirudh Goyal, Abram Friesen, Andrea Banino, Theophane Weber, Nan Rosemary Ke, Adria Puigdomenech Badia, Arthur Guez, Mehdi Mirza, Peter C Humphreys, Ksenia Konyushova, et al. 2022. Retrieval-augmented reinforcement learning. In ICML. PMLR, 7740--7765."},{"key":"e_1_3_2_2_10_1","volume-title":"Long-range transformers for dynamic spatiotemporal forecasting. arXiv preprint arXiv:2109.12218","author":"Grigsby Jake","year":"2021","unstructured":"Jake Grigsby, Zhe Wang, Nam Nguyen, and Yanjun Qi. 2021. Long-range transformers for dynamic spatiotemporal forecasting. arXiv preprint arXiv:2109.12218 (2021)."},{"key":"e_1_3_2_2_11_1","unstructured":"Kelvin Guu Kenton Lee Zora Tung Panupong Pasupat and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In ICML. PMLR 3929--3938."},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"crossref","unstructured":"Bernhard Hengst. 2010. Hierarchical Reinforcement Learning.","DOI":"10.1007\/978-0-387-30164-8_363"},{"key":"e_1_3_2_2_13_1","volume-title":"Prompt-Tuning Decision Transformer with Preference Ranking. arXiv preprint arXiv:2305.09648","author":"Hu Shengchao","year":"2023","unstructured":"Shengchao Hu, Li Shen, Ya Zhang, and Dacheng Tao. 2023. Prompt-Tuning Decision Transformer with Preference Ranking. arXiv preprint arXiv:2305.09648 (2023)."},{"key":"e_1_3_2_2_14_1","volume-title":"Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282","author":"Izacard Gautier","year":"2020","unstructured":"Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282 (2020)."},{"key":"e_1_3_2_2_15_1","volume-title":"Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299","author":"Izacard Gautier","year":"2022","unstructured":"Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. 
Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299 (2022)."},{"key":"e_1_3_2_2_16_1","unstructured":"Michael Janner Qiyang Li and Sergey Levine. 2021. Offline Reinforcement Learning as One Big Sequence Modeling Problem. In NeurIPS."},{"key":"e_1_3_2_2_17_1","unstructured":"Daniel Kahneman. 2011. Thinking, Fast and Slow. Macmillan."},{"key":"e_1_3_2_2_18_1","volume-title":"Time2vec: Learning a vector representation of time. arXiv preprint arXiv:1907.05321","author":"Kazemi Seyed Mehran","year":"2019","unstructured":"Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. 2019. Time2vec: Learning a vector representation of time. arXiv preprint arXiv:1907.05321 (2019)."},{"key":"e_1_3_2_2_19_1","volume-title":"Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172","author":"Khandelwal Urvashi","year":"2019","unstructured":"Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172 (2019)."},{"key":"e_1_3_2_2_20_1","volume-title":"When should we prefer offline reinforcement learning over behavioral cloning? arXiv preprint arXiv:2204.05618","author":"Kumar Aviral","year":"2022","unstructured":"Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. 2022. When should we prefer offline reinforcement learning over behavioral cloning? arXiv preprint arXiv:2204.05618 (2022)."},{"key":"e_1_3_2_2_21_1","unstructured":"Michael Laskin Luyu Wang Junhyuk Oh Emilio Parisotto Stephen Spencer Richie Steigerwald DJ Strouse Steven Hansen Angelos Filos Ethan Brooks et al. 2022. In-context reinforcement learning with algorithm distillation. arXiv preprint arXiv:2210.14215 (2022)."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"crossref","unstructured":"Brian Lester Rami Al-Rfou and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP Marie-Francine Moens Xuanjing Huang Lucia Specia and Scott Wen-tau Yih (Eds.). ACL Online and Punta Cana Dominican Republic.","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"e_1_3_2_2_23_1","volume-title":"Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643","author":"Levine Sergey","year":"2020","unstructured":"Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. 2020. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643 (2020)."},{"key":"e_1_3_2_2_24_1","first-page":"9459","article-title":"Retrieval-augmented generation for knowledge-intensive NLP tasks","volume":"33","author":"Lewis Patrick","year":"2020","unstructured":"Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. NeurIPS, Vol. 33 (2020), 9459--9474.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_25_1","volume-title":"Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer","author":"Li Juncen","year":"2018","unstructured":"Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer. In North American Association for Computational Linguistics (NAACL). 
https:\/\/nlp.stanford.edu\/pubs\/li2018transfer.pdf"},{"key":"e_1_3_2_2_26_1","volume-title":"FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization. In ICLR. https:\/\/openreview.net\/forum?id=8cpHIfgY4Dj","author":"Li Lanqing","year":"2021","unstructured":"Lanqing Li, Rui Yang, and Dijun Luo. 2021. FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization. In ICLR. https:\/\/openreview.net\/forum?id=8cpHIfgY4Dj"},{"key":"e_1_3_2_2_27_1","volume-title":"Transformers as decision makers: Provable in-context reinforcement learning via supervised pretraining. arXiv preprint arXiv:2310.08566","author":"Lin Licong","year":"2023","unstructured":"Licong Lin, Yu Bai, and Song Mei. 2023. Transformers as decision makers: Provable in-context reinforcement learning via supervised pretraining. arXiv preprint arXiv:2310.08566 (2023)."},{"key":"e_1_3_2_2_28_1","volume-title":"Sergey Levine, and Chelsea Finn.","author":"Mitchell Eric","year":"2021","unstructured":"Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, and Chelsea Finn. 2021. Offline meta-reinforcement learning with advantage weighting. In ICML. PMLR, 7780--7791."},{"key":"e_1_3_2_2_29_1","unstructured":"Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. In ICML. PMLR 2554--2563."},{"key":"e_1_3_2_2_30_1","volume-title":"Honglak Lee, and Sergey Levine.","author":"Nachum Ofir","year":"2018","unstructured":"Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. 2018. Data-efficient hierarchical reinforcement learning. NeurIPS, Vol. 31 (2018)."},{"key":"e_1_3_2_2_31_1","volume-title":"International Conference on Machine Learning. PMLR, 26087--26105","author":"Ni Fei","year":"2023","unstructured":"Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, and Zhixuan Liang. 2023. MetaDiffuser: Diffusion model as conditional planner for offline meta-RL. In International Conference on Machine Learning. PMLR, 26087--26105."},{"key":"e_1_3_2_2_32_1","first-page":"38966","article-title":"You can't count on luck: Why decision transformers and RvS fail in stochastic environments","volume":"35","author":"Paster Keiran","year":"2022","unstructured":"Keiran Paster, Sheila McIlraith, and Jimmy Ba. 2022. You can't count on luck: Why decision transformers and RvS fail in stochastic environments. NeurIPS, Vol. 35 (2022), 38966--38979.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_33_1","volume-title":"Text generation with exemplar-based adaptive decoding. arXiv preprint arXiv:1904.04428","author":"Peng Hao","year":"2019","unstructured":"Hao Peng, Ankur P Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text generation with exemplar-based adaptive decoding. arXiv preprint arXiv:1904.04428 (2019)."},{"key":"e_1_3_2_2_34_1","volume-title":"Marcos ROA Maximo, and Esther Luna Colombini","author":"Prudencio Rafael Figueiredo","year":"2023","unstructured":"Rafael Figueiredo Prudencio, Marcos ROA Maximo, and Esther Luna Colombini. 2023. A survey on offline reinforcement learning: Taxonomy, review, and open problems. IEEE Transactions on Neural Networks and Learning Systems (2023)."},{"key":"e_1_3_2_2_35_1","unstructured":"Alec Radford Jeffrey Wu Rewon Child David Luan Dario Amodei Ilya Sutskever et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, Vol. 
1, 8 (2019), 9."},{"key":"e_1_3_2_2_36_1","volume-title":"Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al.","author":"Reed Scott","year":"2022","unstructured":"Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. 2022. A generalist agent. arXiv preprint arXiv:2205.06175 (2022)."},{"key":"e_1_3_2_2_37_1","volume-title":"ProMP: Proximal meta-policy search. arXiv preprint arXiv:1810.06784","author":"Rothfuss Jonas","year":"2018","unstructured":"Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. 2018. ProMP: Proximal meta-policy search. arXiv preprint arXiv:1810.06784 (2018)."},{"key":"e_1_3_2_2_38_1","volume-title":"Deep RL Workshop NeurIPS","author":"Schweighofer Kajetan","year":"2021","unstructured":"Kajetan Schweighofer, Markus Hofmarcher, Marius-Constantin Dinu, Philipp Renz, Angela Bitto-Nemling, Vihang Prakash Patil, and Sepp Hochreiter. 2021. Understanding the effects of dataset characteristics on offline reinforcement learning. In Deep RL Workshop NeurIPS 2021."},{"key":"e_1_3_2_2_39_1","volume-title":"Advances in Neural Information Processing Systems","volume":"36","author":"Shen Yongliang","year":"2024","unstructured":"Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2024. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. Advances in Neural Information Processing Systems, Vol. 36 (2024)."},{"key":"e_1_3_2_2_40_1","unstructured":"Shagun Sodhani Amy Zhang and Joelle Pineau. 2021. Multi-task reinforcement learning with context-based representations. In ICML. PMLR 9767--9779."},{"volume-title":"Reinforcement learning: An introduction","author":"Sutton Richard S","key":"e_1_3_2_2_41_1","unstructured":"Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT Press."},{"key":"e_1_3_2_2_42_1","volume-title":"Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles. arXiv preprint arXiv:2303.03751","author":"Tang Zhiwei","year":"2023","unstructured":"Zhiwei Tang, Dmitry Rybin, and Tsung-Hui Chang. 2023. Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles. arXiv preprint arXiv:2303.03751 (2023)."},{"key":"e_1_3_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2012.6386109"},{"key":"e_1_3_2_2_44_1","unstructured":"Alexander Sasha Vezhnevets Simon Osindero Tom Schaul Nicolas Heess Max Jaderberg David Silver and Koray Kavukcuoglu. 2017. Feudal networks for hierarchical reinforcement learning. In ICML. PMLR 3540--3549."},{"key":"e_1_3_2_2_45_1","first-page":"34748","article-title":"Bootstrapped transformer for offline reinforcement learning","volume":"35","author":"Wang Kerong","year":"2022","unstructured":"Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, and Dongsheng Li. 2022b. Bootstrapped transformer for offline reinforcement learning. NeurIPS, Vol. 35 (2022), 34748--34761.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_46_1","unstructured":"Yiqi Wang Mengdi Xu Laixi Shi and Yuejie Chi. 2023. A Trajectory is Worth Three Sentences: Multimodal Transformer for Offline Reinforcement Learning. In UAI. https:\/\/openreview.net\/forum?id=yE1_GpmDOPL"},{"key":"e_1_3_2_2_47_1","unstructured":"Zhe Wang Jake Grigsby Arshdeep Sekhon and Yanjun Qi. 2022a. 
ST-MAML: A Stochastic-Task based Method for Task-Heterogeneous Meta-Learning. In UAI. https:\/\/openreview.net\/forum?id=rrlMyPUs9gc"},{"key":"e_1_3_2_2_48_1","unstructured":"Mengdi Xu Yikang Shen Shun Zhang Yuchen Lu Ding Zhao Joshua Tenenbaum and Chuang Gan. 2022. Prompting Decision Transformer for Few-Shot Policy Generalization. In ICML. PMLR 24631--24645."},{"key":"e_1_3_2_2_49_1","volume-title":"ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629","author":"Yao Shunyu","year":"2022","unstructured":"Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2022)."},{"key":"e_1_3_2_2_50_1","first-page":"5824","article-title":"Gradient surgery for multi-task learning","volume":"33","author":"Yu Tianhe","year":"2020","unstructured":"Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. 2020a. Gradient surgery for multi-task learning. NeurIPS, Vol. 33 (2020), 5824--5836.","journal-title":"NeurIPS"},{"key":"e_1_3_2_2_51_1","volume-title":"Conference on robot learning. PMLR, 1094--1100","author":"Yu Tianhe","year":"2020","unstructured":"Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. 2020b. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on robot learning. PMLR, 1094--1100."},{"key":"e_1_3_2_2_52_1","volume-title":"NeurIPS","volume":"30","author":"Zaheer Manzil","year":"2017","unstructured":"Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. 2017. Deep sets. NeurIPS, Vol. 30 (2017)."},{"key":"e_1_3_2_2_53_1","volume-title":"Learning robust state abstractions for hidden-parameter block MDPs. arXiv preprint arXiv:2007.07206","author":"Zhang Amy","year":"2020","unstructured":"Amy Zhang, Shagun Sodhani, Khimya Khetarpal, and Joelle Pineau. 2020. Learning robust state abstractions for hidden-parameter block MDPs. arXiv preprint arXiv:2007.07206 (2020)."},{"key":"e_1_3_2_2_54_1","doi-asserted-by":"crossref","unstructured":"Mingyang Zhou Licheng Yu Amanpreet Singh Mengjiao Wang Zhou Yu and Ning Zhang. 2022. Unsupervised vision-and-language pre-training via retrieval-based multi-granular alignment. In CVPR. 16485--16494.","DOI":"10.1109\/CVPR52688.2022.01599"},{"key":"e_1_3_2_2_55_1","unstructured":"Guangxiang Zhu Zichuan Lin Guangwen Yang and Chongjie Zhang. 2019. Episodic reinforcement learning with associative memory. 
In ICLR."}],"event":{"name":"WWW '25: The ACM Web Conference 2025","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"],"location":"Sydney NSW Australia","acronym":"WWW '25"},"container-title":["Companion Proceedings of the ACM on Web Conference 2025"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3715233","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3701716.3715233","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T03:09:15Z","timestamp":1759892955000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3715233"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":55,"alternative-id":["10.1145\/3701716.3715233","10.1145\/3701716"],"URL":"https:\/\/doi.org\/10.1145\/3701716.3715233","relation":{},"subject":[],"published":{"date-parts":[[2025,5,8]]},"assertion":[{"value":"2025-05-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}