{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T12:30:59Z","timestamp":1773318659814,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":59,"publisher":"ACM","funder":[{"name":"Los Alamos National Laboratory (LANL) Laboratory Directed Research and Development (LDRD)","award":["20240777ER"],"award-info":[{"award-number":["20240777ER"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,11,16]]},"DOI":"10.1145\/3712285.3759888","type":"proceedings-article","created":{"date-parts":[[2025,11,12]],"date-time":"2025-11-12T16:04:47Z","timestamp":1762963487000},"page":"1332-1350","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["TT-LoRA MoE: Using Parameter-Efficient Fine-Tuning and Sparse Mixture-Of-Experts"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-2583-5925","authenticated-orcid":false,"given":"Pradip","family":"Kunwar","sequence":"first","affiliation":[{"name":"Tennessee Tech University, Cookeville, USA and Los Alamos National Laboratory (LANL), Cookeville, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8727-0350","authenticated-orcid":false,"given":"Minh N.","family":"Vu","sequence":"additional","affiliation":[{"name":"Los Alamos National Laboratory (LANL), Los Alamos, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9189-2478","authenticated-orcid":false,"given":"Maanak","family":"Gupta","sequence":"additional","affiliation":[{"name":"Tennessee Tech University, Cookeville, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5627-5239","authenticated-orcid":false,"given":"Mahmoud","family":"Abdelsalam","sequence":"additional","affiliation":[{"name":"North Carolina Agricultural and Technical State University, Greensboro, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1421-3643","authenticated-orcid":false,"given":"Manish","family":"Bhattarai","sequence":"additional","affiliation":[{"name":"Los Alamos National Laboratory (LANL), Los Alamos, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,11,15]]},"reference":[{"key":"e_1_3_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICMLA61862.2024.00085"},{"key":"e_1_3_3_2_3_2","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared\u00a0D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et\u00a0al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020) 1877\u20131901."},{"key":"e_1_3_3_2_4_2","unstructured":"Christopher Clark Kenton Lee Ming-Wei Chang Tom Kwiatkowski Michael Collins and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes\/no questions. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1905.10044 (2019)."},{"key":"e_1_3_3_2_5_2","first-page":"107","volume-title":"proceedings of Sinn und Bedeutung","author":"De\u00a0Marneffe Marie-Catherine","year":"2019","unstructured":"Marie-Catherine De\u00a0Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung , Vol.\u00a023. 107\u2013124."},{"key":"e_1_3_3_2_6_2","doi-asserted-by":"crossref","unstructured":"Tim Dettmers Artidoro Pagnoni Ari Holtzman and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. 
Advances in neural information processing systems 36 (2023) 10088\u201310115.","DOI":"10.52202\/075280-0441"},{"key":"e_1_3_3_2_7_2","volume-title":"Third international workshop on paraphrasing (IWP2005)","author":"Dolan Bill","year":"2005","unstructured":"Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third international workshop on paraphrasing (IWP2005)."},{"key":"e_1_3_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.106"},{"key":"e_1_3_3_2_9_2","first-page":"5547","volume-title":"International conference on machine learning","author":"Du Nan","year":"2022","unstructured":"Nan Du, Yanping Huang, Andrew\u00a0M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams\u00a0Wei Yu, Orhan Firat, et\u00a0al. 2022. Glam: Efficient scaling of language models with mixture-of-experts. In International conference on machine learning. PMLR, 5547\u20135569."},{"key":"e_1_3_3_2_10_2","unstructured":"Dheeru Dua Yizhong Wang Pradeep Dasigi Gabriel Stanovsky Sameer Singh and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1903.00161 (2019)."},{"key":"e_1_3_3_2_11_2","unstructured":"William Fedus Barret Zoph and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research 23 120 (2022) 1\u201339."},{"key":"e_1_3_3_2_12_2","doi-asserted-by":"crossref","unstructured":"Robert\u00a0M French. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3 4 (1999) 128\u2013135.","DOI":"10.1016\/S1364-6613(99)01294-2"},{"key":"e_1_3_3_2_13_2","unstructured":"Chongyang Gao Kezhen Chen Jinmeng Rao Baochen Sun Ruibo Liu Daiyi Peng Yawen Zhang Xiaoyuan Guo Jie Yang and VS Subrahmanian. 2024. Higher layers need more lora experts. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2402.08562 (2024)."},{"key":"e_1_3_3_2_14_2","unstructured":"Yunhao Gou Zhili Liu Kai Chen Lanqing Hong Hang Xu Aoxue Li Dit-Yan Yeung James\u00a0T Kwok and Yu Zhang. 2023. Mixture of cluster-conditional lora experts for vision-language instruction tuning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2312.12379 (2023)."},{"key":"e_1_3_3_2_15_2","first-page":"2790","volume-title":"International conference on machine learning","author":"Houlsby Neil","year":"2019","unstructured":"Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De\u00a0Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International conference on machine learning. PMLR, 2790\u20132799."},{"key":"e_1_3_3_2_16_2","volume-title":"International Conference on Learning Representations (ICLR)","author":"Hu Edward\u00a0J.","year":"2022","unstructured":"Edward\u00a0J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhong Xu, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on Learning Representations (ICLR). https:\/\/arxiv.org\/abs\/2106.09685"},{"key":"e_1_3_3_2_17_2","unstructured":"Chengsong Huang Qian Liu Bill\u00a0Yuchen Lin Tianyu Pang Chao Du and Min Lin. 2023. Lorahub: Efficient cross-task generalization via dynamic lora composition. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2307.13269 (2023)."},{"key":"e_1_3_3_2_18_2","doi-asserted-by":"crossref","unstructured":"Lifu Huang Ronan\u00a0Le Bras Chandra Bhagavatula and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1909.00277 (2019).","DOI":"10.18653\/v1\/D19-1243"},{"key":"e_1_3_3_2_19_2","doi-asserted-by":"crossref","unstructured":"Robert\u00a0A Jacobs Michael\u00a0I Jordan Steven\u00a0J Nowlan and Geoffrey\u00a0E Hinton. 1991. Adaptive mixtures of local experts. Neural computation 3 1 (1991) 79\u201387.","DOI":"10.1162\/neco.1991.3.1.79"},{"key":"e_1_3_3_2_20_2","doi-asserted-by":"crossref","unstructured":"Michael\u00a0I Jordan and Robert\u00a0A Jacobs. 1994. Hierarchical mixtures of experts and the EM algorithm. Neural computation 6 2 (1994) 181\u2013214.","DOI":"10.1162\/neco.1994.6.2.181"},{"key":"e_1_3_3_2_21_2","unstructured":"Jared Kaplan Sam McCandlish Tom Henighan Tom\u00a0B Brown Benjamin Chess Rewon Child Scott Gray Alec Radford Jeffrey Wu and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2001.08361 (2020)."},{"key":"e_1_3_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.12022"},{"key":"e_1_3_3_2_23_2","doi-asserted-by":"crossref","unstructured":"Brian Lester Rami Al-Rfou and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2104.08691 (2021).","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"e_1_3_3_2_24_2","unstructured":"Dengchun Li Yingzi Ma Naizheng Wang Zhengmao Ye Zhiyuan Cheng Yinghao Tang Yan Zhang Lei Duan Jie Zuo Cal Yang et\u00a0al. 2024. Mixlora: Enhancing large language models fine-tuning with lora-based mixture of experts. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2404.15159 (2024)."},{"key":"e_1_3_3_2_25_2","unstructured":"Xiang\u00a0Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2101.00190 (2021)."},{"key":"e_1_3_3_2_26_2","unstructured":"Qidong Liu Xian Wu Xiangyu Zhao Yuanshao Zhu Derong Xu Feng Tian and Yefeng Zheng. 2023. Moelora: An moe-based parameter efficient fine-tuning method for multi-task medical applications. CoRR (2023)."},{"key":"e_1_3_3_2_27_2","unstructured":"Yilun Liu Yunpu Ma Shuo Chen Zifeng Ding Bailan He Zhen Han and Volker Tresp. 2024. PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2411.08212 (2024)."},{"key":"e_1_3_3_2_28_2","unstructured":"Zefang Liu and Jiahua Luo. 2024. Adamole: Fine-tuning large language models with adaptive mixture of low-rank adaptation experts. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2405.00361 (2024)."},{"key":"e_1_3_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.5555\/2002472.2002491"},{"key":"e_1_3_3_2_30_2","first-page":"216","volume-title":"Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC\u201814)","author":"Marelli Marco","year":"2014","unstructured":"Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. 
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC\u201814), Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language Resources Association (ELRA), Reykjavik, Iceland, 216\u2013223. https:\/\/aclanthology.org\/L14-1314\/"},{"key":"e_1_3_3_2_31_2","first-page":"109","volume-title":"Psychology of learning and motivation","author":"McCloskey Michael","year":"1989","unstructured":"Michael McCloskey and Neal\u00a0J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation. Vol.\u00a024. Elsevier, 109\u2013165."},{"key":"e_1_3_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.433"},{"key":"e_1_3_3_2_33_2","unstructured":"Alexander Novikov Dmitrii Podoprikhin Anton Osokin and Dmitry\u00a0P Vetrov. 2015. Tensorizing neural networks. Advances in neural information processing systems 28 (2015)."},{"key":"e_1_3_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1137\/090752286"},{"key":"e_1_3_3_2_35_2","unstructured":"Jonas Pfeiffer Aishwarya Kamath Andreas R\u00fcckl\u00e9 Kyunghyun Cho and Iryna Gurevych. 2020. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2005.00247 (2020)."},{"key":"e_1_3_3_2_36_2","doi-asserted-by":"crossref","unstructured":"Jonas Pfeiffer Andreas R\u00fcckl\u00e9 Clifton Poth Aishwarya Kamath Ivan Vuli\u0107 Sebastian Ruder Kyunghyun Cho and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2007.07779 (2020).","DOI":"10.18653\/v1\/2020.emnlp-demos.7"},{"key":"e_1_3_3_2_37_2","unstructured":"Jason Phang Thibault F\u00e9vry and Samuel\u00a0R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1811.01088 (2018)."},{"key":"e_1_3_3_2_38_2","doi-asserted-by":"crossref","unstructured":"Yada Pruksachatkun Jason Phang Haokun Liu Phu\u00a0Mon Htut Xiaoyi Zhang Richard\u00a0Yuanzhe Pang Clara Vania Katharina Kann and Samuel\u00a0R Bowman. 2020. Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work? arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2005.00628 (2020).","DOI":"10.18653\/v1\/2020.acl-main.467"},{"key":"e_1_3_3_2_39_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans Ilya Sutskever et\u00a0al. 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_3_2_40_2","unstructured":"Alec Radford Jeffrey Wu Rewon Child David Luan Dario Amodei Ilya Sutskever et\u00a0al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1 8 (2019) 9."},{"key":"e_1_3_3_2_41_2","doi-asserted-by":"crossref","unstructured":"Pranav Rajpurkar Jian Zhang Konstantin Lopyrev and Percy Liang. 2016. Squad: 100 000+ questions for machine comprehension of text. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1606.05250 (2016).","DOI":"10.18653\/v1\/D16-1264"},{"key":"e_1_3_3_2_42_2","doi-asserted-by":"crossref","unstructured":"Amir Rosenfeld and John\u00a0K Tsotsos. 2018. Incremental learning through deep adaptation. 
IEEE transactions on pattern analysis and machine intelligence 42 3 (2018) 651\u2013663.","DOI":"10.1109\/TPAMI.2018.2884462"},{"key":"e_1_3_3_2_43_2","doi-asserted-by":"crossref","unstructured":"Keisuke Sakaguchi Ronan\u00a0Le Bras Chandra Bhagavatula and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. Commun. ACM 64 9 (2021) 99\u2013106.","DOI":"10.1145\/3474381"},{"key":"e_1_3_3_2_44_2","unstructured":"Maarten Sap Hannah Rashkin Derek Chen Ronan LeBras and Yejin Choi. 2019. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1904.09728 (2019)."},{"key":"e_1_3_3_2_45_2","unstructured":"Lakshay Sharma Laura Graesser Nikita Nangia and Utku Evci. 2019. Natural language understanding with the quora question pairs dataset. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1907.01041 (2019)."},{"key":"e_1_3_3_2_46_2","volume-title":"International Conference on Learning Representations (ICLR)","author":"Shazeer Noam","year":"2017","unstructured":"Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. In International Conference on Learning Representations (ICLR). https:\/\/arxiv.org\/abs\/1701.06538"},{"key":"e_1_3_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D13-1170"},{"key":"e_1_3_3_2_48_2","unstructured":"Alon Talmor Jonathan Herzig Nicholas Lourie and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1811.00937 (2018)."},{"key":"e_1_3_3_2_49_2","unstructured":"Ashish Vaswani Noam Shazeer Niki Parmar Jakob Uszkoreit Llion Jones Aidan\u00a0N Gomez \u0141ukasz Kaiser and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_3_3_2_50_2","doi-asserted-by":"crossref","unstructured":"Alex Wang Amanpreet Singh Julian Michael Felix Hill Omer Levy and Samuel\u00a0R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1804.07461 (2018).","DOI":"10.18653\/v1\/W18-5446"},{"key":"e_1_3_3_2_51_2","unstructured":"Xujia Wang Haiyan Zhao Shuo Wang Hanqing Wang and Zhiyuan Liu. 2024. MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2410.22782 (2024)."},{"key":"e_1_3_3_2_52_2","doi-asserted-by":"crossref","unstructured":"Yaqing Wang Subhabrata Mukherjee Xiaodong Liu Jing Gao Ahmed\u00a0Hassan Awadallah and Jianfeng Gao. 2022. Adamix: Mixture-of-adapter for parameter-efficient tuning of large language models. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2205.12410 1 2 (2022) 4.","DOI":"10.18653\/v1\/2022.emnlp-main.388"},{"key":"e_1_3_3_2_53_2","unstructured":"Alex Warstadt Amanpreet Singh and Samuel\u00a0R Bowman. 2018. Neural Network Acceptability Judgments. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1805.12471 (2018)."},{"key":"e_1_3_3_2_54_2","unstructured":"Adina Williams Nikita Nangia and Samuel\u00a0R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1704.05426 (2017)."},{"key":"e_1_3_3_2_55_2","unstructured":"Xun Wu Shaohan Huang and Furu Wei. 2024. Mixture of lora experts. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2404.13628 (2024)."},{"key":"e_1_3_3_2_56_2","unstructured":"Shu Yang Muhammad\u00a0Asif Ali Cheng-Long Wang Lijie Hu and Di Wang. 2024. MoRAL: MoE Augmented LoRA for LLMs\u2019 Lifelong Learning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2402.11260 (2024)."},{"key":"e_1_3_3_2_57_2","doi-asserted-by":"crossref","unstructured":"Rowan Zellers Yonatan Bisk Roy Schwartz and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1808.05326 (2018).","DOI":"10.18653\/v1\/D18-1009"},{"key":"e_1_3_3_2_58_2","unstructured":"Qingru Zhang Minshuo Chen Alexander Bukharin Nikos Karampatziakis Pengcheng He Yu Cheng Weizhu Chen and Tuo Zhao. 2023. Adalora: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2303.10512 (2023)."},{"key":"e_1_3_3_2_59_2","unstructured":"Lulu Zhao Weihao Zeng Xiaofeng Shi and Hua Zhou. 2024. MoSLD: An Extremely Parameter-Efficient Mixture-of-Shared LoRAs for Multi-Task Learning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2412.08946 (2024)."},{"key":"e_1_3_3_2_60_2","unstructured":"Yun Zhu Nevan Wichers Chu-Cheng Lin Xinyi Wang Tianlong Chen Lei Shu Han Lu Canoee Liu Liangchen Luo Jindong Chen et\u00a0al. 2023. Sira: Sparse mixture of low rank adaptation. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2311.09179 (2023)."}],"event":{"name":"SC '25: The International Conference for High Performance Computing, Networking, Storage and Analysis","location":"St. Louis MO USA","acronym":"SC '25","sponsor":["SIGHPC ACM Special Interest Group on High Performance Computing, Special Interest Group on High Performance Computing"]},"container-title":["Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3712285.3759888","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T18:47:45Z","timestamp":1773254865000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3712285.3759888"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,15]]},"references-count":59,"alternative-id":["10.1145\/3712285.3759888","10.1145\/3712285"],"URL":"https:\/\/doi.org\/10.1145\/3712285.3759888","relation":{},"subject":[],"published":{"date-parts":[[2025,11,15]]},"assertion":[{"value":"2025-11-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
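The record above is the JSON envelope that the Crossref REST API returns for this DOI: a "status"/"message-type"/"message" wrapper around a "work" object carrying the title, authors, venue, dates, and the deposited reference list. As a minimal sketch of how such a record can be retrieved and unpacked, the snippet below assumes the public api.crossref.org endpoint and the third-party requests package (neither is named in the record itself) and simply re-fetches the same DOI, then prints the title, proceedings name, author list, and reference count.

```python
# Minimal sketch: fetch this Crossref work record by DOI and print key fields.
# Assumes the public Crossref REST API (https://api.crossref.org) and the
# `requests` package; both are assumptions, not stated in the record itself.
import requests

DOI = "10.1145/3712285.3759888"

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()

# The response envelope looks like {"status": "ok", "message-type": "work",
# "message": {...}}; the bibliographic fields live under "message".
work = resp.json()["message"]

print(work["title"][0])            # article title (stored as a one-element list)
print(work["container-title"][0])  # proceedings title
for author in work["author"]:      # list of {"given": ..., "family": ..., ...}
    print(f'{author["given"]} {author["family"]}')
print("References:", work.get("references-count"))
```

The same "message" dictionary also exposes the deposited references under work["reference"], each entry being a dictionary with a "key" and either a "DOI" or an "unstructured" citation string, so the reference array embedded in the record above can be iterated the same way.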