{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,20]],"date-time":"2025-12-20T08:40:01Z","timestamp":1766220001886,"version":"3.48.0"},"publisher-location":"New York, NY, USA","reference-count":51,"publisher":"ACM","funder":[{"DOI":"10.13039\/501100001459","name":"Ministry of Education - Singapore","doi-asserted-by":"publisher","award":["T1 251RES2409"],"award-info":[{"award-number":["T1 251RES2409"]}],"id":[{"id":"10.13039\/501100001459","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,9,8]]},"DOI":"10.1145\/3754598.3754677","type":"proceedings-article","created":{"date-parts":[[2025,12,20]],"date-time":"2025-12-20T08:34:32Z","timestamp":1766219672000},"page":"804-815","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["TAPAS: Fast and Automatic Derivation of Tensor Parallel Strategies for Large Neural Networks"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9398-6507","authenticated-orcid":false,"given":"Ziji","family":"Shi","sequence":"first","affiliation":[{"name":"School of Computing, National University of Singapore, Singapore, Singapore and Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-9941-2322","authenticated-orcid":false,"given":"Le","family":"Jiang","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-2650-0504","authenticated-orcid":false,"given":"Ang","family":"Wang","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-5085-2535","authenticated-orcid":false,"given":"Jie","family":"Zhang","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-1398-8235","authenticated-orcid":false,"given":"Chencan","family":"Wu","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9072-3170","authenticated-orcid":false,"given":"Yong","family":"Li","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0914-4580","authenticated-orcid":false,"given":"Xiaokui","family":"Xiao","sequence":"additional","affiliation":[{"name":"School of Computing, National University of Singapore, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3003-0150","authenticated-orcid":false,"given":"Wei","family":"Lin","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3530-7662","authenticated-orcid":false,"given":"Jialin","family":"Li","sequence":"additional","affiliation":[{"name":"School of Computing, National University of Singapore, Singapore, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2025,12,20]]},"reference":[{"key":"e_1_3_3_1_2_2","unstructured":"Behnaz Arzani Siva Kesava\u00a0Reddy Kakarla Miguel Castro Srikanth Kandula Saeed Maleki and Luke Marshall. 2023. Rethinking Machine Learning Collective Communication as a Multi-Commodity Flow Problem. arXiv preprint arXiv:2305.13479 (2023)."},{"key":"e_1_3_3_1_3_2","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared\u00a0D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et\u00a0al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020) 1877\u20131901."},{"key":"e_1_3_3_1_4_2","unstructured":"Tom\u00a0B. 
Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel\u00a0M. Ziegler Jeffrey Wu Clemens Winter Christopher Hesse Mark Chen Eric Sigler Mateusz Litwin Scott Gray Benjamin Chess Jack Clark Christopher Berner Sam McCandlish Alec Radford Ilya Sutskever and Dario Amodei. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020)."},{"key":"e_1_3_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3620666.3651379"},{"key":"e_1_3_3_1_6_2","first-page":"578","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et\u00a0al. 2018. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 578\u2013594."},{"key":"e_1_3_3_1_7_2","unstructured":"Tianqi Chen Bing Xu Chiyuan Zhang and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174 (2016)."},{"key":"e_1_3_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/2959100.2959190"},{"key":"e_1_3_3_1_9_2","first-page":"797","volume-title":"17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23)","author":"Cui Weihao","year":"2023","unstructured":"Weihao Cui, Zhenhua Han, Lingji Ouyang, Yichuan Wang, Ningxin Zheng, Lingxiao Ma, Yuqing Yang, Fan Yang, Jilong Xue, Lili Qiu, et\u00a0al. 2023. Optimizing dynamic neural networks with Brainstorm. In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23). 
797\u2013815."},{"key":"e_1_3_3_1_10_2","first-page":"4171","volume-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. Technical Report. 4171\u20134186 pages. https:\/\/github.com\/tensorflow\/tensor2tensor"},{"key":"e_1_3_3_1_11_2","volume-title":"9th International Conference on Learning Representations (ICLR 2021)","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In 9th International Conference on Learning Representations (ICLR 2021). https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"e_1_3_3_1_12_2","doi-asserted-by":"publisher","unstructured":"Shiqing Fan Yi Rong Chen Meng Zongyan Cao Siyu Wang Zhen Zheng Chuan Wu Guoping Long Jun Yang Lixue Xia Lansong Diao Xiaoyong Liu and Wei Lin. 2021. DAPPLE: A pipelined data parallel approach for training large models. Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP 2021). 431\u2013445. 10.1145\/3437801.3441593","DOI":"10.1145\/3437801.3441593"},{"key":"e_1_3_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503221.3508418"},{"key":"e_1_3_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_3_1_15_2","unstructured":"Yanping Huang Youlong Cheng Ankur Bapna Orhan Firat Mia\u00a0Xu Chen Dehao Chen Hyouk\u00a0Joong Lee Jiquan Ngiam Quoc\u00a0V. Le Yonghui Wu and Zhifeng Chen. 2019. GPipe: Efficient training of giant neural networks using pipeline parallelism. 
Advances in Neural Information Processing Systems 32 (2019)."},{"key":"e_1_3_3_1_16_2","volume-title":"USENIX Annual Technical Conference","author":"Jia Xianyan","year":"2022","unstructured":"Xianyan Jia, Le Jiang, Ang Wang, Wencong Xiao, Ziji Shi, Jie Zhang, Xinyuan Li, Langshi Chen, Yong Li, Zhen Zheng, Xiaoyong Liu, and Wei Lin. 2022. Whale: Efficient Giant Model Training over Heterogeneous GPUs. In USENIX Annual Technical Conference. USENIX."},{"key":"e_1_3_3_1_17_2","volume-title":"Proceedings of the 2nd Conference on Machine Learning and Systems (MLSys)","author":"Jia Zhihao","year":"2019","unstructured":"Zhihao Jia, Matei Zaharia, and Alex Aiken. 2019. Beyond Data and Model Parallelism for Deep Neural Networks. In Proceedings of the 2nd Conference on Machine Learning and Systems (MLSys). https:\/\/proceedings.mlsys.org\/paper\/2019\/file\/78530480f14bc7b2879ae05070c78c11-Paper.pdf"},{"key":"e_1_3_3_1_18_2","unstructured":"Jared Kaplan Sam McCandlish Tom Henighan Tom\u00a0B. Brown Benjamin Chess Rewon Child Scott Gray Alec Radford Jeffrey Wu and Dario Amodei. 2020. Scaling Laws for Neural Language Models. (2020). http:\/\/arxiv.org\/abs\/2001.08361"},{"key":"e_1_3_3_1_19_2","volume-title":"Proceedings of the 9th International Conference on Learning Representations (ICLR)","author":"Kirisame Marisa","year":"2021","unstructured":"Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, and Zachary Tatlock. 2021. Dynamic Tensor Rematerialization. In Proceedings of the 9th International Conference on Learning Representations (ICLR). https:\/\/openreview.net\/forum?id=Yl2aDBJRTm Spotlight paper."},{"key":"e_1_3_3_1_20_2","unstructured":"Alex Krizhevsky Ilya Sutskever and Geoffrey\u00a0E Hinton. 2012. Imagenet classification with deep convolutional neural networks. 
Advances in neural information processing systems 25 (2012)."},{"key":"e_1_3_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/CGO51591.2021.9370308"},{"key":"e_1_3_3_1_22_2","volume-title":"9th International Conference on Learning Representations (ICLR 2021)","author":"Lepikhin Dmitry","year":"2021","unstructured":"Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. In 9th International Conference on Learning Representations (ICLR 2021). https:\/\/openreview.net\/forum?id=qrwe7XHTmYb"},{"key":"e_1_3_3_1_23_2","first-page":"6543","volume-title":"Proceedings of the 38th International Conference on Machine Learning (ICML 2021)","author":"Li Zhuohan","year":"2021","unstructured":"Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, and Ion Stoica. 2021. TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021). 6543\u20136552. https:\/\/proceedings.mlr.press\/v139\/li21d.html"},{"key":"e_1_3_3_1_24_2","unstructured":"Aixin Liu Bei Feng Bing Xue Bingxuan Wang Bochao Wu Chengda Lu Chenggang Zhao Chengqi Deng Chenyu Zhang Chong Ruan et\u00a0al. 2024. DeepSeek-V3 Technical Report. arXiv preprint arXiv:2412.19437 (2024)."},{"key":"e_1_3_3_1_25_2","first-page":"312","volume-title":"Proceedings of the VLDB Endowment (PVLDB)","volume":"15","author":"Miao Xupeng","year":"2022","unstructured":"Xupeng Miao, Hailin Zhang, Yining Shi, Xiaonan Nie, Zhi Yang, Yangyu Tao, and Bin Cui. 2022. HET: Scaling out Huge Embedding Model Training via Cache-enabled Distributed Framework. In Proceedings of the VLDB Endowment (PVLDB), Vol.\u00a015. 312\u2013320. 
https:\/\/www.vldb.org\/pvldb\/vol15.html"},{"key":"e_1_3_3_1_26_2","unstructured":"Paulius Micikevicius Sharan Narang Jonah Alben Gregory Diamos Erich Elsen David Garcia Boris Ginsburg Michael Houston Oleksii Kuchaiev Ganesh Venkatesh and Hao Wu. 2018. Mixed Precision Training. Proceedings of the 6th International Conference on Learning Representations (ICLR 2018) (2018)."},{"key":"e_1_3_3_1_27_2","doi-asserted-by":"publisher","unstructured":"Deepak Narayanan Aaron Harlap Amar Phanishayee Vivek Seshadri Nikhil\u00a0R. Devanur Gregory\u00a0R. Ganger Phillip\u00a0B. Gibbons and Matei Zaharia. 2019. PipeDream: Generalized pipeline parallelism for DNN training. SOSP 2019 - Proceedings of the 27th ACM Symposium on Operating Systems Principles (2019) 1\u201315. 10.1145\/3341301.3359646","DOI":"10.1145\/3341301.3359646"},{"key":"e_1_3_3_1_28_2","doi-asserted-by":"publisher","unstructured":"Deepak Narayanan Mohammad Shoeybi Jared Casper Patrick LeGresley Mostofa Patwary Vijay Korthikanti Dmitri Vainbrand Prethvi Kashinkunti Julie Bernauer Bryan Catanzaro Amar Phanishayee and Matei Zaharia. 2021. Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM. International Conference for High Performance Computing Networking Storage and Analysis SC (2021). 10.1145\/3458817.3476209","DOI":"10.1145\/3458817.3476209"},{"key":"e_1_3_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3588964.3591467"},{"key":"e_1_3_3_1_30_2","unstructured":"Penghui Qi Xinyi Wan Nyamdavaa Amar and Min Lin. 2024. Pipeline Parallelism with Controllable Memory. arXiv preprint arXiv:2405.15362 (2024)."},{"key":"e_1_3_3_1_31_2","first-page":"1","volume-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter\u00a0J. Liu. 2020. 
Exploring the limits of transfer learning with a unified text-to-text transformer. Technical Report. 1\u201367 pages. http:\/\/jmlr.org\/papers\/v21\/20-074.html."},{"key":"e_1_3_3_1_32_2","doi-asserted-by":"publisher","unstructured":"Samyam Rajbhandari Jeff Rasley Olatunji Ruwase and Yuxiong He. 2020. ZeRO: Memory optimizations toward training trillion parameter models. International Conference for High Performance Computing Networking Storage and Analysis (SC 2020). 10.1109\/SC41405.2020.00024","DOI":"10.1109\/SC41405.2020.00024"},{"key":"e_1_3_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISCA52012.2021.00049"},{"key":"e_1_3_3_1_34_2","doi-asserted-by":"crossref","first-page":"3505","DOI":"10.1145\/3394486.3406703","volume-title":"Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining","author":"Rasley Jeff","year":"2020","unstructured":"Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 3505\u20133506."},{"key":"e_1_3_3_1_35_2","unstructured":"Jie Ren Samyam Rajbhandari Reza\u00a0Yazdani Aminabadi Olatunji Ruwase Shuangyan Yang Minjia Zhang Dong Li and Yuxiong He. 2021. ZeRO-offload: Democratizing billion-scale model training. 2021 USENIX Annual Technical Conference (2021) 551\u2013564. https:\/\/www.deepspeed.ai\/tutorials\/"},{"key":"e_1_3_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_3_3_1_37_2","volume-title":"Neural Information Processing Systems","author":"Shazeer Noam","year":"2018","unstructured":"Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. 2018. 
Mesh-TensorFlow: Deep Learning for Supercomputers. In Neural Information Processing Systems."},{"key":"e_1_3_3_1_38_2","volume-title":"Proceedings of the 2024 ACM Symposium on Cloud Computing","author":"Shi Ziji","year":"2024","unstructured":"Ziji Shi, Jialin Li, and Yang You. 2024. ParaGAN: A Scalable Distributed Training Framework for Generative Adversarial Networks. In Proceedings of the 2024 ACM Symposium on Cloud Computing."},{"key":"e_1_3_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/3295500.3356143"},{"key":"e_1_3_3_1_40_2","volume-title":"10th International Conference on Learning Representations (ICLR 2022)","author":"Tay Yi","year":"2022","unstructured":"Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung\u00a0Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. 2022. Scale Efficiently: Insights from Pre-Training and Fine-Tuning Transformers. In 10th International Conference on Learning Representations (ICLR 2022). https:\/\/openreview.net\/forum?id=aNiqNrhNzwb"},{"key":"e_1_3_3_1_41_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et\u00a0al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_3_3_1_42_2","unstructured":"Colin Unger Zhihao Jia Wei Wu Sina Lin Mandeep Baines Carlos Efrain Quintero Narvaez Vinay Ramakrishnaiah Nirmal Prajapati Pat McCormick Jamaludin Mohd-Yusof Xi Luo Dheevatsa Mudigere Jongsoo Park Misha Smelyanskiy and Alex Aiken. 2022. Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). 
https:\/\/www.usenix.org\/conference\/osdi22\/presentation\/unger"},{"key":"e_1_3_3_1_43_2","unstructured":"Ashish Vaswani Noam Shazeer Niki Parmar Jakob Uszkoreit Llion Jones Aidan\u00a0N. Gomez \u0141ukasz Kaiser and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017) 5999\u20136009."},{"key":"e_1_3_3_1_44_2","doi-asserted-by":"publisher","unstructured":"Minjie Wang Chien-Chin Huang and Jinyang Li. 2019. Supporting very large models using automatic dataflow graph partitioning. Proceedings of the 14th EuroSys Conference 2019 (2019). 10.1145\/3302424.3303953","DOI":"10.1145\/3302424.3303953"},{"key":"e_1_3_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3514221.3517836"},{"key":"e_1_3_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3567955.3567959"},{"key":"e_1_3_3_1_47_2","unstructured":"Jason Wei Yi Tay Rishi Bommasani Colin Raffel Barret Zoph Sebastian Borgeaud Dani Yogatama Maarten Bosma Denny Zhou Donald Metzler et\u00a0al. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research (TMLR) (2022). https:\/\/openreview.net\/forum?id=yzkSUxE1e2"},{"key":"e_1_3_3_1_48_2","unstructured":"Yuanzhong Xu HyoukJoong Lee Dehao Chen Blake Hechtman Yanping Huang Rahul Joshi Maxim Krikun Dmitry Lepikhin Andy Ly Marcello Maggioni Ruoming Pang Noam Shazeer Shibo Wang Tao Wang Yonghui Wu and Zhifeng Chen. 2021. GSPMD: General and Scalable Parallelization for ML Computation Graphs. (2021). http:\/\/arxiv.org\/abs\/2105.04663"},{"key":"e_1_3_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i8.20858"},{"key":"e_1_3_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3639306"},{"key":"e_1_3_3_1_51_2","doi-asserted-by":"crossref","unstructured":"Yuhao Zhang and Arun Kumar. 2023. Lotan: Bridging the Gap between GNNs and Scalable Graph Analytics Engines. Proceedings of the VLDB Endowment 16 3 (2023) 312\u2013324. 
https:\/\/www.vldb.org\/pvldb\/vol16.html","DOI":"10.14778\/3611479.3611483"},{"key":"e_1_3_3_1_52_2","first-page":"559","volume-title":"Proceedings of the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2022)","author":"Zheng Lianmin","year":"2022","unstructured":"Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Joseph\u00a0E. Gonzalez, and Ion Stoica. 2022. Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning. In Proceedings of the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2022). 559\u2013578. https:\/\/www.usenix.org\/conference\/osdi22\/presentation\/zheng"}],"event":{"name":"ICPP '25: 54th International Conference on Parallel Processing","location":"San Diego CA USA","acronym":"ICPP '25"},"container-title":["Proceedings of the 54th International Conference on Parallel Processing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3754598.3754677","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,20]],"date-time":"2025-12-20T08:38:32Z","timestamp":1766219912000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3754598.3754677"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,8]]},"references-count":51,"alternative-id":["10.1145\/3754598.3754677","10.1145\/3754598"],"URL":"https:\/\/doi.org\/10.1145\/3754598.3754677","relation":{},"subject":[],"published":{"date-parts":[[2025,9,8]]},"assertion":[{"value":"2025-12-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}