{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T06:01:45Z","timestamp":1758261705424,"version":"3.44.0"},"reference-count":56,"publisher":"Association for Computing Machinery (ACM)","issue":"3","funder":[{"name":"NKRDP","award":["2021YFB0300202"],"award-info":[{"award-number":["2021YFB0300202"]}]},{"DOI":"10.13039\/501100001809","name":"NSFC","doi-asserted-by":"crossref","award":["62032023, T2125013, T2422007, 62225205, and U24A20235"],"award-info":[{"award-number":["62032023, T2125013, T2422007, 62225205, and U24A20235"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Youth Innovation Promotion Association of CAS","award":["2021099"],"award-info":[{"award-number":["2021099"]}]},{"name":"Innovation Funding of ICT, CAS","award":["E461030"],"award-info":[{"award-number":["E461030"]}]},{"DOI":"10.13039\/501100019065","name":"Tianjin Science and Technology Plan Project","doi-asserted-by":"crossref","award":["24ZXKJGX00060"],"award-info":[{"award-number":["24ZXKJGX00060"]}],"id":[{"id":"10.13039\/501100019065","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2025,9,30]]},"abstract":"<jats:p>Parallel structures have become a key pattern in deep neural networks (DNNs), offering improved efficiency and scalability. However, existing machine learning compilers (MLCs) face challenges in optimizing these structures due to limited parallel fusion scope and insufficient analysis of intra-operator characteristics. This article introduces Magneto, a framework designed to accelerate DNN inference by co-optimizing parallel operators. Magneto broadens the fusion scope and incorporates a specialized co-tuning algorithm to optimize operators jointly. 
Our approach addresses the unique challenges inherent in optimizing parallel structures, enabling significant performance improvements across various hardware platforms. Experimental results show that Magneto outperforms state-of-the-art NVIDIA TensorRT and AMD MIGraphX, achieving geometric mean speedups of 2.27\u00d7 and 2.88\u00d7, respectively.<\/jats:p>","DOI":"10.1145\/3744906","type":"journal-article","created":{"date-parts":[[2025,6,16]],"date-time":"2025-06-16T07:17:43Z","timestamp":1750058263000},"page":"1-26","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Accelerating Parallel Structures in DNNs via Parallel Fusion and Operator Co-Optimization"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2716-5051","authenticated-orcid":false,"given":"Zhanyuan","family":"Di","sequence":"first","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]},{"name":"University of Chinese Academy of Sciences","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-4940-5598","authenticated-orcid":false,"given":"Leping","family":"Wang","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-4824-9336","authenticated-orcid":false,"given":"Zhaojia","family":"Ma","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]},{"name":"University of Chinese Academy of Sciences","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9678-7228","authenticated-orcid":false,"given":"En","family":"Shao","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]},{"name":"University of Chinese 
Academy of Sciences","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2303-9736","authenticated-orcid":false,"given":"Jie","family":"Zhao","sequence":"additional","affiliation":[{"name":"Hunan University","place":["Changsha, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-5821-1163","authenticated-orcid":false,"given":"Ziyi","family":"Ren","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4682-983X","authenticated-orcid":false,"given":"Siyuan","family":"Feng","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University","place":["Shanghai, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5422-4497","authenticated-orcid":false,"given":"Dingwen","family":"Tao","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6361-5948","authenticated-orcid":false,"given":"Guangming","family":"Tan","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1953-1392","authenticated-orcid":false,"given":"Ninghui","family":"Sun","sequence":"additional","affiliation":[{"name":"State Key Lab of Processors, Institute of Computing Technology, CAS","place":["Beijing, China"]}]}],"member":"320","published-online":{"date-parts":[[2025,9,18]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322967"},{"key":"e_1_3_1_3_2","unstructured":"AMD. 2024. MIGraphX Documentation. Retrieved November 20 2024 from https:\/\/rocm.docs.amd.com\/projects\/AMDMIGraphX. v2.4."},{"key":"e_1_3_1_4_2","unstructured":"AMD. 2024. ROCm Documentation. Retrieved November 20 2024 from https:\/\/rocm.docs.amd.com. 
v5.4.3."},{"key":"e_1_3_1_5_2","first-page":"173","volume-title":"International Conference on Machine Learning","author":"Amodei Dario","year":"2016","unstructured":"Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et\u00a0al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In International Conference on Machine Learning. PMLR, 173\u2013182."},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/CGO.2019.8661197"},{"key":"e_1_3_1_7_2","first-page":"578","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). USENIX Association, Carlsbad, CA, 578\u2013594. Retrieved from https:\/\/www.usenix.org\/conference\/osdi18\/presentation\/chen"},{"key":"e_1_3_1_8_2","article-title":"Learning to optimize tensor programs","volume":"31","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. Learning to optimize tensor programs. 
Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_9_2","first-page":"797","volume-title":"17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23)","author":"Cui Weihao","year":"2023","unstructured":"Weihao Cui, Zhenhua Han, Lingji Ouyang, Yichuan Wang, Ningxin Zheng, Lingxiao Ma, Yuqing Yang, Fan Yang, Jilong Xue, Lili Qiu, Lidong Zhou, Quan Chen, Haisheng Tan, and Minyi Guo. 2023. Optimizing dynamic neural networks with brainstorm. In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23). USENIX Association, Boston, MA, 797\u2013815. Retrieved from https:\/\/www.usenix.org\/conference\/osdi23\/presentation\/cui"},{"key":"e_1_3_1_10_2","unstructured":"ONNX developers. 2024. ONNX. Retrieved November 20 2024 from https:\/\/onnx.ai. v1.13."},{"key":"e_1_3_1_11_2","unstructured":"ONNX Runtime developers. 2024. ONNX Runtime. Retrieved November 20 2024 from https:\/\/github.com\/microsoft\/onnxruntime. v1.14.0."},{"key":"e_1_3_1_12_2","unstructured":"PyTorch developers. 2024. PyTorch. Retrieved November 20 2024 from https:\/\/pytorch.org. v1.12."},{"key":"e_1_3_1_13_2","unstructured":"TensorFlow developers. 2024. Tensorflow. Retrieved November 20 2024 from https:\/\/www.tensorflow.org\/. v2.12."},{"key":"e_1_3_1_14_2","unstructured":"XLA developers. 2024. XLA. Retrieved November 20 2024 from https:\/\/www.tensorflow.org\/xla. v2.12."},{"key":"e_1_3_1_15_2","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 1 (Long and Short Papers). 
4171\u20134186."},{"key":"e_1_3_1_16_2","first-page":"167","article-title":"Ios: Inter-operator scheduler for cnn acceleration","volume":"3","author":"Ding Yaoyao","year":"2021","unstructured":"Yaoyao Ding, Ligeng Zhu, Zhihao Jia, Gennady Pekhimenko, and Song Han. 2021. Ios: Inter-operator scheduler for cnn acceleration. Proceedings of Machine Learning and Systems 3 (2021), 167\u2013180.","journal-title":"Proceedings of Machine Learning and Systems"},{"issue":"120","key":"e_1_3_1_17_2","first-page":"1","article-title":"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity","volume":"23","author":"Fedus William","year":"2022","unstructured":"William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research 23, 120 (2022), 1\u201339.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3575693.3576933"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-020-09548-1"},{"key":"e_1_3_1_20_2","unstructured":"Zheng Ge Songtao Liu Feng Wang Zeming Li and Jian Sun. 2021. Yolox: Exceeding yolo series in 2021. arXiv preprint arXiv:2107.08430 (2021)."},{"key":"e_1_3_1_21_2","first-page":"69","volume-title":"Foundations of Genetic Algorithms","author":"Goldberg David E","year":"1991","unstructured":"David E Goldberg and Kalyanmoy Deb. 1991. A comparative analysis of selection schemes used in genetic algorithms. In Foundations of Genetic Algorithms. Vol. 1. Elsevier, 69\u201393."},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2016.2582924"},{"key":"e_1_3_1_23_2","unstructured":"Forrest N. Iandola Song Han Matthew W. Moskewicz Khalid Ashraf William J. Dally and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. 
arXiv preprint arXiv:1602.07360 (2016)."},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359630"},{"key":"e_1_3_1_25_2","first-page":"27","article-title":"Optimizing DNN computation with relaxed graph substitutions","volume":"1","author":"Jia Zhihao","year":"2019","unstructured":"Zhihao Jia, James Thomas, Todd Warszawski, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2019. Optimizing DNN computation with relaxed graph substitutions. Proceedings of Machine Learning and Systems 1 (2019), 27\u201339.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3453483.3454038"},{"key":"e_1_3_1_27_2","first-page":"8343","article-title":"Nimble: Lightweight and parallel gpu task scheduling for deep learning","volume":"33","author":"Kwon Woosuk","year":"2020","unstructured":"Woosuk Kwon, Gyeong-In Yu, Eunji Jeong, and Byung-Gon Chun. 2020. Nimble: Lightweight and parallel gpu task scheduling for deep learning. Advances in Neural Information Processing Systems 33 (2020), 8343\u20138354.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3676641.3716249"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/CGO51591.2021.9370308"},{"key":"e_1_3_1_30_2","unstructured":"Benjamin Lefaudeux Francisco Massa Diana Liskovich Wenhan Xiong Vittorio Caggiano Sean Naren Min Xu Jieru Hu Marta Tintore Susan Zhang Patrick Labatut Daniel Haziza Luca Wehrstedt Jeremy Reizenstein and Grigory Sizov. 2024. xFormers: A modular and hackable Transformer modelling library. Retrieved from https:\/\/github.com\/facebookresearch\/xformers. 
v0.0.29."},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/CGO53902.2022.9741270"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220007"},{"key":"e_1_3_1_33_2","first-page":"881","volume-title":"14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)","author":"Ma Lingxiao","year":"2020","unstructured":"Lingxiao Ma, Zhiqiang Xie, Zhi Yang, Jilong Xue, Youshan Miao, Wei Cui, Wenxiang Hu, Fan Yang, Lintao Zhang, and Lidong Zhou. 2020. Rammer: Enabling holistic deep learning compiler optimizations with rTasks. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 881\u2013897."},{"key":"e_1_3_1_34_2","unstructured":"Lingxiao Ma Zhiqiang Xie Zhi Yang Jilong Xue Youshan Miao Wei Cui Wenxiang Hu Fan Yang Lintao Zhang and Lidong Zhou. 2024. NNFusion. Retrieved November 20 2024 from https:\/\/github.com\/microsoft\/nnfusion. v0.3."},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3453483.3454083"},{"key":"e_1_3_1_36_2","unstructured":"NVIDIA. 2024. CUDA Deep Neural Network library. Retrieved November 20 2024 from https:\/\/developer.nvidia.com\/cudnn. v8.7.0."},{"key":"e_1_3_1_37_2","unstructured":"NVIDIA. 2024. Nsight Compute Documentation. Retrieved November 20 2024 from https:\/\/docs.nvidia.com\/nsight-compute\/index.html"},{"key":"e_1_3_1_38_2","unstructured":"NVIDIA. 2024. TensorRT. Retrieved November 20 2024 from https:\/\/developer.nvidia.com\/tensorrt. v8.5.3."},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/2499370.2462176"},{"key":"e_1_3_1_40_2","first-page":"701","volume-title":"17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23)","author":"Shi Yining","year":"2023","unstructured":"Yining Shi, Zhi Yang, Jilong Xue, Lingxiao Ma, Yuqing Xia, Ziming Miao, Yuxiao Guo, Fan Yang, and Lidong Zhou. 2023. Welder: Scheduling deep learning memory access via tile-graph. 
In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23). 701\u2013718."},{"key":"e_1_3_1_41_2","article-title":"Sequence to sequence learning with neural networks","volume":"27","author":"Sutskever Ilya","year":"2014","unstructured":"Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems 27 (2014).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.308"},{"key":"e_1_3_1_43_2","unstructured":"Vijay Thakkar Pradeep Ramani Cris Cecka Aniket Shivam Honghao Lu Ethan Yan Jack Kosaian Mark Hoemmen Haicheng Wu Andrew Kerr Matt Nicely Duane Merrill Dustyn Blasig Fengqi Qiao Piotr Majcher Paul Springer Markus Hohnerbach Jin Wang and Manish Gupta. 2024. CUTLASS. Retrieved November 20 2024 from https:\/\/github.com\/NVIDIA\/cutlass. v3.1."},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3315508.3329973"},{"key":"e_1_3_1_45_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar et\u00a0al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_3_1_46_2","unstructured":"Nicolas Vasilache Oleksandr Zinenko Theodoros Theodoridis Priya Goyal Zachary DeVito William S. Moses Sven Verdoolaege Andrew Adams and Albert Cohen. 2018. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730 (2018)."},{"key":"e_1_3_1_47_2","unstructured":"Lei Wang Yu Cheng Yining Shi Zhengju Tang Zhiwen Mo Wenhao Xie Lingxiao Ma Yuqing Xia Jilong Xue Fan Yang et\u00a0al. 2025. TileLang: A composable tiled programming model for AI systems. 
arXiv preprint arXiv:2504.17577 (2025)."},{"key":"e_1_3_1_48_2","volume-title":"The ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)","author":"Xia Chunwei","year":"2023","unstructured":"Chunwei Xia, Jiacheng Zhao, Qianqi Sun, Zheng Wang, Yuan Wen, X Feng, and H Cui. 2023. Optimizing deep learning inference via global analysis and tensor expressions. In The ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM."},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.634"},{"key":"e_1_3_1_50_2","first-page":"1","article-title":"Apollo: Automatic partition-based operator fusion through layer by layer optimization","volume":"4","author":"Zhao Jie","year":"2022","unstructured":"Jie Zhao, Xiong Gao, Ruijie Xia, Zhaochuang Zhang, Deshi Chen, Lei Chen, Renwei Zhang, Zhen Geng, Bin Cheng, and Xuefeng Jin. 2022. Apollo: Automatic partition-based operator fusion through layer by layer optimization. Proceedings of Machine Learning and Systems 4 (2022), 1\u201319.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_3_1_51_2","first-page":"863","volume-title":"14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)","author":"Zheng Lianmin","year":"2020","unstructured":"Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, and Ion Stoica. 2020. Ansor: Generating high-performance tensor programs for deep learning. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). USENIX Association, 863\u2013879. 
Retrieved from https:\/\/www.usenix.org\/conference\/osdi20\/presentation\/zheng"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378508"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","unstructured":"Zhen Zheng Xuanda Yang Pengzhan Zhao Guoping Long Kai Zhu Feiwen Zhu Wenyi Zhao Xiaoyong Liu Jun Yang Jidong Zhai Shuaiwen Leon Song and Wei Lin. 2022. AStitch: Enabling a new multi-dimensional optimization space for memory-intensive ML training and inference on modern SIMT architectures (ASPLOS\u201922). Association for Computing Machinery New York NY USA 359\u2013373. DOI:10.1145\/3503222.3507723","DOI":"10.1145\/3503222.3507723"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3710848.3710864"},{"key":"e_1_3_1_55_2","first-page":"233","volume-title":"16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)","author":"Zhu Hongyu","year":"2022","unstructured":"Hongyu Zhu, Ruofan Wu, Yijia Diao, Shanbin Ke, Haoyu Li, Chen Zhang, Jilong Xue, Lingxiao Ma, Yuqing Xia, Wei Cui, et\u00a0al. 2022. ROLLER: Fast and efficient tensor compilation for deep learning. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). 
233\u2013248."},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467093"},{"key":"e_1_3_1_57_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3744906","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,18]],"date-time":"2025-09-18T20:46:09Z","timestamp":1758228369000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3744906"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,18]]},"references-count":56,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,9,30]]}},"alternative-id":["10.1145\/3744906"],"URL":"https:\/\/doi.org\/10.1145\/3744906","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"type":"print","value":"1544-3566"},{"type":"electronic","value":"1544-3973"}],"subject":[],"published":{"date-parts":[[2025,9,18]]},"assertion":[{"value":"2024-11-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-04","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}