{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T13:40:12Z","timestamp":1755870012865,"version":"3.44.0"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,8]]},"DOI":"10.1145\/3721145.3735113","type":"proceedings-article","created":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T12:57:17Z","timestamp":1755867437000},"page":"640-653","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["ConCo: Optimizing Compilation of Concurrent Tensor Programs on Shared GPU"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-8955-9681","authenticated-orcid":false,"given":"Jiamin","family":"Lu","sequence":"first","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5098-1503","authenticated-orcid":false,"given":"Jingwei","family":"Sun","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4727-0676","authenticated-orcid":false,"given":"Yunlong","family":"Xu","sequence":"additional","affiliation":[{"name":"Independent Researcher, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-8559-1234","authenticated-orcid":false,"given":"Peng","family":"Sun","sequence":"additional","affiliation":[{"name":"Independent Researcher, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0794-7681","authenticated-orcid":false,"given":"Guangzhong","family":"Sun","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, 
China"}]}],"member":"320","published-online":{"date-parts":[[2025,8,22]]},"reference":[{"key":"e_1_3_3_1_2_2","unstructured":"Byung\u00a0Hoon Ahn Prannoy Pilligundla Amir Yazdanbakhsh and Hadi Esmaeilzadeh. 2020. Chameleon: Adaptive code optimization for expedited deep neural network compilation. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2001.08743 (2020)."},{"key":"e_1_3_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939785"},{"key":"e_1_3_3_1_4_2","first-page":"578","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et\u00a0al. 2018. TVM: An automated End-to-End optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 578\u2013594."},{"key":"e_1_3_3_1_5_2","unstructured":"Tianqi Chen Lianmin Zheng Eddie Yan Ziheng Jiang Thierry Moreau Luis Ceze Carlos Guestrin and Arvind Krishnamurthy. 2018. Learning to optimize tensor programs. Advances in Neural Information Processing Systems 31 (2018)."},{"key":"e_1_3_3_1_6_2","unstructured":"Sharan Chetlur Cliff Woolley Philippe Vandermersch Jonathan Cohen John Tran Bryan Catanzaro and Evan Shelhamer. 2014. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1410.0759 (2014)."},{"key":"e_1_3_3_1_7_2","unstructured":"OpenBLAS contributors. 2024. OpenBLAS. https:\/\/github.com\/OpenMathLib\/OpenBLAS. Accessed: 2024-06."},{"key":"e_1_3_3_1_8_2","unstructured":"Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1810.04805 (2018)."},{"key":"e_1_3_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3419111.3421284"},{"key":"e_1_3_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3528535.3565247"},{"key":"e_1_3_3_1_11_2","unstructured":"Yaoyao Ding Ligeng Zhu Zhihao Jia Gennady Pekhimenko and Song Han. 2021. Ios: Inter-operator scheduler for cnn acceleration. Proceedings of Machine Learning and Systems 3 (2021) 167\u2013180."},{"key":"e_1_3_3_1_12_2","unstructured":"Google. 2024. IREE: Intermediate Representation Execution Environment. https:\/\/github.com\/iree-org\/iree. Accessed: 2024-06."},{"key":"e_1_3_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3476223"},{"key":"e_1_3_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3575693.3575705"},{"key":"e_1_3_3_1_16_2","unstructured":"Intel. 2024. oneAPI. https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/tools\/oneapi\/onednn.html. Accessed: 2024-06."},{"key":"e_1_3_3_1_17_2","first-page":"947","volume-title":"2019 USENIX Annual Technical Conference (USENIX ATC 19)","author":"Jeon Myeongjae","year":"2019","unstructured":"Myeongjae Jeon, Shivaram Venkataraman, Amar Phanishayee, Junjie Qian, Wencong Xiao, and Fan Yang. 2019. Analysis of { Large-Scale}{ Multi-Tenant}{ GPU} clusters for { DNN} training workloads. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). 947\u2013960."},{"key":"e_1_3_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359630"},{"key":"e_1_3_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/CLUSTER52292.2023.00016"},{"key":"e_1_3_3_1_20_2","unstructured":"Chris Lattner Mehdi Amini Uday Bondhugula Albert Cohen Andy Davis Jacques Pienaar River Riddle Tatiana Shpeisman Nicolas Vasilache and Oleksandr Zinenko. 2020. MLIR: A compiler infrastructure for the end of Moore\u2019s law. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2002.11054 (2020)."},{"key":"e_1_3_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/IC2E48712.2020.00014"},{"key":"e_1_3_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3542929.3563510"},{"key":"e_1_3_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3552326.3587445"},{"key":"e_1_3_3_1_24_2","doi-asserted-by":"crossref","unstructured":"Zhongjin Li Victor Chang Haiyang Hu Maozhong Fu Jidong Ge and Francesco Piccialli. 2021. Optimizing makespan and resource utilization for multi-DNN training in GPU cluster. Future Generation Computer Systems 125 (2021) 206\u2013220.","DOI":"10.1016\/j.future.2021.06.021"},{"key":"e_1_3_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/IWQoS61813.2024.10682900"},{"key":"e_1_3_3_1_26_2","first-page":"1025","volume-title":"2019 USENIX Annual Technical Conference (USENIX ATC 19)","author":"Liu Yizhi","year":"2019","unstructured":"Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, and Yida Wang. 2019. Optimizing { CNN} model inference on { CPUs}. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). 1025\u20131040."},{"key":"e_1_3_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503222.3507752"},{"key":"e_1_3_3_1_28_2","first-page":"881","volume-title":"14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)","author":"Ma Lingxiao","year":"2020","unstructured":"Lingxiao Ma, Zhiqiang Xie, Zhi Yang, Jilong Xue, Youshan Miao, Wei Cui, Wenxiang Hu, Fan Yang, Lintao Zhang, and Lidong Zhou. 2020. Rammer: Enabling holistic deep learning compiler optimizations with { rTasks}. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 881\u2013897."},{"key":"e_1_3_3_1_29_2","doi-asserted-by":"crossref","unstructured":"Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural Computation 9 8 (1997) 1735.","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3437984.3458837"},{"key":"e_1_3_3_1_31_2","unstructured":"Maxim Naumov Dheevatsa Mudigere Hao-Jun\u00a0Michael Shi Jianyu Huang Narayanan Sundaraman Jongsoo Park Xiaodong Wang Udit Gupta Carole-Jean Wu Alisson\u00a0G Azzolini et\u00a0al. 2019. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1906.00091 (2019)."},{"key":"e_1_3_3_1_32_2","unstructured":"Nvidia. 2024. cuBLAS. https:\/\/docs.nvidia.com\/cuda\/cublas\/index.html. Accessed: 2024-06."},{"key":"e_1_3_3_1_33_2","unstructured":"Nvidia. 2024. GPU Pro Tip: CUDA 7 Streams Simplify Concurrency. https:\/\/developer.nvidia.com\/blog\/gpu-pro-tip-cuda-7-streams-simplify-concurrency. Accessed: 2024-06."},{"key":"e_1_3_3_1_34_2","unstructured":"Nvidia. 2024. MULTI-PROCESS SERVICE. https:\/\/docs.nvidia.com\/deploy\/pdf\/CUDA_Multi_Process_Service_Overview.pdf. Accessed: 2024-06."},{"key":"e_1_3_3_1_35_2","unstructured":"Nvidia. 2024. NVIDIA Multi-Instance GPU User Guide. https:\/\/docs.nvidia.com\/datacenter\/tesla\/pdf\/NVIDIA_MIG_User_Guide.pdf. Accessed: 2024-06."},{"key":"e_1_3_3_1_36_2","doi-asserted-by":"crossref","unstructured":"Jonathan Ragan-Kelley Connelly Barnes Andrew Adams Sylvain Paris Fr\u00e9do Durand and Saman Amarasinghe. 2013. Halide: a language and compiler for optimizing parallelism locality and recomputation in image processing pipelines. Acm Sigplan Notices 48 6 (2013) 519\u2013530.","DOI":"10.1145\/2499370.2462176"},{"key":"e_1_3_3_1_37_2","unstructured":"Jared Roesch Steven Lyubomirsky Marisa Kirisame Logan Weber Josh Pollock Luis Vega Ziheng Jiang Tianqi Chen Thierry Moreau and Zachary Tatlock. 2019. Relay: A high-level compiler for deep learning. 
arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1904.08368 (2019)."},{"key":"e_1_3_3_1_38_2","unstructured":"Nadav Rotem Jordan Fix Saleem Abdulrasool Garret Catron Summer Deng Roman Dzhabarov Nick Gibson James Hegeman Meghan Lele Roman Levenstein et\u00a0al. 2018. Glow: Graph lowering compiler techniques for neural networks. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1805.00907 (2018)."},{"key":"e_1_3_3_1_39_2","unstructured":"Karen Simonyan. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1409.1556 (2014)."},{"key":"e_1_3_3_1_40_2","unstructured":"Cheng Tan Zhichao Li Jian Zhang Yu Cao Sikai Qi Zherui Liu Yibo Zhu and Chuanxiong Guo. 2021. Serving DNN models with multi-instance gpus: A case of the reconfigurable machine scheduling problem. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2109.11067 (2021)."},{"key":"e_1_3_3_1_41_2","unstructured":"TensorFlow. 2024. TensorFlow XLA. https:\/\/www.tensorflow.org\/xla. Accessed: 2024-06."},{"key":"e_1_3_3_1_42_2","unstructured":"Nicolas Vasilache Oleksandr Zinenko Theodoros Theodoridis Priya Goyal Zachary DeVito William\u00a0S Moses Sven Verdoolaege Andrew Adams and Albert Cohen. 2018. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1802.04730 (2018)."},{"key":"e_1_3_3_1_43_2","unstructured":"Richard Wei Lane Schwartz and Vikram Adve. 2017. DLVM: A modern compiler infrastructure for deep learning systems. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/1711.03016 (2017)."},{"key":"e_1_3_3_1_44_2","first-page":"945","volume-title":"19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22)","author":"Weng Qizhen","year":"2022","unstructured":"Qizhen Weng, Wencong Xiao, Yinghao Yu, Wei Wang, Cheng Wang, Jian He, Yong Li, Liping Zhang, Wei Lin, and Yu Ding. 2022. 
{ MLaaS} in the wild: Workload analysis and scheduling in { Large-Scale} heterogeneous { GPU} clusters. In 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22). 945\u2013960."},{"key":"e_1_3_3_1_45_2","first-page":"533","volume-title":"14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)","author":"Xiao Wencong","year":"2020","unstructured":"Wencong Xiao, Shiru Ren, Yong Li, Yang Zhang, Pengyang Hou, Zhi Li, Yihui Feng, Wei Lin, and Yangqing Jia. 2020. { AntMan} : Dynamic scaling on { GPU} clusters for deep learning. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 533\u2013548."},{"key":"e_1_3_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCAD51958.2021.9643501"},{"key":"e_1_3_3_1_47_2","unstructured":"Fuxun Yu Di Wang Longfei Shangguan Minjia Zhang Chenchen Liu and Xiang Chen. 2022. A survey of multi-tenant deep learning inference on gpu. arXiv preprint arXiv:https:\/\/arXiv.org\/abs\/2203.09040 (2022)."},{"key":"e_1_3_3_1_48_2","unstructured":"Lianmin Zheng. 2024. Auto-scheduling a Neural Network for NVIDIA GPU. https:\/\/tvm.apache.org\/docs\/how_to\/tune_with_autoscheduler\/tune_network_cuda.html. Accessed: 2024-06."},{"key":"e_1_3_3_1_49_2","first-page":"863","volume-title":"14th USENIX symposium on operating systems design and implementation (OSDI 20)","author":"Zheng Lianmin","year":"2020","unstructured":"Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody\u00a0Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, et\u00a0al. 2020. Ansor: Generating High-Performance tensor programs for deep learning. In 14th USENIX symposium on operating systems design and implementation (OSDI 20). 
863\u2013879."},{"key":"e_1_3_3_1_50_2","first-page":"233","volume-title":"16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22)","author":"Zhu Hongyu","year":"2022","unstructured":"Hongyu Zhu, Ruofan Wu, Yijia Diao, Shanbin Ke, Haoyu Li, Chen Zhang, Jilong Xue, Lingxiao Ma, Yuqing Xia, Wei Cui, et\u00a0al. 2022. { ROLLER} : Fast and efficient tensor compilation for deep learning. In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). 233\u2013248."},{"key":"e_1_3_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],"event":{"name":"ICS '25: 2025 International Conference on Supercomputing","location":"Salt Lake City USA","acronym":"ICS '25","sponsor":["SIGARCH ACM Special Interest Group on Computer Architecture"]},"container-title":["Proceedings of the 39th ACM International Conference on Supercomputing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3721145.3735113","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,22]],"date-time":"2025-08-22T12:59:58Z","timestamp":1755867598000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3721145.3735113"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,8]]},"references-count":50,"alternative-id":["10.1145\/3721145.3735113","10.1145\/3721145"],"URL":"https:\/\/doi.org\/10.1145\/3721145.3735113","relation":{},"subject":[],"published":{"date-parts":[[2025,6,8]]},"assertion":[{"value":"2025-08-22","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}