{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,11]],"date-time":"2025-12-11T20:59:32Z","timestamp":1765486772470,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":44,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,6,9]],"date-time":"2022-06-09T00:00:00Z","timestamp":1654732800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,6,9]]},"DOI":"10.1145\/3519939.3523448","type":"proceedings-article","created":{"date-parts":[[2022,6,2]],"date-time":"2022-06-02T21:05:05Z","timestamp":1654203905000},"page":"872-887","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":14,"title":["FreeTensor: a free-form DSL with holistic optimizations for irregular tensor programs"],"prefix":"10.1145","author":[{"given":"Shizhi","family":"Tang","sequence":"first","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Jidong","family":"Zhai","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Haojie","family":"Wang","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Lin","family":"Jiang","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Liyan","family":"Zheng","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Zhenhao","family":"Yuan","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]},{"given":"Chen","family":"Zhang","sequence":"additional","affiliation":[{"name":"Tsinghua University, China"}]}],"member":"320","published-online":{"date-parts":[[2022,6,9]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"2017. XLA: Optimizing Compiler for TensorFlow. 
https:\/\/www.tensorflow.org\/xla"},{"key":"e_1_3_2_1_2_1","unstructured":"2021. Nvidia TensorRT Documentation. https:\/\/docs.nvidia.com\/deeplearning\/tensorrt\/developer-guide\/index.html"},{"key":"e_1_3_2_1_3_1","unstructured":"Mart\u00edn Abadi Ashish Agarwal Paul Barham Eugene Brevdo Zhifeng Chen Craig Citro Greg S. Corrado Andy Davis Jeffrey Dean Matthieu Devin Sanjay Ghemawat Ian Goodfellow Andrew Harp Geoffrey Irving Michael Isard Yangqing Jia Rafal Jozefowicz Lukasz Kaiser Manjunath Kudlur Josh Levenberg Dandelion Man\u00e9 Rajat Monga Sherry Moore Derek Murray Chris Olah Mike Schuster Jonathon Shlens Benoit Steiner Ilya Sutskever Kunal Talwar Paul Tucker Vincent Vanhoucke Vijay Vasudevan Fernanda Vi\u00e9gas Oriol Vinyals Pete Warden Martin Wattenberg Martin Wicke Yuan Yu and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https:\/\/www.tensorflow.org\/ Software available from tensorflow.org"},{"key":"e_1_3_2_1_4_1","unstructured":"Randy Allen and Ken Kennedy. 2001. 
Optimizing Compilers for Modern Architectures: A Dependence-based Approach. Morgan Kaufmann. isbn:1-55860-286-0"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CGO.2019.8661197"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/3-540-45789-5_17"},{"key":"e_1_3_2_1_7_1","article-title":"Automatic Differentiation in Machine Learning: a Survey","volume":"18","author":"Baydin Atilim Gunes","year":"2017","unstructured":"Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. 2017. Automatic Differentiation in Machine Learning: a Survey. J. Mach. Learn. Res., 18 (2017), 153:1\u2013153:43. http:\/\/jmlr.org\/papers\/v18\/17-468.html","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_1_8_1","volume-title":"Longformer: The Long-Document Transformer. CoRR, abs\/2004.05150","author":"Beltagy Iz","year":"2020","unstructured":"Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. CoRR, abs\/2004.05150 (2020), arXiv:2004.05150. arxiv:2004.05150"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1137\/141000671"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/1375581.1375595"},{"key":"e_1_3_2_1_11_1","volume-title":"MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. 
CoRR, abs\/1512.01274","author":"Chen Tianqi","year":"2015","unstructured":"Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. CoRR, abs\/1512.01274 (2015), arXiv:1512.01274. arxiv:1512.01274"},{"key":"e_1_3_2_1_12_1","volume-title":"TVM: End-to-End Optimization Stack for Deep Learning. CoRR, abs\/1802.04799","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Q. Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: End-to-End Optimization Stack for Deep Learning. CoRR, abs\/1802.04799 (2018), arXiv:1802.04799. arxiv:1802.04799"},{"key":"e_1_3_2_1_13_1","volume-title":"Training Deep Nets with Sublinear Memory Cost. CoRR, abs\/1604.06174","author":"Chen Tianqi","year":"2016","unstructured":"Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training Deep Nets with Sublinear Memory Cost. CoRR, abs\/1604.06174 (2016), arXiv:1604.06174. 
arxiv:1604.06174"},{"key":"e_1_3_2_1_14_1","volume-title":"Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen, Lianmin Zheng, Eddie Q. Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. Learning to Optimize Tensor Programs. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicol\u00f2 Cesa-Bianchi, and Roman Garnett (Eds.). 3393\u20133404. https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/8b5700012be65c9da25f49408d959ca0-Abstract.html"},{"key":"e_1_3_2_1_15_1","volume-title":"cuDNN: Efficient Primitives for Deep Learning. CoRR, abs\/1410.0759","author":"Chetlur Sharan","year":"2014","unstructured":"Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. cuDNN: Efficient Primitives for Deep Learning. 
CoRR, abs\/1410.0759 (2014), arxiv:1410.0759"},{"key":"e_1_3_2_1_16_1","unstructured":"2016. Dense Linear Algebra on GPUs. https:\/\/developer.nvidia.com\/cublas"},{"key":"e_1_3_2_1_17_1","volume-title":"Matthew James Johnson, and Chris Leary","author":"Frostig Roy","year":"2018","unstructured":"Roy Frostig, Matthew James Johnson, and Chris Leary. 2018. Compiling machine learning programs via high-level tracing. Systems for Machine Learning."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-020-2649-2"},{"key":"e_1_3_2_1_19_1","volume-title":"Martin","author":"Hu Shi-Min","year":"2021","unstructured":"Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Junxiong Cai, Jiahui Huang, Tai-Jiang Mu, and Ralph R. Martin. 2021. Subdivision-Based Mesh Convolution Networks. CoRR, abs\/2106.02285 (2021), arXiv:2106.02285. arxiv:2106.02285"},{"key":"e_1_3_2_1_20_1","volume-title":"Don\u2019t Unroll Adjoint: Differentiating SSA-Form Programs. CoRR, abs\/1810.07951","author":"Innes Michael","year":"2018","unstructured":"Michael Innes. 2018. Don\u2019t Unroll Adjoint: Differentiating SSA-Form Programs. CoRR, abs\/1810.07951 (2018), arXiv:1810.07951. 
arxiv:1810.07951"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359630"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00780"},{"key":"e_1_3_2_1_23_1","unstructured":"Vincent Loechner. 1999. PolyLib: A library for manipulating parameterized polyhedra."},{"key":"e_1_3_2_1_24_1","volume-title":"14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020","author":"Ma Lingxiao","year":"2020","unstructured":"Lingxiao Ma, Zhiqiang Xie, Zhi Yang, Jilong Xue, Youshan Miao, Wei Cui, Wenxiang Hu, Fan Yang, Lintao Zhang, and Lidong Zhou. 2020. Rammer: Enabling Holistic Deep Learning Compiler Optimizations with rTasks. In 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020, Virtual Event, November 4-6, 2020. USENIX Association, 881\u2013897. https:\/\/www.usenix.org\/conference\/osdi20\/presentation\/ma"},{"key":"e_1_3_2_1_25_1","volume-title":"ICML 2015 AutoML workshop. 238","author":"Maclaurin Dougal","year":"2015","unstructured":"Dougal Maclaurin, David Duvenaud, and Ryan P Adams. 2015. Autograd: Effortless gradients in numpy. In ICML 2015 AutoML workshop. 238, 5."},{"key":"e_1_3_2_1_26_1","unstructured":"2003. Intel(R) oneAPI Math Kernel Library. 
https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/tools\/oneapi\/onemkl.html"},{"key":"e_1_3_2_1_27_1","volume-title":"Automatically Synthesize Fast Gradients. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020","author":"William","year":"2020","unstructured":"William S. Moses and Valentin Churavy. 2020. Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, Hugo Larochelle, Marc\u2019Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (Eds.). https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/9332c513ef44b682e9347822c2e457ac-Abstract.html"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/7902.7904"},{"key":"e_1_3_2_1_29_1","unstructured":"Adam Paszke Sam Gross Soumith Chintala Gregory Chanan Edward Yang Zachary DeVito Zeming Lin Alban Desmaison Luca Antiga and Adam Lerer. 2017. 
Automatic differentiation in PyTorch."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/125826.125848"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2491956.2462176"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2018.2857721"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3315508.3329973"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330756"},{"key":"e_1_3_2_1_35_1","volume-title":"Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018","author":"van Merrienboer Bart","year":"2018","unstructured":"Bart van Merrienboer, Olivier Breuleux, Arnaud Bergeron, and Pascal Lamblin. 2018. Automatic differentiation in ML: Where we are and where we should be going. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr\u00e9al, Canada, Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicol\u00f2 Cesa-Bianchi, and Roman Garnett (Eds.). 8771\u20138781. https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/770f8e448d07586afbf77bb59f698587-Abstract.html"},{"key":"e_1_3_2_1_36_1","volume-title":"Tangent: Automatic Differentiation Using Source Code Transformation in Python. 
CoRR, abs\/1711.02712","author":"van Merri\u00ebnboer Bart","year":"2017","unstructured":"Bart van Merri\u00ebnboer, Alexander B. Wiltschko, and Dan Moldovan. 2017. Tangent: Automatic Differentiation Using Source Code Transformation in Python. CoRR, abs\/1711.02712 (2017), arXiv:1711.02712. arxiv:1711.02712"},{"key":"e_1_3_2_1_37_1","volume-title":"Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. CoRR, abs\/1802.04730","author":"Vasilache Nicolas","year":"2018","unstructured":"Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. 2018. Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. CoRR, abs\/1802.04730 (2018), arXiv:1802.04730. arxiv:1802.04730"},{"key":"e_1_3_2_1_38_1","volume-title":"6th International Conference on Learning Representations, ICLR","author":"Velickovic Petar","year":"2018","unstructured":"Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. 
https:\/\/openreview.net\/forum?id=rJXMpikCZ"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-15582-6_49"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.13140\/RG.2.1.1174.6323"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/2400682.2400713"},{"key":"e_1_3_2_1_42_1","volume-title":"PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections. In 15th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2021","author":"Wang Haojie","year":"2021","unstructured":"Haojie Wang, Jidong Zhai, Mingyu Gao, Zixuan Ma, Shizhi Tang, Liyan Zheng, Yuanzhi Li, Kaiyuan Rong, Yuanyong Chen, and Zhihao Jia. 2021. PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections. In 15th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2021, July 14-16, 2021, Angela Demke Brown and Jay R. Lorch (Eds.). USENIX Association, 37\u201354. 
https:\/\/www.usenix.org\/conference\/osdi21\/presentation\/wang"},{"key":"e_1_3_2_1_43_1","volume-title":"Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. CoRR, abs\/1909.01315","author":"Wang Minjie","year":"2019","unstructured":"Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J. Smola, and Zheng Zhang. 2019. Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. CoRR, abs\/1909.01315 (2019), arxiv:1909.01315. arxiv:1909.01315"},{"key":"e_1_3_2_1_44_1","volume-title":"Ansor: Generating High-Performance Tensor Programs for Deep Learning. In 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020","author":"Zheng Lianmin","year":"2020","unstructured":"Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, and Ion Stoica. 2020. Ansor: Generating High-Performance Tensor Programs for Deep Learning. 
In 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020, Virtual Event, November 4-6, 2020. USENIX Association, 863\u2013879. https:\/\/www.usenix.org\/conference\/osdi20\/presentation\/zheng"}],"event":{"name":"PLDI '22: 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation","sponsor":["SIGPLAN ACM Special Interest Group on Programming Languages"],"location":"San Diego CA USA","acronym":"PLDI '22"},"container-title":["Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3519939.3523448","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3519939.3523448","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:31:16Z","timestamp":1750188676000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3519939.3523448"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,9]]},"references-count":44,"alternative-id":["10.1145\/3519939.3523448","10.1145\/3519939"],"URL":"https:\/\/doi.org\/10.1145\/3519939.3523448","relation":{},"subject":[],"published":{"date-parts":[[2022,6,9]]},"assertion":[{"value":"2022-06-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}