{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,25]],"date-time":"2026-04-25T08:35:05Z","timestamp":1777106105857,"version":"3.51.4"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,1,27]],"date-time":"2023-01-27T00:00:00Z","timestamp":1674777600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,1,27]]},"DOI":"10.1145\/3575693.3576933","type":"proceedings-article","created":{"date-parts":[[2023,1,30]],"date-time":"2023-01-30T22:56:55Z","timestamp":1675119415000},"page":"804-817","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":68,"title":["TensorIR: An Abstraction for Automatic Tensorized Program Optimization"],"prefix":"10.1145","author":[{"given":"Siyuan","family":"Feng","sequence":"first","affiliation":[{"name":"Shanghai Jiao Tong University, China"}]},{"given":"Bohan","family":"Hou","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, USA"}]},{"given":"Hongyi","family":"Jin","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, USA"}]},{"given":"Wuwei","family":"Lin","sequence":"additional","affiliation":[{"name":"OctoML, USA"}]},{"given":"Junru","family":"Shao","sequence":"additional","affiliation":[{"name":"OctoML, USA"}]},{"given":"Ruihang","family":"Lai","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, USA"}]},{"given":"Zihao","family":"Ye","sequence":"additional","affiliation":[{"name":"University of Washington, USA"}]},{"given":"Lianmin","family":"Zheng","sequence":"additional","affiliation":[{"name":"University of California at Berkeley, USA"}]},{"given":"Cody 
Hao","family":"Yu","sequence":"additional","affiliation":[{"name":"Amazon Web Services, USA"}]},{"given":"Yong","family":"Yu","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, China"}]},{"given":"Tianqi","family":"Chen","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, USA \/ OctoML, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,1,30]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Tensorflow: A system for large-scale machine learning. In 12th USENIX symposium on operating systems design and implementation (OSDI 16). 265\u2013283.","author":"Abadi Mart\u00edn","year":"2016","unstructured":"Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, and Michael Isard. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX symposium on operating systems design and implementation (OSDI 16). 265\u2013283."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322967"},{"key":"e_1_3_2_1_3_1","unstructured":"ARM. 2017. ARM Compute Library. https:\/\/github.com\/ARM-software\/ComputeLibrary\/"},{"key":"e_1_3_2_1_4_1","unstructured":"ARM. 2017. Exploring the Arm dot product instructions. 
https:\/\/community.arm.com\/developer\/tools-software\/tools\/b\/tools-software-ides-blog\/posts\/exploring-the-arm-dot-product-instructions"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CGO.2019.8661197"},{"key":"e_1_3_2_1_6_1","unstructured":"Somashekaracharya G Bhaskaracharya, Julien Demouth, and Vinod Grover. 2020. Automatic kernel generation for volta tensor cores. arXiv preprint arXiv:2006.12645."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939785"},{"key":"e_1_3_2_1_8_1","volume-title":"Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274.","author":"Chen Tianqi","year":"2015","unstructured":"Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274."},{"key":"e_1_3_2_1_9_1","volume-title":"13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 578\u2013594.","author":"Chen Tianqi","unstructured":"Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, and Luis Ceze. 2018. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). 578\u2013594. 
"},{"key":"e_1_3_2_1_10_1","unstructured":"Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. Learning to optimize tensor programs. In Advances in Neural Information Processing Systems. 3389\u20133400."},{"key":"e_1_3_2_1_11_1","unstructured":"Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3385412.3385963"},{"key":"e_1_3_2_1_13_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805."},{"key":"e_1_3_2_1_14_1","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, and Sylvain Gelly. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929."},{"key":"e_1_3_2_1_15_1","volume-title":"Cortex: A Compiler for Recursive Deep Learning Models. MLSys.","author":"Fegade Pratik","year":"2021","unstructured":"Pratik Fegade, Tianqi Chen, Phil Gibbons, and Todd Mowry. 2021. Cortex: A Compiler for Recursive Deep Learning Models. MLSys."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/1066650.1066657"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3410463.3414632"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_1_19_1","volume-title":"Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.","author":"Howard Andrew G","year":"2017","unstructured":"Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. 
Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861."},{"key":"e_1_3_2_1_20_1","unstructured":"Intel. 2017. Intel\u00ae Math Kernel Library for Deep Learning Networks. https:\/\/software.intel.com\/en-us\/articles\/intel-mkl-dnn-part-1-library-overview-and-installation"},{"key":"e_1_3_2_1_21_1","unstructured":"Intel. 2019. Introduction to Intel\u00ae Deep Learning Boost on Second Generation Intel\u00ae Xeon\u00ae Scalable Processors. https:\/\/software.intel.com\/content\/www\/us\/en\/develop\/articles\/introduction-to-intel-deep-learning-boost-on-second-generation-intel-xeon-scalable.html"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3079856.3080246"},{"key":"e_1_3_2_1_23_1","unstructured":"Andrew Kerr, Haicheng Wu, Manish Gupta, Dustyn Blasig, Pradeep Ramini, Duane Merrill, Aniket Shivam, Piotr Majcher, Paul Springer, Markus Hohnerbach, Jin Wang, and Matt Nicely. 2022. CUTLASS. 
https:\/\/github.com\/NVIDIA\/cutlass"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133901"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2491956.2462187"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/CGO51591.2021.9370308"},{"key":"e_1_3_2_1_27_1","unstructured":"Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971."},{"key":"e_1_3_2_1_28_1","volume-title":"QNNPACK: Open source library for optimized mobile deep learning. https:\/\/engineering.fb.com\/2018\/10\/29\/ml-applications\/qnnpack\/","author":"Marat Dukhan Hao Lu","year":"2018","unstructured":"Marat Dukhan, Yiming Wu, and Hao Lu. 2018. QNNPACK: Open source library for optimized mobile deep learning. https:\/\/engineering.fb.com\/2018\/10\/29\/ml-applications\/qnnpack\/"},{"key":"e_1_3_2_1_29_1","unstructured":"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602."},{"key":"e_1_3_2_1_30_1","unstructured":"Thierry Moreau, Tianqi Chen, Ziheng Jiang, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. 
VTA: an open hardware-software stack for deep learning. arXiv preprint arXiv:1807.04188."},{"key":"e_1_3_2_1_31_1","unstructured":"Nvidia. 2017. NVIDIA Tensor Cores. https:\/\/www.nvidia.com\/en-us\/data-center\/tensorcore\/"},{"key":"e_1_3_2_1_32_1","unstructured":"Nvidia. 2017. NVIDIA TensorRT: Programmable Inference Accelerator. https:\/\/developer.nvidia.com\/tensorrt"},{"key":"e_1_3_2_1_33_1","volume-title":"Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems. 8026\u20138037.","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, and Luca Antiga. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems. 8026\u20138037."},{"key":"e_1_3_2_1_34_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog, 1, 8","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1, 8 (2019), 9. 
"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2499370.2462176"},{"key":"e_1_3_2_1_36_1","unstructured":"Ira Rosen, Dorit Nuzman, and Ayal Zaks. 2007. Loop-aware SLP in GCC. In GCC Developers Summit."},{"key":"e_1_3_2_1_37_1","unstructured":"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3428226"},{"key":"e_1_3_2_1_39_1","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556."},{"key":"e_1_3_2_1_40_1","unstructured":"The IREE Authors. 2019. IREE. https:\/\/github.com\/iree-org\/iree"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3315508.3329973"},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1007\/11688839_16"},{"key":"e_1_3_2_1_43_1","unstructured":"Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. 2018. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730. 
"},{"key":"e_1_3_2_1_44_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762."},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CGO51591.2021.9370330"},{"key":"e_1_3_2_1_46_1","volume-title":"Stripe: Tensor compilation via the nested polyhedral model. arXiv preprint arXiv:1903.06498.","author":"Zerrell Tim","year":"2019","unstructured":"Tim Zerrell and Jeremy Bruestle. 2019. Stripe: Tensor compilation via the nested polyhedral model. arXiv preprint arXiv:1903.06498."},{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3453483.3454106"},{"key":"e_1_3_2_1_48_1","volume-title":"Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, and Koushik Sen.","author":"Zheng Lianmin","year":"2020","unstructured":"Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, and Koushik Sen. 2020. 
Ansor: generating high-performance tensor programs for deep learning. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 863\u2013879."},{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3470496.3527440"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378508"}],"event":{"name":"ASPLOS '23: 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2","location":"Vancouver BC Canada","acronym":"ASPLOS '23","sponsor":["SIGARCH ACM Special Interest Group on Computer Architecture","SIGOPS ACM Special Interest Group on Operating Systems","SIGPLAN ACM Special Interest Group on Programming Languages"]},"container-title":["Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3575693.3576933","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3575693.3576933","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:51:20Z","timestamp":1750182680000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3575693.3576933"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,27]]},"references-count":50,"alternative-id":["10.1145\/3575693.3576933","10.1145\/3575693"],"URL":"https:\/\/doi.org\/10.1145\/3575693.3576933","relation":{},"subject":[],"published":{"date-parts":[[2023,1,27]]},"assertion":[{"value":"2023-01-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}