{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,5,21]],"date-time":"2024-05-21T13:11:06Z","timestamp":1716297066434},"reference-count":45,"publisher":"Association for Computing Machinery (ACM)","issue":"OOPSLA2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Program. Lang."],"published-print":{"date-parts":[[2022,10,31]]},"abstract":"Neural architecture search (NAS) has become an increasingly important tool within the deep learning community in recent years, yielding many practical advancements in the design of deep neural network architectures. However, most existing approaches operate within highly structured design spaces, and hence (1) explore only a small fraction of the full search space of neural architectures while also (2) requiring significant manual effort from domain experts. In this work, we develop techniques that enable efficient NAS in a significantly larger design space. In particular, we propose to perform NAS in an abstract search space of program properties. Our key insights are as follows: (1) an abstract search space can be significantly smaller than the original search space, and (2) architectures with similar program properties should also have similar performance; thus, we can search more efficiently in the abstract search space. To enable this approach, we also introduce a novel efficient synthesis procedure, which performs the role of concretizing a set of promising program properties into a satisfying neural architecture. We implement our approach, \u03b1NAS, within an evolutionary framework, where the mutations are guided by the program properties. Starting with a ResNet-34 model, \u03b1NAS produces a model with slightly improved accuracy on CIFAR-10 but 96% fewer parameters. 
On ImageNet, \u03b1NAS is able to improve over Vision Transformer (30% fewer FLOPS and parameters), ResNet-50 (23% fewer FLOPS, 14% fewer parameters), and EfficientNet (7% fewer FLOPS and parameters) without any degradation in accuracy.","DOI":"10.1145\/3563329","type":"journal-article","created":{"date-parts":[[2022,10,31]],"date-time":"2022-10-31T20:23:35Z","timestamp":1667247815000},"page":"1150-1179","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Neural architecture search using property guided synthesis"],"prefix":"10.1145","volume":"6","author":[{"ORCID":"http:\/\/orcid.org\/0000-0001-6871-5764","authenticated-orcid":false,"given":"Charles","family":"Jin","sequence":"first","affiliation":[{"name":"Massachusetts Institute of Technology, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-3492-3690","authenticated-orcid":false,"given":"Phitchaya Mangpo","family":"Phothilimthana","sequence":"additional","affiliation":[{"name":"Google Research, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-0535-0531","authenticated-orcid":false,"given":"Sudip","family":"Roy","sequence":"additional","affiliation":[{"name":"Cohere, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,10,31]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322967"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.5555\/3327345.3327421"},{"key":"e_1_2_2_3_1","volume-title":"An Investigation With TuNAS. In 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 14311\u201314320","author":"Bender Gabriel","year":"2020","unstructured":"
Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V Le. 2020. Can Weight Sharing Outperform Random Architecture Search? An Investigation With TuNAS. In 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 14311\u201314320."},{"key":"e_1_2_2_4_1","volume-title":"Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang.","author":"Bradbury James","year":"2018","unstructured":"James Bradbury , Roy Frostig , Peter Hawkins , Matthew James Johnson , Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018 . JAX: composable transformations of Python +NumPy programs. http:\/\/github.com\/google\/jax James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018. JAX: composable transformations of Python+NumPy programs. http:\/\/github.com\/google\/jax"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.11709"},{"key":"e_1_2_2_6_1","volume-title":"Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation (OSDI\u201918)","author":"Chen Tianqi","year":"2018","unstructured":"Tianqi Chen , Thierry Moreau , Ziheng Jiang , Lianmin Zheng , Eddie Yan , Meghan Cowan , Haichen Shen , Leyuan Wang , Yuwei Hu , Luis Ceze , Carlos Guestrin , and Arvind Krishnamurthy . 2018 . TVM: An Automated End-to-End Optimizing Compiler for Deep Learning . In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation (OSDI\u201918) . USENIX Association, USA. 579\u2013594. isbn:978 1931971478 Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. 
TVM: An Automated End-to-End Optimizing Compiler for Deep Learning. In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation (OSDI\u201918). USENIX Association, USA. 579\u2013594. isbn:9781931971478"},{"key":"e_1_2_2_7_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy , Lucas Beyer , Alexander Kolesnikov , Dirk Weissenborn , Xiaohua Zhai , Thomas Unterthiner , Mostafa Dehghani , Matthias Minderer , Georg Heigold , Sylvain Gelly , Jakob Uszkoreit , and Neil Houlsby . 2021 . An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale . In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"e_1_2_2_8_1","volume-title":"Trainable Neural Networks. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rJl-b3RcF7","author":"Frankle Jonathan","year":"2019","unstructured":"Jonathan Frankle and Michael Carbin . 2019 . The Lottery Ticket Hypothesis: Finding Sparse , Trainable Neural Networks. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rJl-b3RcF7 Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=rJl-b3RcF7"},{"key":"e_1_2_2_9_1","volume-title":"A Survey of Quantization Methods for Efficient Neural Network Inference. CoRR, abs\/2103.13630","author":"Gholami Amir","year":"2021","unstructured":"Amir Gholami , Sehoon Kim , Zhen Dong , Zhewei Yao , Michael W. Mahoney , and Kurt Keutzer . 2021. A Survey of Quantization Methods for Efficient Neural Network Inference. CoRR, abs\/2103.13630 ( 2021 ), arXiv:2103.13630. arxiv:2103.13630 Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. A Survey of Quantization Methods for Efficient Neural Network Inference. CoRR, abs\/2103.13630 (2021), arXiv:2103.13630. arxiv:2103.13630"},{"key":"e_1_2_2_10_1","volume-title":"Advances in Neural Information Processing Systems","author":"Hassibi Babak","year":"1992","unstructured":"Babak Hassibi and David Stork . 1992. Second order derivatives for network pruning: Optimal Brain Surgeon . In Advances in Neural Information Processing Systems , S. Hanson, J. Cowan, and C. Giles (Eds.). 5, Morgan-Kaufmann . https:\/\/proceedings.neurips.cc\/paper\/ 1992 \/file\/303ed4c69846ab36c2904d3ba8573050-Paper.pdf Babak Hassibi and David Stork. 1992. Second order derivatives for network pruning: Optimal Brain Surgeon. In Advances in Neural Information Processing Systems, S. Hanson, J. Cowan, and C. Giles (Eds.). 5, Morgan-Kaufmann. https:\/\/proceedings.neurips.cc\/paper\/1992\/file\/303ed4c69846ab36c2904d3ba8573050-Paper.pdf"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01234-2_48"},{"key":"e_1_2_2_13_1","volume-title":"Gpipe: Efficient training of giant neural networks using pipeline parallelism. 
Advances in neural information processing systems, 32","author":"Huang Yanping","year":"2019","unstructured":"Yanping Huang , Youlong Cheng , Ankur Bapna , Orhan Firat , Dehao Chen , Mia Chen , HyoukJoong Lee , Jiquan Ngiam , Quoc V Le , and Yonghui Wu . 2019 . Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32 (2019). Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, and Yonghui Wu. 2019. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32 (2019)."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359630"},{"key":"e_1_2_2_15_1","volume-title":"Proceedings of the 2nd SysML Conference (SysML \u201919)","author":"Jia Zhihao","year":"2019","unstructured":"Zhihao Jia , James Thomas , Todd Warszawski , Mingyu Gao , Matei Zaharia , and Alex Aiken . 2019 . Optimizing DNN Computation with Relaxed Graph Substitutions . In Proceedings of the 2nd SysML Conference (SysML \u201919) . Zhihao Jia, James Thomas, Todd Warszawski, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2019. Optimizing DNN Computation with Relaxed Graph Substitutions. In Proceedings of the 2nd SysML Conference (SysML \u201919)."},{"key":"e_1_2_2_16_1","first-page":"1","article-title":"Beyond Data and Model Parallelism for Deep Neural Networks","volume":"1","author":"Jia Zhihao","year":"2019","unstructured":"Zhihao Jia , Matei Zaharia , and Alex Aiken . 2019 . Beyond Data and Model Parallelism for Deep Neural Networks .. Proceedings of Machine Learning and Systems , 1 (2019), 1 \u2013 13 . Zhihao Jia, Matei Zaharia, and Alex Aiken. 2019. Beyond Data and Model Parallelism for Deep Neural Networks.. 
Proceedings of Machine Learning and Systems, 1 (2019), 1\u201313.","journal-title":"Proceedings of Machine Learning and Systems"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133901"},{"key":"e_1_2_2_18_1","volume-title":"Advances in Neural Information Processing Systems","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky , Ilya Sutskever , and Geoffrey E Hinton . 2012. ImageNet Classification with Deep Convolutional Neural Networks . In Advances in Neural Information Processing Systems , F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (Eds.). 25, Curran Associates, Inc. . https:\/\/proceedings.neurips.cc\/paper\/ 2012 \/file\/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (Eds.). 25, Curran Associates, Inc.. https:\/\/proceedings.neurips.cc\/paper\/2012\/file\/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf"},{"key":"e_1_2_2_19_1","volume-title":"Advances in Neural Information Processing Systems","author":"LeCun Yann","year":"1989","unstructured":"Yann LeCun , John Denker , and Sara Solla . 1989. Optimal Brain Damage . In Advances in Neural Information Processing Systems , D. Touretzky (Ed.). 2, Morgan-Kaufmann . https:\/\/proceedings.neurips.cc\/paper\/ 1989 \/file\/6c9882bbac1c7093bd25041881277658-Paper.pdf Yann LeCun, John Denker, and Sara Solla. 1989. Optimal Brain Damage. In Advances in Neural Information Processing Systems, D. Touretzky (Ed.). 2, Morgan-Kaufmann. https:\/\/proceedings.neurips.cc\/paper\/1989\/file\/6c9882bbac1c7093bd25041881277658-Paper.pdf"},{"key":"e_1_2_2_20_1","unstructured":"Liam Li and Ameet Talwalkar. 2020. Random search and reproducibility for neural architecture search. In Uncertainty in artificial intelligence. 367\u2013377. 
"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00533"},{"key":"e_1_2_2_22_1","volume-title":"Computer Vision \u2013 ECCV","author":"Liu Chenxi","unstructured":"Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. 2018. Progressive Neural Architecture Search. In Computer Vision \u2013 ECCV, Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (Eds.). Springer International Publishing."},{"key":"e_1_2_2_23_1","volume-title":"Hierarchical Representations for Efficient Architecture Search. In International Conference on Learning Representations (ICLR\u201918)","author":"Liu Hanxiao","year":"2018","unstructured":"Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. 2018. Hierarchical Representations for Efficient Architecture Search. In International Conference on Learning Representations (ICLR\u201918). 
https:\/\/openreview.net\/forum?id=BJQRKzbA-"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.541"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3341301.3359646"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/PACT52795.2021.00008"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2872362.2872387"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2814270.2814310"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33014780"},{"key":"e_1_2_2_30_1","volume-title":"AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. In International Conference on Machine Learning (ICML\u201920)","author":"Real Esteban","year":"2003","unstructured":"Esteban Real , Chen Liang , David R. So , and Quoc V. Le . 2020 . AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. In International Conference on Machine Learning (ICML\u201920) . arXiv: 2003 .03384. arxiv:2003.03384 Esteban Real, Chen Liang, David R. So, and Quoc V. Le. 2020. AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. In International Conference on Machine Learning (ICML\u201920). arXiv:2003.03384. arxiv:2003.03384"},{"key":"e_1_2_2_31_1","unstructured":"David So Wojciech Ma\u0144ke Hanxiao Liu Zihang Dai Noam Shazeer and Quoc V Le. 2021. Searching for Efficient Transformers for Language Modeling. In Advances in Neural Information Processing Systems. \t\t\t\t David So Wojciech Ma\u0144ke Hanxiao Liu Zihang Dai Noam Shazeer and Quoc V Le. 2021. Searching for Efficient Transformers for Language Modeling. In Advances in Neural Information Processing Systems."},{"key":"e_1_2_2_32_1","volume-title":"Proceedings of International Conference on International Conference on Machine Learning (ICML\u201919)","author":"Tan Mingxing","unstructured":"Mingxing Tan and Quoc V. Le . 2019. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks . 
In Proceedings of International Conference on International Conference on Machine Learning (ICML\u201919) . arXiv:2104.00298. Mingxing Tan and Quoc V. Le. 2019. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of International Conference on International Conference on Machine Learning (ICML\u201919). arXiv:2104.00298."},{"key":"e_1_2_2_33_1","volume-title":"Proceedings of International Conference on International Conference on Machine Learning (ICML\u201921)","author":"Tan Mingxing","unstructured":"Mingxing Tan and Quoc V. Le . 2021. EfficientNetV2: Smaller Models and Faster Training . In Proceedings of International Conference on International Conference on Machine Learning (ICML\u201921) . arXiv:2104.00298. Mingxing Tan and Quoc V. Le. 2021. EfficientNetV2: Smaller Models and Faster Training. In Proceedings of International Conference on International Conference on Machine Learning (ICML\u201921). arXiv:2104.00298."},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3445814.3446753"},{"key":"e_1_2_2_35_1","volume-title":"Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. arXiv preprint arXiv:1802.04730, arxiv:1802.04730.","author":"Vasilache Nicolas","year":"2018","unstructured":"Nicolas Vasilache , Oleksandr Zinenko , Theodoros Theodoridis , Priya Goyal , Zachary DeVito , William S. Moses , Sven Verdoolaege , Andrew Adams , and Albert Cohen . 2018 . Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. arXiv preprint arXiv:1802.04730, arxiv:1802.04730. Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. 2018. Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions. 
arXiv preprint arXiv:1802.04730, arxiv:1802.04730."},{"key":"e_1_2_2_36_1","volume-title":"\u0141ukasz Kaiser, and Illia Polosukhin","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). 30, Curran Associates, Inc.. https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3062341.3062365"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.5555\/3327757.3327866"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3158151"},{"key":"e_1_2_2_40_1","volume-title":"Network Morphism. In Proceedings of the 33rd International Conference on International Conference on Machine Learning -","volume":"48","author":"Wei Tao","year":"2016","unstructured":"Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. 2016. Network Morphism. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48 (ICML\u201916). 
564\u2013572."},{"key":"e_1_2_2_41_1","volume-title":"Yisu Remy Wang, Max Willsey, Sudip Roy, and Jacques Pienaar.","author":"Yang Yichen","year":"2021","unstructured":"Yichen Yang , Mangpo Phitchaya Phothilimtha , Yisu Remy Wang, Max Willsey, Sudip Roy, and Jacques Pienaar. 2021 . Equality Saturation for Tensor Graph Superoptimization. In MLSys . arxiv:2101.01332 Yichen Yang, Mangpo Phitchaya Phothilimtha, Yisu Remy Wang, Max Willsey, Sudip Roy, and Jacques Pienaar. 2021. Equality Saturation for Tensor Graph Superoptimization. In MLSys. arxiv:2101.01332"},{"key":"e_1_2_2_42_1","volume-title":"Evaluating The Search Phase of Neural Architecture Search. In International Conference on Learning Representations.","author":"Yu Kaicheng","year":"2019","unstructured":"Kaicheng Yu , Christian Sciuto , Martin Jaggi , Claudiu Musat , and Mathieu Salzmann . 2019 . Evaluating The Search Phase of Neural Architecture Search. In International Conference on Learning Representations. Kaicheng Yu, Christian Sciuto, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. 2019. Evaluating The Search Phase of Neural Architecture Search. In International Conference on Learning Representations."},{"key":"e_1_2_2_43_1","volume-title":"NeurIPS","author":"Zhou Yanqi","year":"2020","unstructured":"Yanqi Zhou , Sudip Roy , Amirali Abdolrashidi , Daniel Wong , Peter Ma , Qiumin Xu , Hanxiao Liu , Phitchaya Phothilimtha , Shen Wang , Anna Goldie , Azalia Mirhoseini , and James Laudon . 2020. Transferable Graph Optimizers for ML Compilers . In NeurIPS 2020 . Yanqi Zhou, Sudip Roy, Amirali Abdolrashidi, Daniel Wong, Peter Ma, Qiumin Xu, Hanxiao Liu, Phitchaya Phothilimtha, Shen Wang, Anna Goldie, Azalia Mirhoseini, and James Laudon. 2020. Transferable Graph Optimizers for ML Compilers. In NeurIPS 2020."},{"key":"e_1_2_2_44_1","volume-title":"Neural Architecture Search with Reinforcement Learning. In International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=r1Ue8Hcxg","author":"Zoph Barret","year":"2017","unstructured":"Barret Zoph and Quoc Le . 2017 . Neural Architecture Search with Reinforcement Learning. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=r1Ue8Hcxg Barret Zoph and Quoc Le. 2017. Neural Architecture Search with Reinforcement Learning. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=r1Ue8Hcxg"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00907"}],"container-title":["Proceedings of the ACM on Programming Languages"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3563329","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,2]],"date-time":"2023-01-02T11:14:38Z","timestamp":1672658078000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3563329"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,31]]},"references-count":45,"journal-issue":{"issue":"OOPSLA2","published-print":{"date-parts":[[2022,10,31]]}},"alternative-id":["10.1145\/3563329"],"URL":"http:\/\/dx.doi.org\/10.1145\/3563329","relation":{},"ISSN":["2475-1421"],"issn-type":[{"value":"2475-1421","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,31]]},"assertion":[{"value":"2022-10-31","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}