{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,5]],"date-time":"2026-04-05T10:18:10Z","timestamp":1775384290954,"version":"3.50.1"},"reference-count":67,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,2,24]],"date-time":"2025-02-24T00:00:00Z","timestamp":1740355200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"NSF","award":["2106828"],"award-info":[{"award-number":["2106828"]}]},{"name":"Synopsys gift"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Des. Autom. Electron. Syst."],"published-print":{"date-parts":[[2025,5,31]]},"abstract":"<jats:p>Within the rapidly evolving domain of Electronic Design Automation (EDA), Large Language Models (LLMs) have emerged as transformative technologies, offering unprecedented capabilities for optimizing and automating various aspects of electronic design. This survey provides a comprehensive exploration of LLM applications in EDA, focusing on advancements in model architectures, the implications of varying model sizes, and innovative customization techniques that enable tailored analytical insights. By examining the intersection of LLM capabilities and EDA requirements, the article highlights the significant impact these models have on extracting nuanced understandings from complex datasets. Furthermore, it addresses the challenges and opportunities in integrating LLMs into EDA workflows, paving the way for future research and application in this dynamic field. 
Through this detailed analysis, the survey aims to offer valuable insights to professionals in the EDA industry, AI researchers, and anyone interested in the convergence of advanced AI technologies and electronic design.<\/jats:p>","DOI":"10.1145\/3715324","type":"journal-article","created":{"date-parts":[[2025,2,4]],"date-time":"2025-02-04T06:44:31Z","timestamp":1738651471000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":26,"title":["A Survey of Research in Large Language Models for Electronic Design Automation"],"prefix":"10.1145","volume":"30","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7187-5205","authenticated-orcid":false,"given":"Jingyu","family":"Pan","sequence":"first","affiliation":[{"name":"Electrical and Computer Engineering, Duke University, Durham, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6840-7160","authenticated-orcid":false,"given":"Guanglei","family":"Zhou","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering, Duke University, Durham, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3115-0733","authenticated-orcid":false,"given":"Chen-Chia","family":"Chang","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering, Duke University, Durham, United States"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-3672-7058","authenticated-orcid":false,"given":"Isaac","family":"Jacobson","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering, Duke University, Durham, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1157-7799","authenticated-orcid":false,"given":"Jiang","family":"Hu","sequence":"additional","affiliation":[{"name":"Electrical Engineering, Texas A&amp;M University, College Station, United 
States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1486-8412","authenticated-orcid":false,"given":"Yiran","family":"Chen","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering, Duke University, Durham, United States"}]}],"member":"320","published-online":{"date-parts":[[2025,2,24]]},"reference":[{"key":"e_1_3_1_2_2","article-title":"GPT-4 technical report","author":"Achiam Josh","year":"2023","unstructured":"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et\u00a0al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).","journal-title":"arXiv preprint arXiv:2303.08774"},{"key":"e_1_3_1_3_2","unstructured":"Pravesh Agrawal Szymon Antoniak Emma Bou Hanna Baptiste Bout Devendra Chaplot Jessica Chudnovsky Diogo Costa Baudouin De Monicault Saurabh Garg Theophile Gervet et\u00a0al. 2024. Pixtral 12B. arxiv:2410.07073[cs.CV] (2024). https:\/\/arxiv.org\/abs\/2410.07073"},{"key":"e_1_3_1_4_2","volume-title":"Model Card for Claude 3","year":"2024","unstructured":"Anthropic. 2024. Model Card for Claude 3. Technical Report. Anthropic. https:\/\/www-cdn.anthropic.com\/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627\/Model_Card_Claude_3.pdf"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3670474.3685948"},{"key":"e_1_3_1_6_2","article-title":"Chip-Chat: Challenges and opportunities in conversational hardware design","author":"Blocklove Jason","year":"2023","unstructured":"Jason Blocklove, Siddharth Garg, Ramesh Karri, and Hammond Pearce. 2023. Chip-Chat: Challenges and opportunities in conversational hardware design. 
arXiv preprint arXiv:2305.13243 (2023).","journal-title":"arXiv preprint arXiv:2305.13243"},{"key":"e_1_3_1_7_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et\u00a0al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_8_2","unstructured":"Ziwei Chai Tianjie Zhang Liang Wu Kaiqiao Han Xiaohai Hu Xuanwen Huang and Yang Yang. 2023. GraphLLM: Boosting graph reasoning ability of large language model. arxiv:2310.05845[cs.CL] (2023). https:\/\/arxiv.org\/abs\/2310.05845"},{"key":"e_1_3_1_9_2","article-title":"DRC-Coder: Automated DRC checker code generation using LLM autonomous agent","author":"Chang Chen-Chia","year":"2025","unstructured":"Chen-Chia Chang, Chia-Tung Ho, Yaguang Li, Yiran Chen, and Haoxing Ren. 2025. DRC-Coder: Automated DRC checker code generation using LLM autonomous agent. In Proceedings of the 2025 International Symposium on Physical Design.","journal-title":"Proceedings of the 2025 International Symposium on Physical Design."},{"key":"e_1_3_1_10_2","first-page":"6253","volume-title":"Proceedings of the 41st International Conference on Machine Learning","author":"Chang Chen-Chia","year":"2024","unstructured":"Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, and Xin Zhang. 2024. LaMAGIC: Language-model-based topology generation for analog integrated circuits. In Proceedings of the 41st International Conference on Machine Learning(ICML \u201924). 
6253\u20136262."},{"key":"e_1_3_1_11_2","article-title":"ChipGPT: How far are we from natural language hardware design","author":"Chang Kaiyan","year":"2023","unstructured":"Kaiyan Chang, Ying Wang, Haimeng Ren, Mengdi Wang, Shengwen Liang, Yinhe Han, Huawei Li, and Xiaowei Li. 2023. ChipGPT: How far are we from natural language hardware design. arXiv preprint arXiv:2305.14019 (2023).","journal-title":"arXiv preprint arXiv:2305.14019"},{"key":"e_1_3_1_12_2","article-title":"LLM-enhanced Bayesian optimization for efficient analog layout constraint generation","author":"Chen Guojin","year":"2024","unstructured":"Guojin Chen, Keren Zhu, Seunggeun Kim, Hanqing Zhu, Yao Lai, Bei Yu, and David Z. Pan. 2024. LLM-enhanced Bayesian optimization for efficient analog layout constraint generation. arXiv preprint arXiv:2406.05250 (2024).","journal-title":"arXiv preprint arXiv:2406.05250"},{"key":"e_1_3_1_13_2","unstructured":"Hong Cai Chen Longchang Wu Ming Gao Lingrui Shen Jiarui Zhong and Yipin Xu. 2024. DocEDA: Automated extraction and design of analog circuits from documents with large language model. arxiv:2412.05301[cs.AR] (2024). https:\/\/arxiv.org\/abs\/2412.05301"},{"key":"e_1_3_1_14_2","article-title":"Deep reinforcement learning from human preferences","volume":"30","author":"Christiano Paul F.","year":"2017","unstructured":"Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems 30 (2017), 1\u20139.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_15_2","article-title":"A deep learning framework for Verilog autocompletion towards design and verification automation","author":"Dehaerne Enrique","year":"2023","unstructured":"Enrique Dehaerne, Bappaditya Dey, Sandip Halder, and Stefan De Gendt. 2023. A deep learning framework for Verilog autocompletion towards design and verification automation. 
arXiv preprint arXiv:2304.13840 (2023).","journal-title":"arXiv preprint arXiv:2304.13840"},{"key":"e_1_3_1_16_2","article-title":"Make every move count: LLM-based high-quality RTL code generation using MCTS","author":"DeLorenzo Matthew","year":"2024","unstructured":"Matthew DeLorenzo, Animesh Basak Chowdhury, Vasudev Gohil, Shailja Thakur, Ramesh Karri, Siddharth Garg, and Jeyavijayan Rajendran. 2024. Make every move count: LLM-based high-quality RTL code generation using MCTS. arXiv preprint arXiv:2402.03289 (2024).","journal-title":"arXiv preprint arXiv:2402.03289"},{"key":"e_1_3_1_17_2","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_1_18_2","article-title":"The power of large language models for wireless communication system development: A case study on FPGA platforms","author":"Du Yuyang","year":"2023","unstructured":"Yuyang Du, Soung Chang Liew, Kexin Chen, and Yulin Shao. 2023. The power of large language models for wireless communication system development: A case study on FPGA platforms. arXiv preprint arXiv:2307.07319 (2023).","journal-title":"arXiv preprint arXiv:2307.07319"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.3390\/info15110697"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCAD57390.2023.10323953"},{"key":"e_1_3_1_21_2","article-title":"From English to ASIC: Hardware implementation with large language model","author":"Goh Emil","year":"2024","unstructured":"Emil Goh, Maoyang Xiang, I. Wey, and T. Hui Teo. 2024. From English to ASIC: Hardware implementation with large language model. 
arXiv preprint arXiv:2403.07039 (2024).","journal-title":"arXiv preprint arXiv:2403.07039"},{"key":"e_1_3_1_22_2","unstructured":"Aaron Grattafiori Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Alex Vaughan et\u00a0al. 2024. The Llama 3 herd of models. arxiv:2407.21783[cs.AI] (2024). https:\/\/arxiv.org\/abs\/2407.21783"},{"key":"e_1_3_1_23_2","article-title":"DeepSeek-Coder: When the large language model meets programming\u2014The rise of code intelligence","author":"Guo Daya","year":"2024","unstructured":"Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, Y. K. Li, et\u00a0al. 2024. DeepSeek-Coder: When the large language model meets programming\u2014The rise of code intelligence. arXiv preprint arXiv:2401.14196 (2024).","journal-title":"arXiv preprint arXiv:2401.14196"},{"key":"e_1_3_1_24_2","unstructured":"Xiaoxin He Yijun Tian Yifei Sun Nitesh V. Chawla Thomas Laurent Yann LeCun Xavier Bresson and Bryan Hooi. 2024. G-Retriever: Retrieval-augmented generation for textual graph understanding and question answering. arxiv:2402.07630[cs.LG] (2024). https:\/\/arxiv.org\/abs\/2402.07630"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/MLCAD58807.2023.10299852"},{"key":"e_1_3_1_26_2","article-title":"Large language model (LLM) for standard cell layout design optimization","author":"Ho Chia-Tung","year":"2024","unstructured":"Chia-Tung Ho and Haoxing Ren. 2024. Large language model (LLM) for standard cell layout design optimization. arXiv preprint arXiv:2406.06549 (2024).","journal-title":"arXiv preprint arXiv:2406.06549"},{"key":"e_1_3_1_27_2","article-title":"VerilogCoder: Autonomous Verilog coding agents with graph-based planning and abstract syntax tree (AST)-based waveform tracing tool","author":"Ho Chia-Tung","year":"2024","unstructured":"Chia-Tung Ho, Haoxing Ren, and Brucek Khailany. 2024. 
VerilogCoder: Autonomous Verilog coding agents with graph-based planning and abstract syntax tree (AST)-based waveform tracing tool. arXiv preprint arXiv:2408.08927 (2024).","journal-title":"arXiv preprint arXiv:2408.08927"},{"key":"e_1_3_1_28_2","unstructured":"Erik Johannes Husom Arda Goknil Lwin Khin Shar and Sagar Sen. 2024. The price of prompting: Profiling energy use in large language models inference. arxiv:2407.16893[cs.CY] (2024). https:\/\/arxiv.org\/abs\/2407.16893"},{"key":"e_1_3_1_29_2","article-title":"Mistral 7B","author":"Jiang Albert Q.","year":"2023","unstructured":"Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et\u00a0al. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023).","journal-title":"arXiv preprint arXiv:2310.06825"},{"key":"e_1_3_1_30_2","article-title":"AnalogCoder: Analog circuit design via training-free code generation","author":"Lai Yao","year":"2024","unstructured":"Yao Lai, Sungyoung Lee, Guojin Chen, Souradip Poddar, Mengkang Hu, David Z. Pan, and Ping Luo. 2024. AnalogCoder: Analog circuit design via training-free code generation. arXiv preprint arXiv:2405.14918 (2024).","journal-title":"arXiv preprint arXiv:2405.14918"},{"key":"e_1_3_1_31_2","article-title":"SpecLLM: Exploring generation and review of VLSI design specification with large language model","author":"Li Mengming","year":"2024","unstructured":"Mengming Li, Wenji Fang, Qijun Zhang, and Zhiyao Xie. 2024. SpecLLM: Exploring generation and review of VLSI design specification with large language model. arXiv preprint arXiv:2401.13266 (2024).","journal-title":"arXiv preprint arXiv:2401.13266"},{"key":"e_1_3_1_32_2","article-title":"Reinforcement learning with human feedback: Learning dynamic choices via pessimism","author":"Li Zihao","year":"2023","unstructured":"Zihao Li, Zhuoran Yang, and Mengdi Wang. 2023. 
Reinforcement learning with human feedback: Learning dynamic choices via pessimism. arXiv preprint arXiv:2305.18438 (2023).","journal-title":"arXiv preprint arXiv:2305.18438"},{"key":"e_1_3_1_33_2","article-title":"LayoutCopilot: An LLM-powered multi-agent collaborative framework for interactive analog layout design","author":"Liu Bingyang","year":"2024","unstructured":"Bingyang Liu, Haoyi Zhang, Xiaohan Gao, Zichen Kong, Xiyuan Tang, Yibo Lin, Runsheng Wang, and Ru Huang. 2024. LayoutCopilot: An LLM-powered multi-agent collaborative framework for interactive analog layout design. arXiv preprint arXiv:2406.18873 (2024).","journal-title":"arXiv preprint arXiv:2406.18873"},{"key":"e_1_3_1_34_2","article-title":"ChipNeMo: Domain-adapted LLMs for chip design","author":"Liu Mingjie","year":"2023","unstructured":"Mingjie Liu, Teodor-Dumitru Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, et\u00a0al. 2023. ChipNeMo: Domain-adapted LLMs for chip design. arXiv preprint arXiv:2311.00176 (2023).","journal-title":"arXiv preprint arXiv:2311.00176"},{"key":"e_1_3_1_35_2","first-page":"1","volume-title":"Proceedings of the 2023 IEEE\/ACM International Conference on Computer Aided Design (ICCAD \u201923)","author":"Liu Mingjie","year":"2023","unstructured":"Mingjie Liu, Nathaniel Pinckney, Brucek Khailany, and Haoxing Ren. 2023. VerilogEval: Evaluating large language models for Verilog code generation. In Proceedings of the 2023 IEEE\/ACM International Conference on Computer Aided Design (ICCAD \u201923). IEEE, 1\u20138."},{"key":"e_1_3_1_36_2","article-title":"RTLCoder: Outperforming GPT-3.5 in design RTL generation with our open-source dataset and lightweight solution","author":"Liu Shang","year":"2023","unstructured":"Shang Liu, Wenji Fang, Yao Lu, Qijun Zhang, Hongce Zhang, and Zhiyao Xie. 2023. 
RTLCoder: Outperforming GPT-3.5 in design RTL generation with our open-source dataset and lightweight solution. arXiv preprint arXiv:2312.08617 (2023).","journal-title":"arXiv preprint arXiv:2312.08617"},{"key":"e_1_3_1_37_2","article-title":"RTLLM: An open-source benchmark for design RTL generation with large language model","author":"Lu Yao","year":"2023","unstructured":"Yao Lu, Shang Liu, Qijun Zhang, and Zhiyao Xie. 2023. RTLLM: An open-source benchmark for design RTL generation with large language model. arXiv preprint arXiv:2308.05345 (2023).","journal-title":"arXiv preprint arXiv:2308.05345"},{"key":"e_1_3_1_38_2","article-title":"VerilogReader: LLM-aided hardware test generation","author":"Ma Ruiyang","year":"2024","unstructured":"Ruiyang Ma, Yuxin Yang, Ziqian Liu, Jiaxi Zhang, Min Li, Junhua Huang, and Guojie Luo. 2024. VerilogReader: LLM-aided hardware test generation. arXiv preprint arXiv:2406.04373 (2024).","journal-title":"arXiv preprint arXiv:2406.04373"},{"key":"e_1_3_1_39_2","unstructured":"Meta. 2024. Llama 3.2: Revolutionizing Edge AI and Vision with Open Customizable Models. Retrieved February 6 2025 from https:\/\/ai.meta.com\/blog\/llama-3-2-connect-2024-vision-edge-mobile-devices\/"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics11030435"},{"key":"e_1_3_1_41_2","article-title":"Learning to compress prompts with gist tokens","volume":"36","author":"Mu Jesse","year":"2024","unstructured":"Jesse Mu, Xiang Li, and Noah Goodman. 2024. Learning to compress prompts with gist tokens. Advances in Neural Information Processing Systems 36 (2024), 19327\u201319352.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_42_2","article-title":"Generating secure hardware using ChatGPT resistant to CWEs","author":"Nair Madhav","year":"2023","unstructured":"Madhav Nair, Rajat Sadhukhan, and Debdeep Mukhopadhyay. 2023. Generating secure hardware using ChatGPT resistant to CWEs. 
Cryptology ePrint Archive. Preprint.","journal-title":"Cryptology ePrint Archive."},{"key":"e_1_3_1_43_2","article-title":"CodeGen: An open large language model for code with multi-turn program synthesis","author":"Nijkamp Erik","year":"2022","unstructured":"Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 (2022).","journal-title":"arXiv preprint arXiv:2203.13474"},{"key":"e_1_3_1_44_2","first-page":"27730","article-title":"Training language models to follow instructions with human feedback","volume":"35","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et\u00a0al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730\u201327744.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_45_2","unstructured":"Zehua Pei Hui-Ling Zhen Mingxuan Yuan Yu Huang and Bei Yu. 2024. BetterV: Controlled Verilog generation with discriminative guidance. arxiv:2402.03375[cs.AI] (2024)."},{"key":"e_1_3_1_46_2","article-title":"Revisiting VerilogEval: Newer LLMs, in-context learning, and specification-to-RTL tasks","author":"Pinckney Nathaniel","year":"2024","unstructured":"Nathaniel Pinckney, Christopher Batten, Mingjie Liu, Haoxing Ren, and Brucek Khailany. 2024. Revisiting VerilogEval: Newer LLMs, in-context learning, and specification-to-RTL tasks. arXiv preprint arXiv:2408.11053 (2024).","journal-title":"arXiv preprint arXiv:2408.11053"},{"key":"e_1_3_1_47_2","unstructured":"PrimisAI. 2023. Welcome to RapidGPT. 
Retrieved February 6 2025 from https:\/\/primis.ai\/docs"},{"key":"e_1_3_1_48_2","unstructured":"Yuan Pu Zhuolun He Tairu Qiu Haoyuan Wu and Bei Yu. 2024. Customized retrieval augmented generation and benchmarking for EDA tool documentation QA. arxiv:2407.15353[cs.CL] (2024). https:\/\/arxiv.org\/abs\/2407.15353"},{"key":"e_1_3_1_49_2","first-page":"8748","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning. 8748\u20138763."},{"key":"e_1_3_1_50_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Preprint."},{"issue":"140","key":"e_1_3_1_51_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21, 140 (2020), 1\u201367.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_52_2","first-page":"8821","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Ramesh Aditya","year":"2021","unstructured":"Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In Proceedings of the International Conference on Machine Learning. 
8821\u20138831."},{"key":"e_1_3_1_53_2","article-title":"Zero-shot RTL code generation with attention sink augmented large language models","author":"Sandal Selim","year":"2024","unstructured":"Selim Sandal and Ismail Akturk. 2024. Zero-shot RTL code generation with attention sink augmented large language models. arXiv preprint arXiv:2401.08683 (2024).","journal-title":"arXiv preprint arXiv:2401.08683"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2023.3334955"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3670474.3685960"},{"key":"e_1_3_1_56_2","unstructured":"Chandan Singh Jeevana Priya Inala Michel Galley Rich Caruana and Jianfeng Gao. 2024. Rethinking interpretability in the era of large language models. arxiv:2402.01761[cs.CL] (2024). https:\/\/arxiv.org\/abs\/2402.01761"},{"key":"e_1_3_1_57_2","article-title":"LLoCO: Learning long contexts offline","author":"Tan Sijun","year":"2024","unstructured":"Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E Gonzalez, and Raluca Ada Popa. 2024. LLoCO: Learning long contexts offline. arXiv preprint arXiv:2404.07979 (2024).","journal-title":"arXiv preprint arXiv:2404.07979"},{"key":"e_1_3_1_58_2","article-title":"CodeGemma: Open code models based on Gemma","author":"Team CodeGemma","year":"2024","unstructured":"CodeGemma Team, Heri Zhao, Jeffrey Hui, Joshua Howland, Nam Nguyen, Siqi Zuo, Andrea Hu, Christopher A. Choquette-Choo, Jingyue Shen, Joe Kelley, et\u00a0al. 2024. CodeGemma: Open code models based on Gemma. arXiv preprint arXiv:2406.11409 (2024).","journal-title":"arXiv preprint arXiv:2406.11409"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/3643681"},{"key":"e_1_3_1_60_2","article-title":"AutoChip: Automating HDL generation using LLM feedback","author":"Thakur Shailja","year":"2023","unstructured":"Shailja Thakur, Jason Blocklove, Hammond Pearce, Benjamin Tan, Siddharth Garg, and Ramesh Karri. 2023. 
AutoChip: Automating HDL generation using LLM feedback. arXiv preprint arXiv:2311.04887 (2023).","journal-title":"arXiv preprint arXiv:2311.04887"},{"key":"e_1_3_1_61_2","article-title":"Llama: Open and efficient foundation language models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et\u00a0al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).","journal-title":"arXiv preprint arXiv:2302.13971"},{"key":"e_1_3_1_62_2","article-title":"Llama 2: Open foundation and fine-tuned chat models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et\u00a0al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).","journal-title":"arXiv preprint arXiv:2307.09288"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3649329.3657353"},{"key":"e_1_3_1_64_2","unstructured":"Bing-Yue Wu Utsav Sharma Sai Rahul Dhanvi Kankipati Ajay Yadav Bintu Kappil George Sai Ritish Guntupalli Austin Rovinski and Vidya A. Chhabria. 2024. EDA corpus: A large language model dataset for enhanced interaction with OpenROAD. arXiv:2405.06676 (2024)."},{"key":"e_1_3_1_65_2","article-title":"MEIC: Re-thinking RTL debug automation using LLMs","author":"Xu Ke","year":"2024","unstructured":"Ke Xu, Jialin Sun, Yuchen Hu, Xinwei Fang, Weiwei Shan, Xi Wang, and Zhe Jiang. 2024. MEIC: Re-thinking RTL debug automation using LLMs. 
arXiv preprint arXiv:2405.06840 (2024).","journal-title":"arXiv preprint arXiv:2405.06840"},{"key":"e_1_3_1_66_2","article-title":"On the viability of using LLMs for SW\/HW co-design: An example in designing CiM DNN accelerators","author":"Yan Zheyu","year":"2023","unstructured":"Zheyu Yan, Yifan Qin, Xiaobo Sharon Hu, and Yiyu Shi. 2023. On the viability of using LLMs for SW\/HW co-design: An example in designing CiM DNN accelerators. arXiv preprint arXiv:2306.06923 (2023).","journal-title":"arXiv preprint arXiv:2306.06923"},{"key":"e_1_3_1_67_2","article-title":"ReAct: Synergizing reasoning and acting in language models","author":"Yao Shunyu","year":"2023","unstructured":"Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2023).","journal-title":"arXiv preprint arXiv:2210.03629"},{"key":"e_1_3_1_68_2","article-title":"ADO-LLM: Analog design Bayesian optimization with in-context learning of large language models","author":"Yin Yuxuan","year":"2024","unstructured":"Yuxuan Yin, Yu Wang, Boxun Xu, and Peng Li. 2024. ADO-LLM: Analog design Bayesian optimization with in-context learning of large language models. 
arXiv preprint arXiv:2406.18770 (2024).","journal-title":"arXiv preprint arXiv:2406.18770"}],"container-title":["ACM Transactions on Design Automation of Electronic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715324","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3715324","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:18Z","timestamp":1750295898000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715324"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,24]]},"references-count":67,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,5,31]]}},"alternative-id":["10.1145\/3715324"],"URL":"https:\/\/doi.org\/10.1145\/3715324","relation":{},"ISSN":["1084-4309","1557-7309"],"issn-type":[{"value":"1084-4309","type":"print"},{"value":"1557-7309","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,24]]},"assertion":[{"value":"2024-09-06","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-01-16","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-24","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}