{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T00:06:56Z","timestamp":1773965216527,"version":"3.50.1"},"reference-count":24,"publisher":"Association for Computing Machinery (ACM)","issue":"6","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Des. Autom. Electron. Syst."],"published-print":{"date-parts":[[2025,11,30]]},"abstract":"<jats:p>High-Level Synthesis (HLS) tools offer rapid hardware design from C code, but their compatibility is limited by code constructs. This article investigates Large Language Models (LLMs) for automatically refactoring C code into HLS-compatible formats. We present a case study using an LLM to rewrite C code for NIST 800-22 randomness tests, a QuickSort algorithm, and AES-128 into HLS-synthesizable C. The LLM iteratively transforms the C code guided by the system prompt and tool\u2019s feedback, implementing functions like streaming data and hardware-specific signals. With the hindsight obtained from the case study, we implement a fully automated framework to refactor C code into HLS-compatible formats using LLMs. To tackle complex designs, we implement a preprocessing step that breaks down the hierarchy in order to approach the problem in a divide-and-conquer bottom-up way. We validated our framework on three ciphers, one hash function, five NIST 800-22 randomness tests, and a QuickSort algorithm. 
Our results show a high success rate on benchmarks that are orders of magnitude more complex than what has been achieved generating Verilog with LLMs.<\/jats:p>","DOI":"10.1145\/3734524","type":"journal-article","created":{"date-parts":[[2025,5,10]],"date-time":"2025-05-10T04:38:02Z","timestamp":1746851882000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["C2HLSC: Leveraging Large Language Models to Bridge the Software-to-Hardware Design Gap"],"prefix":"10.1145","volume":"30","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2367-6700","authenticated-orcid":false,"given":"Luca","family":"Collini","sequence":"first","affiliation":[{"name":"ECE, New York University Tandon School of Engineering","place":["Brooklyn, United States"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6158-9512","authenticated-orcid":false,"given":"Siddharth","family":"Garg","sequence":"additional","affiliation":[{"name":"ECE, New York University Tandon School of Engineering","place":["Brooklyn, United States"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7989-5617","authenticated-orcid":false,"given":"Ramesh","family":"Karri","sequence":"additional","affiliation":[{"name":"ECE, New York University Tandon School of Engineering","place":["Brooklyn, United States"]}]}],"member":"320","published-online":{"date-parts":[[2025,10,21]]},"reference":[{"key":"e_1_3_3_2_2","article-title":"Bard: A Large Language Model from Google AI","author":"AI Google","year":"2024","unstructured":"Google AI. 2024. Bard: A Large Language Model from Google AI. Retrieved April 6, 2024 from https:\/\/blog.google\/technology\/ai\/bard-google-ai-search-updates\/","journal-title":"https:\/\/blog.google\/technology\/ai\/bard-google-ai-search-updates\/"},{"key":"e_1_3_3_3_2","unstructured":"Anthropic. 2024. Claude 3.5 Model Card Addendum. 
Retrieved October 14, 2024 from https:\/\/www-cdn.anthropic.com\/fed9cc193a14b84131812372d8d5857f8f304c52\/Model_Card_Claude_3_Addendum.pdf"},{"key":"e_1_3_3_4_2","article-title":"pycparser: A Complete Parser of the C Language","author":"Bendersky Eli","year":"2015","unstructured":"Eli Bendersky. 2015. pycparser: A Complete Parser of the C Language. Retrieved February 14, 2025 from https:\/\/github.com\/eliben\/pycparser","journal-title":"https:\/\/github.com\/eliben\/pycparser"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/MLCAD58807.2023.10299874"},{"key":"e_1_3_3_6_2","doi-asserted-by":"crossref","unstructured":"Luca Collini Siddharth Garg and Ramesh Karri. 2024. C2HLSC: Can LLMs bridge the software-to-hardware design gap? arxiv:2406.09233 [cs.AR]. Retrieved from https:\/\/arxiv.org\/abs\/2406.09233","DOI":"10.1109\/LAD62341.2024.10691856"},{"key":"e_1_3_3_7_2","unstructured":"Yonggan Fu Yongan Zhang Zhongzhi Yu Sixu Li Zhifan Ye Chaojian Li Cheng Wan and Yingyan Celine Lin. 2025. GPT4AIGChip: Towards next-generation AI accelerator design automation via large language models. arxiv:2309.10730 [cs.LG]. Retrieved from https:\/\/arxiv.org\/abs\/2309.10730"},{"key":"e_1_3_3_8_2","article-title":"Quick Sort in C","unstructured":"GeeksforGeeks. [n. d.]. Quick Sort in C. Retrieved April 6, 2024 from https:\/\/www.geeksforgeeks.org\/quick-sort-in-c\/","journal-title":"https:\/\/www.geeksforgeeks.org\/quick-sort-in-c\/"},{"key":"e_1_3_3_9_2","article-title":"HLSLibs - High-Level Synthesis Libraries","unstructured":"HLSLibs. [n. d.]. HLSLibs - High-Level Synthesis Libraries. Retrieved April 6, 2024 from https:\/\/hlslibs.org\/","journal-title":"https:\/\/hlslibs.org\/"},{"key":"e_1_3_3_10_2","article-title":"tiny-AES-c","unstructured":"Koke. [n. d.]. tiny-AES-c.
Retrieved April 6, 2024 from https:\/\/github.com\/kokke\/tiny-AES-c","journal-title":"https:\/\/github.com\/kokke\/tiny-AES-c"},{"key":"e_1_3_3_11_2","unstructured":"Yuchao Liao Tosiron Adegbija and Roman Lysecky. 2024. Are LLMs any good for high-level synthesis? arxiv:2408.10428 [cs.AR]. Retrieved from https:\/\/arxiv.org\/abs\/2408.10428"},{"key":"e_1_3_3_12_2","unstructured":"Mingjie Liu Nathaniel Pinckney Brucek Khailany and Haoxing Ren. 2023. VerilogEval: Evaluating large language models for Verilog code generation. arxiv:2309.07544 [cs.LG]. Retrieved from https:\/\/arxiv.org\/abs\/2309.07544"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISEDA62518.2024.10618053"},{"key":"e_1_3_3_14_2","unstructured":"Yao Lu Shang Liu Qijun Zhang and Zhiyao Xie. 2023. RTLLM: An open-source benchmark for design RTL generation with large language model. arxiv:2308.05345 [cs.LG]. Retrieved from https:\/\/arxiv.org\/abs\/2308.05345"},{"key":"e_1_3_3_15_2","unstructured":"James T. Meech. 2024. Leveraging high-level synthesis and large language models to generate, simulate, and deploy a uniform random number generator hardware design. arxiv:2311.03489 [cs.AR]. Retrieved from https:\/\/arxiv.org\/abs\/2311.03489"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2015.2513673"},{"key":"e_1_3_3_17_2","unstructured":"NIST. 2010. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications (Revision 1a). Retrieved April 6, 2024 from https:\/\/nvlpubs.nist.gov\/nistpubs\/legacy\/sp\/nistspecialpublication800-22r1a.pdf"},{"key":"e_1_3_3_18_2","unstructured":"OpenAI. 2024. GPT-4 Turbo System Card.
Retrieved October 14, 2024 from https:\/\/cdn.openai.com\/gpt-4o-system-card.pdf"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833571"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCD46524.2019.00054"},{"key":"e_1_3_3_21_2","unstructured":"Sneha Swaroopa Rijoy Mukherjee Anushka Debnath and Rajat Subhra Chakraborty. 2024. Evaluating large language models for automatic register transfer logic generation via high-level synthesis. arxiv:2408.02793 [cs.AR]. Retrieved from https:\/\/arxiv.org\/abs\/2408.02793"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/3643681"},{"key":"e_1_3_3_23_2","unstructured":"Shailja Thakur Jason Blocklove Hammond Pearce Benjamin Tan Siddharth Garg and Ramesh Karri. 2024. AutoChip: Automating HDL generation using LLM feedback. arxiv:2311.04887 [cs.PL]. Retrieved from https:\/\/arxiv.org\/abs\/2311.04887"},{"key":"e_1_3_3_24_2","unstructured":"Chenwei Xiong Cheng Liu Huawei Li and Xiaowei Li. 2024. HLSPilot: LLM-based high-level synthesis. arxiv:2408.06810 [cs.AR].
Retrieved from https:\/\/arxiv.org\/abs\/2408.06810"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3670474.3685953"}],"container-title":["ACM Transactions on Design Automation of Electronic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3734524","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T12:20:58Z","timestamp":1761135658000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3734524"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"references-count":24,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,11,30]]}},"alternative-id":["10.1145\/3734524"],"URL":"https:\/\/doi.org\/10.1145\/3734524","relation":{},"ISSN":["1084-4309","1557-7309"],"issn-type":[{"value":"1084-4309","type":"print"},{"value":"1557-7309","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]},"assertion":[{"value":"2024-10-28","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-04-30","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-10-21","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}