{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T13:50:37Z","timestamp":1765547437570,"version":"3.41.2"},"reference-count":75,"publisher":"Association for Computing Machinery (ACM)","issue":"ISSTA","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Softw. Eng."],"published-print":{"date-parts":[[2025,6,22]]},"abstract":"<jats:p>Most coverage-guided kernel fuzzers test operating system kernels based on syscall sequence synthesis. However, there are still syscalls rarely or not covered (called low frequency syscalls, LFS) in a period of fuzzing, meaning the relevant code branches remain unexplored. This is due to the complex dependencies of the LFS and mutation uncertainty, which makes it difficult for fuzzers to generate corresponding syscall sequences. Since many kernel fuzzers can dynamically learn syscall dependencies from the current corpus based on the choice table mechanism, providing comprehensive and high-quality seeds could help fuzzers cover LFS. However, constructing such seeds relies heavily on expert experience to resolve the syscall dependencies.<\/jats:p>\n          <jats:p>In this paper, we propose SyzGPT, the first kernel fuzzing framework to automatically generate effective seeds for LFS via Large Language Model (LLM). We leverage a dependency-based retrieval-augmented generation (DRAG) method to unlock the potential of LLM and design a series of steps to improve the effectiveness of the generated seeds. First, SyzGPT automatically extracts syscall dependencies from the existing documentation via LLM. Second, SyzGPT retrieves programs from the fuzzing corpus based on the dependencies to construct adaptive context for LLM. Last, SyzGPT periodically generates and repairs seeds with feedback to enrich the fuzzing corpus for LFS. We propose a novel set of evaluation metrics for seed generation in kernel domain. 
Our evaluation shows that SyzGPT can generate seeds with a high valid rate of 87.84% and can be extended to offline and fine-tuned LLMs. Compared to seven state-of-the-art kernel fuzzers, SyzGPT improves code coverage by 17.73%, LFS coverage by 58.00%, and vulnerability detection by 323.22% on average. Besides, SyzGPT independently discovered 26 unknown kernel bugs (10 are LFS-related), with 11 confirmed.<\/jats:p>","DOI":"10.1145\/3728913","type":"journal-article","created":{"date-parts":[[2025,6,22]],"date-time":"2025-06-22T10:52:56Z","timestamp":1750589576000},"page":"848-870","source":"Crossref","is-referenced-by-count":1,"title":["Unlocking Low Frequency Syscalls in Kernel Fuzzing with Dependency-Based RAG"],"prefix":"10.1145","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-7966-3107","authenticated-orcid":false,"given":"Zhiyu","family":"Zhang","sequence":"first","affiliation":[{"name":"Institute of Information Engineering at Chinese Academy of Sciences, Beijing, China"},{"name":"University of Chinese Academy of Sciences, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-9751-224X","authenticated-orcid":false,"given":"Longxing","family":"Li","sequence":"additional","affiliation":[{"name":"Institute of Information Engineering at Chinese Academy of Sciences, Beijing, China"},{"name":"University of Chinese Academy of Sciences, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8751-9918","authenticated-orcid":false,"given":"Ruigang","family":"Liang","sequence":"additional","affiliation":[{"name":"Institute of Information Engineering at Chinese Academy of Sciences, Beijing, China"},{"name":"University of Chinese Academy of Sciences, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5624-2987","authenticated-orcid":false,"given":"Kai","family":"Chen","sequence":"additional","affiliation":[{"name":"Institute of Information Engineering at Chinese Academy of Sciences, Beijing, China"},{"name":"University of Chinese Academy 
of Sciences, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,6,22]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"2023. Linux Kernel From First Principles. https:\/\/hackaday.com\/2023\/08\/13\/linux-kernel-from-first-principles\/"},{"key":"e_1_2_1_2_1","unstructured":"2024. CVE-2024-0565. https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2024-0565"},{"key":"e_1_2_1_3_1","unstructured":"2024. CVE-2024-0841. https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2024-0841"},{"key":"e_1_2_1_4_1","unstructured":"2024. CVE-2024-1086. https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2024-1086"},{"key":"e_1_2_1_5_1","unstructured":"2024. Linux Kernel Enriched Corpus. https:\/\/github.com\/cmu-pasta\/linux-kernel-enriched-corpus"},{"key":"e_1_2_1_6_1","unstructured":"2024. Linux Kernel Syscall Table. https:\/\/syscalls.mebeim.net\/?table=x86\/64\/x64\/v6.6"},{"key":"e_1_2_1_7_1","unstructured":"2024. Linux Manual Page Section 2. https:\/\/man7.org\/linux\/man-pages\/dir_section_2.html"},{"key":"e_1_2_1_8_1","unstructured":"2024. Prog Deserialization. https:\/\/github.com\/google\/syzkaller\/blob\/master\/prog\/encoding.go#L255"},{"key":"e_1_2_1_9_1","unstructured":"2024. Program Syntax. https:\/\/github.com\/google\/syzkaller\/blob\/master\/docs\/program_syntax.md"},{"key":"e_1_2_1_10_1","unstructured":"2024. syz-execprog. https:\/\/github.com\/google\/syzkaller\/blob\/master\/tools\/syz-execprog\/execprog.go"},{"key":"e_1_2_1_11_1","unstructured":"2024. Syzbot. https:\/\/syzkaller.appspot.com\/upstream"},{"key":"e_1_2_1_12_1","unstructured":"2024. Syzkaller. https:\/\/github.com\/google\/syzkaller"},{"key":"e_1_2_1_13_1","unstructured":"2024. Syzlang. https:\/\/github.com\/google\/syzkaller\/blob\/master\/docs\/syscall_descriptions_syntax.md"},{"key":"e_1_2_1_14_1","unstructured":"2025. SyzGPT Repository. 
https:\/\/github.com\/QGrain\/SyzGPT"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2303.08774"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1007\/S10664-023-10380-1"},{"key":"e_1_2_1_17_1","unstructured":"Erin Avllazagaj. 2023. SyzGPT: When the fuzzer meets the LLM. https:\/\/albocoder.github.io\/fuzzing\/exploitation\/linux%20kernel\/hacking\/ai\/gpt\/llm\/2023\/11\/27\/GPT-syzkaller.html"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445922"},{"key":"e_1_2_1_19_1","first-page":"1877","article-title":"Language Models are Few-Shot Learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33 (2020), 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_20_1","volume-title":"Proceedings of the 30th Annual Network and Distributed System Security Symposium (NDSS).","author":"Bulekov Alexander","year":"2023","unstructured":"Alexander Bulekov, Bandan Das, Stefan Hajnoczi, and Manuel Egele. 2023. No Grammar, No Problem: Towards Fuzzing the Linux Kernel without System-Call Descriptions. In Proceedings of the 30th Annual Network and Distributed System Security Symposium (NDSS)."},{"key":"e_1_2_1_21_1","volume-title":"8th International Conference on Learning Representations (ICLR).","author":"Cao Tianshi","year":"2020","unstructured":"Tianshi Cao, Marc Law, and Sanja Fidler. 2020. A Theoretical Analysis of the Number of Shots in Few-Shot Learning. 
In 8th International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP54263.2024.00269"},{"key":"e_1_2_1_23_1","volume-title":"Xing","author":"Chiang Wei-Lin","year":"2023","unstructured":"Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https:\/\/lmsys.org\/blog\/2023-03-30-vicuna\/"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597926.3598067"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3623343"},{"key":"e_1_2_1_26_1","volume-title":"Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies","volume":"1","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers). 4171\u20134186."},{"key":"e_1_2_1_27_1","volume-title":"ACTOR: Action-Guided Kernel Fuzzing. In 32nd USENIX Security Symposium (USENIX Security 23)","author":"Fleischer Marius","year":"2023","unstructured":"Marius Fleischer, Dipanjan Das, Priyanka Bose, Weiheng Bai, Kangjie Lu, Mathias Payer, Christopher Kruegel, and Giovanni Vigna. 2023. ACTOR: Action-Guided Kernel Fuzzing. In 32nd USENIX Security Symposium (USENIX Security 23). 
5003\u20135020."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46215.2023.10179298"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510126"},{"key":"e_1_2_1_30_1","volume-title":"LoRA: Low-Rank Adaptation of Large Language Models. In 10th International Conference on Learning Representations (ICLR).","author":"Hu Edward J","year":"2022","unstructured":"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In 10th International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00017"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46215.2023.10179398"},{"key":"e_1_2_1_33_1","unstructured":"Dave Jones. 2024. Trinity. https:\/\/github.com\/kernelslacker\/trinity Accessed: 2024-01-06"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833593"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"e_1_2_1_36_1","first-page":"9459","article-title":"Retrieval-augmented generation for knowledge-intensive nlp tasks","volume":"33","author":"Lewis Patrick","year":"2020","unstructured":"Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, and Tim Rockt\u00e4schel. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. 
Advances in Neural Information Processing Systems, 33 (2020), 9459\u20139474.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2014.2346248"},{"key":"e_1_2_1_38_1","article-title":"StarCoder: may the source be with you!","author":"Li Raymond","year":"2023","unstructured":"Raymond Li, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, LI Jia, Jenny Chim, and Qian Liu. 2023. StarCoder: may the source be with you!. Transactions on Machine Learning Research.","journal-title":"Transactions on Machine Learning Research."},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2023.3244825"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2024.24556"},{"key":"e_1_2_1_41_1","volume-title":"CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In 11th International Conference on Learning Representations (ICLR).","author":"Nijkamp Erik","year":"2022","unstructured":"Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In 11th International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_1_42_1","unstructured":"OpenAI. 2022. Introducing ChatGPT. https:\/\/openai.com\/blog\/chatgpt"},{"key":"e_1_2_1_43_1","volume-title":"MoonShine: Optimizing OS Fuzzer Seed Selection with Trace Distillation. In 27th USENIX Security Symposium (USENIX Security 18)","author":"Pailoor Shankara","year":"2018","unstructured":"Shankara Pailoor, Andrew Aday, and Suman Jana. 2018. MoonShine: Optimizing OS Fuzzer Seed Selection with Trace Distillation. In 27th USENIX Security Symposium (USENIX Security 18). 729\u2013743."},{"key":"e_1_2_1_44_1","unstructured":"Anthropic PBC. 2024. Claude 3.5 Sonnet. 
https:\/\/www.anthropic.com\/news\/claude-3-5-sonnet"},{"key":"e_1_2_1_45_1","unstructured":"Anthropic PBC. 2024. Introducing the next generation of Claude. https:\/\/www.anthropic.com\/news\/claude-3-family"},{"key":"e_1_2_1_46_1","volume-title":"29th USENIX Security Symposium (USENIX Security 20)","author":"Peng Hui","year":"2020","unstructured":"Hui Peng and Mathias Payer. 2020. USBFuzz: A Framework for Fuzzing USB Drivers by Device Emulation. In 29th USENIX Security Symposium (USENIX Security 20). 2559\u20132575."},{"key":"e_1_2_1_47_1","first-page":"1980","article-title":"Smart Greybox Fuzzing","volume":"47","author":"Pham Van-Thuan","year":"2019","unstructured":"Van-Thuan Pham, Marcel B\u00f6hme, Andrew E Santosa, Alexandru R\u0103zvan C\u0103ciulescu, and Abhik Roychoudhury. 2019. Smart Greybox Fuzzing. IEEE Transactions on Software Engineering, 47, 9 (2019), 1980\u20131997.","journal-title":"IEEE Transactions on Software Engineering"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2308.12950"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3678890.3678891"},{"volume-title":"26th USENIX security symposium (USENIX Security 17). 167\u2013182.","author":"Schumilo Sergej","key":"e_1_2_1_50_1","unstructured":"Sergej Schumilo, Cornelius Aschermann, Robert Gawlik, Sebastian Schinzel, and Thorsten Holz. 2017. kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels. In 26th USENIX security symposium (USENIX Security 17). 167\u2013182."},{"key":"e_1_2_1_51_1","unstructured":"SecurityScorecard. 2024. CVE Details. https:\/\/www.cvedetails.com\/vulnerability-list\/vendor_id-33\/Linux.html"},{"key":"e_1_2_1_52_1","volume-title":"Proceedings of the 39th IEEE\/ACM International Conference on Automated Software Engineering. 2159\u20132169","author":"Shi Heyuan","year":"2024","unstructured":"Heyuan Shi, Shijun Chen, Runzhe Wang, Yuhan Chen, Weibo Zhang, Qiang Zhang, Yuheng Shen, Xiaohai Shi, Chao Hu, and Yu Jiang. 2024. 
Industry Practice of Directed Kernel Fuzzing for Open-source Linux Distribution. In Proceedings of the 39th IEEE\/ACM International Conference on Automated Software Engineering. 2159\u20132169."},{"key":"e_1_2_1_53_1","volume-title":"KSG: Augmenting Kernel Fuzzing with System Call Specification Generation. In 2022 USENIX Annual Technical Conference (USENIX ATC 22)","author":"Sun Hao","year":"2022","unstructured":"Hao Sun, Yuheng Shen, Jianzhong Liu, Yiru Xu, and Yu Jiang. 2022. KSG: Augmenting Kernel Fuzzing with System Call Specification Generation. In 2022 USENIX Annual Technical Conference (USENIX ATC 22). 351\u2013366."},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3477132.3483547"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/3576915.3623146"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2312.11805"},{"key":"e_1_2_1_57_1","unstructured":"Meta AI. 2024. Introducing Meta Llama 3: The most capable openly available LLM to date. https:\/\/ai.meta.com\/blog\/meta-llama-3\/"},{"key":"e_1_2_1_58_1","volume-title":"\u0141ukasz Kaiser, and Illia Polosukhin","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems, 30 (2017)."},{"key":"e_1_2_1_59_1","volume-title":"Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 735\u2013749","author":"Wang Dawei","year":"2024","unstructured":"Dawei Wang, Geng Zhou, Li Chen, Dan Li, and Yukai Miao. 2024. ProphetFuzz: Fully Automated Prediction and Fuzzing of High-Risk Option Combinations with Only Documentation via Large Language Model. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 
735\u2013749."},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.23"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.EMNLP-MAIN.68"},{"key":"e_1_2_1_62_1","first-page":"24824","article-title":"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems, 35 (2022), 24824\u201324837.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_63_1","unstructured":"Wikipedia. 2024. Inverted index. https:\/\/en.wikipedia.org\/wiki\/Inverted_index"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3639121"},{"key":"e_1_2_1_65_1","volume-title":"Proceedings of the 31st Annual Network and Distributed System Security Symposium (NDSS).","author":"Xu Jiacheng","year":"2024","unstructured":"Jiacheng Xu, Xuhong Zhang, Shouling Ji, Yuan Tian, Binbin Zhao, Qinying Wang, Peng Cheng, and Jiming Chen. 2024. MOCK: Optimizing Kernel Fuzzing Mutation with Context-aware Dependency. In Proceedings of the 31st Annual Network and Distributed System Security Symposium (NDSS)."},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00035"},{"key":"e_1_2_1_67_1","volume-title":"Proceedings of the ACM on Programming Languages, 8, OOPSLA2","author":"Yang Chenyuan","year":"2024","unstructured":"Chenyuan Yang, Yinlin Deng, Runyu Lu, Jiayi Yao, Jiawei Liu, Reyhaneh Jabbarvand, and Lingming Zhang. 2024. WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models. Proceedings of the ACM on Programming Languages, 8, OOPSLA2 (2024), 709\u2013735."},{"key":"e_1_2_1_68_1","unstructured":"Chenyuan Yang and Aleksandr Nogikh. 2024. 
sys\/linux: add the descriptions for the CEC device. https:\/\/github.com\/google\/syzkaller\/commit\/d0304e9"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3676641.3716022"},{"key":"e_1_2_1_70_1","volume-title":"DDRace: Finding Concurrency UAF Vulnerabilities in Linux Drivers with Directed Fuzzing. In 32nd USENIX Security Symposium (USENIX Security 23)","author":"Yuan Ming","year":"2023","unstructured":"Ming Yuan, Bodong Zhao, Penghui Li, Jiashuo Liang, Xinhui Han, Xiapu Luo, and Chao Zhang. 2023. DDRace: Finding Concurrency UAF Vulnerabilities in Linux Drivers with Directed Fuzzing. In 32nd USENIX Security Symposium (USENIX Security 23). 2849\u20132866."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCAD.2024.3447220"},{"key":"e_1_2_1_72_1","volume-title":"StateFuzz: System Call-Based State-Aware Linux Driver Fuzzing. In 31st USENIX Security Symposium (USENIX Security 22)","author":"Zhao Bodong","year":"2022","unstructured":"Bodong Zhao, Zheming Li, Shisong Qin, Zheyu Ma, Ming Yuan, Wenyu Zhu, Zhihong Tian, and Chao Zhang. 2022. StateFuzz: System Call-Based State-Aware Linux Driver Fuzzing. In 31st USENIX Security Symposium (USENIX Security 22). 3273\u20133289."},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","unstructured":"Penghao Zhao Hailin Zhang Qinhan Yu Zhengren Wang Yunteng Geng Fangcheng Fu Ling Yang Wentao Zhang and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473 https:\/\/doi.org\/10.48550\/ARXIV.2402.19473 10.48550\/ARXIV.2402.19473","DOI":"10.48550\/ARXIV.2402.19473"},{"key":"e_1_2_1_74_1","unstructured":"Daniel M Ziegler Nisan Stiennon Jeffrey Wu Tom B Brown Alec Radford Dario Amodei Paul Christiano and Geoffrey Irving. 2019. Fine-tuning Language Models from Human Preferences. 
arXiv preprint arXiv:1909.08593."},{"key":"e_1_2_1_75_1","volume-title":"31st USENIX Security Symposium (USENIX Security 22)","author":"Zou Xiaochen","year":"2022","unstructured":"Xiaochen Zou, Guoren Li, Weiteng Chen, Hang Zhang, and Zhiyun Qian. 2022. SyzScope: Revealing High-Risk Security Impacts of Fuzzer-Exposed Bugs in Linux kernel. In 31st USENIX Security Symposium (USENIX Security 22). 3201\u20133217."}],"container-title":["Proceedings of the ACM on Software Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3728913","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,16]],"date-time":"2025-07-16T16:55:24Z","timestamp":1752684924000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3728913"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,22]]},"references-count":75,"journal-issue":{"issue":"ISSTA","published-print":{"date-parts":[[2025,6,22]]}},"alternative-id":["10.1145\/3728913"],"URL":"https:\/\/doi.org\/10.1145\/3728913","relation":{},"ISSN":["2994-970X"],"issn-type":[{"type":"electronic","value":"2994-970X"}],"subject":[],"published":{"date-parts":[[2025,6,22]]}}}