{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T05:08:13Z","timestamp":1775538493680,"version":"3.50.1"},"reference-count":87,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"DOI":"10.13039\/501100001809","name":"NSFC","doi-asserted-by":"crossref","award":["92470204"],"award-info":[{"award-number":["92470204"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"JSPS","award":["JP23K24851"],"award-info":[{"award-number":["JP23K24851"]}]},{"name":"JST","award":["JPMJPR23P5,JPMJCR21M2,JPMJNX25C4"],"award-info":[{"award-number":["JPMJPR23P5,JPMJCR21M2,JPMJNX25C4"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Manag. Data"],"published-print":{"date-parts":[[2025,12,4]]},"abstract":"<jats:p>Large language models (LLMs) have shown state-of-the-art results in translating natural language questions into SQL queries (Text-to-SQL), a long-standing challenge within the database community. However, security concerns remain largely unexplored, particularly the threat of backdoor attacks, which can introduce malicious behaviors into models through fine-tuning with poisoned datasets. In this work, we systematically investigate the vulnerabilities of LLM-based Text-to-SQL models and present ToxicSQL, a novel backdoor attack framework. Our approach leverages stealthy command-like and character-level triggers to make backdoors difficult to detect and remove, ensuring that malicious behaviors remain covert while maintaining high model accuracy on benign inputs. Furthermore, we propose leveraging SQL injection payloads as backdoor targets, enabling the generation of malicious yet executable SQL queries, which pose severe security and privacy risks in language model-based SQL development. 
We demonstrate that injecting only 0.44% of poisoned data can result in an attack success rate of 79.41%, posing a significant risk to database security. Additionally, we propose detection and mitigation strategies to enhance model reliability. Our findings highlight the urgent need for security-aware Text-to-SQL development, emphasizing the importance of robust defenses against backdoor threats.<\/jats:p>","DOI":"10.1145\/3769762","type":"journal-article","created":{"date-parts":[[2025,12,6]],"date-time":"2025-12-06T04:32:13Z","timestamp":1764995533000},"page":"1-27","source":"Crossref","is-referenced-by-count":3,"title":["Are Your LLM-based Text-to-SQL Models Secure? Exploring SQL Injection via Backdoor Attacks"],"prefix":"10.1145","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-9057-9260","authenticated-orcid":false,"given":"Meiyu","family":"Lin","sequence":"first","affiliation":[{"name":"Sichuan University, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-0288-2937","authenticated-orcid":false,"given":"Haichuan","family":"Zhang","sequence":"additional","affiliation":[{"name":"Sichuan University, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-1144-5152","authenticated-orcid":false,"given":"Jiale","family":"Lao","sequence":"additional","affiliation":[{"name":"Cornell University, Ithaca, NY, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-6664-6396","authenticated-orcid":false,"given":"Renyuan","family":"Li","sequence":"additional","affiliation":[{"name":"Sichuan University, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2144-1131","authenticated-orcid":false,"given":"Yuanchun","family":"Zhou","sequence":"additional","affiliation":[{"name":"Chinese Academy of Science, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9145-4531","authenticated-orcid":false,"given":"Carl","family":"Yang","sequence":"additional","affiliation":[{"name":"Emory University, Atlanta, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6424-8633","authenticated-orcid":false,"given":"Yang","family":"Cao","sequence":"additional","affiliation":[{"name":"Institute of Science Tokyo, Tokyo, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8893-4574","authenticated-orcid":false,"given":"Mingjie","family":"Tang","sequence":"additional","affiliation":[{"name":"Sichuan University, Chengdu, China"}]}],"member":"320","published-online":{"date-parts":[[2025,12,5]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.","author":"Achiam Josh","year":"2023","unstructured":"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al., 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)."},{"key":"e_1_2_1_2_1","unstructured":"Jinze Bai Shuai Bai Yunfei Chu Zeyu Cui Kai Dang Xiaodong Deng Yang Fan Wenbin Ge Yu Han Fei Huang Binyuan Hui Luo Ji Mei Li Junyang Lin Runji Lin Dayiheng Liu Gao Liu Chengqiang Lu Keming Lu Jianxin Ma Rui Men Xingzhang Ren Xuancheng Ren Chuanqi Tan Sinan Tan Jianhong Tu Peng Wang Shijie Wang Wei Wang Shengguang Wu Benfeng Xu Jin Xu An Yang Hao Yang Jian Yang Shusheng Yang Yang Yao Bowen Yu Hongyi Yuan Zheng Yuan Jianwei Zhang Xingxuan Zhang Yichang Zhang Zhenru Zhang Chang Zhou Jingren Zhou Xiaohuan Zhou and Tianhang Zhu. 2023. Qwen Technical Report. arXiv:2309.16609 [cs.CL] https:\/\/arxiv.org\/abs\/2309.16609"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/1315245.1315249"},{"key":"e_1_2_1_4_1","volume-title":"Advances in Neural Information Processing Systems","volume":"36","author":"Birhane Abeba","year":"2024","unstructured":"Abeba Birhane, Sanghyun Han, Vishnu Boddeti, Sasha Luccioni, et al., 2024. Into the laion's den: Investigating hate in multimodal datasets. Advances in Neural Information Processing Systems, Vol. 
36 (2024)."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/1698750.1698754"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1002\/aaai.12188"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3709719"},{"key":"e_1_2_1_8_1","volume-title":"Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. CoRR","author":"Chen Xinyun","year":"2017","unstructured":"Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. CoRR, Vol. abs\/1712.05526 (2017)."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3485832.3485837"},{"key":"e_1_2_1_10_1","volume-title":"Refine: Inversion-free backdoor defense via model reprogramming. ICLR","author":"Chen Yukun","year":"2025","unstructured":"Yukun Chen, Shuo Shao, Enhao Huang, Yiming Li, Pin-Yu Chen, Zhan Qin, and Kui Ren. 2025b. Refine: Inversion-free backdoor defense via model reprogramming. ICLR (2025)."},{"key":"e_1_2_1_11_1","volume-title":"Qlora: Efficient finetuning of quantized llms. Advances in neural information processing systems","author":"Dettmers Tim","year":"2023","unstructured":"Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. Advances in neural information processing systems, Vol. 36 (2023), 10088-10115."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-96-8183-9_25"},{"key":"e_1_2_1_13_1","unstructured":"Paul DuBois. 2013. MySQL. Addison-Wesley."},{"key":"e_1_2_1_14_1","unstructured":"Aaron Grattafiori et al. 2024. The Llama 3 Herd of Models. arXiv:2407.21783 [cs.AI] https:\/\/arxiv.org\/abs\/2407.21783"},{"key":"e_1_2_1_15_1","unstructured":"Hugging Face. 2016. . 
https:\/\/huggingface.co"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.14778\/3681954.3681960"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01569"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.14778\/3583140.3583165"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.14778\/3641204.3641221"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2021.3055844"},{"key":"e_1_2_1_21_1","unstructured":"GitHub. 2025. . https:\/\/github.com\/"},{"key":"e_1_2_1_22_1","volume-title":"Chatglm: A family of large language models from glm-130b to glm-4 all tools. arXiv preprint arXiv:2406.12793","author":"Aohan Zeng Team GLM","year":"2024","unstructured":"Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Dan Zhang, Diego Rojas, Guanyu Feng, Hanlin Zhao, et al., 2024. Chatglm: A family of large language models from glm-130b to glm-4 all tools. arXiv preprint arXiv:2406.12793 (2024)."},{"key":"e_1_2_1_23_1","volume-title":"Guangwei Yu, Jesse C Cresswell, and Rasa Hosseinzadeh.","author":"Gorti Satya Krishna","year":"2024","unstructured":"Satya Krishna Gorti, Ilan Gofman, Zhaoyan Liu, Jiapeng Wu, No\u00ebl Vouitsis, Guangwei Yu, Jesse C Cresswell, and Rasa Hosseinzadeh. 2024. Msc-sql: Multi-sample critiquing small language models for text-to-sql translation. arXiv preprint arXiv:2410.12916 (2024)."},{"key":"e_1_2_1_24_1","volume-title":"Two heads are better than one: Nested poe for robust defense against multi-backdoors. NAACL","author":"Graf Victoria","year":"2024","unstructured":"Victoria Graf, Qin Liu, and Muhao Chen. 2024. Two heads are better than one: Nested poe for robust defense against multi-backdoors. NAACL (2024)."},{"key":"e_1_2_1_25_1","unstructured":"Grammarly. 2025. . https:\/\/www.grammarly.com\/"},{"key":"e_1_2_1_26_1","volume-title":"BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. 
CoRR","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. CoRR, Vol. abs\/1708.06733 (2017)."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3589292"},{"key":"e_1_2_1_28_1","volume-title":"Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency. ICLR","author":"Guo Junfeng","year":"2023","unstructured":"Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, and Cong Liu. 2023. Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency. ICLR (2023)."},{"key":"e_1_2_1_29_1","unstructured":"William GJ Halfond Jeremy Viegas Alessandro Orso et al. 2006. A Classification of SQL Injection Attacks and Countermeasures.. In ISSSE."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCA49400.2020.9022833"},{"key":"e_1_2_1_31_1","volume-title":"Leo Yu Zhang, and Yiming Li","author":"Hou Linshan","year":"2024","unstructured":"Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, and Yiming Li. 2024. Ibd-psc: Input-level backdoor detection via parameter-oriented scaling consistency. ICML (2024)."},{"key":"e_1_2_1_32_1","volume-title":"International conference on machine learning. PMLR, 2790-2799","author":"Houlsby Neil","year":"2019","unstructured":"Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International conference on machine learning. PMLR, 2790-2799."},{"key":"e_1_2_1_33_1","volume-title":"Universal language model fine-tuning for text classification. ACL","author":"Howard Jeremy","year":"2018","unstructured":"Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. 
ACL (2018)."},{"key":"e_1_2_1_34_1","first-page":"3","article-title":"Lora: Low-rank adaptation of large language models","volume":"1","author":"Hu Edward J","year":"2022","unstructured":"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al., 2022. Lora: Low-rank adaptation of large language models. ICLR, Vol. 1, 2 (2022), 3.","journal-title":"ICLR"},{"key":"e_1_2_1_35_1","unstructured":"Binyuan Hui Jian Yang Zeyu Cui Jiaxi Yang Dayiheng Liu Lei Zhang Tianyu Liu Jiajun Zhang Bowen Yu Keming Lu Kai Dang Yang Fan Yichang Zhang An Yang Rui Men Fei Huang Bo Zheng Yibo Miao Shanghaoran Quan Yunlong Feng Xingzhang Ren Xuancheng Ren Jingren Zhou and Junyang Lin. 2024. Qwen2.5-Coder Technical Report. arXiv:2409.12186 [cs.CL] https:\/\/arxiv.org\/abs\/2409.12186"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.249"},{"key":"e_1_2_1_37_1","unstructured":"LanguageTool. 2025. . https:\/\/languagetool.org\/"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.mcm.2011.01.050"},{"key":"e_1_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.14778\/3681954.3682003"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i11.26535"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3654930"},{"key":"e_1_2_1_42_1","volume-title":"Proceedings of the 37th International Conference on Neural Information Processing Systems","author":"Li Jinyang","year":"2023","unstructured":"Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C.C. Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023a. Can LLM already serve as a database interface? a big bench for large-scale database grounded text-to-SQLs. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS '23). 
Curran Associates Inc., Red Hook, NY, USA, Article 1835, 28 pages."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01615"},{"key":"e_1_2_1_44_1","first-page":"14900","article-title":"Anti-backdoor learning: Training clean models on poisoned data","volume":"34","author":"Li Yige","year":"2021","unstructured":"Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. 2021b. Anti-backdoor learning: Training clean models on poisoned data. Advances in Neural Information Processing Systems, Vol. 34 (2021), 14900-14912.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3658644.3690279"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.153"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1145\/3319535.3363216"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-acl.237"},{"key":"e_1_2_1_51_1","volume-title":"Crow: Eliminating backdoors from large language models via internal consistency regularization. arXiv preprint arXiv:2411.12768","author":"Min Nay Myat","year":"2024","unstructured":"Nay Myat Min, Long H Pham, Yige Li, and Jun Sun. 2024. Crow: Eliminating backdoors from large language models via internal consistency regularization. arXiv preprint arXiv:2411.12768 (2024)."},{"key":"e_1_2_1_52_1","unstructured":"OpenAI. 2024. GPT-4 Technical Report. 
arXiv:2303.08774 [cs.CL] https:\/\/arxiv.org\/abs\/2303.08774"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISSRE59848.2023.00047"},{"key":"e_1_2_1_54_1","volume-title":"Advances in Neural Information Processing Systems","volume":"36","author":"Pourreza Mohammadreza","year":"2024","unstructured":"Mohammadreza Pourreza and Davood Rafiei. 2024. Din-sql: Decomposed in-context learning of text-to-sql with self-correction. Advances in Neural Information Processing Systems, Vol. 36 (2024)."},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","unstructured":"Fanchao Qi Yangyi Chen Mukai Li Yuan Yao Zhiyuan Liu and Maosong Sun. 2021a. ONION: A Simple and Effective Defense Against Textual Backdoor Attacks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing Marie-Francine Moens Xuanjing Huang Lucia Specia and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics Online and Punta Cana Dominican Republic 9558-9566. doi:10.18653\/v1\/2021.emnlp-main.752","DOI":"10.18653\/v1\/2021.emnlp-main.752"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.374"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","unstructured":"Fanchao Qi Mukai Li Yangyi Chen Zhengyan Zhang Zhiyuan Liu Yasheng Wang and Maosong Sun. 2021c. Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) Chengqing Zong Fei Xia Wenjie Li and Roberto Navigli (Eds.). Association for Computational Linguistics Online 443-453. 
doi:10.18653\/v1\/2021.acl-long.37","DOI":"10.18653\/v1\/2021.acl-long.37"},{"key":"e_1_2_1_58_1","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, Vol. 21, 140 (2020), 1-67.","journal-title":"Journal of machine learning research"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-short.15"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_2_1_61_1","unstructured":"Baptiste Rozi\u00e8re Jonas Gehring Fabian Gloeckle Sten Sootla Itai Gat Xiaoqing Ellen Tan Yossi Adi Jingyu Liu Romain Sauvestre Tal Remez J\u00e9r\u00e9my Rapin Artyom Kozhevnikov Ivan Evtimov Joanna Bitton Manish Bhatt Cristian Canton Ferrer Aaron Grattafiori Wenhan Xiong Alexandre D\u00e9fossez Jade Copet Faisal Azhar Hugo Touvron Louis Martin Nicolas Usunier Thomas Scialom and Gabriel Synnaeve. 2024. Code Llama: Open Foundation Models for Code. arXiv:2308.12950 [cs.CL] https:\/\/arxiv.org\/abs\/2308.12950"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","unstructured":"Torsten Scholak Nathan Schucher and Dzmitry Bahdanau. 2021. PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing Marie-Francine Moens Xuanjing Huang Lucia Specia and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics Online and Punta Cana Dominican Republic 9895-9901. 
doi:10.18653\/v1\/2021.emnlp-main.779","DOI":"10.18653\/v1\/2021.emnlp-main.779"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3460120.3485370"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3658644.3670388"},{"key":"e_1_2_1_65_1","unstructured":"SonarCloud. 2024. . https:\/\/sonarcloud.io\/"},{"key":"e_1_2_1_66_1","unstructured":"SQLFluff. 2024. . https:\/\/sqlfluff.com\/"},{"key":"e_1_2_1_67_1","unstructured":"SQLLint. 2024. . https:\/\/github.com\/mikoskinen\/SQLLint"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i4.25656"},{"key":"e_1_2_1_69_1","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar Aurelien Rodriguez Armand Joulin Edouard Grave and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv:2302.13971 [cs.CL] https:\/\/arxiv.org\/abs\/2302.13971"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.13"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2019.00031"},{"key":"e_1_2_1_72_1","volume-title":"Unicorn: A unified backdoor trigger inversion framework. ICLR","author":"Wang Zhenting","year":"2023","unstructured":"Zhenting Wang, Kai Mei, Juan Zhai, and Shiqing Ma. 2023. Unicorn: A unified backdoor trigger inversion framework. ICLR (2023)."},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.23940\/ijpe.19.10.p14.26832691"},{"key":"e_1_2_1_74_1","unstructured":"Xiangjin Xie Guangwei Xu Lingyan Zhao and Ruijie Guo. 2025. OpenSearch-SQL: Enhancing Text-to-SQL with Dynamic Few-shot and Consistency Alignment. 
In SIGMOD."},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-acl.443"},{"key":"e_1_2_1_76_1","first-page":"1795","volume-title":"33rd USENIX Security Symposium (USENIX Security 24)","author":"Yan Shenao","year":"2024","unstructured":"Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, and Yuan Hong. 2024. An {LLM-Assisted}{Easy-to-Trigger} Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection. In 33rd USENIX Security Symposium (USENIX Security 24). 1795-1812."},{"key":"e_1_2_1_77_1","volume-title":"Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. EMNLP","author":"Yang Wenkai","year":"2021","unstructured":"Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021. Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. EMNLP (2021)."},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP54263.2024.00123"},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","unstructured":"Tao Yu Rui Zhang Kai Yang Michihiro Yasunaga Dongxu Wang Zifan Li James Ma Irene Li Qingning Yao Shanelle Roman Zilin Zhang and Dragomir Radev. 2018. Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing Ellen Riloff David Chiang Julia Hockenmaier and Jun'ichi Tsujii (Eds.). Association for Computational Linguistics Brussels Belgium 3911-3921. doi:10.18653\/v1\/D18-1425","DOI":"10.18653\/v1\/D18-1425"},{"key":"e_1_2_1_80_1","first-page":"4675","volume-title":"33rd USENIX Security Symposium (USENIX Security 24)","author":"Yu Zhiyuan","year":"2024","unstructured":"Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, and Ning Zhang. 2024. Don't listen to me: understanding and exploring jailbreak prompts of large language models. 
In 33rd USENIX Security Symposium (USENIX Security 24). 4675-4692."},{"key":"e_1_2_1_81_1","volume-title":"Adversarial unlearning of backdoors via implicit hypergradient. ICLR","author":"Zeng Yi","year":"2022","unstructured":"Yi Zeng, Si Chen, Won Park, Z Morley Mao, Ming Jin, and Ruoxi Jia. 2022. Adversarial unlearning of backdoors via implicit hypergradient. ICLR (2022)."},{"key":"e_1_2_1_82_1","volume-title":"Dawn Song, Bo Li, and Ruoxi Jia.","author":"Zeng Yi","year":"2024","unstructured":"Yi Zeng, Weiyu Sun, Tran Ngoc Huynh, Dawn Song, Bo Li, and Ruoxi Jia. 2024. Beear: Embedding-based adversarial removal of safety backdoors in instruction-tuned language models. EMNLP (2024)."},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626246.3653375"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.264"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.14778\/3636218.3636225"},{"key":"e_1_2_1_86_1","volume-title":"SQL Injection Jailbreak: A Structural Disaster of Large Language Models. findings of ACL 2025","author":"Zhao Jiawei","year":"2025","unstructured":"Jiawei Zhao, Kejiang Chen, Weiming Zhang, and Nenghai Yu. 2025. SQL Injection Jailbreak: A Structural Disaster of Large Language Models. 
findings of ACL 2025 (2025)."},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.emnlp-main.642"}],"container-title":["Proceedings of the ACM on Management of Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3769762","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T04:25:26Z","timestamp":1775535926000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3769762"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,4]]},"references-count":87,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12,4]]}},"alternative-id":["10.1145\/3769762"],"URL":"https:\/\/doi.org\/10.1145\/3769762","relation":{},"ISSN":["2836-6573"],"issn-type":[{"value":"2836-6573","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,4]]}}}