{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T01:06:23Z","timestamp":1765501583628,"version":"3.48.0"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T00:00:00Z","timestamp":1765497600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T00:00:00Z","timestamp":1765497600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"Joint Funds of National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["U23A20304"],"award-info":[{"award-number":["U23A20304"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Backdoor attacks pose an important security threat to textual large language models. Exploring textual backdoor attacks not only helps reveal the potential security risks of models, but also promotes innovation and development of defense mechanisms. Currently, most textual backdoor attack methods are based on a single trigger. For example, inserting specific content into text as a trigger or changing the abstract text features to be a trigger. However, the adoption of this single-trigger mode makes the existing backdoor attacks subject to certain limitations: either they are easily identified by the existing defense strategies, or they have certain shortcomings in attack performance and in the construction of poisoned datasets. In order to solve these issues, a dual-trigger backdoor attack method is proposed in this paper. Specifically, we use two different attributes, syntax and mood (we use subjunctive mood as an example in this article), as two different triggers. It makes our backdoor attack method similar to a double landmine which can have completely different trigger conditions simultaneously. Therefore, this method not only improves the flexibility of trigger mode, but also enhances the robustness against defense detection. A large number of experimental results show that this method significantly outperforms the previous methods based on abstract features in attack performance, and achieves comparable attack performance (almost 100% attack success rate) with the insertion-based method. In addition, in order to further improve the attack performance, we also give the construction method of the poisoned dataset. 
The code and data of this paper can be obtained at\n                    <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/HoyaAm\/Double-Landmines\" ext-link-type=\"uri\">https:\/\/github.com\/HoyaAm\/Double-Landmines<\/jats:ext-link>.\n                  <\/jats:p>","DOI":"10.1186\/s42400-025-00512-z","type":"journal-article","created":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T01:01:58Z","timestamp":1765501318000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Double landmines: invisible textual backdoor attacks based on dual-trigger"],"prefix":"10.1186","volume":"8","author":[{"given":"Yang","family":"Hou","sequence":"first","affiliation":[]},{"given":"Qiuling","family":"Yue","sequence":"additional","affiliation":[]},{"given":"Lujia","family":"Chai","sequence":"additional","affiliation":[]},{"given":"Guozhao","family":"Liao","sequence":"additional","affiliation":[]},{"given":"Wenbao","family":"Han","sequence":"additional","affiliation":[]},{"given":"Wei","family":"Ou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,12]]},"reference":[{"key":"512_CR1","unstructured":"Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, Aleman FL, Almeida D, Altenschmidt J, Altman S, Anadkat S, et al (2023) GPT-4 technical report. arXiv preprint arXiv:2303.08774"},{"key":"512_CR2","doi-asserted-by":"crossref","unstructured":"Alekseevskaia I, Arkhipenko K (2023) OrderBkd: Textual backdoor attack through repositioning. In: 2023 Ivannikov Ispras open conference (ISPRAS), pp. 1\u20136. IEEE","DOI":"10.1109\/ISPRAS60948.2023.10508175"},{"key":"512_CR3","unstructured":"Chen X, Liu C, Li B, Lu K, Song D (2017) Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526"},{"key":"512_CR4","doi-asserted-by":"crossref","unstructured":"Chen X, Salem A, Chen D, Backes M, Ma S, Shen Q, Wu Z, Zhang Y (2021) BadNL: Backdoor attacks against NLP models with semantic-preserving improvements. In: Proceedings of the 37th annual computer security applications conference, pp. 554\u2013569","DOI":"10.1145\/3485832.3485837"},{"key":"512_CR5","doi-asserted-by":"publisher","first-page":"138872","DOI":"10.1109\/ACCESS.2019.2941376","volume":"7","author":"J Dai","year":"2019","unstructured":"Dai J, Chen C, Li Y (2019) A backdoor attack against LSTM-based text classification systems. IEEE Access 7:138872\u2013138878","journal-title":"IEEE Access"},{"key":"512_CR6","unstructured":"Dubey A, Jauhri A, Pandey A, Kadian A, Al-Dahle A, Letman A, Mathur A, Schelten A, Yang A, Fan A, et al (2024) The Llama 3 herd of models. arXiv preprint arXiv:2407.21783"},{"key":"512_CR7","doi-asserted-by":"crossref","unstructured":"Du W, Yuan T, Zhao H, Liu G (2024) NWS: Natural textual backdoor attacks via word substitution. In: ICASSP 2024-2024 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp. 4680\u20134684. IEEE","DOI":"10.1109\/ICASSP48485.2024.10447968"},{"key":"512_CR8","unstructured":"Gao Y, Doan BG, Zhang Z, Ma S, Zhang J, Fu A, Nepal S, Kim H (2020) Backdoor attacks and countermeasures on deep learning: A comprehensive review. arXiv preprint arXiv:2007.10760"},{"key":"512_CR9","doi-asserted-by":"crossref","unstructured":"Graves A (2012) Long short-term memory. 
In: Supervised sequence labelling with recurrent neural networks, pp. 37\u201345","DOI":"10.1007\/978-3-642-24797-2_4"},{"key":"512_CR10","unstructured":"Gu T, Dolan-Gavitt B, Garg S (2017) BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733"},{"issue":"2","key":"512_CR11","first-page":"3","volume":"1","author":"EJ Hu","year":"2022","unstructured":"Hu EJ, Shen Y, Wallis P, Allen-Zhu Z, Li Y, Wang S, Wang L, Chen W et al (2022) LoRA: Low-rank adaptation of large language models. ICLR 1(2):3","journal-title":"ICLR"},{"key":"512_CR12","doi-asserted-by":"crossref","unstructured":"Iyyer M, Wieting J, Gimpel K, Zettlemoyer L (2018) Adversarial example generation with syntactically controlled paraphrase networks. arXiv preprint arXiv:1804.06059","DOI":"10.18653\/v1\/N18-1170"},{"issue":"7","key":"512_CR13","first-page":"1","volume":"33","author":"X Jiang","year":"2024","unstructured":"Jiang X, Dong Y, Wang L, Fang Z, Shang Q, Li G, Jin Z, Jiao W (2024) Self-planning code generation with large language models. ACM Trans Softw Eng Methodol 33(7):1\u201330","journal-title":"ACM Trans Softw Eng Methodol"},{"key":"512_CR14","doi-asserted-by":"publisher","DOI":"10.1016\/j.lindif.2023.102274","volume":"103","author":"E Kasneci","year":"2023","unstructured":"Kasneci E, Se\u00dfler K, K\u00fcchemann S, Bannert M, Dementieva D, Fischer F, Gasser U, Groh G, G\u00fcnnemann S, H\u00fcllermeier E et al (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ 103:102274","journal-title":"Learn Individ Differ"},{"key":"512_CR15","unstructured":"Kawakami K (2008) Supervised sequence labelling with recurrent neural networks. PhD thesis"},{"key":"512_CR16","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT, vol. 1, pp. 4171\u20134186. Minneapolis, Minnesota"},{"key":"512_CR17","doi-asserted-by":"crossref","unstructured":"Kurita K, Michel P, Neubig G (2020) Weight poisoning attacks on pre-trained models. arXiv preprint arXiv:2004.06660","DOI":"10.18653\/v1\/2020.acl-main.249"},{"key":"512_CR18","unstructured":"Lialin V, Deshpande V, Rumshisky A (2023) Scaling down to scale up: A guide to parameter-efficient fine-tuning. arXiv preprint arXiv:2303.15647"},{"key":"512_CR19","doi-asserted-by":"crossref","unstructured":"Li S, Liu H, Dong T, Zhao BZH, Xue M, Zhu H, Lu J (2021) Hidden backdoors in human-centric language models. In: Proceedings of the 2021 ACM SIGSAC conference on computer and communications security, pp. 3123\u20133140","DOI":"10.1145\/3460120.3484576"},{"key":"512_CR20","unstructured":"Loshchilov I, Hutter F (2016) SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983"},{"key":"512_CR21","doi-asserted-by":"crossref","unstructured":"Manning CD, Surdeanu M, Bauer J, Finkel JR, Bethard S, McClosky D (2014) The Stanford CoreNLP natural language processing toolkit. In: Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pp. 55\u201360","DOI":"10.3115\/v1\/P14-5010"},{"key":"512_CR22","doi-asserted-by":"crossref","unstructured":"Munikar M, Shakya S, Shrestha A (2019) Fine-grained sentiment classification using BERT. In: 2019 artificial intelligence for transforming business and society (AITB), vol. 1, pp. 1\u20135. 
IEEE","DOI":"10.1109\/AITB48515.2019.8947435"},{"key":"512_CR23","doi-asserted-by":"crossref","unstructured":"Qi F, Chen Y, Li M, Yao Y, Liu Z, Sun M (2020) Onion: A simple and effective defense against textual backdoor attacks. arXiv preprint arXiv:2011.10369","DOI":"10.18653\/v1\/2021.emnlp-main.752"},{"key":"512_CR24","doi-asserted-by":"crossref","unstructured":"Qi F, Chen Y, Zhang X, Li M, Liu Z, Sun M (2021) Mind the style of text! adversarial and backdoor attacks based on text style transfer. arXiv preprint arXiv:2110.07139","DOI":"10.18653\/v1\/2021.emnlp-main.374"},{"key":"512_CR25","doi-asserted-by":"crossref","unstructured":"Qi F, Li M, Chen Y, Zhang Z, Liu Z, Wang Y, Sun M (2021) Hidden killer: Invisible textual backdoor attacks with syntactic trigger. arXiv preprint arXiv:2105.12400","DOI":"10.18653\/v1\/2021.acl-long.37"},{"issue":"8","key":"512_CR26","first-page":"9","volume":"1","author":"A Radford","year":"2019","unstructured":"Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I et al (2019) Language models are unsupervised multitask learners. OpenAI blog 1(8):9","journal-title":"OpenAI blog"},{"key":"512_CR27","doi-asserted-by":"crossref","unstructured":"Rane N (2023) Chatgpt and similar generative artificial intelligence (ai) for building and construction industry: Contribution, opportunities and challenges of large language models for industry 4.0, industry 5.0, and society 5.0. Opportunities and Challenges of Large Language Models for Industry 4","DOI":"10.2139\/ssrn.4603221"},{"key":"512_CR28","doi-asserted-by":"crossref","unstructured":"Socher R, Perelygin A, Wu J, Chuang J, Manning CD, Ng AY, Potts C (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631\u20131642","DOI":"10.18653\/v1\/D13-1170"},{"issue":"8","key":"512_CR29","doi-asserted-by":"publisher","first-page":"1930","DOI":"10.1038\/s41591-023-02448-8","volume":"29","author":"AJ Thirunavukarasu","year":"2023","unstructured":"Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW (2023) Large language models in medicine. Nat Med 29(8):1930\u20131940","journal-title":"Nat Med"},{"issue":"23","key":"512_CR30","doi-asserted-by":"publisher","DOI":"10.3390\/math12233751","volume":"12","author":"Q Wang","year":"2024","unstructured":"Wang Q, Wu Y, Xuan H, Wu H (2024) Flare: a backdoor attack to federated learning with refined evasion. Mathematics 12(23):3751","journal-title":"Mathematics"},{"key":"512_CR31","unstructured":"Yang Z (2019) lnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237"},{"key":"512_CR32","unstructured":"Yang A, Yang B, Hui B, Zheng B, Yu B, Zhou C, Li C, Li C, Liu D, Huang F, et al (2024) Qwen2 technical report. arXiv preprint arXiv:2407.10671"},{"key":"512_CR33","doi-asserted-by":"crossref","unstructured":"You W, Hammoudeh Z, Lowd D (2023)Large language models are better adversaries: Exploring generative clean-label backdoor attacks against text classifiers. arXiv preprint arXiv:2310.18603","DOI":"10.18653\/v1\/2023.findings-emnlp.833"},{"issue":"1","key":"512_CR34","doi-asserted-by":"publisher","DOI":"10.1186\/s42400-024-00338-1","volume":"8","author":"X Yue","year":"2025","unstructured":"Yue X, Zhang Z, Jing J, Wang W (2025) Ctta: a novel chain-of-thought transfer adversarial attacks framework for large language models. 
Cybersecurity 8(1):36","journal-title":"Cybersecurity"},{"key":"512_CR35","doi-asserted-by":"crossref","unstructured":"Zampieri M, Malmasi S, Nakov P, Rosenthal S, Farra N, Kumar R (2019) Predicting the type and target of offensive posts in social media. arXiv preprint arXiv:1902.09666","DOI":"10.18653\/v1\/N19-1144"},{"issue":"1","key":"512_CR36","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s42400-025-00361-w","volume":"8","author":"J Zhang","year":"2025","unstructured":"Zhang J, Bu H, Wen H, Liu Y, Fei H, Xi R, Li L, Yang Y, Zhu H, Meng D (2025) When LLMs meet cybersecurity: a systematic literature review. Cybersecurity 8(1):1\u201341","journal-title":"Cybersecurity"},{"key":"512_CR37","unstructured":"Zhang R, Li H, Wen R, Jiang W, Zhang Y, Backes M, Shen Y, Zhang Y (2024) Instruction backdoor attacks against customized LLMs. In: 33rd USENIX security symposium (USENIX Security 24), pp. 1849\u20131866"},{"key":"512_CR38","doi-asserted-by":"crossref","unstructured":"Zhang X, Zhang Z, Ji S, Wang T (2021) Trojaning language models for fun and profit. In: 2021 IEEE European symposium on security and privacy (EuroS&P), pp. 179\u2013197. IEEE","DOI":"10.1109\/EuroSP51992.2021.00022"},{"key":"512_CR39","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2025.129645","author":"Z Zhang","year":"2025","unstructured":"Zhang Z, Zhang J, Zhang X, Mai W (2025) A comprehensive overview of generative AI (GAI): technologies, applications, and challenges. Neurocomputing. https:\/\/doi.org\/10.1016\/j.neucom.2025.129645","journal-title":"Neurocomputing"},{"key":"512_CR40","unstructured":"Zhang X, Zhao J, LeCun Y (2015) Character-level convolutional networks for text classification. Adv Neural Inf Process Syst, 28"},{"issue":"11","key":"512_CR41","doi-asserted-by":"publisher","first-page":"6889","DOI":"10.1109\/TKDE.2024.3392335","volume":"36","author":"Z Zhao","year":"2024","unstructured":"Zhao Z, Fan W, Li J, Liu Y, Mei X, Wang Y, Wen Z, Wang F, Zhao X, Tang J et al (2024) Recommender systems in the era of large language models (LLMs). 
IEEE Trans Knowl Data Eng 36(11):6889\u20136907","journal-title":"IEEE Trans Knowl Data Eng"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00512-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-025-00512-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00512-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T01:02:05Z","timestamp":1765501325000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s42400-025-00512-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,12]]},"references-count":41,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["512"],"URL":"https:\/\/doi.org\/10.1186\/s42400-025-00512-z","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,12]]},"assertion":[{"value":"23 July 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest. We would like to respectfully bring to your attention that this manuscript was previously made publicly available as a preprint on arXiv. Since its original posting, it has undergone two rounds of revision, and there are currently three historical versions archived on the arXiv platform. As such, similarity checks may show a considerable overlap with the first version. The most recent and finalized version of the preprint is available at the following link: https:\/\/arxiv.org\/abs\/2412.17531. All authors have carefully reviewed and approved the current manuscript, and confirm that they have no conflict of interest to disclose.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"114"}}