{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,25]],"date-time":"2025-12-25T03:06:59Z","timestamp":1766632019481,"version":"3.48.0"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,25]],"date-time":"2025-12-25T00:00:00Z","timestamp":1766620800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,25]],"date-time":"2025-12-25T00:00:00Z","timestamp":1766620800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012431","name":"Jiangsu Provincial Agricultural Science and Technology Independent Innovation Fund","doi-asserted-by":"publisher","award":["BF2024071"],"award-info":[{"award-number":["BF2024071"]}],"id":[{"id":"10.13039\/501100012431","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Natural language processing models are widely acknowledged for their strong data fitting capabilities, diverse application scenarios, and adaptable learning methodologies. However, these models, including the large language models, exhibit sensitivity to adversarial example attacks. These examples are slightly perturbed from the pristine text but mislead the model classification. Nevertheless, the existing attack methods primarily focus on the attack effectiveness without semantics-preservation considered. Moreover, the trade-off between evasion effectiveness and concealment of perturbed texts is less investigated. 
In this study, we propose a multi-objective adversarial text generation framework (MOATG) that simultaneously optimizes attack success rate, imperceptibility, and semantic similarity. Tailored objective functions and dominance relations are designed for character-, word-, and sentence-level perturbations. MOATG is evaluated against five baselines across five benchmark datasets. Experimental results show that MOATG achieves a 5.38% average improvement in attack success rate and reduces word error rate by 1.48%, demonstrating its effectiveness in balancing attack strength and stealth.<\/jats:p>","DOI":"10.1186\/s42400-025-00500-3","type":"journal-article","created":{"date-parts":[[2025,12,25]],"date-time":"2025-12-25T03:01:31Z","timestamp":1766631691000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Towards optimal adversarial texts: character, word, and sentence"],"prefix":"10.1186","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8415-5103","authenticated-orcid":false,"given":"Pengchuan","family":"Wang","sequence":"first","affiliation":[]},{"given":"Deqiang","family":"Li","sequence":"additional","affiliation":[]},{"given":"Qianmu","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,25]]},"reference":[{"key":"500_CR1","doi-asserted-by":"crossref","unstructured":"Alzantot M et al (2018) Generating natural language adversarial examples. In: Proceedings of the 2018 conference on empirical methods in natural language processing, 2018","DOI":"10.18653\/v1\/D18-1316"},{"issue":"6","key":"500_CR2","doi-asserted-by":"publisher","first-page":"8156","DOI":"10.1109\/TCSS.2023.3290558","volume":"11","author":"T Amirifar","year":"2023","unstructured":"Amirifar T, Lahmiri S, Zanjani MK (2023) An NLP-deep learning approach for product rating prediction based on online reviews and product features. 
IEEE Trans Comput Soc Syst 11(6):8156\u20138168","journal-title":"IEEE Trans Comput Soc Syst"},{"key":"500_CR3","doi-asserted-by":"crossref","unstructured":"Blohm M et al (2018) Comparing attention-based convolutional and recurrent neural networks: success and limitations in machine reading comprehension. In: Proceedings of the 22nd conference on computational natural language learning, 2018","DOI":"10.18653\/v1\/K18-1011"},{"key":"500_CR4","doi-asserted-by":"crossref","unstructured":"Bowman SR et al (2015) A large annotated corpus for learning natural language inference. In: Conference on empirical methods in natural language processing, EMNLP 2015. Association for Computational Linguistics (ACL), 2015","DOI":"10.18653\/v1\/D15-1075"},{"issue":"4","key":"500_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/s11704-022-1653-0","volume":"16","author":"Y Chai","year":"2022","unstructured":"Chai Y et al (2022) TPRPF: a preserving framework of privacy relations based on adversarial training for texts in big data. Front Comput Sci 16(4):164618","journal-title":"Front Comput Sci"},{"key":"500_CR6","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2020.107432","volume":"181","author":"Y Chen","year":"2020","unstructured":"Chen Y et al (2020) Security of mobile multimedia data: the adversarial examples for spatio-temporal data. Comput Netw 181:107432","journal-title":"Comput Netw"},{"key":"500_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2024.104213","volume":"150","author":"M Chen","year":"2025","unstructured":"Chen M et al (2025) AECR: Automatic attack technique intelligence extraction based on fine-tuned large language model. Comput Secur 150:104213","journal-title":"Comput Secur"},{"key":"500_CR8","doi-asserted-by":"crossref","unstructured":"Cheng M, Wei W, Hsieh C-J (2019) Evaluating and enhancing the robustness of dialogue systems: a case study on a negotiation agent. 
In: Proceedings of the 2019 conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1 (long and short papers), 2019","DOI":"10.18653\/v1\/N19-1336"},{"key":"500_CR9","unstructured":"Creo A, Pudasaini S (2025) SilverSpeak: evading AI-generated text detectors using homoglyphs. In: Proceedings of the 1st workshop on GenAI content detection (GenAIDetect), 2025"},{"key":"500_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2022.108532","volume":"119","author":"Y Cui","year":"2022","unstructured":"Cui Y, Meng Xi, Qiao J (2022) A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl Soft Comput 119:108532","journal-title":"Appl Soft Comput"},{"key":"500_CR11","doi-asserted-by":"crossref","unstructured":"Ebrahimi J et al (2018) HotFlip: white-box adversarial examples for text classification. In: Proceedings of the 56th annual meeting of the association for computational linguistics (volume 2: short papers), 2018","DOI":"10.18653\/v1\/P18-2006"},{"key":"500_CR12","doi-asserted-by":"crossref","unstructured":"Eger S, Benz Y (2020) From hero to zéroe: a benchmark of low-level adversarial attacks. arXiv preprint arXiv:2010.05648","DOI":"10.18653\/v1\/2020.aacl-main.79"},{"key":"500_CR13","doi-asserted-by":"crossref","unstructured":"Gan WC, Ng HT (2019) Improving the robustness of question answering systems to question paraphrasing. In: Proceedings of the 57th annual meeting of the association for computational linguistics, 2019","DOI":"10.18653\/v1\/P19-1610"},{"key":"500_CR14","doi-asserted-by":"crossref","unstructured":"Gao J et al (2018) Black-box generation of adversarial text sequences to evade deep learning classifiers. In: 2018 IEEE security and privacy workshops (SPW). IEEE, 2018","DOI":"10.1109\/SPW.2018.00016"},{"key":"500_CR15","doi-asserted-by":"crossref","unstructured":"Garg S, Ramakrishnan G (2020) BAE: BERT-based adversarial examples for text classification. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), 2020","DOI":"10.18653\/v1\/2020.emnlp-main.498"},{"key":"500_CR16","doi-asserted-by":"crossref","unstructured":"Glockner M, Shwartz V, Goldberg Y (2018) Breaking NLI systems with sentences that require simple lexical inferences. In: Proceedings of the 56th annual meeting of the association for computational linguistics (volume 2: short papers), 2018","DOI":"10.18653\/v1\/P18-2103"},{"key":"500_CR17","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572"},{"issue":"14s","key":"500_CR18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3593042","volume":"55","author":"S Goyal","year":"2023","unstructured":"Goyal S et al (2023) A survey of adversarial defenses and robustness in NLP. ACM Comput Surv 55(14s):1\u201339","journal-title":"ACM Comput Surv"},{"key":"500_CR19","doi-asserted-by":"crossref","unstructured":"Han W et al (2020) Adversarial attack and defense of structured prediction models. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), 2020","DOI":"10.18653\/v1\/2020.emnlp-main.182"},{"key":"500_CR20","doi-asserted-by":"crossref","unstructured":"Jia R, Liang P (2017) Adversarial examples for evaluating reading comprehension systems. In: Proceedings of the 2017 conference on empirical methods in natural language processing, 2017","DOI":"10.18653\/v1\/D17-1215"},{"key":"500_CR21","doi-asserted-by":"crossref","unstructured":"Jin D et al (2020) Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In: Proceedings of the AAAI conference on artificial intelligence, vol 34, no. 
05, 2020","DOI":"10.1609\/aaai.v34i05.6311"},{"key":"500_CR22","doi-asserted-by":"crossref","unstructured":"Le T, Wang S, Lee D (2020) Malcom: generating malicious comments to attack neural fake news detection models. In: 2020 IEEE international conference on data mining (ICDM). IEEE, 2020","DOI":"10.1109\/ICDM50108.2020.00037"},{"key":"500_CR23","doi-asserted-by":"crossref","unstructured":"Li J et al (2019) TextBugger: generating adversarial text against real-world applications. In: 26th annual network and distributed system security symposium, 2019","DOI":"10.14722\/ndss.2019.23138"},{"key":"500_CR24","doi-asserted-by":"crossref","unstructured":"Liang B et al (2018) Deep text classification can be fooled. In: Proceedings of the 27th international joint conference on artificial intelligence, 2018","DOI":"10.24963\/ijcai.2018\/585"},{"key":"500_CR25","unstructured":"Mathai A et al (2020) Adversarial black-box attacks on text classifiers using multi-objective genetic optimization guided by deep networks. arXiv preprint arXiv:2011.03901"},{"key":"500_CR26","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli S-M, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016","DOI":"10.1109\/CVPR.2016.282"},{"key":"500_CR27","doi-asserted-by":"crossref","unstructured":"Pruthi D, Dhingra B, Lipton ZC (2019) Combating adversarial misspellings with robust word recognition. In: Proceedings of the 57th annual meeting of the association for computational linguistics, 2019","DOI":"10.18653\/v1\/P19-1561"},{"key":"500_CR28","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.127667","volume":"590","author":"Y Qi","year":"2024","unstructured":"Qi Y et al (2024) Adaptive gradient-based word saliency for adversarial text attacks. 
Neurocomputing 590:127667","journal-title":"Neurocomputing"},{"key":"500_CR29","doi-asserted-by":"publisher","first-page":"278","DOI":"10.1016\/j.neucom.2022.04.020","volume":"492","author":"S Qiu","year":"2022","unstructured":"Qiu S et al (2022) Adversarial attack and defense technologies in natural language processing: a survey. Neurocomputing 492:278\u2013307","journal-title":"Neurocomputing"},{"issue":"3","key":"500_CR30","doi-asserted-by":"publisher","first-page":"2181","DOI":"10.1007\/s11831-022-09859-9","volume":"30","author":"I Rahimi","year":"2023","unstructured":"Rahimi I et al (2023) A review on constraint handling techniques for population-based algorithms: from single-objective to multi-objective optimization. Arch Comput Methods Eng 30(3):2181\u20132209","journal-title":"Arch Comput Methods Eng"},{"key":"500_CR31","doi-asserted-by":"crossref","unstructured":"Rajpurkar P et al (2016) SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 conference on empirical methods in natural language processing, 2016","DOI":"10.18653\/v1\/D16-1264"},{"key":"500_CR32","unstructured":"Roadhouse C, Shardlow M, Williams A (2024) MMU NLP at CheckThat! 2024: homoglyphs are adversarial attacks. Faggioli et al.[22]"},{"key":"500_CR33","unstructured":"Rocamora EA et al (2024) Revisiting character-level adversarial attacks for language models. In: International conference on machine learning. PMLR, 2024"},{"issue":"32","key":"500_CR34","doi-asserted-by":"publisher","DOI":"10.1126\/sciadv.adg7992","volume":"9","author":"KN Sasidhar","year":"2023","unstructured":"Sasidhar KN et al (2023) Enhancing corrosion-resistant alloy design through natural language processing and deep learning. Sci Adv 9(32):eadg7992","journal-title":"Sci Adv"},{"key":"500_CR35","doi-asserted-by":"crossref","unstructured":"Shi Z, Huang M (2020) Robustness to modification with shared words in paraphrase identification. 
In: Findings of the association for computational linguistics: EMNLP 2020, 2020","DOI":"10.18653\/v1\/2020.findings-emnlp.16"},{"issue":"6","key":"500_CR36","doi-asserted-by":"publisher","first-page":"5158","DOI":"10.1109\/JIOT.2022.3222159","volume":"10","author":"K Tang","year":"2022","unstructured":"Tang K et al (2022) Rethinking perturbation directions for imperceptible adversarial attacks on point clouds. IEEE Internet Things J 10(6):5158\u20135169","journal-title":"IEEE Internet Things J"},{"key":"500_CR37","doi-asserted-by":"crossref","unstructured":"Tang K et al (2024) Manifold constraints for imperceptible adversarial attacks on point clouds. In: Proceedings of the AAAI conference on artificial intelligence, vol 38, no. 6, 2024","DOI":"10.1609\/aaai.v38i6.28318"},{"key":"500_CR38","doi-asserted-by":"crossref","unstructured":"Tang K et al (2024) Flat: flux-aware imperceptible adversarial attacks on 3d point clouds. In: European conference on computer vision. Springer, Cham","DOI":"10.1007\/978-3-031-72658-3_12"},{"key":"500_CR39","unstructured":"Valle-Aguilera J et al (2024) SINAI at CheckThat! 2024: stealthy character-level adversarial attacks using homoglyphs and iterative search. In: CLEF (working notes), 2024"},{"key":"500_CR40","doi-asserted-by":"crossref","unstructured":"Waghela H, Rakshit S, Sen J (2024) A modified word saliency-based adversarial attack on text classification models. In: International conference on computing, intelligence and data analytics. Springer, Singapore","DOI":"10.1007\/978-981-96-0451-7_27"},{"key":"500_CR41","doi-asserted-by":"crossref","unstructured":"Wang Y, Bansal M (2018) Robust machine comprehension models via adversarial training. 
In: Proceedings of the 2018 conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2 (short papers), 2018","DOI":"10.18653\/v1\/N18-2091"},{"key":"500_CR42","doi-asserted-by":"crossref","unstructured":"Wieting J, Mallinson J, Gimpel K (2017) Learning paraphrastic sentence embeddings from back-translated bitext. In: Proceedings of the 2017 conference on empirical methods in natural language processing, 2017","DOI":"10.18653\/v1\/D17-1026"},{"issue":"8","key":"500_CR43","doi-asserted-by":"publisher","DOI":"10.1111\/exsy.70079","volume":"42","author":"L Xu","year":"2025","unstructured":"Xu L et al (2025) Single word change is all you need: using LLMs to create synthetic training examples for text classifiers. Expert Syst 42(8):e70079","journal-title":"Expert Syst"},{"key":"500_CR44","doi-asserted-by":"crossref","unstructured":"Ye M et al (2022) Texthoaxer: budgeted hard-label adversarial attacks on text. In: Proceedings of the AAAI conference on artificial intelligence, vol 36, no. 4, 2022","DOI":"10.1609\/aaai.v36i4.20303"},{"key":"500_CR45","doi-asserted-by":"crossref","unstructured":"Ye M et al (2022) Leapattack: hard-label adversarial attack on text via gradient-based optimization. In: Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining, 2022","DOI":"10.1145\/3534678.3539357"},{"key":"500_CR46","doi-asserted-by":"crossref","unstructured":"Yin F et al (2020) On the robustness of language encoders against grammatical errors. In: Proceedings of the 58th annual meeting of the association for computational linguistics, 2020","DOI":"10.18653\/v1\/2020.acl-main.310"},{"key":"500_CR47","doi-asserted-by":"crossref","unstructured":"Zang Y et al (2020) Word-level textual adversarial attacking as combinatorial optimization. In: Proceedings of the 58th annual meeting of the association for computational linguistics, 2020","DOI":"10.18653\/v1\/2020.acl-main.540"},{"key":"500_CR48","unstructured":"Zhang X, Zhao J, LeCun Y (2015) Character-level convolutional networks for text classification. In: Advances in neural information processing systems, vol 28, 2015"},{"key":"500_CR49","unstructured":"Zhang Y, Baldridge J, He L (2019) PAWS: paraphrase adversaries from word scrambling. In: Proceedings of the 2019 conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1 (long and short papers), 2019"},{"key":"500_CR50","doi-asserted-by":"crossref","unstructured":"Zheng X et al (2020) Evaluating and enhancing the robustness of neural network-based dependency parsing models with adversarial examples. In: Proceedings of the 58th annual meeting of the association for computational linguistics, 2020","DOI":"10.18653\/v1\/2020.acl-main.590"},{"key":"500_CR51","doi-asserted-by":"publisher","DOI":"10.1016\/j.cma.2021.114029","volume":"385","author":"K Zhong","year":"2021","unstructured":"Zhong K et al (2021) MOMPA: multi-objective marine predator algorithm. Comput Methods Appl Mech Eng 385:114029","journal-title":"Comput Methods Appl Mech Eng"},{"key":"500_CR52","doi-asserted-by":"crossref","unstructured":"Zhou Y et al (2019) Learning to discriminate perturbations for blocking adversarial attacks in text classification. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), 2019","DOI":"10.18653\/v1\/D19-1496"},{"key":"500_CR53","doi-asserted-by":"publisher","first-page":"135","DOI":"10.1016\/j.neucom.2022.05.054","volume":"500","author":"B Zhu","year":"2022","unstructured":"Zhu B et al (2022) Leveraging transferability and improved beam search in textual adversarial attacks. 
Neurocomputing 500:135\u2013142","journal-title":"Neurocomputing"},{"key":"500_CR54","doi-asserted-by":"crossref","unstructured":"Zhuang H, Zhang Y, Liu S (2023) A pilot study of query-free adversarial attack against stable diffusion. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, 2023","DOI":"10.1109\/CVPRW59228.2023.00236"},{"key":"500_CR55","doi-asserted-by":"crossref","unstructured":"Zou W et al (2020) A reinforced generation of adversarial examples for neural machine translation. In: Proceedings of the 58th annual meeting of the association for computational linguistics. 2020.","DOI":"10.18653\/v1\/2020.acl-main.319"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00500-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-025-00500-3","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-025-00500-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,25]],"date-time":"2025-12-25T03:01:36Z","timestamp":1766631696000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s42400-025-00500-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,25]]},"references-count":55,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["500"],"URL":"https:\/\/doi.org\/10.1186\/s42400-025-00500-3","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,25]]},"assertion":[{"value":"27 June 
2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"121"}}