{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,2]],"date-time":"2026-05-02T06:45:19Z","timestamp":1777704319563,"version":"3.51.4"},"reference-count":35,"publisher":"SAGE Publications","issue":"2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2023,8,1]]},"abstract":"<jats:p>\u00a0In recent years, contrastive learning has been very successful in unsupervised tasks of representation learning and has received a lot of attention in supervised tasks. In supervised tasks, the discrete nature of natural language makes the construction of sample pairs difficult and the models are poorly robust to adversarial samples, so it remains a challenge to make contrastive learning effective for text classification tasks and to guarantee the robustness of the models. This paper presents a contrastive adversarial learning framework built using data augmentation with labeled insertion data. Specifically,By adding perturbation to the word-embedding matrix, adversarial samples are generated as positive examples of contrastive learning, and external semantic information is introduced to construct negative examples. Contrastive learning is used to improve the sensitivity and generalization ability of the model, and adversarial training is used to improve robustness, thereby improving the classification accuracy. In addition, the momentum contrast from unsupervised tasks is also introduced into the text classification task to increase the number of sample pairs. 
Experimental results on several datasets show that the proposed approach outperforms the baseline approaches, and additional experiments verify the effectiveness of the proposed framework under low-resource conditions.<\/jats:p>","DOI":"10.3233\/jifs-230787","type":"journal-article","created":{"date-parts":[[2023,4,25]],"date-time":"2023-04-25T12:33:34Z","timestamp":1682426014000},"page":"3473-3484","source":"Crossref","is-referenced-by-count":0,"title":["Contrastive adversarial learning in text classification tasks"],"prefix":"10.1177","volume":"45","author":[{"given":"Jia-Long","family":"He","sequence":"first","affiliation":[{"name":"School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China"}]},{"given":"Xiao-Lin","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China"}]},{"given":"Yong-Ping","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China"}]},{"given":"Huan-Xiang","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China"}]},{"given":"Lu","family":"Gao","sequence":"additional","affiliation":[{"name":"School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China"}]},{"given":"En-Hui","family":"Xu","sequence":"additional","affiliation":[{"name":"China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"}]}],"member":"179","reference":[{"key":"10.3233\/JIFS-230787_ref1","unstructured":"Devlin Jacob et al., Bert: Pre-training of deep bidirectional transformers for language understanding, In: arXiv preprint arXiv:1810.04805 (2018)."},{"key":"10.3233\/JIFS-230787_ref2","unstructured":"Liu Yinhan et
al., Roberta: A robustly optimized bert pretraining approach, In: arXiv preprint arXiv:1907.11692 (2019)."},{"key":"10.3233\/JIFS-230787_ref3","unstructured":"van den Oord Aaron, Li Yazhe and Vinyals Oriol, Representation learning with contrastive predictive coding, In: arXiv preprint arXiv:1807.03748 (2018)."},{"key":"10.3233\/JIFS-230787_ref4","doi-asserted-by":"crossref","unstructured":"Gao Tianyu, Yao Xingcheng and Chen Danqi, Simcse: Simple contrastive learning of sentence embeddings, In: arXiv preprint arXiv:2104.08821 (2021).","DOI":"10.18653\/v1\/2021.emnlp-main.552"},{"key":"10.3233\/JIFS-230787_ref5","doi-asserted-by":"crossref","unstructured":"Giorgi John et al., Declutr: Deep contrastive learning for unsupervised textual representations, In: arXiv preprint arXiv:2006.03659 (2020).","DOI":"10.18653\/v1\/2021.acl-long.72"},{"key":"10.3233\/JIFS-230787_ref6","doi-asserted-by":"crossref","unstructured":"Wang Xiao and Qi Guo-Jun, Contrastive learning with stronger augmentations, In: IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).","DOI":"10.1109\/TPAMI.2022.3203630"},{"key":"10.3233\/JIFS-230787_ref7","first-page":"18661","article-title":"Supervised contrastive learning","volume":"33","author":"Khosla","year":"2020","journal-title":"Advances in Neural Information Processing Systems"},{"key":"10.3233\/JIFS-230787_ref8","first-page":"1597","article-title":"A simple framework for contrastive learning of visual representations","author":"Chen","year":"2020","journal-title":"International conference on machine learning"},{"key":"10.3233\/JIFS-230787_ref9","first-page":"9729","article-title":"Momentum contrast for unsupervised visual representation learning","author":"He","year":"2020","journal-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition"},{"key":"10.3233\/JIFS-230787_ref10","unstructured":"Qu Yanru et al., Coda: Contrast-enhanced and diversity-promoting data augmentation for natural language
understanding, In: arXiv preprint arXiv:2010.08670 (2020)."},{"key":"10.3233\/JIFS-230787_ref11","doi-asserted-by":"crossref","unstructured":"Yan Yuanmeng et al., Consert: A contrastive framework for self-supervised sentence representation transfer, In: arXiv preprint arXiv:2105.11741 (2021).","DOI":"10.18653\/v1\/2021.acl-long.393"},{"key":"10.3233\/JIFS-230787_ref12","unstructured":"Goodfellow Ian J., Shlens Jonathon and Szegedy Christian, Explaining and harnessing adversarial examples, In: arXiv preprint arXiv:1412.6572 (2014)."},{"key":"10.3233\/JIFS-230787_ref13","unstructured":"Miyato Takeru, Dai Andrew M. and Goodfellow Ian, Adversarial training methods for semi-supervised text classification, In: arXiv preprint arXiv:1605.07725 (2016)."},{"key":"10.3233\/JIFS-230787_ref14","unstructured":"Zhu Chen et al., Freelb: Enhanced adversarial training for natural language understanding, In: arXiv preprint arXiv:1909.11764 (2019)."},{"key":"10.3233\/JIFS-230787_ref15","doi-asserted-by":"crossref","unstructured":"Jiang Haoming et al., Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization, In: arXiv preprint arXiv:1911.03437 (2019).","DOI":"10.18653\/v1\/2020.acl-main.197"},{"key":"10.3233\/JIFS-230787_ref16","unstructured":"Vaswani Ashish et al., Attention is all you need, In: Advances in Neural Information Processing Systems 30 (2017)."},{"key":"10.3233\/JIFS-230787_ref17","unstructured":"Wei Junqiu et al., Nezha: Neural contextualized representation for chinese language understanding, In: arXiv preprint arXiv:1909.00204 (2019)."},{"key":"10.3233\/JIFS-230787_ref18","unstructured":"Yang Zhilin et al., Xlnet: Generalized autoregressive pretraining for language understanding, In: Advances in Neural Information Processing Systems 32 (2019)."},{"key":"10.3233\/JIFS-230787_ref19","unstructured":"Lan Zhenzhong et al., Albert: A lite bert for self-supervised learning of language representations, In:
arXiv preprint arXiv:1909.11942 (2019)."},{"key":"10.3233\/JIFS-230787_ref20","doi-asserted-by":"crossref","unstructured":"Henrique Varella Ehrenfried and Eduardo Todt, Analysis of the impact of parameters in TextGCN, In: Anais do Computer on the Beach 12 (2021), 014\u2013019.","DOI":"10.14210\/cotb.v12.p014-019"},{"key":"10.3233\/JIFS-230787_ref21","first-page":"7267","article-title":"Fast multi-resolution transformer finetuning for extreme multi-label text classification","volume":"34","author":"Zhang","year":"2021","journal-title":"Advances in Neural Information Processing Systems"},{"key":"10.3233\/JIFS-230787_ref22","doi-asserted-by":"crossref","unstructured":"Hu Shengding et al., Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification, In: arXiv preprint arXiv:2108.02035 (2021).","DOI":"10.18653\/v1\/2022.acl-long.158"},{"key":"10.3233\/JIFS-230787_ref23","first-page":"1222","article-title":"Perceptual generative adversarial networks for small object detection","author":"Li","year":"2017","journal-title":"Proceedings of the IEEE conference on computer vision and pattern recognition"},{"key":"10.3233\/JIFS-230787_ref24","first-page":"2107","article-title":"Learning from simulated and unsupervised images through adversarial training","author":"Shrivastava","year":"2017","journal-title":"Proceedings of the IEEE conference on computer vision and pattern recognition"},{"key":"10.3233\/JIFS-230787_ref25","unstructured":"Shafahi Ali et al., Adversarial training for free!, In: Advances in Neural Information Processing Systems 32 (2019)."},{"key":"10.3233\/JIFS-230787_ref26","doi-asserted-by":"crossref","unstructured":"Wang Dong et al., Cline: Contrastive learning with semantic negative examples for natural language understanding, In: arXiv preprint arXiv:2107.00440 (2021).","DOI":"10.18653\/v1\/2021.acl-long.181"},{"key":"10.3233\/JIFS-230787_ref27","unstructured":"Chen Qianben et al., Dual Contrastive Learning: Text 
Classification via Label-Aware Data Augmentation, In: arXiv preprint arXiv:2201.08702 (2022)."},{"key":"10.3233\/JIFS-230787_ref28","unstructured":"Miao Deshui et al., Simple Contrastive Representation Adversarial Learning for NLP Tasks, In: arXiv preprint arXiv:2111.13301 (2021)."},{"issue":"10","key":"10.3233\/JIFS-230787_ref29","doi-asserted-by":"crossref","first-page":"11130","DOI":"10.1609\/aaai.v36i10.21362","article-title":"Improved text classification via contrastive adversarial training","volume":"36","author":"Pan","year":"2022","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"10.3233\/JIFS-230787_ref30","doi-asserted-by":"crossref","unstructured":"Zhang Yan et al., An unsupervised sentence embedding method by mutual information maximization, In: arXiv preprint arXiv:2009.12061 (2020).","DOI":"10.18653\/v1\/2020.emnlp-main.124"},{"key":"10.3233\/JIFS-230787_ref31","doi-asserted-by":"crossref","unstructured":"Wang Wei et al., Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives, In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2022), 2159\u20132165.","DOI":"10.1145\/3477495.3531823"},{"key":"10.3233\/JIFS-230787_ref32","doi-asserted-by":"crossref","unstructured":"Yin Fan et al., On the robustness of language encoders against grammatical errors, In: arXiv preprint arXiv:2005.05683 (2020).","DOI":"10.18653\/v1\/2020.acl-main.310"},{"key":"10.3233\/JIFS-230787_ref33","unstructured":"Ilya Loshchilov and Frank Hutter, Decoupled weight decay regularization, In: arXiv preprint arXiv:1711.05101 (2017)."},{"key":"10.3233\/JIFS-230787_ref34","unstructured":"Gunel Beliz et al., Supervised contrastive learning for pretrained language model fine-tuning, In: arXiv preprint arXiv:2011.01403
(2020)."},{"issue":"05","key":"10.3233\/JIFS-230787_ref35","doi-asserted-by":"crossref","first-page":"8018","DOI":"10.1609\/aaai.v34i05.6311","article-title":"Is bert really robust? a strong baseline for natural language attack on text classification and entailment","volume":"34","author":"Jin","year":"2020","journal-title":"Proceedings of the AAAI conference on artificial intelligence"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-230787","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T09:40:57Z","timestamp":1777455657000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-230787"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,1]]},"references-count":35,"journal-issue":{"issue":"2"},"URL":"https:\/\/doi.org\/10.3233\/jifs-230787","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,1]]}}}