{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T00:35:54Z","timestamp":1771979754291,"version":"3.50.1"},"reference-count":62,"publisher":"MIT Press","license":[{"start":{"date-parts":[[2021,7,13]],"date-time":"2021-07-13T00:00:00Z","timestamp":1626134400000},"content-version":"vor","delay-in-days":193,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["direct.mit.edu"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,7,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Text classification is a widely studied problem and has broad applications. In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL). SSL (Devlin et al., 2019a) is an unsupervised learning approach that defines auxiliary tasks on input data without using any human-provided labels and learns data representations by solving these auxiliary tasks. In SSL-Reg, a supervised classification task and an unsupervised SSL task are performed simultaneously. The SSL task is defined purely on input texts without using any human-provided labels. Training a model with an SSL task prevents the model from overfitting to the limited number of class labels in the classification task. Experiments on 17 text classification datasets demonstrate the effectiveness of our proposed method.
Code is available at https:\/\/github.com\/UCSD-AI4H\/SSReg.<\/jats:p>","DOI":"10.1162\/tacl_a_00389","type":"journal-article","created":{"date-parts":[[2021,7,15]],"date-time":"2021-07-15T19:14:56Z","timestamp":1626376496000},"page":"641-656","update-policy":"https:\/\/doi.org\/10.1162\/mitpressjournals.corrections.policy","source":"Crossref","is-referenced-by-count":10,"title":["Self-supervised Regularization for Text Classification"],"prefix":"10.1162","volume":"9","author":[{"given":"Meng","family":"Zhou","sequence":"first","affiliation":[{"name":"Shanghai Jiao Tong University, China. zhoumeng9904@sjtu.edu.cn"}]},{"given":"Zechen","family":"Li","sequence":"additional","affiliation":[{"name":"Northeastern University, United States. li.zec@northeastern.edu"}]},{"given":"Pengtao","family":"Xie","sequence":"additional","affiliation":[{"name":"UC San Diego, United States. p1xie@eng.ucsd.edu"}]}],"member":"281","published-online":{"date-parts":[[2021,7,8]]},"reference":[{"key":"2021072313053090000_bib1","article-title":"Layer normalization","author":"Ba","year":"2016","journal-title":"arXiv preprint arXiv:1607.06450"},{"key":"2021072313053090000_bib2","first-page":"15509","article-title":"Learning representations by maximizing mutual information across views","volume-title":"Advances in Neural Information Processing Systems","author":"Bachman","year":"2019"},{"key":"2021072313053090000_bib3","doi-asserted-by":"publisher","first-page":"214","DOI":"10.3115\/1219044.1219075","article-title":"NLTK: The natural language toolkit","volume-title":"Proceedings of the ACL Interactive Poster and Demonstration Sessions","author":"Bird","year":"2004"},{"key":"2021072313053090000_bib4","article-title":"A simple framework for contrastive learning of visual representations","author":"Chen","year":"2020","journal-title":"arXiv preprint arXiv:2002.05709"},{"key":"2021072313053090000_bib5","article-title":"Empirical evaluation of gated recurrent neural networks on sequence modeling","author":"Chung","year":"2014","journal-title":"arXiv preprint arXiv:1412.3555"},{"key":"2021072313053090000_bib6","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/E17-2110","article-title":"PubMed 200k RCT: A dataset for sequential sentence classification in medical abstracts","volume-title":"IJCNLP","author":"Dernoncourt","year":"2017"},{"key":"2021072313053090000_bib7","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin","year":"2019","journal-title":"NAACL-HLT"},{"key":"2021072313053090000_bib8","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","volume-title":"NAACL","author":"Devlin","year":"2019"},{"key":"2021072313053090000_bib9","doi-asserted-by":"publisher","DOI":"10.36227\/techrxiv.12308378.v1","article-title":"CERT: Contrastive self-supervised learning for language understanding","author":"Fang","year":"2020","journal-title":"arXiv e-prints arXiv:2005.12766"},{"key":"2021072313053090000_bib10","article-title":"Unsupervised representation learning by predicting image rotations","author":"Gidaris","year":"2018","journal-title":"arXiv preprint arXiv:1803.07728"},{"key":"2021072313053090000_bib11","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.740","article-title":"Don\u2019t stop pretraining: Adapt language models to domains and tasks","volume-title":"Proceedings of ACL","author":"Gururangan","year":"2020"},{"key":"2021072313053090000_bib12","article-title":"Momentum contrast for unsupervised visual representation learning","author":"He","year":"2019","journal-title":"arXiv preprint arXiv:1911.05722"},{"key":"2021072313053090000_bib13","first-page":"770","article-title":"Deep residual learning for image recognition","volume-title":"Proceedings of the IEEE Conference on Computer Vision and 
Pattern Recognition","author":"He","year":"2016"},{"key":"2021072313053090000_bib14","article-title":"Pathological visual question answering","author":"He","year":"2020","journal-title":"arXiv preprint arXiv:2010.12435"},{"key":"2021072313053090000_bib15","article-title":"Sample-efficient deep learning for covid-19 diagnosis based on CT scans","author":"He","year":"2020","journal-title":"medRxiv"},{"issue":"8","key":"2021072313053090000_bib16","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long short-term memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Computation"},{"key":"2021072313053090000_bib17","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1031","article-title":"Universal language model fine-tuning for text classification","volume-title":"ACL","author":"Howard","year":"2018"},{"key":"2021072313053090000_bib18","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00028","article-title":"Measuring the evolution of a scientific field through citation frames","author":"Jurgens","year":"2018","journal-title":"TACL"},{"key":"2021072313053090000_bib19","doi-asserted-by":"publisher","first-page":"655","DOI":"10.3115\/v1\/P14-1062","article-title":"A convolutional neural network for modelling sentences","volume-title":"Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Kalchbrenner","year":"2014"},{"key":"2021072313053090000_bib20","article-title":"Supervised contrastive learning","author":"Khosla","year":"2020","journal-title":"arXiv preprint arXiv:2004.11362"},{"key":"2021072313053090000_bib21","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2145","article-title":"SemEval-2019 Task 4: Hyperpartisan news 
detection","volume-title":"SemEval","author":"Kiesel","year":"2019"},{"key":"2021072313053090000_bib22","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.671","article-title":"Contrastive self-supervised learning for commonsense reasoning","author":"Klein","year":"2020","journal-title":"arXiv preprint arXiv:2005.00669"},{"key":"2021072313053090000_bib23","doi-asserted-by":"publisher","first-page":"85","DOI":"10.5121\/ijaia.2012.3208","article-title":"Text classification and classifiers: A survey","volume":"3","author":"Korde","year":"2012","journal-title":"International Journal of Artificial Intelligence & Applications"},{"key":"2021072313053090000_bib24","doi-asserted-by":"publisher","DOI":"10.1093\/database\/bav123","article-title":"ChemProt-3.0: A global chemical biology diseases mapping","volume-title":"Database","author":"Kringelum","year":"2016"},{"key":"2021072313053090000_bib25","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v29i1.9513","article-title":"Recurrent convolutional neural networks for text classification","volume-title":"AAAI","author":"Lai","year":"2015"},{"key":"2021072313053090000_bib26","article-title":"Albert: A lite BERT for self-supervised learning of language representations","author":"Lan","year":"2019","journal-title":"arXiv preprint arXiv:1909.11942"},{"key":"2021072313053090000_bib27","article-title":"Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension","author":"Lewis","year":"2019","journal-title":"arXiv preprint arXiv:1910.13461"},{"key":"2021072313053090000_bib28","first-page":"317","article-title":"Joint-task self-supervised learning for temporal correspondence","volume-title":"Advances in Neural Information Processing Systems","author":"Li","year":"2019"},{"key":"2021072313053090000_bib29","article-title":"Recurrent neural network for text classification with multi-task learning","author":"Liu","year":"2016","journal-title":"arXiv preprint 
arXiv:1605.05101"},{"key":"2021072313053090000_bib30","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","author":"Liu","year":"2019","journal-title":"arXiv preprint arXiv:1907.11692"},{"key":"2021072313053090000_bib31","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","author":"Liu","year":"2019"},{"key":"2021072313053090000_bib32","article-title":"Fixing weight decay regularization in Adam","author":"Loshchilov","year":"2017","journal-title":"ArXiv"},{"key":"2021072313053090000_bib33","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1360","article-title":"Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction","volume-title":"EMNLP","author":"Yi","year":"2018"},{"key":"2021072313053090000_bib34","article-title":"Learning word vectors for sentiment analysis","volume-title":"ACL","author":"Maas","year":"2011"},{"key":"2021072313053090000_bib35","doi-asserted-by":"publisher","DOI":"10.1145\/2766462.2767755","article-title":"Image-based recommendations on styles and substitutes","volume-title":"ACM SIGIR","author":"McAuley","year":"2015"},{"key":"2021072313053090000_bib36","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1145\/219717.219748","article-title":"WordNet: A lexical database for English","volume":"38","author":"Miller","year":"1995","journal-title":"Communications of the ACM"},{"key":"2021072313053090000_bib37","article-title":"Deep learning based text classification: A comprehensive review","author":"Minaee","year":"2020","journal-title":"arXiv preprint arXiv:2004.03705"},{"key":"2021072313053090000_bib38","doi-asserted-by":"publisher","first-page":"9339","DOI":"10.1109\/CVPR.2018.00973","article-title":"Improvements to context based self-supervised learning","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Nathan 
Mundhenk","year":"2018"},{"key":"2021072313053090000_bib39","article-title":"Representation learning with contrastive predictive coding","author":"Oord","year":"2018","journal-title":"arXiv preprint arXiv:1807.03748"},{"key":"2021072313053090000_bib40","doi-asserted-by":"publisher","first-page":"2536","DOI":"10.1109\/CVPR.2016.278","article-title":"Context encoders: Feature learning by inpainting","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Pathak","year":"2016"},{"key":"2021072313053090000_bib41","article-title":"Improving language understanding by generative pre-training","author":"Radford"},{"key":"2021072313053090000_bib42","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","author":"Raffel","year":"2019","journal-title":"arXiv preprint arXiv:1910.10683"},{"issue":"5500","key":"2021072313053090000_bib43","doi-asserted-by":"publisher","first-page":"2323","DOI":"10.1126\/science.290.5500.2323","article-title":"Nonlinear dimensionality reduction by locally linear embedding","volume":"290","author":"Roweis","year":"2000","journal-title":"Science"},{"key":"2021072313053090000_bib44","article-title":"Curl: Contrastive unsupervised representations for reinforcement learning","author":"Srinivas","year":"2020","journal-title":"arXiv preprint arXiv:2004.04136"},{"key":"2021072313053090000_bib45","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6428","article-title":"Ernie 2.0: A continual pre-training framework for language understanding","author":"Sun","year":"2019","journal-title":"arXiv preprint arXiv:1907.12412"},{"key":"2021072313053090000_bib46","article-title":"Test-time training with self-supervision for generalization under distribution shifts","volume-title":"ICML","author":"Sun","year":"2020"},{"key":"2021072313053090000_bib47","first-page":"3104","article-title":"Sequence to sequence learning with neural networks","volume-title":"Advances in 
Neural Information Processing Systems","author":"Sutskever","year":"2014"},{"key":"2021072313053090000_bib48","first-page":"1556","article-title":"Improved semantic representations from tree-structured long short-term memory networks","volume-title":"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)","author":"Tai","year":"2015"},{"key":"2021072313053090000_bib49","first-page":"5998","article-title":"Attention is all you need","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani","year":"2017"},{"key":"2021072313053090000_bib50","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W18-5446","article-title":"GLUE: A multi-task benchmark and analysis platform for natural language understanding","author":"Wang","year":"2018","journal-title":"arXiv preprint arXiv:1804.07461"},{"key":"2021072313053090000_bib51","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2017\/406","article-title":"Combining knowledge with deep convolutional neural networks for short text classification","volume-title":"IJCAI","author":"Wang","year":"2017"},{"key":"2021072313053090000_bib52","doi-asserted-by":"publisher","first-page":"2566","DOI":"10.1109\/CVPR.2019.00267","article-title":"Learning correspondence from the cycle-consistency of time","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Wang","year":"2019"},{"key":"2021072313053090000_bib53","doi-asserted-by":"publisher","first-page":"6629","DOI":"10.1109\/CVPR.2019.00679","article-title":"Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern 
Recognition","author":"Wang","year":"2019"},{"key":"2021072313053090000_bib54","doi-asserted-by":"publisher","first-page":"6383","DOI":"10.18653\/v1\/D19-1670","article-title":"EDA: Easy data augmentation techniques for boosting performance on text classification tasks","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)","author":"Wei","year":"2019"},{"key":"2021072313053090000_bib55","article-title":"Importance-aware learning for neural headline editing","author":"Qingyang","year":"2019","journal-title":"arXiv preprint arXiv:1912.01114"},{"key":"2021072313053090000_bib56","first-page":"3733","article-title":"Unsupervised feature learning via non-parametric instance discrimination","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Zhirong","year":"2018"},{"key":"2021072313053090000_bib57","doi-asserted-by":"publisher","DOI":"10.36227\/techrxiv.12502298","article-title":"Transfer learning or self-supervised learning? 
a tale of two pretraining paradigms","author":"Yang","year":"2020","journal-title":"arXiv preprint arXiv:2007.04234"},{"key":"2021072313053090000_bib58","first-page":"5754","article-title":"Xlnet: Generalized autoregressive pretraining for language understanding","volume-title":"Advances in neural information processing systems","author":"Yang","year":"2019"},{"key":"2021072313053090000_bib59","article-title":"Contrastive self-supervised learning for graph classification","author":"Zeng","year":"2021","journal-title":"AAAI"},{"key":"2021072313053090000_bib60","doi-asserted-by":"publisher","first-page":"649","DOI":"10.1007\/978-3-319-46487-9_40","article-title":"Colorful image colorization","volume-title":"European conference on computer vision","author":"Zhang","year":"2016"},{"key":"2021072313053090000_bib61","article-title":"Character-level convolutional networks for text classification","volume-title":"NeurIPS","author":"Zhang","year":"2015"},{"key":"2021072313053090000_bib62","article-title":"A c-lstm neural network for text classification","author":"Zhou","year":"2015","journal-title":"ArXiv"}],"container-title":["Transactions of the Association for Computational 
Linguistics"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00389\/1930826\/tacl_a_00389.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"http:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00389\/1930826\/tacl_a_00389.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,4]],"date-time":"2023-01-04T13:34:49Z","timestamp":1672839289000},"score":1,"resource":{"primary":{"URL":"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00389\/102845\/Self-supervised-Regularization-for-Text"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021]]},"references-count":62,"URL":"https:\/\/doi.org\/10.1162\/tacl_a_00389","relation":{},"ISSN":["2307-387X"],"issn-type":[{"value":"2307-387X","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2021]]},"published":{"date-parts":[[2021]]}}}