{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T16:10:33Z","timestamp":1774455033699,"version":"3.50.1"},"reference-count":41,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2024,4,18]],"date-time":"2024-04-18T00:00:00Z","timestamp":1713398400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U193607"],"award-info":[{"award-number":["U193607"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Classification methods based on fine-tuning pre-trained language models often require a large number of labeled samples; therefore, few-shot text classification has attracted considerable attention. Prompt learning is an effective method for addressing few-shot text classification tasks in low-resource settings. The essence of prompt tuning is to insert tokens into the input, thereby converting a text classification task into a masked language modeling problem. However, constructing appropriate prompt templates and verbalizers remains challenging, as manual prompts often require expert knowledge, while auto-constructing prompts is time-consuming. In addition, the extensive knowledge contained in entities and relations should not be ignored. To address these issues, we propose a structured knowledge prompt tuning (SKPT) method, which is a knowledge-enhanced prompt tuning approach. Specifically, SKPT includes three components: prompt template, prompt verbalizer, and training strategies. First, we insert virtual tokens into the prompt template based on open triples to introduce external knowledge. 
Second, we use an improved knowledgeable verbalizer to expand and filter the label words. Finally, we use structured knowledge constraints during the training phase to optimize the model. Through extensive experiments on few-shot text classification tasks with different settings, the effectiveness of our model has been demonstrated.<\/jats:p>","DOI":"10.3390\/bdcc8040043","type":"journal-article","created":{"date-parts":[[2024,4,18]],"date-time":"2024-04-18T07:55:54Z","timestamp":1713426954000},"page":"43","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification"],"prefix":"10.3390","volume":"8","author":[{"given":"Jinshuo","family":"Liu","sequence":"first","affiliation":[{"name":"Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China"}]},{"given":"Lu","family":"Yang","sequence":"additional","affiliation":[{"name":"Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,4,18]]},"reference":[{"key":"ref_1","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2019, January 2\u20137). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA."},{"key":"ref_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel","year":"2020","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016, January 1\u20135). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA.","DOI":"10.18653\/v1\/D16-1264"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Kowsari, K., Jafari Meimandi, K., Heidarysafa, M., Mendu, S., Barnes, L., and Brown, D. (2019). Text classification algorithms: A survey. Information, 10.","DOI":"10.3390\/info10040150"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liu, T., Hu, Y., Gao, J., Sun, Y., and Yin, B. (2021, January 10\u201315). Zero-shot text classification with semantically extended graph convolutional network. Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9411914"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Dong, B., Yao, Y., Xie, R., Gao, T., Han, X., Liu, Z., Lin, F., Lin, L., and Sun, M. (2020, January 8\u201313). Meta-information guided meta-learning for few-shot relation classification. Proceedings of the 28th International Conference on Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.coling-main.140"},{"key":"ref_7","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Shin, R., Lin, C., Thomson, S., Chen, C., Roy, S., Platanios, E.A., Pauls, A., Klein, D., Eisner, J., and Van Durme, B. (2021, January 7\u201311). Constrained Language Models Yield Few-Shot Semantic Parsers. 
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online.","DOI":"10.18653\/v1\/2021.emnlp-main.608"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Shin, T., Razeghi, Y., Logan IV, R.L., Wallace, E., and Singh, S. (2020, January 16\u201320). AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.346"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kolluru, K., Adlakha, V., Aggarwal, S., and Chakrabarti, S. (2020, January 16\u201320). OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.306"},{"key":"ref_11","first-page":"649","article-title":"Character-level convolutional networks for text classification","volume":"28","author":"Zhang","year":"2015","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"167","DOI":"10.3233\/SW-140134","article-title":"Dbpedia\u2014A large-scale, multilingual knowledge base extracted from wikipedia","volume":"6","author":"Lehmann","year":"2015","journal-title":"Semant. Web"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., and Tang, J. (2023). GPT understands, too. arXiv.","DOI":"10.1016\/j.aiopen.2023.08.012"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.aiopen.2022.11.003","article-title":"Ptr: Prompt tuning with rules for text classification","volume":"3","author":"Han","year":"2022","journal-title":"AI Open"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Schick, T., and Sch\u00fctze, H. (2021, January 19\u201323). 
Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Online.","DOI":"10.18653\/v1\/2021.eacl-main.20"},{"key":"ref_16","unstructured":"Razniewski, S., Yates, A., Kassner, N., and Weikum, G. (2021). Language models as or for knowledge bases. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Li, X.L., and Liang, P. (2021, January 1\u20136). Prefix-Tuning: Optimizing Continuous Prompts for Generation. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual.","DOI":"10.18653\/v1\/2021.acl-long.353"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Lester, B., Al-Rfou, R., and Constant, N. (2021, January 7\u201311). The Power of Scale for Parameter-Efficient Prompt Tuning. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online.","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"ref_19","unstructured":"Lee, L., Johnson, M., Toutanova, K., Roark, B., Frermann, L., Cohen, S.B., and Lapata, M. (2017). Transactions of the Association for Computational Linguistics, MIT Press."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Gao, T., Fisch, A., and Chen, D. (2021, January 1\u20136). Making Pre-trained Language Models Better Few-shot Learners. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual.","DOI":"10.18653\/v1\/2021.acl-long.295"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Le Bras, R., Choi, Y., and Hajishirzi, H. (2022, January 22\u201327). 
Generated Knowledge Prompting for Commonsense Reasoning. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland.","DOI":"10.18653\/v1\/2022.acl-long.225"},{"key":"ref_22","unstructured":"Zhai, J., Zheng, X., Wang, C.D., Li, H., and Tian, Y. (November, January 29). Knowledge prompt-tuning for sequential recommendation. Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Hu, S., Ding, N., Wang, H., Liu, Z., Wang, J., Li, J., Wu, W., and Sun, M. (2022, January 22\u201327). Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland.","DOI":"10.18653\/v1\/2022.acl-long.158"},{"key":"ref_24","first-page":"2787","article-title":"Translating embeddings for modeling multi-relational data","volume":"26","author":"Bordes","year":"2013","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_25","unstructured":"Nickel, M., Tresp, V., and Kriegel, H.P. (July, January 8). A Three-Way Model for Collective Learning on Multi-Relational Data. Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA."},{"key":"ref_26","unstructured":"Yang, B., Yih, W.t., He, X., Gao, J., and Deng, L. (2014). Embedding Entities and Relations for Learning and Inference in Knowledge Bases. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Wang, Z., Zhang, J., Feng, J., and Chen, Z. (2014, January 27\u201331). Knowledge graph embedding by translating on hyperplanes. Proceedings of the AAAI Conference on Artificial Intelligence, Qu\u00e9bec City, QC, Canada.","DOI":"10.1609\/aaai.v28i1.8870"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Lin, Y., Liu, Z., Sun, M., Liu, Y., and Zhu, X. 
(2015, January 25\u201330). Learning entity and relation embeddings for knowledge graph completion. Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA.","DOI":"10.1609\/aaai.v29i1.9491"},{"key":"ref_29","unstructured":"Sun, Z., Deng, Z., Nie, J., and Tang, J. (2019, January 6\u20139). RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_30","unstructured":"Zhang, S., Tay, Y., Yao, L., and Liu, Q. (2019, January 8\u201314). Quaternion Knowledge Graph Embeddings. Proceedings of the NeurIPS, Vancouver, BC, Canada."},{"key":"ref_31","first-page":"926","article-title":"Reasoning with neural tensor networks for knowledge base completion","volume":"26","author":"Socher","year":"2013","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"176","DOI":"10.1162\/tacl_a_00360","article-title":"KEPLER: A unified model for knowledge embedding and pre-trained language representation","volume":"9","author":"Wang","year":"2021","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"He, B., Zhou, D., Xiao, J., Jiang, X., Liu, Q., Yuan, N.J., and Xu, T. (2020, January 16\u201320). BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models. Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, Online.","DOI":"10.18653\/v1\/2020.findings-emnlp.207"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Peters, M.E., Neumann, M., Logan, R., Schwartz, R., Joshi, V., Singh, S., and Smith, N.A. (2019, January 3\u20137). Knowledge Enhanced Contextual Word Representations. 
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China.","DOI":"10.18653\/v1\/D19-1005"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Wang, Z., Zhang, J., Feng, J., and Chen, Z. (2014, January 25\u201319). Knowledge graph and text jointly embedding. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1167"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Sun, T., Shao, Y., Qiu, X., Guo, Q., Hu, Y., Huang, X.J., and Zhang, Z. (2020, January 8\u201313). CoLAKE: Contextualized Language and Knowledge Embedding. Proceedings of the 28th International Conference on Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.coling-main.327"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Xu, W., Fang, M., Yang, L., Jiang, H., Liang, G., and Zuo, C. (2021, January 7\u20139). Enabling language representation with knowledge graph and structured semantic information. Proceedings of the 2021 International Conference on Computer Communication and Artificial Intelligence (CCAI), Guangzhou, China.","DOI":"10.1109\/CCAI50917.2021.9447453"},{"key":"ref_38","unstructured":"(2024, April 09). RelatedWords. Available online: https:\/\/relatedwords.org\/."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Miller, G.A. (1992, January 23\u201326). WordNet. Proceedings of the Workshop on Speech and Natural Language\u2014HLT \u201991, Harriman, NY, USA.","DOI":"10.3115\/1075527.1075662"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Speer, R., Chin, J., and Havasi, C. (2017, January 4\u20139). Conceptnet 5.5: An open multilingual graph of general knowledge. 
Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11164"},{"key":"ref_41","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv."}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/4\/43\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:29:48Z","timestamp":1760106588000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/4\/43"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,18]]},"references-count":41,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2024,4]]}},"alternative-id":["bdcc8040043"],"URL":"https:\/\/doi.org\/10.3390\/bdcc8040043","relation":{},"ISSN":["2504-2289"],"issn-type":[{"value":"2504-2289","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,18]]}}}