{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T16:09:30Z","timestamp":1778083770942,"version":"3.51.4"},"reference-count":201,"publisher":"Association for Computing Machinery (ACM)","issue":"2","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62306007"],"award-info":[{"award-number":["62306007"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2023YFC3209203"],"award-info":[{"award-number":["2023YFC3209203"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Research Fund of Anhui University of Technology","award":["QZ202209"],"award-info":[{"award-number":["QZ202209"]}]},{"name":"Wenzhou-Kean University Internal (Faculty\/Staff) Start-Up Research Grant","award":["ISRG2023024"],"award-info":[{"award-number":["ISRG2023024"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2026,1,31]]},"abstract":"<jats:p>Relation extraction (RE) is critical in information extraction (IE) and knowledge graph construction. RE aims to identify the semantic relations between entities from natural language texts. Traditional RE models often rely on many manually annotated training samples, which are limited when data is scarce. Therefore, exploring how to perform RE under few-shot conditions has become a research focus. Recently, prompt learning has attracted attention from researchers due to its ability to fully activate the potential of Pre-trained Language Models (PLMs), especially making significant progress in Few-Shot RE (FSRE). This article comprehensively reviews FSRE based on prompt learning. 
We first introduce the fundamental concepts of FSRE and prompt learning. Then, we systematically review recent research advances in FSRE with prompt learning, focusing on two perspectives: template construction and model fine-tuning strategies. Next, we summarize the benchmark datasets, evaluation metrics, and experimental results of representative works in FSRE. Afterward, we present practical applications of prompt-based FSRE in specialized domains. Finally, we discuss the critical challenges and future research directions of FSRE tasks based on prompt learning.<\/jats:p>","DOI":"10.1145\/3746281","type":"journal-article","created":{"date-parts":[[2025,7,4]],"date-time":"2025-07-04T07:19:51Z","timestamp":1751613591000},"page":"1-38","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Few-Shot Relation Extraction Based on Prompt Learning: A Taxonomy, Survey, Challenges and Future Directions"],"prefix":"10.1145","volume":"58","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9302-0358","authenticated-orcid":false,"given":"Tingting","family":"Hang","sequence":"first","affiliation":[{"name":"Anhui University of Technology","place":["Maanshan, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-3733-1204","authenticated-orcid":false,"given":"Shuting","family":"Liu","sequence":"additional","affiliation":[{"name":"Anhui University of Technology","place":["Maanshan, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2627-5403","authenticated-orcid":false,"given":"Jun","family":"Feng","sequence":"additional","affiliation":[{"name":"Key Laboratory of Water Big Data Technology, Ministry of Water Resources, Hohai University","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1218-5682","authenticated-orcid":false,"given":"Hamza","family":"Djigal","sequence":"additional","affiliation":[{"name":"Computer Science, Wenzhou-Kean University","place":["Wenzhou, 
China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2022-5747","authenticated-orcid":false,"given":"Jun","family":"Huang","sequence":"additional","affiliation":[{"name":"Anhui University of Technology","place":["Maanshan, China"]}]}],"member":"320","published-online":{"date-parts":[[2025,9,8]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.130"},{"key":"e_1_3_1_3_2","first-page":"16592","volume-title":"Proceedings of the 2024 Joint International Conference on Computational Linguistics","author":"Alam Fahmida","year":"2024","unstructured":"Fahmida Alam, Md. Asiful Islam, Robert Vacareanu, et\u00a0al. 2024. Towards realistic few-shot relation extraction: A new meta dataset and evaluation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics. 16592\u201316606. Retrieved from https:\/\/aclanthology.org\/2024.lrec-main.1442"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.ACL-MAIN.142"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3519022"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1002\/SPE.4380220902"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-60615-1_1"},{"key":"e_1_3_1_8_2","first-page":"25005","volume-title":"Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","volume":"35","author":"Bar Amir","year":"2022","unstructured":"Amir Bar, Yossi Gandelsman, Trevor Darrell, et\u00a0al. 2022. Visual prompting via image inpainting. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, Vol. 35. 25005\u201325017. 
Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/9f09f316a3eaf59d9ced5ffaefe97e0f-Abstract-Conference.html"},{"key":"e_1_3_1_9_2","volume-title":"Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020","author":"Brown Tom B.","year":"2020","unstructured":"Tom B. Brown, Benjamin Mann, Nick Ryder, et\u00a0al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html"},{"key":"e_1_3_1_10_2","first-page":"2633","volume-title":"Proceedings of the 30th USENIX Security Symposium","author":"Carlini Nicholas","year":"2021","unstructured":"Nicholas Carlini, Florian Tram\u00e8r, Eric Wallace, et\u00a0al. 2021. Extracting training data from large language models. In Proceedings of the 30th USENIX Security Symposium. 2633\u20132650. Retrieved from https:\/\/www.usenix.org\/conference\/usenixsecurity21\/presentation\/carlini-extracting"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","unstructured":"J. Harry Caufield Harshad Hegde Vincent Emonet et\u00a0al. 2024. Structured prompt interrogation and recursive extraction of semantics (SPIRES): A method for populating knowledge bases using zero-shot learning. Bioinform 40 3 (2024) btae104. DOI:10.1093\/BIOINFORMATICS\/BTAE104","DOI":"10.1093\/BIOINFORMATICS\/BTAE104"},{"key":"e_1_3_1_12_2","doi-asserted-by":"crossref","unstructured":"Banghao Chen Zhaofeng Zhang Nicolas Langren\u00e9 et\u00a0al. 2025. Unleashing the potential of prompt engineering for large language models. Patterns 6 6 (2025) 101260. 
Retrieved from https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2666389925001084","DOI":"10.1016\/j.patter.2025.101260"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2306.14122"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2023.3279374"},{"key":"e_1_3_1_15_2","first-page":"2374","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics","author":"Chen Xiang","year":"2022","unstructured":"Xiang Chen, Lei Li, Shumin Deng, et\u00a0al. 2022. LightNER: A lightweight tuning paradigm for low-resource NER via pluggable prompting. In Proceedings of the 29th International Conference on Computational Linguistics. 2374\u20132387. Retrieved from https:\/\/aclanthology.org\/2022.coling-1.209"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3531746"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3663976.3664029"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.ACL-LONG.409"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2104.07650"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485447.3511998"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-ACL.5"},{"key":"e_1_3_1_22_2","unstructured":"Aakanksha Chowdhery Sharan Narang Jacob Devlin et\u00a0al. 2023. PaLM: Scaling language modeling with pathways. J. Mach. Learn. Res. 24 1 (2023) 1\u2013113. Retrieved from http:\/\/jmlr.org\/papers\/v24\/22-1144.html"},{"key":"e_1_3_1_23_2","first-page":"7057","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019","author":"Conneau Alexis","year":"2019","unstructured":"Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. 
In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019. 7057\u20137067. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/c04c19c2c2474dbf5f7ac4372c5b9af1-Abstract.html"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.FINDINGS-ACL.161"},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.FINDINGS-EMNLP.669"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","unstructured":"Gelei Deng Yi Liu Yuekang Li et\u00a0al. 2023. MASTERKEY: Automated jailbreaking of large language model chatbots. In Proceedings of the 31st Annual Network and Distributed System Security Symposium. Retrieved from https:\/\/www.ndss-symposium.org\/ndss-paper\/masterkey-automated-jailbreaking-of-large-language-model-chatbots\/","DOI":"10.14722\/ndss.2024.24188"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.222"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICKG63256.2024.00013"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/N19-1423"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-EMNLP.512"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2022.3199544"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.emnlp-main.64"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3645089"},{"key":"e_1_3_1_34_2","first-page":"1834","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics","author":"Duan Bin","year":"2022","unstructured":"Bin Duan, Shusen Wang, Xingxian Liu, et\u00a0al. 2022. Cluster-aware pseudo-labeling for supervised open relation extraction. In Proceedings of the 29th International Conference on Computational Linguistics. 1834\u20131841. 
Retrieved from https:\/\/aclanthology.org\/2022.coling-1.158"},{"key":"e_1_3_1_35_2","volume-title":"Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023","author":"Duan Haonan","year":"2023","unstructured":"Haonan Duan, Adam Dziedzic, Nicolas Papernot, et\u00a0al. 2023. Flocks of stochastic parrots: Differentially private prompt learning for large language models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023. Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2023\/hash\/f26119b4ffe38c24d97e4c49d334b99e-Abstract-Conference.html"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V38I16.29752"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.674"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V34I05.6281"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1649"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557422"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3659943"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.576"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3462475"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","unstructured":"Bocheng Guo Jiana Meng Di Zhao et\u00a0al. 2024. Integrating graph convolutional networks to enhance prompt learning for biomedical relation extraction. J. Biomed. Informatics 157 C (2024) 104717. 10.1016\/J.JBI.2024.104717","DOI":"10.1016\/J.JBI.2024.104717"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","unstructured":"Bocheng Guo Di Zhao Xin Dong Jiana Meng and Hongfei Lin. 2024. Few-shot biomedical relation extraction using data augmentation and domain information. Neurocomputing 595 (2024) 127881. 
DOI:10.1016\/j.neucom.2024.127881","DOI":"10.1016\/j.neucom.2024.127881"},{"key":"e_1_3_1_46_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Guo Qingyan","year":"2024","unstructured":"Qingyan Guo, Rui Wang, Junliang Guo, et\u00a0al. 2024. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=ZG3RaNIsO8"},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.ACL-LONG.381"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.24963\/IJCAI.2020\/500"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-EMNLP.231"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.aacl-main.75"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","unstructured":"Xu Han Weilin Zhao Ning Ding Zhiyuan Liu and Maosong Sun. 2022. PTR: Prompt tuning with rules for text classification. AI Open 3 (2022) 182\u2013192. DOI:10.1016\/j.aiopen.2022.11.003","DOI":"10.1016\/j.aiopen.2022.11.003"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D18-1514"},{"key":"e_1_3_1_53_2","series-title":"CEUR Workshop Proceedings","first-page":"63","volume-title":"Proceedings of the GeoExT 2024: Geographic Information Extraction from Texts Workshop Co-Located with The 46th European Conference on Information Retrieval","volume":"3683","author":"Haris Erum","year":"2024","unstructured":"Erum Haris, Anthony G. Cohn, and John G. Stell. 2024. Exploring spatial representations in the historical Lake District texts with LLM-based relation extraction. In Proceedings of the GeoExT 2024: Geographic Information Extraction from Texts Workshop Co-Located with The 46th European Conference on Information Retrieval (CEUR Workshop Proceedings, Vol. 3683). 63\u201373. 
Retrieved from https:\/\/ceur-ws.org\/Vol-3683\/paper9.pdf"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2407.19354"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/S41666-024-00162-9"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.ESWA.2022.118927"},{"key":"e_1_3_1_57_2","volume-title":"Proceedings of the 9th International Conference on Learning Representations","author":"He Pengcheng","year":"2021","unstructured":"Pengcheng He, Xiaodong Liu, Jianfeng Gao, et\u00a0al. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In Proceedings of the 9th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=XPZIaotutsD"},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.5555\/1859664.1859670"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN54540.2023.10192002"},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1016\/J.IJMEDINF.2023.105321"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.158"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3611899"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626312"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-ACL.222"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19827-4_41"},{"key":"e_1_3_1_66_2","article-title":"Complex relation extraction: Challenges and opportunities","author":"Jiang Haiyun","year":"2020","unstructured":"Haiyun Jiang, Qiaoben Bao, Qiao Cheng, et\u00a0al. 2020. Complex relation extraction: Challenges and opportunities. arXiv preprint arXiv:2012.04821 (2020). 
Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:228064150","journal-title":"arXiv preprint"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.ACL-MAIN.197"},{"key":"e_1_3_1_68_2","doi-asserted-by":"publisher","unstructured":"Zhengbao Jiang Frank F. Xu Jun Araki and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics 8 (2020) 423\u2013438. DOI:10.1162\/tacl_a_00324","DOI":"10.1162\/tacl_a_00324"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.197"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-ACL.50"},{"key":"e_1_3_1_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01832"},{"key":"e_1_3_1_72_2","first-page":"22199","volume-title":"Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","volume":"35","author":"Kojima Takeshi","year":"2022","unstructured":"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, et\u00a0al. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, Vol. 35. 22199\u201322213. 
Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html"},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EMNLP-MAIN.243"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.ACL-MAIN.703"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.FINDINGS-EMNLP.459"},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2024\/702"},{"key":"e_1_3_1_77_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Li Jiangmeng","year":"2024","unstructured":"Jiangmeng Li, Fei Song, Yifan Jin, et\u00a0al. 2024. BayesPrompt: Prompting large-scale pre-trained language models on few-shot inference via debiased domain abstraction. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=DmD1wboID9"},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","unstructured":"Qing Li Yichen Wang Tao You et\u00a0al. 2022. BioKnowPrompt: Incorporating imprecise knowledge into prompt-tuning verbalizer with biomedical text for relation extraction. Inf. Sci. 617 C (2022) 346\u2013358. 
10.1016\/J.INS.2022.113810.063","DOI":"10.1016\/J.INS.2022.113810.063"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1145\/3675417.3675513"},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V33I01.33018642"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.ACL-LONG.353"},{"key":"e_1_3_1_82_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.FINDINGS-EMNLP.1024"},{"key":"e_1_3_1_83_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.397"},{"key":"e_1_3_1_84_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.225"},{"key":"e_1_3_1_85_2","first-page":"9586","volume-title":"Findings of the Association for Computational Linguistics: EMNLP","author":"Liu Pai","year":"2024","unstructured":"Pai Liu, Wenyang Gao, Wenjie Dong, et\u00a0al. 2024. A survey on open information extraction from rule-based model to large language model. In Findings of the Association for Computational Linguistics: EMNLP. 9586\u20139608. Retrieved from https:\/\/aclanthology.org\/2024.findings-emnlp.560"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.1145\/3560815"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-emnlp.769"},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V34I03.5681"},{"key":"e_1_3_1_89_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-short.8"},{"key":"e_1_3_1_90_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Liu Xiaogeng","year":"2024","unstructured":"Xiaogeng Liu, Nan Xu, Muhao Chen, et\u00a0al. 2024. AutoDAN: Generating stealthy jailbreak prompts on aligned large language models. In Proceedings of the 12th International Conference on Learning Representations. 
Retrieved from https:\/\/openreview.net\/forum?id=7Jwpw4qKkb"},{"key":"e_1_3_1_91_2","doi-asserted-by":"publisher","unstructured":"Xiao Liu Yanan Zheng Zhengxiao Du Ming Ding Yujie Qian Zhilin Yang and Jie Tang. 2024. GPT understands, too. AI Open 5 (2024) 208\u2013215. DOI:10.1016\/j.aiopen.2023.08.012","DOI":"10.1016\/j.aiopen.2023.08.012"},{"key":"e_1_3_1_92_2","article-title":"RoBERTa: A Robustly optimized BERT pretraining approach","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, et\u00a0al. 2019. RoBERTa: A Robustly optimized BERT pretraining approach. CoRR abs\/arXiv:1907.11692 (2019). http:\/\/arxiv.org\/abs\/1907.11692","journal-title":"CoRR"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-97-5569-1_22"},{"key":"e_1_3_1_94_2","doi-asserted-by":"crossref","unstructured":"Renze Lou Kai Zhang and Wenpeng Yin. 2023. Large language model instruction following: A survey of progresses and challenges. Comput. Linguist 50 3 (2023) 1053\u20131095. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:257632435","DOI":"10.1162\/coli_a_00523"},{"key":"e_1_3_1_95_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-EMNLP.490"},{"key":"e_1_3_1_96_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.556"},{"key":"e_1_3_1_97_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.395"},{"key":"e_1_3_1_98_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.FINDINGS-ACL.34"},{"key":"e_1_3_1_99_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.NAACL-MAIN.420"},{"key":"e_1_3_1_100_2","first-page":"10970","volume-title":"Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation","author":"Ma Shengkun","year":"2024","unstructured":"Shengkun Ma, Jiale Han, Yi Liang, et\u00a0al. 2024. Making pre-trained language models better continual few-shot relation extractors. 
In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 10970\u201310983. Retrieved from https:\/\/aclanthology.org\/2024.lrec-main.957"},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.FINDINGS-EMNLP.153"},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.466"},{"key":"e_1_3_1_103_2","unstructured":"Gr\u00e9goire Mialon Roberto Dess\u00ec Maria Lomeli Christoforos Nalmpantis Ramakanth Pasunuru Roberta Raileanu Baptiste Rozi\u00e9re Timo Schick Jane Dwivedi-Yu Asli Celikyilmaz Edouard Grave Yann LeCun and Thomas Scialom. 2023. Augmented language models: A survey. Trans. Mach. Learn. Res. 2023 (2023). Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:256868474"},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.1145\/3605943"},{"key":"e_1_3_1_105_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.NAACL-MAIN.201"},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.1145\/3445965"},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V34I05.6374"},{"key":"e_1_3_1_108_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EMNLP-MAIN.440"},{"key":"e_1_3_1_109_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1038"},{"key":"e_1_3_1_110_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2307.15337"},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.2979670"},{"key":"e_1_3_1_112_2","volume-title":"Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeffrey Wu, Xu Jiang, et\u00a0al. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022. 
Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/b1efde53be364a73914f58805a001731-Abstract-Conference.html"},{"key":"e_1_3_1_113_2","doi-asserted-by":"publisher","unstructured":"Yilmazcan \u00d6zyurt Stefan Feuerriegel and Ce Zhang. 2023. Document-level in-context few-shot relation extraction via pre-trained language models. CoRR abs\/arXiv:2310.11085 (2023). 10.48550\/ARXIV.2310.11085","DOI":"10.48550\/ARXIV.2310.11085"},{"key":"e_1_3_1_114_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3592000"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.EMNLP-MAIN.298"},{"key":"e_1_3_1_116_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/N18-1202"},{"key":"e_1_3_1_117_2","doi-asserted-by":"publisher","DOI":"10.24432\/C5201W"},{"key":"e_1_3_1_118_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1250"},{"key":"e_1_3_1_119_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/P19-1493"},{"key":"e_1_3_1_120_2","first-page":"7867","volume-title":"Proceedings of the 37th International Conference on Machine Learning","volume":"119","author":"Qu Meng","year":"2020","unstructured":"Meng Qu, Tianyu Gao, Louis-Pascal A. C. Xhonneux, et\u00a0al. 2020. Few-shot relation extraction via bayesian meta-learning on relation graphs. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. 7867\u20137876. Retrieved from http:\/\/proceedings.mlr.press\/v119\/qu20a.html"},{"key":"e_1_3_1_121_2","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186024"},{"issue":"8","key":"e_1_3_1_122_2","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, et\u00a0al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9. 
Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:160025533","journal-title":"OpenAI blog"},{"key":"e_1_3_1_123_2","unstructured":"Colin Raffel Noam Shazeer Adam Roberts et\u00a0al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21 1 (2020) 1\u201367. Retrieved from http:\/\/jmlr.org\/papers\/v21\/20-074.html"},{"key":"e_1_3_1_124_2","doi-asserted-by":"crossref","unstructured":"Pawan Kumar Rajpoot and Ankur Parikh. 2023. GPT-FinRE: In-context learning for financial relation extraction using large language models. In Proceedings of the 6th Workshop on Financial Technology and Natural Language Processing. 42\u201345. Retrieved from https:\/\/aclanthology.org\/2023.finnlp-2.5","DOI":"10.18653\/v1\/2023.finnlp-2.5"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411763.3451760"},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2403.13369"},{"key":"e_1_3_1_127_2","doi-asserted-by":"publisher","unstructured":"Ofer Sabo Yanai Elazar Yoav Goldberg and Ido Dagan. 2021. Revisiting few-shot relation classification: Evaluation data and classification schemes. Transactions of the Association for Computational Linguistics 9 (2021) 691\u2013706. DOI:10.1162\/tacl_a_00392","DOI":"10.1162\/tacl_a_00392"},{"key":"e_1_3_1_128_2","article-title":"A systematic survey of prompt engineering in large language models: Techniques and applications","author":"Sahoo Pranab","year":"2024","unstructured":"Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, et\u00a0al. 2024. A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv preprint arXiv:2402.07927 (2024). 
https:\/\/api.semanticscholar.org\/CorpusID:267636769","journal-title":"arXiv preprint"},{"key":"e_1_3_1_129_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EMNLP-MAIN.92"},{"key":"e_1_3_1_130_2","volume-title":"Proceedings of the 10th International Conference on Learning Representations","author":"Sanh Victor","year":"2022","unstructured":"Victor Sanh, Albert Webson, Colin Raffel, et\u00a0al. 2022. Multitask prompted training enables zero-shot task generalization. In Proceedings of the 10th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=9Vrb9D0WI4"},{"key":"e_1_3_1_131_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.COLING-MAIN.488"},{"key":"e_1_3_1_132_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EACL-MAIN.20"},{"key":"e_1_3_1_133_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.EMNLP-MAIN.346"},{"key":"e_1_3_1_134_2","first-page":"14274","volume-title":"Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","volume":"35","author":"Shu Manli","year":"2022","unstructured":"Manli Shu, Weili Nie, De-An Huang, et\u00a0al. 2022. Test-Time Prompt tuning for zero-shot generalization in vision-language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, Vol. 35. 14274\u201314289. Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/5bf2b802e24106064dc547ae9283bb0c-Abstract-Conference.html"},{"key":"e_1_3_1_135_2","doi-asserted-by":"publisher","DOI":"10.1145\/3582688"},{"key":"e_1_3_1_136_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D17-1188"},{"key":"e_1_3_1_137_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations","author":"Staab Robin","year":"2024","unstructured":"Robin Staab, Mark Vero, Mislav Balunovic, et\u00a0al. 2024. 
Beyond memorization: Violating privacy via inference with large language models. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=kmn0BhQk7p"},{"key":"e_1_3_1_138_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1030"},{"key":"e_1_3_1_139_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.ACL-LONG.218"},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.5555\/2002472.2002539"},{"key":"e_1_3_1_141_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-77844-5_5"},{"key":"e_1_3_1_142_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V33I01.33017072"},{"key":"e_1_3_1_143_2","article-title":"LLaMA: Open and efficient foundation language models","volume":"2302","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, and Marie-Anne Lachaux. 2023. LLaMA: Open and efficient foundation language models. ArXiv abs\/2302.13971 (2023). Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:257219404","journal-title":"ArXiv"},{"key":"e_1_3_1_144_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2212.02199"},{"key":"e_1_3_1_145_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2310.16944"},{"key":"e_1_3_1_146_2","series-title":"CEUR Workshop Proceedings","volume-title":"Proceedings of the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024) Co-Located with 23rd International Conference of the Italian Association for Artificial Intelligence","volume":"3877","author":"Valerio Flavio","year":"2024","unstructured":"Flavio Valerio, Pierpaolo Basile, and Marco de Gemmis. 2024. Adapting a large language model to the legal domain: A case study in Italian. 
In Proceedings of the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024) Co-Located with 23rd International Conference of the Italian Association for Artificial Intelligence (CEUR Workshop Proceedings, Vol. 3877). Retrieved from https:\/\/ceur-ws.org\/Vol-3877\/paper7.pdf"},{"key":"e_1_3_1_147_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.ACL-LONG.346"},{"key":"e_1_3_1_148_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1221"},{"key":"e_1_3_1_149_2","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3671647"},{"key":"e_1_3_1_150_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.EMNLP-MAIN.214"},{"key":"e_1_3_1_151_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.537"},{"key":"e_1_3_1_152_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.FINDINGS-NAACL.186"},{"key":"e_1_3_1_153_2","volume-title":"The 11th International Conference on Learning Representations","author":"Wang Xuezhi","year":"2023","unstructured":"Xuezhi Wang, Jason Wei, Dale Schuurmans, et\u00a0al. 2023. Self-consistency improves chain of thought reasoning in language models. In The 11th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=1PL1NIMMrw"},{"key":"e_1_3_1_154_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462873"},{"key":"e_1_3_1_155_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.NAACL-MAIN.167"},{"key":"e_1_3_1_156_2","volume-title":"Proceedings of the Tenth International Conference on Learning Representations","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Maarten Bosma, Vincent Y. Zhao, et\u00a0al. 2022. Finetuned language models are zero-shot learners. In Proceedings of the Tenth International Conference on Learning Representations. 
Retrieved from https:\/\/openreview.net\/forum?id=gEZrGCozdqR"},{"key":"e_1_3_1_157_2","first-page":"24824","volume-title":"Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, et\u00a0al. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, Vol. 35. 24824\u201324837. Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html"},{"key":"e_1_3_1_158_2","volume-title":"Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023","author":"Wen Yuxin","year":"2023","unstructured":"Yuxin Wen, Neel Jain, John Kirchenbauer, et\u00a0al. 2023. Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023. Retrieved from http:\/\/papers.nips.cc\/paper_files\/paper\/2023\/hash\/a00548031e4647b13042c97c922fadf1-Abstract-Conference.html"},{"key":"e_1_3_1_159_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2302.11382"},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1021"},{"key":"e_1_3_1_161_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3358119"},{"key":"e_1_3_1_162_2","article-title":"A survey of graph prompting methods: Techniques, applications, and challenges","author":"Wu Xuansheng","year":"2023","unstructured":"Xuansheng Wu, Kaixiong Zhou, Mingchen Sun, et\u00a0al. 2023. A survey of graph prompting methods: Techniques, applications, and challenges. arXiv preprint arXiv:2303.07275 (2023). 
Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:257495881","journal-title":"arXiv preprint"},{"key":"e_1_3_1_163_2","doi-asserted-by":"publisher","unstructured":"Binhong Xie, Yu Li, Hongyan Zhao, Lihu Pan, and Enhui Wang. 2023. A cross-attention fusion based graph convolution auto-encoder for open relation extraction. IEEE\/ACM Transactions on Audio, Speech, and Language Processing 31 (2023) 476\u2013485. DOI:10.1109\/TASLP.2022.3226680","DOI":"10.1109\/TASLP.2022.3226680"},{"key":"e_1_3_1_164_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2304.03472"},{"key":"e_1_3_1_165_2","doi-asserted-by":"publisher","DOI":"10.1007\/S11704-024-40555-Y"},{"key":"e_1_3_1_166_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EMNLP-MAIN.534"},{"key":"e_1_3_1_167_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.166"},{"key":"e_1_3_1_168_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.SUSTAINLP-1.13"},{"key":"e_1_3_1_169_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.ACL-SHORT.124"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1145\/3485447.3511921"},{"key":"e_1_3_1_171_2","first-page":"3780","volume-title":"Proceedings of the 13th Language Resources and Evaluation Conference","author":"Yeh Hui-Syuan","year":"2022","unstructured":"Hui-Syuan Yeh, Thomas Lavergne, and Pierre Zweigenbaum. 2022. Decorate the examples: A simple method of prompt design for biomedical relation extraction. In Proceedings of the 13th Language Resources and Evaluation Conference. 3780\u20133787. 
Retrieved from https:\/\/aclanthology.org\/2022.lrec-1.403"},{"key":"e_1_3_1_172_2","doi-asserted-by":"publisher","DOI":"10.5555\/3692070.3694441"},{"key":"e_1_3_1_173_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2309.10253"},{"key":"e_1_3_1_174_2","first-page":"2339","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics","author":"Yu Tianshu","year":"2022","unstructured":"Tianshu Yu, Min Yang, and Xiaoyan Zhao. 2022. Dependency-aware prototype learning for few-shot relation classification. In Proceedings of the 29th International Conference on Computational Linguistics. 2339\u20132345. Retrieved from https:\/\/aclanthology.org\/2022.coling-1.205"},{"key":"e_1_3_1_175_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V38I15.29596"},{"key":"e_1_3_1_176_2","doi-asserted-by":"publisher","DOI":"10.1145\/3664647.3680717"},{"key":"e_1_3_1_177_2","doi-asserted-by":"publisher","DOI":"10.1609\/AAAI.V37I9.26309"},{"key":"e_1_3_1_178_2","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581388"},{"key":"e_1_3_1_179_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2020.EMNLP-MAIN.127"},{"key":"e_1_3_1_180_2","first-page":"41092","volume-title":"Proceedings of the International Conference on Machine Learning","volume":"202","author":"Zhang Biao","year":"2023","unstructured":"Biao Zhang, Barry Haddow, and Alexandra Birch. 2023. Prompting large language model for machine translation: A case study. In Proceedings of the International Conference on Machine Learning, Vol. 202. 41092\u201341110. Retrieved from https:\/\/proceedings.mlr.press\/v202\/zhang23m.html"},{"key":"e_1_3_1_181_2","doi-asserted-by":"publisher","DOI":"10.1145\/3617680"},{"key":"e_1_3_1_182_2","first-page":"11328","volume-title":"Proceedings of the 37th International Conference on Machine Learning","volume":"119","author":"Zhang Jingqing","year":"2020","unstructured":"Jingqing Zhang, Yao Zhao, Mohammad Saleh, et\u00a0al. 2020. 
PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. 11328\u201311339. Retrieved from http:\/\/proceedings.mlr.press\/v119\/zhang20ae.html"},{"key":"e_1_3_1_183_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2023.FINDINGS-ACL.50"},{"key":"e_1_3_1_184_2","volume-title":"Proceedings of the 10th International Conference on Learning Representations","author":"Zhang Ningyu","year":"2022","unstructured":"Ningyu Zhang, Luoqiu Li, Xiang Chen, et\u00a0al. 2022. Differentiable prompt makes pre-trained language models better few-shot learners. In Proceedings of the 10th International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=ek9a0qIafW"},{"key":"e_1_3_1_185_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2022.EMNLP-MAIN.471"},{"key":"e_1_3_1_186_2","doi-asserted-by":"publisher","DOI":"10.1093\/BIB\/BBZ087"},{"key":"e_1_3_1_187_2","unstructured":"Wenjie Zhang, Xiaoning Song, Zhenhua Feng, et\u00a0al. 2023. LabelPrompt: Effective prompt-based learning for relation classification. In Proceedings of the 16th Asian Conference on Machine Learning 260 (2023) 1304\u20131319. Retrieved from https:\/\/proceedings.mlr.press\/v260\/zhang25c.html"},{"key":"e_1_3_1_188_2","doi-asserted-by":"publisher","unstructured":"Ying Zhang, Wencheng Huang, and Depeng Dang. 2024. A lightweight approach based on prompt for few-shot relation extraction. Computer Speech & Language 84 (2024) 101580. DOI:10.1016\/j.csl.2023.101580","DOI":"10.1016\/j.csl.2023.101580"},{"key":"e_1_3_1_189_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D17-1004"},{"key":"e_1_3_1_190_2","first-page":"13233","volume-title":"Proceedings of the 2024 Joint International Conference on Computational Linguistics","author":"Zhang Zirui","year":"2024","unstructured":"Zirui Zhang, Yiyu Yang, and Benhui Chen. 2024. 
Prompt tuning for few-shot relation extraction via modeling global and local graphs. In Proceedings of the 2024 Joint International Conference on Computational Linguistics. 13233\u201313242. Retrieved from https:\/\/aclanthology.org\/2024.lrec-main.1158"},{"key":"e_1_3_1_191_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/2021.EMNLP-MAIN.765"},{"key":"e_1_3_1_192_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-18315-7_6"},{"key":"e_1_3_1_193_2","doi-asserted-by":"publisher","DOI":"10.1145\/3674501"},{"key":"e_1_3_1_194_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2024.3365858"},{"key":"e_1_3_1_195_2","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3476968"},{"key":"e_1_3_1_196_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICME51207.2021.9428274"},{"key":"e_1_3_1_197_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2024.3376453"},{"key":"e_1_3_1_198_2","doi-asserted-by":"publisher","DOI":"10.1007\/S11263-022-01653-1"},{"key":"e_1_3_1_199_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.emnlp-main.747"},{"key":"e_1_3_1_200_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.aacl-short.21"},{"key":"e_1_3_1_201_2","volume-title":"Proceedings of the 11th International Conference on Learning Representations","author":"Zhou Yongchao","year":"2023","unstructured":"Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, et\u00a0al. 2023. Large language models are human-level prompt engineers. In Proceedings of the 11th International Conference on Learning Representations. 
Retrieved from https:\/\/openreview.net\/pdf?id=92gvk82DE-"},{"key":"e_1_3_1_202_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01435"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3746281","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T03:23:04Z","timestamp":1757474584000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3746281"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,8]]},"references-count":201,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,1,31]]}},"alternative-id":["10.1145\/3746281"],"URL":"https:\/\/doi.org\/10.1145\/3746281","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,8]]},"assertion":[{"value":"2024-06-12","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-19","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-08","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}