{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:12:54Z","timestamp":1750219974031,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":24,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,21]],"date-time":"2022-10-21T00:00:00Z","timestamp":1666310400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Nature Science Foundation of China","award":["No.62062029, No.61762024"],"award-info":[{"award-number":["No.62062029, No.61762024"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,21]]},"DOI":"10.1145\/3569966.3570014","type":"proceedings-article","created":{"date-parts":[[2022,12,20]],"date-time":"2022-12-20T22:24:41Z","timestamp":1671575081000},"page":"164-169","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["A Few-Shot Relation Extraction Method for Enhancing Entity Attention"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8531-0125","authenticated-orcid":false,"given":"Fengying","family":"Li","sequence":"first","affiliation":[{"name":"School of Computer and Information Security, Guilin University of Electronic Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0192-331X","authenticated-orcid":false,"given":"Ye","family":"He","sequence":"additional","affiliation":[{"name":"School of Computer and Information Security, Guilin University of Electronic Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0540-4659","authenticated-orcid":false,"given":"Rongsheng","family":"Dong","sequence":"additional","affiliation":[{"name":"School of Computer and Information Security, Guilin University of Electronic Technology, 
China"}]}],"member":"320","published-online":{"date-parts":[[2022,12,20]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1810.10147"},{"key":"e_1_3_2_1_2_1","unstructured":"Jake Snell Kevin Swersky and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in neural information processing systems 4077\u20134087. Jake Snell Kevin Swersky and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in neural information processing systems 4077\u20134087."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1903.01306"},{"key":"e_1_3_2_1_4_1","volume-title":"Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In WWW. ACM \/ IW3C2.","author":"Chen Xiang","year":"2022","unstructured":"Xiang Chen , Ningyu Zhang , Xin Xie , Shumin Deng , Yunzhi Yao , Chuanqi Tan , Fei Huang , Luo Si , and Huajun Chen . 2022 . Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In WWW. ACM \/ IW3C2. Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In WWW. ACM \/ IW3C2."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1910.07124"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3412153"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2106.02401"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10791-018-9340-3"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"crossref","unstructured":"Yankai Lin Zhiyuan Liu and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. In ACL (1) 34\u201343. Yankai Lin Zhiyuan Liu and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. 
In ACL (1) 34\u201343.","DOI":"10.18653\/v1\/P17-1004"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.csl.2021.101265"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33016407"},{"key":"e_1_3_2_1_12_1","volume-title":"International Conference on Machine Learning, 7867\u20137876","author":"Qu Meng","year":"2020","unstructured":"Meng Qu , Tianyu Gao , Louis-Pascal Xhonneux , and Jian Tang . 2020 . Few-shot relation extraction via bayesian meta-learning on relation graphs . In International Conference on Machine Learning, 7867\u20137876 . Meng Qu, Tianyu Gao, Louis-Pascal Xhonneux, and Jian Tang. 2020. Few-shot relation extraction via bayesian meta-learning on relation graphs. In International Conference on Machine Learning, 7867\u20137876."},{"key":"e_1_3_2_1_13_1","volume-title":"Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. In AAAI, 9507\u20139514.","author":"Zeng Daojian","year":"2020","unstructured":"Daojian Zeng , Haoran Zhang , and Qianying Liu . 2020 . Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. In AAAI, 9507\u20139514. Daojian Zeng, Haoran Zhang, and Qianying Liu. 2020. Copymtl: Copy mechanism for joint extraction of entities and relations with multi-task learning. In AAAI, 9507\u20139514."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2020.12.037"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Mike Mintz Steven Bills Rion Snow and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL\/IJCNLP 1003\u20131011. Mike Mintz Steven Bills Rion Snow and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL\/IJCNLP 1003\u20131011.","DOI":"10.3115\/1690219.1690287"},{"key":"e_1_3_2_1_16_1","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 
2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 4171\u20134186."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1139"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1907"},{"key":"e_1_3_2_1_19_1","volume-title":"NeurIPS","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 5754\u20135764."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1706.03762"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1711.04043"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1707.03141"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.1906.06678"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340531.3412153"}],"event":{"name":"CSSE 2022: 2022 5th International Conference on Computer Science and Software Engineering","acronym":"CSSE 2022","location":"Guilin, China"},"container-title":["Proceedings of the 5th International Conference on Computer Science and Software 
Engineering"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3569966.3570014","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3569966.3570014","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:19Z","timestamp":1750182559000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3569966.3570014"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,21]]},"references-count":24,"alternative-id":["10.1145\/3569966.3570014","10.1145\/3569966"],"URL":"https:\/\/doi.org\/10.1145\/3569966.3570014","relation":{},"subject":[],"published":{"date-parts":[[2022,10,21]]},"assertion":[{"value":"2022-12-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}