{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,3]],"date-time":"2026-01-03T15:15:36Z","timestamp":1767453336896,"version":"3.37.3"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,2,8]],"date-time":"2024-02-08T00:00:00Z","timestamp":1707350400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,8]],"date-time":"2024-02-08T00:00:00Z","timestamp":1707350400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62166044"],"award-info":[{"award-number":["62166044"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100009110","name":"Natural Science Foundation of Xinjiang Province","doi-asserted-by":"publisher","award":["2021D01C079"],"award-info":[{"award-number":["2021D01C079"]}],"id":[{"id":"10.13039\/100009110","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["F2022203072"],"award-info":[{"award-number":["F2022203072"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Extractive approaches have been the mainstream paradigm for identifying overlapping entity\u2013relation extraction. 
However, they are limited by inherent methodological flaws that make it hard to deal with three issues: hierarchical dependent entity\u2013relations, implicit entity\u2013relations, and entity normalization. Recent advances have proposed an effective solution based on generative language models, which cast entity\u2013relation extraction as a sequence-to-sequence text generation task. Inspired by the observation that humans learn by getting to the bottom of things, we propose a novel framework, namely GenRE, Generative multi-turn question answering with contrastive learning for entity\u2013relation extraction. Specifically, a template-based question prompt generation method is first designed to produce questions answered in different turns. We then formulate entity\u2013relation extraction as a generative question answering task based on the general language model instead of span-based machine reading comprehension. Meanwhile, a contrastive learning strategy is introduced during fine-tuning to add negative samples and mitigate the exposure bias inherent in generative models. Our extensive experiments demonstrate that GenRE performs competitively on two public datasets and a custom dataset, highlighting its superiority in entity normalization and implicit entity\u2013relation extraction. 
(The code is available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/lovelyllwang\/GenRE\">https:\/\/github.com\/lovelyllwang\/GenRE<\/jats:ext-link>).<\/jats:p>","DOI":"10.1007\/s40747-023-01321-y","type":"journal-article","created":{"date-parts":[[2024,2,8]],"date-time":"2024-02-08T09:02:49Z","timestamp":1707382969000},"page":"3429-3443","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Genre: generative multi-turn question answering with contrastive learning for entity\u2013relation extraction"],"prefix":"10.1007","volume":"10","author":[{"given":"Lulu","family":"Wang","sequence":"first","affiliation":[]},{"given":"Kai","family":"Yu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1681-1089","authenticated-orcid":false,"given":"Aishan","family":"Wumaier","sequence":"additional","affiliation":[]},{"given":"Peng","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Tuergen","family":"Yibulayin","sequence":"additional","affiliation":[]},{"given":"Xi","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Jibing","family":"Gong","sequence":"additional","affiliation":[]},{"given":"Maihemuti","family":"Maimaiti","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,8]]},"reference":[{"key":"1321_CR1","doi-asserted-by":"publisher","unstructured":"Fader A, Zettlemoyer L, Etzioni O (2014) Open question answering over curated and extracted knowledge bases. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data, pp 1156\u20131165. 
https:\/\/doi.org\/10.1145\/2623330.2623677","DOI":"10.1145\/2623330.2623677"},{"key":"1321_CR2","doi-asserted-by":"publisher","first-page":"258","DOI":"10.4304\/jetwi.2.3.258-268","volume":"2","author":"V Gupta","year":"2010","unstructured":"Gupta V, Lehal GS (2010) A survey of text summarization extractive techniques. J Emerg Technol Web Intell 2:258\u2013268. https:\/\/doi.org\/10.4304\/jetwi.2.3.258-268","journal-title":"J Emerg Technol Web Intell"},{"key":"1321_CR3","unstructured":"Riedel S, Yao L, McCallum A, Marlin BM (2013) Relation extraction with matrix factorization and universal schemas. In: Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp 74\u201384. https:\/\/aclanthology.org\/N13-1008"},{"key":"1321_CR4","unstructured":"Chan YS, Roth D (2011) Exploiting syntactico-semantic structures for relation extraction. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp 551\u2013560. https:\/\/aclanthology.org\/P11-1056"},{"key":"1321_CR5","doi-asserted-by":"publisher","unstructured":"Lin Y, Shen S, Liu Z, Luan H, Sun M (2016) Neural relation extraction with selective attention over instances. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, vol 1, pp 2124\u20132133. https:\/\/doi.org\/10.18653\/v1\/p16-1200","DOI":"10.18653\/v1\/p16-1200"},{"key":"1321_CR6","doi-asserted-by":"publisher","unstructured":"Li Q, Ji H (2014) Incremental joint extraction of entity mentions and relations. In: Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics, pp 402\u2013412. 
https:\/\/doi.org\/10.3115\/v1\/p14-1038","DOI":"10.3115\/v1\/p14-1038"},{"key":"1321_CR7","doi-asserted-by":"publisher","unstructured":"Ren X, Wu Z, He W, Qu M, Voss CR, Ji H, Abdelzaher TF, Han J (2017) CoType: joint extraction of typed entities and relations with knowledge bases. In: Proceedings of the 26th International Conference on World Wide Web, pp 1015\u20131024. https:\/\/doi.org\/10.1145\/3038912.3052708","DOI":"10.1145\/3038912.3052708"},{"key":"1321_CR8","doi-asserted-by":"publisher","unstructured":"Zheng S, Wang F, Bao H, Hao Y, Zhou P, Xu B (2017) Joint extraction of entities and relations based on a novel tagging scheme. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp 1227\u20131236. https:\/\/doi.org\/10.18653\/v1\/P17-1113","DOI":"10.18653\/v1\/P17-1113"},{"key":"1321_CR9","doi-asserted-by":"publisher","unstructured":"Zeng X, Zeng D, He S, Liu K, Zhao J (2018) Extracting relational facts by an end-to-end neural model with copy mechanism. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp 506\u2013514. https:\/\/doi.org\/10.18653\/v1\/p18-1047","DOI":"10.18653\/v1\/p18-1047"},{"key":"1321_CR10","doi-asserted-by":"publisher","unstructured":"Wei Z, Su J, Wang Y, Tian Y, Chang Y (2020) A novel cascade binary tagging framework for relational triple extraction. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp 1476\u20131488. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.136","DOI":"10.18653\/v1\/2020.acl-main.136"},{"key":"1321_CR11","doi-asserted-by":"publisher","unstructured":"Zheng H, Wen R, Chen X, Yang Y, Zhang Y, Zhang Z, Zhang N, Qin B, Xu M, Zheng Y (2021) PRGC: potential relation and global correspondence based joint relational triple extraction. 
In: ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference, pp 6225-6235. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.486","DOI":"10.18653\/v1\/2021.acl-long.486"},{"key":"1321_CR12","doi-asserted-by":"publisher","unstructured":"Ren F, Zhang L, Zhao X, Yin S, Liu S, Li B (2022) A simple but effective bidirectional framework for relational triple extraction. In: WSDM 2022 - Proceedings of the 15th ACM International Conference on Web Search and Data Mining, pp 824\u2013832. https:\/\/doi.org\/10.1145\/3488560.3498409","DOI":"10.1145\/3488560.3498409"},{"key":"1321_CR13","doi-asserted-by":"publisher","unstructured":"Dixit K., Al-Onaizan Y (2020) Span-level model for relation extraction. In: ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pp 5308\u20135314. https:\/\/doi.org\/10.18653\/v1\/p19-1525","DOI":"10.18653\/v1\/p19-1525"},{"key":"1321_CR14","doi-asserted-by":"publisher","unstructured":"Zhong Z, Chen D (2021) A frustratingly easy approach for entity and relation extraction. In: NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, pp 50\u201361. https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.5","DOI":"10.18653\/v1\/2021.naacl-main.5"},{"key":"1321_CR15","doi-asserted-by":"publisher","unstructured":"Wang J, Lu W (2020) Two are better than one: Joint entity and relation extraction with table-sequence encoders. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp 1706\u20131721. 
https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.133","DOI":"10.18653\/v1\/2020.emnlp-main.133"},{"key":"1321_CR16","doi-asserted-by":"publisher","unstructured":"Ren F, Zhang L, Yin S, Zhao X, Liu S, Li B, Liu Y (2021) A novel global feature-oriented relational triple extraction model based on table filling. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp 2646\u20132656. https:\/\/doi.org\/10.18653\/v1\/2021.emnlp-main.208","DOI":"10.18653\/v1\/2021.emnlp-main.208"},{"key":"1321_CR17","doi-asserted-by":"publisher","unstructured":"Li X, Yin F, Sun Z, Li X, Yuan A, Chai D, Zhou M, Li J (2019) Entity-relation extraction as multi-turn question answering. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp 1340\u20131350. https:\/\/doi.org\/10.18653\/v1\/p19-1129","DOI":"10.18653\/v1\/p19-1129"},{"key":"1321_CR18","doi-asserted-by":"publisher","unstructured":"Du Z, Qian Y, Liu X, Ding M, Qiu J, Yang Z, Tang J (2022) GLM : general language model pretraining with autoregressive blank infilling. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pp 320\u2013335. https:\/\/doi.org\/10.18653\/v1\/2022.acl-long.26","DOI":"10.18653\/v1\/2022.acl-long.26"},{"key":"1321_CR19","doi-asserted-by":"publisher","unstructured":"Chen D, Manning CD (2014) A fast and accurate dependency parser using neural networks. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) pp 740\u2013750. https:\/\/doi.org\/10.3115\/v1\/d14-1082","DOI":"10.3115\/v1\/d14-1082"},{"key":"1321_CR20","doi-asserted-by":"publisher","unstructured":"Zeng D, Liu K, Chen Y, Zhao J (2015) Distant supervision for relation extraction via piecewise convolutional neural networks. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp 1753\u20131762. 
https:\/\/doi.org\/10.18653\/v1\/d15-1203","DOI":"10.18653\/v1\/d15-1203"},{"key":"1321_CR21","doi-asserted-by":"publisher","unstructured":"Zhang M, Zhang Y, Fu G (2017) End-to-end neural relation extraction with global optimization. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp 1730\u20131740. https:\/\/doi.org\/10.18653\/v1\/d17-1182","DOI":"10.18653\/v1\/d17-1182"},{"key":"1321_CR22","doi-asserted-by":"publisher","unstructured":"Sun C, Gong Y, Wu Y, Gong M, Jiang D, Lan M, Sun S, Duan N (2019) Joint type inference on entities and relations via graph convolutional networks. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp 1361\u20131370. https:\/\/doi.org\/10.18653\/v1\/p19-1131","DOI":"10.18653\/v1\/p19-1131"},{"key":"1321_CR23","doi-asserted-by":"publisher","unstructured":"Yuan Y, Zhou X, Pan S, Zhu Q, Song Z, Guo L (2020) A relation-specific attention network for joint entity and relation extraction. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), pp 4054\u20134060. https:\/\/doi.org\/10.24963\/ijcai.2020\/561","DOI":"10.24963\/ijcai.2020\/561"},{"key":"1321_CR24","doi-asserted-by":"publisher","unstructured":"Wang Y, Yu B, Zhang Y, Liu T, Zhu H, Sun L (2020) Tplinker: single-stage joint extraction of entities and relations through token pair Linking. In: Proceedings of the 28th International Conference on Computational Linguistics, pp 1572\u20131582. https:\/\/doi.org\/10.18653\/v1\/2020.coling-main.138","DOI":"10.18653\/v1\/2020.coling-main.138"},{"key":"1321_CR25","doi-asserted-by":"publisher","unstructured":"Wang Y, Sun C, Wu Y, Zhou H, Li L, Yan J (2021) UNIRE: a unified label space for entity relation extraction. 
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp 220\u2013231. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.19","DOI":"10.18653\/v1\/2021.acl-long.19"},{"key":"1321_CR26","doi-asserted-by":"publisher","unstructured":"Shang YM, Huang H, Mao XL (2022) OneRel: joint entity and relation extraction with one module in one step. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, pp 11285\u201311293. https:\/\/doi.org\/10.1609\/aaai.v36i10.21379","DOI":"10.1609\/aaai.v36i10.21379"},{"key":"1321_CR27","doi-asserted-by":"publisher","unstructured":"Eberts M, Ulges A (2020) Span-based joint entity and relation extraction with transformer pre-training, vol 325. https:\/\/doi.org\/10.3233\/FAIA200321","DOI":"10.3233\/FAIA200321"},{"key":"1321_CR28","doi-asserted-by":"publisher","unstructured":"Zeng D, Zhang H, Liu Q (2020) Copymtl: copy mechanism for joint extraction of entities and relations with multi-task learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 9507\u20139514. https:\/\/doi.org\/10.1609\/aaai.v34i05.6495","DOI":"10.1609\/aaai.v34i05.6495"},{"key":"1321_CR29","doi-asserted-by":"publisher","unstructured":"Nayak T, Ng HT (2020) Effective modeling of encoder\u2013decoder architecture for joint entity and relation extraction. In: AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, pp 8528\u20138535. https:\/\/doi.org\/10.1609\/aaai.v34i05.6374","DOI":"10.1609\/aaai.v34i05.6374"},{"key":"1321_CR30","doi-asserted-by":"publisher","unstructured":"Fu TJ, Li PH, Ma WY (2019) Graphrel: modeling text as relational graphs for joint entity and relation extraction. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp 1409\u20131418. 
https:\/\/doi.org\/10.18653\/v1\/p19-1136","DOI":"10.18653\/v1\/p19-1136"},{"key":"1321_CR31","doi-asserted-by":"publisher","first-page":"106888","DOI":"10.1016\/j.knosys.2021.106888","volume":"219","author":"K Zhao","year":"2021","unstructured":"Zhao K, Xu H, Cheng Y, Li X, Gao K (2021) Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction. Knowl-Based Syst 219:106888. https:\/\/doi.org\/10.1016\/j.knosys.2021.106888","journal-title":"Knowl-Based Syst"},{"key":"1321_CR32","doi-asserted-by":"publisher","unstructured":"Takanobu R, Zhang T, Liu J, Huang M (2019) A hierarchical framework for relation extraction with reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 7072\u20137079. https:\/\/doi.org\/10.1609\/aaai.v33i01.33017072","DOI":"10.1609\/aaai.v33i01.33017072"},{"key":"1321_CR33","doi-asserted-by":"publisher","unstructured":"Zeng X, He S, Zeng D, Liu K, Zhao J (2019) Learning the extraction order of multiple relational facts in a sentence with reinforcement learning, pp 367\u2013377. https:\/\/doi.org\/10.18653\/v1\/d19-1035","DOI":"10.18653\/v1\/d19-1035"},{"key":"1321_CR34","doi-asserted-by":"publisher","unstructured":"Levy O, Seo M, Choi E, Zettlemoyer L (2017) Zero-shot relation extraction via reading comprehension. In: Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pp 333\u2013342. https:\/\/doi.org\/10.18653\/v1\/k17-1034","DOI":"10.18653\/v1\/k17-1034"},{"key":"1321_CR35","doi-asserted-by":"publisher","unstructured":"Li X, Feng J, Meng Y, Han Q, Wu F, Li J (2020) A unified mrc framework for named entity recognition. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp 5849\u20135859. 
https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.519","DOI":"10.18653\/v1\/2020.acl-main.519"},{"key":"1321_CR36","doi-asserted-by":"publisher","unstructured":"Du X, Cardie C (2020) Event extraction by answering (almost) natural questions. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp 671\u2013683. https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.49","DOI":"10.18653\/v1\/2020.emnlp-main.49"},{"key":"1321_CR37","doi-asserted-by":"publisher","unstructured":"Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2020) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp 7871\u20137880. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.703","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"1321_CR38","doi-asserted-by":"publisher","first-page":"1","DOI":"10.48550\/arxiv.1910.10683","volume":"21","author":"C Raffel","year":"2020","unstructured":"Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21:1\u201367. https:\/\/doi.org\/10.48550\/arxiv.1910.10683","journal-title":"J Mach Learn Res"},{"key":"1321_CR39","doi-asserted-by":"publisher","unstructured":"Paolini G, Athiwaratkun B, Krone J, Ma J, Achille A, Anubhai R, Nogueira C, Xiang B, Soatto S (2021) Structured prediction as translation between augmented natural languages. In: International Conference on Learning Representations (ICLR- 2021). https:\/\/doi.org\/10.48550\/arXiv.2101.05779","DOI":"10.48550\/arXiv.2101.05779"},{"key":"1321_CR40","doi-asserted-by":"publisher","unstructured":"Zhang N, Ye H, Deng S, Tan C, Chen M, Huang S, Huang F, Chen H (2021) Contrastive information extraction with generative transformer. 
IEEE\/ACM Trans Audio Speech Lang Process 29:3077\u20133088. https:\/\/doi.org\/10.1109\/TASLP.2021.3110126","DOI":"10.1109\/TASLP.2021.3110126"},{"key":"1321_CR41","doi-asserted-by":"publisher","unstructured":"Cabot PLH, Navigli R (2021) REBEL: relation extraction by end-to-end language generation. In: Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021, pp 2370\u20132381. https:\/\/doi.org\/10.18653\/v1\/2021.findings-emnlp.204","DOI":"10.18653\/v1\/2021.findings-emnlp.204"},{"key":"1321_CR42","doi-asserted-by":"publisher","unstructured":"Lu Y, Liu Q, Dai D, Xiao X, Lin H, Han X, Sun L, Wu H (2022) Unified structure generation for universal information extraction. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pp 5755\u20135772. https:\/\/doi.org\/10.18653\/v1\/2022.acl-long.395. https:\/\/arxiv.org\/abs\/2203.12277","DOI":"10.18653\/v1\/2022.acl-long.395"},{"key":"1321_CR43","doi-asserted-by":"publisher","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, \u0141ukasz Kaiser Polosukhin, I (2017) Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp 6000\u20136010. https:\/\/doi.org\/10.48550\/arXiv.1706.03762","DOI":"10.48550\/arXiv.1706.03762"},{"key":"1321_CR44","doi-asserted-by":"publisher","unstructured":"Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp 4171\u20134186. https:\/\/doi.org\/10.18653\/V1\/N19-1423","DOI":"10.18653\/V1\/N19-1423"},{"key":"1321_CR45","doi-asserted-by":"publisher","unstructured":"Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV (2019) Xlnet: generalized autoregressive pretraining for language understanding. 
In: Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp 5753\u20135763. https:\/\/doi.org\/10.48550\/arXiv.1906.08237","DOI":"10.48550\/arXiv.1906.08237"},{"key":"1321_CR46","unstructured":"Radford A, Narasimhan K, Salimans T, Sutskever I, et al (2018) Improving language understanding by generative pre-training. OpenAI. https:\/\/api.semanticscholar.org\/CorpusID:49313245"},{"key":"1321_CR47","unstructured":"Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877\u20131901. https:\/\/doi.org\/10.48550\/arXiv.2005.14165"},{"key":"1321_CR48","doi-asserted-by":"publisher","unstructured":"Misra I, van\u00a0der Maaten L (2020) Self-supervised learning of pretext-invariant representations. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 6707\u20136717. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00674","DOI":"10.1109\/CVPR42600.2020.00674"},{"key":"1321_CR49","doi-asserted-by":"publisher","unstructured":"Fang H, Wang S, Zhou M, Ding J, Xie P (2020) Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766. https:\/\/doi.org\/10.48550\/arXiv.2005.12766","DOI":"10.48550\/arXiv.2005.12766"},{"key":"1321_CR50","doi-asserted-by":"publisher","unstructured":"Gao T, Yao X, Chen D (2021) SimCSE: simple contrastive learning of sentence embeddings. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp 6894\u20136910. 
https:\/\/doi.org\/10.18653\/v1\/2021.emnlp-main.552","DOI":"10.18653\/v1\/2021.emnlp-main.552"},{"key":"1321_CR51","doi-asserted-by":"publisher","unstructured":"Yang Z, Cheng Y, Liu Y, Sun M (2019) Reducing word omission errors in neural machine translation: a contrastive learning approach. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistic, pp 6191\u20136196. https:\/\/doi.org\/10.18653\/v1\/p19-1623","DOI":"10.18653\/v1\/p19-1623"},{"key":"1321_CR52","unstructured":"Lee S, Lee DB, Hwang SJ (2021) Contrastive learning with adversarial perturbations for conditional text generation. In: International Conference on Learning Representations (ICLR 2021), https:\/\/arxiv.org\/abs\/2012.07280"},{"key":"1321_CR53","doi-asserted-by":"publisher","unstructured":"Riedel S, Yao L, McCallum A (2010) Modeling relations and their mentions without labeled text. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp 148\u2013163. https:\/\/doi.org\/10.1007\/978-3-642-15939-8_10","DOI":"10.1007\/978-3-642-15939-8_10"},{"key":"1321_CR54","doi-asserted-by":"publisher","unstructured":"Gardent C, Shimorina A, Narayan S, Perez-Beltrachini L (2017) Creating training corpora for nlg micro-planning. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp 179\u2013188. https:\/\/doi.org\/10.18653\/v1\/P17-1017","DOI":"10.18653\/v1\/P17-1017"},{"key":"1321_CR55","doi-asserted-by":"publisher","unstructured":"Huang H, Shang YM, Sun X, Wei W, Mao X (2022) Three birds, one stone: a novel translation based framework for joint entity and relation extraction. Knowl-Based Syst 236:107677. 
https:\/\/doi.org\/10.1016\/j.knosys.2021.107677","DOI":"10.1016\/j.knosys.2021.107677"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01321-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01321-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01321-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,16]],"date-time":"2024-05-16T18:12:52Z","timestamp":1715883172000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01321-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,8]]},"references-count":55,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1321"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01321-y","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2024,2,8]]},"assertion":[{"value":"1 March 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 December 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this 
article.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}