{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:34:04Z","timestamp":1760060044130,"version":"build-2065373602"},"reference-count":70,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2025,7,29]],"date-time":"2025-07-29T00:00:00Z","timestamp":1753747200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key R&amp;D Program of China","award":["2024ZD01NL00102"],"award-info":[{"award-number":["2024ZD01NL00102"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>End-to-end relation extraction (E2ERE) generally performs named entity recognition and relation extraction either simultaneously or sequentially. While numerous studies on E2ERE have centered on enhancing span representations to improve model performance, challenges remain due to the gaps between subtasks (named entity recognition and relation extraction) and the modeling discrepancies between entities and relations. In this paper, we propose a novel Label Annotation Interaction-based representation enhancement method for E2ERE, which establishes a two-phase semantic interaction to augment representations. Specifically, we first feed label annotations, which are easy to annotate manually, into a language model, and conduct the first round of interaction among three types of tokens with a partial attention mechanism. Then, we construct a latent multi-view graph to capture various possible links between label and entity (pair) nodes, facilitating the second round of interaction between entities and labels. 
A series of comparative experiments with current methods built on various transformer-based architectures shows that LAI-Net matches the current SOTA on the NER task and achieves significant improvements over existing SOTA models on the RE task.<\/jats:p>","DOI":"10.3390\/bdcc9080198","type":"journal-article","created":{"date-parts":[[2025,7,29]],"date-time":"2025-07-29T16:16:10Z","timestamp":1753805770000},"page":"198","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["LAI: Label Annotation Interaction-Based Representation Enhancement for End to End Relation Extraction"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-4246-1048","authenticated-orcid":false,"given":"Rongxuan","family":"Lai","sequence":"first","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"},{"name":"State Key Laboratory of Complex & Critical Software Environment, Wuhan 430001, China"}]},{"given":"Wenhui","family":"Wu","sequence":"additional","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"}]},{"given":"Li","family":"Zou","sequence":"additional","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"},{"name":"State Key Laboratory of Complex & Critical Software Environment, Wuhan 430001, China"}]},{"given":"Feifan","family":"Liao","sequence":"additional","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"},{"name":"State Key Laboratory of Complex & Critical Software Environment, Wuhan 430001, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4839-1886","authenticated-orcid":false,"given":"Zhenyi","family":"Wang","sequence":"additional","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"},{"name":"State Key Laboratory of Complex & 
Critical Software Environment, Wuhan 430001, China"}]},{"given":"Haibo","family":"Mi","sequence":"additional","affiliation":[{"name":"Information Support Force Engineering University, Wuhan 430001, China"},{"name":"State Key Laboratory of Complex & Critical Software Environment, Wuhan 430001, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,7,29]]},"reference":[{"key":"ref_1","unstructured":"Goldberg, Y., Kozareva, Z., and Zhang, Y. (2022). UniRel: Unified Representation and Interaction for Joint Relational Triple Extraction. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_2","unstructured":"Korhonen, A., Traum, D., and M\u00e0rquez, L. (2019). Joint Type Inference on Entities and Relations via Graph Convolutional Networks. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_3","unstructured":"Moens, M.F., Huang, X., Specia, L., and Yih, S.W.T. (2021). Enhanced Language Representation with Label Knowledge for Span Extraction. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_4","unstructured":"Ji, B., Li, S., Xu, H., Yu, J., Ma, J., Liu, H., and Yang, J. (2022). Span-based joint entity and relation extraction augmented with sequence tagging mechanism. arXiv."},{"key":"ref_5","unstructured":"Raedt, L.D. Relational Triple Extraction: One Step is Enough. Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization; Main Track."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Dai, Z., Wang, X., Ni, P., Li, Y., Li, G., and Bai, X. (2019, January 19\u201321). Named entity recognition using BERT BiLSTM CRF for Chinese electronic health records. 
Proceedings of the 2019 12th International Congress on Image and Signal Processing, Biomedical Engineering and Informatics (CISP-BMEI), Suzhou, China.","DOI":"10.1109\/CISP-BMEI48845.2019.8965823"},{"key":"ref_7","unstructured":"Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y. (2021). A Frustratingly Easy Approach for Entity and Relation Extraction. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics."},{"key":"ref_8","unstructured":"Muresan, S., Nakov, P., and Villavicencio, A. (2022). Packed Levitated Marker for Entity and Relation Extraction. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics."},{"key":"ref_9","unstructured":"Korhonen, A., Traum, D., and M\u00e0rquez, L. (2019). ERNIE: Enhanced Language Representation with Informative Entities. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Sun, Y., Wang, S., Li, Y., Feng, S., Tian, H., Wu, H., and Wang, H. (2020, January 7\u201312). Ernie 2.0: A continual pre-training framework for language understanding. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i05.6428"},{"key":"ref_11","unstructured":"Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. (2019, January 9\u201315). Parameter-Efficient Transfer Learning for NLP. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_12","unstructured":"Zong, C., Xia, F., Li, W., and Navigli, R. (2021). 
Lexicon Enhanced Chinese Sequence Labeling Using BERT Adapter. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics."},{"key":"ref_13","unstructured":"Gurevych, I., and Miyao, Y. (2018). Chinese NER Using Lattice LSTM. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics."},{"key":"ref_14","unstructured":"Inui, K., Jiang, J., Ng, V., and Wan, X. (2019). Leverage Lexical Knowledge for Chinese Named Entity Recognition via Collaborative Graph Network. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics."},{"key":"ref_15","unstructured":"Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J. (2020). FLAT: Chinese NER Using Flat-Lattice Transformer. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_16","unstructured":"Burstein, J., Doran, C., and Solorio, T. (2019). GraphIE: A Graph-Based Framework for Information Extraction. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Daigavane, A., Ravindran, B., and Aggarwal, G. (2021, September 02). Understanding Convolutions on Graphs, Understanding the Building Blocks and Design Choices of Graph Neural Networks. 
Available online: https:\/\/distill.pub\/2021\/understanding-gnns\/.","DOI":"10.23915\/distill.00032"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1561\/2200000096","article-title":"Graph neural networks for natural language processing: A survey","volume":"16","author":"Wu","year":"2023","journal-title":"Found. Trends\u00ae Mach. Learn."},{"key":"ref_19","unstructured":"Lapata, M., Blunsom, P., and Koller, A. (2017). Distant Supervision for Relation Extraction beyond the Sentence Boundary. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Association for Computational Linguistics."},{"key":"ref_20","unstructured":"Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J. (2020). Bipartite Flat-Graph Network for Nested Named Entity Recognition. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sun, K., Zhang, R., Mao, Y., Mensah, S., and Liu, X. (2020, January 7\u201312). Relation extraction with convolutional network over learnable syntax-transport graph. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i05.6423"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Xue, F., Sun, A., Zhang, H., and Chng, E.S. (2021, January 2\u20139). Gdpnet: Refining latent multi-view graph for relation extraction. Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event.","DOI":"10.1609\/aaai.v35i16.17670"},{"key":"ref_23","unstructured":"Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (2018). Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. 
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_24","unstructured":"Korhonen, A., Traum, D., and M\u00e0rquez, L. (2019). GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_25","unstructured":"Korhonen, A., Traum, D., and M\u00e0rquez, L. (2019). Attention Guided Graph Convolutional Networks for Relation Extraction. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_26","unstructured":"Korhonen, A., Traum, D., and M\u00e0rquez, L. (2019). Inter-sentence Relation Extraction with Document-level Graph Convolutional Neural Network. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_27","unstructured":"Inui, K., Jiang, J., Ng, V., and Wan, X. (2019). Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics."},{"key":"ref_28","unstructured":"Webber, B., Cohn, T., He, Y., and Liu, Y. (2020). Double Graph Based Reasoning for Document-level Relation Extraction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics."},{"key":"ref_29","unstructured":"Burstein, J., Doran, C., and Solorio, T. (2019). A general framework for information extraction using dynamic span graphs. 
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"McDonald, R., Pereira, F., Kulick, S., Winters, S., Jin, Y., and White, P. (2005, January 25\u201330). Simple algorithms for complex relation extraction with applications to biomedical IE. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL\u201905), Stroudsburg, PA, USA.","DOI":"10.3115\/1219840.1219901"},{"key":"ref_31","unstructured":"Iria, J. (2005). T-rex: A flexible relation extraction framework. Proceedings of the 8th Annual Colloquium for the UK Special Interest Group for Computational Linguistics (CLUK\u201905), Manchester, UK."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Culotta, A., and Sorensen, J. (2004, January 21\u201326). Dependency tree kernels for relation extraction. Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain.","DOI":"10.3115\/1218955.1219009"},{"key":"ref_33","unstructured":"Jiang, J., and Zhai, C. (2007, January 22\u201327). A systematic exploration of the feature space for relation extraction. Proceedings of the Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, Rochester, NY, USA. Proceedings of the Main Conference."},{"key":"ref_34","unstructured":"Zeng, D., Liu, K., Lai, S., Zhou, G., and Zhao, J. (2014, January 23\u201329). Relation classification via convolutional deep neural network. 
Proceedings of the COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1016\/j.eswa.2018.07.032","article-title":"Joint entity recognition and relation extraction as a multi-head selection problem","volume":"114","author":"Bekoulis","year":"2018","journal-title":"Expert Syst. Appl."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wei, Z., Su, J., Wang, Y., Tian, Y., and Chang, Y. (2019). A novel cascade binary tagging framework for relational triple extraction. arXiv.","DOI":"10.18653\/v1\/2020.acl-main.136"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Zheng, S., Wang, F., Bao, H., Hao, Y., Zhou, P., and Xu, B. (2017). Joint extraction of entities and relations based on a novel tagging scheme. arXiv.","DOI":"10.18653\/v1\/P17-1113"},{"key":"ref_38","unstructured":"Scott, D., Bel, N., and Zong, C. (2020). Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations. Proceedings of the 28th International Conference on Computational Linguistics, Association for Computational Linguistics."},{"key":"ref_39","unstructured":"Zong, C., Xia, F., Li, W., and Navigli, R. (2021). UniRE: A Unified Label Space for Entity Relation Extraction. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics."},{"key":"ref_40","unstructured":"Wang, Y., Sun, C., Wu, Y., Li, L., Yan, J., and Zhou, H. (2023). HIORE: Leveraging High-order Interactions for Unified Entity Relation Extraction. arXiv."},{"key":"ref_41","unstructured":"Bouamor, H., Pino, J., and Bali, K. (2023). Joint Entity and Relation Extraction with Span Pruning and Hypergraph Neural Networks. 
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_42","unstructured":"Bouamor, H., Pino, J., and Bali, K. (2023). Mirror: A Universal Framework for Various Information Extraction Tasks. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_43","unstructured":"Bouamor, H., Pino, J., and Bali, K. (2023). Set Learning for Generative Information Extraction. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"120441","DOI":"10.1016\/j.eswa.2023.120441","article-title":"Boundary regression model for joint entity and relation extraction","volume":"229","author":"Tang","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Zaratiana, U., Tomeh, N., Holat, P., and Charnois, T. (2024, January 20\u201327). An Autoregressive Text-to-Graph Framework for Joint Entity and Relation Extraction. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v38i17.29919"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"103313","DOI":"10.1109\/ACCESS.2024.3420877","article-title":"A Decoupling and Aggregating Framework for Joint Extraction of Entities and Relations","volume":"12","author":"Wang","year":"2024","journal-title":"IEEE Access"},{"key":"ref_47","unstructured":"Wang, X., Zhou, W., Zu, C., Xia, H., Chen, T., Zhang, Y., Zheng, R., Ye, J., Zhang, Q., and Gui, T. (2023). InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction. 
arXiv."},{"key":"ref_48","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford","year":"2019","journal-title":"OpenAI Blog"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Han, J., Zhao, S., Cheng, B., Ma, S., and Lu, W. (2022). Generative prompt tuning for relation classification. arXiv.","DOI":"10.18653\/v1\/2022.findings-emnlp.231"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Chen, X., Zhang, N., Xie, X., Deng, S., Yao, Y., Tan, C., Huang, F., Si, L., and Chen, H. (2022, January 25\u201329). Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. Proceedings of the ACM Web Conference 2022, Online.","DOI":"10.1145\/3485447.3511998"},{"key":"ref_51","unstructured":"Wei, X., Cui, X., Cheng, N., Wang, X., Zhang, X., Huang, S., Xie, P., Xu, J., Chen, Y., and Zhang, M. (2023). Zero-shot information extraction via chatting with chatgpt. arXiv."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"107214","DOI":"10.1016\/j.neunet.2025.107214","article-title":"A prompt tuning method based on relation graphs for few-shot relation extraction","volume":"185","author":"Zhang","year":"2025","journal-title":"Neural Netw."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s40747-024-01692-w","article-title":"PURE: A Prompt-based framework with dynamic Update mechanism for educational Relation Extraction","volume":"11","author":"Cui","year":"2025","journal-title":"Complex Intell. Syst."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Han, P., Liang, G., and Wang, Y. (2025). A Zero-Shot Framework for Low-Resource Relation Extraction via Distant Supervision and Large Language Models. Electronics, 14.","DOI":"10.3390\/electronics14030593"},{"key":"ref_55","unstructured":"Duan, J., Lu, F., and Liu, J. (2025). CPTuning: Contrastive Prompt Tuning for Generative Relation Extraction. 
arXiv."},{"key":"ref_56","unstructured":"Toutanova, K., and Wu, H. (2014). Incremental Joint Extraction of Entity Mentions and Relations. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Miwa, M., and Bansal, M. (2016, January 7\u201312). End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany.","DOI":"10.18653\/v1\/P16-1105"},{"key":"ref_58","unstructured":"Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (2018). Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"885","DOI":"10.1016\/j.jbi.2012.04.008","article-title":"Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports","volume":"45","author":"Gurulingappa","year":"2012","journal-title":"J. Biomed. Inform."},{"key":"ref_60","unstructured":"Muresan, S., Nakov, P., and Villavicencio, A. (2022). Label Semantics for Few Shot Named Entity Recognition. Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, Association for Computational Linguistics."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"He, S., Liu, K., Ji, G., and Zhao, J. (2015, January 18\u201323). Learning to Represent Knowledge Graphs with Gaussian Embedding. 
Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM \u201915), New York, NY, USA.","DOI":"10.1145\/2806416.2806502"},{"key":"ref_62","unstructured":"Goldberg, Y., Kozareva, Z., and Zhang, Y. (2022). Autoregressive Structured Prediction with Language Models. Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, Association for Computational Linguistics."},{"key":"ref_63","unstructured":"Wang, S., Sun, X., Li, X., Ouyang, R., Wu, F., Zhang, T., Li, J., and Wang, G. (2023). GPT-NER: Named Entity Recognition via Large Language Models. arXiv."},{"key":"ref_64","unstructured":"Bouamor, H., Pino, J., and Bali, K. (2023). GPT-RE: In-context Learning for Relation Extraction using Large Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics."},{"key":"ref_65","unstructured":"Li, B., Fang, G., Yang, Y., Wang, Q., Ye, W., Zhao, W., and Zhang, S. (2023). Evaluating ChatGPT\u2019s Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness. arXiv."},{"key":"ref_66","unstructured":"Inui, K., Jiang, J., Ng, V., and Wan, X. (2019). Entity, Relation, and Event Extraction with Contextualized Span Representations. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics."},{"key":"ref_67","unstructured":"Eberts, M., and Ulges, A. (2019). Span-based Joint Entity and Relation Extraction with Transformer Pre-training. arXiv."},{"key":"ref_68","unstructured":"Webber, B., Cohn, T., He, Y., and Liu, Y. (2020). Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders. 
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics."},{"key":"ref_69","unstructured":"Burstein, J., Doran, C., and Solorio, T. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics."},{"key":"ref_70","unstructured":"Inui, K., Jiang, J., Ng, V., and Wan, X. (2019). SciBERT: A Pretrained Language Model for Scientific Text. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics."}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/8\/198\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:18:13Z","timestamp":1760033893000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/8\/198"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,29]]},"references-count":70,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["bdcc9080198"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9080198","relation":{},"ISSN":["2504-2289"],"issn-type":[{"type":"electronic","value":"2504-2289"}],"subject":[],"published":{"date-parts":[[2025,7,29]]}}}