{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,25]],"date-time":"2025-11-25T20:45:29Z","timestamp":1764103529628,"version":"build-2065373602"},"reference-count":46,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2024,7,14]],"date-time":"2024-07-14T00:00:00Z","timestamp":1720915200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"China National Petroleum Corporation (CNPC) Science and Technology Program","award":["2021DQ06","22KJB110009","2022D01F67"],"award-info":[{"award-number":["2021DQ06","22KJB110009","2022D01F67"]}]},{"name":"The Basic Science (Natural Science) Research Projects of Universities in Jiangsu Province","award":["2021DQ06","22KJB110009","2022D01F67"],"award-info":[{"award-number":["2021DQ06","22KJB110009","2022D01F67"]}]},{"name":"Xinjiang Uygur Autonomous Region Natural Science Foundation","award":["2021DQ06","22KJB110009","2022D01F67"],"award-info":[{"award-number":["2021DQ06","22KJB110009","2022D01F67"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Joint entity-relation extraction is a fundamental task in the construction of large-scale knowledge graphs. This task relies not only on the semantics of the text span but also on its intricate connections, including classification and structural details that most previous models overlook. In this paper, we propose the incorporation of this information into the learning process. Specifically, we design a novel two-dimensional word-pair tagging method to define the task of entity and relation extraction. This allows type markers to focus on text tokens, gathering information for their corresponding spans. Additionally, we introduce a multi-level attention neural network to enhance its capacity to perceive structure-aware features. 
Our experiments show that our approach can overcome the limitations of earlier tagging methods and yield more accurate results. We evaluate our model using three different datasets: SciERC, ADE, and CoNLL04. Our model demonstrates competitive performance compared to the state-of-the-art, surpassing other approaches across the majority of evaluated metrics.<\/jats:p>","DOI":"10.3390\/info15070407","type":"journal-article","created":{"date-parts":[[2024,7,15]],"date-time":"2024-07-15T14:15:49Z","timestamp":1721052949000},"page":"407","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Multi-Level Attention with 2D Table-Filling for Joint Entity-Relation Extraction"],"prefix":"10.3390","volume":"15","author":[{"given":"Zhenyu","family":"Zhang","sequence":"first","affiliation":[{"name":"Computer and Artificial Intelligence, Alibaba Cloud Big Data College, Changzhou University, Changzhou 213164, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4621-8166","authenticated-orcid":false,"given":"Lin","family":"Shi","sequence":"additional","affiliation":[{"name":"Computer and Artificial Intelligence, Alibaba Cloud Big Data College, Changzhou University, Changzhou 213164, China"}]},{"given":"Yang","family":"Yuan","sequence":"additional","affiliation":[{"name":"Computer and Artificial Intelligence, Alibaba Cloud Big Data College, Changzhou University, Changzhou 213164, China"}]},{"given":"Huanyue","family":"Zhou","sequence":"additional","affiliation":[{"name":"Computer and Artificial Intelligence, Alibaba Cloud Big Data College, Changzhou University, Changzhou 213164, China"}]},{"given":"Shoukun","family":"Xu","sequence":"additional","affiliation":[{"name":"Computer and Artificial Intelligence, Alibaba Cloud Big Data College, Changzhou University, Changzhou 213164, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,7,14]]},"reference":[{"key":"ref_1","unstructured":"Chan, Y.S., and Roth, 
D. (2011, January 19\u201324). Exploiting syntactico-semantic structures for relation extraction. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011, HLT\u201911, Portland, OR, USA."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Gormley, M.R., Yu, M., and Dredze, M. (2015). Improved relation extraction with feature-rich compositional embedding models. arXiv.","DOI":"10.18653\/v1\/D15-1205"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhong, Z., and Chen, D. (2021, January 6\u201311). A Frustratingly Easy Approach for Entity and Relation Extraction. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online.","DOI":"10.18653\/v1\/2021.naacl-main.5"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ye, D., Lin, Y., Li, P., and Sun, M. (2022, January 22\u201327). Packed Levitated Marker for Entity and Relation Extraction. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland.","DOI":"10.18653\/v1\/2022.acl-long.337"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Li, Q., and Ji, H. (2014, January 22\u201327). Incremental joint extraction of entity mentions and relations. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, MD, USA.","DOI":"10.3115\/v1\/P14-1038"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, S., Zhang, Y., Che, W., and Liu, T. (2018, January 13\u201319). Joint extraction of entities and relations based on a novel graph scheme. 
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden.","DOI":"10.24963\/ijcai.2018\/620"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Verga, P., Strubell, E., and McCallum, A. (2018). Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv.","DOI":"10.18653\/v1\/N18-1080"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Wang, Y., Yu, B., Zhang, Y., Liu, T., and Sun, L. (2020). TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking. arXiv.","DOI":"10.18653\/v1\/2020.coling-main.138"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zheng, H., Wen, R., Chen, X., Yang, Y., Zhang, Y., Zhang, Z., Zhang, N., Qin, B., Xu, M., and Zheng, Y. (2021, January 1\u20136). PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual.","DOI":"10.18653\/v1\/2021.acl-long.486"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Sui, D., Zeng, X., Chen, Y., Liu, K., and Zhao, J. (2023). Joint entity and relation extraction with set prediction networks. IEEE Transactions on Neural Networks and Learning Systems, IEEE.","DOI":"10.1109\/TNNLS.2023.3264735"},{"key":"ref_11","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Wu, S., and He, Y. (2019, January 3\u20137). Enriching pre-trained language model with entity information for relation classification. 
Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China.","DOI":"10.1145\/3357384.3358119"},{"key":"ref_13","unstructured":"Eberts, M., and Ulges, A. (2020). Span-Based Joint Entity and Relation Extraction with Transformer Pre-Training. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Wang, Y., Sun, C., Wu, Y., Zhou, H., and Yan, J. (2021, January 1\u20136). UniRE: A Unified Label Space for Entity Relation Extraction. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual.","DOI":"10.18653\/v1\/2021.acl-long.19"},{"key":"ref_15","first-page":"11285","article-title":"OneRel: Joint Entity and Relation Extraction with One Module in One Step","volume":"36","author":"Shang","year":"2022","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Tang, W., Xu, B., Zhao, Y., Mao, Z., Liu, Y., Liao, Y., and Xie, H. (2022). UniRel: Unified Representation and Interaction for Joint Relational Triple Extraction. arXiv.","DOI":"10.18653\/v1\/2022.emnlp-main.477"},{"key":"ref_17","first-page":"13174","article-title":"STAGE: Span Tagging and Greedy Inference Scheme for Aspect Sentiment Triplet Extraction","volume":"37","author":"Liang","year":"2023","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Ren, F., Zhang, L., Yin, S., Zhao, X., Liu, S., Li, B., and Liu, Y. (2021). A novel global feature-oriented relational triple extraction model based on table filling. 
arXiv.","DOI":"10.18653\/v1\/2021.emnlp-main.208"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"187","DOI":"10.5715\/jnlp.29.187","article-title":"Named entity recognition and relation extraction using enhanced table filling by contextualized representations","volume":"29","author":"Ma","year":"2022","journal-title":"J. Nat. Lang. Process."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"110228","DOI":"10.1016\/j.knosys.2022.110228","article-title":"A Span-based Multi-Modal Attention Network for joint entity-relation extraction","volume":"262","author":"Wan","year":"2023","journal-title":"Knowl.-Based Syst."},{"key":"ref_21","unstructured":"Hoffmann, R., Zhang, C., Ling, X., Zettlemoyer, L., and Weld, D.S. (2011, January 19\u201324). Knowledge-based weak supervision for information extraction of overlapping relations. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zeng, X., Zeng, D., He, S., Liu, K., and Zhao, J. (2018, January 15\u201320). Extracting relational facts by an end-to-end neural model with copy mechanism. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia.","DOI":"10.18653\/v1\/P18-1047"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zheng, S., Wang, F., Bao, H., Hao, Y., Zhou, P., and Xu, B. (2017). Joint extraction of entities and relations based on a novel tagging scheme. arXiv.","DOI":"10.18653\/v1\/P17-1113"},{"key":"ref_24","unstructured":"Yu, B., Zhang, Z., Shu, X., Wang, Y., Liu, T., Wang, B., and Li, S. (2019). Joint extraction of entities and relations based on a novel decomposition strategy. arXiv."},{"key":"ref_25","unstructured":"Dixit, K., and Al-Onaizan, Y. (2019, July 28\u2013August 2). Span-level model for relation extraction. 
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Miwa, M., and Sasaki, Y. (2014, January 25\u201329). Modeling joint entity and relation extraction with table representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1200"},{"key":"ref_27","unstructured":"Tran, T., and Kavuluru, R. (2019). Neural metric learning for fast end-to-end relation extraction. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Ren, F., Zhang, L., Yin, S., Zhao, X., Liu, S., Li, B., and Liu, Y. (2021, January 7\u201311). A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Virtual.","DOI":"10.18653\/v1\/2021.emnlp-main.208"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Luan, Y., He, L., Ostendorf, M., and Hajishirzi, H. (2018, October 31\u2013November 4). Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1360"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"885","DOI":"10.1016\/j.jbi.2012.04.008","article-title":"Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports","volume":"45","author":"Gurulingappa","year":"2012","journal-title":"J. Biomed. Inform."},{"key":"ref_31","unstructured":"Roth, D., and Yih, W.t. (2004, January 6\u20137). A linear programming formulation for global inference in natural language tasks. 
Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, Boston, MA, USA."},{"key":"ref_32","unstructured":"Gupta, P., Sch\u00fctze, H., and Andrassy, B. (2016, January 11\u201316). Table filling multi-task recurrent neural network for joint entity and relation extraction. Proceedings of the COLING 2016, 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan."},{"key":"ref_33","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Wang, C., Liu, X., Chen, Z., Hong, H., Tang, J., and Song, D. (2022). DeepStruct: Pretraining of language models for structure prediction. arXiv.","DOI":"10.18653\/v1\/2022.findings-acl.67"},{"key":"ref_35","unstructured":"Crone, P. (2020). Deeper task-specificity improves joint entity and relation extraction. arXiv."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wadden, D., Wennberg, U., Luan, Y., and Hajishirzi, H. (2019). Entity, Relation, and Event Extraction with Contextualized Span Representations. arXiv.","DOI":"10.18653\/v1\/D19-1585"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Shen, Y., Ma, X., Tang, Y., and Lu, W. (2021, January 19\u201323). A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction. Proceedings of the Web Conference 2021, Ljubljana, Slovenia.","DOI":"10.1145\/3442381.3449895"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Yan, Z., Zhang, C., Fu, J., Zhang, Q., and Wei, Z. (2021). A partition filter network for joint entity and relation extraction. arXiv.","DOI":"10.18653\/v1\/2021.emnlp-main.17"},{"key":"ref_39","unstructured":"Santosh, T., Chakraborty, P., Dutta, S., Sanyal, D.K., and Das, P.P. (2021, January 27\u201330). Joint entity and relation extraction from scientific documents: Role of linguistic information and entity types. 
Proceedings of the 2nd Workshop on Extraction and Evaluation of Knowledge Entities from Scientific Documents (EEKE 2021), Online."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"60805","DOI":"10.1109\/ACCESS.2022.3180830","article-title":"Scideberta: Learning deberta for science technology documents and fine-tuning information extraction tasks","volume":"10","author":"Jeong","year":"2022","journal-title":"IEEE Access"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1016\/j.eswa.2018.07.032","article-title":"Joint entity recognition and relation extraction as a multi-head selection problem","volume":"114","author":"Giannis","year":"2018","journal-title":"Expert Syst. Appl."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Wang, J., and Lu, W. (2020, January 16\u201320). Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.133"},{"key":"ref_43","unstructured":"Cabot, P.L.H., and Navigli, R. (2021, January 16\u201320). REBEL: Relation extraction by end-to-end language generation. Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhao, S., Hu, M., Cai, Z., and Liu, F. (2021, January 7\u201315). Modeling dense cross-modal interactions for joint entity-relation extraction. Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, Online.","DOI":"10.24963\/ijcai.2020\/558"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Li, X., Yin, F., Sun, Z., Li, X., Yuan, A., Chai, D., Zhou, M., and Li, J. (2019). Entity-relation extraction as multi-turn question answering. 
arXiv.","DOI":"10.18653\/v1\/P19-1129"},{"key":"ref_46","first-page":"10965","article-title":"Unified Named Entity Recognition as Word-Word Relation Classification","volume":"36","author":"Li","year":"2022","journal-title":"Proc. AAAI Conf. Artif. Intell."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/7\/407\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:16:36Z","timestamp":1760109396000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/7\/407"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,14]]},"references-count":46,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["info15070407"],"URL":"https:\/\/doi.org\/10.3390\/info15070407","relation":{},"ISSN":["2078-2489"],"issn-type":[{"type":"electronic","value":"2078-2489"}],"subject":[],"published":{"date-parts":[[2024,7,14]]}}}