{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,5,30]],"date-time":"2025-05-30T19:27:18Z","timestamp":1748633238016,"version":"3.38.0"},"reference-count":29,"publisher":"China Science Publishing & Media Ltd.","issue":"3","license":[{"start":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T00:00:00Z","timestamp":1677024000000},"content-version":"vor","delay-in-days":52,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["direct.mit.edu"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,8,1]]},"abstract":"<jats:title>ABSTRACT<\/jats:title>\n               <jats:p>To alleviate the under-utilization of features in sentence-level relation extraction, which limits the performance of the pre-trained language model and leaves the feature vectors underexploited, a sentence-level relation extraction method based on added prompt information and feature reuse is proposed. First, in addition to the pair of nominals and the sentence itself, a piece of prompt information is added, so that the overall feature information consists of sentence information, entity pair information, and prompt information; the features are then encoded by the pre-trained language model RoBERTa. Moreover, a BiGRU is introduced into the neural network built on the pre-trained language model, and the feature information is passed through this network to form several sets of feature vectors. These feature vectors are then reused in different combinations to form multiple outputs, which are aggregated by ensemble-learning soft voting to perform relation classification. Finally, the sum of the cross-entropy, KL-divergence, and negative log-likelihood losses is used as the overall loss function. 
In comparison experiments, the model based on added prompt information and feature reuse achieved higher results on the SemEval-2010 Task 8 relation dataset.<\/jats:p>","DOI":"10.1162\/dint_a_00192","type":"journal-article","created":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T01:04:36Z","timestamp":1677027876000},"page":"824-840","update-policy":"https:\/\/doi.org\/10.1162\/mitpressjournals.corrections.policy","source":"Crossref","is-referenced-by-count":5,"title":["Relation Extraction Based on Prompt Information and Feature Reuse"],"prefix":"10.3724","volume":"5","author":[{"given":"Ping","family":"Feng","sequence":"first","affiliation":[{"name":"Jilin University, Changchun Jilin 130012, China"},{"name":"Changchun University, Changchun Jilin 130022, China"},{"name":"Jilin Provincial Key Laboratory of Human Health State Identification and Function Enhancement, Changchun Jilin 130022, China"}]},{"given":"Xin","family":"Zhang","sequence":"additional","affiliation":[{"name":"Changchun University, Changchun Jilin 130022, China"}]},{"given":"Jian","family":"Zhao","sequence":"additional","affiliation":[{"name":"Changchun University, Changchun Jilin 130022, China"},{"name":"Jilin Provincial Key Laboratory of Human Health State Identification and Function Enhancement, Changchun Jilin 130022, China"}]},{"given":"Yingying","family":"Wang","sequence":"additional","affiliation":[{"name":"Changchun University, Changchun Jilin 130022, China"}]},{"given":"Biao","family":"Huang","sequence":"additional","affiliation":[{"name":"Changchun University, Changchun Jilin 130022, China"}]}],"member":"2026","published-online":{"date-parts":[[2023,8,1]]},"reference":[{"key":"2023091215375314900_ref1","first-page":"4171","article-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding","volume-title":"North American Chapter of the Association for Computational Linguistics 
(1)","author":"Devlin","year":"2019"},{"key":"2023091215375314900_ref2","article-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach","volume-title":"Computing Research Repository","author":"Liu","year":"2019"},{"volume-title":"Improving language understanding by generative pre-training","year":"2018","author":"Radford","key":"2023091215375314900_ref3"},{"key":"2023091215375314900_ref4","first-page":"2335","article-title":"Relation Classification via Convolutional Deep Neural Network","volume-title":"International Conference on Computational Linguistics","author":"Zeng","year":"2014"},{"key":"2023091215375314900_ref5","article-title":"Bidirectional Long Short-Term Memory Networks for Relation Classification","volume-title":"Pacific Asia Conference on Language, Information and Computation","author":"Zhang","year":"2015"},{"key":"2023091215375314900_ref6","first-page":"2205","article-title":"Graph Convolution over Pruned Dependency Trees Improves Relation Extraction","volume-title":"Conference on Empirical Methods in Natural Language Processing","author":"Zhang","year":"2018"},{"key":"2023091215375314900_ref7","first-page":"2526","article-title":"Attention-Based Convolutional Neural Network for Semantic Relation Extraction","volume-title":"International Conference on Computational Linguistics","author":"Shen","year":"2016"},{"key":"2023091215375314900_ref8","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/P16-2034","article-title":"Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification","volume-title":"Annual Meeting of the Association for Computational Linguistics (2)","author":"Zhou","year":"2016"},{"key":"2023091215375314900_ref9","first-page":"241","article-title":"Attention Guided Graph Convolutional Networks for Relation Extraction","volume-title":"Annual Meeting of the Association for Computational Linguistics 
(1)","author":"Guo","year":"2019"},{"issue":"6","key":"2023091215375314900_ref10","doi-asserted-by":"crossref","first-page":"785","DOI":"10.3390\/sym11060785","article-title":"Semantic Relation Classification via Bidirectional LSTM Networks with Entity-Aware Attention Using Latent Entity Typing","volume":"11","author":"Lee","year":"2019","journal-title":"Symmetry"},{"key":"2023091215375314900_ref11","first-page":"2227","article-title":"Deep Contextualized Word Representations","volume-title":"North American Chapter of the Association for Computational Linguistics","author":"Peters","year":"2018"},{"key":"2023091215375314900_ref12","article-title":"Improving Relation Extraction by Pre-trained Language Representations","volume-title":"Conference on Automated Knowledge Base Construction","author":"Alt","year":"2019"},{"key":"2023091215375314900_ref13","first-page":"5998","article-title":"Attention is All you Need","volume-title":"Conference on Neural Information Processing Systems","author":"Vaswani","year":"2017"},{"key":"2023091215375314900_ref14","first-page":"1371","article-title":"Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers","volume-title":"Annual Meeting of the Association for Computational Linguistics (1)","author":"Wang","year":"2019"},{"key":"2023091215375314900_ref15","first-page":"2361","article-title":"Enriching Pre-trained Language Model with Entity Information for Relation Classification","volume-title":"International Conference on Information and Knowledge Management","author":"Wu","year":"2019"},{"key":"2023091215375314900_ref16","first-page":"4458","article-title":"Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks","volume-title":"Annual Meeting of the Association for Computational Linguistics (1)","author":"Tian","year":"2021"},{"key":"2023091215375314900_ref17","first-page":"1574","article-title":"Enhancing Relation Extraction Using Syntactic Indicators and Sentential 
Contexts","volume-title":"IEEE International Conference on Tools with Artificial Intelligence","author":"Tao","year":"2019"},{"key":"2023091215375314900_ref18","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.aiopen.2022.11.003","article-title":"PTR: Prompt Tuning with Rules for Text Classification","volume":"3","author":"Han","year":"2022","journal-title":"AI Open"},{"key":"2023091215375314900_ref19","first-page":"43","article-title":"Knowledge Enhanced Contextual Word Representations","volume-title":"Conference on Empirical Methods in Natural Language Processing (1)","author":"Peters","year":"2019"},{"key":"2023091215375314900_ref20","first-page":"1405","article-title":"K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters","volume-title":"Annual Meeting of the Association for Computational Linguistics","author":"Wang","year":"2021"},{"key":"2023091215375314900_ref21","first-page":"2776","article-title":"Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation","volume-title":"Annual Meeting of the Association for Computational Linguistics (1)","author":"Qin","year":"2022"},{"key":"2023091215375314900_ref22","first-page":"757","article-title":"A Simple yet Effective Relation Information Guided Approach for Few-Shot Relation Extraction","volume-title":"Annual Meeting of the Association for Computational Linguistics","author":"Liu","year":"2022"},{"key":"2023091215375314900_ref23","first-page":"45","article-title":"RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction","volume-title":"Annual Meeting of the Association for Computational Linguistics","author":"Chia","year":"2022"},{"key":"2023091215375314900_ref24","first-page":"681","article-title":"Research on semi-supervised classification with an ensemble strategy","volume-title":"International Conference on Sensors, Mechatronics and 
Automation","author":"Han","year":"2016"},{"issue":"1","key":"2023091215375314900_ref25","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1093\/jamia\/ocz100","article-title":"Ensemble method-based extraction of medication and related information from clinical texts","volume":"27","author":"Kim","year":"2020","journal-title":"Journal of the American Medical Informatics Association"},{"key":"2023091215375314900_ref26","first-page":"4532","article-title":"Ensemble Neural Relation Extraction with Adaptive Boosting","volume-title":"International Joint Conference on Artificial Intelligence","author":"Yang","year":"2018"},{"issue":"1","key":"2023091215375314900_ref27","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1093\/jamia\/ocz101","article-title":"Adverse drug events and medication relation extraction in electronic health records with ensemble deep learning methods","volume":"27","author":"Christopoulou","year":"2020","journal-title":"Journal of the American Medical Informatics Association"},{"key":"2023091215375314900_ref28","first-page":"5569","article-title":"Reproducing Neural Ensemble Classifier for Semantic Relation Extraction in Scientific Papers","volume-title":"International Conference on Language Resources and Evaluation","author":"Rim","year":"2020"},{"key":"2023091215375314900_ref29","first-page":"10890","article-title":"R-Drop: Regularized Dropout for Neural Networks","volume-title":"Conference on Neural Information Processing Systems","author":"Liang","year":"2021"}],"container-title":["Data 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/direct.mit.edu\/dint\/article-pdf\/5\/3\/824\/2158216\/dint_a_00192.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/direct.mit.edu\/dint\/article-pdf\/5\/3\/824\/2158216\/dint_a_00192.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,14]],"date-time":"2025-03-14T07:43:15Z","timestamp":1741938195000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.sciengine.com\/doi\/10.1162\/dint_a_00192"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"references-count":29,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,8,1]]}},"URL":"https:\/\/doi.org\/10.1162\/dint_a_00192","relation":{},"ISSN":["2641-435X"],"issn-type":[{"type":"electronic","value":"2641-435X"}],"subject":[],"published-other":{"date-parts":[[2023]]},"published":{"date-parts":[[2023]]}}}