{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,27]],"date-time":"2025-03-27T05:55:12Z","timestamp":1743054912487,"version":"3.40.3"},"publisher-location":"Cham","reference-count":36,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031333736"},{"type":"electronic","value":"9783031333743"}],
"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,27]],"date-time":"2023-05-27T00:00:00Z","timestamp":1685145600000},"content-version":"vor","delay-in-days":146,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],
"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},
"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Recently, many studies have incorporated external knowledge into character-level feature-based models to improve the performance of Chinese relation extraction. However, these methods tend to ignore the internal information of Chinese characters and cannot filter out the noisy information in external knowledge. To address these issues, we propose a mixture-of-view-experts framework (MoVE) to dynamically learn multi-view features for Chinese relation extraction. With both the internal and external knowledge of Chinese characters, our framework can better capture the semantic information of Chinese characters. To demonstrate the effectiveness of the proposed framework, we conduct extensive experiments on three real-world datasets in distinct domains. Experimental results show the consistent and significant superiority and robustness of our proposed framework. Our code and dataset will be released at: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/gitee.com\/tmg-nudt\/multi-view-of-expert-for-chinese-relation-extraction\">https:\/\/gitee.com\/tmg-nudt\/multi-view-of-expert-for-chinese-relation-extraction<\/jats:ext-link><\/jats:p>",
"DOI":"10.1007\/978-3-031-33374-3_32","type":"book-chapter","created":{"date-parts":[[2023,5,26]],"date-time":"2023-05-26T10:02:30Z","timestamp":1685095350000},"page":"405-417","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Dynamic Multi-View Fusion Mechanism for\u00a0Chinese Relation Extraction"],"prefix":"10.1007",
"author":[{"given":"Jing","family":"Yang","sequence":"first","affiliation":[]},{"given":"Bin","family":"Ji","sequence":"additional","affiliation":[]},{"given":"Shasha","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jun","family":"Ma","sequence":"additional","affiliation":[]},{"given":"Long","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Jie","family":"Yu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,27]]},
"reference":[{"key":"32_CR1","doi-asserted-by":"crossref","unstructured":"Yu, J., Jian, X., Xin, H., Song, Y.: Joint embeddings of Chinese words, characters, and fine-grained subcharacter components. In: Empirical Methods in Natural Language Processing (2017)","DOI":"10.18653\/v1\/D17-1027"},
{"key":"32_CR2","unstructured":"Meng, Y., et al.: Glyce: Glyph-vectors for Chinese character representations. In: Neural Information Processing Systems (2019)"},
{"key":"32_CR3","doi-asserted-by":"crossref","unstructured":"Ma, R., Peng, M., Zhang, Q., Wei, Z., Huang, X.: Simplify the usage of lexicon in Chinese NER. In: Meeting of the Association for Computational Linguistics (2020)","DOI":"10.18653\/v1\/2020.acl-main.528"},
{"key":"32_CR4","doi-asserted-by":"crossref","unstructured":"Shi, J., Sun, M., Sun, Z., Li, M., Gu, Y., Zhang, W.: Multi-level semantic fusion network for Chinese medical named entity recognition (2022)","DOI":"10.1016\/j.jbi.2022.104144"},
{"key":"32_CR5","doi-asserted-by":"crossref","unstructured":"Wu, S., Song, X., Feng, Z.H.: MECT: multi-metadata embedding based cross-transformer for Chinese named entity recognition. In: Meeting of the Association for Computational Linguistics (2021)","DOI":"10.18653\/v1\/2021.acl-long.121"},
{"key":"32_CR6","unstructured":"Shazeer, N., et al.: Outrageously large neural networks: the sparsely-gated mixture-of-experts layer. In: International Conference on Learning Representations (2017)"},
{"key":"32_CR7","doi-asserted-by":"crossref","unstructured":"Ma, J., Zhao, Z., Yi, X., Chen, J., Hong, L., Chi, E.H.: Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In: Knowledge Discovery and Data Mining (2018)","DOI":"10.1145\/3219819.3220007"},
{"key":"32_CR8","doi-asserted-by":"crossref","unstructured":"Liu, Z., Winata, G.I., Fung, P.: Zero-resource cross-domain named entity recognition. In: Meeting of the Association for Computational Linguistics (2020)","DOI":"10.18653\/v1\/2020.repl4nlp-1.1"},
{"key":"32_CR9","unstructured":"Zeng, D., Liu, K., Lai, S., Zhou, G., Zhao, J.: Relation classification via convolutional deep neural network. In: International Conference on Computational Linguistics (2014)"},
{"key":"32_CR10","unstructured":"Zhang, D., Wang, D.: Relation classification via recurrent neural network. arXiv:1508.01006 (2015)"},
{"key":"32_CR11","doi-asserted-by":"crossref","unstructured":"Wu, S., He, Y.: Enriching pre-trained language model with entity information for relation classification. In: Conference on Information and Knowledge Management (2019)","DOI":"10.1145\/3357384.3358119"},
{"key":"32_CR12","doi-asserted-by":"crossref","unstructured":"Li, Z., Ding, N., Liu, Z., Zheng, H.T., Shen, Y.: Chinese relation extraction with multi-grained information and external linguistic knowledge. In: Meeting of the Association for Computational Linguistics (2019)","DOI":"10.18653\/v1\/P19-1430"},
{"key":"32_CR13","unstructured":"Xu, J., Wen, J., Sun, X., Su, Q.: A discourse-level named entity recognition and relation extraction dataset for Chinese literature text. arXiv:1711.07010 (2017)"},
{"key":"32_CR14","doi-asserted-by":"crossref","unstructured":"Zhang, Q.Q., Chen, M.D., Liu, L.Z.: An effective gated recurrent unit network model for Chinese relation extraction. DEStech Transactions on Computer Science and Engineering (2018)","DOI":"10.12783\/dtcse\/wcne2017\/19833"},
{"key":"32_CR15","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Yang, J.: Chinese NER using lattice LSTM. In: Meeting of the Association for Computational Linguistics (2018)","DOI":"10.18653\/v1\/P18-1144"},
{"key":"32_CR16","doi-asserted-by":"crossref","unstructured":"Zhou, X., Zhang, T., Cheng, C., Song, S.: Dynamic multichannel fusion mechanism based on a graph attention network and BERT for aspect-based sentiment classification (2022)","DOI":"10.1007\/s10489-022-03851-3"},
{"key":"32_CR17","doi-asserted-by":"crossref","unstructured":"Xu, H.D., et al.: Read, listen, and see: Leveraging multimodal information helps Chinese spell checking. In: Meeting of the Association for Computational Linguistics (2021)","DOI":"10.18653\/v1\/2021.findings-acl.64"},
{"key":"32_CR18","doi-asserted-by":"crossref","unstructured":"Wang, B., et al.: DyLex: Incorporating dynamic lexicons into BERT for sequence labeling. In: Empirical Methods in Natural Language Processing (2021)","DOI":"10.18653\/v1\/2021.emnlp-main.211"},
{"key":"32_CR19","unstructured":"Dong, Z., Dong, Q.: HowNet - a hybrid language and knowledge resource. In: International Conference on Natural Language Processing (2003)"},
{"key":"32_CR20","doi-asserted-by":"crossref","unstructured":"Song, Y., Shi, S., Li, J.: Joint learning embeddings for Chinese words and their components via ladder structured networks. In: International Joint Conference on Artificial Intelligence (2018)","DOI":"10.24963\/ijcai.2018\/608"},
{"key":"32_CR21","unstructured":"Cao, S., Lu, W., Zhou, J., Li, X.: cw2vec: Learning Chinese word embeddings with stroke n-gram information. In: National Conference on Artificial Intelligence (2018)"},
{"key":"32_CR22","doi-asserted-by":"crossref","unstructured":"Xu, C., Wang, F., Han, J., Li, C.: Exploiting multiple embeddings for Chinese named entity recognition. In: Conference on Information and Knowledge Management (2019)","DOI":"10.1145\/3357384.3358117"},
{"key":"32_CR23","unstructured":"Qi, F., Yang, C., Liu, Z., Dong, Q., Sun, M., Dong, Z.: OpenHowNet: an open sememe-based lexical knowledge base. arXiv:1901.09957 (2019)"},
{"key":"32_CR24","doi-asserted-by":"crossref","unstructured":"Wang, X., Xiong, Y., Niu, H., Yue, J., Zhu, Y., Yu, P.S.: Improving Chinese character representation with formation graph attention network. In: Conference on Information and Knowledge Management (2021)","DOI":"10.1145\/3459637.3482265"},
{"key":"32_CR25","unstructured":"Vaswani, A., et al.: Attention is all you need. In: Neural Information Processing Systems (2017)"},
{"key":"32_CR26","doi-asserted-by":"crossref","unstructured":"Sun, Z., et al.: ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information. In: Meeting of the Association for Computational Linguistics (2021)","DOI":"10.18653\/v1\/2021.acl-long.161"},
{"key":"32_CR27","doi-asserted-by":"crossref","unstructured":"Chen, Q., Li, F.L., Xu, G., Yan, M., Zhang, J., Zhang, Y.: DictBERT: dictionary description knowledge enhanced language model pre-training via contrastive learning. In: International Joint Conference on Artificial Intelligence (2022)","DOI":"10.24963\/ijcai.2022\/567"},
{"key":"32_CR28","doi-asserted-by":"crossref","unstructured":"Lai, Y., Liu, Y., Feng, Y., Huang, S., Zhao, D.: Lattice-BERT: leveraging multi-granularity representations in Chinese pre-trained language models. In: North American Chapter of the Association for Computational Linguistics (2021)","DOI":"10.18653\/v1\/2021.naacl-main.137"},
{"key":"32_CR29","unstructured":"Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2019)"},
{"key":"32_CR30","doi-asserted-by":"crossref","unstructured":"Guan, T., Zan, H., Zhou, X., Xu, H., Zhang, K.: CMeIE: construction and evaluation of Chinese medical information extraction dataset. In: International Conference on Natural Language Processing (2020)","DOI":"10.1007\/978-3-030-60450-9_22"},
{"key":"32_CR31","doi-asserted-by":"crossref","unstructured":"Zhou, P., et al.: Attention-based bidirectional long short-term memory networks for relation classification. In: Meeting of the Association for Computational Linguistics (2016)","DOI":"10.18653\/v1\/P16-2034"},
{"key":"32_CR32","doi-asserted-by":"crossref","unstructured":"Lin, Y., Shen, S., Liu, Z., Luan, H., Sun, M.: Neural relation extraction with selective attention over instances. In: Meeting of the Association for Computational Linguistics (2016)","DOI":"10.18653\/v1\/P16-1200"},
{"key":"32_CR33","doi-asserted-by":"crossref","unstructured":"Lee, J., Seo, S., Choi, Y.S.: Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing. Symmetry (2019)","DOI":"10.3390\/sym11060785"},
{"key":"32_CR34","doi-asserted-by":"crossref","unstructured":"Zhang, N., et al.: DeepKE: a deep learning based knowledge extraction toolkit for knowledge base population (2022)","DOI":"10.18653\/v1\/2022.emnlp-demos.10"},
{"key":"32_CR35","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2021.3124365","volume-title":"Pre-training with whole word masking for Chinese BERT","author":"Y Cui","year":"2021","unstructured":"Cui, Y., et al.: Pre-training with whole word masking for Chinese BERT. IEEE\/ACM Transactions on Audio, Speech, and Language Processing (2021)"},
{"key":"32_CR36","unstructured":"Loshchilov, I., Hutter, F.: Fixing weight decay regularization in Adam (2018)"}],
"container-title":["Lecture Notes in Computer Science","Advances in Knowledge Discovery and Data Mining"],"original-title":[],"language":"en",
"link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-33374-3_32","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2024,3,13]],"date-time":"2024-03-13T20:06:02Z","timestamp":1710360362000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-33374-3_32"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031333736","9783031333743"],"references-count":36,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-33374-3_32","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"type":"print","value":"0302-9743"},{"type":"electronic","value":"1611-3349"}],"subject":[],"published":{"date-parts":[[2023]]},
"assertion":[{"value":"27 May 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},
{"value":"PAKDD","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"Pacific-Asia Conference on Knowledge Discovery and Data Mining","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"Osaka","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"Japan","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"2023","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"25 May 2023","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"28 May 2023","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"27","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"pakdd2023","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"https:\/\/pakdd2023.org\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},
{"value":"Double-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"Microsoft CMT","order":2,"name":"conference_management_system","label":"Conference Management System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"813","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"143","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"0","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"18% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of Submissions Sent for Review * 100\" and then rounded to a whole number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"3.5","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"10","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},
{"value":"Yes","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}