{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T17:11:57Z","timestamp":1775149917741,"version":"3.50.1"},"reference-count":51,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,3,23]],"date-time":"2023-03-23T00:00:00Z","timestamp":1679529600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Natural Science Foundation of Shandong Province","award":["ZR2022MF328"],"award-info":[{"award-number":["ZR2022MF328"]}]},{"name":"Joint Funds for Smart Computing of the Natural Science Foundation of Shandong Province","award":["ZR2019LZH014"],"award-info":[{"award-number":["ZR2019LZH014"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61602284, 61602285"],"award-info":[{"award-number":["61602284, 61602285"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2023,3,31]]},"abstract":"<jats:p>Chinese Named Entity Recognition (NER) is an essential task in natural language processing, and its performance directly impacts downstream tasks. The main challenges in Chinese NER are the high dependence of named entities on context and the lack of word boundary information. Therefore, integrating relevant knowledge into the corresponding entity has become the primary task for Chinese NER. Neither the lattice LSTM model nor the WC-LSTM model made full use of contextual information. Additionally, the lattice LSTM model had a complex structure and did not exploit word information well. 
To address the preceding problems, we propose a Chinese NER method based on a deep neural network with multiple forms of embedding fusion. First, we use a convolutional neural network to combine the contextual information of the input sequence and apply a self-attention mechanism to integrate lexicon knowledge, compensating for the lack of word boundaries. The word feature, context feature, bigram feature, and bigram context feature are obtained for each character. Second, the four features are used to fuse information at the embedding layer; as a result, four different word embeddings are obtained through cascading. Finally, the fused feature information is fed into the encoding and decoding layers. Experiments on three datasets show that our model can effectively improve the performance of Chinese NER.<\/jats:p>","DOI":"10.1145\/3570328","type":"journal-article","created":{"date-parts":[[2023,2,10]],"date-time":"2023-02-10T12:02:29Z","timestamp":1676030549000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["Deep Neural Network with Embedding Fusion for Chinese Named Entity Recognition"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9460-0052","authenticated-orcid":false,"given":"Kaifang","family":"Long","sequence":"first","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8760-0495","authenticated-orcid":false,"given":"Han","family":"Zhao","sequence":"additional","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0835-6880","authenticated-orcid":false,"given":"Zengzhen","family":"Shao","sequence":"additional","affiliation":[{"name":"Shandong Women\u2019s University and Shandong Normal University, Jinan, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2992-869X","authenticated-orcid":false,"given":"Yang","family":"Cao","sequence":"additional","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5513-8929","authenticated-orcid":false,"given":"Yanfang","family":"Geng","sequence":"additional","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4938-5015","authenticated-orcid":false,"given":"Yintai","family":"Sun","sequence":"additional","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1211-1889","authenticated-orcid":false,"given":"Weizhi","family":"Xu","sequence":"additional","affiliation":[{"name":"Shandong Normal University and State Key Laboratory of High-End Server and Storage Technology, Jinan, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1769-1114","authenticated-orcid":false,"given":"Hui","family":"Yu","sequence":"additional","affiliation":[{"name":"Shandong Normal University, Jinan, China"}]}],"member":"320","published-online":{"date-parts":[[2023,3,23]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1017"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1141"},{"key":"e_1_3_1_4_2","article-title":"Lightner: A lightweight generative framework with prompt-guided attention for low-resource NER","author":"Chen Xiang","year":"2021","unstructured":"Xiang Chen, Ningyu Zhang, Lei Li, Xin Xie, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2021. Lightner: A lightweight generative framework with prompt-guided attention for low-resource NER. 
arXiv preprint arXiv:2109.00720 (2021).","journal-title":"arXiv preprint arXiv:2109.00720"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/P15-1017"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00104"},{"key":"e_1_3_1_7_2","first-page":"1835","volume-title":"Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021","author":"Cui Leyang","year":"2021","unstructured":"Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics, 1835\u20131845."},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-017-1100-y"},{"key":"e_1_3_1_9_2","doi-asserted-by":"crossref","first-page":"1462","DOI":"10.18653\/v1\/P19-1141","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics","author":"Ding Ruixue","year":"2019","unstructured":"Ruixue Ding, Pengjun Xie, Xiaoyan Zhang, Wei Lu, Linlin Li, and Luo Si. 2019. A neural multi-digraph model for Chinese NER with gazetteers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1462\u20131467."},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-50496-4_20"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/PROC.1973.9030"},{"key":"e_1_3_1_12_2","article-title":"TURNER: The uncertainty-based retrieval framework for Chinese NER","author":"Geng Zhichao","year":"2022","unstructured":"Zhichao Geng, Hang Yan, Zhangyue Yin, Chenxin An, and Xipeng Qiu. 2022. TURNER: The uncertainty-based retrieval framework for Chinese NER. 
arXiv preprint arXiv:2202.09022 (2022).","journal-title":"arXiv preprint arXiv:2202.09022"},{"key":"e_1_3_1_13_2","doi-asserted-by":"crossref","unstructured":"Tao Gui Ruotian Ma Qi Zhang Lujun Zhao Yu-Gang Jiang and Xuanjing Huang. 2019. CNN-based Chinese NER with lexicon rethinking. In Proceedings of the 28th International Joint Conference on Artificial Intelligence: Main Track . 4982\u20134988.","DOI":"10.24963\/ijcai.2019\/692"},{"key":"e_1_3_1_14_2","first-page":"1040","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP\u201919)","author":"Gui Tao","year":"2019","unstructured":"Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, and Xuan-Jing Huang. 2019. A lexicon-based graph neural network for Chinese NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP\u201919). 1040\u20131050."},{"key":"e_1_3_1_15_2","article-title":"F-score driven max margin neural network for named entity recognition in Chinese social media","author":"He Hangfeng","year":"2016","unstructured":"Hangfeng He and Xu Sun. 2016. F-score driven max margin neural network for named entity recognition in Chinese social media. arXiv preprint arXiv:1611.04234 (2016).","journal-title":"arXiv preprint arXiv:1611.04234"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v31i1.10977"},{"key":"e_1_3_1_17_2","article-title":"Bidirectional LSTM-CRF models for sequence tagging","author":"Huang Zhiheng","year":"2015","unstructured":"Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. 
arXiv preprint arXiv:1508.01991 (2015).","journal-title":"arXiv preprint arXiv:1508.01991"},{"key":"e_1_3_1_18_2","first-page":"6384","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920)","author":"Jia Chen","year":"2020","unstructured":"Chen Jia, Yuefeng Shi, Qinrong Yang, and Yue Zhang. 2020. Entity enhanced BERT pre-training for Chinese NER. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920). 6384\u20136396."},{"key":"e_1_3_1_19_2","unstructured":"Biao Hu Zhen Huang Minghao Hu Ziwen Zhang and Yong Dou. 2022. Adaptive threshold selective self-attention for Chinese NER. In Proceedings of the 29th International Conference on Computational Linguistics 1823\u20131833."},{"key":"e_1_3_1_20_2","article-title":"Incorporating uncertain segmentation information into Chinese NER for social media text","author":"Jia Shengbin","year":"2020","unstructured":"Shengbin Jia, Ling Ding, Xiaojun Chen, Shijia E, and Yang Xiang. 2020. Incorporating uncertain segmentation information into Chinese NER for social media text. arXiv preprint arXiv:2004.06384 (2020).","journal-title":"arXiv preprint arXiv:2004.06384"},{"key":"e_1_3_1_21_2","first-page":"4171","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201919)","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201919). 4171\u20134186."},{"key":"e_1_3_1_22_2","unstructured":"John Lafferty Andrew McCallum and Fernando C. N. Pereira. 2001. 
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML\u201901) ."},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1030"},{"key":"e_1_3_1_24_2","first-page":"108","volume-title":"Proceedings of the 5th SIGHAN Workshop on Chinese Language Processing","author":"Levow Gina-Anne","year":"2006","unstructured":"Gina-Anne Levow. 2006. The Third International Chinese Language Processing Bakeoff: Word segmentation and named entity recognition. In Proceedings of the 5th SIGHAN Workshop on Chinese Language Processing. 108\u2013117."},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2020.07.027"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.611"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbi.2020.103422"},{"key":"e_1_3_1_28_2","first-page":"3437","volume-title":"Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201921)","author":"Liu Kun","year":"2021","unstructured":"Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021. Noisy-labeled NER with confidence estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT\u201921). 3437\u20133445."},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.10.101"},{"key":"e_1_3_1_30_2","first-page":"2379","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)","author":"Liu Wei","year":"2019","unstructured":"Wei Liu, Tongge Xu, Qinghua Xu, Jiayu Song, and Yueran Zu. 2019. 
An encoding strategy based word-character LSTM for Chinese NER. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2379\u20132389."},{"key":"e_1_3_1_31_2","first-page":"855","volume-title":"Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC\u201916)","author":"Lu Yanan","year":"2016","unstructured":"Yanan Lu, Yue Zhang, and Donghong Ji. 2016. Multi-prototype Chinese character embedding. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC\u201916). 855\u2013859."},{"key":"e_1_3_1_32_2","article-title":"Simplify the usage of lexicon in Chinese NER","author":"Ma Ruotian","year":"2019","unstructured":"Ruotian Ma, Minlong Peng, Qi Zhang, and Xuanjing Huang. 2019. Simplify the usage of lexicon in Chinese NER. arXiv preprint arXiv:1908.05969 (2019).","journal-title":"arXiv preprint arXiv:1908.05969"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-1101"},{"key":"e_1_3_1_34_2","doi-asserted-by":"crossref","first-page":"3831","DOI":"10.18653\/v1\/2020.coling-main.340","volume-title":"Proceedings of the 28th International Conference on Computational Linguistics","author":"Mengge Xue","year":"2020","unstructured":"Xue Mengge, Bowen Yu, Tingwen Liu, Yue Zhang, Erli Meng, and Bin Wang. 2020. Porous lattice transformer encoder for Chinese NER. In Proceedings of the 28th International Conference on Computational Linguistics. 3831\u20133841."},{"key":"e_1_3_1_35_2","article-title":"End-to-end relation extraction using LSTMs on sequences and tree structures","author":"Miwa Makoto","year":"2016","unstructured":"Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. 
arXiv preprint arXiv:1601.00770 (2016).","journal-title":"arXiv preprint arXiv:1601.00770"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1064"},{"key":"e_1_3_1_37_2","article-title":"Improving named entity recognition for Chinese social media with word segmentation representation learning","author":"Peng Nanyun","year":"2016","unstructured":"Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for Chinese social media with word segmentation representation learning. arXiv preprint arXiv:1603.00786 (2016).","journal-title":"arXiv preprint arXiv:1603.00786"},{"key":"e_1_3_1_38_2","first-page":"1","volume-title":"Proceedings of the Joint Conference on EMNLP and CoNLL-Shared Task (CoNLL\u201912)","author":"Pradhan Sameer","year":"2012","unstructured":"Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Joint Conference on EMNLP and CoNLL-Shared Task (CoNLL\u201912). 1\u201340."},{"key":"e_1_3_1_39_2","doi-asserted-by":"crossref","first-page":"142","DOI":"10.3115\/1119176.1119195","volume-title":"Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003","author":"Sang Erik Tjong Kim","year":"2003","unstructured":"Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003. 
142\u2013147."},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.5120\/72-166"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D17-1283"},{"key":"e_1_3_1_42_2","first-page":"3830","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP\u201919)","author":"Sui Dianbo","year":"2019","unstructured":"Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2019. Leverage lexical knowledge for Chinese named entity recognition via collaborative graph network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP\u201919). 3830\u20133840."},{"key":"e_1_3_1_43_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313743"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-021-02660-4"},{"key":"e_1_3_1_46_2","first-page":"5808","volume-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)","author":"Yan Hang","year":"2021","unstructured":"Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 5808\u20135822."},{"key":"e_1_3_1_47_2","first-page":"140","volume-title":"Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics","author":"Yang Jie","year":"2016","unstructured":"Jie Yang, Zhiyang Teng, Meishan Zhang, and Yue Zhang. 2016. Combining discrete and neural features for sequence labeling. In Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics. 140\u2013154."},{"key":"e_1_3_1_48_2","first-page":"74","article-title":"NCRF++: An open-source neural sequence labeling toolkit","author":"Yang Jie","year":"2018","unstructured":"Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. In Proceedings of ACL 2018: System Demonstration.74.","journal-title":"Proceedings of ACL 2018: System Demonstration."},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1144"},{"key":"e_1_3_1_50_2","first-page":"14515","volume-title":"Proceedings of the 35th AAAI Conference on Artificial Intelligence","author":"Zhao Shan","year":"2021","unstructured":"Shan Zhao, Minghao Hu, Zhiping Cai, Haiwen Chen, and Fang Liu. 2021. Dynamic modeling cross-and self-lattice attention network for Chinese NER. In Proceedings of the 35th AAAI Conference on Artificial Intelligence. 14515\u201314523."},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.3115\/1073083.1073163"},{"key":"e_1_3_1_52_2","first-page":"3384","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)","author":"Zhu Yuying","year":"2019","unstructured":"Yuying Zhu and Guoxin Wang. 2019. 
CAN-NER: Convolutional attention network for Chinese named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 3384\u20133393."}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570328","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3570328","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:38Z","timestamp":1750182578000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570328"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,23]]},"references-count":51,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,3,31]]}},"alternative-id":["10.1145\/3570328"],"URL":"https:\/\/doi.org\/10.1145\/3570328","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"value":"2375-4699","type":"print"},{"value":"2375-4702","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,23]]},"assertion":[{"value":"2021-12-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-10-24","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-03-23","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}