{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:09:25Z","timestamp":1750219765083,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":41,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,21]],"date-time":"2022-10-21T00:00:00Z","timestamp":1666310400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key R&D Program of China"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,10,21]]},"DOI":"10.1145\/3565387.3565405","type":"proceedings-article","created":{"date-parts":[[2022,12,14]],"date-time":"2022-12-14T01:47:11Z","timestamp":1670982431000},"page":"1-7","source":"Crossref","is-referenced-by-count":0,"title":["Chinese Machine Reading Comprehension Based on Language Model Containing Knowledge"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9348-8591","authenticated-orcid":false,"given":"Wentong","family":"Chen","sequence":"first","affiliation":[{"name":"Beijing University of Posts and Telecommunications, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3607-4904","authenticated-orcid":false,"given":"Chunxiao","family":"Fan","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9005-5678","authenticated-orcid":false,"given":"Yuexin","family":"Wu","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7265-4455","authenticated-orcid":false,"given":"Yitong","family":"Wang","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, 
China"}]}],"member":"320","published-online":{"date-parts":[[2022,12,13]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)","author":"Ostermann Simon","year":"2018","unstructured":"Simon Ostermann , Ashutosh Modi , Michael Roth , Stefan Thater , and Manfred Pinkal . Mcscript : A novel dataset for assessing machine comprehension using script knowledge . In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018) , 2018 . Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, and Manfred Pinkal. Mcscript: A novel dataset for assessing machine comprehension using script knowledge. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), 2018."},{"key":"e_1_3_2_1_2_1","volume-title":"Applied Sciences","author":"Liu S","year":"2019","unstructured":"Liu S , Zhang X , Zhang S , Neural Machine Reading Comprehension : Methods and Trends[J] . Applied Sciences , 2019 . Liu S , Zhang X , Zhang S , Neural Machine Reading Comprehension: Methods and Trends[J]. Applied Sciences, 2019."},{"key":"e_1_3_2_1_3_1","first-page":"834","volume-title":"Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing","author":"Long Teng","year":"2017","unstructured":"Teng Long , Emmanuel Bengio , Ryan Lowe , Jackie Chi Kit Cheung , and Doina Precup . World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions . In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 825\u2013 834 , 2017 . Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, and Doina Precup. World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 825\u2013834, 2017."},{"key":"e_1_3_2_1_4_1","first-page":"1446","volume-title":"Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Yang Bishan","year":"2017","unstructured":"Bishan Yang and Tom Mitchell . Leveraging knowledge bases in lstms for improving machine reading . In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1436\u2013 1446 , 2017 . Bishan Yang and Tom Mitchell. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436\u20131446, 2017."},{"key":"e_1_3_2_1_5_1","first-page":"832","volume-title":"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Mihaylov Todor","year":"2018","unstructured":"Todor Mihaylov and Anette Frank . Knowledgeable reader : Enhancing cloze-style reading comprehension with external commonsense knowledge . In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 821\u2013 832 , 2018 . Todor Mihaylov and Anette Frank. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821\u2013832, 2018."},{"key":"e_1_3_2_1_6_1","volume-title":"Knowledge based machine reading comprehension. arXiv preprint arXiv:1809.04267","author":"Sun Yibo","year":"2018","unstructured":"Yibo Sun , Daya Guo , Duyu Tang , Nan Duan , Zhao Yan , Xiaocheng Feng , and Bing Qin . Knowledge based machine reading comprehension. arXiv preprint arXiv:1809.04267 , 2018 . 
Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, and Bing Qin. Knowledge based machine reading comprehension. arXiv preprint arXiv:1809.04267, 2018."},{"key":"e_1_3_2_1_7_1","first-page":"2272","volume-title":"Proceedings of the 57th Conference of the Association for Computational Linguistics","author":"Wang Chao","year":"2019","unstructured":"Chao Wang and Hui Jiang . Explicit utilization of general knowledge in machine reading comprehension . In Proceedings of the 57th Conference of the Association for Computational Linguistics , pages 2263\u2013 2272 , 2019 . Chao Wang and Hui Jiang. Explicit utilization of general knowledge in machine reading comprehension. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 2263\u20132272, 2019."},{"key":"e_1_3_2_1_8_1","first-page":"2795","volume-title":"NIPS","author":"Bordes A.","year":"2013","unstructured":"A. Bordes , N. Usunier , A. Garcia-Duran , J. Weston , and O. Yakhnenko , \u201c Translating embeddings for modeling multi-relational data ,\u201d in NIPS , 2013 , pp. 2787\u2013 2795 . A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, \u201cTranslating embeddings for modeling multi-relational data,\u201d in NIPS, 2013, pp. 2787\u20132795."},{"key":"e_1_3_2_1_9_1","first-page":"2181","article-title":"Learning entity and relation embeddings for knowledge graph completion","author":"Lin Y.","year":"2015","unstructured":"Y. Lin , Z. Liu , M. Sun , Y. Liu , and X. Zhu , \u201c Learning entity and relation embeddings for knowledge graph completion ,\u201d in AAAI , 2015 , pp. 2181 \u2013 2187 . Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, \u201cLearning entity and relation embeddings for knowledge graph completion,\u201d in AAAI, 2015, pp. 2181\u20132187.","journal-title":"AAAI"},{"key":"e_1_3_2_1_10_1","first-page":"1112","article-title":"Knowledge graph embedding by translating on hyperplanes","author":"Wang Z.","year":"2014","unstructured":"Z. 
Wang , J. Zhang , J. Feng , and Z. Chen , \u201c Knowledge graph embedding by translating on hyperplanes ,\u201d in AAAI , 2014 , pp. 1112 \u2013 1119 . Z. Wang, J. Zhang, J. Feng, and Z. Chen, \u201cKnowledge graph embedding by translating on hyperplanes,\u201d in AAAI, 2014, pp. 1112\u20131119.","journal-title":"AAAI"},{"key":"e_1_3_2_1_11_1","volume-title":"CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System[C]\/\/ International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems","author":"Bo X","year":"2017","unstructured":"Bo X , Yong X , Liang J , CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System[C]\/\/ International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems . Springer , Cham , 2017 . Bo X , Yong X , Liang J , CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System[C]\/\/ International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, Cham, 2017."},{"key":"e_1_3_2_1_12_1","unstructured":"https:\/\/github.com\/ownthink\/KnowledgeGraphData  https:\/\/github.com\/ownthink\/KnowledgeGraphData"},{"key":"e_1_3_2_1_13_1","volume-title":"Proceedings of the 2019. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . Bert : Pre-training of deep bidirectional transformers for language understanding . In Proceedings of the 2019. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Volume 1 (Long and Short Papers), pages 4171\u20134186 , 2019 . Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019. 
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171\u20134186, 2019."},{"key":"e_1_3_2_1_14_1","first-page":"1701","volume-title":"Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1","author":"Hermann Karl Moritz","unstructured":"Karl Moritz Hermann , Tomáš Kočiský, Edward Grefenstette , Lasse Espeholt , Will Kay , Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend . In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1 , pages 1693\u2013 1701 . MIT Press, 2015. Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, pages 1693\u20131701. MIT Press, 2015."},{"key":"e_1_3_2_1_15_1","first-page":"2392","volume-title":"Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing","author":"Rajpurkar Pranav","year":"2016","unstructured":"Pranav Rajpurkar , Jian Zhang , Konstantin Lopyrev , and Percy Liang . Squad : 100,000+ questions for machine comprehension of text . In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing , pages 2383\u2013 2392 , 2016 . Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383\u20132392, 2016."},{"key":"e_1_3_2_1_16_1","volume-title":"Ms marco: A human generated machine reading comprehension dataset. 
arXiv preprint arXiv:1611.09268","author":"Nguyen Tri","year":"2016","unstructured":"Tri Nguyen , Mir Rosenberg , Xia Song , Jianfeng Gao , Saurabh Tiwary , Rangan Majumder , and Li Deng . Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 , 2016 . Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016."},{"key":"e_1_3_2_1_17_1","unstructured":"Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu. A Span-Extraction Dataset for Chinese Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) 2019.  Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu. A Span-Extraction Dataset for Chinese Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) 2019."},{"key":"e_1_3_2_1_18_1","first-page":"918","volume-title":"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","volume":"1","author":"Kadlec Rudolf","year":"2016","unstructured":"Rudolf Kadlec , Martin Schmid , Ondřej Bajgar , and Jan Kleindienst . Text understanding with the attention sum reader network . In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , volume 1 , pages 908\u2013 918 , 2016 . Rudolf Kadlec, Martin Schmid, Ondřej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 908\u2013918, 2016."},{"key":"e_1_3_2_1_19_1","volume-title":"Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603","author":"Seo Minjoon","year":"2016","unstructured":"Minjoon Seo , Aniruddha Kembhavi , Ali Farhadi , and Hannaneh Hajishirzi . Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 , 2016 . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603, 2016."},{"key":"e_1_3_2_1_20_1","volume-title":"Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541","author":"Yu Adams Wei","year":"2018","unstructured":"Adams Wei Yu , David Dohan , Minh-Thang Luong , Rui Zhao , Kai Chen , Mohammad Norouzi , and Quoc V Le . Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541 , 2018 . Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541, 2018."},{"key":"e_1_3_2_1_21_1","volume-title":"Deep contextualized word representations. arXiv preprint arXiv:1802.05365","author":"Peters Matthew E","year":"2018","unstructured":"Matthew E Peters , Mark Neumann , Mohit Iyyer , Matt Gardner , Christopher Clark , Kenton Lee , and Luke Zettlemoyer . Deep contextualized word representations. arXiv preprint arXiv:1802.05365 , 2018 . Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. 
arXiv preprint arXiv:1802.05365, 2018."},{"key":"e_1_3_2_1_22_1","volume-title":"RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu , Myle Ott , Naman Goyal , Jingfei Du , Mandar Joshi , Danqi Chen , Omer Levy , Mike Lewis , Luke Zettlemoyer , and Veselin Stoyanov . RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 , 2019 . Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019."},{"key":"e_1_3_2_1_23_1","first-page":"5764","volume-title":"NeurIPS","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang , Zihang Dai , Yiming Yang , Jaime Carbonell , Russ R Salakhutdinov , and Quoc V Le. XLNet : Generalized autoregressive pretraining for language understanding . In NeurIPS , pages 5754\u2013 5764 , 2019 . Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754\u20135764, 2019."},{"key":"e_1_3_2_1_24_1","volume-title":"International Conference on Learning Representations","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan , Mingda Chen , Sebastian Goodman , Kevin Gimpel , Piyush Sharma , and Radu Soricut . ALBERT : A lite BERT for self-supervised learning of language representations . In International Conference on Learning Representations , 2020 . Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. 
In International Conference on Learning Representations, 2020."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/219717.219748"},{"key":"e_1_3_2_1_26_1","volume-title":"A Survey on Knowledge Graphs: Representation, Acquisition and Applications[J]","author":"Ji S","year":"2020","unstructured":"Ji S , Pan S , Cambria E , A Survey on Knowledge Graphs: Representation, Acquisition and Applications[J] . 2020 . Ji S , Pan S , Cambria E , A Survey on Knowledge Graphs: Representation, Acquisition and Applications[J]. 2020."},{"issue":"10","key":"e_1_3_2_1_27_1","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1145\/2629489","article-title":"Wikidata: a free collaborative knowledge base","volume":"57","author":"D. Vrandečić","year":"2014","unstructured":"D. Vrandečić and M. Krötzsch , \u201c Wikidata: a free collaborative knowledge base ,\u201d Communications of the ACM , vol. 57 , no. 10 , pp. 78 \u2013 85 , 2014 . D. Vrandečić and M. Krötzsch, \u201cWikidata: a free collaborative knowledge base,\u201d Communications of the ACM, vol. 57, no. 10, pp. 78\u201385, 2014.","journal-title":"Communications of the ACM"},{"key":"e_1_3_2_1_28_1","first-page":"1250","volume-title":"SIGMOD","author":"Bollacker K.","year":"2008","unstructured":"K. Bollacker , C. Evans , P. Paritosh , T. Sturge , and J. Taylor , \u201c Freebase: a collaboratively created graph database for structuring human knowledge ,\u201d in SIGMOD , 2008 , pp. 1247\u2013 1250 . K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor, \u201cFreebase: a collaboratively created graph database for structuring human knowledge,\u201d in SIGMOD, 2008, pp. 
1247\u20131250."},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-76298-0_52"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/1242572.1242667"},{"issue":"10","key":"e_1_3_2_1_31_1","first-page":"26","article-title":"A Survey[J]","volume":"63","author":"Qiu X","year":"2020","unstructured":"Qiu X , Sun T , Xu Y , Pre-trained Models for Natural Language Processing : A Survey[J] . Science China Technological Sciences , 2020 , 63 ( 10 ): 26 . Qiu X , Sun T , Xu Y , Pre-trained Models for Natural Language Processing: A Survey[J]. Science China Technological Sciences, 2020, 63(10):26.","journal-title":"Science China Technological Sciences"},{"key":"e_1_3_2_1_32_1","volume-title":"SenseBERT: Driving some sense into BERT. arXiv preprint arXiv:1908.05646","author":"Levine Yoav","year":"2019","unstructured":"Yoav Levine , Barak Lenz , Or Dagan , Dan Padnos , Or Sharir , Shai Shalev-Shwartz , Amnon Shashua , and Yoav Shoham . SenseBERT: Driving some sense into BERT. arXiv preprint arXiv:1908.05646 , 2019 . Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. SenseBERT: Driving some sense into BERT. arXiv preprint arXiv:1908.05646, 2019."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1139"},{"key":"e_1_3_2_1_34_1","volume-title":"EMNLP-IJCNLP","author":"Peters Matthew E.","year":"2019","unstructured":"Matthew E. Peters , Mark Neumann , Robert L. Logan IV, Roy Schwartz , Vidur Joshi , Sameer Singh , and Noah A. Smith . Knowledge enhanced contextual word representations . In EMNLP-IJCNLP , 2019 . Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. Knowledge enhanced contextual word representations. In EMNLP-IJCNLP, 2019."},{"key":"e_1_3_2_1_35_1","volume-title":"KEPLER: A unified model for knowledge embedding and pre-trained language representation. 
arXiv preprint arXiv:1911.06136","author":"Wang Xiaozhi","year":"2019","unstructured":"Xiaozhi Wang , Tianyu Gao , Zhaocheng Zhu , Zhiyuan Liu , Juanzi Li , and Jian Tang . KEPLER: A unified model for knowledge embedding and pre-trained language representation. arXiv preprint arXiv:1911.06136 , 2019 . Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. KEPLER: A unified model for knowledge embedding and pre-trained language representation. arXiv preprint arXiv:1911.06136, 2019."},{"key":"e_1_3_2_1_36_1","volume-title":"AAAI","author":"Liu Weijie","year":"2019","unstructured":"Weijie Liu , Peng Zhou , Zhe Zhao , Zhiruo Wang , Qi Ju , Haotang Deng , and Ping Wang . K-BERT : Enabling language representation with knowledge graph . In AAAI , 2019 . Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. K-BERT: Enabling language representation with knowledge graph. In AAAI, 2019."},{"key":"e_1_3_2_1_37_1","volume-title":"A knowledge-enhanced pretraining model for commonsense story generation. arXiv preprint arXiv:2001.05139","author":"Guan Jian","year":"2020","unstructured":"Jian Guan , Fei Huang , Zhihao Zhao , Xiaoyan Zhu , and Minlie Huang . A knowledge-enhanced pretraining model for commonsense story generation. arXiv preprint arXiv:2001.05139 , 2020 . Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. A knowledge-enhanced pretraining model for commonsense story generation. arXiv preprint arXiv:2001.05139, 2020."},{"key":"e_1_3_2_1_38_1","first-page":"2357","volume-title":"ACL","author":"Yang An","year":"2019","unstructured":"An Yang , Quan Wang , Jing Liu , Kai Liu , Yajuan Lyu , Hua Wu , Qiaoqiao She , and Sujian Li . Enhancing pre-trained language representations with rich knowledge for machine reading comprehension . In ACL , pages 2346\u2013 2357 , 2019 . An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 
Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In ACL, pages 2346\u20132357, 2019."},{"key":"e_1_3_2_1_39_1","unstructured":"https:\/\/huggingface.co\/docs\/transformers\/index  https:\/\/huggingface.co\/docs\/transformers\/index"},{"key":"e_1_3_2_1_40_1","first-page":"198","volume-title":"Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Wang Wenhui","year":"2017","unstructured":"Wenhui Wang , Nan Yang , Furu Wei , Baobao Chang , and Ming Zhou . Gated self-matching networks for reading comprehension and question answering . In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 189\u2013 198 , 2017 . Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189\u2013198, 2017."},{"key":"e_1_3_2_1_41_1","volume-title":"Pre-training with whole word masking for chinese BERT. arXiv preprint arXiv:1906.08101","author":"Cui Yiming","year":"2019","unstructured":"Yiming Cui , Wanxiang Che , Ting Liu , Bing Qin , Ziqing Yang , Shijin Wang , and Guoping Hu . Pre-training with whole word masking for chinese BERT. arXiv preprint arXiv:1906.08101 , 2019 . Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. Pre-training with whole word masking for chinese BERT. 
arXiv preprint arXiv:1906.08101, 2019."}],"event":{"name":"CSAE 2022: The 6th International Conference on Computer Science and Application Engineering","acronym":"CSAE 2022","location":"Virtual Event China"},"container-title":["The 6th International Conference on Computer Science and Application Engineering"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565387.3565405","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3565387.3565405","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:13Z","timestamp":1750178233000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3565387.3565405"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,21]]},"references-count":41,"alternative-id":["10.1145\/3565387.3565405","10.1145\/3565387"],"URL":"https:\/\/doi.org\/10.1145\/3565387.3565405","relation":{},"subject":[],"published":{"date-parts":[[2022,10,21]]}}}