{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,15]],"date-time":"2025-11-15T10:31:04Z","timestamp":1763202664651,"version":"3.41.0"},"reference-count":46,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2022,12,27]],"date-time":"2022-12-27T00:00:00Z","timestamp":1672099200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Major Projects of National Social Science Foundation of China","award":["20&ZD047"],"award-info":[{"award-number":["20&ZD047"]}]},{"name":"Guizhou Provincial Science and Technology Projects","award":["[2020]3003"],"award-info":[{"award-number":["[2020]3003"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2023,3,31]]},"abstract":"<jats:p>\n            By leveraging self-supervised tasks,\n            <jats:bold>pre-trained language model (PLM)<\/jats:bold>\n            has made significant progress in the field of\n            <jats:bold>machine reading comprehension (MRC)<\/jats:bold>\n            . However, in\n            <jats:bold>classical Chinese MRC (CCMRC)<\/jats:bold>\n            , the passage is typically in classical style, but the question and options are given in modern style. Existing pre-trained methods seldom model the relationship between classical and modern styles, resulting in overall misunderstanding of the passage. In this paper, we propose a contrastive learning method between classical and modern Chinese in order to reach a deep understanding of the two different styles. In particular, a novel pre-training task and an enhanced co-matching network have been defined: (1) The\n            <jats:bold>synonym discrimination (SD)<\/jats:bold>\n            task is used to identify whether modern meaning corresponds to classical Chinese. (2) The\n            <jats:bold>enhanced dual co-matching (EDCM)<\/jats:bold>\n            network is employed for a more interactive understanding of the classical passage and the modern options. 
The experimental results show that our proposed method improves language understanding ability and outperforms existing PLMs on the Haihua, CCLUE, and ChID datasets.\n          <\/jats:p>","DOI":"10.1145\/3551637","type":"journal-article","created":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T11:57:28Z","timestamp":1659700648000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Contrastive Learning between Classical and Modern Chinese for Classical Chinese Machine Reading Comprehension"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3732-4354","authenticated-orcid":false,"given":"Maofu","family":"Liu","sequence":"first","affiliation":[{"name":"Wuhan University of Science and Technology, Wuhan, Hubei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2717-7888","authenticated-orcid":false,"given":"Junyi","family":"Xiang","sequence":"additional","affiliation":[{"name":"Wuhan University of Science and Technology, Wuhan, Hubei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4318-0852","authenticated-orcid":false,"given":"Xu","family":"Xia","sequence":"additional","affiliation":[{"name":"Wuhan University of Science and Technology, Wuhan, Hubei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1086-4788","authenticated-orcid":false,"given":"Huijun","family":"Hu","sequence":"additional","affiliation":[{"name":"Wuhan University of Science and Technology, Wuhan, Hubei, China"}]}],"member":"320","published-online":{"date-parts":[[2022,12,27]]},"reference":[{"key":"e_1_3_2_2_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920)","author":"Clark Kevin","year":"2020","unstructured":"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920). https:\/\/openreview.net\/forum?id=r1xMH1BtvB."},{"doi-asserted-by":"publisher","key":"e_1_3_2_3_2","DOI":"10.1109\/TASLP.2021.3124365"},{"doi-asserted-by":"publisher","key":"e_1_3_2_4_2","DOI":"10.1145\/3501399"},{"doi-asserted-by":"publisher","key":"e_1_3_2_5_2","DOI":"10.18653\/v1\/D19-1600"},{"key":"e_1_3_2_6_2","volume-title":"Robotics: Science and Systems","author":"Dahlkamp Hendrik","year":"2006","unstructured":"Hendrik Dahlkamp, Adrian Kaehler, David Stavens, Sebastian Thrun, and Gary R. Bradski. 2006. Self-supervised monocular road detection in desert terrain. In Robotics: Science and Systems, Vol. 38."},{"key":"e_1_3_2_7_2","volume-title":"A new introduction to Classical Chinese","author":"Dawson Raymond","year":"1984","unstructured":"Raymond Dawson and Raymond Stanley Dawson. 1984. A new introduction to Classical Chinese. Oxford University Press."},{"doi-asserted-by":"publisher","key":"e_1_3_2_8_2","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_3_2_9_2","article-title":"Unified language model pre-training for natural language understanding and generation","volume":"32","author":"Dong Li","year":"2019","unstructured":"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. 
Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"doi-asserted-by":"publisher","key":"e_1_3_2_10_2","DOI":"10.18653\/v1\/W18-2605"},{"doi-asserted-by":"publisher","key":"e_1_3_2_11_2","DOI":"10.18653\/v1\/P18-1031"},{"doi-asserted-by":"publisher","key":"e_1_3_2_12_2","DOI":"10.1162\/tacl_a_00300"},{"doi-asserted-by":"publisher","key":"e_1_3_2_13_2","DOI":"10.18653\/v1\/D17-1082"},{"key":"e_1_3_2_14_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920)","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920). https:\/\/openreview.net\/forum?id=H1eA7AEtvS."},{"doi-asserted-by":"publisher","key":"e_1_3_2_15_2","DOI":"10.18653\/v1\/2020.acl-main.703"},{"doi-asserted-by":"publisher","key":"e_1_3_2_16_2","DOI":"10.1145\/3269206.3269280"},{"doi-asserted-by":"publisher","key":"e_1_3_2_17_2","DOI":"10.1609\/aaai.v34i05.6357"},{"doi-asserted-by":"publisher","key":"e_1_3_2_18_2","DOI":"10.3390\/app9183698"},{"doi-asserted-by":"publisher","key":"e_1_3_2_19_2","DOI":"10.1609\/aaai.v34i03.5681"},{"doi-asserted-by":"publisher","key":"e_1_3_2_20_2","DOI":"10.24963\/ijcai.2020\/525"},{"key":"e_1_3_2_21_2","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","volume":"1907","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv preprint abs\/1907.11692 (2019).","journal-title":"ArXiv preprint"},{"doi-asserted-by":"publisher","key":"e_1_3_2_22_2","DOI":"10.1016\/S0079-7421(08)60536-8"},{"doi-asserted-by":"publisher","key":"e_1_3_2_23_2","DOI":"10.1109\/IJCNN.2019.8852176"},{"key":"e_1_3_2_24_2","volume-title":"CoCo@ NIPS","author":"Nguyen Tri","year":"2016","unstructured":"Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@ NIPS."},{"doi-asserted-by":"publisher","key":"e_1_3_2_25_2","DOI":"10.18653\/v1\/2021.acl-long.21"},{"doi-asserted-by":"publisher","key":"e_1_3_2_26_2","DOI":"10.1609\/aaai.v30i1.10341"},{"doi-asserted-by":"publisher","key":"e_1_3_2_27_2","DOI":"10.1109\/ICEIB53692.2021.9686420"},{"doi-asserted-by":"publisher","key":"e_1_3_2_28_2","DOI":"10.18653\/v1\/2021.acl-long.260"},{"doi-asserted-by":"publisher","key":"e_1_3_2_29_2","DOI":"10.1007\/s11431-020-1647-3"},{"doi-asserted-by":"publisher","key":"e_1_3_2_30_2","DOI":"10.18653\/v1\/P18-2124"},{"doi-asserted-by":"publisher","key":"e_1_3_2_31_2","DOI":"10.1145\/3519296"},{"key":"e_1_3_2_32_2","first-page":"5926","volume-title":"International Conference on Machine Learning (ICML)","author":"Song Kaitao","year":"2019","unstructured":"Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning (ICML). 
5926\u20135936."},{"doi-asserted-by":"publisher","key":"e_1_3_2_33_2","DOI":"10.1162\/tacl_a_00305"},{"doi-asserted-by":"publisher","key":"e_1_3_2_34_2","DOI":"10.29007\/21r5"},{"doi-asserted-by":"publisher","key":"e_1_3_2_35_2","DOI":"10.1177\/107769905303000401"},{"key":"e_1_3_2_36_2","first-page":"5998","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc., 5998\u20136008."},{"doi-asserted-by":"publisher","key":"e_1_3_2_37_2","DOI":"10.18653\/v1\/2021.acl-long.181"},{"key":"e_1_3_2_38_2","first-page":"1","article-title":"SikuBERT and SikuRoBERTa: Research on the construction and application of the pre-training model of Sikuquanshu for digital humanities","author":"Wang Dongbo","year":"2021","unstructured":"Dongbo Wang, Chang Liu, Zihe Zhu, Jiangfeng Liu, Haotian Hu, Si Shen, and Li Bin. 2021. SikuBERT and SikuRoBERTa: Research on the construction and application of the pre-training model of Sikuquanshu for digital humanities. Library Forum (2021), 1\u201314.","journal-title":"Library Forum"},{"doi-asserted-by":"publisher","key":"e_1_3_2_39_2","DOI":"10.18653\/v1\/P18-2118"},{"doi-asserted-by":"publisher","key":"e_1_3_2_40_2","DOI":"10.18653\/v1\/2021.acl-long.491"},{"doi-asserted-by":"publisher","key":"e_1_3_2_41_2","DOI":"10.18653\/v1\/D18-1257"},{"doi-asserted-by":"publisher","key":"e_1_3_2_42_2","DOI":"10.18653\/v1\/2020.coling-main.419"},{"key":"e_1_3_2_43_2","article-title":"Native Chinese reader: A dataset towards native-level Chinese machine reading comprehension","author":"Xu Shusheng","year":"2021","unstructured":"Shusheng Xu, Yichen Liu, Xiaoyu Yi, Siyuan Zhou, Huizi Li, and Yi Wu. 2021. Native Chinese reader: A dataset towards native-level Chinese machine reading comprehension. arXiv preprint arXiv:2112.06494 (2021).","journal-title":"arXiv preprint arXiv:2112.06494"},{"key":"e_1_3_2_44_2","article-title":"Dynamic fusion networks for machine reading comprehension","volume":"1711","author":"Xu Yichong","year":"2017","unstructured":"Yichong Xu, Jingjing Liu, Jianfeng Gao, Yelong Shen, and Xiaodong Liu. 2017. Dynamic fusion networks for machine reading comprehension. ArXiv preprint abs\/1711.04964 (2017). https:\/\/arxiv.org\/abs\/1711.04964.","journal-title":"ArXiv preprint"},{"key":"e_1_3_2_45_2","article-title":"XLNet: Generalized autoregressive pretraining for language understanding","volume":"32","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. 
Advances in Neural Information Processing Systems 32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"doi-asserted-by":"publisher","key":"e_1_3_2_46_2","DOI":"10.1609\/aaai.v34i05.6502"},{"doi-asserted-by":"publisher","key":"e_1_3_2_47_2","DOI":"10.18653\/v1\/P19-1075"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3551637","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3551637","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:00:26Z","timestamp":1750186826000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3551637"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,27]]},"references-count":46,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,3,31]]}},"alternative-id":["10.1145\/3551637"],"URL":"https:\/\/doi.org\/10.1145\/3551637","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"type":"print","value":"2375-4699"},{"type":"electronic","value":"2375-4702"}],"subject":[],"published":{"date-parts":[[2022,12,27]]},"assertion":[{"value":"2022-06-14","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-07-21","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
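
A minimal sketch of how a record of this shape can be retrieved and read. It assumes network access, the third-party requests library, and the public Crossref REST API endpoint https://api.crossref.org/works/{DOI}, which returns the envelope shown above ("status", "message-type", "message"); the field names used below are taken directly from the record itself.

import requests

DOI = "10.1145/3551637"

# Fetch the work record; Crossref wraps the record body in "message".
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=10)
resp.raise_for_status()
work = resp.json()["message"]

# Read a few of the fields present in the record above. "title" is a list,
# each "author" entry carries "given" and "family" names, and
# "is-referenced-by-count" is the incoming-citation tally.
title = work["title"][0]
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]
cited_by = work.get("is-referenced-by-count", 0)
doi_url = work["URL"]

print(title)
print("; ".join(authors))
print(f"Cited by {cited_by} works: {doi_url}")

Under these assumptions, running the sketch prints the article title, the four authors, and the citation count recorded above (6 as of this record's indexing date).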