{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T13:38:39Z","timestamp":1760621919659,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":38,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T00:00:00Z","timestamp":1603065600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Natural Science Foundation","award":["U1916109","61672127"],"award-info":[{"award-number":["U1916109","61672127"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,19]]},"DOI":"10.1145\/3340531.3412013","type":"proceedings-article","created":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T06:32:45Z","timestamp":1603089165000},"page":"1015-1024","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Dual Head-wise Coattention Network for Machine Comprehension with Multiple-Choice Questions"],"prefix":"10.1145","author":[{"given":"Zhuang","family":"Liu","sequence":"first","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Kaiyu","family":"Huang","sequence":"additional","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Degen","family":"Huang","sequence":"additional","affiliation":[{"name":"Dalian University of Technology, Dalian, China"}]},{"given":"Zhuang","family":"Liu","sequence":"additional","affiliation":[{"name":"Union Mobile Financial Technology, Beijing, China"}]},{"given":"Jun","family":"Zhao","sequence":"additional","affiliation":[{"name":"Union Mobile Financial Technology, Beijing, 
China"}]}],"member":"320","published-online":{"date-parts":[[2020,10,19]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7--9, 2015, Conference Track Proceedings. http:\/\/arxiv.org\/abs\/1409.0473","author":"Bahdanau Dzmitry","year":"2015","unstructured":"Dzmitry Bahdanau , Kyunghyun Cho , and Yoshua Bengio . 2015 . Neural Machine Translation by Jointly Learning to Align and Translate . In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7--9, 2015, Conference Track Proceedings. http:\/\/arxiv.org\/abs\/1409.0473 Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7--9, 2015, Conference Track Proceedings. http:\/\/arxiv.org\/abs\/1409.0473"},{"key":"e_1_3_2_2_2_1","volume-title":"Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling. CoRR","author":"Bowman Samuel R.","year":"2018","unstructured":"Samuel R. Bowman , Ellie Pavlick , Edouard Grave , and Benjamin Van Durme . 2018. Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling. CoRR , Vol. abs\/ 1812 .10860 ( 2018 ). arxiv: 1812.10860 http:\/\/arxiv.org\/abs\/1812.10860 Samuel R. Bowman, Ellie Pavlick, Edouard Grave, and Benjamin Van Durme. 2018. Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling. CoRR, Vol. abs\/1812.10860 (2018). 
arxiv: 1812.10860 http:\/\/arxiv.org\/abs\/1812.10860"},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33016276"},{"key":"e_1_3_2_2_4_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019","volume":"1","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . 2019 . BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019 , Minneapolis, MN, USA, June 2--7 , 2019, Volume 1 (Long and Short Papers). 4171--4186. https:\/\/www.aclweb.org\/anthology\/N19--1423\/ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2--7, 2019, Volume 1 (Long and Short Papers). 4171--4186. https:\/\/www.aclweb.org\/anthology\/N19--1423\/"},{"key":"e_1_3_2_2_5_1","volume-title":"Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems, NIPS 2015, December 7--12","author":"Hermann Karl Moritz","year":"2015","unstructured":"Karl Moritz Hermann , Tom\u00e1s Kocisk\u00fd , Edward Grefenstette , Lasse Espeholt , Will Kay , Mustafa Suleyman , and Phil Blunsom . 2015 . Teaching Machines to Read and Comprehend . In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems, NIPS 2015, December 7--12 , 2015, Montreal, Quebec, Canada. 1693--1701. 
http:\/\/papers.nips.cc\/paper\/5945-teaching-machines-to-read-and-comprehend Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems, NIPS 2015, December 7--12, 2015, Montreal, Quebec, Canada. 1693--1701. http:\/\/papers.nips.cc\/paper\/5945-teaching-machines-to-read-and-comprehend"},{"key":"e_1_3_2_2_6_1","volume-title":"Dual Multi-head Co-attention for Multi-choice Reading Comprehension. CoRR","author":"Hu Pengfei","year":"2020","unstructured":"Pengfei Hu , Hai Zhao , and Xiaoguang Li. 2020. Dual Multi-head Co-attention for Multi-choice Reading Comprehension. CoRR , Vol. abs\/ 2001 .09415 ( 2020 ). arxiv: 2001.09415 https:\/\/arxiv.org\/abs\/2001.09415 Pengfei Hu, Hai Zhao, and Xiaoguang Li. 2020. Dual Multi-head Co-attention for Multi-choice Reading Comprehension. CoRR, Vol. abs\/2001.09415 (2020). arxiv: 2001.09415 https:\/\/arxiv.org\/abs\/2001.09415"},{"key":"e_1_3_2_2_7_1","volume-title":"GenNet: Reading Comprehension with Multiple Choice Questions using Generation and Selection model. CoRR","author":"Ingale Vaishali","year":"2020","unstructured":"Vaishali Ingale and Pushpender Singh . 2020. GenNet: Reading Comprehension with Multiple Choice Questions using Generation and Selection model. CoRR , Vol. abs\/ 2003 .04360 ( 2020 ). arxiv: 2003.04360 https:\/\/arxiv.org\/abs\/2003.04360 Vaishali Ingale and Pushpender Singh. 2020. GenNet: Reading Comprehension with Multiple Choice Questions using Generation and Selection model. CoRR, Vol. abs\/2003.04360 (2020). arxiv: 2003.04360 https:\/\/arxiv.org\/abs\/2003.04360"},{"key":"e_1_3_2_2_8_1","volume-title":"MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension. 
CoRR","author":"Jin Di","year":"2019","unstructured":"Di Jin , Shuyang Gao , Jiun-Yu Kao , Tagyoung Chung , and Dilek Hakkani-T\u00fcr. 2019 . MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension. CoRR , Vol. abs\/ 1910 .00458 (2019). arxiv: 1910.00458 http:\/\/arxiv.org\/abs\/1910.00458 Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-T\u00fcr. 2019. MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension. CoRR, Vol. abs\/1910.00458 (2019). arxiv: 1910.00458 http:\/\/arxiv.org\/abs\/1910.00458"},{"key":"e_1_3_2_2_9_1","volume-title":"UnifiedQA: Crossing Format Boundaries With a Single QA System. CoRR","author":"Khashabi Daniel","year":"2020","unstructured":"Daniel Khashabi , Tushar Khot , Ashish Sabharwal , Oyvind Tafjord , Peter Clark , and Hannaneh Hajishirzi . 2020. UnifiedQA: Crossing Format Boundaries With a Single QA System. CoRR , Vol. abs\/ 2005 .00700 ( 2020 ). arxiv: 2005.00700 https:\/\/arxiv.org\/abs\/2005.00700 Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing Format Boundaries With a Single QA System. CoRR, Vol. abs\/2005.00700 (2020). arxiv: 2005.00700 https:\/\/arxiv.org\/abs\/2005.00700"},{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D17-1082"},{"key":"e_1_3_2_2_11_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan , Mingda Chen , Sebastian Goodman , Kevin Gimpel , Piyush Sharma , and Radu Soricut . 2020 . ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR 2020. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR 2020."},{"key":"e_1_3_2_2_12_1","volume-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu , Myle Ott , Naman Goyal , Jingfei Du , Mandar Joshi , Danqi Chen , Omer Levy , Mike Lewis , Luke Zettlemoyer , and Veselin Stoyanov . 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR , Vol. abs\/ 1907 .11692 ( 2019 ). arxiv: 1907.11692 http:\/\/arxiv.org\/abs\/1907.11692 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR, Vol. abs\/1907.11692 (2019). arxiv: 1907.11692 http:\/\/arxiv.org\/abs\/1907.11692"},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2020\/622"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-69005-6_32"},{"key":"e_1_3_2_2_15_1","volume-title":"Proceedings of the Twenty-fourth European Conference on Artificial Intelligence, ECAI 2020","author":"Liu Zhuang","year":"2020","unstructured":"Zhuang Liu , Kaiyu Huang , Degen Huang , and Jun Zhao . 2020 a. Semantics-Reinforced Networks for Question Generation . In Proceedings of the Twenty-fourth European Conference on Artificial Intelligence, ECAI 2020 , Santiago de Compostela, Spain, Aug 29 - Sep 5, 2020, Long Papers. Zhuang Liu, Kaiyu Huang, Degen Huang, and Jun Zhao. 2020 a. Semantics-Reinforced Networks for Question Generation. 
In Proceedings of the Twenty-fourth European Conference on Artificial Intelligence, ECAI 2020, Santiago de Compostela, Spain, Aug 29 - Sep 5, 2020, Long Papers."},{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3372120"},{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D18-1260"},{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-2022"},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S18-1119"},{"key":"e_1_3_2_2_20_1","volume-title":"Improving Question Answering with External Knowledge. CoRR","author":"Pan Xiaoman","year":"2019","unstructured":"Xiaoman Pan , Kai Sun , Dian Yu , Heng Ji , and Dong Yu. 2019. Improving Question Answering with External Knowledge. CoRR , Vol. abs\/ 1902 .00993 ( 2019 ). arxiv: 1902.00993 http:\/\/arxiv.org\/abs\/1902.00993 Xiaoman Pan, Kai Sun, Dian Yu, Heng Ji, and Dong Yu. 2019. Improving Question Answering with External Knowledge. CoRR, Vol. abs\/1902.00993 (2019). arxiv: 1902.00993 http:\/\/arxiv.org\/abs\/1902.00993"},{"key":"e_1_3_2_2_21_1","volume-title":"Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018","author":"Parikh Soham","year":"2018","unstructured":"Soham Parikh , Ananya Sai , Preksha Nema , and Mitesh M. Khapra . 2018. ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions . In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018 , July 13 --19 , 2018 , Stockholm, Sweden. 4272--4278. https:\/\/doi.org\/10.24963\/ijcai.2018\/594 10.24963\/ijcai.2018 Soham Parikh, Ananya Sai, Preksha Nema, and Mitesh M. Khapra. 2018. ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13--19, 2018, Stockholm, Sweden. 4272--4278. 
https:\/\/doi.org\/10.24963\/ijcai.2018\/594"},{"key":"e_1_3_2_2_22_1","volume-title":"Proceedings of Technical report, OpenAI","author":"Radford Alec","year":"2018","unstructured":"Alec Radford , Karthik Narasimhan , Tim Salimans , and Ilya Sutskever . 2018 . Improving language understanding by generative pre-training . In Proceedings of Technical report, OpenAI (2018). https:\/\/github.com\/openai\/finetune-transformer-lm Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In Proceedings of Technical report, OpenAI (2018). https:\/\/github.com\/openai\/finetune-transformer-lm"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D16-1264"},{"key":"e_1_3_2_2_24_1","volume-title":"Option Comparison Network for Multiple-choice Reading Comprehension. CoRR","author":"Ran Qiu","year":"2019","unstructured":"Qiu Ran , Peng Li , Weiwei Hu , and Jie Zhou . 2019. Option Comparison Network for Multiple-choice Reading Comprehension. CoRR , Vol. abs\/ 1903 .03033 ( 2019 ). arxiv: 1903.03033 http:\/\/arxiv.org\/abs\/1903.03033 Qiu Ran, Peng Li, Weiwei Hu, and Jie Zhou. 2019. Option Comparison Network for Multiple-choice Reading Comprehension. CoRR, Vol. abs\/1903.03033 (2019). arxiv: 1903.03033 http:\/\/arxiv.org\/abs\/1903.03033"},{"key":"e_1_3_2_2_25_1","volume-title":"a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR","author":"Sanh Victor","year":"2019","unstructured":"Victor Sanh , Lysandre Debut , Julien Chaumond , and Thomas Wolf . 2019. DistilBERT , a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR , Vol. abs\/ 1910 .01108 ( 2019 ). arxiv: 1910.01108 http:\/\/arxiv.org\/abs\/1910.01108 Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, Vol. abs\/1910.01108 (2019). 
arxiv: 1910.01108 http:\/\/arxiv.org\/abs\/1910.01108"},{"key":"e_1_3_2_2_26_1","volume-title":"5th International Conference on Learning Representations, ICLR","author":"Seo Min Joon","year":"2017","unstructured":"Min Joon Seo , Aniruddha Kembhavi , Ali Farhadi , and Hannaneh Hajishirzi . 2017. Bidirectional Attention Flow for Machine Comprehension . In 5th International Conference on Learning Representations, ICLR 2017 , Toulon, France, April 24--26, 2017, Conference Track Proceedings . https:\/\/openreview.net\/forum?id=HJ0UKP9ge Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional Attention Flow for Machine Comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24--26, 2017, Conference Track Proceedings. https:\/\/openreview.net\/forum?id=HJ0UKP9ge"},{"key":"e_1_3_2_2_27_1","volume-title":"Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR","author":"Shoeybi Mohammad","year":"2019","unstructured":"Mohammad Shoeybi , Mostofa Patwary , Raul Puri , Patrick LeGresley , Jared Casper , and Bryan Catanzaro . 2019. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR , Vol. abs\/ 1909 .08053 ( 2019 ). arxiv: 1909.08053 http:\/\/arxiv.org\/abs\/1909.08053 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR, Vol. abs\/1909.08053 (2019). 
arxiv: 1909.08053 http:\/\/arxiv.org\/abs\/1909.08053"},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00264"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1270"},{"key":"e_1_3_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33017088"},{"key":"e_1_3_2_2_31_1","volume-title":"17th Annual Conference of the International Speech Communication Association","author":"Tseng Bo-Hsiang","year":"2016","unstructured":"Bo-Hsiang Tseng , Sheng-syun Shen, Hung-yi Lee, and Lin-Shan Lee . 2016 . Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine. In Interspeech 2016 , 17th Annual Conference of the International Speech Communication Association , San Francisco, CA, USA, September 8--12 , 2016. 2731--2735. https:\/\/doi.org\/10.21437\/Interspeech.2016--876 10.21437\/Interspeech.2016--876 Bo-Hsiang Tseng, Sheng-syun Shen, Hung-yi Lee, and Lin-Shan Lee. 2016. Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine. In Interspeech 2016, 17th Annual Conference of the International Speech Communication Association, San Francisco, CA, USA, September 8--12, 2016. 2731--2735. https:\/\/doi.org\/10.21437\/Interspeech.2016--876"},{"key":"e_1_3_2_2_32_1","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani , Noam Shazeer , Niki Parmar , Jakob Uszkoreit , Llion Jones , Aidan N. Gomez , Lukasz Kaiser , and Illia Polosukhin . 2017 . Attention is All you Need . In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017 , 4--9 December 2017, Long Beach, CA, USA. 5998--6008. 
http:\/\/papers.nips.cc\/paper\/7181-attention-is-all-you-need Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4--9 December 2017, Long Beach, CA, USA. 5998--6008. http:\/\/papers.nips.cc\/paper\/7181-attention-is-all-you-need"},{"key":"e_1_3_2_2_33_1","volume-title":"Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension. CoRR","author":"Wan Hui","year":"2020","unstructured":"Hui Wan . 2020. Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension. CoRR , Vol. abs\/ 2003 .04992 ( 2020 ). arxiv: 2003.04992 https:\/\/arxiv.org\/abs\/2003.04992 Hui Wan. 2020. Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension. CoRR, Vol. abs\/2003.04992 (2020). arxiv: 2003.04992 https:\/\/arxiv.org\/abs\/2003.04992"},{"key":"e_1_3_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-2118"},{"key":"e_1_3_2_2_35_1","volume-title":"Towards Human-level Machine Reading Comprehension: Reasoning and Inference with Multiple Strategies. CoRR","author":"Xu Yichong","year":"2017","unstructured":"Yichong Xu , Jingjing Liu , Jianfeng Gao , Yelong Shen , and Xiaodong Liu . 2017. Towards Human-level Machine Reading Comprehension: Reasoning and Inference with Multiple Strategies. CoRR , Vol. abs\/ 1711 .04964 ( 2017 ). arxiv: 1711.04964 http:\/\/arxiv.org\/abs\/1711.04964 Yichong Xu, Jingjing Liu, Jianfeng Gao, Yelong Shen, and Xiaodong Liu. 2017. Towards Human-level Machine Reading Comprehension: Reasoning and Inference with Multiple Strategies. CoRR, Vol. abs\/1711.04964 (2017). 
arxiv: 1711.04964 http:\/\/arxiv.org\/abs\/1711.04964"},{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.31193\/ssap.01.9787509752807"},{"key":"e_1_3_2_2_37_1","volume-title":"Dual Co-Matching Network for Multi-choice Reading Comprehension. CoRR","author":"Zhang Shuailiang","year":"2019","unstructured":"Shuailiang Zhang , Hai Zhao , Yuwei Wu , Zhuosheng Zhang , Xi Zhou , and Xiang Zhou . 2019. DCMN+ : Dual Co-Matching Network for Multi-choice Reading Comprehension. CoRR , Vol. abs\/ 1908 .11511 ( 2019 ). arxiv: 1908.11511 http:\/\/arxiv.org\/abs\/1908.11511 Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension. CoRR, Vol. abs\/1908.11511 (2019). arxiv: 1908.11511 http:\/\/arxiv.org\/abs\/1908.11511"},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v32i1.12040"}],"event":{"name":"CIKM '20: The 29th ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Virtual Event Ireland","acronym":"CIKM '20"},"container-title":["Proceedings of the 29th ACM International Conference on Information &amp; Knowledge 
Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340531.3412013","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3340531.3412013","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:02:29Z","timestamp":1750197749000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3340531.3412013"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,19]]},"references-count":38,"alternative-id":["10.1145\/3340531.3412013","10.1145\/3340531"],"URL":"https:\/\/doi.org\/10.1145\/3340531.3412013","relation":{},"subject":[],"published":{"date-parts":[[2020,10,19]]},"assertion":[{"value":"2020-10-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}