{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T18:43:31Z","timestamp":1763664211362,"version":"3.41.0"},"reference-count":23,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2020,2,20]],"date-time":"2020-02-20T00:00:00Z","timestamp":1582156800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"This work was supported by Institute for Information & Communications Technology Promotion (IITP) grants funded by the Korean government (MSIT)","award":["2013?0?0013"],"award-info":[{"award-number":["2013?0?0013"]}]},{"name":"This work was supported by Institute for Information & Communications Technology Promotion (IITP) grants funded by the Korean government (MSIT)","award":["2018?0?00605"],"award-info":[{"award-number":["2018?0?00605"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2020,5,31]]},"abstract":"<jats:p>\n            Machine reading comprehension question answering (MRC-QA) is the task of understanding the context of a given passage to find a correct answer within it. A passage is composed of several sentences; therefore, the length of the input sentence becomes longer, leading to diminished performance. In this article, we propose S\n            <jats:sup>3<\/jats:sup>\n            -NET, which adds sentence-based encoding to solve this problem. S\n            <jats:sup>3<\/jats:sup>\n            -NET, which is based on a simple recurrent unit architecture, is a deep learning model that solves the MRC-QA by applying matching network to sentence-level encoding. 
In addition, S\n            <jats:sup>3<\/jats:sup>\n            -NET utilizes self-matching networks to compute attention weights over its own recurrent neural network sequences. We perform MRC-QA on the English SQuAD dataset and the Korean MindsMRC dataset. The experimental results show that on SQuAD, the S\n            <jats:sup>3<\/jats:sup>\n            -NET model proposed in this article achieves 71.91% and 74.12% exact match and 81.02% and 82.34% F1 with single and ensemble models, respectively, and on MindsMRC, our model achieves 69.43% and 71.28% exact match and 81.53% and 82.77% F1 with single and ensemble models, respectively.\n          <\/jats:p>","DOI":"10.1145\/3365679","type":"journal-article","created":{"date-parts":[[2020,4,4]],"date-time":"2020-04-04T03:08:03Z","timestamp":1585969683000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["S\n            <sup>3<\/sup>\n            -NET"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5386-0483","authenticated-orcid":false,"given":"Cheoneum","family":"Park","sequence":"first","affiliation":[{"name":"Kangwon National University, Chuncheon, Gangwondo, KOR"}]},{"given":"Heejun","family":"Song","sequence":"additional","affiliation":[{"name":"Samsung Research, Seongchon-gil, Seoul, KOR"}]},{"given":"Changki","family":"Lee","sequence":"additional","affiliation":[{"name":"Kangwon National University, Chuncheon, Gangwondo, KOR"}]}],"member":"320","published-online":{"date-parts":[[2020,2,20]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"Dzmitry Bahdanau Kyunghyun Cho and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473."},{"key":"e_1_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Danqi Chen Adam Fisch Jason Weston and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv:1704.00051.","DOI":"10.18653\/v1\/P17-1171"},{"volume-title":"Smarnet: Teaching machines to read and comprehend like human. arXiv:1710.02772.","year":"2017","author":"Chen Zheqian","key":"e_1_2_1_3_1"},{"volume-title":"Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio.","year":"2014","author":"Cho Kyunghyun","key":"e_1_2_1_4_1"},{"key":"e_1_2_1_5_1","unstructured":"Junyoung Chung Caglar Gulcehre KyungHyun Cho and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555."},{"volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.","year":"2018","author":"Devlin Jacob","key":"e_1_2_1_6_1"},{"key":"e_1_2_1_7_1","unstructured":"Felix Hill Antoine Bordes Sumit Chopra and Jason Weston. 2015. The Goldilocks principle: Reading children\u2019s books with explicit memory representations. arXiv:1511.02301."},{"key":"e_1_2_1_8_1","unstructured":"Minghao Hu Yuxing Peng Zhen Huang Xipeng Qiu Furu Wei and Ming Zhou. 2017. Reinforced mnemonic reader for machine reading comprehension. 
arXiv:1705.02798."},{"key":"e_1_2_1_9_1","doi-asserted-by":"crossref","unstructured":"Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv:1408.5882.  Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv:1408.5882.","DOI":"10.3115\/v1\/D14-1181"},{"volume-title":"Kingma and Jimmy Ba","year":"2014","author":"Diederik","key":"e_1_2_1_10_1"},{"key":"e_1_2_1_11_1","unstructured":"Rupesh Kumar Srivastava Klaus Greff and J\u00fcrgen Schmidhuber. 2015. Highway networks. arxiv:cs.LG\/1505.00387.  Rupesh Kumar Srivastava Klaus Greff and J\u00fcrgen Schmidhuber. 2015. Highway networks. arxiv:cs.LG\/1505.00387."},{"key":"e_1_2_1_12_1","unstructured":"Tao Lei and Yu Zhang. 2017. Training RNNs as fast as CNNs. arXiv:1709.02755.  Tao Lei and Yu Zhang. 2017. Training RNNs as fast as CNNs. arXiv:1709.02755."},{"volume-title":"MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268.","year":"2016","author":"Nguyen Tri","key":"e_1_2_1_13_1"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.5626\/KTCP.2017.23.9.542"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.4218\/etrij.2017-0279"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1162"},{"key":"e_1_2_1_17_1","doi-asserted-by":"crossref","unstructured":"Matthew E. Peters Mark Neumann Mohit Iyyer Matt Gardner Christopher Clark Kenton Lee and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv:1802.05365.  Matthew E. Peters Mark Neumann Mohit Iyyer Matt Gardner Christopher Clark Kenton Lee and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv:1802.05365.","DOI":"10.18653\/v1\/N18-1202"},{"key":"e_1_2_1_18_1","doi-asserted-by":"crossref","unstructured":"Pranav Rajpurkar Jian Zhang Konstantin Lopyrev and Percy Liang. 2016. Squad: 100 000+ questions for machine comprehension of text. arXiv:1606.05250.  
Pranav Rajpurkar Jian Zhang Konstantin Lopyrev and Percy Liang. 2016. Squad: 100 000+ questions for machine comprehension of text. arXiv:1606.05250.","DOI":"10.18653\/v1\/D16-1264"},{"key":"e_1_2_1_19_1","unstructured":"Minjoon Seo Aniruddha Kembhavi Ali Farhadi and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv:1611.01603.  Minjoon Seo Aniruddha Kembhavi Ali Farhadi and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv:1611.01603."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098177"},{"key":"e_1_2_1_21_1","unstructured":"Oriol Vinyals Meire Fortunato and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. 2692\u20132700.  Oriol Vinyals Meire Fortunato and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. 2692\u20132700."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P17-1018"},{"key":"e_1_2_1_23_1","doi-asserted-by":"crossref","unstructured":"Dirk Weissenborn Georg Wiese and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. arXiv:1703.04816.  Dirk Weissenborn Georg Wiese and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. 
arXiv:1703.04816.","DOI":"10.18653\/v1\/K17-1028"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3365679","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3365679","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:23:36Z","timestamp":1750202616000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3365679"}},"subtitle":["SRU-Based Sentence and Self-Matching Networks for Machine Reading Comprehension"],"short-title":[],"issued":{"date-parts":[[2020,2,20]]},"references-count":23,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2020,5,31]]}},"alternative-id":["10.1145\/3365679"],"URL":"https:\/\/doi.org\/10.1145\/3365679","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"type":"print","value":"2375-4699"},{"type":"electronic","value":"2375-4702"}],"subject":[],"published":{"date-parts":[[2020,2,20]]},"assertion":[{"value":"2018-08-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-09-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-02-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}