{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T03:08:47Z","timestamp":1775704127955,"version":"3.50.1"},"reference-count":28,"publisher":"Springer Science and Business Media LLC","issue":"8","license":[{"start":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T00:00:00Z","timestamp":1659657600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T00:00:00Z","timestamp":1659657600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["F2019203157"],"award-info":[{"award-number":["F2019203157"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62172352"],"award-info":[{"award-number":["62172352"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Sub-project of the National Key Research and Development Program","award":["2020YFC0833404"],"award-info":[{"award-number":["2020YFC0833404"]}]},{"name":"Scientific and technological research projects of colleges and universities in Hebei Province","award":["ZD2019004"],"award-info":[{"award-number":["ZD2019004"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Pre-trained language models achieve high performance on machine reading comprehension task, but these models lack robustness and are vulnerable to adversarial samples. 
Most current methods for improving model robustness are based on data enrichment. However, these methods do not address the poor context representation of the machine reading comprehension model. We find that context representation plays a key role in the robustness of machine reading comprehension models: a dense context representation space results in poor robustness. To deal with this, we propose a Multi-task machine Reading Comprehension learning framework via Contrastive Learning. Its main idea is to improve the context representation space encoded by machine reading comprehension models through contrastive learning. We call this contrastive learning scheme Contrastive Learning in Context Representation Space (CLCRS). CLCRS samples sentences containing context information from the context as positive and negative samples, expanding the distance between the answer sentence and the other sentences in the context, and thereby expanding the context representation space of the machine reading comprehension model. The model can then better distinguish sentences containing correct answers from misleading sentences, which improves its robustness. 
Experimental results on adversarial datasets show that our method outperforms the comparison models and achieves state-of-the-art performance.<\/jats:p>","DOI":"10.1007\/s10489-022-03947-w","type":"journal-article","created":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T09:04:52Z","timestamp":1659690292000},"page":"9103-9114","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Improving the robustness of machine reading comprehension via contrastive learning"],"prefix":"10.1007","volume":"53","author":[{"given":"Jianzhou","family":"Feng","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4082-6870","authenticated-orcid":false,"given":"Jiawei","family":"Sun","sequence":"additional","affiliation":[]},{"given":"Di","family":"Shao","sequence":"additional","affiliation":[]},{"given":"Jinman","family":"Cui","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,8,5]]},"reference":[{"key":"3947_CR1","doi-asserted-by":"publisher","unstructured":"Rajpurkar P, Zhang J, Lopyrev K, Liang P (2016) SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 Conference on empirical methods in natural language processing. Association for Computational Linguistics, https:\/\/doi.org\/10.18653\/v1\/D16-1264. https:\/\/aclanthology.org\/D16-1264, pp 2383\u20132392","DOI":"10.18653\/v1\/D16-1264"},{"key":"3947_CR2","doi-asserted-by":"publisher","unstructured":"Lai G, Xie Q, Liu H, Yang Y, Hovy E (2017) RACE: Large-scale ReAding comprehension dataset from examinations. In: Proceedings of the 2017 Conference on empirical methods in natural language processing. Association for Computational Linguistics, https:\/\/doi.org\/10.18653\/v1\/D17-1082. https:\/\/aclanthology.org\/D17-1082, pp 785\u2013794","DOI":"10.18653\/v1\/D17-1082"},{"key":"3947_CR3","doi-asserted-by":"publisher","unstructured":"Jia R, Liang P (2017) Adversarial examples for evaluating reading comprehension systems. 
In: Proceedings of the 2017 Conference on empirical methods in natural language processing. Association for Computational Linguistics, Copenhagen, Denmark. https:\/\/doi.org\/10.18653\/v1\/D17-1215. https:\/\/aclanthology.org\/D17-1215, pp 2021\u20132031","DOI":"10.18653\/v1\/D17-1215"},{"key":"3947_CR4","doi-asserted-by":"publisher","unstructured":"Jin D, Jin Z, Zhou JT, Szolovits P (2020) Is bert really robust? a strong baseline for natural language attack on text classification and entailment, vol 34. https:\/\/doi.org\/10.1609\/aaai.v34i05.6311","DOI":"10.1609\/aaai.v34i05.6311"},{"key":"3947_CR5","doi-asserted-by":"publisher","unstructured":"Garg S, Ramakrishnan G (2020) BAE: BERT-based adversarial examples for text classification. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP). Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.498. https:\/\/aclanthology.org\/2020.emnlp-main.498, pp 6174\u20136181","DOI":"10.18653\/v1\/2020.emnlp-main.498"},{"key":"3947_CR6","doi-asserted-by":"publisher","unstructured":"Welbl J, Minervini P, Bartolo M, Stenetorp P, Riedel S (2020) Undersensitivity in neural reading comprehension. In: Association for computational linguistics, https:\/\/doi.org\/10.18653\/v1\/2020.findings-emnlp.103. https:\/\/aclanthology.org\/2020.findings-emnlp.103, pp 1152\u20131165","DOI":"10.18653\/v1\/2020.findings-emnlp.103"},{"key":"3947_CR7","doi-asserted-by":"publisher","unstructured":"Wang Y, Bansal M (2018) Robust machine comprehension models via adversarial training. In: Proceedings of the 2018 Conference of the north american chapter of the association for computational linguistics: Human language technologies, Volume 2 (Short Papers). Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/N18-2091. 
https:\/\/aclanthology.org\/N18-2091, pp 575\u2013581","DOI":"10.18653\/v1\/N18-2091"},{"issue":"05","key":"3947_CR8","doi-asserted-by":"publisher","first-page":"8392","DOI":"10.1609\/aaai.v34i05.6357","volume":"34","author":"K Liu","year":"2020","unstructured":"Liu K, Liu X, Yang A, Liu J, Su J, Li S, She Q (2020) A robust adversarial training approach to machine reading comprehension. Proceedings of the AAAI conference on artificial intelligence 34 (05):8392\u20138400. https:\/\/doi.org\/10.1609\/aaai.v34i05.6357","journal-title":"Proceedings of the AAAI conference on artificial intelligence"},{"key":"3947_CR9","doi-asserted-by":"crossref","unstructured":"Majumder S, Samant C, Durrett G (2021) Model agnostic answer reranking system for adversarial question answering. In: Proceedings of the 16th Conference of the european chapter of the association for computational linguistics: Student research workshop. Association for Computational Linguistics. https:\/\/aclanthology.org\/2021.eacl-srw.8 , pp 50\u201357","DOI":"10.18653\/v1\/2021.eacl-srw.8"},{"issue":"15","key":"3947_CR10","doi-asserted-by":"publisher","first-page":"13762","DOI":"10.1609\/aaai.v35i15.17622","volume":"35","author":"V Schlegel","year":"2021","unstructured":"Schlegel V, Nenadic G, Batista-Navarro R (2021) Semantics altering modifications for evaluating comprehension in machine reading. Proceedings of the AAAI Conference on Artificial Intelligence 35 (15):13762\u201313770","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"3947_CR11","doi-asserted-by":"publisher","unstructured":"Wang C, Jiang H (2019) Explicit utilization of general knowledge in machine reading comprehension. In: Proceedings of the 57th Annual meeting of the association for computational linguistics. Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/P19-1219. 
https:\/\/aclanthology.org\/P19-1219, pp 2263\u20132272","DOI":"10.18653\/v1\/P19-1219"},{"key":"3947_CR12","unstructured":"Yang Z, Cui Y, Che W, Liu T, Wang S, Hu G (2019) Improving machine reading comprehension via adversarial training. arXiv:1911.03614"},{"key":"3947_CR13","doi-asserted-by":"publisher","unstructured":"Yang Z, Cui Y, Si C, Che W, Liu T, Wang S, Hu G (2021) Adversarial training for machine reading comprehension with virtual embeddings. In: Proceedings of *SEM 2021: The Tenth joint conference on lexical and computational semantics. Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2021.starsem-1.30. https:\/\/aclanthology.org\/2021.starsem-1.30, pp 308\u2013313","DOI":"10.18653\/v1\/2021.starsem-1.30"},{"key":"3947_CR14","doi-asserted-by":"publisher","unstructured":"Chen J, Durrett G (2021) Robust question answering through sub-part alignment. In: Proceedings of the 2021 Conference of the north american chapter of the association for computational linguistics: human language technologies. Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.98. https:\/\/aclanthology.org\/2021.naacl-main.98, pp 1251\u20131263","DOI":"10.18653\/v1\/2021.naacl-main.98"},{"key":"3947_CR15","doi-asserted-by":"publisher","first-page":"2500","DOI":"10.1109\/TASLP.2020.3016132","volume":"28","author":"M Zhou","year":"2020","unstructured":"Zhou M, Huang M, Zhu X (2020) Robust reading comprehension with linguistic constraints via posterior regularization. IEEE\/ACM Trans Audio, Speech, Language Process 28:2500\u20132510. https:\/\/doi.org\/10.1109\/TASLP.2020.3016132","journal-title":"IEEE\/ACM Trans Audio, Speech, Language Process"},{"key":"3947_CR16","doi-asserted-by":"publisher","unstructured":"Yeh Y-T, Chen Y-N (2019) QAInfomax: Learning robust question answering system by mutual information maximization. 
In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP). Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/D19-1333. https:\/\/aclanthology.org\/D19-1333, pp 3370\u20133375","DOI":"10.18653\/v1\/D19-1333"},{"key":"3947_CR17","unstructured":"Chen T, Kornblith S, Norouzi M, Hinton G (2020) A simple framework for contrastive learning of visual representations. In: III, H.D., Singh, A. (eds.) Proceedings of the 37th International conference on machine learning proceedings of machine learning research. PMLR. https:\/\/proceedings.mlr.press\/v119\/chen20j.html, vol 119, pp 1597\u20131607"},{"key":"3947_CR18","doi-asserted-by":"crossref","unstructured":"Gao T, Yao X, Chen D (2021) Simcse: Simple contrastive learning of sentence embeddings. arXiv:2104.08821","DOI":"10.18653\/v1\/2021.emnlp-main.552"},{"key":"3947_CR19","doi-asserted-by":"publisher","unstructured":"Wang D, Ding N, Li P, Zheng H (2021) CLINE: Contrastive learning with semantic negative examples for natural language understanding. In: Proceedings of the 59th Annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers). Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.181. https:\/\/aclanthology.org\/2021.acl-long.181, pp 2332\u20132342","DOI":"10.18653\/v1\/2021.acl-long.181"},{"key":"3947_CR20","doi-asserted-by":"publisher","unstructured":"Yan Y, Li R, Wang S, Zhang F, Wu W, Xu W (2021) ConSERT: A contrastive framework for self-supervised sentence representation transfer. In: Proceedings of the 59th Annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers). Association for Computational Linguistics. 
https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.393. https:\/\/aclanthology.org\/2021.acl-long.393, pp 5065\u20135075","DOI":"10.18653\/v1\/2021.acl-long.393"},{"key":"3947_CR21","doi-asserted-by":"publisher","unstructured":"Zhang D, Nan F, Wei X, Li S-W, Zhu H, McKeown K, Nallapati R, Arnold AO, Xiang B (2021) Supporting clustering with contrastive learning. In: Proceedings of the 2021 Conference of the north american chapter of the association for computational linguistics: Human language technologies. Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/2021.naacl-main.427. https:\/\/aclanthology.org\/2021.naacl-main.427, pp 5419\u20135430","DOI":"10.18653\/v1\/2021.naacl-main.427"},{"key":"3947_CR22","unstructured":"Liebel L, K\u00f6rner M (2018) Auxiliary tasks in multi-task learning. arXiv:1805.06334"},{"key":"3947_CR23","doi-asserted-by":"crossref","unstructured":"Pennington J, Socher R, Manning CD (2014) Glove: Global vectors for word representation. In: Proceedings of the 2014 Conference on empirical methods in natural language processing (EMNLP), pp 1532\u20131543","DOI":"10.3115\/v1\/D14-1162"},{"issue":"11","key":"3947_CR24","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1145\/219717.219748","volume":"38","author":"GA Miller","year":"1995","unstructured":"Miller GA (1995) Wordnet: a lexical database for english. Commun ACM 38(11):39\u201341","journal-title":"Commun ACM"},{"key":"3947_CR25","doi-asserted-by":"publisher","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the north american chapter of the association for computational linguistics: Human language technologies, volume 1 (Long and Short Papers). Association for Computational Linguistics. https:\/\/doi.org\/10.18653\/v1\/N19-1423. 
https:\/\/aclanthology.org\/N19-1423, pp 4171\u20134186","DOI":"10.18653\/v1\/N19-1423"},{"key":"3947_CR26","unstructured":"Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, Levy O, Lewis M, Zettlemoyer L, Stoyanov V (2019) Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692"},{"key":"3947_CR27","doi-asserted-by":"publisher","unstructured":"Hu M, Peng Y, Huang Z, Qiu X, Wei F, Zhou M (2018) Reinforced mnemonic reader for machine reading comprehension. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18. International Joint Conferences on Artificial Intelligence Organization. https:\/\/doi.org\/10.24963\/ijcai.2018\/570, pp 4099\u20134106","DOI":"10.24963\/ijcai.2018\/570"},{"key":"3947_CR28","doi-asserted-by":"publisher","first-page":"106075","DOI":"10.1016\/j.knosys.2020.106075","volume":"203","author":"Z Wu","year":"2020","unstructured":"Wu Z, Xu H (2020) Improving the robustness of machine reading comprehension model with hierarchical knowledge and auxiliary unanswerability prediction. Knowl Based Syst 203:106075. 
https:\/\/doi.org\/10.1016\/j.knosys.2020.106075","journal-title":"Knowl Based Syst"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03947-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03947-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03947-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,30]],"date-time":"2023-04-30T09:21:02Z","timestamp":1682846462000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03947-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,5]]},"references-count":28,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["3947"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03947-w","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"value":"0924-669X","type":"print"},{"value":"1573-7497","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,5]]},"assertion":[{"value":"29 June 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 August 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}