{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:26:29Z","timestamp":1750220789238,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":23,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,7,25]],"date-time":"2020-07-25T00:00:00Z","timestamp":1595635200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Ministry of Education","award":["2018B01004"],"award-info":[{"award-number":["2018B01004"]}]},{"name":"National Key Research and Development Program of China","award":["2017YFB1402400"],"award-info":[{"award-number":["2017YFB1402400"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,7,25]]},"DOI":"10.1145\/3397271.3401195","type":"proceedings-article","created":{"date-parts":[[2020,7,25]],"date-time":"2020-07-25T07:50:08Z","timestamp":1595663408000},"page":"1665-1668","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading Comprehension"],"prefix":"10.1145","author":[{"given":"Jie","family":"Cai","sequence":"first","affiliation":[{"name":"Peking University, Beijing, China"}]},{"given":"Zhengzhou","family":"Zhu","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"given":"Ping","family":"Nie","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"given":"Qian","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Technology Sydney, Sydney, Australia"}]}],"member":"320","published-online":{"date-parts":[[2020,7,25]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Kevin Clark and etal 2019. What Does BERT Look At? An Analysis of BERT's Attention. In ACL.  Kevin Clark and et al. 2019. What Does BERT Look At? An Analysis of BERT's Attention. In ACL.","DOI":"10.18653\/v1\/W19-4828"},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"crossref","unstructured":"Alexis Conneau and etal 2018. What you can cram into a single &!#* vector: Probing sentence embeddings for linguistic properties. In ACL.  Alexis Conneau and et al. 2018. What you can cram into a single &!#* vector: Probing sentence embeddings for linguistic properties. In ACL.","DOI":"10.18653\/v1\/P18-1198"},{"key":"e_1_3_2_2_3_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL.","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin and 2019 . BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL. Jacob Devlin and et al. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL."},{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-5603"},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"crossref","unstructured":"Ganesh Jawahar and etal 2019. What does BERT learn about the structure of language?. In ACL.  Ganesh Jawahar and et al. 2019. What does BERT learn about the structure of language?. In ACL.","DOI":"10.18653\/v1\/P19-1356"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"crossref","unstructured":"Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In ACL.  Ben Kantor and Amir Globerson. 
{"key":"e_1_3_2_2_7_1","volume-title":"Revealing the dark secrets of bert. EMNLP","author":"Kovaleva Olga","year":"2019","unstructured":"Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. EMNLP (2019)."},
{"key":"e_1_3_2_2_8_1","unstructured":"Tom Kwiatkowski et al. 2019. Natural questions: a benchmark for question answering research. In ACL."},
{"key":"e_1_3_2_2_9_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In ICLR.","author":"Lan Zhen-Zhong","year":"2019","unstructured":"Zhen-Zhong Lan et al. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In ICLR."},
{"key":"e_1_3_2_2_10_1","volume-title":"Yi Chern Tan, and Robert Frank","author":"Lin Yongjie","year":"2019","unstructured":"Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open Sesame: Getting inside BERT's Linguistic Knowledge. In ACL."},
{"key":"e_1_3_2_2_11_1","unstructured":"Yinhan Liu et al. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv abs\/1907.11692 (2019)."},
{"key":"e_1_3_2_2_12_1","volume-title":"Smith","author":"Peters Matthew E.","year":"2019","unstructured":"Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks. ACL (2019)."},
{"key":"e_1_3_2_2_13_1","unstructured":"Chen Qu et al. 2019. BERT with History Answer Embedding for Conversational Question Answering. In SIGIR."},
{"key":"e_1_3_2_2_14_1","doi-asserted-by":"crossref","unstructured":"Barbara Rychalska et al. 2018. Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System. EMNLP (2018).","DOI":"10.18653\/v1\/W18-5436"},
{"key":"e_1_3_2_2_15_1","doi-asserted-by":"crossref","unstructured":"Barbara Rychalska et al. 2018. How much should you ask? On the question structure in QA systems. EMNLP (2018).","DOI":"10.18653\/v1\/W18-5435"},
{"key":"e_1_3_2_2_16_1","unstructured":"Chenglei Si et al. 2019. What does BERT Learn from Multiple-Choice Reading Comprehension Datasets? arXiv:cs.CL\/1910.12391"},
{"key":"e_1_3_2_2_17_1","doi-asserted-by":"crossref","unstructured":"Cong Sun and Zhihao Yang. 2019. Transfer Learning in Biomedical Named Entity Recognition: An Evaluation of BERT in the PharmaCoNER task. In SIGIR.","DOI":"10.18653\/v1\/D19-5715"},
{"key":"e_1_3_2_2_18_1","doi-asserted-by":"crossref","unstructured":"Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT Rediscovers the Classical NLP Pipeline. In ACL.","DOI":"10.18653\/v1\/P19-1452"},
{"key":"e_1_3_2_2_19_1","unstructured":"Ian Tenney et al. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In ICLR."},
{"key":"e_1_3_2_2_20_1","doi-asserted-by":"crossref","unstructured":"Betty van Aken et al. 2019. How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations. In CIKM.","DOI":"10.1145\/3357384.3358028"},
{"key":"e_1_3_2_2_21_1","unstructured":"Ashish Vaswani et al. 2017. Attention is All you Need. In NIPS."},
{"key":"e_1_3_2_2_22_1","volume-title":"Enhancing Unsupervised Pretraining with External Knowledge for Natural Language Inference. In Canadian Conference on Artificial Intelligence. Springer.","author":"Yang Xiaoyu","year":"2019","unstructured":"Xiaoyu Yang, Xiaodan Zhu, Huasha Zhao, Qiong Zhang, and Yufei Feng. 2019. Enhancing Unsupervised Pretraining with External Knowledge for Natural Language Inference. In Canadian Conference on Artificial Intelligence. Springer."},
{"key":"e_1_3_2_2_23_1","unstructured":"Zhilin Yang et al. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In NeurIPS."}
],"event":{"name":"SIGIR '20: The 43rd International ACM SIGIR conference on research and development in Information Retrieval","sponsor":["SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Virtual Event, China","acronym":"SIGIR '20"},"container-title":["Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3397271.3401195","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3397271.3401195","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:41:43Z","timestamp":1750200103000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3397271.3401195"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,25]]},"references-count":23,"alternative-id":["10.1145\/3397271.3401195","10.1145\/3397271"],"URL":"https:\/\/doi.org\/10.1145\/3397271.3401195","relation":{},"subject":[],"published":{"date-parts":[[2020,7,25]]},"assertion":[{"value":"2020-07-25","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
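
The record above is the envelope returned by the Crossref REST API's works endpoint ("status", "message-type", "message"). Below is a minimal sketch of how such a record can be retrieved and its reference list read back, assuming only the public api.crossref.org endpoint and the Python requests package; field names match the record shown above.

import requests

# DOI of the work whose Crossref record is shown above.
DOI = "10.1145/3397271.3401195"

# The works endpoint returns the same envelope as above:
# {"status": "ok", "message-type": "work", ..., "message": {...}}
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

print(work["title"][0])            # paper title
print(work["container-title"][0])  # proceedings name
print(work["references-count"])    # 23 for this record

# Each "reference" entry may carry an "unstructured" citation string
# and/or a resolved "DOI"; both fields are optional per entry.
for ref in work.get("reference", []):
    print("-", ref.get("unstructured") or ref.get("DOI", "(no citation text)"))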