{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T18:33:30Z","timestamp":1763058810091,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":68,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T00:00:00Z","timestamp":1665964800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,17]]},"DOI":"10.1145\/3511808.3557459","type":"proceedings-article","created":{"date-parts":[[2022,10,16]],"date-time":"2022-10-16T01:22:22Z","timestamp":1665883342000},"page":"1124-1134","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":13,"title":["SPOT"],"prefix":"10.1145","author":[{"given":"Jiacheng","family":"Li","sequence":"first","affiliation":[{"name":"University of California, San Diego, San Diego, CA, USA"}]},{"given":"Yannis","family":"Katsis","sequence":"additional","affiliation":[{"name":"IBM Research, San Jose, CA, USA"}]},{"given":"Tyler","family":"Baldwin","sequence":"additional","affiliation":[{"name":"IBM Research, San Jose, CA, USA"}]},{"given":"Ho-Cheol","family":"Kim","sequence":"additional","affiliation":[{"name":"IBM Research, San Jose, CA, USA"}]},{"given":"Andrew","family":"Bartko","sequence":"additional","affiliation":[{"name":"University of California, San Diego, San Diego, CA, USA"}]},{"given":"Julian","family":"McAuley","sequence":"additional","affiliation":[{"name":"University of California, San Diego, San Diego, CA, USA"}]},{"given":"Chun-Nan","family":"Hsu","sequence":"additional","affiliation":[{"name":"University of California, San Diego, San Diego, CA, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2022,10,17]]},"reference":[
{"key":"e_1_3_2_1_1_1","unstructured":"Antoine Bordes, Nicolas Usunier, Alberto Garc\u00eda-Dur\u00e1n, J. Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In NIPS."},
{"key":"e_1_3_2_1_2_1","volume-title":"Semi-supervised Sequence Learning. In NIPS.","author":"Dai Andrew M.","year":"2015","unstructured":"Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised Sequence Learning. In NIPS."},
{"key":"e_1_3_2_1_3_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.","author":"Devlin J.","year":"2019","unstructured":"J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT."},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"crossref","unstructured":"Kalpit Dixit and Yaser Al-Onaizan. 2019. Span-Level Model for Relation Extraction. In ACL.","DOI":"10.18653\/v1\/P19-1525"},
{"key":"e_1_3_2_1_5_1","volume-title":"Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. ArXiv","author":"Fedus William","year":"2021","unstructured":"William Fedus, Barret Zoph, and Noam M. Shazeer. 2021. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. ArXiv, Vol. abs\/2101.03961 (2021)."},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"crossref","unstructured":"Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating Training Corpora for NLG Micro-Planners. In ACL.","DOI":"10.18653\/v1\/P17-1017"},
{"key":"e_1_3_2_1_7_1","unstructured":"Edouard Grave, Armand Joulin, Moustapha Ciss\u00e9, David Grangier, and H. J\u00e9gou. 2017. Efficient softmax approximation for GPUs. ArXiv, Vol. abs\/1609.04309 (2017)."},
{"key":"e_1_3_2_1_8_1","unstructured":"Pankaj Gupta, Hinrich Sch\u00fctze, and Bernt Andrassy. 2016. Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction. In COLING."},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.5555\/1859664.1859670"},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"crossref","unstructured":"Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In ACL.","DOI":"10.18653\/v1\/P18-1031"},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Yang Jiao, Jiacheng Li, Jiaman Wu, Dezhi Hong, Rajesh K. Gupta, and Jingbo Shang. 2020. SeNsER: Learning Cross-Building Sensor Metadata Tagger. In FINDINGS.","DOI":"10.18653\/v1\/2020.findings-emnlp.85"},
{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00300"},
{"key":"e_1_3_2_1_13_1","volume-title":"75 Languages, 1 Model: Parsing Universal Dependencies Universally. ArXiv","author":"Kondratyuk D.","year":"2019","unstructured":"D. Kondratyuk. 2019. 75 Languages, 1 Model: Parsing Universal Dependencies Universally. ArXiv, Vol. abs\/1904.02099 (2019)."},
{"key":"e_1_3_2_1_14_1","volume-title":"A Mutual Information Maximization Perspective of Language Representation Learning. ArXiv","author":"Kong Lingpeng","year":"2020","unstructured":"Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, and Dani Yogatama. 2020. A Mutual Information Maximization Perspective of Language Representation Learning. ArXiv, Vol. abs\/1910.08350 (2020)."},
{"key":"e_1_3_2_1_15_1","unstructured":"Guillaume Lample and Alexis Conneau. 2019. Cross-lingual Language Model Pretraining. In NeurIPS."},
{"key":"e_1_3_2_1_16_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. ArXiv","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. ArXiv, Vol. abs\/1909.11942 (2020)."},
{"key":"e_1_3_2_1_17_1","volume-title":"Informing Unsupervised Pretraining with External Linguistic Knowledge. ArXiv","author":"Lauscher Anne","year":"2019","unstructured":"Anne Lauscher, Ivan Vulic, E. Ponti, A. Korhonen, and Goran Glavas. 2019. Informing Unsupervised Pretraining with External Linguistic Knowledge. ArXiv, Vol. abs\/1909.02339 (2019)."},
{"key":"e_1_3_2_1_18_1","volume-title":"End-to-end Neural Coreference Resolution. ArXiv","author":"Lee Kenton","year":"2017","unstructured":"Kenton Lee, Luheng He, M. Lewis, and Luke Zettlemoyer. 2017. End-to-end Neural Coreference Resolution. ArXiv, Vol. abs\/1707.07045 (2017)."},
{"key":"e_1_3_2_1_19_1","volume-title":"Learning Recurrent Span Representations for Extractive Question Answering. ArXiv","author":"Lee Kenton","year":"2016","unstructured":"Kenton Lee, T. Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. 2016. Learning Recurrent Span Representations for Extractive Question Answering. ArXiv, Vol. abs\/1611.01436 (2016)."},
{"key":"e_1_3_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, S. Shalev-Shwartz, A. Shashua, and Y. Shoham. 2020. SenseBERT: Driving Some Sense into BERT. ArXiv, Vol. abs\/1908.05646 (2020).","DOI":"10.18653\/v1\/2020.acl-main.423"},
{"key":"e_1_3_2_1_21_1","volume-title":"BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. ArXiv","author":"Lewis M.","year":"2020","unstructured":"M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. ArXiv, Vol. abs\/1910.13461 (2020)."},
{"key":"e_1_3_2_1_22_1","unstructured":"Jiacheng Li, Haibo Ding, Jingbo Shang, Julian McAuley, and Zhe Feng. 2021a. Weakly Supervised Named Entity Tagging with Learnable Logical Rules. In ACL."},
{"key":"e_1_3_2_1_23_1","article-title":"BioCreative V CDR task corpus: a resource for chemical disease relation extraction","volume":"2016","author":"Li J.","year":"2016","unstructured":"J. Li, Yueping Sun, Robin J. Johnson, D. Sciaky, Chih-Hsuan Wei, Robert Leaman, A. P. Davis, C. Mattingly, Thomas C. Wiegers, and Z. Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database: The Journal of Biological Databases and Curation, Vol. 2016 (2016).","journal-title":"Database: The Journal of Biological Databases and Curation"},
{"key":"e_1_3_2_1_24_1","volume-title":"TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations. In EMNLP.","author":"Li Xianming","year":"2021","unstructured":"Xianming Li, Xiaotian Luo, Cheng Jie Dong, Daichuan Yang, Beidi Luan, and Zhen He. 2021b. TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations. In EMNLP."},
{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00141"},
{"key":"e_1_3_2_1_26_1","volume-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv","author":"Liu Y.","year":"2019","unstructured":"Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, Vol. abs\/1907.11692 (2019)."},
{"key":"e_1_3_2_1_27_1","unstructured":"Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS."},
{"key":"e_1_3_2_1_28_1","doi-asserted-by":"crossref","unstructured":"Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. In EMNLP.","DOI":"10.18653\/v1\/D18-1360"},
{"key":"e_1_3_2_1_29_1","volume-title":"A General Framework for Information Extraction using Dynamic Span Graphs. ArXiv","author":"Luan Yi","year":"2019","unstructured":"Yi Luan, David Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A General Framework for Information Extraction using Dynamic Span Graphs. ArXiv, Vol. abs\/1904.03296 (2019)."},
{"key":"e_1_3_2_1_30_1","volume-title":"Learned in Translation: Contextualized Word Vectors. In NIPS.","author":"McCann Bryan","year":"2017","unstructured":"Bryan McCann, James Bradbury, Caiming Xiong, and R. Socher. 2017. Learned in Translation: Contextualized Word Vectors. In NIPS."},
{"key":"e_1_3_2_1_31_1","volume-title":"UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv","author":"McInnes L.","year":"2018","unstructured":"L. McInnes and John Healy. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv, Vol. abs\/1802.03426 (2018)."},
{"key":"e_1_3_2_1_32_1","unstructured":"Tomas Mikolov, Kai Chen, G. Corrado, and J. Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In ICLR."},
{"key":"e_1_3_2_1_33_1","volume-title":"End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. ArXiv","author":"Miwa Makoto","year":"2016","unstructured":"Makoto Miwa and Mohit Bansal. 2016. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. ArXiv, Vol. abs\/1601.00770 (2016)."},
{"key":"e_1_3_2_1_34_1","doi-asserted-by":"crossref","unstructured":"Makoto Miwa and Yutaka Sasaki. 2014. Modeling Joint Entity and Relation Extraction with Table Representation. In EMNLP.","DOI":"10.3115\/v1\/D14-1200"},
{"key":"e_1_3_2_1_35_1","doi-asserted-by":"crossref","unstructured":"Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A Span Selection Model for Semantic Role Labeling. In EMNLP.","DOI":"10.18653\/v1\/D18-1191"},
{"key":"e_1_3_2_1_36_1","doi-asserted-by":"crossref","unstructured":"Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from Context or Names? An Empirical Study on Neural Relation Extraction. In EMNLP.","DOI":"10.18653\/v1\/2020.emnlp-main.298"},
{"key":"e_1_3_2_1_37_1","volume-title":"Glove: Global Vectors for Word Representation. In EMNLP.","author":"Pennington Jeffrey","year":"2014","unstructured":"Jeffrey Pennington, R. Socher, and Christopher D. Manning. 2014. Glove: Global Vectors for Word Representation. In EMNLP."},
{"key":"e_1_3_2_1_38_1","doi-asserted-by":"crossref","unstructured":"Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT.","DOI":"10.18653\/v1\/N18-1202"},
{"key":"e_1_3_2_1_39_1","volume-title":"Knowledge Enhanced Contextual Word Representations. In EMNLP\/IJCNLP.","author":"Peters Matthew E.","year":"2019","unstructured":"Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, V. Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge Enhanced Contextual Word Representations. In EMNLP\/IJCNLP."},
{"key":"e_1_3_2_1_40_1","volume-title":"BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA. ArXiv","author":"Poerner Nina","year":"2019","unstructured":"Nina Poerner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2019. BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA. ArXiv, Vol. abs\/1911.03681 (2019)."},
{"key":"e_1_3_2_1_41_1","volume-title":"ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. In ACL\/IJCNLP.","author":"Qin Yujia","year":"2021","unstructured":"Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, and Jie Zhou. 2021. ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. In ACL\/IJCNLP."},
{"key":"e_1_3_2_1_42_1","unstructured":"Alec Radford and Karthik Narasimhan. 2018. Improving Language Understanding by Generative Pre-Training."},
{"key":"e_1_3_2_1_43_1","volume-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. ArXiv","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. ArXiv, Vol. abs\/1910.10683 (2020)."},
{"key":"e_1_3_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In EMNLP.","DOI":"10.18653\/v1\/D16-1264"},
{"key":"e_1_3_2_1_45_1","volume-title":"Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. ArXiv","author":"Sang E. T. K.","year":"2003","unstructured":"E. T. K. Sang and F. D. Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. ArXiv, Vol. cs.CL\/0306050 (2003)."},
{"key":"e_1_3_2_1_46_1","doi-asserted-by":"crossref","unstructured":"Jingbo Shang, Liyuan Liu, Xiang Ren, Xiaotao Gu, Teng Ren, and Jiawei Han. 2018. Learning Named Entity Tagger using Domain-Specific Dictionary. In EMNLP.","DOI":"10.18653\/v1\/D18-1230"},
{"key":"e_1_3_2_1_47_1","volume-title":"Matching the Blanks: Distributional Similarity for Relation Learning. ArXiv","author":"Soares Livio Baldini","year":"2019","unstructured":"Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019a. Matching the Blanks: Distributional Similarity for Relation Learning. ArXiv, Vol. abs\/1906.03158 (2019)."},
{"key":"e_1_3_2_1_48_1","volume-title":"Matching the Blanks: Distributional Similarity for Relation Learning. ArXiv","author":"Soares Livio Baldini","year":"2019","unstructured":"Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and T. Kwiatkowski. 2019b. Matching the Blanks: Distributional Similarity for Relation Learning. ArXiv, Vol. abs\/1906.03158 (2019)."},
{"key":"e_1_3_2_1_49_1","volume-title":"MASS: Masked Sequence to Sequence Pre-training for Language Generation. In ICML.","author":"Song Kaitao","year":"2019","unstructured":"Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked Sequence to Sequence Pre-training for Language Generation. In ICML."},
{"key":"e_1_3_2_1_50_1","volume-title":"Properties of the Hubert-Arabie adjusted Rand index. Psychological methods","author":"Steinley Douglas L.","year":"2004","unstructured":"Douglas L. Steinley. 2004. Properties of the Hubert-Arabie adjusted Rand index. Psychological Methods, Vol. 9, 3 (2004), 386--96."},
{"key":"e_1_3_2_1_51_1","volume-title":"VL-BERT: Pre-training of Generic Visual-Linguistic Representations. ArXiv","author":"Su Weijie","year":"2020","unstructured":"Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pre-training of Generic Visual-Linguistic Representations. ArXiv, Vol. abs\/1908.08530 (2020)."},
{"key":"e_1_3_2_1_52_1","volume-title":"VideoBERT: A Joint Model for Video and Language Representation Learning. 2019 IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Sun Chen","year":"2019","unstructured":"Chen Sun, Austin Myers, Carl Vondrick, Kevin P. Murphy, and Cordelia Schmid. 2019. VideoBERT: A Joint Model for Video and Language Representation Learning. 2019 IEEE\/CVF International Conference on Computer Vision (ICCV) (2019), 7463--7472."},
{"key":"e_1_3_2_1_53_1","volume-title":"LXMERT: Learning Cross-Modality Encoder Representations from Transformers. ArXiv","author":"Tan Hao Hao","year":"2019","unstructured":"Hao Hao Tan and Mohit Bansal. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. ArXiv, Vol. abs\/1908.07490 (2019)."},
{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/btaa540"},
{"key":"e_1_3_2_1_55_1","volume-title":"Attention is All you Need. ArXiv","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. ArXiv, Vol. abs\/1706.03762 (2017)."},
{"key":"e_1_3_2_1_56_1","volume-title":"Entity, Relation, and Event Extraction with Contextualized Span Representations. ArXiv","author":"Wadden David","year":"2019","unstructured":"David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, Relation, and Event Extraction with Contextualized Span Representations. ArXiv, Vol. abs\/1909.03546 (2019)."},
{"key":"e_1_3_2_1_57_1","volume-title":"GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In BlackboxNLP@EMNLP.","author":"Wang Alex","year":"2018","unstructured":"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In BlackboxNLP@EMNLP."},
{"key":"e_1_3_2_1_58_1","unstructured":"Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, X. Huang, Jianshu Ji, Cuihong Cao, Daxin Jiang, and M. Zhou. 2020a. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. ArXiv, Vol. abs\/2002.01808 (2020)."},
{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00360"},
{"key":"e_1_3_2_1_60_1","doi-asserted-by":"crossref","unstructured":"Yucheng Wang, Bowen Yu, Y. Zhang, Tingwen Liu, Hongsong Zhu, and L. Sun. 2020b. TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking. In COLING.","DOI":"10.18653\/v1\/2020.coling-main.138"},
{"key":"e_1_3_2_1_61_1","volume-title":"Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model. ArXiv","author":"Xiong Wenhan","year":"2020","unstructured":"Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2020. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model. ArXiv, Vol. abs\/1912.09637 (2020)."},
{"key":"e_1_3_2_1_62_1","volume-title":"Self-Taught Convolutional Neural Networks for Short Text Clustering. Neural Networks: the official journal of the International Neural Network Society","author":"Xu Jiaming","year":"2017","unstructured":"Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-Taught Convolutional Neural Networks for Short Text Clustering. Neural Networks: the official journal of the International Neural Network Society, Vol. 88 (2017), 22--31."},
{"key":"e_1_3_2_1_63_1","volume-title":"LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention. ArXiv","author":"Yamada Ikuya","year":"2020","unstructured":"Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention. ArXiv, Vol. abs\/2010.01057 (2020)."},
{"key":"e_1_3_2_1_64_1","unstructured":"Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Maosong Sun, and Zhiyuan Liu. 2020. Coreferential Reasoning Learning for Language Representation. In EMNLP."},
{"key":"e_1_3_2_1_65_1","unstructured":"Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism. In ACL."},
{"key":"e_1_3_2_1_66_1","volume-title":"GreaseLM: Graph REASoning Enhanced Language Models for Question Answering. ArXiv","author":"Zhang Xikun","year":"2022","unstructured":"Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, and Jure Leskovec. 2022. GreaseLM: Graph REASoning Enhanced Language Models for Question Answering. ArXiv, Vol. abs\/2201.08860 (2022)."},
{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D17-1004"},
{"key":"e_1_3_2_1_68_1","volume-title":"ERNIE: Enhanced Language Representation with Informative Entities. In ACL.","author":"Zhang Zhengyan","year":"2019","unstructured":"Zhengyan Zhang, Xu Han, Z. Liu, Xin Jiang, M. Sun, and Qun Liu. 2019. ERNIE: Enhanced Language Representation with Informative Entities. In ACL."}
],"event":{"name":"CIKM '22: The 31st ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Atlanta GA USA","acronym":"CIKM '22"},"container-title":["Proceedings of the 31st ACM International Conference on Information &amp; Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557459","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3511808.3557459","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:55Z","timestamp":1750182535000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557459"}},"subtitle":["Knowledge-Enhanced Language Representations for Information 
Extraction"],"short-title":[],"issued":{"date-parts":[[2022,10,17]]},"references-count":68,"alternative-id":["10.1145\/3511808.3557459","10.1145\/3511808"],"URL":"https:\/\/doi.org\/10.1145\/3511808.3557459","relation":{},"subject":[],"published":{"date-parts":[[2022,10,17]]},"assertion":[{"value":"2022-10-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}