{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T11:32:01Z","timestamp":1770463921504,"version":"3.49.0"},"reference-count":57,"publisher":"Association for Computing Machinery (ACM)","issue":"2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:p>Entity Matching (EM), which aims to identify whether two entity records from two relational tables refer to the same real-world entity, is one of the fundamental problems in data management. Traditional EM assumes that two tables are homogeneous with the aligned schema, while it is common that entity records of different formats (e.g., relational, semi-structured, or textual types) involve in practical scenarios. It is not practical to unify their schemas due to the different formats. To support EM on format-different entity records, Generalized Entity Matching (GEM) has been proposed and gained much attention recently. To do GEM, existing methods typically perform in a supervised learning way, which relies on a large amount of high-quality labeled examples. However, the labeling process is extremely labor-intensive, and frustrates the use of GEM. Low-resource GEM, i.e., GEM that only requires a small number of labeled examples, becomes an urgent need. To this end, this paper, for the first time, focuses on the low-resource GEM and proposes a novel low-resource GEM method, termed as PromptEM. PromptEM has addressed three challenging issues (i.e., designing GEM-specific prompt-tuning, improving pseudo-labels quality, and running efficient self-training) in low-resource GEM. 
Extensive experimental results on eight real benchmarks demonstrate the superiority of PromptEM in terms of effectiveness and efficiency.<\/jats:p>","DOI":"10.14778\/3565816.3565836","type":"journal-article","created":{"date-parts":[[2022,11,24]],"date-time":"2022-11-24T00:35:16Z","timestamp":1669250116000},"page":"369-378","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":25,"title":["PromptEM"],"prefix":"10.14778","volume":"16","author":[{"given":"Pengfei","family":"Wang","sequence":"first","affiliation":[{"name":"Zhejiang University"}]},{"given":"Xiaocan","family":"Zeng","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Lu","family":"Chen","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Fan","family":"Ye","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Yuren","family":"Mao","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Junhao","family":"Zhu","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Yunjun","family":"Gao","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]}],"member":"320","published-online":{"date-parts":[[2022,11,23]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"crossref","unstructured":"Naser Ahmadi Hansjorg Sand and Paolo Papotti. 2022. Unsupervised Matching of Data and Text. In ICDE. 1058--1070.  Naser Ahmadi Hansjorg Sand and Paolo Papotti. 2022. Unsupervised Matching of Data and Text. In ICDE. 1058--1070.","DOI":"10.1109\/ICDE53745.2022.00084"},{"key":"e_1_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Pasquale Balsebre Dezhong Yao Gao Cong and Zhen Hai. 2022. Geospatial Entity Resolution. In WWW. 3061--3070.  Pasquale Balsebre Dezhong Yao Gao Cong and Zhen Hai. 2022. Geospatial Entity Resolution. In WWW. 
3061--3070.","DOI":"10.1145\/3485447.3512026"},{"key":"e_1_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Mikhail Bilenko and Raymond J Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In SIGKDD. 39--48.  Mikhail Bilenko and Raymond J Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In SIGKDD. 39--48.","DOI":"10.1145\/956750.956759"},{"key":"e_1_2_1_4_1","unstructured":"Tom B Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell etal 2020. Language Models are Few-Shot Learners. In NeurIPS. 1877--1901.  Tom B Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al. 2020. Language Models are Few-Shot Learners. In NeurIPS. 1877--1901."},{"key":"e_1_2_1_5_1","doi-asserted-by":"crossref","unstructured":"William W Cohen and Jacob Richman. 2002. Learning to match and cluster large high-dimensional data sets for data integration. In SIGKDD. 475--480.  William W Cohen and Jacob Richman. 2002. Learning to match and cluster large high-dimensional data sets for data integration. In SIGKDD. 475--480.","DOI":"10.1145\/775047.775116"},{"key":"e_1_2_1_6_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . 2018 . Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_7_1","volume-title":"Prompt-learning for fine-grained entity typing. 
arXiv preprint arXiv:2108.10604","author":"Ding Ning","year":"2021","unstructured":"Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, and Hong-Gee Kim. 2021. Prompt-learning for fine-grained entity typing. arXiv preprint arXiv:2108.10604 (2021)."},
{"key":"e_1_2_1_8_1","volume-title":"Openprompt: An open-source framework for prompt-learning. arXiv preprint arXiv:2111.01998","author":"Ding Ning","year":"2021","unstructured":"Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, and Maosong Sun. 2021. Openprompt: An open-source framework for prompt-learning. arXiv preprint arXiv:2111.01998 (2021)."},
{"key":"e_1_2_1_9_1","doi-asserted-by":"crossref","unstructured":"Thomas Dopierre, Christophe Gravier, Julien Subercaze, and Wilfried Logerais. 2020. Few-shot pseudo-labeling for intent detection. In COLING. 4993--5003.","DOI":"10.18653\/v1\/2020.coling-main.438"},
{"key":"e_1_2_1_10_1","first-page":"1454","article-title":"Distributed representations of tuples for entity resolution","volume":"11","author":"Ebraheem Muhammad","year":"2018","unstructured":"Muhammad Ebraheem, Saravanan Thirumuruganathan, Shafiq Joty, Mourad Ouzzani, and Nan Tang. 2018. Distributed representations of tuples for entity resolution. PVLDB 11, 11 (2018), 1454--1467.","journal-title":"PVLDB"},
{"key":"e_1_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Ahmed Elmagarmid, Ihab F Ilyas, Mourad Ouzzani, Jorge-Arnulfo Quian\u00e9-Ruiz, Nan Tang, and Si Yin. 2014. NADEEF\/ER: Generic and interactive entity resolution. In SIGMOD. 1071--1074.","DOI":"10.1145\/2588555.2594511"},
{"key":"e_1_2_1_12_1","unstructured":"Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML. 1050--1059."},
{"key":"e_1_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Yunjun Gao, Xiaoze Liu, Junyang Wu, Tianyi Li, Pengfei Wang, and Lu Chen. 2022. ClusterEA: Scalable Entity Alignment with Stochastic Training and Normalized Mini-batch Similarities. In KDD. 421--431.","DOI":"10.1145\/3534678.3539331"},
{"key":"e_1_2_1_14_1","first-page":"237","article-title":"LargeEA: Aligning Entities for Large-scale Knowledge Graphs","volume":"15","author":"Ge Congcong","year":"2022","unstructured":"Congcong Ge, Xiaoze Liu, Lu Chen, Baihua Zheng, and Yunjun Gao. 2022. LargeEA: Aligning Entities for Large-scale Knowledge Graphs. PVLDB 15, 2 (2022), 237--245.","journal-title":"PVLDB"},
{"key":"e_1_2_1_15_1","volume-title":"CollaborEM: A Self-supervised Entity Matching Framework Using Multi-features Collaboration. TKDE","author":"Ge Congcong","year":"2021","unstructured":"Congcong Ge, Pengfei Wang, Lu Chen, Xiaoze Liu, Baihua Zheng, and Yunjun Gao. 2021. CollaborEM: A Self-supervised Entity Matching Framework Using Multi-features Collaboration. TKDE (2021)."},
{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2588555.2588576"},
{"key":"e_1_2_1_17_1","doi-asserted-by":"crossref","unstructured":"Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In ICASSP. 6645--6649.","DOI":"10.1109\/ICASSP.2013.6638947"},
{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2021.08.002"},
{"key":"e_1_2_1_19_1","volume-title":"Ptr: Prompt tuning with rules for text classification. arXiv preprint arXiv:2105.11259","author":"Han Xu","year":"2021","unstructured":"Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. Ptr: Prompt tuning with rules for text classification. arXiv preprint arXiv:2105.11259 (2021)."},
{"key":"e_1_2_1_20_1","unstructured":"Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting Self-Training for Neural Sequence Generation. In ICLR."},
{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.14778\/3494124.3494131"},
{"key":"e_1_2_1_22_1","doi-asserted-by":"crossref","unstructured":"Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, and Lucian Popa. 2019. Low-resource Deep Entity Resolution with Transfer and Active Learning. In ACL. 5851--5861.","DOI":"10.18653\/v1\/P19-1586"},
{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.14778\/3007263.3007314"},
{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.14778\/3421424.3421431"},
{"key":"e_1_2_1_25_1","volume-title":"prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586","author":"Liu Pengfei","year":"2021","unstructured":"Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)."},
{"key":"e_1_2_1_26_1","volume-title":"GPT understands, too. arXiv preprint arXiv:2103.10385","author":"Liu Xiao","year":"2021","unstructured":"Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385 (2021)."},
{"key":"e_1_2_1_27_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)."},
{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3410157"},
{"key":"e_1_2_1_29_1","first-page":"49","article-title":"Generic schema matching with cupid","volume":"1","author":"Madhavan Jayant","year":"2001","unstructured":"Jayant Madhavan, Philip A Bernstein, and Erhard Rahm. 2001. Generic schema matching with cupid. PVLDB 1, 2001 (2001), 49--58.","journal-title":"PVLDB"},
{"key":"e_1_2_1_30_1","volume-title":"Human-powered sorts and joins. arXiv preprint arXiv:1109.6881","author":"Marcus Adam","year":"2011","unstructured":"Adam Marcus, Eugene Wu, David Karger, Samuel Madden, and Robert Miller. 2011. Human-powered sorts and joins. arXiv preprint arXiv:1109.6881 (2011)."},
{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3448016.3457258"},
{"key":"e_1_2_1_32_1","doi-asserted-by":"crossref","unstructured":"Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In SIGMOD. 19--34.","DOI":"10.1145\/3183713.3196926"},
{"key":"e_1_2_1_33_1","unstructured":"Subhabrata Mukherjee and Ahmed Awadallah. 2020. Uncertainty-aware self-training for few-shot text classification. NeurIPS 21199--21212."},
{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107729"},
{"key":"e_1_2_1_35_1","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: an imperative style high-performance deep learning library. In NeurIPS. 8026--8037."},
{"key":"e_1_2_1_36_1","unstructured":"Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. 2021. Deep Learning on a Data Diet: Finding Important Examples Early in Training. NeurIPS 34."},
{"key":"e_1_2_1_37_1","volume-title":"Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML. 8748--8763."},
{"key":"e_1_2_1_38_1","volume-title":"Sentence-bert: Sentence embeddings using siamese bert-networks. In EMNLP.","author":"Reimers Nils","year":"2019","unstructured":"Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In EMNLP."},
{"key":"e_1_2_1_39_1","unstructured":"Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. 2021. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In ICLR."},
{"key":"e_1_2_1_40_1","doi-asserted-by":"crossref","unstructured":"Sunita Sarawagi and Anuradha Bhamidipaty. 2002. Interactive deduplication using active learning. In SIGKDD. 269--278.","DOI":"10.1145\/775047.775087"},
{"key":"e_1_2_1_41_1","volume-title":"A mathematical exploration of why language models help solve downstream tasks. arXiv preprint arXiv:2010.03648","author":"Saunshi Nikunj","year":"2020","unstructured":"Nikunj Saunshi, Sadhika Malladi, and Sanjeev Arora. 2020. A mathematical exploration of why language models help solve downstream tasks. arXiv preprint arXiv:2010.03648 (2020)."},
{"key":"e_1_2_1_42_1","volume-title":"Eric Wallace, and Sameer Singh.","author":"Shin Taylor","year":"2020","unstructured":"Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In EMNLP. 4222--4235."},
{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.14778\/3149193.3149199"},
{"key":"e_1_2_1_44_1","volume-title":"MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators. In ACL. 6131--6142.","author":"Tan Zhixing","year":"2022","unstructured":"Zhixing Tan, Xiangwen Zhang, Shuo Wang, and Yang Liu. 2022. MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators. In ACL. 6131--6142."},
{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.14778\/3476249.3476294"},
{"key":"e_1_2_1_46_1","volume-title":"Mourad Ouzzani, Nan Tang, and Shafiq Joty.","author":"Thirumuruganathan Saravanan","year":"2018","unstructured":"Saravanan Thirumuruganathan, Shameem A Puthiya Parambath, Mourad Ouzzani, Nan Tang, and Shafiq Joty. 2018. Reuse and Adaptation for Entity Resolution through Transfer Learning. arXiv e-prints (2018), arXiv-1809."},
{"key":"e_1_2_1_47_1","doi-asserted-by":"crossref","unstructured":"Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan. 2006. Fast random walk with restart and its applications. In ICDM. 613--622.","DOI":"10.1109\/ICDM.2006.70"},
{"key":"e_1_2_1_48_1","unstructured":"Jianhong Tu, Ju Fan, Nan Tang, Peng Wang, Chengliang Chai, Guoliang Li, Ruixue Fan, and Xiaoyong Du. 2022. Domain Adaptation for Deep Entity Resolution. In SIGMOD. 443--457."},
{"key":"e_1_2_1_49_1","volume-title":"Yee Whye Teh, and Yarin Gal","author":"Amersfoort Joost Van","year":"2020","unstructured":"Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. 2020. Uncertainty estimation using a single deep deterministic neural network. In ICML. 9690--9700."},
{"key":"e_1_2_1_50_1","doi-asserted-by":"crossref","unstructured":"Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. In EMNLP. 5307--5315.","DOI":"10.18653\/v1\/D19-1534"},
{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.14778\/2350229.2350263"},
{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.14778\/2021017.2021020"},
{"key":"e_1_2_1_53_1","volume-title":"Machamp: A Generalized Entity Matching Benchmark. In CIKM. 4633--4642.","author":"Wang Jin","year":"2021","unstructured":"Jin Wang, Yuliang Li, and Wataru Hirota. 2021. Machamp: A Generalized Entity Matching Benchmark. In CIKM. 4633--4642."},
{"key":"e_1_2_1_54_1","doi-asserted-by":"crossref","unstructured":"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019).","DOI":"10.18653\/v1\/2020.emnlp-demos.6"},
{"key":"e_1_2_1_55_1","unstructured":"Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, and Michael Auli. 2021. Self-training and pre-training are complementary for speech recognition. In ICASSP. 3030--3034."},
{"key":"e_1_2_1_56_1","doi-asserted-by":"crossref","unstructured":"Zijun Yao, Chengjiang Li, Tiansi Dong, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Yichi Zhang, and Zelin Dai. 2021. Interpretable and Low-Resource Entity Matching via Decoupling Feature Learning from Decision Making. In ACL. 2770--2781.","DOI":"10.18653\/v1\/2021.acl-long.215"},
{"key":"e_1_2_1_57_1","volume-title":"Auto-em: End-to-end fuzzy entity-matching using pre-trained deep models and transfer learning. In WWW. 
2413--2424.","author":"Zhao Chen","year":"2019","unstructured":"Chen Zhao and Yeye He. 2019. Auto-em: End-to-end fuzzy entity-matching using pre-trained deep models and transfer learning. In WWW. 2413--2424."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3565816.3565836","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,28]],"date-time":"2022-12-28T09:39:02Z","timestamp":1672220342000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3565816.3565836"}},"subtitle":["prompt-tuning for low-resource generalized entity matching"],"short-title":[],"issued":{"date-parts":[[2022,10]]},"references-count":57,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["10.14778\/3565816.3565836"],"URL":"https:\/\/doi.org\/10.14778\/3565816.3565836","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2022,10]]},"assertion":[{"value":"2022-11-23","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}