{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,13]],"date-time":"2025-12-13T07:18:42Z","timestamp":1765610322233,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":29,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,12,23]],"date-time":"2022-12-23T00:00:00Z","timestamp":1671753600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100008838","name":"Shanghai Municipal Commission of Economy and Informatization","doi-asserted-by":"publisher","award":["2021-GZL-RGZN-01020"],"award-info":[{"award-number":["2021-GZL-RGZN-01020"]}],"id":[{"id":"10.13039\/501100008838","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,12,23]]},"DOI":"10.1145\/3578741.3578759","type":"proceedings-article","created":{"date-parts":[[2023,3,7]],"date-time":"2023-03-07T04:18:52Z","timestamp":1678162732000},"page":"88-93","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Keyword Extractor for Contrastive Learning of Unsupervised Sentence Embedding"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7738-129X","authenticated-orcid":false,"given":"Hua","family":"Cai","sequence":"first","affiliation":[{"name":"UniDT, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8216-2422","authenticated-orcid":false,"given":"Weihong","family":"Chen","sequence":"additional","affiliation":[{"name":"UniDT, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5439-7352","authenticated-orcid":false,"given":"Kehuan","family":"Shi","sequence":"additional","affiliation":[{"name":"UniDT, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4732-8981","authenticated-orcid":false,"given":"Shuaishuai","family":"Li","sequence":"additional","affiliation":[{"name":"UniDT, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3747-3325","authenticated-orcid":false,"given":"Qing","family":"Xu","sequence":"additional","affiliation":[{"name":"UniDT, China"}]}],"member":"320","published-online":{"date-parts":[[2023,3,6]]},"reference":[{"key":"e_1_3_2_1_1_1","author":"Agirre Eneko","unstructured":"Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, I\u00f1igo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability. NAACL (2015)."},{"key":"e_1_3_2_1_2_1","author":"Agirre Eneko","unstructured":"Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation. NAACL (2016)."},{"key":"e_1_3_2_1_3_1","volume-title":"A simple but tough-to-beat baseline for sentence embeddings. 
ICLR","author":"Arora Sanjeev","year":"2017","unstructured":"Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. ICLR (2017)."},{"key":"e_1_3_2_1_4_1","volume-title":"Jan","author":"Blei M","year":"2003","unstructured":"David\u00a0M Blei, Andrew\u00a0Y Ng, and Michael\u00a0I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research 3, Jan (2003), 993\u20131022."},{"doi-asserted-by":"crossref","unstructured":"Ricardo Campos, V\u00edtor Mangaravite, Arian Pasquali, Al\u00edpio\u00a0M\u00e1rio Jorge, C\u00e9lia Nunes, and Adam Jatowt. 2020. YAKE! Keyword extraction from single documents using multiple local features. Information Sciences (2020).","key":"e_1_3_2_1_5_1","DOI":"10.1016\/j.ins.2019.09.013"},{"key":"e_1_3_2_1_6_1","author":"Cer Daniel","unstructured":"Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation. 
arXiv: Computation and Language(2017)."},{"key":"e_1_3_2_1_7_1","volume-title":"A Simple Framework for Contrastive Learning of Visual Representations. ICML","author":"Chen Ting","year":"2020","unstructured":"Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey\u00a0E. Hinton. 2020. A Simple Framework for Contrastive Learning of Visual Representations. ICML (2020)."},{"key":"e_1_3_2_1_8_1","volume-title":"SentEval: An Evaluation Toolkit for Universal Sentence Representations. LREC","author":"Conneau Alexis","year":"2018","unstructured":"Alexis Conneau and Douwe Kiela. 2018. SentEval: An Evaluation Toolkit for Universal Sentence Representations. LREC (2018)."},{"key":"e_1_3_2_1_9_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL (2018)."},{"key":"e_1_3_2_1_10_1","volume-title":"SimCSE: Simple Contrastive Learning of Sentence Embeddings. EMNLP","author":"Gao Tianyu","year":"2021","unstructured":"Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. 
EMNLP (2021), 6894\u20136910."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_11_1","DOI":"10.5281\/zenodo.4461265"},{"volume-title":"International encyclopedia of statistical science","author":"Joyce M","unstructured":"James\u00a0M Joyce. 2011. Kullback-Leibler Divergence. In International encyclopedia of statistical science. Springer, 720\u2013722.","key":"e_1_3_2_1_12_1"},{"key":"e_1_3_2_1_13_1","volume-title":"Self-Guided Contrastive Learning for BERT Sentence Representations. ACL-JCNLP","author":"Kim Taeuk","year":"2021","unstructured":"Taeuk Kim, Kang\u00a0Min Yoo, and Sang-goo Lee. 2021. Self-Guided Contrastive Learning for BERT Sentence Representations. ACL-JCNLP (2021), 2528\u20132540."},{"key":"e_1_3_2_1_14_1","volume-title":"Skip-thought vectors. NeurIPS","author":"Kiros Ryan","year":"2015","unstructured":"Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard\u00a0S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. NeurIPS (2015)."},{"key":"e_1_3_2_1_15_1","volume-title":"Cross-lingual Language Model Pretraining. NeurIPS","author":"Lample Guillaume","year":"2019","unstructured":"Guillaume Lample and Alexis Conneau. 2019. Cross-lingual Language Model Pretraining. NeurIPS (2019)."},{"key":"e_1_3_2_1_16_1","volume-title":"An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. 
ACL","author":"Lau Jey\u00a0Han","year":"2016","unstructured":"Jey\u00a0Han Lau and Timothy Baldwin. 2016. An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. ACL (2016), 78."},{"key":"e_1_3_2_1_17_1","volume-title":"On the Sentence Embeddings from Pre-trained Language Models. EMNLP","author":"Li Bohan","year":"2020","unstructured":"Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the Sentence Embeddings from Pre-trained Language Models. EMNLP (2020)."},{"key":"e_1_3_2_1_18_1","volume-title":"An efficient framework for learning sentence representations. ICLR","author":"Logeswaran Lajanugen","year":"2018","unstructured":"Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. ICLR (2018)."},{"key":"e_1_3_2_1_19_1","volume-title":"Textrank: Bringing order into text. EMNLP, 404\u2013411.","author":"Mihalcea Rada","year":"2004","unstructured":"Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. EMNLP, 404\u2013411."},{"unstructured":"Martin M\u00fcller, Marcel Salath\u00e9, and Per\u00a0Egil Kummervold. 2020. 
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. arXiv: Computation and Language(2020).","key":"e_1_3_2_1_20_1"},{"doi-asserted-by":"crossref","unstructured":"Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. EMNLP.","key":"e_1_3_2_1_21_1","DOI":"10.18653\/v1\/D19-1410"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_22_1","DOI":"10.20944\/preprints201908.0073.v1"},{"unstructured":"Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening Sentence Representations for Better Semantics and Faster Retrieval. arXiv: Computation and Language(2021).","key":"e_1_3_2_1_23_1"},{"unstructured":"Si Sun, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu, and Jie Bao. 2020. Joint Keyphrase Chunking and Salience Ranking with BERT. CoRR abs\/2004.13639.","key":"e_1_3_2_1_24_1"},{"key":"e_1_3_2_1_25_1","volume-title":"SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-Trained Language Model","author":"Sun Yi","year":"2020","unstructured":"Yi Sun, Hangping Qiu, Yu Zheng, Zhongwei Wang, and Chaoran Zhang. 2020. SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-Trained Language Model. 
IEEE Access (2020)."},{"key":"e_1_3_2_1_26_1","volume-title":"CLEAR: Contrastive Learning for Sentence Representation. arXiv: Computation and Language(2020).","author":"Wu Zhuofeng","year":"2020","unstructured":"Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: Contrastive Learning for Sentence Representation. arXiv: Computation and Language(2020)."},{"doi-asserted-by":"crossref","unstructured":"Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, and Eric Darve. 2021. Universal Sentence Representation Learning with Conditional Masked Language Model. EMNLP, 6216\u20136228.","key":"e_1_3_2_1_27_1","DOI":"10.18653\/v1\/2021.emnlp-main.502"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_28_1","DOI":"10.23919\/ChiCC.2017.8028251"},{"key":"e_1_3_2_1_29_1","volume-title":"An Unsupervised Sentence Embedding Method by Mutual Information Maximization. EMNLP","author":"Zhang Yan","year":"2020","unstructured":"Yan Zhang, Ruidan He, Zuozhu Liu, Kwan\u00a0Hui Lim, and Lidong Bing. 2020. An Unsupervised Sentence Embedding Method by Mutual Information Maximization. 
EMNLP (2020), 1601\u20131610."}],"event":{"acronym":"MLNLP 2022","name":"MLNLP 2022: 2022 5th International Conference on Machine Learning and Natural Language Processing","location":"Sanya China"},"container-title":["Proceedings of the 2022 5th International Conference on Machine Learning and Natural Language Processing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3578741.3578759","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3578741.3578759","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:08:50Z","timestamp":1750183730000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3578741.3578759"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,23]]},"references-count":29,"alternative-id":["10.1145\/3578741.3578759","10.1145\/3578741"],"URL":"https:\/\/doi.org\/10.1145\/3578741.3578759","relation":{},"subject":[],"published":{"date-parts":[[2022,12,23]]},"assertion":[{"value":"2023-03-06","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}