{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:12:15Z","timestamp":1750219935441,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":20,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T00:00:00Z","timestamp":1665964800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,17]]},"DOI":"10.1145\/3511808.3557718","type":"proceedings-article","created":{"date-parts":[[2022,10,16]],"date-time":"2022-10-16T01:29:57Z","timestamp":1665883797000},"page":"4009-4013","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Unified Knowledge Prompt Pre-training for Customer Service Dialogues"],"prefix":"10.1145","author":[{"given":"Keqing","family":"He","sequence":"first","affiliation":[{"name":"Meituan Group, Beijing, China"}]},{"given":"Jingang","family":"Wang","sequence":"additional","affiliation":[{"name":"Meituan Group, Beijing, AB, China"}]},{"given":"Chaobo","family":"Sun","sequence":"additional","affiliation":[{"name":"Meituan Group, Beijing, China"}]},{"given":"Wei","family":"Wu","sequence":"additional","affiliation":[{"name":"Meituan Group, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2022,10,17]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.58"},{"key":"e_1_3_2_1_2_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . 2019 . BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171--4186. https:\/\/doi.org\/10. 18653\/v1\/N19--1423 10.18653\/v1 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171--4186. https:\/\/doi.org\/10.18653\/v1\/N19--1423"},{"key":"e_1_3_2_1_3_1","volume-title":"Unified Language Model Pre-training for Natural Language Understanding and Generation. ArXiv","author":"Dong Li","year":"2019","unstructured":"Li Dong , Nan Yang , Wenhui Wang , Furu Wei , Xiaodong Liu , Yu Wang , Jianfeng Gao , M. Zhou , and Hsiao-Wuen Hon . 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. ArXiv , Vol. abs\/ 1905 .03197 ( 2019 ). Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, M. Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. ArXiv, Vol. 
abs\/1905.03197 (2019)."},{"key":"e_1_3_2_1_4_1","volume-title":"Kang Min Yoo, and Jung-Woo Ha","author":"Gu Xiaodong","year":"2021","unstructured":"Xiaodong Gu , Kang Min Yoo, and Jung-Woo Ha . 2021 . DialogBERT: Discourse- Aware Response Generation via Learning to Recover and Rank Utterances. In AAAI. Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha. 2021. DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances. In AAAI."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.196"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1181"},{"key":"e_1_3_2_1_7_1","volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics","author":"Kudo Taku","year":"2018","unstructured":"Taku Kudo and John Richardson . 2018 . SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing . In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics , Brussels, Belgium, 66--71. https:\/\/doi.org\/10. 18653\/v1\/D18--2012 10.18653\/v1 Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, Brussels, Belgium, 66--71. https:\/\/doi.org\/10.18653\/v1\/D18--2012"},{"key":"e_1_3_2_1_8_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR 2020","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan , Mingda Chen , Sebastian Goodman , Kevin Gimpel , Piyush Sharma , and Radu Soricut . 2020 . ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR 2020 , Addis Ababa, Ethiopia, April 26--30 , 2020. OpenReview.net. https:\/\/openreview.net\/forum?id=H1eA7AEtvS Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26--30, 2020. OpenReview.net. https:\/\/openreview.net\/forum?id=H1eA7AEtvS"},{"key":"e_1_3_2_1_9_1","volume-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu , Myle Ott , Naman Goyal , Jingfei Du , Mandar Joshi , Danqi Chen , Omer Levy , Mike Lewis , Luke Zettlemoyer , and Veselin Stoyanov . 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv , Vol. abs\/ 1907 .11692 ( 2019 ). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, Vol. abs\/1907.11692 (2019)."},{"key":"e_1_3_2_1_10_1","unstructured":"Alec Radford and Karthik Narasimhan. 2018. Improving Language Understanding by Generative Pre-Training.  Alec Radford and Karthik Narasimhan. 2018. 
{"key":"e_1_3_2_1_11_1","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. ArXiv, Vol. abs\/1910.10683 (2020)."},{"key":"e_1_3_2_1_12_1","volume-title":"Adafactor: Adaptive Learning Rates with Sublinear Memory Cost. ArXiv","author":"Shazeer Noam M.","year":"2018","unstructured":"Noam M. Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost. ArXiv, Vol. abs\/1804.04235 (2018)."},{"key":"e_1_3_2_1_13_1","volume-title":"Attention is all you need. Advances in neural information processing systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, Vol. 30 (2017)."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.nlp4convai-1.13"},{"key":"e_1_3_2_1_15_1","unstructured":"Chien-Sheng Wu, Steven C. H. Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue. In EMNLP."},{"key":"e_1_3_2_1_16_1","volume-title":"Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf"},
{"key":"e_1_3_2_1_17_1","volume-title":"ACL","author":"Zhang Yizhe","year":"2020","unstructured":"Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B. Dolan. 2020. DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In ACL."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482085"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"crossref","unstructured":"Zhuosheng Zhang and Hai Zhao. 2021. Structural Pre-training for Dialogue Comprehension. In ACL.","DOI":"10.18653\/v1\/2021.acl-long.399"},{"key":"e_1_3_2_1_20_1","volume-title":"DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization. ArXiv","author":"Zhong Ming","year":"2021","unstructured":"Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization. ArXiv, Vol. abs\/2109.02492 (2021)."}],"event":{"name":"CIKM '22: The 31st ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Atlanta GA USA","acronym":"CIKM '22"},"container-title":["Proceedings of the 31st ACM International Conference on Information & Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557718","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3511808.3557718","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:49Z","timestamp":1750182529000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557718"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,17]]},"references-count":20,"alternative-id":["10.1145\/3511808.3557718","10.1145\/3511808"],"URL":"https:\/\/doi.org\/10.1145\/3511808.3557718","relation":{},"subject":[],"published":{"date-parts":[[2022,10,17]]},"assertion":[{"value":"2022-10-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
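
The record above is a standard Crossref works-API payload. A minimal sketch, assuming network access and only Python's standard library, of how such a record can be fetched from the public Crossref REST API (https://api.crossref.org/works/{doi}) and its main fields read; the DOI and the field names ("message", "title", "author", "reference", "published") come from the record itself, and everything else is illustrative:

# Fetch a Crossref work record and print a few fields from it.
import json
import urllib.request

DOI = "10.1145/3511808.3557718"
URL = f"https://api.crossref.org/works/{DOI}"

with urllib.request.urlopen(URL) as resp:
    # The bibliographic payload sits under the top-level "message" key.
    work = json.load(resp)["message"]

title = work["title"][0]  # "title" is a list; proceedings papers have one entry
# Each author object carries "given"/"family" names (plus affiliation data).
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
refs = work.get("reference", [])  # per-entry citation metadata, may be absent
published = "-".join(str(p) for p in work["published"]["date-parts"][0])

print(title)
print(authors)
print(f"{len(refs)} references, published {published}")

Run against this DOI, the sketch would print the paper title, the four Meituan authors, and "20 references, published 2022-10-17", matching the reference-count and published fields in the record.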