{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,1]],"date-time":"2025-11-01T23:39:45Z","timestamp":1762040385568,"version":"build-2065373602"},"publisher-location":"New York, NY, USA","reference-count":34,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,8,14]],"date-time":"2021-08-14T00:00:00Z","timestamp":1628899200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,8,14]]},"DOI":"10.1145\/3447548.3467105","type":"proceedings-article","created":{"date-parts":[[2021,8,12]],"date-time":"2021-08-12T06:12:05Z","timestamp":1628748725000},"page":"3697-3707","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Reinforcing Pretrained Models for Generating Attractive Text Advertisements"],"prefix":"10.1145","author":[{"given":"Xiting","family":"Wang","sequence":"first","affiliation":[{"name":"Microsoft Research Asia, Beijing, China"}]},{"given":"Xinwei","family":"Gu","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Beijing, China"}]},{"given":"Jie","family":"Cao","sequence":"additional","affiliation":[{"name":"Microsoft Advertising, Bellevue, WA, USA"}]},{"given":"Zihua","family":"Zhao","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Shanghai, China"}]},{"given":"Yulan","family":"Yan","sequence":"additional","affiliation":[{"name":"Microsoft Advertising, Bellevue, WA, USA"}]},{"given":"Bhuvan","family":"Middha","sequence":"additional","affiliation":[{"name":"Microsoft Advertising, Bellevue, WA, USA"}]},{"given":"Xing","family":"Xie","sequence":"additional","affiliation":[{"name":"Microsoft Research Asia, Beijing, 
China"}]}],"member":"320","published-online":{"date-parts":[[2021,8,14]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-77105-0_57"},
{"key":"e_1_3_2_2_2_1","unstructured":"Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. 2017. Constrained policy optimization. In ICML."},
{"key":"e_1_3_2_2_3_1","volume-title":"et al.","author":"Bao Hangbo","year":"2020","unstructured":"Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In ICML. 642--652."},
{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/1386790.1386792"},
{"key":"e_1_3_2_2_5_1","doi-asserted-by":"crossref","unstructured":"Andrei Broder, Evgeniy Gabrilovich, Vanja Josifovski, George Mavromatis, and Alex Smola. 2011. Bid generation for advanced match in sponsored search. In WSDM. 515--524.","DOI":"10.1145\/1935826.1935901"},
{"key":"e_1_3_2_2_6_1","volume-title":"et al.","author":"Brown Tom B","year":"2020","unstructured":"Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)."},
{"key":"e_1_3_2_2_7_1","doi-asserted-by":"crossref","unstructured":"Zhangming Chan, Yuchi Zhang, Xiuying Chen, Shen Gao, Zhiqiang Zhang, Dongyan Zhao, and Rui Yan. 2020. Selection and Generation: Learning towards Multi-Product Advertisement Post Generation. In EMNLP. 3818--3829.","DOI":"10.18653\/v1\/2020.emnlp-main.313"},
{"key":"e_1_3_2_2_8_1","unstructured":"Xinshi Chen, Shuang Li, Hui Li, Shaohua Jiang, Yuan Qi, and Le Song. 2019. Generative adversarial user model for reinforcement learning based recommendation system. In ICML. 1052--1061."},
{"key":"e_1_3_2_2_9_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL .","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL."},
{"key":"e_1_3_2_2_10_1","unstructured":"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. In NeurIPS."},
{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2389376.2389401"},
{"key":"e_1_3_2_2_12_1","unstructured":"Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, and Xing Xie. 2019. Towards a deep and unified understanding of deep neural models in NLP. In ICML. 2454--2463."},
{"key":"e_1_3_2_2_13_1","unstructured":"Huifeng Guo, Jinkai Yu, Qing Liu, Ruiming Tang, and Yuzhou Zhang. 2019. PAL: a position-bias aware learning framework for CTR prediction in live recommender systems. In RecSys. 452--456."},
{"key":"e_1_3_2_2_14_1","doi-asserted-by":"crossref","unstructured":"J Weston Hughes, Keng-hao Chang, and Ruofei Zhang. 2019. Generating Better Search Engine Text Advertisements with Deep Reinforcement Learning. In KDD. 2269--2277.","DOI":"10.1145\/3292500.3330754"},
{"key":"e_1_3_2_2_15_1","volume-title":"Workshop on Sponsored Search Auctions .","author":"Jansen Bernard J","year":"2005","unstructured":"Bernard J Jansen and Marc Resnick. 2005. Examining searcher perceptions of and interactions with sponsored results. In Workshop on Sponsored Search Auctions."},
{"key":"e_1_3_2_2_16_1","volume-title":"Albert: A lite bert for self-supervised learning of language representations. In ICLR .","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In ICLR."},
{"key":"e_1_3_2_2_17_1","unstructured":"Mu-Chu Lee, Bin Gao, and Ruofei Zhang. 2018. Rare query expansion through generative adversarial networks in search advertising. In KDD. 500--508."},
{"key":"e_1_3_2_2_18_1","volume-title":"Rouge: A package for automatic evaluation of summaries. In Text summarization branches out . 74--81.","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. 74--81."},
{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3041021.3054192"},
{"key":"e_1_3_2_2_20_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)."},
{"key":"e_1_3_2_2_21_1","unstructured":"A Rupam Mahmood, Hado P van Hasselt, and Richard S Sutton. 2014. Weighted importance sampling for off-policy learning with linear function approximation. In NeurIPS. 3014--3022."},
{"key":"e_1_3_2_2_22_1","unstructured":"Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In ICLR."},
{"key":"e_1_3_2_2_23_1","unstructured":"Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (2018). https:\/\/openai.com\/blog\/language-unsupervised\/"},
{"key":"e_1_3_2_2_24_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, Vol. 1, 8 (2019), 9."},
{"key":"e_1_3_2_2_25_1","first-page":"1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR, Vol. 21 (2020), 1--67.","journal-title":"JMLR"},
{"key":"e_1_3_2_2_26_1","unstructured":"Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR."},
{"key":"e_1_3_2_2_27_1","doi-asserted-by":"crossref","unstructured":"Sujith Ravi, Andrei Broder, Evgeniy Gabrilovich, Vanja Josifovski, Sandeep Pandey, and Bo Pang. 2010. Automatic generation of bid phrases for online advertising. In WSDM. 341--350.","DOI":"10.1145\/1718487.1718530"},
{"key":"e_1_3_2_2_28_1","doi-asserted-by":"crossref","unstructured":"Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In CVPR. 7008--7024.","DOI":"10.1109\/CVPR.2017.131"},
{"key":"e_1_3_2_2_29_1","doi-asserted-by":"crossref","unstructured":"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\" Explaining the predictions of any classifier. In KDD. 1135--1144.","DOI":"10.18653\/v1\/N16-3020"},
{"key":"e_1_3_2_2_30_1","volume-title":"Mass: Masked sequence to sequence pre-training for language generation. In ICML .","author":"Song Kaitao","year":"2019","unstructured":"Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In ICML."},
{"key":"e_1_3_2_2_31_1","doi-asserted-by":"crossref","unstructured":"Stamatina Thomaidou, Ismini Lourentzou, Panagiotis Katsivelis-Perakis, and Michalis Vazirgiannis. 2013. Automated snippet generation for online advertising. In CIKM. 1841--1844.","DOI":"10.1145\/2505515.2507876"},
{"key":"e_1_3_2_2_32_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. 5998--6008."},
{"key":"e_1_3_2_2_33_1","volume-title":"Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning","author":"Williams Ronald J","year":"1992","unstructured":"Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, Vol. 8, 3--4 (1992), 229--256."},
{"key":"e_1_3_2_2_34_1","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In NeurIPS."}],"event":{"name":"KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"],"location":"Virtual Event Singapore","acronym":"KDD '21"},"container-title":["Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data 
Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467105","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467105","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:28:05Z","timestamp":1750195685000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467105"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,14]]},"references-count":34,"alternative-id":["10.1145\/3447548.3467105","10.1145\/3447548"],"URL":"https:\/\/doi.org\/10.1145\/3447548.3467105","relation":{},"subject":[],"published":{"date-parts":[[2021,8,14]]},"assertion":[{"value":"2021-08-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}