{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,6]],"date-time":"2026-04-06T10:10:55Z","timestamp":1775470255910,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":28,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,8,14]],"date-time":"2021-08-14T00:00:00Z","timestamp":1628899200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key R&D Program of China","award":["2020AAA0105200"],"award-info":[{"award-number":["2020AAA0105200"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,8,14]]},"DOI":"10.1145\/3447548.3467418","type":"proceedings-article","created":{"date-parts":[[2021,8,12]],"date-time":"2021-08-12T06:12:03Z","timestamp":1628748723000},"page":"2450-2460","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":23,"title":["Controllable Generation from Pre-trained Language Models via Inverse Prompting"],"prefix":"10.1145","author":[{"given":"Xu","family":"Zou","sequence":"first","affiliation":[{"name":"Tsinghua University &amp; Beijing Academy of Artificial Intelligence, Beijing, China"}]},{"given":"Da","family":"Yin","sequence":"additional","affiliation":[{"name":"Tsinghua University &amp; Beijing Academy of Artificial Intelligence, Beijing, China"}]},{"given":"Qingyang","family":"Zhong","sequence":"additional","affiliation":[{"name":"Tsinghua University &amp; Beijing Academy of Artificial Intelligence, Beijing, China"}]},{"given":"Hongxia","family":"Yang","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"given":"Zhilin","family":"Yang","sequence":"additional","affiliation":[{"name":"Recurrent AI, Ltd &amp; Beijing Academy of Artificial Intelligence, Beijing, China"}]},{"given":"Jie","family":"Tang","sequence":"additional","affiliation":[{"name":"Tsinghua University &amp; Beijing Academy of Artificial Intelligence, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2021,8,14]]},"reference":[{"key":"e_1_3_2_2_1_1","first-page":"1","article-title":"Variational autoencoder based anomaly detection using reconstruction probability","volume":"2","author":"An Jinwon","year":"2015","unstructured":"Jinwon An and Sungzoon Cho . 2015 . Variational autoencoder based anomaly detection using reconstruction probability . Special Lecture on IE , Vol. 2 , 1 (2015), 1 -- 18 . Jinwon An and Sungzoon Cho. 2015. Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, Vol. 2, 1 (2015), 1--18.","journal-title":"Special Lecture on IE"},{"key":"e_1_3_2_2_2_1","volume-title":"et almbox","author":"Brown Tom B","year":"2020","unstructured":"Tom B Brown , Benjamin Mann , Nick Ryder , Melanie Subbiah , Jared Kaplan , Prafulla Dhariwal , Arvind Neelakantan , Pranav Shyam , Girish Sastry , Amanda Askell , et almbox . 2020 . Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020). Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et almbox. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)."},{"key":"e_1_3_2_2_3_1","volume-title":"a master of steganography. 
arXiv preprint arXiv:1712.02950","author":"Chu Casey","year":"2017","unstructured":"Casey Chu, Andrey Zhmoginov, and Mark Sandler. 2017. Cyclegan, a master of steganography. arXiv preprint arXiv:1712.02950 (2017)."},{"key":"e_1_3_2_2_4_1","volume-title":"Transformer-xl: Language modeling with longer-term dependency.","author":"Dai Zihang","year":"2018","unstructured":"Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2018. Transformer-xl: Language modeling with longer-term dependency. (2018)."},{"key":"e_1_3_2_2_5_1","volume-title":"Plug and play language models: A simple approach to controlled text generation. arXiv preprint arXiv:1912.02164","author":"Dathathri Sumanth","year":"2019","unstructured":"Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. arXiv preprint arXiv:1912.02164 (2019)."},{"key":"e_1_3_2_2_6_1","volume-title":"As good as new. How to successfully recycle English GPT-2 to make models for other languages. arXiv preprint arXiv:2012.05628","author":"de Vries Wietse","year":"2020","unstructured":"Wietse de Vries and Malvina Nissim. 2020. As good as new. How to successfully recycle English GPT-2 to make models for other languages. arXiv preprint arXiv:2012.05628 (2020)."},{"key":"e_1_3_2_2_7_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_2_2_8_1","doi-asserted-by":"crossref","unstructured":"Long Jiang and Ming Zhou. 2008. Generating Chinese couplets using a statistical MT approach. In Coling'08. 377--384.","DOI":"10.3115\/1599081.1599129"},{"key":"e_1_3_2_2_9_1","volume-title":"Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858","author":"Keskar Nitish Shirish","year":"2019","unstructured":"Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858 (2019).
"},{"key":"e_1_3_2_2_10_1","volume-title":"The art of Chinese poetry","author":"Liu James JY","unstructured":"James JY Liu. 1966. The art of Chinese poetry. University of Chicago Press."},{"key":"e_1_3_2_2_11_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)."},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1016\/0004-3702(77)90026-1"},{"key":"e_1_3_2_2_13_1","volume-title":"Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546 (2013)."},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1162"},{"key":"e_1_3_2_2_15_1","unstructured":"Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_2_2_16_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, Vol. 1, 8 (2019), 9."},{"key":"e_1_3_2_2_17_1","volume-title":"100,000 questions for machine comprehension of text. arXiv preprint arXiv:1606.05250","author":"Rajpurkar Pranav","year":"2016","unstructured":"Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000 questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016)."},{"key":"e_1_3_2_2_18_1","unstructured":"D. Saxton, Edward Grefenstette, Felix Hill, and P. Kohli. 2019. Analysing Mathematical Reasoning Abilities of Neural Models. ArXiv, Vol. abs\/1904.01557 (2019).
"},{"key":"e_1_3_2_2_19_1","volume-title":"Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053","author":"Shoeybi Mohammad","year":"2019","unstructured":"Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053 (2019)."},{"key":"e_1_3_2_2_20_1","volume-title":"et al.","author":"Silver David","year":"2017","unstructured":"David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of go without human knowledge. Nature, Vol. 550, 7676 (2017), 354--359."},{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/361953.361963"},{"key":"e_1_3_2_2_22_1","volume-title":"Parsing the turing test","author":"Turing Alan M","unstructured":"Alan M Turing. 2009. Computing machinery and intelligence. In Parsing the turing test. Springer, 23--65."},{"key":"e_1_3_2_2_23_1","volume-title":"Attention is all you need. arXiv preprint arXiv:1706.03762","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)."},{"key":"e_1_3_2_2_24_1","volume-title":"Dual learning for machine translation. arXiv preprint arXiv:1611.00179","author":"Xia Yingce","year":"2016","unstructured":"Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. arXiv preprint arXiv:1611.00179 (2016)."},{"key":"e_1_3_2_2_25_1","volume-title":"Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600","author":"Yang Zhilin","year":"2018","unstructured":"Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600 (2018).
"},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6488"},{"key":"e_1_3_2_2_27_1","volume-title":"et al.","author":"Zhang Zhengyan","year":"2020","unstructured":"Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, et al. 2020. CPM: A Large-scale Generative Chinese Pre-trained Language Model. arXiv preprint arXiv:2012.00413 (2020)."},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-3005"}],"event":{"name":"KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","location":"Virtual Event Singapore","acronym":"KDD '21","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"]},"container-title":["Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467418","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467418","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:36Z","timestamp":1750191516000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3447548.3467418"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,14]]},"references-count":28,"alternative-id":["10.1145\/3447548.3467418","10.1145\/3447548"],"URL":"https:\/\/doi.org\/10.1145\/3447548.3467418","relation":{},"subject":[],"published":{"date-parts":[[2021,8,14]]},"assertion":[{"value":"2021-08-14","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}