{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:18:56Z","timestamp":1775067536605,"version":"3.50.1"},"reference-count":172,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,10,6]],"date-time":"2023-10-06T00:00:00Z","timestamp":1696550400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Natural Science Foundation of Beijing","award":["4222036"],"award-info":[{"award-number":["4222036"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,3,31]]},"abstract":"<jats:p>Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used Transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using Transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3 to 4 years, targeting different CTG tasks that require different types of controlled constraints. In this article, we present a systematic critical review on the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. 
To the best of our knowledge, this is the first survey article to summarize the state-of-the-art CTG techniques from the perspective of Transformer-based PLMs. We hope it can help researchers and practitioners in the related fields to quickly track the academic and technological frontier, providing them with a landscape of the area and a roadmap for future research.<\/jats:p>","DOI":"10.1145\/3617680","type":"journal-article","created":{"date-parts":[[2023,8,30]],"date-time":"2023-08-30T09:50:19Z","timestamp":1693389019000},"page":"1-37","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":188,"title":["A Survey of Controllable Text Generation Using Transformer-based Pre-trained Language Models"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8715-0532","authenticated-orcid":false,"given":"Hanqing","family":"Zhang","sequence":"first","affiliation":[{"name":"Beijing Institute of Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5931-5793","authenticated-orcid":false,"given":"Haolin","family":"Song","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8280-6224","authenticated-orcid":false,"given":"Shaoyu","family":"Li","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2551-2964","authenticated-orcid":false,"given":"Ming","family":"Zhou","sequence":"additional","affiliation":[{"name":"Langboat Technology, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8660-3608","authenticated-orcid":false,"given":"Dawei","family":"Song","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, China"}]}],"member":"320","published-online":{"date-parts":[[2023,10,6]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"4699","volume-title":"Proceedings of the 12th Language Resources and Evaluation 
Conference","author":"Amin-Nejad Ali","year":"2020","unstructured":"Ali Amin-Nejad, Julia Ive, and Sumithra Velupillai. 2020. Exploring Transformer text generation for medical dataset augmentation. In Proceedings of the 12th Language Resources and Evaluation Conference. European Language Resources Association, Marseille, France, 4699\u20134708. https:\/\/aclanthology.org\/2020.lrec-1.578"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/d17-1098"},{"key":"e_1_3_1_4_2","first-page":"3982","volume-title":"Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC\u201912)","author":"Aziz Wilker","year":"2012","unstructured":"Wilker Aziz, Sheila Castilho, and Lucia Specia. 2012. PET: A tool for post-editing and assessing machine translation. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC\u201912). European Language Resources Association (ELRA), Istanbul, Turkey, 3982\u20133987. http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/985_Paper.pdf"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.151"},{"key":"e_1_3_1_6_2","article-title":"Longformer: The long-document Transformer","volume":"2004","author":"Beltagy Iz","year":"2020","unstructured":"Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document Transformer. CoRR abs\/2004.05150 (2020). arxiv:2004.05150. https:\/\/arxiv.org\/abs\/2004.05150","journal-title":"CoRR"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445922"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.5555\/944919.944966"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.349"},{"key":"e_1_3_1_10_2","first-page":"1877","volume-title":"Advances in Neural Information Processing Systems","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. 
Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877\u20131901. https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.471"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1052"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1150"},{"key":"e_1_3_1_14_2","article-title":"Evaluation of text generation: A survey","volume":"2006","author":"Celikyilmaz Asli","year":"2020","unstructured":"Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. CoRR abs\/2006.14799 (2020). arxiv:2006.14799. https:\/\/arxiv.org\/abs\/2006.14799","journal-title":"CoRR"},{"key":"e_1_3_1_15_2","volume-title":"International Conference on Learning Representations","author":"Chan Alvin","year":"2021","unstructured":"Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2021. CoCon: A self-supervised approach for controlled text generation. In International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=VD_ozqvBy4W"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W19-3402"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.eacl-main.223"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33018151"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/684"},{"key":"e_1_3_1_20_2","article-title":"Palm: Scaling language modeling with pathways","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et\u00a0al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022).","journal-title":"arXiv preprint arXiv:2204.02311"},{"key":"e_1_3_1_21_2","article-title":"Scaling instruction-finetuned language models","author":"Chung Hyung Won","year":"2022","unstructured":"Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et\u00a0al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416 (2022).","journal-title":"arXiv preprint arXiv:2210.11416"},{"key":"e_1_3_1_22_2","volume-title":"Cross-Lingual Language Model Pretraining","author":"Conneau Alexis","year":"2019","unstructured":"Alexis Conneau and Guillaume Lample. 2019. Cross-Lingual Language Model Pretraining. Curran Associates Inc., Red Hook, NY, USA. https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf"},{"key":"e_1_3_1_23_2","first-page":"78","volume-title":"Proceedings of the 6th Workshop on Statistical Machine Translation","author":"Dahlmeier Daniel","year":"2011","unstructured":"Daniel Dahlmeier, Chang Liu, and Hwee Tou Ng. 2011. TESLA at WMT 2011: Translation evaluation and tunable metric. 
In Proceedings of the 6th Workshop on Statistical Machine Translation. Association for Computational Linguistics, Edinburgh, Scotland, 78\u201384. https:\/\/aclanthology.org\/W11-2106"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1285"},{"key":"e_1_3_1_25_2","volume-title":"International Conference on Learning Representations","author":"Dathathri Sumanth","year":"2020","unstructured":"Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=H1edEyBKDS"},{"key":"e_1_3_1_26_2","volume-title":"International Conference on Learning Representations","author":"Deng Yuntian","year":"2020","unstructured":"Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc\u2019Aurelio Ranzato. 2020. Residual energy-based models for text generation. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=B1l4SgHKDH"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/E14-1042"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.656"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3455457"},{"key":"e_1_3_1_31_2","article-title":"Attention is not all you need: Pure attention loses rank doubly exponentially with depth","volume":"2103","author":"Dong Yihe","year":"2021","unstructured":"Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. 2021. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. CoRR abs\/2103.03404 (2021). 
arXiv:2103.03404. https:\/\/arxiv.org\/abs\/2103.03404","journal-title":"CoRR"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P17-1123"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1082"},{"key":"e_1_3_1_34_2","article-title":"Outline to story: Fine-grained controllable story generation from cascaded events","author":"Fang Le","year":"2021","unstructured":"Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, and Changyou Chen. 2021. Outline to story: Fine-grained controllable story generation from cascaded events. arXiv preprint arXiv:2101.00822 (2021). https:\/\/arxiv.org\/pdf\/2101.00822v1.pdf","journal-title":"arXiv preprint arXiv:2101.00822"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-4912"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2020.3015491"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2022.3182986"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.295"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1190"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W15-4708"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.351"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3422622"},{"key":"e_1_3_1_43_2","volume-title":"Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting","author":"Grover Aditya","year":"2019","unstructured":"Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, and Stefano Ermon. 2019. Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting. Curran Associates Inc., Red Hook, NY, USA. 
https:\/\/papers.nips.cc\/paper\/2019\/file\/d76d8deea9c19cc9aaf2237d2bf2f785-Paper.pdf"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1169"},{"key":"e_1_3_1_45_2","article-title":"A probabilistic formulation of unsupervised text style transfer","volume":"2002","author":"He Junxian","year":"2020","unstructured":"Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. ArXiv abs\/2002.03912 (2020).","journal-title":"ArXiv"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.681"},{"key":"e_1_3_1_47_2","volume-title":"International Conference on Learning Representations","author":"Holtzman Ari","year":"2020","unstructured":"Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rygGQyrFvH"},{"key":"e_1_3_1_48_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0933-3657(00)00114-7"},{"key":"e_1_3_1_49_2","article-title":"Controllable text generation","volume":"1703","author":"Hu Zhiting","year":"2017","unstructured":"Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Controllable text generation. CoRR abs\/1703.00955 (2017). arxiv:1703.00955. http:\/\/arxiv.org\/abs\/1703.00955","journal-title":"CoRR"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.57"},{"issue":"3","key":"e_1_3_1_51_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3383123","article-title":"Challenges in building intelligent open-domain dialog systems","volume":"38","author":"Huang Minlie","year":"2020","unstructured":"Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. 
ACM Transactions on Information Systems (TOIS) 38, 3 (2020), 1\u201332.","journal-title":"ACM Transactions on Information Systems (TOIS)"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.7"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/1837885.1837906"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00324"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.527"},{"key":"e_1_3_1_57_2","doi-asserted-by":"crossref","first-page":"97","DOI":"10.18653\/v1\/2020.inlg-1.14","volume-title":"Proceedings of the 13th International Conference on Natural Language Generation","author":"Kale Mihir","year":"2020","unstructured":"Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre-training for data-to-text tasks. In Proceedings of the 13th International Conference on Natural Language Generation. Association for Computational Linguistics, Dublin, Ireland, 97\u2013102. https:\/\/aclanthology.org\/2020.inlg-1.14"},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.164"},{"key":"e_1_3_1_59_2","article-title":"CTRL: A conditional Transformer language model for controllable generation","volume":"1909","author":"Keskar Nitish Shirish","year":"2019","unstructured":"Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional Transformer language model for controllable generation. CoRR abs\/1909.05858 (2019). arxiv:1909.05858. http:\/\/arxiv.org\/abs\/1909.05858","journal-title":"CoRR"},{"key":"e_1_3_1_60_2","volume-title":"International Conference on Learning Representations","author":"Khalifa Muhammad","year":"2021","unstructured":"Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2021. A distributional approach to controlled text generation. 
In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=jWkw45-9AbL"},{"key":"e_1_3_1_61_2","article-title":"Auto-encoding variational Bayes","author":"Kingma Diederik P.","year":"2013","unstructured":"Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013).","journal-title":"arXiv preprint arXiv:1312.6114"},{"key":"e_1_3_1_62_2","volume-title":"International Conference on Learning Representations","author":"Kitaev Nikita","year":"2020","unstructured":"Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient Transformer. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=rkgNKkHtvB"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-emnlp.424"},{"key":"e_1_3_1_64_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.243"},{"key":"e_1_3_1_65_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1014"},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-1094"},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.68"},{"key":"e_1_3_1_68_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2020\/503"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.353"},{"key":"e_1_3_1_70_2","article-title":"Diffusion-LM improves controllable text generation","volume":"2205","author":"Li Xiang Lisa","year":"2022","unstructured":"Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. 2022. Diffusion-LM improves controllable text generation. ArXiv abs\/2205.14217 (2022). https:\/\/arxiv.org\/abs\/2205.14217","journal-title":"ArXiv"},{"key":"e_1_3_1_71_2","article-title":"GPT-based generation for classical Chinese poetry","author":"Liao Yi","year":"2019","unstructured":"Yi Liao, Yasheng Wang, Qun Liu, and Xin Jiang. 2019. 
GPT-based generation for classical Chinese poetry. arXiv preprint arXiv:1907.00151 (2019). https:\/\/arxiv.org\/abs\/1907.00151","journal-title":"arXiv preprint arXiv:1907.00151"},{"key":"e_1_3_1_72_2","first-page":"74","volume-title":"Text Summarization Branches Out","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74\u201381. https:\/\/aclanthology.org\/W04-1013"},{"key":"e_1_3_1_73_2","first-page":"16081","volume-title":"35th AAAI Conference on Artificial Intelligence, AAAI 2021, 33rd Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The 11th Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021","author":"Lin Zhaojiang","year":"2021","unstructured":"Zhaojiang Lin, Andrea Madotto, Yejin Bang, and Pascale Fung. 2021. The adapter-bot: All-in-one controllable conversational model. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 33rd Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The 11th Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 16081\u201316083. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/18018"},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1609\/aiide.v17i1.18891"},{"key":"e_1_3_1_75_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.522"},{"key":"e_1_3_1_76_2","first-page":"354","volume-title":"Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR","author":"Liu Chang","year":"2010","unstructured":"Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. TESLA: Translation evaluation of sentences with linear-programming-based analysis. 
In Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR. Association for Computational Linguistics, Uppsala, Sweden, 354\u2013359. https:\/\/aclanthology.org\/W10-1754"},{"key":"e_1_3_1_77_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i02.5536"},{"key":"e_1_3_1_78_2","article-title":"Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing","volume":"2107","author":"Liu Pengfei","year":"2021","unstructured":"Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR abs\/2107.13586 (2021). arXiv:2107.13586. https:\/\/arxiv.org\/abs\/2107.13586","journal-title":"CoRR"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i17.17744"},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.726"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00343"},{"key":"e_1_3_1_82_2","article-title":"Naturalness evaluation of natural language generation in task-oriented dialogues using BERT","author":"Liu Ye","year":"2021","unstructured":"Ye Liu, Wolfgang Maier, Wolfgang Minker, and Stefan Ultes. 2021. Naturalness evaluation of natural language generation in task-oriented dialogues using BERT. arXiv preprint arXiv:2109.02938 (2021).","journal-title":"arXiv preprint arXiv:2109.02938"},{"key":"e_1_3_1_83_2","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","volume":"1907","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR abs\/1907.11692 (2019). arxiv:1907.11692. 
http:\/\/arxiv.org\/abs\/1907.11692","journal-title":"CoRR"},{"key":"e_1_3_1_84_2","volume-title":"Advances in Neural Information Processing Systems","author":"Logeswaran Lajanugen","year":"2018","unstructured":"Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2018\/file\/7cf64379eb6f29a4d25c4b6a2df713e4-Paper.pdf"},{"key":"e_1_3_1_85_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P17-1103"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1603"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-5609"},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1269"},{"key":"e_1_3_1_89_2","article-title":"Efficient estimation of word representations in vector space","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781 (2013).","journal-title":"arXiv preprint arXiv:1301.3781"},{"key":"e_1_3_1_90_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.31"},{"key":"e_1_3_1_91_2","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702553"},{"key":"e_1_3_1_92_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.37"},{"key":"e_1_3_1_93_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00027"},{"key":"e_1_3_1_94_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-2012"},{"key":"e_1_3_1_95_2","first-page":"27730","volume-title":"Advances in Neural Information Processing Systems","volume":"35","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 27730\u201327744. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/file\/b1efde53be364a73914f58805a001731-Paper-Conference.pdf"},{"key":"e_1_3_1_96_2","doi-asserted-by":"publisher","DOI":"10.3115\/1073083.1073135"},{"key":"e_1_3_1_97_2","first-page":"3973","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event \/ Punta Cana, Dominican Republic, 16-20 November, 2021","author":"Pascual Damian","year":"2021","unstructured":"Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-and-play method for controlled text generation. 
In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event \/ Punta Cana, Dominican Republic, 16-20 November, 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, 3973\u20133997. https:\/\/aclanthology.org\/2021.findings-emnlp.334"},{"key":"e_1_3_1_98_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.17"},{"key":"e_1_3_1_99_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1202"},{"key":"e_1_3_1_100_2","article-title":"MAUVE: Human-machine divergence curves for evaluating open-ended text generation","volume":"2102","author":"Pillutla Krishna","year":"2021","unstructured":"Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Yejin Choi, and Za\u00efd Harchaoui. 2021. MAUVE: Human-machine divergence curves for evaluating open-ended text generation. CoRR abs\/2102.01454 (2021). arxiv:2102.01454. https:\/\/arxiv.org\/abs\/2102.01454","journal-title":"CoRR"},{"key":"e_1_3_1_101_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.1"},{"key":"e_1_3_1_102_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5385"},{"key":"e_1_3_1_103_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33016908"},{"key":"e_1_3_1_104_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.229"},{"key":"e_1_3_1_105_2","article-title":"Reducing gender bias in word-level language models with a gender-equalizing loss function","author":"Qian Yusu","year":"2019","unstructured":"Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level language models with a gender-equalizing loss function. 
arXiv preprint arXiv:1905.12801 (2019).","journal-title":"arXiv preprint arXiv:1905.12801"},{"key":"e_1_3_1_106_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1539"},{"key":"e_1_3_1_107_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2202.11705"},{"issue":"8","key":"e_1_3_1_108_2","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et\u00a0al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9.","journal-title":"OpenAI blog"},{"issue":"140","key":"e_1_3_1_109_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1\u201367. http:\/\/jmlr.org\/papers\/v21\/20-074.html","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_110_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.58"},{"key":"e_1_3_1_111_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_3_1_112_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.nlp4convai-1.20"},{"key":"e_1_3_1_113_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.351"},{"key":"e_1_3_1_114_2","article-title":"Emotion-regularized conditional variational autoencoder for emotional response generation","author":"Ruan Yu-Ping","year":"2021","unstructured":"Yu-Ping Ruan and Zhenhua Ling. 2021. Emotion-regularized conditional variational autoencoder for emotional response generation. 
IEEE Transactions on Affective Computing (2021). https:\/\/arxiv.org\/abs\/2104.08857","journal-title":"IEEE Transactions on Affective Computing"},{"key":"e_1_3_1_115_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/716"},{"key":"e_1_3_1_116_2","article-title":"Fine-grained sentiment Controlled text generation","author":"Samanta Bidisha","year":"2020","unstructured":"Bidisha Samanta, Mohit Agarwal, and Niloy Ganguly. 2020. Fine-grained sentiment Controlled text generation. arXiv preprint arXiv:2006.09891 (2020). https:\/\/arxiv.org\/abs\/2006.09891","journal-title":"arXiv preprint arXiv:2006.09891"},{"key":"e_1_3_1_117_2","article-title":"Bloom: A 176b-parameter open-access multilingual language model","author":"Scao Teven Le","year":"2022","unstructured":"Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili\u0107, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha Luccioni, Fran\u00e7ois Yvon, Matthias Gall\u00e9, et\u00a0al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100 (2022).","journal-title":"arXiv preprint arXiv:2211.05100"},{"key":"e_1_3_1_118_2","volume-title":"Proceedings of the 37th International Conference on Machine Learning (ICML\u201920)","author":"Scialom Thomas","year":"2020","unstructured":"Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. Discriminative adversarial search for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning (ICML\u201920). JMLR.org, Article 793, 10 pages. 
https:\/\/arxiv.org\/abs\/2002.10375"},{"key":"e_1_3_1_119_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.704"},{"key":"e_1_3_1_120_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1005"},{"key":"e_1_3_1_121_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3481964"},{"key":"e_1_3_1_122_2","article-title":"Towards controllable biases in language generation","author":"Sheng Emily","year":"2020","unstructured":"Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2020. Towards controllable biases in language generation. arXiv preprint arXiv:2005.00268 (2020).","journal-title":"arXiv preprint arXiv:2005.00268"},{"key":"e_1_3_1_123_2","first-page":"13798","volume-title":"35th AAAI Conference on Artificial Intelligence, AAAI 2021, 33rd Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The 11th Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021","author":"Sheng Zhonghao","year":"2021","unstructured":"Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, and Tao Qin. 2021. SongMASS: Automatic song writing with pre-training and alignment constraint. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 33rd Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The 11th Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 13798\u201313805. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17626"},{"key":"e_1_3_1_124_2","first-page":"8840","volume-title":"International Conference on Machine Learning","author":"Shi Wenxian","year":"2020","unstructured":"Wenxian Shi, Hao Zhou, Ning Miao, and Lei Li. 2020. Dispersed exponential family mixture VAEs for interpretable text generation. In International Conference on Machine Learning. PMLR, 8840\u20138851. 
http:\/\/proceedings.mlr.press\/v119\/shi20f\/shi20f.pdf"},{"key":"e_1_3_1_125_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.346"},{"key":"e_1_3_1_126_2","doi-asserted-by":"publisher","DOI":"10.5555\/2969442.2969628"},{"key":"e_1_3_1_127_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.14"},{"key":"e_1_3_1_128_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.516"},{"key":"e_1_3_1_129_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.712"},{"key":"e_1_3_1_130_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/p19-1359"},{"key":"e_1_3_1_131_2","volume-title":"Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS\u201920)","author":"Stiennon Nisan","year":"2020","unstructured":"Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS\u201920) (Vancouver, BC, Canada). Curran Associates Inc., Red Hook, NY, USA, Article 253, 14 pages. https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/1f89885d556929e98d3ef9b86448f951-Paper.pdf"},{"key":"e_1_3_1_132_2","first-page":"895","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event \/ Punta Cana, Dominican Republic, 16-20 November, 2021","author":"Su Yixuan","year":"2021","unstructured":"Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, and Nigel Collier. 2021. Plan-then-generate: Controlled data-to-text generation via planning. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event \/ Punta Cana, Dominican Republic, 16-20 November, 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, 895\u2013909. 
https:\/\/aclanthology.org\/2021.findings-emnlp.76"},{"key":"e_1_3_1_133_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2019\/829"},{"key":"e_1_3_1_134_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1513"},{"key":"e_1_3_1_135_2","article-title":"LLaMA: Open and efficient foundation language models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et\u00a0al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).","journal-title":"arXiv preprint arXiv:2302.13971"},{"key":"e_1_3_1_136_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.251"},{"key":"e_1_3_1_137_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.csl.2020.101151"},{"key":"e_1_3_1_138_2","first-page":"5998","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998\u20136008. https:\/\/papers.nips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"e_1_3_1_139_2","article-title":"Generating lyrics with variational autoencoder and multi-modal artist embeddings","volume":"1812","author":"Vechtomova Olga","year":"2018","unstructured":"Olga Vechtomova, Hareesh Bahuleyan, Amirpasha Ghabussi, and Vineet John. 2018. Generating lyrics with variational autoencoder and multi-modal artist embeddings. CoRR abs\/1812.08318 (2018). arxiv:1812.08318. 
http:\/\/arxiv.org\/abs\/1812.08318","journal-title":"CoRR"},{"key":"e_1_3_1_140_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.dialdoc-1.3"},{"key":"e_1_3_1_141_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2018\/618"},{"key":"e_1_3_1_142_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.204"},{"key":"e_1_3_1_143_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1015"},{"key":"e_1_3_1_144_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.9"},{"key":"e_1_3_1_145_2","volume-title":"International Conference on Learning Representations","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=gEZrGCozdqR"},{"key":"e_1_3_1_146_2","doi-asserted-by":"publisher","DOI":"10.1145\/3357384.3357937"},{"key":"e_1_3_1_147_2","volume-title":"International Conference on Learning Representations","author":"Welleck Sean","year":"2019","unstructured":"Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. In International Conference on Learning Representations. https:\/\/arxiv.org\/abs\/1908.04319"},{"key":"e_1_3_1_148_2","first-page":"14085","volume-title":"35th AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021","author":"Wu Zeqiu","year":"2021","unstructured":"Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, and Bill Dolan. 2021. 
A controllable model of grounded response generation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 14085\u201314093. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17658"},{"key":"e_1_3_1_149_2","volume-title":"International Conference on Learning Representations","author":"Xie Sang Michael","year":"2022","unstructured":"Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit Bayesian inference. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=RdJVFCHjUMI"},{"key":"e_1_3_1_150_2","article-title":"Unsupervised controllable text generation with global variation discovery and disentanglement","volume":"1905","author":"Xu Peng","year":"2019","unstructured":"Peng Xu, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. Unsupervised controllable text generation with global variation discovery and disentanglement. CoRR abs\/1905.11975 (2019). arxiv:1905.11975. http:\/\/arxiv.org\/abs\/1905.11975","journal-title":"CoRR"},{"key":"e_1_3_1_151_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.226"},{"key":"e_1_3_1_152_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.276"},{"key":"e_1_3_1_153_2","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2204.13362"},{"key":"e_1_3_1_154_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1193"},{"key":"e_1_3_1_155_2","doi-asserted-by":"publisher","DOI":"10.5555\/3304222.3304401"},{"key":"e_1_3_1_156_2","volume-title":"Advances in Neural Information Processing Systems","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. 
XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf"},{"key":"e_1_3_1_157_2","unstructured":"Y. Zeldes, D. Padnos, O. Sharir, and B. Peleg. 2020. Technical report: Auxiliary tuning and its application to conditional text generation. (2020). https:\/\/arxiv.org\/abs\/2006.16823"},{"key":"e_1_3_1_158_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.392"},{"key":"e_1_3_1_159_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.emnlp-main.223"},{"key":"e_1_3_1_160_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2931036"},{"key":"e_1_3_1_161_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1205"},{"key":"e_1_3_1_162_2","volume-title":"8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020","author":"Zhang Tianyi","year":"2020","unstructured":"Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. 
https:\/\/openreview.net\/forum?id=SkeHuCVFDr"},{"key":"e_1_3_1_163_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-demos.30"},{"key":"e_1_3_1_164_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.698"},{"key":"e_1_3_1_165_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1139"},{"key":"e_1_3_1_166_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.224"},{"key":"e_1_3_1_167_2","volume-title":"5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings","author":"Zhao Junbo Jake","year":"2017","unstructured":"Junbo Jake Zhao, Micha\u00ebl Mathieu, and Yann LeCun. 2017. Energy-based generative adversarial networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. https:\/\/openreview.net\/forum?id=ryh9pmcee"},{"key":"e_1_3_1_168_2","first-page":"9693","volume-title":"The 34th AAAI Conference on Artificial Intelligence, AAAI 2020, The 32nd Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The 10th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020","author":"Zheng Yinhe","year":"2020","unstructured":"Yinhe Zheng, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. 2020. A pre-training based personalized dialogue generation model with persona-sparse data. In The 34th AAAI Conference on Artificial Intelligence, AAAI 2020, The 32nd Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The 10th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020. AAAI Press, 9693\u20139700. 
https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6518"},{"key":"e_1_3_1_169_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.531"},{"key":"e_1_3_1_170_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6521"},{"key":"e_1_3_1_171_2","doi-asserted-by":"publisher","DOI":"10.1145\/3209978.3210080"},{"key":"e_1_3_1_172_2","article-title":"Fine-tuning language models from human preferences","volume":"1909","author":"Ziegler Daniel M.","year":"2019","unstructured":"Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F. Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. CoRR abs\/1909.08593 (2019). arxiv:1909.08593. http:\/\/arxiv.org\/abs\/1909.08593","journal-title":"CoRR"},{"key":"e_1_3_1_173_2","article-title":"Controllable generation from pre-trained language models via inverse prompting","volume":"2103","author":"Zou Xu","year":"2021","unstructured":"Xu Zou, Da Yin, Qingyang Zhong, Hongxia Yang, Zhilin Yang, and Jie Tang. 2021. Controllable generation from pre-trained language models via inverse prompting. CoRR abs\/2103.10685 (2021). arxiv:2103.10685. 
https:\/\/arxiv.org\/abs\/2103.10685","journal-title":"CoRR"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3617680","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3617680","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:32Z","timestamp":1750178192000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3617680"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,6]]},"references-count":172,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,3,31]]}},"alternative-id":["10.1145\/3617680"],"URL":"https:\/\/doi.org\/10.1145\/3617680","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,6]]},"assertion":[{"value":"2022-01-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-21","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-10-06","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}