{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,28]],"date-time":"2026-01-28T20:22:36Z","timestamp":1769631756171,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":12,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T00:00:00Z","timestamp":1603065600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100004663","name":"Ministry of Science and Technology, Taiwan","doi-asserted-by":"publisher","award":["108-2221-E-002-104-MY3"],"award-info":[{"award-number":["108-2221-E-002-104-MY3"]}],"id":[{"id":"10.13039\/501100004663","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,19]]},"DOI":"10.1145\/3340531.3418503","type":"proceedings-article","created":{"date-parts":[[2020,10,19]],"date-time":"2020-10-19T05:31:01Z","timestamp":1603085461000},"page":"3241-3244","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["Controlling Patent Text Generation by Structural Metadata"],"prefix":"10.1145","author":[{"given":"Jieh-Sheng","family":"Lee","sequence":"first","affiliation":[{"name":"National Taiwan University, Taipei, Taiwan Roc"}]}],"member":"320","published-online":{"date-parts":[[2020,10,19]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dtextquotesingle Alch\u00e9-Buc","author":"Guillaume Lample Alexis","unstructured":"Alexis CONNEAU and Guillaume Lample . 2019. Cross-lingual Language Model Pretraining . In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dtextquotesingle Alch\u00e9-Buc , E. Fox, and R. Garnett (Eds.). Curran Associates, Inc. , 7059--7069. http:\/\/papers.nips.cc\/paper\/8928-cross-lingual-language-model-pretraining.pdf Alexis CONNEAU and Guillaume Lample. 2019. Cross-lingual Language Model Pretraining. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dtextquotesingle Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 7059--7069. http:\/\/papers.nips.cc\/paper\/8928-cross-lingual-language-model-pretraining.pdf"},{"key":"e_1_3_2_1_2_1","volume-title":"Plug and Play Language Models: a Simple Approach to Controlled Text Generation. arXiv preprint arXiv:1912.02164","author":"Dathathri Sumanth","year":"2019","unstructured":"Sumanth Dathathri , Andrea Madotto , Janice Lan , Jane Hung , Eric Frank , Piero Molino , Jason Yosinski , and Rosanne Liu . 2019. Plug and Play Language Models: a Simple Approach to Controlled Text Generation. arXiv preprint arXiv:1912.02164 ( 2019 ). To appear in ICLR 2020. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and Play Language Models: a Simple Approach to Controlled Text Generation. arXiv preprint arXiv:1912.02164 (2019). 
To appear in ICLR 2020."},{"key":"e_1_3_2_1_3_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","volume":"1","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . 2019 . BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171--4186. https:\/\/doi.org\/10. 18653\/v1\/N19--1423 10.18653\/v1 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171--4186. https:\/\/doi.org\/10.18653\/v1\/N19--1423"},{"key":"e_1_3_2_1_4_1","volume-title":"CTRL: A Conditional Transformer Language Model for Controllable Generation. arxiv","author":"Keskar Nitish Shirish","year":"2019","unstructured":"Nitish Shirish Keskar , Bryan McCann , Lav R. Varshney , Caiming Xiong , and Richard Socher . 2019 . CTRL: A Conditional Transformer Language Model for Controllable Generation. arxiv : 1909.05858 [cs.CL] Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A Conditional Transformer Language Model for Controllable Generation. arxiv: 1909.05858 [cs.CL]"},{"key":"e_1_3_2_1_5_1","volume-title":"PatentTransformer: A Framework for Personalized Patent Claim Generation. arxiv","author":"Lee Jieh-Sheng","year":"1912","unstructured":"Jieh-Sheng Lee . 2019. PatentTransformer: A Framework for Personalized Patent Claim Generation. arxiv : 1912 .03502 [cs.CL] Presented at the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) and to appear in the CEUR Workshop Proceedings. Jieh-Sheng Lee. 2019. PatentTransformer: A Framework for Personalized Patent Claim Generation. arxiv: 1912.03502 [cs.CL] Presented at the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) and to appear in the CEUR Workshop Proceedings."},{"key":"e_1_3_2_1_6_1","volume-title":"Proceedings of the Thirteenth International Workshop on Juris-informatics (JURISIN)","author":"Lee Jieh-Sheng","year":"2019","unstructured":"Jieh-Sheng Lee and Jieh Hsiang . 2019 a. Measuring Patent Claim Generation by Span Relevancy . In Proceedings of the Thirteenth International Workshop on Juris-informatics (JURISIN) . Keio University Kanagawa, Japan. Jieh-Sheng Lee and Jieh Hsiang. 2019 a. Measuring Patent Claim Generation by Span Relevancy. In Proceedings of the Thirteenth International Workshop on Juris-informatics (JURISIN). Keio University Kanagawa, Japan."},{"key":"e_1_3_2_1_7_1","volume-title":"2019 b. Patent Claim Generation by Fine-Tuning OpenAI GPT-2.arxiv","author":"Lee Jieh-Sheng","year":"1907","unstructured":"Jieh-Sheng Lee and Jieh Hsiang . 2019 b. Patent Claim Generation by Fine-Tuning OpenAI GPT-2.arxiv : 1907 .02052v1 [cs.CL] Jieh-Sheng Lee and Jieh Hsiang. 2019 b. 
      {"key": "e_1_3_2_1_8_1", "volume-title": "PatentBERT: Patent Classification with Fine-Tuning a pre-trained BERT Model. arXiv", "author": "Lee Jieh-Sheng", "year": "2019", "unstructured": "Jieh-Sheng Lee and Jieh Hsiang. 2019c. PatentBERT: Patent Classification with Fine-Tuning a pre-trained BERT Model. arXiv:1906.02124 [cs.CL]"},
      {"key": "e_1_3_2_1_9_1", "unstructured": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A Robustly Optimized BERT Pretraining Approach. https://openreview.net/forum?id=SyxS0T4tvS"},
      {"key": "e_1_3_2_1_10_1", "unstructured": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners."},
      {"key": "e_1_3_2_1_11_1", "volume-title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv", "author": "Sanh Victor", "year": "2019", "unstructured": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108 [cs.CL]"},
      {"key": "e_1_3_2_1_12_1", "volume-title": "Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc", "author": "Yang Zhilin", "unstructured": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 5753-5763. http://papers.nips.cc/paper/8812-xlnet-generalized-autoregressive-pretraining-for-language-understanding.pdf"}
    ],
    "event": {"name": "CIKM '20: The 29th ACM International Conference on Information and Knowledge Management", "location": "Virtual Event, Ireland", "acronym": "CIKM '20", "sponsor": ["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web", "SIGIR ACM Special Interest Group on Information Retrieval"]},
    "container-title": ["Proceedings of the 29th ACM International Conference on Information & Knowledge Management"],
    "original-title": [],
    "link": [{"URL": "https://dl.acm.org/doi/10.1145/3340531.3418503", "content-type": "unspecified", "content-version": "vor", "intended-application": "text-mining"}, {"URL": "https://dl.acm.org/doi/pdf/10.1145/3340531.3418503", "content-type": "unspecified", "content-version": "vor", "intended-application": "similarity-checking"}],
    "deposited": {"date-parts": [[2025, 6, 17]], "date-time": "2025-06-17T22:02:42Z", "timestamp": 1750197762000},
    "score": 1,
    "resource": {"primary": {"URL": "https://dl.acm.org/doi/10.1145/3340531.3418503"}},
    "subtitle": [],
    "short-title": [],
    "issued": {"date-parts": [[2020, 10, 19]]},
    "references-count": 12,
    "alternative-id": ["10.1145/3340531.3418503", "10.1145/3340531"],
    "URL": "https://doi.org/10.1145/3340531.3418503",
    "relation": {},
    "subject": [],
    "published": {"date-parts": [[2020, 10, 19]]},
    "assertion": [{"value": "2020-10-19", "order": 2, "name": "published", "label": "Published", "group": {"name": "publication_history", "label": "Publication History"}}]
  }
}
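The record above is the JSON envelope returned by the Crossref REST API for this DOI. As a minimal sketch, assuming network access and only the Python standard library, the snippet below fetches the same record from the public endpoint https://api.crossref.org/works/{doi} and reads a few of the fields shown above; the key names ("message", "title", "author", "issued", and so on) mirror this record, and the variable names are illustrative.

import json
import urllib.request

# DOI of the work above; the Crossref REST API serves its record at /works/{doi}.
DOI = "10.1145/3340531.3418503"
url = f"https://api.crossref.org/works/{DOI}"

with urllib.request.urlopen(url) as resp:
    # The bibliographic payload sits under the "message" key, as in the record above.
    work = json.load(resp)["message"]

title = work["title"][0]
authors = [f'{a.get("given", "")} {a.get("family", "")}'.strip() for a in work.get("author", [])]
year = work["issued"]["date-parts"][0][0]
venue = work.get("container-title", [""])[0]

print(f'{", ".join(authors)} ({year}). {title}. {venue}.')
print(f'DOI: {work["DOI"]}, deposited references: {work.get("references-count", 0)}')

For this record the sketch would print the single author Jieh-Sheng Lee, the 2020 publication year, the CIKM '20 proceedings title, and the count of 12 deposited references.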