{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,8]],"date-time":"2026-02-08T08:54:54Z","timestamp":1770540894629,"version":"3.49.0"},"reference-count":91,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T00:00:00Z","timestamp":1717545600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Interact. Intell. Syst."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>\n            Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a\n            <jats:italic>tree-in-the-loop<\/jats:italic>\n            approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs. To support these tasks, we present generAItor, a visual analytics technique, augmenting the central beam search tree with various task-specific widgets, providing targeted visualizations and interaction possibilities. Our approach allows interactions on multiple levels and offers an iterative pipeline that encompasses generating, exploring, and comparing output candidates, as well as fine-tuning the model based on adapted data. Our case study shows that our tool generates new insights in gender bias analysis beyond state-of-the-art template-based methods. Additionally, we demonstrate the applicability of our approach in a qualitative user study. 
Finally, we quantitatively evaluate the adaptability of the model to few samples, as occurring in text-generation use cases.\n          <\/jats:p>","DOI":"10.1145\/3652028","type":"journal-article","created":{"date-parts":[[2024,3,14]],"date-time":"2024-03-14T12:23:44Z","timestamp":1710419024000},"page":"1-32","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["generAItor: Tree-in-the-loop Text Generation for Language Model Explainability and Adaptation"],"prefix":"10.1145","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1168-1804","authenticated-orcid":false,"given":"Thilo","family":"Spinner","sequence":"first","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0095-5865","authenticated-orcid":false,"given":"Rebecca","family":"Kehlbeck","sequence":"additional","affiliation":[{"name":"University of Konstanz, Konstanz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2629-9579","authenticated-orcid":false,"given":"Rita","family":"Sevastjanova","sequence":"additional","affiliation":[{"name":"ETH Zurich, Zurich, Switzerland"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5983-8807","authenticated-orcid":false,"given":"Tobias","family":"St\u00e4hle","sequence":"additional","affiliation":[{"name":"University of Konstanz, Konstanz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7966-9740","authenticated-orcid":false,"given":"Daniel A.","family":"Keim","sequence":"additional","affiliation":[{"name":"University of Konstanz, Konstanz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5803-2185","authenticated-orcid":false,"given":"Oliver","family":"Deussen","sequence":"additional","affiliation":[{"name":"University of Konstanz, Konstanz, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8526-2613","authenticated-orcid":false,"given":"Mennatallah","family":"El-Assady","sequence":"additional","affiliation":[{"name":"ETH Zurich, 
Zurich, Switzerland"}]}],"member":"320","published-online":{"date-parts":[[2024,6,5]]},"reference":[{"key":"e_1_3_4_2_1","article-title":"OpenAI chatbot spits out biased musings, despite guardrails","author":"Alba Davey","year":"2022","unstructured":"Davey Alba. 2022. OpenAI chatbot spits out biased musings, despite guardrails. Bloomberg. Retrieved from https:\/\/www.bloomberg.com\/news\/newsletters\/2022-12-08\/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results","journal-title":"Bloomberg"},{"key":"e_1_3_4_3_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.naacl-main.203"},{"key":"e_1_3_4_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/HICSS.2011.339"},{"key":"e_1_3_4_5_1","unstructured":"Dzmitry Bahdanau Kyunghyun Cho and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arxiv:1409.0473"},{"key":"e_1_3_4_6_1","article-title":"A neural probabilistic language model","volume":"13","author":"Bengio Yoshua","year":"2000","unstructured":"Yoshua Bengio, R\u00e9jean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. Adv. Neural Inf. Process. Syst. 13 (2000).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"e_1_3_4_7_1","first-page":"5454","volume-title":"Proceedings of the Association for Computational Linguistics","author":"Blodgett Su Lin","year":"2020","unstructured":"Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \u201cBias\u201d in NLP. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics, 5454\u20135476."},{"key":"e_1_3_4_8_1","unstructured":"Tom B. Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel M. 
Ziegler Jeffrey Wu Clemens Winter Christopher Hesse Mark Chen Eric Sigler Mateusz Litwin Scott Gray Benjamin Chess Jack Clark Christopher Berner Sam McCandlish Alec Radford Ilya Sutskever and Dario Amodei. 2020. Language models are few-shot learners. arxiv:2005.14165"},{"key":"e_1_3_4_9_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.aal4230"},{"key":"e_1_3_4_10_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/E17-2036"},{"key":"e_1_3_4_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2019.09.013"},{"key":"e_1_3_4_12_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.373"},{"key":"e_1_3_4_13_1","article-title":"RedPajama: An Open Source Recipe to Reproduce LLaMA Training Dataset","author":"Computer Together","year":"2023","unstructured":"Together Computer. 2023. RedPajama: An Open Source Recipe to Reproduce LLaMA Training Dataset. Retrieved from https:\/\/github.com\/togethercomputer\/RedPajama-Data"},{"key":"e_1_3_4_14_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.291"},{"key":"e_1_3_4_15_1","doi-asserted-by":"publisher","DOI":"10.1515\/9783110874006"},{"key":"e_1_3_4_16_1","first-page":"447","volume-title":"Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing","author":"Danilevsky Marina","year":"2020","unstructured":"Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 
Association for Computational Linguistics, 447\u2013459."},{"key":"e_1_3_4_17_1","unstructured":"Sumanth Dathathri Andrea Madotto Janice Lan Jane Hung Eric Frank Piero Molino Jason Yosinski and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. Retrieved from https:\/\/arxiv.org\/abs\/1912.02164"},{"key":"e_1_3_4_18_1","unstructured":"Deep NLP. 2023. Bias in NLP. Retrieved from https:\/\/github.com\/cisnlp\/bias-in-nlp"},{"key":"e_1_3_4_19_1","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arxiv:1810.04805"},{"key":"e_1_3_4_20_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.in2writing-1.14"},{"key":"e_1_3_4_21_1","volume-title":"Proceedings of the ACM CHI Workshop: Human-centered Machine Learning Perspectives","author":"El-Assady M.","year":"2019","unstructured":"M. El-Assady, W. Jentner, R. Kehlbeck, U. Schlegel, R. Sevastjanova, F. Sperrle, T. Spinner, and D. Keim. 2019. Towards XAI: Structuring the processes of explanations. In Proceedings of the ACM CHI Workshop: Human-centered Machine Learning Perspectives."},{"key":"e_1_3_4_22_1","article-title":"Semantic color mapping: A pipeline for assigning meaningful colors to text","author":"El-Assady Mennatallah","year":"2022","unstructured":"Mennatallah El-Assady, Rebecca Kehlbeck, Yannick Metz, Udo Schlegel, Rita Sevastjanova, Fabian Sperrle, and Thilo Spinner. 2022. Semantic color mapping: A pipeline for assigning meaningful colors to text. 
In Proceedings of the 4th IEEE Workshop on Visualization Guidelines in Research, Design, and Education.","journal-title":"Proceedings of the 4th IEEE Workshop on Visualization Guidelines in Research, Design, and Education"},{"key":"e_1_3_4_23_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13425"},{"key":"e_1_3_4_24_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1006"},{"key":"e_1_3_4_25_1","doi-asserted-by":"publisher","DOI":"10.3390\/app11073184"},{"key":"e_1_3_4_26_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.5477"},{"key":"e_1_3_4_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2934595"},{"key":"e_1_3_4_28_1","doi-asserted-by":"publisher","DOI":"10.1177\/00222437211037258"},{"key":"e_1_3_4_29_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.681"},{"key":"e_1_3_4_30_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1031"},{"key":"e_1_3_4_31_1","first-page":"1587","volume-title":"Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research)","volume":"70","author":"Hu Zhiting","year":"2017","unstructured":"Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research), Doina Precup and Yee Whye Teh (Eds.), Vol. 70. 
PMLR, 1587\u20131596."},{"key":"e_1_3_4_32_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.57"},{"key":"e_1_3_4_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394486.3403244"},{"key":"e_1_3_4_34_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.2016299"},{"key":"e_1_3_4_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_4_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3485447.3511935"},{"key":"e_1_3_4_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2019.2921572"},{"key":"e_1_3_4_38_1","first-page":"3074","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics.","author":"Kalouli Aikaterini-Lida","year":"2022","unstructured":"Aikaterini-Lida Kalouli, Rita Sevastjanova, Christin Beck, and Maribel Romero. 2022. Negation, coordination, and quantifiers in contextualized language models. In Proceedings of the 29th International Conference on Computational Linguistics.International Committee on Computational Linguistics, 3074\u20133085."},{"key":"e_1_3_4_39_1","article-title":"Demystifying the embedding space of language models","author":"Kehlbeck Rebecca","year":"2021","unstructured":"Rebecca Kehlbeck, Rita Sevastjanova, Thilo Spinner, Tobias St\u00e4hle, and Mennatallah El-Assady. 2021. Demystifying the embedding space of language models. In Proceedings of the Workshop on Visualization for AI Explainability (VISxAI\u201921). Retrieved from https:\/\/bert-vs-gpt2.dbvis.de\/","journal-title":"Proceedings of the Workshop on Visualization for AI Explainability (VISxAI\u201921)"},{"key":"e_1_3_4_40_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-emnlp.411"},{"key":"e_1_3_4_41_1","unstructured":"Yann LeCun. 2023. Do Language Models Need Sensory Grounding for Meaning and Understanding? 
Retrieved from https:\/\/drive.google.com\/file\/d\/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi"},{"key":"e_1_3_4_42_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D17-2021"},{"key":"e_1_3_4_43_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"e_1_3_4_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2014.2346248"},{"key":"e_1_3_4_45_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/612"},{"key":"e_1_3_4_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3534678.3539188"},{"key":"e_1_3_4_47_1","first-page":"6565","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Liang Paul Pu","year":"2021","unstructured":"Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In Proceedings of the International Conference on Machine Learning. PMLR, 6565\u20136576."},{"key":"e_1_3_4_48_1","article-title":"Fixing weight decay regularization in adam","volume":"1711","author":"Loshchilov Ilya","year":"2017","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR abs\/1711.05101 (2017).","journal-title":"CoRR"},{"key":"e_1_3_4_49_1","first-page":"189","article-title":"Gender bias in neural natural language processing","author":"Lu Kaiji","year":"2020","unstructured":"Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural language processing. 
Logic, Language, and Security: Essays Dedicated to Andre Scedrov on the Occasion of His 65th Birthday (2020), Springer International Publishing, 189\u2013202.","journal-title":"Logic, Language, and Security: Essays Dedicated to Andre Scedrov on the Occasion of His 65th Birthday"},{"key":"e_1_3_4_50_1","doi-asserted-by":"publisher","DOI":"10.5555\/2002472.2002491"},{"key":"e_1_3_4_51_1","doi-asserted-by":"publisher","DOI":"10.21105\/joss.00861"},{"key":"e_1_3_4_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3457607"},{"key":"e_1_3_4_53_1","article-title":"The new chatbots could change the world. Can you trust them?","author":"Metz Cade","year":"2022","unstructured":"Cade Metz. 2022. The new chatbots could change the world. Can you trust them? New York Times. Retrieved from https:\/\/www.nytimes.com\/2022\/12\/10\/technology\/ai-chat-bot-chatgpt.html","journal-title":"New York Times"},{"key":"e_1_3_4_54_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.acl-long.244"},{"key":"e_1_3_4_55_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00179"},{"key":"e_1_3_4_56_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.416"},{"key":"e_1_3_4_57_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2012.07.001"},{"key":"e_1_3_4_58_1","unstructured":"OpenAI. 2023. GPT-4 Technical Report. (2023). arxiv:2303.08774"},{"key":"e_1_3_4_59_1","first-page":"27730","volume-title":"Advances in Neural Information Processing Systems","author":"Ouyang Long","year":"2022","unstructured":"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems. S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. 
Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 27730\u201327744."},{"key":"e_1_3_4_60_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.naacl-main.42"},{"key":"e_1_3_4_61_1","article-title":"A deep reinforced model for abstractive summarization","volume":"1705","author":"Paulus Romain","year":"2017","unstructured":"Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. CoRR abs\/1705.04304 (2017).","journal-title":"CoRR"},{"key":"e_1_3_4_62_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.58"},{"key":"e_1_3_4_63_1","article-title":"Better Language Models and Their Implications","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, and Ilya Sutskever. 2019a. Better Language Models and Their Implications. Retrieved from https:\/\/openai.com\/blog\/better-language-models\/"},{"key":"e_1_3_4_64_1","unstructured":"Alec Radford Jeff Wu Rewon Child David Luan Dario Amodei and Ilya Sutskever. 2019b. Language models are unsupervised multitask learners. OpenAI blog 1 8 (2019) 9 Pages."},{"key":"e_1_3_4_65_1","first-page":"8594","volume-title":"Advances in Neural Information Processing Systems","author":"Reif Emily","year":"2019","unstructured":"Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d\u2019Alch\u00e9 Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 8594\u20138603."},{"key":"e_1_3_4_66_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00349"},{"key":"e_1_3_4_67_1","article-title":"How chatbots and large language models, or LLMs, actually work","author":"Roose Kevin","year":"2023","unstructured":"Kevin Roose. 2023. 
How chatbots and large language models, or LLMs, actually work. New York Times. Retrieved from https:\/\/www.nytimes.com\/2023\/03\/28\/technology\/ai-chatbots-chatgpt-bing-bard-llm.html","journal-title":"New York Times"},{"issue":"6088","key":"e_1_3_4_68_1","first-page":"533","article-title":"Learning representations by back-propagating errors","volume":"323","author":"Rumelhart David E.","year":"1986","unstructured":"David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back-propagating errors. Nature 323, 6088 (1986), 533\u2013536.","journal-title":"Nature"},{"key":"e_1_3_4_69_1","unstructured":"Teven Le Scao Angela Fan Christopher Akiki Ellie Pavlick Suzana Ili\u0107 Daniel Hesslow Roman Castagn\u00e9 Alexandra Sasha Luccioni Fran\u00e7ois Yvon Matthias Gall\u00e9 et\u00a0al. 2023. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. (2023). arxiv:2211.05100"},{"key":"e_1_3_4_70_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.285"},{"key":"e_1_3_4_71_1","article-title":"Beware the rationalization trap! When language model explainability diverges from our mental models of language","volume":"2207","author":"Sevastjanova Rita","year":"2022","unstructured":"Rita Sevastjanova and Mennatallah El-Assady. 2022. Beware the rationalization trap! When language model explainability diverges from our mental models of language. In Proceedings of the Communication in Human-AI Interaction Workshop at IJCAI-ECAI. abs\/2207.06897 (2022).","journal-title":"Proceedings of the Communication in Human-AI Interaction Workshop at IJCAI-ECAI."},{"key":"e_1_3_4_72_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14541"},{"key":"e_1_3_4_73_1","unstructured":"Thilo Spinner Rebecca Kehlbeck Rita Sevastjanova Tobias St\u00e4hle Daniel A. Keim Oliver Deussen Andreas Spitz and Mennatallah El-Assady. 2023. 
Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges. arxiv:2310.11252 (2023)."},{"issue":"1","key":"e_1_3_4_74_1","article-title":"explAIner: A visual analytics framework for interactive and explainable machine learning","volume":"26","author":"Spinner Thilo","year":"2020","unstructured":"Thilo Spinner, Udo Schlegel, Hanna Sch\u00e4fer, and Mennatallah El-Assady. 2020. explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE Trans. Visualiz. Comput. Graph. 26, 1 (2020).","journal-title":"IEEE Trans. Visualiz. Comput. Graph."},{"key":"e_1_3_4_75_1","volume-title":"Proceedings of the Computer Graphics, Visualization & Vision Conference (WSCG\u201915)","author":"Steiger Martin","year":"2015","unstructured":"Martin Steiger, J. Bernard, Simon Thum, Sebastian Mittelst\u00e4dt, Marco Hutter, Daniel A. Keim, and J\u00f6rn Kohlhammer. 2015. Explorative analysis of 2D color maps. In Proceedings of the Computer Graphics, Visualization & Vision Conference (WSCG\u201915)."},{"key":"e_1_3_4_76_1","volume-title":"Cognitive Psychology","author":"Sternberg Robert J.","year":"2016","unstructured":"Robert J. Sternberg and Karin Sternberg. 2016. Cognitive Psychology. Nelson Education."},{"key":"e_1_3_4_77_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2018.2865044"},{"key":"e_1_3_4_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2021.3114845"},{"key":"e_1_3_4_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE53745.2022.00093"},{"key":"e_1_3_4_80_1","doi-asserted-by":"publisher","DOI":"10.1002\/joc.2153"},{"key":"e_1_3_4_81_1","unstructured":"Ashish Vaswani Noam Shazeer Niki Parmar Jakob Uszkoreit Llion Jones Aidan N. Gomez Lukasz Kaiser and Illia Polosukhin. 2017. Attention is all you need. arxiv:1706.03762 (2017)."},{"key":"e_1_3_4_82_1","unstructured":"Patrick von Platen. 2020. 
How to Generate Text: Using Different Decoding Methods for Language Generation with Transformers. Retrieved from https:\/\/huggingface.co\/blog\/how-to-generate"},{"key":"e_1_3_4_83_1","volume-title":"Proceedings of KONVENS","author":"Wiedemann Gregor","year":"2019","unstructured":"Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings. In Proceedings of KONVENS."},{"key":"e_1_3_4_84_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1101"},{"key":"e_1_3_4_85_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-demos.6"},{"key":"e_1_3_4_86_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-acl.96"},{"key":"e_1_3_4_87_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2022\/615"},{"key":"e_1_3_4_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/3512467"},{"key":"e_1_3_4_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490099.3511105"},{"key":"e_1_3_4_90_1","doi-asserted-by":"publisher","DOI":"10.1145\/3219819.3220064"},{"key":"e_1_3_4_91_1","unstructured":"Hanqing Zhang Haolin Song Shaoyu Li Ming Zhou and Dawei Song. 2022. A survey of controllable text generation using transformer-based pre-trained language models. 
arxiv:2201.05337 (2022)."},{"key":"e_1_3_4_92_1","doi-asserted-by":"publisher","DOI":"10.1145\/3639372"}],"container-title":["ACM Transactions on Interactive Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3652028","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3652028","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:03:12Z","timestamp":1750291392000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3652028"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,5]]},"references-count":91,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3652028"],"URL":"https:\/\/doi.org\/10.1145\/3652028","relation":{},"ISSN":["2160-6455","2160-6463"],"issn-type":[{"value":"2160-6455","type":"print"},{"value":"2160-6463","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,5]]},"assertion":[{"value":"2023-07-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-30","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-05","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}