{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T00:49:24Z","timestamp":1776127764405,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":16,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,3,27]],"date-time":"2023-03-27T00:00:00Z","timestamp":1679875200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,3,27]]},"DOI":"10.1145\/3581754.3584136","type":"proceedings-article","created":{"date-parts":[[2023,3,26]],"date-time":"2023-03-26T22:12:25Z","timestamp":1679868745000},"page":"75-78","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":165,"title":["Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3368-0180","authenticated-orcid":false,"given":"Ziang","family":"Xiao","sequence":"first","affiliation":[{"name":"Microsoft Research, Canada and Johns Hopkins University, United States"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7660-0059","authenticated-orcid":false,"given":"Xingdi","family":"Yuan","sequence":"additional","affiliation":[{"name":"Microsoft Research, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4543-7196","authenticated-orcid":false,"given":"Q. 
Vera","family":"Liao","sequence":"additional","affiliation":[{"name":"Microsoft Research, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6361-6609","authenticated-orcid":false,"given":"Rania","family":"Abdelghani","sequence":"additional","affiliation":[{"name":"Inria, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1277-130X","authenticated-orcid":false,"given":"Pierre-Yves","family":"Oudeyer","sequence":"additional","affiliation":[{"name":"Inria, France"}]}],"member":"320","published-online":{"date-parts":[[2023,3,27]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"crossref","unstructured":"Rania Abdelghani, Pierre-Yves Oudeyer, Edith Law, Catherine de Vulpillieres, and H\u00e9lene Sauz\u00e9on. 2022. Conversational agents for fostering curiosity-driven learning in children. arXiv preprint arXiv:2204.03546(2022).","DOI":"10.31234\/osf.io\/qa9td"},{"key":"e_1_3_2_1_2_1","volume-title":"Advances in Neural Information Processing Systems, H.\u00a0Larochelle, M.\u00a0Ranzato, R.\u00a0Hadsell, M.F. Balcan, and H.\u00a0Lin (Eds.). Vol.\u00a033. Curran Associates","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared\u00a0D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, H.\u00a0Larochelle, M.\u00a0Ranzato, R.\u00a0Hadsell, M.F. Balcan, and H.\u00a0Lin (Eds.). Vol.\u00a033. Curran Associates, Inc., 1877\u20131901. https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"key":"e_1_3_2_1_3_1","volume-title":"Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311(2022).","author":"Chowdhery Aakanksha","year":"2022","unstructured":"Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung\u00a0Won Chung, Charles Sutton, Sebastian Gehrmann, 2022. Palm: Scaling language modeling with pathways. 
arXiv preprint arXiv:2204.02311(2022)."},{"key":"e_1_3_2_1_4_1","volume-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)","author":"Gao Tianyu","year":"2021","unstructured":"Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 3816\u20133830. https:\/\/doi.org\/10.18653\/v1\/2021.acl-long.295"},{"key":"e_1_3_2_1_5_1","volume-title":"Three approaches to qualitative content analysis. Qualitative health research 15, 9","author":"Hsieh Hsiu-Fang","year":"2005","unstructured":"Hsiu-Fang Hsieh and Sarah\u00a0E Shannon. 2005. Three approaches to qualitative content analysis. Qualitative health research 15, 9 (2005), 1277\u20131288."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41746-021-00464-x"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/W14-2513"},{"key":"e_1_3_2_1_8_1","unstructured":"Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586(2021)."},{"key":"e_1_3_2_1_9_1","volume-title":"Interrater reliability: the kappa statistic. Biochemia medica 22, 3","author":"McHugh L","year":"2012","unstructured":"Mary\u00a0L McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia medica 22, 3 (2012), 276\u2013282."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2957276.2957280"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2998181.2998363"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445591"},{"key":"e_1_3_2_1_13_1","unstructured":"William\u00a0W Wilen. 1991. Questioning skills for teachers. What research says to the teacher. (1991)."},{"key":"e_1_3_2_1_14_1","volume-title":"Wordcraft: Story Writing With Large Language Models. In 27th International Conference on Intelligent User Interfaces. 841\u2013852","author":"Yuan Ann","year":"2022","unstructured":"Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. 2022. Wordcraft: Story Writing With Large Language Models. In 27th International Conference on Intelligent User Interfaces. 841\u2013852."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Xingdi Yuan, Tong Wang, Yen-Hsiang Wang, Emery Fine, Rania Abdelghani, Pauline Lucas, H\u00e9l\u00e8ne Sauz\u00e9on, and Pierre-Yves Oudeyer. 2022. Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation. arXiv preprint arXiv:2209.11000(2022).","DOI":"10.18653\/v1\/2023.findings-acl.820"},{"key":"e_1_3_2_1_16_1","volume-title":"Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068(2022).","author":"Zhang Susan","year":"2022","unstructured":"Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi\u00a0Victoria Lin, 2022. Opt: Open pre-trained transformer language models. 
arXiv preprint arXiv:2205.01068(2022)."}],"event":{"name":"IUI '23: 28th International Conference on Intelligent User Interfaces","location":"Sydney NSW Australia","acronym":"IUI '23","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","SIGCHI ACM Special Interest Group on Computer-Human Interaction"]},"container-title":["28th International Conference on Intelligent User Interfaces"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3581754.3584136","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3581754.3584136","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:21Z","timestamp":1750178181000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3581754.3584136"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,27]]},"references-count":16,"alternative-id":["10.1145\/3581754.3584136","10.1145\/3581754"],"URL":"https:\/\/doi.org\/10.1145\/3581754.3584136","relation":{},"subject":[],"published":{"date-parts":[[2023,3,27]]},"assertion":[{"value":"2023-03-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}