{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,11]],"date-time":"2025-12-11T21:02:52Z","timestamp":1765486972841,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":44,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,10,21]],"date-time":"2023-10-21T00:00:00Z","timestamp":1697846400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"EU Horizon 2020 ITN\/ETN on Domain Specific Systems for Information Extraction and Retrieval","award":["H2020-EU.1.3.1., ID: 860721"],"award-info":[{"award-number":["H2020-EU.1.3.1., ID: 860721"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,10,21]]},"DOI":"10.1145\/3583780.3615111","type":"proceedings-article","created":{"date-parts":[[2023,10,21]],"date-time":"2023-10-21T07:45:42Z","timestamp":1697874342000},"page":"5311-5315","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["A Test Collection of Synthetic Documents for Training Rankers: ChatGPT vs. 
Human Experts"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4712-832X","authenticated-orcid":false,"given":"Arian","family":"Askari","sequence":"first","affiliation":[{"name":"Leiden Institute of Advanced Computer Science, Leiden University, Leiden, Netherlands"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9447-4172","authenticated-orcid":false,"given":"Mohammad","family":"Aliannejadi","sequence":"additional","affiliation":[{"name":"University of Amsterdam, Amsterdam, Netherlands"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8312-0694","authenticated-orcid":false,"given":"Evangelos","family":"Kanoulas","sequence":"additional","affiliation":[{"name":"University of Amsterdam, Amsterdam, Netherlands"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9609-9505","authenticated-orcid":false,"given":"Suzan","family":"Verberne","sequence":"additional","affiliation":[{"name":"LIACS, Leiden University, Leiden, Netherlands"}]}],"member":"320","published-online":{"date-parts":[[2023,10,21]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Retrievability Bias Estimation Using Synthetically Generated Queries. In The 32nd ACM International Conference on Information and Knowledge Management (CIKM","author":"Abolghasemi Amin","year":"2023","unstructured":"Amin Abolghasemi , Suzan Verberne , Arian Askari , and Leif Azzopardi . 2023 . Retrievability Bias Estimation Using Synthetically Generated Queries. In The 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023). ACM. Amin Abolghasemi, Suzan Verberne, Arian Askari, and Leif Azzopardi. 2023. Retrievability Bias Estimation Using Synthetically Generated Queries. In The 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023). ACM."},{"volume-title":"Advances in Information Retrieval","author":"Askari Arian","key":"e_1_3_2_1_2_1","unstructured":"Arian Askari , Amin Abolghasemi , Gabriella Pasi , Wessel Kraaij , and Suzan Verberne . 2023. 
Injecting the BM25 Score as Text Improves BERT-Based Re-rankers . In Advances in Information Retrieval . Springer Nature Switzerland , Cham , 66--83. Arian Askari, Amin Abolghasemi, Gabriella Pasi, Wessel Kraaij, and Suzan Verberne. 2023. Injecting the BM25 Score as Text Improves BERT-Based Re-rankers. In Advances in Information Retrieval. Springer Nature Switzerland, Cham, 66--83."},{"key":"e_1_3_2_1_3_1","volume-title":"Inpars: Data augmentation for information retrieval using large language models. arXiv preprint arXiv:2202.05144","author":"Bonifacio Luiz","year":"2022","unstructured":"Luiz Bonifacio , Hugo Abonizio , Marzieh Fadaee , and Rodrigo Nogueira . 2022 . Inpars: Data augmentation for information retrieval using large language models. arXiv preprint arXiv:2202.05144 (2022). Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. Inpars: Data augmentation for information retrieval using large language models. arXiv preprint arXiv:2202.05144 (2022)."},{"key":"e_1_3_2_1_4_1","volume-title":"InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers. arXiv preprint arXiv:2301.02998","author":"Boytsov Leonid","year":"2023","unstructured":"Leonid Boytsov , Preksha Patel , Vivek Sourabh , Riddhi Nisar , Sayani Kundu , Ramya Ramanathan , and Eric Nyberg . 2023. InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers. arXiv preprint arXiv:2301.02998 ( 2023 ). Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, and Eric Nyberg. 2023. InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers. arXiv preprint arXiv:2301.02998 (2023)."},{"key":"e_1_3_2_1_5_1","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell etal 2020. Language models are few-shot learners. Advances in neural information processing systems Vol. 33 (2020) 1877--1901. 
Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al. 2020. Language models are few-shot learners. Advances in neural information processing systems Vol. 33 (2020) 1877--1901."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10916-023-01925-4"},{"key":"e_1_3_2_1_7_1","unstructured":"Shu Chen Zeqian Ju Xiangyu Dong Hongchao Fang Sicheng Wang Yue Yang Jiaqi Zeng Ruisi Zhang Ruoyu Zhang Meng Zhou etal 2020. Meddialog: a large-scale medical dialogue dataset. arXiv preprint arXiv:2004.03329 (2020). Shu Chen Zeqian Ju Xiangyu Dong Hongchao Fang Sicheng Wang Yue Yang Jiaqi Zeng Ruisi Zhang Ruoyu Zhang Meng Zhou et al. 2020. Meddialog: a large-scale medical dialogue dataset. arXiv preprint arXiv:2004.03329 (2020)."},{"key":"e_1_3_2_1_8_1","volume-title":"Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102","author":"Craswell Nick","year":"2021","unstructured":"Nick Craswell , Bhaskar Mitra , Emine Yilmaz , and Daniel Campos . 2021 . Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102 .07662 (2021). Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102.07662 (2021)."},{"key":"e_1_3_2_1_9_1","volume-title":"Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003","author":"Craswell Nick","year":"2020","unstructured":"Nick Craswell , Bhaskar Mitra , Emine Yilmaz , Daniel Campos , and Ellen M Voorhees . 2020 . Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003 .07820 (2020). Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820 (2020)."},{"key":"e_1_3_2_1_10_1","volume-title":"Promptagator: Few-shot dense retrieval from 8 examples. 
arXiv preprint arXiv:2209.11755","author":"Dai Zhuyun","year":"2022","unstructured":"Zhuyun Dai , Vincent Y Zhao , Ji Ma , Yi Luan , Jianmo Ni , Jing Lu , Anton Bakalov , Kelvin Guu , Keith B Hall , and Ming-Wei Chang . 2022 . Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755 (2022). Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755 (2022)."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Guglielmo Faggioli Laura Dietz Charles Clarke Gianluca Demartini Matthias Hagen Claudia Hauff Noriko Kando Evangelos Kanoulas Martin Potthast Benno Stein etal 2023. Perspectives on Large Language Models for Relevance Judgment. arXiv preprint arXiv:2304.09161 (2023). Guglielmo Faggioli Laura Dietz Charles Clarke Gianluca Demartini Matthias Hagen Claudia Hauff Noriko Kando Evangelos Kanoulas Martin Potthast Benno Stein et al. 2023. Perspectives on Large Language Models for Relevance Judgment. arXiv preprint arXiv:2304.09161 (2023).","DOI":"10.1145\/3578337.3605136"},{"key":"e_1_3_2_1_12_1","volume-title":"ELI5: Long form question answering. arXiv preprint arXiv:1907.09190","author":"Fan Angela","year":"2019","unstructured":"Angela Fan , Yacine Jernite , Ethan Perez , David Grangier , Jason Weston , and Michael Auli . 2019. ELI5: Long form question answering. arXiv preprint arXiv:1907.09190 ( 2019 ). Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. arXiv preprint arXiv:1907.09190 (2019)."},{"key":"e_1_3_2_1_13_1","volume-title":"Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. 
arXiv preprint arXiv:2301.04246","author":"Goldstein Josh A","year":"2023","unstructured":"Josh A Goldstein , Girish Sastry , Micah Musser , Renee DiResta , Matthew Gentzel , and Katerina Sedova . 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv preprint arXiv:2301.04246 ( 2023 ). Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv preprint arXiv:2301.04246 (2023)."},{"key":"e_1_3_2_1_14_1","volume-title":"How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597","author":"Guo Biyang","year":"2023","unstructured":"Biyang Guo , Xin Zhang , Ziyuan Wang , Minqi Jiang , Jinran Nie , Yuxuan Ding , Jianwei Yue , and Yupeng Wu. 2023. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597 ( 2023 ). Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597 (2023)."},{"key":"e_1_3_2_1_15_1","volume-title":"Improving efficient neural ranking models with cross-architecture knowledge distillation. arXiv preprint arXiv:2010.02666","author":"Hofst\u00e4tter Sebastian","year":"2020","unstructured":"Sebastian Hofst\u00e4tter, Sophia Althammer , Michael Schr\u00f6der , Mete Sertkan , and Allan Hanbury . 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation. arXiv preprint arXiv:2010.02666 ( 2020 ). Sebastian Hofst\u00e4tter, Sophia Althammer, Michael Schr\u00f6der, Mete Sertkan, and Allan Hanbury. 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation.
arXiv preprint arXiv:2010.02666 (2020)."},{"key":"e_1_3_2_1_16_1","volume-title":"InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval. arXiv preprint arXiv:2301.01820","author":"Jeronymo Vitor","year":"2023","unstructured":"Vitor Jeronymo , Luiz Bonifacio , Hugo Abonizio , Marzieh Fadaee , Roberto Lotufo , Jakub Zavrel , and Rodrigo Nogueira . 2023. InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval. arXiv preprint arXiv:2301.01820 ( 2023 ). Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, and Rodrigo Nogueira. 2023. InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval. arXiv preprint arXiv:2301.01820 (2023)."},{"key":"e_1_3_2_1_17_1","first-page":"372","volume-title":"TinyBERT: Distilling BERT for Natural Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020","author":"Jiao Xiaoqi","year":"2020","unstructured":"Xiaoqi Jiao , Yichun Yin , Lifeng Shang , Xin Jiang , Xiao Chen , Linlin Li , Fang Wang , and Qun Liu . 2020 . TinyBERT: Distilling BERT for Natural Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020 . Association for Computational Linguistics, Online, 4163--4174. https:\/\/doi.org\/10. 18653\/v1\/2020.findings-emnlp. 372 10.18653\/v1 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for Natural Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, Online, 4163--4174. https:\/\/doi.org\/10.18653\/v1\/2020.findings-emnlp.372"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401075"},{"key":"e_1_3_2_1_19_1","volume-title":"Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba . 2014 . Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.2200\/S01123ED1V01Y202108HLT053"},{"key":"e_1_3_2_1_21_1","volume-title":"Explain like I am BM25: Interpreting a Dense Model's Ranked-List with a Sparse Approximation. arXiv preprint arXiv:2304.12631","author":"Llordes Michael","year":"2023","unstructured":"Michael Llordes , Debasis Ganguly , Sumit Bhatia , and Chirag Agarwal . 2023. Explain like I am BM25: Interpreting a Dense Model's Ranked-List with a Sparse Approximation. arXiv preprint arXiv:2304.12631 ( 2023 ). Michael Llordes, Debasis Ganguly, Sumit Bhatia, and Chirag Agarwal. 2023. Explain like I am BM25: Interpreting a Dense Model's Ranked-List with a Sparse Approximation. arXiv preprint arXiv:2304.12631 (2023)."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401262"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3184558.3192301"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3563359.3597399"},{"key":"e_1_3_2_1_25_1","volume-title":"MS MARCO: A human generated machine reading comprehension dataset. In CoCo@ NIPs.","author":"Nguyen Tri","year":"2016","unstructured":"Tri Nguyen , Mir Rosenberg , Xia Song , Jianfeng Gao , Saurabh Tiwary , Rangan Majumder , and Li Deng . 2016 . MS MARCO: A human generated machine reading comprehension dataset. In CoCo@ NIPs. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. 
In CoCo@ NIPs."},{"key":"e_1_3_2_1_26_1","volume-title":"Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713","author":"Nogueira Rodrigo","year":"2020","unstructured":"Rodrigo Nogueira , Zhiying Jiang , and Jimmy Lin . 2020. Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713 ( 2020 ). Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713 (2020)."},{"key":"e_1_3_2_1_27_1","unstructured":"Adam Paszke Sam Gross Soumith Chintala Gregory Chanan Edward Yang Zachary DeVito Zeming Lin Alban Desmaison Luca Antiga and Adam Lerer. 2017. Automatic differentiation in pytorch. (2017). Adam Paszke Sam Gross Soumith Chintala Gregory Chanan Edward Yang Zachary DeVito Zeming Lin Alban Desmaison Luca Antiga and Adam Lerer. 2017. Automatic differentiation in pytorch. (2017)."},{"key":"e_1_3_2_1_28_1","volume-title":"Towards Making the Most of ChatGPT for Machine Translation. arXiv preprint arXiv:2303.13780","author":"Peng Keqin","year":"2023","unstructured":"Keqin Peng , Liang Ding , Qihuang Zhong , Li Shen , Xuebo Liu , Min Zhang , Yuanxin Ouyang , and Dacheng Tao . 2023. Towards Making the Most of ChatGPT for Machine Translation. arXiv preprint arXiv:2303.13780 ( 2023 ). Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao. 2023. Towards Making the Most of ChatGPT for Machine Translation. arXiv preprint arXiv:2303.13780 (2023)."},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4471-2099-5_24"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.52225\/narra.v3i1.103"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2012.2189916"},{"key":"e_1_3_2_1_32_1","volume-title":"Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. 
arXiv preprint arXiv:2304.09542","author":"Sun Weiwei","year":"2023","unstructured":"Weiwei Sun , Lingyong Yan , Xinyu Ma , Pengjie Ren , Dawei Yin , and Zhaochun Ren . 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 ( 2023 ). Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 (2023)."},{"key":"e_1_3_2_1_33_1","volume-title":"A short survey of viewing large language models in legal aspect. arXiv preprint arXiv:2303.09136","author":"Sun Zhongxiang","year":"2023","unstructured":"Zhongxiang Sun . 2023. A short survey of viewing large language models in legal aspect. arXiv preprint arXiv:2303.09136 ( 2023 ). Zhongxiang Sun. 2023. A short survey of viewing large language models in legal aspect. arXiv preprint arXiv:2303.09136 (2023)."},{"key":"e_1_3_2_1_34_1","volume-title":"Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature. arXiv preprint arXiv:2302.06474","author":"Susnjak Teo","year":"2023","unstructured":"Teo Susnjak . 2023. Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature. arXiv preprint arXiv:2302.06474 ( 2023 ). Teo Susnjak. 2023. Applying BERT and ChatGPT for Sentiment Analysis of Lyme Disease in Scientific Literature. arXiv preprint arXiv:2302.06474 (2023)."},{"key":"e_1_3_2_1_35_1","unstructured":"Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https:\/\/github.com\/kingoflolz\/mesh-transformer-jax. Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https:\/\/github.com\/kingoflolz\/mesh-transformer-jax."},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"crossref","unstructured":"Liang Wang Nan Yang and Furu Wei. 2023 b. 
Query2doc: Query Expansion with Large Language Models. arxiv: 2303.07678 [cs.IR] Liang Wang Nan Yang and Furu Wei. 2023 b. Query2doc: Query Expansion with Large Language Models. arxiv: 2303.07678 [cs.IR]","DOI":"10.18653\/v1\/2023.emnlp-main.585"},{"key":"e_1_3_2_1_37_1","first-page":"5776","article-title":"Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers","volume":"33","author":"Wang Wenhui","year":"2020","unstructured":"Wenhui Wang , Furu Wei , Li Dong , Hangbo Bao , Nan Yang , and Ming Zhou . 2020 . Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers . Advances in Neural Information Processing Systems , Vol. 33 (2020), 5776 -- 5788 . Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, Vol. 33 (2020), 5776--5788.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_38_1","volume-title":"2023 a. Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. arXiv preprint arXiv:2304.04339","author":"Wang Zengzhi","year":"2023","unstructured":"Zengzhi Wang , Qiming Xie , Zixiang Ding , Yi Feng , and Rui Xia . 2023 a. Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. arXiv preprint arXiv:2304.04339 ( 2023 ). Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. 2023 a. Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. arXiv preprint arXiv:2304.04339 (2023)."},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"crossref","unstructured":"Thomas Wolf Lysandre Debut Victor Sanh Julien Chaumond Clement Delangue Anthony Moi Pierric Cistac Tim Rault R\u00e9mi Louf Morgan Funtowicz etal 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019). 
Thomas Wolf Lysandre Debut Victor Sanh Julien Chaumond Clement Delangue Anthony Moi Pierric Cistac Tim Rault R\u00e9mi Louf Morgan Funtowicz et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019).","DOI":"10.18653\/v1\/2020.emnlp-demos.6"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1237"},{"key":"e_1_3_2_1_41_1","volume-title":"How would Stance Detection Techniques Evolve after the Launch of ChatGPT? arXiv preprint arXiv:2212.14548","author":"Zhang Bowen","year":"2022","unstructured":"Bowen Zhang , Daijun Ding , and Liwen Jing . 2022. How would Stance Detection Techniques Evolve after the Launch of ChatGPT? arXiv preprint arXiv:2212.14548 ( 2022 ). Bowen Zhang, Daijun Ding, and Liwen Jing. 2022. How would Stance Detection Techniques Evolve after the Launch of ChatGPT? arXiv preprint arXiv:2212.14548 (2022)."},{"key":"e_1_3_2_1_42_1","volume-title":"Extractive Summarization via ChatGPT for Faithful Summary Generation. arXiv preprint arXiv:2304.04193","author":"Zhang Haopeng","year":"2023","unstructured":"Haopeng Zhang , Xiao Liu , and Jiawei Zhang . 2023. Extractive Summarization via ChatGPT for Faithful Summary Generation. arXiv preprint arXiv:2304.04193 ( 2023 ). Haopeng Zhang, Xiao Liu, and Jiawei Zhang. 2023. Extractive Summarization via ChatGPT for Faithful Summary Generation. 
arXiv preprint arXiv:2304.04193 (2023)."},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-72240-1_49"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462922"}],"event":{"name":"CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Birmingham United Kingdom","acronym":"CIKM '23"},"container-title":["Proceedings of the 32nd ACM International Conference on Information and Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583780.3615111","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3583780.3615111","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:42Z","timestamp":1750178202000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583780.3615111"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,21]]},"references-count":44,"alternative-id":["10.1145\/3583780.3615111","10.1145\/3583780"],"URL":"https:\/\/doi.org\/10.1145\/3583780.3615111","relation":{},"subject":[],"published":{"date-parts":[[2023,10,21]]},"assertion":[{"value":"2023-10-21","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}