{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T05:37:26Z","timestamp":1772861846882,"version":"3.50.1"},"reference-count":83,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,4,10]],"date-time":"2023-04-10T00:00:00Z","timestamp":1681084800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Grain Research and Development Corporation project AgAsk","award":["UOQ2003-009RTX"],"award-info":[{"award-number":["UOQ2003-009RTX"]}]},{"name":"Australian Research Council DECRA Research Fellowship","award":["DE180101579"],"award-info":[{"award-number":["DE180101579"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Inf. Syst."],"published-print":{"date-parts":[[2023,7,31]]},"abstract":"<jats:p>Pseudo Relevance Feedback (PRF) is known to improve the effectiveness of bag-of-words retrievers. At the same time, deep language models have been shown to outperform traditional bag-of-words rerankers. However, it is unclear how to integrate PRF directly with emergent deep language models. This article addresses this gap by investigating methods for integrating PRF signals with rerankers and dense retrievers based on deep language models. We consider text-based, vector-based and hybrid PRF approaches and investigate different ways of combining and scoring relevance signals. An extensive empirical evaluation was conducted across four different datasets and two task settings (retrieval and ranking).<\/jats:p>\n          <jats:p>\n            <jats:italic>Text-based PRF<\/jats:italic>\n            results show that the use of PRF had a mixed effect on deep rerankers across different datasets. 
We found that the best effectiveness was achieved when (i) directly concatenating each PRF passage with the query, searching with the new set of queries, and then aggregating the scores; (ii) using Borda to aggregate scores from PRF runs.\n          <\/jats:p>\n          <jats:p>\n            <jats:italic>Vector-based PRF<\/jats:italic>\n            results show that the use of PRF enhanced the effectiveness of deep rerankers and dense retrievers over several evaluation metrics. We found that higher effectiveness was achieved when (i) the query retains either the majority or the same weight within the PRF mechanism, and (ii) a shallower PRF signal (i.e., a smaller number of top-ranked passages) was employed, rather than a deeper signal. Our vector-based PRF method is computationally efficient; thus, this represents a general PRF method others can use with deep rerankers and dense retrievers.\n          <\/jats:p>","DOI":"10.1145\/3570724","type":"journal-article","created":{"date-parts":[[2023,1,10]],"date-time":"2023-01-10T12:25:19Z","timestamp":1673353519000},"page":"1-40","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls"],"prefix":"10.1145","volume":"41","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5317-7227","authenticated-orcid":false,"given":"Hang","family":"Li","sequence":"first","affiliation":[{"name":"IElab, The University of Queensland, Queensland, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9423-9404","authenticated-orcid":false,"given":"Ahmed","family":"Mourad","sequence":"additional","affiliation":[{"name":"IElab, The University of Queensland, Queensland, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6711-0955","authenticated-orcid":false,"given":"Shengyao","family":"Zhuang","sequence":"additional","affiliation":[{"name":"IElab, The University of Queensland, 
Queensland, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5577-3391","authenticated-orcid":false,"given":"Bevan","family":"Koopman","sequence":"additional","affiliation":[{"name":"Australian E-Health Research Centre, CSIRO, Queensland, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0271-5563","authenticated-orcid":false,"given":"Guido","family":"Zuccon","sequence":"additional","affiliation":[{"name":"IElab, The University of Queensland, Queensland, Australia"}]}],"member":"320","published-online":{"date-parts":[[2023,4,10]]},"reference":[{"key":"e_1_3_2_2_2","first-page":"189","article-title":"UMass at TREC 2004: Novelty and HARD","author":"Abdul-Jaleel Nasreen","year":"2004","unstructured":"Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Mark D. Smucker, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the 13th Text REtrieval Conference.Computer Science Department Faculty Publication Series, 189.","journal-title":"Computer Science Department Faculty Publication Series"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482159"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/383952.384007"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2019.05.009"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/1390334.1390377"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/2499178.2499179"},{"key":"e_1_3_2_8_2","volume-title":"Proceedings of the Text REtrieval Conference","author":"Craswell Nick","year":"2020","unstructured":"Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. 
In Proceedings of the Text REtrieval Conference."},{"key":"e_1_3_2_9_2","volume-title":"Proceedings of the Text REtrieval Conference","author":"Craswell Nick","year":"2021","unstructured":"Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2021. Overview of the TREC 2020 deep learning track. In Proceedings of the Text REtrieval Conference."},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331303"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1285"},{"key":"e_1_3_2_12_2","first-page":"290","volume-title":"Proceedings of the European Conference on Information Retrieval","author":"Dalton Jeffrey","year":"2019","unstructured":"Jeffrey Dalton, Shahrzad Naseri, Laura Dietz, and James Allan. 2019. Local and global query expansion for hierarchical complex topics. In Proceedings of the European Conference on Information Retrieval. Springer, 290\u2013303."},{"key":"e_1_3_2_13_2","volume-title":"Proceedings of the Text REtrieval Conference","author":"Dalton Jeffrey","year":"2020","unstructured":"Jeffrey Dalton, Chenyan Xiong, and Jamie Callan. 2020. TREC CAsT 2019: The conversational assistance track overview. In Proceedings of the Text REtrieval Conference."},{"key":"e_1_3_2_14_2","first-page":"4171","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
4171\u20134186."},{"key":"e_1_3_2_15_2","doi-asserted-by":"crossref","first-page":"367","DOI":"10.18653\/v1\/P16-1035","volume-title":"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Diaz Fernando","year":"2016","unstructured":"Fernando Diaz, Bhaskar Mitra, and Nick Craswell. 2016. Query expansion with locally-trained word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 367\u2013377."},{"key":"e_1_3_2_16_2","article-title":"CogLTX: Applying BERT to long texts","author":"Ding Ming","year":"2020","unstructured":"Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. CogLTX: Applying BERT to long texts. In Proceedings of the 34th International Conference on Neural Information Processing Systems.","journal-title":"Proceedings of the 34th International Conference on Neural Information Processing Systems"},{"key":"e_1_3_2_17_2","first-page":"1722","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing","author":"Santos Cicero dos","year":"2020","unstructured":"Cicero dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Beyond [CLS] through ranking by generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 1722\u20131727."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2019.102067"},{"key":"e_1_3_2_19_2","unstructured":"Sebastian Hofst\u00e4tter Sophia Althammer Michael Schr\u00f6der Mete Sertkan and Allan Hanbury. 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation. arXiv:2010.02666. Retrieved from https:\/\/arxiv.org\/abs\/2010.02666."},{"key":"e_1_3_2_20_2","doi-asserted-by":"crossref","unstructured":"Sebastian Hofst\u00e4tter Sheng-Chieh Lin Jheng-Hong Yang Jimmy Lin and Allan Hanbury. 2021. 
Efficiently teaching an effective dense retriever with balanced topic aware sampling. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 113\u2013122.","DOI":"10.1145\/3404835.3462891"},{"key":"e_1_3_2_21_2","first-page":"4163","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings","author":"Jiao Xiaoqi","year":"2020","unstructured":"Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings. 4163\u20134174."},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2019.2921572"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/2600428.2609485"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401075"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/2983323.2983876"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3130348.3130376"},{"key":"e_1_3_2_27_2","first-page":"4482","volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing","author":"Li Canjia","year":"2018","unstructured":"Canjia Li, Yingfei Sun, Ben He, Le Wang, Kai Hui, Andrew Yates, Le Sun, and Jungang Xu. 2018. NPRF: A neural pseudo relevance feedback framework for Ad-hoc information retrieval. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 
4482\u20134491."},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3531822"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3531884"},{"key":"e_1_3_2_30_2","volume-title":"Proceedings of the 44th European Conference on Information Retrieval","author":"Li Hang","year":"2022","unstructured":"Hang Li, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, and Guido Zuccon. 2022. Improving query representations for Dense retrieval with pseudo relevance feedback: A reproducibility study. In Proceedings of the 44th European Conference on Information Retrieval."},{"key":"e_1_3_2_31_2","doi-asserted-by":"crossref","unstructured":"Xiao Han Yuqi Liu and Jimmy Lin. 2021. The simplest thing that can possibly work: (pseudo-) relevance feedback via text classification. Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. 123\u2013129.","DOI":"10.1145\/3471158.3472261"},{"key":"e_1_3_2_32_2","doi-asserted-by":"crossref","unstructured":"Jimmy Lin Xueguang Ma Sheng-Chieh Lin Jheng-Hong Yang Ronak Pradeep and Rodrigo Nogueira. 2021. Pyserini: A python toolkit for reproducible information retrieval research with sparse and dense representations. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR\u201921).","DOI":"10.1145\/3404835.3463238"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.2200\/S01123ED1V01Y202108HLT053"},{"key":"e_1_3_2_34_2","unstructured":"Sheng-Chieh Lin Jheng-Hong Yang and Jimmy Lin. 2020. Distilling dense representations for ranking using tightly-coupled teachers. arXiv:2010.11386. Retrieved from https:\/\/arxiv.org\/abs\/2010.11386."},{"key":"e_1_3_2_35_2","first-page":"163","volume-title":"Proceedings of the 6th Workshop on Representation Learning for NLP","author":"Lin Sheng-Chieh","year":"2021","unstructured":"Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021. 
In-batch negatives for knowledge distillation with tightly-coupled teachers for dense retrieval. In Proceedings of the 6th Workshop on Representation Learning for NLP. 163\u2013173."},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.537"},{"key":"e_1_3_2_37_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Retrieved from https:\/\/arxiv.org\/abs\/1907.11692."},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/1645953.1646259"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/1835449.1835546"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/2661829.2661900"},{"issue":"1","key":"e_1_3_2_41_2","first-page":"105","article-title":"PaddlePaddle: An open-source deep learning platform from industrial practice","volume":"1","author":"Ma Yanjun","year":"2019","unstructured":"Yanjun Ma, Dianhai Yu, Tian Wu, and Haifeng Wang. 2019. PaddlePaddle: An open-source deep learning platform from industrial practice. Frontiers of Data and Computing 1, 1 (2019), 105\u2013115.","journal-title":"Frontiers of Data and Computing"},{"issue":"3","key":"e_1_3_2_42_2","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1007\/s10115-007-0105-3","article-title":"Voting techniques for expert search","volume":"16","author":"Macdonald Craig","year":"2008","unstructured":"Craig Macdonald and Iadh Ounis. 2008. Voting techniques for expert search. 
Knowledge and Information Systems 16, 3 (2008), 259\u2013280.","journal-title":"Knowledge and Information Systems"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3463262"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1002\/asi.4630280107"},{"key":"e_1_3_2_45_2","first-page":"4191","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing","author":"Mass Yosi","year":"2020","unstructured":"Yosi Mass and Haggai Roitman. 2020. Ad-hoc document retrieval using weak-supervision with BERT and GPT2. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 4191\u20134197."},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/2348283.2348356"},{"key":"e_1_3_2_47_2","first-page":"467","volume-title":"Proceedings of the 43rd European Conference on Information Retrieval","author":"Naseri Shahrzad","year":"2021","unstructured":"Shahrzad Naseri, Jeffrey Dalton, Andrew Yates, and James Allan. 2021. CEQE: Contextualized embeddings for query expansion. In Proceedings of the 43rd European Conference on Information Retrieval. 467\u2013482."},{"key":"e_1_3_2_48_2","volume-title":"Proceedings of the CIKM Workshops","author":"Naseri Shahrzad","year":"2018","unstructured":"Shahrzad Naseri, John Foley, James Allan, and Brendan T. O\u2019Connor. 2018. Exploring summary-expanded entity embeddings for entity retrieval. In Proceedings of the CIKM Workshops."},{"key":"e_1_3_2_49_2","volume-title":"Proceedings of the Workshop on Cognitive Computing at NIPS","author":"Nguyen Tri","year":"2016","unstructured":"Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computing at NIPS."},{"key":"e_1_3_2_50_2","unstructured":"Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. 
arXiv:1901.04085. Retrieved from https:\/\/arxiv.org\/abs\/1901.04085."},{"key":"e_1_3_2_51_2","volume-title":"Proceedings of the 42nd European Conference on IR Research","author":"Padaki Ramith","year":"2020","unstructured":"Ramith Padaki, Zhuyun Dai, and Jamie Callan. 2020. Rethinking query expansion for BERT reranking. In Proceedings of the 42nd European Conference on IR Research."},{"key":"e_1_3_2_52_2","unstructured":"Yingqi Qu Yuchen Ding Jing Liu Kai Liu Ruiyang Ren Xin Zhao Daxiang Dong Hua Wu and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies."},{"key":"e_1_3_2_53_2","volume-title":"OpenAI Blog","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9."},{"key":"e_1_3_2_54_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21, 140 (2020), 1\u201367.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_55_2","doi-asserted-by":"crossref","unstructured":"Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 
3982\u20133992.","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_3_2_56_2","unstructured":"Ruiyang Ren Yingqi Qu Jing Liu Wayne Xin Zhao Qiaoqiao She Hua Wu Haifeng Wang and Ji-Rong Wen. 2021. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2825\u20132835."},{"key":"e_1_3_2_57_2","doi-asserted-by":"publisher","DOI":"10.1561\/1500000019"},{"key":"e_1_3_2_58_2","unstructured":"J. J. Rocchio. 1971. Relevance feedback in information retrieval. The SMART Retrieval System: Experiments in Automatic Document Processing."},{"key":"e_1_3_2_59_2","article-title":"Using word embeddings for automatic query expansion","author":"Roy Dwaipayan","year":"2016","unstructured":"Dwaipayan Roy, Debjyoti Paul, Mandar Mitra, and Utpal Garain. 2016. Using word embeddings for automatic query expansion. In Proceedings of the SIGIR 2016 Workshop on Neural Information Retrieval.","journal-title":"Proceedings of the SIGIR 2016 Workshop on Neural Information Retrieval"},{"key":"e_1_3_2_60_2","doi-asserted-by":"publisher","DOI":"10.1145\/1148170.1148201"},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295349"},{"key":"e_1_3_2_62_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2020.102342"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2019.102182"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3471158.3472233"},{"key":"e_1_3_2_65_2","unstructured":"Lee Xiong Chenyan Xiong Ye Li Kwok-Fung Tang Jialin Liu Paul N. Bennett Junaid Ahmed and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. 
International Conference on Learning Representations."},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/1571941.1571954"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1145\/3239571"},{"key":"e_1_3_2_68_2","unstructured":"Wei Yang Haotian Zhang and Jimmy Lin. 2019. Simple applications of BERT for ad hoc document retrieval. arXiv:1903.10972. Retrieved from https:\/\/arxiv.org\/abs\/1903.10972."},{"key":"e_1_3_2_69_2","first-page":"5753","article-title":"XLNet: Generalized autoregressive pretraining for language understanding","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5753\u20135763.","journal-title":"Proceedings of the 33rd International Conference on Neural Information Processing Systems"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/3437963.3441667"},{"key":"e_1_3_2_71_2","first-page":"19","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing: System Demonstrations","author":"Yilmaz Zeynep Akkalyoncu","year":"2019","unstructured":"Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Applying BERT to document retrieval with Birch. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing: System Demonstrations. 
19\u201324."},{"key":"e_1_3_2_72_2","first-page":"3490","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing","author":"Yilmaz Zeynep Akkalyoncu","year":"2019","unstructured":"Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain modeling of sentence-level evidence for document retrieval. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 3490\u20133496."},{"key":"e_1_3_2_73_2","volume-title":"Proceedings of the European Conference on Information Retrieval","author":"Yu HongChien","year":"2021","unstructured":"HongChien Yu, Zhuyun Dai, and Jamie Callan. 2021. PGT: Pseudo relevance feedback using a graph-based transformer. In Proceedings of the European Conference on Information Retrieval."},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1145\/3459637.3482124"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/2970398.2970405"},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3077136.3080831"},{"key":"e_1_3_2_77_2","doi-asserted-by":"publisher","DOI":"10.1145\/2983323.2983844"},{"key":"e_1_3_2_78_2","first-page":"403","volume-title":"Proceedings of the 10th ACM International Conference on Information and Knowledge Management","author":"Zhai Chengxiang","year":"2001","unstructured":"Chengxiang Zhai and John Lafferty. 2001. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of the 10th ACM International Conference on Information and Knowledge Management. 403\u2013410."},{"key":"e_1_3_2_79_2","unstructured":"Jingtao Zhan Jiaxin Mao Yiqun Liu Min Zhang and Shaoping Ma. 2020. RepBERT: Contextualized text embeddings for first-stage retrieval. arXiv:2006.15498. 
Retrieved from https:\/\/arxiv.org\/abs\/2006.15498."},{"key":"e_1_3_2_80_2","first-page":"4718","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings","author":"Zheng Zhi","year":"2020","unstructured":"Zhi Zheng, Kai Hui, Ben He, Xianpei Han, Le Sun, and Andrew Yates. 2020. BERT-QE: Contextualized query expansion for document re-ranking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings. 4718\u20134728."},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2021.102672"},{"key":"e_1_3_2_82_2","volume-title":"Proceedings of the 43rd European Conference on Information Retrieval","author":"Zhuang Shengyao","year":"2021","unstructured":"Shengyao Zhuang, Hang Li, and Guido Zuccon. 2021. Deep query likelihood model for information retrieval. In Proceedings of the 43rd European Conference on Information Retrieval."},{"key":"e_1_3_2_83_2","volume-title":"Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval","author":"Zhuang Shengyao","year":"2022","unstructured":"Shengyao Zhuang, Hang Li, and Guido Zuccon. 2022. Implicit feedback for dense passage retrieval: A counterfactual approach. 
In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval."},{"key":"e_1_3_2_84_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462922"}],"container-title":["ACM Transactions on Information Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570724","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3570724","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:34Z","timestamp":1750182574000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570724"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,10]]},"references-count":83,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,7,31]]}},"alternative-id":["10.1145\/3570724"],"URL":"https:\/\/doi.org\/10.1145\/3570724","relation":{},"ISSN":["1046-8188","1558-2868"],"issn-type":[{"value":"1046-8188","type":"print"},{"value":"1558-2868","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,10]]},"assertion":[{"value":"2021-06-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-05-05","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-04-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}