{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,2]],"date-time":"2026-01-02T07:36:00Z","timestamp":1767339360834,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":68,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,10,21]],"date-time":"2023-10-21T00:00:00Z","timestamp":1697846400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Quan Cheng Laboratory","award":["QCLZD202301"],"award-info":[{"award-number":["QCLZD202301"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,10,21]]},"DOI":"10.1145\/3583780.3614923","type":"proceedings-article","created":{"date-parts":[[2023,10,21]],"date-time":"2023-10-21T07:45:26Z","timestamp":1697874326000},"page":"441-451","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":8,"title":["I3 Retriever: Incorporating Implicit Interaction in Pre-trained Language Models for Passage Retrieval"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6858-5303","authenticated-orcid":false,"given":"Qian","family":"Dong","sequence":"first","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6857-261X","authenticated-orcid":false,"given":"Yiding","family":"Liu","sequence":"additional","affiliation":[{"name":"Baidu Inc., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5030-709X","authenticated-orcid":false,"given":"Qingyao","family":"Ai","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-8766-8610","authenticated-orcid":false,"given":"Haitao","family":"Li","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9212-1947","authenticated-orcid":false,"given":"Shuaiqiang","family":"Wang","sequence":"additional","affiliation":[{"name":"Baidu Inc., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0140-4512","authenticated-orcid":false,"given":"Yiqun","family":"Liu","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0684-6205","authenticated-orcid":false,"given":"Dawei","family":"Yin","sequence":"additional","affiliation":[{"name":"Baidu Inc., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8762-8268","authenticated-orcid":false,"given":"Shaoping","family":"Ma","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2023,10,21]]},"reference":[
{"key":"e_1_3_2_1_1_1","volume-title":"InPars: Data Augmentation for Information Retrieval using Large Language Models. arXiv preprint arXiv:2202.05144","author":"Bonifacio Luiz","year":"2022","unstructured":"Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. InPars: Data Augmentation for Information Retrieval using Large Language Models. arXiv preprint arXiv:2202.05144 (2022)."},
{"key":"e_1_3_2_1_2_1","volume-title":"Pre-training tasks for embedding-based large-scale retrieval. arXiv preprint arXiv:2002.03932","author":"Chang Wei-Cheng","year":"2020","unstructured":"Wei-Cheng Chang, Felix X Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. arXiv preprint arXiv:2002.03932 (2020)."},
{"key":"e_1_3_2_1_3_1","volume-title":"Layout-aware Webpage Quality Assessment. arXiv preprint arXiv:2301.12152","author":"Cheng Anfeng","year":"2023","unstructured":"Anfeng Cheng, Yiding Liu, Weibin Li, Qian Dong, Shuaiqiang Wang, Zhengjie Huang, Shikun Feng, Zhicong Cheng, and Dawei Yin. 2023. Layout-aware Webpage Quality Assessment. arXiv preprint arXiv:2301.12152 (2023)."},
{"key":"e_1_3_2_1_4_1","unstructured":"Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416 (2022)."},
{"key":"e_1_3_2_1_5_1","volume-title":"Overview of the trec 2019 deep learning track. arXiv preprint arXiv:2003.07820","author":"Craswell Nick","year":"2020","unstructured":"Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the trec 2019 deep learning track. arXiv preprint arXiv:2003.07820 (2020)."},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401204"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3159652.3159659"},
{"key":"e_1_3_2_1_8_1","volume-title":"Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755","author":"Dai Zhuyun","year":"2022","unstructured":"Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755 (2022)."},
{"key":"e_1_3_2_1_9_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},
{"key":"e_1_3_2_1_10_1","volume-title":"Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking. arXiv preprint arXiv:2204.11673","author":"Dong Qian","year":"2022","unstructured":"Qian Dong, Yiding Liu, Suqi Cheng, Shuaiqiang Wang, Zhicong Cheng, Shuzi Niu, and Dawei Yin. 2022a. Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking. arXiv preprint arXiv:2204.11673 (2022)."},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-73197-7_6"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3462931"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/s41019-022-00179-3"},
{"key":"e_1_3_2_1_14_1","volume-title":"SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv preprint arXiv:2109.10086","author":"Formal Thibault","year":"2021","unstructured":"Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and St\u00e9phane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv preprint arXiv:2109.10086 (2021)."},
{"key":"e_1_3_2_1_15_1","volume-title":"Unsupervised corpus aware language model pre-training for dense passage retrieval. arXiv preprint arXiv:2108.05540","author":"Gao Luyu","year":"2021","unstructured":"Luyu Gao and Jamie Callan. 2021. Unsupervised corpus aware language model pre-training for dense passage retrieval. arXiv preprint arXiv:2108.05540 (2021)."},
{"key":"e_1_3_2_1_16_1","volume-title":"COIL: Revisit exact lexical match in information retrieval with contextualized inverted list. arXiv preprint arXiv:2104.07186","author":"Gao Luyu","year":"2021","unstructured":"Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. COIL: Revisit exact lexical match in information retrieval with contextualized inverted list. arXiv preprint arXiv:2104.07186 (2021)."},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2983323.2983769"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2019.102067"},
{"key":"e_1_3_2_1_19_1","volume-title":"Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 113--122","author":"Hofst\u00e4tter Sebastian","year":"2021","unstructured":"Sebastian Hofst\u00e4tter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 113--122."},
{"key":"e_1_3_2_1_20_1","volume-title":"Convolutional neural network architectures for matching natural language sentences. Advances in neural information processing systems","author":"Hu Baotian","year":"2014","unstructured":"Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. Advances in neural information processing systems, Vol. 27 (2014)."},
{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2505515.2505665"},
{"key":"e_1_3_2_1_22_1","volume-title":"Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969","author":"Humeau Samuel","year":"2019","unstructured":"Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969 (2019)."},
{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2019.2921572"},
{"key":"e_1_3_2_1_24_1","volume-title":"Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906","author":"Karpukhin Vladimir","year":"2020","unstructured":"Vladimir Karpukhin, Barlas O\u011fuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906 (2020)."},
{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401075"},
{"key":"e_1_3_2_1_26_1","volume-title":"Pretrained Language Model based Web Search Ranking: From Relevance to Satisfaction. arXiv preprint arXiv:2306.01599","author":"Li Canjia","year":"2023","unstructured":"Canjia Li, Xiaoyang Wang, Dongdong Li, Yiding Liu, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Simiu Gu, and Dawei Yin. 2023c. Pretrained Language Model based Web Search Ranking: From Relevance to Satisfaction. arXiv preprint arXiv:2306.01599 (2023)."},
{"key":"e_1_3_2_1_27_1","volume-title":"SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval. arXiv preprint arXiv:2304.11370","author":"Li Haitao","year":"2023","unstructured":"Haitao Li, Qingyao Ai, Jia Chen, Qian Dong, Yueyue Wu, Yiqun Liu, Chong Chen, and Qi Tian. 2023a. SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval. arXiv preprint arXiv:2304.11370 (2023)."},
{"key":"e_1_3_2_1_28_1","volume-title":"Constructing Tree-based Index for Efficient and Effective Dense Retrieval. arXiv preprint arXiv:2304.11943","author":"Li Haitao","year":"2023","unstructured":"Haitao Li, Qingyao Ai, Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Zheng Liu, and Zhao Cao. 2023b. Constructing Tree-based Index for Efficient and Effective Dense Retrieval. arXiv preprint arXiv:2304.11943 (2023)."},
{"key":"e_1_3_2_1_29_1","volume-title":"Learning Diverse Document Representations with Deep Query Interactions for Dense Retrieval. arXiv preprint arXiv:2208.04232","author":"Li Zehan","year":"2022","unstructured":"Zehan Li, Nan Yang, Liang Wang, and Furu Wei. 2022. Learning Diverse Document Representations with Deep Query Interactions for Dense Retrieval. arXiv preprint arXiv:2208.04232 (2022)."},
{"key":"e_1_3_2_1_30_1","volume-title":"Embedding-based zero-shot retrieval through query generation. arXiv preprint arXiv:2009.10270","author":"Liang Davis","year":"2020","unstructured":"Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Embedding-based zero-shot retrieval through query generation. arXiv preprint arXiv:2009.10270 (2020)."},
{"key":"e_1_3_2_1_31_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)."},
{"key":"e_1_3_2_1_32_1","volume-title":"Retromae: Pre-training retrieval-oriented transformers via masked auto-encoder. arXiv preprint arXiv:2205.12035","author":"Liu Zheng","year":"2022","unstructured":"Zheng Liu and Yingxia Shao. 2022. Retromae: Pre-training retrieval-oriented transformers via masked auto-encoder. arXiv preprint arXiv:2205.12035 (2022)."},
{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.220"},
{"key":"e_1_3_2_1_34_1","volume-title":"Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. arXiv preprint arXiv:2205.09153","author":"Lu Yuxiang","year":"2022","unstructured":"Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, et al. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. arXiv preprint arXiv:2205.09153 (2022)."},
{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00369"},
{"key":"e_1_3_2_1_36_1","volume-title":"Zero-shot neural passage retrieval via domain-targeted synthetic question generation. arXiv preprint arXiv:2004.14503","author":"Ma Ji","year":"2020","unstructured":"Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural passage retrieval via domain-targeted synthetic question generation. arXiv preprint arXiv:2004.14503 (2020)."},
{"key":"e_1_3_2_1_37_1","volume-title":"Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction. arXiv preprint arXiv:2204.10641","author":"Ma Xinyu","year":"2022","unstructured":"Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, and Xueqi Cheng. 2022. Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction. arXiv preprint arXiv:2204.10641 (2022)."},
{"key":"e_1_3_2_1_38_1","volume-title":"Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs","author":"Malkov Yu A","year":"2018","unstructured":"Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, Vol. 42, 4 (2018), 824--836."},
{"key":"e_1_3_2_1_39_1","volume-title":"MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS.","author":"Nguyen Tri","year":"2016","unstructured":"Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS."},
{"key":"e_1_3_2_1_40_1","volume-title":"Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085","author":"Nogueira Rodrigo","year":"2019","unstructured":"Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085 (2019)."},
{"key":"e_1_3_2_1_41_1","volume-title":"From doc2query to docTTTTTquery. Online preprint","author":"Nogueira Rodrigo","year":"2019","unstructured":"Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. 2019a. From doc2query to docTTTTTquery. Online preprint, Vol. 6 (2019)."},
{"key":"e_1_3_2_1_42_1","volume-title":"Multi-stage document ranking with bert. arXiv preprint arXiv:1910.14424","author":"Nogueira Rodrigo","year":"2019","unstructured":"Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019b. Multi-stage document ranking with bert. arXiv preprint arXiv:1910.14424 (2019)."},
{"key":"e_1_3_2_1_43_1","volume-title":"Document expansion by query prediction. arXiv preprint arXiv:1904.08375","author":"Nogueira Rodrigo","year":"2019","unstructured":"Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019c. Document expansion by query prediction. arXiv preprint arXiv:1904.08375 (2019)."},
{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2016.2520371"},
{"key":"e_1_3_2_1_45_1","unstructured":"Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. In Twenty-Fourth international joint conference on artificial intelligence."},
{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.naacl-main.466"},
{"key":"e_1_3_2_1_47_1","unstructured":"Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018)."},
{"key":"e_1_3_2_1_48_1","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., Vol. 21, 140 (2020), 1--67.","journal-title":"J. Mach. Learn. Res."},
{"key":"e_1_3_2_1_49_1","volume-title":"RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking. arXiv preprint arXiv:2110.07367","author":"Ren Ruiyang","year":"2021","unstructured":"Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking. arXiv preprint arXiv:2110.07367 (2021)."},
{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1561\/1500000019"},
{"key":"e_1_3_2_1_51_1","volume-title":"Automatic keyword extraction from individual documents. Text mining: applications and theory","author":"Rose Stuart","year":"2010","unstructured":"Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text mining: applications and theory (2010), 1--20."},
{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.naacl-main.272"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661829.2661935"},
{"key":"e_1_3_2_1_54_1","volume-title":"Improving document representations by generating pseudo query embeddings for dense retrieval. arXiv preprint arXiv:2105.03599","author":"Tang Hongyin","year":"2021","unstructured":"Hongyin Tang, Xingwu Sun, Beihong Jin, Jingang Wang, Fuzheng Zhang, and Wei Wu. 2021. Improving document representations by generating pseudo query embeddings for dense retrieval. arXiv preprint arXiv:2105.03599 (2021)."},
{"key":"e_1_3_2_1_55_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998--6008."},
{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v30i1.10342"},
{"key":"e_1_3_2_1_57_1","volume-title":"Gpl: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. arXiv preprint arXiv:2112.07577","author":"Wang Kexin","year":"2021","unstructured":"Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2021. Gpl: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. arXiv preprint arXiv:2112.07577 (2021)."},
{"key":"e_1_3_2_1_58_1","volume-title":"Simlm: Pre-training with representation bottleneck for dense passage retrieval. arXiv preprint arXiv:2207.02578","author":"Wang Liang","year":"2022","unstructured":"Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Simlm: Pre-training with representation bottleneck for dense passage retrieval. arXiv preprint arXiv:2207.02578 (2022)."},
{"key":"e_1_3_2_1_59_1","volume-title":"Query-as-context Pre-training for Dense Passage Retrieval. arXiv preprint arXiv:2212.09598","author":"Wu Xing","year":"2022","unstructured":"Xing Wu, Guangyuan Ma, and Songlin Hu. 2022a. Query-as-context Pre-training for Dense Passage Retrieval. arXiv preprint arXiv:2212.09598 (2022)."},
{"key":"e_1_3_2_1_60_1","volume-title":"Contextual mask auto-encoder for dense passage retrieval. arXiv preprint arXiv:2208.07670","author":"Wu Xing","year":"2022","unstructured":"Xing Wu, Guangyuan Ma, Meng Lin, Zijia Lin, Zhongyuan Wang, and Songlin Hu. 2022b. Contextual mask auto-encoder for dense passage retrieval. arXiv preprint arXiv:2208.07670 (2022)."},
{"key":"e_1_3_2_1_61_1","volume-title":"Social4Rec: Distilling User Preference from Social Graph for Video Recommendation in Tencent. arXiv preprint arXiv:2302.09971","author":"Xiao Xuanji","year":"2023","unstructured":"Xuanji Xiao, Huaqiang Dai, Qian Dong, Shuzi Niu, Yuzhen Liu, and Pei Liu. 2023. Social4Rec: Distilling User Preference from Social Graph for Video Recommendation in Tencent. arXiv preprint arXiv:2302.09971 (2023)."},
{"key":"e_1_3_2_1_62_1","unstructured":"Xiaohui Xie, Qian Dong, Bingning Wang, Feiyang Lv, Ting Yao, Weinan Gan, Zhijing Wu, Xiangsheng Li, Haitao Li, Yiqun Liu, et al. 2023. T2Ranking: A large-scale Chinese Benchmark for Passage Ranking. arXiv preprint arXiv:2304.03679 (2023)."},
{"key":"e_1_3_2_1_63_1","volume-title":"Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808","author":"Xiong Lee","year":"2020","unstructured":"Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808 (2020)."},
{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i5.16584"},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3488560.3498442"},
{"key":"e_1_3_2_1_66_1","volume-title":"Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962","author":"You Yang","year":"2019","unstructured":"Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962 (2019)."},
{"key":"e_1_3_2_1_67_1","volume-title":"MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers. arXiv preprint arXiv:2212.07841","author":"Zhou Kun","year":"2022","unstructured":"Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, and Ji-Rong Wen. 2022. MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers. arXiv preprint arXiv:2212.07841 (2022)."},
{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3568681"}],"event":{"name":"CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"],"location":"Birmingham United Kingdom","acronym":"CIKM '23"},"container-title":["Proceedings of the 32nd ACM International Conference on Information and Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583780.3614923","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3583780.3614923","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:36:44Z","timestamp":1750178204000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3583780.3614923"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,21]]},"references-count":68,"alternative-id":["10.1145\/3583780.3614923","10.1145\/3583780"],"URL":"https:\/\/doi.org\/10.1145\/3583780.3614923","relation":{},"subject":[],"published":{"date-parts":[[2023,10,21]]},"assertion":[{"value":"2023-10-21","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}