{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,30]],"date-time":"2025-12-30T09:01:20Z","timestamp":1767085280643,"version":"3.44.0"},"publisher-location":"New York, NY, USA","reference-count":56,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,8,3]]},"DOI":"10.1145\/3711896.3737015","type":"proceedings-article","created":{"date-parts":[[2025,8,3]],"date-time":"2025-08-03T21:04:26Z","timestamp":1754255066000},"page":"1470-1480","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["KnowTrace: Bootstrapping Iterative Retrieval-Augmented Generation with Structured Knowledge Tracing"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-0625-6802","authenticated-orcid":false,"given":"Rui","family":"Li","sequence":"first","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7578-2738","authenticated-orcid":false,"given":"Quanyu","family":"Dai","sequence":"additional","affiliation":[{"name":"Huawei Noah's Ark Lab, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0048-1687","authenticated-orcid":false,"given":"Zeyu","family":"Zhang","sequence":"additional","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0144-1775","authenticated-orcid":false,"given":"Xu","family":"Chen","sequence":"additional","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2231-4663","authenticated-orcid":false,"given":"Zhenhua","family":"Dong","sequence":"additional","affiliation":[{"name":"Huawei Noah's Ark Lab, Shenzhen, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9777-9676","authenticated-orcid":false,"given":"Ji-Rong","family":"Wen","sequence":"additional","affiliation":[{"name":"Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,8,3]]},"reference":[{"key":"e_1_3_2_2_1_1","volume-title":"On the relation between the natural logic of reasoning and standard logic. Psychological review","author":"Braine Martin D","year":"1978","unstructured":"Martin D Braine. 1978. On the relation between the natural logic of reasoning and standard logic. Psychological review, Vol. 85, 1 (1978), 1."},{"key":"e_1_3_2_2_2_1","first-page":"1877","article-title":"Language Models are Few-Shot Learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, Vol. 33. 1877-1901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_3_1","volume-title":"Information Re-Organization Improves Reasoning in Large Language Models. arXiv preprint arXiv:2404.13985","author":"Cheng Xiaoxia","year":"2024","unstructured":"Xiaoxia Cheng, Zeqi Tan, and Weiming Lu. 2024. Information Re-Organization Improves Reasoning in Large Language Models. arXiv preprint arXiv:2404.13985 (2024)."},{"key":"e_1_3_2_2_4_1","volume-title":"Interpretable AMR-based question decomposition for multi-hop question answering. 
arXiv preprint arXiv:2206.08486","author":"Deng Zhenyun","year":"2022","unstructured":"Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, and Patricia Riddle. 2022. Interpretable AMR-based question decomposition for multi-hop question answering. arXiv preprint arXiv:2206.08486 (2022)."},{"key":"e_1_3_2_2_5_1","unstructured":"Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Amy Yang Angela Fan et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 (2024)."},{"key":"e_1_3_2_2_6_1","volume-title":"From local to global: A graph rag approach to query-focused summarization. arXiv preprint arXiv:2404.16130","author":"Edge Darren","year":"2024","unstructured":"Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. 2024. From local to global: A graph rag approach to query-focused summarization. arXiv preprint arXiv:2404.16130 (2024)."},{"key":"e_1_3_2_2_7_1","volume-title":"Constructivism: Theory, perspectives, and practice","author":"Fosnot Catherine Twomey","year":"2013","unstructured":"Catherine Twomey Fosnot. 2013. Constructivism: Theory, perspectives, and practice. Teachers College Press."},{"key":"e_1_3_2_2_8_1","volume-title":"Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997","author":"Gao Yunfan","year":"2023","unstructured":"Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 (2023)."},{"volume-title":"Elasticsearch: the definitive guide: a distributed real-time search and analytics engine. O'Reilly Media","author":"Gormley Clinton","key":"e_1_3_2_2_9_1","unstructured":"Clinton Gormley and Zachary Tong. 2015. Elasticsearch: the definitive guide: a distributed real-time search and analytics engine. 
O'Reilly Media, Inc."},{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.580"},{"key":"e_1_3_2_2_11_1","volume-title":"V-STaR: Training Verifiers for Self-Taught Reasoners. In First Conference on Language Modeling.","author":"Hosseini Arian","year":"2024","unstructured":"Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. 2024. V-STaR: Training Verifiers for Self-Taught Reasoners. In First Conference on Language Modeling."},{"key":"e_1_3_2_2_12_1","volume-title":"LoRA: Low-Rank Adaptation of Large Language Models. In The Tenth International Conference on Learning Representations.","author":"Hu Edward J.","year":"2022","unstructured":"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In The Tenth International Conference on Learning Representations."},{"key":"e_1_3_2_2_13_1","volume-title":"Unsupervised Dense Information Retrieval with Contrastive Learning. Transactions on Machine Learning Research","author":"Izacard Gautier","year":"2022","unstructured":"Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised Dense Information Retrieval with Contrastive Learning. Transactions on Machine Learning Research (2022)."},{"key":"e_1_3_2_2_14_1","first-page":"1","article-title":"Atlas: Few-shot learning with retrieval augmented language models","volume":"24","author":"Izacard Gautier","year":"2023","unstructured":"Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. Journal of Machine Learning Research, Vol. 
24, 251 (2023), 1-43.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.550"},{"key":"e_1_3_2_2_17_1","volume-title":"The Twelfth International Conference on Learning Representations.","author":"Kim Jaehyung","year":"2024","unstructured":"Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, and Jinwoo Shin. 2024. SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs. In The Twelfth International Conference on Learning Representations."},{"key":"e_1_3_2_2_18_1","volume-title":"Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115","author":"Lazaridou Angeliki","year":"2022","unstructured":"Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115 (2022)."},{"key":"e_1_3_2_2_19_1","unstructured":"Patrick S. H. Lewis Ethan Perez Aleksandra Piktus Fabio Petroni Vladimir Karpukhin Naman Goyal Heinrich K\u00fcttler Mike Lewis Wen-tau Yih Tim Rockt\u00e4schel Sebastian Riedel and Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In Advances in Neural Information Processing Systems."},{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-emnlp.452"},{"key":"e_1_3_2_2_21_1","first-page":"28040","volume-title":"Proceedings of the 41st International Conference on Machine Learning","volume":"235","author":"Li Rui","year":"2024","unstructured":"Rui Li, Chaozhuo Li, Yanming Shen, Zeyu Zhang, and Xu Chen. 2024. Generalizing Knowledge Graph Embedding with Universal Orthogonal Parameterization. 
In Proceedings of the 41st International Conference on Machine Learning, Vol. 235. 28040-28059."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.476"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-long.546"},{"key":"e_1_3_2_2_24_1","volume-title":"Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks. arXiv preprint arXiv:2306.04009","author":"Misra Kanishka","year":"2023","unstructured":"Kanishka Misra, Cicero Nogueira dos Santos, and Siamak Shakeri. 2023. Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks. arXiv preprint arXiv:2306.04009 (2023)."},{"key":"e_1_3_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/354756.354805"},{"key":"e_1_3_2_2_26_1","volume-title":"Introducing chatgpt. https:\/\/openai.com\/blog\/chatgpt","author":"OpenAI","year":"2022","unstructured":"OpenAI. 2022. Introducing chatgpt. https:\/\/openai.com\/blog\/chatgpt (2022)."},{"volume-title":"Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 13263-13282","author":"Panda Pranoy","key":"e_1_3_2_2_27_1","unstructured":"Pranoy Panda, Ankush Agarwal, Chaitanya Devaguptapu, Manohar Kaul, and Prathosh A P. 2024. HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 13263-13282."},{"key":"e_1_3_2_2_28_1","volume-title":"Graph-guided reasoning for multi-hop question answering in large language models. arXiv e-prints arXiv:2311.09762","author":"Park Jinyoung","year":"2023","unstructured":"Jinyoung Park, Ameen Patel, Omar Zia Khan, Hyunwoo J Kim, and Joo-Kyung Kim. 2023. Graph-guided reasoning for multi-hop question answering in large language models. arXiv e-prints arXiv:2311.09762 (2023)."},{"key":"e_1_3_2_2_29_1","volume-title":"Graph Retrieval-Augmented Generation: A Survey. 
arXiv preprint arXiv:2408.08921","author":"Peng Boci","year":"2024","unstructured":"Boci Peng, Yun Zhu, Yongchao Liu, Xiaohe Bo, Haizhou Shi, Chuntao Hong, Yan Zhang, and Siliang Tang. 2024. Graph Retrieval-Augmented Generation: A Survey. arXiv preprint arXiv:2408.08921 (2024)."},{"key":"e_1_3_2_2_30_1","volume-title":"Unsupervised question decomposition for question answering. arXiv preprint arXiv:2002.09758","author":"Perez Ethan","year":"2020","unstructured":"Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. 2020. Unsupervised question decomposition for question answering. arXiv preprint arXiv:2002.09758 (2020)."},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-emnlp.378"},{"key":"e_1_3_2_2_32_1","unstructured":"Peng Qi Haejun Lee Oghenetegiri Sido Christopher D Manning et al. 2020. Answering open-domain questions of varying reasoning steps from text. arXiv preprint arXiv:2010.12527 (2020)."},{"key":"e_1_3_2_2_33_1","volume-title":"The effect of sampling temperature on problem solving in large language models. arXiv preprint arXiv:2402.05201","author":"Renze Matthew","year":"2024","unstructured":"Matthew Renze and Erhan Guven. 2024. The effect of sampling temperature on problem solving in large language models. arXiv preprint arXiv:2402.05201 (2024)."},{"key":"e_1_3_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1561\/1500000019"},{"key":"e_1_3_2_2_35_1","volume-title":"HybridRAG: Integrating Knowledge Graphs and Vector Retrieval Augmented Generation for Efficient Information Extraction. arXiv preprint arXiv:2408.04948","author":"Sarmah Bhaskarjit","year":"2024","unstructured":"Bhaskarjit Sarmah, Benika Hall, Rohan Rao, Sunil Patel, Stefano Pasquali, and Dhagash Mehta. 2024. HybridRAG: Integrating Knowledge Graphs and Vector Retrieval Augmented Generation for Efficient Information Extraction. 
arXiv preprint arXiv:2408.04948 (2024)."},{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-emnlp.620"},{"key":"e_1_3_2_2_37_1","first-page":"31210","volume-title":"International Conference on Machine Learning","volume":"202","author":"Shi Freda","year":"2023","unstructured":"Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Sch\u00e4rli, and Denny Zhou. 2023. Large Language Models Can Be Easily Distracted by Irrelevant Context. In International Conference on Machine Learning, Vol. 202. 31210-31227."},{"key":"e_1_3_2_2_38_1","unstructured":"Avi Singh John D. Co-Reyes Rishabh Agarwal Ankesh Anand Piyush Patil Xavier Garcia Peter J. Liu James Harrison Jaehoon Lee Kelvin Xu Aaron T. Parisi Abhishek Kumar Alexander A. Alemi Alex Rizkowsky Azade Nova Ben Adlam Bernd Bohnet Gamaleldin Fathy Elsayed Hanie Sedghi Igor Mordatch Isabelle Simpson Izzeddin Gur Jasper Snoek Jeffrey Pennington Jiri Hron Kathleen Kenealy Kevin Swersky Kshiteej Mahajan Laura Culp Lechao Xiao Maxwell L. Bileschi Noah Constant Roman Novak Rosanne Liu Tris Warkentin Yundi Qian Yamini Bansal Ethan Dyer Behnam Neyshabur Jascha Sohl-Dickstein and Noah Fiedel. 2024. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Transactions on Machine Learning Research (2024)."},{"key":"e_1_3_2_2_39_1","volume-title":"Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning. arXiv preprint arXiv:2311.08505","author":"Su Xin","year":"2023","unstructured":"Xin Su, Tiep Le, Steven Bethard, and Phillip Howard. 2023. Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning. arXiv preprint arXiv:2311.08505 (2023)."},{"key":"e_1_3_2_2_40_1","volume-title":"BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. 
In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).","author":"Thakur Nandan","year":"2021","unstructured":"Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)."},{"key":"e_1_3_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00475"},{"key":"e_1_3_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-long.557"},{"key":"e_1_3_2_2_43_1","volume-title":"Unifying structure reasoning and language model pre-training for complex reasoning. arXiv preprint arXiv:2301.08913","author":"Wang Siyuan","year":"2023","unstructured":"Siyuan Wang, Zhongyu Wei, Jiarong Xu, Taishan Li, and Zhihao Fan. 2023. Unifying structure reasoning and language model pre-training for complex reasoning. arXiv preprint arXiv:2301.08913 (2023)."},{"key":"e_1_3_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v28i1.8870"},{"key":"e_1_3_2_2_45_1","volume-title":"Advances in Neural Information Processing Systems","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems."},{"key":"e_1_3_2_2_46_1","volume-title":"The Thirteenth International Conference on Learning Representations.","author":"Wei Zhepei","year":"2025","unstructured":"Zhepei Wei, Wei-Lin Chen, and Yu Meng. 2025. InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales. In The Thirteenth International Conference on Learning Representations."},{"key":"e_1_3_2_2_47_1","volume-title":"Neural Text Generation With Unlikelihood Training. 
In 8th International Conference on Learning Representations.","author":"Welleck Sean","year":"2020","unstructured":"Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural Text Generation With Unlikelihood Training. In 8th International Conference on Learning Representations."},{"key":"e_1_3_2_2_48_1","volume-title":"Forty-first International Conference on Machine Learning.","author":"Xi Zhiheng","year":"2024","unstructured":"Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, Wei He, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, and Xuanjing Huang. 2024. Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning. In Forty-first International Conference on Machine Learning."},{"key":"e_1_3_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657760"},{"volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2369-2380","author":"Yang Zhilin","key":"e_1_3_2_2_50_1","unstructured":"Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2369-2380."},{"key":"e_1_3_2_2_51_1","volume-title":"ReAct: Synergizing Reasoning and Acting in Language Models. In The Eleventh International Conference on Learning Representations.","author":"Yao Shunyu","year":"2023","unstructured":"Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. 
In The Eleventh International Conference on Learning Representations."},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.364"},{"key":"e_1_3_2_2_53_1","volume-title":"Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825","author":"Yuan Zheng","year":"2023","unstructured":"Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825 (2023)."},{"key":"e_1_3_2_2_54_1","volume-title":"Advances in Neural Information Processing Systems","author":"Zelikman Eric","year":"2022","unstructured":"Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. STaR: Bootstrapping Reasoning With Reasoning. In Advances in Neural Information Processing Systems."},{"key":"e_1_3_2_2_55_1","unstructured":"Yue Zhang Yafu Li Leyang Cui Deng Cai Lemao Liu Tingchen Fu Xinting Huang Enbo Zhao Yu Zhang Yulong Chen et al. 2023. Siren's song in the AI ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219 (2023)."},{"key":"e_1_3_2_2_56_1","volume-title":"Retrieving and reading: A comprehensive survey on open-domain question answering. arXiv preprint arXiv:2101.00774","author":"Zhu Fengbin","year":"2021","unstructured":"Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering. 
arXiv preprint arXiv:2101.00774 (2021)."}],"event":{"name":"KDD '25: The 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"],"location":"Toronto ON Canada","acronym":"KDD '25"},"container-title":["Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3711896.3737015","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,16]],"date-time":"2025-08-16T14:45:45Z","timestamp":1755355545000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3711896.3737015"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,3]]},"references-count":56,"alternative-id":["10.1145\/3711896.3737015","10.1145\/3711896"],"URL":"https:\/\/doi.org\/10.1145\/3711896.3737015","relation":{},"subject":[],"published":{"date-parts":[[2025,8,3]]},"assertion":[{"value":"2025-08-03","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}