{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T18:57:33Z","timestamp":1775156253656,"version":"3.50.1"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>The growing ubiquity of Retrieval-Augmented Generation (RAG) systems in several real-world services triggers severe concerns about their security. A RAG system improves the generative capabilities of a Large Language Model (LLM) by a retrieval mechanism that operates on a private knowledge base, whose unintended exposure could lead to severe consequences, including breaches of private and sensitive information. This paper presents a black-box attack to force a RAG system to leak its private knowledge base which, unlike existing approaches, is both adaptive and automatic. A relevance-based mechanism and an attacker-side open-source LLM favor the generation of effective queries to leak most of the (hidden) knowledge base. Extensive experimentation proves the quality of the proposed algorithm in different RAG pipelines and domains, compared to very recent related approaches, which turn out to be either not fully black-box, not adaptive, or not based on open-source models. The findings from our study highlight the urgent need for more robust privacy safeguards in the design and deployment of RAG systems. We have made the open-source code for our experimental procedure available for public use\u00a0[12].<\/jats:p>","DOI":"10.3233\/faia251293","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:57:13Z","timestamp":1761127033000},"source":"Crossref","is-referenced-by-count":2,"title":["Pirates of the RAG: Adaptively Attacking LLMs to Leak Knowledge Bases"],"prefix":"10.3233","author":[{"given":"Christian","family":"Di Maio","sequence":"first","affiliation":[{"name":"University of Pisa, Italy"}]},{"given":"Cristian","family":"Cosci","sequence":"additional","affiliation":[{"name":"Machine Learning Reply, Turin, Italy"}]},{"given":"Marco","family":"Maggini","sequence":"additional","affiliation":[{"name":"University of Siena, Italy"}]},{"given":"Valentina","family":"Poggioni","sequence":"additional","affiliation":[{"name":"University of Perugia, Italy"}]},{"given":"Stefano","family":"Melacci","sequence":"additional","affiliation":[{"name":"University of Siena, Italy"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA251293","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:57:13Z","timestamp":1761127033000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA251293"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia251293","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}