{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,29]],"date-time":"2025-08-29T00:03:07Z","timestamp":1756425787030,"version":"3.44.0"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686165","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,8,26]],"date-time":"2025-08-26T00:00:00Z","timestamp":1756166400000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,8,26]]},"abstract":"<jats:p>Knowledge graphs (KGs) are a key tool across fields like web search, healthcare, and technical assistance but are known to be notoriously incomplete. This paper addresses open-world knowledge graph completion (OW-KGC), which involves linking newly emerging entities \u2013 which are mentioned in text but not yet present in the graph \u2013 to a KG. We investigate the potential of off-the-shelf LLMs to tackle this task. Here, a central challenge is to constrain the LLM to the candidate space of existing entities available for linking. To address this, we propose a two-stage pipeline: A lightweight weight pre-ranker fine-tuned on the particular task narrows down the candidate entities, which are then re-ranked by the LLM. We evaluate two recent open-weight LLMs, namely Meta-LLAMA3 Instruct (70B) and DeepSeek-R1-Distill-LLAMA (70B), on public OW-KGC benchmarks for the KGs Freebase (FB15k237), WordNet (WN18RR), and Wikidata (IRT2). We test two variants of the pipeline: one where the LLM is applied as a strict re-ranker, and another where the LLM is allowed to provide additional suggestions. Our experimental results show that, while LLMs used in isolation perform poorly, they improve performance substantially when applied as re-rankers, achieving new state-of-the-art results across all datasets. Notably, we observe that DeepSeek, which leverages an explicit self-reflection mechanism before producing results, outperforms LLAMA significantly when re-ranking and measuring MRR. 
However, LLAMA, in contrast to DeepSeek, is considerably better at accessing and retrieving correct answers from its internalized knowledge.<\/jats:p>","DOI":"10.3233\/ssw250010","type":"book-chapter","created":{"date-parts":[[2025,8,28]],"date-time":"2025-08-28T08:05:40Z","timestamp":1756368340000},"source":"Crossref","is-referenced-by-count":0,"title":["Re-Ranking with LLMs for Open-World Knowledge Graph Completion"],"prefix":"10.3233","author":[{"given":"Felix","family":"Hamann","sequence":"first","affiliation":[{"name":"RheinMain University of Applied Sciences"}]},{"given":"Lukas","family":"Walker","sequence":"additional","affiliation":[{"name":"RheinMain University of Applied Sciences"}]},{"given":"Adrian","family":"Ulges","sequence":"additional","affiliation":[{"name":"RheinMain University of Applied Sciences"}]}],"member":"7437","container-title":["Studies on the Semantic Web","Linking Meaning: Semantic Technologies Shaping the Future of AI"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/SSW250010","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,28]],"date-time":"2025-08-28T08:05:40Z","timestamp":1756368340000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/SSW250010"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,26]]},"ISBN":["9781643686165"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/ssw250010","relation":{},"ISSN":["1868-1158","2215-0870"],"issn-type":[{"value":"1868-1158","type":"print"},{"value":"2215-0870","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,26]]}}}