{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:37:55Z","timestamp":1761176275931,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>The well-known temporal misalignment in large language models (LLMs) emerges when they fail to recall temporal information. This is due to their training process, which happens without any explicit temporal grounding. To mitigate this issue, multiple approaches have been proposed, including fine-tuning on up-to-date data, retrieval augmented generation \u2013 where an LLM is directed to a recent dataset \u2013 or modifying an LLM\u2019s knowledge via knowledge editing. Regardless of the method, however, the question of building datasets that accurately and faithfully reflect changes to events or entities remains open. Doing this in free text form, and not only as triplets, is desirable because LLMs benefit downstream from more context and can capture more nuanced relationships and cascading knowledge updates. Resources like Wikipedia can be leveraged for this thanks to their revision histories, which are expressed in free text and are both less biased and more comprehensive than knowledge graphs like Wikidata. In this paper, we propose TACTICAL, a methodology for creating timelines of Wikipedia entities and events, represented as revision pairs extracted from a wikititle\u2019s timeline and categorized according to the atomicity of the changes affecting such entities or events. Our results suggest that LLMs struggle to recall event and entity timelines, even if they have seen them during pretraining. TACTICAL, on the other hand, proves to be an effective method for building temporally grounded datasets that are, in turn, effective tools for activating LLMs\u2019 temporal knowledge.<\/jats:p>","DOI":"10.3233\/faia251339","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:58:40Z","timestamp":1761127120000},"source":"Crossref","is-referenced-by-count":0,"title":["TACTICAL: A Framework for Building Wikipedia-Derived Timelines of Atomic Changes"],"prefix":"10.3233","author":[{"given":"Hsuvas","family":"Borkakoty","sequence":"first","affiliation":[{"name":"Cardiff NLP, School of Computer Science and Informatics, Cardiff University, UK"}]},{"given":"Luis","family":"Espinosa-Anke","sequence":"additional","affiliation":[{"name":"Cardiff NLP, School of Computer Science and Informatics, Cardiff University, UK"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA251339","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:58:41Z","timestamp":1761127121000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA251339"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia251339","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}