{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:22:57Z","timestamp":1773804177900,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"32","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Most state-of-the-art time series imputation methods can leverage textual information to improve imputation quality, but they often struggle because they fail to effectively filter noise from large language model (LLM)-derived text. Some existing solutions only filter over the entire token set, which can introduce erroneous conditional constraints, extreme token frequency effects and increased computational complexity. To address this, we propose CaT-Diff, a novel cascaded text-enhanced diffusion model for probabilistic imputation of multivariate time series under Missing Not At Random (MNAR) scenarios. To suppress irrelevant semantics and focus on the context most predictive of missing values, CaT-Diff introduces an innovative Hierarchical Semantic Filter (HSF) that collaborates with a Mixture-of-Experts (MoE) Network. The MoE projects heterogeneous text embeddings into the time series latent space, and the HSF cascade-filters text embeddings from the segment level to the token level, thereby avoiding the pitfalls of direct token-level filtering and reducing overhead. We also incorporate a lightweight Missing Mechanism Estimator, jointly optimized with the denoising network to explicitly capture MNAR missingness patterns. Extensive tests on nine domains show that CaT-Diff outperforms state-of-the-art baselines. Our work presents a new approach for selectively fusing LLM-derived textual information.<\/jats:p>",
"DOI":"10.1609\/aaai.v40i32.39937","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:15:50Z","timestamp":1773800150000},"page":"27215-27223","source":"Crossref","is-referenced-by-count":0,"title":["CaT-Diff: Cascaded Text-enhanced Diffusion Model for Time-Series Imputation"],"prefix":"10.1609","volume":"40","author":[{"given":"Changjian","family":"Xu","sequence":"first","affiliation":[]},{"given":"Yong","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Ruizheng","family":"Huang","sequence":"additional","affiliation":[]},{"given":"Zhicheng","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Wen","family":"Yin","sequence":"additional","affiliation":[]},{"given":"Kexin","family":"Li","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],
"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/39937\/43898","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/39937\/43898","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:15:50Z","timestamp":1773800150000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/39937"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"32","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i32.39937","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}