{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,24]],"date-time":"2025-09-24T00:14:52Z","timestamp":1758672892453,"version":"3.44.0"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Few-shot tabular learning, in which machine learning models are trained with a limited amount of labeled data, provides a cost-effective approach to addressing real-world challenges. The advent of Large Language Models (LLMs) has sparked interest in leveraging their pre-trained knowledge for few-shot tabular learning. Despite promising results, existing approaches either rely on test-time knowledge extraction, which introduces undesirable latency, or text-level knowledge, which leads to unreliable feature engineering. To overcome these limitations, we propose Latte, a training-time knowledge extraction framework that transfers the latent prior knowledge within LLMs to optimize a more generalized downstream model. Latte enables general knowledge-guided downstream tabular learning, facilitating the weighted fusion of information across different feature values while reducing the risk of overfitting to limited labeled data. Furthermore, Latte is compatible with existing unsupervised pre-training paradigms and effectively utilizes available unlabeled samples to overcome the performance limitations imposed by an extremely small labeled dataset.  Extensive experiments on various few-shot tabular learning benchmarks demonstrate the superior performance of Latte, establishing it as a state-of-the-art approach in this domain. 
Our code is available at https:\/\/github.com\/ruxueshi\/Latte.git.<\/jats:p>","DOI":"10.24963\/ijcai.2025\/687","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"6173-6181","source":"Crossref","is-referenced-by-count":0,"title":["Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning"],"prefix":"10.24963","author":[{"given":"Ruxue","family":"Shi","sequence":"first","affiliation":[{"name":"Jilin University"}]},{"given":"Hengrui","family":"Gu","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Hangting","family":"Ye","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Yiwei","family":"Dai","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Xu","family":"Shen","sequence":"additional","affiliation":[{"name":"Jilin University"}]},{"given":"Xin","family":"Wang","sequence":"additional","affiliation":[{"name":"Jilin University"}]}],"member":"10584","event":{"number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2025","name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","start":{"date-parts":[[2025,8,16]]},"theme":"Artificial Intelligence","location":"Montreal, Canada","end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:34:51Z","timestamp":1758627291000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/687"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/687","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}
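The record above is a Crossref REST API work message for DOI 10.24963/ijcai.2025/687. A minimal sketch of retrieving the same record and reading a few of its fields follows; it assumes the public api.crossref.org endpoint and the Python requests package, neither of which appears in the record itself.

import requests

# Fetch the Crossref work message shown above (illustrative sketch).
doi = "10.24963/ijcai.2025/687"
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # corresponds to the "message" object in the record

# Read fields that are present in the record above.
title = work["title"][0]
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]
venue = work["container-title"][0]
pages = work.get("page", "n/a")

print(title)
print(", ".join(authors))
print(f"{venue}, pp. {pages}")

Note that list-valued fields such as "title" and "container-title" hold a single string here, which is why the sketch indexes element 0.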