{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,24]],"date-time":"2025-09-24T00:15:03Z","timestamp":1758672903113,"version":"3.44.0"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Text-Attributed Graphs (TAGs) are vital for modeling entity relationships across various domains. Graph Neural Networks have become a cornerstone for processing graph structures, while the integration of text attributes remains a prominent research topic. The development of Large Language Models (LLMs) provides new opportunities for advancing textual encoding in TAGs. However, LLMs face challenges in specialized domains due to their limited task-specific knowledge, and fine-tuning them for specific tasks demands significant resources. To cope with the above challenges, we propose HiTuner, a novel framework that leverages fine-tuned Pre-trained Language Models (PLMs) with domain expertise as a tuner to enhance the hierarchical LLM contextualized representations for modeling TAGs. Specifically, we first strategically select hierarchical hidden states of the LLM to form a set of diverse and complementary descriptions as input for the sparse projection operator. Concurrently, a hybrid representation learning scheme is developed to amalgamate the broad linguistic comprehension of LLMs with the task-specific insights of the fine-tuned PLMs. Finally, HiTuner employs a confidence network to adaptively fuse the semantically augmented representations. Empirical results across benchmark datasets spanning various domains validate the effectiveness of the proposed framework.\n\nOur code is available at: https:\/\/github.com\/ZihanFang11\/HiTuner<\/jats:p>","DOI":"10.24963\/ijcai.2025\/569","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"5110-5117","source":"Crossref","is-referenced-by-count":0,"title":["HiTuner: Hierarchical Semantic Fusion Model Fine-Tuning on Text-Attributed Graphs"],"prefix":"10.24963","author":[{"given":"Zihan","family":"Fang","sequence":"first","affiliation":[{"name":"Fuzhou University"}]},{"given":"Zhiling","family":"Cai","sequence":"additional","affiliation":[{"name":"Fujian Agriculture and Forestry University"}]},{"given":"Yuxuan","family":"Zheng","sequence":"additional","affiliation":[{"name":"Fuzhou University"}]},{"given":"Shide","family":"Du","sequence":"additional","affiliation":[{"name":"Fuzhou University"}]},{"given":"Yanchao","family":"Tan","sequence":"additional","affiliation":[{"name":"Fuzhou University"}]},{"given":"Shiping","family":"Wang","sequence":"additional","affiliation":[{"name":"Fuzhou University"}]}],"member":"10584","event":{"number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2025","name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","start":{"date-parts":[[2025,8,16]]},"theme":"Artificial Intelligence","location":"Montreal, Canada","end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:34:26Z","timestamp":1758627266000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/569"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/569","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}