{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,24]],"date-time":"2025-09-24T00:14:50Z","timestamp":1758672890805,"version":"3.44.0"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Recent research has demonstrated the effectiveness of knowledge distillation in Domain Generalization. However, existing approaches often overlook domain-specific knowledge and rely on an offline distillation strategy, limiting the effectiveness of knowledge transfer. To address these limitations, we propose Balanced Online knowLedge Distillation (BOLD). BOLD leverages a multi-domain expert teacher model, with each expert specializing in a specific source domain, enabling the student to distill both domain-invariant and domain-specific knowledge. We incorporate the Pareto optimization principle and uncertainty weighting to balance these two types of knowledge, ensuring simultaneous optimization without compromising either. Additionally, BOLD employs an online knowledge distillation strategy, allowing the teacher and student to learn concurrently. This dynamic interaction enables the teacher to adapt based on student feedback, facilitating more effective knowledge transfer. Extensive experiments on seven benchmarks demonstrate that BOLD outperforms state-of-the-art methods. Furthermore, we provide theoretical insights that highlight the importance of domain-specific knowledge and the advantages of uncertainty weighting.<\/jats:p>","DOI":"10.24963\/ijcai.2025\/272","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"2440-2448","source":"Crossref","is-referenced-by-count":0,"title":["Balancing Invariant and Specific Knowledge for Domain Generalization with Online Knowledge Distillation"],"prefix":"10.24963","author":[{"given":"Di","family":"Zhao","sequence":"first","affiliation":[{"name":"University of Auckland"}]},{"given":"Jingfeng","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Auckland"}]},{"given":"Hongsheng","family":"Hu","sequence":"additional","affiliation":[{"name":"University of Newcastle"}]},{"given":"Philippe","family":"Fournier-Viger","sequence":"additional","affiliation":[{"name":"Shenzhen University"}]},{"given":"Gillian","family":"Dobbie","sequence":"additional","affiliation":[{"name":"University of Auckland"}]},{"given":"Yun Sing","family":"Koh","sequence":"additional","affiliation":[{"name":"University of Auckland"}]}],"member":"10584","event":{"number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2025","name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","start":{"date-parts":[[2025,8,16]]},"theme":"Artificial Intelligence","location":"Montreal, Canada","end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:33:34Z","timestamp":1758627214000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/272"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/272","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}