{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T07:41:11Z","timestamp":1723016471028},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:p>Existing unsupervised domain adaptation (UDA) studies focus on transferring knowledge in an offline manner. However, many tasks involve online requirements, especially in real-time systems. In this paper, we discuss Online UDA (OUDA), which assumes that target samples arrive sequentially in small batches. OUDA tasks are challenging for prior UDA methods, since online training suffers from catastrophic forgetting, which leads to poor generalization. Intuitively, a good memory is a crucial factor in the success of OUDA. We formalize this intuition theoretically with a generalization bound in which the OUDA target error is bounded by the source error, the domain discrepancy distance, and a novel metric on forgetting in continuous online learning. Our theory illustrates the tradeoffs inherent in learning and remembering representations for OUDA. To minimize the proposed forgetting metric, we propose a novel source feature distillation (SFD) method, which uses the source-only model as a teacher to guide online training. In experiments, we modify three UDA algorithms, i.e., DANN, CDAN, and MCC, and evaluate their performance on OUDA tasks with real-world datasets. By applying SFD, the performance of all baselines is significantly improved.<\/jats:p>","DOI":"10.24963\/ijcai.2022\/410","type":"proceedings-article","created":{"date-parts":[[2022,7,16]],"date-time":"2022-07-16T02:55:56Z","timestamp":1657940156000},"page":"2958-2965","source":"Crossref","is-referenced-by-count":0,"title":["Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation"],"prefix":"10.24963","author":[{"given":"Cheng","family":"Feng","sequence":"first","affiliation":[{"name":"Fujitsu R&D Center, Co., LTD, Beijing, China"}]},{"given":"Chaoliang","family":"Zhong","sequence":"additional","affiliation":[{"name":"Fujitsu R&D Center, Co., LTD, Beijing, China"}]},{"given":"Jie","family":"Wang","sequence":"additional","affiliation":[{"name":"Fujitsu R&D Center, Co., LTD, Beijing, China"}]},{"given":"Ying","family":"Zhang","sequence":"additional","affiliation":[{"name":"Fujitsu R&D Center, Co., LTD, Beijing, China"}]},{"given":"Jun","family":"Sun","sequence":"additional","affiliation":[{"name":"Fujitsu R&D Center, Co., LTD, Beijing, China"}]},{"given":"Yasuto","family":"Yokota","sequence":"additional","affiliation":[{"name":"Fujitsu LTD."}]}],"member":"10584","event":{"number":"31","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2022","name":"Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}","start":{"date-parts":[[2022,7,23]]},"theme":"Artificial Intelligence","location":"Vienna, Austria","end":{"date-parts":[[2022,7,29]]}},"container-title":["Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2022,7,18]],"date-time":"2022-07-18T11:09:33Z","timestamp":1658142573000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2022\/410"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2022,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2022\/410","relation":{},"subject":[],"published":{"date-parts":[[2022,7]]}}}