{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,11]],"date-time":"2025-12-11T21:05:39Z","timestamp":1765487139066,"version":"3.27.0"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643685489","type":"electronic"}],"license":[{"start":{"date-parts":[[2024,10,16]],"date-time":"2024-10-16T00:00:00Z","timestamp":1729036800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,10,16]]},"abstract":"<jats:p>Continual Learning (CL) methods enable models to learn new tasks without forgetting previously learned ones. Catastrophic Forgetting (CF) occurs when the parameters of a neural network are updated for a new task, causing the model to lose performance on tasks it has previously learned. To mitigate CF, parameter isolation methods use a \u201ctask mask\u201d to allocate a subset of weights to each task; these weights are typically frozen to preserve task performance. However, frozen weights can limit positive backward transfer, which is the beneficial reuse of knowledge from new tasks to improve the accuracy of previously learned tasks. To address this gap, we introduce LEarning AFter learning (LEAF), a novel CL method that enables positive backward transfer by dynamically updating frozen task masks based on gradient updates that signal sufficient backward knowledge transfer. This mechanism allows for selective integration of new knowledge without sacrificing previously acquired knowledge. Our experiments show that LEAF surpasses existing state-of-the-art methods in terms of accuracy while maintaining comparable memory and runtime efficiencies. Moreover, it outperforms other backward transfer techniques in improving the accuracy of a prioritized task. Our code is available at https:\/\/github.com\/wernse\/LEAF.<\/jats:p>","DOI":"10.3233\/faia240718","type":"book-chapter","created":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T13:13:00Z","timestamp":1729170780000},"source":"Crossref","is-referenced-by-count":1,"title":["Learning After Learning: Positive Backward Transfer in Continual Learning"],"prefix":"10.3233","author":[{"given":"Wern Sen","family":"Wong","sequence":"first","affiliation":[{"name":"School of Computer Science, The University of Auckland, New Zealand"}]},{"given":"Yun Sing","family":"Koh","sequence":"additional","affiliation":[{"name":"School of Computer Science, The University of Auckland, New Zealand"}]},{"given":"Gillian","family":"Dobbie","sequence":"additional","affiliation":[{"name":"School of Computer Science, The University of Auckland, New Zealand"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2024"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA240718","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T13:13:01Z","timestamp":1729170781000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA240718"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,16]]},"ISBN":["9781643685489"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia240718","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,16]]}}}