{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T14:35:17Z","timestamp":1772548517955,"version":"3.50.1"},"reference-count":71,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2025,3,13]],"date-time":"2025-03-13T00:00:00Z","timestamp":1741824000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,13]],"date-time":"2025-03-13T00:00:00Z","timestamp":1741824000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100007601","name":"Horizon 2020","doi-asserted-by":"publisher","award":["951911"],"award-info":[{"award-number":["951911"]}],"id":[{"id":"10.13039\/501100007601","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100015439","name":"Centres de Recerca de Catalunya","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100015439","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004837","name":"Ministerio de Ciencia e Innovaci\u00f3n","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100004837","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,7]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Vision transformers (ViTs) have achieved remarkable successes across a broad range of computer vision applications. As a consequence, there has been increasing interest in extending continual learning theory and techniques to ViT architectures. We propose a new method for exemplar-free class incremental training of ViTs. 
The main challenge of exemplar-free continual learning is maintaining plasticity of the learner without causing catastrophic forgetting of previously learned tasks. This is often achieved via exemplar replay, which can help recalibrate previous task classifiers to the feature drift which occurs when learning new tasks. Exemplar replay, however, comes at the cost of retaining samples from previous tasks, which for many applications may not be possible. To address the problem of continual ViT training, we first propose <jats:italic>gated class-attention<\/jats:italic> to minimize the drift in the final ViT transformer block. This mask-based gating is applied to the class-attention mechanism of the last transformer block and strongly regulates the weights crucial for previous tasks. Importantly, gated class-attention does not require the task-ID during inference, which distinguishes it from other parameter isolation methods. Secondly, we propose a new method of <jats:italic>feature drift compensation<\/jats:italic> that accommodates feature drift in the backbone when learning new tasks. The combination of gated class-attention and cascaded feature drift compensation allows for plasticity towards new tasks while limiting forgetting of previous ones. 
Extensive experiments performed on CIFAR-100, Tiny-ImageNet and ImageNet100 demonstrate that our exemplar-free method obtains competitive results when compared to rehearsal based ViT methods.(Code:<jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/OcraM17\/GCAB-CFDC\" ext-link-type=\"uri\">https:\/\/github.com\/OcraM17\/GCAB-CFDC<\/jats:ext-link>)<\/jats:p>","DOI":"10.1007\/s11263-025-02374-x","type":"journal-article","created":{"date-parts":[[2025,3,13]],"date-time":"2025-03-13T11:28:26Z","timestamp":1741865306000},"page":"4571-4589","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Exemplar-Free Continual Learning of Vision Transformers via Gated Class-Attention and Cascaded Feature Drift Compensation"],"prefix":"10.1007","volume":"133","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7950-7370","authenticated-orcid":false,"given":"Marco","family":"Cotogni","sequence":"first","affiliation":[]},{"given":"Fei","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Claudio","family":"Cusano","sequence":"additional","affiliation":[]},{"given":"Andrew D.","family":"Bagdanov","sequence":"additional","affiliation":[]},{"given":"Joost","family":"van de Weijer","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,13]]},"reference":[{"key":"2374_CR1","doi-asserted-by":"crossref","unstructured":"Abati, D., Tomczak, J., Blankevoort, T., Calderara, S., Cucchiara, R., & Bejnordi, B. E. (2020). Conditional channel gated networks for task-aware continual learning. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 3931\u20133940).","DOI":"10.1109\/CVPR42600.2020.00399"},{"key":"2374_CR2","doi-asserted-by":"crossref","unstructured":"Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., & Tuytelaars, T. (2018). 
Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 139\u2013154).","DOI":"10.1007\/978-3-030-01219-9_9"},{"key":"2374_CR3","unstructured":"Aljundi, R., Lin, M., Goujaud, B., & Bengio, Y. (2019). Gradient based sample selection for online continual learning. In Advances in neural information processing systems 32."},{"key":"2374_CR4","doi-asserted-by":"crossref","unstructured":"Bang, J., Kim, H., Yoo, Y., Ha, JW., & Choi, J. (2021). Rainbow memory: Continual learning with a memory of diverse samples. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 8218\u20138227).","DOI":"10.1109\/CVPR46437.2021.00812"},{"key":"2374_CR5","unstructured":"Benjamin, AS., Rolnick, D., & K\u00f6rding, KP. (2019). Measuring and regularizing networks in function space. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"2374_CR6","first-page":"15920","volume":"33","author":"P Buzzega","year":"2020","unstructured":"Buzzega, P., Boschini, M., Porrello, A., Abati, D., & Calderara, S. (2020). Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems, 33, 15920\u201315930.","journal-title":"Advances in neural information processing systems"},{"key":"2374_CR7","doi-asserted-by":"crossref","unstructured":"Buzzega, P., Boschini, M., Porrello, A., & Calderara, S. (2021). Rethinking experience replay: a bag of tricks for continual learning. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 2180\u20132187).","DOI":"10.1109\/ICPR48806.2021.9412614"},{"key":"2374_CR8","doi-asserted-by":"crossref","unstructured":"Caron, M., Touvron, H., Misra, I., J\u00e9gou, H., Mairal, J., Bojanowski, P., & Joulin, A. (2021). Emerging properties in self-supervised vision transformers. 
In Proceedings of the IEEE\/CVF International Conference on Computer Vision (pp. 9650\u20139660).","DOI":"10.1109\/ICCV48922.2021.00951"},{"key":"2374_CR9","doi-asserted-by":"crossref","unstructured":"Castro, FM., Mar\u00edn-Jim\u00e9nez, MJ., Guil, N., Schmid, C., & Alahari, K. (2018). End-to-end incremental learning. In Proceedings of the European conference on computer vision (ECCV) (pp. 233\u2013248).","DOI":"10.1007\/978-3-030-01258-8_15"},{"key":"2374_CR10","doi-asserted-by":"crossref","unstructured":"Chaudhry, A., Dokania, PK., Ajanthan, T., & Torr, PH. (2018). Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 532\u2013547).","DOI":"10.1007\/978-3-030-01252-6_33"},{"key":"2374_CR11","unstructured":"Chaudhry, A., Ranzato, M., Rohrbach, M., & Elhoseiny, M. (2019). Efficient lifelong learning with a-gem. In International Conference on Learning Representations."},{"key":"2374_CR12","doi-asserted-by":"crossref","unstructured":"Chaudhry, A., Gordo, A., Dokania, P., Torr, P., & Lopez-Paz, D. (2021). Using hindsight to anchor past knowledge in continual learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 35, 6993\u20137001.","DOI":"10.1609\/aaai.v35i8.16861"},{"key":"2374_CR13","first-page":"16736","volume":"33","author":"R Del Chiaro","year":"2020","unstructured":"Del Chiaro, R., Twardowski, B., Bagdanov, A., & Van De Weijer, J. (2020). Ratt: Recurrent attention to transient tasks for continual image captioning. Advances in Neural Information Processing Systems, 33, 16736\u201316748.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2374_CR14","doi-asserted-by":"crossref","unstructured":"Delange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., & Tuytelaars, T. (2021). A continual learning survey: Defying forgetting in classification tasks. 
In IEEE Transactions on Pattern Analysis and Machine Intelligence.","DOI":"10.1109\/TPAMI.2021.3057446"},{"key":"2374_CR15","doi-asserted-by":"crossref","unstructured":"Dhar, P., Singh, RV., Peng, KC., Wu, Z., & Chellappa, R. (2019). Learning without memorizing. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 5138\u20135146).","DOI":"10.1109\/CVPR.2019.00528"},{"key":"2374_CR16","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria."},{"key":"2374_CR17","doi-asserted-by":"crossref","unstructured":"Douillard, A., Cord, M., Ollion, C., Robert, T., & Valle, E. (2020). Podnet: Pooled outputs distillation for small-tasks incremental learning. In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XX 16 Springer (pp. 86\u2013102).","DOI":"10.1007\/978-3-030-58565-5_6"},{"key":"2374_CR18","doi-asserted-by":"crossref","unstructured":"Douillard, A., Ram\u00e9, A., Couairon, G., & Cord, M. (2022). Dytox: Transformers for continual learning with dynamic token expansion. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 9285\u20139295).","DOI":"10.1109\/CVPR52688.2022.00907"},{"key":"2374_CR19","doi-asserted-by":"crossref","unstructured":"Fini, E., da\u00a0Costa, VGT., Alameda-Pineda, X., Ricci, E., Alahari, K., & Mairal, J. (2022). Self-supervised models are continual learners. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 9621\u20139630).","DOI":"10.1109\/CVPR52688.2022.00940"},{"key":"2374_CR20","unstructured":"Gehring, J., Auli, M., Grangier, D., Yarats, D., & Dauphin, YN. (2017). 
Convolutional sequence to sequence learning. In International conference on machine learning (pp. 1243\u20131252)."},{"key":"2374_CR21","doi-asserted-by":"crossref","unstructured":"Gomez-Villa, A., Twardowski, B., Yu, L., Bagdanov, AD., van\u00a0de, & Weijer, J. (2022). Continually learning self-supervised representations with projected functional regularization. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 3867\u20133877).","DOI":"10.1109\/CVPRW56347.2022.00432"},{"key":"2374_CR22","unstructured":"Goodfellow, IJ., Mirza, M., Da, X., Courville, AC., & Bengio, Y. (2014). An empirical investigation of catastrophic forgeting in gradient-based neural networks. In Bengio Y, LeCun Y (eds) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings."},{"key":"2374_CR23","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770\u2013778).","DOI":"10.1109\/CVPR.2016.90"},{"key":"2374_CR24","unstructured":"Hinton, G., Vinyals, O., Dean, J., et\u00a0al. (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 2(7)"},{"key":"2374_CR25","doi-asserted-by":"crossref","unstructured":"Hou, S., Pan, X., Loy, CC., Wang, Z., & Lin, D. (2019). Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 831\u2013839).","DOI":"10.1109\/CVPR.2019.00092"},{"key":"2374_CR26","unstructured":"Jung, H., Ju, J., Jung, M., & Kim, J. (2016). Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122"},{"key":"2374_CR27","doi-asserted-by":"crossref","unstructured":"Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. 
A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13), 3521\u20133526.","DOI":"10.1073\/pnas.1611835114"},{"key":"2374_CR28","unstructured":"Krizhevsky, A., Hinton, G., et\u00a0al. (2009). Learning multiple layers of features from tiny images. Tech Report"},{"key":"2374_CR29","unstructured":"Le, Y., & Yang, X. (2015). Tiny imagenet visual recognition challenge. CS 231N 7(7):3"},{"key":"2374_CR30","doi-asserted-by":"crossref","unstructured":"Lee, J., Hong, HG., Joo, D., & Kim, J. (2020). Continual learning with extended kronecker-factored approximate curvature. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 9001\u20139010).","DOI":"10.1109\/CVPR42600.2020.00902"},{"issue":"12","key":"2374_CR31","doi-asserted-by":"publisher","first-page":"2935","DOI":"10.1109\/TPAMI.2017.2773081","volume":"40","author":"Z Li","year":"2017","unstructured":"Li, Z., & Hoiem, D. (2017). Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 2935\u20132947.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2374_CR32","doi-asserted-by":"crossref","unstructured":"Liu, X., Masana, M., Herranz, L., Van\u00a0de, Weijer, J., Lopez, AM., & Bagdanov, AD. (2018). Rotate your networks: Better weight consolidation and less catastrophic forgetting. In 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 2262\u20132268).","DOI":"10.1109\/ICPR.2018.8545895"},{"key":"2374_CR33","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (pp. 
10012\u201310022).","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"2374_CR34","unstructured":"Lopez-Paz, D., & Ranzato, M. (2017). Gradient episodic memory for continual learning. In Advances in neural information processing systems 30."},{"key":"2374_CR35","unstructured":"Van\u00a0der Maaten, L., Hinton, G. (2008). Visualizing data using t-sne. Journal of machine learning research 9(11)."},{"key":"2374_CR36","doi-asserted-by":"crossref","unstructured":"Mallya, A., & Lazebnik, S. (2018). Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (pp. 7765\u20137773).","DOI":"10.1109\/CVPR.2018.00810"},{"key":"2374_CR37","doi-asserted-by":"crossref","unstructured":"Mallya, A., Davis, D., & Lazebnik, S. (2018). Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 67\u201382).","DOI":"10.1007\/978-3-030-01225-0_5"},{"key":"2374_CR38","doi-asserted-by":"crossref","unstructured":"Masana, M., Tuytelaars, T., & Van\u00a0de Weijer, J. (2021). Ternary feature masks: zero-forgetting for task-incremental learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 3570\u20133579).","DOI":"10.1109\/CVPRW53098.2021.00396"},{"issue":"5","key":"2374_CR39","doi-asserted-by":"publisher","first-page":"5513","DOI":"10.1109\/TPAMI.2022.3213473","volume":"45","author":"M Masana","year":"2022","unstructured":"Masana, M., Liu, X., Twardowski, B., Menta, M., Bagdanov, A. D., & Van De Weijer, J. (2022). Class-incremental learning: survey and performance evaluation on image classification. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5), 5513\u20135533.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2374_CR40","unstructured":"McDonnell, MD., Gong, D., Parvaneh, A., Abbasnejad, E., & van\u00a0den Hengel, A. (2024) Ranpac: Random projections and pre-trained models for continual learning. Advances in Neural Information Processing Systems."},{"key":"2374_CR41","doi-asserted-by":"crossref","unstructured":"Mermillod, M., Bugaiska, A., & Bonin, P. (2013). The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects","DOI":"10.3389\/fpsyg.2013.00504"},{"key":"2374_CR42","unstructured":"Mundt, M., Lang, S., Delfosse, Q., & Kersting, K. (2022). Cleva-compass: A continual learning evaluation assessment compass to promote research transparency and comparability. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event (pp.25-29)."},{"key":"2374_CR43","unstructured":"Paul, S., Frey, LJ., Kamath, R., Kersting, K., & Mundt, M. (2023). Masked autoencoders are efficient continual federated learners. arXiv preprint arXiv:2306.03542."},{"key":"2374_CR44","doi-asserted-by":"crossref","unstructured":"Pelosin, F., Jha, S., Torsello, A., Raducanu, B., & van\u00a0de Weijer, J. (2022). Towards exemplar-free continual learning in vision transformers: an account of attention, functional and weight regularization. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 3820\u20133829).","DOI":"10.1109\/CVPRW56347.2022.00427"},{"key":"2374_CR45","unstructured":"Rajasegaran, J., Hayat, M., Khan, SH., Khan, FS., & Shao, L. (2019). Random path selection for continual learning. Advances in Neural Information Processing Systems."},{"key":"2374_CR46","doi-asserted-by":"crossref","unstructured":"Rebuffi, SA., Kolesnikov, A., Sperl, G., & Lampert, CH. (2017). 
icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (pp. 2001\u20132010).","DOI":"10.1109\/CVPR.2017.587"},{"key":"2374_CR47","unstructured":"Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., & Tesauro, G. (2019). Learning to learn without forgetting by maximizing transfer and minimizing interference. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"issue":"3","key":"2374_CR48","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211\u2013252.","journal-title":"International Journal of Computer Vision"},{"key":"2374_CR49","unstructured":"Rusu, AA., Rabinowitz, NC., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., & Hadsell, R. (2016). Progressive neural networks. arXiv"},{"key":"2374_CR50","unstructured":"Serra, J., Suris, D., Miron, M., & Karatzoglou, A. (2018). Overcoming catastrophic forgetting with hard attention to the task. In International Conference on Machine Learning (pp. 4548\u20134557)."},{"key":"2374_CR51","doi-asserted-by":"crossref","unstructured":"Strudel, R., Garcia, R., Laptev, I., & Schmid, C. (2021). Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (pp. 7262\u20137272).","DOI":"10.1109\/ICCV48922.2021.00717"},{"key":"2374_CR52","doi-asserted-by":"crossref","unstructured":"Toldo, M., & Ozay, M. (2022). Bring evanescent representations to life in lifelong class incremental learning. 
In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 16732\u201316741).","DOI":"10.1109\/CVPR52688.2022.01623"},{"key":"2374_CR53","doi-asserted-by":"crossref","unstructured":"Touvron H, Cord M, Sablayrolles A, Synnaeve G, & J\u00e9gou H (2021) Going deeper with image transformers. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (pp. 32\u201342).","DOI":"10.1109\/ICCV48922.2021.00010"},{"key":"2374_CR54","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, AN., Kaiser, \u0141., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems."},{"key":"2374_CR55","doi-asserted-by":"crossref","unstructured":"Wang, L., Yang, K., Li, C., Hong, L., Li, Z., & Zhu, J. (2021). Ordisco: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 5383\u20135392).","DOI":"10.1109\/CVPR46437.2021.00534"},{"key":"2374_CR56","unstructured":"Wang, L., Xie, J., Zhang, X., Huang, M., Su, H., & Zhu, J. (2024). Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. Advances in Neural Information Processing Systems."},{"key":"2374_CR57","doi-asserted-by":"crossref","unstructured":"Wang, Z., Liu, L., Duan, Y., Kong, Y., & Tao, D. (2022a). Continual learning with lifelong vision transformer. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 171\u2013181).","DOI":"10.1109\/CVPR52688.2022.00027"},{"key":"2374_CR58","doi-asserted-by":"crossref","unstructured":"Wang, Z., Zhang, Z., Ebrahimi, S., Sun, R., Zhang, H., Lee, CY., Ren, X., Su, G., Perot, V., Dy, J., et\u00a0al. (2022b). Dualprompt: Complementary prompting for rehearsal-free continual learning. In European Conference on Computer Vision. Springer (pp. 
631\u2013648).","DOI":"10.1007\/978-3-031-19809-0_36"},{"key":"2374_CR59","doi-asserted-by":"crossref","unstructured":"Wang, Z., Zhang, Z., Lee, CY., Zhang, H., Sun, R., Ren, X., Su, G., Perot, V., Dy, J., & Pfister, T. (2022c). Learning to prompt for continual learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 139\u2013149).","DOI":"10.1109\/CVPR52688.2022.00024"},{"key":"2374_CR60","doi-asserted-by":"crossref","unstructured":"Wu, Y., Chen, Y., Wang, L., Ye, Y., Liu, Z., Guo, Y., & Fu, Y. (2019). Large scale incremental learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 374\u2013382).","DOI":"10.1109\/CVPR.2019.00046"},{"key":"2374_CR61","doi-asserted-by":"crossref","unstructured":"Yan, S., Xie, J., & He, X. (2021). Der: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 3014\u20133023).","DOI":"10.1109\/CVPR46437.2021.00303"},{"key":"2374_CR62","unstructured":"Yoon, J., Jeong, W., Lee, G., Yang, E., & Hwang, SJ. (2021). Federated continual learning with weighted inter-client transfer. In International Conference on Machine Learning (pp. 12073\u201312086)."},{"key":"2374_CR63","doi-asserted-by":"crossref","unstructured":"Yu, L., Twardowski, B., Liu, X., Herranz, L., Wang, K., Cheng, Y., Jui, S., & Weijer, Jvd, (2020). Semantic drift compensation for class-incremental learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 6982\u20136991).","DOI":"10.1109\/CVPR42600.2020.00701"},{"key":"2374_CR64","unstructured":"Zenke, F., Poole, B., & Ganguli, S. (2017). Continual learning through synaptic intelligence. In International Conference on Machine Learning (pp. 3987\u20133995)."},{"key":"2374_CR65","doi-asserted-by":"crossref","unstructured":"Zhai, M., Chen, L., & Mori, G. (2021). 
Hyper-lifelonggan: scalable lifelong learning for image conditioned generation. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 2246\u20132255).","DOI":"10.1109\/CVPR46437.2021.00228"},{"key":"2374_CR66","doi-asserted-by":"crossref","unstructured":"Zhang, G., Wang, L., Kang, G., Chen, L., & Wei, Y. (2023). Slca: Slow learner with classifier alignment for continual learning on a pre-trained model. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (pp. 19148\u201319158).","DOI":"10.1109\/ICCV51070.2023.01754"},{"key":"2374_CR67","doi-asserted-by":"crossref","unstructured":"Zhang, J., Zhang, J., Ghosh, S., Li, D., Tasci, S., Heck, L., Zhang, H., & Kuo, CCJ. (2020) Class-incremental learning via deep model consolidation. In Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (pp. 1131\u20131140).","DOI":"10.1109\/WACV45572.2020.9093365"},{"key":"2374_CR68","doi-asserted-by":"crossref","unstructured":"Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, PH., et\u00a0al. (2021) Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 6881\u20136890).","DOI":"10.1109\/CVPR46437.2021.00681"},{"key":"2374_CR69","unstructured":"Zhou, D., Wang, Q., Ye, H., & Zhan, D. (2023a). A model or 603 exemplars: Towards memory-efficient class-incremental learning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda."},{"key":"2374_CR70","doi-asserted-by":"crossref","unstructured":"Zhou, DW., Ye, HJ., Zhan, DC., & Liu, Z. (2023b). Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need. 
arXiv preprint arXiv:2303.07338","DOI":"10.1007\/s11263-024-02218-0"},{"key":"2374_CR71","doi-asserted-by":"crossref","unstructured":"Zhu, F., Zhang, XY., Wang, C., Yin, F., & Liu, CL. (2021). Prototype augmentation and self-supervision for incremental learning. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp 5871\u20135880).","DOI":"10.1109\/CVPR46437.2021.00581"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02374-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02374-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02374-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,7]],"date-time":"2025-06-07T06:02:30Z","timestamp":1749276150000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02374-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,13]]},"references-count":71,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,7]]}},"alternative-id":["2374"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02374-x","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,13]]},"assertion":[{"value":"27 July 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"13 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}