{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,17]],"date-time":"2026-01-17T07:12:44Z","timestamp":1768633964870,"version":"3.49.0"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,9,19]],"date-time":"2023-09-19T00:00:00Z","timestamp":1695081600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,9,19]],"date-time":"2023-09-19T00:00:00Z","timestamp":1695081600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Mach Learn"],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Gradient-based meta-learning techniques aim to distill useful prior knowledge from a set of training tasks such that new tasks can be learned more efficiently with gradient descent. While these methods have achieved successes in various scenarios, they commonly adapt <jats:italic>all<\/jats:italic> parameters of trainable layers when learning new tasks. This neglects potentially more efficient learning strategies for a given task distribution and may be susceptible to overfitting, especially in few-shot learning where tasks must be learned from a limited number of examples. To address these issues, we propose <jats:italic>Subspace Adaptation Prior<\/jats:italic> (SAP), a novel gradient-based meta-learning algorithm that jointly learns good initialization parameters (prior knowledge) and layer-wise <jats:italic>parameter subspaces<\/jats:italic> in the form of operation subsets that should be adaptable. 
In this way, SAP can learn which operation subsets to adjust with gradient descent based on the underlying task distribution, simultaneously decreasing the risk of overfitting when learning new tasks. We demonstrate that this ability is helpful as SAP yields superior or competitive performance in few-shot image classification settings (gains between 0.1% and 3.9% in accuracy). Analysis of the learned subspaces demonstrates that low-dimensional operations often yield high activation strengths, indicating that they may be important for achieving good few-shot learning performance. For reproducibility purposes, we publish all our research code publicly.<\/jats:p>","DOI":"10.1007\/s10994-023-06393-y","type":"journal-article","created":{"date-parts":[[2023,9,19]],"date-time":"2023-09-19T20:35:37Z","timestamp":1695155737000},"page":"725-752","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Subspace Adaptation Prior for Few-Shot Learning"],"prefix":"10.1007","volume":"113","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9215-2973","authenticated-orcid":false,"given":"Mike","family":"Huisman","sequence":"first","affiliation":[]},{"given":"Aske","family":"Plaat","sequence":"additional","affiliation":[]},{"given":"Jan N.","family":"van Rijn","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,9,19]]},"reference":[{"key":"6393_CR1","unstructured":"Andrychowicz, M., Denil, M., & Colmenarejo, S.G., et al. (2016). Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29. Curran Associates Inc., pp. 3988\u20133996"},{"key":"6393_CR2","unstructured":"Antoniou, A., Edwards, H., & Storkey, A. (2019). How to train your MAML. 
In International Conference on Learning Representations (ICLR\u201919)"},{"key":"6393_CR3","doi-asserted-by":"crossref","unstructured":"Bateni, P., Goyal, R., & Masrani, V., et al (2020) Improved few-shot visual classification. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 14,493\u201314,502","DOI":"10.1109\/CVPR42600.2020.01450"},{"key":"6393_CR4","unstructured":"Bendre, N., Mar\u00edn, H.T., & Najafirad, P. (2020). Learning from few samples: A survey. arXiv preprint arXiv:2007.15484"},{"key":"6393_CR5","unstructured":"Bertinetto, L., Henriques, J.F., & Torr, P., et.al. (2019). Meta-learning with differentiable closed-form solvers. In International Conference on Learning Representations (ICLR\u201919)"},{"key":"6393_CR6","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-67024-5","volume-title":"Metalearning: Applications to Automated Machine Learning and Data Mining","author":"P Brazdil","year":"2022","unstructured":"Brazdil, P., van Rijn, J. N., Soares, C., et al. (2022). Metalearning: Applications to Automated Machine Learning and Data Mining (2nd ed.). Springer.","edition":"2"},{"key":"6393_CR7","unstructured":"Chen, W.Y., Liu, Y.C., & Kira, Z. et. al. (2019). A closer look at few-shot classification. In International Conference on Learning Representations (ICLR\u201919)"},{"key":"6393_CR8","doi-asserted-by":"crossref","unstructured":"Chen, Y., Liu, Z., & Xu, H., et.al. (2021). Meta-baseline: Exploring simple meta-learning for few-shot learning. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 9062\u20139071","DOI":"10.1109\/ICCV48922.2021.00893"},{"key":"6393_CR9","unstructured":"Daum\u00e9, III H (2009) Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815"},{"key":"6393_CR10","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., & Socher, R. et.al. (2009). ImageNet: A large-scale hierarchical image database. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp. 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"6393_CR11","doi-asserted-by":"crossref","unstructured":"Elsken, T., Staffler, B., Metzen JH, et al (2020) Meta-learning of neural architectures for few-shot learning. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920), pp. 12,365\u201312,375","DOI":"10.1109\/CVPR42600.2020.01238"},{"key":"6393_CR12","doi-asserted-by":"crossref","unstructured":"Farahani, A., Voghoei, S., & Rasheed, K., et.al. (2021). A brief review of domain adaptation. Advances in data science and information engineering: In Proceedings from ICDATA 2020 and IKE 2020 pp. 877\u2013894","DOI":"10.1007\/978-3-030-71704-9_65"},{"key":"6393_CR13","unstructured":"Finn, C., & Levine, S. (2018). Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. In International Conference on Learning Representations (ICLR\u201918)"},{"key":"6393_CR14","unstructured":"Finn, C., Abbeel, P., & Levine, S .(2017). Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML\u201917). PMLR, pp. 1126-1135"},{"key":"6393_CR15","unstructured":"Flennerhag, S., Rusu, AA., Pascanu, R., et al. (2020). Meta-learning with warped gradient descent. In International Conference on Learning Representations (ICLR\u201920)."},{"issue":"9","key":"6393_CR16","first-page":"5149","volume":"44","author":"TM Hospedales","year":"2021","unstructured":"Hospedales, T. M., Antoniou, A., Micaelli, P., et al. (2021). Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9), 5149\u20135169.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"6393_CR17","unstructured":"Huisman, M., van Rijn, J.N., Plaat, A. (2021a). 
A preliminary study on the feature representations of transfer learning and gradient-based meta-learning techniques. In Fifth Workshop on Meta-Learning at the Conference on Neural Information Processing Systems."},{"issue":"6","key":"6393_CR18","doi-asserted-by":"publisher","first-page":"4483","DOI":"10.1007\/s10462-021-10004-4","volume":"54","author":"M Huisman","year":"2021","unstructured":"Huisman, M., van Rijn, J. N., & Plaat, A. (2021). A survey of deep meta-learning. Artificial Intelligence Review, 54(6), 4483\u20134541.","journal-title":"Artificial Intelligence Review"},{"issue":"9","key":"6393_CR19","doi-asserted-by":"publisher","first-page":"3227","DOI":"10.1007\/s10994-022-06210-y","volume":"111","author":"M Huisman","year":"2022","unstructured":"Huisman, M., Plaat, A., & van Rijn, J. N. (2022). Stateless neural meta-learning using second-order gradients. Machine Learning, 111(9), 3227\u20133244.","journal-title":"Machine Learning"},{"key":"6393_CR20","unstructured":"Jang, E., Gu, S., & Poole, B. (2017). Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, (ICLR\u201917)."},{"key":"6393_CR21","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-642-20980-2","volume-title":"Meta-Learning in Computational Intelligence","author":"N Jankowski","year":"2011","unstructured":"Jankowski, N., Duch, W., & Gr\u0105bczewski, K. (2011). Meta-Learning in Computational Intelligence (Vol. 358). Berlin Heidelberg: Springer-Verlag."},{"key":"6393_CR22","unstructured":"Jiang, W., Kwok, J., Zhang, Y. (2022). Subspace learning for effective meta-learning. In Proceedings of the 39th International Conference on Machine Learning, PMLR, pp. 10,177\u201310,194."},{"key":"6393_CR23","unstructured":"Kim, J., Lee, S., Kim, S., et.al.(2018). Auto-Meta: Automated Gradient Based Meta Learner Search. 
arXiv preprint arXiv:1806.06927"},{"key":"6393_CR24","first-page":"1097","volume":"25","author":"A Krizhevsky","year":"2012","unstructured":"Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25, 1097\u20131105.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6393_CR25","doi-asserted-by":"crossref","unstructured":"Lee, K., Maji, S., & Ravichandran, A., et.al. (2019) Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10,657\u201310,665","DOI":"10.1109\/CVPR.2019.01091"},{"key":"6393_CR26","unstructured":"Lee, Y., & Choi, S. (2018). Gradient-based meta-learning with learned layerwise metric and subspace. In Proceedings of the 35th International Conference on Machine Learning (ICML\u201918), PMLR, pp. 2927\u20132936."},{"key":"6393_CR27","unstructured":"Li, K., & Malik, J. (2018). Learning to Optimize Neural Nets. arXiv preprint arXiv:1703.00441"},{"key":"6393_CR28","unstructured":"Li, Z., Zhou, F., & Chen, F., et.al .(2017). Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. arXiv preprint arXiv:1707.09835."},{"key":"6393_CR29","unstructured":"Lian, D., Zheng, Y., & Xu, Y., et.al. (2019). Towards fast adaptation of neural architectures with meta learning. In International Conference on Learning Representations (ICLR\u201919)."},{"key":"6393_CR30","unstructured":"Liu, H., Simonyan, K., & Yang, Y. (2019) DARTS: Differentiable architecture search. In International Conference on Learning Representations (ICLR\u201919)."},{"key":"6393_CR31","unstructured":"Lu, J., Gong, P., & Ye, J., et.al. (2020). Learning from very few samples: A survey. arXiv preprint arXiv:2009.02653"},{"key":"6393_CR32","unstructured":"Maddison, C.J., Mnih, A., Teh, Y.W. (2017). 
The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, (ICLR\u201917)."},{"key":"6393_CR33","unstructured":"Mnih, V., Kavukcuoglu, K., & Silver, D., et al. (2013). Playing Atari with Deep Reinforcement Learning. arXiv preprint arXiv:1312.5602."},{"key":"6393_CR34","doi-asserted-by":"crossref","unstructured":"Naik, D.K., Mammone, R.J. (1992). Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN\u201992), IEEE, pp. 437\u2013442.","DOI":"10.1109\/IJCNN.1992.287172"},{"key":"6393_CR35","unstructured":"Nichol, A., Achiam, J., Schulman, J. (2018). On First-Order Meta-Learning Algorithms. arXiv preprint arXiv:1803.02999."},{"issue":"10","key":"6393_CR36","doi-asserted-by":"publisher","first-page":"1345","DOI":"10.1109\/TKDE.2009.191","volume":"22","author":"SJ Pan","year":"2009","unstructured":"Pan, S. J., & Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345\u20131359.","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"6393_CR37","first-page":"3309","volume":"32","author":"E Park","year":"2019","unstructured":"Park, E., & Oliva, J. B. (2019). Meta-curvature. Advances in Neural Information Processing Systems, 32, 3309\u20133319.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6393_CR38","doi-asserted-by":"crossref","unstructured":"Perez, E., Strub, F., & De Vries, H., et al. (2018). FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). AAAI Press, pp. 3942\u20133951.","DOI":"10.1609\/aaai.v32i1.11671"},{"key":"6393_CR39","unstructured":"Ravi, S., Larochelle, H. (2017). Optimization as a model for few-shot learning. 
In International Conference on Learning Representations (ICLR\u201917)."},{"key":"6393_CR40","unstructured":"Ren, M., Ravi, S., Triantafillou, E., et.al. (2018). Meta-learning for semi-supervised few-shot classification. In International Conference on Learning Representations (ICLR\u201918)."},{"key":"6393_CR41","first-page":"7957","volume":"32","author":"J Requeima","year":"2019","unstructured":"Requeima, J., Gordon, J., Bronskill, J., et al. (2019). Fast and flexible multi-task classification using conditional neural adaptive processes. Advances in Neural Information Processing Systems, 32, 7957\u20137968.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6393_CR42","unstructured":"Rusu AA, Rao D, Sygnowski J, et al. (2019). Meta-learning with latent embedding optimization. In International Conference on Learning Representations (ICLR\u201919)."},{"key":"6393_CR43","unstructured":"Schmidhuber, J. (1987). Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Master\u2019s thesis, Technische Universit\u00e4t M\u00fcnchen."},{"issue":"7587","key":"6393_CR44","doi-asserted-by":"publisher","first-page":"484","DOI":"10.1038\/nature16961","volume":"529","author":"D Silver","year":"2016","unstructured":"Silver, D., Huang, A., Maddison, C. J., et al. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587), 484\u2013489.","journal-title":"Nature"},{"key":"6393_CR45","doi-asserted-by":"crossref","unstructured":"Simon, C., Koniusz, P., Nock, R., et.al. (2020). On modulating the gradient for meta-learning. In European Conference on Computer Vision, Springer, pp. 556\u2013572.","DOI":"10.1007\/978-3-030-58598-3_33"},{"key":"6393_CR46","unstructured":"Snell, J., Swersky, K., Zemel, R. (2017). Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems 30. Curran Associates Inc., pp. 
4077\u20134087."},{"key":"6393_CR47","doi-asserted-by":"crossref","unstructured":"Sun, Q., Liu, Y., Chua, T.S., et.al.(2019). Meta-transfer learning for few-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 403\u2013412.","DOI":"10.1109\/CVPR.2019.00049"},{"key":"6393_CR48","doi-asserted-by":"crossref","unstructured":"Taylor, M.E., Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research 10(7).","DOI":"10.1007\/978-3-642-01882-4_2"},{"key":"6393_CR49","doi-asserted-by":"crossref","unstructured":"Thrun, S. (1998). Lifelong learning algorithms. In Learning to learn. Springer, pp. 181\u2013209.","DOI":"10.1007\/978-1-4615-5529-2_8"},{"key":"6393_CR50","doi-asserted-by":"crossref","unstructured":"Tian, Y., Wang, Y., Krishnan, D., et.al. (2020). Rethinking few-shot image classification: A good embedding is all you need? arXiv preprint arXiv:2003.11539","DOI":"10.1007\/978-3-030-58568-6_16"},{"key":"6393_CR51","unstructured":"Triantafillou, E., Larochelle, H., Zemel, R., et.al. (2021). Learning a universal template for few-shot dataset generalization. In Proceedings of the 38th International Conference on Machine Learning (ICML\u201921), PMLR, pp. 10,424\u201310,433."},{"key":"6393_CR52","unstructured":"Vinyals, O. (2017). Talk: Model vs optimization meta learning. http:\/\/metalearning-symposium.ml\/files\/vinyals.pdf, presented at a \u201cNeural Information Processing Systems\u201d workshop; Accessed 06-06-2020."},{"key":"6393_CR53","first-page":"3637","volume":"29","author":"O Vinyals","year":"2016","unstructured":"Vinyals, O., Blundell, C., Lillicrap, T., et al. (2016). Matching networks for one shot learning. Advances in Neural Information Processing Systems, 29, 3637\u20133645.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"6393_CR54","unstructured":"Wah, C., Branson, S., Welinder, P., et.al. (2011). 
The Caltech-UCSD Birds-200-2011 dataset. Tech. Rep. CNS-TR-2011-001, California Institute of Technology."},{"issue":"3","key":"6393_CR55","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3386252","volume":"53","author":"Y Wang","year":"2020","unstructured":"Wang, Y., Yao, Q., Kwok, J. T., et al. (2020). Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys, 53(3), 1\u201334.","journal-title":"ACM Computing Surveys"},{"issue":"7896","key":"6393_CR56","doi-asserted-by":"publisher","first-page":"223","DOI":"10.1038\/s41586-021-04357-7","volume":"602","author":"PR Wurman","year":"2022","unstructured":"Wurman, P. R., Barrett, S., Kawamoto, K., et al. (2022). Outracing champion Gran Turismo drivers with deep reinforcement learning. Nature, 602(7896), 223\u2013228.","journal-title":"Nature"},{"key":"6393_CR57","unstructured":"Yoon, J., Kim, T., & Dia, O., et al. (2018). Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems 31. Curran Associates Inc., pp. 7332\u20137342."},{"key":"6393_CR58","unstructured":"Zintgraf, L., Shiarli, K., & Kurin, V., et al. (2019). Fast context adaptation via meta-learning. In Proceedings of the 36th International Conference on Machine Learning (ICML\u201919), PMLR, pp. 
7693\u20137702."}],"container-title":["Machine Learning"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-023-06393-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10994-023-06393-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10994-023-06393-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,18]],"date-time":"2024-01-18T19:08:26Z","timestamp":1705604906000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10994-023-06393-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,19]]},"references-count":58,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["6393"],"URL":"https:\/\/doi.org\/10.1007\/s10994-023-06393-y","relation":{},"ISSN":["0885-6125","1573-0565"],"issn-type":[{"value":"0885-6125","type":"print"},{"value":"1573-0565","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,19]]},"assertion":[{"value":"9 February 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 July 2023","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 August 2023","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 September 2023","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors certify that they have no affiliations with 
or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"Not applicable: this research does not involve personal data, and publishing of this manuscript will not result in the disruption of any individual\u2019s privacy.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}]}}