{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T07:26:50Z","timestamp":1763018810249,"version":"3.40.4"},"reference-count":31,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,4,25]],"date-time":"2025-04-25T00:00:00Z","timestamp":1745539200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,4,25]],"date-time":"2025-04-25T00:00:00Z","timestamp":1745539200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>This paper focuses on the problem that the feature reconstruction network (FRN) has difficulty reconstructing the query set in fine-grained classification tasks when the objects in the support set have multiple attributes. To address this problem, we propose a model called FEL-FRN (fusion ECA Long-CLIP feature reconstruction network). First, we use FRN to deeply reconstruct feature maps, replacing the traditional method of using cosine similarity for category average aggregation. Moreover, we introduce the efficient channel attention (ECA) mechanism into the FRN to improve the model\u2019s ability to extract key features. Second, we introduce Long-CLIP to assist FRN recognition, leveraging its wide range of image recognition and understanding capabilities. The model does not require any task-specific fine-tuning data and can be combined with category text prediction. Finally, in each training task, the prediction results of different branches are fused. 
The Long-CLIP model effectively compensates for poor predictions caused by large differences between the reconstructed images and the support images, as well as the poor quality of the reconstructed images, whereas the FRN reconstruction network compensates for the lack of precision in Long-CLIP direct prediction through reconstructed predictions, achieving complementary advantages. The experimental results show that the FEL-FRN method not only achieves good results on CUB-200-2011 and Oxford 102 Flowers but also, in the 5-way 5-shot setting, achieves accuracies of 96.025% and 81.479% on the car dataset Stanford_Cars and the aircraft dataset FGVC_Aircraft, respectively, which have large attribute differences. These results show that the performance is improved compared with that of the FRN model used alone. <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/feiyeha\/FEL-FRN\" ext-link-type=\"uri\">https:\/\/github.com\/feiyeha\/FEL-FRN<\/jats:ext-link>\n          <\/jats:p>","DOI":"10.1186\/s40537-025-01139-0","type":"journal-article","created":{"date-parts":[[2025,4,25]],"date-time":"2025-04-25T13:50:52Z","timestamp":1745589052000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["FEL-FRN: fusion ECA long-CLIP feature reconstruction network for few-shot classification"],"prefix":"10.1186","volume":"12","author":[{"given":"Yuanyuan","family":"Wang","sequence":"first","affiliation":[]},{"given":"Ao","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Jiange","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Kexiao","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Hauwa 
Suleiman","family":"Abdullahi","sequence":"additional","affiliation":[]},{"given":"Pinrong","family":"Lv","sequence":"additional","affiliation":[]},{"given":"Yu","family":"Gao","sequence":"additional","affiliation":[]},{"given":"Haiyan","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,25]]},"reference":[{"issue":"2","key":"1139_CR1","first-page":"123","volume":"44","author":"M Liu","year":"2022","unstructured":"Liu M, Han Z, Chen Y, et al. Classification of tree species from airborne LiDAR data using 3D deep learning. J Natl Univ Defense Technol. 2022;44(2):123\u201330.","journal-title":"J Natl Univ Defense Technol"},{"issue":"5","key":"1139_CR2","first-page":"1","volume":"49","author":"Y Peng","year":"2022","unstructured":"Peng Y, Qin X, Zhang L, et al. A review of small sample learning algorithms for image classification. Comput Sci. 2022;49(5):1\u20139.","journal-title":"Comput Sci"},{"key":"1139_CR3","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-024-09477-5","author":"K Hu","year":"2024","unstructured":"Hu K, Zhang E, Xia M, et al. Cross-dimension feature attention aggregation network for cloud and snow. Neural Comput App. 2024. https:\/\/doi.org\/10.1007\/s00521-024-09477-5.","journal-title":"Neural Comput App"},{"issue":"1","key":"1139_CR4","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2023.121463","volume":"237","author":"K Hu","year":"2024","unstructured":"Hu K, Li Y, Zhang S, et al. FedMMD: a federated weighting algorithm considering non-IID and local model deviation. Expert Syst App. 2024;237(1): 121463.","journal-title":"Expert Syst App"},{"key":"1139_CR5","doi-asserted-by":"publisher","first-page":"463","DOI":"10.1016\/j.neucom.2020.05.114","volume":"456","author":"X Li","year":"2021","unstructured":"Li X, Sun Z, Xue J, et al. A concise review of recent few-shot meta-learning methods. Neurocomputing. 
2021;456:463\u20138.","journal-title":"Neurocomputing"},{"key":"1139_CR6","doi-asserted-by":"publisher","first-page":"203","DOI":"10.1016\/j.neucom.2022.04.078","volume":"494","author":"T Yingjie","year":"2022","unstructured":"Yingjie T, Xiaoxi Z, Wei H, et al. Meta-learning approaches for learning-to-learn in deep learning: a survey. Neurocomputing. 2022;494:203\u201323.","journal-title":"Neurocomputing"},{"key":"1139_CR7","doi-asserted-by":"crossref","unstructured":"Miller E G, Matsakis N E, Viola P A, et al. Learning from one example through shared densities on transforms. In: Proceedings IEEE conference on computer vision and pattern recognition; 2000. p. 464\u201371.","DOI":"10.1109\/CVPR.2000.855856"},{"key":"1139_CR8","doi-asserted-by":"crossref","unstructured":"Feifei Li, Fergus R, Perona P, et al. A Bayesian approach to unsupervised one-shot learning of object categories. In: Proceedings 9th IEEE international conference on computer vision; 2003. p. 1134\u201341.","DOI":"10.1109\/ICCV.2003.1238476"},{"issue":"4","key":"1139_CR9","doi-asserted-by":"publisher","first-page":"594","DOI":"10.1109\/TPAMI.2006.79","volume":"28","author":"F Li","year":"2006","unstructured":"Li F, Fergus R, Perona P, et al. One-shot learning of object categories. IEEE Trans Pattern Anal Mach Intell. 2006;28(4):594\u2013611.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1139_CR10","unstructured":"Lake B, Salakhutdinov R, Gross J, et al. One-shot learning of simple visual concepts. In: Proceedings of the annual meeting of the Cognitive Science Society. 2011, 33(33)."},{"key":"1139_CR11","unstructured":"Lake B, Salakhutdinov R, Tenenbaum J, et al. One-shot learning by inverting a compositional causal process. Advances in neural information processing systems; 2013. p. 
26."},{"issue":"6266","key":"1139_CR12","doi-asserted-by":"publisher","first-page":"1332","DOI":"10.1126\/science.aab3050","volume":"350","author":"B Lake","year":"2015","unstructured":"Lake B, Salakhutdinov R, Tenenbaum J, et al. Human-level concept learning through probabilistic program induction. Science. 2015;350(6266):1332\u20138.","journal-title":"Science"},{"issue":"9","key":"1139_CR13","first-page":"5149","volume":"44","author":"T Hospedales","year":"2022","unstructured":"Hospedales T, Antoniou A, Micaelli P, et al. Meta-learning in neural networks: a survey. IEEE Trans Pattern Anal Mach Intell. 2022;44(9):5149\u201369.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"6","key":"1139_CR14","doi-asserted-by":"publisher","first-page":"4483","DOI":"10.1007\/s10462-021-10004-4","volume":"54","author":"M Huisman","year":"2021","unstructured":"Huisman M, van Rijn JN, Plaat A, et al. A survey of deep meta-learning. Artif Intell Rev. 2021;54(6):4483\u2013541.","journal-title":"Artif Intell Rev"},{"key":"1139_CR15","volume-title":"Advances in neural information processing systems","author":"M Rohrbach","year":"2013","unstructured":"Rohrbach M, Ebert S, et al. Transfer learning in a transductive setting. In: Advances in neural information processing systems. NIPS; 2013."},{"key":"1139_CR16","unstructured":"Krizhevsky A, Sutskever I, Hinton G E, et al. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems; 2012. p. 1097\u2013105."},{"key":"1139_CR17","doi-asserted-by":"crossref","unstructured":"Andriluka M, Pishchulin L, Gehler P, et al. 2D human pose estimation: new benchmark and state of the art analysis. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2014. p. 
3686\u201393.","DOI":"10.1109\/CVPR.2014.471"},{"issue":"1","key":"1139_CR18","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1109\/TNNLS.2019.2904991","volume":"31","author":"Ji Zhong","year":"2020","unstructured":"Zhong Ji, Sun Y, Yunlong Yu, et al. Attribute-guided network for cross-modal zero-shot hashing. IEEE Trans Neural Netw Learn Syst. 2020;31(1):321\u201330.","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1139_CR19","doi-asserted-by":"crossref","unstructured":"Mancini M, Naeem MF, Xian Y, et al. Open-world compositional zero-shot learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR); 2021. p. 5218\u201326.","DOI":"10.1109\/CVPR46437.2021.00518"},{"key":"1139_CR20","doi-asserted-by":"publisher","first-page":"1520","DOI":"10.1109\/TIP.2022.3143005","volume":"31","author":"J Zhong","year":"2022","unstructured":"Zhong J, Hou Z, et al. Information symmetry matters: a modal-alternating propagation network for few-shot learning. IEEE Trans Image Proces. 2022;31:1520\u201331.","journal-title":"IEEE Trans Image Proces"},{"key":"1139_CR21","unstructured":"Chen X, Rostamzadeh N, Oreshkin BN, et al. Adaptive cross-modal few-shot learning. In: Proceedings of the 33rd international conference on neural information processing systems (NeurIPS); 2019. p. 4847\u201357."},{"key":"1139_CR22","unstructured":"Koch G, Zemel R, Salakhutdinov R, et al. Siamese neural networks for one-shot image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2015. p. 1701\u20138."},{"key":"1139_CR23","doi-asserted-by":"publisher","first-page":"1318","DOI":"10.1109\/TIP.2020.3043128","volume":"30","author":"X Li","year":"2020","unstructured":"Li X, Jijie Wu, Sun Z, et al. BSNet: bi-similarity network for few-shot fine-grained image classification. IEEE Trans Image Process. 
2020;30:1318\u201331.","journal-title":"IEEE Trans Image Process"},{"key":"1139_CR24","doi-asserted-by":"crossref","unstructured":"Sun Q, Liu Y, Chua TS, et al. Meta-transfer learning for few-shot learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR); 2019. p. 403\u201312.","DOI":"10.1109\/CVPR.2019.00049"},{"key":"1139_CR25","doi-asserted-by":"crossref","unstructured":"Wertheimer D, Tang L, Hariharan B, et al. Few-shot classification with feature map reconstruction networks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition; 2021. p. 8012\u201321","DOI":"10.1109\/CVPR46437.2021.00792"},{"key":"1139_CR26","doi-asserted-by":"crossref","unstructured":"Sung F, Yang Y, Li Z, et al. Learning to compare: relation network for few-shot learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition. Salt Lake City, USA; 2018. p. 1199\u2013208.","DOI":"10.1109\/CVPR.2018.00131"},{"key":"1139_CR27","unstructured":"Snell J, Swersky K, Zemel R,\u00a0et al. Prototypical networks for few-shot learning. In: Proceedings of the 31st international conference on neural information processing systems (NeurIPS); 2017. p. 4080\u201390."},{"key":"1139_CR28","doi-asserted-by":"crossref","unstructured":"Zhang B, Zhang P, Dong X, et al. Long-CLIP: unlocking the long-text capability of CLIP. In: Proceedings of the European conference on computer vision (ECCV); 2024.","DOI":"10.1007\/978-3-031-72983-6_18"},{"key":"1139_CR29","doi-asserted-by":"crossref","unstructured":"Chen G, Zhang T, Lu J, et al. Deep meta metric learning. In: Proceedings of the IEEE\/CVF international conference on computer vision; 2019.","DOI":"10.1109\/ICCV.2019.00964"},{"key":"1139_CR30","first-page":"3637","volume":"29","author":"O Vinyals","year":"2016","unstructured":"Vinyals O, Blundell C, Lillicrap T, et al. Matching networks for one-shot learning. Adv Neural Inf Process Syst. 
2016;29:3637\u201345.","journal-title":"Adv Neural Inf Process Syst"},{"key":"1139_CR31","doi-asserted-by":"crossref","unstructured":"Simon C, Koniusz P, Nock R, et al. Adaptive subspaces for few-shot learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR); 2020.","DOI":"10.1109\/CVPR42600.2020.00419"}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-025-01139-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-025-01139-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-025-01139-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,25]],"date-time":"2025-04-25T13:51:06Z","timestamp":1745589066000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-025-01139-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,25]]},"references-count":31,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["1139"],"URL":"https:\/\/doi.org\/10.1186\/s40537-025-01139-0","relation":{},"ISSN":["2196-1115"],"issn-type":[{"value":"2196-1115","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,25]]},"assertion":[{"value":"1 November 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 March 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 April 2025","order":3,"name":"first_online","label":"First 
Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"We explained the purpose, process, risks and benefits of the study by oral or written form and obtained their informed consent. Participants had the right to know that their participation was voluntary and could withdraw from the study at any time.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate and consent to publication"}},{"value":"The authors declare no competing interests.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"104"}}