{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T00:18:22Z","timestamp":1758845902396,"version":"3.44.0"},"reference-count":65,"publisher":"Springer Science and Business Media LLC","issue":"10","license":[{"start":{"date-parts":[[2025,8,28]],"date-time":"2025-08-28T00:00:00Z","timestamp":1756339200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,8,28]],"date-time":"2025-08-28T00:00:00Z","timestamp":1756339200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100014219","name":"National Science Fund for Distinguished Young Scholars","doi-asserted-by":"publisher","award":["62006046"],"award-info":[{"award-number":["62006046"]}],"id":[{"id":"10.13039\/501100014219","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2025,10]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Few-shot object detection aims to accurately detect novel classes using a limited number of sample instances. Currently, two-stage detection methods in the fine-tuning paradigm have been widely recognized as the main strategy for FSOD. However, during the fine-tuning process, the addition of novel class representations often causes shifts and distortion in the feature distribution of base classes due to knowledge transfer. As a result, it is challenging to mitigate catastrophic forgetting for base classes while ensuring improvements in the performance of novel classes. To solve this problem, we focus on fine-tuning stage of two-stage paradigm and propose multi-dimensional feature adaptive calibration (MDFAC) method. 
Specifically, we propose an Equiangular Tight Frame Guidance Module (ETFGM) that constructs a high-dimensional hypersphere memory bank to store the pre-trained base class distributions. This module guides the classifier toward a uniform distribution of class centers, combating catastrophic forgetting of base class knowledge while ensuring independent learning of novel class knowledge. Meanwhile, an adaptive calibration classification (ACC) loss dynamically adjusts the model\u2019s attention, prioritizing categories that the classifier currently detects less reliably, as measured by its real-time detection frequency. Through the synergistic integration of ETFGM and the ACC loss, the classifier is autonomously trained to improve its discriminative ability. Extensive benchmark results on the PASCAL VOC and MS COCO datasets demonstrate that our method improves the average detection performance on novel classes by 1.8% (nAP50) and 2.1% (nAP), respectively, while maintaining competitive performance on base classes compared to existing methods. 
Overall, our approach outperforms the state-of-the-art.<\/jats:p>","DOI":"10.1007\/s40747-025-02053-x","type":"journal-article","created":{"date-parts":[[2025,8,28]],"date-time":"2025-08-28T07:51:20Z","timestamp":1756367480000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["MDFAC: multi-dimensional feature adaptive calibration for generalized few-shot object detection"],"prefix":"10.1007","volume":"11","author":[{"given":"Kailin","family":"Xie","sequence":"first","affiliation":[]},{"given":"Jinxiang","family":"Lai","sequence":"additional","affiliation":[]},{"given":"Zijian","family":"She","sequence":"additional","affiliation":[]},{"given":"Liang","family":"Lei","sequence":"additional","affiliation":[]},{"given":"Han","family":"Chen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,8,28]]},"reference":[{"issue":"1","key":"2053_CR1","doi-asserted-by":"publisher","first-page":"e0317619","DOI":"10.1371\/journal.pone.0317619","volume":"20","author":"M Abdelsattar","year":"2025","unstructured":"Abdelsattar M, Ismeil MA, Menoufi K, AbdelMoety A, Emad-Eldeen A (2025) Evaluating machine learning and deep learning models for predicting wind turbine power output from environmental factors. PLoS ONE 20(1):e0317619","journal-title":"PLoS ONE"},{"key":"2053_CR2","doi-asserted-by":"crossref","unstructured":"Abdelsattar M, AbdelMoety A, Emad-Eldeen A (2025) Advanced machine learning techniques for predicting power generation and fault detection in solar photovoltaic systems. Neural Comput Appl 37:8825\u20138844","DOI":"10.1007\/s00521-025-11035-6"},{"key":"2053_CR3","doi-asserted-by":"crossref","unstructured":"Abdelsattar M, AbdelMoety A, Ismeil MA, Emad-Eldeen A (2025) Automated defect detection in solar cell images using deep learning algorithms. 
IEEE Access 99:1\u20131","DOI":"10.1109\/ACCESS.2024.3525183"},{"issue":"1","key":"2053_CR4","doi-asserted-by":"publisher","first-page":"2175116","DOI":"10.1080\/08839514.2023.2175116","volume":"37","author":"G Cao","year":"2023","unstructured":"Cao G, Zhou W, Yang X, Zhu F, Chai L (2023) DR-CIML: few-shot object detection via base data resampling and cross-iteration metric learning. Appl Artif Intell 37(1):2175116","journal-title":"Appl Artif Intell"},{"issue":"7","key":"2053_CR5","doi-asserted-by":"publisher","first-page":"5963","DOI":"10.1109\/TCSVT.2023.3343397","volume":"34","author":"H Chen","year":"2023","unstructured":"Chen H, Wang Q, Xie K, Lei L, Lin MG, Lv T, Liu Y, Luo J (2023) SD-FSOD: self-distillation paradigm via distribution calibration for few-shot object detection. IEEE Trans Circuits Syst Video Technol 34(7):5963\u20135976","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"2053_CR6","doi-asserted-by":"crossref","unstructured":"Chen H, Wang Q, Xie K, Lei L, Wu X (2024) MPF-Net: multi-projection filtering network for few-shot object detection. Appl Intell 54(17):7777\u20137792","DOI":"10.1007\/s10489-024-05556-1"},{"key":"2053_CR7","doi-asserted-by":"crossref","unstructured":"Chen H, Wang Y, Wang G, Qiao Y (2018) LSTD: a low-shot transfer detector for object detection. In: AAAI, vol\u00a032, pp 2836\u20132843","DOI":"10.1609\/aaai.v32i1.11716"},{"key":"2053_CR8","unstructured":"Chen T, Kornblith S, Norouzi M, Hinton G (2020) A simple framework for contrastive learning of visual representations. In: International conference on machine learning. PmLR, pp 1597\u20131607"},{"key":"2053_CR9","unstructured":"Chen WY, Liu YC, Kira Z, Wang YCF, Huang JB (2019) A closer look at few-shot classification. arXiv preprint arXiv:1904.04232"},{"key":"2053_CR10","unstructured":"Chen X, Fan H, Girshick R, He K (2020) Improved baselines with momentum contrastive learning. 
arXiv preprint arXiv:2003.04297"},{"key":"2053_CR11","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1007\/s11263-014-0733-5","volume":"111","author":"M Everingham","year":"2015","unstructured":"Everingham M, Eslami SA, Van Gool L, Williams CK, Winn J, Zisserman A (2015) The pascal visual object classes challenge: a retrospective. IJCV 111:98\u2013136","journal-title":"IJCV"},{"key":"2053_CR12","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","volume":"88","author":"M Everingham","year":"2010","unstructured":"Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A (2010) The pascal visual object classes (VOC) challenge. IJCV 88:303\u2013338","journal-title":"IJCV"},{"issue":"3","key":"2053_CR13","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1167\/jov.21.3.16","volume":"21","author":"CM Funke","year":"2021","unstructured":"Funke CM, Borowski J, Stosio K, Brendel W, Wallis TS, Bethge M (2021) Five points to check when comparing visual perception in humans and machines. J Vis 21(3):16","journal-title":"J Vis"},{"key":"2053_CR14","doi-asserted-by":"crossref","unstructured":"Gidaris S, Komodakis N (2018) Dynamic few-shot visual learning without forgetting. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4367\u20134375","DOI":"10.1109\/CVPR.2018.00459"},{"key":"2053_CR15","doi-asserted-by":"crossref","unstructured":"Girshick R (2015) Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision, pp 1440\u20131448","DOI":"10.1109\/ICCV.2015.169"},{"issue":"5","key":"2053_CR16","doi-asserted-by":"publisher","first-page":"7063","DOI":"10.1007\/s40747-024-01527-8","volume":"10","author":"T Guo","year":"2024","unstructured":"Guo T, Yang Q, Wang C, Liu Y, Li P, Tang J, Li D, Wen Y (2024) KnowledgeNavigator: leveraging large language models for enhanced reasoning over knowledge graph. 
Complex Intell Syst 10(5):7063\u20137076","journal-title":"Complex Intell Syst"},{"key":"2053_CR17","unstructured":"Han G, Chen L, Ma J, Huang S, Chellappa R, Chang SF (2022) Multi-modal few-shot object detection with meta-learning-based cross-modal prompting. arXiv preprint arXiv:2204.07841"},{"key":"2053_CR18","doi-asserted-by":"crossref","unstructured":"Han G, Huang S, Ma J, He Y, Chang SF (2022) Meta Faster R-CNN: towards accurate few-shot object detection with attentive feature alignment. In: Proceedings of the AAAI conference on artificial intelligence, vol\u00a036, pp 780\u2013789","DOI":"10.1609\/aaai.v36i1.19959"},{"key":"2053_CR19","doi-asserted-by":"crossref","unstructured":"Han G, Ma J, Huang S, Chen L, Chang SF (2022) Few-shot object detection with fully cross-transformer. In: CVPR, pp 5321\u20135330","DOI":"10.1109\/CVPR52688.2022.00525"},{"key":"2053_CR20","doi-asserted-by":"crossref","unstructured":"Hariharan B, Girshick R (2017) Low-shot visual recognition by shrinking and hallucinating features. In: Proceedings of the IEEE international conference on computer vision, pp 3018\u20133027","DOI":"10.1109\/ICCV.2017.328"},{"key":"2053_CR21","doi-asserted-by":"crossref","unstructured":"He K, Fan H, Wu Y, Xie S, Girshick R (2020) Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 9729\u20139738","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"2053_CR22","doi-asserted-by":"crossref","unstructured":"He K, Gkioxari G, Doll\u00e1r P, Girshick R (2017) Mask R-CNN. In: Proceedings of the IEEE international conference on computer vision, pp 2961\u20132969","DOI":"10.1109\/ICCV.2017.322"},{"key":"2053_CR23","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"2053_CR24","doi-asserted-by":"crossref","unstructured":"Hu H, Bai S, Li A, Cui J, Wang L (2021) Dense relation distillation with context-aware aggregation for few-shot object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 10185\u201310194","DOI":"10.1109\/CVPR46437.2021.01005"},{"key":"2053_CR25","doi-asserted-by":"crossref","unstructured":"Jiang X, Li Z, Tian M, Liu J, Yi S, Miao D (2023) Few-shot object detection via improved classification features. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 5386\u20135395","DOI":"10.1109\/WACV56688.2023.00535"},{"key":"2053_CR26","doi-asserted-by":"crossref","unstructured":"Kang B, Liu Z, Wang X, Yu F, Feng J, Darrell T (2019) Few-shot object detection via feature reweighting. In: ICCV, pp 8420\u20138429","DOI":"10.1109\/ICCV.2019.00851"},{"issue":"9","key":"2053_CR27","doi-asserted-by":"publisher","first-page":"1066","DOI":"10.3390\/sym11091066","volume":"11","author":"M Kaya","year":"2019","unstructured":"Kaya M, Bilge H\u015e (2019) Deep metric learning: a survey. Symmetry 11(9):1066","journal-title":"Symmetry"},{"key":"2053_CR28","unstructured":"Khodadadeh S, Boloni L, Shah M (2019) Unsupervised meta-learning for few-shot image classification. In: Advances in neural information processing systems, vol 32. Vancouver, Canada, pp 10132\u201310142"},{"key":"2053_CR29","doi-asserted-by":"crossref","unstructured":"Li A, Li Z (2021) Transformation invariant few-shot object detection. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 3094\u20133102","DOI":"10.1109\/CVPR46437.2021.00311"},{"key":"2053_CR30","doi-asserted-by":"crossref","unstructured":"Li Y, Wang T, Kang B, Tang S, Wang C, Li J, Feng J (2020) Overcoming classifier imbalance for long-tail object detection with balanced group softmax. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 10991\u201311000","DOI":"10.1109\/CVPR42600.2020.01100"},{"key":"2053_CR31","doi-asserted-by":"crossref","unstructured":"Li Y, Zhu H, Cheng Y, Wang W, Teo CS, Xiang C, Vadakkepat P, Lee TH (2021) Few-shot object detection via classification refinement and distractor retreatment. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 15395\u201315403","DOI":"10.1109\/CVPR46437.2021.01514"},{"key":"2053_CR32","doi-asserted-by":"crossref","unstructured":"Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL (2014) Microsoft COCO: common objects in context. In: ECCV. Springer, pp 740\u2013755","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"2053_CR33","doi-asserted-by":"crossref","unstructured":"Ma J, Niu Y, Xu J, Huang S, Han G, Chang SF (2023) DIGEO: discriminative geometry-aware learning for generalized few-shot object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 3208\u20133218","DOI":"10.1109\/CVPR52729.2023.00313"},{"key":"2053_CR34","unstructured":"Menon AK, Jayasumana S, Rawat AS, Jain H, Veit A, Kumar S (2020) Long-tail learning via logit adjustment. arXiv preprint arXiv:2007.07314"},{"key":"2053_CR35","unstructured":"Nichol A, Schulman J (2018) Reptile: a scalable metalearning algorithm. 
arXiv preprint arXiv:1803.02999, 2(3):4"},{"issue":"40","key":"2053_CR36","doi-asserted-by":"publisher","first-page":"24652","DOI":"10.1073\/pnas.2015509117","volume":"117","author":"V Papyan","year":"2020","unstructured":"Papyan V, Han X, Donoho DL (2020) Prevalence of neural collapse during the terminal phase of deep learning training. Proc Natl Acad Sci 117(40):24652\u201324663","journal-title":"Proc Natl Acad Sci"},{"key":"2053_CR37","doi-asserted-by":"crossref","unstructured":"Qiao L, Zhao Y, Li Z, Qiu X, Wu J, Zhang C (2021) DEFRCN: decoupled faster R-CNN for few-shot object detection. In: ICCV, pp 8681\u20138690","DOI":"10.1109\/ICCV48922.2021.00856"},{"issue":"1","key":"2053_CR38","doi-asserted-by":"publisher","first-page":"14236","DOI":"10.1038\/s41598-025-96945-0","volume":"15","author":"A Rabee","year":"2025","unstructured":"Rabee A, Anwar Z, AbdelMoety A, Abdelsallam A, Ali M (2025) Comparative analysis of automated foul detection in football using deep learning architectures. Sci Rep 15(1):14236","journal-title":"Sci Rep"},{"key":"2053_CR39","doi-asserted-by":"crossref","unstructured":"Redmon J (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2016.91"},{"issue":"6","key":"2053_CR40","doi-asserted-by":"publisher","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","volume":"39","author":"S Ren","year":"2016","unstructured":"Ren S, He K, Girshick R, Sun J (2016) Faster R-CNN: towards real-time object detection with region proposal networks. 
IEEE Trans Pattern Anal Mach Intell 39(6):1137\u20131149","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"11","key":"2053_CR41","doi-asserted-by":"publisher","first-page":"5543","DOI":"10.3390\/app12115543","volume":"12","author":"X Ren","year":"2022","unstructured":"Ren X, Zhang W, Wu M, Li C, Wang X (2022) Meta-YOLO: meta-learning for few-shot traffic sign detection via decoupling dependencies. Appl Sci 12(11):5543","journal-title":"Appl Sci"},{"key":"2053_CR42","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M et al (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis 115:211\u2013252","journal-title":"Int J Comput Vis"},{"issue":"2","key":"2053_CR43","doi-asserted-by":"publisher","first-page":"182","DOI":"10.1111\/j.1467-7687.2005.00405.x","volume":"8","author":"LK Samuelson","year":"2005","unstructured":"Samuelson LK, Smith LB (2005) They call it like they see it: spontaneous naming and attention to shape. Dev Sci 8(2):182\u2013198","journal-title":"Dev Sci"},{"key":"2053_CR44","doi-asserted-by":"crossref","unstructured":"Shangguan Z, Huai L, Liu T, Jiang X (2023) Few-shot object detection with refined contrastive learning. In: 2023 IEEE 35th international conference on tools with artificial intelligence (ICTAI). IEEE, pp 991\u2013996","DOI":"10.1109\/ICTAI59109.2023.00148"},{"key":"2053_CR45","doi-asserted-by":"crossref","unstructured":"Shangguan Z, Rostami M (2023) Identification of novel classes for improving few-shot object detection. 
In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 3356\u20133366","DOI":"10.1109\/ICCVW60793.2023.00360"},{"key":"2053_CR46","doi-asserted-by":"crossref","unstructured":"Sun B, Li B, Cai S, Yuan Y, Zhang C (2021) FSCE: few-shot object detection via contrastive proposal encoding. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 7352\u20137362","DOI":"10.1109\/CVPR46437.2021.00727"},{"key":"2053_CR47","doi-asserted-by":"crossref","unstructured":"Sun P, Zhang R, Jiang Y, Kong T, Xu C, Zhan W, Tomizuka M, Li L, Yuan Z, Wang C et\u00a0al (2021) Sparse R-CNN: end-to-end object detection with learnable proposals. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 14454\u201314463","DOI":"10.1109\/CVPR46437.2021.01422"},{"key":"2053_CR48","doi-asserted-by":"crossref","unstructured":"Sun Q, Liu Y, Chua TS, Schiele B (2019) Meta-transfer learning for few-shot learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 403\u2013412","DOI":"10.1109\/CVPR.2019.00049"},{"key":"2053_CR49","doi-asserted-by":"crossref","unstructured":"Tao X, Hong X, Chang X, Dong S, Wei X, Gong Y (2020) Few-shot class-incremental learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 12183\u201312192","DOI":"10.1109\/CVPR42600.2020.01220"},{"issue":"1","key":"2053_CR50","doi-asserted-by":"publisher","first-page":"351","DOI":"10.1007\/s10489-022-03399-2","volume":"53","author":"M Wang","year":"2023","unstructured":"Wang M, Ning H, Liu H (2023) Object detection based on few-shot learning via instance-level feature correlation and aggregation. Appl Intell 53(1):351\u2013368","journal-title":"Appl Intell"},{"key":"2053_CR51","unstructured":"Wang X, Huang TE, Darrell T, Gonzalez JE, Yu F (2020) Frustratingly simple few-shot object detection. 
arXiv preprint arXiv:2003.06957"},{"key":"2053_CR52","doi-asserted-by":"crossref","unstructured":"Wang YX, Girshick R, Hebert M, Hariharan B (2018) Low-shot learning from imaginary data. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7278\u20137286","DOI":"10.1109\/CVPR.2018.00760"},{"key":"2053_CR53","doi-asserted-by":"crossref","unstructured":"Wang Z, Yang B, Yue H, Ma Z (2024) Fine-grained prototypes distillation for few-shot object detection. In: Proceedings of the AAAI conference on artificial intelligence, vol\u00a038, pp 5859\u20135866","DOI":"10.1609\/aaai.v38i6.28399"},{"key":"2053_CR54","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-016-0043-6","volume":"3","author":"K Weiss","year":"2016","unstructured":"Weiss K, Khoshgoftaar TM, Wang D (2016) A survey of transfer learning. J Big data 3:1\u201340","journal-title":"J Big data"},{"issue":"2","key":"2053_CR55","doi-asserted-by":"publisher","first-page":"2639","DOI":"10.1007\/s40747-023-01281-3","volume":"10","author":"L Wu","year":"2024","unstructured":"Wu L (2024) A meta-learning network method for few-shot multi-class classification problems with numerical data. Complex Intell Syst 10(2):2639\u20132652","journal-title":"Complex Intell Syst"},{"key":"2053_CR56","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1016\/j.neucom.2020.01.085","volume":"396","author":"X Wu","year":"2020","unstructured":"Wu X, Sahoo D, Hoi SC (2020) Recent advances in deep learning for object detection. Neurocomputing 396:39\u201364","journal-title":"Neurocomputing"},{"key":"2053_CR57","doi-asserted-by":"crossref","unstructured":"Wu Y, Chen Y, Wang L, Ye Y, Liu Z, Guo Y, Fu Y (2019) Large scale incremental learning. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 374\u2013382","DOI":"10.1109\/CVPR.2019.00046"},{"key":"2053_CR58","doi-asserted-by":"crossref","unstructured":"Wu Z, Xiong Y, Yu SX, Lin D (2018) Unsupervised feature learning via non-parametric instance discrimination. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3733\u20133742","DOI":"10.1109\/CVPR.2018.00393"},{"issue":"3","key":"2053_CR59","first-page":"3090","volume":"45","author":"Y Xiao","year":"2022","unstructured":"Xiao Y, Lepetit V, Marlet R (2022) Few-shot object detection and viewpoint estimation for objects in the wild. IEEE Trans Pattern Anal Mach Intell 45(3):3090\u20133106","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"7","key":"2053_CR60","doi-asserted-by":"publisher","first-page":"5818","DOI":"10.1109\/TCSVT.2024.3367666","volume":"34","author":"B Yan","year":"2024","unstructured":"Yan B, Lang C, Cheng G, Han J (2024) Understanding negative proposals in generic few-shot object detection. IEEE Trans Circuits Syst Video Technol 34(7):5818\u20135829","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"2053_CR61","doi-asserted-by":"crossref","unstructured":"Yan X, Chen Z, Xu A, Wang X, Liang X, Li L (2019) Meta R-CNN: towards general solver for instance-level few-shot learning. Cornell University - arXiv","DOI":"10.1109\/ICCV.2019.00967"},{"key":"2053_CR62","doi-asserted-by":"crossref","unstructured":"Yang Z, Wang Y, Chen X, Liu J, Qiao Y (2020) Context-transformer: tackling object confusion for few-shot detection. In: Proceedings of the AAAI conference on artificial intelligence, vol\u00a034, pp 12653\u201312660","DOI":"10.1609\/aaai.v34i07.6957"},{"issue":"11","key":"2053_CR63","first-page":"12832","volume":"45","author":"G Zhang","year":"2022","unstructured":"Zhang G, Luo Z, Cui K, Lu S, Xing EP (2022) Meta-DETR: image-level few-shot detection with inter-class correlation exploitation. 
IEEE Trans Pattern Anal Mach Intell 45(11):12832\u201312843","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"11","key":"2053_CR64","doi-asserted-by":"publisher","first-page":"3212","DOI":"10.1109\/TNNLS.2018.2876865","volume":"30","author":"ZQ Zhao","year":"2019","unstructured":"Zhao ZQ, Zheng P, Xu ST, Wu X (2019) Object detection with deep learning: a review. IEEE Trans Neural Netw Learn Syst 30(11):3212\u20133232","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"issue":"8","key":"2053_CR65","doi-asserted-by":"publisher","first-page":"7121","DOI":"10.1109\/TCSVT.2024.3370600","volume":"34","author":"J Zhu","year":"2024","unstructured":"Zhu J, Wang Q, Dong X, Ruan W, Chen H, Lei L, Hao G (2024) FSNA: few-shot object detection via neighborhood information adaption and all attention. IEEE Trans Circuits Syst Video Technol 34(8):7121\u20137134","journal-title":"IEEE Trans Circuits Syst Video Technol"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-02053-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-02053-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-02053-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T13:31:58Z","timestamp":1758807118000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-02053-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,28]]},"references-count":65,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2025,10]]}},"alternative-id":["2053"],"URL":"https:\/\/do
i.org\/10.1007\/s40747-025-02053-x","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2025,8,28]]},"assertion":[{"value":"16 January 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 August 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 August 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors declare that they have no competing financial interests or entities with any financial or non-financial interests with respect to the contents discussed in this manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"427"}}