{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T07:46:38Z","timestamp":1775202398075,"version":"3.50.1"},"reference-count":33,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T00:00:00Z","timestamp":1675296000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No. 62001386"],"award-info":[{"award-number":["No. 62001386"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],
"abstract":"<jats:p>It is difficult to collect training samples for all types of synthetic aperture radar (SAR) targets. A realistic problem comes when unseen categories exist that are not included in training and benchmark data at the time of recognition, which is defined as open set recognition (OSR). Without the aid of side-information, generalized OSR methods used on ordinary optical images are usually not suitable for SAR images. In addition, OSR methods that require a large number of samples to participate in training are also not suitable for SAR images with the realistic situation of collection difficulty. In this regard, a task-oriented OSR method for SAR is proposed by distribution construction and relation measures to recognize targets of seen and unseen categories with limited training samples, and without any other simulation information. The method can judge category similarity to explain the unseen category. Distribution construction is realized by the graph convolutional network. The experimental results on the MSTAR dataset show that this method has a good recognition effect for the targets of both seen and unseen categories and excellent interpretation ability for unseen targets. Specifically, while recognition accuracy for seen targets remains above 95%, the recognition accuracy for unseen targets reaches 67% for the three-type classification problem, and 53% for the five-type classification problem.<\/jats:p>",
"DOI":"10.3390\/s23031668","type":"journal-article","created":{"date-parts":[[2023,2,3]],"date-time":"2023-02-03T01:40:25Z","timestamp":1675388425000},"page":"1668","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["SAR Target Recognition with Limited Training Samples in Open Set Conditions"],"prefix":"10.3390","volume":"23",
"author":[{"given":"Xiangyu","family":"Zhou","sequence":"first","affiliation":[{"name":"School of Software, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"given":"Yifan","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Software, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"given":"Di","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Software, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0236-2382","authenticated-orcid":false,"given":"Qianru","family":"Wei","sequence":"additional","affiliation":[{"name":"School of Software, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]}],
"member":"1968","published-online":{"date-parts":[[2023,2,2]]},
"reference":[
{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1757","DOI":"10.1109\/TPAMI.2012.256","article-title":"Toward Open Set Recognition","volume":"35","author":"Scheirer","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_2","doi-asserted-by":"crossref","first-page":"632","DOI":"10.1109\/TAES.2015.150027","article-title":"Open set recognition for automatic target classification with rejection","volume":"52","author":"Scherreik","year":"2016","journal-title":"IEEE Trans. Aerosp. Electron. Syst."},
{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Giusti, E., Ghio, S., Oveis, A.H., and Martorella, M. (2022). Proportional Similarity-Based Openmax Classifier for Open Set Recognition in SAR Images. Remote Sens., 14.","DOI":"10.3390\/rs14184665"},
{"key":"ref_4","first-page":"4080","article-title":"Prototypical networks for few-shot learning","volume":"30","author":"Snell","year":"2017","journal-title":"Adv. Neural Inf. Process. Syst."},
{"key":"ref_5","first-page":"3630","article-title":"Matching networks for one shot learning","volume":"29","author":"Vinyals","year":"2016","journal-title":"Adv. Neural Inf. Process. Syst."},
{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., and Hospedales, T.M. (2018, January 18\u201322). Learning to compare: Relation network for few-shot learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00131"},
{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Rostami, M., Kolouri, S., Eaton, E., and Kim, K. (2019). Deep Transfer Learning for Few-Shot SAR Image Classification. Remote Sens., 11.","DOI":"10.20944\/preprints201905.0030.v1"},
{"key":"ref_8","doi-asserted-by":"crossref","first-page":"7266","DOI":"10.1109\/TIP.2021.3104179","article-title":"Rotation Awareness Based Self-Supervised Learning for SAR Target Recognition with Limited Training Samples","volume":"30","author":"Wen","year":"2021","journal-title":"IEEE Trans. Image Process."},
{"key":"ref_9","doi-asserted-by":"crossref","first-page":"13387","DOI":"10.1109\/TVT.2022.3196103","article-title":"Spatial-Temporal Hybrid Feature Extraction Network for Few-Shot Automatic Modulation Classification","volume":"71","author":"Che","year":"2022","journal-title":"IEEE Trans. Veh. Technol."},
{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Gao, F., Xu, J., Lang, R., Wang, J., Hussain, A., and Zhou, H. (2022). A Few-Shot Learning Method for SAR Images Based on Weighted Distance and Feature Fusion. Remote Sens., 14.","DOI":"10.3390\/rs14184583"},
{"key":"ref_11","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1109\/TNNLS.2020.2978386","article-title":"A comprehensive survey on graph neural networks","volume":"32","author":"Wu","year":"2020","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},
{"key":"ref_12","unstructured":"Garcia, V., and Bruna, J. (2017). Few-shot learning with graph neural networks. arXiv."},
{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Zhou, X., Zhang, Y., and Wei, Q. (2022). Few-Shot Fine-Grained Image Classification via GNN. Sensors, 22.","DOI":"10.3390\/s22197640"},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Bendale, A., and Boult, T.E. (2016, January 27\u201330). Towards open set deep networks. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.173"},
{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1020206","DOI":"10.1117\/12.2262150","article-title":"Open set recognition of aircraft in aerial imagery using synthetic template models","volume":"10202","author":"Bapst","year":"2017","journal-title":"Proc. SPIE"},
{"key":"ref_16","doi-asserted-by":"crossref","first-page":"762","DOI":"10.1109\/TPAMI.2017.2707495","article-title":"The extreme value machine","volume":"40","author":"Rudd","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4445","DOI":"10.1109\/TGRS.2019.2891266","article-title":"Open Set Incremental Learning for Automatic Target Recognition","volume":"57","author":"Dang","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},
{"key":"ref_18","first-page":"150","article-title":"Multi-class open set recognition for SAR imagery","volume":"9844","author":"Scherreik","year":"2016","journal-title":"Proc. SPIE"},
{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Toizumi, T., Sagi, K., and Senda, Y. (2018, January 22\u201327). Automatic association between SAR and optical images based on zero-shot learning. Proceedings of the IGARSS 2018\u20132018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain.","DOI":"10.1109\/IGARSS.2018.8517299"},
{"key":"ref_20","doi-asserted-by":"crossref","first-page":"2245","DOI":"10.1109\/LGRS.2017.2758900","article-title":"Zero-Shot Learning of SAR Target Feature Space with Deep Generative Neural Networks","volume":"14","author":"Song","year":"2017","journal-title":"IEEE Geosci. Remote Sens. Lett."},
{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1092","DOI":"10.1109\/LGRS.2019.2936897","article-title":"EM Simulation-Aided Zero-Shot Learning for SAR Automatic Target Recognition","volume":"17","author":"Song","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},
{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Dang, S., Cao, Z., Cui, Z., and Pi, Y. (2019, January 26\u201329). Open Set SAR Target Recognition Using Class Boundary Extracting. Proceedings of the 6th Asia\u2013Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China.","DOI":"10.1109\/APSAR46974.2019.9048316"},
{"key":"ref_23","first-page":"4002205","article-title":"Learn to Recognize Unknown SAR Targets from Reflection Similarity","volume":"19","author":"Wei","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},
{"key":"ref_24","first-page":"4014005","article-title":"An Open Set Recognition Method for SAR Targets Based on Multitask Learning","volume":"19","author":"Ma","year":"2021","journal-title":"IEEE Geosci. Remote Sens. Lett."},
{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Zeng, Z., Sun, J., Xu, C., and Wang, H. (2021). Unknown SAR Target Identification Method Based on Feature Extraction Network and KLD\u2013RPA Joint Discrimination. Remote Sens., 13.","DOI":"10.3390\/rs13152901"},
{"key":"ref_26","unstructured":"Liu, Y., Lee, J., Park, M., Kim, S., and Yang, Y. (2018). Transductive propagation network for few-shot learning. arXiv."},
{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kim, J., Kim, T., Kim, S., and Yoo, C.D. (2019, January 16\u201320). Edge-labeling graph neural network for few-shot learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00010"},
{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gidaris, S., and Komodakis, N. (2019, January 16\u201320). Generating Classification Weights with GNN Denoising Autoencoders for Few-Shot Learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00011"},
{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, L., Li, L., Zhang, Z., Zhou, X., Zhou, E., and Liu, Y. (2020, January 13\u201319). Dpgn: Distribution propagation graph network for few-shot learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01340"},
{"key":"ref_30","unstructured":"Finn, C., Abbeel, P., and Levine, S. (2017, January 6\u201311). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Proceedings of the International Conference on Machine Learning, Sydney, Australia."},
{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},
{"key":"ref_32","unstructured":"(2023, January 30). The Air Force Moving and Stationary Target Recognition Database. Available online: https:\/\/www.sdms.afrl.af.mil."},
{"key":"ref_33","first-page":"2579\u20132605","article-title":"Visualizing data using t-SNE","volume":"9","author":"Hinton","year":"2008","journal-title":"J. Mach. Learn. Res."}
],
"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1668\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:22:51Z","timestamp":1760120571000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1668"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,2]]},"references-count":33,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23031668"],"URL":"https:\/\/doi.org\/10.3390\/s23031668","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,2]]}}}