{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T18:04:59Z","timestamp":1769018699307,"version":"3.49.0"},"reference-count":27,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2021,10,20]],"date-time":"2021-10-20T00:00:00Z","timestamp":1634688000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61771362"],"award-info":[{"award-number":["61771362"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],
"abstract":"<jats:p>Obtaining a large number of labeled synthetic aperture radar (SAR) images is expensive and time-consuming, and with a small training data size, deep-network-based target detection on SAR images usually performs poorly. In this study, considering that optical remote sensing images are much easier to label than SAR images, we assume that a large number of labeled optical remote sensing images and a small number of labeled SAR images with similar scenes are available, propose to transfer knowledge from the optical remote sensing images to the SAR images, and develop a domain adaptive Faster R-CNN for SAR target detection with a small training data size. In the proposed method, to make full use of the label information and realize more accurate knowledge transfer, an instance-level domain adaptation constraint is used rather than a feature-level one. Specifically, a generative adversarial network (GAN) constraint is applied as the domain adaptation constraint in the adaptation module after the proposals of Faster R-CNN, so as to achieve instance-level domain adaptation and learn transferable features. Experimental results on a measured SAR image dataset show that, with a small training data size, the proposed method achieves higher detection accuracy than the traditional Faster R-CNN.<\/jats:p>",
"DOI":"10.3390\/rs13214202","type":"journal-article","created":{"date-parts":[[2021,10,20]],"date-time":"2021-10-20T21:31:26Z","timestamp":1634765486000},"page":"4202","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":29,"title":["SAR Target Detection Based on Domain Adaptive Faster R-CNN with Small Training Data Size"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3424-7231","authenticated-orcid":false,"given":"Yuchen","family":"Guo","sequence":"first","affiliation":[{"name":"Academy of Advanced Interdisciplinary Research, Xidian University, Xi\u2019an 710071, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4503-0022","authenticated-orcid":false,"given":"Lan","family":"Du","sequence":"additional","affiliation":[{"name":"National Laboratory of Radar Signal Processing, Xidian University, Xi\u2019an 710071, China"}]},{"given":"Guoxin","family":"Lyu","sequence":"additional","affiliation":[{"name":"National Laboratory of Radar Signal Processing, Xidian University, Xi\u2019an 710071, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,10,20]]},
"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1685","DOI":"10.1109\/TGRS.2008.2006504","article-title":"An adaptive and fast CFAR algorithm based on automatic censoring for target detection in high-resolution SAR images","volume":"47","author":"Gao","year":"2009","journal-title":"IEEE Trans. Geosci. Remote Sens."},
{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, December 7\u201313). Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},
{"key":"ref_3","first-page":"91","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"28","author":"Ren","year":"2015","journal-title":"Adv. Neural Inf. Process. Syst."},
{"key":"ref_4","first-page":"3018","article-title":"Target detection method based on convolutional neural network for SAR image","volume":"38","author":"Du","year":"2016","journal-title":"J. Electron. Inf. Technol."},
{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Li, J., Qu, C., and Shao, J. (2017). Ship detection in SAR images based on an improved faster R-CNN. SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), IEEE.","DOI":"10.1109\/BIGSARDATA.2017.8124934"},
{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Kang, M., Ji, K., Leng, X., and Lin, Z. (2017). Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection. Remote Sens., 9.","DOI":"10.3390\/rs9080860"},
{"key":"ref_7","doi-asserted-by":"crossref","first-page":"279","DOI":"10.5194\/isprs-annals-V-3-2020-279-2020","article-title":"Generating artificial near infrared spectral band from rgb image using conditional generative adversarial network","volume":"3","author":"Yuan","year":"2020","journal-title":"ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci."},
{"key":"ref_8","unstructured":"Uddin, M.S., and Li, J. (2020). Generative Adversarial Networks for Visible to Infrared Video Conversion. Recent Advances in Image Restoration with Applications to Real World Problems, IntechOpen."},
{"key":"ref_9","first-page":"1099502","article-title":"Improved visible to IR image transformation using synthetic data augmentation with cycle-consistent adversarial networks","volume":"Volume 10995","author":"Yun","year":"2019","journal-title":"Pattern Recognition and Tracking XXX"},
{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Uddin, M.S., Hoque, R., Islam, K.A., Kwan, C., Gribben, D., and Li, J. (2021). Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance. Remote Sens., 13.","DOI":"10.3390\/rs13163257"},
{"key":"ref_11","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1109\/JPROC.2020.3004555","article-title":"A comprehensive survey on transfer learning","volume":"109","author":"Zhuang","year":"2020","journal-title":"Proc. IEEE"},
{"key":"ref_12","unstructured":"Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014, December 8\u201313). How transferable are features in deep neural networks?. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},
{"key":"ref_13","unstructured":"Long, M., Cao, Y., Wang, J., and Jordan, M. (2015, July 6\u201311). Learning transferable features with deep adaptation networks. Proceedings of the International Conference on Machine Learning, Lille, France."},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. (2018, June 18\u201323). Maximum classifier discrepancy for unsupervised domain adaptation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00392"},
{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017, July 21\u201326). Adversarial discriminative domain adaptation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.316"},
{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Sun, B., and Saenko, K. (2016, October 11\u201314). Deep coral: Correlation alignment for deep domain adaptation. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-49409-8_35"},
{"key":"ref_17","unstructured":"Ganin, Y., and Lempitsky, V. (2015, July 6\u201311). Unsupervised domain adaptation by backpropagation. Proceedings of the International Conference on Machine Learning, Lille, France."},
{"key":"ref_18","first-page":"1","article-title":"Domain-adversarial training of neural networks","volume":"17","author":"Ganin","year":"2016","journal-title":"J. Mach. Learn. Res."},
{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2324","DOI":"10.1109\/TGRS.2019.2947634","article-title":"What, where, and how to transfer in SAR target recognition based on deep CNNs","volume":"58","author":"Huang","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Inoue, N., Furuta, R., Yamasaki, T., and Aizawa, K. (2018, June 18\u201323). Cross-domain weakly-supervised object detection through progressive domain adaptation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00525"},
{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Chen, Y., Li, W., Sakaridis, C., Dai, D., and Van Gool, L. (2018, June 18\u201323). Domain adaptive faster r-cnn for object detection in the wild. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00352"},
{"key":"ref_22","unstructured":"He, Z., and Zhang, L. (2019, October 27\u2013November 2). Multi-adversarial faster-rcnn for unrestricted object detection. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea."},
{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Chen, C., Zheng, Z., Ding, X., Huang, Y., and Dou, Q. (2020, June 13\u201319). Harmonizing Transferability and Discriminability for Adapting Object Detectors. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00889"},
{"key":"ref_24","first-page":"417","article-title":"Generative adversarial nets","volume":"27","author":"Goodfellow","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},
{"key":"ref_25","unstructured":"Gutierrez, D. (2021, August 09). MiniSAR: A Review of 4-Inch and 1-Foot Resolution Ku-Band Imagery, Available online: https:\/\/www.sandia.gov\/radar\/Web\/images\/SAND2005-3706P-miniSAR-flight-SAR-images.pdf."},
{"key":"ref_26","doi-asserted-by":"crossref","first-page":"2296","DOI":"10.1109\/TITS.2016.2517826","article-title":"Vehicle detection in high-resolution aerial images based on fast sparse representation classification and multiorder feature","volume":"17","author":"Chen","year":"2016","journal-title":"IEEE Trans. Intell. Transp. Syst."},
{"key":"ref_27","doi-asserted-by":"crossref","first-page":"3366","DOI":"10.1109\/TGRS.2019.2953936","article-title":"Saliency-guided single shot multibox detector for target detection in SAR images","volume":"58","author":"Du","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."}],
"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/21\/4202\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:19:04Z","timestamp":1760167144000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/21\/4202"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,20]]},"references-count":27,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2021,11]]}},"alternative-id":["rs13214202"],"URL":"https:\/\/doi.org\/10.3390\/rs13214202","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,10,20]]}}}
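The abstract above describes a GAN-style domain constraint applied to the instance-level features produced after the Faster R-CNN proposals. A minimal sketch of such an adversarial instance-level objective is shown below; it is an illustrative toy, not the authors' implementation: `domain_losses`, the linear discriminator `(w, b)`, and the flipped-label objective for the feature extractor are all assumed stand-ins for the paper's adaptation module.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    # binary cross-entropy for a single prediction p against label y
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))

def domain_losses(features, domains, w, b):
    """Toy instance-level adversarial domain losses.

    features: one feature vector per region proposal (instance)
    domains:  1 = source domain (optical), 0 = target domain (SAR)
    (w, b):   a linear domain discriminator over instance features

    Returns (d_loss, g_loss): the discriminator minimizes d_loss to
    tell the two domains apart, while the feature extractor minimizes
    g_loss (the same loss with flipped labels), pushing optical and
    SAR instance features toward a shared, transferable distribution.
    """
    d_loss = g_loss = 0.0
    for x, d in zip(features, domains):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        d_loss += bce(p, d)        # discriminator objective
        g_loss += bce(p, 1 - d)    # adversarial (flipped-label) objective
    n = len(features)
    return d_loss / n, g_loss / n
```

With an untrained discriminator (zero weights) every instance scores 0.5 and both losses equal ln 2; training would alternate between minimizing `d_loss` over `(w, b)` and minimizing `g_loss` over the feature extractor, which is the usual GAN-style recipe for learning domain-invariant instance features.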