{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,8]],"date-time":"2026-01-08T22:51:43Z","timestamp":1767912703424,"version":"3.49.0"},"reference-count":39,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2022,4,30]],"date-time":"2022-04-30T00:00:00Z","timestamp":1651276800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Convolutional neural networks (CNNs) have been widely used in SAR image recognition and have achieved high recognition accuracy on some public datasets. However, due to the opacity of the decision-making mechanism, the reliability and credibility of CNNs are insufficient at present, which hinders their application in some important fields such as SAR image recognition. In recent years, various interpretable network structures have been proposed to discern the relationship between a CNN\u2019s decision and image regions. Unfortunately, most interpretable networks are based on optical images, which have poor recognition performance for SAR images, and most of them cannot accurately explain the relationship between image parts and classification decisions. Based on the above problems, in this study, we present SAR-BagNet, which is a novel interpretable recognition framework for SAR images. SAR-BagNet can provide a clear heatmap that can accurately reflect the impact of each part of a SAR image on the final network decision. 
In addition to its good interpretability, SAR-BagNet also has high recognition accuracy, achieving 98.25% test accuracy.<\/jats:p>","DOI":"10.3390\/rs14092150","type":"journal-article","created":{"date-parts":[[2022,5,2]],"date-time":"2022-05-02T07:08:58Z","timestamp":1651475338000},"page":"2150","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":13,"title":["SAR-BagNet: An Ante-hoc Interpretable Recognition Model Based on Deep Network for SAR Image"],"prefix":"10.3390","volume":"14","author":[{"given":"Peng","family":"Li","sequence":"first","affiliation":[{"name":"Early Warning and Detection Department, Air Force Engineering University, Xi\u2019an 710051, China"}]},{"given":"Cunqian","family":"Feng","sequence":"additional","affiliation":[{"name":"Early Warning and Detection Department, Air Force Engineering University, Xi\u2019an 710051, China"}]},{"given":"Xiaowei","family":"Hu","sequence":"additional","affiliation":[{"name":"Early Warning and Detection Department, Air Force Engineering University, Xi\u2019an 710051, China"}]},{"given":"Zixiang","family":"Tang","sequence":"additional","affiliation":[{"name":"Early Warning and Detection Department, Air Force Engineering University, Xi\u2019an 710051, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,4,30]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Wang, Y.P., Zhang, Y.B., Qu, H.Q., and Tian, Q. (2018, January 13\u201315). Target Detection and Recognition Based on Convolutional Neural Network for SAR Image. Proceedings of the 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, Beijing, China.","DOI":"10.1109\/CISP-BMEI.2018.8633151"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Cai, J., Jia, H., Liu, G., Zhang, B., Liu, Q., Fu, Y., Wang, X., and Zhang, R. (2021). 
An Accurate Geocoding Method for GB-SAR Images Based on Solution Space Search and Its Application in Landslide Monitoring. Remote Sens., 13.","DOI":"10.3390\/rs13050832"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1882","DOI":"10.1109\/LGRS.2018.2865608","article-title":"Multiple Feature Aggregation Using Convolutional Neural Networks for SAR Image-Based Automatic Target Recognition","volume":"56","author":"Cho","year":"2018","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Cao, H., Zhang, H., Wang, C., and Zhang, B. (2019). Operational Flood Detection Using Sentinel-1 SAR Data over Large Areas. Water, 11.","DOI":"10.3390\/w11040786"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"458","DOI":"10.1109\/JSTARS.2017.2787591","article-title":"Eigenvalue-based urban area extraction using polarimetric SAR data","volume":"11","author":"Quan","year":"2018","journal-title":"IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Feng, Z., Zhu, M., Stankovi\u0107, L., and Ji, H. (2021). Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation. Remote Sens., 13.","DOI":"10.3390\/rs13091772"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Hu, X., Feng, W., Guo, Y., and Wang, Q. (2021). Feature Learning for SAR Target Recognition with Unknown Classes by Using CVAE-GAN. Remote Sens., 13.","DOI":"10.3390\/rs13183554"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"4598","DOI":"10.1109\/JSEN.2019.2901050","article-title":"SAR Automatic Target Recognition Based on Attribute Scattering Center Model and Discriminative Dictionary Learning","volume":"19","author":"Li","year":"2019","journal-title":"IEEE Sens. 
J."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2206","DOI":"10.1109\/JSTARS.2016.2555938","article-title":"SAR Imagery Feature Extraction Using 2DPCA-Based Two-Dimensional Neighborhood Virtual Points Discriminant Embedding","volume":"9","author":"Pei","year":"2016","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Mishra, A. (2008, January 19\u201321). Validation of PCA and LDA for SAR ATR. Proceedings of the IEEE Region 10 Conference, Hyderabad, India.","DOI":"10.1109\/TENCON.2008.4766807"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"6877","DOI":"10.1109\/TGRS.2019.2909121","article-title":"Subdictionary-Based Joint Sparse Representation for SAR Target Recognition Using Multilevel Reconstruction","volume":"57","author":"Zhou","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1777","DOI":"10.1109\/LGRS.2016.2608578","article-title":"SAR Automatic Target Recognition Based on Dictionary Learning and Joint Dynamic Sparse Representation","volume":"13","author":"Sun","year":"2016","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"591","DOI":"10.1109\/TAES.2013.120340","article-title":"SAR Automatic Target Recognition Using Discriminative Graphical Models","volume":"50","author":"Srinivas","year":"2014","journal-title":"IEEE Trans. Aerosp. Electron. Syst."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lattari, F., Gonzalez Leon, B., Asaro, F., Rucci, A., Prati, C., and Matteucci, M. (2019). Deep Learning for SAR Image Despeckling. Remote Sens., 11.","DOI":"10.3390\/rs11131532"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Dewi, C., Chen, R.-C., Yu, H., and Jiang, X. (2021). Robust detection method for improving small traffic sign recognition based on spatial pyramid pooling. J. Ambient Intell. 
Humaniz. Comput.","DOI":"10.1007\/s12652-021-03584-0"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Zhang, B., Liu, G., Zhang, R., Fu, Y., Liu, Q., Cai, J., Wang, X., and Li, Z. (2021). Monitoring Dynamic Evolution of the Glacial Lakes by Using Time Series of Sentinel-1A SAR Images. Remote Sens., 13.","DOI":"10.3390\/rs13071313"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Mao, S., Yang, J., Gou, S., Jiao, L., Xiong, T., and Xiong, L. (2021). Multi-Scale Fused SAR Image Registration Based on Deep Forest. Remote Sens., 13.","DOI":"10.3390\/rs13112227"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wu, T.D., Yen, J., Wang, J.H., Huang, R.J., Lee, H.W., and Wang, H.F. (2020, January 26\u201328). Automatic Target Recognition in SAR Images Based on a Combination of CNN and SVM. Proceedings of the 2020 International Workshop on Electromagnetics: Applications and Student Innovation Competition (iWEM), Makung, Taiwan.","DOI":"10.1109\/iWEM49354.2020.9237422"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"7282","DOI":"10.1109\/TGRS.2018.2849967","article-title":"SAR ATR of Ground Vehicles Based on LM-BN-CNN","volume":"56","author":"Zhou","year":"2018","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1145\/3236386.3241340","article-title":"The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery","volume":"16","author":"Lipton","year":"2018","journal-title":"Queue"},{"key":"ref_21","unstructured":"Shrikumar, A., Greenside, P., Shcherbina, A., and Kundaje, A. (2016). Not just a black box: Learning important features through propagating activation differences. arXiv."},{"key":"ref_22","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. 
arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Fong, R.C., and Vedaldi, A. (2017, January 22\u201329). Interpretable explanations of black boxes by meaningful perturbation. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.371"},{"key":"ref_24","unstructured":"Qi, Z., Khorram, S., and Li, F. (2019, January 16\u201320). Visualizing Deep Networks by Optimizing with Integrated Gradients. Proceedings of the CVPR Workshops, Long Beach, CA, USA."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. (2016, January 27\u201330). Learning deep features for discriminative localization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.319"},{"key":"ref_27","unstructured":"Alvarez-Melis, D., and Jaakkola, T.S. (2018). Towards robust interpretability with self-explaining neural networks. arXiv."},{"key":"ref_28","unstructured":"Chen, C., Li, O., Tao, C., Barnett, A.J., Su, J., and Rudin, C. (2018). This looks like that: Deep learning for interpretable image recognition. arXiv."},{"key":"ref_29","unstructured":"Kim, E., Kim, S., Seo, M., and Yoon, S. (2016, January 27\u201330). XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_30","unstructured":"Brendel, W., and Bethge, M. (2019). 
Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. arXiv."},{"key":"ref_31","unstructured":"Aditya, C., Anirban, S., Abhishek, D., and Prantik, H. (2018). Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks. arXiv."},{"key":"ref_32","unstructured":"Saurabh, D., and Harish, G.R. (2020, January 1\u20135). Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization. Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass, CO, USA."},{"key":"ref_33","unstructured":"Wang, H.F., Wang, Z.F., and Du, M.N. (2020, January 14\u201319). Methods for Interpreting and Understanding Deep Neural Networks. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA."},{"key":"ref_34","unstructured":"O\u2019Hara, S., and Draper, B.A. (2011). Introduction to the bag of features paradigm for image classification and retrieval. arXiv."},{"key":"ref_35","unstructured":"Luo, W., Li, Y., Urtasun, R., and Zemel, R. (2016, January 5\u201310). Understanding the effective receptive field in deep convolutional neural networks. Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_36","unstructured":"Dumoulin, V., and Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 5\u201310). Deep residual learning for image recognition. Proceedings of the Neural Information Processing Systems 29, Barcelona, Spain.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2021.3139914","article-title":"SAE-Net: A Deep Neural Network for SAR Autofocus","volume":"60","author":"Pu","year":"2022","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Zhao, S., Ni, J., Liang, J., Xiong, S., and Luo, Y. (2021). End-to-End SAR Deep Learning Imaging Method Based on Sparse Optimization. Remote Sens., 13.","DOI":"10.3390\/rs13214429"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/9\/2150\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:04:52Z","timestamp":1760137492000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/9\/2150"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,30]]},"references-count":39,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2022,5]]}},"alternative-id":["rs14092150"],"URL":"https:\/\/doi.org\/10.3390\/rs14092150","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,30]]}}}