{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:43:34Z","timestamp":1760150614188,"version":"build-2065373602"},"reference-count":29,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T00:00:00Z","timestamp":1701734400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"],"award-info":[{"award-number":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Pilot Base Construction and Pilot Verification Plan Program of Liaoning Province of China","award":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"],"award-info":[{"award-number":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"]}]},{"name":"Key Development Guidance Program of Liaoning Province of China","award":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"],"award-info":[{"award-number":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"]}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"publisher","award":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"],"award-info":[{"award-number":["61976033","2022JH24\/10200029","2019JH8\/10100100","2022M710569"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Radar data can be presented in various forms, unlike visible data. 
In the field of radar target recognition, most current work relies on point cloud data because of computing limitations, but this form of data loses useful information. This paper proposes a semantic segmentation network that processes high-dimensional data to enable automatic radar target recognition. Rather than relying on point cloud data, as most current radar automatic target recognition algorithms do, the paper uses a radar heat map of high-dimensional data to make more efficient use of the radar data. The radar heat map preserves more complete information than point cloud data, leading to more accurate classification results. Additionally, this paper proposes a dimension collapse module based on a vision transformer that performs feature extraction between two modules of differing dimensionality when the dimensionality of the high-dimensional data changes. This module is easily extendable to other networks that require high-dimensional data collapse. The network\u2019s performance is verified on a real radar dataset, showing that the vision-transformer-based radar semantic segmentation network achieves better performance with fewer parameters than segmentation networks using other dimension collapse methods.<\/jats:p>","DOI":"10.3390\/s23249630","type":"journal-article","created":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T02:55:32Z","timestamp":1701744932000},"page":"9630","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["A 3D U-Net Based on a Vision Transformer for Radar Semantic Segmentation"],"prefix":"10.3390","volume":"23","author":[{"given":"Tongrui","family":"Zhang","sequence":"first","affiliation":[{"name":"College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7828-4902","authenticated-orcid":false,"given":"Yunsheng","family":"Fan","sequence":"additional","affiliation":[{"name":"College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,12,5]]},"reference":[{"key":"ref_1","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep Residual Learning for Image Recognition. Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1343","DOI":"10.1109\/TMM.2020.2997184","article-title":"cmSalGAN: RGB-D Salient Object Detection with Cross-View Generative Adversarial Networks","volume":"23","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_3","first-page":"114005","article-title":"ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases","volume":"2022","author":"Touvron","year":"2021","journal-title":"J. Stat. Mech. Theory Exp."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1016\/j.neucom.2016.09.139","article-title":"Large size single image fast defogging and the real time video defogging FPGA architecture","volume":"269","author":"Liu","year":"2017","journal-title":"Neurocomputing"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Rablau, C.I. (2019, January 2). Lidar: A new self-driving vehicle for introducing optics to broader engineering and non-engineering audiences. 
Proceedings of the 15th Conference on Education and Training in Optics and Photonics, ETOP 2019, Quebec, QC, Canada.","DOI":"10.1117\/12.2523863"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"3981","DOI":"10.1109\/TITS.2018.2789462","article-title":"Road-Segmentation-Based Curb Detection Method for Self-Driving via a 3D-LiDAR Sensor","volume":"19","author":"Zhang","year":"2018","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Xu, Y., Peng, W., Wang, R., Wang, J., and Xiao, B. (2019, January 12). A new guidance superiority model for cooperative air combat. Proceedings of the 5th Symposium on Novel Optoelectronic Detection Technology and Application, Xi\u2019an, China.","DOI":"10.1117\/12.2520397"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2837","DOI":"10.1080\/01431161.2022.2072179","article-title":"Multi-Scale translation method from SAR to optical remote sensing images based on conditional generative adversarial network","volume":"43","author":"Kong","year":"2022","journal-title":"Int. J. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"275","DOI":"10.1002\/nsg.12099","article-title":"Measuring vertical soil water content profiles by combining horizontal borehole and dispersive surface ground penetrating radar data","volume":"18","author":"Yu","year":"2020","journal-title":"Near Surf. Geophys."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Shen, Y., and Zhang, T. (2022, January 17\u201322). Radar Semantic Segmentation based on U-Net using Vision Transformer. 
Proceedings of the 2022 IEEE International Conference on Real-Time Computing and Robotics, Guiyang, China.","DOI":"10.1109\/RCAR54675.2022.9872193"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"858","DOI":"10.1109\/TNN.2010.2044802","article-title":"A Convolutional Learning System for Object Classification in 3-D Lidar Data","volume":"21","author":"Prokhorov","year":"2010","journal-title":"IEEE Trans. Neural Netw."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhang, H., Yu, L., Chen, Y., and Wei, Y. (2021). Fast complex-valued CNN for radar jamming signal recognition. Remote Sens., 13.","DOI":"10.3390\/rs13152867"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"80588","DOI":"10.1109\/ACCESS.2020.2990629","article-title":"Convolutional Neural Network-Based Radar Jamming Signal Classification with Sufficient and Limited Samples","volume":"8","author":"Shao","year":"2020","journal-title":"IEEE Access"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"669","DOI":"10.1109\/LGRS.2018.2806940","article-title":"Personnel Recognition and Gait Classification Based on Multistatic Micro-Doppler Signatures Using Deep Convolutional Neural Networks","volume":"15","author":"Chen","year":"2018","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_15","unstructured":"Wang, Y., Sun, B., and Wang, N. (2018, January 17\u201319). Recognition of radar active-jamming through convolutional neural networks. Proceedings of the IET International Radar Conference 2018, Nanjing, China."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"9099","DOI":"10.1109\/JSEN.2021.3054744","article-title":"False-Alarm-Controllable Radar Detection for Marine Target Based on Multi Features Fusion via CNNs","volume":"21","author":"Chen","year":"2021","journal-title":"IEEE Sens. 
J."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"10706","DOI":"10.1109\/JSEN.2020.2994292","article-title":"Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform","volume":"20","author":"Sun","year":"2020","journal-title":"IEEE Sens. J."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Sun, Y., Fei, T., Gao, S., and Pohl, N. (2019, January 12\u201317). Automatic Radar-based Gesture Detection and Classification via a Region-based Deep Convolutional Neural Network. Proceedings of the 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682277"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"41391","DOI":"10.1109\/ACCESS.2018.2857007","article-title":"MIMO-FMCW Radar-Based Parking Monitoring Application with a Modified Convolutional Neural Network with Spatial Priors","volume":"6","author":"Zoeke","year":"2018","journal-title":"IEEE Access"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kim, S., Lee, S., Doo, S., and Shim, B. (2018, January 3\u20137). Moving Target Classification in Automotive Radar Systems Using Convolutional Recurrent Neural Networks. Proceedings of the 26th European Signal Processing Conference, Rome, Italy.","DOI":"10.23919\/EUSIPCO.2018.8553185"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Major, B., Fontijne, D., Ansari, A., Sukhavasi, R.T., Gowaikar, R., Hamilton, M., Lee, S., Grzechnik, S., and Subramanian, S. (2019, January 27\u201328). Vehicle Detection with Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors. 
Proceedings of the 17th IEEE\/CVF International Conference on Computer Vision Workshop, Seoul, Republic of Korea.","DOI":"10.1109\/ICCVW.2019.00121"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1263","DOI":"10.1109\/LRA.2020.2967272","article-title":"CNN Based Road User Detection Using the 3D Radar Cube","volume":"5","author":"Palffy","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"5119","DOI":"10.1109\/JSEN.2020.3036047","article-title":"RAMP-CNN: A Novel Neural Network for Enhanced Automotive Radar Object Recognition","volume":"21","author":"Gao","year":"2021","journal-title":"IEEE Sens. J."},{"key":"ref_24","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Nowruzi, F.E., Kolhatkar, D., Kapoor, P., Al Hassanat, F., Heravi, E.J., Laganiere, R., Rebut, J., and Malik, W. (2020, January 23). Deep Open Space Segmentation using Automotive Radar. Proceedings of the 2020 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility, Linz, Austria.","DOI":"10.1109\/ICMIM48759.2020.9299052"},{"key":"ref_26","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019). Pytorch: An imperative style, high-performance deep learning library. 
arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1429","DOI":"10.3390\/s19061429","article-title":"Finger-Counting-Based Gesture Recognition within Cars Using Impulse Radar with Convolutional Neural Network","volume":"19","author":"Shahzad","year":"2019","journal-title":"Sensors"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"188","DOI":"10.1109\/TIV.2019.2955853","article-title":"Scene Understanding with Automotive Radar","volume":"5","author":"Schumann","year":"2020","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1049\/iet-rsn.2019.0307","article-title":"Dopplernet: A convolutional neural network for recognizing targets in real scenarios using a persistent range-doppler radar","volume":"14","author":"Montero","year":"2020","journal-title":"IET Radar Sonar Navig."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/24\/9630\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:33:24Z","timestamp":1760132004000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/24\/9630"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,5]]},"references-count":29,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["s23249630"],"URL":"https:\/\/doi.org\/10.3390\/s23249630","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,12,5]]}}}