{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T19:37:11Z","timestamp":1774553831528,"version":"3.50.1"},"reference-count":49,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2022,6,15]],"date-time":"2022-06-15T00:00:00Z","timestamp":1655251200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Key Projects of Global Change and Response of Ministry of Science and Technology of China","award":["2020YFA0608203"],"award-info":[{"award-number":["2020YFA0608203"]}]},{"name":"Key Projects of Global Change and Response of Ministry of Science and Technology of China","award":["2021YFS0335"],"award-info":[{"award-number":["2021YFS0335"]}]},{"name":"Key Projects of Global Change and Response of Ministry of Science and Technology of China","award":["2020YFG0296"],"award-info":[{"award-number":["2020YFG0296"]}]},{"name":"Key Projects of Global Change and Response of Ministry of Science and Technology of China","award":["2020YFS0338"],"award-info":[{"award-number":["2020YFS0338"]}]},{"name":"Key Projects of Global Change and Response of Ministry of Science and Technology of China","award":["FY-APP-2021.0304"],"award-info":[{"award-number":["FY-APP-2021.0304"]}]},{"name":"Science and Technology Support Project of Sichuan Province","award":["2020YFA0608203"],"award-info":[{"award-number":["2020YFA0608203"]}]},{"name":"Science and Technology Support Project of Sichuan Province","award":["2021YFS0335"],"award-info":[{"award-number":["2021YFS0335"]}]},{"name":"Science and Technology Support Project of Sichuan Province","award":["2020YFG0296"],"award-info":[{"award-number":["2020YFG0296"]}]},{"name":"Science and Technology Support Project of Sichuan Province","award":["2020YFS0338"],"award-info":[{"award-number":["2020YFS0338"]}]},{"name":"Science and Technology Support Project of Sichuan 
Province","award":["FY-APP-2021.0304"],"award-info":[{"award-number":["FY-APP-2021.0304"]}]},{"name":"Fengyun Satellite Application Advance Plan","award":["2020YFA0608203"],"award-info":[{"award-number":["2020YFA0608203"]}]},{"name":"Fengyun Satellite Application Advance Plan","award":["2021YFS0335"],"award-info":[{"award-number":["2021YFS0335"]}]},{"name":"Fengyun Satellite Application Advance Plan","award":["2020YFG0296"],"award-info":[{"award-number":["2020YFG0296"]}]},{"name":"Fengyun Satellite Application Advance Plan","award":["2020YFS0338"],"award-info":[{"award-number":["2020YFS0338"]}]},{"name":"Fengyun Satellite Application Advance Plan","award":["FY-APP-2021.0304"],"award-info":[{"award-number":["FY-APP-2021.0304"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Low-grade roads exhibit complex geometric, spectral, and topological features in remote sensing optical images because they are built from varied materials and are easily obscured by vegetation or buildings, which leads to low accuracy in low-grade road extraction from remote sensing images. To address this problem, this paper proposes a novel deep learning network, referred to as SDG-DenseNet, together with a decision-level fusion method for optical and Synthetic Aperture Radar (SAR) data to extract low-grade roads. On the one hand, to enlarge the receptive field and aggregate multi-scale features in commonly used deep learning networks, we develop SDG-DenseNet from three modules: the Stem block, the D-Dense block, and the Global Information Recovery Module (GIRM). The Stem block applies two consecutive small convolution kernels instead of one large kernel, the D-Dense block appends three consecutive dilated convolutions to the initial Dense block, and the GIRM combines dilated convolution with an attention mechanism. 
On the other hand, because the penetrating capacity and oblique observation of SAR can recover information about low-grade roads that are obscured by vegetation or buildings in optical images, we fuse the road extraction result from SAR images with that from optical images at the decision level to enhance extraction accuracy. The experimental results show that the proposed SDG-DenseNet attains higher IoU and F1 scores than other network models applied to low-grade road extraction from optical images. Furthermore, the results verify that decision-level fusion of road binary maps from SAR and optical images further improves the F1, COR, and COM scores significantly.<\/jats:p>","DOI":"10.3390\/rs14122870","type":"journal-article","created":{"date-parts":[[2022,6,16]],"date-time":"2022-06-16T03:01:22Z","timestamp":1655348482000},"page":"2870","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["A Low-Grade Road Extraction Method Using SDG-DenseNet Based on the Fusion of Optical and SAR Images at Decision Level"],"prefix":"10.3390","volume":"14","author":[{"given":"Jinglin","family":"Zhang","sequence":"first","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China"}]},{"given":"Yuxia","family":"Li","sequence":"additional","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China"}]},{"given":"Yu","family":"Si","sequence":"additional","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China"}]},{"given":"Bo","family":"Peng","sequence":"additional","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, 
China"}]},{"given":"Fanghong","family":"Xiao","sequence":"additional","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China"}]},{"given":"Shiyu","family":"Luo","sequence":"additional","affiliation":[{"name":"School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9875-9853","authenticated-orcid":false,"given":"Lei","family":"He","sequence":"additional","affiliation":[{"name":"School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,6,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Zhou, L., Zhang, C., and Wu, M. (2018, January 18\u201322). D-linknet: Linknet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00034"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"8919","DOI":"10.1109\/TGRS.2020.2991733","article-title":"Simultaneous Road Surface and Centerline Extraction From Large-Scale Remote Sensing Images Using CNN-Based Segmentation and Tracing","volume":"58","author":"Wei","year":"2020","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"107141","DOI":"10.1016\/j.patcog.2019.107141","article-title":"A fusion network for road detection via spatial propagation and spatial transformation","volume":"100","author":"Yang","year":"2020","journal-title":"Pattern Recognit."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"288","DOI":"10.1016\/j.isprsjprs.2020.08.019","article-title":"BT-RoadNet: A boundary and topologically-aware neural network for road extraction from high-resolution remote sensing imagery","volume":"168","author":"Zhou","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"2284","DOI":"10.1109\/JSTARS.2021.3053603","article-title":"Reconstruction Bias U-Net for Road Extraction From Optical Remote Sensing Images","volume":"14","author":"Chen","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_7","unstructured":"He, X., Zemel, R.S., and Carreira-Perpi\u00f1\u00e1n, M.\u00c1. (July, January 27). Multiscale conditional random fields for image labeling. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1007\/s11263-007-0109-1","article-title":"Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context","volume":"81","author":"Shotton","year":"2009","journal-title":"Int. J. Comput. Vis."},{"key":"ref_9","first-page":"109","article-title":"Efficient inference in fully connected crfs with gaussian edge potentials","volume":"24","author":"Koltun","year":"2011","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"712","DOI":"10.1016\/j.cviu.2010.02.004","article-title":"Context based object categorization: A critical survey","volume":"114","author":"Galleguillos","year":"2010","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1915","DOI":"10.1109\/TPAMI.2012.231","article-title":"Learning hierarchical features for scene labeling","volume":"35","author":"Farabet","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Eigen, D., and Fergus, R. (2015, January 7\u201313). Predicting depth, surface normal and semantic labels with a common multi-scale convolutional architecture. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.304"},{"key":"ref_13","unstructured":"Pinheiro PH, O., and Collobert, R. (2014, January 21\u201326). Recurrent convolutional neural networks for scene labeling. Proceedings of the 31st International Conference on Machine Learning, Beijing, China."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Yang, Y., Wang, J., Xu, W., and Yuille, A.L. (2016, January 27\u201330). Attention to scale: Scale-aware semantic image segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.396"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Lin, G., Shen, C., Van Den Hengel, A., and Reid, I. (2016, January 27\u201330). Efficient piecewise training of deep structured models for semantic segmentation. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.348"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"Segnet: A deep convolutional encoder-decoder architecture for image segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Lin, G., Milan, A., Shen, C., and Reid, I. (2016). RefineNet: Multi-path refinement networks with identity mappings for high-resolution semantic segmentation. arXiv.","DOI":"10.1109\/CVPR.2017.549"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Pohlen, T., Hermans, A., Mathias, M., and Leibe, B. (2017, January 21\u201326). Full-resolution residual networks for semantic segmentation in street scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.353"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Peng, C., Zhang, X., Yu, G., and Sun, J. (2017, January 21\u201326). Large Kernel Matters\u2014Improve Semantic Segmentation by Global Convolutional Network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.189"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Amirul Islam, M., Rochan, M., Bruce ND, B., and Wang, Y. (2017, January 21\u201326). Gated feedback refinement network for dense image labeling. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.518"},{"key":"ref_22","unstructured":"Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. (2014). Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv."},{"key":"ref_23","unstructured":"Yu, F., and Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017, January 21\u201326). Pyramid scene parsing network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.660"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2925","DOI":"10.1109\/TITS.2015.2430892","article-title":"Vehicle color recognition with spatial pyramid deep learning","volume":"16","author":"Hu","year":"2015","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs","volume":"40","author":"Chen","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_27","unstructured":"Chen, L.C., Papandreou, G., Schroff, F., and Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018, January 8\u201314). Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., and He, K. 
(2018, January 18\u201323). Non-local neural networks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00813"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., and Lu, H. (2019, January 16\u201317). Dual attention network for scene segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00326"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ferrari, V., Hebert, M., Sminchisescu, C., and Weiss, Y. (2018). CBAM: Convolutional Block Attention Module. Computer Vision\u2014ECCV 2018. ECCV 2018, Springer. Lecture Notes in Computer Science.","DOI":"10.1007\/978-3-030-01249-6"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"6302","DOI":"10.1109\/JSTARS.2021.3083055","article-title":"DA-RoadNet: A Dual-Attention Network for Road Extraction From High Resolution Satellite Imagery","volume":"14","author":"Wan","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1785","DOI":"10.1109\/TGRS.2003.813850","article-title":"Road vectors update using SAR imagery: A snake-based method","volume":"41","author":"Bentabet","year":"2003","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sun, Z., Geng, H., Lu, Z., Scherer, R., and Wo\u017aniak, M. (2021). Review of Road Segmentation for SAR Images. Remote Sens., 13.","DOI":"10.3390\/rs13051011"},{"key":"ref_35","first-page":"156","article-title":"SAR image road detection based on Hough transform and genetic algorithm","volume":"3","author":"Jiang","year":"2005","journal-title":"Radar Sci. Technol."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wei, X., Lv, X., and Zhang, K. (2021). 
Road Extraction in SAR Images Using Ordinal Regression and Road-Topology Loss. Remote Sens., 13.","DOI":"10.3390\/rs13112080"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A.A. (2017, January 2\u20134). Inception-v4, inception-resnet and the impact of residual connections on learning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Rahman, M.A., and Wang, Y. (2016, January 12\u201314). Optimizing intersection-over-union in deep neural networks for image segmentation. Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA.","DOI":"10.1007\/978-3-319-50835-1_22"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"J\u00e9gou, S., Drozdzal, M., Vazquez, D., Romero, A., and Bengio, Y. (2017, January 21\u201326). The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.156"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"LeCun, Y.A., Bottou, L., Orr, G.B., Orr, G.B., and Muller, K.R. (2012). Efficient Backprop in Neural Networks: Tricks of the Trade, Springer.","DOI":"10.1007\/978-3-642-35289-8_3"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018, January 18\u201323). MobileNetV2: Inverted Residuals and Linear Bottlenecks. 
Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Xiao, F., Chen, Y., Tong, L., He, L., Tan, L., and Wu, B. (2016, January 10\u201315). Road detection in high-resolution SAR images using Duda and path operators. Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China.","DOI":"10.1109\/IGARSS.2016.7729321"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Mnih, V., and Hinton, G.E. (2010, January 5\u201311). Learning to Detect Roads in High-Resolution Aerial Images. Proceedings of the Computer Vision\u2014ECCV 2010\u201411th European Conference on Computer Vision, Heraklion, Crete, Greece. Proceedings, Part VI.","DOI":"10.1007\/978-3-642-15567-3_16"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Sun, T., Chen, Z., Yang, W., and Wang, Y. (2018, January 18\u201322). Stacked u-nets with multi-output for road extraction. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00033"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"749","DOI":"10.1109\/LGRS.2018.2802944","article-title":"Road extraction by deep residual U-Net","volume":"15","author":"Zhang","year":"2018","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_47","first-page":"5609413","article-title":"Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote-Sensing Images","volume":"60","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Zhang, Z., and Wang, Y. (2019). JointNet: A common neural network for road and building extraction. 
Remote Sens., 11.","DOI":"10.3390\/rs11060696"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"3004005","DOI":"10.1109\/LGRS.2021.3106772","article-title":"Dual-Path Morph-UNet for Road and Building Segmentation From Satellite Images","volume":"19","author":"Dey","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/12\/2870\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T23:32:25Z","timestamp":1760139145000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/12\/2870"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,15]]},"references-count":49,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2022,6]]}},"alternative-id":["rs14122870"],"URL":"https:\/\/doi.org\/10.3390\/rs14122870","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,15]]}}}