{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T05:37:56Z","timestamp":1769751476448,"version":"3.49.0"},"reference-count":41,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2023,3,29]],"date-time":"2023-03-29T00:00:00Z","timestamp":1680048000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["42171269"],"award-info":[{"award-number":["42171269"]}]},{"name":"National Natural Science Foundation of China","award":["2021D01D06"],"award-info":[{"award-number":["2021D01D06"]}]},{"name":"National Natural Science Foundation of China","award":["41961059"],"award-info":[{"award-number":["41961059"]}]},{"name":"Key Project of Natural Science Foundation of Xinjiang Uygur Autonomous Region","award":["42171269"],"award-info":[{"award-number":["42171269"]}]},{"name":"Key Project of Natural Science Foundation of Xinjiang Uygur Autonomous Region","award":["2021D01D06"],"award-info":[{"award-number":["2021D01D06"]}]},{"name":"Key Project of Natural Science Foundation of Xinjiang Uygur Autonomous Region","award":["41961059"],"award-info":[{"award-number":["41961059"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["42171269"],"award-info":[{"award-number":["42171269"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2021D01D06"],"award-info":[{"award-number":["2021D01D06"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of 
China","doi-asserted-by":"publisher","award":["41961059"],"award-info":[{"award-number":["41961059"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Automatic road extraction from remote sensing images has an important impact on road maintenance and land management. While significant deep-learning-based approaches have been developed in recent years, achieving a suitable trade-off between extraction accuracy, inference speed and model size remains a fundamental and challenging issue for real-time road extraction applications, especially for rural roads. For this purpose, we developed a lightweight dynamic addition network (LDANet) for rural road extraction. Specifically, considering the narrow, complex and diverse nature of rural roads, we introduce an improved Asymmetric Convolution Block (ACB)-based Inception structure to extend the low-level features in the feature extraction layer. In the deep feature association module, the depth-wise separable convolution (DSC) is introduced to reduce the computational complexity of the model, and an adaptation-weighted overlay is designed to capture the salient features. Moreover, we utilize a dynamic weighted combined loss, which can better solve the sample imbalance and boost segmentation accuracy. In addition, we constructed a typical remote sensing dataset of rural roads based on the Deep Globe Land Cover Classification Challenge dataset. Our experiments demonstrate that LDANet performs well in road extraction with fewer model parameters (&lt;1 MB) and that the accuracy and the mean Intersection over Union reach 98.74% and 76.21% on the test dataset, respectively. 
Therefore, LDANet has potential to rapidly extract and monitor rural roads from remote sensing images.<\/jats:p>","DOI":"10.3390\/rs15071829","type":"journal-article","created":{"date-parts":[[2023,3,30]],"date-time":"2023-03-30T01:05:26Z","timestamp":1680138326000},"page":"1829","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["LDANet: A Lightweight Dynamic Addition Network for Rural Road Extraction from Remote Sensing Images"],"prefix":"10.3390","volume":"15","author":[{"given":"Bohua","family":"Liu","sequence":"first","affiliation":[{"name":"College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China"}]},{"given":"Jianli","family":"Ding","sequence":"additional","affiliation":[{"name":"College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China"}]},{"given":"Jie","family":"Zou","sequence":"additional","affiliation":[{"name":"College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China"}]},{"given":"Jinjie","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China"}]},{"given":"Shuai","family":"Huang","sequence":"additional","affiliation":[{"name":"College of Geography and Environment, Liaocheng University, Liaocheng 252000, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,3,29]]},"reference":[{"key":"ref_1","first-page":"102341","article-title":"Adaboost-like End-to-End Multiple Lightweight U-Nets for Road Extraction from Optical Remote Sensing Images","volume":"100","author":"Chen","year":"2021","journal-title":"Int. J. Appl. Earth Obs. Geoinf."},{"key":"ref_2","first-page":"102987","article-title":"RoadFormer: Pyramidal Deformable Vision Transformers for Road Network Extraction with Remote Sensing Images","volume":"113","author":"Jiang","year":"2022","journal-title":"Int. J. 
Appl. Earth Obs. Geoinf."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1978","DOI":"10.11834\/jrs.20219209","article-title":"Road Extraction in Rural Areas from High Resolution Remote Sensing Image Using a Improved Full Convolution Network","volume":"25","author":"Li","year":"2021","journal-title":"Natl. Remote Sens. Bull."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Herumurti, D., Uchimura, K., Koutaki, G., and Uemura, T. (February, January 30). Urban Road Extraction Based on Hough Transform and Region Growing. Proceedings of the FCV 2013\u201419th Korea-Japan Joint Workshop on Frontiers of Computer Vision, Incheon, Republic of Korea.","DOI":"10.1109\/FCV.2013.6485491"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"3359","DOI":"10.1109\/TGRS.2013.2272593","article-title":"An Integrated Method for Urban Main-Road Centerline Extraction from Optical Remotely Sensed Imagery","volume":"52","author":"Shi","year":"2014","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"5489","DOI":"10.1109\/JSTARS.2020.3023549","article-title":"Road Extraction Methods in High-Resolution Remote Sensing Images: A Comprehensive Review","volume":"13","author":"Lian","year":"2020","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zhao, J.Q., Yang, J., Li, P.X., and Lu, J.M. (2015, January 21\u201323). Semi-Automatic Road Extraction from SAR Images Using EKF and PF. 
Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences\u2014ISPRS Archives, Kona, HI, USA.","DOI":"10.5194\/isprsarchives-XL-7-W4-227-2015"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"3584","DOI":"10.1080\/01431161.2016.1201227","article-title":"A Two-Level Markov Random Field for Road Network Extraction and Its Application with Optical, SAR, and Multitemporal Data","volume":"37","author":"Perciano","year":"2016","journal-title":"Int. J. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"3322","DOI":"10.1109\/TGRS.2016.2514602","article-title":"Road Network Extraction via Aperiodic Directional Structure Measurement","volume":"54","author":"Zang","year":"2016","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"93","DOI":"10.5566\/ias.1493","article-title":"FPGA Implementation of Road Network Extraction Using Morphological Operator","volume":"35","author":"Sujatha","year":"2016","journal-title":"Image Anal. Stereol."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"504","DOI":"10.1126\/science.1127647","article-title":"Reducing the Dimensionality of Data with Neural Networks","volume":"313","author":"Hinton","year":"2006","journal-title":"Science"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhong, Z., Li, J., Cui, W., and Jiang, H. (2016, January 10\u201315). Fully Convolutional Networks for Building and Road Extraction: Preliminary Results. Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China.","DOI":"10.1109\/IGARSS.2016.7729406"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Varia, N., Dokania, A., and Senthilnath, J. (2018, January 18\u201321). DeepExt: A Convolution Neural Network for Road Extraction Using RGB Images Captured by UAV. 
Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI, Bangalore, India.","DOI":"10.1109\/SSCI.2018.8628717"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Doshi, J. (2018, January 18\u201322). Residual Inception Skip Network for Binary Segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00037"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhou, L., Zhang, C., and Wu, M. (2018, January 18\u201322). D-Linknet: Linknet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00034"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"613","DOI":"10.1109\/LGRS.2018.2878771","article-title":"Road Segmentation Based on Hybrid Convolutional Network for High-Resolution Visible Remote Sensing Image","volume":"16","author":"Li","year":"2019","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"450","DOI":"10.1080\/07038992.2021.1913046","article-title":"Road Extraction from UAV Images Using a Deep ResDCLnet Architecture","volume":"47","author":"Boonpook","year":"2021","journal-title":"Can. J. Remote Sens."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"9362","DOI":"10.1109\/TGRS.2019.2926397","article-title":"Multi-scale and multi-task deep learning framework for automatic road extraction","volume":"57","author":"Lu","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_19","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. 
arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018, January 18\u201322). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_21","unstructured":"Howard, A., Sandler, M., Chen, B., Wang, W., Chen, L.C., Tan, M., Chu, G., Vasudevan, V., Zhu, Y., and Pang, R. (November, January 27). Searching for MobileNetV3. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Ma, N., Zhang, X., Zheng, H.T., and Sun, J. (2018). Shufflenet V2: Practical Guidelines for Efficient Cnn Architecture Design, Springer.","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., Lin, M., and Sun, J. (2018, January 18\u201322). ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00716"},{"key":"ref_24","unstructured":"Paszke, A., Chaurasia, A., Kim, S., and Culurciello, E. (2016). ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Emara, T., Abd El Munim, H.E., and Abbas, H.M. (2019, January 2\u20134). Liteseg: A novel lightweight convnet for semantic segmentation. 
Proceedings of the 2019 Digital Image Computing: Techniques and Applications (DICTA), Perth, WA, Australia.","DOI":"10.1109\/DICTA47822.2019.8945975"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"263","DOI":"10.1109\/TITS.2017.2750080","article-title":"ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation","volume":"19","author":"Romera","year":"2018","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018, January 8\u201314). Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Mehta, S., Rastegari, M., Caspi, A., Shapiro, L., and Hajishirzi, H. (2018). ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation, Springer.","DOI":"10.1007\/978-3-030-01249-6_34"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Mehta, S., Rastegari, M., Shapiro, L., and Hajishirzi, H. (2019, January 15\u201320). ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00941"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11063-023-11145-z","article-title":"ELANet: Effective Lightweight Attention-Guided Network for Real-Time Semantic Segmentation","volume":"55","author":"Yi","year":"2023","journal-title":"Neural Process. 
Lett."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1443","DOI":"10.1109\/TCYB.2020.2970104","article-title":"MADNet: A fast and lightweight network for single-image super resolution","volume":"51","author":"Lan","year":"2020","journal-title":"IEEE Trans. Cybern."},{"key":"ref_32","unstructured":"Mnih, V. (2013). Machine Learning for Aerial Image Labeling, University of Toronto."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D., and Raska, R. (2018, January 18\u201322). DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00031"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Ran, S., Ding, J., Liu, B., Ge, X., and Ma, G. (2021). Multi-U-Net: Residual module under multisensory field and attention mechanism based optimized U-Net for VHR image semantic segmentation. Sensors, 21.","DOI":"10.3390\/s21051794"},{"key":"ref_35","first-page":"271","article-title":"A Review of Road Extraction from Remote Sensing Images","volume":"3","author":"Wang","year":"2016","journal-title":"J. Traffic Transp. Eng. Engl. Ed."},{"key":"ref_36","unstructured":"DIng, X., Guo, Y., DIng, G., and Han, J. (November, January 27). ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. 
Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015: 18th International Conference, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1856","DOI":"10.1109\/TMI.2019.2959609","article-title":"Unet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation","volume":"39","author":"Zhou","year":"2019","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Huang, H., Lin, L., Tong, R., Hu, H., Zhang, Q., Iwamoto, Y., Han, X., Chen, Y.W., and Wu, J. (2020, January 4\u20138). UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. Proceedings of the ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Ren, Y., Zhang, X., Ma, Y., Yang, Q., Wang, C., Liu, H., and Qi, Q. (2020). Full Convolutional Neural Network Based on Multi-Scale Feature Fusion for the Class Imbalance Remote Sensing Image Classification. Remote Sens., 12.","DOI":"10.3390\/rs12213547"},{"key":"ref_41","first-page":"8007205","article-title":"MACU-Net for Semantic Segmentation of Fine-Resolution Remotely Sensed Images","volume":"19","author":"Li","year":"2022","journal-title":"IEEE Geosci. Remote Sens. 
Lett."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/7\/1829\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:06:26Z","timestamp":1760123186000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/7\/1829"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,29]]},"references-count":41,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2023,4]]}},"alternative-id":["rs15071829"],"URL":"https:\/\/doi.org\/10.3390\/rs15071829","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,29]]}}}