{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,31]],"date-time":"2026-01-31T11:59:20Z","timestamp":1769860760923,"version":"3.49.0"},"reference-count":56,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2024,1,7]],"date-time":"2024-01-07T00:00:00Z","timestamp":1704585600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["42271090"],"award-info":[{"award-number":["42271090"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["31-Y30F09-9001-20\/22"],"award-info":[{"award-number":["31-Y30F09-9001-20\/22"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["CEAIEF20230202"],"award-info":[{"award-number":["CEAIEF20230202"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["CEAIEF2022050504"],"award-info":[{"award-number":["CEAIEF2022050504"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"National High-Resolution Earth Observation Major Project","award":["42271090"],"award-info":[{"award-number":["42271090"]}]},{"name":"National High-Resolution Earth Observation Major Project","award":["31-Y30F09-9001-20\/22"],"award-info":[{"award-number":["31-Y30F09-9001-20\/22"]}]},{"name":"National High-Resolution Earth Observation Major 
Project","award":["CEAIEF20230202"],"award-info":[{"award-number":["CEAIEF20230202"]}]},{"name":"National High-Resolution Earth Observation Major Project","award":["CEAIEF2022050504"],"award-info":[{"award-number":["CEAIEF2022050504"]}]},{"name":"Fundamental Research Funds of the Institute of Earthquake Forecasting, China Earthquake Administration","award":["42271090"],"award-info":[{"award-number":["42271090"]}]},{"name":"Fundamental Research Funds of the Institute of Earthquake Forecasting, China Earthquake Administration","award":["31-Y30F09-9001-20\/22"],"award-info":[{"award-number":["31-Y30F09-9001-20\/22"]}]},{"name":"Fundamental Research Funds of the Institute of Earthquake Forecasting, China Earthquake Administration","award":["CEAIEF20230202"],"award-info":[{"award-number":["CEAIEF20230202"]}]},{"name":"Fundamental Research Funds of the Institute of Earthquake Forecasting, China Earthquake Administration","award":["CEAIEF2022050504"],"award-info":[{"award-number":["CEAIEF2022050504"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Automatic extraction of building contours from high-resolution images is of great significance in the fields of urban planning, demographics, and disaster assessment. Network models based on convolutional neural network (CNN) and transformer technology have been widely used for semantic segmentation of buildings from high resolution remote sensing images (HRSI). However, the fixed geometric structure and the local receptive field of the convolutional kernel are not good at global feature extraction, and the transformer technique with self-attention mechanism introduces computational redundancies and extracts local feature details poorly in the process of modeling the global contextual information. In this paper, a dual-branch fused reconstructive transformer network, DFRTNet, is proposed for efficient and accurate building extraction. 
In the encoder, the traditional transformer is reconfigured by designing the local and global feature extraction module (LGFE); the branch of global feature extraction (GFE) performs dynamic range attention (DRA) based on the idea of top-k attention for extracting global features; furthermore, the branch of local feature extraction (LFE) is used to obtain fine-grained features. The multilayer perceptron (MLP) is employed to efficiently fuse the local and global features. In the decoder, a simple channel attention module (CAM) is used in the up-sampling part to enhance channel dimension features. Our network achieved the best segmentation accuracy on both the WHU and Massachusetts building datasets when compared to other mainstream and state-of-the-art methods.<\/jats:p>","DOI":"10.3390\/s24020365","type":"journal-article","created":{"date-parts":[[2024,1,8]],"date-time":"2024-01-08T06:12:58Z","timestamp":1704694378000},"page":"365","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["A Dual-Branch Fusion Network Based on Reconstructed Transformer for Building Extraction in Remote Sensing Imagery"],"prefix":"10.3390","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0009-0003-4088-3047","authenticated-orcid":false,"given":"Yitong","family":"Wang","sequence":"first","affiliation":[{"name":"Institute of Earthquake Forecasting, China Earthquake Administration, Beijing 100036, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8859-0716","authenticated-orcid":false,"given":"Shumin","family":"Wang","sequence":"additional","affiliation":[{"name":"Institute of Earthquake Forecasting, China Earthquake Administration, Beijing 100036, China"}]},{"given":"Aixia","family":"Dou","sequence":"additional","affiliation":[{"name":"Institute of Earthquake Forecasting, China Earthquake Administration, Beijing 100036, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,7]]},"reference":[{"key":"ref_1","first-page":"5620811","article-title":"Deep Covariance Alignment for Domain Adaptive Remote Sensing Image Segmentation","volume":"60","author":"Wu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"895","DOI":"10.1109\/LGRS.2020.2986380","article-title":"Capsule Feature Pyramid Network for Building Footprint Extraction From High-Resolution Aerial Imagery","volume":"18","author":"Yu","year":"2021","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1125","DOI":"10.1080\/15481603.2020.1847453","article-title":"Multi-Scale Three-Dimensional Detection of Urban Buildings Using Aerial LiDAR Data","volume":"57","author":"Cao","year":"2020","journal-title":"GIScience Remote Sens."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"7313","DOI":"10.1109\/ACCESS.2020.2964043","article-title":"Automatic Building Extraction from High-Resolution Aerial Imagery via Fully Convolutional Encoder-Decoder Network with Non-Local Block","volume":"8","author":"Wang","year":"2020","journal-title":"IEEE Access"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"749","DOI":"10.1080\/15481603.2018.1564499","article-title":"Semantic Segmentation of High Spatial Resolution Images with Deep Neural Networks","volume":"56","author":"Yang","year":"2019","journal-title":"GIScience Remote Sens."},{"key":"ref_6","first-page":"102768","article-title":"Multi-Scale Attention Integrated Hierarchical Networks for High-Resolution Building Footprint Extraction","volume":"109","author":"Liu","year":"2022","journal-title":"Int. J. Appl. Earth Obs. Geoinf."},{"key":"ref_7","first-page":"4402214","article-title":"Gated Spatial Memory and Centroid-Aware Network for Building Instance Extraction","volume":"60","author":"Xu","year":"2022","journal-title":"IEEE Trans. 
Geosci. Remote Sens."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Shao, Z., Tang, P., Wang, Z., Saleem, N., Yam, S., and Sommai, C. (2020). BRRNet: A Fully Convolutional Neural Network for Automatic Building Extraction From High-Resolution Remote Sensing Images. Remote Sens., 12.","DOI":"10.3390\/rs12061050"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2178","DOI":"10.1109\/TGRS.2019.2954461","article-title":"Toward Automatic Building Footprint Delineation From Aerial Images Using CNN and Regularization","volume":"58","author":"Wei","year":"2020","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"9454","DOI":"10.1109\/TPAMI.2023.3243048","article-title":"Conformer: Local Features Coupling Global Representations for Recognition and Detection","volume":"45","author":"Peng","year":"2023","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yi, Y., Zhang, Z., Zhang, W., Zhang, C., Li, W., and Zhao, T. (2019). Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network. Remote Sens., 11.","DOI":"10.3390\/rs11151774"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"6169","DOI":"10.1109\/TGRS.2020.3026051","article-title":"MAP-Net: Multiple Attending Path Neural Network for Building Footprint Extraction From Remote Sensed Imagery","volume":"59","author":"Zhu","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"6608","DOI":"10.1109\/JSTARS.2021.3076085","article-title":"Fine Building Segmentation in High-Resolution SAR Images Via Selective Pyramid Dilated Network","volume":"14","author":"Jing","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. 
Remote Sens."},{"key":"ref_14","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention Is All You Need. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","article-title":"A Survey on Visual Transformer","volume":"45","author":"Han","year":"2023","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_17","first-page":"5625711","article-title":"Building Extraction with Vision Transformer","volume":"60","author":"Wang","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Chen, K., Zou, Z., and Shi, Z. (2021). Building Extraction from Remote Sensing Images with Sparse Token Transformers. Remote Sens., 13.","DOI":"10.3390\/rs13214441"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Xu, Z., Zhang, W., Zhang, T., Yang, Z., and Li, J. (2021). Efficient Transformer for Remote Sensing Image Segmentation. Remote Sens., 13.","DOI":"10.3390\/rs13183585"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Aleissaee, A.A., Kumar, A., Anwer, R.M., Khan, S., Cholakkal, H., Xia, G.-S., and Khan, F.S. (2023). Transformers in Remote Sensing: A Survey. Remote Sens., 15.","DOI":"10.3390\/rs15071860"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Wang, H., Chen, X., Zhang, T., Xu, Z., and Li, J. (2022). 
CCTNet: Coupled CNN and Transformer Network for Crop Segmentation of Remote Sensing Images. Remote Sens., 14.","DOI":"10.3390\/rs14091956"},{"key":"ref_22","first-page":"12077","article-title":"SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers","volume":"Volume 34","author":"Xie","year":"2021","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Cui, Y., Jiang, C., Wang, L., and Wu, G. (2022, January 18\u201324). MixFormer: End-to-End Tracking with Iterative Mixed Attention. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01324"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Karlinsky, L., Michaeli, T., and Nishino, K. (2022, January 23\u201327). Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation. Proceedings of the Computer Vision\u2014ECCV 2022 Workshops, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-25063-7"},{"key":"ref_25","unstructured":"Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., and Zhou, Y. (2023, November 05). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. Available online: https:\/\/arxiv.org\/abs\/2102.04306v1."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y., Wang, X., Feng, J., and Yan, S. (2022, January 18\u201324). MetaFormer Is Actually What You Need for Vision. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01055"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully Convolutional Networks for Semantic Segmentation. 
Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Navab, N., Hornegger, J., Wells, W.M., and Frangi, A.F. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015, Munich, Germany.","DOI":"10.1007\/978-3-319-24553-9"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_30","unstructured":"Simonyan, K., and Zisserman, A. (2023, November 05). Very Deep Convolutional Networks for Large-Scale Image Recognition. Available online: https:\/\/arxiv.org\/abs\/1409.1556v6."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"3349","DOI":"10.1109\/TPAMI.2020.2983686","article-title":"Deep High-Resolution Representation Learning for Visual Recognition","volume":"43","author":"Wang","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs","volume":"40","author":"Chen","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"5367","DOI":"10.1109\/TGRS.2020.2964675","article-title":"Semantic Segmentation of Large-Size VHR Remote Sensing Images Using a Two-Stage Multiscale Training Architecture","volume":"58","author":"Ding","year":"2020","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_34","unstructured":"Chen, L.-C., Papandreou, G., Schroff, F., and Adam, H. (2023, November 05). Rethinking Atrous Convolution for Semantic Image Segmentation. Available online: https:\/\/arxiv.org\/abs\/1706.05587v3."},{"key":"ref_35","unstructured":"Ferrari, V., Hebert, M., Sminchisescu, C., and Weiss, Y. (2018, January 8\u201314). Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Proceedings of the Computer Vision\u2014ECCV 2018, Munich, Germany."},{"key":"ref_36","unstructured":"Xiao, T., Singh, M., Mintun, E., Darrell, T., Doll\u00e1r, P., and Girshick, R. (2023, November 05). Early Convolutions Help Transformers See Better. Available online: https:\/\/arxiv.org\/abs\/2106.14881v3."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Li, T., Wang, C., Wu, F., Zhang, H., Zhang, B., and Xu, L. (2022, January 17\u201322). Built-Up Area Extraction From GF-3 Image Based on an Improved Transformer Model. Proceedings of the IGARSS 2022\u20132022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia.","DOI":"10.1109\/IGARSS46834.2022.9884924"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022, January 18\u201324). A ConvNet for the 2020s. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. (2021, January 11\u201317). Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. 
Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"ref_40","first-page":"2503605","article-title":"Multiscale Feature Learning by Transformer for Building Extraction From Satellite Images","volume":"19","author":"Chen","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"10990","DOI":"10.1109\/JSTARS.2021.3119654","article-title":"STransFuse: Fusing Swin Transformer and Convolutional Neural Network for Remote Sensing Image Semantic Segmentation","volume":"14","author":"Gao","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_42","unstructured":"Beltagy, I., Peters, M.E., and Cohan, A. (2023, November 05). Longformer: The Long-Document Transformer. Available online: https:\/\/arxiv.org\/abs\/2004.05150v2."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Yuan, W., Zhang, X., Shi, J., and Wang, J. (2023). LiteST-Net: A Hybrid Model of Lite Swin Transformer and Convolution for Building Extraction from Remote Sensing Image. Remote Sens., 15.","DOI":"10.3390\/rs15081996"},{"key":"ref_44","first-page":"4408820","article-title":"Transformer and CNN Hybrid Deep Neural Network for Semantic Segmentation of Very-High-Resolution Remote Sensing Imagery","volume":"60","author":"Zhang","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Liu, H., and Hu, Q. (October, January 27). TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation. Proceedings of the Medical Image Computing and Computer Assisted Intervention\u2014MICCAI 2021, Strasbourg, France.","DOI":"10.1007\/978-3-030-87193-2_2"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017, January 21\u201326). Pyramid Scene Parsing Network. 
Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.660"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Ferrari, V., Hebert, M., Sminchisescu, C., and Weiss, Y. (2018, January 8\u201314). Unified Perceptual Parsing for Scene Understanding. Proceedings of the Computer Vision\u2014ECCV 2018, Munich, Germany.","DOI":"10.1007\/978-3-030-01264-9"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Touvron, H., Cord, M., El-Nouby, A., Bojanowski, P., Joulin, A., Synnaeve, G., and J\u00e9gou, H. (2023, November 05). Augmenting Convolutional Networks with Attention-Based Aggregation. Available online: https:\/\/arxiv.org\/abs\/2112.13692v1.","DOI":"10.1109\/TPAMI.2022.3206148"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_50","unstructured":"Shi, W., Caballero, J., Husz\u00e1r, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., and Wang, Z. (2023, November 05). Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. Available online: https:\/\/arxiv.org\/abs\/1609.05158v2."},{"key":"ref_51","unstructured":"Pan, Z., Cai, J., and Zhuang, B. (2023, November 05). Fast Vision Transformers with HiLo Attention. Available online: https:\/\/arxiv.org\/abs\/2205.13213v5."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Ye, Z., Fu, Y., Gan, M., Deng, J., Comber, A., and Wang, K. (2019). Building Extraction from Very High Resolution Aerial Imagery Using Joint Attention Deep Neural Network. Remote Sens., 11.","DOI":"10.3390\/rs11242970"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Milletari, F., Navab, N., and Ahmadi, S.-A. 
(2016, January 25\u201328). V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.79"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"574","DOI":"10.1109\/TGRS.2018.2858817","article-title":"Fully Convolutional Networks for Multisource Building Extraction From an Open Aerial and Satellite Imagery Data Set","volume":"57","author":"Ji","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_55","unstructured":"Mnih, V. (2013). Machine Learning for Aerial Image Labeling. [Ph.D. Thesis, University of Toronto]."},{"key":"ref_56","unstructured":"Loshchilov, I., and Hutter, F. (2023, November 05). Decoupled Weight Decay Regularization. Available online: https:\/\/arxiv.org\/abs\/1711.05101v3."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/2\/365\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:41:45Z","timestamp":1760103705000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/2\/365"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,7]]},"references-count":56,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["s24020365"],"URL":"https:\/\/doi.org\/10.3390\/s24020365","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,7]]}}}