{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,12]],"date-time":"2026-02-12T17:41:55Z","timestamp":1770918115455,"version":"3.50.1"},"reference-count":55,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2024,1,14]],"date-time":"2024-01-14T00:00:00Z","timestamp":1705190400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["2022YFE0128100"],"award-info":[{"award-number":["2022YFE0128100"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["32071681"],"award-info":[{"award-number":["32071681"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["32271877"],"award-info":[{"award-number":["32271877"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["CAFYBB2023PA003"],"award-info":[{"award-number":["CAFYBB2023PA003"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2022YFE0128100"],"award-info":[{"award-number":["2022YFE0128100"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of 
China","doi-asserted-by":"publisher","award":["32071681"],"award-info":[{"award-number":["32071681"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["32271877"],"award-info":[{"award-number":["32271877"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["CAFYBB2023PA003"],"award-info":[{"award-number":["CAFYBB2023PA003"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Foundation Research Funds of IFRIT","award":["2022YFE0128100"],"award-info":[{"award-number":["2022YFE0128100"]}]},{"name":"Foundation Research Funds of IFRIT","award":["32071681"],"award-info":[{"award-number":["32071681"]}]},{"name":"Foundation Research Funds of IFRIT","award":["32271877"],"award-info":[{"award-number":["32271877"]}]},{"name":"Foundation Research Funds of IFRIT","award":["CAFYBB2023PA003"],"award-info":[{"award-number":["CAFYBB2023PA003"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Achieving the accurate and efficient monitoring of forests at the tree level can provide detailed information for precise and scientific forest management. However, the detection of individual trees under planted forests characterized by dense distribution, serious overlap, and complicated background information is still a challenge. A new deep learning network, YOLO-DCAM, has been developed to effectively promote individual tree detection amidst complex scenes. 
YOLO-DCAM builds on the YOLOv5 network, enhancing the backbone\u2019s feature-extraction capability by incorporating deformable convolutional layers. Additionally, an efficient multi-scale attention module is integrated into the neck so that the network prioritizes tree crown features and suppresses background interference. Together, these two modules substantially improve detection performance. YOLO-DCAM achieved strong results for the detection of Chinese fir instances on a comprehensive dataset of 978 images spanning four typical planted forest scenes, with a precision of 96.1%, recall of 93.0%, F1-score of 94.5%, and AP@0.5 of 97.3%. Comparative tests showed that YOLO-DCAM offers a good balance between accuracy and efficiency relative to YOLOv5 and other advanced detection models: precision increased by 2.6%, recall by 1.6%, F1-score by 2.1%, and AP@0.5 by 1.4% over YOLOv5. Across three supplementary plots, YOLO-DCAM also demonstrated consistently strong robustness. These results illustrate the effectiveness of YOLO-DCAM for detecting individual trees in complex plantation environments. 
This study can serve as a reference for utilizing UAV-based RGB imagery to precisely detect individual trees, offering valuable implications for forest practical applications.<\/jats:p>","DOI":"10.3390\/rs16020335","type":"journal-article","created":{"date-parts":[[2024,1,15]],"date-time":"2024-01-15T04:54:38Z","timestamp":1705294478000},"page":"335","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["Tree-Level Chinese Fir Detection Using UAV RGB Imagery and YOLO-DCAM"],"prefix":"10.3390","volume":"16","author":[{"given":"Jiansen","family":"Wang","sequence":"first","affiliation":[{"name":"Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China"},{"name":"Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3874-5326","authenticated-orcid":false,"given":"Huaiqing","family":"Zhang","sequence":"additional","affiliation":[{"name":"Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China"},{"name":"Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2554-1770","authenticated-orcid":false,"given":"Yang","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China"},{"name":"Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-1047-3196","authenticated-orcid":false,"given":"Huacong","family":"Zhang","sequence":"additional","affiliation":[{"name":"Institute of Forest Resource Information Techniques, Chinese Academy of 
Forestry, Beijing 100091, China"},{"name":"Key Laboratory of Forestry Remote Sensing and Information System, National Forestry and Grassland Administration, Beijing 100091, China"},{"name":"Experimental Center of Subtropical Forestry, Chinese Academy of Forestry, Xinyu 336600, China"}]},{"given":"Dongping","family":"Zheng","sequence":"additional","affiliation":[{"name":"Department of Second Language Studies, University of Hawai\u2018i at M\u0101noa, 1890 East-West Road, Honolulu, HI 96822, USA"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,14]]},"reference":[{"key":"ref_1","unstructured":"FAO (2022). The State of the World\u2019s Forests 2022: Forest Pathways for Green Recovery and Building Inclusive, Resilient and Sustainable Economies, FAO."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"4206","DOI":"10.1038\/s41467-022-31380-7","article-title":"Rates and drivers of aboveground carbon accumulation in global monoculture plantation forests","volume":"13","author":"Bukoski","year":"2022","journal-title":"Nat. Commun."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"106677","DOI":"10.1016\/j.landusepol.2023.106677","article-title":"Plantation forestry: Carbon and climate impacts","volume":"130","author":"Smyth","year":"2023","journal-title":"Land Use Policy"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1016\/j.foreco.2015.06.021","article-title":"Changes in planted forests and future global implications","volume":"352","author":"Payn","year":"2015","journal-title":"For. Ecol. Manag."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"160482","DOI":"10.1016\/j.scitotenv.2022.160482","article-title":"A framework for precisely thinning planning in a managed pure Chinese fir forest based on UAV remote sensing","volume":"860","author":"Zhou","year":"2023","journal-title":"Sci. 
Total Environ."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"156","DOI":"10.1016\/j.isprsjprs.2020.08.005","article-title":"Detecting and mapping tree seedlings in UAV imagery using convolutional neural networks and field-verified data","volume":"168","author":"Pearse","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"5211","DOI":"10.1080\/01431161.2018.1486519","article-title":"Comparison of ALS- and UAV(SfM)-derived high-density point clouds for individual tree detection in Eucalyptus plantations","volume":"39","author":"Cosenza","year":"2018","journal-title":"Int. J. Remote Sens."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Fu, H., Li, H., Dong, Y., Xu, F., and Chen, F. (2022). Segmenting individual tree from TLS point clouds using improved DBSCAN. Forests, 13.","DOI":"10.3390\/f13040566"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"19","DOI":"10.1007\/s40725-017-0051-6","article-title":"Individual tree crown methods for 3d data from remote sensing","volume":"3","author":"Lindberg","year":"2017","journal-title":"Curr. For. Rep."},{"key":"ref_10","first-page":"102946","article-title":"Automatic detection of snow breakage at single tree level using YOLOv5 applied to UAV imagery","volume":"112","author":"Puliti","year":"2022","journal-title":"Int. J. Appl. Earth Obs. Geoinf."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"362","DOI":"10.1016\/j.isprsjprs.2018.09.013","article-title":"Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images","volume":"145","author":"Wagner","year":"2018","journal-title":"ISPRS J. Photogramm. 
Remote Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"112397","DOI":"10.1016\/j.rse.2021.112397","article-title":"Individual tree crown detection from high spatial resolution imagery using a revised local maximum filtering","volume":"258","author":"Xu","year":"2021","journal-title":"Remote Sens. Environ."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"113143","DOI":"10.1016\/j.rse.2022.113143","article-title":"Individual tree segmentation and tree species classification in subtropical broadleaf forests using UAV-based lidar, hyperspectral, and ultrahigh-resolution RGB data","volume":"280","author":"Qin","year":"2022","journal-title":"Remote Sens. Environ."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2022.3211202","article-title":"Individual tree crown delineation from UAS imagery based on region growing by over-segments with a competitive mechanism","volume":"60","author":"Gu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"7356","DOI":"10.1080\/01431161.2018.1513669","article-title":"Automatic detection of individual oil palm trees from UAV images using HOG features and an SVM classifier","volume":"40","author":"Wang","year":"2018","journal-title":"Int. J. Remote Sens."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Yu, K., Hao, Z., Post, C.J., Mikhailova, E.A., Lin, L., Zhao, G., Tian, S., and Liu, J. (2022). Comparison of classical methods and Mask R-CNN for automatic tree detection and mapping using UAV imagery. Remote Sens., 14.","DOI":"10.3390\/rs14020295"},{"key":"ref_18","unstructured":"Dai, J., Li, Y., He, K., and Sun, J. (2016). R-fcn: Object detection via region-based fully convolutional networks. 
arXiv."},{"key":"ref_19","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A.C. (2016, January 11\u201314). Ssd: Single shot multibox detector. Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands. Proceedings, Part I 14, 2016.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Santos, A.A.D., Marcato Junior, J., Ara\u00fajo, M.S., Di Martini, D.R., Tetila, E.C., Siqueira, H.L., Aoki, C., Eltner, A., Matsubara, E.T., and Pistori, H. (2019). Assessment of CNN-based methods for individual tree detection on images captured by RGB cameras attached to UAVs. Sensors, 19.","DOI":"10.3390\/s19163595"},{"key":"ref_23","first-page":"102662","article-title":"Counting trees in a subtropical mega city using the instance segmentation method","volume":"106","author":"Sun","year":"2022","journal-title":"Int. J. Appl. Earth Obs. Geoinf."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Mo, J., Lan, Y., Yang, D., Wen, F., Qiu, H., Chen, X., and Deng, X. (2021). Deep learning-based instance segmentation method of Litchi canopy from UAV-acquired images. Remote Sens., 13.","DOI":"10.3390\/rs13193919"},{"key":"ref_25","unstructured":"Jiang, P.Y., Ergu, D., Liu, F.Y., Cai, Y., and Ma, B. (2021, January 9\u201311). A review of yolo algorithm developments. 
Proceedings of the 8th International Conference on Information Technology and Quantitative Management (ITQM)\u2014Developing Global Digital Economy after COVID-19, Chengdu, China."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_27","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_28","unstructured":"Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y.M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv."},{"key":"ref_29","unstructured":"Ultralytics (2023, March 04). YOLOv5. Available online: https:\/\/github.com\/ultralytics\/yolov5."},{"key":"ref_30","unstructured":"Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., and Nie, W. (2022). YOLOv6: A single-stage object detection framework for industrial applications. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Wang, C.-Y., Bochkovskiy, A., and Liao, H.-Y.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors 2022. arXiv.","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"227","DOI":"10.1007\/s11676-021-01328-6","article-title":"Measuring loblolly pine crowns with drone imagery through deep learning","volume":"33","author":"Lou","year":"2022","journal-title":"J. For. Res."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"781","DOI":"10.1080\/17538947.2023.2173318","article-title":"An object detection method for bayberry trees based on an improved YOLO algorithm","volume":"16","author":"Chen","year":"2023","journal-title":"Int. J. Digit. 
Earth"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Dong, C., Cai, C., Chen, S., Xu, H., Yang, L., Ji, J., Huang, S., Hung, I.-K., Weng, Y., and Lou, X. (2023). Crown width extraction of Metasequoia Glyptostroboides using improved YOLOv7 based on UAV images. Drones, 7.","DOI":"10.3390\/drones7060336"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Wardana, D.P.T., Sianturi, R.S., and Fatwa, R. (2023, January 24\u201325). Detection of oil palm trees using deep learning method with high-resolution aerial image data. Proceedings of the 8th International Conference on Sustainable Information Engineering and Technology, Bali, Indonesia.","DOI":"10.1145\/3626641.3626667"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Xue, Z., Lin, H., and Wang, F. (2022). A small target forest fire detection model based on YOLOv5 improvement. Forests, 13.","DOI":"10.3390\/f13081332"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Qin, B., Sun, F., Shen, W., Dong, B., Ma, S., Huo, X., and Lan, P. (2023). Deep learning-based pine nematode trees\u2019 identification using multispectral and visible UAV imagery. Drones, 7.","DOI":"10.3390\/drones7030183"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Moharram, D., Yuan, X., and Li, D. (2023). Tree seedlings detection and counting using a deep learning algorithm. Appl. Sci., 13.","DOI":"10.3390\/app13020895"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"106560","DOI":"10.1016\/j.compag.2021.106560","article-title":"Deep neural network based date palm tree detection in drone imagery","volume":"192","author":"Jintasuttisak","year":"2022","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"149","DOI":"10.1007\/s40725-023-00184-3","article-title":"A systematic review of individual tree crown detection and delineation with convolutional neural networks (cnn)","volume":"9","author":"Zhao","year":"2023","journal-title":"Curr. For. Rep."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"100396","DOI":"10.1016\/j.tfp.2023.100396","article-title":"Long-term effects of planting density and site quality on timber assortment structure based on a 41-year plantation trial of Chinese fir","volume":"12","author":"Li","year":"2023","journal-title":"Trees For. People"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Mark Liao, H.Y., Wu, Y.H., Chen, P.Y., Hsieh, J.W., and Yeh, I.H. (2020, January 14\u201319). CSPNet: A new backbone that can enhance learning capability of cnn. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2020), Washington, DC, USA.","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Wang, K., Liew, J.H., Zou, Y., Zhou, D., and Feng, J. (2019, January 20\u201326). Panet: Few-shot image semantic segmentation with prototype alignment. Proceedings of the IEEE International Conference on Computer Vision (ICCV 2019), Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00929"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhu, X., Hu, H., Lin, S., and Dai, J. (2019, January 16\u201320). Deformable convnets v2: More deformable, better results. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00953"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Ouyang, D., He, S., Zhang, G., Luo, M., Guo, H., Zhan, J., and Huang, Z. (2023, January 4\u20139). Efficient multi-scale attention module with cross-spatial learning. 
Proceedings of the ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece.","DOI":"10.1109\/ICASSP49357.2023.10096516"},{"key":"ref_46","unstructured":"Lv, W., Xu, S., Zhao, Y., Wang, G., Wei, J., Cui, C., Du, Y., Dang, Q., and Liu, Y. (2023). Detrs beat yolos on real-time object detection. arXiv."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Chen, X., Yu, K., Yu, S., Hu, Z., Tan, H., Chen, Y., Huang, X., and Liu, J. (2023). Study on single-tree segmentation of Chinese fir plantations using coupled local maximum and height-weighted improved k-means algorithm. Forests, 14.","DOI":"10.3390\/f14112130"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Gan, Y., Wang, Q., and Iio, A. (2023). Tree crown detection and delineation in a temperate deciduous forest from UAV RGB imagery using deep learning approaches: Effects of spatial resolution and species characteristics. Remote Sens., 15.","DOI":"10.3390\/rs15030778"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"107098","DOI":"10.1016\/j.compag.2022.107098","article-title":"A lightweight dead fish detection method based on deformable convolution and YOLOV4","volume":"198","author":"Zhao","year":"2022","journal-title":"Comput. Electron. Agric."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"924","DOI":"10.1080\/01431161.2023.2173030","article-title":"SAR image near-shore ship target detection method in complex background","volume":"44","author":"Li","year":"2023","journal-title":"Int. J. Remote Sens."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"48","DOI":"10.1016\/j.neucom.2021.03.091","article-title":"A review on the attention mechanism of deep learning","volume":"452","author":"Niu","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Beloiu, M., Heinzmann, L., Rehush, N., Gessler, A., and Griess, V.C. (2023). 
Individual tree-crown detection and species identification in heterogeneous forests using aerial rgb imagery and deep learning. Remote Sens., 15.","DOI":"10.3390\/rs15051463"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1093\/forestry\/cpr051","article-title":"Comparative testing of single-tree detection algorithms under different types of forest","volume":"85","author":"Vauhkonen","year":"2012","journal-title":"Forestry"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"453","DOI":"10.3390\/s120100453","article-title":"Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera","volume":"12","author":"Rosnell","year":"2012","journal-title":"Sensors"},{"key":"ref_55","first-page":"102686","article-title":"Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach","volume":"107","author":"Li","year":"2022","journal-title":"Int. J. Appl. Earth Obs. Geoinf."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/2\/335\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:46:37Z","timestamp":1760103997000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/2\/335"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,14]]},"references-count":55,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["rs16020335"],"URL":"https:\/\/doi.org\/10.3390\/rs16020335","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,14]]}}}