{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:11:58Z","timestamp":1760148718143,"version":"build-2065373602"},"reference-count":63,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2023,5,24]],"date-time":"2023-05-24T00:00:00Z","timestamp":1684886400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100012190","name":"Ministry of Science and Higher Education of the Russian Federation","doi-asserted-by":"publisher","award":["No. 075-15-2022-311 dated 20 April 2022"],"award-info":[{"award-number":["No. 075-15-2022-311 dated 20 April 2022"]}],"id":[{"id":"10.13039\/501100012190","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>At the present time, many publicly available point cloud datasets exist, which are mainly focused on autonomous driving. The objective of this study is to develop a new large-scale mobile 3D LiDAR point cloud dataset for outdoor scene semantic segmentation tasks, which has a classification scheme suitable for geospatial applications. Our dataset (Saint Petersburg 3D) contains both real-world (34 million points) and synthetic (34 million points) subsets that were acquired using real and virtual sensors with the same characteristics. An original classification scheme is proposed that contains a set of 10 universal object categories into which any scene represented by dense outdoor mobile LiDAR point clouds can be divided. The evaluation procedure for semantic segmentation of point clouds for geospatial applications is described. An experiment with the Kernel Point Fully Convolution Neural Network model trained on the proposed dataset was carried out. We obtained an overall 92.56% mIoU, which demonstrates the high efficiency of using deep learning models for point cloud semantic segmentation for geospatial applications in accordance with the proposed classification scheme.<\/jats:p>","DOI":"10.3390\/rs15112735","type":"journal-article","created":{"date-parts":[[2023,5,25]],"date-time":"2023-05-25T02:00:55Z","timestamp":1684980055000},"page":"2735","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Saint Petersburg 3D: Creating a Large-Scale Hybrid Mobile LiDAR Point Cloud Dataset for Geospatial Applications"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0823-0460","authenticated-orcid":false,"given":"Sergey","family":"Lytkin","sequence":"first","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3054-1786","authenticated-orcid":false,"given":"Vladimir","family":"Badenko","sequence":"additional","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7511-8076","authenticated-orcid":false,"given":"Alexander","family":"Fedotov","sequence":"additional","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9339-0316","authenticated-orcid":false,"given":"Konstantin","family":"Vinogradov","sequence":"additional","affiliation":[{"name":"Department of Cartography and Geoinformatics, Institute of Earth Sciences, Saint Petersburg State University, St. Petersburg 199034, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7737-6768","authenticated-orcid":false,"given":"Anton","family":"Chervak","sequence":"additional","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9606-8461","authenticated-orcid":false,"given":"Yevgeny","family":"Milanov","sequence":"additional","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]},{"given":"Dmitry","family":"Zotov","sequence":"additional","affiliation":[{"name":"Laboratory \u201cModeling of Technological Processes and Design of Power Equipment\u201d, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia"}]}],"member":"1968","published-online":{"date-parts":[[2023,5,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Imdad, U., Asif, M., Ahmad, M.T., Sohaib, O., Hanif, M.K., and Chaudary, M.H. (2019). Three dimensional point cloud compression and decompression using polynomials of degree one. Symmetry, 11.","DOI":"10.3390\/sym11020209"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2309","DOI":"10.1007\/s00170-021-07286-x","article-title":"Method for clustering and identification of objects in laser scanning point clouds using dynamic logic","volume":"117","author":"Milanov","year":"2021","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"384","DOI":"10.1016\/j.trc.2018.02.012","article-title":"Autonomous vehicle perception: The technology of today and tomorrow","volume":"89","author":"Gruyer","year":"2018","journal-title":"Transp. Res. Part C Emerg. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Cheng, L., Chen, S., Liu, X., Xu, H., Wu, Y., Li, M., and Chen, Y. (2018). Registration of laser scanning point clouds: A review. Sensors, 18.","DOI":"10.3390\/s18051641"},{"key":"ref_5","first-page":"012062","article-title":"Features of information modeling of cultural heritage objects","volume":"890","author":"Badenko","year":"2020","journal-title":"IOP Conf. Ser.-Mat. Sci."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"012044","DOI":"10.1088\/1742-6596\/1118\/1\/012044","article-title":"Calibration of digital non-metric cameras for measuring works","volume":"1118","author":"Valkov","year":"2018","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Bi, S., Yuan, C., Liu, C., Cheng, J., Wang, W., and Cai, Y. (2021). A survey of low-cost 3D laser scanning technology. Appl. Sci., 11.","DOI":"10.3390\/app11093938"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Che, E., Jung, J., and Olsen, M.J. (2019). Object recognition, segmentation, and classification of mobile laser scanning point clouds: A state of the art review. Sensors, 19.","DOI":"10.3390\/s19040810"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2942","DOI":"10.1109\/LRA.2018.2848308","article-title":"Integrating deep semantic segmentation into 3-d point cloud registration","volume":"3","author":"Zaganidis","year":"2018","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"797","DOI":"10.1080\/15481603.2020.1804248","article-title":"Mapping 3D visibility in an urban street environment from mobile LiDAR point clouds","volume":"57","author":"Zhao","year":"2020","journal-title":"GIScience Remote Sens."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2127","DOI":"10.1016\/j.measurement.2013.03.006","article-title":"Review of mobile mapping and surveying technologies","volume":"46","author":"Puente","year":"2013","journal-title":"Measurement"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Deschaud, J.E., Duque, D., Richa, J.P., Velasco-Forero, S., Marcotegui, B., and Goulette, F. (2021). Paris-CARLA-3D: A real and synthetic outdoor point cloud dataset for challenging tasks in 3D mapping. Remote Sens., 13.","DOI":"10.3390\/rs13224713"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Hackel, T., Savinov, N., Ladicky, L., Wegner, J.D., Schindler, K., and Pollefeys, M. (2017). Semantic3d. net: A new large-scale point cloud classification benchmark. arXiv.","DOI":"10.5194\/isprs-annals-IV-1-W1-91-2017"},{"key":"ref_14","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Alaba, S.Y., and Ball, J.E. (2022). A survey on deep-learning-based lidar 3d object detection for autonomous driving. Sensors, 22.","DOI":"10.36227\/techrxiv.20442858.v3"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"095106","DOI":"10.1088\/1361-6501\/abead3","article-title":"Point cloud segmentation based on Euclidean clustering and multi-plane extraction in rugged field","volume":"32","author":"Liu","year":"2021","journal-title":"Meas. Sci. Technol."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"179118","DOI":"10.1109\/ACCESS.2019.2958671","article-title":"A review of deep learning-based semantic segmentation for point cloud","volume":"7","author":"Zhang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"7068349","DOI":"10.1155\/2018\/7068349","article-title":"Deep learning for computer vision: A brief review","volume":"2018","author":"Voulodimos","year":"2018","journal-title":"Comput. Intell. Neurosci."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Wu, J., Jiao, J., Yang, Q., Zha, Z.J., and Chen, X. (2019, January 21\u201325). Ground-aware point cloud semantic segmentation for autonomous driving. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3351076"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","article-title":"The pascal visual object classes (voc) challenge","volume":"88","author":"Everingham","year":"2010","journal-title":"Int. J. Comput. Vis."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"317","DOI":"10.1016\/j.quaint.2020.07.039","article-title":"Potential of airborne LiDAR data for terrain parameters extraction","volume":"575","author":"Sharma","year":"2021","journal-title":"Quat. Int."},{"key":"ref_24","unstructured":"Thomas, H. (2019). Learning New Representations for 3D Point Cloud Semantic Segmentation. [Ph.D. Thesis, Universit\u00e9 Paris Sciences et Lettres]."},{"key":"ref_25","unstructured":"Thomas, H., Qi, C.R., Deschaud, J.E., Marcotegui, B., Goulette, F., and Guibas, L.J. (November, January 27). Kpconv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Li, X., Li, C., Tong, Z., Lim, A., Yuan, J., Wu, Y., Tang, J., and Huang, R. (2020, January 12\u201316). Campus3d: A photogrammetry point cloud benchmark for hierarchical understanding of outdoor scene. Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA.","DOI":"10.1145\/3394171.3413661"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Armeni, I., Sener, O., Zamir, A.R., Jiang, H., Brilakis, I., Fischer, M., and Savarese, S. (2016, January 27\u201330). 3D semantic parsing of large-scale indoor spaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.170"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Varney, N., Asari, V.K., and Graehling, Q. (2020, January 14\u201319). DALES: A large-scale aerial LiDAR data set for semantic segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00101"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2020, January 13\u201319). nuScenes: A multimodal dataset for autonomous driving. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01164"},{"key":"ref_30","unstructured":"Geyer, J., Kassahun, Y., Mahmudi, M., Ricou, X., Durgesh, R., Chung, A.S., Hauswald, L., Pham, V.H., M\u00fchlegg, M., and Dorn, S. (2020). A2d2: Audi autonomous driving dataset. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Geiger, A., Lenz, P., and Urtasun, R. (2012, January 16\u201321). Are we ready for autonomous driving? the kitti vision benchmark suite. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA.","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Huang, X., Cheng, X., Geng, Q., Cao, B., Zhou, D., Wang, P., Lin, Y., and Yang, R. (2018, January 18\u201322). The apolloscape dataset for autonomous driving. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00141"},{"key":"ref_33","unstructured":"Houston, J., Zuidhof, G., Bergamini, L., Ye, Y., Chen, L., Jain, A., Omari, S., Iglovikov, V., and Ondruska, P. (2021, January 8\u201311). One thousand and one hours: Self-driving motion prediction dataset. Proceedings of the Conference on Robot Learning, London, UK."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., and Caine, B. (2020, January 13\u201319). Scalability in perception for autonomous driving: Waymo open dataset. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00252"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Munoz, D., Bagnell, J.A., Vandapel, N., and Hebert, M. (2009, January 20\u201325). Contextual classification with functional max-margin markov networks. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206590"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1149","DOI":"10.1016\/S0957-4158(03)00047-3","article-title":"Perception for collision avoidance and autonomous driving","volume":"13","author":"Gowdy","year":"2003","journal-title":"Mechatronics"},{"key":"ref_37","unstructured":"Serna, A., Marcotegui, B., Goulette, F., and Deschaud, J.E. (2014, January 6\u20138). Paris-rue-Madame database: A 3D mobile laser scanner dataset for benchmarking urban detection, segmentation and classification methods. Proceedings of the ICPRAM 2014\u20143rd International Conference on Pattern Recognition Applications and Methods, Loire Valley, France."},{"key":"ref_38","first-page":"78","article-title":"An integrated on-board laser range sensing system for on-the-way city and road modelling","volume":"34","author":"Goulette","year":"2006","journal-title":"Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"126","DOI":"10.1016\/j.cag.2015.03.004","article-title":"TerraMobilita\/iQmulus urban point cloud analysis benchmark","volume":"49","author":"Vallet","year":"2015","journal-title":"Comput. Graph."},{"key":"ref_40","first-page":"69","article-title":"Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology","volume":"200","author":"Paparoditis","year":"2012","journal-title":"Rev. Fran\u00e7aise Photogramm\u00e9trie T\u00e9l\u00e9d\u00e9tection"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"545","DOI":"10.1177\/0278364918767506","article-title":"Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification","volume":"37","author":"Roynard","year":"2018","journal-title":"Int. J. Robot. Res."},{"key":"ref_42","unstructured":"Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (November, January 27). Semantickitti: A dataset for semantic scene understanding of lidar sequences. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Tan, W., Qin, N., Ma, L., Li, Y., Du, J., Cai, G., Yang, K., and Li, J. (2020, January 14\u201319). Toronto-3D: A large-scale mobile LiDAR dataset for semantic segmentation of urban roadways. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00109"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhu, J., Gehrung, J., Huang, R., Borgmann, B., Sun, Z., Hoegner, L., Hebel, M., Xu, Y., and Stilla, U. (2020). TUM-MLS-2016: An annotated mobile LiDAR dataset of the TUM city campus for semantic point cloud interpretation in urban areas. Remote Sens., 12.","DOI":"10.3390\/rs12111875"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Pan, Y., Gao, B., Mei, J., Geng, S., Li, C., and Zhao, H. (2020, January 20\u201323). Semanticposs: A point cloud dataset with large quantity of dynamic instances. Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA.","DOI":"10.1109\/IV47402.2020.9304596"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"159","DOI":"10.1016\/j.isprsjprs.2022.02.007","article-title":"A training dataset for semantic segmentation of urban point cloud map for intelligent vehicles","volume":"187","author":"Song","year":"2022","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Gschwandtner, M., Kwitt, R., Uhl, A., and Pree, W. (2011, January 26\u201328). BlenSor: Blender sensor simulation toolbox. Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA.","DOI":"10.1007\/978-3-642-24031-7_20"},{"key":"ref_48","unstructured":"Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., and Koltun, V. (2017, January 13\u201315). CARLA: An open urban driving simulator. Proceedings of the Conference on Robot Learning, Mountain View, CA, USA."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Skinner, J., Garg, S., S\u00fcnderhauf, N., Corke, P., Upcroft, B., and Milford, M. (2016, January 9\u201314). High-fidelity simulation for evaluating robotic vision performance. Proceedings of the 2016 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea.","DOI":"10.1109\/IROS.2016.7759425"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Haltakov, V., Unger, C., and Ilic, S. (2013, January 3\u20136). Framework for generation of synthetic ground truth data for driver assistance applications. Proceedings of the German Conference on Pattern Recognition, Saarbr\u00fccken, Germany.","DOI":"10.1007\/978-3-642-40602-7_35"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Gaidon, A., Wang, Q., Cabon, Y., and Vig, E. (2016, January 27\u201330). Virtual worlds as proxy for multi-object tracking analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.470"},{"key":"ref_52","unstructured":"Griffiths, D., and Boehm, J. (2019). SynthCity: A large scale synthetic point cloud. arXiv."},{"key":"ref_53","unstructured":"Xiao, A., Huang, J., Guan, D., Zhan, F., and Lu, S. (March, January 22). Transfer learning from synthetic to real LiDAR point cloud for semantic segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, Virtual."},{"key":"ref_54","unstructured":"Deschaud, J.E. (2021). KITTI-CARLA: A KITTI-like dataset generated by CARLA Simulator. arXiv."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Yue, X., Wu, B., Seshia, S.A., Keutzer, K., and Sangiovanni-Vincentelli, A.L. (2018, January 11\u201314). A lidar point cloud generator: From a virtual world to autonomous driving. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan.","DOI":"10.1145\/3206025.3206080"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Hurl, B., Czarnecki, K., and Waslander, S. (2019, January 9\u201312). Precise synthetic image and lidar (presil) dataset for autonomous vehicle perception. Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France.","DOI":"10.1109\/IVS.2019.8813809"},{"key":"ref_57","unstructured":"American Society for Photogrammetry & Remote Sensing, LAS SPECIFICATION (2023, April 10). Version 1.4-R13. Available online: https:\/\/www.asprs.org\/wp-content\/uploads\/2010\/12\/LAS_1_4_r13.pdf."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"112772","DOI":"10.1016\/j.rse.2021.112772","article-title":"Virtual laser scanning with HELIOS++: A novel take on ray tracing-based simulation of topographic full-waveform 3D laser scanning","volume":"269","author":"Winiwarter","year":"2022","journal-title":"Remote Sens. Environ."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Bello, S.A., Yu, S., Wang, C., Adam, J.M., and Li, J. (2020). Deep learning on 3D point clouds. Remote Sens., 12.","DOI":"10.3390\/rs12111729"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"4338","DOI":"10.1109\/TPAMI.2020.3005434","article-title":"Deep learning for 3d point clouds: A survey","volume":"43","author":"Guo","year":"2020","journal-title":"IEEE Trans. Pattern Anal."},{"key":"ref_61","unstructured":"(2023, April 10). GPL Software. CloudCompare. Available online: https:\/\/www.danielgm.net\/cc\/."},{"key":"ref_62","unstructured":"(2023, April 10). Michael Neumann, Blender2Helios. Available online: https:\/\/github.com\/neumicha\/Blender2Helios."},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Wu, B., Zhou, X., Zhao, S., Yue, X., and Keutzer, K. (2019, January 20\u201324). Squeezesegv2: Improved model structure and unsupervised domain adaptation for road-object segmentation from a lidar point cloud. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793495"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/11\/2735\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:41:25Z","timestamp":1760125285000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/11\/2735"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,24]]},"references-count":63,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["rs15112735"],"URL":"https:\/\/doi.org\/10.3390\/rs15112735","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2023,5,24]]}}}