{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T18:59:55Z","timestamp":1773773995714,"version":"3.50.1"},"reference-count":73,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2022,9,14]],"date-time":"2022-09-14T00:00:00Z","timestamp":1663113600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Depth maps produced by LiDAR-based approaches are sparse. Even high-end LiDAR sensors produce highly sparse depth maps, which are also noisy around object boundaries. Depth completion is the task of generating a dense depth map from a sparse one. While earlier approaches completed the sparse depth maps directly, modern techniques use RGB images as guidance, and many others rely on affinity matrices for depth completion. Based on these approaches, we divide the literature into two major categories: unguided methods and image-guided methods. The latter is further subdivided into multi-branch and spatial propagation networks. The multi-branch networks additionally contain a sub-category of image-guided filtering. In this paper, we present the first comprehensive survey of depth completion methods. 
We present a novel taxonomy of depth completion approaches, review in detail different state-of-the-art techniques within each category for depth completion of LiDAR data, and provide quantitative results for the approaches on KITTI and NYUv2 depth completion benchmark datasets.<\/jats:p>","DOI":"10.3390\/s22186969","type":"journal-article","created":{"date-parts":[[2022,9,14]],"date-time":"2022-09-14T23:16:36Z","timestamp":1663197396000},"page":"6969","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["A Comprehensive Survey of Depth Completion Approaches"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8949-4243","authenticated-orcid":false,"given":"Muhammad Ahmed Ullah","family":"Khan","sequence":"first","affiliation":[{"name":"Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6364-8427","authenticated-orcid":false,"given":"Danish","family":"Nazir","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"given":"Alain","family":"Pagani","sequence":"additional","affiliation":[{"name":"German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6158-3543","authenticated-orcid":false,"given":"Hamam","family":"Mokayed","sequence":"additional","affiliation":[{"name":"Department of Computer Science, 
Lule\u00e5 University of Technology, 971 87 Lule\u00e5, Sweden"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4029-6574","authenticated-orcid":false,"given":"Marcus","family":"Liwicki","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Lule\u00e5 University of Technology, 971 87 Lule\u00e5, Sweden"}]},{"given":"Didier","family":"Stricker","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0536-6867","authenticated-orcid":false,"given":"Muhammad Zeshan","family":"Afzal","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany"},{"name":"German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Cui, Z., Heng, L., Yeo, Y.C., Geiger, A., Pollefeys, M., and Sattler, T. (2019, January 20\u201324). Real-time dense mapping for self-driving vehicles using fisheye cameras. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793884"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1016\/j.imavis.2017.07.003","article-title":"3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection","volume":"68","author":"Heng","year":"2017","journal-title":"Image Vis. 
Comput."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Wang, K., Zhang, Z., Yan, Z., Li, X., Xu, B., Li, J., and Yang, J. (2021, January 11\u201317). Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.01575"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Song, X., Wang, P., Zhou, D., Zhu, R., Guan, C., Dai, Y., Su, H., Li, H., and Yang, R. (2019, January 16\u201317). Apollocar3d: A large 3d car instance understanding benchmark for autonomous driving. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00560"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liao, Y., Huang, L., Wang, Y., Kodagoda, S., Yu, Y., and Liu, Y. (June, January 29). Parse geometry from a line: Monocular depth estimation with partial laser observation. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989590"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Dey, A., Jarvis, G., Sandor, C., and Reitmayr, G. (2012, January 5\u20138). Tablet versus phone: Depth perception in handheld augmented reality. Proceedings of the 2012 IEEE international symposium on mixed and augmented reality (ISMAR), Atlanta, GA, USA.","DOI":"10.1109\/ISMAR.2012.6402556"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Kalia, M., Navab, N., and Salcudean, T. (2019, January 20\u201324). A Real-Time Interactive Augmented Reality Depth Estimation Technique for Surgical Robotics. 
Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793610"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3272127.3275083","article-title":"Fast depth densification for occlusion-aware augmented reality","volume":"37","author":"Holynski","year":"2018","journal-title":"ACM Trans. Graph. (ToG)"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1089\/cpb.2007.9935","article-title":"Depth perception in virtual reality: Distance estimations in peri-and extrapersonal space","volume":"11","author":"Wolter","year":"2008","journal-title":"Cyberpsychol. Behav."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"27116","DOI":"10.3390\/s151027116","article-title":"An Indoor Obstacle Detection System Using Depth Information and Region Growth","volume":"15","author":"Huang","year":"2015","journal-title":"Sensors"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Park, J., Joo, K., Hu, Z., Liu, C.K., and So Kweon, I. (2020, January 23\u201328). Non-local spatial propagation network for depth completion. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58601-0_8"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"38106","DOI":"10.1109\/ACCESS.2018.2854262","article-title":"3D Reconstruction With Time-of-Flight Depth Camera and Multiple Mirrors","volume":"6","author":"Nguyen","year":"2018","journal-title":"IEEE Access"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Cui, Z., Xu, C., Yan, Y., Sebe, N., and Yang, J. (2019, January 15\u201320). Pattern-affinitive propagation across depth, surface normal and semantic segmentation. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00423"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Wang, Y., Chao, W.L., Garg, D., Hariharan, B., Campbell, M., and Weinberger, K.Q. (2019, January 15\u201320). Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00864"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ma, F., Cavalheiro, G.V., and Karaman, S. (2019, January 20\u201324). Self-supervised sparse-to-dense: Self-supervised depth completion from lidar and monocular camera. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793637"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Chodosh, N., Wang, C., and Lucey, S. (2018, January 2\u20136). Deep convolutional compressed sensing for lidar depth completion. Proceedings of the Asian Conference on Computer Vision, Perth, Australia.","DOI":"10.1007\/978-3-030-20887-5_31"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Jaritz, M., De Charette, R., Wirbel, E., Perrotton, X., and Nashashibi, F. (2018, January 5\u20138). Sparse and dense data with cnns: Depth completion and semantic segmentation. Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy.","DOI":"10.1109\/3DV.2018.00017"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., and Geiger, A. (2017, January 10\u201312). Sparsity invariant cnns. 
Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China.","DOI":"10.1109\/3DV.2017.00012"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2423","DOI":"10.1109\/TPAMI.2019.2929170","article-title":"Confidence propagation through cnns for guided sparse depth regression","volume":"42","author":"Eldesokey","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Qiu, J., Cui, Z., Zhang, Y., Zhang, X., Liu, S., Zeng, B., and Pollefeys, M. (2019, January 15\u201320). Deeplidar: Deep surface normal guided depth prediction for outdoor scene from sparse lidar data and single color image. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00343"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"2361","DOI":"10.1109\/TPAMI.2019.2947374","article-title":"Learning depth with convolutional spatial propagation network","volume":"42","author":"Cheng","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Cheng, X., Wang, P., Guan, C., and Yang, R. (2019). CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. arXiv.","DOI":"10.1609\/aaai.v34i07.6635"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Van Gansbeke, W., Neven, D., De Brabandere, B., and Van Gool, L. (2019, January 27\u201331). Sparse and noisy lidar completion with rgb guidance and uncertainty. Proceedings of the 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan.","DOI":"10.23919\/MVA.2019.8757939"},{"key":"ref_24","unstructured":"Bertalmio, M., Bertozzi, A.L., and Sapiro, G. (2001, January 8\u201314). Navier-stokes, fluid dynamics, and image and video inpainting. 
Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Herrera, D., Kannala, J., Ladick\u00fd, L., and Heikkil\u00e4, J. (2013, January 17\u201320). Depth map inpainting under a second-order smoothness prior. Proceedings of the Scandinavian Conference on Image Analysis, Espoo, Finland.","DOI":"10.1007\/978-3-642-38886-6_52"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Doria, D., and Radke, R.J. (2012, January 16\u201321). Filling large holes in lidar data by inpainting depth gradients. Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA.","DOI":"10.1109\/CVPRW.2012.6238916"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ferstl, D., Reinbacher, C., Ranftl, R., R\u00fcther, M., and Bischof, H. (2013, January 1\u20138). Image guided depth upsampling using anisotropic total generalized variation. Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia.","DOI":"10.1109\/ICCV.2013.127"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Matsuo, K., and Aoki, Y. (2015, January 7\u201312). Depth image enhancement using local tangent plane approximations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298980"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"227825","DOI":"10.1109\/ACCESS.2020.3045681","article-title":"DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles","volume":"8","author":"Bai","year":"2020","journal-title":"IEEE Access"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Eldesokey, A., Felsberg, M., Holmquist, K., and Persson, M. (2020, January 13\u201319). Uncertainty-aware cnns for depth completion: Uncertainty from beginning to end. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01203"},{"key":"ref_31","unstructured":"Eldesokey, A., Felsberg, M., and Khan, F.S. (2018). Propagating confidences through cnns for sparse data regression. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Hu, M., Wang, S., Li, B., Ning, S., Fan, L., and Gong, X. (June, January 30). Penet: Towards precise and efficient image guided depth completion. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561035"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Yan, Z., Wang, K., Li, X., Zhang, Z., Xu, B., Li, J., and Yang, J. (2021). RigNet: Repetitive image guided network for depth completion. arXiv.","DOI":"10.1007\/978-3-031-19812-0_13"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"5404","DOI":"10.1109\/TNNLS.2021.3072883","article-title":"Multitask GANs for Semantic Segmentation and Depth Completion With Cycle Consistency","volume":"32","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Nazir, D., Liwicki, M., Stricker, D., and Afzal, M.Z. (2022). SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion. arXiv.","DOI":"10.1109\/ACCESS.2022.3214316"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Zhang, Y., and Funkhouser, T. (2018, January 18\u201323). Deep depth completion of a single rgb-d image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00026"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. 
Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Xu, Y., Zhu, X., Shi, J., Zhang, G., Bao, H., and Li, H. (2019, January 27\u201328). Depth completion from sparse lidar data with depth-normal constraints. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea.","DOI":"10.1109\/ICCV.2019.00290"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Yang, Y., Wong, A., and Soatto, S. (2019, January 15\u201320). Dense depth posterior (ddp) from single image and sparse range. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00347"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"5264","DOI":"10.1109\/TIP.2021.3079821","article-title":"Adaptive context-aware multi-modal network for depth completion","volume":"30","author":"Zhao","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Li, A., Yuan, Z., Ling, Y., Chi, W., and Zhang, C. (2020, January 1\u20135). A multi-scale guided cascade hourglass network for depth completion. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA.","DOI":"10.1109\/WACV45572.2020.9093407"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Newell, A., Yang, K., and Deng, J. (2016, January 11\u201314). Stacked hourglass networks for human pose estimation. 
Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46484-8_29"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1808","DOI":"10.1109\/LRA.2021.3060396","article-title":"DenseLiDAR: A real-time pseudo dense depth guided depth completion network","volume":"6","author":"Gu","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"79801","DOI":"10.1109\/ACCESS.2020.2990212","article-title":"Deep architecture with cross guidance between single image and sparse lidar data for depth completion","volume":"8","author":"Lee","year":"2020","journal-title":"IEEE Access"},{"key":"ref_45","unstructured":"Yu, F., and Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"126323","DOI":"10.1109\/ACCESS.2020.3008404","article-title":"Revisiting sparsity invariant convolution: A network for image guided depth completion","volume":"8","author":"Yan","year":"2020","journal-title":"IEEE Access"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Schuster, R., Wasenmuller, O., Unger, C., and Stricker, D. (2021, January 3\u20138). Ssgp: Sparse spatial guided propagation for robust and generic interpolation. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV48630.2021.00024"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Liu, L., Song, X., Lyu, X., Diao, J., Wang, M., Liu, Y., and Zhang, L. (2020). Fcfr-net: Feature fusion based coarse-to-fine residual learning for monocular depth completion. arXiv.","DOI":"10.1609\/aaai.v35i3.16311"},{"key":"ref_49","unstructured":"Liu, R., Lehman, J., Molino, P., Such, F.P., Frank, E., Sergeev, A., and Yosinski, J. (2018). An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution. 
Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Cao, C., Liu, X., Yang, Y., Yu, Y., Wang, J., Wang, Z., Huang, Y., Wang, L., Huang, C., and Xu, W. (2015, January 7\u201313). Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.338"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1116","DOI":"10.1109\/TIP.2020.3040528","article-title":"Learning guided convolutional network for depth completion","volume":"30","author":"Tang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1397","DOI":"10.1109\/TPAMI.2012.213","article-title":"Guided image filtering","volume":"35","author":"He","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Tronicke, J., and B\u00f6niger, U. (2013, January 2\u20135). Steering kernel regression: An adaptive denoising tool to process GPR data. Proceedings of the 2013 7th International Workshop on Advanced Ground Penetrating Radar, Nantes, France.","DOI":"10.1109\/IWAGPR.2013.6601539"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"2850","DOI":"10.1109\/TIP.2021.3055629","article-title":"Learning steering kernels for guided depth completion","volume":"30","author":"Liu","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Cheng, X., Wang, P., and Yang, R. (2018, January 8\u201314). Depth estimation via affinity learned with convolutional spatial propagation network. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01270-0_7"},{"key":"ref_57","unstructured":"Pereira, F., Burges, C., Bottou, L., and Weinberger, K. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Xu, Z., Yin, H., and Yao, J. (2020, January 25\u201328). Deformable spatial propagation networks for depth completion. Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates.","DOI":"10.1109\/ICIP40778.2020.9191138"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Lin, Y., Cheng, T., Zhong, Q., Zhou, W., and Yang, H. (2022). Dynamic Spatial Propagation Network for Depth Completion. arXiv.","DOI":"10.1609\/aaai.v36i2.20055"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., and Wei, Y. (2017, January 22\u201329). Deformable Convolutional Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.89"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1007\/s00138-021-01249-8","article-title":"Early, intermediate and late fusion strategies for robust deep learning-based multimodal action recognition","volume":"32","author":"Boulahia","year":"2021","journal-title":"Mach. Vis. 
Appl."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"722","DOI":"10.1109\/TITS.2020.3023541","article-title":"Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review","volume":"23","author":"Cui","year":"2022","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Hua, J., and Gong, X. (2018, January 13\u201319). A normalized convolutional neural network for guided sparse depth upsampling. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), Stockholm, Sweden.","DOI":"10.24963\/ijcai.2018\/316"},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Geiger, A., Lenz, P., and Urtasun, R. (2012, January 16\u201321). Are we ready for autonomous driving? The kitti vision benchmark suite. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA.","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Silberman, N., Hoiem, D., Kohli, P., and Fergus, R. (2012, January 7\u201313). Indoor Segmentation and Support Inference from RGBD Images. Proceedings of the European Conference on Computer Vision, Florence, Italy.","DOI":"10.1007\/978-3-642-33715-4_54"},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"328","DOI":"10.1109\/TPAMI.2007.1166","article-title":"Stereo processing by semiglobal matching and mutual information","volume":"30","author":"Hirschmuller","year":"2007","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Geerse, D.J., Coolen, B.H., and Roerdink, M. (2015). Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments. 
PLoS ONE, 10.","DOI":"10.1371\/journal.pone.0139913"},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"689","DOI":"10.1145\/1015706.1015780","article-title":"Colorization using optimization","volume":"23","author":"Levin","year":"2004","journal-title":"ACM Trans. Graph."},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"11654","DOI":"10.1109\/TITS.2021.3106055","article-title":"Self-Supervised Depth Completion From Direct Visual-LiDAR Odometry in Autonomous Driving","volume":"23","author":"Song","year":"2021","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_71","unstructured":"Faust, A., Hsu, D., and Neumann, G. (2022, January 14\u201318). Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR. Proceedings of the 5th Conference on Robot Learning, Auckland, New Zealand."},{"key":"ref_72","doi-asserted-by":"crossref","unstructured":"Wong, A., and Soatto, S. (2021, January 11\u201317). Unsupervised depth completion with calibrated backprojection layers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Virtual.","DOI":"10.1109\/ICCV48922.2021.01251"},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. 
Image Process."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6969\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:31:41Z","timestamp":1760142701000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6969"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,14]]},"references-count":73,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["s22186969"],"URL":"https:\/\/doi.org\/10.3390\/s22186969","relation":{"has-preprint":[{"id-type":"doi","id":"10.20944\/preprints202205.0343.v1","asserted-by":"object"}]},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,14]]}}}