{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,4]],"date-time":"2025-11-04T11:05:19Z","timestamp":1762254319705,"version":"build-2065373602"},"reference-count":33,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2022,12,7]],"date-time":"2022-12-07T00:00:00Z","timestamp":1670371200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Pixel-level depth information is crucial to many applications, such as autonomous driving, robotics navigation, 3D scene reconstruction, and augmented reality. However, depth information, which is usually acquired by sensors such as LiDAR, is sparse. Depth completion is a process that predicts missing pixels\u2019 depth information from a set of sparse depth measurements. Most ongoing research applies deep neural networks to the entire sparse depth map and camera scene without utilizing any information about the available objects, which results in more complex and resource-demanding networks. In this work, we propose to use image instance segmentation to detect objects of interest with pixel-level locations, along with sparse depth data, to support depth completion. The framework utilizes a two-branch encoder\u2013decoder deep neural network. It fuses information about objects available in the scene, such as object type and pixel-level location, with LiDAR and RGB camera data, to predict dense, accurate depth maps. Experimental results on the KITTI dataset showed faster training and improved prediction accuracy. The proposed method reaches convergence faster and surpasses the baseline model in all evaluation metrics.<\/jats:p>","DOI":"10.3390\/s22249578","type":"journal-article","created":{"date-parts":[[2022,12,7]],"date-time":"2022-12-07T05:50:52Z","timestamp":1670392252000},"page":"9578","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8088-2282","authenticated-orcid":false,"given":"Mohammad Z.","family":"El-Yabroudi","sequence":"first","affiliation":[{"name":"Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA"}]},{"given":"Ikhlas","family":"Abdel-Qader","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA"}]},{"given":"Bradley J.","family":"Bazuin","sequence":"additional","affiliation":[{"name":"Electrical and Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1772-3769","authenticated-orcid":false,"given":"Osama","family":"Abudayyeh","sequence":"additional","affiliation":[{"name":"Civil and Construction Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA"}]},{"given":"Rakan C.","family":"Chabaan","sequence":"additional","affiliation":[{"name":"Hyundai America Technical Center, Inc., Superior Charter Township, MI 48198, USA"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,7]]},"reference":[{"key":"ref_1","unstructured":"Fan, R., Jiao, J., Ye, H., Yu, Y., Pitas, I., and Liu, M. (2019). Key ingredients of self-driving cars. arXiv preprint."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"113816","DOI":"10.1016\/j.eswa.2020.113816","article-title":"Self-Driving Cars: A Survey","volume":"165","author":"Badue","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1275","DOI":"10.1109\/COMST.2018.2869360","article-title":"Autonomous Cars: Research Results, Issues, and Future Challenges","volume":"21","author":"Hussain","year":"2019","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Fayyad, J., Jaradat, M.A., Gruyer, D., and Najjaran, H. (2020). Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors, 20.","DOI":"10.3390\/s20154220"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1005","DOI":"10.1007\/s00138-016-0784-4","article-title":"An Overview of Depth Cameras and Range Scanners Based on Time-of-Flight Technologies","volume":"27","author":"Horaud","year":"2016","journal-title":"Mach. Vis. Appl."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Yeong, D.J., Velasco-hernandez, G., Barry, J., and Walsh, J. (2021). Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors, 21.","DOI":"10.20944\/preprints202102.0459.v1"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Roldao, L., De Charette, R., and Verroust-Blondet, A. (2019, January 27\u201330). 3D Surface Reconstruction from Voxel-Based Lidar Data. Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand.","DOI":"10.1109\/ITSC.2019.8916881"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Bassier, M., Vergauwen, M., and Poux, F. (2020). Point Cloud vs. Mesh Features for Building Interior Classification. Remote Sens., 12.","DOI":"10.3390\/rs12142224"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"El-Yabroudi, M., Awedat, K., Chabaan, R.C., Abudayyeh, O., and Abdel-Qader, I. (2022, January 19\u201321). Adaptive DBSCAN LiDAR Point Cloud Clustering For Autonomous Driving Applications. Proceedings of the 2022 IEEE International Conference on Electro Information Technology (eIT), Mankato, MN, USA.","DOI":"10.1109\/eIT53891.2022.9814025"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1007\/s11263-019-01247-4","article-title":"Deep Learning for Generic Object Detection: A Survey","volume":"128","author":"Liu","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., and Geiger, A. (2017, January 10\u201312). Sparsity Invariant CNNs. Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China.","DOI":"10.1109\/3DV.2017.00012"},{"key":"ref_12","first-page":"2428","article-title":"Analysis of LiDAR and Camera Data in Real-World Weather Conditions for Autonomous Vehicle Operations","volume":"2","author":"Goberville","year":"2020","journal-title":"SAE Tech. Pap."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Furletov, Y., Willert, V., and Adamy, J. (2021, January 11\u201317). Auditory Scene Understanding for Autonomous Driving. Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan.","DOI":"10.1109\/IV48863.2021.9575964"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Khan, M.A.U., Nazir, D., Pagani, A., Mokayed, H., Liwicki, M., Stricker, D., and Afzal, M.Z. (2022). A Comprehensive Survey of Depth Completion Approaches. Sensors, 22.","DOI":"10.20944\/preprints202205.0343.v1"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1016\/j.cag.2018.02.001","article-title":"A Comparative Review of Plausible Hole Filling Strategies in the Context of Scene Depth Image Completion","volume":"72","author":"Breckon","year":"2018","journal-title":"Comput. Graph."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"171","DOI":"10.1007\/s13735-020-00195-x","article-title":"A Survey on Instance Segmentation","volume":"9","year":"2020","journal-title":"Int. J. Multimed. Inf. Retr."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"104401","DOI":"10.1016\/j.imavis.2022.104401","article-title":"A Review on 2D Instance Segmentation Based on Deep Neural Networks","volume":"120","author":"Gu","year":"2022","journal-title":"Image Vis. Comput."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1231","DOI":"10.1177\/0278364913491297","article-title":"Vision Meets Robotics: The KITTI Dataset","volume":"32","author":"Geiger","year":"2013","journal-title":"Int. J. Rob. Res."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Premebida, C., Garrote, L., Asvadi, A., Ribeiro, A.P., and Nunes, U. (2016, January 1\u20134). High-Resolution LIDAR-Based Depth Mapping Using Bilateral Filter. Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil.","DOI":"10.1109\/ITSC.2016.7795953"},{"key":"ref_20","unstructured":"Felsberg, M., and Persson, M. (2020, January 13\u201319). Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Chodosh, N., Wang, C., and Lucey, S. (2018). Deep Convolutional Compressed Sensing for LiDAR Depth Completion. Asian Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-030-20887-5_31"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Ma, F., and Karaman, S. (2018, January 21\u201325). Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image. Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia.","DOI":"10.1109\/ICRA.2018.8460184"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Ma, F., Cavalheiro, G.V., and Karaman, S. (2019, January 20\u201324). Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793637"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Hu, M., Wang, S., Li, B., Ning, S., Fan, L., and Gong, X. (June, January 30). PENet: Towards Precise and Efficient Image Guided Depth Completion. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561035"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Qiu, J., Cui, Z., Zhang, Y., Zhang, X., Liu, S., Zeng, B., and Pollefeys, M. (2019, January 15\u201320). DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00343"},{"key":"ref_26","unstructured":"Neven, D., and Leuven, K.U. (2019, January 27\u201331). Sparse and Noisy LiDAR Completion with RGB Guidance and Uncertainty. Proceedings of the 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan."},{"key":"ref_27","unstructured":"Xiong, X., Xiong, H., Xian, K., Zhao, C., Cao, Z., and Li, X. (2018). Sparse-to-Dense Depth Completion Revisited: Sampling Strategy and Graph Construction. European Conference on Computer Vision, Springer."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"5404","DOI":"10.1109\/TNNLS.2021.3072883","article-title":"Multitask GANs for Semantic Segmentation and Depth Completion with Cycle Consistency","volume":"32","author":"Zhang","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"6867","DOI":"10.1109\/ACCESS.2022.3142916","article-title":"Wasserstein Generative Adversarial Network for Depth Completion with Anisotropic Diffusion Depth Enhancement","volume":"10","author":"Nguyen","year":"2022","journal-title":"IEEE Access"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"386","DOI":"10.1109\/TPAMI.2018.2844175","article-title":"Mask R-CNN","volume":"42","author":"He","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Jung, S., Heo, H., Park, S., Jung, S.U., and Lee, K. (2022). Benchmarking Deep Learning Models for Instance Segmentation. Appl. Sci., 12.","DOI":"10.3390\/app12178856"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Wang, Y., Chao, W.L., Garg, D., Hariharan, B., Campbell, M., and Weinberger, K.Q. (2019, January 15\u201320). Pseudo-Lidar from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00864"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/24\/9578\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:35:32Z","timestamp":1760146532000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/24\/9578"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,7]]},"references-count":33,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["s22249578"],"URL":"https:\/\/doi.org\/10.3390\/s22249578","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,12,7]]}}}