{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T19:46:25Z","timestamp":1773776785812,"version":"3.50.1"},"reference-count":42,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2023,3,6]],"date-time":"2023-03-06T00:00:00Z","timestamp":1678060800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100002301","name":"Estonian Research Council","doi-asserted-by":"publisher","award":["PRG1604"],"award-info":[{"award-number":["PRG1604"]}],"id":[{"id":"10.13039\/501100002301","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002301","name":"Estonian Research Council","doi-asserted-by":"publisher","award":["LLTAT21278"],"award-info":[{"award-number":["LLTAT21278"]}],"id":[{"id":"10.13039\/501100002301","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Bolt Technologies","award":["PRG1604"],"award-info":[{"award-number":["PRG1604"]}]},{"name":"Bolt Technologies","award":["LLTAT21278"],"award-info":[{"award-number":["LLTAT21278"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with the actual on-policy driving ability equally well as the commonly used mean absolute error.<\/jats:p>","DOI":"10.3390\/s23052845","type":"journal-article","created":{"date-parts":[[2023,3,6]],"date-time":"2023-03-06T02:28:34Z","timestamp":1678069714000},"page":"2845","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":20,"title":["LiDAR-as-Camera for End-to-End Driving"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2120-1712","authenticated-orcid":false,"given":"Ardi","family":"Tampuu","sequence":"first","affiliation":[{"name":"Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia"}]},{"given":"Romet","family":"Aidla","sequence":"additional","affiliation":[{"name":"Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia"}]},{"given":"Jan Aare","family":"van Gent","sequence":"additional","affiliation":[{"name":"Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia"}]},{"given":"Tambet","family":"Matiisen","sequence":"additional","affiliation":[{"name":"Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia"}]}],"member":"1968","published-online":{"date-parts":[[2023,3,6]]},"reference":[{"key":"ref_1","unstructured":"Tampuu, A., Matiisen, T., Semikin, M., Fishman, D., and Muhammad, N. (2020). A survey of end-to-end driving: Architectures and training methods. arXiv."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1109\/TIV.2020.3002505","article-title":"Learning to drive by imitation: An overview of deep behavior cloning methods","volume":"6","author":"Ly","year":"2020","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Huang, Y., and Chen, Y. (2020). Autonomous driving with deep learning: A survey of state-of-art technologies. arXiv.","DOI":"10.1109\/QRS-C51114.2020.00045"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Yurtsever, E., Lambert, J., Carballo, A., and Takeda, K. (2019). A Survey of Autonomous Driving: Common Practices and Emerging Technologies. arXiv.","DOI":"10.1109\/ACCESS.2020.2983149"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Bansal, M., Krizhevsky, A., and Ogale, A. (2018). Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. arXiv.","DOI":"10.15607\/RSS.2019.XV.031"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Chitta, K., Prakash, A., Jaeger, B., Yu, Z., Renz, K., and Geiger, A. (2022). Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. arXiv.","DOI":"10.1109\/TPAMI.2022.3200245"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zeng, W., Luo, W., Suo, S., Sadat, A., Yang, B., Casas, S., and Urtasun, R. (2019, January 15\u201320). End-to-end Interpretable Neural Motion Planner. Proceedings of the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00886"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Casas, S., Sadat, A., and Urtasun, R. (2021, January 20\u201325). Mp3: A unified model to map, perceive, predict and plan. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01417"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Sadat, A., Casas, S., Ren, M., Wu, X., Dhawan, P., and Urtasun, R. (2020, January 23\u201328). Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. Proceedings of the Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK.","DOI":"10.1007\/978-3-030-58592-1_25"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"225","DOI":"10.1007\/s10846-016-0442-0","article-title":"Modelling and control strategies in path tracking control for autonomous ground vehicles: A review of state of the art and challenges","volume":"86","author":"Amer","year":"2017","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"161211","DOI":"10.1109\/ACCESS.2020.3020075","article-title":"Control strategies on path tracking for autonomous vehicle: State of the art and future challenges","volume":"8","author":"Yao","year":"2020","journal-title":"IEEE Access"},{"key":"ref_12","unstructured":"Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., and Zhang, J. (2016). End to end learning for self-driving cars. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Osi\u0144ski, B., Jakubowski, A., Mi\u0142o\u015b, P., Ziecina, P., Galias, C., and Michalewski, H. (2019). Simulation-based reinforcement learning for real-world autonomous driving. arXiv.","DOI":"10.1109\/ICRA40945.2020.9196730"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Kendall, A., Hawke, J., Janz, D., Mazur, P., Reda, D., Allen, J.M., Lam, V.D., Bewley, A., and Shah, A. (2019, January 20\u201324). Learning to drive in a day. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793742"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Codevilla, F., Miiller, M., L\u00f3pez, A., Koltun, V., and Dosovitskiy, A. (2018, January 21\u201325). End-to-end driving via conditional imitation learning. Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia.","DOI":"10.1109\/ICRA.2018.8460487"},{"key":"ref_16","unstructured":"Sauer, A., Savinov, N., and Geiger, A. (2018). Conditional affordance learning for driving in urban environments. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Hawke, J., Shen, R., Gurau, C., Sharma, S., Reda, D., Nikolov, N., Mazur, P., Micklethwaite, S., Griffiths, N., and Shah, A. (2019). Urban Driving with Conditional Imitation Learning. arXiv.","DOI":"10.1109\/ICRA40945.2020.9197408"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zhou, B., Kr\u00e4henb\u00fchl, P., and Koltun, V. (2019). Does computer vision matter for action?. arXiv.","DOI":"10.1126\/scirobotics.aaw6661"},{"key":"ref_19","unstructured":"Xiao, Y., Codevilla, F., Gurram, A., Urfalioglu, O., and L\u00f3pez, A.M. (2019). Multimodal End-to-End Autonomous Driving. arXiv."},{"key":"ref_20","unstructured":"Godard, C., Mac Aodha, O., Firman, M., and Brostow, G.J. Proceedings of the IEEE\/CVF International Conference on Computer Vision."},{"key":"ref_21","unstructured":"Alhashim, I., and Wonka, P. (2018). High quality monocular depth estimation via transfer learning. arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Yuan, W., Gu, X., Dai, Z., Zhu, S., and Tan, P. (2022). NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation. arXiv.","DOI":"10.1109\/CVPR52688.2022.00389"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"722","DOI":"10.1109\/TITS.2020.3023541","article-title":"Deep learning for image and point cloud fusion in autonomous driving: A review","volume":"23","author":"Cui","year":"2021","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Xu, H., Lan, G., Wu, S., and Hao, Q. (2019, January 27\u201330). Online intelligent calibration of cameras and lidars for autonomous driving systems. Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand.","DOI":"10.1109\/ITSC.2019.8916872"},{"key":"ref_25","unstructured":"Pacala, A. (2023, March 15). Lidar as a Camera\u2014Digital Lidar\u2019s Implications for Computer vision. Available online: https:\/\/ouster.com\/blog\/the-camera-is-in-the-lidar\/."},{"key":"ref_26","unstructured":"Pomerleau, D.A. (1989, January 27\u201330). Alvinn: An autonomous land vehicle in a neural network. Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA."},{"key":"ref_27","unstructured":"Codevilla, F., Santana, E., L\u00f3pez, A.M., and Gaidon, A. (November, January 27). Exploring the limitations of behavior cloning for autonomous driving. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_28","unstructured":"Jain, A., Del Pero, L., Grimmett, H., and Ondruska, P. (2021). Autonomy 2.0: Why is self-driving always 5 years away?. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Vitelli, M., Chang, Y., Ye, Y., Wo\u0142czyk, M., Osi\u0144ski, B., Niendorf, M., Grimmett, H., Huang, Q., Jain, A., and Ondruska, P. (2021). SafetyNet: Safe planning for real-world self-driving vehicles using machine-learned policies. arXiv.","DOI":"10.1109\/ICRA46639.2022.9811576"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.tra.2016.09.010","article-title":"Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?","volume":"94","author":"Kalra","year":"2016","journal-title":"Transp. Res. Part Policy Pract."},{"key":"ref_31","unstructured":"Nassi, B., Nassi, D., Ben-Netanel, R., Mirsky, Y., Drokin, O., and Elovici, Y. (2023, January 20). Phantom of the ADAS: Phantom Attacks on Driver-Assistance Systems. Available online: https:\/\/eprint.iacr.org\/2020\/085.pdf."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"2425","DOI":"10.1007\/s11263-022-01657-x","article-title":"Explainability of deep vision-based autonomous driving systems: Review and challenges","volume":"130","author":"Zablocki","year":"2022","journal-title":"Int. J. Comput. Vis."},{"key":"ref_33","unstructured":"Bojarski, M., Chen, C., Daw, J., De\u011firmenci, A., Deri, J., Firner, B., Flepp, B., Gogri, S., Hong, J., and Jackel, L. (2020). The NVIDIA pilotnet experiments. arXiv."},{"key":"ref_34","unstructured":"comma.ai (2022, March 15). End-to-End Lateral Planning. Available online: https:\/\/blog.comma.ai\/end-to-end-lateral-planning\/."},{"key":"ref_35","first-page":"1","article-title":"How does batch normalization help optimization?","volume":"31","author":"Santurkar","year":"2018","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_36","unstructured":"Xu, B., Wang, N., Chen, T., and Li, M. (2015). Empirical evaluation of rectified activations in convolutional network. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Codevilla, F., L\u00f3pez, A.M., Koltun, V., and Dosovitskiy, A. (2018, January 8\u201314). On offline evaluation of vision-based driving models. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01267-0_15"},{"key":"ref_38","unstructured":"Loshchilov, I., and Hutter, F. (2017). Decoupled weight decay regularization. arXiv."},{"key":"ref_39","unstructured":"Wallach, H., Larochelle, H., Beygelzimer, A., d\u2019Alch\u00e9-Buc, F., Fox, E., and Garnett, R. (2019). Advances in Neural Information Processing Systems 32, Curran Associates, Inc."},{"key":"ref_40","unstructured":"Eraqi, H.M., Moustafa, M.N., and Honer, J. (2017). End-to-end deep learning for steering autonomous vehicles considering temporal dependencies. arXiv."},{"key":"ref_41","unstructured":"Fernandez, N. (2018). Two-stream convolutional networks for end-to-end learning of self-driving cars. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"87","DOI":"10.2478\/v10117-011-0021-1","article-title":"Comparison of values of Pearson\u2019s and Spearman\u2019s correlation coefficients on the same sets of data","volume":"30","author":"Hauke","year":"2011","journal-title":"Quaest. Geogr."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/5\/2845\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:48:39Z","timestamp":1760122119000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/5\/2845"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,6]]},"references-count":42,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2023,3]]}},"alternative-id":["s23052845"],"URL":"https:\/\/doi.org\/10.3390\/s23052845","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,6]]}}}