{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T12:40:36Z","timestamp":1760704836745,"version":"build-2065373602"},"reference-count":27,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2025,10,14]],"date-time":"2025-10-14T00:00:00Z","timestamp":1760400000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"research grant of the Gyeongsang National University"},{"name":"\u201cRegional Innovation Strategy (RIS)\u201d","award":["2021RIS-003"],"award-info":[{"award-number":["2021RIS-003"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Autonomous racing serves as a challenging testbed that exposes the limitations of perception-decision-control algorithms in extreme high-speed environments, revealing safety gaps not addressed in existing autonomous driving research. However, traditional control techniques (e.g., FGM and MPC) and reinforcement learning-based approaches (including model-free and Dreamer variants) struggle to simultaneously satisfy sample efficiency, prediction reliability, and real-time control performance, making them difficult to apply in actual high-speed racing environments. To address these challenges, we propose LiDAR Dreamer, a novel world model specialized for LiDAR sensor data. LiDAR Dreamer introduces three core techniques: (1) efficient point cloud preprocessing and encoding via Cartesian Polar Bar Charts, (2) Light Structured State-Space Cells (LS3C) that reduce RSSM parameters by 14.2% while preserving key dynamic information, and (3) a Displacement Covariance Distance divergence function, which enhances both learning stability and expressiveness. Experiments in PyBullet F1TENTH simulation environments demonstrate that LiDAR Dreamer achieves competitive performance across different track complexities. On the Austria track with complex corners, it reaches 90% of DreamerV3\u2019s performance (1.14 vs. 1.27 progress) while using 81.7% fewer parameters. On the simpler Columbia track, while model-free methods achieve higher absolute performance, LiDAR Dreamer shows improved sample efficiency compared to baseline Dreamer models, converging faster to stable performance. The Treitlstrasse environment results demonstrate comparable performance to baseline methods. Furthermore, beyond the 14.2% RSSM parameter reduction, reward loss converged more stably without spikes, improving overall training efficiency and stability.<\/jats:p>","DOI":"10.3390\/info16100898","type":"journal-article","created":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T11:39:34Z","timestamp":1760701174000},"page":"898","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["LiDAR Dreamer: Efficient World Model for Autonomous Racing with Cartesian-Polar Encoding and Lightweight State-Space Cells"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-7454-8251","authenticated-orcid":false,"given":"Myeongjun","family":"Kim","sequence":"first","affiliation":[{"name":"Department of AI Convergence Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"}]},{"given":"Jong-Chan","family":"Park","sequence":"additional","affiliation":[{"name":"Department of AI Convergence Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"}]},{"given":"Sang-Min","family":"Choi","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"},{"name":"The Research Institute of Natural Science, Gyeongsang National University, Jinju 52828, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5643-4797","authenticated-orcid":false,"given":"Gun-Woo","family":"Kim","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2025,10,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"543","DOI":"10.1002\/rob.22063","article-title":"Toward a gliding hybrid aerial underwater vehicle: Design, fabrication, and experiments","volume":"39","author":"Lyu","year":"2022","journal-title":"J. Field Robot."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1267","DOI":"10.1002\/rob.21977","article-title":"AMZ driverless: The full autonomous racing system","volume":"37","author":"Kabzan","year":"2020","journal-title":"J. Field Robot."},{"key":"ref_3","unstructured":"Law, C.K., Dalal, D., and Shearrow, S. (2018). Robust model predictive control for autonomous vehicles\/self driving cars. arXiv."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"2713","DOI":"10.1109\/TCST.2019.2948135","article-title":"Learning how to autonomously race a car: A predictive control approach","volume":"28","author":"Rosolia","year":"2019","journal-title":"IEEE Trans. Control. Syst. Technol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1123","DOI":"10.1016\/j.robot.2012.05.021","article-title":"A novel obstacle avoidance algorithm: \u201cfollow the gap method\u201d","volume":"60","author":"Sezer","year":"2012","journal-title":"Robot. Auton. Syst."},{"key":"ref_6","unstructured":"Otterness, N. (2025, October 09). Disparity Extender. Available online: https:\/\/www.nathanotterness.com\/2019\/04\/the-disparity-extender-algorithm-and.html."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"672","DOI":"10.1109\/TAES.1969.309951","article-title":"A comparison of expected flight times for intercept and pure pursuit missiles","volume":"AES-5","author":"Scharf","year":"1969","journal-title":"IEEE Trans. Aerosp. Electron. Syst."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"439","DOI":"10.1002\/rob.20031","article-title":"Autonomous ground vehicle path tracking","volume":"21","author":"Wit","year":"2004","journal-title":"J. Robot. Syst."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"335","DOI":"10.1016\/0005-1098(89)90002-2","article-title":"Model predictive control: Theory and practice\u2014A survey","volume":"25","author":"Garcia","year":"1989","journal-title":"Automatica"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Sutton, R.S., and Barto, A.G. (1998). Reinforcement Learning: An Introduction, MIT Press.","DOI":"10.1109\/TNN.1998.712192"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1140","DOI":"10.1126\/science.aar6404","article-title":"A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play","volume":"362","author":"Silver","year":"2018","journal-title":"Science"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Koh, J.Y., Lee, H., Yang, Y., Baldridge, J., and Anderson, P. (2021, January 11\u201317). Pathdreamer: A world model for indoor navigation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01447"},{"key":"ref_13","unstructured":"Li, Q., Jia, X., Wang, S., and Yan, J. (October, January 29). Think2Drive: Efficient Reinforcement Learning by Thinking with Latent World Model for Autonomous Driving (in CARLA-V2). Proceedings of the European Conference on Computer Vision, Milan, Italy."},{"key":"ref_14","unstructured":"Seo, Y., Kim, J., James, S., Lee, K., Shin, J., and Abbeel, P. (2023, January 23\u201329). Multi-view masked world models for visual robotic manipulation. Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA."},{"key":"ref_15","unstructured":"Wu, P., Escontrela, A., Hafner, D., Abbeel, P., and Goldberg, K. (2022, January 14\u201318). Daydreamer: World models for physical robot learning. Proceedings of the Conference on Robot Learning, Auckland, New Zealand."},{"key":"ref_16","unstructured":"Barth-Maron, G., Hoffman, M.W., Budden, D., Dabney, W., Horgan, D., Tb, D., Muldal, A., Heess, N., and Lillicrap, T. (2018). Distributed distributional deterministic policy gradients. arXiv."},{"key":"ref_17","unstructured":"Abdolmaleki, A., Springenberg, J.T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv."},{"key":"ref_18","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv."},{"key":"ref_19","unstructured":"Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018, January 10\u201315). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_20","unstructured":"Deisenroth, M., and Rasmussen, C.E. (July, January 28). PILCO: A model-based and data-efficient approach to policy search. Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA."},{"key":"ref_21","unstructured":"Janner, M., Fu, J., Zhang, M., and Levine, S. (2019, January 8\u201314). When to trust your model: Model-based policy optimization. Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. Available online: https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2019\/file\/5faf461eff3099671ad63c6f3f094f7f-Paper.pdf."},{"key":"ref_22","unstructured":"Hafner, D., Lillicrap, T., Fischer, I., Villegas, R., Ha, D., Lee, H., and Davidson, J. (2019, January 9\u201315). Learning latent dynamics for planning from pixels. Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_23","unstructured":"Hafner, D., Lillicrap, T., Ba, J., and Norouzi, M. (2019). Dream to control: Learning behaviors by latent imagination. arXiv."},{"key":"ref_24","unstructured":"Hafner, D., Lillicrap, T., Norouzi, M., and Ba, J. (2020). Mastering atari with discrete world models. arXiv."},{"key":"ref_25","unstructured":"Hafner, D., Pasukonis, J., Ba, J., and Lillicrap, T. (2023). Mastering diverse domains through world models. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Brunnbauer, A., Berducci, L., Brandst\u00e1tter, A., Lechner, M., Hasani, R., Rus, D., and Grosu, R. (2022, January 23\u201327). Latent imagination facilitates zero-shot transfer in autonomous racing. Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA.","DOI":"10.1109\/ICRA46639.2022.9811650"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. (2019, January 4\u20138). Optuna: A next-generation hyperparameter optimization framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA.","DOI":"10.1145\/3292500.3330701"}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/10\/898\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T12:06:53Z","timestamp":1760702813000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/10\/898"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,14]]},"references-count":27,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2025,10]]}},"alternative-id":["info16100898"],"URL":"https:\/\/doi.org\/10.3390\/info16100898","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,14]]}}}