{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,31]],"date-time":"2025-10-31T17:05:14Z","timestamp":1761930314232,"version":"build-2065373602"},"reference-count":28,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2023,1,12]],"date-time":"2023-01-12T00:00:00Z","timestamp":1673481600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Science and Technology Council, Taiwan, R.O.C.","award":["109-2221-E-035-067-MY3"],"award-info":[{"award-number":["109-2221-E-035-067-MY3"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Advanced Driver Assistance Systems (ADAS) are only applied to relatively simple scenarios, such as highways. If an emergency occurs while driving, the driver must be ready to take control of the car at any time to deal with the situation properly. Obviously, this introduces uncertainty about safety. Recently, several studies addressing the above-mentioned issue via Artificial Intelligence (AI) have been proposed in the literature. Their achievement is exactly the aim that we look forward to, i.e., the autonomous vehicle. In this paper, we realize autonomous driving control via Deep Reinforcement Learning (DRL) based on the CARLA (Car Learning to Act) simulator. Specifically, we use an ordinary Red-Green-Blue (RGB) camera and a semantic segmentation camera to observe the view in front of the vehicle while driving. Then, the captured information is used as the input to different DRL models in order to evaluate their performance, where the DRL models include DDPG (Deep Deterministic Policy Gradient) and RDPG (Recurrent Deterministic Policy Gradient). Moreover, we also design an appropriate reward mechanism for these DRL models to realize efficient autonomous driving control. 
According to the results, only the RDPG strategies can finish the driving mission in a scenario that does not appear in the training scenarios, and with the help of the semantic segmentation camera, the RDPG control strategy can further improve its efficiency.<\/jats:p>","DOI":"10.3390\/s23020895","type":"journal-article","created":{"date-parts":[[2023,1,12]],"date-time":"2023-01-12T05:03:03Z","timestamp":1673499783000},"page":"895","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Autonomous Driving Control Based on the Technique of Semantic Segmentation"],"prefix":"10.3390","volume":"23","author":[{"given":"Jichiang","family":"Tsai","sequence":"first","affiliation":[{"name":"Department of Electrical Engineering & Graduate Institute of Communication Engineering, National Chung Hsing University, Taichung 402, Taiwan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7526-9990","authenticated-orcid":false,"given":"Che-Cheng","family":"Chang","sequence":"additional","affiliation":[{"name":"Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan"}]},{"given":"Tzu","family":"Li","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan"}]}],"member":"1968","published-online":{"date-parts":[[2023,1,12]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Cafiso, S., Graziano, A.D., Giuffr\u00e8, T., Pappalardo, G., and Severino, A. (2022). Managed Lane as Strategy for Traffic Flow and Safety: A Case Study of Catania Ring Road. Sustainability, 14.","DOI":"10.3390\/su14052915"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Zhu, M., Wang, X., and Wang, Y. (2019). Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. 
arXiv.","DOI":"10.1016\/j.trc.2018.10.024"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"102662","DOI":"10.1016\/j.trc.2020.102662","article-title":"Safe, Efficient, and Comfortable Velocity Control Based on Reinforcement Learning for Autonomous Driving","volume":"117","author":"Zhu","year":"2020","journal-title":"Transp. Res. Part Emerg. Technol."},{"key":"ref_4","unstructured":"Chang, C.-C., and Chan, K.-L. (2019, January 10\u201313). Collision Avoidance Architecture Based on Computer Vision with Predictive Ability. Proceedings of the 2019 International Workshop of ICAROB\u2014Intelligent Artificial Life and Robotics, Beppu, Japan."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Chang, C.-C., Tsai, J., Lin, J.-H., and Ooi, Y.-M. (2021). Autonomous Driving Control Using the DDPG and RDPG Algorithms. Appl. Sci., 11.","DOI":"10.3390\/app112210659"},{"key":"ref_6","unstructured":"(2022, October 20). Home-AirSim [Online]. Available online: https:\/\/microsoft.github.io\/AirSim\/."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Tsai, J., Chang, C.-C., Ou, Y.-C., Sieh, B.-H., and Ooi, Y.-M. (2022). Autonomous Driving Control Based on the Perception of a Lidar Sensor and Odometer. Appl. Sci., 12.","DOI":"10.3390\/app12157775"},{"key":"ref_8","unstructured":"(2022, October 20). Gazebo [Online]. Available online: http:\/\/gazebosim.org\/."},{"key":"ref_9","unstructured":"Agoston, M.K. (2005). Computer Graphics and Geometric Modeling: Implementation and Algorithms, Springer."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2259","DOI":"10.1016\/S0031-3203(00)00149-7","article-title":"Color Image Segmentation: Advances and Prospects","volume":"34","author":"Cheng","year":"2001","journal-title":"Pattern Recognit."},{"key":"ref_11","unstructured":"(2022, October 20). CARLA Simulator [Online]. Available online: https:\/\/carla.org\/."},{"key":"ref_12","unstructured":"(2022, October 20). 
The Most Powerful Real-Time 3D Creation Platform\u2014Unreal Engine [Online]. Available online: https:\/\/www.unrealengine.com\/en-US\/."},{"key":"ref_13","unstructured":"(2022, October 20). ASAM OpenDRIVE [Online]. Available online: https:\/\/www.asam.net\/standards\/detail\/opendrive\/."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Alonso, I., and Murillo, A.C. (2019, January 16\u201317). EV-SegNet: Semantic Segmentation for Event-Based Cameras. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00205"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Maqueda, A.I., Loquercio, A., Gallego, G., Garcia, N., and Scaramuzza, D. (2018, January 18\u201322). Event-Based Vision Meets Deep Learning on Steering Prediction for Self-Driving Cars. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00568"},{"key":"ref_16","unstructured":"Sutton, R.S., and Barto, A.G. (2018). Reinforcement Learning: An Introduction. The MIT Press."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Martin-Guerrero, J.D., and Lamata, L. (2021). Reinforcement Learning and Physics. Appl. Sci., 11.","DOI":"10.3390\/app11188589"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Jembre, Y.Z., Nugroho, Y.W., Khan, M.T.R., Attique, M., Paul, R., Shah, S.H.A., and Kim, B. (2021). Evaluation of Reinforcement and Deep Learning Algorithms in Controlling Unmanned Aerial Vehicles. Appl. Sci., 11.","DOI":"10.3390\/app11167240"},{"key":"ref_19","unstructured":"(2022, October 20). Deep Reinforcement Learning [Online]. Available online: https:\/\/julien-vitay.net\/deeprl\/."},{"key":"ref_20","unstructured":"Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2019). Continuous Control with Deep Reinforcement Learning. 
arXiv."},{"key":"ref_21","unstructured":"Heess, N., Hunt, J.J., Lillicrap, T.P., and Silver, D. (2015). Memory-based Control with Recurrent Neural Networks. arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"8503","DOI":"10.1038\/nature14236","article-title":"Human-level control through deep reinforcement learning","volume":"518","author":"Mnih","year":"2015","journal-title":"Nature"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"293","DOI":"10.1007\/BF00992699","article-title":"Self-improving reactive agents based on reinforcement learning, planning and teaching","volume":"8","author":"Lin","year":"1992","journal-title":"Mach. Learn."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Sewak, M. (2019). Deep Reinforcement Learning, Springer.","DOI":"10.1007\/978-981-13-8285-7"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"S117","DOI":"10.1088\/0026-1394\/45\/6\/S17","article-title":"The Ornstein-Uhlenbeck process as a model of a low pass filtered white noise","volume":"45","author":"Bibbona","year":"2008","journal-title":"Metrologia"},{"key":"ref_26","unstructured":"(2022, October 20). Vehicle Dynamics [Online]. Available online: https:\/\/ritzel.siu.edu\/courses\/302s\/vehicle\/vehicledynamics.htm."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Chaki, N., Shaikh, S.H., and Saeed, K. (2014). Exploring Image Binarization Techniques, Springer.","DOI":"10.1007\/978-81-322-1907-1"},{"key":"ref_28","unstructured":"Stockman, G., and Shapiro, L.G. (2001). 
Computer Vision, Prentice Hall PTR."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/895\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:04:12Z","timestamp":1760119452000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/2\/895"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,12]]},"references-count":28,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2023,1]]}},"alternative-id":["s23020895"],"URL":"https:\/\/doi.org\/10.3390\/s23020895","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,1,12]]}}}