{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T16:49:01Z","timestamp":1776358141373,"version":"3.51.2"},"reference-count":52,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,1,13]],"date-time":"2022-01-13T00:00:00Z","timestamp":1642032000000},"content-version":"vor","delay-in-days":12,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Spanish MICINN\/FEDER","award":["RTI2018-099263-B-C21"],"award-info":[{"award-number":["RTI2018-099263-B-C21"]}]},{"name":"RoboCity2030-DIH- CM","award":["P2018\/NMT- 4331"],"award-info":[{"award-number":["P2018\/NMT- 4331"]}]},{"DOI":"10.13039\/501100006302","name":"Universidad de Alcal\u00e1","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100006302","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimed Tools Appl"],"published-print":{"date-parts":[[2022,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Nowadays, Artificial Intelligence (AI) is growing by leaps and bounds in almost all fields of technology, and Autonomous Vehicles (AV) research is one more of them. This paper proposes the using of algorithms based on Deep Learning (DL) in the control layer of an autonomous vehicle. More specifically, Deep Reinforcement Learning (DRL) algorithms such as Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) are implemented in order to compare results between them. 
The aim of this work is to obtain a trained model, applying a DRL algorithm, capable of sending control commands to the vehicle so that it navigates properly and efficiently along a given route. In addition, for each of the algorithms, several agents are presented as solutions, each of which uses different data sources to compute the vehicle control commands. For this purpose, the open-source CARLA simulator is used, providing the system with the ability to perform a multitude of tests without any risk in a hyper-realistic urban simulation environment, something that is unthinkable in the real world. The results obtained show that both DQN and DDPG reach the goal, but DDPG obtains a better performance. DDPG performs trajectories very similar to those of a classic controller such as LQR. In both cases, the RMSE is lower than 0.1 m when following trajectories in the 180-700 m range. To conclude, some conclusions and future works are discussed.<\/jats:p>","DOI":"10.1007\/s11042-021-11437-3","type":"journal-article","created":{"date-parts":[[2022,1,13]],"date-time":"2022-01-13T19:03:50Z","timestamp":1642100630000},"page":"3553-3576","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":148,"title":["Deep reinforcement learning based control for Autonomous Vehicles in CARLA"],"prefix":"10.1007","volume":"81","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6350-3076","authenticated-orcid":false,"given":"\u00d3scar","family":"P\u00e9rez-Gil","sequence":"first","affiliation":[]},{"given":"Rafael","family":"Barea","sequence":"additional","affiliation":[]},{"given":"Elena","family":"L\u00f3pez-Guill\u00e9n","sequence":"additional","affiliation":[]},{"given":"Luis 
M.","family":"Bergasa","sequence":"additional","affiliation":[]},{"given":"Carlos","family":"G\u00f3mez-Hu\u00e9lamo","sequence":"additional","affiliation":[]},{"given":"Rodrigo","family":"Guti\u00e9rrez","sequence":"additional","affiliation":[]},{"given":"Alejandro","family":"D\u00edaz-D\u00edaz","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,1,13]]},"reference":[{"issue":"21","key":"11437_CR1","doi-asserted-by":"publisher","first-page":"6121","DOI":"10.3390\/s20216121","volume":"20","author":"JF Arango","year":"2020","unstructured":"Arango JF, Bergasa LM, Revenga PA, Barea R, L\u00f3pez-Guill\u00e9n E, G\u00f3mez-Hu\u00e9lamo C, Araluce J, Guti\u00e9rrez R (2020) Drive-by-wire development process based on ros for an autonomous electric vehicle. Sensors 20(21):6121","journal-title":"Sensors"},{"key":"11437_CR2","doi-asserted-by":"crossref","unstructured":"Barea R, P\u00e9rez C, Bergasa LM, L\u00f3pez-Guill\u00e9n E, Romera E, Molinos E, Ocana M, L\u00f3pez J (2018)\u00a0Vehicle detection and localization using 3d lidar point cloud and image semantic segmentation. In:\u00a02018 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, pp 3481\u20133486","DOI":"10.1109\/ITSC.2018.8569962"},{"issue":"1","key":"11437_CR3","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1016\/S0005-1098(01)00174-1","volume":"38","author":"A Bemporad","year":"2002","unstructured":"Bemporad A, Morari M, Dua V, Pistikopoulos EN (2002) The explicit linear quadratic regulator for constrained systems. Automatica 38(1):3\u201320","journal-title":"Automatica"},{"issue":"4\u20137","key":"11437_CR4","doi-asserted-by":"publisher","first-page":"343","DOI":"10.1016\/0895-7177(95)00143-P","volume":"22","author":"R Byrne","year":"1995","unstructured":"Byrne R, Abdallah C (1995) Design of a model reference adaptive controller for vehicle road following. 
Math Comput Model 22(4\u20137):343\u2013354","journal-title":"Math Comput Model"},{"issue":"3","key":"11437_CR5","doi-asserted-by":"publisher","first-page":"208","DOI":"10.1016\/j.ijtst.2017.07.008","volume":"6","author":"CY Chan","year":"2017","unstructured":"Chan CY (2017) Advancements, prospects, and impacts of automated driving systems. Int J Transp Sci Technol 6(3):208\u2013216","journal-title":"Int J Transp Sci Technol"},{"key":"11437_CR6","doi-asserted-by":"crossref","unstructured":"Cheein FAA, De La Cruz C, Bastos TF, Carelli R (2010) Slam-based cross-a-door solution approach for a robotic wheelchair. Int J Adv Robot Syst 155\u2013164","DOI":"10.5772\/7230"},{"key":"11437_CR7","doi-asserted-by":"crossref","unstructured":"Chen J, Yuan B, Tomizuka M (2019)\u00a0Deep imitation learning for autonomous driving in generic urban scenarios with enhanced safety. arXiv preprint arXiv:1903.00640","DOI":"10.1109\/IROS40897.2019.8968225"},{"key":"11437_CR8","doi-asserted-by":"crossref","unstructured":"Chen L, Hu X, Tang B, Cheng Y (2020) Conditional DQN-based motion planning with fuzzy logic for autonomous driving. IEEE Trans Intell Transp Syst","DOI":"10.1109\/TITS.2020.3025671"},{"issue":"3","key":"11437_CR9","doi-asserted-by":"publisher","first-page":"20","DOI":"10.5772\/5789","volume":"2","author":"R Choomuang","year":"2005","unstructured":"Choomuang R, Afzulpurkar N (2005) Hybrid kalman filter\/fuzzy logic based position control of autonomous mobile robot. Int J Adv Robot Syst 2(3):20","journal-title":"Int J Adv Robot Syst"},{"key":"11437_CR10","doi-asserted-by":"crossref","unstructured":"Codevilla F, Miiller M, L\u00f3pez A, Koltun V, Dosovitskiy A (2018)\u00a0End-to-end driving via conditional imitation learning. 
In:\u00a02018 IEEE International Conference on Robotics and Automation (ICRA).\u00a0IEEE, pp 1\u20139","DOI":"10.1109\/ICRA.2018.8460487"},{"key":"11437_CR11","unstructured":"Coulter RC (1992) Implementation of the pure pursuit path tracking algorithm.\u00a0Tech. rep., Carnegie-Mellon UNIV Pittsburgh PA Robotics INST"},{"key":"11437_CR12","unstructured":"De Bruin T, Kober J, Tuyls K, Babu\u0161ka R (2015) The importance of experience replay database composition in deep reinforcement learning. In: Deep reinforcement learning workshop, NIPS"},{"key":"11437_CR13","doi-asserted-by":"crossref","unstructured":"del Egido J, Bergasa LM, Romera E, Hu\u00e9lamo CG, Araluce J, Barea R (2018)\u00a0Self-driving a car in simulation through a CNN. In: Workshop of Physical Agents.\u00a0Springer, pp 31\u201343","DOI":"10.1007\/978-3-319-99885-5_3"},{"key":"11437_CR14","unstructured":"Dosovitskiy A, Ros G, Codevilla F, Lopez A, Koltun V (2017)\u00a0Carla:\u00a0An open urban driving simulator. arXiv preprint arXiv:1711.03938"},{"key":"11437_CR15","unstructured":"Duan Y, Chen X, Houthooft R, Schulman J, Abbeel P (2016)\u00a0Benchmarking deep reinforcement learning for continuous control. In: International Conference on Machine Learning.\u00a0pp 1329\u20131338"},{"key":"11437_CR16","unstructured":"Dupuis M, Strobl M, Grezlikowski H (2010)\u00a0Opendrive 2010 and beyond\u2013status and future of the de facto standard for the description of road networks. In: Proc. of the Driving Simulation Conference Europe.\u00a0pp 231\u2013242"},{"key":"11437_CR17","unstructured":"Fan J, Wang Z, Xie Y, Yang Z (2020)\u00a0A theoretical analysis of deep Q-learning. 
In: Learning for Dynamics and Control.\u00a0PMLR, pp 486\u2013489"},{"key":"11437_CR18","unstructured":"Ganesh A, Charalel J, Sarma MD, Xu N (2016) Deep reinforcement learning for simulated autonomous driving"},{"key":"11437_CR19","doi-asserted-by":"crossref","unstructured":"Geiger A, Lenz P, Urtasun R (2012)\u00a0Are we ready for autonomous driving? the Kitti vision benchmark suite. In:\u00a02012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 3354\u20133361","DOI":"10.1109\/CVPR.2012.6248074"},{"issue":"14","key":"11437_CR20","doi-asserted-by":"publisher","first-page":"4062","DOI":"10.3390\/s20144062","volume":"20","author":"R Guti\u00e9rrez","year":"2020","unstructured":"Guti\u00e9rrez R, L\u00f3pez-Guill\u00e9n E, Bergasa LM, Barea R, P\u00e9rez \u00d3, G\u00f3mez-Hu\u00e9lamo C, Arango F, Del Egido J, L\u00f3pez-Fern\u00e1ndez J (2020) A waypoint tracking controller for autonomous road vehicles using ros framework. Sensors 20(14):4062","journal-title":"Sensors"},{"issue":"4","key":"11437_CR21","doi-asserted-by":"publisher","first-page":"55","DOI":"10.1109\/37.295971","volume":"14","author":"T Hessburg","year":"1994","unstructured":"Hessburg T, Tomizuka M (1994) Fuzzy logic control for lateral vehicle guidance. IEEE Control Syst Mag 14(4):55\u201363","journal-title":"IEEE Control Syst Mag"},{"key":"11437_CR22","doi-asserted-by":"crossref","unstructured":"Hou Y, Liu L, Wei Q, Xu X, Chen C (2017)\u00a0A novel DDPG method with prioritized experience replay. In:\u00a02017 IEEE International Conference on Systems, Man, and Cybernetics (SMC).\u00a0IEEE, pp 316\u2013321","DOI":"10.1109\/SMC.2017.8122622"},{"key":"11437_CR23","doi-asserted-by":"crossref","unstructured":"Kendall A, Hawke J, Janz D, Mazur P, Reda D, Allen JM, Lam VD, Bewley A, Shah A (2019)\u00a0Learning to drive in a day. 
In:\u00a02019 International Conference on Robotics and Automation (ICRA).\u00a0IEEE, pp 8248\u20138254","DOI":"10.1109\/ICRA.2019.8793742"},{"issue":"1","key":"11437_CR24","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.ejor.2005.01.036","volume":"171","author":"T Le-Anh","year":"2006","unstructured":"Le-Anh T, De Koster M (2006) A review of design and control of automated guided vehicle systems. Eur J Oper Res 171(1):1\u201323","journal-title":"Eur J Oper Res"},{"key":"11437_CR25","doi-asserted-by":"crossref","unstructured":"Lenain R, Thuilot B, Cariou C, Martinet P (2005) Model predictive control for vehicle guidance in presence of sliding: Application to farm vehicles path tracking. In: Proceedings of the 2005 IEEE international conference on robotics and automation.\u00a0IEEE, pp 885\u2013890","DOI":"10.1109\/ROBOT.2005.1570229"},{"key":"11437_CR26","doi-asserted-by":"crossref","unstructured":"Liang M, Yang B, Wang S, Urtasun R (2018)\u00a0Deep continuous fusion for multi-sensor 3d object detection. In: Proceedings of the European Conference on Computer Vision (ECCV).\u00a0pp 641\u2013656","DOI":"10.1007\/978-3-030-01270-0_39"},{"key":"11437_CR27","doi-asserted-by":"crossref","unstructured":"Liang X, Wang T, Yang L, Xing E (2018)\u00a0Cirl: Controllable imitative reinforcement learning for vision-based self-driving. In: Proceedings of the European Conference on Computer Vision (ECCV).\u00a0pp 584\u2013599","DOI":"10.1007\/978-3-030-01234-2_36"},{"key":"11437_CR28","unstructured":"Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D (2015)\u00a0Continuous control with deep reinforcement learning. 
arXiv preprint arXiv:1509.02971"},{"key":"11437_CR29","unstructured":"Lin LJ\u00a0(1992)\u00a0Reinforcement learning for robots using neural networks (phd thesis)"},{"issue":"10","key":"11437_CR30","doi-asserted-by":"publisher","first-page":"2446","DOI":"10.1016\/j.automatica.2009.06.022","volume":"45","author":"Y Luo","year":"2009","unstructured":"Luo Y, Chen Y (2009) Fractional order [proportional derivative] controller for a class of fractional order systems. Automatica 45(10):2446\u20132450","journal-title":"Automatica"},{"key":"11437_CR31","doi-asserted-by":"crossref","unstructured":"Mao H, Alizadeh M, Menache I, Kandula S (2016)\u00a0Resource management with deep reinforcement learning. In: Proceedings of the 15th ACM Workshop on Hot Topics in Networks.\u00a0pp 50\u201356","DOI":"10.1145\/3005745.3005750"},{"key":"11437_CR32","unstructured":"Mart\u00edn UI et\u00a0al\u00a0(2018)\u00a0Generaci\u00f3n de trayectorias rob\u00f3ticas mediante aprendizaje profundo por refuerzo. Master\u2019s thesis, Universitat Polit\u00e8cnica de Catalunya"},{"key":"11437_CR33","doi-asserted-by":"crossref","unstructured":"Matt V, Aran N (2017) Deep reinforcement learning approach to autonomous driving","DOI":"10.2352\/ISSN.2470-1173.2017.19.AVM-023"},{"key":"11437_CR34","unstructured":"Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller M (2013)\u00a0Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602"},{"key":"11437_CR35","doi-asserted-by":"crossref","unstructured":"Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G et\u00a0al (2015)\u00a0Human-level control through deep reinforcement learning. 
Nature 518(7540):529\u2013533","DOI":"10.1038\/nature14236"},{"issue":"9","key":"11437_CR36","doi-asserted-by":"publisher","first-page":"569","DOI":"10.1002\/rob.20258","volume":"25","author":"M Montemerlo","year":"2008","unstructured":"Montemerlo M, Becker J, Bhat S, Dahlkamp H, Dolgov D, Ettinger S, Haehnel D, Hilden T, Hoffmann G, Huhnke B et al (2008) Junior: The stanford entry in the urban challenge. J Field Rob 25(9):569\u2013597","journal-title":"J Field Rob"},{"key":"11437_CR37","doi-asserted-by":"crossref","unstructured":"P\u00e9rez-Gil \u00d3, Barea R, L\u00f3pez-Guill\u00e9n E, Bergasa LM, Revenga PA, Guti\u00e9rrez R, D\u00edaz A\u00a0(2020)\u00a0DQN-based deep reinforcement learning for autonomous driving. In: Workshop of Physical Agents.\u00a0Springer, pp 60\u201376","DOI":"10.1007\/978-3-030-62579-5_5"},{"issue":"2\u20133","key":"11437_CR38","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1016\/j.robot.2005.04.006","volume":"52","author":"FM Raimondi","year":"2005","unstructured":"Raimondi FM, Melluso M (2005) A new fuzzy robust dynamic controller for autonomous vehicles with nonholonomic constraints. Robot Auton Syst 52(2\u20133):115\u2013131","journal-title":"Robot Auton Syst"},{"issue":"3","key":"11437_CR39","doi-asserted-by":"publisher","first-page":"503","DOI":"10.3390\/s19030503","volume":"19","author":"\u00c1 S\u00e1ez","year":"2019","unstructured":"S\u00e1ez \u00c1, Bergasa LM, L\u00f3pez-Guill\u00e9n E, Romera E, Tradacete M, G\u00f3mez-Hu\u00e9lamo C, del Egido J (2019) Real-time semantic segmentation for fisheye urban driving images based on erfnet. Sensors 19(3):503","journal-title":"Sensors"},{"key":"11437_CR40","doi-asserted-by":"crossref","unstructured":"Sanders A (2016) An introduction to unreal engine 4. 
AK Peters\/CRC Press","DOI":"10.1201\/9781315382555"},{"key":"11437_CR41","doi-asserted-by":"crossref","unstructured":"Sasaki H, Horiuchi T, Kato S (2017)\u00a0A study on vision-based mobile robot learning by deep q-network. In:\u00a02017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE).\u00a0IEEE, pp 799\u2013804","DOI":"10.23919\/SICE.2017.8105597"},{"key":"11437_CR42","unstructured":"Silver D, Hubert T, Schrittwieser J, Antonoglou I, Lai M, Guez A, Lanctot M, Sifre L, Kumaran D, Graepel T et\u00a0al (2017)\u00a0Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815"},{"issue":"8","key":"11437_CR43","doi-asserted-by":"publisher","first-page":"425","DOI":"10.1002\/rob.20255","volume":"25","author":"C Urmson","year":"2008","unstructured":"Urmson C, Anhalt J, Bagnell D, Baker C, Bittner R, Clark M, Dolan J, Duggins D, Galatali T, Geyer C et al (2008) Autonomous driving in urban environments: Boss and the urban challenge. J Field Rob 25(8):425\u2013466","journal-title":"J Field Rob"},{"key":"11437_CR44","doi-asserted-by":"crossref","unstructured":"Wang FY (2017)\u00a0Ai and intelligent vehicles future challenge (IVFC) in China: From cognitive intelligence to parallel intelligence. In:\u00a02017 ITU Kaleidoscope: Challenges for a Data-Driven Society (ITU K).\u00a0IEEE, pp 1\u20132","DOI":"10.23919\/ITU-WT.2017.8246841"},{"key":"11437_CR45","unstructured":"Wang S, Jia D, Weng X (2018)\u00a0Deep reinforcement learning for autonomous driving. arXiv preprint arXiv:1811.11329"},{"issue":"3","key":"11437_CR46","doi-asserted-by":"publisher","first-page":"26","DOI":"10.5772\/5604","volume":"5","author":"W Wang","year":"2008","unstructured":"Wang W, Nonami K, Ohira Y (2008) Model reference sliding mode control of small helicopter XRB based on vision. 
Int J Adv Robot Syst 5(3):26","journal-title":"Int J Adv Robot Syst"},{"key":"11437_CR47","unstructured":"Xiong X, Wang J, Zhang F, Li K (2016)\u00a0Combining deep reinforcement learning and safety based control for autonomous driving. arXiv preprint arXiv:1612.00147"},{"key":"11437_CR48","doi-asserted-by":"crossref","unstructured":"Ye F, Zhang S, Wang P, Chan CY (2021)\u00a0A survey of deep reinforcement learning algorithms for motion planning and control of autonomous vehicles. arXiv preprint arXiv:2105.14218","DOI":"10.1109\/IV48863.2021.9575880"},{"key":"11437_CR49","doi-asserted-by":"crossref","unstructured":"Yurtsever E, Capito L, Redmill K, Ozguner U (2020)\u00a0Integrating deep reinforcement learning with model-based path planners for automated driving. arXiv preprint arXiv:2002.00434","DOI":"10.1109\/IV47402.2020.9304735"},{"key":"11437_CR50","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1016\/j.neucom.2020.05.097","volume":"411","author":"F Zhang","year":"2020","unstructured":"Zhang F, Li J, Li Z (2020) A td3-based multi-agent deep reinforcement learning method in mixed cooperation-competition environment. Neurocomputing 411:206\u2013215","journal-title":"Neurocomputing"},{"key":"11437_CR51","doi-asserted-by":"crossref","unstructured":"Zhao J, Ye C, Wu Y, Guan L, Cai L, Sun L, Yang T, He X, Li J, Ding Y, et al (2018) Tiev: The tongji intelligent electric vehicle in the intelligent vehicle future challenge of China. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, pp 1303\u20131309","DOI":"10.1109\/ITSC.2018.8569629"},{"issue":"2","key":"11437_CR52","first-page":"0278","volume":"41","author":"D Zhuang","year":"2007","unstructured":"Zhuang D, Yu F, Lin Y (2007) The vehicle directional control based on fractional order PD^\u03bc controller. 
Journal-Shanghai Jiaotong University-Chinese Edition 41(2):0278","journal-title":"Journal-Shanghai Jiaotong University-Chinese Edition"}],"container-title":["Multimedia Tools and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-021-11437-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11042-021-11437-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-021-11437-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,2,21]],"date-time":"2022-02-21T19:17:40Z","timestamp":1645471060000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11042-021-11437-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1]]},"references-count":52,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,1]]}},"alternative-id":["11437"],"URL":"https:\/\/doi.org\/10.1007\/s11042-021-11437-3","relation":{},"ISSN":["1380-7501","1573-7721"],"issn-type":[{"value":"1380-7501","type":"print"},{"value":"1573-7721","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1]]},"assertion":[{"value":"29 January 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 July 2021","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 August 2021","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 January 2022","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}