{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T18:57:59Z","timestamp":1771613879735,"version":"3.50.1"},"reference-count":50,"publisher":"MDPI AG","issue":"13","license":[{"start":{"date-parts":[[2023,7,6]],"date-time":"2023-07-06T00:00:00Z","timestamp":1688601600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science Foundation","doi-asserted-by":"publisher","award":["4222002"],"award-info":[{"award-number":["4222002"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science Foundation","doi-asserted-by":"publisher","award":["L202016"],"award-info":[{"award-number":["L202016"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science Foundation","doi-asserted-by":"publisher","award":["L211002"],"award-info":[{"award-number":["L211002"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science Foundation","doi-asserted-by":"publisher","award":["Z191100001119094"],"award-info":[{"award-number":["Z191100001119094"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science Foundation","doi-asserted-by":"publisher","award":["KM202110005021"],"award-info":[{"award-number":["KM202110005021"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004826","name":"Beijing Natural Science 
Foundation","doi-asserted-by":"publisher","award":["040000514122607"],"award-info":[{"award-number":["040000514122607"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Beijing Nova Program of Science and Technology","award":["4222002"],"award-info":[{"award-number":["4222002"]}]},{"name":"Beijing Nova Program of Science and Technology","award":["L202016"],"award-info":[{"award-number":["L202016"]}]},{"name":"Beijing Nova Program of Science and Technology","award":["L211002"],"award-info":[{"award-number":["L211002"]}]},{"name":"Beijing Nova Program of Science and Technology","award":["Z191100001119094"],"award-info":[{"award-number":["Z191100001119094"]}]},{"name":"Beijing Nova Program of Science and Technology","award":["KM202110005021"],"award-info":[{"award-number":["KM202110005021"]}]},{"name":"Beijing Nova Program of Science and Technology","award":["040000514122607"],"award-info":[{"award-number":["040000514122607"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["4222002"],"award-info":[{"award-number":["4222002"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["L202016"],"award-info":[{"award-number":["L202016"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["L211002"],"award-info":[{"award-number":["L211002"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["Z191100001119094"],"award-info":[{"award-number":["Z191100001119094"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["KM202110005021"],"award-info":[{"award-number":["KM202110005021"]}]},{"name":"Foundation of Beijing Municipal Commission of Education","award":["040000514122607"],"award-info":[{"award-number":["040000514122607"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of 
Technology","award":["4222002"],"award-info":[{"award-number":["4222002"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of Technology","award":["L202016"],"award-info":[{"award-number":["L202016"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of Technology","award":["L211002"],"award-info":[{"award-number":["L211002"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of Technology","award":["Z191100001119094"],"award-info":[{"award-number":["Z191100001119094"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of Technology","award":["KM202110005021"],"award-info":[{"award-number":["KM202110005021"]}]},{"name":"Urban Carbon Neutral Science and Technology Innovation Fund Project of Beijing University of Technology","award":["040000514122607"],"award-info":[{"award-number":["040000514122607"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Multiple unmanned aerial vehicles (UAVs) have great potential to be widely used in UAV-assisted IoT applications. UAV formation, as an effective way to improve surveillance and security, has attracted extensive attention. The leader\u2013follower approach is efficient for UAV formation, as the whole formation system needs to determine only the leader\u2019s trajectory. This paper studies a leader\u2013follower surveillance system in which, owing to different scenarios and assignments, the leading velocity is dynamic. The inevitable communication time delays arising from the sending, transmission and receiving of information pose challenges for the design of real-time UAV formation control. 
In this paper, the design of UAV formation tracking based on deep reinforcement learning (DRL) is investigated for high-mobility scenarios in the presence of communication delay. More specifically, the UAV formation optimization problem is first formulated as a state error minimization problem using a quadratic cost function that accounts for the communication delay. Then, a delay-informed Markov decision process (DIMDP) is developed by including the previous actions in the state in order to compensate for the performance degradation induced by the time delay. Subsequently, an extended delay-informed deep deterministic policy gradient (DIDDPG) algorithm is proposed. Finally, issues such as the computational complexity and the effect of the time delay are discussed, and the proposed intelligent algorithm is further extended to the case of arbitrary communication delay. Numerical experiments demonstrate that the proposed DIDDPG algorithm significantly alleviates the performance degradation caused by time delays.<\/jats:p>","DOI":"10.3390\/s23136190","type":"journal-article","created":{"date-parts":[[2023,7,7]],"date-time":"2023-07-07T01:57:09Z","timestamp":1688695029000},"page":"6190","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Delay-Informed Intelligent Formation Control for UAV-Assisted IoT Application"],"prefix":"10.3390","volume":"23","author":[{"given":"Lihan","family":"Liu","sequence":"first","affiliation":[{"name":"School of Statistics and Data Science, Beijing Wuzi University, Beijing 101149, China"}]},{"given":"Mengjiao","family":"Xu","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2880-3329","authenticated-orcid":false,"given":"Zhuwei","family":"Wang","sequence":"additional","affiliation":[{"name":"Faculty of Information 
Technology, Beijing University of Technology, Beijing 100124, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3989-3964","authenticated-orcid":false,"given":"Chao","family":"Fang","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China"},{"name":"Purple Mountain Laboratory: Networking, Communications and Security, Nanjing 210096, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1025-0412","authenticated-orcid":false,"given":"Zhensong","family":"Li","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5576-9883","authenticated-orcid":false,"given":"Meng","family":"Li","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8470-3982","authenticated-orcid":false,"given":"Yang","family":"Sun","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2799-6464","authenticated-orcid":false,"given":"Huamin","family":"Chen","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,7,6]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"2624","DOI":"10.1109\/COMST.2016.2560343","article-title":"Survey on unmanned aerial vehicle networks for civil applications: A communications viewpoint","volume":"18","author":"Hayat","year":"2016","journal-title":"IEEE Commun. Surv. 
Tutor."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1123","DOI":"10.1109\/COMST.2015.2495297","article-title":"Survey of important issues in UAV communication networks","volume":"18","author":"Gupta","year":"2015","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"157906","DOI":"10.1109\/ACCESS.2020.3019963","article-title":"Towards persistent surveillance and reconnaissance using a connected swarm of multiple UAVs","volume":"8","author":"Cho","year":"2020","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1516","DOI":"10.1109\/TCST.2017.2705072","article-title":"Robust team formation control for quadrotors","volume":"26","author":"Jasim","year":"2017","journal-title":"IEEE Trans. Control Syst. Technol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"261367","DOI":"10.1155\/2012\/261367","article-title":"UAV formation flight based on nonlinear model predictive control","volume":"2012","author":"Chao","year":"2012","journal-title":"Math. Probl. Eng."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Chao, Z., Ming, L., Shaolei, Z., and Wenguang, Z. (2011, January 9\u201311). Collision-free UAV formation flight control based on nonlinear MPC. Proceedings of the 2011 International Conference on Electronics, Communications and Control (ICECC), Ningbo, China.","DOI":"10.1109\/ICECC.2011.6066578"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Cordeiro, T.F.K., Ferreira, H.C., and Ishihara, J.Y. (2017, January 13\u201316). Non linear controller and path planner algorithm for an autonomous variable shape formation flight. 
Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA.","DOI":"10.1109\/ICUAS.2017.7991441"},{"key":"ref_8","first-page":"1087","article-title":"Nonlinear PID controller design for a 6-DOF UAV quadrotor system","volume":"22","author":"Najm","year":"2019","journal-title":"Eng. Sci. Technol. Int. J."},{"key":"ref_9","unstructured":"Sutton, R.S., and Barto, A.G. (2018). Reinforcement Learning: An Introduction, MIT Press."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2699","DOI":"10.1016\/j.automatica.2012.06.096","article-title":"Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics","volume":"48","author":"Jiang","year":"2012","journal-title":"Automatica"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"529","DOI":"10.1038\/nature14236","article-title":"Human-level control through deep reinforcement learning","volume":"518","author":"Mnih","year":"2015","journal-title":"Nature"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"3133","DOI":"10.1109\/COMST.2019.2916583","article-title":"Applications of deep reinforcement learning in communications and networking: A survey","volume":"21","author":"Luong","year":"2019","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"26","DOI":"10.1109\/MSP.2017.2743240","article-title":"Deep reinforcement learning: A brief survey","volume":"34","author":"Arulkumaran","year":"2017","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"29064","DOI":"10.1109\/ACCESS.2020.2971780","article-title":"Path planning for UAV ground target tracking via deep reinforcement learning","volume":"8","author":"Li","year":"2020","journal-title":"IEEE Access"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Bayerlein, H., Theile, M., Caccamo, M., and Gesbert, D. (2020, January 7\u201311). 
UAV path planning for wireless data harvesting: A deep reinforcement learning approach. Proceedings of the GLOBECOM 2020\u20142020 IEEE Global Communications Conference, Taipei, Taiwan.","DOI":"10.1109\/GLOBECOM42002.2020.9322234"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3301273","article-title":"Reinforcement learning for UAV attitude control","volume":"3","author":"Koch","year":"2019","journal-title":"ACM Trans.-Cyber-Phys. Syst."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Buechel, M., and Knoll, A. (2018, January 4\u20137). Deep reinforcement learning for predictive longitudinal control of automated vehicles. Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA.","DOI":"10.1109\/ITSC.2018.8569977"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Chu, T., and Kalabi, U. (2019, January 11\u201313). Model-based deep reinforcement learning for CACC in mixed-autonomy vehicle platoon. Proceedings of the 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, France.","DOI":"10.1109\/CDC40024.2019.9030110"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Bouhamed, O., Ghazzai, H., Besbes, H., and Massoud, Y. (2020, January 10\u201321). Autonomous UAV navigation: A DDPG-based deep reinforcement learning approach. Proceedings of the 2020 IEEE International Symposium on circuits and systems (ISCAS), Virtual Event.","DOI":"10.1109\/ISCAS45731.2020.9181245"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wan, K., Gao, X., Hu, Z., and Wu, G. (2020). Robust motion control for UAV in dynamic uncertain environments using deep reinforcement learning. 
Remote Sens., 12.","DOI":"10.3390\/rs12040640"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"2719","DOI":"10.1109\/TFUZZ.2017.2787561","article-title":"Optimized multi-agent formation control based on an identifier-actor-critic reinforcement learning algorithm","volume":"26","author":"Wen","year":"2017","journal-title":"IEEE Trans. Fuzzy Syst."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2139","DOI":"10.1109\/TNNLS.2018.2803059","article-title":"Leader-follower output synchronization of linear heterogeneous systems with active leader using reinforcement learning","volume":"29","author":"Yang","year":"2018","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"5468","DOI":"10.1109\/TNNLS.2021.3068762","article-title":"USV formation and path-following control via deep reinforcement learning with random braking","volume":"32","author":"Zhao","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1019","DOI":"10.1017\/S0263574718000218","article-title":"A survey of formation control and motion planning of multiple unmanned vehicles","volume":"36","author":"Liu","year":"2018","journal-title":"Robotica"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"672","DOI":"10.1109\/TCST.2007.899191","article-title":"Cooperative UAV formation flying with obstacle\/collision avoidance","volume":"15","author":"Wang","year":"2007","journal-title":"IEEE Trans. Control Syst. Technol."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1731","DOI":"10.1109\/TCST.2012.2218815","article-title":"Integrated optimal formation control of multiple unmanned aerial vehicles","volume":"21","author":"Wang","year":"2012","journal-title":"IEEE Trans. Control Syst. Technol."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kuriki, Y., and Namerikawa, T. (2014, January 4\u20136). 
Consensus-based cooperative formation control with collision avoidance for a multi-UAV system. Proceedings of the 2014 American Control Conference, Portland, OR, USA.","DOI":"10.1109\/ACC.2014.6858777"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"340","DOI":"10.1109\/TCST.2014.2314460","article-title":"Time-varying formation control for unmanned aerial vehicles: Theories and applications","volume":"23","author":"Dong","year":"2015","journal-title":"IEEE Trans. Control Syst. Technol."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"5014","DOI":"10.1109\/TIE.2016.2593656","article-title":"Time-varying formation tracking for second-order multi-agent systems subjected to switching topologies with application to quadrotor formation flying","volume":"64","author":"Dong","year":"2017","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"297","DOI":"10.1007\/s10846-019-01073-3","article-title":"Towards real-time path planning through deep reinforcement learning for a UAV in dynamic environments","volume":"98","author":"Yan","year":"2020","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zhu, Y., Mottaghi, R., Kolve, E., Lim, J.J., Gupta, A., Fei-Fei, L., and Farhadi, A. (June, January 29). Target-driven visual navigation in indoor scenes using deep reinforcement learning. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989381"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"83","DOI":"10.1007\/s10458-008-9056-7","article-title":"Learning and planning in environments with delayed feedback","volume":"18","author":"Walsh","year":"2009","journal-title":"Auton. Agents Multi-Agent Syst."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Adlakha, S., Madan, R., Lall, S., and Goldsmith, A. (2007, January 12\u201314). 
Optimal control of distributed Markov decision processes with network delays. Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA.","DOI":"10.1109\/CDC.2007.4434792"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Zhong, A., Li, Z., Wu, D., Tang, T., and Wang, R. (2023). Stochastic peak age of information guarantee for cooperative sensing in internet of everything. IEEE Internet Things J., 1\u201310.","DOI":"10.1109\/JIOT.2023.3264826"},{"key":"ref_35","unstructured":"Ramstedt, S., and Pal, C. (2019, January 8\u201314). Real-time reinforcement learning. Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Li, Z., Li, F., Tang, T., Zhang, H., and Yang, J. (2022). Video caching and scheduling with edge cooperation. Digit. Commun. Netw., 1\u201313.","DOI":"10.1016\/j.dcan.2022.09.012"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"207","DOI":"10.1109\/TSTE.2021.3107439","article-title":"Exploiting the flexibility inside park-level commercial buildings considering heat transfer time delay: A memory-augmented deep reinforcement learning approach","volume":"13","author":"Zhao","year":"2021","journal-title":"IEEE Trans. Sustain. Energy"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Nath, S., Baranwal, M., and Khadilkar, H. (2021, January 1\u20135). Revisiting state augmentation methods for reinforcement learning with stochastic delays. 
Proceedings of the 30th ACM International Conference on Information and Knowledge Management, Virtual Event.","DOI":"10.1145\/3459637.3482386"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1016\/j.neucom.2021.04.015","article-title":"Delay-aware model-based reinforcement learning for continuous control","volume":"450","author":"Chen","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"776","DOI":"10.1109\/TGCN.2021.3138729","article-title":"Energy-efficient mobile edge computing under delay constraints","volume":"6","author":"Li","year":"2021","journal-title":"IEEE Trans. Green Commun. Netw."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"7907","DOI":"10.1109\/TCOMM.2019.2931583","article-title":"Joint communication and control for wireless autonomous vehicular platoon systems","volume":"67","author":"Zeng","year":"2019","journal-title":"IEEE Trans. Commun."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"17359","DOI":"10.1109\/JIOT.2022.3156046","article-title":"Fairness-aware federated learning with unreliable links in resource-constrained Internet of things","volume":"9","author":"Li","year":"2022","journal-title":"IEEE Internet Things J."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"2124","DOI":"10.1109\/TVT.2018.2890773","article-title":"Autonomous navigation of UAVs in large-scale complex environments: A deep reinforcement learning approach","volume":"68","author":"Wang","year":"2019","journal-title":"IEEE Trans. Veh. 
Technol."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"5853","DOI":"10.1109\/ACCESS.2018.2889858","article-title":"Finite-time formation control for unmanned aerial vehicle swarm system with time-delay and input saturation","volume":"7","author":"Zhang","year":"2018","journal-title":"IEEE Access"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"11645","DOI":"10.1016\/j.ifacol.2017.08.1667","article-title":"Time delay compensation based on Smith predictor in multiagent formation control","volume":"50","author":"Gonzalez","year":"2017","journal-title":"IFAC-PapersOnLine"},{"key":"ref_46","unstructured":"Su, H., Wang, X., and Lin, Z. (2007, January 12\u201314). Flocking of multi-agents with a virtual leader part II: With a virtual leader of varying velocity. Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA."},{"key":"ref_47","unstructured":"Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014, January 3\u20135). Deterministic policy gradient algorithms. Proceedings of the International Conference on Machine Learning, Detroit, MI, USA."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"2913","DOI":"10.1109\/JSYST.2019.2933001","article-title":"Optimal connected cruise control with arbitrary communication delays","volume":"14","author":"Wang","year":"2019","journal-title":"IEEE Syst. J."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"8577","DOI":"10.1109\/JIOT.2019.2921159","article-title":"Deep deterministic policy gradient (DDPG)-based energy harvesting wireless communications","volume":"6","author":"Qiu","year":"2019","journal-title":"IEEE Internet Things J."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"1805","DOI":"10.1103\/PhysRevE.62.1805","article-title":"Congested traffic states in empirical observations and microscopic simulations","volume":"62","author":"Treiber","year":"2000","journal-title":"Phys. Rev. 
E"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/13\/6190\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:07:16Z","timestamp":1760126836000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/13\/6190"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,6]]},"references-count":50,"journal-issue":{"issue":"13","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["s23136190"],"URL":"https:\/\/doi.org\/10.3390\/s23136190","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,6]]}}}