{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T13:41:44Z","timestamp":1769002904622,"version":"3.49.0"},"reference-count":69,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2022,8,22]],"date-time":"2022-08-22T00:00:00Z","timestamp":1661126400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Minciencias","award":["ID5005"],"award-info":[{"award-number":["ID5005"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Reinforcement Learning (RL) comes with the promise of automating network management. However, due to its trial-and-error learning approach, model-based RL (MBRL) is not applicable in some network management scenarios. This paper explores the potential of using Automated Planning (AP) to achieve MBRL in the functional areas of network management. In addition, a comparison of several strategies for integrating AP and RL is presented. We also describe an architecture that realizes a cognitive management control loop by combining AP and RL. Our experiments, conducted in a simulated environment, evidence that the proposed combination improves model-free RL but exhibits lower performance than Deep RL regarding the reward and convergence time metrics. 
Nonetheless, AP-based MBRL is useful when the prediction model needs to be understood and when the high computational complexity of Deep RL cannot be afforded.<\/jats:p>","DOI":"10.3390\/s22166301","type":"journal-article","created":{"date-parts":[[2022,8,22]],"date-time":"2022-08-22T23:49:56Z","timestamp":1661212196000},"page":"6301","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Model-Based Reinforcement Learning with Automated Planning for Network Management"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6544-0283","authenticated-orcid":false,"given":"Armando","family":"Ordonez","sequence":"first","affiliation":[{"name":"Universidad ICESI, Cali 760031, Colombia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2223-947X","authenticated-orcid":false,"given":"Oscar Mauricio","family":"Caicedo","sequence":"additional","affiliation":[{"name":"Departamento de Telematica, Universidad del Cauca, Popayan 190002, Colombia"}]},{"given":"William","family":"Villota","sequence":"additional","affiliation":[{"name":"Institute of Computing, University of Campinas, Campinas 13083-852, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5329-4397","authenticated-orcid":false,"given":"Angela","family":"Rodriguez-Vivas","sequence":"additional","affiliation":[{"name":"Departamento de Telematica, Universidad del Cauca, Popayan 190002, Colombia"}]},{"given":"Nelson L. S.","family":"da Fonseca","sequence":"additional","affiliation":[{"name":"Institute of Computing, University of Campinas, Campinas 13083-852, Brazil"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3201","DOI":"10.1109\/TSG.2020.2971427","article-title":"A multi-agent reinforcement learning-based data-driven method for home energy management","volume":"11","author":"Xu","year":"2020","journal-title":"IEEE Trans. 
Smart Grid"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"4685","DOI":"10.1109\/TWC.2020.2986114","article-title":"Reinforcement learning based capacity management in multi-layer satellite networks","volume":"19","author":"Jiang","year":"2020","journal-title":"IEEE Trans. Wirel. Commun."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"108088","DOI":"10.1109\/ACCESS.2020.3000893","article-title":"Learn to Schedule (LEASCH): A Deep reinforcement learning approach for radio resource scheduling in the 5G MAC layer","volume":"8","author":"Correia","year":"2020","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"870","DOI":"10.1109\/TNSM.2020.3036911","article-title":"Intelligent Routing Based on Reinforcement Learning for Software-Defined Networking","volume":"18","author":"Rendon","year":"2021","journal-title":"IEEE Trans. Netw. Serv. Manag."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Jumnal, A., and Kumar, S.D. (2021, January 4\u20136). Optimal VM Placement Approach Using Fuzzy Reinforcement Learning for Cloud Data Centers. Proceedings of the 3rd IEEE International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India.","DOI":"10.1109\/ICICV50876.2021.9388424"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Iwana, B.K., and Uchida, S. (2021). An empirical survey of data augmentation for time series classification with neural networks. PLoS ONE, 16.","DOI":"10.1371\/journal.pone.0254841"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"012045","DOI":"10.1088\/1742-6596\/1757\/1\/012045","article-title":"Research on Generating Adversarial Examples in Applications","volume":"1757","author":"Zhao","year":"2021","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Mohakud, R., and Dash, R. (2021). 
Survey on hyperparameter optimization using nature-inspired algorithm of deep convolution neural network. Intelligent and Cloud Computing, Springer.","DOI":"10.1007\/978-981-15-5971-6_77"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1007\/s12650-019-00607-z","article-title":"Visualizing surrogate decision trees of convolutional neural networks","volume":"23","author":"Jia","year":"2020","journal-title":"J. Vis."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"248","DOI":"10.1016\/j.comcom.2018.07.015","article-title":"From 4G to 5G: Self-organized network management meets machine learning","volume":"129","author":"Moysen","year":"2018","journal-title":"Comput. Commun."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"441","DOI":"10.1287\/moor.12.3.441","article-title":"The complexity of Markov decision processes","volume":"12","author":"Papadimitriou","year":"1987","journal-title":"Math. Oper. Res."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Yang, F., Lyu, D., Liu, B., and Gustafson, S. (2018, January 13\u201319). PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-making. Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI\u201918), Stockholm, Sweden.","DOI":"10.24963\/ijcai.2018\/675"},{"key":"ref_13","unstructured":"Moerland, T.M., Broekens, J., and Jonker, C.M. (2020). Model-based reinforcement learning: A survey. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"160","DOI":"10.1145\/122344.122377","article-title":"Dyna, an integrated architecture for learning, planning, and reacting","volume":"2","author":"Sutton","year":"1991","journal-title":"ACM Sigart Bull."},{"key":"ref_15","unstructured":"Rybkin, O., Zhu, C., Nagabandi, A., Daniilidis, K., Mordatch, I., and Levine, S. (2021, January 18\u201324). Model-Based Reinforcement Learning via Latent-Space Collocation. 
Proceedings of the International Conference on Machine Learning (PMLR), Virtual."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"604","DOI":"10.1038\/s41586-020-03051-4","article-title":"Mastering atari, go, chess and shogi by planning with a learned model","volume":"588","author":"Schrittwieser","year":"2020","journal-title":"Nature"},{"key":"ref_17","unstructured":"Ayoub, A., Jia, Z., Szepesvari, C., Wang, M., and Yang, L. (2020, January 13\u201318). Model-based reinforcement learning with value-targeted regression. Proceedings of the International Conference on Machine Learning (PMLR), Virtual."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1016\/j.artint.2016.07.004","article-title":"A synthesis of automated planning and reinforcement learning for efficient, robust decision-making","volume":"241","author":"Leonetti","year":"2016","journal-title":"Artif. Intell."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"4197","DOI":"10.1109\/TNSM.2021.3120804","article-title":"Network abnormal traffic detection model based on semi-supervised deep reinforcement learning","volume":"18","author":"Dong","year":"2021","journal-title":"IEEE Trans. Netw. Serv. Manag."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Todorov, D., Valchanov, H., and Aleksieva, V. (2020, January 1\u20133). Load balancing model based on machine learning and segment routing in SDN. Proceedings of the 2020 IEEE International Conference Automatics and Informatics (ICAI), Varna, Bulgaria.","DOI":"10.1109\/ICAI50593.2020.9311385"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"5234","DOI":"10.1103\/PhysRevLett.85.5234","article-title":"Topology of evolving networks: Local events and universality","volume":"85","author":"Albert","year":"2000","journal-title":"Phys. Rev. 
Lett."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"4271","DOI":"10.1109\/TVT.2020.2972999","article-title":"Mode selection and resource allocation in sliced fog radio access networks: A reinforcement learning approach","volume":"69","author":"Xiang","year":"2020","journal-title":"IEEE Trans. Veh. Technol."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"2356","DOI":"10.1109\/JSAC.2020.3000416","article-title":"Design of a 5G network slice extension with MEC UAVs managed with reinforcement learning","volume":"38","author":"Faraci","year":"2020","journal-title":"IEEE J. Sel. Areas Commun."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"36988","DOI":"10.1109\/ACCESS.2020.2975238","article-title":"Traffic Measurement Optimization Based on Reinforcement Learning in Large-Scale ITS-Oriented Backbone Networks","volume":"8","author":"Nie","year":"2020","journal-title":"IEEE Access"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Strehl, A.L., Li, L., Wiewiora, E., Langford, J., and Littman, M.L. (2006, January 25\u201329). PAC model-free reinforcement learning. Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA.","DOI":"10.1145\/1143844.1143955"},{"key":"ref_26","unstructured":"Sutton, R.S., and Barto, A.G. (2018). Reinforcement Learning: An Introduction, MIT Press."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"26","DOI":"10.1109\/MSP.2017.2743240","article-title":"Deep reinforcement learning: A brief survey","volume":"34","author":"Arulkumaran","year":"2017","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Ghallab, M., Nau, D.S., and Traverso, P. (2004). 
Automated Planning\u2014Theory and Practice, Elsevier.","DOI":"10.1016\/B978-155860856-6\/50021-1"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Rodriguez-Vivas, A., Caicedo, O.M., Ordo\u00f1ez, A., Nobre, J.C., and Granville, L.Z. (2021). NORA: An Approach for Transforming Network Management Policies into Automated Planning Problems. Sensors, 21.","DOI":"10.3390\/s21051790"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Gironza-Ceron, M.A., Villota-Jacome, W.F., Ordonez, A., Estrada-Solano, F., and Rendon, O.M.C. (2017, January 3\u20136). SDN management based on Hierarchical Task Network and Network Functions Virtualization. Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece.","DOI":"10.1109\/ISCC.2017.8024713"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Gopalan, N., Littman, M., MacGlashan, J., Squire, S., Tellex, S., Winder, J., and Wong, L. (2017, January 18\u201323). Planning with abstract Markov decision processes. Proceedings of the International Conference on Automated Planning and Scheduling, Pittsburgh, PA, USA.","DOI":"10.1609\/icaps.v27i1.13867"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1907","DOI":"10.1109\/TITS.2020.3041228","article-title":"A Knowledge-Based Temporal Planning Approach for Urban Traffic Control","volume":"22","author":"Lu","year":"2020","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Nejati, N., Langley, P., and Konik, T. (2006, January 25\u201329). Learning hierarchical task networks by observation. Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA.","DOI":"10.1145\/1143844.1143928"},{"key":"ref_35","unstructured":"Erol, K., Hendler, J.A., and Nau, D.S. (1994, January 13\u201315). UMCP: A Sound and Complete Procedure for Hierarchical Task-network Planning. 
Proceedings of the AIPS, Chicago, IL, USA."},{"key":"ref_36","unstructured":"Georgievski, I., and Aiello, M. (2014). An overview of hierarchical task network planning. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"38026","DOI":"10.1109\/ACCESS.2018.2852649","article-title":"On the Feasibility of Using Hierarchical Task Networks and Network Functions Virtualization for Managing Software-Defined Networks","volume":"6","author":"Villota","year":"2018","journal-title":"IEEE Access"},{"key":"ref_38","first-page":"63","article-title":"The essential deployment metamodel: A systematic review of deployment automation technologies","volume":"35","author":"Wurster","year":"2020","journal-title":"SICS Softw. Intensive-Cyber-Phys. Syst."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Ibrahim, A., Yousef, A.H., and Medhat, W. (2022, January 8\u20139). DevSecOps: A Security Model for Infrastructure as Code Over the Cloud. Proceedings of the 2nd IEEE International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt.","DOI":"10.1109\/MIUCC55081.2022.9781709"},{"key":"ref_40","unstructured":"Janner, M., Fu, J., Zhang, M., and Levine, S. (2019). When to trust your model: Model-based policy optimization. Adv. Neural Inf. Process. Syst., 32."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Hayamizu, Y., Amiri, S., Chandan, K., Takadama, K., and Zhang, S. (2021, January 7\u201312). Guiding Robot Exploration in Reinforcement Learning via Automated Planning. Proceedings of the International Conference on Automated Planning and Scheduling, Guangzhou, China.","DOI":"10.1609\/icaps.v31i1.16011"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Illanes, L., Yan, X., Icarte, R.T., and McIlraith, S.A. (2020, January 14\u201319). Symbolic plans as high-level instructions for reinforcement learning. 
Proceedings of the International Conference on Automated Planning and Scheduling, Nancy, France.","DOI":"10.1609\/icaps.v30i1.6750"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"205","DOI":"10.1016\/j.comcom.2020.03.011","article-title":"Fault management frameworks in wireless sensor networks: A survey","volume":"155","author":"Moridi","year":"2020","journal-title":"Comput. Commun."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Chen, X., Proietti, R., Liu, C.Y., and Yoo, S.B. (2020, January 18\u201321). Towards Self-Driving Optical Networking with Reinforcement Learning and Knowledge Transferring. Proceedings of the 2020 IEEE International Conference on Optical Network Design and Modeling (ONDM), Barcelona, Spain.","DOI":"10.23919\/ONDM48393.2020.9133022"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Mismar, F.B., and Evans, B.L. (2018, January 28\u201331). Deep Q-Learning for Self-Organizing Networks Fault Management and Radio Performance Improvement. Proceedings of the 52nd IEEE Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA.","DOI":"10.1109\/ACSSC.2018.8645083"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"74429","DOI":"10.1109\/ACCESS.2018.2881964","article-title":"Deep reinforcement learning for resource management in network slicing","volume":"6","author":"Li","year":"2018","journal-title":"IEEE Access"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1543","DOI":"10.1109\/TNET.2019.2924471","article-title":"RL-NSB: Reinforcement Learning-Based 5G Network Slice Broker","volume":"27","author":"Sciancalepore","year":"2019","journal-title":"IEEE\/ACM Trans. Netw."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"858","DOI":"10.1109\/TCCN.2019.2952882","article-title":"An Artificial Intelligence Framework for Slice Deployment and Orchestration in 5G Networks","volume":"6","author":"Dandachi","year":"2020","journal-title":"IEEE Trans. Cogn. Commun. 
Netw."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"3513","DOI":"10.1109\/JIOT.2018.2812210","article-title":"SDCoR: Software defined cognitive routing for Internet of vehicles","volume":"5","author":"Wang","year":"2018","journal-title":"IEEE Internet Things J."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Bouzid, S., Serrestou, Y., Raoof, K., and Omri, M. (2020, January 2\u20135). Efficient Routing Protocol for Wireless Sensor Network based on Reinforcement Learning. Proceedings of the 5th IEEE International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia.","DOI":"10.1109\/ATSIP49331.2020.9231883"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"47815","DOI":"10.1109\/ACCESS.2021.3068459","article-title":"Deep Reinforcement Learning-Based Traffic Sampling for Multiple Traffic Analyzers on Software-Defined Networks","volume":"9","author":"Kim","year":"2021","journal-title":"IEEE Access"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1061","DOI":"10.1007\/s00607-020-00883-w","article-title":"VNE solution for network differentiated QoS and security requirements: From the perspective of deep reinforcement learning","volume":"103","author":"Wang","year":"2021","journal-title":"Computing"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Patra, S., Velazquez, A., Kang, M., and Nau, D. (2021, January 2\u20139). Using online planning and acting to recover from cyberattacks on software-defined networks. Proceedings of the AAAI Conference on Artificial Intelligence, Virtual.","DOI":"10.1609\/aaai.v35i17.17806"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Speicher, P., Steinmetz, M., Hoffmann, J., Backes, M., and K\u00fcnnemann, R. (2019, January 8\u201312). Towards automated network mitigation analysis. 
Proceedings of the 34th ACM\/SIGAPP Symposium on Applied Computing, Limassol, Cyprus.","DOI":"10.1145\/3297280.3297473"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1109\/MCOM.2018.1700560","article-title":"Machine learning for cognitive network management","volume":"56","author":"Ayoubi","year":"2018","journal-title":"IEEE Commun. Mag."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Sanchez-Navarro, I., Salva-Garcia, P., Wang, Q., and Calero, J.M.A. (2020, January 10\u201312). New Immersive Interface for Zero-Touch Management in 5G Networks. Proceedings of the 3rd IEEE 5G World Forum (5GWF), Bangalore, India.","DOI":"10.1109\/5GWF49715.2020.9221116"},{"key":"ref_57","first-page":"91","article-title":"End-to-End Service Monitoring for Zero-Touch Networks","volume":"9","author":"Perez","year":"2021","journal-title":"J. ICT Stand."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"107108","DOI":"10.1016\/j.comnet.2020.107108","article-title":"IPro: An approach for intelligent SDN monitoring","volume":"170","author":"Castillo","year":"2020","journal-title":"Comput. Netw."},{"key":"ref_59","unstructured":"ETSI (2020). Zero Touch Network and Service Management (ZSM), ETSI. Reference Architecture, Standard ETSI GS ZSM."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"234","DOI":"10.1007\/s11036-020-01700-6","article-title":"A comprehensive survey on machine learning-based big data analytics for IoT-enabled smart healthcare system","volume":"26","author":"Li","year":"2021","journal-title":"Mob. Netw. Appl."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"725","DOI":"10.1109\/TNSM.2016.2569020","article-title":"Orchestrating Virtualized Network Functions","volume":"13","author":"Bari","year":"2016","journal-title":"IEEE Trans. Netw. Serv. Manag."},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Villota J\u00e1come, W.F., Caicedo Rendon, O.M., and da Fonseca, N.L.S. (2021). 
Admission Control for 5G Network Slicing based on (Deep) Reinforcement Learning. IEEE Syst. J.","DOI":"10.36227\/techrxiv.14498190"},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Partalas, I., Vrakas, D., and Vlahavas, I. (2008). Reinforcement learning and automated planning: A survey. Artificial Intelligence for Advanced Problem Solving Techniques, IGI Global.","DOI":"10.4018\/978-1-59904-705-8.ch006"},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"3133","DOI":"10.1109\/COMST.2019.2916583","article-title":"Applications of Deep Reinforcement Learning in Communications and Networking: A Survey","volume":"21","author":"Luong","year":"2019","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"529","DOI":"10.1038\/nature14236","article-title":"Human-level control through deep reinforcement learning","volume":"518","author":"Mnih","year":"2015","journal-title":"Nature"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Arulkumaran, K., Deisenroth, M.P., Brundage, M., and Bharath, A.A. (2017). A brief survey of deep reinforcement learning. arXiv.","DOI":"10.1109\/MSP.2017.2743240"},{"key":"ref_67","unstructured":"Song, H., Luan, D., Ding, W., Wang, M.Y., and Chen, Q. (2022, January 8\u201311). Learning to predict vehicle trajectories with model-based planning. Proceedings of the Conference on Robot Learning (PMLR), London, UK."},{"key":"ref_68","first-page":"4246","article-title":"Network traffic classification based on deep learning","volume":"14","author":"Li","year":"2020","journal-title":"KSII Trans. Internet Inf. Syst. (TIIS)"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Susilo, B., and Sari, R.F. (2021, January 27\u201330). Intrusion Detection in Software Defined Network Using Deep Learning Approach. 
Proceedings of the 11th IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA.","DOI":"10.1109\/CCWC51732.2021.9375951"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/16\/6301\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:13:26Z","timestamp":1760141606000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/16\/6301"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,22]]},"references-count":69,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2022,8]]}},"alternative-id":["s22166301"],"URL":"https:\/\/doi.org\/10.3390\/s22166301","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,22]]}}}