{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T05:42:48Z","timestamp":1774330968758,"version":"3.50.1"},"reference-count":65,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2025,10,13]],"date-time":"2025-10-13T00:00:00Z","timestamp":1760313600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100002583","name":"Gyeongsang National University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002583","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Korea government","award":["20241K00000010"],"award-info":[{"award-number":["20241K00000010"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computers"],"abstract":"<jats:p>Games are widely regarded as a standard benchmark for training, evaluating, and comparing the performance of artificial intelligence (AI) agents. In this research, the application of the Intrinsic Curiosity Module (ICM) combined with the Asynchronous Advantage Actor\u2013Critic (A3C) algorithm is explored in action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. Aiming to provide efficient learning and adaptation, this research assesses whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. 
Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent\u2019s generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments.<\/jats:p>","DOI":"10.3390\/computers14100434","type":"journal-article","created":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T07:17:52Z","timestamp":1760512672000},"page":"434","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games"],"prefix":"10.3390","volume":"14","author":[{"given":"Sehar Shahzad","family":"Farooq","sequence":"first","affiliation":[{"name":"School of Aerospace Engineering, Department of Control and Robot Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"}]},{"given":"Hameedur","family":"Rahman","sequence":"additional","affiliation":[{"name":"Department of Computer Games Development, Faculty of Computing and AI, Air University, Islamabad 44000, Pakistan"}]},{"given":"Samiya","family":"Abdul Wahid","sequence":"additional","affiliation":[{"name":"Department of Computer Games Development, Faculty of Computing and AI, Air University, Islamabad 44000, 
Pakistan"}]},{"given":"Muhammad","family":"Alyan Ansari","sequence":"additional","affiliation":[{"name":"Department of Computer Games Development, Faculty of Computing and AI, Air University, Islamabad 44000, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-3026-8262","authenticated-orcid":false,"given":"Saira","family":"Abdul Wahid","sequence":"additional","affiliation":[{"name":"Department of Psychology, Faculty of Social Sciences, Air University, Islamabad 44000, Pakistan"}]},{"given":"Hosu","family":"Lee","sequence":"additional","affiliation":[{"name":"School of Aerospace Engineering, Department of Control and Robot Engineering, Gyeongsang National University, Jinju 52828, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2025,10,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"120495","DOI":"10.1016\/j.eswa.2023.120495","article-title":"Reinforcement learning algorithms: A brief survey","volume":"231","author":"Shakya","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"e12447","DOI":"10.1049\/cmu2.12447","article-title":"A survey on deep reinforcement learning architectures, applications and emerging trends","volume":"19","author":"Balhara","year":"2022","journal-title":"IET Commun."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Liang, J., Miao, H., Li, K., Tan, J., Wang, X., Luo, R., and Jiang, Y. (2025). A review of multi-agent reinforcement learning algorithms. Electronics, 14.","DOI":"10.3390\/electronics14040820"},{"key":"ref_4","unstructured":"L\u00f3pez, K.F.C. (2022). Reinforcement Learning Neural Agents in Clever Game Playing. 
[Bachelor\u2019s Thesis, Universidad de Investigaci\u00f3n de Tecnolog\u00eda Experimental Yachay]."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"12059","DOI":"10.1007\/s11042-023-15742-x","article-title":"Deep ensemble learning of tactics to control the main force in a real-time strategy game","volume":"83","author":"Han","year":"2023","journal-title":"Multimed. Tools Appl."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Gordillo, C., Bergdahl, J., Tollmar, K., and Gissl\u00e9n, L. (2021, January 17\u201320). Improving playtesting coverage via curiosity driven reinforcement learning agents. Proceedings of the 2021 IEEE Conference on Games (CoG), Virtual.","DOI":"10.1109\/CoG52621.2021.9619048"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1537","DOI":"10.1007\/s11042-024-18925-2","article-title":"Continual learning, deep reinforcement learning, and microcircuits: A novel method for clever game playing","volume":"84","author":"Chang","year":"2025","journal-title":"Multimed. Tools Appl."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Pathak, D., Agrawal, P., Efros, A.A., and Darrell, T. (2017, January 6\u201311). Curiosity\u2013driven exploration by self-supervised prediction. Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia.","DOI":"10.1109\/CVPRW.2017.70"},{"key":"ref_9","unstructured":"Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. (2016, January 20\u201322). Asynchronous methods for deep reinforcement learning. Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Mikhaylova, E., and Makarov, I. (2022, January 15\u201317). Curiosity-driven exploration in vizdoom. 
Proceedings of the 2022 IEEE 20th Jubilee International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia.","DOI":"10.1109\/SISY56759.2022.10036273"},{"key":"ref_11","unstructured":"Knorr, J.W.B.M. (2021). Dynamic Difficulty Adjustment in First Person Shooters. [Ph.D. Dissertation, Instituto Politecnico do Porto (Portugal)]."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chaplot, D.S., Jiang, H., Gupta, S., and Gupta, A. (2020, January 23\u201328). Semantic curiosity for active visual learning. Proceedings of the Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK. Part VI 16.","DOI":"10.1007\/978-3-030-58539-6_19"},{"key":"ref_13","unstructured":"Zhang, X. (2023). Simulation-based game testing for estimating player curiosity. [Master\u2019s Thesis, Aalto University]."},{"key":"ref_14","unstructured":"Yannakakis, G.N., Hallam, J., and Lund, H.H. (2006, January 26\u201328). Comparative fun analysis in the innovative playware game platform. Proceedings of the 1st World Conference for Fun\u2019n Games, Preston, UK."},{"key":"ref_15","unstructured":"Chen, Y., and Xiao, J. (2023). Target search and navigation in heterogeneous robot systems with deep reinforcement learning. arXiv."},{"key":"ref_16","unstructured":"Sun, C., Qian, H., and Miao, C. (2022). From psychological curiosity to artificial curiosity: Curiosity-driven learning in artificial intelligence tasks. arXiv."},{"key":"ref_17","unstructured":"Mantiuk, F., Zhou, H., and Wu, C.M. (August, January 30). From curiosity to competence: How world models interact with the dynamics of exploration. 2025. Proceedings of the 47th Annual Conference of the Cognitive Science Society, San Francisco, CA, USA."},{"key":"ref_18","unstructured":"Zhong, Y., He, J., and Kong, L. (2023). Double a3c: Deep reinforcement learning on openai gym games. 
arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"4881","DOI":"10.1080\/10494820.2023.2205906","article-title":"Comprehending the influence of brain games mode over playfulness and playability metrics: A fused exploratory research of players\u2019 experience","volume":"32","author":"Ahmad","year":"2023","journal-title":"Interact. Learn. Environ."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"41323","DOI":"10.1007\/s11042-023-15100-x","article-title":"A pilot study on the evaluation of cognitive abilities\u2019 cluster through game-based intelligent technique","volume":"82","author":"Ahmad","year":"2023","journal-title":"Multimed. Tools Appl."},{"key":"ref_21","unstructured":"Muneeb, S., Sitbon, L., and Ahmad, F. (December, January 29). Opportunities for serious game technologies to engage children with autism in a Pakistani sociocultural and institutional context: An investigation of the design space for serious game technologies to enhance engagement of children with autism and to facilitate external support provided. Proceedings of the 34th Australian Conference on Human-Computer Interaction, Canberra, ACT, Australia."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"859","DOI":"10.1007\/s12530-022-09472-y","article-title":"An augmented reality pqrst based method to improve self-learning skills for preschool autistic children","volume":"14","author":"Sulaiman","year":"2023","journal-title":"Evol. Syst."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"67","DOI":"10.4018\/IJGBL.2021010105","article-title":"Effect of gaming mode upon the players\u2019 cognitive performance during brain games play: An exploratory research","volume":"11","author":"Ahmad","year":"2021","journal-title":"Int. J. 
Game Based Learn."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1265","DOI":"10.1080\/10494820.2020.1827440","article-title":"Behavioral profiling: A generationwide study of players\u2019 experiences during brain games play","volume":"31","author":"Ahmad","year":"2023","journal-title":"Interact. Learn. Environ."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Tan, W., Patel, D., and Kozma, R. (2021, January 2\u20139). Strategy and benchmark for converting deep q-networks to event-driven spiking neural networks. Proceedings of the AAAI conference on artificial intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v35i11.17180"},{"key":"ref_26","first-page":"260","article-title":"Evaluating the efficacy of deep neural networks in reinforcement learning problems","volume":"46","author":"Girgis","year":"2018","journal-title":"Am. Sci. Res. J. Eng. Technol. Sci."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"e253","DOI":"10.1017\/S0140525X16001837","article-title":"Building machines that learn and think like people","volume":"40","author":"Lake","year":"2017","journal-title":"Behav. Brain Sci."},{"key":"ref_28","unstructured":"Mahmud, A., Khan, A.K., Rafi, M.M.H., and Fahim, K.R. (2023). Implementation of Reinforcement Learning Architecture to Augment an Ai That Can Self-Learn to Play Video Games. [Ph.D. Dissertation, Brac University]."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"10335","DOI":"10.1007\/s00521-021-05795-0","article-title":"A prioritized objective actor-critic method for deep reinforcement learning","volume":"33","author":"Nguyen","year":"2021","journal-title":"Neural Comput. Appl."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Yoon, S., and Kim, K.-J. (2017, January 22\u201325). Deep q networks for visual fighting game ai. 
Proceedings of the 2017 IEEE Conference on Computational Intelligence and Games (CIG), New York, NY, USA.","DOI":"10.1109\/CIG.2017.8080451"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"290","DOI":"10.1109\/TG.2018.2846028","article-title":"Hierarchical reinforcement learning with monte carlo tree search in computer fighting game","volume":"11","author":"Pinto","year":"2018","journal-title":"IEEE Trans. Games"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Takano, Y., Inoue, H., Thawonmas, R., and Harada, T. (2019, January 5\u20137). Self-play for training general fighting game ai. Proceedings of the 2019 Nicograph International (NicoInt), Yangling, China.","DOI":"10.1109\/NICOInt.2019.00034"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Takano, Y., Ito, S., Harada, T., and Thawonmas, R. (2018, January 9\u201312). Utilizing multiple agents for decision making in a fighting game. Proceedings of the 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), IEEE, Nara, Japan.","DOI":"10.1109\/GCCE.2018.8574675"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Chakravarthi, B., Ng, S.-C., Ezilarasan, M., and Leung, M.-F. (2022). Eeg-based emotion recognition using hybrid cnn and lstm classification. Front. Comput. Neurosci., 16.","DOI":"10.3389\/fncom.2022.1019776"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Inoue, H., Takano, Y., Thawonmas, R., and Harada, T. (2019, January 5\u20137). Verification of applying curiosity-driven to fighting game ai. Proceedings of the 2019 Nicograph International (NicoInt), IEEE, Yangling, China.","DOI":"10.1109\/NICOInt.2019.00033"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Li, Y.-J., Chang, H.-Y., Lin, Y.-J., Wu, P.-W., and Wang, Y.-C.F. (2018, January 7\u201310). Deep reinforcement learning for playing 2.5D fighting games. 
Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, Athens, Greece.","DOI":"10.1109\/ICIP.2018.8451491"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ishii, R., Ito, S., Ishihara, M., Harada, T., and Thawonmas, R. (2018, January 14\u201317). Monte-carlo tree search implementation of fighting game ais having personas. Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, Maastricht, The Netherlands.","DOI":"10.1109\/CIG.2018.8490367"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Bezerra, J.R., G\u00f3es, L.F.W., and Da Silva, A.R. (2020, January 25). Development of an autonomous agent based on reinforcement learning for a digital fighting game. Proceedings of the 2020 19th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), IEEE, Recife, Brazil.","DOI":"10.1109\/SBGames51465.2020.00017"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Kim, D.-W., Park, S., and Yang, S.-I. (2020, January 24\u201327). Mastering fighting game using deep reinforcement learning with self-play. Proceedings of the 2020 IEEE Conference on Games (CoG), IEEE, Osaka, Japan.","DOI":"10.1109\/CoG47356.2020.9231639"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"212","DOI":"10.1109\/TG.2021.3049539","article-title":"Creating pro-level ai for a real-time fighting game using deep reinforcement learning","volume":"14","author":"Oh","year":"2021","journal-title":"IEEE Trans. Games"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Lu, F., Yamamoto, K., Nomura, L.H., Mizuno, S., Lee, Y., and Thawonmas, R. (2013, January 4). Fighting game artificial intelligence competition platform. Proceedings of the 2013 IEEE 2nd Global Conference on Consumer Electronics (GCCE), IEEE, Makuhari, Japan.","DOI":"10.1109\/GCCE.2013.6664844"},{"key":"ref_42","unstructured":"Schmidhuber, J. 
A possibility for implementing curiosity and boredom in model-building neural controllers. Proceedings of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"230","DOI":"10.1109\/TAMD.2010.2056368","article-title":"Formal theory of creativity, fun, and intrinsic motivation (1990\u20132010)","volume":"2","author":"Schmidhuber","year":"2010","journal-title":"IEEE Trans. Auton. Ment. Dev."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Cuccu, G., Luciw, M., Schmidhuber, J., and Gomez, F. (2011, January 24\u201327). Intrinsically motivated neuroevolution for vision-based reinforcement learning. Proceedings of the 2011 IEEE International Conference on Development and Learning (ICDL), IEEE, Frankfurt, Germany.","DOI":"10.1109\/DEVLRN.2011.6037324"},{"key":"ref_45","unstructured":"Kearns, M., and Koller, D. (August, January 31). Efficient reinforcement learning in factored mdps. Proceedings of the International Joint Conference on Artificial Intelligence, Stockholm, Sweden."},{"key":"ref_46","first-page":"213","article-title":"R-max-a general polynomial time algorithm for near-optimal reinforcement learning","volume":"3","author":"Brafman","year":"2002","journal-title":"J. Mach. Learn. Res."},{"key":"ref_47","unstructured":"Klyubin, A.S., Polani, D., and Nehaniv, C.L. (2005, January 2\u20135). Empowerment: A universal agent-centric measure of control. Proceedings of the 2005 IEEE Congress on Evolutionary Computation, IEEE, Edinburgh, UK."},{"key":"ref_48","first-page":"2125","article-title":"Variational information maximisation for intrinsically motivated reinforcement learning","volume":"28","author":"Mohamed","year":"2015","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_49","unstructured":"Stadie, B.C., Levine, S., and Abbeel, P. (2015). Incentivizing exploration in reinforcement learning with deep predictive models. 
arXiv."},{"key":"ref_50","unstructured":"Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. (2016, January 5\u201311). Deep exploration via bootstrapped DQN. Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain."},{"key":"ref_51","unstructured":"Osband, I., Aslanides, J., and Cassirer, A. (2018, January 2\u20138). Randomized prior functions for deep reinforcement learning. Proceedings of the 32nd International Conference on Neural Information Processing System, Montr\u00e9al, QC, Canada."},{"key":"ref_52","unstructured":"Mobin, S.A., Arnemann, J.A., and Sommer, F. (2014, January 8\u201313). Information-based learning by agents in unbounded state spaces. Proceedings of the 28th International Conference on Neural Information Processing Systems, Montr\u00e9al, QC, Canada."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1007\/s12064-011-0142-z","article-title":"An information-theoretic approach to curiosity-driven reinforcement learning","volume":"131","author":"Still","year":"2012","journal-title":"Theory Biosci."},{"key":"ref_54","unstructured":"Storck, J., Hochreiter, S., and Schmidhuber, J. (December, January 27). Reinforcement driven information acquisition in non-deterministic environments. Proceedings of the International Conference on Artificial Neural Networks, Paris, France."},{"key":"ref_55","unstructured":"Gao, S., Ver Steeg, G., and Galstyan, A. (2016, January 5\u201311). Variational information maximization for feature selection. Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain."},{"key":"ref_56","unstructured":"Fang, Z., Yang, K., Tao, J., Lyu, J., Li, L., Shen, L., and Li, X. (2025). Exploration by Random Distribution Distillation. 
arXiv."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"0140","DOI":"10.34133\/icomputing.0140","article-title":"Action-Curiosity-Based Deep Reinforcement Learning Algorithm for Path Planning in a Nondeterministic Environment","volume":"4","author":"Xue","year":"2025","journal-title":"Intell. Comput."},{"key":"ref_58","unstructured":"Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016, January 5\u201311). Vime: Variational information maximizing exploration. Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Yang, Z., Du, H., Wu, Y., Jiang, Z., and Qu, H. (2024, January 16). Intrinsic Motivation Exploration via Self-Supervised Prediction in Reinforcement Learning. Proceedings of the 6th International Conference on Data-Driven Optimization of Complex Systems (DOCS), Hangzhou, China.","DOI":"10.1109\/DOCS63458.2024.10704242"},{"key":"ref_60","unstructured":"Sun, W., Cheng, X., Yu, X., Xu, H., Yang, Z., He, S., Zhao, J., and Liu, K. (2025). Probabilistic Uncertain Reward Model. arXiv."},{"key":"ref_61","unstructured":"Clevert, D.-A., Unterthiner, T., and Hochreiter, S. (2015). Fast and accurate deep network learning by exponential linear units (elus). arXiv."},{"key":"ref_62","unstructured":"Schmidhuber, J., Hochreiter, S., and Bengio, Y. (2001). Evaluating benchmark problems by random guessing. A Field Guide to Dynamical Recurrent Networks, Wiley-IEEE Press."},{"key":"ref_63","first-page":"12585","article-title":"Cleanrl: High-quality single-file implementations of deep reinforcement learning algorithms","volume":"23","author":"Huang","year":"2022","journal-title":"J. Mach. Learn. Res."},{"key":"ref_64","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. 
arXiv."},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Li, Z., Ji, Q., Ling, X., and Liu, Q. (2025). A comprehensive review of multi-agent reinforcement learning in video games. arXiv.","DOI":"10.36227\/techrxiv.173603149.94954703\/v2"}],"container-title":["Computers"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/10\/434\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T04:44:41Z","timestamp":1760589881000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/10\/434"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,13]]},"references-count":65,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2025,10]]}},"alternative-id":["computers14100434"],"URL":"https:\/\/doi.org\/10.3390\/computers14100434","relation":{},"ISSN":["2073-431X"],"issn-type":[{"value":"2073-431X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,13]]}}}