{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,3]],"date-time":"2025-11-03T15:22:48Z","timestamp":1762183368848,"version":"build-2065373602"},"reference-count":166,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,11,1]],"date-time":"2025-11-01T00:00:00Z","timestamp":1761955200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MAKE"],"abstract":"<jats:p>Large Language Models (LLMs) offer new opportunities to devise automated implementation generation methods that can tackle problem solving beyond traditional methods, which usually require algorithmic specifications and use only static domain knowledge. LLMs can support devising new methods to support activities in tackling open-ended problems, like problem framing, exploring possible solving approaches, feature elaboration and combination, advanced implementation assessment, and handling unexpected situations. This paper presents a detailed overview of the current work on LLMs, including model prompting, retrieval-augmented generation (RAG), and reinforcement learning. It then proposes a novel, LLM-based Cognitive Architecture (CA) to generate programming code starting from verbal discussions in natural language, a particular kind of problem-solving activity. The CA uses four strategies, three top-down and one bottom-up, to elaborate, adaptively process, memorize, and learn. 
Experiments are devised to study the CA performance, e.g., convergence rate, semantic fidelity, and code correctness.<\/jats:p>","DOI":"10.3390\/make7040134","type":"journal-article","created":{"date-parts":[[2025,11,3]],"date-time":"2025-11-03T13:22:36Z","timestamp":1762176156000},"page":"134","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["An Overview of Large Language Models and a Novel, Large Language Model-Based Cognitive Architecture for Solving Open-Ended Problems"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-0014-2322","authenticated-orcid":false,"given":"Hashmath","family":"Shaik","sequence":"first","affiliation":[{"name":"Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794-2350, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-5929-1377","authenticated-orcid":false,"given":"Gnaneswar","family":"Villuri","sequence":"additional","affiliation":[{"name":"Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794-2350, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2472-4014","authenticated-orcid":false,"given":"Alex","family":"Doboli","sequence":"additional","affiliation":[{"name":"Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794-2350, USA"}]}],"member":"1968","published-online":{"date-parts":[[2025,11,1]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1177\/0018720810369807","article-title":"Toward an understanding of macrocognition in teams: Predicting processes in complex collaborative contexts","volume":"52","author":"Fiore","year":"2010","journal-title":"Hum. Factors"},{"key":"ref_2","first-page":"19","article-title":"The process of solving complex problems","volume":"4","author":"Fischer","year":"2012","journal-title":"J. Probl. 
Solving"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"103672","DOI":"10.1016\/j.compedu.2019.103672","article-title":"Towards a generalized competency model of collaborative problem solving","volume":"143","author":"Sun","year":"2020","journal-title":"Comput. Educ."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"129","DOI":"10.1111\/cogs.12482","article-title":"Problem-solving phase transitions during team collaboration","volume":"42","author":"Wiltshire","year":"2018","journal-title":"Cogn. Sci."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"523","DOI":"10.1002\/acp.2350090605","article-title":"Cognitive processes in well-defined and ill-defined problem solving","volume":"9","author":"Schraw","year":"1995","journal-title":"Appl. Cogn. Psychol."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"298","DOI":"10.1016\/j.destud.2014.01.001","article-title":"The role of precedents in increasing creativity during iterative design of electronic embedded systems","volume":"35","author":"Doboli","year":"2014","journal-title":"Des. Stud."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1016\/j.knosys.2015.01.014","article-title":"Modeling semantic knowledge structures for creative problem solving: Studies on expressing concepts, categories, associations, goals and context","volume":"78","author":"Doboli","year":"2015","journal-title":"Knowl.-Based Syst."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Koza, J.R., Bennett, F.H., Andre, D., and Keane, M.A. (1996, January 7\u20138). Reuse, parameterized reuse, and hierarchical reuse of substructures in evolving electrical circuits using genetic programming. Proceedings of the Evolvable Systems: From Biology to Hardware, ICES\u201996, Tsukuba, Japan.","DOI":"10.1007\/3-540-63173-9_56"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Wirfs-Brock, R., Taylor, P., and Noble, J. (2006, January 21\u201323). 
Problem frame patterns: An exploration of patterns in the problem space. Proceedings of the Conference on Pattern Languages of Programs, Portland, OR, USA.","DOI":"10.1145\/1415472.1415497"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"597","DOI":"10.1109\/TCAD.2005.854633","article-title":"High-Level Synthesis of Delta-Sigma Modulators Optimized for Complexity, Sensitivity and Power Consumption","volume":"25","author":"Tang","year":"2006","journal-title":"IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"480","DOI":"10.1109\/TCAD.2006.885734","article-title":"Systematic Methodology for Designing Reconfigurable Delta-Sigma Modulator Topologies for Multimode Communication Systems","volume":"26","author":"Wei","year":"2007","journal-title":"IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1080\/00461527809529193","article-title":"Improvement of skills for solving-ill-defined problems","volume":"13","author":"Klein","year":"1978","journal-title":"Educ. Psychol."},{"key":"ref_13","first-page":"409","article-title":"Assessment of student problem-solving on ill-defined tasks","volume":"45","author":"Leighton","year":"1999","journal-title":"Alta. J. Educ. Res."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"2094","DOI":"10.1007\/s10489-020-01919-6","article-title":"A novel agent-based, evolutionary model for expressing the dynamics of creative open-problem solving in small groups","volume":"51","author":"Doboli","year":"2021","journal-title":"Appl. Intell."},{"key":"ref_15","unstructured":"Wang, R., Lehman, J., Rawal, A., Zhi, J., Li, Y., Clune, J., and Stanley, K. (2020, January 13\u201318). Enhanced POET: Open-ended reinforcement learning through unbounded invention of learning challenges and their solutions. 
Proceedings of the International Conference on Machine Learning (ICML), Virtual."},{"key":"ref_16","unstructured":"Newell, A., and Simon, H.A. (1972). Human Problem Solving, Prentice-Hall."},{"key":"ref_17","unstructured":"Aho, A., Lam, M., Sethi, R., and Ullman, J. (2006). Compilers: Principles, Techniques, and Tools, Addison-Wesley. [2nd ed.]."},{"key":"ref_18","unstructured":"Fingeroff, M. (2010). High-Level Synthesis Blue Book, Xlibris."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"McConaghy, T., Palmers, P., Gao, P., Steyaert, M., and Gielen, G. (2009). Variation-Aware Analog Structural Synthesis, Springer.","DOI":"10.1007\/978-90-481-2906-5"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1504","DOI":"10.1109\/TCAD.2003.818302","article-title":"Behavioral Modeling for High-Level Synthesis of Analog and Mixed-Signal Systems from VHDL-AMS","volume":"22","author":"Doboli","year":"2003","journal-title":"IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1556","DOI":"10.1109\/TCAD.2003.818374","article-title":"Exploration-Based High-Level Synthesis of Linear Analog Systems Operating at Low\/Medium Frequencies","volume":"22","author":"Doboli","year":"2003","journal-title":"IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"167","DOI":"10.3758\/BF03209392","article-title":"When concepts combine","volume":"5","author":"Wisniewski","year":"1997","journal-title":"Psychon. Bull. Rev."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Kruiskamp, W., and Leenaerts, D. (1995, January 12\u201316). Darwin: CMOS opamp synthesis by means of genetic algorithm. 
Proceedings of the Design Automation Conference, San Francisco, CA, USA.","DOI":"10.1145\/217474.217566"},{"key":"ref_24","first-page":"1","article-title":"Research directions in agent communication","volume":"4","author":"Chopra","year":"2013","journal-title":"ACM Trans. Intell. Syst. Technol."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"7280","DOI":"10.1073\/pnas.082080899","article-title":"Agent-based modeling: Methods and techniques for simulating human systems","volume":"99","author":"Bonabeau","year":"2002","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_26","unstructured":"Lapp, S., Jablokow, K., and McComb, C. (2017, January 6\u20139). Collaborating with Style: Using an Agent-Based Model to Simulate Cognitive Style Diversity in Problem Solving Teams. Proceedings of the ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Cleveland, OH, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Laird, J.E. (2012). The Soar Cognitive Architecture, The MIT Press.","DOI":"10.7551\/mitpress\/7688.001.0001"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"355","DOI":"10.1037\/0003-066X.51.4.355","article-title":"ACT: A simple theory of complex cognition","volume":"51","author":"Anderson","year":"1996","journal-title":"Am. Psychol."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"391","DOI":"10.1207\/s15327051hci1204_4","article-title":"An overview of the EPIC architecture for cognition and performance with application to human-computer interaction","volume":"12","author":"Kieras","year":"1997","journal-title":"Hum.-Comput. Interact."},{"key":"ref_30","first-page":"1","article-title":"The Sigma cognitive architecture and system: Towards functionally elegant grand unification","volume":"7","author":"Rosenbloom","year":"2016","journal-title":"J. Artif. Gen. Intell."},{"key":"ref_31","unstructured":"Sun, R. (2025, October 24). 
A Tutorial on CLARION 5.0. Cognitive Science Department, Rensselaer Polytechnic Institute. Available online: https:\/\/homepages.hass.rpi.edu\/rsun\/sun.tutorial.pdf."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1943","DOI":"10.1109\/TCAD.2017.2783344","article-title":"InnovA: A Cognitive Architecture for Computational Innovation through Robust Divergence and Its Application for Analog Circuit Design","volume":"37","author":"Li","year":"2018","journal-title":"IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst."},{"key":"ref_33","unstructured":"Vaswani, A. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA."},{"key":"ref_34","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","article-title":"Representation learning: A review and new perspectives","volume":"35","author":"Bengio","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"e253","DOI":"10.1017\/S0140525X16001837","article-title":"Building machines that learn and think like people","volume":"40","author":"Lake","year":"2017","journal-title":"Behav. Brain Sci."},{"key":"ref_37","unstructured":"Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.S., Cheng, M., Glaese, M., Balle, B., and Kasirzadeh, A. (2021). Ethical and social risks of harm from language models. arXiv."},{"key":"ref_38","unstructured":"Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., and Lundberg, S. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. 
arXiv."},{"key":"ref_39","unstructured":"Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., and Choi, Y. (2019, January 8\u201314). Defending against neural fake news. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Post, M., and Vilar, D. (2018). Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. arXiv.","DOI":"10.18653\/v1\/N18-1119"},{"key":"ref_41","unstructured":"Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. (2023). Parameter-efficient transfer learning for NLP. International Conference on Machine Learning, PMLR."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"665","DOI":"10.1038\/s42256-020-00257-z","article-title":"Shortcut learning in deep neural networks","volume":"2","author":"Geirhos","year":"2020","journal-title":"Nat. Mach. Intell."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Vatsal, S., Singh, A., and Tafreshi, S. (2024). Can GPT Improve the State of Prior Authorization Via Guideline Based Automated Question Answering?. AI for Health Equity and Fairness, Springer.","DOI":"10.1007\/978-3-031-63592-2_12"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1812","DOI":"10.1093\/jamia\/ocad259","article-title":"Improving large language models for clinical named entity recognition via prompt engineering","volume":"31","author":"Hu","year":"2024","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"ref_45","unstructured":"Boyle, T. (2024, December 26). Medical Transcriptions. Available online: https:\/\/www.kaggle.com\/datasets\/tboyle10\/medicaltranscriptions."},{"key":"ref_46","unstructured":"Centers for Disease Control and Prevention, and U.S. Food and Drug Administration (2024, December 26). 
Vaccine Adverse Event Reporting System (VAERS), Available online: https:\/\/vaers.hhs.gov\/data\/datasets.html."},{"key":"ref_47","first-page":"24824","article-title":"Chain-of-thought prompting elicits reasoning in large language models","volume":"35","author":"Wei","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_48","unstructured":"Fu, Y., Peng, H., Sabharwal, A., Clark, P., and Khot, T. (2022). Complexity-based prompting for multi-step reasoning. arXiv."},{"key":"ref_49","unstructured":"Zhou, Y., Geng, X., Shen, T., Tao, C., Long, G., Lou, J.G., and Shen, J. (2023). Thread of thought unraveling chaotic contexts. arXiv."},{"key":"ref_50","unstructured":"Li, X., Zhao, R., Chia, Y.K., Ding, B., Joty, S., Poria, S., and Bing, L. (2023). Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources. arXiv."},{"key":"ref_51","unstructured":"Li, C., Liang, J., Zeng, A., Chen, X., Hausman, K., Sadigh, D., Levine, S., Fei-Fei, L., Xia, F., and Ichter, B. (2023). Chain of code: Reasoning with a language model-augmented code emulator. arXiv."},{"key":"ref_52","unstructured":"Zhao, X., Li, M., Lu, W., Weber, C., Lee, J.H., Chu, K., and Wermter, S. (2023). Enhancing zero-shot chain-of-thought reasoning in large language models through logic. arXiv."},{"key":"ref_53","first-page":"229","article-title":"Chain-of-event prompting for multi-document summarization by large language models","volume":"20","author":"Bao","year":"2024","journal-title":"Int. J. Web Inf. Syst."},{"key":"ref_54","unstructured":"Wang, Z., Zhang, H., Li, C.L., Eisenschlos, J.M., Perot, V., Wang, Z., Miculicich, L., Fujii, Y., Shang, J., and Lee, C.Y. (2024). Chain-of-table: Evolving tables in the reasoning chain for table understanding. arXiv."},{"key":"ref_55","unstructured":"Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D. (2022). 
Self-consistency improves chain of thought reasoning in language models. arXiv."},{"key":"ref_56","unstructured":"Chia, Y.K., Chen, G., Tuan, L.A., Poria, S., and Bing, L. (2023). Contrastive chain-of-thought prompting. arXiv."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Liu, X., Pang, T., and Fan, C. (2023). Federated prompting and chain-of-thought reasoning for improving LLMs answering. International Conference on Knowledge Science, Engineering and Management, Springer Nature.","DOI":"10.1007\/978-3-031-40292-0_1"},{"key":"ref_58","first-page":"11809","article-title":"Tree of thoughts: Deliberate problem solving with large language models","volume":"36","author":"Yao","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Le Bras, R., and Choi, Y. (2022). Maieutic prompting: Logically consistent reasoning with recursive explanations. arXiv.","DOI":"10.18653\/v1\/2022.emnlp-main.82"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Wang, L., Xu, W., Lan, Y., Hu, Z., Lan, Y., Lee, R.K.W., and Lim, E.P. (2023). Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv.","DOI":"10.18653\/v1\/2023.acl-long.147"},{"key":"ref_61","unstructured":"Chen, W., Ma, X., Wang, X., and Cohen, W.W. (2022). Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv."},{"key":"ref_62","unstructured":"Hu, H., Lu, H., Zhang, H., Song, Y.Z., Lam, W., and Zhang, Y. (2023). Chain-of-symbol prompting elicits planning in large language models. arXiv."},{"key":"ref_63","first-page":"1","article-title":"Structured chain-of-thought prompting for code generation","volume":"34","author":"Li","year":"2023","journal-title":"ACM Trans. Softw. Eng. 
Methodol."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Fei, H., Li, B., Liu, Q., Bing, L., Li, F., and Chua, T.S. (2023). Reasoning implicit sentiment with chain-of-thought prompting. arXiv.","DOI":"10.18653\/v1\/2023.acl-short.101"},{"key":"ref_65","unstructured":"Singhal, K., Tu, T., Gottweis, J., Sayres, R., Wulczyn, E., Hou, L., Clark, K., Pfohl, S., Cole-Lewis, H., and Neal, D. (2023). Towards expert-level medical question answering with large language models. arXiv."},{"key":"ref_66","unstructured":"Zhang, Z., Zhang, A., Li, M., and Smola, A. (2022). Automatic chain of thought prompting in large language models. arXiv."},{"key":"ref_67","unstructured":"Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2022). ReAct: Synergizing reasoning and acting in language models. arXiv."},{"key":"ref_68","unstructured":"Diao, S., Wang, P., Lin, Y., Pan, R., Liu, X., and Zhang, T. (2023). Active prompting with chain-of-thought for large language models. arXiv."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Imani, S., Du, L., and Shrivastava, H. (2023). MathPrompter: Mathematical reasoning using large language models. arXiv.","DOI":"10.18653\/v1\/2023.acl-industry.4"},{"key":"ref_70","unstructured":"Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E.H., and Zhou, D. (2023). Large language models as analogical reasoners. arXiv."},{"key":"ref_71","unstructured":"Shao, Z., Gong, Y., Shen, Y., Huang, M., Duan, N., and Chen, W. (2023). Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models. arXiv."},{"key":"ref_72","unstructured":"Weston, J., and Sukhbaatar, S. (2023). System 2 Attention (is something you might need too). arXiv."},{"key":"ref_73","unstructured":"Wang, Y., and Zhao, Y. (2023). Metacognitive prompting improves understanding in large language models. 
arXiv."},{"key":"ref_74","unstructured":"Zhou, D., Sch\u00e4rli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., and Le, Q. (2022). Least-to-most prompting enables complex reasoning in large language models. arXiv."},{"key":"ref_75","unstructured":"Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., and Sabharwal, A. (2022). Decomposed prompting: A modular approach for solving complex tasks. arXiv."},{"key":"ref_76","unstructured":"Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. (2022). PAL: Program-aided Language Models. arXiv."},{"key":"ref_77","unstructured":"Cheng, Z., Xie, T., Shi, P., Li, C., Nadkarni, R., Hu, Y., Xiong, C., Radev, D., Ostendorf, M., and Zettlemoyer, L. (2022). Binding language models in symbolic languages. arXiv."},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Ye, Y., Hui, B., Yang, M., Li, B., Huang, F., and Li, Y. (2023). Large language models are versatile decomposers: Decompose evidence and questions for table-based reasoning. arXiv.","DOI":"10.1145\/3539618.3591708"},{"key":"ref_79","first-page":"9459","article-title":"Retrieval-augmented generation for knowledge-intensive NLP tasks","volume":"33","author":"Lewis","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_80","unstructured":"B\u00e9chard, P., and Ayala, O.M. (2024). Reducing hallucination in structured outputs via Retrieval-Augmented Generation. arXiv."},{"key":"ref_81","unstructured":"Dixit, P., and Oates, T. (2024). SBI-RAG: Enhancing math word problem solving for students through schema-based instruction and retrieval-augmented generation. arXiv."},{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Matsumoto, N., Moran, J., Choi, H., Hernandez, M.E., Venkatesan, M., Wang, P., and Moore, J.H. (2024). KRAGEN: A knowledge graph-enhanced RAG framework for biomedical problem solving using large language models. 
Bioinformatics, 40.","DOI":"10.1093\/bioinformatics\/btae353"},{"key":"ref_83","doi-asserted-by":"crossref","unstructured":"Liu, X., Wang, R., Song, Y., and Kong, L. (2024, January 25\u201329). GRAM: Generative Retrieval Augmented Matching of Data Schemas in the Context of Data Security. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining KDD \u201924, Barcelona, Spain.","DOI":"10.1145\/3637528.3671602"},{"key":"ref_84","doi-asserted-by":"crossref","unstructured":"Chen, S.-A., Miculicich, L., Eisenschlos, J.M., Wang, Z., Wang, Z., Chen, Y., Fujii, Y., Lin, H.T., Lee, C.Y., and Pfister, T. (2024). TableRAG: Million-token table understanding with language models. arXiv.","DOI":"10.52202\/079017-2382"},{"key":"ref_85","doi-asserted-by":"crossref","unstructured":"Yao, Z., Qi, W., Pan, L., Cao, S., Hu, L., Liu, W., Hou, L., and Li, J. (2024). Seakr: Self-aware knowledge retrieval for adaptive retrieval augmented generation. arXiv.","DOI":"10.18653\/v1\/2025.acl-long.1312"},{"key":"ref_86","unstructured":"Asai, A., Wu, Z., Wang, Y., Sil, A., and Hajishirzi, H. (2023, January 15). Self-RAG: Self-reflective retrieval augmented generation. Proceedings of the NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, New Orleans, LA, USA."},{"key":"ref_87","unstructured":"Wang, Z., Wang, Z., Le, L., Zheng, H.S., Mishra, S., Perot, V., Zhang, Y., Mattapalli, A., Taly, A., and Shang, J. (2024). Speculative RAG: Enhancing retrieval augmented generation through drafting. arXiv."},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Xu, R., Liu, H., Nag, S., Dai, Z., Xie, Y., Tang, X., Luo, C., Li, Y., Ho, J.C., and Yang, C. (2024). SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains. arXiv.","DOI":"10.18653\/v1\/2025.naacl-long.575"},{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Li, X., Xu, W., Zhao, R., Jiao, F., Joty, S., and Bing, L. (2024). 
Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks. arXiv.","DOI":"10.18653\/v1\/2025.acl-long.1244"},{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Hu, M., Zong, L., Wang, H., Zhou, J., Li, J., Gao, Y., Wong, K.F., Li, Y., and King, I. (2024). SeRTS: Self-Rewarding Tree Search for Biomedical Retrieval-Augmented Generation. Findings of ACL: EMNLP 2024, Association for Computational Linguistics.","DOI":"10.18653\/v1\/2024.findings-emnlp.71"},{"key":"ref_91","doi-asserted-by":"crossref","unstructured":"Qu, C., Yang, L., Qiu, M., Croft, W.B., Zhang, Y., and Iyyer, M. (2019, January 21\u201325). BERT with history answer embedding for conversational question answering. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval SIGIR \u201919, Paris, France.","DOI":"10.1145\/3331184.3331341"},{"key":"ref_92","doi-asserted-by":"crossref","unstructured":"Guti\u00e9rrez, B.J., Shu, Y., Gu, Y., Yasunaga, M., and Su, Y. (2024). HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models. arXiv.","DOI":"10.52202\/079017-1902"},{"key":"ref_93","doi-asserted-by":"crossref","unstructured":"Ho, X., Nguyen, A.K.D., Sugawara, S., and Aizawa, A. (2020, January 8\u201313). Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. Proceedings of the COLING 2020, Online.","DOI":"10.18653\/v1\/2020.coling-main.580"},{"key":"ref_94","doi-asserted-by":"crossref","unstructured":"Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W.W., Salakhutdinov, R., and Manning, C.D. (2018, October 31\u2013November 4). HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Proceedings of the EMNLP 2018, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1259"},{"key":"ref_95","doi-asserted-by":"crossref","unstructured":"Sarmah, B., Mehta, D., Hall, B., Rao, R., Patel, S., and Pasquali, S. (2024). 
HybridRAG: Integrating Knowledge Graphs and Vector Retrieval Augmented Generation for Efficient Information Extraction. ACM International Conference on AI in Finance (ICAIF \u201924), ACM.","DOI":"10.1145\/3677052.3698671"},{"key":"ref_96","doi-asserted-by":"crossref","unstructured":"Hu, Y., Lei, Z., Zhang, Z., Pan, B., Ling, C., and Zhao, L. (2024). GRAG: Graph Retrieval-Augmented Generation. arXiv.","DOI":"10.18653\/v1\/2025.findings-naacl.232"},{"key":"ref_97","doi-asserted-by":"crossref","unstructured":"Cai, Y., Guo, Z., Pei, Y., Bian, W., and Zheng, W. (2024). SimGRAG: Leveraging Similar Subgraphs for Knowledge Graphs Driven Retrieval-Augmented Generation. arXiv.","DOI":"10.18653\/v1\/2025.findings-acl.163"},{"key":"ref_98","first-page":"74530","article-title":"Augmenting language models with long-term memory","volume":"36","author":"Wang","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_99","unstructured":"Aadhithya, A., Kumar, S., and Soman, K.P. (2024). Enhancing Long-Term Memory using Hierarchical Aggregate Tree for Retrieval Augmented Generation. arXiv."},{"key":"ref_100","unstructured":"Qian, H., Zhang, P., Liu, Z., Mao, K., and Dou, Z. (2024). MemoRAG: Moving towards next-gen RAG via memory-inspired knowledge discovery. arXiv."},{"key":"ref_101","unstructured":"Bai, Y., Miao, Y., Chen, L., Wang, D., Li, D., Ren, Y., Xie, H., Yang, C., and Cai, X. (2024). Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback. arXiv."},{"key":"ref_102","unstructured":"Gan, C., Yang, D., Hu, B., Zhang, H., Li, S., Liu, Z., Shen, Y., Ju, L., Zhang, Z., and Gu, J. (2024). Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts. arXiv."},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Jiang, J., Chen, J., Li, J., Ren, R., Wang, S., Zhao, W.X., Song, Y., and Zhang, T. (2024). 
RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement. arXiv.","DOI":"10.18653\/v1\/2025.naacl-long.361"},{"key":"ref_104","unstructured":"Cho, J., Mahata, D., Irsoy, O., He, Y., and Bansal, M. (2024). M3DoCRAG: Multi-modal retrieval is what you need for multi-page multi-document understanding. arXiv."},{"key":"ref_105","unstructured":"Yu, S., Tang, C., Xu, B., Cui, J., Ran, J., Yan, Y., Liu, Z., Wang, S., Han, X., and Liu, Z. (2024). VisRAG: Vision-based retrieval-augmented generation on multi-modality documents. arXiv."},{"key":"ref_106","unstructured":"Li, Y., Li, Y., Wang, X., Jiang, Y., Zhang, Z., Zheng, X., Wang, H., Yu, P.S., Huang, F., and Zhou, J. (2024). Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent. arXiv."},{"key":"ref_107","doi-asserted-by":"crossref","unstructured":"Dhuliawala, S., Komeili, M., Xu, J., Raileanu, R., Li, X., Celikyilmaz, A., and Weston, J. (2023). Chain-of-verification reduces hallucination in large language models. arXiv.","DOI":"10.18653\/v1\/2024.findings-acl.212"},{"key":"ref_108","doi-asserted-by":"crossref","unstructured":"Zhao, R., Li, X., Joty, S., Qin, C., and Bing, L. (2023). Verify-and-edit: A knowledge-enhanced chain-of-thought framework. arXiv.","DOI":"10.18653\/v1\/2023.acl-long.320"},{"key":"ref_109","unstructured":"Zhai, Y., Bai, H., Lin, Z., Pan, J., Tong, S., Zhou, Y., Suhr, A., Xie, S., LeCun, Y., and Ma, Y. (2024). Fine-Tuning Large Vision-Language Models as Decision-Making Agents via reinforcement learning. arXiv."},{"key":"ref_110","unstructured":"Sutton, R.S., and Barto, A.G. (2018). Reinforcement Learning: An Introduction, MIT Press. [2nd ed.]."},{"key":"ref_111","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. 
arXiv."},{"key":"ref_112","first-page":"27730","article-title":"Training language models to follow instructions with human feedback","volume":"35","author":"Ouyang","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Shen, W., Zheng, R., Zhan, W., Zhao, J., Dou, S., Gui, T., Zhang, Q., and Huang, X. (2023). Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback. arXiv.","DOI":"10.18653\/v1\/2023.findings-emnlp.188"},{"key":"ref_114","unstructured":"Wang, B., Zheng, R., Chen, L., Liu, Y., Dou, S., Huang, C., Shen, W., Jin, S., Zhou, E., and Shi, C. (2024). Secrets of RLHF in large language models Part II: Reward modeling. arXiv."},{"key":"ref_115","doi-asserted-by":"crossref","unstructured":"Havrilla, A., Zhuravinskyi, M., Phung, D., Tiwari, A., Tow, J., Biderman, S., Anthony, Q., and Castricato, L. (2023, January 6\u201310). trlX: A framework for large scale reinforcement learning from human feedback. Proceedings of the EMNLP 2023, Singapore.","DOI":"10.18653\/v1\/2023.emnlp-main.530"},{"key":"ref_116","unstructured":"Cui, G., Yuan, L., Ding, N., Yao, G., Zhu, W., Ni, Y., Xie, G., Liu, Z., and Sun, M. (2023). Ultrafeedback: Boosting language models with high-quality feedback. arXiv."},{"key":"ref_117","unstructured":"Liu, C.Y., Zeng, L., Liu, J., Yan, R., He, J., Wang, C., Yan, S., Liu, Y., and Zhou, Y. (2024). Skywork-reward: Bag of tricks for reward modeling in LLMs. arXiv."},{"key":"ref_118","unstructured":"Ivison, H., Wang, Y., Pyatkin, V., Lambert, N., Peters, M., Dasigi, P., Jang, J., Wadden, D., Smith, N.A., and Beltagy, I. (2023). Camels in a changing climate: Enhancing LM adaptation with Tulu 2. arXiv."},{"key":"ref_119","unstructured":"Li, L., Chai, Y., Wang, S., Sun, Y., Tian, H., Zhang, N., and Wu, H. (2023). Tool-augmented reward modeling. 
arXiv."},{"key":"ref_120","unstructured":"Munos, R., Valko, M., Calandriello, D., Azar, M.G., Rowland, M., Guo, Z.D., Tang, Y., Geist, M., Mesnard, T., and Michi, A. (2023). Nash learning from human feedback. arXiv."},{"key":"ref_121","unstructured":"Gao, L., Schulman, J., and Hilton, J. (2023, January 23\u201329). Scaling laws for reward model overoptimization. Proceedings of the International Conference on Machine Learning (ICML), Honolulu, HI, USA."},{"key":"ref_122","unstructured":"Kaufmann, T., Weng, P., Bengs, V., and H\u00fcllermeier, E. (2023). A survey of reinforcement learning from human feedback. arXiv."},{"key":"ref_123","unstructured":"Lee, H., Phatale, S., Mansoor, H., Lu, K.R., Mesnard, T., Ferret, J., Bishop, C., Hall, E., Carbune, V., and Rastogi, A. (2023). RLAIF vs. RLHF: Scaling reinforcement learning from human feedback with AI feedback. arXiv."},{"key":"ref_124","unstructured":"Li, A., Xiao, Q., Cao, P., Tang, J., Yuan, Y., Zhao, Z., Chen, X., Zhang, L., Li, X., and Yang, K. (2024). HRLAIF: Improvements in helpfulness and harmlessness in open-domain reinforcement learning from AI feedback. arXiv."},{"key":"ref_125","unstructured":"Xu, Z., Jiang, F., Niu, L., Deng, Y., Poovendran, R., Choi, Y., and Lin, B.Y. (2024). Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. arXiv."},{"key":"ref_126","unstructured":"Wang, Z., Bukharin, A., Delalleau, O., Egert, D., Shen, G., Zeng, J., Kuchaiev, O., and Dong, Y. (2024). Helpsteer2-preference: Complementing ratings with preferences. arXiv."},{"key":"ref_127","unstructured":"Du, Y., Watkins, O., Wang, Z., Colas, C., Darrell, T., Abbeel, P., Gupta, A., and Andreas, J. (2023, January 23\u201329). Guiding pretraining in reinforcement learning with large language models. Proceedings of the International Conference on Machine Learning (ICML), Honolulu, HI, USA."},{"key":"ref_128","unstructured":"Kwon, M., Xie, S.M., Bullard, K., and Sadigh, D. (2023).
Reward design with language models. arXiv."},{"key":"ref_129","unstructured":"Ma, Y.J., Liang, W., Wang, G., Huang, D.A., Bastani, O., Jayaraman, D., Zhu, Y., Fan, L., and Anandkumar, A. (2023). Eureka: Human-level reward design via coding large language models. arXiv."},{"key":"ref_130","unstructured":"Song, J., Zhou, Z., Liu, J., Fang, C., Shu, Z., and Ma, L. (2023). Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics. arXiv."},{"key":"ref_131","unstructured":"Yuan, W., Pang, R.Y., Cho, K., Sukhbaatar, S., Xu, J., and Weston, J. (2024). Self-rewarding language models. arXiv."},{"key":"ref_132","doi-asserted-by":"crossref","unstructured":"Wang, H., Zariphopoulou, T., and Zhou, X. (2018). Exploration versus exploitation in reinforcement learning: A stochastic control approach. arXiv.","DOI":"10.2139\/ssrn.3316387"},{"key":"ref_133","unstructured":"Dann, C., Mansour, Y., Mohri, M., Sekhari, A., and Sridharan, K. (2022, January 17\u201323). Guarantees for epsilon-greedy reinforcement learning with function approximation. Proceedings of the ICML, Baltimore, MD, USA."},{"key":"ref_134","doi-asserted-by":"crossref","unstructured":"Tokic, M. (2010). Adaptive \u03b5-greedy exploration in reinforcement learning based on value differences. Annual Conference on Artificial Intelligence, Springer.","DOI":"10.1007\/978-3-642-16111-7_23"},{"key":"ref_135","unstructured":"Cesa-Bianchi, N., Gentile, C., Lugosi, G., and Neu, G. (2017, January 4\u20139). Boltzmann exploration done right. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA."},{"key":"ref_136","doi-asserted-by":"crossref","unstructured":"Ma, R., Luijkx, J., Ajanovic, Z., and Kober, J. (2024). ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models. 
arXiv.","DOI":"10.1109\/ICRA55743.2025.11127622"},{"key":"ref_137","doi-asserted-by":"crossref","unstructured":"Zhao, Q., Fu, H., Sun, C., and Konidaris, G. (2024). EPO: Hierarchical LLM agents with environment preference optimization. arXiv.","DOI":"10.18653\/v1\/2024.emnlp-main.367"},{"key":"ref_138","unstructured":"Nguyen, H.-T., and Satoh, K. (2024). Balancing Exploration and Exploitation in LLM using Soft RLLF for Enhanced Negation Understanding. arXiv."},{"key":"ref_139","doi-asserted-by":"crossref","unstructured":"Yang, F., Zhao, P., Wang, Z., Wang, L., Zhang, J., Garg, M., Lin, Q., Rajmohan, S., and Zhang, D. (2023). Empower large language model to perform better on industrial domain-specific question answering. arXiv.","DOI":"10.18653\/v1\/2023.emnlp-industry.29"},{"key":"ref_140","first-page":"53728","article-title":"Direct preference optimization: Your language model is secretly a reward model","volume":"36","author":"Rafailov","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_141","unstructured":"Pal, A., Karkhanis, D., Dooley, S., Roberts, M., Naidu, S., and White, C. (2024). Smaug: Fixing failure modes of preference optimisation with DPO-positive. arXiv."},{"key":"ref_142","unstructured":"Wu, J., Xie, Y., Yang, Z., Wu, J., Gao, J., Ding, B., Wang, X., and He, X. (2024). \u03b2-DPO: Direct Preference Optimization with Dynamic \u03b2. arXiv."},{"key":"ref_143","unstructured":"Kim, D., Kim, Y., Song, W., Kim, H., Kim, Y., Kim, S., and Park, C. (2024). sDPO: Don\u2019t Use Your Data All at Once. arXiv."},{"key":"ref_144","unstructured":"Yang, Z., Wan, F., Zhong, L., Shi, T., and Quan, X. (2024). Weighted-Reward Preference Optimization for Implicit Model Fusion. arXiv."},{"key":"ref_145","unstructured":"Chowdhury, S.R., Kini, A., and Natarajan, N. (2024). Provably robust DPO: Aligning language models with noisy feedback. 
arXiv."},{"key":"ref_146","unstructured":"Azar, M.G., Guo, Z.D., Piot, B., Munos, R., Rowland, M., Valko, M., and Calandriello, D. (2024, January 2\u20134). A general theoretical paradigm to understand learning from human preferences. Proceedings of the AISTATS, Valencia, Spain."},{"key":"ref_147","unstructured":"Sun, H., Shen, Y., and Ton, J.F. (2024). Rethinking Bradley-Terry Models in Preference-Based Reward Modeling: Foundations, Theory, and Alternatives. arXiv."},{"key":"ref_148","unstructured":"Wu, Y., Sun, Z., Yuan, H., Ji, K., Yang, Y., and Gu, Q. (2024). Self-play preference optimization for language model alignment. arXiv."},{"key":"ref_149","unstructured":"Wang, C., Zhao, Z., Zhu, C., Sankararaman, K.A., Valko, M., Cao, X., Chen, Z., Khabsa, M., Chen, Y., and Ma, H. (2024). Preference optimization with multi-sample comparisons. arXiv."},{"key":"ref_150","unstructured":"Fisch, A., Eisenstein, J., Zayats, V., Agarwal, A., Beirami, A., Nagpal, C., Shaw, P., and Berant, J. (2024). Robust preference optimization through reward model distillation. arXiv."},{"key":"ref_151","doi-asserted-by":"crossref","unstructured":"Dong, Y., Luo, K., Jiang, X., Jin, Z., and Li, G. (2023). PACE: Improving Prompt with Actor-Critic Editing for Large Language Model. arXiv.","DOI":"10.18653\/v1\/2024.findings-acl.436"},{"key":"ref_152","unstructured":"Ziegler, D.M., Stiennon, N., Wu, J., Brown, T.B., Radford, A., Amodei, D., Christiano, P., and Irving, G. (2019). Fine-tuning language models from human preferences. arXiv."},{"key":"ref_153","unstructured":"Yao, W., Mi, H., and Yu, D. (2024). HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows. arXiv."},{"key":"ref_154","unstructured":"Liu, G., Ji, K., Zheng, R., Wu, Z., Dun, C., Gu, Q., and Yan, L. (2024). Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization. arXiv."},{"key":"ref_155","unstructured":"Spiro, R.J., Bruce, B.C., and Brewer, W.F. 
(1980). Schemata: The building blocks of cognition. Theoretical Issues in Reading Comprehension, Lawrence Erlbaum."},{"key":"ref_156","first-page":"34651","article-title":"Towards understanding grokking: An effective theory of representation learning","volume":"35","author":"Liu","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst. (NeurIPS)"},{"key":"ref_157","unstructured":"d\u2019Avila Garcez, A., and Lamb, L.C. (2020). Neurosymbolic AI: The 3rd Wave. arXiv, Available online: https:\/\/arxiv.org\/abs\/2012.05876."},{"key":"ref_158","unstructured":"Mao, J., Gan, C., Kohli, P., Tenenbaum, J.B., and Wu, J. (2019). The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision. arXiv, Available online: https:\/\/arxiv.org\/abs\/1904.12584."},{"key":"ref_159","unstructured":"Xu, H., Hu, J., Zhang, K., Yu, L., Tang, Y., Song, X., Duan, Y., Ai, L., and Shi, B. (2025). SEDM: Scalable Self-Evolving Distributed Memory for Agents. arXiv, Available online: https:\/\/arxiv.org\/html\/2509.09498v3."},{"key":"ref_160","doi-asserted-by":"crossref","unstructured":"Zhu, Z., Liao, Y., Xu, C., Guan, Y., Wang, Y., and Wang, Y. (2024, January 12\u201316). RA2FD: Distilling Faithfulness into Efficient Dialogue Systems. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), Miami, FL, USA.","DOI":"10.18653\/v1\/2024.emnlp-main.685"},{"key":"ref_161","unstructured":"Lu, S., Wang, L., Wen, S., Wang, Z., and Zhang, H. (2025). FedDTRE: Federated Dialogue Generation Models Powered by Trustworthiness Evaluation. arXiv."},{"key":"ref_162","doi-asserted-by":"crossref","unstructured":"Sun, W., Cai, H., Chen, H., Ren, P., Chen, Z., de Rijke, M., and Ren, Z. (2023). Answering ambiguous questions via iterative prompting. arXiv.","DOI":"10.18653\/v1\/2023.acl-long.424"},{"key":"ref_163","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1186\/s13063-021-05322-5","article-title":"Fidelity is not easy! 
Challenges and guidelines for assessing fidelity in complex interventions","volume":"22","author":"Ginsburg","year":"2021","journal-title":"Trials"},{"key":"ref_164","first-page":"331","article-title":"Llamea: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics","volume":"29","year":"2024","journal-title":"IEEE Trans. Evol. Comput."},{"key":"ref_165","unstructured":"Malone, T., and Bernstein, M. (2015). Handbook of Collective Intelligence, The MIT Press."},{"key":"ref_166","unstructured":"Malone, T., and Bernstein, M. (2015). Collective Intelligence in Teams and Organizations. Handbook of Collective Intelligence, The MIT Press."}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/134\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,3]],"date-time":"2025-11-03T14:32:27Z","timestamp":1762180347000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/134"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,1]]},"references-count":166,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["make7040134"],"URL":"https:\/\/doi.org\/10.3390\/make7040134","relation":{},"ISSN":["2504-4990"],"issn-type":[{"value":"2504-4990","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,1]]}}}