{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T16:51:56Z","timestamp":1776358316899,"version":"3.51.2"},"reference-count":72,"publisher":"IOP Publishing","issue":"4","license":[{"start":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T00:00:00Z","timestamp":1766534400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T00:00:00Z","timestamp":1766534400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/iopscience.iop.org\/info\/page\/text-and-data-mining"}],"funder":[{"DOI":"10.13039\/100006224","name":"Argonne National Laboratory","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100006224","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/100006151","name":"Basic Energy Sciences","doi-asserted-by":"crossref","award":["C-STEEL"],"award-info":[{"award-number":["C-STEEL"]}],"id":[{"id":"10.13039\/100006151","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["iopscience.iop.org"],"crossmark-restriction":false},"short-container-title":["Mach. Learn.: Sci. Technol."],"published-print":{"date-parts":[[2025,12,30]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>As large language models (LLMs) become central tools in science, improving their reasoning capabilities is critical for meaningful and trustworthy applications. We introduce a Socratic agent for scientific reasoning, implemented through a structured system prompt that guides LLMs via classical principles of inquiry. Unlike typical prompt engineering or retrieval-based methods, our approach leverages definition, analogy, hypothesis elimination, and other Socratic techniques to generate more coherent, critical, and domain-aware responses. 
We evaluate the agent across diverse scientific domains and benchmark it on the abstraction and reasoning corpus challenge dataset, achieving 97.15% under a fixed prompting protocol and without fine-tuning or external tools. Expert evaluation shows improved reasoning depth, clarity, and adaptability over conventional LLM outputs, suggesting that structured prompting rooted in philosophical reasoning can improve the scientific utility of language models.<\/jats:p>","DOI":"10.1088\/2632-2153\/ae277f","type":"journal-article","created":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T15:48:12Z","timestamp":1764776892000},"page":"045073","update-policy":"https:\/\/doi.org\/10.1088\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Towards philosophical reasoning with agentic LLMs: Socratic method for scientific assistance"],"prefix":"10.1088","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6016-3122","authenticated-orcid":true,"given":"Hassan","family":"Harb","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8044-4936","authenticated-orcid":false,"given":"Yunkai","family":"Sun","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5829-4724","authenticated-orcid":false,"given":"Mustafa","family":"Unal","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3342-6484","authenticated-orcid":true,"given":"Abhishek","family":"Aggarwal","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3061-0144","authenticated-orcid":true,"given":"Chiara","family":"Bissolotti","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9058-8351","authenticated-orcid":false,"given":"Isik 
Su","family":"Buyuker","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8729-0861","authenticated-orcid":true,"given":"Sungil","family":"Hong","sequence":"additional","affiliation":[]},{"given":"Luke R","family":"Johnson","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6117-1554","authenticated-orcid":true,"given":"Lateef","family":"Jolaoso","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2694-5582","authenticated-orcid":true,"given":"Bratin","family":"Sengupta","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4536-6060","authenticated-orcid":true,"given":"Michael","family":"Stuhr","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1073-3799","authenticated-orcid":false,"given":"Zhenzhen","family":"Yang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5219-7517","authenticated-orcid":false,"given":"Brian J","family":"Ingram","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9571-3307","authenticated-orcid":false,"given":"Rajeev","family":"Surendran Assary","sequence":"additional","affiliation":[]}],"member":"266","published-online":{"date-parts":[[2025,12,24]]},"reference":[{"key":"mlstae277fbib1","doi-asserted-by":"publisher","first-page":"1649","DOI":"10.1021\/acs.jcim.3c00285","type":"journal-article","article-title":"Do large language models understand chemistry? A conversation with ChatGPT","volume":"63","author":"Castro Nascimento","year":"2023","journal-title":"J. Chem. Inf. 
Model."},{"key":"mlstae277fbib2","article-title":"Transformers and large language models for chemistry and drug discovery","author":"Bran","year":"2023","type":"preprint"},{"key":"mlstae277fbib3","article-title":"Structured information extraction from complex scientific text with fine-tuned large language models","author":"Dunn","year":"2022","type":"preprint"},{"key":"mlstae277fbib4","doi-asserted-by":"publisher","first-page":"368","DOI":"10.1039\/D2DD00087C","type":"journal-article","article-title":"Assessment of chemistry knowledge in large language models that generate code","volume":"2","author":"White","year":"2023","journal-title":"Digit. Discov."},{"key":"mlstae277fbib5","doi-asserted-by":"publisher","first-page":"866","DOI":"10.1016\/j.omtn.2023.08.009","type":"journal-article","article-title":"Artificial intelligence enabled ChatGPT and large language models in drug target discovery, drug discovery, and development","volume":"33","author":"Chakraborty","year":"2023","journal-title":"Mol. Ther.\u2014Nucleic Acids"},{"key":"mlstae277fbib6","article-title":"A survey on evaluation of large language models","author":"Chang","year":"2023","type":"preprint"},{"key":"mlstae277fbib7","doi-asserted-by":"publisher","first-page":"2514","DOI":"10.1039\/D4SC03921A","type":"journal-article","article-title":"A review of large language models and autonomous agents in chemistry","volume":"16","author":"Ramos","year":"2025","journal-title":"Chem. 
Sci."},{"key":"mlstae277fbib8","article-title":"A comprehensive overview of large language models","author":"Naveed","year":"2023","type":"preprint"},{"key":"mlstae277fbib9","doi-asserted-by":"crossref","DOI":"10.1088\/2632-2153\/ae011a","type":"preprint","article-title":"Examples of LLM applications in materials science and chemistry: towards automation, assistants, agents, and accelerated scientific discovery","author":"Zimmermann","year":"2025"},{"key":"mlstae277fbib10","doi-asserted-by":"publisher","first-page":"1233","DOI":"10.1039\/D3DD00113J","type":"journal-article","article-title":"14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon","volume":"2","author":"Jablonka","year":"2023","journal-title":"Digit. Discov."},{"key":"mlstae277fbib11","article-title":"Reflections from the 2024 large language model (LLM) hackathon for applications in materials science and chemistry","author":"Zimmermann","year":"2024","type":"preprint"},{"key":"mlstae277fbib12","article-title":"MolecularGPT: open large language model (LLM) for few-shot molecular property prediction","author":"Liu","year":"2024","type":"preprint"},{"key":"mlstae277fbib13","doi-asserted-by":"publisher","DOI":"10.26434\/chemrxiv-2025-n1b4l","type":"journal-article","article-title":"Learning advance: robotics-LLM guided hypotheses generation for the discovery of chemical knowledge","author":"Yin","year":"2025","journal-title":"ChemRxiv Preprint"},{"key":"mlstae277fbib14","doi-asserted-by":"publisher","DOI":"10.26434\/chemrxiv-2025-rwgt8","type":"journal-article","article-title":"Augmented and programmatically optimized LLM prompts reduce chemical hallucinations","author":"Reed","year":"2025","journal-title":"ChemRxiv Preprint"},{"key":"mlstae277fbib15","article-title":"Introducing OpenAI o1","author":"OpenAI Team","type":"web-resource"},{"key":"mlstae277fbib16","article-title":"Introducing OpenAI o3 and o4-mini","author":"OpenAI 
Team","type":"web-resource"},{"key":"mlstae277fbib17","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.acl-long.294","type":"preprint","article-title":"Reasoning with language model prompting: a survey","author":"Qiao","year":"2023"},{"key":"mlstae277fbib18","article-title":"Reasoning with large language models, a survey","author":"Plaat","year":"2024","type":"preprint"},{"key":"mlstae277fbib19","article-title":"Language models are few-shot learners","author":"Brown","year":"2020","type":"preprint"},{"key":"mlstae277fbib20","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.ijcnlp-main.45","type":"preprint","article-title":"A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity","author":"Bang","year":"2023"},{"key":"mlstae277fbib21","article-title":"Chain-of-thought prompting elicits reasoning in large language models","author":"Wei","year":"2023","type":"preprint"},{"key":"mlstae277fbib22","article-title":"On the opportunities and risks of foundation models","author":"Bommasani","year":"2022","type":"preprint"},{"key":"mlstae277fbib23","article-title":"Agentic reasoning: reasoning LLMs with tools for the deep research","author":"Wu","year":"2025","type":"preprint"},{"key":"mlstae277fbib24","article-title":"Flow of reasoning: training LLMs for divergent reasoning with minimal examples","author":"Yu","year":"2025","type":"preprint"},{"key":"mlstae277fbib25","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2024.emnlp-main.880","type":"preprint","article-title":"SciAgent: tool-augmented language models for scientific reasoning","author":"Ma","year":"2024"},{"key":"mlstae277fbib26","article-title":"Advancing reasoning in large language models: promising methods and approaches","author":"Patil","year":"2025","type":"preprint"},{"key":"mlstae277fbib27","article-title":"Understanding LLM scientific reasoning through promptings and model\u2019s explanation on the 
answers","author":"Rueda","year":"2025","type":"preprint"},{"key":"mlstae277fbib28","article-title":"A survey on mathematical reasoning and optimization with large language models","author":"Forootani","year":"2025","type":"preprint"},{"key":"mlstae277fbib29","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2024.emnlp-main.13","type":"preprint","article-title":"Thinking fair and slow: on the efficacy of structured prompts for debiasing language models","author":"Furniturewala","year":"2024"},{"key":"mlstae277fbib30","article-title":"LogicPuzzleRL: cultivating robust mathematical reasoning in LLMs via reinforcement learning","author":"Wong","year":"2025","type":"preprint"},{"key":"mlstae277fbib31","article-title":"MedReason: eliciting factual medical reasoning steps in LLMs via knowledge graphs","author":"Wu","year":"2025","type":"preprint"},{"key":"mlstae277fbib32","doi-asserted-by":"crossref","DOI":"10.1093\/gigascience\/giaf109","type":"preprint","article-title":"A retrieval-augmented knowledge mining method with deep thinking LLMs for biomedical research and clinical support","author":"Feng","year":"2025"},{"key":"mlstae277fbib33","article-title":"DrSR: LLM based scientific equation discovery with dual reasoning from data and experience","author":"Wang","year":"2025","type":"preprint"},{"key":"mlstae277fbib34","article-title":"Self-GIVE: associative thinking from limited structured knowledge for enhanced large language model reasoning","author":"He","year":"2025","type":"preprint"},{"key":"mlstae277fbib35","doi-asserted-by":"crossref","DOI":"10.36227\/techrxiv.173324189.99227671\/v1","type":"preprint","article-title":"Generative AI as a tool for enhancing reflective learning in students","author":"Yuan","year":"2024"},{"key":"mlstae277fbib36","article-title":"A survey of slow thinking-based reasoning LLMs using reinforced learning and inference-time scaling 
law","author":"Pan","year":"2025","type":"preprint"},{"key":"mlstae277fbib37","article-title":"Self-consistency improves chain of thought reasoning in language models","author":"Wang","year":"2023","type":"preprint"},{"key":"mlstae277fbib38","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.findings-emnlp.378","type":"preprint","article-title":"Measuring and narrowing the compositionality gap in language models","author":"Press","year":"2023"},{"key":"mlstae277fbib39","article-title":"ReAct: synergizing reasoning and acting in language models","author":"Yao","year":"2023","type":"preprint"},{"key":"mlstae277fbib40","first-page":"11809","type":"journal-article","article-title":"Tree of thoughts: deliberate problem solving with large language models","volume":"36","author":"Yao","year":"2023","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"mlstae277fbib41","doi-asserted-by":"publisher","first-page":"24","DOI":"10.3390\/robotics14030024","type":"journal-article","article-title":"Agentic workflows for improving large language model reasoning in robotic object-centered planning","volume":"14","author":"Moncada-Ramirez","year":"2025","journal-title":"Robotics"},{"key":"mlstae277fbib42","article-title":"LLM-based agentic reasoning frameworks: a survey from methods to scenarios","author":"Zhao","year":"2025","type":"preprint"},{"key":"mlstae277fbib43","article-title":"The landscape of agentic reinforcement learning for LLMs: a survey","author":"Zhang","year":"2025","type":"preprint"},{"key":"mlstae277fbib44","doi-asserted-by":"publisher","first-page":"8633","DOI":"10.1038\/s41467-025-63804-5","type":"journal-article","article-title":"A brain-inspired agentic architecture to improve planning with LLMs","volume":"16","author":"Webb","year":"2025","journal-title":"Nat. 
Commun."},{"key":"mlstae277fbib45","author":"Losee","year":"2001","type":"book"},{"key":"mlstae277fbib46","first-page":"743","type":"journal-article","article-title":"An introduction to the philosophy of science","volume":"44","author":"O\u2019hear","year":"1993","journal-title":"Br. J. Philos. Sci."},{"key":"mlstae277fbib47","author":"Carnap","year":"2012","type":"book"},{"key":"mlstae277fbib48","author":"Salmon","year":"1999","type":"book"},{"key":"mlstae277fbib49","author":"Seeskin","year":"2016","type":"book"},{"key":"mlstae277fbib50","article-title":"The Socratic way of questioning: how to use Socrates\u2019 method to discover the truth and argue wisely","author":"Thinknetic","year":"2022","type":"other"},{"key":"mlstae277fbib51","article-title":"The Socratic method of teaching: engaging students through inquiry and critical thinking\u2014zone of education\u2014your gateway to quality education","author":"Yousafzai","type":"web-resource"},{"key":"mlstae277fbib52","doi-asserted-by":"publisher","first-page":"254","DOI":"10.1002\/9780470996218.ch16","type":"book","article-title":"Socratic method and Socratic truth","author":"Tarrant","year":"2005"},{"key":"mlstae277fbib53","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2024.findings-naacl.175","type":"preprint","article-title":"SocREval: large language models with the Socratic method for reference-free reasoning evaluation","author":"He","year":"2024"},{"key":"mlstae277fbib54","article-title":"The Socratic method of large language models","author":"Szopa","type":"web-resource"},{"key":"mlstae277fbib55","article-title":"Large language models and the Socratic method","author":"Br\u00eencoveanu","type":"web-resource"},{"key":"mlstae277fbib56","doi-asserted-by":"publisher","first-page":"3730","DOI":"10.1145\/3627673.3679881","type":"conference-proceedings","article-title":"Boosting large language models with Socratic method for conversational mathematics teaching","author":"Ding","year":"2024","journal-title":"Proc. 33rd ACM Int. Conf. on Information and Knowledge Management (ACM)"},{"key":"mlstae277fbib57","doi-asserted-by":"publisher","DOI":"10.26434\/chemrxiv-2025-djf43","type":"journal-article","article-title":"The hitchhiker\u2019s guide to Socratic methods in prompting large language models for chemistry applications","author":"Harb","year":"2025","journal-title":"ChemRxiv Preprint"},{"key":"mlstae277fbib58","doi-asserted-by":"crossref","DOI":"10.1109\/CCWC57344.2023.10099179","type":"preprint","article-title":"Prompting large language models with the Socratic method","author":"Chang","year":"2023"},{"key":"mlstae277fbib59","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.emnlp-main.255","type":"preprint","article-title":"The art of Socratic questioning: recursive thinking with large language models","author":"Qi","year":"2023"},{"key":"mlstae277fbib60","article-title":"Fine tuning a large language model for Socratic interactions","author":"Bonino","type":"other"},{"key":"mlstae277fbib61","doi-asserted-by":"publisher","first-page":"85693","DOI":"10.52202\/079017-2721","type":"journal-article","article-title":"SocraticLM: exploring Socratic personalized teaching with large language models","volume":"37","author":"Liu","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"mlstae277fbib62","article-title":"Towards revealing the mystery behind chain of thought: a theoretical perspective","author":"Feng","year":"2023","type":"preprint"},{"key":"mlstae277fbib63","article-title":"Igniting language intelligence: the hitchhiker\u2019s guide from chain-of-thought reasoning to language agents","author":"Zhang","year":"2023","type":"preprint"},{"key":"mlstae277fbib64","article-title":"Papers with code\u2014ARC (AI2 reasoning challenge) dataset","author":"Clark","type":"web-resource"},{"key":"mlstae277fbib65","article-title":"Think you have Solved Question Answering? 
Try ARC, the AI2 Reasoning Challenge","author":"Clark","type":"preprint"},{"key":"mlstae277fbib66","doi-asserted-by":"publisher","first-page":"2578","DOI":"10.18653\/v1\/D19-1260","type":"conference-proceedings","article-title":"Quick and (not so) dirty: unsupervised selection of justification sentences for multi-hop question answering","author":"Yadav","year":"2019"},{"key":"mlstae277fbib67","article-title":"Argo | Argonne National Laboratory","author":"Argonne National Lab","type":"web-resource"},{"key":"mlstae277fbib68","article-title":"OpenAI Platform","author":"OpenAI Team","type":"web-resource"},{"key":"mlstae277fbib69","article-title":"Models\u2014OpenAI API","author":"OpenAI Team","type":"web-resource"},{"key":"mlstae277fbib70","article-title":"Submissions\u2014ARC: AI2 reasoning challenge leaderboard","author":"Clark","type":"web-resource"},{"key":"mlstae277fbib71","article-title":"Are large language models superhuman chemists?","author":"Mirza","year":"2024","type":"preprint"},{"key":"mlstae277fbib72","doi-asserted-by":"crossref","DOI":"10.1038\/s43588-025-00836-3","type":"preprint","article-title":"Probing the limitations of multimodal language models for chemistry and materials research","author":"Alampara","year":"2025"}],"container-title":["Machine Learning: Science and 
Technology"],"original-title":[],"link":[{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f","content-type":"text\/html","content-version":"am","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"syndication"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"similarity-checking"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T15:12:54Z","timestamp":1766589174000},"score":1,"resource":{"primary":{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/ae277f"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,24]]},"references-count":72,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,12,24]]},"published-print":{"date-parts":[[2025,12,30]]}},"URL":"https:\/\/doi.org\/10.1088\/2632-2153\/ae277f","relation":{"has-preprint":[{"id-type":"doi","id":"10.26434\/chemrxiv-2025-
rwxdk","asserted-by":"object"}]},"ISSN":["2632-2153"],"issn-type":[{"value":"2632-2153","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,24]]},"assertion":[{"value":"Towards philosophical reasoning with agentic LLMs: Socratic method for scientific assistance","name":"article_title","label":"Article Title"},{"value":"Machine Learning: Science and Technology","name":"journal_title","label":"Journal Title"},{"value":"paper","name":"article_type","label":"Article Type"},{"value":"\u00a9 2025 Argonne National Laboratory. Published by IOP Publishing Ltd","name":"copyright_information","label":"Copyright Information"},{"value":"2025-08-04","name":"date_received","label":"Date Received","group":{"name":"publication_dates","label":"Publication dates"}},{"value":"2025-12-03","name":"date_accepted","label":"Date Accepted","group":{"name":"publication_dates","label":"Publication dates"}},{"value":"2025-12-24","name":"date_epub","label":"Online publication date","group":{"name":"publication_dates","label":"Publication dates"}}]}}