{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,17]],"date-time":"2026-04-17T05:07:48Z","timestamp":1776402468298,"version":"3.51.2"},"reference-count":70,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,12,6]],"date-time":"2025-12-06T00:00:00Z","timestamp":1764979200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,1,12]],"date-time":"2026-01-12T00:00:00Z","timestamp":1768176000000},"content-version":"vor","delay-in-days":37,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"National Institutes of Health\u2019s National Center","award":["R01AT009457"],"award-info":[{"award-number":["R01AT009457"]}]},{"name":"National Institutes of Health\u2019s National Center","award":["R01AT009457"],"award-info":[{"award-number":["R01AT009457"]}]},{"name":"National Institutes of Health\u2019s National Center","award":["R01AT009457"],"award-info":[{"award-number":["R01AT009457"]}]},{"name":"National Institutes of Health\u2019s National Center","award":["R01AT009457"],"award-info":[{"award-number":["R01AT009457"]}]},{"DOI":"10.13039\/100000049","name":"National Institute on Aging","doi-asserted-by":"publisher","award":["R01AG078154"],"award-info":[{"award-number":["R01AG078154"]}],"id":[{"id":"10.13039\/100000049","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000049","name":"National Institute on Aging","doi-asserted-by":"publisher","award":["R01AG078154"],"award-info":[{"award-number":["R01AG078154"]}],"id":[{"id":"10.13039\/100000049","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000049","name":"National Institute on 
Aging","doi-asserted-by":"publisher","award":["R01AG078154"],"award-info":[{"award-number":["R01AG078154"]}],"id":[{"id":"10.13039\/100000049","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000049","name":"National Institute on Aging","doi-asserted-by":"publisher","award":["R01AG078154"],"award-info":[{"award-number":["R01AG078154"]}],"id":[{"id":"10.13039\/100000049","id-type":"DOI","asserted-by":"publisher"}]},{"name":"National Cancer Institute","award":["R01CA287413"],"award-info":[{"award-number":["R01CA287413"]}]},{"name":"National Cancer Institute","award":["R01CA287413"],"award-info":[{"award-number":["R01CA287413"]}]},{"name":"National Cancer Institute","award":["R01CA287413"],"award-info":[{"award-number":["R01CA287413"]}]},{"name":"National Cancer Institute","award":["R01CA287413"],"award-info":[{"award-number":["R01CA287413"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>As large language models (LLMs) become increasingly integrated into clinical decision-making, ensuring trustworthy reasoning is paramount. However, current evaluation strategies of LLMs\u2019 medical reasoning capability either suffer from unsatisfactory assessment or poor scalability, and a rigorous benchmark remains absent. To address this, we present MedThink-Bench, a benchmark designed for rigorous and scalable assessment of LLMs\u2019 medical reasoning. MedThink-Bench comprises 500 high-complexity questions spanning ten medical domains, accompanied by expert-authored, step-by-step rationales that elucidate intermediate reasoning processes. Further, we introduce LLM-w-Rationale, an evaluation framework that combines fine-grained rationale assessment with an LLM-as-a-Judge paradigm, enabling expert-level fidelity in evaluating reasoning quality while preserving scalability. 
Results show that LLM-w-Rationale correlates strongly with expert evaluation (Pearson coefficient up to 0.87) while requiring only 1.4% of the evaluation time. Overall, MedThink-Bench establishes a rigorous and scalable standard for evaluating medical reasoning in LLMs, advancing their safe and responsible deployment in clinical practice.<\/jats:p>","DOI":"10.1038\/s41746-025-02208-7","type":"journal-article","created":{"date-parts":[[2025,12,6]],"date-time":"2025-12-06T11:15:47Z","timestamp":1765019747000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Automating expert-level medical reasoning evaluation of large language models"],"prefix":"10.1038","volume":"9","author":[{"given":"Shuang","family":"Zhou","sequence":"first","affiliation":[]},{"given":"Wenya","family":"Xie","sequence":"additional","affiliation":[]},{"given":"Jiaxi","family":"Li","sequence":"additional","affiliation":[]},{"given":"Zaifu","family":"Zhan","sequence":"additional","affiliation":[]},{"given":"Meijia","family":"Song","sequence":"additional","affiliation":[]},{"given":"Han","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Cheyenna","family":"Espinoza","sequence":"additional","affiliation":[]},{"given":"Lindsay","family":"Welton","sequence":"additional","affiliation":[]},{"given":"Xinnie","family":"Mai","sequence":"additional","affiliation":[]},{"given":"Yanwei","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Zidu","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Yuen-Hei","family":"Chung","sequence":"additional","affiliation":[]},{"given":"Yiyun","family":"Xing","sequence":"additional","affiliation":[]},{"given":"Meng-Han","family":"Tsai","sequence":"additional","affiliation":[]},{"given":"Emma","family":"Schaffer","sequence":"additional","affiliation":[]},{"given":"Yucheng","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Ninghao","famil
y":"Liu","sequence":"additional","affiliation":[]},{"given":"Zirui","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Rui","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,6]]},"reference":[{"key":"2208_CR1","doi-asserted-by":"publisher","first-page":"9","DOI":"10.1038\/s44387-025-00011-z","volume":"1","author":"S Zhou","year":"2025","unstructured":"Zhou, S. et al. Large language models for disease diagnosis: a scoping review. npj Artif. Intell. 1, 9 (2025).","journal-title":"npj Artif. Intell."},{"key":"2208_CR2","doi-asserted-by":"publisher","first-page":"159","DOI":"10.1038\/s41746-025-01550-0","volume":"8","author":"X Chen","year":"2025","unstructured":"Chen, X. et al. Enhancing diagnostic capability with multi-agents conversational large language models. npj Digit. Med. 8, 159 (2025).","journal-title":"npj Digit. Med."},{"key":"2208_CR3","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-023-47500-2","volume":"13","author":"D Truhn","year":"2023","unstructured":"Truhn, D. et al. A pilot study on the efficacy of GPT-4 in providing orthopedic treatment recommendations from MRI reports. Sci. Rep. 13, 20159 (2023).","journal-title":"Sci. Rep."},{"key":"2208_CR4","doi-asserted-by":"publisher","first-page":"1233","DOI":"10.1038\/s41591-024-03456-y","volume":"31","author":"E Goh","year":"2025","unstructured":"Goh, E. et al. GPT-4 assistance for improvement of physician performance on patient care tasks: a randomized controlled trial. Nat. Med. 31, 1233\u20131238 (2025).","journal-title":"Nat. Med."},{"key":"2208_CR5","doi-asserted-by":"publisher","unstructured":"Chen, C. et al. ClinicalBench: Can LLMs beat traditional ML models in clinical prediction? 
Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2411.06469 (2024).","DOI":"10.48550\/arXiv.2411.06469"},{"key":"2208_CR6","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1038\/s41746-024-01074-z","volume":"7","author":"M Abbasian","year":"2024","unstructured":"Abbasian, M. et al. Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI. npj Digit. Med. 7, 82 (2024).","journal-title":"npj Digit. Med."},{"key":"2208_CR7","doi-asserted-by":"publisher","unstructured":"Li, J. et al. Fact or guesswork? Evaluating large language model\u2019s medical knowledge with structured one-hop judgment. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2502.14275 (2025).","DOI":"10.48550\/arXiv.2502.14275"},{"key":"2208_CR8","doi-asserted-by":"publisher","first-page":"690","DOI":"10.1038\/s41746-025-02071-6","volume":"8","author":"S Zhou","year":"2025","unstructured":"Zhou, S. et al. Uncertainty-aware large language models for explainable disease diagnosis. npj Digit. Med. 8, 690 (2025).","journal-title":"npj Digit. Med."},{"key":"2208_CR9","doi-asserted-by":"publisher","first-page":"2613","DOI":"10.1038\/s41591-024-03097-1","volume":"30","author":"P Hager","year":"2024","unstructured":"Hager, P. et al. Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nat. Med. 30, 2613\u20132622 (2024).","journal-title":"Nat. Med."},{"key":"2208_CR10","doi-asserted-by":"publisher","first-page":"451","DOI":"10.1038\/s41586-025-08869-4","volume":"642","author":"D McDuff","year":"2025","unstructured":"McDuff, D. et al. Towards accurate differential diagnosis with large language models. Nature 642, 451\u2013457 (2025).","journal-title":"Nature"},{"key":"2208_CR11","doi-asserted-by":"crossref","unstructured":"Zhang, Y. et al. Siren\u2019s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. Comput. Linguist. 
1\u201346 (2025).","DOI":"10.1162\/COLI.a.16"},{"key":"2208_CR12","doi-asserted-by":"publisher","unstructured":"Liu, W. et al. Mitigating hallucination through theory-consistent Symmetric Multimodal Preference Optimization. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2506.11712 (2025).","DOI":"10.48550\/arXiv.2506.11712"},{"key":"2208_CR13","unstructured":"M\u00fcndler, N., He, J., Jenko, S. & Vechev, M. T. Self-contradictory hallucinations of large language models: evaluation, detection and mitigation. In Proc. International Conference on Learning Representations (ICLR, 2024)."},{"key":"2208_CR14","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1038\/s41746-024-01157-x","volume":"7","author":"J Haltaufderheide","year":"2024","unstructured":"Haltaufderheide, J. & Ranisch, R. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). npj Digit. Med. 7, 183 (2024).","journal-title":"npj Digit. Med."},{"key":"2208_CR15","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1038\/s41746-024-01010-1","volume":"7","author":"T Savage","year":"2024","unstructured":"Savage, T., Nayak, A., Gallo, R., Rangan, E. & Chen, J. H. Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine. npj Digit. Med. 7, 20 (2024).","journal-title":"npj Digit. Med."},{"key":"2208_CR16","doi-asserted-by":"publisher","first-page":"240","DOI":"10.1038\/s41746-025-01653-8","volume":"8","author":"H Kim","year":"2025","unstructured":"Kim, H. et al. Small language models learn enhanced reasoning skills from medical textbooks. npj Digit. Med. 8, 240 (2025).","journal-title":"npj Digit. Med."},{"key":"2208_CR17","doi-asserted-by":"publisher","unstructured":"Wang, W. et al. Medical reasoning in the era of LLMs: a systematic review of enhancement techniques and applications. 
Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2508.00669 (2025).","DOI":"10.48550\/arXiv.2508.00669"},{"key":"2208_CR18","doi-asserted-by":"publisher","unstructured":"Peng, Q. et al. Aligning clinical needs and AI capabilities: a survey on LLMs for medical reasoning. Preprint at techrxiv https:\/\/doi.org\/10.36227\/techrxiv.175790907.73315176\/v1 (2025).","DOI":"10.36227\/techrxiv.175790907.73315176\/v1"},{"key":"2208_CR19","doi-asserted-by":"publisher","unstructured":"Zhu, Y. et al. DiagnosisArena: benchmarking diagnostic reasoning for large language models. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2505.14107 (2025).","DOI":"10.48550\/arXiv.2505.14107"},{"key":"2208_CR20","doi-asserted-by":"publisher","unstructured":"Tang, X. et al. MedAgentsBench: benchmarking thinking models and agent frameworks for complex medical reasoning. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2503.07459 (2025).","DOI":"10.48550\/arXiv.2503.07459"},{"key":"2208_CR21","unstructured":"Zuo, Y. et al. MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding. In Proc. Forty-Second International Conference on Machine Learning, International Conference on Machine Learning (ICML, 2025)."},{"key":"2208_CR22","doi-asserted-by":"publisher","first-page":"172","DOI":"10.1038\/s41586-023-06291-2","volume":"620","author":"K Singhal","year":"2023","unstructured":"Singhal, K. et al. Large language models encode clinical knowledge. Nature 620, 172\u2013180 (2023).","journal-title":"Nature"},{"key":"2208_CR23","doi-asserted-by":"publisher","first-page":"e70080","DOI":"10.2196\/70080","volume":"27","author":"H Yang","year":"2025","unstructured":"Yang, H. et al. Large language model Synergy for ensemble learning in Medical Question Answering: design and evaluation study. J. Med. Internet Res. 27, e70080 (2025).","journal-title":"J. Med. 
Internet Res."},{"key":"2208_CR24","doi-asserted-by":"publisher","first-page":"190","DOI":"10.1038\/s41746-024-01185-7","volume":"7","author":"Q Jin","year":"2024","unstructured":"Jin, Q. et al. Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine. npj Digit. Med. 7, 190 (2024).","journal-title":"npj Digit. Med."},{"key":"2208_CR25","doi-asserted-by":"publisher","first-page":"100943","DOI":"10.1016\/j.patter.2024.100943","volume":"5","author":"V Li\u00e9vin","year":"2024","unstructured":"Li\u00e9vin, V., Hother, C. E., Motzfeldt, A. G. & Winther, O. Can large language models reason about medical questions? Patterns 5, 100943 (2024).","journal-title":"Patterns"},{"key":"2208_CR26","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1038\/s44401-025-00015-6","volume":"2","author":"S Zhou","year":"2025","unstructured":"Zhou, S. et al. Explainable differential diagnosis with dual-inference large language models. npj Health Syst. 2, 12 (2025).","journal-title":"npj Health Syst."},{"key":"2208_CR27","doi-asserted-by":"crossref","unstructured":"Kim, Y. et al. MedExQA: Medical Question Answering Benchmark with Multiple Explanations. In Proc. 23rd Workshop on Biomedical Natural Language Processing, 167\u2013181 (Association for Computational Linguistics, 2024).","DOI":"10.18653\/v1\/2024.bionlp-1.14"},{"key":"2208_CR28","doi-asserted-by":"publisher","unstructured":"Brodeur, P. G. et al. Superhuman performance of a large language model on the reasoning tasks of a physician. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2412.10849 (2024).","DOI":"10.48550\/arXiv.2412.10849"},{"key":"2208_CR29","doi-asserted-by":"publisher","first-page":"943","DOI":"10.1038\/s41591-024-03423-7","volume":"31","author":"K Singhal","year":"2025","unstructured":"Singhal, K. et al. Toward expert-level medical question answering with large language models. Nat. Med. 31, 943\u2013950 (2025).","journal-title":"Nat. 
Med."},{"key":"2208_CR30","doi-asserted-by":"crossref","unstructured":"Li, D. et al. ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination. Findings of the Association for Computational Linguistics: EMNLP 2023, 1922\u20131940 (Association for Computational Linguistics, 2023).","DOI":"10.18653\/v1\/2023.findings-emnlp.129"},{"key":"2208_CR31","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-025-64769-1","volume":"16","author":"P Qiu","year":"2025","unstructured":"Qiu, P. et al. Quantifying the reasoning abilities of LLMs on clinical cases. Nat. Commun. 16, 9799 (2025).","journal-title":"Nat. Commun."},{"key":"2208_CR32","doi-asserted-by":"publisher","unstructured":"Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. BLEU: a method for automatic evaluation of machine translation. In Proc. 40th Annual Meeting on Association for Computational Linguistics - ACL \u201902. https:\/\/doi.org\/10.3115\/1073083.1073135 (Association for Computational Linguistics, 2001).","DOI":"10.3115\/1073083.1073135"},{"key":"2208_CR33","unstructured":"Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. 74\u201381 (Association for Computational Linguistics, Barcelona, Spain, 2004)."},{"key":"2208_CR34","unstructured":"Zhang, T. et al. BERTScore: Evaluating Text Generation with BERT. International Conference on Learning Representations (2020)."},{"key":"2208_CR35","doi-asserted-by":"crossref","unstructured":"Liu, L. et al. Towards automatic evaluation for LLMs\u2019 clinical capabilities: Metric, data, and algorithm. In Proc. 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining Vol. 614, 5466\u20135475 (ACM, 2024).","DOI":"10.1145\/3637528.3671575"},{"key":"2208_CR36","doi-asserted-by":"publisher","first-page":"6","DOI":"10.1038\/s44401-024-00011-2","volume":"2","author":"E Croxford","year":"2025","unstructured":"Croxford, E. et al. Current and future state of evaluation of large language models for medical summarization tasks. npj Health Syst. 
2, 6 (2025).","journal-title":"npj Health Syst."},{"key":"2208_CR37","doi-asserted-by":"publisher","unstructured":"Wu, K. et al. MedCaseReasoning: evaluating and learning diagnostic reasoning from clinical case reports. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2505.11733 (2025).","DOI":"10.48550\/arXiv.2505.11733"},{"key":"2208_CR38","doi-asserted-by":"publisher","unstructured":"Ding, C. et al. Building a human-verified clinical reasoning dataset via a human LLM hybrid pipeline for trustworthy medical AI. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2505.06912 (2025).","DOI":"10.48550\/arXiv.2505.06912"},{"key":"2208_CR39","doi-asserted-by":"publisher","first-page":"111","DOI":"10.1038\/s41746-024-01101-z","volume":"7","author":"X Chen","year":"2024","unstructured":"Chen, X. et al. FFA-GPT: an automated pipeline for fundus fluorescein angiography interpretation and question-answer. npj Digit. Med. 7, 111 (2024).","journal-title":"npj Digit. Med."},{"key":"2208_CR40","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-024-55628-6","volume":"16","author":"M Griot","year":"2025","unstructured":"Griot, M., Hemptinne, C., Vanderdonckt, J. & Yuksel, D. Large Language Models lack essential metacognition for reliable medical reasoning. Nat. Commun. 16, 642 (2025).","journal-title":"Nat. Commun."},{"key":"2208_CR41","doi-asserted-by":"crossref","unstructured":"Chen, H. et al. Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions. In Proc. 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 3563\u20133599 (Association for Computational Linguistics, 2025).","DOI":"10.18653\/v1\/2025.naacl-long.182"},{"key":"2208_CR42","doi-asserted-by":"publisher","unstructured":"Sellergren, A. et al. MedGemma Technical Report. 
Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2507.05201 (2025).","DOI":"10.48550\/arXiv.2507.05201"},{"key":"2208_CR43","doi-asserted-by":"crossref","unstructured":"Zhang, H. et al. HuatuoGPT, Towards Taming Language Model to Be a Doctor. Findings of the Association for Computational Linguistics: EMNLP 2023, 10859\u201310885 (Association for Computational Linguistics, 2023).","DOI":"10.18653\/v1\/2023.findings-emnlp.725"},{"key":"2208_CR44","doi-asserted-by":"publisher","first-page":"633","DOI":"10.1038\/s41586-025-09422-z","volume":"645","author":"D Guo","year":"2025","unstructured":"Guo, D. et al. DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning. Nature 645, 633\u2013638 (2025).","journal-title":"Nature"},{"key":"2208_CR45","unstructured":"Wei, J. et al. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Proc. 36th International Conference on Neural Information Processing Systems (Curran Associates Inc., 2022)."},{"key":"2208_CR46","doi-asserted-by":"publisher","first-page":"e2500076","DOI":"10.1200\/CCI-25-00076","volume":"9","author":"S Zhou","year":"2025","unstructured":"Zhou, S. et al. Mitigating ethical issues for large language models in oncology: a systematic review. JCO Clin. Cancer Inform. 9, e2500076 (2025).","journal-title":"JCO Clin. Cancer Inform."},{"key":"2208_CR47","doi-asserted-by":"publisher","unstructured":"Gu, J. et al. A survey on LLM-as-a-Judge. arXiv https:\/\/doi.org\/10.48550\/arXiv.2411.15594 (2024).","DOI":"10.48550\/arXiv.2411.15594"},{"key":"2208_CR48","doi-asserted-by":"crossref","unstructured":"Chen, H., Fang, Z., Singla, Y. & Dredze, M. Benchmarking large language models on answering and explaining challenging medical questions. In Proc. 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (eds Chiruzzo, L., Ritter, A. & Wang, L.) 
3563\u20133599 (Association for Computational Linguistics, 2025).","DOI":"10.18653\/v1\/2025.naacl-long.182"},{"key":"2208_CR49","doi-asserted-by":"crossref","unstructured":"Wang, Y. et al. MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark. In Proc. 38th International Conference on Neural Information Processing Systems, vol. 37, 95266\u201395290 (Curran Associates Inc., 2024).","DOI":"10.52202\/079017-3018"},{"key":"2208_CR50","doi-asserted-by":"crossref","unstructured":"Kim, Y., Wu, J., Abdulle, Y. & Wu, H. MedExQA: medical question answering benchmark with multiple explanations. In Proc. 23rd Workshop on Biomedical Natural Language Processing (eds Demner-Fushman, D., Ananiadou, S., Miwa, M., Roberts, K. & Tsujii, J.) 167\u2013181 (Association for Computational Linguistics, 2024).","DOI":"10.18653\/v1\/2024.bionlp-1.14"},{"key":"2208_CR51","doi-asserted-by":"publisher","unstructured":"Phan, L. et al. Humanity\u2019s last exam. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2501.14249 (2025).","DOI":"10.48550\/arXiv.2501.14249"},{"key":"2208_CR52","doi-asserted-by":"publisher","first-page":"6421","DOI":"10.3390\/app11146421","volume":"11","author":"D Jin","year":"2021","unstructured":"Jin, D. et al. What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams. Appl. Sci. 11, 6421 (2021).","journal-title":"Appl. Sci."},{"key":"2208_CR53","doi-asserted-by":"crossref","unstructured":"Jin, Q., Dhingra, B., Liu, Z., Cohen, W. & Lu, X. PubMedQA: A Dataset for Biomedical Research Question Answering. In Proc. 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (eds Inui, K., Jiang, J., Ng, V. & Wan, X.) 
2567\u20132577 (Association for Computational Linguistics, 2019).","DOI":"10.18653\/v1\/D19-1259"},{"key":"2208_CR54","first-page":"248","volume":"174","author":"A Pal","year":"2022","unstructured":"Pal, A., Umapathi, L. K. & Sankarasubbu, M. MedMCQA: a large-scale multi-subject multi-choice dataset for medical domain question answering. CHIL 174, 248\u2013260 (2022).","journal-title":"CHIL"},{"key":"2208_CR55","unstructured":"Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D. & Steinhardt, J. Measuring Massive Multitask Language Understanding. In Proc. International Conference on Learning Representations (ICLR, 2021)."},{"key":"2208_CR56","doi-asserted-by":"crossref","unstructured":"Vilares, D. & G\u00f3mez-Rodr\u00edguez, C. HEAD-QA: a healthcare dataset for complex reasoning. In Proc. 57th Annual Meeting of the Association for Computational Linguistics (eds Korhonen, A., Traum, D. & M\u00e0rquez, L.) 960\u2013966 (Association for Computational Linguistics, 2019).","DOI":"10.18653\/v1\/P19-1092"},{"key":"2208_CR57","unstructured":"Banerjee, S. & Lavie, A. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proc. ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and\/or Summarization, 65\u201372 (Association for Computational Linguistics, 2005)."},{"key":"2208_CR58","doi-asserted-by":"crossref","unstructured":"Sellam, T. et al. BLEURT: Learning Robust Metrics for Text Generation. In Proc. 58th Annual Meeting of the Association for Computational Linguistics, 7881\u20137892 (Association for Computational Linguistics, 2020).","DOI":"10.18653\/v1\/2020.acl-main.704"},{"key":"2208_CR59","doi-asserted-by":"publisher","unstructured":"Wei, H. et al. Systematic evaluation of LLM-as-a-judge in LLM alignment tasks: explainable metrics and diverse prompt templates. 
Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2408.13006 (2024).","DOI":"10.48550\/arXiv.2408.13006"},{"key":"2208_CR60","doi-asserted-by":"publisher","unstructured":"OpenAI et al. GPT-4o System Card. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2410.21276 (2024).","DOI":"10.48550\/arXiv.2410.21276"},{"key":"2208_CR61","unstructured":"Anthropic. Claude 3.5 Sonnet model card addendum. Available online at: https:\/\/www-cdn.anthropic.com\/fed9cc193a14b84131812372d8d5857f8f304c52\/Model_Card_Claude_3_Addendum.pdf"},{"key":"2208_CR62","unstructured":"Comanici, G. et al. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2507.06261 (2025)."},{"key":"2208_CR63","doi-asserted-by":"publisher","unstructured":"Wang, B. et al. Baichuan-M1: pushing the medical capability of large language models. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2502.12671 (2025).","DOI":"10.48550\/arXiv.2502.12671"},{"key":"2208_CR64","doi-asserted-by":"crossref","unstructured":"Chen, J. et al. Towards Medical Complex Reasoning with LLMs through Medical Verifiable Problems. Findings of the Association for Computational Linguistics: ACL 2025, 14552\u201314573 (Association for Computational Linguistics, 2025).","DOI":"10.18653\/v1\/2025.findings-acl.751"},{"key":"2208_CR65","doi-asserted-by":"publisher","unstructured":"Christophe, C., Kanithi, P. K., Raha, T., Khan, S. & Pimentel, M. A. F. Med42-v2: A suite of clinical LLMs. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2408.06142 (2024).","DOI":"10.48550\/arXiv.2408.06142"},{"key":"2208_CR66","doi-asserted-by":"publisher","unstructured":"Grattafiori, A. et al. The Llama 3 herd of models. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2407.21783 (2024).","DOI":"10.48550\/arXiv.2407.21783"},{"key":"2208_CR67","doi-asserted-by":"publisher","unstructured":"Yang, A. et al. Qwen3 Technical Report. 
Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2505.09388 (2025).","DOI":"10.48550\/arXiv.2505.09388"},{"key":"2208_CR68","unstructured":"Qwen Team. QwQ-32B: Embracing the Power of Reinforcement Learning. https:\/\/qwenlm.github.io\/blog\/qwq-32b\/ (2025)."},{"key":"2208_CR69","doi-asserted-by":"publisher","first-page":"583","DOI":"10.1080\/01621459.1952.10483441","volume":"47","author":"WH Kruskal","year":"1952","unstructured":"Kruskal, W. H. & Wallis, W. A. Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 47, 583 (1952).","journal-title":"J. Am. Stat. Assoc."},{"key":"2208_CR70","doi-asserted-by":"crossref","unstructured":"Dong, Y. et al. Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 12039\u201312050 (Association for Computational Linguistics, 2024).","DOI":"10.18653\/v1\/2024.findings-acl.716"}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-02208-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-02208-7","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-02208-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T22:06:38Z","timestamp":1768341998000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-02208-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,6]]},"references-count":70,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2026,12]]}},"alternative-id":["2208"],"URL":"https:\/\/doi.org\/10.1038\/s41746-025-02208-7","relation":{
},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,6]]},"assertion":[{"value":"11 August 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"34"}}