{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:31:40Z","timestamp":1773797500783,"version":"3.50.1"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T00:00:00Z","timestamp":1755561600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T00:00:00Z","timestamp":1755561600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Wellcome Trust Collaborative Award","award":["210572\/Z\/18\/Z"],"award-info":[{"award-number":["210572\/Z\/18\/Z"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Clinicians spend significant time reviewing medical images and transcribing findings. By integrating visual and textual data, foundation models have the potential to reduce workloads and boost efficiency, yet their practical clinical value remains uncertain. In this study, we find that OpenAI\u2019s ChatGPT-4o and two medical vision-language models (VLMs) significantly underperform ophthalmologists in key tasks for age-related macular degeneration (AMD). To address this, we developed a dedicated training curriculum, designed by domain specialists, to optimize VLMs for tasks related to clinical decision making. The resulting model, RetinaVLM-Specialist, significantly outperforms foundation medical VLMs and ChatGPT-4o in AMD disease staging (F1: 0.63 vs. 0.33) and referral (0.67 vs. 0.50), achieving performance comparable to junior ophthalmologists. In a reader study, two senior ophthalmologists confirmed that RetinaVLM\u2019s reports were substantially more accurate than those written by ChatGPT-4o (64.3% vs. 14.3%). 
Overall, our curriculum-based approach offers a blueprint for adapting foundation models to real-world medical applications.<\/jats:p>","DOI":"10.1038\/s41746-025-01893-8","type":"journal-article","created":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T11:21:58Z","timestamp":1755602518000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Specialized curricula for training vision language models in retinal image analysis"],"prefix":"10.1038","volume":"8","author":[{"given":"Robbie","family":"Holland","sequence":"first","affiliation":[]},{"given":"Thomas R. P.","family":"Taylor","sequence":"additional","affiliation":[]},{"given":"Christopher","family":"Holmes","sequence":"additional","affiliation":[]},{"given":"Sophie","family":"Riedl","sequence":"additional","affiliation":[]},{"given":"Julia","family":"Mai","sequence":"additional","affiliation":[]},{"given":"Maria","family":"Patsiamanidi","sequence":"additional","affiliation":[]},{"given":"Dimitra","family":"Mitsopoulou","sequence":"additional","affiliation":[]},{"given":"Paul","family":"Hager","sequence":"additional","affiliation":[]},{"given":"Philip","family":"M\u00fcller","sequence":"additional","affiliation":[]},{"given":"Johannes C.","family":"Paetzold","sequence":"additional","affiliation":[]},{"given":"Hendrik P. 
N.","family":"Scholl","sequence":"additional","affiliation":[]},{"given":"Hrvoje","family":"Bogunovi\u0107","sequence":"additional","affiliation":[]},{"given":"Ursula","family":"Schmidt-Erfurth","sequence":"additional","affiliation":[]},{"given":"Daniel","family":"Rueckert","sequence":"additional","affiliation":[]},{"given":"Sobha","family":"Sivaprasad","sequence":"additional","affiliation":[]},{"given":"Andrew J.","family":"Lotery","sequence":"additional","affiliation":[]},{"given":"Martin J.","family":"Menten","sequence":"additional","affiliation":[]},{"name":"On behalf of the PINNACLE consortium","sequence":"additional","affiliation":[]},{"given":"Toby","family":"Prevost","sequence":"additional","affiliation":[]},{"given":"Lars","family":"Fritsche","sequence":"additional","affiliation":[]},{"given":"Kristina","family":"Pfau","sequence":"additional","affiliation":[]},{"given":"Maximilian","family":"Pfau","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,8,19]]},"reference":[{"key":"1893_CR1","doi-asserted-by":"publisher","first-page":"998","DOI":"10.1093\/jamia\/ocaa325","volume":"28","author":"AJ Moy","year":"2021","unstructured":"Moy, A. J. et al. Measurement of clinical documentation burden among physicians and nurses using electronic health records: a scoping review. J. Am. Med. Inform. Assoc. 28, 998\u20131008 (2021).","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"1893_CR2","doi-asserted-by":"publisher","first-page":"1773","DOI":"10.1038\/s41591-022-01981-2","volume":"28","author":"JN Acosta","year":"2022","unstructured":"Acosta, J. N., Falcone, G. J., Rajpurkar, P. & Topol, E. J. Multimodal biomedical AI. Nat. Med. 28, 1773\u20131784 (2022).","journal-title":"Nat. Med."},{"key":"1893_CR3","doi-asserted-by":"publisher","first-page":"259","DOI":"10.1038\/s41586-023-05881-4","volume":"616","author":"M Moor","year":"2023","unstructured":"Moor, M. et al. 
Foundation models for generalist medical artificial intelligence. Nature 616, 259\u2013265 (2023).","journal-title":"Nature"},{"key":"1893_CR4","doi-asserted-by":"publisher","first-page":"1981","DOI":"10.1056\/NEJMra2301725","volume":"388","author":"P Rajpurkar","year":"2023","unstructured":"Rajpurkar, P. & Lungren, M. P. The current and future state of ai interpretation of medical images. N. Engl. J. Med. 388, 1981\u20131990 (2023).","journal-title":"N. Engl. J. Med."},{"key":"1893_CR5","unstructured":"Zhang, Y., Jiang, H., Miura, Y., Manning, C. D. & Langlotz, C. P. Contrastive learning of medical visual representations from paired images and text. In Machine Learning for Healthcare Conference, 2\u201325 (PMLR, 2022)."},{"key":"1893_CR6","doi-asserted-by":"publisher","first-page":"2307","DOI":"10.1038\/s41591-023-02504-3","volume":"29","author":"Z Huang","year":"2023","unstructured":"Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T. J. & Zou, J. A visual\u2013language foundation model for pathology image analysis using medical twitter. Nat. Med. 29, 2307\u20132316 (2023).","journal-title":"Nat. Med."},{"key":"1893_CR7","doi-asserted-by":"publisher","first-page":"863","DOI":"10.1038\/s41591-024-02856-4","volume":"30","author":"MY Lu","year":"2024","unstructured":"Lu, M. Y. et al. A visual-language foundation model for computational pathology. Nat. Med. 30, 863\u2013874 (2024).","journal-title":"Nat. Med."},{"key":"1893_CR8","doi-asserted-by":"crossref","unstructured":"Christensen, M., Vukadinovic, M., Yuan, N. & Ouyang, D. Vision\u2013language foundation model for echocardiogram interpretation. Nat. Med. 30, 1481\u20131488 (2024).","DOI":"10.1038\/s41591-024-02959-y"},{"key":"1893_CR9","doi-asserted-by":"crossref","unstructured":"Li, C. et al. Llava-med: training a large language-and-vision assistant for biomedicine in one day. Adv. Neural Inf. Proces. Syst. 36 28541\u201328564 (2024).","DOI":"10.32388\/VLXB6M"},{"key":"1893_CR10","unstructured":"Moor, M. 
et al. Med-flamingo: a multimodal medical few-shot learner. In Machine Learning for Health, 353\u2013367 (PMLR, 2023)."},{"key":"1893_CR11","doi-asserted-by":"publisher","first-page":"AIoa2300138","DOI":"10.1056\/AIoa2300138","volume":"1","author":"T Tu","year":"2024","unstructured":"Tu, T. et al. Towards generalist biomedical AI. NEJM AI 1, AIoa2300138 (2024).","journal-title":"NEJM AI"},{"key":"1893_CR12","doi-asserted-by":"publisher","first-page":"e0000198","DOI":"10.1371\/journal.pdig.0000198","volume":"2","author":"TH Kung","year":"2023","unstructured":"Kung, T. H. et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLoS Digital Health 2, e0000198 (2023).","journal-title":"PLoS Digital Health"},{"key":"1893_CR13","doi-asserted-by":"publisher","first-page":"172","DOI":"10.1038\/s41586-023-06291-2","volume":"620","author":"K Singhal","year":"2023","unstructured":"Singhal, K. et al. Large language models encode clinical knowledge. Nature 620, 172\u2013180 (2023).","journal-title":"Nature"},{"key":"1893_CR14","unstructured":"Hager, P. et al. Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nat. Med. https:\/\/www.nature.com\/articles\/s41591-024-03097-1 (2024)."},{"key":"1893_CR15","doi-asserted-by":"crossref","unstructured":"Bengio, Y., Louradour, J., Collobert, R. & Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, 41\u201348 (ICML, 2009).","DOI":"10.1145\/1553374.1553380"},{"key":"1893_CR16","first-page":"27730","volume":"35","author":"L Ouyang","year":"2022","unstructured":"Ouyang, L. et al. Training language models to follow instructions with human feedback. Adv. Neural Inf. Process. Syst. 35, 27730\u201327744 (2022).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1893_CR17","unstructured":"Wei, J. et al. Finetuned language models are zero-shot learners. 
In Proceedings of the International Conference on Learning Representations, (ICLR, 2022)."},{"key":"1893_CR18","unstructured":"Liu, H., Li, C., Wu, Q. & Lee, Y. J. Visual instruction tuning. In Advances in neural information processing systems, 36 (NeurIPS, 2024)."},{"key":"1893_CR19","doi-asserted-by":"publisher","first-page":"1147","DOI":"10.1016\/S0140-6736(18)31550-2","volume":"392","author":"P Mitchell","year":"2018","unstructured":"Mitchell, P., Liew, G., Gopinath, B. & Wong, T. Y. Age-related macular degeneration. Lancet 392, 1147\u20131159 (2018).","journal-title":"Lancet"},{"key":"1893_CR20","doi-asserted-by":"publisher","first-page":"e106","DOI":"10.1016\/S2214-109X(13)70145-1","volume":"2","author":"WL Wong","year":"2014","unstructured":"Wong, W. L. et al. Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis. Lancet Glob. Health 2, e106\u2013e116 (2014).","journal-title":"Lancet Glob. Health"},{"key":"1893_CR21","doi-asserted-by":"publisher","first-page":"103296","DOI":"10.1016\/j.media.2024.103296","volume":"97","author":"R Holland","year":"2024","unstructured":"Holland, R. et al. Metadata-enhanced contrastive learning from retinal optical coherence tomography images. Med. Image Anal. 97, 103296 (2024).","journal-title":"Med. Image Anal."},{"key":"1893_CR22","doi-asserted-by":"publisher","first-page":"156","DOI":"10.1038\/s41586-023-06555-x","volume":"622","author":"Y Zhou","year":"2023","unstructured":"Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. Nature 622, 156\u2013163 (2023).","journal-title":"Nature"},{"key":"1893_CR23","unstructured":"Meta AI. Introducing Meta Llama 3: The most capable openly available LLM to date https:\/\/ai.meta.com\/blog\/meta-llama-3\/ (2024). Accessed: 2024-06-25."},{"key":"1893_CR24","unstructured":"Zhu, D. et al. 
MiniGPT-4: Enhancing vision-language understanding with advanced large language models. In Proceedings of the International Conference on Learning Representations, (ICLR, 2024)."},{"key":"1893_CR25","doi-asserted-by":"crossref","unstructured":"Anderson, P. et al. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, 6077\u20136086 (IEEE, 2018).","DOI":"10.1109\/CVPR.2018.00636"},{"key":"1893_CR26","doi-asserted-by":"publisher","first-page":"783","DOI":"10.1136\/bjophthalmol-2011-301378","volume":"96","author":"S Resnikoff","year":"2012","unstructured":"Resnikoff, S., Felch, W., Gauthier, T.-M. & Spivey, B. The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200 000 practitioners. Br. J. Ophthalmol. 96, 783\u2013787 (2012).","journal-title":"Br. J. Ophthalmol."},{"key":"1893_CR27","unstructured":"OpenAI. Gpt-4o: Openai\u2019s multimodal language model. https:\/\/openai.com\/index\/hello-gpt-4o\/ (2024). Accessed: 2025-02-17."},{"key":"1893_CR28","doi-asserted-by":"crossref","unstructured":"Van Veen, D. et al. Adapted large language models can outperform medical experts in clinical text summarization. Nat. Med. 30, 1134\u20131142 (2024).","DOI":"10.1038\/s41591-024-02855-5"},{"key":"1893_CR29","doi-asserted-by":"publisher","first-page":"6096017","DOI":"10.1155\/2021\/6096017","volume":"2021","author":"S Fragiotta","year":"2021","unstructured":"Fragiotta, S. et al. Significance of hyperreflective foci as an optical coherence tomography biomarker in retinal diseases: characterization and clinical implications. J. Ophthalmol. 2021, 6096017 (2021).","journal-title":"J. Ophthalmol."},{"key":"1893_CR30","doi-asserted-by":"publisher","first-page":"203","DOI":"10.1038\/s41592-020-01008-z","volume":"18","author":"F Isensee","year":"2021","unstructured":"Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J. 
& Maier-Hein, K. H. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203\u2013211 (2021).","journal-title":"Nat. Methods"},{"key":"1893_CR31","doi-asserted-by":"crossref","unstructured":"Selvaraju, R. R. et al. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, 618\u2013626 (IEEE, 2017).","DOI":"10.1109\/ICCV.2017.74"},{"key":"1893_CR32","doi-asserted-by":"publisher","first-page":"1342","DOI":"10.1038\/s41591-018-0107-6","volume":"24","author":"J De Fauw","year":"2018","unstructured":"De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 24, 1342\u20131350 (2018).","journal-title":"Nat. Med."},{"key":"1893_CR33","doi-asserted-by":"crossref","unstructured":"Sambasivan, N. et al. Everyone wants to do the model work, not the data work: data cascades in high-stakes AI. SIGCHI, ACM, https:\/\/research.google\/pubs\/everyone-wants-to-do-the-model-work-not-the-data-work-data-cascades-in-high-stakes-ai\/ (2021).","DOI":"10.1145\/3411764.3445518"},{"key":"1893_CR34","unstructured":"Heikkil\u00e4, M. OpenAI\u2019s hunger for data is coming back to bite it. MIT Technology Review, https:\/\/www.technologyreview.com\/2023\/04\/19\/1071789\/openais-hunger-for-data-is-coming-back-to-bite-it\/ (2023)."},{"key":"1893_CR35","doi-asserted-by":"crossref","unstructured":"Li, Y. et al. Evaluating object hallucination in large vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, (EMNLP, Singapore, 2023).","DOI":"10.18653\/v1\/2023.emnlp-main.20"},{"key":"1893_CR36","unstructured":"Liu, H. et al. A survey on hallucination in large vision-language models. 
Preprint at https:\/\/arxiv.org\/abs\/2402.00253 (2024)."},{"key":"1893_CR37","doi-asserted-by":"publisher","first-page":"423","DOI":"10.1109\/TPAMI.2018.2798607","volume":"41","author":"T Baltru\u0161aitis","year":"2018","unstructured":"Baltru\u0161aitis, T., Ahuja, C. & Morency, L.-P. Multimodal machine learning: a survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 41, 423\u2013443 (2018).","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1893_CR38","doi-asserted-by":"publisher","first-page":"e1221","DOI":"10.1016\/S2214-109X(17)30393-5","volume":"5","author":"SR Flaxman","year":"2017","unstructured":"Flaxman, S. R. et al. Global causes of blindness and distance vision impairment 1990\u20132020: a systematic review and meta-analysis. Lancet Glob. Health 5, e1221\u2013e1234 (2017).","journal-title":"Lancet Glob. Health"},{"key":"1893_CR39","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1109\/RBME.2010.2084567","volume":"3","author":"MD Abr\u00e0moff","year":"2010","unstructured":"Abr\u00e0moff, M. D., Garvin, M. K. & Sonka, M. Retinal imaging and image analysis. IEEE Rev. Biomed. Eng. 3, 169\u2013208 (2010).","journal-title":"IEEE Rev. Biomed. Eng."},{"key":"1893_CR40","doi-asserted-by":"publisher","first-page":"367","DOI":"10.1016\/S0039-6257(05)80092-X","volume":"39","author":"AC Bird","year":"1995","unstructured":"Bird, A. C. et al. An international classification and grading system for age-related maculopathy and age-related macular degeneration. Surv. Ophthalmol. 39, 367\u2013374 (1995).","journal-title":"Surv. Ophthalmol."},{"key":"1893_CR41","doi-asserted-by":"publisher","first-page":"14","DOI":"10.3109\/09286586.2013.867512","volume":"21","author":"R Klein","year":"2014","unstructured":"Klein, R. et al. Harmonizing the classification of age-related macular degeneration in the three-continent amd consortium. Ophthalmic Epidemiol. 
21, 14\u201323 (2014).","journal-title":"Ophthalmic Epidemiol."},{"key":"1893_CR42","doi-asserted-by":"publisher","first-page":"1570","DOI":"10.1001\/archopht.123.11.1570","volume":"123","author":"FL Ferris","year":"2005","unstructured":"Ferris, F. L. et al. A simplified severity scale for age-related macular degeneration. Arch. Ophthalmol. 123, 1570\u20131574 (2005).","journal-title":"Arch. Ophthalmol."},{"key":"1893_CR43","doi-asserted-by":"publisher","first-page":"844","DOI":"10.1016\/j.ophtha.2012.10.036","volume":"120","author":"FL Ferris III","year":"2013","unstructured":"Ferris III, F. L. et al. Clinical classification of age-related macular degeneration. Ophthalmology 120, 844\u2013851 (2013).","journal-title":"Ophthalmology"},{"key":"1893_CR44","doi-asserted-by":"publisher","first-page":"537","DOI":"10.1016\/j.ophtha.2017.09.028","volume":"125","author":"SR Sadda","year":"2018","unstructured":"Sadda, S. R. et al. Consensus definition for atrophy associated with age-related macular degeneration on oct: classification of atrophy report 3. Ophthalmology 125, 537\u2013548 (2018).","journal-title":"Ophthalmology"},{"key":"1893_CR45","first-page":"21271","volume":"33","author":"J-B Grill","year":"2020","unstructured":"Grill, J.-B. et al. Bootstrap your own latent-a new approach to self-supervised learning. Adv. Neural Inf. Process. Syst. 33, 21271\u201321284 (2020).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1893_CR46","unstructured":"AI@Meta. Llama 3 model card, https:\/\/github.com\/meta-llama\/llama3\/blob\/main\/MODEL_CARD.md (2024)."},{"key":"1893_CR47","unstructured":"Chen, Z. et al. Chexagent: towards a foundation model for chest x-ray interpretation. Preprint at https:\/\/arxiv.org\/abs\/2401.12208 (2024)."},{"key":"1893_CR48","first-page":"23716","volume":"35","author":"J-B Alayrac","year":"2022","unstructured":"Alayrac, J.-B. et al. Flamingo: a visual language model for few-shot learning. Adv. neural Inf. Process. Syst. 
35, 23716\u201323736 (2022).","journal-title":"Adv. neural Inf. Process. Syst."},{"key":"1893_CR49","doi-asserted-by":"crossref","unstructured":"Lin, W. et al. PMC-CLIP: contrastive language-image pre-training using biomedical documents. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 525\u2013536 (Springer, 2023).","DOI":"10.1007\/978-3-031-43993-3_51"},{"key":"1893_CR50","unstructured":"Zhang, S. et al. Large-scale domain-specific pretraining for biomedical vision-language processing. Preprint at https:\/\/arxiv.org\/abs\/2303.00915 (2023)."},{"key":"1893_CR51","doi-asserted-by":"publisher","first-page":"103001","DOI":"10.1016\/j.artmed.2024.103001","volume":"157","author":"Z Deng","year":"2024","unstructured":"Deng, Z. et al. Ophglm: an ophthalmology large language-and-vision assistant. Artif. Intell. Med. 157, 103001 (2024).","journal-title":"Artif. Intell. Med."},{"key":"1893_CR52","doi-asserted-by":"crossref","unstructured":"Zhang, K. et al. A generalist vision\u2013language foundation model for diverse biomedical tasks. Nat. Med. 30, 3129\u20133141 (2024).","DOI":"10.1038\/s41591-024-03185-2"},{"key":"1893_CR53","doi-asserted-by":"crossref","unstructured":"Holland, R. et al. Deep-learning-based clustering of OCT images for biomarker discovery in age-related macular degeneration (PINNACLE study report 4). Ophthalmol. Sci. 4, 100543 (2024).","DOI":"10.1016\/j.xops.2024.100543"},{"key":"1893_CR54","doi-asserted-by":"publisher","first-page":"186","DOI":"10.1136\/bjo.2004.059824","volume":"90","author":"DM Stein","year":"2006","unstructured":"Stein, D. M. et al. A new quality assessment parameter for optical coherence tomography. Br. J. Ophthalmol. 90, 186\u2013190 (2006).","journal-title":"Br. J. Ophthalmol."},{"key":"1893_CR55","first-page":"24824","volume":"35","author":"J Wei","year":"2022","unstructured":"Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. Adv. Neural Inf. Process. 
Syst. 35, 24824\u201324837 (2022).","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1893_CR56","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1007\/BF02295996","volume":"12","author":"Q McNemar","year":"1947","unstructured":"McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 12, 153\u2013157 (1947).","journal-title":"Psychometrika"},{"key":"1893_CR57","unstructured":"Paszke, A. et al. PyTorch: An Imperative Style, High\u2011Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32, pp. 8024\u20138035 (NeurIPS, 2019)."},{"key":"1893_CR58","doi-asserted-by":"publisher","first-page":"1275","DOI":"10.1038\/s41433-022-02097-0","volume":"37","author":"J Sutton","year":"2023","unstructured":"Sutton, J. et al. Developing and validating a multivariable prediction model which predicts progression of intermediate to late age-related macular degeneration-the PINNACLE trial protocol. 
Eye 37, 1275\u20131283 (2023).","journal-title":"Eye"}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01893-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01893-8","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01893-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,15]],"date-time":"2025-09-15T20:03:09Z","timestamp":1757966589000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-025-01893-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,19]]},"references-count":58,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["1893"],"URL":"https:\/\/doi.org\/10.1038\/s41746-025-01893-8","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,19]]},"assertion":[{"value":"20 November 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 July 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 August 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"532"}}