{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T19:56:17Z","timestamp":1777492577375,"version":"3.51.4"},"reference-count":20,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,7,6]],"date-time":"2023-07-06T00:00:00Z","timestamp":1688601600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,6]],"date-time":"2023-07-06T00:00:00Z","timestamp":1688601600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The rapid advancements in artificial intelligence (AI) have led to the development of sophisticated large language models (LLMs) such as GPT-4 and Bard. The potential implementation of LLMs in healthcare settings has already garnered considerable attention because of their diverse applications that include facilitating clinical documentation, obtaining insurance pre-authorization, summarizing research papers, or working as a chatbot to answer questions for patients about their specific data and concerns. While offering transformative potential, LLMs warrant a very cautious approach since these models are trained differently from AI-based medical technologies that are regulated already, especially within the critical context of caring for patients. The newest version, GPT-4, that was released in March, 2023, brings the potentials of this technology to support multiple medical tasks; and risks from mishandling results it provides to varying reliability to a new level. Besides being an advanced LLM, it will be able to read texts on images and analyze the context of those images. 
Regulating GPT-4 and generative AI in medicine and healthcare without damaging their exciting and transformative potential is a timely and critical challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should ensure that medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we can expect from regulators to bring this vision to reality.<\/jats:p>","DOI":"10.1038\/s41746-023-00873-0","type":"journal-article","created":{"date-parts":[[2023,7,6]],"date-time":"2023-07-06T05:01:54Z","timestamp":1688619714000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":796,"title":["The imperative for regulatory oversight of large language models (or generative AI) in healthcare"],"prefix":"10.1038","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7005-7083","authenticated-orcid":false,"given":"Bertalan","family":"Mesk\u00f3","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1478-4729","authenticated-orcid":false,"given":"Eric J.","family":"Topol","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,6]]},"reference":[{"key":"873_CR1","unstructured":"Introducing ChatGPT. OpenAI, https:\/\/openai.com\/blog\/chatgpt (2022)."},{"key":"873_CR2","unstructured":"Pichai, S. An important next step on our AI journey. Google The Keyword, https:\/\/blog.google\/technology\/ai\/bard-google-ai-search-updates\/ (2023)."},{"key":"873_CR3","doi-asserted-by":"publisher","unstructured":"Sallam, M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: systematic review on the future perspectives and potential limitations. 
medRxiv, https:\/\/doi.org\/10.1101\/2023.02.19.23286155 (2023).","DOI":"10.1101\/2023.02.19.23286155"},{"key":"873_CR4","doi-asserted-by":"publisher","unstructured":"Li, J., Dada, A., Kleesiek, J. & Egger, J. ChatGPT in healthcare: a taxonomy and systematic review. medRxiv, https:\/\/doi.org\/10.1101\/2023.03.30.23287899 (2023).","DOI":"10.1101\/2023.03.30.23287899"},{"key":"873_CR5","doi-asserted-by":"publisher","first-page":"192","DOI":"10.1016\/j.hlpt.2019.05.006","volume":"8","author":"KA Yaeger","year":"2019","unstructured":"Yaeger, K. A., Martini, M., Yaniv, G., Oermann, E. K. & Costa, A. B. United States regulatory approval of medical devices and software applications enhanced by artificial intelligence. Heal. Policy Technol. 8, 192\u2013197 (2019).","journal-title":"Heal. Policy Technol."},{"key":"873_CR6","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41746-020-00324-0","volume":"3","author":"S Benjamens","year":"2020","unstructured":"Benjamens, S., Dhunnoo, P. & Mesk\u00f3, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digit. Med. 3, 1\u20138 (2020).","journal-title":"npj Digit. Med."},{"key":"873_CR7","unstructured":"FDA. Software as a Medical Device (SAMD): clinical evaluation. https:\/\/www.fda.gov\/media\/100714\/download (2017)."},{"key":"873_CR8","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1038\/s41591-018-0300-7","volume":"25","author":"EJ Topol","year":"2019","unstructured":"Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44\u201356 (2019).","journal-title":"Nat. Med."},{"key":"873_CR9","unstructured":"FDA. Artificial intelligence and machine learning in software as a medical device. 
https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-and-machine-learning-software-medical-device (2021)."},{"key":"873_CR10","doi-asserted-by":"publisher","first-page":"m689","DOI":"10.1136\/bmj.m689","volume":"368","author":"M Nagendran","year":"2020","unstructured":"Nagendran, M. et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ 368, m689 (2020).","journal-title":"BMJ"},{"key":"873_CR11","doi-asserted-by":"publisher","first-page":"1233","DOI":"10.1056\/NEJMsr2214184","volume":"388","author":"P Lee","year":"2023","unstructured":"Lee, P., Bubeck, S. & Petro, J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N. Engl. J. Med. 388, 1233\u20131239 (2023).","journal-title":"N. Engl. J. Med."},{"key":"873_CR12","unstructured":"Nuance. Nuance is revolutionizing the contact center with GPT technology (Nuance, 2023)."},{"key":"873_CR13","unstructured":"Lunden, I. Nabla, a digital health startup, launches Copilot, using GPT-3 to turn patient conversations into action (TechCrunch, 2023)."},{"key":"873_CR14","unstructured":"Singhal K., et al. Large language models encode clinical knowledge. Preprint at https:\/\/arxiv.org\/abs\/2212.13138 (2022)."},{"key":"873_CR15","doi-asserted-by":"publisher","unstructured":"Hacker, P., Engel, A. & Mauer, M. Regulating ChatGPT and other Large Generative AI Models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), 1112\u20131123 (Association for Computing Machinery, New York, NY, USA, 2023). https:\/\/doi.org\/10.1145\/3593013.3594067.","DOI":"10.1145\/3593013.3594067"},{"key":"873_CR16","doi-asserted-by":"publisher","unstructured":"M\u00f6kander, J. et al. Auditing large language models: a three-layered approach. AI Ethics. 
https:\/\/doi.org\/10.1007\/s43681-023-00289-2 (2023).","DOI":"10.1007\/s43681-023-00289-2"},{"key":"873_CR17","unstructured":"Will Knight, P. D. In sudden alarm, tech doyens call for a pause on ChatGPT (Wired, 2023)."},{"key":"873_CR18","unstructured":"Ng, A. Andrew Ng\u2019s Twitter. Twitter https:\/\/twitter.com\/AndrewYNg\/status\/1641121451611947009 (2023)."},{"key":"873_CR19","unstructured":"McCallum, S. ChatGPT banned in Italy over privacy concerns (BBC, 2023)."},{"key":"873_CR20","doi-asserted-by":"publisher","first-page":"e39178","DOI":"10.2196\/39178","volume":"24","author":"B Mesk\u00f3","year":"2022","unstructured":"Mesk\u00f3, B. & deBronkart, D. Patient design: the importance of including patients in designing health care. J. Med. Internet Res. 24, e39178 (2022).","journal-title":"J. Med. Internet Res."}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-023-00873-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-023-00873-0","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-023-00873-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,6]],"date-time":"2023-07-06T05:04:06Z","timestamp":1688619846000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-023-00873-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,6]]},"references-count":20,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["873"],"URL":"https:\/\/doi.org\/10.1038\/s41746-023-00873-0","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,6]]},"a
ssertion":[{"value":"7 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"120"}}