{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T00:39:09Z","timestamp":1759797549442,"version":"build-2065373602"},"reference-count":13,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T00:00:00Z","timestamp":1759708800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>This study explored a critical gap in fundamental knowledge of AI\/client interactions by asking students to compare the accuracy, thoroughness, and helpfulness of chatbot responses pertaining to the pharmacology of important medications. Eighteen undergraduates enrolled in an introductory pharmacology course at a Midwestern public university used standardized prompts to elicit drug interaction information for five commonly prescribed medications: aspirin, semaglutide, losartan, Yescarta, and a student-selected anesthetic. The chatbots were ChatGPT 3.0, Copilot, and Gemini 1.5. Each student evaluated responses generated by two of three platforms. While all chatbots were rated highly for accuracy, perceptions of helpfulness and thoroughness varied across platforms and prompts. ChatGPT was most consistently rated as thorough and helpful overall, though Gemini outperformed it on select prompts. Comparisons between Copilot and Gemini slightly favored Copilot, but not across all prompts. 
Taken together, student feedback indicates that the tone and delivery of information may influence perceptions of chatbot helpfulness and completeness. In effect, chatbots\u2019 bedside manner may influence users. Two-thirds of participants indicated they would recommend using AI chatbots to understand medications. These findings underscore the importance of developing patient-centered educational resources that guide effective and ethical use of AI tools in healthcare communication, particularly as AI becomes more consistently integrated into clinical and medical education settings.<\/jats:p>","DOI":"10.1007\/s44163-025-00527-y","type":"journal-article","created":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T12:51:28Z","timestamp":1759755088000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Undergraduates perceive differences in helpfulness and thoroughness of responses of ChatGPT 3.0, Gemini 1.5, and copilot responses about drug interactions"],"prefix":"10.1007","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7189-2905","authenticated-orcid":false,"given":"Jennifer E.","family":"Grant","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,10,6]]},"reference":[{"key":"527_CR1","doi-asserted-by":"crossref","unstructured":"Clark M, Bailey S (2024) Chatbots in Health Care: Connecting Patients to Information: Emerging Health Technologies [Internet]. Canadian Agency for Drugs and Technologies in Health. https:\/\/www.ncbi.nlm.nih.gov\/books\/NBK602381\/","DOI":"10.51731\/cjht.2024.818"},{"key":"527_CR2","unstructured":"Driphydration.com [Internet]. 2025 Drip hydration survey. [cited 2025 Aug 11]. 
Available from: The Digital Diagnosis: Americans Increasingly Turn to AI for Medical Guidance - Drip Hydration - IV Therapy & Medical Wellness"},{"issue":"16","key":"527_CR3","doi-asserted-by":"publisher","first-page":"924","DOI":"10.5435\/JAAOS-D-25-00260","volume":"33","author":"H Chavda","year":"2025","unstructured":"Chavda H, Sontam TR, Skinner WC, Ingall EM, Zide JR. Comparison of responses from ChatGPT-4, Google Gemini, and Google Search to common patient questions about ankle sprains: a readability analysis. J Am Acad Orthop Surg. 2025;33(16):924\u201330. https:\/\/doi.org\/10.5435\/JAAOS-D-25-00260.","journal-title":"J Am Acad Orthop Surg"},{"issue":"6","key":"527_CR4","doi-asserted-by":"publisher","first-page":"589","DOI":"10.1001\/jamainternmed.2023.1838","volume":"183","author":"JW Ayers","year":"2023","unstructured":"Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589\u201396. https:\/\/doi.org\/10.1001\/jamainternmed.2023.1838.","journal-title":"JAMA Intern Med"},{"key":"527_CR5","doi-asserted-by":"publisher","DOI":"10.2196\/68409","volume":"13","author":"C Park","year":"2025","unstructured":"Park C, An MH, Hwang G, Park RW, An J. Clinical performance and communication skills of ChatGPT versus physicians in emergency medicine: simulated patient study. JMIR Med Inform. 2025;13:e68409. https:\/\/doi.org\/10.2196\/68409.","journal-title":"JMIR Med Inform"},{"issue":"2","key":"527_CR6","doi-asserted-by":"publisher","DOI":"10.1001\/jamanetworkopen.2024.57879","volume":"8","author":"B Huo","year":"2025","unstructured":"Huo B, Boyle A, Marfo N, Tangamornsuksan W, Steen JP, McKechnie T, et al. Large language models for chatbot health advice studies: a systematic review. JAMA Netw Open. 2025;8(2):e2457879. 
https:\/\/doi.org\/10.1001\/jamanetworkopen.2024.57879.","journal-title":"JAMA Netw Open"},{"issue":"1","key":"527_CR7","doi-asserted-by":"publisher","first-page":"481","DOI":"10.1038\/s41746-025-01830-9","volume":"8","author":"JT Lee","year":"2025","unstructured":"Lee JT, Li VC, Wu JJ, Chen HH, Su SS, Chang BP, et al. Evaluation of performance of generative large language models for stroke care. NPJ Digit Med. 2025;8(1):481. https:\/\/doi.org\/10.1038\/s41746-025-01830-9.","journal-title":"NPJ Digit Med"},{"issue":"4","key":"527_CR8","doi-asserted-by":"publisher","first-page":"860","DOI":"10.1093\/cid\/ciad633","volume":"78","author":"IS Schwartz","year":"2024","unstructured":"Schwartz IS, Link KE, Daneshjou R, Cort\u00e9s-Penfield N. Black box warning: large language models and the future of infectious diseases consultation. Clin Infect Dis. 2024;78(4):860\u20136. https:\/\/doi.org\/10.1093\/cid\/ciad633.","journal-title":"Clin Infect Dis"},{"issue":"5","key":"527_CR9","doi-asserted-by":"publisher","first-page":"e2412767","DOI":"10.1001\/jamanetworkopen.2024.12767","volume":"7","author":"E Steimetz","year":"2024","unstructured":"Steimetz E, Minkowitz J, Gabutan EC, Ngichabe J, Attia H, Hershkop M, et al. Use of artificial intelligence chatbots in interpretation of pathology reports. JAMA Netw Open. 2024;7(5):e2412767. https:\/\/doi.org\/10.1001\/jamanetworkopen.2024.12767.","journal-title":"JAMA Netw Open"},{"issue":"12","key":"527_CR10","doi-asserted-by":"publisher","DOI":"10.3390\/healthcare13121394","volume":"13","author":"P Mashburn","year":"2025","unstructured":"Mashburn P, Weuthen FA, Otte N, Krabbe H, Fernandez GM, Kraus T, et al. Gender differences in the use of ChatGPT as generative artificial intelligence for clinical research and decision-making in occupational medicine. Healthcare. 2025;13(12):1394. 
https:\/\/doi.org\/10.3390\/healthcare13121394.","journal-title":"Healthcare"},{"key":"527_CR11","doi-asserted-by":"publisher","first-page":"1469487","DOI":"10.3389\/fnume.2024.1469487","volume":"4","author":"M Alvarez","year":"2024","unstructured":"Alvarez M. Can ChatGPT help patients understand radiopharmaceutical extravasations? Front Nucl Med. 2024;4:1469487. https:\/\/doi.org\/10.3389\/fnume.2024.1469487. (Erratum. In: Front Nucl Med 5, 1534645, 10.3389\/fnume.2025.1534645).","journal-title":"Front Nucl Med"},{"issue":"7","key":"527_CR12","doi-asserted-by":"publisher","first-page":"219","DOI":"10.3390\/fi16070219","volume":"16","author":"K Wangsa","year":"2024","unstructured":"Wangsa K, Karim SG, Elkhodr M. A systematic review and comprehensive analysis of pioneering AI chatbot models from education to healthcare: ChatGPT, Bard, Llama, Ernie and Grok. Future Internet. 2024;16(7):219. https:\/\/doi.org\/10.3390\/fi16070219.","journal-title":"Future Internet"},{"key":"527_CR13","doi-asserted-by":"publisher","first-page":"137","DOI":"10.2147\/DHPS.S425858","volume":"15","author":"FY Al-Ashwal","year":"2023","unstructured":"Al-Ashwal FY, Zawiah M, Gharaibeh L, Abu-Farha R, Bitar AN. Evaluating the sensitivity, specificity, and accuracy of ChatGPT-3.5, ChatGPT-4, Bing AI, and Bard against conventional drug-drug interactions clinical tools. Drug Healthc Patient Saf. 2023;15:137\u201347. 
https:\/\/doi.org\/10.2147\/DHPS.S425858.","journal-title":"Drug Healthc Patient Saf"}],"container-title":["Discover Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00527-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-025-00527-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00527-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T12:51:29Z","timestamp":1759755089000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-025-00527-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,6]]},"references-count":13,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["527"],"URL":"https:\/\/doi.org\/10.1007\/s44163-025-00527-y","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,6]]},"assertion":[{"value":"8 July 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 September 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 October 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"JEG obtained prior approval under UW-Stout\u2019s Institutional Review Board (IRB) policies and from UW-Stout\u2019s IRB 
committee.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"No third-party content was used in this submission.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to publication"}},{"value":"JG has no collaborations with the companies associated with these chatbots, but JG owns Microsoft stock and mutual funds that may hold shares of any of the companies involved, potentially including specific pharmaceutical and technology companies.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"260"}}