{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T08:32:26Z","timestamp":1765960346121,"version":"3.41.2"},"reference-count":23,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2024,10,8]],"date-time":"2024-10-08T00:00:00Z","timestamp":1728345600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Comput. Sci."],"abstract":"<jats:sec><jats:title>Background<\/jats:title><jats:p>The ability of large language models to generate general purpose natural language represents a significant step forward in creating systems able to augment a range of human endeavors. However, concerns have been raised about the potential for misplaced trust in the potentially hallucinatory outputs of these models.<\/jats:p><\/jats:sec><jats:sec><jats:title>Objectives<\/jats:title><jats:p>The study reported in this paper is a preliminary exploration of whether trust in the content of output generated by an LLM may be inflated in relation to other forms of ecologically valid, AI-sourced information.<\/jats:p><\/jats:sec><jats:sec><jats:title>Method<\/jats:title><jats:p>Participants were presented with a series of general knowledge questions and a recommended answer from an AI-assistant that had either been generated by an ChatGPT-3 or sourced by Google\u2019s AI-powered featured snippets function. We also systematically varied whether the AI-assistant\u2019s advice was accurate or inaccurate.<\/jats:p><\/jats:sec><jats:sec><jats:title>Results<\/jats:title><jats:p>Trust and reliance in LLM-generated recommendations were not significantly higher than that of recommendations from a non-LLM source. 
While accuracy of the recommendations resulted in a significant reduction in trust, this did not differ significantly by AI-application.<\/jats:p><\/jats:sec><jats:sec><jats:title>Conclusion<\/jats:title><jats:p>Using three predefined general knowledge tasks and fixed recommendation sets from the AI-assistant, we did not find evidence that trust in LLM-generated output is artificially inflated, or that people are more likely to miscalibrate their trust in this novel technology than another commonly drawn on form of AI-sourced information.<\/jats:p><\/jats:sec>","DOI":"10.3389\/fcomp.2024.1456098","type":"journal-article","created":{"date-parts":[[2024,10,8]],"date-time":"2024-10-08T04:40:57Z","timestamp":1728362457000},"update-policy":"https:\/\/doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Users do not trust recommendations from a large language model more than AI-sourced snippets"],"prefix":"10.3389","volume":"6","author":[{"given":"Melanie J.","family":"McGrath","sequence":"first","affiliation":[]},{"given":"Patrick S.","family":"Cooper","sequence":"additional","affiliation":[]},{"given":"Andreas","family":"Duenser","sequence":"additional","affiliation":[]}],"member":"1965","published-online":{"date-parts":[[2024,10,8]]},"reference":[{"year":"2023","author":"Akata","key":"ref1"},{"year":"2023","author":"Bohannon","key":"ref2"},{"year":"2020","author":"Brown","key":"ref3"},{"key":"ref4","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.4635674","article-title":"Do people trust humans more than ChatGPT?","author":"Buchanan","year":"2023","journal-title":"SSRN Electron J"},{"key":"ref1002","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1007\/s10551-010-0491-4","article-title":"The meaning(s) of trust. 
A content analysis on the diverse conceptualizations of trust in scholarly research on business relationships","volume":"9","author":"Castaldo","year":"2010","journal-title":"J Bussiness Ethi."},{"year":"2024","key":"ref5"},{"year":"2023","author":"Grant","key":"ref6"},{"year":"2023","author":"Gupta","key":"ref7"},{"year":"2022","author":"Heaven","key":"ref8"},{"year":"2023","author":"Herbert","key":"ref9"},{"key":"ref10","doi-asserted-by":"publisher","first-page":"407","DOI":"10.1177\/0018720814547570","article-title":"Trust in automation: integrating empirical evidence on factors that influence trust","volume":"57","author":"Hoff","year":"2015","journal-title":"Hum Factors"},{"year":"2024","author":"Huang","key":"ref11"},{"year":"2023","author":"Huschens","key":"ref12"},{"key":"ref13","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3571730","article-title":"Survey of hallucination in natural language generation","volume":"55","author":"Ji","year":"2023","journal-title":"ACM Comput Surv"},{"key":"ref14","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1207\/S15327566IJCE0401_04","article-title":"Foundations for an empirically determined scale of trust in automated systems","volume":"4","author":"Jian","year":"2000","journal-title":"Int J Cogn Ergon"},{"key":"ref15","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1017\/XPS.2020.37","article-title":"All the news that\u2019s fit to fabricate: AI-generated text as a tool of media misinformation","volume":"9","author":"Kreps","year":"2022","journal-title":"J Exp Polit Sci"},{"key":"ref16","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1518\/hfes.46.1.50.30392","article-title":"Trust in automation: designing for appropriate reliance","volume":"46","author":"Lee","year":"2004","journal-title":"Hum Factors"},{"key":"ref17","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1007\/978-3-642-39330-3_10","article-title":"\u201cRealness\u201d in chatbots: establishing quantifiable criteria","volume-title":"Human-Computer Interaction. 
Interaction Modalities and Techniques","author":"Morrissey","year":"2013"},{"key":"ref18","doi-asserted-by":"publisher","first-page":"22","DOI":"10.1016\/j.jbef.2017.12.004","article-title":"Prolific.ac\u2014a subject pool for online experiments","volume":"17","author":"Palan","year":"2018","journal-title":"J Behav Exp Financ"},{"key":"ref19","doi-asserted-by":"publisher","first-page":"230","DOI":"10.1518\/001872097778543886","article-title":"Humans and automation: use, misuse, disuse, abuse","volume":"39","author":"Parasuraman","year":"1997","journal-title":"Hum Factors"},{"year":"2016","author":"Robinette","key":"ref20"},{"key":"ref21","first-page":"103642","article-title":"Direct answers in Google search results","volume-title":"IEEE Access","author":"Strzelecki","year":"2020"},{"year":"2022","author":"Sun","key":"ref22"}],"container-title":["Frontiers in Computer Science"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fcomp.2024.1456098\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,8]],"date-time":"2024-10-08T04:40:58Z","timestamp":1728362458000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fcomp.2024.1456098\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,8]]},"references-count":23,"alternative-id":["10.3389\/fcomp.2024.1456098"],"URL":"https:\/\/doi.org\/10.3389\/fcomp.2024.1456098","relation":{},"ISSN":["2624-9898"],"issn-type":[{"type":"electronic","value":"2624-9898"}],"subject":[],"published":{"date-parts":[[2024,10,8]]},"article-number":"1456098"}}
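The record above is the JSON envelope the Crossref REST API returns for this article's DOI (a "work" message wrapped in status metadata). As a minimal sketch of how such a record can be fetched and a few of its fields read out, assuming the public api.crossref.org endpoint and the third-party requests package; the contact address in the User-Agent header is a placeholder, not part of the record:

```python
import requests

DOI = "10.3389/fcomp.2024.1456098"

# Fetch the work record; Crossref asks callers to identify themselves
# with a contact address (the mailto below is a placeholder).
resp = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    headers={"User-Agent": "example-script/0.1 (mailto:you@example.org)"},
    timeout=30,
)
resp.raise_for_status()

# The payload of interest sits under the "message" key of the envelope.
work = resp.json()["message"]

print(work["title"][0])                 # article title (Crossref stores titles as a list)
print(work["DOI"], work["ISSN"][0])     # identifiers
print(len(work.get("reference", [])))   # 23 deposited references for this record
for author in work["author"]:
    print(author["given"], author["family"])
```

Everything the script prints is indexed directly from the fields visible in the record above; no fields beyond those shown are assumed to exist.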