{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T12:34:55Z","timestamp":1776083695133,"version":"3.50.1"},"reference-count":31,"publisher":"Association for Computing Machinery (ACM)","issue":"10","license":[{"start":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T00:00:00Z","timestamp":1758758400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Commun. ACM"],"published-print":{"date-parts":[[2025,10,1]]},"abstract":"<jats:p>One of today\u2019s principal defenses against weaponized synthetic media continues to be the ability of the targeted individual to visually or auditorily recognize AI-generated content when they encounter it. However, as the realism of synthetic media continues to rapidly improve, it is vital to have an accurate understanding of just how susceptible people currently are to potentially being misled by convincing but false AI-generated content. To ascertain this, we conducted a perceptual study with 1,276 participants to assess how capable people were at distinguishing between authentic and synthetic images, audio, video, and audiovisual media. As AI-generated content is proliferating across online platforms in particular, the surveys were designed to emulate some of the ecological conditions typical of an online platform. We find that, on average, people struggled to distinguish between synthetic and authentic media, with the mean detection performance close to a chance-level performance of 50%. We also find that accuracy rates worsen when the stimuli contain any degree of synthetic content, feature foreign languages, and the media type is a single modality. 
People are also less accurate at identifying synthetic images when they feature human faces, and when audiovisual stimuli have heterogeneous authenticity. Finally, we find that higher degrees of prior knowledge about synthetic media do not significantly impact detection-accuracy rates, but age does, with older individuals performing worse than their younger counterparts. Collectively, these results highlight that it is no longer feasible to rely on people\u2019s perceptual capabilities to protect themselves against the growing threat of weaponized synthetic media, and that the need for alternative countermeasures is more critical than ever before.<\/jats:p>","DOI":"10.1145\/3729417","type":"journal-article","created":{"date-parts":[[2025,9,22]],"date-time":"2025-09-22T18:16:21Z","timestamp":1758564981000},"page":"100-109","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["As Good as a Coin Toss: Human Detection of AI-Generated Content"],"prefix":"10.1145","volume":"68","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-2428-8267","authenticated-orcid":false,"given":"Di","family":"Cooke","sequence":"first","affiliation":[{"name":"King\u2019s College London, Department of War Studies, London, London, United Kingdom of Great Britain and Northern Ireland"},{"name":"Center for Strategic and International Studies, International Security Program, Washington, United States"}]},{"given":"Abigail","family":"Edwards","sequence":"additional","affiliation":[{"name":"Center for Strategic and International Studies, Washington, District of Columbia, United States"}]},{"given":"Sophia","family":"Barkoff","sequence":"additional","affiliation":[{"name":"Center for Strategic and International Studies, Washington, District of Columbia, United States"}]},{"given":"Kathryn","family":"Kelly","sequence":"additional","affiliation":[{"name":"Center for Strategic and International Studies, Washington, District of 
Columbia, United States"}]}],"member":"320","published-online":{"date-parts":[[2025,9,25]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"crossref","unstructured":"Cartella G. et al. Unveiling the truth: exploring human gaze patterns in fake images. arXiv (2024).","DOI":"10.1109\/LSP.2024.3375288"},{"key":"e_1_3_2_3_2","doi-asserted-by":"crossref","unstructured":"Doss C. et al. Deepfakes and scientific knowledge dissemination. In Rev. (2022).","DOI":"10.21203\/rs.3.rs-1408525\/v1"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1080\/1369118X.2019.1631367"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.maturitas.2017.03.318"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.2110013119"},{"key":"e_1_3_2_7_2","doi-asserted-by":"crossref","unstructured":"Groh M. et al. Human detection of political speech deepfakes across transcripts audio and video. arXiv (2023).","DOI":"10.1038\/s41467-024-51998-z"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3613905.3636315"},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Josephs E. Fosco C. and Oliva A. Artifact magnification on deepfake videos increases human detection and subjective confidence. arXiv (2023).","DOI":"10.1167\/jov.23.9.5327"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13414-021-02267-4"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.3389\/fdata.2022.1001063"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.isci.2021.103364"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0285333"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3425780"},{"key":"e_1_3_2_15_2","doi-asserted-by":"crossref","unstructured":"M\u00fcller N.M. Pizzi K. and Williams J. Human perception of audio deepfakes. Proceedings of the 1st Intern. 
Workshop on Deepfake Detection for Audio Multimedia (2022) 85\u201391.","DOI":"10.1145\/3552466.3556531"},{"key":"e_1_3_2_16_2","unstructured":"National Institute on Deafness and Other Communication Disorders. Statistics about hearing balance & dizziness: 2024; https:\/\/www.nidcd.nih.gov\/health\/statistics\/quick-statistics-hearing"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00426-005-0031-5"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.2120481119"},{"key":"e_1_3_2_19_2","unstructured":"Online news: research update: 2024; https:\/\/www.ofcom.org.uk\/siteassets\/resources\/documents\/research-and-data\/multi-sector\/media-plurality\/2024\/0324-online-news-research-update.pdf?v=356802"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.3390\/brainsci13081126"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","unstructured":"Prasad S.S. et al. Human vs. automatic detection of deepfake videos over noisy channels. In 2022 IEEE Intern. Conf. on Multimedia and Expo. IEEE (2022) 1\u20136; 10.1109\/ICME52920.2022.9859954","DOI":"10.1109\/ICME52920.2022.9859954"},{"key":"e_1_3_2_22_2","volume-title":"Oxford Research Encyclopedia.\u00a0","author":"Rosenblum L.","year":"2019","unstructured":"Rosenblum, L. Audiovisual speech perception and the McGurk effect.\u00a0 Oxford Research Encyclopedia.\u00a0 Oxford University Press\u00a0(2019)."},{"key":"e_1_3_2_23_2","doi-asserted-by":"crossref","unstructured":"Rossler A. et al. Faceforensics++: Learning to detect manipulated facial images. In 2019 IEEE\/CVF Intern. Conf. on Computer Vision. IEEE (2019) 1\u201311.","DOI":"10.1109\/ICCV.2019.00009"},{"key":"e_1_3_2_24_2","volume-title":"Deepfakes and National Security,\u00a0Technical Report #IF11333","author":"Sayler K.","year":"2023","unstructured":"Sayler, K. and Harris, L. 
Deepfakes and National Security,\u00a0Technical Report #IF11333.\u00a0Congressional Research Service (2023).\u00a0"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03206849"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1177\/14614448211011447"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2011.04.002"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.isci.2022.105441"},{"key":"e_1_3_2_29_2","unstructured":"Walker M. Americans favor mobile devices over desktops and laptops for getting news. Pew Research Center (2019)."},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.wocn.2009.04.002"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-022-05095-0"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1049\/bme2.12031"}],"container-title":["Communications of the ACM"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3729417","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T19:45:39Z","timestamp":1775245539000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3729417"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,25]]},"references-count":31,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2025,10,1]]}},"alternative-id":["10.1145\/3729417"],"URL":"https:\/\/doi.org\/10.1145\/3729417","relation":{},"ISSN":["0001-0782","1557-7317"],"issn-type":[{"value":"0001-0782","type":"print"},{"value":"1557-7317","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,25]]},"assertion":[{"value":"2024-07-26","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-09-25","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}