{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T03:27:30Z","timestamp":1769743650569,"version":"3.49.0"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,2,7]],"date-time":"2024-02-07T00:00:00Z","timestamp":1707264000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,7]],"date-time":"2024-02-07T00:00:00Z","timestamp":1707264000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Nat Mach Intell"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. However, it has not yet been determined whether such model performance is driven by latent audio biomarkers with true causal links to SARS-CoV-2 infection or by confounding effects, such as recruitment bias, present in observational studies. Here we undertake a large-scale study of audio-based AI classifiers as part of the UK government\u2019s pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive polymerase chain reaction tests for SARS-CoV-2. In an unadjusted analysis, similar to that in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC\u2013AUC\u2009=\u20090.846 [0.838\u20130.854]). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC\u2013AUC\u2009=\u20090.619 [0.594\u20130.644]). 
Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions on the basis of user-reported symptoms. We make best-practice recommendations for handling recruitment bias, and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics.<\/jats:p>","DOI":"10.1038\/s42256-023-00773-8","type":"journal-article","created":{"date-parts":[[2024,2,7]],"date-time":"2024-02-07T11:02:30Z","timestamp":1707303750000},"page":"229-242","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers"],"prefix":"10.1038","volume":"6","author":[{"given":"Harry","family":"Coppock","sequence":"first","affiliation":[]},{"given":"George","family":"Nicholson","sequence":"additional","affiliation":[]},{"given":"Ivan","family":"Kiskin","sequence":"additional","affiliation":[]},{"given":"Vasiliki","family":"Koutra","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3312-4806","authenticated-orcid":false,"given":"Kieran","family":"Baker","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3337-6859","authenticated-orcid":false,"given":"Jobie","family":"Budd","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1721-3868","authenticated-orcid":false,"given":"Richard","family":"Payne","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6576-6053","authenticated-orcid":false,"given":"Emma","family":"Karoune","sequence":"additional","affiliation":[]},{"given":"David","family":"Hurley","sequence":"additional","affiliation":[]},{"given":"Alexander","family":"Titcomb","se
quence":"additional","affiliation":[]},{"given":"Sabrina","family":"Egglestone","sequence":"additional","affiliation":[]},{"given":"Ana","family":"Tendero Ca\u00f1adas","sequence":"additional","affiliation":[]},{"given":"Lorraine","family":"Butler","sequence":"additional","affiliation":[]},{"given":"Radka","family":"Jersakova","sequence":"additional","affiliation":[]},{"given":"Jonathon","family":"Mellor","sequence":"additional","affiliation":[]},{"given":"Selina","family":"Patel","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9276-052X","authenticated-orcid":false,"given":"Tracey","family":"Thornley","sequence":"additional","affiliation":[]},{"given":"Peter","family":"Diggle","sequence":"additional","affiliation":[]},{"given":"Sylvia","family":"Richardson","sequence":"additional","affiliation":[]},{"given":"Josef","family":"Packham","sequence":"additional","affiliation":[]},{"given":"Bj\u00f6rn W.","family":"Schuller","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4591-4167","authenticated-orcid":false,"given":"Davide","family":"Pigoli","sequence":"additional","affiliation":[]},{"given":"Steven","family":"Gilmour","sequence":"additional","affiliation":[]},{"given":"Stephen","family":"Roberts","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6667-4943","authenticated-orcid":false,"given":"Chris","family":"Holmes","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,7]]},"reference":[{"key":"773_CR1","unstructured":"Rosengren, H. 14.9 Million excess deaths associated with the COVID-19 pandemic in 2020 and 2021 World Health Organization (5 May 2022); https:\/\/www.who.int\/news\/item\/05-05-2022-14.9-million-excess-deaths-were-associated-with-the-covid-19-pandemic-in-2020-and-2021"},{"key":"773_CR2","doi-asserted-by":"crossref","unstructured":"Kucharski, A. J. et al. 
Effectiveness of isolation, testing, contact tracing, and physical distancing on reducing transmission of SARS-CoV-2 in different settings: a mathematical modelling study. Lancet Infect. Dis. 20, 1151\u20131160 (2020).","DOI":"10.1016\/S1473-3099(20)30457-6"},{"key":"773_CR3","doi-asserted-by":"crossref","unstructured":"Muller, C. P. Do asymptomatic carriers of SARS-COV-2 transmit the virus? Lancet Reg. 4, 100082 (2021).","DOI":"10.1016\/j.lanepe.2021.100082"},{"key":"773_CR4","doi-asserted-by":"publisher","unstructured":"Nessiem, M. A. et al. Detecting COVID-19 from breathing and coughing sounds using deep neural networks. In IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS) https:\/\/doi.org\/10.1109\/CBMS52027.2021.00069 (IEEE, 2021).","DOI":"10.1109\/CBMS52027.2021.00069"},{"key":"773_CR5","doi-asserted-by":"crossref","unstructured":"Laguarta, J., Hueto, F. & Subirana, B. COVID-19 artificial intelligence diagnosis using only cough recordings. IEEE Open J. Eng. Med. Biol. 1, 275\u2013281 (2020).","DOI":"10.1109\/OJEMB.2020.3026928"},{"key":"773_CR6","doi-asserted-by":"publisher","unstructured":"Bagad, P. et al. Cough against COVID: evidence of COVID-19 signature in cough sounds. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2009.08790 (2020).","DOI":"10.48550\/arXiv.2009.08790"},{"key":"773_CR7","doi-asserted-by":"publisher","unstructured":"Brown, C. et al. Exploring automatic diagnosis of COVID-19 from crowdsourced respiratory sound data. In Proc. 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 3474\u20133484 (ACM, 2020); https:\/\/doi.org\/10.1145\/3394486.3412865","DOI":"10.1145\/3394486.3412865"},{"key":"773_CR8","doi-asserted-by":"publisher","first-page":"100378","DOI":"10.1016\/j.imu.2020.100378","volume":"20","author":"A Imran","year":"2020","unstructured":"Imran, A. et al. AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. Inform. Med. 
Unlocked 20, 100378 (2020).","journal-title":"Inform. Med. Unlocked"},{"key":"773_CR9","doi-asserted-by":"publisher","first-page":"268","DOI":"10.1109\/OJEMB.2020.3026468","volume":"1","author":"G Pinkas","year":"2020","unstructured":"Pinkas, G. et al. SARS-CoV-2 detection from voice. IEEE Open J. Eng. Med. Biol. 1, 268\u2013274 (2020).","journal-title":"IEEE Open J. Eng. Med. Biol."},{"key":"773_CR10","doi-asserted-by":"crossref","unstructured":"Hassan, A., Shahin, I. & Alsabek, M. B. COVID-19 detection system using recurrent neural networks. In 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) 1\u20135 (IEEE, 2020).","DOI":"10.1109\/CCCI49893.2020.9256562"},{"key":"773_CR11","doi-asserted-by":"crossref","unstructured":"Han, J. et al. Exploring automatic COVID-19 diagnosis via voice and symptoms from crowdsourced data. In ICASSP 2021\u20132021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 8328\u20138332 (IEEE, 2021).","DOI":"10.1109\/ICASSP39728.2021.9414576"},{"key":"773_CR12","doi-asserted-by":"publisher","unstructured":"Chaudhari, G. et al. Virufy: global applicability of crowdsourced and clinical datasets for AI detection of COVID-19 from cough. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2011.13320 (2021).","DOI":"10.48550\/arXiv.2011.13320"},{"key":"773_CR13","doi-asserted-by":"publisher","first-page":"240","DOI":"10.3934\/publichealth.2021019","volume":"8","author":"KK Lella","year":"2021","unstructured":"Lella, K. K. & Pja, A. Automatic COVID-19 disease diagnosis using 1D convolutional neural network and augmentation with human respiratory sound based on parameters: cough, breath, and voice. AIMS Public Health 8, 240\u2013264 (2021).","journal-title":"AIMS Public Health"},{"key":"773_CR14","doi-asserted-by":"crossref","unstructured":"Andreu-Perez, J. et al. 
A generic deep learning based cough analysis system from clinically validated samples for point-of-need COVID-19 test and severity levels. IEEE Trans. Services Comput. 15, 9361107 (2021).","DOI":"10.31219\/osf.io\/tm2f7"},{"key":"773_CR15","doi-asserted-by":"crossref","unstructured":"Coppock, H. et al. End-to-end convolutional neural network enables COVID-19 detection from breath and cough audio: a pilot study. BMJ Innov. 7, 000668 (2021).","DOI":"10.1136\/bmjinnov-2021-000668"},{"key":"773_CR16","doi-asserted-by":"publisher","first-page":"104572","DOI":"10.1016\/j.compbiomed.2021.104572","volume":"135","author":"M Pahar","year":"2021","unstructured":"Pahar, M., Klopper, M., Warren, R. & Niesler, T. COVID-19 cough classification using machine learning and global smartphone recordings. Comput. Biol. Med. 135, 104572 (2021).","journal-title":"Comput. Biol. Med."},{"key":"773_CR17","doi-asserted-by":"publisher","unstructured":"Pizzo, D. T. & Esteban, S. IATos: AI-powered pre-screening tool for COVID-19 from cough audio samples. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2104.13247 (2021).","DOI":"10.48550\/arXiv.2104.13247"},{"key":"773_CR18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41746-021-00553-x","volume":"5","author":"J Han","year":"2022","unstructured":"Han, J. et al. Sounds of COVID-19: exploring realistic performance of audio-based digital testing. npj Digit. Med. 5, 1\u20139 (2022).","journal-title":"npj Digit. Med."},{"key":"773_CR19","doi-asserted-by":"publisher","first-page":"m1328","DOI":"10.1136\/bmj.m1328","volume":"369","author":"L Wynants","year":"2020","unstructured":"Wynants, L. et al. Prediction models for diagnosis and prognosis of COVID-19: systematic review and critical appraisal. Br. Med. J. 369, m1328 (2020).","journal-title":"Br. Med. 
J."},{"key":"773_CR20","doi-asserted-by":"publisher","first-page":"199","DOI":"10.1038\/s42256-021-00307-0","volume":"3","author":"M Roberts","year":"2021","unstructured":"Roberts, M. et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat. Mach. Intell. 3, 199\u2013217 (2021).","journal-title":"Nat. Mach. Intell."},{"key":"773_CR21","doi-asserted-by":"publisher","first-page":"e537","DOI":"10.1016\/S2589-7500(21)00141-2","volume":"3","author":"H Coppock","year":"2021","unstructured":"Coppock, H., Jones, L., Kiskin, I. & Schuller, B. COVID-19 detection from audio: seven grains of salt. Lancet Digit. Health 3, e537\u2013e538 (2021).","journal-title":"Lancet Digit. Health"},{"key":"773_CR22","doi-asserted-by":"publisher","first-page":"610","DOI":"10.1038\/s42256-021-00338-7","volume":"3","author":"AJ DeGrave","year":"2021","unstructured":"DeGrave, A. J., Janizek, J. D. & Lee, S.-I. AI for radiographic COVID-19 detection selects shortcuts over signal. Nat. Mach. Intell. 3, 610\u2013619 (2021).","journal-title":"Nat. Mach. Intell."},{"key":"773_CR23","doi-asserted-by":"publisher","unstructured":"Budd, J. et al. A large-scale and PCR-referenced vocal audio dataset for COVID-19. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2212.07738 (2023).","DOI":"10.48550\/arXiv.2212.07738"},{"key":"773_CR24","unstructured":"Speak Up and Help Beat Coronavirus (COVID-19) (UK Government, 2021); https:\/\/www.gov.uk\/government\/news\/speak-up-and-help-beat-coronavirus-covid-19"},{"key":"773_CR25","unstructured":"Department of Health and Social Care (UK), COVID-19 Testing Data: Methodology Note (UK Government, 2022); https:\/\/www.gov.uk\/government\/publications\/coronavirus-covid-19-testing-data-methodology\/covid-19-testing-data-methodology-note"},{"key":"773_CR26","unstructured":"Murphy, K. P. 
Probabilistic Machine Learning: An introduction (MIT Press, 2022)."},{"key":"773_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1214\/09-STS313","volume":"25","author":"EA Stuart","year":"2010","unstructured":"Stuart, E. A. Matching methods for causal inference: a review and a look forward. Stat. Sci. 25, 1\u201321 (2010).","journal-title":"Stat. Sci."},{"key":"773_CR28","doi-asserted-by":"publisher","DOI":"10.1186\/s12916-020-01706-7","volume":"18","author":"BC Kahan","year":"2020","unstructured":"Kahan, B. C., Forbes, G. & Cro, S. How to design a pre-specified statistical analysis approach to limit p-hacking in clinical trials: the Pre-SPEC framework. BMC Med. 18, 253 (2020).","journal-title":"BMC Med."},{"key":"773_CR29","doi-asserted-by":"publisher","first-page":"e2109229118","DOI":"10.1073\/pnas.2109229118","volume":"118","author":"P Sah","year":"2021","unstructured":"Sah, P. et al. Asymptomatic SARS-CoV-2 infection: a systematic review and meta-analysis. Proc. Natl Acad. Sci. USA 118, e2109229118 (2021).","journal-title":"Proc. Natl Acad. Sci. USA"},{"key":"773_CR30","doi-asserted-by":"publisher","unstructured":"Pigoli, D. et al. Statistical design and analysis for robust machine learning: a case study from COVID-19. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.2212.08571 (2022).","DOI":"10.48550\/arXiv.2212.08571"},{"key":"773_CR31","doi-asserted-by":"crossref","unstructured":"Chadeau-Hyam, M. et al. REACT-1 study round 14: high and increasing prevalence of SARS-CoV-2 infection among school-aged children during September 2021 and vaccine effectiveness against infection in England. Preprint at medRxiv https:\/\/www.medrxiv.org\/content\/early\/2021\/10\/22\/2021.10.14.21264965 (2021).","DOI":"10.1101\/2021.10.14.21264965"},{"key":"773_CR32","doi-asserted-by":"publisher","DOI":"10.1186\/s12916-014-0241-z","volume":"13","author":"GS Collins","year":"2015","unstructured":"Collins, G. S., Reitsma, J. B., Altman, D. G. & Moons, K. G. 
Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMC Med. 13, 1 (2015).","journal-title":"BMC Med."},{"key":"773_CR33","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1148\/radiology.143.1.7063747","volume":"143","author":"JA Hanley","year":"1982","unstructured":"Hanley, J. A. & McNeil, B. J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 29\u201336 (1982).","journal-title":"Radiology"},{"key":"773_CR34","doi-asserted-by":"publisher","first-page":"837","DOI":"10.2307\/2531595","volume":"44","author":"ER DeLong","year":"1988","unstructured":"DeLong, E. R., DeLong, D. M. & Clarke-Pearson, D. L. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44, 837\u2013845 (1988).","journal-title":"Biometrics"},{"key":"773_CR35","doi-asserted-by":"crossref","unstructured":"Eyben, F., W\u00f6llmer, M. & Schuller, B. OpenSmile\u2014the Munich versatile and fast open-source audio feature extractor. In Proc. 18th ACM International Conference on Multimedia 1459\u20131462 (ACM, 2010).","DOI":"10.1145\/1873951.1874246"},{"key":"773_CR36","unstructured":"Vadera, M. P., Ghosh, S., Ng, K. & Marlin, B. M. Post-hoc loss-calibration for Bayesian neural networks. In Proc. Thirty-Seventh Conference on Uncertainty in Artificial Intelligence 1403\u20131412 (PMLR, 2021)."},{"key":"773_CR37","doi-asserted-by":"publisher","unstructured":"Cobb, A. D., Roberts, S. J. & Gal, Y. Loss-calibrated approximate inference in Bayesian neural networks. Preprint at arXiv https:\/\/doi.org\/10.48550\/arXiv.1805.03901 (2018).","DOI":"10.48550\/arXiv.1805.03901"},{"key":"773_CR38","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. 
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770\u2013778 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.90"},{"key":"773_CR39","unstructured":"Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proc. 33rd International Conference on Machine Learning 1050\u20131059 (PMLR, 2016)."},{"key":"773_CR40","unstructured":"Tensorflow\/Models (GitHub, 2019); https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/audioset\/vggish\/vggish_input.py"},{"key":"773_CR41","unstructured":"Vaswani, A. et al. Attention is all you need. In 31st Conference on Neural Information Processing Systems https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2017 (2017)."},{"key":"773_CR42","unstructured":"Baevski, A., Zhou, Y., Mohamed, A. & Auli, M. wav2vec 2.0: a framework for self-supervised learning of speech representations. In Advances in Neural Information Processing Systems (eds. Lin, H. et al.) Vol. 33, 12449\u201312460 (Curran Associates, 2020); https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/92d1e1eb1cd6f9fba3227870bb6d7f07-Paper.pdf"},{"key":"773_CR43","unstructured":"Dosovitskiy, A. et al. An image is worth 16\u2009\u00d7\u200916 words: transformers for image recognition at scale. In International Conference on Learning Representations (ICLR, 2021); https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"773_CR44","doi-asserted-by":"publisher","unstructured":"Gong, Y., Lai, C.-I. J., Chung, Y.-A. & Glass, J. SSAST: self-supervised audio spectrogram transformer. In Proc. AAAI Conference on Artificial Intelligence https:\/\/doi.org\/10.1609\/aaai.v36i10.21315 (AAAI, 2022).","DOI":"10.1609\/aaai.v36i10.21315"},{"key":"773_CR45","doi-asserted-by":"crossref","unstructured":"Gemmeke, J. F. et al. Audio set: an ontology and human-labeled dataset for audio events. In Proc. 
IEEE ICASSP 2017 (IEEE, 2017).","DOI":"10.1109\/ICASSP.2017.7952261"},{"key":"773_CR46","doi-asserted-by":"crossref","unstructured":"Panayotov, V., Chen, G., Povey, D. & Khudanpur, S. Librispeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 5206\u20135210 (IEEE, 2015).","DOI":"10.1109\/ICASSP.2015.7178964"},{"key":"773_CR47","doi-asserted-by":"crossref","unstructured":"Park, D. S. et al. SpecAugment: a simple data augmentation method for automatic speech recognition. In Proc. Interspeech 2019 2613\u20132617 (ISCA, 2019).","DOI":"10.21437\/Interspeech.2019-2680"},{"key":"773_CR48","doi-asserted-by":"publisher","unstructured":"Coppock, H. et al. The UK COVID-19 Vocal Audio Dataset (openAccessv1.0) (Zenodo, 2023); https:\/\/doi.org\/10.5281\/zenodo.10043978","DOI":"10.5281\/zenodo.10043978"},{"key":"773_CR49","doi-asserted-by":"publisher","unstructured":"Coppock, H. et al. Alan-Turing-Institute\/Turing-RSS-Health-Data-Lab-Biomedical-Acoustic-Markers: Initial (Zenodo, 2023); https:\/\/doi.org\/10.5281\/zenodo.8130844","DOI":"10.5281\/zenodo.8130844"}],"container-title":["Nature Machine 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00773-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00773-8","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00773-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,23]],"date-time":"2024-02-23T00:09:42Z","timestamp":1708646982000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00773-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,7]]},"references-count":49,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,2]]}},"alternative-id":["773"],"URL":"https:\/\/doi.org\/10.1038\/s42256-023-00773-8","relation":{},"ISSN":["2522-5839"],"issn-type":[{"value":"2522-5839","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,7]]},"assertion":[{"value":"19 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 November 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"This study has been approved by The National Statistician\u2019s Data Ethics Advisory Committee (reference NSDEC(21)01) and the Cambridge South NHS Research Ethics Committee (reference 21\/EE\/0036) and Nottingham NHS Research Ethics Committee (reference 21\/EM\/0067). 
All participants reviewed the provided participant information and gave their informed consent to take part in the study.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics statement"}}]}}