{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:28:19Z","timestamp":1775230099275,"version":"3.50.1"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2021,4,6]],"date-time":"2021-04-06T00:00:00Z","timestamp":1617667200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,4,6]],"date-time":"2021-04-06T00:00:00Z","timestamp":1617667200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["EURASIP J. on Info. Security"],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Several methods for synthetic audio speech generation have been developed in the literature through the years. With the great technological advances brought by deep learning, many novel synthetic speech techniques achieving incredibly realistic results have recently been proposed. As these methods generate convincing fake human voices, they can be used maliciously to negatively impact today\u2019s society (e.g., people impersonation, fake news spreading, opinion formation). For this reason, the ability to detect whether a speech recording is synthetic or pristine is becoming an urgent necessity. In this work, we develop a synthetic speech detector. It takes an audio recording as input, extracts a series of hand-crafted features motivated by the speech-processing literature, and classifies them in either a closed-set or an open-set scenario. The proposed detector is validated on a publicly available dataset consisting of 17 synthetic speech generation algorithms, ranging from old-fashioned vocoders to modern deep learning solutions. Results show that the proposed method outperforms recently proposed detectors in the forensics literature.<\/jats:p>","DOI":"10.1186\/s13635-021-00116-3","type":"journal-article","created":{"date-parts":[[2021,4,6]],"date-time":"2021-04-06T18:02:47Z","timestamp":1617732167000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":60,"title":["Synthetic speech detection through short-term and long-term prediction traces"],"prefix":"10.1186","volume":"2021","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8127-9976","authenticated-orcid":false,"given":"Clara","family":"Borrelli","sequence":"first","affiliation":[]},{"given":"Paolo","family":"Bestagini","sequence":"additional","affiliation":[]},{"given":"Fabio","family":"Antonacci","sequence":"additional","affiliation":[]},{"given":"Augusto","family":"Sarti","sequence":"additional","affiliation":[]},{"given":"Stefano","family":"Tubaro","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,4,6]]},"reference":[{"key":"116_CR1","unstructured":"B. Dolhansky, J. Bitton, B. Pflaum, R. Lu, R. Howes, M. Wang, C. C. Ferrer, The deepfake detection challenge dataset. CoRR http:\/\/arxiv.org\/abs\/2006.07397(2020)."},{"key":"116_CR2","unstructured":"L. Verdoliva, Media forensics and deepfakes: an overview. CoRR http:\/\/arxiv.org\/abs\/2001.06564(2020)."},{"key":"116_CR3","unstructured":"Deepfakes github. https:\/\/github.com\/deepfakes\/faceswap."},{"key":"116_CR4","volume-title":"IEEE International Workshop on Information Forensics and Security (WIFS)","author":"Y. Li","year":"2018","unstructured":"Y. Li, M. Chang, S. Lyu, in IEEE International Workshop on Information Forensics and Security (WIFS). 
In ictu oculi: exposing AI created fake videos by detecting eye blinking (IEEEHong Kong, 2018)."},{"key":"116_CR5","volume-title":"IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS)","author":"D. G\u00fcera","year":"2018","unstructured":"D. G\u00fcera, E. J. Delp, in IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS). Deepfake video detection using recurrent neural networks (IEEEAuckland, 2018)."},{"key":"116_CR6","volume-title":"IEEE Winter Applications of Computer Vision Workshops (WACVW)","author":"F. Matern","year":"2019","unstructured":"F. Matern, C. Riess, M. Stamminger, in IEEE Winter Applications of Computer Vision Workshops (WACVW). Exploiting visual artifacts to expose deepfakes and face manipulations (IEEEWaikoloa, 2019)."},{"key":"116_CR7","volume-title":"International Conference on Pattern Recognition (ICPR)","author":"N. Bonettini","year":"2020","unstructured":"N. Bonettini, E. D. Cannas, S. Mandelli, L. Bondi, P. Bestagini, S. Tubaro, in International Conference on Pattern Recognition (ICPR). Video face manipulation detection through ensemble of CNNs (SpringerMilan, 2020)."},{"key":"116_CR8","first-page":"2577","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"A. Lieto","year":"2019","unstructured":"A. Lieto, D. Moro, F. Devoti, C. Parera, V. Lipari, P. Bestagini, S. Tubaro, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Hello? Who am i talking to? A shallow CNN approach for human vs. bot speech classification (IEEEBrighton, 2019), pp. 2577\u20132581."},{"key":"116_CR9","first-page":"104","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","author":"E. A. AlBadawy","year":"2019","unstructured":"E. A. AlBadawy, S. Lyu, H. Farid, in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 
Detecting AI-synthesized speech using bispectral analysis (Computer Vision Foundation\/IEEELong Beach, 2019), pp. 104\u2013109."},{"key":"116_CR10","volume-title":"Conference of the International Speech Communication Association (INTERSPEECH)","author":"M. Schr\u00f6der","year":"2011","unstructured":"M. Schr\u00f6der, M. Charfuelan, S. Pammi, I. Steiner, in Conference of the International Speech Communication Association (INTERSPEECH). Open source voice creation toolkit for the MARY TTS platform (ISCAFlorence, 2011)."},{"key":"116_CR11","doi-asserted-by":"publisher","first-page":"1877","DOI":"10.1587\/transinf.2015EDP7457","volume":"99","author":"M. Morise","year":"2016","unstructured":"M. Morise, F. Yokomori, K. Ozawa, WORLD: a vocoder-based high-quality speech synthesis system for real-time applications. IEICE Trans. Inf. Syst.99:, 1877\u20131884 (2016).","journal-title":"IEICE Trans. Inf. Syst."},{"key":"116_CR12","unstructured":"A. V. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, Wavenet: a generative model for raw audio. CoRR http:\/\/arxiv.org\/abs\/1609.03499(2016)."},{"key":"116_CR13","volume-title":"Conference of the International Speech Communication Association (INTERSPEECH)","author":"M. Sahidullah","year":"2015","unstructured":"M. Sahidullah, T. Kinnunen, C. Hanil\u00e7i, in Conference of the International Speech Communication Association (INTERSPEECH). A comparison of features for synthetic speech detection (ISCADresden, 2015)."},{"key":"116_CR14","doi-asserted-by":"publisher","first-page":"684","DOI":"10.1109\/JSTSP.2016.2647199","volume":"11","author":"C. Zhang","year":"2017","unstructured":"C. Zhang, C. Yu, J. H. Hansen, An investigation of deep-learning frameworks for speaker verification antispoofing. IEEE J. Sel. Top. Sig. Process. 11:, 684\u2013694 (2017).","journal-title":"IEEE J. Sel. Top. Sig. 
Process"},{"key":"116_CR15","volume-title":"Sixteenth Annual Conference of the International Speech Communication Association","author":"A. Janicki","year":"2015","unstructured":"A. Janicki, in Sixteenth Annual Conference of the International Speech Communication Association. Spoofing countermeasure based on analysis of linear prediction error (ISCADresden, 2015)."},{"key":"116_CR16","volume-title":"Conference of the International Speech Communication Association (INTERSPEECH)","author":"M. Todisco","year":"2019","unstructured":"M. Todisco, X. Wang, M. Sahidullah, H. Delgado, A. Nautsch, J. Yamagishi, N. Evans, T. Kinnunen, K. A. Lee, in Conference of the International Speech Communication Association (INTERSPEECH). ASVspoof 2019: future horizons in spoofed and fake audio detection (ISCAGraz, 2019)."},{"key":"116_CR17","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1016\/j.csl.2020.101114","volume":"64","author":"X. Wang","year":"2020","unstructured":"X. Wang, J. Yamagishi, M. Todisco, H. Delgado, A. Nautsch, N. Evans, M. Sahidullah, V. Vestman, T. Kinnunen, K. A. Lee, L. Juvela, P. Alku, Y. -H. Peng, H. -T. Hwang, Y. Tsao, H. -M. Wang, S. L. Maguer, M. Becker, F. Henderson, R. Clark, Y. Zhang, Q. Wang, Y. Jia, K. Onuma, K. Mushika, T. Kaneda, Y. Jiang, L. -J. Liu, Y. -C. Wu, W. -C. Huang, T. Toda, K. Tanaka, H. Kameoka, I. Steiner, D. Matrouf, J. -F. Bonastre, A. Govender, S. Ronanki, J. -X. Zhang, Z. -H. Ling, ASVspoof 2019: a large-scale public database of synthesized, converted and replayed speech. Comput. Speech Lang.64:, 101\u2013114 (2020).","journal-title":"Comput. Speech Lang."},{"key":"116_CR18","doi-asserted-by":"publisher","first-page":"453","DOI":"10.1016\/0167-6393(90)90021-Z","volume":"9","author":"E. Moulines","year":"1990","unstructured":"E. Moulines, F. Charpentier, Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones. 
Speech Comm.9:, 453\u2013467 (1990).","journal-title":"Speech Comm."},{"key":"116_CR19","volume-title":"1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings","author":"A. J. Hunt","year":"1996","unstructured":"A. J. Hunt, A. W. Black, in 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings. Unit selection in a concatenative speech synthesis system using a large speech database (IEEEAtlanta, 1996)."},{"key":"116_CR20","volume-title":"EUROSPEECH","author":"A. Black","year":"1995","unstructured":"A. Black, N. Campbell, in EUROSPEECH. Optimising selection of units from speech databases for concatenative synthesis (ISCAMadrid, 1995)."},{"key":"116_CR21","doi-asserted-by":"publisher","first-page":"959","DOI":"10.1007\/s10772-017-9463-8","volume":"20","author":"S. P. Panda","year":"2017","unstructured":"S. P. Panda, A. K. Nayak, A waveform concatenation technique for text-to-speech synthesis. Int. J. Speech Technol.20:, 959\u2013976 (2017).","journal-title":"Int. J. Speech Technol."},{"key":"116_CR22","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"T. Masuko","year":"1996","unstructured":"T. Masuko, K. Tokuda, T. Kobayashi, S. Imai, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Speech synthesis using HMMs with dynamic features (IEEEAtlanta, 1996)."},{"key":"116_CR23","volume-title":"IEEE Speech Synthesis Workshop","author":"K. Tokuda","year":"2002","unstructured":"K. Tokuda, H. Zen, A. W. Black, in IEEE Speech Synthesis Workshop. An HMM-based speech synthesis system applied to English (IEEESanta Monica, 2002)."},{"key":"116_CR24","doi-asserted-by":"publisher","first-page":"1133","DOI":"10.1109\/LSP.2017.2712646","volume":"24","author":"M. K. Reddy","year":"2017","unstructured":"M. K. Reddy, K. S. Rao, Robust pitch extraction method for the HMM-based speech synthesis system. 
IEEE Sig. Process. Lett.24:, 1133\u20131137 (2017).","journal-title":"IEEE Sig. Process. Lett."},{"key":"116_CR25","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1121\/1.1916020","volume":"11","author":"H. Dudley","year":"1939","unstructured":"H. Dudley, Remaking speech. J. Acoust. Soc. Am.11:, 169\u2013177 (1939).","journal-title":"J. Acoust. Soc. Am."},{"key":"116_CR26","doi-asserted-by":"publisher","first-page":"187","DOI":"10.1016\/S0167-6393(98)00085-5","volume":"27","author":"H. Kawahara","year":"1999","unstructured":"H. Kawahara, I. Masuda-Katsuse, A. De Cheveigne, Restructuring speech representations using a pitch-adaptive time\u2013frequency smoothing and an instantaneous-frequency-based f0 extraction: possible role of a repetitive structure in sounds. Speech Comm.27:, 187\u2013207 (1999).","journal-title":"Speech Comm."},{"key":"116_CR27","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Y. Agiomyrgiannakis","year":"2015","unstructured":"Y. Agiomyrgiannakis, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vocaine the vocoder and applications in speech synthesis (IEEEBrisbane, 2015)."},{"key":"116_CR28","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"J. Shen","year":"2018","unstructured":"J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerrv-Ryan, R. A. Saurous, Y. Agiomvrgiannakis, Y. Wu, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions (IEEECalgary, 2018)."},{"key":"116_CR29","unstructured":"N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart, F. Stimberg, A. van den Oord, S. Dieleman, K. Kavukcuoglu, Efficient neural audio synthesis. 
CoRR http:\/\/arxiv.org\/abs\/1802.08435(2018)."},{"key":"116_CR30","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1017\/ATSIP.2020.16","volume":"9","author":"M. R. Kamble","year":"2020","unstructured":"M. R. Kamble, H. B. Sailor, H. A. Patil, H. Li, Advances in anti-spoofing: from the perspective of ASVspoof challenges. APSIPA Trans. Sig. Inf. Process.9:, 18 (2020). https:\/\/www.cambridge.org\/core\/journals\/apsipa-transactions-on-signal-and-information-processing\/article\/advances-in-antispoofing-from-the-perspective-of-asvspoof-challenges\/6B5BB5B75A49022EB869C7117D5E4A9C.","journal-title":"APSIPA Trans. Sig. Inf. Process."},{"key":"116_CR31","doi-asserted-by":"publisher","first-page":"516","DOI":"10.1016\/j.csl.2017.01.001","volume":"45","author":"M. Todisco","year":"2017","unstructured":"M. Todisco, H. Delgado, N. Evans, Constant Q cepstral coefficients: a spoofing countermeasure for automatic speaker verification. Comput. Speech Lang.45:, 516\u2013535 (2017).","journal-title":"Comput. Speech Lang."},{"key":"116_CR32","volume-title":"Sixteenth Annual Conference of the International Speech Communication Association","author":"X. Xiao","year":"2015","unstructured":"X. Xiao, X. Tian, S. Du, H. Xu, E. S. Chng, H. Li, in Sixteenth Annual Conference of the International Speech Communication Association. Spoofing speech detection using high dimensional magnitude and phase features: the NTU approach for ASVspoof 2015 challenge (ISCADresden, 2015)."},{"key":"116_CR33","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"H. Dinkel","year":"2017","unstructured":"H. Dinkel, N. Chen, Y. Qian, K. Yu, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). End-to-end spoofing detection with raw waveform CLDNNS (IEEENew Orleans, 2017)."},{"key":"116_CR34","first-page":"21","volume":"1","author":"G. Fant","year":"1981","unstructured":"G. 
Fant, The source filter concept in voice production. Speech Transm. Lab. Q. Prog. Status Rep.1:, 21\u201337 (1981).","journal-title":"Speech Transm. Lab. Q. Prog. Status Rep."},{"key":"116_CR35","doi-asserted-by":"crossref","unstructured":"Linear prediction in narrowband and wideband coding (John Wiley & Sons, LtdHoboken, 2005), pp. 91\u2013112. Chap. 4.","DOI":"10.1002\/9780470041970.ch4"},{"key":"116_CR36","doi-asserted-by":"publisher","first-page":"573","DOI":"10.1093\/biomet\/72.3.573","volume":"72","author":"J. Franke","year":"1985","unstructured":"J. Franke, A Levinson-Durbin recursion for autoregressive-moving average processes. Biometrika. 72:, 573\u2013581 (1985).","journal-title":"Biometrika"},{"key":"116_CR37","unstructured":"VCTK corpus. doi:10.7488\/ds\/1994. Accessed 23 Mar 2021."},{"key":"116_CR38","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"X. Wang","year":"2018","unstructured":"X. Wang, J. Lorenzo-Trueba, S. Takaki, L. Juvela, J. Yamagishi, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). A comparison of recent waveform generation and acoustic modeling methods for neural-network-based speech synthesis (IEEECalgary, 2018)."},{"key":"116_CR39","volume-title":"Speech Synthesis Workshop (SSW)","author":"Z. Wu","year":"2016","unstructured":"Z. Wu, O. Watts, S. King, in Speech Synthesis Workshop (SSW). Merlin: an open source neural network speech synthesis system (SunnyvaleISCA, 2016)."},{"key":"116_CR40","volume-title":"Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","author":"C. Hsu","year":"2016","unstructured":"C. Hsu, H. Hwang, Y. Wu, Y. Tsao, H. Wang, in Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). 
Voice conversion from non-parallel corpora using variational auto-encoder (IEEEJeju, 2016)."},{"key":"116_CR41","volume-title":"IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP)","author":"D. Matrouf","year":"2006","unstructured":"D. Matrouf, J. Bonastre, C. Fredouille, in IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP). Effect of speech transformation on impostor acceptance (IEEEToulouse, 2006)."},{"key":"116_CR42","unstructured":"K. Tanaka, H. Kameoka, T. Kaneko, N. Hojo, WaveCycleGAN2: time-domain neural post-filter for speech waveform generation. CoRR http:\/\/arxiv.org\/abs\/1904.02892(2019)."},{"key":"116_CR43","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"X. Wang","year":"2019","unstructured":"X. Wang, S. Takaki, J. Yamagishi, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Neural source-filter-based waveform model for statistical parametric speech synthesis (IEEEBrighton, 2019)."},{"key":"116_CR44","volume-title":"Conference of the International Speech Communication Association (INTERSPEECH)","author":"H. Zen","year":"2016","unstructured":"H. Zen, Y. Agiomyrgiannakis, N. Egberts, F. Henderson, P. Szczepaniak, in Conference of the International Speech Communication Association (INTERSPEECH). Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices (ISCASan Francisco, 2016)."},{"key":"116_CR45","volume-title":"Advances in Neural Information Processing Systems (NIPS)","author":"Y. Jia","year":"2018","unstructured":"Y. Jia, Y. Zhang, R. Weiss, Q. Wang, J. Shen, F. Ren, z. Chen, P. Nguyen, R. Pang, I. Lopez Moreno, Y. Wu, in Advances in Neural Information Processing Systems (NIPS). 
Transfer learning from speaker verification to multispeaker text-to-speech synthesis (Curran Associates, Inc.Montreal, 2018)."},{"key":"116_CR46","doi-asserted-by":"publisher","first-page":"236","DOI":"10.1109\/TASSP.1984.1164317","volume":"32","author":"D. Griffin","year":"1984","unstructured":"D. Griffin, J. Lim, Signal estimation from modified short-time Fourier transform. IEEE Trans. Acoust. Speech Sig. Process. (TASLP). 32:, 236\u2013243 (1984).","journal-title":"IEEE Trans. Acoust. Speech Sig. Process. (TASLP)"},{"key":"116_CR47","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1016\/j.specom.2018.03.011","volume":"99","author":"K. Kobayashi","year":"2018","unstructured":"K. Kobayashi, T. Toda, S. Nakamura, Intra-gender statistical singing voice conversion with direct waveform modification using log-spectral differential. Speech Commun.99:, 211\u2013220 (2018).","journal-title":"Speech Commun."},{"key":"116_CR48","volume-title":"The Speaker and Language Recognition Workshop","author":"T. Kinnunen","year":"2018","unstructured":"T. Kinnunen, J. Lorenzo-Trueba, J. Yamagishi, T. Toda, D. Saito, F. Villavicencio, Z. Ling, in The Speaker and Language Recognition Workshop. A spoofing benchmark for the 2018 voice conversion challenge: leveraging from spoofing countermeasures for speech artifact assessment (ISCALes Sables d\u2019Olonne, 2018)."},{"key":"116_CR49","first-page":"2825","volume":"12","author":"F. Pedregosa","year":"2011","unstructured":"F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: machine learning in Python. J. Mach. Learn. Res. (JMLR). 12:, 2825\u20132830 (2011).","journal-title":"J. Mach. Learn. Res. 
(JMLR)"}],"container-title":["EURASIP Journal on Information Security"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s13635-021-00116-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1186\/s13635-021-00116-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s13635-021-00116-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,1]],"date-time":"2023-11-01T18:54:45Z","timestamp":1698864885000},"score":1,"resource":{"primary":{"URL":"https:\/\/jis-eurasipjournals.springeropen.com\/articles\/10.1186\/s13635-021-00116-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,4,6]]},"references-count":49,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["116"],"URL":"https:\/\/doi.org\/10.1186\/s13635-021-00116-3","relation":{},"ISSN":["2510-523X"],"issn-type":[{"value":"2510-523X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,4,6]]},"assertion":[{"value":"22 September 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 March 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 April 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"2"}}