{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T10:10:28Z","timestamp":1773828628982,"version":"3.50.1"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,11,23]],"date-time":"2023-11-23T00:00:00Z","timestamp":1700697600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,11,23]],"date-time":"2023-11-23T00:00:00Z","timestamp":1700697600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100009367","name":"Mansoura University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100009367","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Comput &amp; Applic"],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The availability of large-scale facial datasets, together with the rapid progress of deep learning techniques such as Generative Adversarial Networks, has enabled anyone to create realistic fake videos. These fake videos can become harmful when used for fake news, hoaxes, and identity fraud. We propose a deep learning bagging ensemble classifier to detect manipulated faces in videos. The proposed bagging classifier uses the convolution and self-attention network (CoAtNet) model as its base learner. The CoAtNet model vertically stacks depthwise convolution layers and self-attention layers in a way that improves generalization, capacity, and efficiency. Depthwise convolution captures local features from the faces extracted from a video and passes these features to the attention layers, which extract global information and efficiently capture long-range dependencies among spatial details. Each learner is trained on a different subset of the training data, randomly sampled with replacement, and the models\u2019 predictions are then combined to classify the video as either real or fake. We also apply CutMix data augmentation to the extracted faces to enhance the generalization and localization performance of the base learner model. Our experimental results show that our proposed method outperforms state-of-the-art methods, with AUC values of 99.70%, 97.49%, 98.90%, and 87.62% on the different manipulation techniques of the FaceForensics++ dataset (DeepFakes (DF), Face2Face (F2F), FaceSwap (FS), and NeuralTextures (NT)), respectively, and 99.74% on the Celeb-DF dataset.<\/jats:p>","DOI":"10.1007\/s00521-023-09196-3","type":"journal-article","created":{"date-parts":[[2023,11,23]],"date-time":"2023-11-23T15:02:23Z","timestamp":1700751743000},"page":"2749-2765","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":40,"title":["An ensemble of CNNs with self-attention mechanism for DeepFake video detection"],"prefix":"10.1007","volume":"36","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3590-3424","authenticated-orcid":false,"given":"Karima","family":"Omar","sequence":"first","affiliation":[]},{"given":"Rasha H.","family":"Sakr","sequence":"additional","affiliation":[]},{"given":"Mohammed F.","family":"Alrahmawy","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,11,23]]},"reference":[{"key":"9196_CR1","unstructured":"Bitesize B (2019) Deepfakes: What are they and why would I make one? 
[Online]"},{"issue":"11","key":"9196_CR2","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1145\/3422622","volume":"63","author":"I Goodfellow","year":"2020","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139\u2013144","journal-title":"Commun ACM"},{"key":"9196_CR3","unstructured":"Kingma DP, Welling M (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114"},{"key":"9196_CR4","unstructured":"FaceApp: Perfect Face Editor. [Online; accessed 21-December-2022]. https:\/\/apps.apple.com\/gb\/app\/faceapp-ai-face-editor\/id1180884341"},{"key":"9196_CR5","unstructured":"FaceSwap github. [Online; accessed 05-December-2022]. https:\/\/github.com\/MarekKowalski\/FaceSwap\/"},{"key":"9196_CR6","doi-asserted-by":"crossref","unstructured":"Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 4401\u20134410","DOI":"10.1109\/CVPR.2019.00453"},{"key":"9196_CR7","unstructured":"Deepfakes github. [Online; accessed 05-December-2022]. https:\/\/github.com\/deepfakes\/faceswap"},{"key":"9196_CR8","unstructured":"ZAO App. [Online; accessed 05-December-2022]. https:\/\/apps.apple.com\/cn\/app\/id1465199127"},{"key":"9196_CR9","unstructured":"facebook. [Online; accessed 21-December-2022]. https:\/\/www.bbc.com\/news\/technology-48607673"},{"key":"9196_CR10","doi-asserted-by":"crossref","unstructured":"Thies J, Zollhofer M, Stamminger M, Theobalt C, Nie\u00dfner M (2016) Face2face: real-time face capture and reenactment of RGB videos. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2387\u20132395","DOI":"10.1109\/CVPR.2016.262"},{"issue":"4","key":"9196_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3306346.3323035","volume":"38","author":"J Thies","year":"2019","unstructured":"Thies J, Zollh\u00f6fer M, Nie\u00dfner M (2019) Deferred neural rendering: image synthesis using neural textures. ACM Trans Graph (TOG) 38(4):1\u201312","journal-title":"ACM Trans Graph (TOG)"},{"key":"9196_CR12","doi-asserted-by":"crossref","unstructured":"Choi Y, Choi M, Kim M, Ha J-W, Kim S, Choo J (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 8789\u20138797","DOI":"10.1109\/CVPR.2018.00916"},{"issue":"1","key":"9196_CR13","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1007\/s11263-010-0403-1","volume":"92","author":"I Yerushalmy","year":"2011","unstructured":"Yerushalmy I, Hel-Or H (2011) Digital image forgery detection based on lens and sensor aberration. Int J Comput Vision 92(1):71\u201391","journal-title":"Int J Comput Vision"},{"issue":"3","key":"9196_CR14","doi-asserted-by":"publisher","first-page":"1099","DOI":"10.1109\/TIFS.2011.2129512","volume":"6","author":"I Amerini","year":"2011","unstructured":"Amerini I, Ballan L, Caldelli R, Del Bimbo A, Serra G (2011) A sift-based forensic method for copy-move attack detection and transformation recovery. IEEE Trans Inf Forensics Secur 6(3):1099\u20131110","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"9196_CR15","unstructured":"Agarwal S, Farid H, Gu Y, He M, Nagano K, Li H (2019) Protecting world leaders against deep fakes. In: CVPR Workshops, vol 1, p 38"},{"key":"9196_CR16","doi-asserted-by":"crossref","unstructured":"Yang X, Li Y, Lyu S (2019) Exposing deep fakes using inconsistent head poses. 
In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 8261\u20138265. IEEE","DOI":"10.1109\/ICASSP.2019.8683164"},{"key":"9196_CR17","doi-asserted-by":"crossref","unstructured":"Rossler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Nie\u00dfner M (2019) Faceforensics++: learning to detect manipulated facial images. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp 1\u201311","DOI":"10.1109\/ICCV.2019.00009"},{"issue":"3","key":"9196_CR18","doi-asserted-by":"publisher","first-page":"868","DOI":"10.1109\/TIFS.2012.2190402","volume":"7","author":"J Fridrich","year":"2012","unstructured":"Fridrich J, Kodovsky J (2012) Rich models for steganalysis of digital images. IEEE Trans Inf Forensics Secur 7(3):868\u2013882","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"9196_CR19","doi-asserted-by":"crossref","unstructured":"Afchar D, Nozick V, Yamagishi J, Echizen I (2018) Mesonet: a compact facial video forgery detection network. In: 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp 1\u20137.","DOI":"10.1109\/WIFS.2018.8630761"},{"key":"9196_CR20","doi-asserted-by":"crossref","unstructured":"Rahmouni N, Nozick V, Yamagishi J, Echizen I (2017) Distinguishing computer graphics from natural images using convolution neural networks. In: 2017 IEEE Workshop on Information Forensics and Security (WIFS), pp 1\u20136.","DOI":"10.1109\/WIFS.2017.8267647"},{"key":"9196_CR21","doi-asserted-by":"crossref","unstructured":"Ciftci UA, Demir I, Yin L (2020) Fakecatcher: detection of synthetic portrait videos using biological signals. In: IEEE transactions on pattern analysis and machine intelligence","DOI":"10.1109\/TPAMI.2020.3009287"},{"key":"9196_CR22","doi-asserted-by":"crossref","unstructured":"Nguyen H, Yamagishi J, Echizen I (2019) Use of a capsule network to detect fake images and videos. 
arXiv preprint arXiv:1910.12467","DOI":"10.1109\/ICASSP.2019.8682602"},{"key":"9196_CR23","doi-asserted-by":"crossref","unstructured":"Liu H, Li X, Zhou W, Chen Y, He Y, Xue H, Zhang W, Yu N (2021) Spatial-phase shallow learning: rethinking face forgery detection in frequency domain. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 772\u2013781","DOI":"10.1109\/CVPR46437.2021.00083"},{"issue":"4","key":"9196_CR24","first-page":"90","volume":"35","author":"F Dong","year":"2023","unstructured":"Dong F, Zou X, Wang J, Liu X (2023) Contrastive learning-based general deepfake detection with multi-scale RGB frequency clues. J King Saud Univ-Comput Inf Sci 35(4):90\u201399","journal-title":"J King Saud Univ-Comput Inf Sci"},{"key":"9196_CR25","doi-asserted-by":"crossref","unstructured":"Dang H, Liu F, Stehouwer J, Liu X, Jain AK (2020) On the detection of digital face manipulation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 5781\u20135790","DOI":"10.1109\/CVPR42600.2020.00582"},{"key":"9196_CR26","doi-asserted-by":"publisher","first-page":"119843","DOI":"10.1016\/j.eswa.2023.119843","volume":"222","author":"F Khalid","year":"2023","unstructured":"Khalid F, Javed A, Ilyas H, Irtaza A et al (2023) DFGNN: An interpretable and generalized graph neural network for deepfakes detection. Expert Syst Appl 222:119843","journal-title":"Expert Syst Appl"},{"key":"9196_CR27","unstructured":"de Lima O, Franklin S, Basu S, Karwoski B, George A (2020) Deepfake detection using spatiotemporal convolutional networks. arXiv preprint arXiv:2006.14749"},{"issue":"3","key":"9196_CR28","doi-asserted-by":"publisher","first-page":"1089","DOI":"10.1109\/TCSVT.2021.3074259","volume":"32","author":"J Hu","year":"2021","unstructured":"Hu J, Liao X, Wang W, Qin Z (2021) Detecting compressed deepfake videos in social networks using frame-temporality two-stream convolutional network. 
IEEE Trans Circuits Syst Video Technol 32(3):1089\u20131102","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"9196_CR29","first-page":"100217","volume":"4","author":"SH Silva","year":"2022","unstructured":"Silva SH, Bethany M, Votto AM, Scarff IH, Beebe N, Najafirad P (2022) Deepfake forensics analysis: an explainable hierarchical ensemble of weakly supervised models. Forensic Sci Int: Synergy 4:100217","journal-title":"Forensic Sci Int: Synergy"},{"key":"9196_CR30","doi-asserted-by":"crossref","unstructured":"Rana MS, Sung AH (2020) Deepfakestack: a deep ensemble-based learning technique for deepfake detection. In: 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)\/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom), pp 70\u201375.","DOI":"10.1109\/CSCloud-EdgeCom49738.2020.00021"},{"key":"9196_CR31","doi-asserted-by":"crossref","unstructured":"Chen H-S, Rouhsedaghat M, Ghani H, Hu S, You S, Kuo C-CJ (2021) Defakehop: a light-weight high-performance deepfake detector. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp 1\u20136","DOI":"10.1109\/ICME51207.2021.9428361"},{"key":"9196_CR32","unstructured":"Heo Y-J, Choi Y-J, Lee Y-W, Kim B-G (2021) Deepfake detection scheme based on vision transformer and distillation. arXiv preprint arXiv:2104.01353"},{"issue":"7","key":"9196_CR33","doi-asserted-by":"publisher","first-page":"7512","DOI":"10.1007\/s10489-022-03867-9","volume":"53","author":"Y-J Heo","year":"2023","unstructured":"Heo Y-J, Yeo W-H, Kim B-G (2023) Deepfake detection algorithm based on improved vision transformer. Appl Intell 53(7):7512\u20137527","journal-title":"Appl Intell"},{"key":"9196_CR34","unstructured":"Wodajo D, Atnafu S (2021) Deepfake video detection using convolutional vision transformer. 
arXiv preprint arXiv:2102.11126"},{"key":"9196_CR35","doi-asserted-by":"crossref","unstructured":"Wang X, Girshick R, Gupta A, He K (2018) Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 7794\u20137803","DOI":"10.1109\/CVPR.2018.00813"},{"key":"9196_CR36","doi-asserted-by":"crossref","unstructured":"Bello I, Zoph B, Vaswani A, Shlens J, Le QV (2019) Attention augmented convolutional networks. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp 3286\u20133295","DOI":"10.1109\/ICCV.2019.00338"},{"key":"9196_CR37","doi-asserted-by":"crossref","unstructured":"Srinivas A, Lin T-Y, Parmar N, Shlens J, Abbeel P, Vaswani A (2021) Bottleneck transformers for visual recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 16519\u201316529","DOI":"10.1109\/CVPR46437.2021.01625"},{"key":"9196_CR38","unstructured":"Shen Z, Zhang M, Zhao H, Yi S, Li H (2021) Efficient attention: attention with linear complexities. In: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, pp 3531\u20133539"},{"key":"9196_CR39","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S et al. (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929"},{"key":"9196_CR40","unstructured":"Tan M, Le Q (2019) Efficientnet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning, pp 6105\u20136114. PMLR"},{"key":"9196_CR41","doi-asserted-by":"crossref","unstructured":"Vaswani A, Ramachandran P, Srinivas A, Parmar N, Hechtman B, Shlens J (2021) Scaling local self-attention for parameter efficient visual backbones. 
In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 12894\u201312904","DOI":"10.1109\/CVPR46437.2021.01270"},{"key":"9196_CR42","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"9196_CR43","first-page":"3965","volume":"34","author":"Z Dai","year":"2021","unstructured":"Dai Z, Liu H, Le QV, Tan M (2021) Coatnet: marrying convolution and attention for all data sizes. Adv Neural Inf Process Syst 34:3965\u20133977","journal-title":"Adv Neural Inf Process Syst"},{"key":"9196_CR44","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) Mobilenetv2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4510\u20134520","DOI":"10.1109\/CVPR.2018.00474"},{"key":"9196_CR45","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems vol 30"},{"key":"9196_CR46","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556"},{"key":"9196_CR47","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"9196_CR48","doi-asserted-by":"crossref","unstructured":"Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1\u20139","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"9196_CR49","doi-asserted-by":"crossref","unstructured":"Shaw P, Uszkoreit J, Vaswani A (2018) Self-attention with relative position representations. arXiv preprint arXiv:1803.02155","DOI":"10.18653\/v1\/N18-2074"},{"key":"9196_CR50","unstructured":"Huang C-ZA, Vaswani A, Uszkoreit J, Shazeer N, Simon I, Hawthorne C, Dai AM, Hoffman MD, Dinculescu M, Eck D (2018) Music transformer. arXiv preprint arXiv:1809.04281"},{"key":"9196_CR51","unstructured":"Ramachandran P, Parmar N, Vaswani A, Bello I, Levskaya A, Shlens J (2019) Stand-alone self-attention in vision models. In: Advances in Neural Information Processing Systems vol. 32"},{"key":"9196_CR52","doi-asserted-by":"crossref","unstructured":"Yun S, Han D, Oh SJ, Chun S, Choe J, Yoo Y (2019) Cutmix: regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp 6023\u20136032","DOI":"10.1109\/ICCV.2019.00612"},{"key":"9196_CR53","unstructured":"DeVries T, Taylor GW (2017) Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552"},{"key":"9196_CR54","doi-asserted-by":"crossref","unstructured":"Zhong Z, Zheng L, Kang G, Li S, Yang Y (2020) Random erasing data augmentation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 34, pp 13001\u201313008","DOI":"10.1609\/aaai.v34i07.7000"},{"key":"9196_CR55","unstructured":"Zhang H, Cisse M, Dauphin YN, Lopez-Paz D (2017) mixup: beyond empirical risk minimization. arXiv preprint arXiv:1710.09412"},{"key":"9196_CR56","doi-asserted-by":"crossref","unstructured":"Zhou Z-H (2021) Ensemble learning. In: Machine Learning, pp 181\u2013210. 
Springer","DOI":"10.1007\/978-981-15-1967-3_8"},{"key":"9196_CR57","doi-asserted-by":"crossref","unstructured":"Li Y, Yang X, Sun P, Qi H, Lyu S (2020) Celeb-df: a large-scale challenging dataset for deepfake forensics. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 3207\u20133216","DOI":"10.1109\/CVPR42600.2020.00327"},{"issue":"5","key":"9196_CR58","doi-asserted-by":"publisher","first-page":"34","DOI":"10.1109\/38.946629","volume":"21","author":"E Reinhard","year":"2001","unstructured":"Reinhard E, Adhikhmin M, Gooch B, Shirley P (2001) Color transfer between images. IEEE Comput Graph Appl 21(5):34\u201341","journal-title":"IEEE Comput Graph Appl"},{"key":"9196_CR59","doi-asserted-by":"crossref","unstructured":"Ma L, Jia X, Sun Q, Schiele B, Tuytelaars T, Van\u00a0Gool L (2017) Pose guided person image generation. In: Advances in neural information processing systems vol 30","DOI":"10.1109\/CVPR.2018.00018"},{"issue":"10","key":"9196_CR60","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang K, Zhang Z, Li Z, Qiao Y (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett 23(10):1499\u20131503","journal-title":"IEEE Signal Process Lett"},{"key":"9196_CR61","doi-asserted-by":"crossref","unstructured":"Chollet F (2017) Xception: deep learning with depthwise separable convolutions. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1251\u20131258","DOI":"10.1109\/CVPR.2017.195"}],"container-title":["Neural Computing and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-023-09196-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00521-023-09196-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-023-09196-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,23]],"date-time":"2024-01-23T07:07:31Z","timestamp":1705993651000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00521-023-09196-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,23]]},"references-count":61,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["9196"],"URL":"https:\/\/doi.org\/10.1007\/s00521-023-09196-3","relation":{},"ISSN":["0941-0643","1433-3058"],"issn-type":[{"value":"0941-0643","type":"print"},{"value":"1433-3058","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,11,23]]},"assertion":[{"value":"15 February 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 October 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 November 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}