{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:24:23Z","timestamp":1775067863678,"version":"3.50.1"},"reference-count":68,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,2,13]],"date-time":"2024-02-13T00:00:00Z","timestamp":1707782400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,13]],"date-time":"2024-02-13T00:00:00Z","timestamp":1707782400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Swiss National Foundation for Scientific Research","award":["20CH21 195532"],"award-info":[{"award-number":["20CH21 195532"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Image Video Proc."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Detecting digital face manipulation in images and video has attracted extensive attention due to the potential risk to public trust. To counteract the malicious usage of such techniques, deep learning-based deepfake detection methods have been employed and have exhibited remarkable performance. However, the performance of such detectors is often assessed on related benchmarks that hardly reflect real-world situations. For example, the impact of various image and video processing operations and typical workflow distortions on detection accuracy has not been systematically measured. In this paper, a more reliable assessment framework is proposed to evaluate the performance of learning-based deepfake detectors in more realistic settings. 
To the best of our knowledge, it is the first systematic assessment approach for deepfake detectors that not only reports the general performance under real-world conditions but also quantitatively measures their robustness toward different processing operations. To demonstrate the effectiveness and usage of the framework, extensive experiments and detailed analysis of four popular deepfake detection methods are further presented in this paper. In addition, a stochastic degradation-based data augmentation method driven by realistic processing operations is designed, which significantly improves the robustness of deepfake detectors.<\/jats:p>","DOI":"10.1186\/s13640-024-00621-8","type":"journal-article","created":{"date-parts":[[2024,2,13]],"date-time":"2024-02-13T12:02:35Z","timestamp":1707825755000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":42,"title":["Assessment framework for deepfake detection in real-world situations"],"prefix":"10.1186","volume":"2024","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8191-3167","authenticated-orcid":false,"given":"Yuhang","family":"Lu","sequence":"first","affiliation":[]},{"given":"Touradj","family":"Ebrahimi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,13]]},"reference":[{"key":"621_CR1","unstructured":"T. Karras, T. Aila, S. Laine, J. Lehtinen, Progressive growing of gans for improved quality, stability, and variation. arXiv preprint (2017)"},{"key":"621_CR2","doi-asserted-by":"crossref","unstructured":"T. Karras, S. Laine, T. Aila, A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401\u20134410 (2019)","DOI":"10.1109\/CVPR.2019.00453"},{"key":"621_CR3","doi-asserted-by":"crossref","unstructured":"T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, T. 
Aila, Analyzing and improving the image quality of stylegan. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 8110\u20138119 (2020)","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"621_CR4","doi-asserted-by":"crossref","unstructured":"A. R\u00f6ssler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, M. Nie\u00dfner, FaceForensics++: Learning to detect manipulated facial images. In: International Conference on Computer Vision (ICCV) (2019)","DOI":"10.1109\/ICCV.2019.00009"},{"issue":"4","key":"621_CR5","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3306346.3323035","volume":"38","author":"J Thies","year":"2019","unstructured":"J. Thies, M. Zollh\u00f6fer, M. Nie\u00dfner, Deferred neural rendering: image synthesis using neural textures. ACM Trans. Graph. 38(4), 1\u201312 (2019)","journal-title":"ACM Trans. Graph."},{"key":"621_CR6","doi-asserted-by":"crossref","unstructured":"Y. Nirkin, Y. Keller, T. Hassner, Fsgan: Subject agnostic face swapping and reenactment. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 7184\u20137193 (2019)","DOI":"10.1109\/ICCV.2019.00728"},{"key":"621_CR7","doi-asserted-by":"crossref","unstructured":"E. Zakharov, A. Shysheya, E. Burkov, V. Lempitsky, Few-shot adversarial learning of realistic neural talking head models. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 9459\u20139468 (2019)","DOI":"10.1109\/ICCV.2019.00955"},{"key":"621_CR8","unstructured":"B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, C.C. Ferrer, The deepfake detection challenge dataset. arXiv preprint arXiv:2006.07397 (2020)"},{"key":"621_CR9","doi-asserted-by":"crossref","unstructured":"L. Jiang, R. Li, W. Wu, C. Qian, C.C. Loy, DeeperForensics-1.0: a large-scale dataset for real-world face forgery detection. arXiv preprint arXiv:2001.03024 (2020)","DOI":"10.1109\/CVPR42600.2020.00296"},{"key":"621_CR10","doi-asserted-by":"crossref","unstructured":"W. Chen, B. Chua, S. 
Winkler, AI Singapore trusted media challenge dataset. arXiv preprint arXiv:2201.04788 (2022)","DOI":"10.1145\/3511808.3557715"},{"key":"621_CR11","doi-asserted-by":"crossref","unstructured":"H.H. Nguyen, J. Yamagishi, I. Echizen, Use of a capsule network to detect fake images and videos. ArXiv (2019)","DOI":"10.1109\/ICASSP.2019.8682602"},{"key":"621_CR12","doi-asserted-by":"crossref","unstructured":"H. Zhao, W. Zhou, D. Chen, T. Wei, W. Zhang, N. Yu, Multi-attentional deepfake detection. 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2185\u20132194 (2021)","DOI":"10.1109\/CVPR46437.2021.00222"},{"key":"621_CR13","doi-asserted-by":"crossref","unstructured":"H. Liu, X. Li, W. Zhou, Y. Chen, Y. He, H. Xue, W. Zhang, N. Yu, Spatial-phase shallow learning: rethinking face forgery detection in frequency domain. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 772\u2013781 (2021)","DOI":"10.1109\/CVPR46437.2021.00083"},{"key":"621_CR14","doi-asserted-by":"crossref","unstructured":"Y. Qian, G. Yin, L. Sheng, Z. Chen, J. Shao, Thinking in frequency: Face forgery detection by mining frequency-aware clues. In: European Conference on Computer Vision, pp. 86\u2013103 (2020). Springer","DOI":"10.1007\/978-3-030-58610-2_6"},{"key":"621_CR15","doi-asserted-by":"crossref","unstructured":"J. Li, H. Xie, J. Li, Z. Wang, Y. Zhang, Frequency-aware discriminative feature learning supervised by single-center loss for face forgery detection. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 6458\u20136467 (2021)","DOI":"10.1109\/CVPR46437.2021.00639"},{"key":"621_CR16","doi-asserted-by":"crossref","unstructured":"Y. Luo, Y. Zhang, J. Yan, W. Liu, Generalizing face forgery detection with high-frequency features. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
16317\u201316326 (2021)","DOI":"10.1109\/CVPR46437.2021.01605"},{"key":"621_CR17","doi-asserted-by":"crossref","unstructured":"A. Khodabakhsh, R. Ramachandra, K. Raja, P. Wasnik, C. Busch, Fake face detection methods: Can they be generalized? In: 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1\u20136 (2018). IEEE","DOI":"10.23919\/BIOSIG.2018.8553251"},{"key":"621_CR18","doi-asserted-by":"crossref","unstructured":"X. Xuan, B. Peng, W. Wang, J. Dong, On the generalization of GAN image forensics. In: Chinese Conference on Biometric Recognition, pp. 134\u2013141 (2019). Springer","DOI":"10.1007\/978-3-030-31456-9_15"},{"key":"621_CR19","doi-asserted-by":"crossref","unstructured":"A. Haliassos, K. Vougioukas, S. Petridis, M. Pantic, Lips don\u2019t lie: A generalisable and robust approach to face forgery detection. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 5039\u20135049 (2021)","DOI":"10.1109\/CVPR46437.2021.00500"},{"key":"621_CR20","doi-asserted-by":"crossref","unstructured":"M. Kim, S. Tariq, S.S. Woo, Fretal: Generalizing deepfake detection using knowledge distillation and representation learning. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1001\u20131012 (2021)","DOI":"10.1109\/CVPRW53098.2021.00111"},{"key":"621_CR21","doi-asserted-by":"crossref","unstructured":"K. Shiohara, T. Yamasaki, Detecting deepfakes with self-blended images. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18720\u201318729 (2022)","DOI":"10.1109\/CVPR52688.2022.01816"},{"key":"621_CR22","doi-asserted-by":"crossref","unstructured":"S.F. Dodge, L. Karam, Understanding how image quality affects deep neural networks. 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), 1\u20136 (2016)","DOI":"10.1109\/QoMEX.2016.7498955"},{"key":"621_CR23","unstructured":"M. 
Mehdipour\u00a0Ghazi, H. Kemal\u00a0Ekenel, A comprehensive analysis of deep learning based representation for face recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34\u201341 (2016)"},{"issue":"1","key":"621_CR24","doi-asserted-by":"publisher","first-page":"81","DOI":"10.1049\/iet-bmt.2017.0083","volume":"7","author":"K Grm","year":"2018","unstructured":"K. Grm, V. \u0160truc, A. Artiges, M. Caron, H.K. Ekenel, Strengths and weaknesses of deep learning models for face recognition against image degradations. IET Biom. 7(1), 81\u201389 (2018)","journal-title":"IET Biom."},{"key":"621_CR25","doi-asserted-by":"crossref","unstructured":"Y. Lu, T. Ebrahimi, A novel assessment framework for learning-based deepfake detectors in realistic conditions. In: Applications of Digital Image Processing XLV, vol. 12226, pp. 207\u2013217 (2022). SPIE","DOI":"10.1117\/12.2636683"},{"key":"621_CR26","unstructured":"S. Agarwal, H. Farid, Y. Gu, M. He, K. Nagano, H. Li, Protecting world leaders against deep fakes. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2019)"},{"key":"621_CR27","doi-asserted-by":"crossref","unstructured":"X. Yang, Y. Li, S. Lyu, Exposing deep fakes using inconsistent head poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8261\u20138265 (2019)","DOI":"10.1109\/ICASSP.2019.8683164"},{"key":"621_CR28","doi-asserted-by":"publisher","first-page":"83144","DOI":"10.1109\/ACCESS.2020.2988660","volume":"8","author":"T Jung","year":"2020","unstructured":"T. Jung, S. Kim, K. Kim, Deepvision: deepfakes detection using human eye blinking pattern. IEEE Access 8, 83144\u201383154 (2020). https:\/\/doi.org\/10.1109\/ACCESS.2020.2988660","journal-title":"IEEE Access"},{"key":"621_CR29","doi-asserted-by":"publisher","unstructured":"P. Zhou, X. Han, V.I. Morariu, L.S. 
Davis, Two-Stream Neural Networks for Tampered Face Detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1831\u20131839 (2017). https:\/\/doi.org\/10.1109\/CVPRW.2017.229. ISSN: 2160-7516","DOI":"10.1109\/CVPRW.2017.229"},{"key":"621_CR30","doi-asserted-by":"crossref","unstructured":"F. Chollet, Xception: Deep learning with depthwise separable convolutions. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1800\u20131807 (2017)","DOI":"10.1109\/CVPR.2017.195"},{"key":"621_CR31","unstructured":"S. Sabour, N. Frosst, G.E. Hinton, Dynamic routing between capsules. Adv. Neural Inf. Process. Syst. 30 (2017)"},{"key":"621_CR32","doi-asserted-by":"crossref","unstructured":"D. G\u00fcera, E.J. Delp, Deepfake video detection using recurrent neural networks. In: 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1\u20136 (2018). IEEE","DOI":"10.1109\/AVSS.2018.8639163"},{"issue":"1","key":"621_CR33","first-page":"80","volume":"3","author":"E Sabir","year":"2019","unstructured":"E. Sabir, J. Cheng, A. Jaiswal, W. AbdAlmageed, I. Masi, P. Natarajan, Recurrent convolutional strategies for face manipulation detection in videos. Interfaces (GUI) 3(1), 80\u201387 (2019)","journal-title":"Interfaces (GUI)"},{"key":"621_CR34","doi-asserted-by":"crossref","unstructured":"I. Masi, A. Killekar, R.M. Mascarenhas, S.P. Gurudatt, W. AbdAlmageed, Two-branch recurrent network for isolating deepfakes in videos. In: European Conference on Computer Vision, pp. 667\u2013684 (2020). Springer","DOI":"10.1007\/978-3-030-58571-6_39"},{"key":"621_CR35","doi-asserted-by":"crossref","unstructured":"H.H. Nguyen, F. Fang, J. Yamagishi, I. Echizen, Multi-task learning for detecting and segmenting manipulated facial images and videos. In: 2019 IEEE 10th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1\u20138 (2019). 
IEEE","DOI":"10.1109\/BTAS46853.2019.9185974"},{"key":"621_CR36","doi-asserted-by":"crossref","unstructured":"M. Du, S. Pentyala, Y. Li, X. Hu, Towards generalizable deepfake detection with locality-aware autoencoder. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 325\u2013334 (2020)","DOI":"10.1145\/3340531.3411892"},{"key":"621_CR37","doi-asserted-by":"crossref","unstructured":"D.M. Montserrat, H. Hao, S.K. Yarlagadda, S. Baireddy, R. Shao, J. Horvath, E. Bartusiak, J. Yang, D. Guera, F. Zhu, E.J. Delp, Deepfakes detection with automatic face weighting. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2020)","DOI":"10.1109\/CVPRW50498.2020.00342"},{"key":"621_CR38","doi-asserted-by":"crossref","unstructured":"Y. Zheng, J. Bao, D. Chen, M. Zeng, F. Wen, Exploring temporal coherence for more general video face forgery detection. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 15044\u201315054 (2021)","DOI":"10.1109\/ICCV48922.2021.01477"},{"key":"621_CR39","doi-asserted-by":"crossref","unstructured":"W. Zhuang, Q. Chu, Z. Tan, Q. Liu, H. Yuan, C. Miao, Z. Luo, N. Yu, UIA-VIT: unsupervised inconsistency-aware method based on vision transformer for face forgery detection. In: European Conference on Computer Vision, pp. 391\u2013407 (2022). Springer","DOI":"10.1007\/978-3-031-20065-6_23"},{"key":"621_CR40","doi-asserted-by":"crossref","unstructured":"H. Dang, F. Liu, J. Stehouwer, X. Liu, A.K. Jain, On the detection of digital face manipulation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 5781\u20135790 (2020)","DOI":"10.1109\/CVPR42600.2020.00582"},{"key":"621_CR41","doi-asserted-by":"crossref","unstructured":"T. Saikia, C. Schmid, T. Brox, Improving robustness against common corruptions with frequency biased models. 
In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 10211\u201310220 (2021)","DOI":"10.1109\/ICCV48922.2021.01005"},{"key":"621_CR42","doi-asserted-by":"crossref","unstructured":"L. Li, J. Bao, T. Zhang, H. Yang, D. Chen, F. Wen, B. Guo, Face x-ray for more general face forgery detection. 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5000\u20135009 (2020)","DOI":"10.1109\/CVPR42600.2020.00505"},{"key":"621_CR43","unstructured":"S. Seferbekov, https:\/\/github.com\/selimsef\/dfdc_deepfake_challenge"},{"key":"621_CR44","unstructured":"Z. Hanqing, C. Hao, Z. Wenbo, https:\/\/github.com\/cuihaoleo\/kaggle-dfdc"},{"key":"621_CR45","unstructured":"A. Davletshin, https:\/\/github.com\/NTech-Lab\/deepfake-detection-challenge"},{"key":"621_CR46","unstructured":"S. Jing, S. Huafeng, Y. Zhenfei, F. Zheng, Y. Guojun, C. Siyu, N. Ning, L. Yu, https:\/\/github.com\/Siyu-C\/RobustForensics"},{"key":"621_CR47","unstructured":"H. James, P. Ian, https:\/\/github.com\/jphdotam\/DFDC\/"},{"key":"621_CR48","unstructured":"D. Hendrycks, T. Dietterich, Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations (2019)"},{"key":"621_CR49","doi-asserted-by":"publisher","unstructured":"J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248\u2013255 (2009). https:\/\/doi.org\/10.1109\/CVPR.2009.5206848","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"621_CR50","unstructured":"C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A.S. Ecker, M. Bethge, W. Brendel, Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv preprint arXiv:1907.07484 (2019)"},{"key":"621_CR51","doi-asserted-by":"publisher","unstructured":"C. Kamann, C. 
Rother, Benchmarking the robustness of semantic segmentation models. 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020). https:\/\/doi.org\/10.1109\/cvpr42600.2020.00885","DOI":"10.1109\/cvpr42600.2020.00885"},{"key":"621_CR52","doi-asserted-by":"crossref","unstructured":"C. Sakaridis, D. Dai, L. Van\u00a0Gool, Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), pp. 10765\u201310775 (2021)","DOI":"10.1109\/ICCV48922.2021.01059"},{"key":"621_CR53","doi-asserted-by":"crossref","unstructured":"S. Karahan, M.K. Yildirum, K. Kirtac, F.S. Rende, G. Butun, H.K. Ekenel, How image degradations affect deep cnn-based face recognition? In: 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1\u20135 (2016). IEEE","DOI":"10.1109\/BIOSIG.2016.7736924"},{"key":"621_CR54","doi-asserted-by":"publisher","unstructured":"Y. Lu, L. Barras, T. Ebrahimi, A novel framework for assessment of deep face recognition systems in realistic conditions. In: 2022 10th European Workshop on Visual Information Processing (EUVIP), pp. 1\u20136 (2022). https:\/\/doi.org\/10.1109\/EUVIP53989.2022.9922840","DOI":"10.1109\/EUVIP53989.2022.9922840"},{"key":"621_CR55","doi-asserted-by":"crossref","unstructured":"F.A. Petitcolas, R.J. Anderson, M.G. Kuhn, Attacks on copyright marking systems. In: International Workshop on Information Hiding, pp. 218\u2013238 (1998). Springer","DOI":"10.1007\/3-540-49380-8_16"},{"key":"621_CR56","doi-asserted-by":"crossref","unstructured":"R. Cogranne, Q. Giboulot, P. Bas, Alaska# 2: Challenging academic research on steganalysis with realistic images. In: 2020 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1\u20135 (2020). 
IEEE","DOI":"10.1109\/WIFS49906.2020.9360896"},{"issue":"10","key":"621_CR57","doi-asserted-by":"publisher","first-page":"1737","DOI":"10.1109\/TIP.2008.2001399","volume":"17","author":"A Foi","year":"2008","unstructured":"A. Foi, M. Trimeche, V. Katkovnik, K. Egiazarian, Practical Poissonian-gaussian noise modeling and fitting for single-image raw-data. IEEE Trans. Image Process. 17(10), 1737\u20131754 (2008). https:\/\/doi.org\/10.1109\/TIP.2008.2001399","journal-title":"IEEE Trans. Image Process."},{"key":"621_CR58","doi-asserted-by":"publisher","unstructured":"T. Marciniak, A. Chmielewska, R. Weychan, M. Parzych, A. Dabrowski, Influence of low resolution of images on reliability of face detection and recognition. Multimed. Tools Appl. 74 (2013). https:\/\/doi.org\/10.1007\/s11042-013-1568-8","DOI":"10.1007\/s11042-013-1568-8"},{"key":"621_CR59","doi-asserted-by":"publisher","first-page":"2000","DOI":"10.1109\/TIFS.2018.2890812","volume":"14","author":"P Li","year":"2019","unstructured":"P. Li, L. Prieto, D. Mery, P.J. Flynn, On low-resolution face recognition in the wild: comparisons and new techniques. IEEE Trans. Inf. Forensics Secur. 14, 2000\u20132012 (2019)","journal-title":"IEEE Trans. Inf. Forensics Secur."},{"key":"621_CR60","unstructured":"J. Ball\u00e9, D. Minnen, S. Singh, S.J. Hwang, N. Johnston, Variational image compression with a scale hyperprior. In: International Conference on Learning Representations (2018)"},{"key":"621_CR61","first-page":"11913","volume":"33","author":"F Mentzer","year":"2020","unstructured":"F. Mentzer, G.D. Toderici, M. Tschannen, E. Agustsson, High-fidelity generative image compression. Adv. Neural. Inf. Process. Syst. 33, 11913\u201311924 (2020)","journal-title":"Adv. Neural. Inf. Process. Syst."},{"issue":"7","key":"621_CR62","doi-asserted-by":"publisher","first-page":"3142","DOI":"10.1109\/TIP.2017.2662206","volume":"26","author":"K Zhang","year":"2017","unstructured":"K. Zhang, W. Zuo, Y. Chen, D. Meng, L. 
Zhang, Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142\u20133155 (2017). https:\/\/doi.org\/10.1109\/TIP.2017.2662206","journal-title":"IEEE Trans. Image Process."},{"key":"621_CR63","doi-asserted-by":"crossref","unstructured":"Y. Li, X. Yang, P. Sun, H. Qi, S. Lyu, Celeb-df: A large-scale challenging dataset for deepfake forensics. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)","DOI":"10.1109\/CVPR42600.2020.00327"},{"key":"621_CR64","unstructured":"M. Tan, Q. Le, Efficientnet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning, pp. 6105\u20136114 (2019). PMLR"},{"key":"621_CR65","first-page":"1755","volume":"10","author":"DE King","year":"2009","unstructured":"D.E. King, DLIB-ML: a machine learning toolkit. J. Mach. Learn. Res. 10, 1755\u20131758 (2009)","journal-title":"J. Mach. Learn. Res."},{"key":"621_CR66","unstructured":"P. Foret, A. Kleiner, H. Mobahi, B. Neyshabur, Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412 (2020)"},{"key":"621_CR67","unstructured":"D. Hendrycks, N. Mu, E.D. Cubuk, B. Zoph, J. Gilmer, B. Lakshminarayanan, Augmix: A simple data processing method to improve robustness and uncertainty. In: International Conference on Learning Representations (2019)"},{"key":"621_CR68","doi-asserted-by":"crossref","unstructured":"J. Sabel, F. Johansson, On the robustness and generalizability of face synthesis detection methods. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
962\u2013971 (2021)","DOI":"10.1109\/CVPRW53098.2021.00107"}],"container-title":["EURASIP Journal on Image and Video Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13640-024-00621-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13640-024-00621-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13640-024-00621-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,17]],"date-time":"2024-02-17T15:17:14Z","timestamp":1708183034000},"score":1,"resource":{"primary":{"URL":"https:\/\/jivp-eurasipjournals.springeropen.com\/articles\/10.1186\/s13640-024-00621-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,13]]},"references-count":68,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["621"],"URL":"https:\/\/doi.org\/10.1186\/s13640-024-00621-8","relation":{},"ISSN":["1687-5281"],"issn-type":[{"value":"1687-5281","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,13]]},"assertion":[{"value":"4 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 January 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to 
participate"}},{"value":"Not applicable","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"6"}}