{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,27]],"date-time":"2026-02-27T15:28:23Z","timestamp":1772206103965,"version":"3.50.1"},"reference-count":19,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2021,11,5]],"date-time":"2021-11-05T00:00:00Z","timestamp":1636070400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the Korea government(MSIT)","award":["2018R1A2B6009620"],"award-info":[{"award-number":["2018R1A2B6009620"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Recently, artificial intelligence has been successfully applied in fields such as computer vision, voice recognition, and big data analysis. However, its development has also raised problems concerning security, privacy, and ethics. One such problem is the deepfake. Deepfake, a compound of deep learning and fake, refers to a fake video created using artificial intelligence technology, or to the production process itself. Deepfakes can be exploited for political abuse, pornography, and the spread of fake information. This paper proposes a method to determine the integrity of digital content by analyzing its computer vision features. The proposed method extracts the rate of change in the computer vision features of adjacent frames and then checks whether the video has been manipulated. In tests, the proposed method achieved a detection rate of 97%, the highest compared with existing and machine learning methods. 
It also maintained the highest detection rate, 96%, even in a test that manipulated the image matrix to evade the convolutional neural network detection method.<\/jats:p>","DOI":"10.3390\/s21217367","type":"journal-article","created":{"date-parts":[[2021,11,7]],"date-time":"2021-11-07T20:42:54Z","timestamp":1636317774000},"page":"7367","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":43,"title":["Deepfake Detection Using the Rate of Change between Frames Based on Computer Vision"],"prefix":"10.3390","volume":"21","author":[{"given":"Gihun","family":"Lee","sequence":"first","affiliation":[{"name":"Department of Computer Science & Engineering, Computer System Institute, Hankyong National University, Jungang-ro, Anseong-si 17579, Gyeonggi-do, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4896-7400","authenticated-orcid":false,"given":"Mihui","family":"Kim","sequence":"additional","affiliation":[{"name":"Department of Computer Science & Engineering, Computer System Institute, Hankyong National University, Jungang-ro, Anseong-si 17579, Gyeonggi-do, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2021,11,5]]},"reference":[{"key":"ref_1","unstructured":"Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., and Ortega-Garcia, J. (2020). DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection. arXiv."},{"key":"ref_2","unstructured":"(2021, May 03). Faceswap. Available online: https:\/\/faceswap.dev."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Afchar, D., Nozick, V., Yamagishi, J., and Echizen, I. (2018, January 11\u201313). MesoNet: A Compact Facial Video Forgery Detection Network. Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, China.","DOI":"10.1109\/WIFS.2018.8630761"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"G\u00fcera, D., and Delp, E.J. (2018, January 27\u201330). 
Deepfake Video Detection Using Recurrent Neural Networks. Proceedings of the 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand.","DOI":"10.1109\/AVSS.2018.8639163"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Li, Y., Chang, M.-C., and Lyu, S. (2018, January 11\u201313). In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking. Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, China.","DOI":"10.1109\/WIFS.2018.8630787"},{"key":"ref_6","unstructured":"Li, Y., and Lyu, S. (2019). Exposing DeepFake Videos by Detecting Face Warping Artifacts. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Yang, X., Li, Y., and Lyu, S. (2018). Exposing Deep Fakes Using Inconsistent Head Poses. arXiv.","DOI":"10.1109\/ICASSP.2019.8683164"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Agarwal, S., Farid, H., Fried, O., and Agrawala, M. (2020, January 14\u201319). Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00338"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1049\/iet-bmt.2017.0083","article-title":"Strengths and weaknesses of deep learning models for face recognition against image degradations","volume":"7","author":"Grm","year":"2018","journal-title":"IET Biom."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Hou, X., Shen, L., Sun, K., and Qiu, G. (2017, January 24\u201331). Deep Feature Consistent Variational Autoencoder. 
Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA.","DOI":"10.1109\/WACV.2017.131"},{"key":"ref_11","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks. arXiv."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., and Choo, J. (2018, January 18\u201323). StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00916"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_14","unstructured":"Roy, P., Ghosh, S., Bhattacharya, S., and Pal, U. (2019). Effects of Degradations on Deep Neural Network Architectures. arXiv."},{"key":"ref_15","unstructured":"(2021, May 03). Deepfake Detection Challenge|Kaggle. Available online: https:\/\/www.kaggle.com\/c\/deepfake-detection-challenge."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","article-title":"Joint face detection and alignment using multi-task cascaded convolutional networks","volume":"23","author":"Zhang","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_17","unstructured":"(2021, May 03). OpenCV. Available online: https:\/\/opencv.org\/."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"R\u00f6ssler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., and Nie\u00dfner, M. (2019). FaceForensics++: Learning to Detect Manipulated Facial Images. 
arXiv.","DOI":"10.1109\/ICCV.2019.00009"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Schuldt, C., Laptev, I., and Caputo, B. (2004, August 23\u201326). Recognizing Human Actions: A Local SVM Approach. Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK.","DOI":"10.1109\/ICPR.2004.1334462"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/21\/7367\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:26:30Z","timestamp":1760167590000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/21\/7367"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,11,5]]},"references-count":19,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2021,11]]}},"alternative-id":["s21217367"],"URL":"https:\/\/doi.org\/10.3390\/s21217367","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,11,5]]}}}