{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,25]],"date-time":"2026-01-25T02:10:36Z","timestamp":1769307036908,"version":"3.49.0"},"reference-count":54,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2020,3,25]],"date-time":"2020-03-25T00:00:00Z","timestamp":1585094400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2017R1C1B5074062"],"award-info":[{"award-number":["NRF-2017R1C1B5074062"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2019R1A2C1083813"],"award-info":[{"award-number":["NRF-2019R1A2C1083813"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2016M3A9E1915855"],"award-info":[{"award-number":["NRF-2016M3A9E1915855"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. 
However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training PAD systems. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images using a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and can potentially enhance the performance of PAD systems. Our study is the first attempt to generate PA face images for PAD systems based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images can capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.<\/jats:p>","DOI":"10.3390\/s20071810","type":"journal-article","created":{"date-parts":[[2020,3,25]],"date-time":"2020-03-25T13:10:47Z","timestamp":1585141847000},"page":"1810","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Presentation Attack Face Image Generation Based on a Deep Generative Adversarial Network"],"prefix":"10.3390","volume":"20","author":[{"given":"Dat Tien","family":"Nguyen","sequence":"first","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Tuyen Danh","family":"Pham","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Ganbayar","family":"Batchuluun","sequence":"additional","affiliation":[{"name":"Division of 
Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Kyoung Jun","family":"Noh","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Kang Ryoung","family":"Park","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2020,3,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1109\/TCSVT.2003.818349","article-title":"An Introduction to Biometric Recognition","volume":"14","author":"Jain","year":"2004","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1166\/asl.2012.2177","article-title":"Combining Touched Fingerprint and Finger-Vein of a Finger, and Its Usability Evaluation","volume":"5","author":"Nguyen","year":"2012","journal-title":"Adv. Sci. Lett."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Taigman, Y., Yang, M., Ranzato, M.A., and Wolf, L. (2014, January 23\u201328). DeepFace: Closing the Gap to Human-Level Performance in Face Verification. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.220"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"21726","DOI":"10.3390\/s141121726","article-title":"Face Recognition System for Set-Top Box-Based Intelligent TV","volume":"14","author":"Lee","year":"2014","journal-title":"Sensors"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"2679","DOI":"10.1109\/TCSVT.2017.2710120","article-title":"Unconstrained Face Recognition Using a Set-To-Set Distance Measure on Deep Learned Features","volume":"28","author":"Zhao","year":"2017","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Maatta, J., Hadid, A., and Pietikainen, M. (2011, January 11\u201313). Face Spoofing Detection from Single Image Using Micro-Texture Analysis. Proceedings of the International Joint Conference on Biometrics, Washington, DC, USA.","DOI":"10.1109\/IJCB.2011.6117510"},{"key":"ref_7","unstructured":"Zhang, Z., Yan, J., Liu, S., Lei, Z., Yi, D., and Li, S.Z. (April, January 29). A Face Anti-Spoofing Database with Diverse Attacks. Proceedings of the 5th International Conference on Biometrics, New Delhi, India."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1537","DOI":"10.3390\/s150101537","article-title":"Face Liveness Detection Using Defocus","volume":"15","author":"Kim","year":"2015","journal-title":"Sensors"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Costa-Pazo, A., Bhattacharjee, S., Vazquez-Fernandez, E., and Marcel, S. (2016, January 21\u201323). The Replay-Mobile Face Presentation Attack Database. Proceedings of the International Conference on the Biometrics Special Interest Group, Darmstadt, Germany.","DOI":"10.1109\/BIOSIG.2016.7736936"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Boulkenafet, Z., Komulainen, J., and Hadid, A. (2015, January 27\u201330). 
Face Anti-Spoofing Based on Color Texture Analysis. Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada.","DOI":"10.1109\/ICIP.2015.7351280"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Parveen, S., Ahmad, S.M.S., Abbas, N.H., Adnan, W.A.W., Hanafi, M., and Naeem, N. (2016). Face Liveness Detection Using Dynamic Local Ternary Pattern (DLTP). Computers, 5.","DOI":"10.3390\/computers5020010"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"864","DOI":"10.1109\/TIFS.2015.2398817","article-title":"Deep Representation for Iris, Face and Fingerprint Spoofing Detection","volume":"10","author":"Menotti","year":"2015","journal-title":"IEEE Trans. Inf. Forensic Secur."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Pham, D.T., Baek, N.R., and Park, K.R. (2018). Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors. Sensors, 18.","DOI":"10.3390\/s18030699"},{"key":"ref_14","first-page":"1397","article-title":"Deep Texture Features for Robust Face Spoofing Detection","volume":"64","author":"Pires","year":"2017","journal-title":"IEEE Trans. Circuits Syst. II-Express"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Pham, D.T., Lee, M.B., and Park, K.R. (2019). Visible-light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information. Sensors, 19.","DOI":"10.3390\/s19020410"},{"key":"ref_16","unstructured":"(2020, January 03). Dongguk Generation Model of Presentation Attack Face Image (DG_FACE_PAD_GEN). Available online: http:\/\/dm.dongguk.edu\/link.html."},{"key":"ref_17","unstructured":"Benlamoudi, A., Zighem, M.E., Bougourzi, F., Bekhouche, S.E., Ouafi, A., and Taleb-Ahmed, A. (2017, January 17\u201318). Face Anti-Spoofing Combining MLLBP and MLBSIF. 
Proceedings of the CGE10SPOOFING, Algiers, Algeria."},{"key":"ref_18","unstructured":"Liu, Y., Stehouwer, J., Jourabloo, A., and Liu, X. (2020, January 03). Deep Tree Learning for Zero-Shot Face Anti-Spoofing. Available online: https:\/\/arxiv.org\/abs\/1904.02860v2."},{"key":"ref_19","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2020, January 03). Generative Adversarial Nets. Available online: https:\/\/arxiv.org\/abs\/1406.2661."},{"key":"ref_20","unstructured":"Perarnau, G., Weijer, J., Raducanu, B., and Alvarez, J.M. (2020, January 03). Invertible Conditional GAN for Image Editing. Available online: https:\/\/arxiv.org\/abs\/1611.06355."},{"key":"ref_21","unstructured":"Zhang, H., Sindagi, V., and Patel, V.M. (2020, January 03). Image De-Raining Using a Conditional Generative Adversarial Network. Available online: https:\/\/arxiv.org\/abs\/1701.05957v4."},{"key":"ref_22","unstructured":"Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2020, January 03). Photo-Realistic Single Image Super Resolution Using a Generative Adversarial Network. Available online: https:\/\/arxiv.org\/abs\/1609.04802v5."},{"key":"ref_23","unstructured":"Chen, J., Tai, Y., Liu, X., Shen, C., and Yang, J. (2020, January 03). FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors. Available online: https:\/\/arxiv.org\/abs\/1711.10703v1."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Tan, D.S., Lin, J.-M., Lai, Y.-C., Ilao, J., and Hua, K.-L. (2019). Depth Map Up-sampling Via Multi-Modal Generative Adversarial Network. Sensors, 19.","DOI":"10.3390\/s19071587"},{"key":"ref_25","unstructured":"Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., and Matas, J. (2020, January 03). DeblurGAN: Blind Motion De-Blurring Using Conditional Adversarial Networks. 
Available online: https:\/\/arxiv.org\/abs\/1711.07064v4."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, G., Kang, W., Wu, Q., Wang, Z., and Gao, J. (2018, January 10\u201313). Generative Adversarial Network (GAN) Based Data Augmentation for Palm-print Recognition. Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), Canberra, Australia.","DOI":"10.1109\/DICTA.2018.8615782"},{"key":"ref_27","unstructured":"Minaee, S., and Abdolrashidi, A. (2020, January 03). Iris-GAN: Learning to Generate Realistic Iris Images Using Convolutional GAN. Available online: https:\/\/arxiv.org\/abs\/1812.04822v3."},{"key":"ref_28","unstructured":"Minaee, S., and Abdolrashidi, A. (2020, January 03). Finger-GAN: Generating Realistic Fingerprint Images Using Connectivity Imposed GAN. Available online: https:\/\/arxiv.org\/abs\/1812.10482v1."},{"key":"ref_29","unstructured":"Zhu, J.-Y., Park, T., Isola, P., and Efros, A.A. (2020, January 03). Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Available online: https:\/\/arxiv.org\/abs\/1703.10593v6."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1021","DOI":"10.1109\/ACCESS.2018.2886213","article-title":"Generative Adversarial Network-based Method for Transforming Single RGB Image into 3D Point Cloud","volume":"7","author":"Chu","year":"2019","journal-title":"IEEE Access"},{"key":"ref_31","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A.A. (2020, January 03). Image-to-Image Translation with Conditional Adversarial Networks. Available online: https:\/\/arxiv.org\/abs\/1611.07004v3."},{"key":"ref_32","unstructured":"Pan, J., Canton-Ferrer, C., McGuinness, K., O\u2019Connor, N.E., Torres, J., Sayol, E., and Giro-i-Nieto, X. (2020, January 03). SalGAN: Visual Saliency Prediction with Adversarial Networks. 
Available online: https:\/\/arxiv.org\/abs\/1701.01081v3."},{"key":"ref_33","unstructured":"Bontrager, P., Roy, A., Togelius, J., and Memon, N. (2020, January 03). Deepmasterprint: Fingerprint Spoofing via Latent Variable Evolution. Available online: https:\/\/arxiv.org\/abs\/1705.07386v4."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kazemi, V., and Sullivan, J. (2014, January 23\u201328). One Millisecond Face Alignment with an Ensemble of Regression Trees. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.241"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based Learning Applied to Document Recognition","volume":"86","author":"Lecun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_36","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012, January 3\u20138). Imagenet Classification with Deep Convolutional Neural Networks. Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA."},{"key":"ref_37","unstructured":"Simonyan, K., and Zisserman, A. (2020, January 03). Very Deep Convolutional Neural Networks for Large-Scale Image Recognition. Available online: https:\/\/arxiv.org\/abs\/1409.1556v6."},{"key":"ref_38","first-page":"1929","article-title":"Dropout: A Simple Way to Prevent Neural Networks from Overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Kim, K.W., Hong, H.G., Koo, J.H., Kim, M.C., and Park, K.R. (2017). Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction. 
Sensors, 17.","DOI":"10.3390\/s17030637"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"18174","DOI":"10.1109\/ACCESS.2018.2812835","article-title":"Convolutional Neural Networks Based Fire Detection in Surveillance Videos","volume":"6","author":"Muhammad","year":"2018","journal-title":"IEEE Access"},{"key":"ref_41","unstructured":"Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K.Q. (2020, January 03). Densely Connected Convolutional Networks. Available online: https:\/\/arxiv.org\/abs\/1608.06993v5."},{"key":"ref_42","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2020, January 03). Deep Residual Learning for Image Recognition. Available online: https:\/\/arxiv.org\/abs\/1512.03385v1."},{"key":"ref_43","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2020, January 03). Going Deeper with Convolutions. Available online: https:\/\/arxiv.org\/abs\/1409.4842v1."},{"key":"ref_44","unstructured":"Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., and Smolley, S.P. (2020, January 03). Least Squares Generative Adversarial Networks. Available online: https:\/\/arxiv.org\/abs\/1611.04076v3."},{"key":"ref_45","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2020, January 03). GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. Available online: https:\/\/arxiv.org\/abs\/1706.08500v6."},{"key":"ref_46","unstructured":"Zhang, H., Goodfellow, I., Metaxas, D., and Odena, A. (2020, January 03). Self-Attention Generative Adversarial Networks. Available online: https:\/\/arxiv.org\/abs\/1805.08318v2."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/j.cviu.2018.10.009","article-title":"Pros and Cons of GAN Evaluation Measures","volume":"179","author":"Borji","year":"2019","journal-title":"Comput. Vis. 
Image Underst."},{"key":"ref_48","unstructured":"Lucic, M., Kurach, K., Michalski, M., Gelly, S., and Bousquet, O. (2020, January 03). Are GANs Created Equal? A Large-Scale Study. Available online: https:\/\/arxiv.org\/abs\/1711.10337v4."},{"key":"ref_49","unstructured":"ISO Standard (2020, January 03). ISO\/IEC 30107-3:2017 [ISO\/IEC 30107-3:2017] Information Technology\u2014Biometric Presentation Attack Detection\u2014Part 3: Testing and Reporting. Available online: https:\/\/www.iso.org\/standard\/67381.html."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"971","DOI":"10.1109\/TPAMI.2002.1017623","article-title":"Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns","volume":"24","author":"Ojala","year":"2002","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_51","unstructured":"Yi, Z., Zhang, H., Tan, P., and Gong, M. (2020, January 03). DualGAN: Unsupervised Dual Learning for Image-to-Image Translation. Available online: https:\/\/arxiv.org\/abs\/1704.02510v4."},{"key":"ref_52","unstructured":"(2020, January 03). Jetson TX2 Module. Available online: https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems-dev-kits-modules\/."},{"key":"ref_53","unstructured":"(2020, January 03). NVIDIA TitanX GPU. Available online: https:\/\/www.nvidia.com\/en-us\/geforce\/products\/10series\/titan-x-pascal\/."},{"key":"ref_54","unstructured":"(2020, January 03). Tensorflow Deep-Learning Library. 
Available online: https:\/\/www.tensorflow.org\/."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/7\/1810\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T09:11:17Z","timestamp":1760173877000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/7\/1810"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,3,25]]},"references-count":54,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2020,4]]}},"alternative-id":["s20071810"],"URL":"https:\/\/doi.org\/10.3390\/s20071810","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,3,25]]}}}