{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:52:04Z","timestamp":1760147524346,"version":"build-2065373602"},"reference-count":44,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2023,2,9]],"date-time":"2023-02-09T00:00:00Z","timestamp":1675900800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Natural Science Foundation of Guangdong Province","award":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"],"award-info":[{"award-number":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"]}]},{"name":"Ministry of Science and Technology of China","award":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"],"award-info":[{"award-number":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"]}]},{"name":"Key Research and Development Program of Jiangxi Province","award":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"],"award-info":[{"award-number":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"]}]},{"name":"National Natural Science Foundation of China","award":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"],"award-info":[{"award-number":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"]}]},{"name":"Natural Science Foundation of Jiangxi 
Province","award":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"],"award-info":[{"award-number":["2019A1515011793","G2022022001L","20192BBE50079","61962021","51978271","20223BBE51039","20224BAB202016","20224BAB212014"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Despite progress in the past decades, 3D shape acquisition techniques remain a bottleneck for various 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D data generation models based on deep networks may not be directly applicable to 3D objects because of the different dimensionality of 2D and 3D data. In this work, we propose two novel sampling methods to represent 3D faces as matrix-like structured data that can better fit deep networks, namely (1) a geometric sampling method for the structured representation of 3D faces based on the intersection of iso-geodesic curves and radial curves, and (2) a depth-like map sampling method using the average depth of grid cells on the front surface. The above sampling methods can bridge the gap between unstructured 3D face models and powerful deep networks for an unsupervised generative 3D face model. In particular, these approaches can obtain the structured representation of 3D faces, which enables us to adapt the 3D faces to the Deep Convolutional Generative Adversarial Network (DCGAN) for 3D face generation to obtain better 3D faces with different expressions. 
We demonstrated the effectiveness of our generative model by producing a large variety of 3D faces with different expressions using the two novel down-sampling methods mentioned above.<\/jats:p>","DOI":"10.3390\/s23041937","type":"journal-article","created":{"date-parts":[[2023,2,10]],"date-time":"2023-02-10T02:09:59Z","timestamp":1675994999000},"page":"1937","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation"],"prefix":"10.3390","volume":"23","author":[{"given":"Guoliang","family":"Luo","sequence":"first","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"given":"Guoming","family":"Xiong","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"given":"Xiaojun","family":"Huang","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"given":"Xin","family":"Zhao","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"given":"Yang","family":"Tong","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"given":"Qiang","family":"Chen","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong University, Nanchang 330013, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0939-0741","authenticated-orcid":false,"given":"Zhiliang","family":"Zhu","sequence":"additional","affiliation":[{"name":"Virtual Reality and Interactive Techniques Institute, East China Jiaotong 
University, Nanchang 330013, China"}]},{"given":"Haopeng","family":"Lei","sequence":"additional","affiliation":[{"name":"School of Computer Science, Jiangxi Normal University, Nanchang 330022, China"}]},{"given":"Juncong","family":"Lin","sequence":"additional","affiliation":[{"name":"School of Information, Xiamen University, Xiamen 361005, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,2,9]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Richardson, E., Sela, M., and Kimmel, R. (2016, January 25\u201328). 3d face reconstruction by learning from synthetic data. Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.56"},{"key":"ref_2","unstructured":"Gilani, S.Z., and Mian, A. (2018, January 18\u201323). Learning from millions of 3d scans for large-scale 3d face recognition. Proceedings of the IEEE Conference of Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Luo, G., Zhao, X., Tong, Y., Chen, Q., Zhu, Z., Lei, H., and Lin, J. (2020, January 19\u201324). Geometry Sampling for 3D Face Generation via DCGAN. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK.","DOI":"10.1109\/IJCNN48605.2020.9207557"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"4357","DOI":"10.1109\/TIP.2018.2835143","article-title":"Gabor convolutional networks","volume":"27","author":"Luan","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_5","unstructured":"Minaee, S., Liang, X., and Yan, S. (2022). Modern augmented reality: Applications, trends, and future directions. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"D\u00fcnser, A., and Hornecker, E. (2007, January 20\u201323). Lessons from an ar book study. 
Proceedings of the 1st International Conference on Tangible and Embedded Interaction, Yokohama, Japan.","DOI":"10.1145\/1226969.1227006"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"396","DOI":"10.1016\/j.ridd.2014.10.015","article-title":"Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders","volume":"36","author":"Chen","year":"2015","journal-title":"Res. Dev. Disabil."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Avinash, P., and Sharma, M. (2019, January 12\u201312). Predicting forward & backward facial depth maps from a single rgb image for mobile 3d ar application. Proceedings of the 2019 International Conference on 3D Immersion (IC3D), Brussels, Belgium.","DOI":"10.1109\/IC3D48390.2019.8975899"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., and Davison, A. (2011, January 16\u201319). Kinectfusion: Real-time 3d reconstruction and interaction using a moving depth camera. Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA.","DOI":"10.1145\/2047196.2047270"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Song, S., Lichtenberg, S.P., and Xiao, J. (2015, January 7\u201312). Sun rgb-d: A rgb-d scene understanding benchmark suite. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298655"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Handa, A., Whelan, T., McDonald, J., and Davison, A.J. (June, January 31). A benchmark for rgb-d visual odometry, 3d reconstruction and slam. 
Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China.","DOI":"10.1109\/ICRA.2014.6907054"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"394","DOI":"10.1109\/TPAMI.2010.63","article-title":"3d face reconstruction from a single image using a single reference face shape","volume":"33","author":"Basri","year":"2011","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Zhu, Y., Mottaghi, R., Kolve, E., Lim, J.J., Gupta, A., Fei-Fei, L., and Farhadi, A. (June, January 29). Target-driven visual navigation in indoor scenes using deep reinforcement learning. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989381"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Choy, C.B., Xu, D., Gwak, J., Chen, K., and Savarese, S. (2016, January 8\u201316). 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46484-8_38"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Fan, H., Su, H., and Guibas, L. (2017, January 21\u201326). A point set generation network for 3d object reconstruction from a single image. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.264"},{"key":"ref_16","first-page":"28","article-title":"Reconstruction of personalized 3d face rigs from monocular video","volume":"35","author":"Garrido","year":"2016","journal-title":"ACM Trans. Graph."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Deng, Y., Yang, J., Xu, S., Chen, D., Jia, Y., and Tong, X. (2019, January 15\u201320). Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00038"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. (2020, January 13\u201319). Analyzing and improving the image quality of stylegan. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Lu, Y., Tai, Y.-W., and Tang, C.-K. (2018, January 8\u201314). Attribute-guided face generation using conditional cyclegan. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01258-8_18"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., and Aila, T. (2019, January 15\u201320). A style-based generator architecture for generative adversarial networks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00453"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"391","DOI":"10.1016\/j.patcog.2017.08.003","article-title":"A survey of local feature methods for 3d face recognition","volume":"72","author":"Soltanpour","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"659","DOI":"10.1007\/s00371-005-0319-x","article-title":"Mesh segmentation driven by gaussian curvature","volume":"21","author":"Yamauchi","year":"2005","journal-title":"Vis. Comput."},{"key":"ref_23","unstructured":"Drira, H., Amor, B.B., Daoudi, M., and Srivastava, A. (2020, January 7\u201310). Pose and expression-invariant 3d face recognition using elastic radial curves. 
Proceedings of the British Machine Vision Conference, Virtual Event."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1145\/1531326.1531378","article-title":"M\u00f6bius voting for surface correspondence","volume":"28","author":"Lipman","year":"2009","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"799","DOI":"10.1016\/j.cad.2004.09.009","article-title":"Freeform surface flattening based on fitting a woven mesh model","volume":"37","author":"Wang","year":"2005","journal-title":"Comput.-Aided Des."},{"key":"ref_26","unstructured":"Brice\u00f1o, H.M., Sander, P.V., McMillan, L., Gortler, S., and Hoppe, H. (2003, January 26\u201327). Geometry videos: A new representation for 3d animations. Proceedings of the ACM SIGGRAPH\/Eurographics Symposium on Computer Animation, San Diego, CA, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Xia, J., He, Y., Quynh, D., Chen, X., and Hoi, S.C. (2010, January 25\u201329). Modeling 3d facial expressions using geometry videos. Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy.","DOI":"10.1145\/1873951.1874010"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"194","DOI":"10.1145\/3130800.3130813","article-title":"Learning a model of facial shape and expression from 4d scans","volume":"36","author":"Li","year":"2017","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"e1810","DOI":"10.1002\/cav.1810","article-title":"Synthesizing cloth wrinkles by cnn-based geometry image superresolution","volume":"29","author":"Chen","year":"2018","journal-title":"Comput. Animat. Virtual Worlds"},{"key":"ref_30","unstructured":"Fabius, O., and van Amersfoort, J.R. (2014). Variational recurrent auto-encoders. arXiv."},{"key":"ref_31","unstructured":"Kingma, D.P., and Welling, M. (2013). Auto-encoding variational bayes. 
arXiv."},{"key":"ref_32","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing, Montreal, QC, Canada."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Lin, S., Ji, R., Yan, C., Zhang, B., Cao, L., Ye, Q., Huang, F., and Doermann, D. (2019, January 15\u201320). Towards optimal structured cnn pruning via generative adversarial learning. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00290"},{"key":"ref_34","unstructured":"Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Kossaifi, J., Tran, L., Panagakis, Y., and Pantic, M. (2017). GAGAN: Geometry-aware generative adversarial networks. arXiv.","DOI":"10.1109\/CVPR.2018.00098"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"522","DOI":"10.1109\/THMS.2016.2515602","article-title":"Frenet frame-based generalized space curve representation for pose-invariant classification and recognition of 3-d face","volume":"46","author":"Samad","year":"2016","journal-title":"IEEE Trans. Hum.-Mach. Syst."},{"key":"ref_37","unstructured":"Nair, V., and Hinton, G.E. (2010, January 21\u201324). Rectified linear units improve restricted boltzmann machines. Proceedings of the 27th ICML, Haifa, Israel."},{"key":"ref_38","unstructured":"Maas, A.L., Hannun, A.Y., and Ng, A.Y. (2013, January 16\u201321). Rectifier nonlinearities improve neural network acoustic models. Proceedings of the ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Atlanta, GA, USA."},{"key":"ref_39","unstructured":"Xu, B., Wang, N., Chen, T., and Li, M. (2015). 
Empirical evaluation of rectified activations in convolutional network. CoRR."},{"key":"ref_40","unstructured":"Ioffe, S., and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Pfister, H., Zwicker, M., Baar, J.V., and Gross, M. (2000, January 23\u201328). Surfels: Surface elements as rendering primitives. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA.","DOI":"10.1145\/344779.344936"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., and Ginsberg, J. (2000, January 23\u201328). The digital michelangelo project: 3D scanning of large statues. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA.","DOI":"10.1145\/344779.344849"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Rusinkiewicz, S., and Levoy, M. (2000, January 23\u201328). Qsplat: A multiresolution point rendering system for large meshes. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA.","DOI":"10.1145\/344779.344940"},{"key":"ref_44","unstructured":"Yin, L., Wei, X., Sun, Y., Wang, J., and Rosato, M.J. (2006, January 10\u201312). A 3d facial expression database for facial behavior research. 
Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, UK."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/1937\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:28:16Z","timestamp":1760120896000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/1937"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,9]]},"references-count":44,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23041937"],"URL":"https:\/\/doi.org\/10.3390\/s23041937","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,2,9]]}}}