{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,27]],"date-time":"2026-02-27T22:48:36Z","timestamp":1772232516405,"version":"3.50.1"},"reference-count":75,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,3,17]],"date-time":"2024-03-17T00:00:00Z","timestamp":1710633600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,3,17]],"date-time":"2024-03-17T00:00:00Z","timestamp":1710633600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"funder":[{"DOI":"10.13039\/501100010023","name":"Natural Science Research of Jiangsu Higher Education Institutions of China","doi-asserted-by":"publisher","award":["BK20181269"],"award-info":[{"award-number":["BK20181269"]}],"id":[{"id":"10.13039\/501100010023","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Special Project on Basic Research of Frontier Leading Technology of Jiangsu Province of China","award":["BK20192004C"],"award-info":[{"award-number":["BK20192004C"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis Comput"],"published-print":{"date-parts":[[2025,1]]},"DOI":"10.1007\/s00371-024-03316-3","type":"journal-article","created":{"date-parts":[[2024,3,17]],"date-time":"2024-03-17T10:01:20Z","timestamp":1710669680000},"page":"157-171","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Refined dense face alignment through image 
matching"],"prefix":"10.1007","volume":"41","author":[{"given":"Chunlu","family":"Li","sequence":"first","affiliation":[]},{"given":"Feipeng","family":"Da","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,3,17]]},"reference":[{"key":"3316_CR1","doi-asserted-by":"crossref","unstructured":"Ma, Z., Zhu, X., Qi, G.-J., Lei, Z., Zhang, L.: Otavatar: One-shot talking face avatar with controllable tri-plane rendering. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 16901\u201316910 (2023)","DOI":"10.1109\/CVPR52729.2023.01621"},{"key":"3316_CR2","doi-asserted-by":"publisher","first-page":"95","DOI":"10.1007\/s00371-020-01982-7","volume":"37","author":"X Huang","year":"2021","unstructured":"Huang, X., Wang, M., Gong, M.: Fine-grained talking face generation with video reinterpretation. Vis. Comput. 37, 95\u2013105 (2021)","journal-title":"Vis. Comput."},{"key":"3316_CR3","doi-asserted-by":"crossref","unstructured":"Fang, Z., Liu, Z., Liu, T., Hung, C.-C., Xiao, J., Feng, G.: Facial expression gan for voice-driven face generation. Vis. Comput., 38(3), 1151\u20131164 (2022)","DOI":"10.1007\/s00371-021-02074-w"},{"issue":"8","key":"3316_CR4","doi-asserted-by":"publisher","first-page":"6949","DOI":"10.1109\/JIOT.2020.3037207","volume":"8","author":"P Chhikara","year":"2020","unstructured":"Chhikara, P., Singh, P., Tekchandani, R., Kumar, N., Guizani, M.: Federated learning meets human emotions: a decentralized framework for human-computer interaction for iot applications. IEEE Internet Things J. 8(8), 6949\u20136962 (2020)","journal-title":"IEEE Internet Things J."},{"issue":"9\u201311","key":"3316_CR5","doi-asserted-by":"publisher","first-page":"2907","DOI":"10.1007\/s00371-021-02198-z","volume":"37","author":"Y Ju","year":"2021","unstructured":"Ju, Y., Zhang, J., Mao, X., Xu, J.: Adaptive semantic attribute decoupling for precise face image editing. Vis. Comput. 
37(9\u201311), 2907\u20132918 (2021)","journal-title":"Vis. Comput."},{"key":"3316_CR6","doi-asserted-by":"crossref","unstructured":"Onizuka, H., Thomas, D., Uchiyama, H., Taniguchi, R.-i.: Landmark-guided deformation transfer of template facial expressions for automatic generation of avatar blendshapes. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops (2019)","DOI":"10.1109\/ICCVW.2019.00265"},{"key":"3316_CR7","doi-asserted-by":"crossref","unstructured":"Feng, Y., Wu, F., Shao, X., Wang, Y., Zhou, X.: Joint 3d face reconstruction and dense alignment with position map regression network. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 534\u2013551 (2018)","DOI":"10.1007\/978-3-030-01264-9_33"},{"key":"3316_CR8","doi-asserted-by":"crossref","unstructured":"Guo, J., Zhu, X., Yang, Y., Yang, F., Lei, Z., Li, S.Z.: Towards fast, accurate and stable 3d dense face alignment. In: European Conference on Computer Vision, pp. 152\u2013168. Springer (2020)","DOI":"10.1007\/978-3-030-58529-7_10"},{"key":"3316_CR9","first-page":"1755","volume":"10","author":"DE King","year":"2009","unstructured":"King, D.E.: Dlib-ml: A machine learning toolkit. J. Mach. Learn. Res. 10, 1755\u20131758 (2009)","journal-title":"J. Mach. Learn. Res."},{"key":"3316_CR10","doi-asserted-by":"publisher","first-page":"1944","DOI":"10.1109\/LSP.2020.3032277","volume":"27","author":"X Ning","year":"2020","unstructured":"Ning, X., Duan, P., Li, W., Zhang, S.: Real-time 3d face alignment using an encoder-decoder network with an efficient deconvolution layer. IEEE Signal Process. Lett. 27, 1944\u20131948 (2020). https:\/\/doi.org\/10.1109\/LSP.2020.3032277","journal-title":"IEEE Signal Process. 
Lett."},{"key":"3316_CR11","doi-asserted-by":"crossref","unstructured":"Wood, E., Baltru\u0161aitis, T., Hewitt, C., Johnson, M., Shen, J., Milosavljevi\u0107, N., Wilde, D., Garbin, S., Sharp, T., Stojiljkovi\u0107, I., et al.: 3d face reconstruction with dense landmarks. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XIII, pp. 160\u2013177. Springer (2022)","DOI":"10.1007\/978-3-031-19778-9_10"},{"key":"3316_CR12","doi-asserted-by":"crossref","unstructured":"Zielonka, W., Bolkart, T., Thies, J.: Towards metrical reconstruction of human faces. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XIII, pp. 250\u2013269. Springer (2022)","DOI":"10.1007\/978-3-031-19778-9_15"},{"key":"3316_CR13","doi-asserted-by":"crossref","unstructured":"Zhang, T., Chu, X., Liu, Y., Lin, L., Yang, Z., Xu, Z., Cao, C., Yu, F., Zhou, C., Yuan, C., et al.: Accurate 3d face reconstruction with facial component tokens. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 9033\u20139042 (2023)","DOI":"10.1109\/ICCV51070.2023.00829"},{"key":"3316_CR14","doi-asserted-by":"crossref","unstructured":"Koizumi, T., Smith, W.A.: \u201cLook ma, no landmarks!\u201d\u2013unsupervised, model-based dense face alignment. In: European Conference on Computer Vision, pp. 690\u2013706. Springer (2020)","DOI":"10.1007\/978-3-030-58536-5_41"},{"key":"3316_CR15","doi-asserted-by":"publisher","unstructured":"Tran, A.T., Hassner, T., Masi, I., Medioni, G.: Regressing robust and discriminative 3d morphable models with a very deep neural network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1493\u20131502 (2017). 
https:\/\/doi.org\/10.1109\/CVPR.2017.163","DOI":"10.1109\/CVPR.2017.163"},{"key":"3316_CR16","doi-asserted-by":"crossref","unstructured":"Zhu, X., Lei, Z., Liu, X., Shi, H., Li, S.Z.: Face alignment across large poses: A 3d solution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 146\u2013155 (2016)","DOI":"10.1109\/CVPR.2016.23"},{"key":"3316_CR17","doi-asserted-by":"crossref","unstructured":"Danecek, R., Black, M.J., Bolkart, T.: EMOCA: Emotion driven monocular face capture and animation. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2022). https:\/\/emoca.is.tue.mpg.de","DOI":"10.1109\/CVPR52688.2022.01967"},{"key":"3316_CR18","doi-asserted-by":"crossref","unstructured":"Deng, Y., Yang, J., Xu, S., Chen, D., Jia, Y., Tong, X.: Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In: IEEE Computer Vision and Pattern Recognition Workshops (2019)","DOI":"10.1109\/CVPRW.2019.00038"},{"key":"3316_CR19","doi-asserted-by":"crossref","unstructured":"Feng, Y., Feng, H., Black, M.J., Bolkart, T.: Learning an animatable detailed 3D face model from in-the-wild images. ACM Trans. Graph. (Proc. SIGGRAPH) 40(4), 1\u201313 (2021)","DOI":"10.1145\/3476576.3476646"},{"key":"3316_CR20","unstructured":"Li, C., Morel-Forster, A., Vetter, T., Egger, B., Kortylewski, A.: To fit or not to fit: model-based face reconstruction and occlusion segmentation from weak supervision. arXiv:2106.09614 (2021)"},{"key":"3316_CR21","doi-asserted-by":"publisher","unstructured":"Tewari, A., Zollh\u00f6fer, M., Kim, H., Garrido, P., Bernard, F., P\u00e9rez, P., Theobalt, C.: Mofa: model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3735\u20133744 (2017). 
https:\/\/doi.org\/10.1109\/ICCV.2017.401","DOI":"10.1109\/ICCV.2017.401"},{"key":"3316_CR22","doi-asserted-by":"crossref","unstructured":"Yang, W., Zhao, Y., Yang, B., Shen, J.: Learning 3d face reconstruction from the cycle-consistency of dynamic faces. IEEE Trans. Multimed. 26, 3663\u20133675 (2023)","DOI":"10.1109\/TMM.2023.3322895"},{"issue":"5","key":"3316_CR23","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3395208","volume":"39","author":"B Egger","year":"2020","unstructured":"Egger, B., Smith, W.A.P., Tewari, A., Wuhrer, S., Vetter, T.: 3d morphable face models-past, present, and future. ACM Trans. Graph. 39(5), 1\u201338 (2020)","journal-title":"ACM Trans. Graph."},{"key":"3316_CR24","doi-asserted-by":"crossref","unstructured":"Gerig, T., Morel-Forster, A., Blumer, C., Egger, B., Luthi, M., Sch\u00f6nborn, S., Vetter, T.: Morphable face models-an open framework. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 75\u201382. IEEE (2018)","DOI":"10.1109\/FG.2018.00021"},{"issue":"6","key":"3316_CR25","doi-asserted-by":"publisher","first-page":"194","DOI":"10.1145\/3130800.3130813","volume":"36","author":"T Li","year":"2017","unstructured":"Li, T., Bolkart, T., Black, M.J., Li, H., Romero, J.: Learning a model of facial shape and expression from 4d scans. ACM Trans. Graph. 36(6), 194\u20131 (2017)","journal-title":"ACM Trans. Graph."},{"key":"3316_CR26","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s11432-019-2757-1","volume":"63","author":"G Zhai","year":"2020","unstructured":"Zhai, G., Min, X.: Perceptual image quality assessment: a survey. Sci. China Inf. Sci. 63, 1\u201352 (2020)","journal-title":"Sci. China Inf. 
Sci."},{"issue":"9","key":"3316_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3470970","volume":"54","author":"X Min","year":"2021","unstructured":"Min, X., Gu, K., Zhai, G., Yang, X., Zhang, W., Le Callet, P., Chen, C.W.: Screen content quality assessment: overview, benchmark, and beyond. ACM Comput. Surv. (CSUR) 54(9), 1\u201336 (2021)","journal-title":"ACM Comput. Surv. (CSUR)"},{"issue":"8","key":"3316_CR28","doi-asserted-by":"publisher","first-page":"1213","DOI":"10.1109\/LSP.2017.2715076","volume":"24","author":"Y Zhong","year":"2017","unstructured":"Zhong, Y., Chen, J., Huang, B.: Toward end-to-end face recognition through alignment learning. IEEE Signal Process. Lett. 24(8), 1213\u20131217 (2017)","journal-title":"IEEE Signal Process. Lett."},{"key":"3316_CR29","doi-asserted-by":"crossref","unstructured":"Zhou, E., Cao, Z., Sun, J.: Gridface: face rectification via learning local homography transformations. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3\u201319 (2018)","DOI":"10.1007\/978-3-030-01270-0_1"},{"key":"3316_CR30","doi-asserted-by":"crossref","unstructured":"An, Z., Deng, W., Zhong, Y., Huang, Y., Tao, X.: Apa: adaptive pose alignment for robust face recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (2019)","DOI":"10.1109\/CVPRW.2019.00032"},{"issue":"4","key":"3316_CR31","doi-asserted-by":"publisher","first-page":"384","DOI":"10.1037\/0003-066X.48.4.384","volume":"48","author":"P Ekman","year":"1993","unstructured":"Ekman, P.: Facial expression and emotion. Am. Psychol. 48(4), 384 (1993)","journal-title":"Am. Psychol."},{"key":"3316_CR32","doi-asserted-by":"publisher","first-page":"1618","DOI":"10.1109\/TIP.2019.2912358","volume":"29","author":"M Verma","year":"2019","unstructured":"Verma, M., Vipparthi, S.K., Singh, G., Murala, S.: Learnet: dynamic imaging network for micro expression recognition. IEEE Trans. Image Process. 
29, 1618\u20131627 (2019)","journal-title":"IEEE Trans. Image Process."},{"key":"3316_CR33","doi-asserted-by":"publisher","first-page":"585","DOI":"10.1007\/s00371-023-02803-3","volume":"40","author":"Y Gan","year":"2023","unstructured":"Gan, Y., Lien, S.-E., Chiang, Y.-C., Liong, S.-T.: Laenet for micro-expression recognition. Vis. Comput. 40, 585\u2013599 (2023)","journal-title":"Vis. Comput."},{"key":"3316_CR34","doi-asserted-by":"crossref","unstructured":"Liu, Y., Jourabloo, A., Ren, W., Liu, X.: Dense face alignment. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1619\u20131628 (2017)","DOI":"10.1109\/ICCVW.2017.190"},{"key":"3316_CR35","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.neucom.2022.11.048","volume":"520","author":"H Mohaghegh","year":"2023","unstructured":"Mohaghegh, H., Boussaid, F., Laga, H., Rahmani, H., Bennamoun, M.: Robust monocular 3d face reconstruction under challenging viewing conditions. Neurocomputing 520, 82\u201393 (2023). https:\/\/doi.org\/10.1016\/j.neucom.2022.11.048","journal-title":"Neurocomputing"},{"key":"3316_CR36","doi-asserted-by":"crossref","unstructured":"Xu, H., Zhang, J., Cai, J., Rezatofighi, H., Tao, D.: Gmflow: learning optical flow via global matching. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 8121\u20138130 (2022)","DOI":"10.1109\/CVPR52688.2022.00795"},{"key":"3316_CR37","doi-asserted-by":"crossref","unstructured":"Sanyal, S., Bolkart, T., Feng, H., Black, M.: Learning to regress 3D face shape and expression from an image without 3D supervision. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7763\u20137772 (2019)","DOI":"10.1109\/CVPR.2019.00795"},{"key":"3316_CR38","doi-asserted-by":"crossref","unstructured":"Feng, Z.H., Huber, P., Kittler, J., Hancock, P., Rtsch, M.: Evaluation of dense 3d reconstruction from 2d face images in the wild. 
IEEE (2018)","DOI":"10.1109\/FG.2018.00123"},{"key":"3316_CR39","doi-asserted-by":"crossref","unstructured":"Tewari, A., Bernard, F., Garrido, P., Bharaj, G., Elgharib, M., Seidel, H.-P., P\u00e9rez, P., Zollhofer, M., Theobalt, C.: Fml: face model learning from videos. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10812\u201310822 (2019)","DOI":"10.1109\/CVPR.2019.01107"},{"key":"3316_CR40","doi-asserted-by":"crossref","unstructured":"Tewari, A., Zollh\u00f6fer, M., Garrido, P., Bernard, F., Kim, H., P\u00e9rez, P., Theobalt, C.: Self-supervised multi-level face model learning for monocular reconstruction at over 250 hz. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2549\u20132559 (2018)","DOI":"10.1109\/CVPR.2018.00270"},{"key":"3316_CR41","doi-asserted-by":"crossref","unstructured":"Tran, L., Liu, F., Liu, X.: Towards high-fidelity nonlinear 3d face morphable model. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1126\u20131135 (2019)","DOI":"10.1109\/CVPR.2019.00122"},{"key":"3316_CR42","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: single-shot multi-level face localisation in the wild. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 5203\u20135212 (2020)","DOI":"10.1109\/CVPR42600.2020.00525"},{"key":"3316_CR43","doi-asserted-by":"crossref","unstructured":"Bulat, A., Tzimiropoulos, G.: How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). 
In: International Conference on Computer Vision (2017)","DOI":"10.1109\/ICCV.2017.116"},{"key":"3316_CR44","doi-asserted-by":"publisher","first-page":"477","DOI":"10.1016\/j.neucom.2018.11.108","volume":"396","author":"Z Shao","year":"2020","unstructured":"Shao, Z., Zhu, H., Tan, X., Hao, Y., Ma, L.: Deep multi-center learning for face alignment. Neurocomputing 396, 477\u2013486 (2020). https:\/\/doi.org\/10.1016\/j.neucom.2018.11.108","journal-title":"Neurocomputing"},{"key":"3316_CR45","doi-asserted-by":"publisher","first-page":"477","DOI":"10.1016\/j.neucom.2018.11.108","volume":"396","author":"Z Shao","year":"2020","unstructured":"Shao, Z., Zhu, H., Tan, X., Hao, Y., Ma, L.: Deep multi-center learning for face alignment. Neurocomputing 396, 477\u2013486 (2020)","journal-title":"Neurocomputing"},{"key":"3316_CR46","doi-asserted-by":"crossref","unstructured":"Wu, C.-Y., Xu, Q., Neumann, U.: Synergy between 3dmm and 3d landmarks for accurate 3d facial geometry. In: 2021 International Conference on 3D Vision (3DV) (2021)","DOI":"10.1109\/3DV53792.2021.00055"},{"issue":"8","key":"3316_CR47","doi-asserted-by":"publisher","first-page":"2416","DOI":"10.1109\/TCSVT.2018.2868123","volume":"29","author":"Y Liu","year":"2018","unstructured":"Liu, Y., Lu, Z., Li, J., Yang, T.: Hierarchically learned view-invariant representations for cross-view action recognition. IEEE Trans. Circuits Syst. Video Technol. 29(8), 2416\u20132430 (2018)","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"3316_CR48","doi-asserted-by":"crossref","unstructured":"Liu, Y., Li, G., Lin, L.: Cross-modal causal relational reasoning for event-level visual question answering. IEEE Trans. Pattern Anal. Mach. Intell. 
45(10), 11624\u201311641 (2023)","DOI":"10.1109\/TPAMI.2023.3284038"},{"key":"3316_CR49","doi-asserted-by":"publisher","first-page":"3168","DOI":"10.1109\/TIP.2019.2957930","volume":"29","author":"Y Liu","year":"2019","unstructured":"Liu, Y., Lu, Z., Li, J., Yang, T., Yao, C.: Deep image-to-video adaptation and fusion networks for action recognition. IEEE Trans. Image Process. 29, 3168\u20133182 (2019)","journal-title":"IEEE Trans. Image Process."},{"key":"3316_CR50","doi-asserted-by":"publisher","first-page":"1978","DOI":"10.1109\/TIP.2022.3147032","volume":"31","author":"Y Liu","year":"2022","unstructured":"Liu, Y., Wang, K., Liu, L., Lan, H., Lin, L.: Tcgl: temporal contrastive graph for self-supervised video representation learning. IEEE Trans. Image Process. 31, 1978\u20131993 (2022)","journal-title":"IEEE Trans. Image Process."},{"issue":"9","key":"3316_CR51","doi-asserted-by":"publisher","first-page":"1063","DOI":"10.1109\/TPAMI.2003.1227983","volume":"25","author":"V Blanz","year":"2003","unstructured":"Blanz, V., Vetter, T.: Face recognition based on fitting a 3d morphable model. IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1063\u20131074 (2003). https:\/\/doi.org\/10.1109\/TPAMI.2003.1227983","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"issue":"Jun. 26","key":"3316_CR52","doi-asserted-by":"publisher","first-page":"149","DOI":"10.1016\/j.neucom.2015.08.114","volume":"195","author":"Y Yang","year":"2016","unstructured":"Yang, Y., Su, Y., Cai, D., Xu, M.: Nonlinear deformation learning for face alignment across expression and pose. Neurocomputing 195(Jun. 26), 149\u2013158 (2016)","journal-title":"Neurocomputing"},{"key":"3316_CR53","doi-asserted-by":"crossref","unstructured":"Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., Brox, T.: Flownet 2.0: evolution of optical flow estimation with deep networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). 
http:\/\/lmb.informatik.uni-freiburg.de\/\/Publications\/2017\/IMKDB17","DOI":"10.1109\/CVPR.2017.179"},{"key":"3316_CR54","doi-asserted-by":"crossref","unstructured":"Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8934\u20138943 (2018)","DOI":"10.1109\/CVPR.2018.00931"},{"key":"3316_CR55","doi-asserted-by":"crossref","unstructured":"Teed, Z., Deng, J.: Raft: recurrent all-pairs field transforms for optical flow. In: European Conference on Computer Vision, pp. 402\u2013419. Springer (2020)","DOI":"10.1007\/978-3-030-58536-5_24"},{"key":"3316_CR56","doi-asserted-by":"crossref","unstructured":"Koujan, M.R., Roussos, A., Zafeiriou, S.: Deepfaceflow: In-the-wild dense 3d facial motion estimation. In: The IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)","DOI":"10.1109\/CVPR42600.2020.00665"},{"issue":"1","key":"3316_CR57","doi-asserted-by":"publisher","first-page":"109","DOI":"10.1007\/s41095-021-0267-z","volume":"9","author":"Z Peng","year":"2023","unstructured":"Peng, Z., Jiang, B., Xu, H., Feng, W., Zhang, J.: Facial optical flow estimation via neural non-rigid registration. Comput. Vis. Media 9(1), 109\u2013122 (2023)","journal-title":"Comput. Vis. Media"},{"key":"3316_CR58","unstructured":"DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with cutout. arXiv:1708.04552 (2017)"},{"key":"3316_CR59","doi-asserted-by":"crossref","unstructured":"Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 6023\u20136032 (2019)","DOI":"10.1109\/ICCV.2019.00612"},{"key":"3316_CR60","unstructured":"Hongyi, Z., Moustapha, C., Yann, N.D., David, L.-P.: Mixup: beyond empirical risk minimization. 
In: International Conference on Learning Representations (2018)"},{"issue":"3","key":"3316_CR61","doi-asserted-by":"publisher","first-page":"351","DOI":"10.1109\/TPAMI.2006.53","volume":"28","author":"L Zhang","year":"2006","unstructured":"Zhang, L., Samaras, D.: Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics. IEEE Trans. Pattern Anal. Mach. Intell. 28(3), 351\u2013363 (2006)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"3316_CR62","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Xue, N., Zafeiriou, S.: Arcface: additive angular margin loss for deep face recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4690\u20134699 (2019)","DOI":"10.1109\/CVPR.2019.00482"},{"key":"3316_CR63","doi-asserted-by":"crossref","unstructured":"Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV) (2015)","DOI":"10.1109\/ICCV.2015.425"},{"key":"3316_CR64","doi-asserted-by":"crossref","unstructured":"Baker, S., Roth, S., Scharstein, D., Black, M.J., Lewis, J.P., Szeliski, R.: A database and evaluation methodology for optical flow. In: IEEE International Conference on Computer Vision (2007)","DOI":"10.1109\/ICCV.2007.4408903"},{"key":"3316_CR65","doi-asserted-by":"crossref","unstructured":"Chai, Z., Zhang, H., Ren, J., Kang, D., Xu, Z., Zhe, X., Yuan, C., Bao, L.: Realy: rethinking the evaluation of 3d face reconstruction. In: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part VIII, pp. 74\u201392. Springer (2022)","DOI":"10.1007\/978-3-031-20074-8_5"},{"key":"3316_CR66","doi-asserted-by":"crossref","unstructured":"Shang, J., Shen, T., Li, S., Zhou, L., Zhen, M., Fang, T., Quan, L.: Self-supervised monocular 3d face reconstruction by occlusion-aware multi-view geometry consistency. 
In: European Conference on Computer Vision, pp. 53\u201370. Springer (2020)","DOI":"10.1007\/978-3-030-58555-6_4"},{"issue":"4","key":"3316_CR67","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z Wang","year":"2004","unstructured":"Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600\u2013612 (2004)","journal-title":"IEEE Trans. Image Process."},{"issue":"1","key":"3316_CR68","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1109\/TIP.2017.2760518","volume":"27","author":"S Bosse","year":"2017","unstructured":"Bosse, S., Maniry, D., M\u00fcller, K.-R., Wiegand, T., Samek, W.: Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 27(1), 206\u2013219 (2017)","journal-title":"IEEE Trans. Image Process."},{"key":"3316_CR69","doi-asserted-by":"crossref","unstructured":"Bambach, S., Lee, S., Crandall, D.J., Yu, C.: Lending a hand: detecting hands and recognizing activities in complex egocentric interactions. In: The IEEE International Conference on Computer Vision (ICCV) (2015)","DOI":"10.1109\/ICCV.2015.226"},{"issue":"2","key":"3316_CR70","doi-asserted-by":"publisher","first-page":"508","DOI":"10.1109\/TBC.2018.2816783","volume":"64","author":"X Min","year":"2018","unstructured":"Min, X., Zhai, G., Gu, K., Liu, Y., Yang, X.: Blind image quality estimation via distortion aggravation. IEEE Trans. Broadcast. 64(2), 508\u2013517 (2018)","journal-title":"IEEE Trans. Broadcast."},{"key":"3316_CR71","doi-asserted-by":"crossref","unstructured":"Min, X., Ma, K., Gu, K., Zhai, G., Wang, Z., Lin, W.: Unified blind quality assessment of compressed natural, graphic, and screen content images. IEEE Trans. Image Process. 
26(11), 5462\u20135474 (2017)","DOI":"10.1109\/TIP.2017.2735192"},{"issue":"8","key":"3316_CR72","doi-asserted-by":"publisher","first-page":"2879","DOI":"10.1109\/TITS.2018.2868771","volume":"20","author":"X Min","year":"2018","unstructured":"Min, X., Zhai, G., Gu, K., Yang, X., Guan, X.: Objective quality evaluation of dehazed images. IEEE Trans. Intell. Transp. Syst. 20(8), 2879\u20132892 (2018)","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"issue":"8","key":"3316_CR73","doi-asserted-by":"publisher","first-page":"2049","DOI":"10.1109\/TMM.2017.2788206","volume":"20","author":"X Min","year":"2017","unstructured":"Min, X., Gu, K., Zhai, G., Liu, J., Yang, X., Chen, C.W.: Blind quality assessment based on pseudo-reference image. IEEE Trans. Multimed. 20(8), 2049\u20132062 (2017)","journal-title":"IEEE Trans. Multimed."},{"key":"3316_CR74","doi-asserted-by":"publisher","first-page":"3805","DOI":"10.1109\/TIP.2020.2966082","volume":"29","author":"X Min","year":"2020","unstructured":"Min, X., Zhai, G., Zhou, J., Zhang, X.-P., Yang, X., Guan, X.: A multimodal saliency model for videos with high audio-visual correspondence. IEEE Trans. Image Process. 29, 3805\u20133819 (2020)","journal-title":"IEEE Trans. Image Process."},{"key":"3316_CR75","doi-asserted-by":"publisher","first-page":"6054","DOI":"10.1109\/TIP.2020.2988148","volume":"29","author":"X Min","year":"2020","unstructured":"Min, X., Zhai, G., Zhou, J., Farias, M.C., Bovik, A.C.: Study of subjective and objective quality assessment of audio-visual signals. IEEE Trans. Image Process. 29, 6054\u20136068 (2020)","journal-title":"IEEE Trans. 
Image Process."}],"container-title":["The Visual Computer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-024-03316-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00371-024-03316-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-024-03316-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,24]],"date-time":"2025-01-24T13:00:05Z","timestamp":1737723605000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00371-024-03316-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,17]]},"references-count":75,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["3316"],"URL":"https:\/\/doi.org\/10.1007\/s00371-024-03316-3","relation":{},"ISSN":["0178-2789","1432-2315"],"issn-type":[{"value":"0178-2789","type":"print"},{"value":"1432-2315","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,17]]},"assertion":[{"value":"10 February 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 March 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"While engaged in this research project, Feipeng Da is employed in the school of Automation, Southeast University and Chunlu Li is a PhD student in the same school. 
Both of them declare that there is no financial or non-financial relationship with any universities, companies, organizations, or entities, either directly or indirectly.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"No participants were involved in this work, and the authors carefully followed ethical standards. The face images used for the experiments were obtained from existing datasets, and we strictly follow their protocols during usage.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"No participants were involved in this work, and the authors carefully followed ethical standards.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"All authors of this work consent to publish.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The code of our baseline, i.e., the FOCUS model [], is open-source, as are PWCNet [] and the pre-processing code from the Deep3D model []. We are also authorized by the owners to access the Basel Face Model []. We will release our code after the paper is accepted.","order":6,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code availability"}}]}}