{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T10:23:17Z","timestamp":1771064597859,"version":"3.50.1"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"10-11","license":[{"start":{"date-parts":[[2020,5,6]],"date-time":"2020-05-06T00:00:00Z","timestamp":1588723200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,5,6]],"date-time":"2020-05-06T00:00:00Z","timestamp":1588723200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000266","name":"EPSRC","doi-asserted-by":"crossref","award":["EP\/N509486\/1"],"award-info":[{"award-number":["EP\/N509486\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000266","name":"EPSRC","doi-asserted-by":"crossref","award":["EP\/N007743\/1"],"award-info":[{"award-number":["EP\/N007743\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000266","name":"EPSRC","doi-asserted-by":"crossref","award":["EP\/S010203\/1"],"award-info":[{"award-number":["EP\/S010203\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2020,11]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Over the past few years, Generative Adversarial Networks (GANs) have garnered increased interest among researchers in Computer Vision, with applications including, but not limited to, image generation, translation, imputation, and super-resolution. 
Nevertheless, no GAN-based method has been proposed in the literature that can successfully represent, generate or translate 3D facial shapes (meshes). This can be primarily attributed to two facts, namely that (a) publicly available 3D face databases are scarce as well as limited in terms of sample size and variability (e.g., few subjects, little diversity in race and gender), and (b) mesh convolutions for deep networks present several challenges that are not entirely tackled in the literature, leading to operator approximations and model instability, often failing to preserve high-frequency components of the distribution. As a result, linear methods such as Principal Component Analysis (PCA) have been mainly utilized towards 3D shape analysis, despite being unable to capture non-linearities and high frequency details of the 3D face\u2014such as eyelid and lip variations. In this work, we present 3DFaceGAN, the first GAN tailored towards modeling the distribution of 3D facial surfaces, while retaining the high frequency details of 3D face shapes. 
We conduct an extensive series of both qualitative and quantitative experiments, where the merits of 3DFaceGAN are clearly demonstrated against other, state-of-the-art methods in tasks such as 3D shape representation, generation, and translation.\n<\/jats:p>","DOI":"10.1007\/s11263-020-01329-8","type":"journal-article","created":{"date-parts":[[2020,5,6]],"date-time":"2020-05-06T15:04:01Z","timestamp":1588777441000},"page":"2534-2551","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":47,"title":["3DFaceGAN: Adversarial Nets for 3D Face Representation, Generation, and Translation"],"prefix":"10.1007","volume":"128","author":[{"given":"Stylianos","family":"Moschoglou","sequence":"first","affiliation":[]},{"given":"Stylianos","family":"Ploumpis","sequence":"additional","affiliation":[]},{"given":"Mihalis A.","family":"Nicolaou","sequence":"additional","affiliation":[]},{"given":"Athanasios","family":"Papaioannou","sequence":"additional","affiliation":[]},{"given":"Stefanos","family":"Zafeiriou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,5,6]]},"reference":[{"key":"1329_CR1","unstructured":"Berthelot, D., Schumm, T., & Metz, L. (2017). Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717"},{"key":"1329_CR2","first-page":"586","volume":"1611","author":"PJ Besl","year":"1992","unstructured":"Besl, P. J., & McKay, N. D. (1992). Method for registration of 3-D shapes. Sensor Fusion IV: Control Paradigms and Data Structures, 1611, 586\u2013607.","journal-title":"Sensor Fusion IV: Control Paradigms and Data Structures"},{"key":"1329_CR3","doi-asserted-by":"crossref","unstructured":"Booth, J., & Zafeiriou, S. (2014). Optimal uv spaces for facial morphable model construction. In Proceedings of the IEEE international conference on image processing (ICIP), (pp. 
4672\u20134676).","DOI":"10.1109\/ICIP.2014.7025947"},{"key":"1329_CR4","doi-asserted-by":"crossref","unstructured":"Booth, J., Roussos, A., Zafeiriou, S., Ponniah, A., & Dunaway, D. (2016). A 3D morphable model learnt from 10,000 faces. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 5543\u20135552).","DOI":"10.1109\/CVPR.2016.598"},{"key":"1329_CR5","doi-asserted-by":"crossref","unstructured":"Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., & Krishnan, D. (2017). Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a07).","DOI":"10.1109\/CVPR.2017.18"},{"issue":"4","key":"1329_CR6","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/MSP.2017.2693418","volume":"34","author":"MM Bronstein","year":"2017","unstructured":"Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017). Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4), 18\u201342.","journal-title":"IEEE Signal Processing Magazine"},{"key":"1329_CR7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.cviu.2014.05.005","volume":"128","author":"A Brunton","year":"2014","unstructured":"Brunton, A., Salazar, A., Bolkart, T., & Wuhrer, S. (2014). Review of statistical shape spaces for 3D data with comparative analysis for human faces. Computer Vision and Image Understanding, 128, 1\u201317.","journal-title":"Computer Vision and Image Understanding"},{"key":"1329_CR8","doi-asserted-by":"crossref","unstructured":"Cheng, S., Kotsia, I., Pantic, M., & Zafeiriou, S. (2018). 4DFAB: A large scale 4D database for facial expression analysis and biometric applications. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 
5117\u20135126).","DOI":"10.1109\/CVPR.2018.00537"},{"key":"1329_CR9","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., & Choo, J. (2018). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 8789\u20138797).","DOI":"10.1109\/CVPR.2018.00916"},{"key":"1329_CR10","unstructured":"Clevert, D.A., Unterthiner, T., & Hochreiter, S. (2016). Fast and accurate deep network learning by exponential linear units (elus). In Proceedings of the international conference for learning representations (ICLR)."},{"key":"1329_CR11","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-84800-138-1_2","volume-title":"Statistical models of shape: Optimization and evaluation","author":"R Davies","year":"2008","unstructured":"Davies, R., Twining, C., & Taylor, C. (2008). Statistical models of shape: Optimization and evaluation. Berlin: Springer."},{"key":"1329_CR12","doi-asserted-by":"crossref","unstructured":"De\u00a0Smet, M., & Van\u00a0Gool, L. (2010). Optimal regions for linear model-based 3D face reconstruction. In Proceedings of the Asian conference on computer vision, (pp. 276\u2013289).","DOI":"10.1007\/978-3-642-19318-7_22"},{"key":"1329_CR13","unstructured":"Dosovitskiy, A., & Brox, T. (2016). Generating images with perceptual similarity metrics based on deep networks. In Proceedings of the advances in neural information processing systems (NIPS), (pp. 658\u2013666)."},{"key":"1329_CR14","doi-asserted-by":"crossref","unstructured":"Dou, P., Shah, S.K., & Kakadiaris, I.A. (2017). End-to-end 3D face reconstruction with deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 21\u201326).","DOI":"10.1109\/CVPR.2017.164"},{"key":"1329_CR15","doi-asserted-by":"crossref","unstructured":"Fan, H., Su, H., & Guibas, L.J. (2017). 
A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a02, p.\u00a06).","DOI":"10.1109\/CVPR.2017.264"},{"key":"1329_CR16","doi-asserted-by":"crossref","unstructured":"Feng, Y., Wu, F., Shao, X., Wang, Y., & Zhou, X. (2018). Joint 3D face reconstruction and dense alignment with position map regression network. In Proceedings of the European conference on computer vision (ECCV), (pp. 534\u2013551).","DOI":"10.1007\/978-3-030-01264-9_33"},{"key":"1329_CR17","doi-asserted-by":"crossref","unstructured":"Genova, K., Cole, F., Maschinot, A., Sarna, A., Vlasic, D., & Freeman, W.T. (2018). Unsupervised training for 3D morphable model regression. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 8377\u20138386).","DOI":"10.1109\/CVPR.2018.00874"},{"key":"1329_CR18","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Proceedings of the advances in neural information processing systems, (pp. 2672\u20132680)."},{"issue":"1","key":"1329_CR19","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1007\/BF02291478","volume":"40","author":"JC Gower","year":"1975","unstructured":"Gower, J. C. (1975). Generalized procrustes analysis. Psychometrika, 40(1), 33\u201351.","journal-title":"Psychometrika"},{"key":"1329_CR20","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 770\u2013778).","DOI":"10.1109\/CVPR.2016.90"},{"key":"1329_CR21","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der\u00a0Maaten, L., & Weinberger, K.Q. (2017). Densely connected convolutional networks. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a03).","DOI":"10.1109\/CVPR.2017.243"},{"key":"1329_CR22","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., & Efros, A.A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 1125\u20131134).","DOI":"10.1109\/CVPR.2017.632"},{"key":"1329_CR23","doi-asserted-by":"crossref","unstructured":"Jackson, A.S., Bulat, A., Argyriou, V., & Tzimiropoulos, G. (2017). Large pose 3D face reconstruction from a single image via direct volumetric cnn regression. In Proceedings of the IEEE international conference on computer vision (ICCV), (pp. 1031\u20131039).","DOI":"10.1109\/ICCV.2017.117"},{"key":"1329_CR24","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European conference on computer vision, Springer, (pp. 694\u2013711).","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"1329_CR25","doi-asserted-by":"publisher","first-page":"1094","DOI":"10.1007\/978-3-642-04898-2_455","volume-title":"International encyclopedia of statistical science","author":"I Jolliffe","year":"2011","unstructured":"Jolliffe, I. (2011). Principal component analysis. In M. Lovric (Ed.), International encyclopedia of statistical science (pp. 1094\u20131096). Berlin: Springer."},{"key":"1329_CR26","unstructured":"Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018). Progressive growing of gans for improved quality, stability, and variation. In Proceedings of the International Conference for Learning Representations (ICLR)."},{"key":"1329_CR27","unstructured":"Kim, T., Cha, M., Kim, H., Lee, J.K., & Kim, J. (2017). Learning to discover cross-domain relations with generative adversarial networks. 
In Proceedings of the 34th international conference on machine learning, (vol. 70, pp. 1857\u20131865)."},{"key":"1329_CR28","unstructured":"Kingma, D.P., & Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of the 3rd international conference for learning representations (ICLR)."},{"key":"1329_CR29","unstructured":"Kingma, D.P., & Welling, M. (2014). Auto-encoding variational Bayes. In Proceedings of the International Conference for Learning Representations (ICLR)."},{"key":"1329_CR30","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., & Wang, Z., et\u00a0al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a02, p.\u00a04).","DOI":"10.1109\/CVPR.2017.19"},{"key":"1329_CR31","unstructured":"Lei, T., Jin, W., Barzilay, R., & Jaakkola, T. (2017). Deriving neural architectures from sequence and graph kernels. In Proceedings of the 34th international conference on machine learning, (vol. 70, pp. 2024\u20132033)."},{"key":"1329_CR32","doi-asserted-by":"crossref","unstructured":"Li, Y., Liu, S., Yang, J., & Yang, M.H. (2017). Generative face completion. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a03).","DOI":"10.1109\/CVPR.2017.624"},{"key":"1329_CR33","doi-asserted-by":"crossref","unstructured":"Litany, O., Remez, T., Rodol\u00e0, E., Bronstein, A.M., & Bronstein, M.M. (2017). Deep functional maps: Structured prediction for dense shape correspondence. In Proceedings of the IEEE international conference on computer vision (ICCV), (pp. 5660\u20135668).","DOI":"10.1109\/ICCV.2017.603"},{"key":"1329_CR34","doi-asserted-by":"crossref","unstructured":"Litany, O., Bronstein, A., Bronstein, M., & Makadia, A. (2018). 
Deformable shape completion with graph convolutional autoencoders. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 1886\u20131895).","DOI":"10.1109\/CVPR.2018.00202"},{"key":"1329_CR35","doi-asserted-by":"crossref","unstructured":"Liu, F., Tran, L., & Liu, X. (2019). 3D face modeling from diverse raw scan data. arXiv preprint arXiv:1902.04943.","DOI":"10.1109\/ICCV.2019.00950"},{"key":"1329_CR36","unstructured":"Lucic, M., Kurach, K., Michalski, M., Gelly, S., & Bousquet, O. (2018). Are gans created equal? a large-scale study. In Proceedings of the Advances in Neural Information Processing Systems (NIPS)."},{"key":"1329_CR37","doi-asserted-by":"crossref","unstructured":"Mahendran, A., & Vedaldi, A. (2015). Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 5188\u20135196).","DOI":"10.1109\/CVPR.2015.7299155"},{"issue":"8","key":"1329_CR38","doi-asserted-by":"publisher","first-page":"1520","DOI":"10.1109\/TPAMI.2011.248","volume":"34","author":"L Maier-Hein","year":"2011","unstructured":"Maier-Hein, L., Franz, A. M., Dos Santos, T. R., Schmidt, M., Fangerau, M., Meinzer, H. P., et al. (2011). Convergent iterative closest-point algorithm to accomodate anisotropic and inhomogenous localization error. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(8), 1520\u20131532.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"issue":"4","key":"1329_CR39","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1145\/3072959.3073616","volume":"36","author":"H Maron","year":"2017","unstructured":"Maron, H., Galun, M., Aigerman, N., Trope, M., Dym, N., Yumer, E., et al. (2017). Convolutional neural networks on surfaces via seamless toric covers. 
ACM Transactions on Graphics, 36(4), 71.","journal-title":"ACM Transactions on Graphics"},{"key":"1329_CR40","unstructured":"Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784."},{"key":"1329_CR41","doi-asserted-by":"crossref","unstructured":"Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., & Fitzgibbon, A. (2011). Kinectfusion: Real-time dense surface mapping and tracking. In Proceedings of the IEEE international symposium on Mixed and Augmented Reality (ISMAR), (pp. 127\u2013136).","DOI":"10.1109\/ISMAR.2011.6092378"},{"key":"1329_CR42","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1016\/j.patcog.2018.01.002","volume":"78","author":"K Nguyen","year":"2018","unstructured":"Nguyen, K., Fookes, C., Sridharan, S., Tistarelli, M., & Nixon, M. (2018). Super-resolution for biometrics: A comprehensive survey. Pattern Recognition, 78, 23\u201342.","journal-title":"Pattern Recognition"},{"issue":"2","key":"1329_CR43","first-page":"4","volume":"1","author":"CR Qi","year":"2017","unstructured":"Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). Pointnet: Deep learning on point sets for 3D classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1(2), 4.","journal-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"1329_CR44","unstructured":"Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the international conference for learning representations (ICLR)."},{"key":"1329_CR45","doi-asserted-by":"crossref","unstructured":"Ranjan, A., Bolkart, T., Sanyal, S., & Black, M.J. (2018). Generating 3D faces using convolutional mesh autoencoders. In Proceedings of the European conference on computer vision (ECCV), (pp. 
704\u2013720).","DOI":"10.1007\/978-3-030-01219-9_43"},{"key":"1329_CR46","doi-asserted-by":"crossref","unstructured":"Richardson, E., Sela, M., Or-El, R., & Kimmel, R. (2017). Learning detailed face reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 5553\u20135562).","DOI":"10.1109\/CVPR.2017.589"},{"key":"1329_CR47","doi-asserted-by":"crossref","unstructured":"Tewari, A., Zollh\u00f6fer, M., Garrido, P., Bernard, F., Kim, H., P\u00e9rez, P., & Theobalt, C. (2018). Self-supervised multi-level face model learning for monocular reconstruction at over 250 Hz. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 2549\u20132559).","DOI":"10.1109\/CVPR.2018.00270"},{"key":"1329_CR48","doi-asserted-by":"crossref","unstructured":"Tewari, A., Bernard, F., Garrido, P., Bharaj, G., Elgharib, M., Seidel, H.P., P\u00e9rez, P., Zollhofer, M., & Theobalt, C. (2019). Fml: face model learning from videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 10812\u201310822).","DOI":"10.1109\/CVPR.2019.01107"},{"key":"1329_CR49","doi-asserted-by":"crossref","unstructured":"Tran, A.T., Hassner, T., Masi, I., & Medioni, G. (2017). Regressing robust and discriminative 3D morphable models with a very deep neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (pp. 1493\u20131502).","DOI":"10.1109\/CVPR.2017.163"},{"key":"1329_CR50","doi-asserted-by":"crossref","unstructured":"Tran, L., & Liu, X. (2018). Nonlinear 3D face morphable model. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 7346\u20137355).","DOI":"10.1109\/CVPR.2018.00767"},{"key":"1329_CR51","doi-asserted-by":"crossref","unstructured":"Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a04).","DOI":"10.1109\/CVPR.2017.316"},{"key":"1329_CR52","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a05).","DOI":"10.1109\/CVPR.2018.00917"},{"key":"1329_CR53","doi-asserted-by":"crossref","unstructured":"Wang, W., Huang, Q., You, S., Yang, C., & Neumann, U. (2017). Shape inpainting using 3d generative adversarial network and recurrent convolutional networks. In Proceedings of the IEEE international conference on computer vision, (pp. 2298\u20132306).","DOI":"10.1109\/ICCV.2017.252"},{"key":"1329_CR54","doi-asserted-by":"crossref","unstructured":"Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., & Li, H. (2017). High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (vol.\u00a01, p.\u00a03).","DOI":"10.1109\/CVPR.2017.434"},{"key":"1329_CR55","unstructured":"Zhao, J., Mathieu, M., & LeCun, Y. (2017). Energy-based generative adversarial network. In Proceedings of the international conference for learning representations (ICLR)."},{"key":"1329_CR56","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., & Efros, A.A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, (pp. 
2223\u20132232).","DOI":"10.1109\/ICCV.2017.244"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-020-01329-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-020-01329-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-020-01329-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,5,5]],"date-time":"2021-05-05T23:52:31Z","timestamp":1620258751000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-020-01329-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,5,6]]},"references-count":56,"journal-issue":{"issue":"10-11","published-print":{"date-parts":[[2020,11]]}},"alternative-id":["1329"],"URL":"https:\/\/doi.org\/10.1007\/s11263-020-01329-8","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,5,6]]},"assertion":[{"value":"30 April 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 April 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 May 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}