{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T07:02:18Z","timestamp":1775199738943,"version":"3.50.1"},"reference-count":63,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2020,1,2]],"date-time":"2020-01-02T00:00:00Z","timestamp":1577923200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,1,2]],"date-time":"2020-01-02T00:00:00Z","timestamp":1577923200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001711","name":"Schweizerischer Nationalfonds zur F\u00f6rderung der Wissenschaftlichen Forschung","doi-asserted-by":"crossref","award":["CRSII2 160811"],"award-info":[{"award-number":["CRSII2 160811"]}],"id":[{"id":"10.13039\/501100001711","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100007601","name":"Horizon 2020","doi-asserted-by":"publisher","award":["762021"],"award-info":[{"award-number":["762021"]}],"id":[{"id":"10.13039\/501100007601","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2020,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>We present a novel approach to automatic Sign Language Production using recent developments in Neural Machine Translation (NMT), Generative Adversarial Networks, and motion generation. Our system is capable of producing sign videos from spoken language sentences. Contrary to current approaches that are dependent on heavily annotated data, our approach requires minimal gloss and skeletal level annotations for training. We achieve this by breaking down the task into dedicated sub-processes. 
We first translate spoken language sentences into sign pose sequences by combining an NMT network with a Motion Graph. The resulting pose information is then used to condition a generative model that produces photo-realistic sign language video sequences. This is the first approach to continuous sign video generation that does not use a classical graphical avatar. We evaluate the translation abilities of our approach on the PHOENIX14<jats:bold>T<\/jats:bold> Sign Language Translation dataset. We set a baseline for text-to-gloss translation, reporting a BLEU-4 score of 16.34\/15.26 on dev\/test sets. We further demonstrate the video generation capabilities of our approach for both multi-signer and high-definition settings qualitatively and quantitatively using broadcast quality assessment metrics.<\/jats:p>","DOI":"10.1007\/s11263-019-01281-2","type":"journal-article","created":{"date-parts":[[2020,1,2]],"date-time":"2020-01-02T15:03:14Z","timestamp":1577977394000},"page":"891-908","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":220,"title":["Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks"],"prefix":"10.1007","volume":"128","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3582-3969","authenticated-orcid":false,"given":"Stephanie","family":"Stoll","sequence":"first","affiliation":[]},{"given":"Necati Cihan","family":"Camgoz","sequence":"additional","affiliation":[]},{"given":"Simon","family":"Hadfield","sequence":"additional","affiliation":[]},{"given":"Richard","family":"Bowden","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,1,2]]},"reference":[{"key":"1281_CR1","doi-asserted-by":"crossref","unstructured":"Ahn, H., Ha, T., Choi, Y., Yoo, H., & Oh, S. (2018). Text2action: Generative adversarial synthesis from language to action. 
In IEEE international conference on robotics and automation (ICRA).","DOI":"10.1109\/ICRA.2018.8460608"},{"key":"1281_CR2","doi-asserted-by":"crossref","unstructured":"Arikan, O., & Forsyth, D. A. (2002). Interactive motion generation from examples. In Proceedings of the 29th annual conference on computer graphics and interactive techniques, SIGGRAPH \u201902 (pp. 483\u2013490). ACM, New York, NY.","DOI":"10.1145\/566570.566606"},{"key":"1281_CR3","unstructured":"Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473."},{"key":"1281_CR4","doi-asserted-by":"crossref","unstructured":"Bangham, J. A., Cox, S. J., Elliott, R., Glauert, J. R. W., Marshall, I., Rankov, S., & Wells, M. (2000). Virtual signing: Capture, animation, storage and transmission-an overview of the visicast project. In IEE Seminar on speech and language processing for disabled and elderly people (Ref. No. 2000\/025) (pp. 6\/1\u20136\/7).","DOI":"10.1049\/ic:20000136"},{"key":"1281_CR5","unstructured":"BDA: British Deaf Association (2019). BSL statistics. https:\/\/bda.org.uk\/help-resources\/#statistics. Accessed 16 Nov 2019."},{"key":"1281_CR6","unstructured":"Bowden, R., Zisserman, A., Hogg, D., & Magee, D. (2016). Learning to recognise dynamic visual content from broadcast footage. https:\/\/cvssp.org\/projects\/dynavis\/index.html. Accessed 1 Nov 2018."},{"key":"1281_CR7","doi-asserted-by":"crossref","unstructured":"Camgoz, N. C., Hadfield, S., Koller, O., Ney, H., & Bowden, R. (2018). Neural sign language translation. In IEEE Conference on computer vision and pattern recognition (CVPR).","DOI":"10.1109\/CVPR.2018.00812"},{"key":"1281_CR8","doi-asserted-by":"crossref","unstructured":"Cao, Z., Simon, T., Wei, S., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity fields. In 2017 IEEE Conference on computer vision and pattern recognition (CVPR) (Vol.\u00a000, pp. 
1302\u20131310).","DOI":"10.1109\/CVPR.2017.143"},{"key":"1281_CR9","unstructured":"Chan, C., Ginosar, S., Zhou, T., & Efros, A. A. (2018). Everybody dance now. CoRR arXiv:1808.07371."},{"key":"1281_CR10","doi-asserted-by":"crossref","unstructured":"Chen, Q., & Koltun, V. (2017). Photographic image synthesis with cascaded refinement networks. In ICCV (pp. 1520\u20131529). IEEE Computer Society.","DOI":"10.1109\/ICCV.2017.168"},{"key":"1281_CR11","doi-asserted-by":"crossref","unstructured":"Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder\u2013decoder for statistical machine translation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1724\u20131734). Association for Computational Linguistics.","DOI":"10.3115\/v1\/D14-1179"},{"key":"1281_CR12","unstructured":"Chung, J., G\u00fcl\u00e7ehre, \u00c7., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR arXiv:1412.3555."},{"key":"1281_CR13","doi-asserted-by":"crossref","unstructured":"Cox, S., Lincoln, M., Tryggvason, J., Nakisa, M., Wells, M., Tutt, M., & Abbott, S. (2002). Tessa, a system to aid communication with deaf people. In Proceedings of the 5th international ACM conference on assistive technologies (pp. 205\u2013212). ACM","DOI":"10.1145\/638249.638287"},{"key":"1281_CR14","unstructured":"Ebling, S., Camgoz, N.C., Braem, P., Tissi, K., Sidler-Miserez, S., Stoll, S., Hadfield, S., Haug, T., Bowden, R., Tornay, S., Razavi, M., & Magimai-Doss, M. (2018). Smile Swiss German sign language dataset. In 11th Edition of the language resources and evaluation conference (LREC)."},{"key":"1281_CR15","unstructured":"Ebling, S., & Glauert, J. (2013). Exploiting the full potential of JASigning to build an avatar signing train announcements. 
In 3rd International symposium on sign language translation and avatar technology."},{"key":"1281_CR16","doi-asserted-by":"crossref","unstructured":"Ebling, S., & Huenerfauth, M. (2015). Bridging the gap between sign language machine translation and sign language animation using sequence classification. In SLPAT@Interspeech.","DOI":"10.18653\/v1\/W15-5102"},{"key":"1281_CR17","doi-asserted-by":"crossref","unstructured":"Efthimiou, E. (2012). The dicta-sign wiki: Enabling web communication for the deaf. In K. Miesenberger, A. Karshmer, P. Penaz, & W. Zagler (Eds.) Computers helping people with special needs. ICCHP 2012. Lecture notes in computer science (Vol. 7383). Springer, Berlin, Heidelberg.","DOI":"10.1007\/978-3-642-31534-3_32"},{"key":"1281_CR18","unstructured":"Elwazer, M. (2018). Kintrans. http:\/\/www.kintrans.com\/. Accessed 12 Nov 2018."},{"key":"1281_CR19","unstructured":"EU: European Parliament (2018). Sign languages in the EU. http:\/\/www.europarl.europa.eu\/RegData\/etudes\/ATAG\/2018\/625196\/EPRS_ATA(2018)625196_EN.pdf. Accessed 16 Nov 2019."},{"key":"1281_CR20","unstructured":"Forster, J., Schmidt, C., Koller, O., Bellgardt, M., & Ney, H. (2014). Extensions of the sign language recognition and translation corpus RWTH-PHOENIX-Weather. In Language resources and evaluation (pp. 1911\u20131916). Reykjavik."},{"issue":"4","key":"1281_CR21","doi-asserted-by":"publisher","first-page":"525","DOI":"10.1007\/s10209-015-0411-6","volume":"15","author":"S Gibet","year":"2016","unstructured":"Gibet, S., Lefebvre-Albaret, F., Hamon, L., Brun, R., & Turki, A. (2016). Interactive editing in french sign language dedicated to virtual signers: Requirements and challenges. 
Universal Access in the Information Society, 15(4), 525\u2013539.","journal-title":"Universal Access in the Information Society"},{"issue":"4","key":"1281_CR22","doi-asserted-by":"publisher","first-page":"207","DOI":"10.3233\/TAD-2006-18408","volume":"18","author":"J Glauert","year":"2006","unstructured":"Glauert, J., Elliott, R., Cox, S., Tryggvason, J., & Sheard, M. (2006). VANESSA-A system for communication between Deaf and hearing people. Technology and Disability, 18(4), 207\u2013216.","journal-title":"Technology and Disability"},{"key":"1281_CR23","unstructured":"Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. C., & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems 27: Annual conference on neural information processing systems 2014, December 8-13 2014 (pp. 2672\u20132680). Montreal, Quebec."},{"key":"1281_CR24","unstructured":"Gregor, K., Danihelka, I., Graves, A., Rezende, D., & Wierstra, D. (2015). Draw: A recurrent neural network for image generation. In F.\u00a0Bach, & D.\u00a0Blei (Eds.) Proceedings of the 32nd international conference on machine learning, Proceedings of Machine Learning Research (Vol.\u00a037, pp. 1462\u20131471). PMLR, Lille."},{"issue":"1","key":"1281_CR25","doi-asserted-by":"publisher","first-page":"8:1","DOI":"10.1145\/3152121","volume":"14","author":"D Guo","year":"2017","unstructured":"Guo, D., Zhou, W., Li, H., & Wang, M. (2017). Online early-late fusion based on adaptive hmm for sign language recognition. ACM Transactions on Multimedia Computing, Communications, and Applications, 14(1), 8:1\u20138:18. https:\/\/doi.org\/10.1145\/3152121.","journal-title":"ACM Transactions on Multimedia Computing, Communications, and Applications"},{"key":"1281_CR26","doi-asserted-by":"crossref","unstructured":"Guo, D., Zhou, W., Li, H., & Wang, M. (2018). Hierarchical LSTM for sign language translation. 
In AAAI.","DOI":"10.1609\/aaai.v32i1.12235"},{"key":"1281_CR27","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735\u201380. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735.","journal-title":"Neural Computation"},{"key":"1281_CR28","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In 2017 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 5967\u20135976).","DOI":"10.1109\/CVPR.2017.632"},{"key":"1281_CR29","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"1281_CR30","unstructured":"Kalchbrenner, N., Espeholt, L., Simonyan, K., van\u00a0den Oord, A., Graves, A., & Kavukcuoglu, K. (2016). Neural machine translation in linear time. CoRR arXiv:1610.10099."},{"key":"1281_CR31","unstructured":"Kennaway, R. (2013). Avatar-independent scripting for real-time gesture animation. CoRR arXiv:1502.02961."},{"key":"1281_CR32","unstructured":"Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. CoRR arXiv:1312.6114."},{"key":"1281_CR33","doi-asserted-by":"crossref","unstructured":"Kipp, M., H\u00e9loir, A., & Nguyen, Q. (2011). Sign language avatars: Animation and comprehensibility. In IVA.","DOI":"10.1007\/978-3-642-23974-8_13"},{"key":"1281_CR34","doi-asserted-by":"crossref","unstructured":"Kovar, L., Gleicher, M., & Pighin, F. (2002). Motion graphs. In Proceedings of the 29th annual conference on computer graphics and interactive techniques, SIGGRAPH \u201902 (pp. 473\u2013482). 
ACM, New York, NY.","DOI":"10.1145\/566570.566605"},{"key":"1281_CR35","unstructured":"Larsen, A. B. L., S\u00f8nderby, S. K., Larochelle, H., & Winther, O. (2016). Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd international conference on international conference on machine learning\u2014Volume 48, ICML\u201916 (pp. 1558\u20131566). JMLR.org."},{"key":"1281_CR36","doi-asserted-by":"crossref","unstructured":"Lee, J., Chai, J., Reitsma, P. S. A., Hodgins, J. K., & Pollard, N. S. (2002). Interactive control of avatars animated with human motion data. In Proceedings of the 29th annual conference on computer graphics and interactive techniques, SIGGRAPH \u201902 (pp. 491\u2013500). ACM, New York, NY.","DOI":"10.1145\/566570.566607"},{"key":"1281_CR37","doi-asserted-by":"crossref","unstructured":"Lee, J., & Shin, S. Y. (1999). A hierarchical approach to interactive motion editing for human-like figures. In SIGGRAPH.","DOI":"10.1145\/311535.311539"},{"key":"1281_CR38","doi-asserted-by":"crossref","unstructured":"Luong, T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. In Conference on empirical methods in natural language processing (EMNNLP).","DOI":"10.18653\/v1\/D15-1166"},{"key":"1281_CR39","unstructured":"Ma, L., Jia, X., Sun, Q., Schiele, B., Tuytelaars, T., & Van\u00a0Gool, L. (2017). Pose guided person image generation. In I.\u00a0Guyon, U. V. Luxburg, S.\u00a0Bengio, H.\u00a0Wallach, R.\u00a0Fergus, S.\u00a0Vishwanathan, & R.\u00a0Garnett (Eds.) Advances in neural information processing systems (Vol. 30, pp. 406\u2013416). Curran Associates, Inc."},{"key":"1281_CR40","unstructured":"Makhzani, A., Shlens, J., Jaitly, N., & Goodfellow, I. (2016). Adversarial autoencoders. 
In International conference on learning representations."},{"issue":"4","key":"1281_CR41","doi-asserted-by":"publisher","first-page":"551","DOI":"10.1007\/s10209-015-0407-2","volume":"15","author":"J McDonald","year":"2016","unstructured":"McDonald, J., Wolfe, R., Schnepp, J., Hochgesang, J., Jamrozik, D. G., Stumbo, M., et al. (2016). An automated technique for real-time production of lifelike animations of american sign language. Universal Access in the Information Society, 15(4), 551\u2013566.","journal-title":"Universal Access in the Information Society"},{"issue":"6","key":"1281_CR42","doi-asserted-by":"publisher","first-page":"153:1","DOI":"10.1145\/2366145.2366172","volume":"31","author":"J Min","year":"2012","unstructured":"Min, J., & Chai, J. (2012). Motion graphs++: A compact generative model for semantic motion analysis and synthesis. ACM Transactions on Graphics, 31(6), 153:1\u2013153:12.","journal-title":"ACM Transactions on Graphics"},{"key":"1281_CR43","unstructured":"Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. CoRR arXiv:1411.1784."},{"key":"1281_CR44","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1109\/MRA.2012.2192811","volume":"19","author":"M Mori","year":"2012","unstructured":"Mori, M., MacDorman, K., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19, 98\u2013100.","journal-title":"IEEE Robotics & Automation Magazine"},{"key":"1281_CR45","unstructured":"Oord, A. V., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel recurrent neural networks. In M. F. Balcan, & K. Q. Weinberger (Eds.) Proceedings of the 33rd international conference on machine learning, proceedings of machine learning research (Vol.\u00a048, pp. 1747\u20131756). PMLR, New York, New York."},{"key":"1281_CR46","unstructured":"Perarnau, G., van\u00a0de Weijer, J., Raducanu, B., & \u00c1lvarez, J. M. (2016). Invertible conditional GANs for image editing. 
CoRR arXiv:1611.06355."},{"key":"1281_CR47","unstructured":"Prillwitz, S. (1989). HamNoSys. Version 2.0. Hamburg notation system for sign languages. An introductory guide. Hamburg: Signum Press."},{"key":"1281_CR48","unstructured":"Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR arXiv:1511.06434."},{"key":"1281_CR49","unstructured":"Reed, S. E., Akata, Z., Mohan, S., Tenka, S., Schiele, B., & Lee, H. (2016). Learning what and where to draw. In D. D. Lee, M.\u00a0Sugiyama, U. V. Luxburg, I.\u00a0Guyon, & R.\u00a0Garnett (Eds.) Advances in neural information processing systems (Vol. 29, pp. 217\u2013225). Curran Associates, Inc."},{"key":"1281_CR50","unstructured":"Robotka, Z. (2018). Signall. http:\/\/www.signall.us\/. Accessed 12 Nov 2018."},{"issue":"8","key":"1281_CR51","doi-asserted-by":"publisher","first-page":"1627","DOI":"10.1021\/ac60214a047","volume":"36","author":"A Savitzky","year":"1964","unstructured":"Savitzky, A., & Golay, M. J. E. (1964). Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry, 36(8), 1627\u20131639.","journal-title":"Analytical Chemistry"},{"key":"1281_CR52","doi-asserted-by":"crossref","unstructured":"Siarohin, A., Sangineto, E., Lathuili\u00e8re, S., & Sebe, N. (2018). Deformable GANs for pose-based human image generation. In IEEE Conference on computer vision and pattern recognition (pp. 3408\u20133416). Salt Lake City, United States.","DOI":"10.1109\/CVPR.2018.00359"},{"key":"1281_CR53","unstructured":"Stoll, S., Camgoz, N. C., Hadfield, S., & Bowden, R. (2018). Sign language production using neural machine translation and generative adversarial networks. In British machine vision conference (BMVC)."},{"key":"1281_CR54","unstructured":"Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems."},{"key":"1281_CR55","unstructured":"van\u00a0den Oord, A., Kalchbrenner, N., Espeholt, L., kavukcuoglu, K., Vinyals, O., & Graves, A. (2016). Conditional image generation with pixelcnn decoders. In D. D. Lee, M.\u00a0Sugiyama, U. V. Luxburg, I.\u00a0Guyon, & R.\u00a0Garnett (Eds.) Advances in neural information processing systems (Vol. 29, pp. 4790\u20134798). Curran Associates, Inc."},{"key":"1281_CR56","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. CoRR arXiv:1706.03762."},{"key":"1281_CR57","unstructured":"Virtual Humans Group. (2017). Virtual humans research for sign language animation. http:\/\/vh.cmp.uea.ac.uk\/index.php\/Main_Page."},{"key":"1281_CR58","doi-asserted-by":"publisher","unstructured":"Wang, S., Guo, D., Zhou, W. G., Zha, Z. J., & Wang, M. (2018a). Connectionist temporal fusion for sign language translation. In Proceedings of the 26th ACM international conference on multimedia, MM \u201918 (pp. 1483\u20131491). ACM, New York. https:\/\/doi.org\/10.1145\/3240508.3240671.","DOI":"10.1145\/3240508.3240671"},{"key":"1281_CR59","doi-asserted-by":"crossref","unstructured":"Wang, T. C., Liu, M. Y., Zhu, J. Y., Tao, A., Kautz, J., & Catanzaro, B. (2018b). High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE conference on computer vision and pattern recognition.","DOI":"10.1109\/CVPR.2018.00917"},{"issue":"4","key":"1281_CR60","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z Wang","year":"2004","unstructured":"Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. 
IEEE Transactions on Image Processing, 13(4), 600\u2013612.","journal-title":"IEEE Transactions on Image Processing"},{"key":"1281_CR61","unstructured":"WHO: World Health Organization (2018). Deafness and hearing loss. http:\/\/www.who.int\/mediacentre\/factsheets\/fs300\/en\/. Accessed 21 Nov 2018."},{"key":"1281_CR62","doi-asserted-by":"crossref","unstructured":"Yan, X., Yang, J., Sohn, K., & Lee, H. (2016). Attribute2image: Conditional image generation from visual attributes. In ECCV (4). Lecture Notes in Computer Science (Vol. 9908, pp. 776\u2013791). Springer.","DOI":"10.1007\/978-3-319-46257-8"},{"key":"1281_CR63","unstructured":"Zwitserlood, I., Verlinden, M., Ros, J., & Schoot, S. V. D. (2005). Synthetic signing for the deaf: eSIGN. http:\/\/www.visicast.cmp.uea.ac.uk\/Papers\/Synthetic%20signing%20for%20the%20Deaf,%20eSIGN.pdf."}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-019-01281-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s11263-019-01281-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-019-01281-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,9]],"date-time":"2022-10-09T18:56:19Z","timestamp":1665341779000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s11263-019-01281-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,1,2]]},"references-count":63,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,4]]}},"alternative-id":["1281"],"URL":"https:\/\/doi.org\/10.1007\/s11263-019-01281-2","relation":{},"ISSN":["0920-5691","1573-1405"],"i
ssn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,1,2]]},"assertion":[{"value":"17 December 2018","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 December 2019","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 January 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}