{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T08:15:32Z","timestamp":1759133732481,"version":"3.37.3"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T00:00:00Z","timestamp":1604966400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T00:00:00Z","timestamp":1604966400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["2019K2A9A1A06100184"],"award-info":[{"award-number":["2019K2A9A1A06100184"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100012046","name":"Vietnam Academy of Science and Technology","doi-asserted-by":"publisher","award":["QTKR01.01\/20-21"],"award-info":[{"award-number":["QTKR01.01\/20-21"]}],"id":[{"id":"10.13039\/100012046","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100014819","name":"U.S. Army Combat Capabilities Development Command","doi-asserted-by":"crossref","award":["W90GQZ-93290007"],"award-info":[{"award-number":["W90GQZ-93290007"]}],"id":[{"id":"10.13039\/100014819","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Hum. Cent. Comput. Inf. Sci."],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Pose-invariant face recognition refers to the problem of identifying or verifying a person by analyzing face images captured from different poses. 
This problem is challenging due to the large variation of pose, illumination and facial expression. A promising approach to deal with pose variation is to fulfill incomplete UV maps extracted from in-the-wild faces, then attach the completed UV map to a fitted 3D mesh and finally generate different 2D faces of arbitrary poses. The synthesized faces increase the pose variation for training deep face recognition models and reduce the pose discrepancy during the testing phase. In this paper, we propose a novel generative model called Attention ResCUNet-GAN to improve the UV map completion. We enhance the original UV-GAN by using a couple of U-Nets. Particularly, the skip connections within each U-Net are boosted by attention gates. Meanwhile, the features from two U-Nets are fused with trainable scalar weights. The experiments on the popular benchmarks, including Multi-PIE, LFW, CPLFW and CFP datasets, show that the proposed method yields superior performance compared to other existing methods.<\/jats:p>","DOI":"10.1186\/s13673-020-00250-w","type":"journal-article","created":{"date-parts":[[2020,11,10]],"date-time":"2020-11-10T13:09:58Z","timestamp":1605013798000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Facial UV map completion for pose-invariant face recognition: a novel adversarial approach based on coupled attention residual UNets"],"prefix":"10.1186","volume":"10","author":[{"given":"In 
Seop","family":"Na","sequence":"first","affiliation":[]},{"given":"Chung","family":"Tran","sequence":"additional","affiliation":[]},{"given":"Dung","family":"Nguyen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9254-1327","authenticated-orcid":false,"given":"Sang","family":"Dinh","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,11,10]]},"reference":[{"key":"250_CR1","doi-asserted-by":"crossref","unstructured":"Masi I, Wu Y, Hassner T, Natarajan P (2018) Deep face recognition: a survey. In: 2018 31st SIBGRAPI conference on graphics, patterns and images (SIBGRAPI), IEEE, pp 471\u2013478","DOI":"10.1109\/SIBGRAPI.2018.00067"},{"issue":"1","key":"250_CR2","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1186\/s13673-018-0157-2","volume":"8","author":"S Zhou","year":"2018","unstructured":"Zhou S, Xiao S (2018) 3d face recognition: a survey. Hum-Cent Comput Inf Sci 8(1):35","journal-title":"Hum-Cent Comput Inf Sci"},{"issue":"2","key":"250_CR3","doi-asserted-by":"publisher","first-page":"41","DOI":"10.3745\/JIPS.2009.5.2.041","volume":"5","author":"R Jafri","year":"2009","unstructured":"Jafri R, Arabnia HR (2009) A survey of face recognition techniques. J Inf Process Syst 5(2):41\u201368","journal-title":"J Inf Process Syst"},{"key":"250_CR4","doi-asserted-by":"crossref","unstructured":"Tran L, Yin X, Liu X (2017) Disentangled representation learning gan for pose-invariant face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1415\u20131424","DOI":"10.1109\/CVPR.2017.141"},{"key":"250_CR5","doi-asserted-by":"crossref","unstructured":"Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition","DOI":"10.5244\/C.29.41"},{"key":"250_CR6","doi-asserted-by":"crossref","unstructured":"Schroff F, Kalenichenko D, Philbin J (2015) Facenet: a unified embedding for face recognition and clustering. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 815\u2013823","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"250_CR7","unstructured":"Sun Y, Chen Y, Wang X, Tang X (2014) Deep learning face representation by joint identification-verification. In: Advances in neural information processing systems, pp 1988\u20131996"},{"key":"250_CR8","doi-asserted-by":"crossref","unstructured":"Taigman Y, Yang M, Ranzato M, Wolf L (2014) Deepface: closing the gap to human-level performance in face verification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1701\u20131708","DOI":"10.1109\/CVPR.2014.220"},{"key":"250_CR9","unstructured":"Yang J, Reed SE, Yang M-H, Lee H (2015) Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In: Advances in neural information processing systems, pp 1099\u20131107"},{"issue":"1","key":"250_CR10","first-page":"6","volume":"16","author":"M Sayan","year":"2020","unstructured":"Sayan M, Mohamed A-M, Shihab SA (2020) Multimodal biometrics recognition from facial video with missing modalities using deep learning. J Inf Process Syst 16(1):6\u201329","journal-title":"J Inf Process Syst"},{"key":"250_CR11","doi-asserted-by":"crossref","unstructured":"Sang DV, Van\u00a0Dat N, et\u00a0al (2017) Facial expression recognition using deep convolutional neural networks. In: 2017 9th international conference on knowledge and systems engineering (KSE), IEEE, pp 130\u2013135","DOI":"10.1109\/KSE.2017.8119447"},{"key":"250_CR12","unstructured":"Hai-Duong N, Sun-Hee K, Guee-Sang L, Hyung-Jeong Y, In-Seop N, Soo-Hyung K (2019) Facial expression recognition using a temporal ensemble of multi-level convolutional neural networks. 
IEEE Trans Affect Comput"},{"issue":"1","key":"250_CR13","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1186\/s13673-015-0043-0","volume":"5","author":"R Blanco-Gonzalo","year":"2015","unstructured":"Blanco-Gonzalo R, Poh N, Wong R, Sanchez-Reillo R (2015) Time evolution of face recognition in accessible scenarios. Hum-Cent Comput Inf Sci 5(1):24","journal-title":"Hum-Cent Comput Inf Sci"},{"key":"250_CR14","doi-asserted-by":"crossref","unstructured":"Masi I, Tran AT, Hassner T, Leksut JT, Medioni G (2016) Do we really need to collect millions of faces for effective face recognition? In: European conference on computer vision, Springer, pp 579\u2013596","DOI":"10.1007\/978-3-319-46454-1_35"},{"key":"250_CR15","doi-asserted-by":"crossref","unstructured":"Sagonas C, Panagakis Y, Zafeiriou S, Pantic M (2015) Robust statistical face frontalization. In: Proceedings of the IEEE international conference on computer vision, pp 3871\u20133879","DOI":"10.1109\/ICCV.2015.441"},{"key":"250_CR16","doi-asserted-by":"crossref","unstructured":"Kan M, Shan S, Chang H, Chen X (2014) Stacked progressive auto-encoders (spae) for face recognition across poses. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1883\u20131890","DOI":"10.1109\/CVPR.2014.243"},{"key":"250_CR17","doi-asserted-by":"crossref","unstructured":"Hassner T, Harel S, Paz E, Enbar R (2015) Effective face frontalization in unconstrained images. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4295\u20134304","DOI":"10.1109\/CVPR.2015.7299058"},{"key":"250_CR18","doi-asserted-by":"crossref","unstructured":"Peng X, Yu X, Sohn K, Metaxas DN, Chandraker M (2017) Reconstruction-based disentanglement for pose-invariant face recognition. 
In: Proceedings of the IEEE international conference on computer vision, pp 1623\u20131632","DOI":"10.1109\/ICCV.2017.180"},{"key":"250_CR19","unstructured":"Chongxuan L, Xu T, Zhu J, Zhang B (2017) Triple generative adversarial nets. In: Advances in neural information processing systems, pp 4088\u20134098"},{"key":"250_CR20","doi-asserted-by":"crossref","unstructured":"Yeh R, Chen C, Lim TY, Hasegawa-Johnson M, Do MN (2016) Semantic image inpainting with perceptual and contextual losses, 2(3). arXiv preprint arXiv:1607.07539","DOI":"10.1109\/CVPR.2017.728"},{"key":"250_CR21","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2018) Generative image inpainting with contextual attention. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5505\u20135514","DOI":"10.1109\/CVPR.2018.00577"},{"key":"250_CR22","doi-asserted-by":"crossref","unstructured":"Luan F, Paris S, Shechtman E, Bala K (2017) Deep photo style transfer. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4990\u20134998","DOI":"10.1109\/CVPR.2017.740"},{"key":"250_CR23","doi-asserted-by":"crossref","unstructured":"Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision, pp 2223\u20132232","DOI":"10.1109\/ICCV.2017.244"},{"key":"250_CR24","unstructured":"Karras T, Aila T, Laine S, Lehtinen J (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196"},{"key":"250_CR25","doi-asserted-by":"crossref","unstructured":"Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4401\u20134410","DOI":"10.1109\/CVPR.2019.00453"},{"key":"250_CR26","doi-asserted-by":"crossref","unstructured":"Ledig C, Theis L, Husz\u00e1r F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z, et\u00a0al (2017) Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4681\u20134690","DOI":"10.1109\/CVPR.2017.19"},{"issue":"12","key":"250_CR27","doi-asserted-by":"publisher","first-page":"3007","DOI":"10.1109\/TPAMI.2018.2868350","volume":"41","author":"L Tran","year":"2018","unstructured":"Tran L, Yin X, Liu X (2018) Representation learning by rotating your faces. IEEE Trans Pattern Anal Mach Intell 41(12):3007\u20133021","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"250_CR28","doi-asserted-by":"crossref","unstructured":"Wang Q, Fan H, Sun G, Ren W, Tang Y (2020) Recurrent generative adversarial network for face completion. IEEE Trans Multimed","DOI":"10.1109\/TMM.2020.2978633"},{"key":"250_CR29","doi-asserted-by":"crossref","unstructured":"Huang R, Zhang S, Li T, He R (2017) Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In: Proceedings of the IEEE international conference on computer vision, pp 2439\u20132448","DOI":"10.1109\/ICCV.2017.267"},{"key":"250_CR30","doi-asserted-by":"crossref","unstructured":"Yin X, Yu X, Sohn K, Liu X, Chandraker M (2017) Towards large-pose face frontalization in the wild. In: Proceedings of the IEEE international conference on computer vision, pp 3990\u20133999","DOI":"10.1109\/ICCV.2017.430"},{"key":"250_CR31","doi-asserted-by":"crossref","unstructured":"Zhao J, Cheng Y, Xu Y, Xiong L, Li J, Zhao F, Jayashree K, Pranata S, Shen S, Xing J, et\u00a0al (2018) Towards pose invariant face recognition in the wild. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2207\u20132216","DOI":"10.1109\/CVPR.2018.00235"},{"key":"250_CR32","doi-asserted-by":"crossref","unstructured":"Duan Q, Zhang L (2020) Look more into occlusion: realistic face frontalization and recognition with boostgan. IEEE Trans Neural Netw Learn Syst","DOI":"10.1109\/TNNLS.2020.2978127"},{"issue":"2","key":"250_CR33","doi-asserted-by":"publisher","first-page":"460","DOI":"10.1007\/s11263-019-01252-7","volume":"128","author":"J Zhao","year":"2020","unstructured":"Zhao J, Xing J, Xiong L, Yan S, Feng J (2020) Recognizing profile faces by imagining frontal view. Int J Comput Vis 128(2):460\u2013478","journal-title":"Int J Comput Vis"},{"key":"250_CR34","doi-asserted-by":"publisher","first-page":"4445","DOI":"10.1109\/TIP.2020.2972114","volume":"29","author":"F Zhang","year":"2020","unstructured":"Zhang F, Zhang T, Mao Q, Xu C (2020) Geometry guided pose-invariant facial expression recognition. IEEE Trans Image Process 29:4445\u20134460","journal-title":"IEEE Trans Image Process"},{"key":"250_CR35","doi-asserted-by":"crossref","unstructured":"Deng J, Cheng S, Xue N, Zhou Y, Zafeiriou S (2018) Uv-gan: adversarial facial uv map completion for pose-invariant face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7093\u20137102","DOI":"10.1109\/CVPR.2018.00741"},{"key":"250_CR36","doi-asserted-by":"crossref","unstructured":"Booth J, Antonakos E, Ploumpis S, Trigeorgis G, Panagakis Y, Zafeiriou S (2017) 3d face morphable models \u201cin-the-wild\u201d. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), IEEE, pp 5464\u20135473","DOI":"10.1109\/CVPR.2017.580"},{"key":"250_CR37","doi-asserted-by":"crossref","unstructured":"Isola P, Zhu J-Y, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1125\u20131134","DOI":"10.1109\/CVPR.2017.632"},{"key":"250_CR38","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.632","volume-title":"Image-to-image translation with conditional adversarial networks","author":"P Isola","year":"2017","unstructured":"Isola P, Zhu J-Y, Zhou T, Efros AA (2017) Image-to-image translation with conditional adversarial networks. CVPR, Salt Lake City"},{"key":"250_CR39","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention, Springer, pp 234\u2013241","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"250_CR40","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"issue":"10","key":"250_CR41","doi-asserted-by":"publisher","first-page":"2349","DOI":"10.1109\/TPAMI.2019.2902556","volume":"41","author":"N Xue","year":"2019","unstructured":"Xue N, Deng J, Cheng S, Panagakis Y, Zafeiriou S (2019) Side information for face completion: a robust PCA approach. IEEE Trans Pattern Anal Mach Intell 41(10):2349\u20132364","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"5","key":"250_CR42","doi-asserted-by":"publisher","first-page":"1025","DOI":"10.1109\/TPAMI.2019.2961900","volume":"42","author":"R He","year":"2019","unstructured":"He R, Cao J, Song L, Sun Z, Tan T (2019) Adversarial cross-spectral face completion for nir-vis face recognition. 
IEEE Trans Pattern Anal Mach Intell 42(5):1025\u20131037","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"250_CR43","unstructured":"Shah S, Ghosh P, Davis LS, Goldstein T (2018) Stacked u-nets: a no-frills approach to natural image segmentation. arXiv preprint arXiv:1804.10343"},{"key":"250_CR44","doi-asserted-by":"crossref","unstructured":"Newell A, Yang K, Deng J (2016) Stacked hourglass networks for human pose estimation. In: European conference on computer vision, Springer, pp 483\u2013499","DOI":"10.1007\/978-3-319-46484-8_29"},{"key":"250_CR45","doi-asserted-by":"publisher","first-page":"74","DOI":"10.1016\/j.neunet.2019.08.025","volume":"121","author":"N Ibtehaz","year":"2020","unstructured":"Ibtehaz N, Rahman MS (2020) Multiresunet: rethinking the u-net architecture for multimodal biomedical image segmentation. Neural Netw 121:74\u201387","journal-title":"Neural Netw"},{"key":"250_CR46","doi-asserted-by":"publisher","first-page":"197","DOI":"10.1016\/j.media.2019.01.012","volume":"53","author":"J Schlemper","year":"2019","unstructured":"Schlemper J, Oktay O, Schaap M, Heinrich M, Kainz B, Glocker B, Rueckert D (2019) Attention gated networks: learning to leverage salient regions in medical images. Med Image Anal 53:197\u2013207","journal-title":"Med Image Anal"},{"key":"250_CR47","unstructured":"Tang Z, Peng X, Geng S, Zhu Y, Metaxas DN (2019) Cu-net: coupled u-nets. In: 29th British machine vision conference, BMVC 2018"},{"issue":"1","key":"250_CR48","doi-asserted-by":"publisher","first-page":"78","DOI":"10.1109\/TPAMI.2017.2778152","volume":"41","author":"X Zhu","year":"2017","unstructured":"Zhu X, Liu X, Lei Z, Li SZ (2017) Face alignment in full pose range: a 3d total solution. IEEE Trans Pattern Anal Mach Intell 41(1):78\u201392","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"250_CR49","doi-asserted-by":"crossref","unstructured":"Blanz V, Vetter T (1999) A morphable model for the synthesis of 3d faces. 
In: Proceedings of the 26th annual conference on computer graphics and interactive techniques, pp 187\u2013194","DOI":"10.1145\/311535.311556"},{"key":"250_CR50","doi-asserted-by":"crossref","unstructured":"Tan M, Pang R, Le QV (2019) Efficientdet: scalable and efficient object detection. arXiv preprint arXiv:1911.09070","DOI":"10.1109\/CVPR42600.2020.01079"},{"issue":"5","key":"250_CR51","doi-asserted-by":"publisher","first-page":"807","DOI":"10.1016\/j.imavis.2009.08.002","volume":"28","author":"R Gross","year":"2010","unstructured":"Gross R, Matthews I, Cohn J, Kanade T, Baker S (2010) Multi-pie. Image Vis Comput 28(5):807\u2013813","journal-title":"Image Vis Comput"},{"key":"250_CR52","doi-asserted-by":"crossref","unstructured":"P\u00e9rez P, Gangnet M, Blake A (2003) Poisson image editing. In: ACM SIGGRAPH 2003 papers, pp 313\u2013318","DOI":"10.1145\/882262.882269"},{"issue":"10","key":"250_CR53","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang K, Zhang Z, Li Z, Qiao Y (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett 23(10):1499\u20131503. https:\/\/doi.org\/10.1109\/LSP.2016.2603342","journal-title":"IEEE Signal Process Lett"},{"key":"250_CR54","doi-asserted-by":"crossref","unstructured":"Deng J, Guo J, Xue N, Zafeiriou S (2019) Arcface: additive angular margin loss for deep face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4690\u20134699","DOI":"10.1109\/CVPR.2019.00482"},{"key":"250_CR55","unstructured":"Tan M, Le QV (2019) Efficientnet: rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946"},{"key":"250_CR56","doi-asserted-by":"crossref","unstructured":"Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J (2018) Unet++: a nested u-net architecture for medical image segmentation. 
In: Deep learning in medical image analysis and multimodal learning for clinical decision support, pp 3\u201311","DOI":"10.1007\/978-3-030-00889-5_1"}],"container-title":["Human-centric Computing and Information Sciences"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13673-020-00250-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13673-020-00250-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13673-020-00250-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,7,30]],"date-time":"2021-07-30T12:27:22Z","timestamp":1627648042000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s13673-020-00250-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,11,10]]},"references-count":56,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["250"],"URL":"https:\/\/doi.org\/10.1186\/s13673-020-00250-w","relation":{},"ISSN":["2192-1962"],"issn-type":[{"type":"electronic","value":"2192-1962"}],"subject":[],"published":{"date-parts":[[2020,11,10]]},"assertion":[{"value":"21 March 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 October 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 November 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing 
interests"}}],"article-number":"45"}}