{
  "status":"ok",
  "message-type":"work",
  "message-version":"1.0.0",
  "message":{
    "indexed":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T00:52:46Z","timestamp":1747183966480,"version":"3.40.5"},
    "reference-count":48,
    "publisher":"Wiley",
    "issue":"2",
    "license":[
      {"start":{"date-parts":[[2021,6,4]],"date-time":"2021-06-04T00:00:00Z","timestamp":1622764800000},"content-version":"vor","delay-in-days":34,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}
    ],
    "funder":[
      {"DOI":"10.13039\/501100014188","name":"Ministry of Science and ICT, South Korea","doi-asserted-by":"publisher","award":["IITP-2015-0-00174"],"award-info":[{"award-number":["IITP-2015-0-00174"]}],"id":[{"id":"10.13039\/501100014188","id-type":"DOI","asserted-by":"publisher"}]},
      {"DOI":"10.13039\/100007431","name":"Neurosciences Research Foundation","doi-asserted-by":"publisher","award":["NRF-2017M3C4A7066317"],"award-info":[{"award-number":["NRF-2017M3C4A7066317"]}],"id":[{"id":"10.13039\/100007431","id-type":"DOI","asserted-by":"publisher"}]}
    ],
    "content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},
    "short-container-title":["Computer Graphics Forum"],
    "published-print":{"date-parts":[[2021,5]]},
    "abstract":"<jats:title>Abstract<\/jats:title><jats:p>This paper presents an effective method for generating a spatiotemporal (time\u2010varying) texture map for a dynamic object using a single RGB\u2010D camera. The input of our framework is a 3D template model and an RGB\u2010D image sequence. Since there are invisible areas of the object at a frame in a single\u2010camera setup, textures of such areas need to be borrowed from other frames. We formulate the problem as an MRF optimization and define cost functions to reconstruct a plausible spatiotemporal texture for a dynamic object. Experimental results demonstrate that our spatiotemporal textures can reproduce the active appearances of captured objects better than approaches using a single texture map.<\/jats:p>",
    "DOI":"10.1111\/cgf.142652",
    "type":"journal-article",
    "created":{"date-parts":[[2021,6,4]],"date-time":"2021-06-04T16:37:32Z","timestamp":1622824652000},
    "page":"523-535",
    "update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy",
    "source":"Crossref",
    "is-referenced-by-count":1,
    "title":["Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB\u2010D Camera"],
    "prefix":"10.1111",
    "volume":"40",
    "author":[
      {"ORCID":"https:\/\/orcid.org\/0000-0002-2162-4627","authenticated-orcid":false,"given":"Hyomin","family":"Kim","sequence":"first","affiliation":[{"name":"POSTECH"}]},
      {"ORCID":"https:\/\/orcid.org\/0000-0003-4212-1970","authenticated-orcid":false,"given":"Jungeon","family":"Kim","sequence":"additional","affiliation":[{"name":"POSTECH"}]},
      {"ORCID":"https:\/\/orcid.org\/0000-0003-4033-901X","authenticated-orcid":false,"given":"Hyeonseo","family":"Nam","sequence":"additional","affiliation":[{"name":"POSTECH"}]},
      {"ORCID":"https:\/\/orcid.org\/0000-0001-5541-409X","authenticated-orcid":false,"given":"Jaesik","family":"Park","sequence":"additional","affiliation":[{"name":"POSTECH"}]},
      {"ORCID":"https:\/\/orcid.org\/0000-0002-8159-4271","authenticated-orcid":false,"given":"Seungyong","family":"Lee","sequence":"additional","affiliation":[{"name":"POSTECH"}]}
    ],
    "member":"311",
    "published-online":{"date-parts":[[2021,6,4]]},
    "reference":[
      {"key":"e_1_2_10_2_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1986.tb01412.x"},
      {"key":"e_1_2_10_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073610"},
      {"key":"e_1_2_10_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/2766945"},
      {"key":"e_1_2_10_5_2","doi-asserted-by":"crossref","unstructured":"Crete F., Dolmiere T., Ladret P., Nicolas M.: The blur effect: perception and estimation with a new no-reference perceptual blur metric. In Proc. SPIE (2007).","DOI":"10.1117\/12.702790"},
      {"key":"e_1_2_10_6_2","doi-asserted-by":"crossref","unstructured":"Chen Q., Koltun V.: Fast MRF optimization with application to depth reconstruction. In Proc. CVPR (2014).","DOI":"10.1109\/CVPR.2014.500"},
      {"key":"e_1_2_10_7_2","doi-asserted-by":"crossref","unstructured":"Du R., Chuang M., Chang W., Hoppe H., Varshney A.: Montage4D: Interactive seamless fusion of multiview video textures. In Proc. ACM SIGGRAPH Symposium on I3D (2018).","DOI":"10.1145\/3190834.3190843"},
      {"key":"e_1_2_10_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130801"},
      {"key":"e_1_2_10_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925969"},
      {"key":"e_1_2_10_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3054739"},
      {"key":"e_1_2_10_11_2","doi-asserted-by":"crossref","unstructured":"Dou M., Taylor J., Fuchs H., Fitzgibbon A., Izadi S.: 3D scanning deformable objects with a single RGBD sensor. In Proc. CVPR (2015).","DOI":"10.1109\/CVPR.2015.7298647"},
      {"key":"e_1_2_10_12_2","doi-asserted-by":"crossref","unstructured":"Fu Y., Yan Q., Liao J., Xiao C.: Joint texture and geometry optimization for RGB-D reconstruction. In Proc. CVPR (2020).","DOI":"10.1109\/CVPR42600.2020.00599"},
      {"key":"e_1_2_10_13_2","doi-asserted-by":"crossref","unstructured":"Fu Y., Yan Q., Yang L., Liao J., Xiao C.: Texture mapping for 3D reconstruction with RGB-D sensor. In Proc. CVPR (2018).","DOI":"10.1109\/CVPR.2018.00488"},
      {"key":"e_1_2_10_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.1984.4767596"},
      {"issue":"6","key":"e_1_2_10_15_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3355089.3356571","article-title":"The relightables: Volumetric performance capture of humans with realistic relighting","volume":"38","author":"Guo K.","year":"2019","journal-title":"ACM TOG"},
      {"key":"e_1_2_10_16_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2009.01617.x"},
      {"key":"e_1_2_10_17_2","doi-asserted-by":"crossref","unstructured":"Guo K., Xu F., Wang Y., Liu Y., Dai Q.: Robust non-rigid motion tracking and surface reconstruction using L0 regularization. In Proc. ICCV (2015).","DOI":"10.1109\/ICCV.2015.353"},
      {"issue":"5","key":"e_1_2_10_18_2","first-page":"1770","article-title":"Robust non-rigid motion tracking and surface reconstruction using L0 regularization","volume":"24","author":"Guo K.","year":"2018","journal-title":"IEEE TVCG"},
      {"key":"e_1_2_10_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3083722"},
      {"key":"e_1_2_10_20_2","doi-asserted-by":"crossref","unstructured":"Huang F.-C., Chen B.-Y., Chuang Y.-Y.: Progressive deforming meshes based on deformation oriented decimation and dynamic connectivity updating. In Symposium on Computer Animation (2006).","DOI":"10.1145\/1179622.1179633"},
      {"key":"e_1_2_10_21_2","doi-asserted-by":"crossref","unstructured":"Huang J., Thies J., Dai A., Kundu A., Jiang C., Guibas L. J., Niessner M., Funkhouser T.: Adversarial texture optimization from RGB-D scans. In Proc. CVPR (2020).","DOI":"10.1109\/CVPR42600.2020.00163"},
      {"key":"e_1_2_10_22_2","doi-asserted-by":"crossref","unstructured":"Innmann M., Zollh\u00f6fer M., Niessner M., Theobalt C., Stamminger M.: VolumeDeform: Real-time volumetric non-rigid reconstruction. In Proc. ECCV (2016).","DOI":"10.1007\/978-3-319-46484-8_22"},
      {"key":"e_1_2_10_23_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-016-1249-5"},
      {"key":"e_1_2_10_24_2","doi-asserted-by":"crossref","unstructured":"Kim J., Kim H., Park J., Lee S.: Global texture mapping for dynamic objects. In Computer Graphics Forum (2019), vol. 38, pp. 697\u2013705.","DOI":"10.1111\/cgf.13872"},
      {"key":"e_1_2_10_25_2","doi-asserted-by":"crossref","unstructured":"Li H., Adams B., Guibas L. J., Pauly M.: Robust single-view geometry and motion reconstruction. In ACM SIGGRAPH Asia (2009).","DOI":"10.1145\/1661412.1618521"},
      {"issue":"6","key":"e_1_2_10_26_2","first-page":"2296","article-title":"Fast texture mapping adjustment via local\/global optimization","volume":"25","author":"Li W.","year":"2019","journal-title":"IEEE TVCG"},
      {"key":"e_1_2_10_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/2461912.2461950"},
      {"issue":"6","key":"e_1_2_10_28_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2508363.2508407","article-title":"3D self-portraits","volume":"32","author":"Li H.","year":"2013","journal-title":"ACM TOG"},
      {"issue":"6","key":"e_1_2_10_29_2","first-page":"2255","article-title":"Robust non-rigid registration with reweighted position and transformation sparsity","volume":"25","author":"Li K.","year":"2018","journal-title":"IEEE TVCG"},
      {"key":"e_1_2_10_30_2","unstructured":"Microsoft: UVAtlas, 2011. Online; accessed 24 Feb 2020. URL: https:\/\/github.com\/Microsoft\/UVAtlas."},
      {"key":"e_1_2_10_31_2","unstructured":"Microsoft: Azure Kinect DK \u2013 Develop AI Models: Microsoft Azure, 2020. Online; accessed 19 Jan 2020. URL: https:\/\/azure.microsoft.com\/en-us\/services\/kinect-dk\/."},
      {"key":"e_1_2_10_32_2","doi-asserted-by":"crossref","unstructured":"Newcombe R. A., Fox D., Seitz S. M.: DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In Proc. CVPR (2015).","DOI":"10.1109\/CVPR.2015.7298631"},
      {"key":"e_1_2_10_33_2","unstructured":"Newcombe R. A., Izadi S., Hilliges O., Molyneaux D., Kim D., Davison A. J., Kohli P., Shotton J., Hodges S., Fitzgibbon A.: KinectFusion: Real-time dense surface mapping and tracking. In ISMAR (2011), pp. 127\u2013136."},
      {"key":"e_1_2_10_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508374"},
      {"key":"e_1_2_10_35_2","doi-asserted-by":"crossref","unstructured":"Orts-Escolano S., Rhemann C., Fanello S., Chang W., Kowdle A., Degtyarev Y., Kim D., Davidson P. L., Khamis S., Dou M., Tankovich V., Loop C., Cai Q., Chou P. A., Mennicken S., Valentin J., Pradeep V., Wang S., Kang S. B., Kohli P., Lutchyn Y., Keskin C., Izadi S.: Holoportation: Virtual 3D teleportation in real-time. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (2016), pp. 741\u2013754.","DOI":"10.1145\/2984511.2984517"},
      {"key":"e_1_2_10_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073679"},
      {"key":"e_1_2_10_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201317"},
      {"key":"e_1_2_10_38_2","doi-asserted-by":"crossref","unstructured":"Pandey R., Tkach A., Yang S., Pidlypenskyi P., Taylor J., Martin-Brualla R., Tagliasacchi A., Papandreou G., Davidson P., Keskin C., et al.: Volumetric capture of humans with a single RGBD camera via semi-parametric learning. In Proc. CVPR (2019).","DOI":"10.1109\/CVPR.2019.00994"},
      {"key":"e_1_2_10_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015720"},
      {"key":"e_1_2_10_40_2","doi-asserted-by":"crossref","unstructured":"Saito S., Huang Z., Natsume R., Morishima S., Kanazawa A., Li H.: PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proc. ICCV (2019).","DOI":"10.1109\/ICCV.2019.00239"},
      {"key":"e_1_2_10_41_2","doi-asserted-by":"crossref","unstructured":"Sumner R. W., Schmid J., Pauly M.: Embedded deformation for shape manipulation. In ACM SIGGRAPH 2007 papers (2007), pp. 80\u2013es.","DOI":"10.1145\/1275808.1276478"},
      {"key":"e_1_2_10_42_2","doi-asserted-by":"crossref","unstructured":"Tombari F., Salti S., Di Stefano L.: Unique signatures of histograms for local surface description. In Proc. ECCV (2010).","DOI":"10.1007\/978-3-642-15558-1_26"},
      {"key":"e_1_2_10_43_2","unstructured":"Thuerck D., Waechter M., Widmer S., von Buelow M., Seemann P., Pfetsch M. E., Goesele M.: A fast massively parallel solver for large irregular pairwise Markov random fields. In High Performance Graphics (2016), pp. 173\u2013183."},
      {"key":"e_1_2_10_44_2","doi-asserted-by":"crossref","unstructured":"Waechter M., Moehrle N., Goesele M.: Let there be color! Large-scale texturing of 3D reconstructions. In Proc. ECCV (2014), pp. 836\u2013850.","DOI":"10.1007\/978-3-319-10602-1_54"},
      {"key":"e_1_2_10_45_2","doi-asserted-by":"crossref","unstructured":"Yao Y., Deng B., Xu W., Zhang J.: Quasi-Newton solver for robust non-rigid registration. In Proc. CVPR (2020).","DOI":"10.1109\/CVPR42600.2020.00762"},
      {"key":"e_1_2_10_46_2","doi-asserted-by":"crossref","unstructured":"Yu T., Guo K., Xu F., Dong Y., Su Z., Zhao J., Li J., Dai Q., Liu Y.: BodyFusion: Real-time capture of human motion and surface geometry using a single depth camera. In Proc. ICCV (2017), pp. 910\u2013919.","DOI":"10.1109\/ICCV.2017.104"},
      {"key":"e_1_2_10_47_2","doi-asserted-by":"crossref","unstructured":"Yu T., Zheng Z., Guo K., Zhao J., Dai Q., Li H., Pons-Moll G., Liu Y.: DoubleFusion: Real-time capture of human performances with inner body shapes from a single depth sensor. In Proc. CVPR (2018).","DOI":"10.1109\/CVPR.2018.00761"},
      {"key":"e_1_2_10_48_2","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601134"},
      {"key":"e_1_2_10_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601165"}
    ],
    "container-title":["Computer Graphics Forum"],
    "original-title":[],
    "language":"en",
    "link":[
      {"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/cgf.142652","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},
      {"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1111\/cgf.142652","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},
      {"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/cgf.142652","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}
    ],
    "deposited":{"date-parts":[[2024,9,1]],"date-time":"2024-09-01T07:20:27Z","timestamp":1725175227000},
    "score":1,
    "resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/cgf.142652"}},
    "subtitle":[],
    "short-title":[],
    "issued":{"date-parts":[[2021,5]]},
    "references-count":48,
    "journal-issue":{"issue":"2","published-print":{"date-parts":[[2021,5]]}},
    "alternative-id":["10.1111\/cgf.142652"],
    "URL":"https:\/\/doi.org\/10.1111\/cgf.142652",
    "archive":["Portico"],
    "relation":{},
    "ISSN":["0167-7055","1467-8659"],
    "issn-type":[
      {"type":"print","value":"0167-7055"},
      {"type":"electronic","value":"1467-8659"}
    ],
    "subject":[],
    "published":{"date-parts":[[2021,5]]},
    "assertion":[{"value":"2021-06-04","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]
  }
}