{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:12:21Z","timestamp":1775067141263,"version":"3.50.1"},"reference-count":50,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2021,12,1]],"date-time":"2021-12-01T00:00:00Z","timestamp":1638316800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:p>\n            Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this \"hyper-space\". Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between \"moments\", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub\n            <jats:italic>HyperNeRF<\/jats:italic>\n            , outperforms existing methods on both tasks. 
Compared to Nerfies, HyperNeRF reduces average error rates by 4.1% for interpolation and 8.6% for novel-view synthesis, as measured by LPIPS. Additional videos, results, and visualizations are available at hypernerf.github.io.\n          <\/jats:p>","DOI":"10.1145\/3478513.3480487","type":"journal-article","created":{"date-parts":[[2021,12,10]],"date-time":"2021-12-10T18:28:45Z","timestamp":1639160925000},"page":"1-12","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":548,"title":["HyperNeRF"],"prefix":"10.1145","volume":"40","author":[{"given":"Keunhong","family":"Park","sequence":"first","affiliation":[{"name":"University of Washington"}]},{"given":"Utkarsh","family":"Sinha","sequence":"additional","affiliation":[{"name":"Google Research"}]},{"given":"Peter","family":"Hedman","sequence":"additional","affiliation":[{"name":"Google Research"}]},{"given":"Jonathan T.","family":"Barron","sequence":"additional","affiliation":[{"name":"Google Research"}]},{"given":"Sofien","family":"Bouaziz","sequence":"additional","affiliation":[{"name":"Facebook Reality Labs"}]},{"given":"Dan B","family":"Goldman","sequence":"additional","affiliation":[{"name":"Google Research"}]},{"given":"Ricardo","family":"Martin-Brualla","sequence":"additional","affiliation":[{"name":"Google Research"}]},{"given":"Steven M.","family":"Seitz","sequence":"additional","affiliation":[{"name":"University of Washington"}]}],"member":"320","published-online":{"date-parts":[[2021,12,10]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Neural point-based graphics. arXiv:1906.08240","author":"Aliev Kara-Ali","year":"2019","unstructured":"Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, and Victor Lempitsky. 2019. Neural point-based graphics. arXiv:1906.08240 (2019)."},{"key":"e_1_2_2_2_1","volume-title":"Srinivasan","author":"Barron Jonathan T.","year":"2021","unstructured":"Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. 2021. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. arXiv:2103.13415 [cs.CV]"},{"key":"e_1_2_2_3_1","volume-title":"Optimizing the Latent Space of Generative Networks. ICML","author":"Bojanowski Piotr","year":"2018","unstructured":"Piotr Bojanowski, Armand Joulin, David Lopez-Pas, and Arthur Szlam. 2018. Optimizing the Latent Space of Generative Networks. ICML (2018)."},{"key":"e_1_2_2_4_1","volume-title":"Neural Non-Rigid Tracking. arXiv preprint arXiv:2006.13240","author":"Bo\u017ei\u010d Alja\u017e","year":"2020","unstructured":"Alja\u017e Bo\u017ei\u010d, Pablo Palafox, Michael Zollh\u00f6fer, Angela Dai, Justus Thies, and Matthias Nie\u00dfner. 2020. Neural Non-Rigid Tracking. arXiv preprint arXiv:2006.13240 (2020)."},{"key":"e_1_2_2_5_1","volume-title":"Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang.","author":"Bradbury James","year":"2018","unstructured":"James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018. JAX: composable transformations of Python+NumPy programs. http:\/\/github.com\/google\/jax"},{"key":"e_1_2_2_6_1","volume-title":"Recovering non-rigid 3D shape from image streams. CVPR","author":"Bregler Christoph","year":"2000","unstructured":"Christoph Bregler, Aaron Hertzmann, and Henning Biermann. 2000. Recovering non-rigid 3D shape from image streams. CVPR (2000)."},{"key":"e_1_2_2_7_1","doi-asserted-by":"crossref","unstructured":"Zhiqin Chen and Hao Zhang. 2019. Learning implicit fields for generative shape modeling. In CVPR. 5939--5948.","DOI":"10.1109\/CVPR.2019.00609"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766945"},{"key":"e_1_2_2_9_1","volume-title":"Srinivasan","author":"Deng Boyang","year":"2020","unstructured":"Boyang Deng, Jonathan T. Barron, and Pratul P. Srinivasan. 2020. JaxNeRF: an efficient JAX implementation of NeRF. https:\/\/github.com\/google-research\/google-research\/tree\/master\/jaxnerf"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925969"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323028"},{"key":"e_1_2_2_12_1","doi-asserted-by":"crossref","unstructured":"Guy Gafni Justus Thies Michael Zollh\u00f6fer and Matthias Nie\u00dfner. 
2021. Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction. In CVPR. 8649--8658.","DOI":"10.1109\/CVPR46437.2021.00854"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.5555\/2969033.2969125"},{"key":"e_1_2_2_14_1","unstructured":"Amir Hertz Or Perel Raja Giryes Olga Sorkine-Hornung and Daniel Cohen-Or. 2021. Progressive Encoding for Neural Optimization. arXiv:2104.09125 [cs.LG]"},{"key":"e_1_2_2_15_1","doi-asserted-by":"crossref","unstructured":"Phillip Isola Jun-Yan Zhu Tinghui Zhou and Alexei A Efros. 2017. Image-to-image translation with conditional adversarial networks. In CVPR.","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.5555\/3327757.3327948"},{"key":"e_1_2_2_17_1","unstructured":"Chiyu Jiang Jingwei Huang Andrea Tagliasacchi Leonidas Guibas et al. 2020. Shape-Flow: Learnable Deformations Among 3D Shapes. arXiv preprint arXiv:2006.07982 (2020)."},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201283"},{"key":"e_1_2_2_19_1","volume-title":"Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114","author":"Kingma Diederik P","year":"2013","unstructured":"Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)."},{"key":"e_1_2_2_20_1","unstructured":"Tianye Li Mira Slavcheva Michael Zollhoefer Simon Green Christoph Lassner Changil Kim Tanner Schmidt Steven Lovegrove Michael Goesele and Zhaoyang Lv. 2021. Neural 3D Video Synthesis. arXiv:2103.02597 [cs.CV]"},{"key":"e_1_2_2_21_1","volume-title":"Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. arXiv preprint arXiv:2011.13084","author":"Li Zhengqi","year":"2020","unstructured":"Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. 2020. Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. arXiv preprint arXiv:2011.13084 (2020)."},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323020"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275099"},{"key":"e_1_2_2_24_1","volume-title":"NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. CVPR","author":"Martin-Brualla Ricardo","year":"2021","unstructured":"Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. 2021. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. 
CVPR (2021)."},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323027"},{"key":"e_1_2_2_26_1","doi-asserted-by":"crossref","unstructured":"Lars Mescheder Michael Oechsle Michael Niemeyer Sebastian Nowozin and Andreas Geiger. 2019. Occupancy networks: Learning 3D reconstruction in function space. In CVPR.","DOI":"10.1109\/CVPR.2019.00459"},{"key":"e_1_2_2_27_1","doi-asserted-by":"crossref","unstructured":"Moustafa Meshry Dan B Goldman Sameh Khamis Hugues Hoppe Rohit Pandey Noah Snavely and Ricardo Martin-Brualla. 2019. Neural rerendering in the wild. In CVPR.","DOI":"10.1109\/CVPR.2019.00704"},{"key":"e_1_2_2_28_1","volume-title":"NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV","author":"Mildenhall Ben","year":"2020","unstructured":"Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV (2020)."},{"key":"e_1_2_2_29_1","volume-title":"DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. CVPR","author":"Newcombe Richard A","year":"2015","unstructured":"Richard A Newcombe, Dieter Fox, and Steven M Seitz. 2015. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. CVPR (2015)."},{"key":"e_1_2_2_30_1","volume-title":"Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics. ICCV","author":"Niemeyer Michael","year":"2019","unstructured":"Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. 2019. Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics. ICCV (2019)."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1016\/0021-9991(88)90002-2"},{"key":"e_1_2_2_32_1","unstructured":"Jeong Joon Park Peter Florence Julian Straub Richard Newcombe and Steven Lovegrove. 2019. DeepSDF: Learning continuous signed distance functions for shape representation. In CVPR."},{"key":"e_1_2_2_33_1","volume-title":"Nerfies: Deformable Neural Radiance Fields. arXiv preprint arXiv:2011.12948.","author":"Park Keunhong","year":"2020","unstructured":"Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. 2020. Nerfies: Deformable Neural Radiance Fields. arXiv preprint arXiv:2011.12948."},{"key":"e_1_2_2_34_1","volume-title":"D-NeRF: Neural Radiance Fields for Dynamic Scenes. arXiv preprint arXiv:2011.13961","author":"Pumarola Albert","year":"2020","unstructured":"Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. 2020. D-NeRF: Neural Radiance Fields for Dynamic Scenes. arXiv preprint arXiv:2011.13961 (2020)."},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10514-015-9462-z"},{"key":"e_1_2_2_36_1","doi-asserted-by":"crossref","unstructured":"Johannes Lutz Sch\u00f6nberger and Jan-Michael Frahm. 2016. Structure-from-Motion Revisited. CVPR (2016).","DOI":"10.1109\/CVPR.2016.445"},{"key":"e_1_2_2_37_1","doi-asserted-by":"crossref","unstructured":"Vincent Sitzmann Justus Thies Felix Heide Matthias Nie\u00dfner Gordon Wetzstein and Michael Zollh\u00f6fer. 2019a. DeepVoxels: Learning Persistent 3D Feature Embeddings. In CVPR.","DOI":"10.1109\/CVPR.2019.00254"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.5555\/3454287.3454388"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323008"},{"key":"e_1_2_2_40_1","volume-title":"Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. NeurIPS","author":"Tancik Matthew","year":"2020","unstructured":"Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. 2020. Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. 
NeurIPS (2020)."},{"key":"e_1_2_2_41_1","doi-asserted-by":"crossref","unstructured":"A. Tewari O. Fried J. Thies V. Sitzmann S. Lombardi K. Sunkavalli R. Martin-Brualla T. Simon J. Saragih M. Nie\u00dfner R. Pandey S. Fanello G. Wetzstein J.-Y. Zhu C. Theobalt M. Agrawala E. Shechtman D. B Goldman and M. Zollh\u00f6fer. 2020. State of the Art on Neural Rendering. Computer Graphics Forum (2020).","DOI":"10.1111\/cgf.14022"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323035"},{"key":"e_1_2_2_43_1","volume-title":"Face2Face: Real-time face capture and reenactment of rgb videos. CVPR","author":"Thies Justus","year":"2016","unstructured":"Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nie\u00dfner. 2016. Face2Face: Real-time face capture and reenactment of rgb videos. CVPR (2016)."},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2007.70752"},{"key":"e_1_2_2_45_1","doi-asserted-by":"crossref","unstructured":"Edgar Tretschk Ayush Tewari Vladislav Golyanik Michael Zollh\u00f6fer Christoph Lassner and Christian Theobalt. 2021. Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video. arXiv:2012.12247 [cs.CV]","DOI":"10.1109\/ICCV48922.2021.01272"},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACSSC.2003.1292216"},{"key":"e_1_2_2_47_1","volume-title":"Space-time Neural Irradiance Fields for Free-Viewpoint Video. arXiv preprint arXiv:2011.12950","author":"Xian Wenqi","year":"2020","unstructured":"Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim. 2020. Space-time Neural Irradiance Fields for Free-Viewpoint Video. arXiv preprint arXiv:2011.12950 (2020)."},{"key":"e_1_2_2_48_1","volume-title":"Multiview neural surface reconstruction by disentangling geometry and appearance. arXiv preprint arXiv:2003.09852","author":"Yariv Lior","year":"2020","unstructured":"Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, and Yaron Lipman. 2020. Multiview neural surface reconstruction by disentangling geometry and appearance. arXiv preprint arXiv:2003.09852 (2020)."},{"key":"e_1_2_2_49_1","volume-title":"Hyun Soo Park, and Jan Kautz","author":"Yoon Jae Shin","year":"2020","unstructured":"Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, and Jan Kautz. 2020. Novel view synthesis of dynamic scenes with globally coherent depths from a monocular camera. In CVPR."},{"key":"e_1_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Richard Zhang Phillip Isola Alexei A Efros Eli Shechtman and Oliver Wang. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR.","DOI":"10.1109\/CVPR.2018.00068"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480487","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3478513.3480487","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:11:48Z","timestamp":1750191108000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3478513.3480487"}},"subtitle":["a higher-dimensional representation for topologically varying neural radiance fields"],"short-title":[],"issued":{"date-parts":[[2021,12]]},"references-count":50,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["10.1145\/3478513.3480487"],"URL":"https:\/\/doi.org\/10.1145\/3478513.3480487","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,12]]},"assertion":[{"value":"2021-12-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}