{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T06:55:27Z","timestamp":1776063327869,"version":"3.50.1"},"reference-count":67,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2023,7,26]],"date-time":"2023-07-26T00:00:00Z","timestamp":1690329600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"ERC Starting Grant LEGO3D","award":["850533"],"award-info":[{"award-number":["850533"]}]},{"name":"DFG EXC number","award":["2064\/1"],"award-info":[{"award-number":["2064\/1"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:p>Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. 
We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.<\/jats:p>","DOI":"10.1145\/3592426","type":"journal-article","created":{"date-parts":[[2023,7,26]],"date-time":"2023-07-26T14:29:21Z","timestamp":1690381761000},"page":"1-12","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":179,"title":["MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes"],"prefix":"10.1145","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-1050-3958","authenticated-orcid":false,"given":"Christian","family":"Reiser","sequence":"first","affiliation":[{"name":"Google Research, London, United Kingdom"},{"name":"University of T\u00fcbingen, T\u00fcbingen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-5300-5475","authenticated-orcid":false,"given":"Rick","family":"Szeliski","sequence":"additional","affiliation":[{"name":"Google Research, Seattle, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8798-3270","authenticated-orcid":false,"given":"Dor","family":"Verbin","sequence":"additional","affiliation":[{"name":"Google Research, Boston, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-8268-3285","authenticated-orcid":false,"given":"Pratul","family":"Srinivasan","sequence":"additional","affiliation":[{"name":"Google Research, San Francisco, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-9796-6121","authenticated-orcid":false,"given":"Ben","family":"Mildenhall","sequence":"additional","affiliation":[{"name":"Google Research, San Francisco, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8151-3726","authenticated-orcid":false,"given":"Andreas","family":"Geiger","sequence":"additional","affiliation":[{"name":"University of 
T\u00fcbingen, T\u00fcbingen, Germany"},{"name":"T\u00fcbingen AI Center, T\u00fcbingen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-4016-9448","authenticated-orcid":false,"given":"Jon","family":"Barron","sequence":"additional","affiliation":[{"name":"Google Research, San Francisco, United States of America"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2182-0185","authenticated-orcid":false,"given":"Peter","family":"Hedman","sequence":"additional","affiliation":[{"name":"Google Research, London, United Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2023,7,26]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"D.G. Aliaga T. Funkhouser D. Yanovsky and I. Carlbom. 2002. Sea of images. IEEE Visualization (2002)."},{"key":"e_1_2_2_2_1","volume-title":"Learning Neural Light Fields with Ray-Space Embedding Networks. CVPR","author":"Attal Benjamin","year":"2022","unstructured":"Benjamin Attal, Jia-Bin Huang, Michael Zollh\u00f6fer, Johannes Kopf, and Changil Kim. 2022. Learning Neural Light Fields with Ray-Space Embedding Networks. CVPR (2022)."},{"key":"e_1_2_2_3_1","volume-title":"MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images. ECCV","author":"Attal Benjamin","year":"2020","unstructured":"Benjamin Attal, Selena Ling, Aaron Gokaslan, Christian Richardt, and James Tompkin. 2020. MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images. ECCV (2020)."},{"key":"e_1_2_2_4_1","volume-title":"Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. CVPR","author":"Barron Jonathan T.","year":"2022","unstructured":"Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. 2022. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. CVPR (2022)."},{"key":"e_1_2_2_5_1","volume-title":"Estimating or propagating gradients through stochastic neurons for conditional computation. 
arXiv:1308.3432","author":"Bengio Yoshua","year":"2013","unstructured":"Yoshua Bengio, Nicholas L\u00e9onard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432 (2013)."},{"key":"e_1_2_2_6_1","volume-title":"Immersive Light Field Video with a Layered Mesh Representation. ACM Transactions on Graphics","author":"Broxton Michael","year":"2020","unstructured":"Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, and Paul Debevec. 2020. Immersive Light Field Video with a Layered Mesh Representation. ACM Transactions on Graphics (2020)."},{"key":"e_1_2_2_7_1","volume-title":"Unstructured Lumigraph Rendering. SIGGRAPH","author":"Buehler Chris","year":"2001","unstructured":"Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, and Michael Cohen. 2001. Unstructured Lumigraph Rendering. SIGGRAPH (2001)."},{"key":"e_1_2_2_8_1","volume-title":"Real-Time Neural Light Field on Mobile Devices. CVPR","author":"Cao Junli","year":"2023","unstructured":"Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, and Jian Ren. 2023. Real-Time Neural Light Field on Mobile Devices. CVPR (2023)."},{"key":"e_1_2_2_9_1","volume-title":"Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein.","author":"Chan Eric R.","year":"2022","unstructured":"Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. 2022. Efficient Geometry-aware 3D Generative Adversarial Networks. CVPR (2022)."},{"key":"e_1_2_2_10_1","volume-title":"Depth Synthesis and Local Warps for Plausible Image-Based Navigation. 
ACM Transactions on Graphics","author":"Chaurasia Gaurav","year":"2013","unstructured":"Gaurav Chaurasia, Sylvain Duchene, Olga Sorkine-Hornung, and George Drettakis. 2013. Depth Synthesis and Local Warps for Plausible Image-Based Navigation. ACM Transactions on Graphics (2013)."},{"key":"e_1_2_2_11_1","volume-title":"TensoRF: Tensorial Radiance Fields. ECCV","author":"Chen Anpei","year":"2022","unstructured":"Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. 2022. TensoRF: Tensorial Radiance Fields. ECCV (2022)."},{"key":"e_1_2_2_12_1","volume-title":"MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. CVPR","author":"Chen Zhiqin","year":"2023","unstructured":"Zhiqin Chen, Thomas Funkhouser, Peter Hedman, and Andrea Tagliasacchi. 2023. MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. CVPR (2023)."},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.5555\/893689"},{"key":"e_1_2_2_14_1","volume-title":"Compressing Explicit Voxel Grid Representations: Fast NeRFs Become Also Small. WACV","author":"Deng Chenxi Lola","year":"2023","unstructured":"Chenxi Lola Deng and Enzo Tartaglione. 2023. Compressing Explicit Voxel Grid Representations: Fast NeRFs Become Also Small. WACV (2023)."},{"key":"e_1_2_2_15_1","volume-title":"Nitish Srivastava, Graham W. Taylor, and Joshua M. Susskind.","author":"DeVries Terrance","year":"2021","unstructured":"Terrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W. Taylor, and Joshua M. Susskind. 2021. Unconstrained Scene Generation with Locally Conditioned Radiance Fields. ICCV (2021)."},{"key":"e_1_2_2_16_1","volume-title":"DeepView: View synthesis with learned gradient descent. CVPR","author":"Flynn John","year":"2019","unstructured":"John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. 2019. 
DeepView: View synthesis with learned gradient descent. CVPR (2019)."},{"key":"e_1_2_2_17_1","volume-title":"FastNeRF: High-Fidelity Neural Rendering at 200FPS. ICCV","author":"Garbin Stephan J.","year":"2021","unstructured":"Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. 2021. FastNeRF: High-Fidelity Neural Rendering at 200FPS. ICCV (2021)."},{"key":"e_1_2_2_18_1","volume-title":"Cohen","author":"Gortler Steven J.","year":"1996","unstructured":"Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. 1996. The lumigraph. SIGGRAPH (1996)."},{"key":"e_1_2_2_19_1","volume-title":"Deep blending for free-viewpoint image-based rendering. SIGGRAPH Asia","author":"Hedman Peter","year":"2018","unstructured":"Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. 2018. Deep blending for free-viewpoint image-based rendering. SIGGRAPH Asia (2018)."},{"key":"e_1_2_2_20_1","volume-title":"Baking Neural Radiance Fields for Real-Time View Synthesis. ICCV","author":"Hedman Peter","year":"2021","unstructured":"Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul Debevec. 2021. Baking Neural Radiance Fields for Real-Time View Synthesis. ICCV (2021)."},{"key":"e_1_2_2_21_1","volume-title":"ReLU fields: The little non-linearity that could. SIGGRAPH","author":"Karnewar Animesh","year":"2022","unstructured":"Animesh Karnewar, Tobias Ritschel, Oliver Wang, and Niloy Mitra. 2022. ReLU fields: The little non-linearity that could. SIGGRAPH (2022)."},{"key":"e_1_2_2_22_1","volume-title":"Neural Lumigraph Rendering. CVPR","author":"Kellnhofer Petr","year":"2021","unstructured":"Petr Kellnhofer, Lars Jebe, Andrew Jones, Ryan Spicer, Kari Pulli, and Gordon Wetzstein. 2021. Neural Lumigraph Rendering. CVPR (2021)."},{"key":"e_1_2_2_23_1","volume-title":"Kingma and Jimmy Ba","author":"Diederik","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. 
Adam: A Method for Stochastic Optimization. In ICLR."},{"key":"e_1_2_2_24_1","volume-title":"AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields. ECCV","author":"Kurz Andreas","year":"2022","unstructured":"Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollh\u00f6fer, and Markus Steinberger. 2022. AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields. ECCV (2022)."},{"key":"e_1_2_2_25_1","volume-title":"Light field rendering. SIGGRAPH","author":"Levoy Marc","year":"1996","unstructured":"Marc Levoy and Pat Hanrahan. 1996. Light field rendering. SIGGRAPH (1996)."},{"key":"e_1_2_2_26_1","volume-title":"Compressing Volumetric Radiance Fields to 1 MB. arXiv:2211.16386","author":"Li Lingzhi","year":"2022","unstructured":"Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, and Liefeng Bo. 2022b. Compressing Volumetric Radiance Fields to 1 MB. arXiv:2211.16386 (2022)."},{"key":"e_1_2_2_27_1","volume-title":"NerfAcc: A General NeRF Acceleration Toolbox. arXiv:2210.04847","author":"Li Ruilong","year":"2022","unstructured":"Ruilong Li, Matthew Tancik, and Angjoo Kanazawa. 2022d. NerfAcc: A General NeRF Acceleration Toolbox. arXiv:2210.04847 (2022)."},{"key":"e_1_2_2_28_1","volume-title":"SteerNeRF: Accelerating NeRF Rendering via Smooth Viewpoint Trajectory. arXiv:2212.08476","author":"Li Sicheng","year":"2022","unstructured":"Sicheng Li, Hao Li, Yue Wang, Yiyi Liao, and Lu Yu. 2022a. SteerNeRF: Accelerating NeRF Rendering via Smooth Viewpoint Trajectory. arXiv:2212.08476 (2022)."},{"key":"e_1_2_2_29_1","volume-title":"NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field. EGSR","author":"Li Zhong","year":"2022","unstructured":"Zhong Li, Liangchen Song, Celong Liu, Junsong Yuan, and Yi Xu. 2022c. NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field. 
EGSR (2022)."},{"key":"e_1_2_2_30_1","volume-title":"Yu-Chiang Frank Wang, and Shenlong Wang","author":"Lin Zhi-Hao","year":"2022","unstructured":"Zhi-Hao Lin, Wei-Chiu Ma, Hao-Yu Hsu, Yu-Chiang Frank Wang, and Shenlong Wang. 2022. NeurMiPs: Neural Mixture of Planar Experts for View Synthesis. CVPR (2022)."},{"key":"e_1_2_2_31_1","volume-title":"AutoInt: Automatic Integration for Fast Neural Rendering. CVPR","author":"Lindell David B.","year":"2021","unstructured":"David B. Lindell, Julien N.P. Martel, and Gordon Wetzstein. 2021. AutoInt: Automatic Integration for Fast Neural Rendering. CVPR (2021)."},{"key":"e_1_2_2_32_1","volume-title":"Tat-Seng Chua, and Christian Theobalt.","author":"Liu Lingjie","year":"2020","unstructured":"Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. 2020. Neural Sparse Voxel Fields. NeurIPS (2020)."},{"key":"e_1_2_2_33_1","volume-title":"NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. CVPR","author":"Martin-Brualla Ricardo","year":"2021","unstructured":"Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. 2021. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. CVPR (2021)."},{"key":"e_1_2_2_34_1","doi-asserted-by":"crossref","unstructured":"Nelson Max. 1995. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics (1995).","DOI":"10.1109\/2945.468400"},{"key":"e_1_2_2_35_1","volume-title":"Ravi Ramamoorthi, Ren Ng, and Abhishek Kar.","author":"Mildenhall Ben","year":"2019","unstructured":"Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. 2019. Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines. 
ACM Transactions on Graphics (2019)."},{"key":"e_1_2_2_36_1","volume-title":"NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV","author":"Mildenhall Ben","year":"2020","unstructured":"Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV (2020)."},{"key":"e_1_2_2_37_1","volume-title":"Instant neural graphics primitives with a multiresolution hash encoding. SIGGRAPH","author":"M\u00fcller Thomas","year":"2022","unstructured":"Thomas M\u00fcller, Alex Evans, Christoph Schied, and Alexander Keller. 2022. Instant neural graphics primitives with a multiresolution hash encoding. SIGGRAPH (2022)."},{"key":"e_1_2_2_38_1","volume-title":"Extracting Triangular 3D Models, Materials, and Lighting From Images. CVPR","author":"Munkberg Jacob","year":"2022","unstructured":"Jacob Munkberg, Wenzheng Chen, Jon Hasselgren, Alex Evans, Tianchang Shen, Thomas M\u00fcller, Jun Gao, and Sanja Fidler. 2022a. Extracting Triangular 3D Models, Materials, and Lighting From Images. CVPR (2022)."},{"key":"e_1_2_2_39_1","volume-title":"Extracting Triangular 3D Models, Materials, and Lighting From Images. CVPR","author":"Munkberg Jacob","year":"2022","unstructured":"Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas M\u00fcller, and Sanja Fidler. 2022b. Extracting Triangular 3D Models, Materials, and Lighting From Images. CVPR (2022)."},{"key":"e_1_2_2_40_1","volume-title":"DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks. Computer Graphics Forum","author":"Neff Thomas","year":"2021","unstructured":"Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, and Markus Steinberger. 2021. DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks. 
Computer Graphics Forum (2021)."},{"key":"e_1_2_2_41_1","volume-title":"Convolutional Occupancy Networks. ECCV","author":"Peng Songyou","year":"2020","unstructured":"Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. 2020. Convolutional Occupancy Networks. ECCV (2020)."},{"key":"e_1_2_2_42_1","volume-title":"TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering. 3DV","author":"Piala Martin","year":"2021","unstructured":"Martin Piala and Ronald Clark. 2021. TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering. 3DV (2021)."},{"key":"e_1_2_2_43_1","volume-title":"KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. ICCV","author":"Reiser Christian","year":"2021","unstructured":"Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. 2021. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. ICCV (2021)."},{"key":"e_1_2_2_44_1","volume-title":"Stable view synthesis. CVPR","author":"Riegler Gernot","year":"2021","unstructured":"Gernot Riegler and Vladlen Koltun. 2021. Stable view synthesis. CVPR (2021)."},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530122"},{"key":"e_1_2_2_46_1","volume-title":"Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering. NeurIPS","author":"Sitzmann Vincent","year":"2021","unstructured":"Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, and Fredo Durand. 2021. Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering. NeurIPS (2021)."},{"key":"e_1_2_2_47_1","volume-title":"Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. CVPR","author":"Sun Cheng","year":"2022","unstructured":"Cheng Sun, Min Sun, and Hwann-Tzong Chen. 2022. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. 
CVPR (2022)."},{"key":"e_1_2_2_48_1","volume-title":"Variable Bitrate Neural Fields. ACM Transactions on Graphics","author":"Takikawa Towaki","year":"2022","unstructured":"Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas M\u00fcller, Morgan McGuire, Alec Jacobson, and Sanja Fidler. 2022. Variable Bitrate Neural Fields. ACM Transactions on Graphics (2022)."},{"key":"e_1_2_2_49_1","volume-title":"Block-NeRF: Scalable Large Scene Neural View Synthesis. CVPR","author":"Tancik Matthew","year":"2022","unstructured":"Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul Srinivasan, Jonathan T. Barron, and Henrik Kretzschmar. 2022. Block-NeRF: Scalable Large Scene Neural View Synthesis. CVPR (2022)."},{"key":"e_1_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Ayush Tewari Justus Thies Ben Mildenhall Pratul Srinivasan Edgar Tretschk W Yifan Christoph Lassner Vincent Sitzmann Ricardo Martin-Brualla Stephen Lombardi et al. 2022. Advances in neural rendering. Computer Graphics Forum (2022).","DOI":"10.1111\/cgf.14507"},{"key":"e_1_2_2_51_1","volume-title":"Mega-NERF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs. CVPR","author":"Turki Haithem","year":"2022","unstructured":"Haithem Turki, Deva Ramanan, and Mahadev Satyanarayanan. 2022. Mega-NERF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs. CVPR (2022)."},{"key":"e_1_2_2_52_1","volume-title":"R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis. ECCV","author":"Wang Huan","year":"2022","unstructured":"Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, and Sergey Tulyakov. 2022b. R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis. ECCV (2022)."},{"key":"e_1_2_2_53_1","unstructured":"Peng Wang Lingjie Liu Yuan Liu Christian Theobalt Taku Komura and Wenping Wang. 2021. 
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. NeurIPS (2021)."},{"key":"e_1_2_2_54_1","volume-title":"Simoncelli","author":"Wang Zhou","year":"2004","unstructured":"Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE TIP (2004)."},{"key":"e_1_2_2_55_1","volume-title":"4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions. arXiv:2212.04701","author":"Wang Zhongshu","year":"2022","unstructured":"Zhongshu Wang, Lingzhi Li, Zhen Shen, Li Shen, and Liefeng Bo. 2022a. 4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions. arXiv:2212.04701 (2022)."},{"key":"e_1_2_2_56_1","volume-title":"NeX: Real-time View Synthesis with Neural Basis Expansion. CVPR","author":"Wizadwongsa Suttisak","year":"2021","unstructured":"Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn. 2021. NeX: Real-time View Synthesis with Neural Basis Expansion. CVPR (2021)."},{"key":"e_1_2_2_57_1","volume-title":"Anand Bhattad, Yuxiong Wang, and David Forsyth.","author":"Wu Liwen","year":"2022","unstructured":"Liwen Wu, Jae Yong Lee, Anand Bhattad, Yuxiong Wang, and David Forsyth. 2022a. DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic Integration for Volume Rendering. CVPR (2022)."},{"key":"e_1_2_2_58_1","volume-title":"Scalable Neural Indoor Scene Rendering. ACM TOG","author":"Wu Xiuchao","year":"2022","unstructured":"Xiuchao Wu, Jiamin Xu, Zihan Zhu, Hujun Bao, Qixing Huang, James Tompkin, and Weiwei Xu. 2022b. Scalable Neural Indoor Scene Rendering. ACM TOG (2022)."},{"key":"e_1_2_2_59_1","unstructured":"Lior Yariv Jiatao Gu Yoni Kasten and Yaron Lipman. 2021. Volume rendering of neural implicit surfaces. NeurIPS (2021)."},{"key":"e_1_2_2_60_1","volume-title":"BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis. 
SIGGRAPH","author":"Yariv Lior","year":"2023","unstructured":"Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, and Ben Mildenhall. 2023. BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis. SIGGRAPH (2023)."},{"key":"e_1_2_2_61_1","volume-title":"Understanding straight-through estimator in training activation quantized neural nets. ICLR","author":"Yin Penghang","year":"2019","unstructured":"Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, and Jack Xin. 2019. Understanding straight-through estimator in training activation quantized neural nets. ICLR (2019)."},{"key":"e_1_2_2_62_1","volume-title":"Plenoxels: Radiance fields without neural networks. CVPR","author":"Yu Alex","year":"2022","unstructured":"Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. 2022. Plenoxels: Radiance fields without neural networks. CVPR (2022)."},{"key":"e_1_2_2_63_1","volume-title":"PlenOctrees for real-time rendering of neural radiance fields. ICCV","author":"Yu Alex","year":"2021","unstructured":"Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. 2021. PlenOctrees for real-time rendering of neural radiance fields. ICCV (2021)."},{"key":"e_1_2_2_64_1","volume-title":"Digging into Radiance Grid for Real-Time View Synthesis with Detail Preservation. ECCV","author":"Zhang Jian","year":"2022","unstructured":"Jian Zhang, Jinchi Huang, Bowen Cai, Huan Fu, Mingming Gong, Chaohui Wang, Jiaming Wang, Hongchen Luo, Rongfei Jia, Binqiang Zhao, and Xing Tang. 2022. Digging into Radiance Grid for Real-Time View Synthesis with Detail Preservation. ECCV (2022)."},{"key":"e_1_2_2_65_1","volume-title":"Analyzing and Improving Neural Radiance Fields. arXiv:2010.07492","author":"Zhang Kai","year":"2020","unstructured":"Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. 2020. NeRF++: Analyzing and Improving Neural Radiance Fields. 
arXiv:2010.07492 (2020)."},{"key":"e_1_2_2_66_1","volume-title":"The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CVPR","author":"Zhang Richard","year":"2018","unstructured":"Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CVPR (2018)."},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201323"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3592426","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3592426","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:59Z","timestamp":1750182539000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3592426"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,26]]},"references-count":67,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["10.1145\/3592426"],"URL":"https:\/\/doi.org\/10.1145\/3592426","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,26]]},"assertion":[{"value":"2023-07-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}