{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T17:25:07Z","timestamp":1777569907385,"version":"3.51.4"},"reference-count":95,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T00:00:00Z","timestamp":1731974400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2024,12,19]]},"abstract":"<jats:p>\n            Autostereoscopic display technology, despite decades of development, has not achieved extensive application, primarily due to the daunting challenge of three-dimensional (3D) content creation for non-specialists. The emergence of Radiance Field as an innovative 3D representation has markedly revolutionized the domains of 3D reconstruction and generation, simplifying 3D content creation for common users and broadening the applicability of Light Field Displays (LFDs). However, the combination of these two technologies remains largely unexplored. The standard paradigm to create optimal content for parallax-based light field displays demands rendering at least 45 slightly shifted views preferably at high resolution per frame, a substantial hurdle for real-time rendering. We introduce DirectL, a novel rendering paradigm for Radiance Fields on autostereoscopic displays with lenticular lens. By thoroughly analyzing the interleaved mapping of spatial rays to screen sub-pixels, we accurately render only the light rays entering the human eye and propose subpixel repurposing to significantly reduce the pixel count required for rendering. 
Tailored to the two predominant radiance field representations---Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS)---we propose corresponding optimized rendering pipelines that directly render light field images instead of multi-view images, achieving state-of-the-art rendering speeds on autostereoscopic displays. Extensive experiments across various autostereoscopic displays and user visual perception assessments demonstrate that DirectL accelerates rendering by up to 40 times compared to the standard paradigm without sacrificing visual quality. Because DirectL modifies only the rendering process, it integrates seamlessly into subsequent radiance field tasks. Finally, we incorporate DirectL into diverse applications, showcasing striking visual experiences and the synergy between Light Field Displays and Radiance Fields, revealing their immense application potential.\n            <jats:bold>DirectL Project Homepage: direct-l.github.io<\/jats:bold>\n          <\/jats:p>","DOI":"10.1145\/3687897","type":"journal-article","created":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T15:46:04Z","timestamp":1732031164000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":14,"title":["DirectL: Efficient Radiance Fields Rendering for 3D Light Field Displays"],"prefix":"10.1145","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2067-1875","authenticated-orcid":false,"given":"Zongyuan","family":"Yang","sequence":"first","affiliation":[{"name":"State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-1313-7486","authenticated-orcid":false,"given":"Baolin","family":"Liu","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-3776-237X","authenticated-orcid":false,"given":"Yingde","family":"Song","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6606-1917","authenticated-orcid":false,"given":"Lan","family":"Yi","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5333-204X","authenticated-orcid":false,"given":"Yongping","family":"Xiong","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-1717-953X","authenticated-orcid":false,"given":"Zhaohe","family":"Zhang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-5096-238X","authenticated-orcid":false,"given":"Xunbo","family":"Yu","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,11,19]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"crossref","first-page":"36","DOI":"10.3390\/ma9010036","article-title":"Liquid crystal microlenses for autostereoscopic displays","volume":"9","author":"Algorri Jos\u00e9 Francisco","year":"2016","unstructured":"Jos\u00e9 Francisco Algorri, Virginia Urruchi, Braulio Garc\u00eda-C\u00e1mara, and Jos\u00e9 M S\u00e1nchez-Pena. 2016. Liquid crystal microlenses for autostereoscopic displays. 
Materials 9, 1 (2016), 36.","journal-title":"Materials"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01920"},{"key":"e_1_2_1_3_1","unstructured":"Sherwin Bahmani Ivan Skorokhodov Aliaksandr Siarohin Willi Menapace Guocheng Qian Michael Vasilkovsky Hsin-Ying Lee Chaoyang Wang Jiaxu Zou Andrea Tagliasacchi et al. 2024. VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control. arXiv preprint arXiv:2407.12781 (2024)."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00539"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00805"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19824-3_20"},{"key":"e_1_2_1_7_1","volume-title":"A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890","author":"Chen Guikun","year":"2024","unstructured":"Guikun Chen and Wenguan Wang. 2024. A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890 (2024)."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01590"},{"key":"e_1_2_1_9_1","volume-title":"DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos. arXiv preprint arXiv:2405.02280","author":"Chu Wen-Hsuan","year":"2024","unstructured":"Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. 2024. DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos. arXiv preprint arXiv:2405.02280 (2024)."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2005.252"},{"key":"e_1_2_1_11_1","volume-title":"SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration. arXiv preprint arXiv:2312.07541","author":"Duckworth Daniel","year":"2023","unstructured":"Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-Fran\u00e7ois Thibert, Mario Lu\u010di\u0107, Richard Szeliski, and Jonathan T Barron. 2023. 
SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration. arXiv preprint arXiv:2312.07541 (2023)."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3658193"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925921"},{"key":"e_1_2_1_14_1","volume-title":"Lightgaussian: Unbounded 3d gaussian compression with 15x reduction and 200+ fps. arXiv preprint arXiv:2311.17245","author":"Fan Zhiwen","year":"2023","unstructured":"Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, and Zhangyang Wang. 2023. Lightgaussian: Unbounded 3d gaussian compression with 15x reduction and 200+ fps. arXiv preprint arXiv:2311.17245 (2023)."},{"key":"e_1_2_1_15_1","volume-title":"3D Gaussian as a New Vision Era: A Survey. arXiv preprint arXiv:2402.07181","author":"Fei Ben","year":"2024","unstructured":"Ben Fei, Jingyi Xu, Rui Zhang, Qingyuan Zhou, Weidong Yang, and Ying He. 2024. 3D Gaussian as a New Vision Era: A Survey. arXiv preprint arXiv:2402.07181 (2024)."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3585498"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00542"},{"key":"e_1_2_1_18_1","volume-title":"CAT3D: Create Anything in 3D with Multi-View Diffusion Models. arXiv preprint arXiv.2405.10314","author":"Gao Ruiqi","year":"2024","unstructured":"Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole. 2024. CAT3D: Create Anything in 3D with Multi-View Diffusion Models. arXiv preprint arXiv.2405.10314 (2024)."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3651300"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3596711.3596760"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1364\/OE.408857"},{"key":"e_1_2_1_22_1","volume-title":"SIGGRAPH Asia 2023 Conference Papers. 
1--11","author":"Gupta Kunal","year":"2023","unstructured":"Kunal Gupta, Milos Hasan, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli, Xin Sun, Manmohan Chandraker, and Sai Bi. 2023. MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs. In SIGGRAPH Asia 2023 Conference Papers. 1--11."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/280814.280884"},{"key":"e_1_2_1_24_1","volume-title":"Assessment of the definition varying with display depth for three-dimensional light field displays. Optics Communications","author":"He Jinhong","year":"2024","unstructured":"Jinhong He, Xunbo Yu, Xin Gao, Binbin Yan, Yixiang Tong, Xinhui Xie, Hui Zhang, Kaixin Shi, Xuanbin Hu, and Xinzhu Sang. 2024. Assessment of the definition varying with display depth for three-dimensional light field displays. Optics Communications (2024), 130623."},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00582"},{"key":"e_1_2_1_26_1","volume-title":"Lrm: Large reconstruction model for single image to 3d. arXiv preprint arXiv:2311.04400","author":"Hong Yicong","year":"2023","unstructured":"Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. 2023. Lrm: Large reconstruction model for single image to 3d. arXiv preprint arXiv:2311.04400 (2023)."},{"key":"e_1_2_1_27_1","volume-title":"2D Gaussian Splatting for Geometrically Accurate Radiance Fields. arXiv preprint arXiv:2403.17888","author":"Huang Binbin","year":"2024","unstructured":"Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2024. 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. arXiv preprint arXiv:2403.17888 (2024)."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/1174429.1174479"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980251"},{"key":"e_1_2_1_30_1","volume-title":"Proceedings of the 5th High-Performance Graphics Conference. 
89--99","author":"Karras Tero","year":"2013","unstructured":"Tero Karras and Timo Aila. 2013. Fast parallel construction of high-quality bounding volume hierarchies. In Proceedings of the 5th High-Performance Graphics Conference. 89--99."},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592433"},{"key":"e_1_2_1_32_1","article-title":"A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets","volume":"44","author":"Kerbl Bernhard","year":"2024","unstructured":"Bernhard Kerbl, Andreas Meuleman, Georgios Kopanas, Michael Wimmer, Alexandre Lanvin, and George Drettakis. 2024. A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets. ACM Transactions on Graphics 44, 3 (2024).","journal-title":"ACM Transactions on Graphics"},{"key":"e_1_2_1_33_1","volume-title":"European Conference on Computer Vision. Springer, 596--614","author":"Keselman Leonid","year":"2022","unstructured":"Leonid Keselman and Martial Hebert. 2022. Approximate differentiable rendering with algebraic surfaces. In European Conference on Computer Vision. Springer, 596--614."},{"key":"e_1_2_1_34_1","first-page":"782","article-title":"Optical film arrangements for electronic device displays","volume":"11","author":"Kim ByoungSuk","year":"2023","unstructured":"ByoungSuk Kim, Yi Huang, Jun Qi, Victor H Yin, Seung Wook Kim, Nicolas V Scapel, and Yi-Pai Huang. 2023. Optical film arrangements for electronic device displays. 
US Patent 11,782,190.","journal-title":"US Patent"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2006.270"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3596711.3596759"},{"key":"e_1_2_1_37_1","doi-asserted-by":"crossref","first-page":"939","DOI":"10.1109\/JDT.2015.2405065","article-title":"3D synthesis and crosstalk reduction for lenticular autostereoscopic displays","volume":"11","author":"Li Dongxiao","year":"2015","unstructured":"Dongxiao Li, Dongning Zang, Xiaotian Qiao, Lianghao Wang, and Ming Zhang. 2015. 3D synthesis and crosstalk reduction for lenticular autostereoscopic displays. Journal of Display Technology 11, 11 (2015), 939--946.","journal-title":"Journal of Display Technology"},{"key":"e_1_2_1_38_1","volume-title":"CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians. arXiv preprint arXiv:2404.01133","author":"Liu Yang","year":"2024","unstructured":"Yang Liu, He Guan, Chuanchen Luo, Lue Fan, Junran Peng, and Zhaoxiang Zhang. 2024. CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians. arXiv preprint arXiv:2404.01133 (2024)."},{"key":"e_1_2_1_39_1","unstructured":"Looking Glass Factory. 2024a. Documentation Looking Glass: Light Field Displays. Web Page. https:\/\/lookingglassfactory.com\/"},{"key":"e_1_2_1_40_1","unstructured":"Looking Glass Factory. 2024b. Looking Glass 65: Big. Bold. Magic. Web Page. https:\/\/lookingglassfactory.com\/looking-glass-65"},{"key":"e_1_2_1_41_1","doi-asserted-by":"crossref","unstructured":"William E Lorensen and Harvey E Cline. 1998. Marching cubes: A high resolution 3D surface construction algorithm. In Seminal graphics: pioneering efforts that shaped the field. 347--353.","DOI":"10.1145\/280811.281026"},{"key":"e_1_2_1_42_1","unstructured":"Adam Marrs Benjamin Watson and Christopher G Healey. 2017. Real-Time View Independent Rasterization for Multi-View Rendering.. In Eurographics (Short Papers). 
17--20."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015805"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/2945.468400"},{"key":"e_1_2_1_45_1","article-title":"Weighted blended order-independent transparency","volume":"2","author":"McGuire Morgan","year":"2013","unstructured":"Morgan McGuire and Louis Bavoil. 2013. Weighted blended order-independent transparency. Journal of Computer Graphics Techniques 2, 4 (2013).","journal-title":"Journal of Computer Graphics Techniques"},{"key":"e_1_2_1_46_1","volume-title":"A survey on bounding","author":"Meister Daniel","unstructured":"Daniel Meister, Shinji Ogaki, Carsten Benthin, Michael J Doyle, Michael Guthe, and Ji\u0159\u00ed Bittner. 2021. A survey on bounding volume hierarchies for ray tracing. In Computer Graphics Forum, Vol. 40. Wiley Online Library, 683--712."},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3503250"},{"key":"e_1_2_1_48_1","volume-title":"Gavriel State, Sanja Fidler, Nicholas Sharp, and Zan Gojcic.","author":"Moenne-Loccoz Nicolas","year":"2024","unstructured":"Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, and Zan Gojcic. 2024. 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes. arXiv preprint arXiv:2407.07090 (2024)."},{"key":"e_1_2_1_49_1","volume-title":"Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG) 41, 4","author":"M\u00fcller Thomas","year":"2022","unstructured":"Thomas M\u00fcller, Alex Evans, Christoph Schied, and Alexander Keller. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG) 41, 4 (2022), 1--15."},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2017.2686445"},{"key":"e_1_2_1_52_1","volume-title":"Compressed 3d gaussian splatting for accelerated novel view synthesis. 
arXiv preprint arXiv:2401.02436","author":"Niedermayr Simon","year":"2023","unstructured":"Simon Niedermayr, Josef Stumpfegger, and R\u00fcdiger Westermann. 2023. Compressed 3d gaussian splatting for accelerated novel view synthesis. arXiv preprint arXiv:2401.02436 (2023)."},{"key":"e_1_2_1_53_1","first-page":"44","article-title":"Multi-view Rendering using GPU for 3-D Displays","volume":"1","author":"Nozick Vincent","year":"2010","unstructured":"Vincent Nozick, Fran\u00e7ois de Sorbier, and Hideo Saito. 2010. Multi-view Rendering using GPU for 3-D Displays. GSTF Journal on Computing 1, 1 (2010), 44--49.","journal-title":"GSTF Journal on Computing"},{"key":"e_1_2_1_54_1","volume-title":"Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228","author":"Park Keunhong","year":"2021","unstructured":"Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M Seitz. 2021. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228 (2021)."},{"key":"e_1_2_1_55_1","volume-title":"Austin Robison, et al","author":"Parker Steven G","year":"2010","unstructured":"Steven G Parker, James Bigler, Andreas Dietrich, Heiko Friedrich, Jared Hoberock, David Luebke, David McAllister, Morgan McGuire, Keith Morley, Austin Robison, et al. 2010. Optix: a general purpose ray tracing engine. Acm transactions on graphics (tog) 29, 4 (2010), 1--13."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/344779.344933"},{"key":"e_1_2_1_57_1","volume-title":"Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952","author":"Podell Dustin","year":"2023","unstructured":"Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas M\u00fcller, Joe Penna, and Robin Rombach. 2023. 
Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)."},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592426"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_2_1_60_1","volume-title":"Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation. arXiv preprint arXiv:2403.12015","author":"Sauer Axel","year":"2024","unstructured":"Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rombach. 2024. Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation. arXiv preprint arXiv:2403.12015 (2024)."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01596"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.displa.2022.102320"},{"key":"e_1_2_1_63_1","first-page":"6087","article-title":"Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis","volume":"34","author":"Shen Tianchang","year":"2021","unstructured":"Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, and Sanja Fidler. 2021. Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis. Advances in Neural Information Processing Systems 34 (2021), 6087--6101.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130832"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588037.3595385"},{"key":"e_1_2_1_66_1","volume-title":"Nintendo 3DS Draws Gamers Looking for 'Glasses-Free' 3D Console. The Guardian (March","author":"Stuart Keith","year":"2011","unstructured":"Keith Stuart. 2011. Nintendo 3DS Draws Gamers Looking for 'Glasses-Free' 3D Console. The Guardian (March 2011), 1. 
https:\/\/www.theguardian.com\/technology\/2011\/mar\/24\/nintendo-3ds-gamers-3d-console"},{"key":"e_1_2_1_67_1","volume-title":"LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. arXiv preprint arXiv:2402.05054","author":"Tang Jiaxiang","year":"2024","unstructured":"Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. 2024. LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. arXiv preprint arXiv:2402.05054 (2024)."},{"key":"e_1_2_1_68_1","volume-title":"Triposr: Fast 3d object reconstruction from a single image. arXiv preprint arXiv:2403.02151","author":"Tochilkin Dmitry","year":"2024","unstructured":"Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao. 2024. Triposr: Fast 3d object reconstruction from a single image. arXiv preprint arXiv:2403.02151 (2024)."},{"key":"e_1_2_1_69_1","unstructured":"Hanzhang Tu Ruizhi Shao Xue Dong Shunyuan Zheng Hao Zhang Lili Chen Meili Wang Wenyu Li Siyan Ma Shengping Zhang et al. 2024. Tele-Aloha: A Low-budget and High-authenticity Telepresence System Using Sparse RGB Cameras. arXiv preprint arXiv:2405.14866 (2024)."},{"key":"e_1_2_1_70_1","unstructured":"Johannes Unterguggenberger Bernhard Kerbl Markus Steinberger Dieter Schmalstieg and Michael Wimmer. 2020. Fast Multi-View Rendering for Real-Time Applications.. In EGPGV@ Eurographics\/EuroVis. 13--23."},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2010.2098351"},{"key":"e_1_2_1_72_1","volume-title":"Stereoscopic Displays and Virtual Reality Systems VI","author":"Berkel Cees Van","unstructured":"Cees Van Berkel. 1999. Image preparation for 3D LCD. In Stereoscopic Displays and Virtual Reality Systems VI, Vol. 3639. SPIE, 84--91."},{"key":"e_1_2_1_73_1","volume-title":"Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis. 
arXiv preprint arXiv:2405.14868","author":"Hoorick Basile Van","year":"2024","unstructured":"Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick. 2024. Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis. arXiv preprint arXiv:2405.14868 (2024)."},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/RT.2007.4342588"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/1189762.1206075"},{"key":"e_1_2_1_76_1","volume-title":"European Conference on Computer Vision. Springer, 612--629","author":"Wang Huan","year":"2022","unstructured":"Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, and Sergey Tulyakov. 2022. R2l: Distilling neural radiance field to neural light field for efficient novel view synthesis. In European Conference on Computer Vision. Springer, 612--629."},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1117\/1.JEI.21.4.040902"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2017.2747126"},{"key":"e_1_2_1_79_1","volume-title":"4d gaussian splatting for real-time dynamic scene rendering. arXiv preprint arXiv:2310.08528","author":"Wu Guanjun","year":"2023","unstructured":"Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 2023. 4d gaussian splatting for real-time dynamic scene rendering. arXiv preprint arXiv:2310.08528 (2023)."},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00435"},{"key":"e_1_2_1_81_1","volume-title":"CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation. arXiv preprint arXiv:2406.02509","author":"Xu Dejia","year":"2024","unstructured":"Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, and Arash Vahdat. 2024a. CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation. 
arXiv preprint arXiv:2406.02509 (2024)."},{"key":"e_1_2_1_82_1","volume-title":"Gaussian head avatar: Ultra high-fidelity head avatar via dynamic gaussians. arXiv preprint arXiv:2312.03029","author":"Xu Yuelang","year":"2023","unstructured":"Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, and Yebin Liu. 2023a. Gaussian head avatar: Ultra high-fidelity head avatar via dynamic gaussians. arXiv preprint arXiv:2312.03029 (2023)."},{"key":"e_1_2_1_83_1","volume-title":"Grm: Large gaussian reconstruction model for efficient 3d reconstruction and generation. arXiv preprint arXiv:2403.14621","author":"Xu Yinghao","year":"2024","unstructured":"Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen, and Gordon Wetzstein. 2024b. Grm: Large gaussian reconstruction model for efficient 3d reconstruction and generation. arXiv preprint arXiv:2403.14621 (2024)."},{"key":"e_1_2_1_84_1","volume-title":"4k4d: Real-time 4d view synthesis at 4k resolution. arXiv preprint arXiv:2310.11448","author":"Xu Zhen","year":"2023","unstructured":"Zhen Xu, Sida Peng, Haotong Lin, Guangzhao He, Jiaming Sun, Yujun Shen, Hujun Bao, and Xiaowei Zhou. 2023c. 4k4d: Real-time 4d view synthesis at 4k resolution. arXiv preprint arXiv:2310.11448 (2023)."},{"key":"e_1_2_1_85_1","volume-title":"Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101","author":"Yang Ziyi","year":"2023","unstructured":"Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. 2023. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101 (2023)."},{"key":"e_1_2_1_86_1","doi-asserted-by":"publisher","DOI":"10.1145\/3588432.3591536"},{"key":"e_1_2_1_87_1","volume-title":"Optics of liquid crystal displays","author":"Yeh Pochi","unstructured":"Pochi Yeh and Claire Gu. 2009. Optics of liquid crystal displays. Vol. 67. 
John Wiley & Sons."},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1145\/3105762.3105773"},{"key":"e_1_2_1_89_1","volume-title":"ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis. arXiv preprint arXiv:2409.02048","author":"Yu Wangbo","year":"2024","unstructured":"Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, and Yonghong Tian. 2024. ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis. arXiv preprint arXiv:2409.02048 (2024)."},{"key":"e_1_2_1_90_1","doi-asserted-by":"crossref","first-page":"93","DOI":"10.2528\/PIER17060101","article-title":"Illumination optics in emerging naked-eye 3d display (invited review)","volume":"159","author":"Zhang Aiqin","year":"2017","unstructured":"Aiqin Zhang, Jiahui Wang, Yangui Zhou, Haowen Liang, Hang Fan, Kunyang Li, Peter Krebs, and Jianying Zhou. 2017. Illumination optics in emerging naked-eye 3d display (invited review). Progress In Electromagnetics Research 159 (2017), 93--124.","journal-title":"Progress In Electromagnetics Research"},{"key":"e_1_2_1_91_1","volume-title":"GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting. arXiv preprint arXiv:2404.19702","author":"Zhang Kai","year":"2024","unstructured":"Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. 2024. GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting. arXiv preprint arXiv:2404.19702 (2024)."},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1186\/s42492-021-00096-8"},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.5555\/2383894.2383905"},{"key":"e_1_2_1_95_1","doi-asserted-by":"publisher","DOI":"10.1109\/VISUAL.2001.964490"},{"key":"e_1_2_1_96_1","unstructured":"ZX-Real. 2024. ZX-Real Official Website: Light Field Displays. Web Page. 
https:\/\/www.zx-real.com\/lightFieldHolographicScreen"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687897","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3687897","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:17:45Z","timestamp":1750295865000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3687897"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,19]]},"references-count":95,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12,19]]}},"alternative-id":["10.1145\/3687897"],"URL":"https:\/\/doi.org\/10.1145\/3687897","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,19]]},"assertion":[{"value":"2024-11-19","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}