{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T18:32:35Z","timestamp":1775327555869,"version":"3.50.1"},"reference-count":44,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2020,8,12]],"date-time":"2020-08-12T00:00:00Z","timestamp":1597190400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2020,8,31]]},"abstract":"<jats:p>We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final compressed representation is lightweight and can be rendered on mobile VR\/AR platforms or in a web browser. We demonstrate light field video results using data from the 16-camera rig of [Pozo et al. 2019] as well as a new low-cost hemispherical array made from 46 synchronized action sports cameras. From this data we produce 6 degree of freedom volumetric videos with a wide 70 cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view, at 30 frames per second video frame rates. Advancing over previous work, we show that our system is able to reproduce challenging content such as view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of the camera rig.<\/jats:p>","DOI":"10.1145\/3386569.3392485","type":"journal-article","created":{"date-parts":[[2020,8,12]],"date-time":"2020-08-12T11:44:27Z","timestamp":1597232667000},"update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":219,"title":["Immersive light field video with a layered mesh representation"],"prefix":"10.1145","volume":"39","author":[{"given":"Michael","family":"Broxton","sequence":"first","affiliation":[{"name":"Google"}]},{"given":"John","family":"Flynn","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Ryan","family":"Overbeck","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Daniel","family":"Erickson","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Peter","family":"Hedman","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Matthew","family":"Duvall","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Jason","family":"Dourgarian","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Jay","family":"Busch","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Matt","family":"Whalen","sequence":"additional","affiliation":[{"name":"Google"}]},{"given":"Paul","family":"Debevec","sequence":"additional","affiliation":[{"name":"Google"}]}],"member":"320","published-online":{"date-parts":[[2020,8,12]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980257"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1117\/12.2529137"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/383259.383309"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/166117.166153"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766945"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2012.03009.x"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.5555\/893689"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925969"},{"key":"e_1_2_2_9_1","unstructured":"Draco. 2019. https:\/\/google.github.io\/draco\/"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00247"},{"key":"e_1_2_2_11_1","doi-asserted-by":"crossref","unstructured":"John Flynn Ivan Neulander James Philbin and Noah Snavely. 2016. DeepStereo: Learning to Predict New Views From the World's Imagery. In CVPR.","DOI":"10.1109\/CVPR.2016.595"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1561\/0600000052"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/237170.237200"},{"key":"e_1_2_2_14_1","volume-title":"Multiple View Geometry in Computer Vision (2 ed.)","author":"Hartley Richard","unstructured":"Richard Hartley and Andrew Zisserman. 2003. Multiple View Geometry in Computer Vision (2 ed.). Cambridge University Press, New York, NY, USA."},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130828"},{"key":"e_1_2_2_16_1","volume-title":"Van Gool","author":"Heigl Benno","year":"1999","unstructured":"Benno Heigl, Reinhard Koch, Marc Pollefeys, Joachim Denzler, and Luc J. Van Gool. 1999. Plenoptic Modeling and Rendering from Image Sequences Taken by Hand-Held Camera. In Mustererkennung 1999, 21. DAGM-Symposium. Springer-Verlag, Berlin, Heidelberg, 94--101."},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/PCCGA.1997.626172"},{"key":"e_1_2_2_18_1","unstructured":"Jukka Jyl\u00e4nki. 2010. A thousand ways to pack the bin - a practical approach to two-dimensional rectangle bin packing."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980251"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/237170.237199"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2010.01824.x"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275099"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/218380.218398"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322980"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3089269.3089283"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3226552.3226557"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130855"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356555"},{"key":"e_1_2_2_29_1","volume-title":"Guibas","author":"Qi Charles Ruizhongtai","year":"2017","unstructured":"Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. 2017a. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR."},{"key":"e_1_2_2_30_1","unstructured":"Charles Ruizhongtai Qi Li Yi Hao Su and Leonidas J Guibas. 2017b. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In NIPS."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/280814.280882"},{"key":"e_1_2_2_32_1","unstructured":"Heung-Yeung Shum Shing-Chow Chan and Sing Bing Kang. 2007. Image-Based Rendering (1 ed.). Springer."},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/311535.311573"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00254"},{"key":"e_1_2_2_35_1","volume-title":"Pushing the Boundaries of View Extrapolation with Multiplane Images. CVPR","author":"Srinivasan Pratul P.","year":"2019","unstructured":"Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. 2019. Pushing the Boundaries of View Extrapolation with Multiplane Images. CVPR (2019)."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323035"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601199"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073614"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/1073204.1073259"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1117\/12.451074"},{"key":"e_1_2_2_42_1","doi-asserted-by":"crossref","unstructured":"Richard Zhang Phillip Isola Alexei A Efros Eli Shechtman and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In CVPR.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201323"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015766"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3386569.3392485","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3386569.3392485","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,25]],"date-time":"2025-06-25T05:37:42Z","timestamp":1750829862000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3386569.3392485"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,12]]},"references-count":44,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,8,31]]}},"alternative-id":["10.1145\/3386569.3392485"],"URL":"https:\/\/doi.org\/10.1145\/3386569.3392485","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,8,12]]},"assertion":[{"value":"2020-08-12","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}