{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,10,5]],"date-time":"2024-10-05T04:28:32Z","timestamp":1728102512308},"reference-count":33,"publisher":"Institute of Electronics, Information and Communications Engineers (IEICE)","issue":"10","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IEICE Trans. Inf. &amp; Syst."],"published-print":{"date-parts":[[2022,10,1]]},"DOI":"10.1587\/transinf.2022pcp0003","type":"journal-article","created":{"date-parts":[[2022,9,30]],"date-time":"2022-09-30T22:34:18Z","timestamp":1664577258000},"page":"1679-1690","source":"Crossref","is-referenced-by-count":4,"title":["Time-Multiplexed Coded Aperture and Coded Focal Stack -Comparative Study on Snapshot Compressive Light Field Imaging"],"prefix":"10.1587","volume":"E105.D","author":[{"given":"Kohei","family":"TATEISHI","sequence":"first","affiliation":[{"name":"Graduate School of Engineering, Nagoya University"}]},{"given":"Chihiro","family":"TSUTAKE","sequence":"additional","affiliation":[{"name":"Graduate School of Engineering, Nagoya University"}]},{"given":"Keita","family":"TAKAHASHI","sequence":"additional","affiliation":[{"name":"Graduate School of Engineering, Nagoya University"}]},{"given":"Toshiaki","family":"FUJII","sequence":"additional","affiliation":[{"name":"Graduate School of Engineering, Nagoya University"}]}],"member":"532","reference":[{"key":"1","doi-asserted-by":"crossref","unstructured":"[1] E.H. Adelson and J.R. Bergen, \u201cThe plenoptic function and the elements of early vision,\u201d Computational Models of Visual Processing, pp.3-20, 1991. 10.7551\/mitpress\/2002.003.0004","DOI":"10.7551\/mitpress\/2002.003.0004"},{"key":"2","doi-asserted-by":"crossref","unstructured":"[2] S.J. Gortler, R. Grzeszczuk, R. Szeliski, and M.F. Cohen, \u201cThe lumigraph,\u201d Proc. 23th Annual Conf. Computer Graphics and Interactive Techniques, pp.43-54, Aug. 1996. 10.1145\/237170.237200","DOI":"10.1145\/237170.237200"},{"key":"3","doi-asserted-by":"crossref","unstructured":"[3] A. Isaksen, L. McMillan, and S.J. Gortler, \u201cDynamically reparameterized light fields,\u201d Proc. 27th Annual Conf. Computer Graphics and Interactive Techniques, pp.297-306, July 2000. 10.1145\/344779.344929","DOI":"10.1145\/344779.344929"},{"key":"4","unstructured":"[4] R. Ng, M. Levoy, M. Br\u00e9dif, G. Duval, M. Horowitz, and P. Hanrahan, \u201cLight field photography with a hand-held plenoptic camera,\u201d Computer Science Technical Report, vol.2, no.11, pp.1-11, 2005."},{"key":"5","doi-asserted-by":"publisher","unstructured":"[5] B. Mildenhall, P.P. Srinivasan, R. Ortiz-Cayon, N.K. Kalantari, R. Ramamoorthi, R. Ng, and A. Kar, \u201cLocal light field fusion: Practical view synthesis with prescriptive sampling guidelines,\u201d ACM TOG, vol.38, no.4, pp.1-14, Aug. 2019. 10.1145\/3306346.3322980","DOI":"10.1145\/3306346.3322980"},{"key":"6","doi-asserted-by":"publisher","unstructured":"[6] T.C. Wang, A.A. Efros, and R. Ramamoorthi, \u201cDepth estimation with occlusion modeling using light-field cameras,\u201d IEEE Trans. PAMI, vol.38, no.11, pp.2170-2181, Nov. 2016. 10.1109\/TPAMI.2016.2515615","DOI":"10.1109\/TPAMI.2016.2515615"},{"key":"7","doi-asserted-by":"crossref","unstructured":"[7] C. Shin, H. Jeon, Y. Yoon, I. Kweon, and S. Kim, \u201cEpinet: A fully-convolutional neural network using epipolar geometry for depth from light field images,\u201d IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.4748-4757, 2018. 
10.1109\/CVPR.2018.00499","DOI":"10.1109\/CVPR.2018.00499"},{"key":"8","doi-asserted-by":"publisher","unstructured":"[8] G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, \u201cTensor displays: Compressive light field synthesis using multilayer displays with directional backlighting,\u201d ACM TOG, vol.31, no.4, pp.1-11, July 2012. 10.1145\/2185520.2185576","DOI":"10.1145\/2185520.2185576"},{"key":"9","doi-asserted-by":"publisher","unstructured":"[9] S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, \u201cAdditive light field displays: realization of augmented reality with holographic optical elements,\u201d ACM TOG, vol.35, no.4, Article No. 60, July 2016. 10.1145\/2897824.2925971","DOI":"10.1145\/2897824.2925971"},{"key":"10","doi-asserted-by":"publisher","unstructured":"[10] B. Wilburn, N. Joshi, V. Vaish, E.V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, \u201cHigh performance imaging using large camera arrays,\u201d ACM TOG, vol.24, no.3, pp.765-776, July 2005. 10.1145\/1073204.1073259","DOI":"10.1145\/1073204.1073259"},{"key":"11","doi-asserted-by":"crossref","unstructured":"[11] T. Fujii, K. Mori, K. Takeda, K. Mase, M. Tanimoto, and Y. Suenaga, \u201cMultipoint measuring system for video and sound-100-camera and microphone system,\u201d IEEE Int. Conf. Multimedia and Expo (ICME), pp.437-440, 2006. 10.1109\/ICME.2006.262566","DOI":"10.1109\/ICME.2006.262566"},{"key":"12","doi-asserted-by":"crossref","unstructured":"[12] M. Levoy and P. Hanrahan, \u201cLight field rendering,\u201d Proc. 23th Annual Conf. Computer Graphics and Interactive Techniques, pp.31-42, Aug. 1996. 10.1145\/237170.237199","DOI":"10.1145\/237170.237199"},{"key":"13","doi-asserted-by":"crossref","unstructured":"[13] H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro, and S.K. Nayar, \u201cProgrammable aperture camera using LCoS,\u201d European Conf. Comput. Vis. (ECCV), pp.337-350, 2010. 10.1007\/978-3-642-15567-3_25","DOI":"10.1007\/978-3-642-15567-3_25"},{"key":"14","doi-asserted-by":"crossref","unstructured":"[14] Y. Inagaki, Y. Kobayashi, K. Takahashi, T. Fujii, and H. Nagahara, \u201cLearning to capture light fields through a coded aperture camera,\u201d European Conf. Comput. Vis. (ECCV), pp.431-448, 2018. 10.1007\/978-3-030-01234-2_26","DOI":"10.1007\/978-3-030-01234-2_26"},{"key":"15","doi-asserted-by":"publisher","unstructured":"[15] A.K. Vadathya, S. Girish, and K. Mitra, \u201cA unified-learning based framework for light field reconstruction from coded projections,\u201d IEEE Trans. Comput. Imag., pp.304-316, 2019. 10.1109\/TCI.2019.2948780","DOI":"10.1109\/TCI.2019.2948780"},{"key":"16","doi-asserted-by":"crossref","unstructured":"[16] M. Guo, J. Hou, J. Jin, J. Chen, and L.P. Chau, \u201cDeep spatial-angular regularization for compressive light field reconstruction over coded apertures,\u201d European Conf. Comput. Vis. (ECCV), pp.278-294, 2020. 10.1007\/978-3-030-58536-5_17","DOI":"10.1007\/978-3-030-58536-5_17"},{"key":"17","doi-asserted-by":"publisher","unstructured":"[17] Y. Inagaki, K. Takahashi, and T. Fujii, \u201cLight field acquisition from focal stack via a deep CNN,\u201d International Display Workshop (IDW), pp.1077-1080, 2019. 10.36463\/idw.2019.1077","DOI":"10.36463\/idw.2019.1077"},{"key":"18","doi-asserted-by":"publisher","unstructured":"[18] K. Takahashi, Y. Kobayashi, and T. Fujii, \u201cFrom focal stack to tensor light-field display,\u201d IEEE Trans. Image Process., vol.27, no.9, pp.4571-4584, Sept. 2018. 
10.1109\/TIP.2018.2839263","DOI":"10.1109\/TIP.2018.2839263"},{"key":"19","doi-asserted-by":"crossref","unstructured":"[19] K. Tateishi, K. Sakai, C. Tsutake, K. Takahashi, and T. Fujii, \u201cFactorized modulation for singleshot lightfield acquisition,\u201d IEEE Int. Conf. Image Process. (ICIP), pp.3253-3257, 2021. 10.1109\/ICIP42928.2021.9506797","DOI":"10.1109\/ICIP42928.2021.9506797"},{"key":"20","doi-asserted-by":"crossref","unstructured":"[20] E. Vargas, J.N. Martel, G. Wetzstein, and H. Arguello, \u201cTime-multiplexed coded aperture imaging: Learned coded aperture and pixel exposures for compressive imaging systems,\u201d IEEE Int. Conf. Comput. Vis. (ICCV), pp.2692-2702, 2021. 10.1109\/ICCV48922.2021.00269","DOI":"10.1109\/ICCV48922.2021.00269"},{"key":"21","doi-asserted-by":"crossref","unstructured":"[21] X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, \u201cCoded focal stack photography,\u201d IEEE Int. Conf. Computational Photography (ICCP), pp.1-9, 2013. 10.1109\/ICCPhot.2013.6528297","DOI":"10.1109\/ICCPhot.2013.6528297"},{"key":"22","doi-asserted-by":"publisher","unstructured":"[22] D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mitsunaga, and S.K. Nayar, \u201cEfficient space-time sampling with pixel-wise coded exposure for high-speed imaging,\u201d IEEE Trans. PAMI, vol.36, no.2, pp.248-260, 2013. 10.1109\/TPAMI.2013.129","DOI":"10.1109\/TPAMI.2013.129"},{"key":"23","doi-asserted-by":"publisher","unstructured":"[23] M. Yoshida, T. Sonoda, H. Nagahara, K. Endo, Y. Sugiyama, and R.I. Taniguchi, \u201cHigh-speed imaging using CMOS image sensor with quasi pixel-wise exposure,\u201d IEEE Trans. Comput. Imag., vol.6, pp.463-476, 2019. 10.1109\/TCI.2019.2956885","DOI":"10.1109\/TCI.2019.2956885"},{"key":"24","doi-asserted-by":"crossref","unstructured":"[24] A. Levin and F. Durand, \u201cLinear view synthesis using a dimensionality gap light field prior,\u201d IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp.1831-1838, 2010. 10.1109\/CVPR.2010.5539854","DOI":"10.1109\/CVPR.2010.5539854"},{"key":"25","doi-asserted-by":"publisher","unstructured":"[25] K. Kodama and A. Kubota, \u201cEfficient reconstruction of all-in-focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering,\u201d IEEE Trans. Image Process., vol.22, no.11, pp.4407-4421, Nov. 2013. 10.1109\/TIP.2013.2273668","DOI":"10.1109\/TIP.2013.2273668"},{"key":"26","doi-asserted-by":"publisher","unstructured":"[26] H. Nagahara, S. Kuthirummal, C. Zhou, and S.K. Nayar, \u201cFlexible depth of field photography,\u201d European Conf. Comput. Vis. (ECCV), pp.60-73, 2008. 10.1007\/978-3-540-88693-8_5","DOI":"10.1007\/978-3-540-88693-8_5"},{"key":"27","doi-asserted-by":"crossref","unstructured":"[27] K. Sakai, K. Takahashi, T. Fujii, and H. Nagahara, \u201cAcquiring dynamic light fields through coded aperture camera,\u201d European Conf. Comput. Vis. (ECCV), pp.368-385, 2020. 10.1007\/978-3-030-58529-7_22","DOI":"10.1007\/978-3-030-58529-7_22"},{"key":"28","unstructured":"[28] Computer Graphics Laboratory, Stanford University, \u201cThe (new) stanford light field archive,\u201d 2018. http:\/\/lightfield.stanford.edu."},{"key":"29","unstructured":"[29] MIT Media Lab&apos;s Camera Culture Group, \u201cCompressive light field camera,\u201d 2015. http:\/\/cameraculture.media.mit.edu\/projects\/compressive-light-field-camera\/."},{"key":"30","unstructured":"[30] Heidelberg Collaboratory for Image Processing, \u201cDatasets and benchmarks for densely sampled 4D light fields,\u201d 2016. 
http:\/\/lightfieldgroup.iwr.uni-heidelberg.de\/?page_id=713."},{"key":"31","unstructured":"[31] Heidelberg Collaboratory for Image Processing, \u201c4D light field dataset,\u201d 2018. http:\/\/hci-lightfield.iwr.uni-heidelberg.de\/."},{"key":"32","doi-asserted-by":"crossref","unstructured":"[32] P.P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi, and R. Ng, \u201cLearning to synthesize a 4D RGBD light field from a single image,\u201d European Conf. Comput. Vis. (ECCV), pp.2262-2270, 2017. 10.1109\/ICCV.2017.246","DOI":"10.1109\/ICCV.2017.246"},{"key":"33","doi-asserted-by":"publisher","unstructured":"[33] W. Zhou, E. Zhou, G. Liu, L. Lin, and A. Lumsdaine, \u201cUnsupervised monocular depth estimation from light field image,\u201d IEEE Trans. Image Process., vol.29, pp.1606-1617, 2019. 10.1109\/TIP.2019.2944343","DOI":"10.1109\/TIP.2019.2944343"}],"container-title":["IEICE Transactions on Information and Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E105.D\/10\/E105.D_2022PCP0003\/_pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,5]],"date-time":"2024-10-05T02:12:12Z","timestamp":1728094332000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E105.D\/10\/E105.D_2022PCP0003\/_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,1]]},"references-count":33,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2022]]}},"URL":"https:\/\/doi.org\/10.1587\/transinf.2022pcp0003","relation":{},"ISSN":["0916-8532","1745-1361"],"issn-type":[{"type":"print","value":"0916-8532"},{"type":"electronic","value":"1745-1361"}],"subject":[],"published":{"date-parts":[[2022,10,1]]},"article-number":"2022PCP0003"}}
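
The JSON above is a Crossref work record for the article "Time-Multiplexed Coded Aperture and Coded Focal Stack" (DOI 10.1587/transinf.2022pcp0003), with the work metadata nested under the "message" object. Below is a minimal sketch, assuming the record is the one served by the public Crossref REST API at https://api.crossref.org/works/<DOI>, of how the fields shown above (title, author, volume/issue/page, reference-count) can be fetched and read programmatically; the field names are taken directly from the record.

```python
# Minimal sketch: fetch and read the Crossref work record shown above.
# Assumption: the same record is available from the public Crossref REST API.
import json
from urllib.request import urlopen

DOI = "10.1587/transinf.2022pcp0003"  # DOI taken from the record above
url = f"https://api.crossref.org/works/{DOI}"

with urlopen(url) as resp:
    record = json.load(resp)

msg = record["message"]                 # work metadata lives under "message"
title = msg["title"][0]                 # "title" is a list of strings
authors = [f'{a["given"]} {a["family"]}' for a in msg.get("author", [])]
container = msg["short-container-title"][0]

print(title)
print(", ".join(authors))
print(f'{container}, vol.{msg["volume"]}, no.{msg["issue"]}, pp.{msg["page"]}')
print(f'{msg["reference-count"]} references')
```

The same keys ("reference", "issued", "ISSN", and so on) can be read in the same way; each "reference" entry carries a "key", an optional "DOI", and the deposited "unstructured" citation string seen in the record above.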