{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,18]],"date-time":"2025-12-18T14:20:51Z","timestamp":1766067651530,"version":"build-2065373602"},"reference-count":33,"publisher":"MDPI AG","issue":"23","license":[{"start":{"date-parts":[[2022,12,4]],"date-time":"2022-12-04T00:00:00Z","timestamp":1670112000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Key Research and Development Program of Guangzhou","award":["202007050002"],"award-info":[{"award-number":["202007050002"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Light field images, which carry more structural information than traditional 2D monocular images, are better suited than such images to computer vision tasks such as motion estimation, depth estimation, and object detection. However, because costly data acquisition instruments are difficult to calibrate, light field images of real-world scenes are hard to obtain. Most currently available static light field datasets are modest in size and cannot support methods, such as transformers, that fully leverage local and global correlations. In addition, studies on dynamic situations, such as object tracking and motion estimation based on 4D light field images, have been rare, and we anticipate superior performance there. In this paper, we first propose a new static light field dataset that contains up to 50 scenes, with 8 to 10 perspectives per scene and ground truth including disparities, depths, surface normals, segmentations, and object poses. This dataset is larger in scale than current mainstream datasets for depth estimation refinement, and it focuses on indoor and some outdoor scenarios. 
Second, to provide optical flow ground truth that captures the 3D motion of objects, beyond the ground truth obtained in static scenes, and to enable more precise pixel-level motion estimation, we release a light field scene flow dataset with dense 3D motion ground truth for every pixel; each scene has 150 frames. Third, by utilizing DistgDisp and DistgASR, which decouple the angular and spatial domains of the light field, we perform disparity estimation and angular super-resolution to evaluate our dataset. Experimental results demonstrate the performance and potential of our dataset in disparity estimation and angular super-resolution.<\/jats:p>","DOI":"10.3390\/s22239483","type":"journal-article","created":{"date-parts":[[2022,12,5]],"date-time":"2022-12-05T08:10:57Z","timestamp":1670227857000},"page":"9483","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["A New Parallel Intelligence Based Light Field Dataset for Depth Refinement and Scene Flow Estimation"],"prefix":"10.3390","volume":"22","author":[{"given":"Yu","family":"Shen","sequence":"first","affiliation":[{"name":"The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0299-6887","authenticated-orcid":false,"given":"Yuhang","family":"Liu","sequence":"additional","affiliation":[{"name":"The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, 
China"}]},{"given":"Yonglin","family":"Tian","sequence":"additional","affiliation":[{"name":"The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China"}]},{"given":"Zhongmin","family":"Liu","sequence":"additional","affiliation":[{"name":"North Automatic Control Technology Institute, Taiyuan 030006, China"}]},{"given":"Feiyue","family":"Wang","sequence":"additional","affiliation":[{"name":"The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China"},{"name":"Macao Institute of Systems Engineering, Macau University of Science and Technology, Macao 999078, China"},{"name":"Beijing Engineering Research Center of Intelligent Systems and Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"539","DOI":"10.1109\/JAS.2017.7510841","article-title":"Training and testing object detectors with virtual images","volume":"5","author":"Tian","year":"2018","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_2","unstructured":"Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., and Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems 34, Curran Associates, Inc."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Xu, H., Zhang, J., Cai, J., Rezatofighi, H., and Tao, D. (2022, January 21\u201324). GMFlow: Learning Optical Flow via Global Matching. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00795"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Huang, Z., Hu, X., Xue, Z., Xu, W., and Yue, T. (2021, January 11\u201317). 
Fast Light-field Disparity Estimation with Multi-disparity-scale Cost Aggregation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00626"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Li, Z., Liu, X., Drenkow, N., Ding, A., Creighton, F.X., Taylor, R.H., and Unberath, M. (2021, January 11\u201317). Revisiting stereo depth estimation from a sequence-to-sequence perspective with transformers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00614"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Tian, Y., Wang, X., Shen, Y., Guo, Z., Wang, Z., and Wang, F.-Y. (2021). Parallel Point Clouds: Hybrid Point Cloud Generation and 3D Model Enhancement via Virtual\u2013Real Integration. Remote Sens., 13.","DOI":"10.3390\/rs13152868"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"104336","DOI":"10.1016\/j.imavis.2021.104336","article-title":"Edge supervision and multi-scale cost volume for stereo matching","volume":"117","author":"Yang","year":"2022","journal-title":"Image Vis. Comput."},{"key":"ref_8","unstructured":"Wang, F.-Y. (2018, October 01). Parallel Light Field and Parallel Optics, from Optical Computing Experiment to Optical Guided Intelligence. Available online: http:\/\/www.sklmccs.ia.ac.cn\/2018reports.html."},{"key":"ref_9","first-page":"21","article-title":"Spatio-Angular Resolution Tradeoffs in Integral Photography","volume":"2006","author":"Georgiev","year":"2006","journal-title":"Render. Tech."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Wilburn, B., Joshi, N., Vaish, V., Talvala, E.-V., Antunez, E., Barth, A., Adams, A., Horowitz, M., and Levoy, M. (2005). High performance imaging using large camera arrays. 
ACM SIGGRAPH 2005 Papers, ACM.","DOI":"10.1145\/1186822.1073259"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2461912.2461914","article-title":"Compressive light field photography using overcomplete dictionaries and optimized projections","volume":"32","author":"Marwah","year":"2013","journal-title":"ACM Trans. Graph."},{"key":"ref_12","unstructured":"Ng, R., Levoy, M., Br\u00e9dif, M., Duval, G., Horowitz, M., and Hanrahan, P. (2005). Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR 2(11), Stanford University."},{"key":"ref_13","first-page":"110","article-title":"Parallel light field: The framework and processes","volume":"3","author":"Wang","year":"2021","journal-title":"Chin. J. Intell. Sci. Technol."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"630","DOI":"10.1109\/TITS.2010.2060218","article-title":"Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications","volume":"11","author":"Wang","year":"2010","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"426","DOI":"10.1109\/TIV.2019.2960944","article-title":"A General Approach for Simulating Rain Effects on Sensor Data in Real and Virtual Environments","volume":"5","author":"Hasirlioglu","year":"2019","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_16","unstructured":"Blender Online Community (2016). Blender\u2014A 3D Modelling and Rendering Package, Blender Institute."},{"key":"ref_17","first-page":"1","article-title":"Parallel Light Field: A Perspective and a Framework","volume":"9","author":"Wang","year":"2022","journal-title":"IEEE\/CAA J. Autom. Sin. 
Lett."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"534","DOI":"10.1109\/TIV.2020.2987440","article-title":"Shadow Detection and Removal for Illumination Consistency on the Road","volume":"5","author":"Wang","year":"2020","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1007\/s11263-017-1036-4","article-title":"Baseline and Triangulation Geometry in a Standard Plenoptic Camera","volume":"126","author":"Hahne","year":"2018","journal-title":"Int. J. Comput. Vis."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Honauer, K., Johannsen, O., Kondermann, D., and Goldluecke, B. (2016, January 20\u201324). A dataset and evaluation methodology for depth estimation on 4D light fields. Proceedings of the Asian Conference on Computer Vision, Taipei, Taiwan.","DOI":"10.1007\/978-3-319-54187-7_2"},{"key":"ref_21","unstructured":"Hu, X., Wang, C., Pan, Y., Liu, Y., Wang, Y., Liu, Y., Zhang, L., and Shirmohammadi, S. (October, January 28). 4DLFVD: A 4D Light Field Video Dataset. Proceedings of the 12th ACM Multimedia Systems Conference, Istanbul, Turkey."},{"key":"ref_22","unstructured":"Guillo, L., Jiang, X., Lafruit, G., and Guillemot, C. (2018). Light field video dataset captured by an R8 Raytrix camera (with disparity maps). ISO\/IEC JTC1\/SC29\/WG1 & WG11, International Organisation for Standardisation."},{"key":"ref_23","unstructured":"(2017, May 29). The Stanford Light Field Archive. Available online: http:\/\/lightfield.stanford.edu\/lfs.html."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Wang, Y., Wang, L., Wu, G., Yang, J., An, W., Yu, J., and Guo, Y. (2022). Disentangling light fields for super-resolution and disparity estimation. IEEE Trans. Pattern Anal. Mach. 
Intell.","DOI":"10.1109\/TPAMI.2022.3152488"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1145\/3503250","article-title":"Nerf: Representing scenes as neural radiance fields for view synthesis","volume":"65","author":"Mildenhall","year":"2021","journal-title":"Commun. ACM"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Suhail, M., Esteves, C., Sigal, L., and Makadia, A. (2022, January 21\u201324). Light Field Neural Rendering. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00809"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"690","DOI":"10.1109\/TIV.2020.3049008","article-title":"Vehicle Detection and Disparity Estimation Using Blended Stereo Images","volume":"6","author":"Zhou","year":"2021","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Tsai, Y.J., Liu, Y.L., Ouhyoung, M., and Chuang, Y.Y. (2020, January 7\u201312). Attention-based view selection networks for light-field disparity estimation. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6888"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Jin, J., Hou, J., Yuan, H., and Kwong, S. (2020, January 7\u201312). Learning light field angular super-resolution via a geometry-aware network. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6771"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1145\/2980179.2980251","article-title":"Learning-based view synthesis for light field cameras","volume":"35","author":"Kalantari","year":"2016","journal-title":"ACM Trans. Graph."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Shi, J., Jiang, X., and Guillemot, C. (2020, January 13\u201319). 
Learning fused pixel and feature-based view reconstructions for light fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00263"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Cao, F., An, P., Huang, X., Yang, C., and Wu, Q. (2021, January 6\u201311). Multi-Models Fusion for Light Field Angular Super-Resolution. Proceedings of the ICASSP 2021\u20142021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9413824"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Shin, C., Jeon, H.-G., Yoon, Y., Kweon, I.S., and Kim, S.J. (2018, January 18\u201323). EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth from Light Field Images. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00499"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/23\/9483\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:33:58Z","timestamp":1760146438000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/23\/9483"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,4]]},"references-count":33,"journal-issue":{"issue":"23","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["s22239483"],"URL":"https:\/\/doi.org\/10.3390\/s22239483","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,12,4]]}}}