{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T20:07:44Z","timestamp":1770494864034,"version":"3.49.0"},"reference-count":33,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2023,7,8]],"date-time":"2023-07-08T00:00:00Z","timestamp":1688774400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100014188","name":"MSIT (Ministry of Science and ICT), Korea","doi-asserted-by":"publisher","award":["IITP-2023-2020-0-01846"],"award-info":[{"award-number":["IITP-2023-2020-0-01846"]}],"id":[{"id":"10.13039\/501100014188","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100014188","name":"MSIT (Ministry of Science and ICT), Korea","doi-asserted-by":"publisher","award":["NRF-2018R1D1A3B07044041"],"award-info":[{"award-number":["NRF-2018R1D1A3B07044041"]}],"id":[{"id":"10.13039\/501100014188","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100014188","name":"MSIT (Ministry of Science and ICT), Korea","doi-asserted-by":"publisher","award":["NRF-2020R1A2C1101258"],"award-info":[{"award-number":["NRF-2020R1A2C1101258"]}],"id":[{"id":"10.13039\/501100014188","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea (NRF)","doi-asserted-by":"publisher","award":["IITP-2023-2020-0-01846"],"award-info":[{"award-number":["IITP-2023-2020-0-01846"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea (NRF)","doi-asserted-by":"publisher","award":["NRF-2018R1D1A3B07044041"],"award-info":[{"award-number":["NRF-2018R1D1A3B07044041"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea 
(NRF)","doi-asserted-by":"publisher","award":["NRF-2020R1A2C1101258"],"award-info":[{"award-number":["NRF-2020R1A2C1101258"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>We propose a high-quality three-dimensional display system based on a simplified light-field image acquisition method and a custom-trained full-connected deep neural network. The ultimate goal of the proposed system is to acquire and reconstruct light-field images of real-world objects in a general environment at the highest possible quality. The simplified light-field image acquisition method captures the three-dimensional information of natural objects in a simple way, at a resolution and quality comparable to those of multicamera-based methods. We trained a full-connected deep neural network model to output the desired viewpoints of the object at the same quality. Based on the input perspectives, the custom-trained instant neural graphics primitives model with hash encoding outputs all desired viewpoints of the object within the acquired viewing angle at the same quality, according to the pixel density of the display device and the lens array specifications, within a significantly short processing time. Finally, the elemental image array was rendered through pixel rearrangement from the entire set of viewpoints to visualize the full field of view and reconstructed as a high-quality three-dimensional visualization on the integral imaging display. 
The system was implemented successfully, and the displayed visualizations and the corresponding evaluation results confirm that the proposed system offers a simple and effective way to acquire light-field images of real objects at high resolution and to present high-quality three-dimensional visualizations on the integral imaging display system.<\/jats:p>","DOI":"10.3390\/s23146245","type":"journal-article","created":{"date-parts":[[2023,7,10]],"date-time":"2023-07-10T01:02:50Z","timestamp":1688950970000},"page":"6245","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["Comprehensive High-Quality Three-Dimensional Display System Based on a Simplified Light-Field Image Acquisition Method and a Full-Connected Deep Neural Network"],"prefix":"10.3390","volume":"23","author":[{"given":"Munkh-Uchral","family":"Erdenebat","sequence":"first","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]},{"given":"Tuvshinjargal","family":"Amgalan","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6058-7200","authenticated-orcid":false,"given":"Anar","family":"Khuderchuluun","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]},{"given":"Oh-Seung","family":"Nam","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]},{"given":"Seok-Hee","family":"Jeon","sequence":"additional","affiliation":[{"name":"Department of Electronics Engineering, Incheon National University, Incheon 22012, Republic of 
Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0334-504X","authenticated-orcid":false,"given":"Ki-Chul","family":"Kwon","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8109-2055","authenticated-orcid":false,"given":"Nam","family":"Kim","sequence":"additional","affiliation":[{"name":"School of Information and Communication Engineering, Chungbuk National University, Chungbuk 28644, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2023,7,8]]},"reference":[{"key":"ref_1","first-page":"512","article-title":"Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems","volume":"10","author":"Javidi","year":"2020","journal-title":"Adv. Opt. Photon."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"32266","DOI":"10.1364\/OE.402193","article-title":"Roadmap on 3D integral imaging: Sensing, processing, and display","volume":"28","author":"Javidi","year":"2020","journal-title":"Opt. Express"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1301","DOI":"10.1109\/JDT.2016.2594076","article-title":"Three-dimensional integral-imaging display from calibrated and depth-hole filtered Kinect information","volume":"12","author":"Hong","year":"2016","journal-title":"J. Disp. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1109\/MSP.2017.2669347","article-title":"Computational depth sensing: Toward high-performance commodity depth cameras","volume":"34","author":"Xiong","year":"2017","journal-title":"IEEE Signal Process. 
Mag."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"7796","DOI":"10.1364\/AO.56.007796","article-title":"Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging","volume":"56","author":"Erdenebat","year":"2017","journal-title":"Appl. Opt."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"126494","DOI":"10.1016\/j.optcom.2020.126494","article-title":"Advanced visualization using image super-resolution method for three-dimensional mobile system","volume":"480","author":"Erdenebat","year":"2021","journal-title":"Opt. Commun."},{"key":"ref_7","first-page":"02","article-title":"Light field photography with a hand-held plenoptic camera","volume":"2005","author":"Ng","year":"2005","journal-title":"Stanf. Univ. Comp. Sci. Tech. Rep."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"23662","DOI":"10.1364\/OE.21.023662","article-title":"Hologram synthesis of three-dimensional real objects using portable integral imaging camera","volume":"21","author":"Lee","year":"2013","journal-title":"Opt. Express"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Jeon, H.-G., Park, J., Choe, G., Park, J., Bok, Y., Tai, Y.-W., and Kweon, I.S. (2015, January 7\u201312). Accurate depth map estimation from a lenslet light field camera. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298762"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"836","DOI":"10.1109\/LPT.2015.2393875","article-title":"A gradient index liquid crystal microlens array for light-field camera applications","volume":"27","author":"Kwon","year":"2015","journal-title":"IEEE Photon. Technol. 
Lett."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"10333","DOI":"10.1364\/AO.54.010333","article-title":"Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera","volume":"54","author":"Jeong","year":"2015","journal-title":"Appl. Opt."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Perra, C., Murgia, F., and Giusto, D. (2016, January 12\u201315). An analysis of 3D point cloud reconstruction from light field images. Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), Oulu, Finland.","DOI":"10.1109\/IPTA.2016.7821011"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Feng, M., Gilani, S.Z., Wang, Y., and Mian, A. (2018, January 8\u201314). 3D face reconstruction from light field images: A model-free approach. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01249-6_31"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"A1","DOI":"10.1364\/AO.57.0000A1","article-title":"On the fundamental comparison between unfocused and focused light field cameras","volume":"57","author":"Zhu","year":"2018","journal-title":"Appl. Opt."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"6900512","DOI":"10.1109\/JPHOT.2018.2890429","article-title":"Resolution-enhancement for an integral imaging microscopy using deep learning","volume":"11","author":"Kwon","year":"2019","journal-title":"IEEE Photon. 
J."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"2173","DOI":"10.3390\/s23042173","article-title":"High-quality 3D visualization system for light-field microscopy with fine-scale shape measurement through accurate 3D surface data","volume":"23","author":"Kwon","year":"2023","journal-title":"Sensors"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"841","DOI":"10.1109\/TVCG.2009.30","article-title":"TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters","volume":"15","author":"Taguchi","year":"2009","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"3179","DOI":"10.1364\/BOE.6.003179","article-title":"Camera array based light field microscopy","volume":"15","author":"Lin","year":"2015","journal-title":"Biomed. Opt. Express"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1567","DOI":"10.1364\/JOSAA.35.001567","article-title":"Hybrid camera array based calibration for computer-generated integral photography display","volume":"35","author":"Chen","year":"2015","journal-title":"J. Opt. Soc. Am."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"691","DOI":"10.1007\/s11704-015-4237-4","article-title":"Camera array calibration for light field acquisition","volume":"9","author":"Xu","year":"2015","journal-title":"Front. Comput. Sci."},{"key":"ref_21","unstructured":"Xing, Y., Xiong, Z.-L., Zhao, M., and Wang, Q.-H. (2018, January 19). Real-time integral imaging pickup system using camera array. Proceedings of the SPIE Photonics West 2018, San Francisco, CA, USA."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"639117","DOI":"10.3389\/fphy.2021.639117","article-title":"Performance enhanced elemental array generation for integral image display using pixel fusion","volume":"9","author":"Huang","year":"2021","journal-title":"Front. 
Phys."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Shin, C., Jeon, H.-G., Yoon, Y., Kweon, I.S., and Kim, S.J. (2018, January 18\u201323). EPINET: A fully-convolutional neural network using epipolar geometry for depth from light field images. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00499"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"3900714","DOI":"10.1109\/JPHOT.2020.3010319","article-title":"Advanced three-dimensional visualization system for an integral imaging microscope using a fully convolutional depth estimation network","volume":"12","author":"Kwon","year":"2020","journal-title":"IEEE Photon. J."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Schonberger, J.L., and Frahm, J.-M. (2016, January 27\u201330). Structure-from-motion revisited. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.445"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1145\/3503250","article-title":"NeRF: Representing scenes as neural radiance fields for view synthesis","volume":"65","author":"Mildenhall","year":"2022","journal-title":"Commun. ACM"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Hu, T., Liu, S., Chen, Y., Shen, T., and Jia, J. (2022, January 18\u201324). EfficientNeRF efficient neural radiance fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01256"},{"key":"ref_28","first-page":"102","article-title":"Instant neural graphics primitives with a multiresolution hash encoding","volume":"41","author":"Evans","year":"2022","journal-title":"ACM Trans. 
Graph."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1007\/3DRes.03(2012)6","article-title":"Numerical reconstruction of full parallax holographic stereograms","volume":"3","author":"Park","year":"2012","journal-title":"3D Res."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"4235","DOI":"10.1364\/AO.423205","article-title":"Simplified digital content generation based on an inverse-directed propagation algorithm for holographic stereogram printing","volume":"60","author":"Khuderchuluun","year":"2021","journal-title":"Appl. Opt."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Vieira, A., Duarte, H., Perra, C., Tavora, L., and Assuncao, P. (2015, January 10\u201313). Data formats for high efficiency coding of Lytro-Illum light fields. Proceedings of the 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA), Orleans, France.","DOI":"10.1109\/IPTA.2015.7367195"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/LSP.2012.2227726","article-title":"Making a completely blind image quality analyzer","volume":"60","author":"Mittal","year":"2013","journal-title":"IEEE Sig. Process. Lett."},{"key":"ref_33","unstructured":"Venkatanath, N., Praneeth, D., Chandrasekhar, B.M., Channappayya, S.S., and Medasani, S.S. (2021, January 14\u201323). Blind image quality evaluation using perception based features. 
Proceedings of the IEEE 21st National Conference on Communications, Montr\u00e9al, QC, Canada."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6245\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:08:45Z","timestamp":1760126925000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6245"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,8]]},"references-count":33,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["s23146245"],"URL":"https:\/\/doi.org\/10.3390\/s23146245","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,8]]}}}