{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T13:14:35Z","timestamp":1772284475016,"version":"3.50.1"},"reference-count":33,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2024,3,27]],"date-time":"2024-03-27T00:00:00Z","timestamp":1711497600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Beijing Municipal Natural Science Foundation Key Research Project","award":["Z200006"],"award-info":[{"award-number":["Z200006"]}]},{"name":"Beijing Municipal Natural Science Foundation Key Research Project","award":["2022YFF1300103"],"award-info":[{"award-number":["2022YFF1300103"]}]},{"name":"Beijing Municipal Natural Science Foundation Key Research Project","award":["42276197"],"award-info":[{"award-number":["42276197"]}]},{"name":"Beijing Municipal Natural Science Foundation Key Research Project","award":["Y2021044"],"award-info":[{"award-number":["Y2021044"]}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["Z200006"],"award-info":[{"award-number":["Z200006"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["2022YFF1300103"],"award-info":[{"award-number":["2022YFF1300103"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["42276197"],"award-info":[{"award-number":["42276197"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["Y2021044"],"award-info":[{"award-number":["Y2021044"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["Z200006"],"award-info":[{"award-number":["Z200006"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2022YFF1300103"],"award-info":[{"award-number":["2022YFF1300103"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["42276197"],"award-info":[{"award-number":["42276197"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["Y2021044"],"award-info":[{"award-number":["Y2021044"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004739","name":"Youth Innovation Promotion Association of the Chinese Academy of 
Sciences","doi-asserted-by":"publisher","award":["Z200006"],"award-info":[{"award-number":["Z200006"]}],"id":[{"id":"10.13039\/501100004739","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004739","name":"Youth Innovation Promotion Association of the Chinese Academy of Sciences","doi-asserted-by":"publisher","award":["2022YFF1300103"],"award-info":[{"award-number":["2022YFF1300103"]}],"id":[{"id":"10.13039\/501100004739","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004739","name":"Youth Innovation Promotion Association of the Chinese Academy of Sciences","doi-asserted-by":"publisher","award":["42276197"],"award-info":[{"award-number":["42276197"]}],"id":[{"id":"10.13039\/501100004739","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004739","name":"Youth Innovation Promotion Association of the Chinese Academy of Sciences","doi-asserted-by":"publisher","award":["Y2021044"],"award-info":[{"award-number":["Y2021044"]}],"id":[{"id":"10.13039\/501100004739","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Three-dimensional (3D) range-gated imaging can obtain high spatial resolution intensity images as well as pixel-wise depth information. Several algorithms have been developed to recover depth from gated images such as the range-intensity correlation algorithm and deep-learning-based algorithm. The traditional range-intensity correlation algorithm requires specific range-intensity profiles, which are hard to generate, while the existing deep-learning-based algorithm requires large number of real-scene training data. In this work, we propose a method of range-intensity-profile-guided gated light ranging and imaging to recover depth from gated images based on a convolutional neural network. In this method, the range-intensity profile (RIP) of a given gated light ranging and imaging system is obtained to generate synthetic training data from Grand Theft Auto V for our range-intensity ratio and semantic network (RIRS-net). The RIRS-net is mainly trained on synthetic data and fine-tuned with RIP data. The network learns both semantic depth cues and range-intensity depth cues in the synthetic data, and learns accurate range-intensity depth cues in the RIP data. 
In evaluation experiments on both real-scene and synthetic test datasets, our method shows better results than other algorithms.<\/jats:p>","DOI":"10.3390\/s24072151","type":"journal-article","created":{"date-parts":[[2024,3,27]],"date-time":"2024-03-27T13:39:56Z","timestamp":1711546796000},"page":"2151","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Range-Intensity-Profile-Guided Gated Light Ranging and Imaging Based on a Convolutional Neural Network"],"prefix":"10.3390","volume":"24","author":[{"given":"Chenhao","family":"Xia","sequence":"first","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"given":"Xinwei","family":"Wang","sequence":"additional","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"given":"Liang","family":"Sun","sequence":"additional","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3843-4205","authenticated-orcid":false,"given":"Yue","family":"Zhang","sequence":"additional","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"}]},{"given":"Bo","family":"Song","sequence":"additional","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]},{"given":"Yan","family":"Zhou","sequence":"additional","affiliation":[{"name":"Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"},{"name":"School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,3,27]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1738","DOI":"10.1109\/TPAMI.2020.3032602","article-title":"A Survey on Deep Learning Techniques for Stereo-Based Depth Estimation","volume":"44","author":"Laga","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.-Y., and Kweon, I.S. (2018, January 8\u201314). Cbam: Convolutional Block Attention Module. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"331","DOI":"10.1007\/s41095-022-0271-y","article-title":"Attention Mechanisms in Computer Vision: A Survey","volume":"8","author":"Guo","year":"2022","journal-title":"Comput. Vis. Media"},{"key":"ref_4","unstructured":"Gruber, T., Julca-Aguilar, F., Bijelic, M., and Heide, F. (November, January 27). Gated2depth: Real-Time Dense Lidar from Gated Images. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_5","unstructured":"Godard, C., Mac Aodha, O., Firman, M., and Brostow, G.J. (November, January 27). Digging into Self-Supervised Monocular Depth Estimation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_6","unstructured":"Saxena, A., Chung, S., and Ng, A. (2005, January 5\u20138). Learning Depth from Single Monocular Images. Proceedings of the Advances in Neural Information Processing Systems 18 [Neural Information Processing Systems 2005], Vancouver, BC, Canada."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Liu, C., Yuen, J., Torralba, A., Sivic, J., and Freeman, W.T. (2008, January 12\u201318). Sift Flow: Dense Correspondence across Different Scenes. Proceedings of the Computer Vision\u2013ECCV 2008: 10th European Conference on Computer Vision, Marseille, France. Proceedings, Part III 10.","DOI":"10.1007\/978-3-540-88690-7_3"},{"key":"ref_8","unstructured":"Eigen, D., Puhrsch, C., and Fergus, R. (2014). Depth Map Prediction from a Single Image Using a Multi-Scale Deep Network. Adv. Neural Inf. Process Syst., 27."},{"key":"ref_9","first-page":"2287","article-title":"Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches","volume":"17","author":"Zbontar","year":"2016","journal-title":"J. Mach. Learn. Res."},{"key":"ref_10","unstructured":"Lange, R. (2000). 3D Time-of-Flight Distance Measurement with Custom Solid-State Image Sensors in CMOS\/CCD-Technology. [Ph.D. Thesis, University of Siegen]."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"429","DOI":"10.1038\/nphoton.2010.148","article-title":"Mapping the World in 3D","volume":"4","author":"Schwarz","year":"2010","journal-title":"Nat. Photonics"},{"key":"ref_12","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_13","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, Inc."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"114170","DOI":"10.1016\/j.chaos.2023.114170","article-title":"Phase space visibility graph","volume":"176","author":"Ren","year":"2023","journal-title":"Chaos Solitons Fractals"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Ren, W., Jin, N., and Ouyang, L. (2024). Phase Space Graph Convolutional Network for Chaotic Time Series Learning. IEEE Trans. Ind. 
Inform., 1\u20139.","DOI":"10.1109\/TII.2024.3363089"},{"key":"ref_17","unstructured":"Yin, W., Liu, Y., Shen, C., and Yan, Y. (November, January 27). Enforcing Geometric Constraints of Virtual Normal for Depth Prediction. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Jie, Z., Wang, P., Ling, Y., Zhao, B., Wei, Y., Feng, J., and Liu, W. (2018, January 18\u201323). Left-Right Comparative Recurrent Model for Stereo Matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00404"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"889","DOI":"10.1109\/JSSC.2019.2959502","article-title":"A VGA Indirect Time-of-Flight CMOS Image Sensor With 4-Tap 7\u03bcm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self-Compensation","volume":"55","author":"Keel","year":"2019","journal-title":"IEEE J. Solid-State Circuits"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Walia, A., Walz, S., Bijelic, M., Mannan, F., Julca-Aguilar, F., Langer, M., Ritter, W., and Heide, F. (2022, January 18\u201324). Gated2gated: Self-Supervised Depth Estimation from Gated Images. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00283"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"7399","DOI":"10.1364\/AO.52.007399","article-title":"Triangular-Range-Intensity Profile Spatial-Correlation Method for 3D Super-Resolution Range-Gated Imaging","volume":"52","author":"Wang","year":"2013","journal-title":"Appl. Opt."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"3146","DOI":"10.1364\/OL.32.003146","article-title":"Long-Range Three-Dimensional Active Imaging with Superresolution Depth Mapping","volume":"32","author":"Laurenzis","year":"2007","journal-title":"Opt. Lett."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Gruber, T., Kokhova, M., Ritter, W., Haala, N., and Dietmayer, K. (2018, January 4\u20137). Learning Super-Resolved Depth from Active Gated Imaging. Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA.","DOI":"10.1109\/ITSC.2018.8569590"},{"key":"ref_24","unstructured":"Rockstar Games (2024, March 01). Policy on Posting Copyrighted Rockstar Games Material. Available online: http:\/\/Tinyurl.Com\/Pjfoqo5."},{"key":"ref_25","unstructured":"Karlsson, B. (2024, March 01). RenderDoc. Available online: https:\/\/renderdoc.org."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Richter, S.R., Vineet, V., Roth, S., and Koltun, V. (2016, January 11\u201314). Playing for Data: Ground Truth from Computer Games. Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands. Proceedings, Part II 14.","DOI":"10.1007\/978-3-319-46475-6_7"},{"key":"ref_27","unstructured":"Nair, V., and Hinton, G.E. (2010, January 21\u201324). Rectified Linear Units Improve Restricted Boltzmann Machines. Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel."},{"key":"ref_28","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. 
Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2015: 18th International Conference, Munich, Germany. Proceedings, Part III 18."},{"key":"ref_29","first-page":"1","article-title":"Pytorch: An Imperative Style, High-Performance Deep Learning Library","volume":"32","author":"Paszke","year":"2019","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"ref_30","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Laina, I., Rupprecht, C., Belagiannis, V., Tombari, F., and Navab, N. (2016, January 25\u201328). Deeper Depth Prediction with Fully Convolutional Residual Networks. Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.32"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Fu, H., Gong, M., Wang, C., Batmanghelich, K., and Tao, D. (2018, January 18\u201323). Deep Ordinal Regression Network for Monocular Depth Estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00214"},{"key":"ref_33","first-page":"12626","article-title":"Forget about the Lidar: Self-Supervised Depth Estimators with Med Probability Volumes","volume":"33","author":"Kim","year":"2020","journal-title":"Adv. Neural Inf. Process Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/7\/2151\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:19:38Z","timestamp":1760105978000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/7\/2151"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,27]]},"references-count":33,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,4]]}},"alternative-id":["s24072151"],"URL":"https:\/\/doi.org\/10.3390\/s24072151","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,27]]}}}