{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T23:51:39Z","timestamp":1773186699621,"version":"3.50.1"},"reference-count":44,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2022,8,16]],"date-time":"2022-08-16T00:00:00Z","timestamp":1660608000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Typical visual simultaneous localization and mapping (SLAM) systems rely on front-end odometry for feature extraction and matching to establish the relations between adjacent images. In a low-light environment, the image obtained by a camera is dim and carries little information, hindering the extraction of sufficiently stable feature points and consequently undermining visual SLAM. Most existing methods focus on low-light enhancement of a single image, neglecting the strong temporal correlation across images in visual SLAM. We propose a method that leverages the temporal information of an input image sequence to enhance the low-light image and employs the enhanced result to improve the feature extraction and matching quality of visual SLAM. Our method trains a three-dimensional convolutional neural network to estimate pixelwise grayscale transformation curves for low-light enhancement. The curves are then applied iteratively to obtain the final enhanced image. The training process of the network does not require any paired reference images. We also introduce a spatial consistency loss for the enhanced image to retain the content and texture of the original image. We further integrate our method into VINS-Mono and compare it with similar low-light image enhancement methods on the TUM-VI public dataset. 
The proposed method provides a lower positioning error. The positioning root-mean-squared error of our method is 19.83% lower than that of Zero-DCE++ in low-light environments. Moreover, the proposed network achieves real-time operation, making it suitable for integration into a SLAM system.<\/jats:p>","DOI":"10.3390\/rs14163985","type":"journal-article","created":{"date-parts":[[2022,8,17]],"date-time":"2022-08-17T03:15:27Z","timestamp":1660706127000},"page":"3985","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["3D Convolutional Neural Network for Low-Light Image Sequence Enhancement in SLAM"],"prefix":"10.3390","volume":"14","author":[{"given":"Yizhuo","family":"Quan","sequence":"first","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China"},{"name":"University of Chinese Academy of Sciences, No. 19(A) Yuquan Road, Shijingshan District, Beijing 100049, China"}]},{"given":"Dong","family":"Fu","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China"}]},{"given":"Yuanfei","family":"Chang","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China"}]},{"given":"Chengbo","family":"Wang","sequence":"additional","affiliation":[{"name":"Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,16]]},"reference":[{"key":"ref_1","unstructured":"Nguyen, H., Mascarich, F., Dang, T., and Alexis, K. (2020). Autonomous aerial robotic surveying and mapping with application to construction operations. 
arXiv."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"140901","DOI":"10.1007\/s11432-019-2796-1","article-title":"Landing site topographic mapping and rover localization for Chang\u2019e-4 mission","volume":"63","author":"Liu","year":"2020","journal-title":"Sci. China Inf. Sci."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Chen, X., Zhang, H., Lu, H., Xiao, J., Qiu, Q., and Li, Y. (2017, January 11\u201313). Robust SLAM System based on Monocular Vision and LiDAR for Robotic Urban Search and Rescue. Proceedings of the 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Shanghai, China.","DOI":"10.1109\/SSRR.2017.8088138"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Chiang, K.-W., Tsai, G.-J., Li, Y.-H., Li, Y., and El-Sheimy, N. (2020). Navigation engine design for automated driving using INS\/GNSS\/3D LiDAR-SLAM and integrity assessment. Remote Sens., 12.","DOI":"10.3390\/rs12101564"},{"key":"ref_5","first-page":"770","article-title":"Progress and applications of visual SLAM","volume":"47","author":"Kaichang","year":"2018","journal-title":"Acta Geod. Cartogr. Sin."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1309","DOI":"10.1109\/TRO.2016.2624754","article-title":"Simultaneous localization and mapping: Present, future, and the robust-perception age","volume":"32","author":"Cadena","year":"2016","journal-title":"IEEE Trans. Robot."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1052","DOI":"10.1109\/TPAMI.2007.1049","article-title":"MonoSLAM: Real-time single camera SLAM","volume":"29","author":"Davison","year":"2007","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Hartley, R., and Zisserman, A. (2003). 
Multiple View Geometry in Computer Vision, Cambridge University Press.","DOI":"10.1017\/CBO9780511811685"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"287","DOI":"10.1007\/s10846-010-9490-z","article-title":"Fusion of IMU and vision for absolute scale estimation in monocular SLAM","volume":"61","author":"Weiss","year":"2011","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TRO.2018.2853729","article-title":"VINS-Mono: A robust and versatile monocular visual-inertial state estimator","volume":"34","author":"Qin","year":"2018","journal-title":"IEEE Trans. Robot."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1874","DOI":"10.1109\/TRO.2021.3075644","article-title":"ORB-SLAM3: An accurate open-source library for visual, visual\u2013inertial, and multimap SLAM","volume":"37","author":"Campos","year":"2021","journal-title":"IEEE Trans. Robot."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Mourikis, A.I., and Roumeliotis, S.I. (2007, January 10\u201314). A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy.","DOI":"10.1109\/ROBOT.2007.364024"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Li, C., Guo, C., Han, L.-H., Jiang, J., Cheng, M.-M., Gu, J., and Loy, C.C. (2021). Low-light image and video enhancement using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell.","DOI":"10.1109\/TPAMI.2021.3126387"},{"key":"ref_14","unstructured":"Harris, C., and Stephens, M. (1988). A Combined Corner and Edge Detector. 
Proceedings of the Alvey Vision Conference, Manchester, UK."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive image features from scale-invariant keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011, January 6\u201313). ORB: An Efficient Alternative to SIFT or SURF. Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126544"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1109\/TCE.2007.381734","article-title":"A dynamic histogram equalization for image contrast enhancement","volume":"53","author":"Kabir","year":"2007","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1752","DOI":"10.1109\/TCE.2007.4429280","article-title":"Brightness preserving dynamic histogram equalization for image contrast enhancement","volume":"53","author":"Ibrahim","year":"2007","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1007\/BF03178082","article-title":"Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms","volume":"11","author":"Pisano","year":"1998","journal-title":"J. Digit. Imaging"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"18027","DOI":"10.1007\/s11042-021-10614-8","article-title":"An optimization-based approach to gamma correction parameter estimation for low-light image enhancement","volume":"80","author":"Jeong","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Li, C., Tang, S., Yan, J., and Zhou, T. (2020). 
Low-light image enhancement based on quasi-symmetric correction functions by fusion. Symmetry, 12.","DOI":"10.3390\/sym12091561"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"309","DOI":"10.1016\/j.sigpro.2014.02.013","article-title":"A novel approach for enhancing very dark image sequences","volume":"103","author":"Xu","year":"2014","journal-title":"Signal Process."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"108","DOI":"10.1038\/scientificamerican1277-108","article-title":"The retinex theory of color vision","volume":"237","author":"Land","year":"1977","journal-title":"Sci. Am."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Parihar, A.S., and Singh, K. (2018, January 19\u201320). A Study on Retinex Based Method for Image Enhancement. Proceedings of the 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, India.","DOI":"10.1109\/ICISC.2018.8398874"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1016\/j.procs.2018.04.179","article-title":"Fast algorithm of image enhancement based on multi-scale retinex","volume":"131","author":"Zotin","year":"2018","journal-title":"Procedia Comput. Sci."},{"key":"ref_26","unstructured":"Fu, X., Zeng, D., Huang, Y., Zhang, X.-P., and Ding, X. (2018, January 18\u201323). A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"2828","DOI":"10.1109\/TIP.2018.2810539","article-title":"Structure-revealing low-light image enhancement via robust retinex model","volume":"27","author":"Li","year":"2018","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1016\/j.patcog.2016.06.008","article-title":"LLNet: A deep autoencoder approach to natural low-light image enhancement","volume":"61","author":"Lore","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_29","unstructured":"Lv, F., Lu, F., Wu, J., and Lim, C. (2018, January 3\u20136). MBLLEN: Low-Light Image\/Video Enhancement Using CNNs. Proceedings of the 29th British Machine Vision Conference (BMVC), Northumbria University, Newcastle, UK."},{"key":"ref_30","unstructured":"Wei, C., Wang, W., Yang, W., and Liu, J.J. (2018). Deep retinex decomposition for low-light enhancement. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1013","DOI":"10.1007\/s11263-020-01407-x","article-title":"Beyond brightening low-light images","volume":"129","author":"Zhang","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"4364","DOI":"10.1109\/TIP.2019.2910412","article-title":"Low-light image enhancement via a deep hybrid network","volume":"28","author":"Ren","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Zhang, L., Zhang, L., Liu, X., Shen, Y., Zhang, S., and Zhao, S. (2019, January 21\u201325). Zero-Shot Restoration of Back-Lit Images Using Deep Internal Learning. Proceedings of the 2019 ACM International Conference on Multimedia (ACMMM), Nice, France.","DOI":"10.1145\/3343031.3351069"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., and Cong, R. (2020, January 14\u201319). Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. 
Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Li, C., Guo, C., and Loy, C.C. (2021). Learning to enhance low-light image via zero-reference deep curve estimation. arXiv.","DOI":"10.1109\/TPAMI.2021.3063604"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going Deeper with Convolutions. Proceedings of the 2015 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","article-title":"3D convolutional neural networks for human action recognition","volume":"35","author":"Ji","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L., and Paluri, M. (2015, January 13\u201316). Learning Spatiotemporal Features with 3D Convolutional Networks. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.510"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Schubert, D., Goll, T., Demmel, N., Usenko, V., St\u00fcckler, J., and Cremers, D. (2018, January 1\u20135). The TUM VI Benchmark for Evaluating Visual-Inertial Odometry. 
Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8593419"},{"key":"ref_41","unstructured":"Grupp, M. (2022, July 01). Evo: Python Package for the Evaluation of Odometry and SLAM; 2017. Available online: http:\/\/github.com\/MichaelGrupp\/evo."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Sturm, J., Engelhard, N., Endres, F., Burgard, W., and Cremers, D. (2012, January 7\u201312). A Benchmark for the Evaluation of RGB-D SLAM Systems. Proceedings of the 2012 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Algarve, Portugal.","DOI":"10.1109\/IROS.2012.6385773"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"S\u00fczen, A.A., Duman, B., and \u015een, B. (2020, January 26\u201328). Benchmark Analysis of Jetson tx2, Jetson Nano and Raspberry pi Using Deep-cnn. Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey.","DOI":"10.1109\/HORA49412.2020.9152915"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Ullah, S., and Kim, D.-H. (2020, January 19\u201322). Benchmarking Jetson platform for 3D Point-Cloud and Hyper-Spectral Image Classification. 
Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea.","DOI":"10.1109\/BigComp48618.2020.00-21"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/16\/3985\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:10:43Z","timestamp":1760141443000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/16\/3985"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,16]]},"references-count":44,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2022,8]]}},"alternative-id":["rs14163985"],"URL":"https:\/\/doi.org\/10.3390\/rs14163985","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,16]]}}}