{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T15:14:48Z","timestamp":1775834088286,"version":"3.50.1"},"reference-count":33,"publisher":"Cambridge University Press (CUP)","issue":"6","license":[{"start":{"date-parts":[[2025,6,4]],"date-time":"2025-06-04T00:00:00Z","timestamp":1748995200000},"content-version":"unspecified","delay-in-days":3,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Robotica"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>This paper focuses on the feature-based visual-inertial odometry (VIO) in dynamic illumination environments. While the performance of most existing feature-based VIO methods is degraded by the dynamic illumination, which leads to unstable feature association, we propose a tightly-coupled VIO algorithm termed RAFT-VINS, integrating a Lite-RAFT tracker into the visual inertial navigation system (VINS). The key module of this odometry algorithm is a lightweight optical flow network designed for accurate feature tracking with real-time operation. It guarantees robust feature association in dynamic illumination environments and thereby ensures the performance of the odometry. Besides, to further improve the accuracy of the pose estimation, a moving consistency check strategy is developed in RAFT-VINS to identify and remove the outlier feature points. Meanwhile, a tightly-coupled optimization-based framework is employed to fuse IMU and visual measurements in the sliding window for efficient and accurate pose estimation. Through comprehensive experiments in the public datasets and real-world scenarios, the proposed RAFT-VINS is validated for its capacity to provide trustable pose estimates in challenging dynamic illumination environments. 
Our codes are open-sourced on <jats:uri xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/USTC-AIS-Lab\/RAFT-VINS\">https:\/\/github.com\/USTC-AIS-Lab\/RAFT-VINS<\/jats:uri>.<\/jats:p>","DOI":"10.1017\/s0263574725000608","type":"journal-article","created":{"date-parts":[[2025,6,4]],"date-time":"2025-06-04T01:58:11Z","timestamp":1749002291000},"page":"2304-2319","source":"Crossref","is-referenced-by-count":1,"title":["Tightly-coupled visual-inertial odometry with robust feature association in dynamic illumination environments"],"prefix":"10.1017","volume":"43","author":[{"given":"Jie","family":"Zhang","sequence":"first","affiliation":[{"name":"University of Science and Technology of China"}]},{"given":"Cong","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5892-3591","authenticated-orcid":false,"given":"Qingchen","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4861-3565","authenticated-orcid":false,"given":"Qichao","family":"Ma","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7580-0836","authenticated-orcid":false,"given":"Jiahu","family":"Qin","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]}],"member":"56","published-online":{"date-parts":[[2025,6,4]]},"reference":[{"key":"S0263574725000608_ref19","unstructured":"[19] Fu, Q. , Wang, J. , Yu, H. , Ali, I. , Guo, F. , He, Y. and Zhang, H. , PL-VINS: Real-time monocular visual-inertial SLAM with point and line features. arXiv preprint arXiv: 2009.07462 (2020)"},{"key":"S0263574725000608_ref4","doi-asserted-by":"publisher","DOI":"10.1177\/0278364914554813"},{"key":"S0263574725000608_ref23","doi-asserted-by":"crossref","unstructured":"[23] Carion, N. , Massa, F. , Synnaeve, G. , Usunier, N. , Kirillov, A. and Zagoruyko, S. , \u201cEnd-to-End Object Detection with Transformers,\u201d European Conference on Computer Vision (2020) pp. 213\u2013229.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"S0263574725000608_ref3","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2658577"},{"key":"S0263574725000608_ref10","doi-asserted-by":"publisher","DOI":"10.1177\/0278364920938439"},{"key":"S0263574725000608_ref20","doi-asserted-by":"crossref","unstructured":"[20] Xu, K. , Hao, Y. , Yuan, S. , Wang, C. and Xie, L. , \u201cAirVO: An Illumination-Robust Point-Line Visual Odometry,\u201d 2023 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023) pp. 3429\u20133436.","DOI":"10.1109\/IROS55552.2023.10341914"},{"key":"S0263574725000608_ref29","doi-asserted-by":"crossref","unstructured":"[29] Xu, H. , Zhang, J. , Cai, J. , Rezatofighi, H. and Tao, D. , \u201cGMFlow: Learning Optical Flow via Global Matching,\u201d Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (2022) pp. 8121\u20138130.","DOI":"10.1109\/CVPR52688.2022.00795"},{"key":"S0263574725000608_ref14","doi-asserted-by":"crossref","unstructured":"[14] Ilg, E. , Mayer, N. , Saikia, T. , Keuper, M. , Dosovitskiy, A. and Brox, T. , \u201cFlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks,\u201d Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017) pp. 
2462\u20132470.","DOI":"10.1109\/CVPR.2017.179"},{"key":"S0263574725000608_ref7","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574724000754"},{"key":"S0263574725000608_ref9","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574724001553"},{"key":"S0263574725000608_ref17","doi-asserted-by":"crossref","unstructured":"[17] Teed, Z. and Deng, J. , \u201cRAFT: Recurrent All-Pairs Field Transforms for Optical Flow,\u201d European Conferenceon Computer Vision (2020) pp. 402\u2013419.","DOI":"10.1007\/978-3-030-58536-5_24"},{"key":"S0263574725000608_ref12","unstructured":"[12] Lucas, B. D. and Kanade, T. , \u201cAn Iterative Image Registration Technique with an Application to Stereo Vision,\u201d IJCAI\u201981: 7th International Joint Conference on Artificial Intelligence, vol. 2 (1981) pp. 674\u2013679."},{"key":"S0263574725000608_ref6","doi-asserted-by":"publisher","DOI":"10.1109\/TIE.2022.3176304"},{"key":"S0263574725000608_ref25","doi-asserted-by":"crossref","unstructured":"[25] Butler, D. J. , Wulff, J. , Stanley, G. B. and Black, M. J. , \u201cA Naturalistic Open Source Movie for Optical Flow Evaluation,\u201d European Conferenceon Computer Vision (2012) pp. 611\u2013625.","DOI":"10.1007\/978-3-642-33783-3_44"},{"key":"S0263574725000608_ref26","doi-asserted-by":"crossref","unstructured":"[26] Shi, J. and Tomasi, C. , \u201cGood Features to Track,\u201d Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (1994) pp. 593\u2013600.","DOI":"10.1109\/CVPR.1994.323794"},{"key":"S0263574725000608_ref21","doi-asserted-by":"crossref","unstructured":"[21] Shen, S. , Michael, N. and Kumar, V. , \u201cTightly-Coupled Monocular Visual-Inertial Fusion for Autonomous Flight of Rotorcraft MAVs,\u201d 2015 IEEE International Conference on Robotics and Automation (ICRA) (2015) pp. 5303\u20135310.","DOI":"10.1109\/ICRA.2015.7139939"},{"key":"S0263574725000608_ref13","doi-asserted-by":"crossref","unstructured":"[13] Dosovitskiy, A. , Fischer, P. , Ilg, E. , Hausser, P. , Hazirbas, C. , Golkov, V. , Van Der Smagt, P. , Cremers, D. and Brox, T. , \u201cFlowNet: Learning Optical Flow with Convolutional Networks,\u201d Proceedings of the IEEE International Conference on Computer Vision (2015) pp. 2758\u20132766.","DOI":"10.1109\/ICCV.2015.316"},{"key":"S0263574725000608_ref28","unstructured":"[28] Agarwal, S. , Mierle, K. and The Ceres Solver Team, \u201cCeres Solver,\u201d (2023)."},{"key":"S0263574725000608_ref27","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4612-4380-9_35"},{"key":"S0263574725000608_ref31","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2021.3075644"},{"key":"S0263574725000608_ref32","unstructured":"[32] Teed, Z. and Deng, J. , \u201cDROID-SLAM: Deep visual SLAM for Monocular, Stereo, and RGB-D Cameras,\u201d Advances in Neural Information Processing Systems 34 (2021) pp.16558\u201316569."},{"key":"S0263574725000608_ref22","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2022.3185385"},{"key":"S0263574725000608_ref15","doi-asserted-by":"crossref","unstructured":"[15] Ranjan, A. and Black, M. J. , \u201cOptical Flow Estimation Using a Spatial Pyramid Network,\u201d Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017) pp. 4161\u20134170.","DOI":"10.1109\/CVPR.2017.291"},{"key":"S0263574725000608_ref33","doi-asserted-by":"publisher","DOI":"10.1177\/0278364915620033"},{"key":"S0263574725000608_ref30","doi-asserted-by":"crossref","unstructured":"[30] Shi, X. , Li, D. , Zhao, P. , Tian, Q. , Tian, Y. , Long, Q. , Zhu, C. , Song, J. 
, Qiao, F. , Song, L. , Guo, Y. , Wang, Z. , Zhang, Y. , Qin, B. , Yang, W. , Wang, F. , Chan, R. H. M. and She, Q. , \u201cAre We Ready for Service Robots? The OpenLORIS-Scene Datasets for Lifelong SLAM,\u201d 2020 IEEE International Conference on Robotics and Automation (ICRA) (2020) pp. 3139\u20133145.","DOI":"10.1109\/ICRA40945.2020.9196638"},{"key":"S0263574725000608_ref11","doi-asserted-by":"publisher","DOI":"10.1016\/0004-3702(81)90024-2"},{"key":"S0263574725000608_ref8","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574724000420"},{"key":"S0263574725000608_ref5","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2018.2853729"},{"key":"S0263574725000608_ref16","doi-asserted-by":"crossref","unstructured":"[16] Sun, D. , Yang, X. , Liu, M.-Y. and Kautz, J. , \u201cPWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume,\u201d Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018) pp. 8934\u20138943.","DOI":"10.1109\/CVPR.2018.00931"},{"key":"S0263574725000608_ref18","doi-asserted-by":"crossref","unstructured":"[18] Pumarola, A. , Vakhitov, A. , Agudo, A. , Sanfeliu, A. and Moreno-Noguer, F. , \u201cPL-SLAM: Real-Time Monocular Visual SLAM with Points and Lines,\u201d 2017 IEEE International Conference on Robotics and Automation (ICRA) (2017) pp. 4503\u20134508.","DOI":"10.1109\/ICRA.2017.7989522"},{"key":"S0263574725000608_ref24","doi-asserted-by":"crossref","unstructured":"[24] Mayer, N. , Ilg, E. , Hausser, P. , Fischer, P. , Cremers, D. , Dosovitskiy, A. and Brox, T. , \u201cA Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation,\u201d Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016) pp. 4040\u20134048.","DOI":"10.1109\/CVPR.2016.438"},{"key":"S0263574725000608_ref2","doi-asserted-by":"publisher","DOI":"10.1109\/TRO.2015.2463671"},{"key":"S0263574725000608_ref1","doi-asserted-by":"crossref","unstructured":"[1] Forster, C. , Pizzoli, M. and Scaramuzza, D. , \u201cSVO: Fast Semi-Direct Monocular Visual Odometry,\u201d 2014 IEEE International Conference on Robotics and Automation (ICRA) (2014) pp. 15\u201322.","DOI":"10.1109\/ICRA.2014.6906584"}],"container-title":["Robotica"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S0263574725000608","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,6]],"date-time":"2025-08-06T09:52:08Z","timestamp":1754473928000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S0263574725000608\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6]]},"references-count":33,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["S0263574725000608"],"URL":"https:\/\/doi.org\/10.1017\/s0263574725000608","relation":{},"ISSN":["0263-5747","1469-8668"],"issn-type":[{"value":"0263-5747","type":"print"},{"value":"1469-8668","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6]]}}}
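The abstract above describes a moving consistency check that removes feature points whose tracked motion disagrees with the estimated camera pose. This record carries no implementation details, so the following is a minimal illustrative sketch, assuming a pinhole camera and a simple reprojection-error threshold; the function names, intrinsics, and threshold are hypothetical and are not taken from the RAFT-VINS code.

```python
# Hypothetical sketch of a reprojection-based moving consistency check.
# A feature whose tracked pixel position disagrees with the position predicted
# from the estimated pose (e.g., a point on a moving object) is flagged as an
# outlier and removed before pose optimization. Not the paper's implementation.
import numpy as np

def reproject(K, R, t, points_w):
    """Project world points into the camera: p = K (R X + t), then dehomogenize."""
    pts_c = points_w @ R.T + t          # (N, 3) points in the camera frame
    pix = pts_c @ K.T                   # (N, 3) homogeneous pixel coordinates
    return pix[:, :2] / pix[:, 2:3]     # (N, 2) pixel coordinates

def moving_consistency_check(K, R, t, landmarks_w, tracked_px, thresh_px=2.0):
    """Keep features whose tracked pixel agrees with the pose-predicted pixel."""
    predicted_px = reproject(K, R, t, landmarks_w)
    residual = np.linalg.norm(tracked_px - predicted_px, axis=1)
    return residual < thresh_px, residual

# Toy usage: one landmark is displaced as if it sat on a moving object.
K = np.array([[458.0, 0.0, 320.0], [0.0, 458.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # assumed pose from the window estimate
landmarks = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 4.0], [-0.8, 0.3, 6.0]])
tracked = reproject(K, R, t, landmarks)
tracked[1] += np.array([8.0, -6.0])           # simulate a dynamic-object feature
mask, res = moving_consistency_check(K, R, t, landmarks, tracked)
print(mask)   # [ True False  True] -> the inconsistent feature is rejected
```

In a full VIO pipeline of this kind, the predicted pose would come from the IMU-propagated or sliding-window estimate, and features rejected by the check would be excluded from the tightly-coupled optimization.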