{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,14]],"date-time":"2026-01-14T23:58:58Z","timestamp":1768435138989,"version":"3.49.0"},"reference-count":33,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2024,6,24]],"date-time":"2024-06-24T00:00:00Z","timestamp":1719187200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,24]],"date-time":"2024-06-24T00:00:00Z","timestamp":1719187200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012165","name":"Key Technologies Research and Development Program","doi-asserted-by":"publisher","award":["2022YFB4702202"],"award-info":[{"award-number":["2022YFB4702202"]}],"id":[{"id":"10.13039\/501100012165","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013058","name":"Jiangsu Provincial Key Research and Development Program","doi-asserted-by":"publisher","award":["BE2021009-02"],"award-info":[{"award-number":["BE2021009-02"]}],"id":[{"id":"10.13039\/501100013058","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61773273"],"award-info":[{"award-number":["61773273"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Event cameras produce asynchronous discrete outputs due to the independent response of camera pixels to changes in brightness. The asynchronous and discrete nature of event data facilitate the tracking of prolonged feature trajectories. 
Nonetheless, this necessitates adapting feature tracking techniques to process this type of data efficiently. To address this challenge, we propose a hybrid data-driven feature tracking method that uses data from both event cameras and frame-based cameras to track features asynchronously. It comprises patch initialization, patch optimization, and patch association modules. In the patch initialization module, FAST corners are detected in frame images, providing points responsive to local brightness changes. The patch association module introduces a nearest-neighbor (NN) algorithm to filter new feature points effectively. The patch optimization module assesses optimization quality for tracking quality monitoring. We evaluate the tracking accuracy and robustness of our method on public and self-collected datasets, focusing on average tracking error and feature age. Compared with the event-based Kanade\u2013Lucas\u2013Tomasi tracker, our method reduces the average tracking error by 1.3\u201329.2% and increases feature age by 9.6\u201332.1%, while improving computational efficiency by 1.2\u20137.6%. 
Thus, our proposed feature tracking method utilizes the unique characteristics of event cameras and traditional cameras to deliver a robust and efficient tracking system.<\/jats:p>","DOI":"10.1007\/s40747-024-01513-0","type":"journal-article","created":{"date-parts":[[2024,6,24]],"date-time":"2024-06-24T07:02:49Z","timestamp":1719212569000},"page":"6885-6899","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Enhancing robustness in asynchronous feature tracking for event cameras through fusing frame steams"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8014-2380","authenticated-orcid":false,"given":"Haidong","family":"Xu","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2625-0310","authenticated-orcid":false,"given":"Shumei","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Shizhao","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Rongchuan","family":"Sun","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4835-708X","authenticated-orcid":false,"given":"Guodong","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8354-2440","authenticated-orcid":false,"given":"Lining","family":"Sun","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,24]]},"reference":[{"issue":"2","key":"1513_CR1","doi-asserted-by":"publisher","first-page":"566","DOI":"10.1109\/JSSC.2007.914337","volume":"43","author":"P Lichtsteiner","year":"2008","unstructured":"Lichtsteiner P, Posch C, Delbruck T (2008) A $$128\\times 128$$ 120\u00a0db 15\u00a0$$\\upmu $$s latency asynchronous temporal contrast vision sensor. 
IEEE J Solid State Circuits 43(2):566\u2013576","journal-title":"IEEE J Solid State Circuits"},{"issue":"10","key":"1513_CR2","doi-asserted-by":"publisher","first-page":"2333","DOI":"10.1109\/JSSC.2014.2342715","volume":"49","author":"C Brandli","year":"2014","unstructured":"Brandli C, Berner R, Yang M, Liu S-C, Delbruck T (2014) A $$240\\times 180$$ 130\u00a0db 3\u00a0$$\\upmu $$s latency global shutter spatiotemporal vision sensor. IEEE J Solid State Circuits 49(10):2333\u20132341","journal-title":"IEEE J Solid State Circuits"},{"issue":"5","key":"1513_CR3","doi-asserted-by":"publisher","first-page":"1147","DOI":"10.1109\/TRO.2015.2463671","volume":"31","author":"R Mur-Artal","year":"2015","unstructured":"Mur-Artal R, Montiel JMM, Tard\u00f3s JD (2015) Orb-slam: a versatile and accurate monocular slam system. IEEE Trans Robot 31(5):1147\u20131163","journal-title":"IEEE Trans Robot"},{"issue":"4","key":"1513_CR4","doi-asserted-by":"publisher","first-page":"258","DOI":"10.1049\/cvi2.12041","volume":"15","author":"KA Tsintotas","year":"2021","unstructured":"Tsintotas KA, Bampis L, Gasteratos A (2021) Tracking-DOSeqSLAM: a dynamic sequence-based visual place recognition paradigm. IET Comput Vis 15(4):258\u2013273","journal-title":"IET Comput Vis"},{"issue":"4","key":"1513_CR5","doi-asserted-by":"publisher","first-page":"144","DOI":"10.1049\/iet-cvi.2019.0623","volume":"14","author":"R Ramli","year":"2020","unstructured":"Ramli R, Idris MYI, Hasikin K et al (2020) Local descriptor for retinal fundus image registration. IET Comput Vis 14(4):144\u2013153","journal-title":"IET Comput Vis"},{"key":"1513_CR6","doi-asserted-by":"crossref","unstructured":"Kueng B, Mueggler E, Gallego G, Scaramuzza D (2016) Low-latency visual odometry using event-based feature tracks. In: IEEE\/RSJ international conference on intelligent robots and systems (IROS), Daejeon, Korea (South). 
IEEE Press, pp 16\u201323","DOI":"10.1109\/IROS.2016.7758089"},{"key":"1513_CR7","doi-asserted-by":"crossref","unstructured":"Zhu AZ, Atanasov N, Daniilidis K (2017) Event-based visual inertial odometry. In: IEEE conference on computer vision and pattern recognition (CVPR), Honolulu, HI, USA. IEEE Press, pp 5816\u20135824","DOI":"10.1109\/CVPR.2017.616"},{"key":"1513_CR8","doi-asserted-by":"crossref","unstructured":"Guan W, Chen P, Xie Y, Lu P (2022) PL-EVIO: robust monocular event-based visual inertial odometry with point and line features. IEEE Trans Autom Sci Eng 1\u201317","DOI":"10.1109\/TASE.2023.3324365"},{"key":"1513_CR9","doi-asserted-by":"crossref","unstructured":"Le\u00a0Gentil C, Tschopp F, Alzugaray I et\u00a0al (2020) IDOL: a framework for IMU-DVS odometry using lines. In: IEEE\/RSJ international conference on intelligent robots and systems (IROS), Las Vegas, NV, USA. IEEE Press, pp 5863\u20135870","DOI":"10.1109\/IROS45743.2020.9341208"},{"key":"1513_CR10","doi-asserted-by":"crossref","unstructured":"Vasco V, Glover A, Bartolozzi C (2016) Fast event-based Harris corner detection exploiting the advantages of event-driven cameras. In: IEEE\/RSJ international conference on intelligent robots and systems (IROS), Daejeon, Korea (South). IEEE Press, pp 4144\u20134149","DOI":"10.1109\/IROS.2016.7759610"},{"key":"1513_CR11","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1007\/s11263-020-01359-2","volume":"129","author":"J Ma","year":"2021","unstructured":"Ma J, Jiang X, Fan A, Jiang J, Yan J (2021) Image matching from handcrafted to deep features: a survey. Int J Comput Vis 129:23\u201379","journal-title":"Int J Comput Vis"},{"key":"1513_CR12","doi-asserted-by":"crossref","unstructured":"Rosten E, Drummond T (2006) Machine learning for high-speed corner detection. In: Computer vision\u2014ECCV: 9th European conference on computer vision, Graz, Austria. 
Springer Press, pp 430\u2013443","DOI":"10.1007\/11744023_34"},{"key":"1513_CR13","doi-asserted-by":"crossref","unstructured":"Mueggler E, Bartolozzi C, Scaramuzza D (2017) Fast event-based corner detection. In: British machine vision conference (BMVC), London, UK. Zurich Open Repository and Archive, UZH, pp 1\u20138","DOI":"10.5244\/C.31.33"},{"issue":"4","key":"1513_CR14","doi-asserted-by":"publisher","first-page":"3177","DOI":"10.1109\/LRA.2018.2849882","volume":"3","author":"I Alzugaray","year":"2018","unstructured":"Alzugaray I, Chli M (2018) Asynchronous corner detection and tracking for event cameras in real time. IEEE Robot Autom Lett 3(4):3177\u20133184","journal-title":"IEEE Robot Autom Lett"},{"key":"1513_CR15","doi-asserted-by":"crossref","unstructured":"Li R, Shi D, Zhang Y, Li K, Li R(2019) FA-Harris: a fast and asynchronous corner detector for event cameras. In: IEEE\/RSJ international conference on intelligent robots and systems (IROS), Macau, China. IEEE Press, pp 6223\u20136229","DOI":"10.1109\/IROS40897.2019.8968491"},{"key":"1513_CR16","doi-asserted-by":"crossref","unstructured":"Mohamed SAS et\u00a0al (2021) Dynamic resource-aware corner detection for bio-inspired vision sensors. In: 25th International conference on pattern recognition (ICPR), Milan, Italy. IEEE Press, pp 10465\u201310472","DOI":"10.1109\/ICPR48806.2021.9412314"},{"key":"1513_CR17","doi-asserted-by":"crossref","unstructured":"Tedaldi D, Gallego G, Mueggler E, Scaramuzza D (2016) Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS). In: Second international conference on event-based control, communication, and signal processing (EBCCSP), Krakow, Poland. IEEE Press, pp 1\u20137","DOI":"10.1109\/EBCCSP.2016.7605086"},{"key":"1513_CR18","doi-asserted-by":"crossref","unstructured":"Zhu AZ, Atanasov N, Daniilidis K (2017) Event-based feature tracking with probabilistic data association. 
In: IEEE international conference on robotics and automation (ICRA), Singapore. IEEE Press, pp 4465\u20134470","DOI":"10.1109\/ICRA.2017.7989517"},{"key":"1513_CR19","doi-asserted-by":"crossref","unstructured":"Alzugaray I, Chli M (2018) ACE: an efficient asynchronous corner tracker for event cameras. In: 2018 International conference on 3D vision (3DV), Verona, Italy. IEEE Press, pp 653\u2013661","DOI":"10.1109\/3DV.2018.00080"},{"key":"1513_CR20","doi-asserted-by":"crossref","unstructured":"Alzugaray I, Chli M (2019) Asynchronous multi-hypothesis tracking of features with event cameras. In: 2019 International conference on 3D vision (3DV), Qu\u00e9bec, Canada. IEEE Press, pp 269\u2013278","DOI":"10.1109\/3DV.2019.00038"},{"key":"1513_CR21","unstructured":"Alzugaray I (2022) Event-driven feature detection and tracking for visual SLAM. PhD thesis, ETH Zurich, Switzerland"},{"issue":"4","key":"1513_CR22","doi-asserted-by":"publisher","first-page":"1475","DOI":"10.3390\/s21041475","volume":"21","author":"J Duo","year":"2021","unstructured":"Duo J, Zhao L (2021) An asynchronous real-time corner extraction and tracking algorithm for event camera. Sensors 21(4):1475","journal-title":"Sensors"},{"key":"1513_CR23","doi-asserted-by":"crossref","unstructured":"Li R, Shi D, Zhang Y, Li R, Wang M (2021) Asynchronous event feature generation and tracking based on gradient descriptor for event cameras. Int J Adv Robot Syst 18(4). https:\/\/doi.org\/10.1177\/17298814211027028","DOI":"10.1177\/17298814211027028"},{"issue":"6","key":"1513_CR24","doi-asserted-by":"publisher","first-page":"3461","DOI":"10.1109\/TSMC.2022.3225381","volume":"53","author":"Z Zhuang","year":"2023","unstructured":"Zhuang Z, Tao H, Chen Y, Stojanovic V, Paszke W (2023) An optimal iterative learning control approach for linear systems with nonuniform trial lengths under input constraints. 
IEEE Trans Syst Man Cybern: Syst 53(6):3461\u20133473","journal-title":"IEEE Trans Syst Man Cybern: Syst"},{"key":"1513_CR25","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1016\/j.ins.2023.03.070","volume":"634","author":"H Wan","year":"2023","unstructured":"Wan H, Luan X, Stojanovic V, Liu F (2023) Self-triggered finite-time control for discrete-time Markov jump systems. Inf Sci 634:101\u2013121","journal-title":"Inf Sci"},{"key":"1513_CR26","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.126498","volume":"550","author":"X Song","year":"2023","unstructured":"Song X, Wu N, Song S, Zhang Y, Stojanovic V (2023) Bipartite synchronization for cooperative-competitive neural networks with reaction\u2013diffusion terms via dual event-triggered mechanism. Neurocomputing 550:126498","journal-title":"Neurocomputing"},{"key":"1513_CR27","unstructured":"Mohamed A-B et al.(2023) IoT based aerial device to detect and monitor carbon dioxide in an environment. WIPO. https:\/\/patentscope2.wipo.int\/search\/en\/detail.jsf?docId=DE405681734 &_cid=P20-LQBPYR-57176-1. Accessed 16 Oct 2023"},{"issue":"3","key":"1513_CR28","doi-asserted-by":"publisher","first-page":"601","DOI":"10.1007\/s11263-019-01209-w","volume":"128","author":"D Gehrig","year":"2020","unstructured":"Gehrig D, Rebecq H, Gallego G, Scaramuzza D (2020) EKLT: asynchronous photometric feature tracking using events and frames. Int J Comput Vis 128(3):601\u2013618","journal-title":"Int J Comput Vis"},{"key":"1513_CR29","doi-asserted-by":"crossref","unstructured":"Rosten E, Drummond T (2005) Fusing points and lines for high performance tracking. In: IEEE international conference on computer vision (ICCV\u201905), Beijing, China. 
IEEE Press, pp 1508\u20131515","DOI":"10.1109\/ICCV.2005.104"},{"issue":"1","key":"1513_CR30","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s42400-019-0038-7","volume":"2","author":"A Khraisat","year":"2019","unstructured":"Khraisat A, Gondal I, Vamplew P, Kamruzzaman J (2019) Survey of intrusion detection systems: techniques, datasets and challenges. Cybersecurity 2(1):1\u201322","journal-title":"Cybersecurity"},{"issue":"2","key":"1513_CR31","doi-asserted-by":"publisher","first-page":"142","DOI":"10.1177\/0278364917691115","volume":"36","author":"E Mueggler","year":"2017","unstructured":"Mueggler E, Rebecq H, Gallego G, Delbruck T, Scaramuzza D (2017) The event-camera dataset and simulator: event-based data for pose estimation, visual odometry, and slam. Int J Robot Res 36(2):142\u2013149","journal-title":"Int J Robot Res"},{"issue":"10","key":"1513_CR32","doi-asserted-by":"publisher","first-page":"2402","DOI":"10.1109\/TPAMI.2017.2769655","volume":"40","author":"G Gallego","year":"2017","unstructured":"Gallego G, Lund JEA, Mueggler E, Rebecq H, Delbruck T, Scaramuzza D (2017) Event-based, 6-DOF camera tracking from photometric depth maps. IEEE Trans Pattern Anal Mach Intell 40(10):2402\u20132412","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"2","key":"1513_CR33","doi-asserted-by":"publisher","first-page":"249","DOI":"10.1109\/TRO.2016.2623335","volume":"33","author":"C Forster","year":"2016","unstructured":"Forster C, Zhang Z, Gassner M, Werlberger M, Scaramuzza D (2016) SVO: semidirect visual odometry for monocular and multicamera systems. 
IEEE Trans Robot 33(2):249\u2013265","journal-title":"IEEE Trans Robot"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01513-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01513-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01513-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,14]],"date-time":"2024-09-14T15:19:13Z","timestamp":1726327153000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01513-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,24]]},"references-count":33,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["1513"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01513-0","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,24]]},"assertion":[{"value":"29 November 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 May 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 June 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no conflict interests or personal relationships that could have appeared to influence the work 
reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}