{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,29]],"date-time":"2025-10-29T01:10:41Z","timestamp":1761700241913,"version":"build-2065373602"},"reference-count":37,"publisher":"Institution of Engineering and Technology (IET)","issue":"10","license":[{"start":{"date-parts":[[2024,6,2]],"date-time":"2024-06-02T00:00:00Z","timestamp":1717286400000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004826","name":"Natural Science Foundation of Beijing Municipality","doi-asserted-by":"publisher","award":["L231004","3222016"],"award-info":[{"award-number":["L231004","3222016"]}],"id":[{"id":"10.13039\/501100004826","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62103035"],"award-info":[{"award-number":["62103035"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"publisher","award":["2021M690337"],"award-info":[{"award-number":["2021M690337"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["ietresearch.onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["IET Image Processing"],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The precise perception of the surrounding environment in traffic scenes is an important part of an intelligent transportation system. The event camera could provide complementary information to traditional frame\u2010based cameras, such as high dynamic range, and high time resolution, in the perception of traffic targets. 
To improve the precision and reliability of perception, and to allow many RGB camera\u2010based methods to be applied directly to event cameras, a refined registration method for event\u2010based and RGB cameras based on pixel\u2010level region segmentation is proposed, providing fusion at the pixel level. The experimental dataset comprises eight sequences and 260 typical traffic scenes, both selected from DSEC, a traffic event\u2010based dataset. The registered event image shows better visual spatial consistency with RGB images. Compared to the baseline, evaluation indicators such as contrast, the proportion of overlapping pixels, and average registration accuracy are improved. In the traffic object segmentation task, the average boundary displacement error of our method decreases, with a maximum reduction of 79.665% relative to the boundary displacement error between the ground truth and the baseline. These results indicate prospective applications of combined event and RGB cameras in the perception of intelligent transportation systems. 
The traffic dataset with pixel\u2010level semantic annotations will be provided\u00a0soon.<\/jats:p>","DOI":"10.1049\/ipr2.13131","type":"journal-article","created":{"date-parts":[[2024,6,3]],"date-time":"2024-06-03T02:11:15Z","timestamp":1717380675000},"page":"2732-2744","update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["RRER: A refined registration method based on contrast minimum for event and RGB cameras"],"prefix":"10.1049","volume":"18","author":[{"given":"Shijie","family":"Zhang","sequence":"first","affiliation":[{"name":"School of Automation and Intelligence Beijing Jiaotong University Beijing China"}]},{"given":"Tao","family":"Tang","sequence":"additional","affiliation":[{"name":"School of Automation and Intelligence Beijing Jiaotong University Beijing China"}]},{"given":"Fan","family":"Sang","sequence":"additional","affiliation":[{"name":"School of Automation and Intelligence Beijing Jiaotong University Beijing China"}]},{"given":"Xuan","family":"Pei","sequence":"additional","affiliation":[{"name":"School of Automation and Intelligence Beijing Jiaotong University Beijing China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6992-5269","authenticated-orcid":false,"given":"Taogang","family":"Hou","sequence":"additional","affiliation":[{"name":"School of Automation and Intelligence Beijing Jiaotong University Beijing China"}]}],"member":"265","published-online":{"date-parts":[[2024,6,2]]},"reference":[{"key":"e_1_2_11_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-017-9338-1"},{"key":"e_1_2_11_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2020.3033569"},{"key":"e_1_2_11_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2020.2972912"},{"key":"e_1_2_11_5_1","doi-asserted-by":"crossref","unstructured":"Sakaridis C. Dai D. Hecker S. Van.Gool L.:Model adaptation with synthetic and real data for semantic dense foggy scene understanding. 
In:Proceedings of the European Conference on Computer Vision (ECCV) pp.687\u2013704.Springer Cham(2018)","DOI":"10.1007\/978-3-030-01261-8_42"},{"key":"e_1_2_11_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2012.2202229"},{"key":"e_1_2_11_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2020.2972974"},{"key":"e_1_2_11_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3008413"},{"key":"e_1_2_11_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2022.3149370"},{"key":"e_1_2_11_10_1","unstructured":"Binas J. Neil D. Liu S.C. Delbruck T.:DDD17: end\u2010to\u2010end Davis driving dataset. arXiv:171101458 (2017)"},{"key":"e_1_2_11_11_1","doi-asserted-by":"crossref","unstructured":"Hu Y. Binas J. Neil D. Liu S.C. Delbruck T.:DDD20 end\u2010to\u2010end event camera driving dataset: fusing frames and events with deep learning for improved steering prediction. In:2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) pp.1\u20136.IEEE Piscataway NJ(2020)","DOI":"10.1109\/ITSC45102.2020.9294515"},{"key":"e_1_2_11_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2018.2800793"},{"key":"e_1_2_11_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2021.3068942"},{"key":"e_1_2_11_14_1","doi-asserted-by":"crossref","unstructured":"Cordts M. Omran M. Ramos S. Rehfeld T. Schiele B.:The cityscapes dataset for semantic urban scene understanding. In:2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.3213\u20133223.IEEE Piscataway NJ(2016)","DOI":"10.1109\/CVPR.2016.350"},{"key":"e_1_2_11_15_1","doi-asserted-by":"crossref","unstructured":"Zhang J. Yang K. Stiefelhagen R.:Issafe: Improving semantic segmentation in accidents by fusing event\u2010based data. 
In:2021 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS) pp.1132\u20131139.IEEE Piscataway NJ(2021)","DOI":"10.1109\/IROS51168.2021.9636109"},{"key":"e_1_2_11_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2021.3134828"},{"key":"e_1_2_11_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-021-1930-8"},{"key":"e_1_2_11_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2021.3059674"},{"key":"e_1_2_11_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2022.3146087"},{"key":"e_1_2_11_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSSC.2014.2342715"},{"key":"e_1_2_11_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2021.3068942"},{"key":"e_1_2_11_22_1","doi-asserted-by":"crossref","unstructured":"Tulyakov S. Gehrig D. Georgoulis S. Erbach J. Gehrig M. Li Y. et\u00a0al.:TimeLens: event\u2010based video frame interpolation. pp.16155\u201316164.IEEE Piscataway NJ(2021)","DOI":"10.1109\/CVPR46437.2021.01589"},{"key":"e_1_2_11_23_1","doi-asserted-by":"crossref","unstructured":"Xu L. Xu W. Golyanik V. Habermann M. Fang L. Theobalt C.:EventCap: monocular 3D capture of high\u2010speed human motions using an event camera. In:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition pp.4967\u20134977.IEEE Piscataway NJ(2020)","DOI":"10.1109\/CVPR42600.2020.00502"},{"key":"e_1_2_11_24_1","doi-asserted-by":"crossref","unstructured":"Alonso I. Murillo A.C.:EV\u2010SegNet: semantic segmentation for event\u2010based cameras. In:Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops pp.1624\u20131633.IEEE Piscataway NJ(2019)","DOI":"10.1109\/CVPRW.2019.00205"},{"key":"e_1_2_11_25_1","doi-asserted-by":"publisher","DOI":"10.1177\/0278364917691115"},{"key":"e_1_2_11_26_1","doi-asserted-by":"crossref","unstructured":"Nehvi J. Golyanik V. Mueller F. Seidel H.P. Elgharib M. 
Theobalt C.:Differentiable event stream simulator for non\u2010rigid 3D tracking.IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops pp.1302\u20131311(2021)","DOI":"10.1109\/CVPRW53098.2021.00143"},{"key":"e_1_2_11_27_1","doi-asserted-by":"crossref","unstructured":"Korman S. Reichman D. Tsur G. Avidan S.:FasT\u2010match: Fast affine template matching. In:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition pp.2331\u20132338.IEEE Piscataway NJ(2013)","DOI":"10.1109\/CVPR.2013.302"},{"key":"e_1_2_11_28_1","doi-asserted-by":"crossref","unstructured":"Lowe D.G.:Object recognition from local scale\u2010invariant features. In:Proceedings of the Seventh IEEE International Conference on Computer Vision vol.2 pp.1150\u20131157.IEEE Piscataway NJ(1999)","DOI":"10.1109\/ICCV.1999.790410"},{"key":"e_1_2_11_29_1","doi-asserted-by":"crossref","unstructured":"Bay H. Tuytelaars T. Gool L.V.:SURF: Speeded Up Robust Features. In:European Conference on Computer Vision\u2010ECCV 2006 pp.404\u2013417.Springer Cham(2006)","DOI":"10.1007\/11744023_32"},{"key":"e_1_2_11_30_1","doi-asserted-by":"crossref","unstructured":"Alahi A. Ortiz R. Vandergheynst P.:FREAK: fast retina keypoint. In:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition pp.510\u2013517.IEEE Piscataway NJ(2012)","DOI":"10.1109\/CVPR.2012.6247715"},{"key":"e_1_2_11_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2015.2470598"},{"key":"e_1_2_11_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2018.2849882"},{"key":"e_1_2_11_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2021.3060707"},{"key":"e_1_2_11_34_1","doi-asserted-by":"crossref","unstructured":"Zhu A.Z. Yuan L. Chaney K. 
Daniilidis K.:Live demonstration: unsupervised event\u2010based learning of optical flow depth and egomotion.IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops pp.1694\u20131694.IEEE Piscataway NJ(2019)","DOI":"10.1109\/CVPRW.2019.00216"},{"key":"e_1_2_11_35_1","doi-asserted-by":"crossref","unstructured":"Nag S.:Image registration techniques: a survey. arXiv:1712.07540 (2017)","DOI":"10.31224\/osf.io\/rv65c"},{"key":"e_1_2_11_36_1","doi-asserted-by":"crossref","unstructured":"Liu D. Parra A. Chin T.J.:Globally optimal contrast maximisation for event\u2010based motion estimation. In:2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition pp.6348\u20136357.IEEE Piscataway NJ(2020)","DOI":"10.1109\/CVPR42600.2020.00638"},{"key":"e_1_2_11_37_1","doi-asserted-by":"crossref","unstructured":"Zhang S. Sang F. Li J. Tang T. Zhang J. Hou T.:Ercm: Bionic event\u2010based registration method based on contrast minimum for intelligent unmanned systems. In:2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC) pp.734\u2013739.IEEE Piscataway NJ(2022)","DOI":"10.1109\/YAC57282.2022.10023860"},{"key":"e_1_2_11_38_1","unstructured":"Tang L. Li B. Zhong Y. Ding S. Song M.:Disentangled high quality salient object detection. 
In:Proceedings of the IEEE\/CVF International Conference on Computer Vision pp.3580\u20133590.IEEE Piscataway NJ(2021)"}],"container-title":["IET Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.13131","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,28]],"date-time":"2025-10-28T21:15:22Z","timestamp":1761686122000},"score":1,"resource":{"primary":{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/10.1049\/ipr2.13131"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,2]]},"references-count":37,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["10.1049\/ipr2.13131"],"URL":"https:\/\/doi.org\/10.1049\/ipr2.13131","archive":["Portico"],"relation":{},"ISSN":["1751-9659","1751-9667"],"issn-type":[{"type":"print","value":"1751-9659"},{"type":"electronic","value":"1751-9667"}],"subject":[],"published":{"date-parts":[[2024,6,2]]},"assertion":[{"value":"2023-09-14","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-05-09","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-02","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}