{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T02:08:15Z","timestamp":1740103695597,"version":"3.37.3"},"reference-count":13,"publisher":"Wiley","license":[{"start":{"date-parts":[[2020,3,28]],"date-time":"2020-03-28T00:00:00Z","timestamp":1585353600000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61633008","51609046"],"award-info":[{"award-number":["61633008","51609046"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61633008","51609046"],"award-info":[{"award-number":["61633008","51609046"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Complexity"],"published-print":{"date-parts":[[2020,3,28]]},"abstract":"<jats:p>Deep learning-based visual odometry systems have shown promising performance compared with geometric-based visual odometry systems. In this paper, we propose a new framework of deep neural network, named Deep Siamese convolutional neural network (DSCNN), and design a DL-based monocular VO relying on DSCNN. The proposed DSCNN-VO not only considers positive order information of image sequence but also focuses on the reverse order information. It employs supervised data-driven training without relying on any modules in traditional visual odometry algorithm to make the DSCNN to learn the geometry information between consecutive images and estimate a six-DoF pose and recover trajectory using a monocular camera. After the DSCNN is trained, the output of DSCNN-VO is a relative pose. 
Then, the trajectory is recovered by translating the relative pose to the absolute pose. Finally, through experiments we demonstrate that the proposed DSCNN-VO achieves more accurate performance than other DL-based VO systems in terms of pose estimation and trajectory recovery. Meanwhile, we discuss the loss function of the DSCNN and find the best scale factor to balance the translation error and the rotation error.<\/jats:p>","DOI":"10.1155\/2020\/6367273","type":"journal-article","created":{"date-parts":[[2020,3,28]],"date-time":"2020-03-28T23:31:02Z","timestamp":1585438262000},"page":"1-13","source":"Crossref","is-referenced-by-count":5,"title":["Monocular VO Based on Deep Siamese Convolutional Neural Network"],"prefix":"10.1155","volume":"2020","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1187-0135","authenticated-orcid":true,"given":"Hongjian","family":"Wang","sequence":"first","affiliation":[{"name":"College of Automation, Harbin Engineering University, Harbin 150001, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8271-9040","authenticated-orcid":true,"given":"Xicheng","family":"Ban","sequence":"additional","affiliation":[{"name":"College of Automation, Harbin Engineering University, Harbin 150001, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6439-6660","authenticated-orcid":true,"given":"Fuguang","family":"Ding","sequence":"additional","affiliation":[{"name":"College of Automation, Harbin Engineering University, Harbin 150001, China"}]},{"given":"Yao","family":"Xiao","sequence":"additional","affiliation":[{"name":"College of Automation, Harbin Engineering University, Harbin 150001, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8028-7894","authenticated-orcid":true,"given":"Jiajia","family":"Zhou","sequence":"additional","affiliation":[{"name":"College of Automation, Harbin Engineering University, Harbin 150001, 
China"}]}],"member":"311","reference":[{"key":"2","doi-asserted-by":"publisher","DOI":"10.1109\/mra.2011.943233"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2016.2577031"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1109\/mra.2012.2182810"},{"year":"2005","key":"7"},{"year":"2004","key":"12"},{"issue":"3","key":"14","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1016\/j.cviu.2007.09.014","volume":"110","year":"2008","journal-title":"Computer Vision and Image Understanding"},{"key":"17","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2007.1049"},{"key":"20","doi-asserted-by":"publisher","DOI":"10.1109\/tro.2015.2463671"},{"key":"23","doi-asserted-by":"publisher","DOI":"10.1109\/tro.2008.2004829"},{"key":"36","doi-asserted-by":"publisher","DOI":"10.1109\/lra.2015.2505717"},{"key":"40","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"issue":"1","key":"41","first-page":"1929","volume":"15","year":"2014","journal-title":"The Journal of Machine Learning 
Research"}],"container-title":["Complexity"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/6367273.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/6367273.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2020\/6367273.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2020,3,28]],"date-time":"2020-03-28T23:31:08Z","timestamp":1585438268000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/complexity\/2020\/6367273\/"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,3,28]]},"references-count":13,"alternative-id":["6367273","6367273"],"URL":"https:\/\/doi.org\/10.1155\/2020\/6367273","relation":{},"ISSN":["1076-2787","1099-0526"],"issn-type":[{"type":"print","value":"1076-2787"},{"type":"electronic","value":"1099-0526"}],"subject":[],"published":{"date-parts":[[2020,3,28]]}}}