{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T17:54:20Z","timestamp":1775325260435,"version":"3.50.1"},"reference-count":35,"publisher":"MDPI AG","issue":"17","license":[{"start":{"date-parts":[[2019,8,23]],"date-time":"2019-08-23T00:00:00Z","timestamp":1566518400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in estimating depth to stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras as well as the camera motion parameters serve as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provides better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.<\/jats:p>","DOI":"10.3390\/rs11171990","type":"journal-article","created":{"date-parts":[[2019,8,26]],"date-time":"2019-08-26T04:38:23Z","timestamp":1566794303000},"page":"1990","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":35,"title":["Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: A Computer Vision Approach"],"prefix":"10.3390","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7504-7581","authenticated-orcid":false,"given":"Mostafa","family":"Mansour","sequence":"first","affiliation":[{"name":"Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland"},{"name":"Department of Information and Navigation Systems, ITMO University, 197101 St. Petersburg, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2617-3156","authenticated-orcid":false,"given":"Pavel","family":"Davidson","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3640-3760","authenticated-orcid":false,"given":"Oleg","family":"Stepanov","sequence":"additional","affiliation":[{"name":"Department of Information and Navigation Systems, ITMO University, 197101 St. Petersburg, Russia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1158-6951","authenticated-orcid":false,"given":"Robert","family":"Pich\u00e9","sequence":"additional","affiliation":[{"name":"Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland"}]}],"member":"1968","published-online":{"date-parts":[[2019,8,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"389","DOI":"10.1016\/0042-6989(94)00176-M","article-title":"Measurement and modeling of depth cue combination: In defense of weak fusion","volume":"35","author":"Landy","year":"1995","journal-title":"Vis. Res."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Smolyanskiy, N., Kamenev, A., and Birchfield, S. (2018, January 18\u201322). On the importance of stereo for accurate depth estimation: An efficient semi-supervised deep neural network approach. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00147"},{"key":"ref_3","unstructured":"Saxena, A., Schulte, J., and Ng, A.Y. (2007, January 6\u201312). Depth Estimation Using Monocular and Stereo Cues. Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"3457","DOI":"10.1016\/0042-6989(96)00072-7","article-title":"The interaction of binocular disparity and motion parallax in the computation of depth","volume":"36","author":"Bradshaw","year":"1996","journal-title":"Vis. Res."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"679","DOI":"10.1037\/0096-1523.21.3.679","article-title":"Comparing depth from motion with depth from binocular disparity","volume":"21","author":"Durgin","year":"1995","journal-title":"J. Exp. Psychol. Hum. Percept. Perform."},{"key":"ref_6","first-page":"199","article-title":"Depth and motion perceptions produced by motion parallax","volume":"33","author":"Ono","year":"2006","journal-title":"Teach. Psychol."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1167\/10.10.5","article-title":"The precision of binocular and monocular depth judgments in natural settings","volume":"10","author":"McKee","year":"2010","journal-title":"J. Vis."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1786","DOI":"10.1016\/j.visres.2010.05.035","article-title":"A new binocular cue for absolute distance: Disparity relative to the most distant structure","volume":"50","author":"Sousa","year":"2010","journal-title":"Vis. Res."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Hartley, R., and Zisserman, A. (2003). Multiple View Geometry in Computer Vision, Cambridge University Press.","DOI":"10.1017\/CBO9780511811685"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"435","DOI":"10.1080\/15599610802438680","article-title":"Review of stereo vision algorithms: From software to hardware","volume":"2","author":"Lazaros","year":"2008","journal-title":"Int. J. Optomechatron."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"8742920","DOI":"10.1155\/2016\/8742920","article-title":"Literature survey on stereo vision disparity map algorithms","volume":"2016","author":"Hamzah","year":"2016","journal-title":"J. Sens."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Vishnyakov, B.V., Vizilter, Y.V., Knyaz, V.A., Malin, I.K., Vygolov, O.V., and Zheltov, S.Y. (2015, January 21\u201325). Stereo sequences analysis for dynamic scene understanding in a driver assistance system. Proceedings of the Automated Visual Inspection and Machine Vision, Munich, Germany.","DOI":"10.1117\/12.2184849"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"695","DOI":"10.14358\/PERS.77.7.695","article-title":"A triangulation-based hierarchical image matching method for wide-baseline images","volume":"77","author":"Wu","year":"2011","journal-title":"Photogramm. Eng. Remote Sens."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1007\/s11370-014-0146-x","article-title":"3D reconstruction and classification of natural environments by an autonomous vehicle using multi-baseline stereo","volume":"7","author":"Milella","year":"2014","journal-title":"Intell. Serv. Robot."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1007\/s11554-012-0313-2","article-title":"Review of stereo vision algorithms and their suitability for resource-limited systems","volume":"11","author":"Tippetts","year":"2016","journal-title":"J. Real-Time Image Process."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Kyt\u00f6, M., Nuutinen, M., and Oittinen, P. (2011, January 24\u201327). Method for measuring stereo camera depth accuracy based on stereoscopic vision. Proceedings of the SPIE Three-Dimensional Imaging, Interaction, and Measurement Conference, San Francisco, CA, USA.","DOI":"10.1117\/12.872015"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Sabattini, L., Levratti, A., Venturi, F., Amplo, E., Fantuzzi, C., and Secchi, C. (2012, January 5\u20137). Experimental comparison of 3D vision sensors for mobile robot localization for industrial application: Stereo-camera and RGB-D sensor. Proceedings of the 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV), Guangzhou, China.","DOI":"10.1109\/ICARCV.2012.6485264"},{"key":"ref_18","first-page":"1","article-title":"Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs","volume":"17","author":"Ortiz","year":"2018","journal-title":"Electron. Lett. Comput. Vis. Image Anal."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1007\/s00422-008-0224-2","article-title":"Motion parallax contribution to perception of self-motion and depth","volume":"98","author":"Hanes","year":"2008","journal-title":"Biol. Cybern."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1016\/j.visres.2015.07.002","article-title":"Motion parallax thresholds for unambiguous depth perception","volume":"115","author":"Holmin","year":"2015","journal-title":"Vis. Res."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"136","DOI":"10.1038\/scientificamerican0779-136","article-title":"The visual perception of motion in depth","volume":"241","author":"Regan","year":"1979","journal-title":"Sci. Am."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1296","DOI":"10.1364\/JOSA.55.001296","article-title":"Static and dynamic visual fields in human space perception","volume":"55","author":"Gordon","year":"1965","journal-title":"J. Opt. Soc. Am."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Gibson, J.J. (2014). The Ecological Approach to Visual Perception: Classic Edition, Psychology Press.","DOI":"10.4324\/9781315740218"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Gibson, J.J. (1950). The Perception of the Visual World, Houghton Mifflin.","DOI":"10.2307\/1418003"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Grabe, V., Bulthoff, H.H., and Giordano, P.R. (2013, January 3\u20137). A comparison of scale estimation schemes for a quadrotor UAV based on optical flow and IMU measurements. Proceedings of the 2013 IEEE\/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan.","DOI":"10.1109\/IROS.2013.6697107"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Schmid, S., and Fritsch, D. (2017, January 26\u201327). Precision analysis of triangulations using forward-facing vehicle-mounted cameras for augmented reality applications. Proceedings of the Videometrics, Range Imaging, and Applications XIV, Munich, Germany.","DOI":"10.1117\/12.2269716"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"150","DOI":"10.1134\/S2075108717020043","article-title":"Monocular vision-based range estimation supported by proprioceptive motion","volume":"8","author":"Davidson","year":"2017","journal-title":"Gyroscopy Navig."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"111","DOI":"10.1134\/S2075108719030064","article-title":"Depth estimation with egomotion assisted monocular camera","volume":"10","author":"Mansour","year":"2019","journal-title":"Gyroscopy Navig."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Oshman, Y., and Davidson, P. (1996, January 29\u201331). Optimal observer trajectories for passive target localization using bearing-only measurements. Proceedings of the AIAA Guidance, Navigation, and Control Conference, San Diego, CA, USA.","DOI":"10.2514\/6.1996-3740"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"892","DOI":"10.1109\/7.784059","article-title":"Optimization of observer trajectories for bearings-only target localization","volume":"35","author":"Oshman","year":"1999","journal-title":"IEEE Trans. Aerosp. Electron. Syst."},{"key":"ref_31","unstructured":"Kaehler, A., and Bradski, G. (2016). Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library, O\u2019Reilly Media Inc."},{"key":"ref_32","unstructured":"Corke, P. (2011). Robotics, Vision and Control: Fundamental Algorithms in MATLAB\u00ae, Springer-Verlag."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"385","DOI":"10.1098\/rspb.1980.0057","article-title":"The interpretation of a moving retinal image","volume":"208","author":"Prazdny","year":"1980","journal-title":"Proc. R. Soc. Lond. B"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Davidson, P., Mansour, M., Stepanov, O., and Pich\u00e9, R. (2019, January 27\u201329). Depth estimation from motion parallax: Experimental evaluation. Proceedings of the 2019 26th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), St. Petersburg, Russia.","DOI":"10.23919\/ICINS.2019.8769338"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Cooper, M.A., Raquet, J.F., and Patton, R. (2018). Range Information Characterization of the Hokuyo UST-20LX LiDAR Sensor. Photonics, 5.","DOI":"10.3390\/photonics5020012"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/11\/17\/1990\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:13:22Z","timestamp":1760188402000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/11\/17\/1990"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,8,23]]},"references-count":35,"journal-issue":{"issue":"17","published-online":{"date-parts":[[2019,9]]}},"alternative-id":["rs11171990"],"URL":"https:\/\/doi.org\/10.3390\/rs11171990","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,8,23]]}}}