{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T06:11:44Z","timestamp":1771913504615,"version":"3.50.1"},"reference-count":53,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2024,3,31]],"date-time":"2024-03-31T00:00:00Z","timestamp":1711843200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera\u2019s field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes. To address these issues, this paper combines the utilization of image masks to mitigate abrupt changes in global feature values and the application of neural networks to tackle the modeling challenge posed by translational motion. By employing masks at various locations within the image, multiple estimations of rotation corresponding to the motion of each selected region can be obtained. Our contribution lies in offering a rapid method for implementing numerous masks on the image with real-time inference speed, rendering it suitable for embedded robot applications. Extensive experiments have been conducted on both real-world and synthetic datasets generated using Blender. The results obtained validate the accuracy, robustness, and real-time performance of the proposed method compared to a state-of-the-art method.<\/jats:p>","DOI":"10.3390\/s24072246","type":"journal-article","created":{"date-parts":[[2024,3,31]],"date-time":"2024-03-31T13:32:56Z","timestamp":1711891976000},"page":"2246","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A Multilayer Perceptron-Based Spherical Visual Compass Using Global Features"],"prefix":"10.3390","volume":"24","author":[{"given":"Yao","family":"Du","sequence":"first","affiliation":[{"name":"Universit\u00e9 Bourgogne, 21000 Dijon, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1730-4233","authenticated-orcid":false,"given":"Carlos","family":"Mateo","sequence":"additional","affiliation":[{"name":"ICB UMR CNRS 6303, Universit\u00e9 Bourgogne, 21000 Dijon, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1015-8878","authenticated-orcid":false,"given":"Omar","family":"Tahri","sequence":"additional","affiliation":[{"name":"ICB UMR CNRS 6303, Universit\u00e9 Bourgogne, 21000 Dijon, France"}]}],"member":"1968","published-online":{"date-parts":[[2024,3,31]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11427-018-9402-9","article-title":"Uncalibrated downward-looking UAV visual compass based on clustered point features","volume":"62","author":"Liu","year":"2019","journal-title":"Sci. China Inf. Sci."},{"key":"ref_2","unstructured":"Anderson, P., and Hengst, B. (2014). Proceedings of the RoboCup 2013: Robot World Cup XVII 17, Springer."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Liu, Y., Tao, J., Kong, D., Zhang, Y., and Li, P. (2022). A Visual Compass Based on Point and Line Features for UAV High-Altitude Orientation Estimation. Remote Sens., 14.","DOI":"10.3390\/rs14061430"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"865","DOI":"10.1017\/S026357471100110X","article-title":"Combined visual odometry and visual compass for off-road mobile robots localization","volume":"30","author":"Gonzalez","year":"2012","journal-title":"Robotica"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"17","DOI":"10.1007\/s10514-010-9183-2","article-title":"Unmanned aerial vehicles UAVs attitude, height, motion estimation and control using visual systems","volume":"29","author":"Campoy","year":"2010","journal-title":"Auton. Robot."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"eabe1315","DOI":"10.1126\/scirobotics.abe1315","article-title":"Visual-inertial hand motion tracking with robustness against occlusion, interference, and contact","volume":"6","author":"Lee","year":"2021","journal-title":"Sci. Robot."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"922","DOI":"10.1109\/JSYST.2014.2320639","article-title":"Navigation assistance for the visually impaired using RGB-D sensor with range expansion","volume":"10","author":"Aladren","year":"2014","journal-title":"IEEE Syst. J."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Guo, R., Peng, K., Zhou, D., and Liu, Y. (2019). Robust visual compass using hybrid features for indoor environments. Electronics, 8.","DOI":"10.3390\/electronics8020220"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"120390","DOI":"10.1016\/j.eswa.2023.120390","article-title":"A real-time visual compass from two planes for indoor unmanned aerial vehicles (UAVs)","volume":"229","author":"Wang","year":"2023","journal-title":"Expert Syst. Appl."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"588","DOI":"10.1108\/IR-03-2016-0103","article-title":"Performance improvement of visual-inertial navigation system by using polarized light compass","volume":"43","author":"Kong","year":"2016","journal-title":"Ind. Robot. Int. J."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"5646","DOI":"10.1109\/JSEN.2017.2725938","article-title":"Polarized light compass-aided visual-inertial navigation under foliage environment","volume":"17","author":"Wang","year":"2017","journal-title":"IEEE Sens. J."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"055111","DOI":"10.1088\/1361-6501\/ac4637","article-title":"Polarized light-aided visual-inertial navigation system: Global heading measurements and graph optimization-based multi-sensor fusion","volume":"33","author":"Xia","year":"2022","journal-title":"Meas. Sci. Technol."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1015","DOI":"10.1109\/TRO.2008.2004490","article-title":"Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles","volume":"24","author":"Scaramuzza","year":"2008","journal-title":"IEEE Trans. Robot."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"913","DOI":"10.1002\/rob.20159","article-title":"The visual compass: Performance and limitations of an appearance-based method","volume":"23","author":"Labrosse","year":"2006","journal-title":"J. Field Robot."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1016\/j.aei.2015.10.005","article-title":"Combining visual natural markers and IMU for improved AR based indoor navigation","volume":"31","author":"Neges","year":"2017","journal-title":"Adv. Eng. Inform."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Hildebrandt, M., and Kirchner, F. (2010, January 24\u201327). Imu-aided stereo visual odometry for ground-tracking auv applications. Proceedings of the OCEANS\u201910 IEEE SYDNEY, Sydney, NSW, Australia.","DOI":"10.1109\/OCEANSSYD.2010.5603681"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1155","DOI":"10.1016\/j.patrec.2004.03.010","article-title":"A new method of camera pose estimation using 2D\u20133D corner correspondence","volume":"25","author":"Shi","year":"2004","journal-title":"Pattern Recognit. Lett."},{"key":"ref_18","first-page":"355","article-title":"Pose estimation and map building with a time-of-flight-camera for robot navigation","volume":"5","author":"Prusak","year":"2008","journal-title":"Int. J. Intell. Syst. Technol. Appl."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Yang, A., Beheshti, M., Hudson, T.E., Vedanthan, R., Riewpaiboon, W., Mongkolwat, P., Feng, C., and Rizzo, J.R. (2022). UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision. Sensors, 22.","DOI":"10.3390\/s22228894"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"789","DOI":"10.1109\/TMECH.2014.2305916","article-title":"Virtual visual servoing for multicamera pose estimation","volume":"20","author":"Assa","year":"2014","journal-title":"IEEE\/ASME Trans. Mechatron."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"9017","DOI":"10.3182\/20110828-6-IT-1002.02970","article-title":"Virtual visual servoing for real-time robot pose estimation","volume":"44","author":"Gratal","year":"2011","journal-title":"IFAC Proc. Vol."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1774","DOI":"10.1080\/00207721.2023.2210132","article-title":"Image-based prescribed performance visual servoing control of a QUAV with hysteresis quantised input","volume":"54","author":"Chen","year":"2023","journal-title":"Int. J. Syst. Sci."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"81822","DOI":"10.1109\/ACCESS.2020.2990996","article-title":"Holistic descriptors of omnidirectional color images and their performance in estimation of position and orientation","volume":"8","author":"Amoros","year":"2020","journal-title":"IEEE Access"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.robot.2016.12.001","article-title":"Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots","volume":"89","author":"Fleer","year":"2017","journal-title":"Robot. Auton. Syst."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Berganski, C., Hoffmann, A., and M\u00f6ller, R. (2023). Tilt Correction of Panoramic Images for a Holistic Visual Homing Method with Planar-Motion Assumption. Robotics, 12.","DOI":"10.3390\/robotics12010020"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1625","DOI":"10.2514\/1.G000977","article-title":"Extended Kalman filter for spacecraft pose estimation using dual quaternions","volume":"38","author":"Filipe","year":"2015","journal-title":"J. Guid. Control. Dyn."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Alatise, M.B., and Hancke, G.P. (2017). Pose estimation of a mobile robot based on fusion of IMU data and vision data using an extended Kalman filter. Sensors, 17.","DOI":"10.3390\/s17102164"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Tran, H.T., Vo, T.C., Tran, D.L., Nguyen, Q.N., Ha, D.M., Pham, Q.N., Le, T.Q., Nguyen, T.K., Do, H.T., and Nguyen, M.T. (2022, January 14\u201316). Extended Kalman filter (EKF) based localization algorithms for mobile robots utilizing vision and odometry. Proceedings of the 2022 IEEE 21st Mediterranean Electrotechnical Conference (MELECON), Palermo, Italy.","DOI":"10.1109\/MELECON53508.2022.9843066"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"741","DOI":"10.1016\/S0262-8856(00)00108-6","article-title":"Continuity properties of the appearance manifold for mobile robot position estimation","volume":"19","author":"Crowley","year":"2001","journal-title":"Image Vis. Comput."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"80","DOI":"10.1109\/MRA.2011.943233","article-title":"Visual odometry [tutorial]","volume":"18","author":"Scaramuzza","year":"2011","journal-title":"IEEE Robot. Autom. Mag."},{"key":"ref_31","unstructured":"Montiel, J.M., and Davison, A.J. (2006, January 15\u201319). A visual compass based on SLAM. Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL, USA."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Huang, H., and Yeung, S.K. (2022, January 23\u201327). 360vo: Visual odometry using a single 360 camera. Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA.","DOI":"10.1109\/ICRA46639.2022.9812203"},{"key":"ref_33","unstructured":"Shavit, Y., and Ferens, R. (2019). Introduction to camera pose estimation with deep learning. arXiv Prep."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kim, D., Pathak, S., Moro, A., Komatsu, R., Yamashita, A., and Asama, H. (2019, January 12\u201317). E-CNN: Accurate spherical camera rotation estimation via uniformization of distorted optical flow fields. Proceedings of the ICASSP 2019\u20142019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682203"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Horst, M., and M\u00f6ller, R. (2017). Visual place recognition for autonomous mobile robots. Robotics, 6.","DOI":"10.3390\/robotics6020009"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Andr\u00e9, A.N., and Caron, G. (2022, January 18\u201324). Photometric Visual Gyroscope for Full-View Spherical Camera. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPRW56347.2022.00571"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"1170","DOI":"10.1109\/TPAMI.2006.150","article-title":"Rotation recovery from spherical images without correspondences","volume":"28","author":"Makadia","year":"2006","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1899","DOI":"10.1109\/TPAMI.2010.107","article-title":"Robust FFT-based scale-invariant image registration with image gradients","volume":"32","author":"Tzimiropoulos","year":"2010","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Du, Y., Tahri, O., and Hadj-Abdelkader, H. (2020, January 13\u201315). An improved method for Rotation Estimation Using Photometric Spherical Moments. Proceedings of the 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), Shenzhen, China.","DOI":"10.1109\/ICARCV50220.2020.9305374"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Du, Y., Mateo, C.M., and Tahri, O. (2022, January 11\u201313). Robot Rotation Estimation Using Spherical Moments in Neural Networks. Proceedings of the 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA.","DOI":"10.1109\/AIPR57179.2022.10092205"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Fekri-Ershad, S., and Alsaffar, M.F. (2023). Developing a tuned three-layer perceptron fed with trained deep convolutional neural networks for cervical cancer diagnosis. Diagnostics, 13.","DOI":"10.3390\/diagnostics13040686"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Yu, T., Li, X., Cai, Y., Sun, M., and Li, P. (2022, January 3\u20138). S2-mlp: Spatial-shift mlp architecture for vision. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV51458.2022.00367"},{"key":"ref_43","first-page":"24261","article-title":"MLP-Mixer: An all-MLP Architecture for Vision","volume":"Volume 34","author":"Ranzato","year":"2021","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"35479","DOI":"10.1109\/ACCESS.2023.3266093","article-title":"Object detection using deep learning, CNNs and vision transformers: A review","volume":"11","author":"Amjoud","year":"2023","journal-title":"IEEE Access"},{"key":"ref_45","unstructured":"Couturier, A. (2022, February 15). SceneCity. Available online: https:\/\/www.cgchan.com\/store\/scenecity."},{"key":"ref_46","unstructured":"Zhang, Z., Rebecq, H., Forster, C., and Scaramuzza, D. (2016, January 16\u201321). Benefit of large field-of-view cameras for visual odometry. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Courbon, J., Mezouar, Y., Eckt, L., and Martinet, P. (November, January 29). A generic fisheye camera model for robotic applications. Proceedings of the 2007 IEEE\/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA.","DOI":"10.1109\/IROS.2007.4399233"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Hadj-Abdelkader, H., Tahri, O., and Benseddik, H.E. (2019). Rotation estimation: A closed-form solution using spherical moments. Sensors, 19.","DOI":"10.3390\/s19224958"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/BF02289451","article-title":"A generalized solution of the orthogonal procrustes problem","volume":"31","year":"1966","journal-title":"Psychometrika"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Hadj-Abdelkader, H., Tahri, O., and Benseddik, H.E. (2018, January 1\u20135). Closed form solution for rotation estimation using photometric spherical moments. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8593920"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1226","DOI":"10.1109\/TRO.2018.2830379","article-title":"A direct dense visual servoing approach using photometric moments","volume":"34","author":"Bakthavatchalam","year":"2018","journal-title":"IEEE Trans. Robot."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1037","DOI":"10.1177\/0278364920915248","article-title":"PanoraMIS: An ultra-wide field of view image dataset for vision-based robot-motion estimation","volume":"39","author":"Benseddik","year":"2020","journal-title":"Int. J. Robot. Res."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"688","DOI":"10.1109\/LRA.2017.2650150","article-title":"Phase correlation for dense visual compass from omnidirectional camera-robot images","volume":"2","author":"Morbidi","year":"2017","journal-title":"IEEE Robot. Autom. Lett."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/7\/2246\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:21:53Z","timestamp":1760106113000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/7\/2246"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,31]]},"references-count":53,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,4]]}},"alternative-id":["s24072246"],"URL":"https:\/\/doi.org\/10.3390\/s24072246","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,31]]}}}