{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,20]],"date-time":"2026-04-20T23:43:51Z","timestamp":1776728631652,"version":"3.51.2"},"reference-count":38,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2020,6,10]],"date-time":"2020-06-10T00:00:00Z","timestamp":1591747200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Robotics Enabling Capabilities and Technologies, grant number RGAST1910, administered by the Agency for Science, Technology and Research","award":["RGAST1910"],"award-info":[{"award-number":["RGAST1910"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Periodic cleaning of all frequently touched social areas such as walls, doors, locks, handles, and windows has become the first line of defense against infectious diseases. Among these tasks, manually cleaning large wall areas is always tedious, time-consuming, and demanding. Although numerous cleaning companies are interested in deploying robotic cleaning solutions, most do not address wall cleaning. To this end, we propose a new vision-based wall following framework that acts as an add-on for any professional robotic platform to perform wall cleaning. The proposed framework uses a Deep Learning (DL) pipeline to visually detect, classify, and segment the wall\/floor surface and instructs the robot to follow the wall to execute the cleaning task. We also summarize the system architecture of the Toyota Human Support Robot (HSR), which served as our testing platform. We evaluated the performance of the proposed framework on the HSR under various defined scenarios. 
Our experimental results indicate that the proposed framework can successfully classify and segment the wall\/floor surface, detect obstacles on the wall and floor with high accuracy, and demonstrate robust wall-following behavior.<\/jats:p>","DOI":"10.3390\/s20113298","type":"journal-article","created":{"date-parts":[[2020,6,10]],"date-time":"2020-06-10T07:13:16Z","timestamp":1591773196000},"page":"3298","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":26,"title":["Vision Based Wall Following Framework: A Case Study With HSR Robot for Cleaning Application"],"prefix":"10.3390","volume":"20","author":[{"given":"Tey Wee","family":"Teng","sequence":"first","affiliation":[{"name":"Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD); Singapore 487372, Singapore"}]},{"given":"Prabakaran","family":"Veerajagadheswar","sequence":"additional","affiliation":[{"name":"Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD); Singapore 487372, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3243-9814","authenticated-orcid":false,"given":"Balakrishnan","family":"Ramalingam","sequence":"additional","affiliation":[{"name":"Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD); Singapore 487372, Singapore"}]},{"given":"Jia","family":"Yin","sequence":"additional","affiliation":[{"name":"Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD); Singapore 487372, Singapore"}]},{"given":"Rajesh","family":"Elara\u00a0Mohan","sequence":"additional","affiliation":[{"name":"Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD); Singapore 487372, Singapore"}]},{"given":"Braulio F\u00e9lix","family":"G\u00f3mez","sequence":"additional","affiliation":[{"name":"Engineering Product Development Pillar, Singapore 
University of Technology and Design (SUTD); Singapore 487372, Singapore"}]}],"member":"1968","published-online":{"date-parts":[[2020,6,10]]},"reference":[{"key":"ref_1","unstructured":"(2020, June 09). Coronavirus Disease 2019 (COVID-19): Situation Report. Available online: https:\/\/www.who.int\/emergencies\/diseases\/novel-coronavirus-2019\/situation."},{"key":"ref_2","unstructured":"Cepolina, F., Michelini, R., Razzoli, R., and Zoppi, M. (2003, January 13\u201315). Gecko, a climbing robot for walls cleaning. Proceedings of the International Workshop on Advances in Service Robotics (ASER03), Bardolino, Italy."},{"key":"ref_3","unstructured":"Graham-Rowe, D. (2020, June 09). Wall-Climbing Robot: A Newly Created Robot Improves upon a Gecko\u2019s Sticking Power. Available online: https:\/\/www.technologyreview.com\/2007\/04\/30\/225854\/wall-climbing-robot\/."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Muthugala, M., Vega-Heredia, M., Mohan, R.E., and Vishaal, S.R. (2020). Design and Control of a Wall Cleaning Robot with Adhesion-Awareness. Symmetry, 12.","DOI":"10.3390\/sym12010122"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Sun, G., Li, X., Li, P., Yue, L., Yu, Z., Zhou, Y., and Liu, Y.H. (2019, January 3\u20138). Adaptive Vision-Based Control for Rope-Climbing Robot Manipulator. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS40897.2019.8967976"},{"key":"ref_6","unstructured":"Bullen, I., Harry, W., and Ranjan, P. (2009). Chaotic Transitions in Wall Following Robots. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Hammad, I., El-Sankary, K., and Gu, J. (2019). A Comparative Study on Machine Learning Algorithms for the Control of a Wall Following Robot. arXiv.","DOI":"10.1109\/ROBIO49542.2019.8961836"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Antoun, S.M., and McKerrow, P.J. (2010). 
Wall following with a single ultrasonic sensor. International Conference on Intelligent Robotics and Applications, Springer.","DOI":"10.1007\/978-3-642-16587-0_13"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Jia, B., Weiguo, F., and Zhu, M. (2015). Obstacle detection in single images with deep neural networks. Signal Image Video Process., 10.","DOI":"10.1007\/s11760-015-0855-4"},{"key":"ref_10","unstructured":"Hua, M., Nan, Y., and Lian, S. (2019, October 27\u2013November 2). Small Obstacle Avoidance Based on RGB-D Semantic Segmentation. Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yu, H., Hong, R., Huang, X., and Wang, Z. (2013, January 28\u201329). Obstacle Detection with Deep Convolutional Neural Network. Proceedings of the 2013 Sixth International Symposium on Computational Intelligence and Design, Hangzhou, China.","DOI":"10.1109\/ISCID.2013.73"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Kore, P., and Khoje, S. (2017, January 15\u201316). Obstacle Detection for Auto-Driving Using Convolutional Neural Network: ICDECT 2017. Proceedings of the 2nd International Conference on Data Engineering and Communication Technology, Pune, Maharashtra, India.","DOI":"10.1007\/978-981-13-1610-4_28"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"845","DOI":"10.3934\/mbe.2020045","article-title":"Convolutional neural network based obstacle detection for unmanned surface vehicle","volume":"17","author":"Ma","year":"2019","journal-title":"Math. Biosci. Eng. MBE"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Bai, J., Lian, S., Liu, Z., Wang, K., and Liu, D. (2019). Deep Learning Based Robot for Automatically Picking up Garbage on the Grass. 
IEEE Transactions on Consumer Electronics, IEEE.","DOI":"10.1109\/TCE.2018.2859629"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ramalingam, B., Lakshmanan, A.K., Ilyas, M., Le, A.V., and Elara, M.R. (2018). Cascaded Machine-Learning Technique for Debris Classification in Floor-Cleaning Robot Application. Appl. Sci., 8.","DOI":"10.3390\/app8122649"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Cauli, N., Vicente, P., Kim, J., Damas, B., Bernardino, A., Cavallo, F., and Santos-Victor, J. (2018, January 17\u201320). Autonomous table-cleaning from kinesthetic demonstrations using Deep Learning. Proceedings of the 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Tokyo, Japan.","DOI":"10.1109\/DEVLRN.2018.8761013"},{"key":"ref_17","first-page":"2327","article-title":"Human Action Recognition via Depth Maps Body Parts of Action","volume":"12","author":"Farooq","year":"2018","journal-title":"TIIS"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yin, J., Apuroop, K.G.S., Tamilselvam, Y.K., Mohan, R.E., Ramalingam, B., and Le, A.V. (2020). Table Cleaning Task by Human Support Robot Using Deep Learning Technique. Sensors, 20.","DOI":"10.3390\/s20061698"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). Ssd: Single shot multibox detector. 
European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018). Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation. arXiv.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_22","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Skovsen, S., Dyrmann, M., Krogh Mortensen, A., Steen, K., Green, O., Eriksen, J., Gislum, R., J\u00f8rgensen, R., and Karstoft, H. (2017). Estimation of the Botanical Composition of Clover-Grass Leys from RGB Images Using Data Simulation and Fully Convolutional Neural Networks. Sensors, 17.","DOI":"10.3390\/s17122930"},{"key":"ref_24","unstructured":"Tieleman, T., and Hinton, G. (2012). Lecture 6.5-RMSProp, COURSERA: Neural networks for machine learning, University of Toronto."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1109\/MRA.2006.250573","article-title":"Visual servo control, part i: Basic approaches","volume":"13","author":"Hutchinson","year":"2006","journal-title":"IEEE Robot. Autom. Mag."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1729881420914441","DOI":"10.1177\/1729881420914441","article-title":"Motion planner for a Tetris-inspired reconfigurable floor cleaning robot","volume":"17","author":"Veerajagadheswar","year":"2020","journal-title":"Int. J. Adv. Robot. Syst."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Le, A., Prabakaran, V., Sivanantham, V., and Mohan, R.E. (2018). Modified A-Star Algorithm for Efficient Coverage Path Planning in Tetris Inspired Self-Reconfigurable Robot with Integrated Laser Sensor. 
Sensors, 18.","DOI":"10.3390\/s18082585"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"3998","DOI":"10.1109\/LRA.2020.2983683","article-title":"Path Tracking Control of Self-Reconfigurable Robot hTetro with Four Differential Drive Units","volume":"5","author":"Shi","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"23","DOI":"10.1007\/s00138-015-0713-y","article-title":"A survey of datasets for visual tracking","volume":"27","author":"Dubuisson","year":"2016","journal-title":"Mach. Vis. Appl."},{"key":"ref_30","unstructured":"Bonarini, A., Burgard, W., Fontana, G., Matteucci, M., Sorrenti, D.G., and Tardos, J.D. (2006, January 9\u201315). Rawseeds: Robotics advancement through web-publishing of sensorial and elaborated extensive data sets. Proceedings of the IROS, Beijing, China."},{"key":"ref_31","unstructured":"Yang, S., Maturana, D., and Scherer, S. (2016, January 16\u201321). Real-time 3D scene layout from a single image using convolutional neural networks. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.dib.2017.12.047","article-title":"MCIndoor20000: A fully-labeled image dataset to advance indoor objects detection","volume":"17","author":"Bashiri","year":"2018","journal-title":"Data Brief"},{"key":"ref_33","unstructured":"Huitl, R., Schroth, G., Hilsenbeck, S., Schweiger, F., and Steinbach, E. (2012, September 30\u2013October 3). TUMindoor: An Extensive Image and Point Cloud Dataset for Visual Indoor Localization and Mapping. Proceedings of the International Conference on Image Processing, Orlando, FL, USA. Available online: http:\/\/navvis.de\/dataset."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Quattoni, A., and Torralba, A. (2009, January 20\u201325). Recognizing indoor scenes. 
Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPRW.2009.5206537"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Chou, S., Sun, C., Chang, W., Hsu, W., Sun, M., and Fu, J. (2020, January 1\u20135). 360-Indoor: Towards Learning Real-World Objects in 360\u00b0 Indoor Equirectangular Images. Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA.","DOI":"10.1109\/WACV45572.2020.9093262"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Adhikari, B., Peltomaki, J., Puura, J., and Huttunen, H. (2018, January 26\u201328). Faster Bounding Box Annotation for Object Detection in Indoor Scenes. Proceedings of the 2018 7th European Workshop on Visual Information Processing (EUVIP), Tampere, Finland.","DOI":"10.1109\/EUVIP.2018.8611732"},{"key":"ref_37","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Neural Inf. Process. Syst., 25."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., and Guadarrama, S. (2017, January 21\u201326). Speed\/accuracy trade-offs for modern convolutional object detectors. 
Proceedings of the IEEE CVPR, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.351"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/11\/3298\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T09:37:23Z","timestamp":1760175443000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/11\/3298"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,6,10]]},"references-count":38,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2020,6]]}},"alternative-id":["s20113298"],"URL":"https:\/\/doi.org\/10.3390\/s20113298","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,6,10]]}}}