{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,16]],"date-time":"2026-01-16T19:45:49Z","timestamp":1768592749140,"version":"3.49.0"},"reference-count":119,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,5,18]],"date-time":"2024-05-18T00:00:00Z","timestamp":1715990400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,5,18]],"date-time":"2024-05-18T00:00:00Z","timestamp":1715990400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100003759","name":"Universidad Polit\u00e9cnica de Madrid","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100003759","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimedia Systems"],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>This paper discusses the challenges of the current state of computer vision-based indoor scene understanding assistive solutions for the person with visual impairment (P-VI)\/blindness. It focuses on two main issues: the lack of user-centered approach in the development process and the lack of guidelines for the selection of appropriate technologies. First, it discusses the needs of users of an assistive solution through state-of-the-art analysis based on a previous systematic review of literature and commercial products and on semi-structured user interviews. Then it proposes an analysis and design framework to address these needs. Our paper presents a set of structured use cases that help to visualize and categorize the diverse real-world challenges faced by the P-VI\/blindness in indoor settings, including scene description, object finding, color detection, obstacle avoidance and text reading across different contexts. Next, it details the functional and non-functional requirements to be fulfilled by indoor scene understanding assistive solutions and provides a reference architecture that helps to map the needs into solutions, identifying the components that are necessary to cover the different use cases and respond to the requirements. To further guide the development of the architecture components, the paper offers insights into various available technologies like depth cameras, object detection, segmentation algorithms and optical character recognition (OCR), to enable an informed selection of the most suitable technologies for the development of specific assistive solutions, based on aspects like effectiveness, price and computational cost. 
In conclusion, by systematically analyzing user needs and providing guidelines for technology selection, this research contributes to the development of more personalized and practical assistive solutions tailored to the unique challenges faced by the P-VI\/blindness.<\/jats:p>","DOI":"10.1007\/s00530-024-01350-8","type":"journal-article","created":{"date-parts":[[2024,5,18]],"date-time":"2024-05-18T16:01:38Z","timestamp":1716048098000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Analysis and design framework for the development of indoor scene understanding assistive solutions for the person with visual impairment\/blindness"],"prefix":"10.1007","volume":"30","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7810-5051","authenticated-orcid":false,"given":"Moeen","family":"Valipoor","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8936-9095","authenticated-orcid":false,"given":"Ang\u00e9lica","family":"de Antonio","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7154-2451","authenticated-orcid":false,"given":"Juli\u00e1n","family":"Cabrera","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,5,18]]},"reference":[{"key":"1350_CR1","doi-asserted-by":"publisher","first-page":"614","DOI":"10.1136\/bjophthalmol-2011-300539","volume":"96","author":"D Pascolini","year":"2012","unstructured":"Pascolini, D., Mariotti, S.P.: Global estimates of visual impairment: 2010. Br. J. Ophthalmol. 96, 614\u2013618 (2012). https:\/\/doi.org\/10.1136\/bjophthalmol-2011-300539","journal-title":"Br. J. Ophthalmol."},{"key":"1350_CR2","doi-asserted-by":"publisher","unstructured":"Nguyen, M., Le, H., Yan, WQ., Dawda, A.: A vision aid for the visually impaired using commodity dual-rear-camera smartphones. Proceedings of the 2018 25th international conference on mechatronics and machine vision in practice M2VIP 2018, 1, 8\u201313 (2019). https:\/\/doi.org\/10.1109\/M2VIP.2018.8600857","DOI":"10.1109\/M2VIP.2018.8600857"},{"key":"1350_CR3","doi-asserted-by":"publisher","DOI":"10.1007\/s10209-022-00868-w","author":"MM Valipoor","year":"2022","unstructured":"Valipoor, M.M., de Antonio, A.: Recent trends in computer vision-driven scene understanding for VI\/blind users: a systematic mapping. Univers. Access. Inf. Soc. (2022). https:\/\/doi.org\/10.1007\/s10209-022-00868-w","journal-title":"Univers. Access. Inf. Soc."},{"key":"1350_CR4","doi-asserted-by":"publisher","first-page":"252609","DOI":"10.1080\/11762322.2010.523626","volume":"7","author":"MA Hersh","year":"2010","unstructured":"Hersh, M.A., Johnson, M.A.: A robotic guide for blind people part 1 a multi-national survey of the attitudes, requirements and preferences of potential end-users. Appl. Bionics. Biomech. 7, 252609 (2010). https:\/\/doi.org\/10.1080\/11762322.2010.523626","journal-title":"Appl. Bionics. Biomech."},{"key":"1350_CR5","doi-asserted-by":"publisher","first-page":"149","DOI":"10.1007\/s12193-016-0235-6","volume":"11","author":"A Bhowmick","year":"2017","unstructured":"Bhowmick, A., Hazarika, S.M.: An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. J. Multimodal. User. Interface. 11, 149\u2013172 (2017). https:\/\/doi.org\/10.1007\/s12193-016-0235-6","journal-title":"J. Multimodal. User. 
Interface."},{"key":"1350_CR6","doi-asserted-by":"publisher","DOI":"10.13140\/2.1.2492.3845","author":"P Conradie","year":"2014","unstructured":"Conradie, P., Mioch, T., Saldien, J.: Blind user requirements to support tactile mobility. CEUR. Workshop. Proc. (2014). https:\/\/doi.org\/10.13140\/2.1.2492.3845","journal-title":"CEUR. Workshop. Proc."},{"key":"1350_CR7","unstructured":"Wang, S., Yu, J.: Everyday information behaviour of the visually impaired in China. Inf. Res. 22(1), 743 (2017). Retrieved from http:\/\/InformationR.net\/ir\/22-1\/paper743.html (Archived by WebCite\u00ae at http:\/\/www.webcitation.org\/6pFtXbqJr)"},{"key":"1350_CR8","doi-asserted-by":"publisher","first-page":"e37841","DOI":"10.7554\/eLife.37841","volume":"7","author":"Y Liu","year":"2018","unstructured":"Liu, Y., Stiles, N.R.B., Meister, M.: Augmented reality powers a cognitive assistant for the blind. Elife 7, e37841 (2018). https:\/\/doi.org\/10.7554\/eLife.37841","journal-title":"Elife"},{"key":"1350_CR9","doi-asserted-by":"publisher","first-page":"95","DOI":"10.1016\/j.jvcir.2017.01.025","volume":"44","author":"ML Mekhalfi","year":"2017","unstructured":"Mekhalfi, M.L., Melgani, F., Bazi, Y., Alajlan, N.: Fast indoor scene description for blind people with multiresolution random projections. J. Vis. Commun. Image Represent.Commun. Image. Represent. 44, 95\u2013105 (2017). https:\/\/doi.org\/10.1016\/j.jvcir.2017.01.025","journal-title":"J. Vis. Commun. Image Represent.Commun. Image. Represent."},{"key":"1350_CR10","doi-asserted-by":"crossref","unstructured":"Khairnar, DP., Karad, RB., Kapse A, et al.: Partha: A Visually Impaired Assistance System. In: 2020 3rd International Conference on Communication Systems, Computing and IT Applications, CSCITA 2020\u2013\u2013Proceedings. Pune Institute of Computer Technology pp. 32\u201337. Pune, India (2020)","DOI":"10.1109\/CSCITA47329.2020.9137791"},{"key":"1350_CR11","doi-asserted-by":"publisher","unstructured":"Imtiaz, MA., Aziz, S., Zaib, A., et al.: Wearable scene classification system for visually impaired individuals 2nd international conference on electrical communication and computer engineering. ICECCE. (2020).https:\/\/doi.org\/10.1109\/ICECCE49384.2020.9179439","DOI":"10.1109\/ICECCE49384.2020.9179439"},{"key":"1350_CR12","doi-asserted-by":"publisher","unstructured":"Presti, G., Ahmetovic, D., Ducci, M., et al.: Watchout: obstacle sonifcation for people with visual impairment or blindness. ASSETS 2019\u2013\u201321st International ACM SIGACCESS Conference on Computers and Accessibility 402\u2013413 (2019). https:\/\/doi.org\/10.1145\/3308561.3353779","DOI":"10.1145\/3308561.3353779"},{"key":"1350_CR13","unstructured":"Apple unveils ARKit 2. https:\/\/www.apple.com\/newsroom\/2018\/06\/apple-unveils-arkit-2\/ (2007). Accessed 2 Mar 2024"},{"key":"1350_CR14","doi-asserted-by":"publisher","unstructured":"Sarwar, M.G., Dey, A., Das, A.: Developing a LBPH-based face recognition system for visually impaired people 2021 1st International Conference on Artificial Intelligence and Data Analytics, CAIDA 2021 286\u2013289 (2021). https:\/\/doi.org\/10.1109\/CAIDA51941.2021.9425275","DOI":"10.1109\/CAIDA51941.2021.9425275"},{"key":"1350_CR15","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/s21041536","volume":"21","author":"Z Chen","year":"2021","unstructured":"Chen, Z., Liu, X., Kojima, M., et al.: A wearable navigation device for visually impaired people based on the real-time semantic visual slam system. Sensors 21, 1\u201314 (2021). 
https:\/\/doi.org\/10.3390\/s21041536","journal-title":"Sensors"},{"key":"1350_CR16","doi-asserted-by":"crossref","unstructured":"Abraham, L., Mathew, N. S., George L, Sajan, S. S. VISION\u2013\u2013wearable speech based feedback system for the visually impaired using computer vision. In: proceedings of the 4th International Conference on Trends in Electronics and Informatics, ICOEI 2020. Saintgits College of Engineering, Computer Science and Engineering Department, pp 972\u2013976. India (2020)","DOI":"10.1109\/ICOEI48184.2020.9142984"},{"key":"1350_CR17","unstructured":"Envision. https:\/\/www.letsenvision.com\/ (2023). Accessed 14 Dec 2023"},{"key":"1350_CR18","unstructured":"Seeing AI. https:\/\/www.microsoft.com\/en-us\/ai\/seeing-ai\/ (2023). Accessed 14 Dec 2023"},{"key":"1350_CR19","unstructured":"Lookout\u2013\u2013Assisted vision. https:\/\/play.google.com\/store\/apps\/details?id=com.google.android.apps.accessibility.reveal&hl=en&gl=US (2023). Accessed 13 Dec 2023"},{"key":"1350_CR20","unstructured":"Aira. https:\/\/aira.io\/ (2023). Accessed 14 Dec 2023"},{"key":"1350_CR21","unstructured":"Be My Eyes. https:\/\/www.bemyeyes.com\/ (2023). Accessed 14 Dec 2023"},{"key":"1350_CR22","doi-asserted-by":"publisher","DOI":"10.1145\/3517384","author":"A Stangl","year":"2022","unstructured":"Stangl, A., Shiroma, K., Davis, N., et al.: Privacy concerns for visual assistance technologies. ACM. Trans. Access. Comput. (2022). https:\/\/doi.org\/10.1145\/3517384","journal-title":"ACM. Trans. Access. Comput."},{"key":"1350_CR23","doi-asserted-by":"publisher","unstructured":"Norlund, T., Hagstr\u00f6m, L., Johansson, R.: Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it ? BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP 149\u2013162 (2021). https:\/\/doi.org\/10.18653\/V1\/2021.BLACKBOXNLP-1.10","DOI":"10.18653\/V1\/2021.BLACKBOXNLP-1.10"},{"key":"1350_CR24","doi-asserted-by":"crossref","unstructured":"Wise, E., Li, B., Gallagher, T., et al.: Indoor navigation for the blind and vision impaired: Where are we and where are we going ? In: 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, pp 1\u20137 (2012).","DOI":"10.1109\/IPIN.2012.6418894"},{"key":"1350_CR25","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1080\/11762322.2010.523626","volume":"7","author":"MA Hersh","year":"2010","unstructured":"Hersh, M.A., Johnson, M.A.: A robotic guide for blind people. Part 1. A multi-national survey of the attitudes, requirements and preferences of potential end-users. Appl. Bionics. Biomech. 7, 277\u2013288 (2010). https:\/\/doi.org\/10.1080\/11762322.2010.523626","journal-title":"Appl. Bionics. Biomech."},{"key":"1350_CR26","doi-asserted-by":"publisher","first-page":"463","DOI":"10.1007\/s10209-021-00857-5","volume":"22","author":"S Ruffieux","year":"2023","unstructured":"Ruffieux, S., Hwang, C., Junod, V., et al.: Tailoring assistive smart glasses according to pathologies of visually impaired individuals: an exploratory investigation on social needs and difficulties experienced by visually impaired individuals. Univers. Access. Inf. Soc. 22, 463\u2013475 (2023). https:\/\/doi.org\/10.1007\/s10209-021-00857-5","journal-title":"Univers. Access. Inf. Soc."},{"key":"1350_CR27","doi-asserted-by":"publisher","unstructured":"Akter, T.: Privacy considerations of the visually impaired with camera based assistive tools. 
Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW 69\u201374 (2020). https:\/\/doi.org\/10.1145\/3406865.3418382","DOI":"10.1145\/3406865.3418382"},{"key":"1350_CR28","doi-asserted-by":"publisher","DOI":"10.1177\/0264619619833723","author":"W Jeamwatthanachai","year":"2018","unstructured":"Jeamwatthanachai, W., Wald, M., Wills, G.: Indoor navigation by blind people: behaviors and challenges in unfamiliar spaces and buildings. Br. J. Vis. Impair. (2018). https:\/\/doi.org\/10.1177\/0264619619833723","journal-title":"Br. J. Vis. Impair."},{"key":"1350_CR29","doi-asserted-by":"crossref","unstructured":"Alamri, A.: Development of ontology-based indoor navigation algorithm for indoor obstacle identification for the visually impaired. 2023 9th International Conference on Engineering, Applied Sciences, and Technology (ICEAST) 38\u201342 (2023)","DOI":"10.1109\/ICEAST58324.2023.10157934"},{"key":"1350_CR30","doi-asserted-by":"crossref","unstructured":"Szpiro, S., Zhao, Y., Azenkot, S.: Finding a store, searching for a product: a study of daily challenges of low vision people. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Association for Computing Machinery, pp 61\u201372, New York (2016)","DOI":"10.1145\/2971648.2971723"},{"key":"1350_CR31","doi-asserted-by":"publisher","first-page":"104471","DOI":"10.1016\/j.imavis.2022.104471","volume":"123","author":"K Tong","year":"2022","unstructured":"Tong, K., Wu, Y.: Deep learning-based detection from the perspective of small or tiny objects: a survey. Image Vis. Comput. 123, 104471 (2022). https:\/\/doi.org\/10.1016\/j.imavis.2022.104471","journal-title":"Image Vis. Comput."},{"key":"1350_CR32","doi-asserted-by":"publisher","first-page":"1497","DOI":"10.18494\/SAM.2020.2646","volume":"32","author":"C Silva","year":"2020","unstructured":"Silva, C., Wimalaratne, P.: Context-aware assistive indoor navigation of visually impaired persons. Sens. Mater. 32, 1497 (2020). https:\/\/doi.org\/10.18494\/SAM.2020.2646","journal-title":"Sens. Mater."},{"key":"1350_CR33","doi-asserted-by":"publisher","first-page":"100265","DOI":"10.1016\/j.biosx.2022.100265","volume":"12","author":"M Mashiata","year":"2022","unstructured":"Mashiata, M., Ali, T., Das, P., et al.: Towards assisting visually impaired individuals: a review on current status and future prospects. Biosens. Bioelectron. X 12, 100265 (2022). https:\/\/doi.org\/10.1016\/j.biosx.2022.100265","journal-title":"Biosens. Bioelectron. X"},{"key":"1350_CR34","doi-asserted-by":"publisher","DOI":"10.1007\/s10209-020-00764-1","author":"C Ntakolia","year":"2020","unstructured":"Ntakolia, C., Dimas, G., Iakovidis, D.K.: User-centered system design for assisted navigation of visually impaired individuals in outdoor cultural environments. Univers. Access. Inf. Soc. (2020). https:\/\/doi.org\/10.1007\/s10209-020-00764-1","journal-title":"Univers. Access. Inf. Soc."},{"key":"1350_CR35","doi-asserted-by":"crossref","unstructured":"Bajpai, V., Gorthi, R. P.: On non-functional requirements: a survey. 9\u201312 (2012)","DOI":"10.1109\/SCEECS.2012.6184810"},{"key":"1350_CR36","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1016\/j.patrec.2018.10.031","volume":"137","author":"R Tapu","year":"2020","unstructured":"Tapu, R., Mocanu, B., Zaharia, T.: Wearable assistive devices for visually impaired: a state of the art survey. Pattern. Recognit. Lett. 137, 37\u201352 (2020). 
https:\/\/doi.org\/10.1016\/j.patrec.2018.10.031","journal-title":"Pattern. Recognit. Lett."},{"key":"1350_CR37","unstructured":"Ohn-Bar, E., Kitani, K., Asakawa, C.: Personalized Dynamics Models for Adaptive Assistive Navigation Systems. In: Conference on Robot Learning (2018)"},{"key":"1350_CR38","doi-asserted-by":"crossref","unstructured":"Shen, J., Dong, Z., Qin, D., et al.: ivision: an assistive system for the blind based on augmented reality and machine learning. Springer International Publishing (2020)","DOI":"10.1007\/978-3-030-49282-3_28"},{"key":"1350_CR39","volume-title":"An insight into smartphone-based assistive solutions for visually impaired and blind people: issues, challenges and opportunities","author":"A Khan","year":"2020","unstructured":"Khan, A., Khusro, S.: An insight into smartphone-based assistive solutions for visually impaired and blind people: issues, challenges and opportunities. Springer, Berlin Heidelberg (2020)"},{"key":"1350_CR40","doi-asserted-by":"crossref","unstructured":"Shinohara, K., Wobbrock, J. O.: In the shadow of misperception: assistive technology use and social interactions. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, pp. 705\u2013714. New York (2011)","DOI":"10.1145\/1978942.1979044"},{"key":"1350_CR41","doi-asserted-by":"publisher","first-page":"152","DOI":"10.1080\/17483107.2020.1768308","volume":"17","author":"ADP dos Santos","year":"2022","unstructured":"dos Santos, A.D.P., Ferrari, A.L.M., Medola, F.O., Sandnes, F.E.: Aesthetics and the perceived stigma of assistive technology for visual impairment. Disabil. Rehabil. Assist. Technol. 17, 152\u2013158 (2022). https:\/\/doi.org\/10.1080\/17483107.2020.1768308","journal-title":"Disabil. Rehabil. Assist. Technol."},{"key":"1350_CR42","doi-asserted-by":"crossref","unstructured":"Kuriakose, B., Shrestha, R., Sandnes, F. E.: SceneRecog: A Deep Learning Scene Recognition Model for Assisting Blind and Visually Impaired Navigate using Smartphones. In: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). pp 2464\u20132470 (2021)","DOI":"10.1109\/SMC52423.2021.9658913"},{"key":"1350_CR43","doi-asserted-by":"publisher","unstructured":"Pawar, P. G., Devendran, V.: Scene understanding: a survey to see the world at a single glance. 2019 2nd international conference on intelligent communication and computational techniques, ICCT 2019 182\u2013186. (2019) https:\/\/doi.org\/10.1109\/ICCT46177.2019.8969051","DOI":"10.1109\/ICCT46177.2019.8969051"},{"key":"1350_CR44","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107205","author":"L Xie","year":"2020","unstructured":"Xie, L., Lee, F., Liu, L., et al.: Scene recognition: a comprehensive survey. Pattern. Recognit. (2020). https:\/\/doi.org\/10.1016\/j.patcog.2020.107205","journal-title":"Pattern. Recognit."},{"key":"1350_CR45","doi-asserted-by":"crossref","unstructured":"Liu, Y., Chen, Q., Chen, W., Wassell, I.: Dictionary Learning Inspired Deep Network for Scene Recognition. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence. 
AAAI Press (2018)","DOI":"10.1609\/aaai.v32i1.12312"},{"key":"1350_CR46","doi-asserted-by":"publisher","first-page":"45230","DOI":"10.1109\/ACCESS.2019.2908448","volume":"7","author":"J Shi","year":"2019","unstructured":"Shi, J., Zhu, H., Yu, S., et al.: Scene categorization model using deep visually sensitive features. IEEE Access 7, 45230\u201345239 (2019). https:\/\/doi.org\/10.1109\/ACCESS.2019.2908448","journal-title":"IEEE Access"},{"key":"1350_CR47","doi-asserted-by":"publisher","first-page":"82066","DOI":"10.1109\/ACCESS.2020.2989863","volume":"8","author":"H Seong","year":"2020","unstructured":"Seong, H., Hyun, J., Kim, E.: FOSNet: An end-to-end trainable deep neural network for scene recognition. IEEE Access 8, 82066\u201382077 (2020). https:\/\/doi.org\/10.1109\/ACCESS.2020.2989863","journal-title":"IEEE Access"},{"key":"1350_CR48","unstructured":"Tan, M., Le, Q.: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In: Chaudhuri K, Salakhutdinov R (eds) Proceedings of the 36th International Conference on Machine Learning. PMLR, pp. 6105\u20136114 (2019)"},{"key":"1350_CR49","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 779\u2013788 (2016)","DOI":"10.1109\/CVPR.2016.91"},{"key":"1350_CR50","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4615-9325-6","author":"B Kuipers","year":"1983","unstructured":"Kuipers, B.: The cognitive map: could it have been any other way? Spat. Orientat. (1983). https:\/\/doi.org\/10.1007\/978-1-4615-9325-6","journal-title":"Spat. Orientat."},{"key":"1350_CR51","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/s10514-021-10014-9","volume":"46","author":"S Tan","year":"2022","unstructured":"Tan, S., Guo, D., Liu, H., et al.: Embodied scene description. Auton. Robot. 46, 21\u201343 (2022). https:\/\/doi.org\/10.1007\/s10514-021-10014-9","journal-title":"Auton. Robot."},{"key":"1350_CR52","doi-asserted-by":"crossref","unstructured":"Delloul, K., Larabi, S.: Egocentric scene description for the blind and visually impaired. In: 2022 5th international symposium on informatics and its applications (ISIA). pp 1\u20136 (2022)","DOI":"10.1109\/ISIA55826.2022.9993531"},{"key":"1350_CR53","doi-asserted-by":"publisher","DOI":"10.1145\/3375279","author":"M Hersh","year":"2020","unstructured":"Hersh, M.: Mental maps and the use of sensory information by blind and partially sighted people. ACM. Trans. Access. Comput. (2020). https:\/\/doi.org\/10.1145\/3375279","journal-title":"ACM. Trans. Access. Comput."},{"key":"1350_CR54","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR). pp 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"key":"1350_CR55","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A Krizhevsky","year":"2017","unstructured":"Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM. 60, 84\u201390 (2017). https:\/\/doi.org\/10.1145\/3065386","journal-title":"Commun. 
ACM."},{"key":"1350_CR56","doi-asserted-by":"publisher","first-page":"3212","DOI":"10.1109\/TNNLS.2018.2876865","volume":"30","author":"ZQ Zhao","year":"2019","unstructured":"Zhao, Z.Q., Zheng, P., Xu, S.T., Wu, X.: Object detection with deep learning: a review. IEEE. Trans. Neural. Netw. Learn. Syst. 30, 3212\u20133232 (2019). https:\/\/doi.org\/10.1109\/TNNLS.2018.2876865","journal-title":"IEEE. Trans. Neural. Netw. Learn. Syst."},{"key":"1350_CR57","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., et al.: Rethinking the inception architecture for computer vision. In: 2016 IEEE Conference on computer vision and pattern recognition (CVPR). pp 2818\u20132826 (2016)","DOI":"10.1109\/CVPR.2016.308"},{"key":"1350_CR58","doi-asserted-by":"crossref","unstructured":"Bhumbla, S., Gupta, D. K., Nisha.: A Review: Object Detection Algorithms. In: ICSCCC 2023 - 3rd International Conference on Secure Cyber Computing and Communications. Institute of Electrical and Electronics Engineers Inc., pp 827\u2013832 (2023)","DOI":"10.1109\/ICSCCC58608.2023.10176865"},{"key":"1350_CR59","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Dollar, P., Girshick, R. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2017)","DOI":"10.1109\/ICCV.2017.322"},{"key":"1350_CR60","doi-asserted-by":"publisher","first-page":"334-1","DOI":"10.2352\/EI.2022.34.9.IQSP-334","volume":"34","author":"AK Venkataramanan","year":"2022","unstructured":"Venkataramanan, A.K., Facktor, M., Gupta, P., Bovik, A.C.: Assessing the impact of image quality on object-detection algorithms. Electron. Imaging. 34, 334-1\u2013334-1 (2022). https:\/\/doi.org\/10.2352\/EI.2022.34.9.IQSP-334","journal-title":"Electron. Imaging."},{"key":"1350_CR61","doi-asserted-by":"crossref","unstructured":"Guo, H., Lu, T., Wu, Y. Dynamic low-light image enhancement for object detection via end-to-end training in: 2020 25th international conference on pattern recognition (ICPR), pp. 5611\u20135618 (2021)","DOI":"10.1109\/ICPR48806.2021.9412802"},{"key":"1350_CR62","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1007\/s13735-020-00195-x","volume":"9","author":"AM Hafiz","year":"2020","unstructured":"Hafiz, A.M., Bhat, G.M.: A survey on instance segmentation: state of the art. Int. J. Multimed. Inf. Retr. 9, 171\u2013189 (2020). https:\/\/doi.org\/10.1007\/s13735-020-00195-x","journal-title":"Int. J. Multimed. Inf. Retr."},{"key":"1350_CR63","doi-asserted-by":"publisher","first-page":"30","DOI":"10.3991\/ijoe.v19i09.39177","volume":"19","author":"PL Kompalli","year":"2023","unstructured":"Kompalli, P.L., Kalidindi, A., Chilukala, J., et al.: A color guide for color blind people using image processing and openCV. iJOE 19, 30\u201346 (2023). https:\/\/doi.org\/10.3991\/ijoe.v19i09.39177","journal-title":"iJOE"},{"key":"1350_CR64","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1007\/978-981-16-2275-5_12","volume-title":"Digital Transformation Technology","author":"M Allam","year":"2022","unstructured":"Allam, M., ElShaarawy, I., Farghal, S.A.: In: Magdi Dalia, A., Helmy, Y.K., M, M., J, A. (eds.) Digital Transformation Technology, pp. 195\u2013216. Springer, Singapore (2022)"},{"key":"1350_CR65","doi-asserted-by":"crossref","unstructured":"Widayani, A., Kusuma, H., Purwanto, D.: Visually impaired person detection using deep learning for dangerous area warning system. 
In: 2022 international seminar on intelligent technology and its applications: advanced innovations of electrical systems for humanity, ISITIA 2022\u2013\u2013Proceeding. institute of electrical and electronics engineers Inc., pp 204\u2013208 (2022)","DOI":"10.1109\/ISITIA56226.2022.9855268"},{"key":"1350_CR66","doi-asserted-by":"crossref","unstructured":"Chung, M. A., Chai, S. Y., Hsieh, M. C. et al.: Road Pothole Detection Algorithm and Guide Belt Designed for Visually Impaired. In: 2023 IEEE 3rd International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB). pp. 475\u2013478 (2023)","DOI":"10.1109\/ICEIB57887.2023.10170695"},{"key":"1350_CR67","unstructured":"Google cloud vision"},{"key":"1350_CR68","unstructured":"Microsoft azure computer vision"},{"key":"1350_CR69","unstructured":"Amazon Rekognition. https:\/\/aws.amazon.com\/rekognition\/ (2024). Accessed 5 Feb 2024"},{"key":"1350_CR70","doi-asserted-by":"crossref","unstructured":"Miles, F. A.: Binocular vision and stereopsis by Ian P. Howard and Brian J. Rogers, Oxford University Press, 1995. \u00a390.00 (736 pages) ISBN 0 19 508476 4. Trends Neurosci 19: 407\u2013408 (1996)","DOI":"10.1016\/S0166-2236(96)60026-5"},{"key":"1350_CR71","doi-asserted-by":"crossref","unstructured":"Scharstein, D., Szeliski, R., Zabih, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. In: proceedings IEEE workshop on stereo and multi-baseline vision (SMBV 2001). pp. 131\u2013140 (2001)","DOI":"10.1109\/SMBV.2001.988771"},{"key":"1350_CR72","doi-asserted-by":"publisher","first-page":"177","DOI":"10.1049\/cit2.12098","volume":"7","author":"E Adil","year":"2022","unstructured":"Adil, E., Mikou, M., Mouhsen, A.: A novel algorithm for distance measurement using stereo camera. CAAI. Trans. Intell. Technol. 7, 177\u2013186 (2022). https:\/\/doi.org\/10.1049\/cit2.12098","journal-title":"CAAI. Trans. Intell. Technol."},{"key":"1350_CR73","unstructured":"C\u00e1mara Intel\u00ae RealSenseTM SR300. https:\/\/www.intel.la\/content\/www\/xl\/es\/products\/sku\/92329\/intel-realsense-camera-sr300\/specifications.html (2024). Accessed 8 Feb 2024"},{"key":"1350_CR74","doi-asserted-by":"publisher","DOI":"10.1016\/j.measurement.2022.111643","author":"G Maculotti","year":"2022","unstructured":"Maculotti, G., Ulrich, L., Olivetti, E.C., et al.: A methodology for task-specific metrological characterization of low-cost 3D camera for face analysis. Measurement. (Lond.) (2022). https:\/\/doi.org\/10.1016\/j.measurement.2022.111643","journal-title":"Measurement. (Lond.)"},{"key":"1350_CR75","unstructured":"A Brief Analysis of the Principles of Depth Cameras: Structured Light, TOF, and Stereo Vision. https:\/\/wiki.dfrobot.com\/brief_analysis_of_camera_principles (2024). Accessed 25 Feb 2024"},{"key":"1350_CR76","unstructured":"Azure kinect depth camera"},{"key":"1350_CR77","unstructured":"Azure Kinect DK hardware specifications. https:\/\/learn.microsoft.com\/en-us\/azure\/kinect-dk\/hardware-specification (2024). Accessed 21 Jan 2024"},{"key":"1350_CR78","unstructured":"Intel RealSense LiDAR Camera L515. https:\/\/intelrealsense.com\/lidar-camera-l515\/ (2024). Accessed 21 Jan 2024"},{"key":"1350_CR79","doi-asserted-by":"publisher","DOI":"10.3390\/s20082272","author":"F Khan","year":"2020","unstructured":"Khan, F., Salahuddin, S., Javidnia, H.: Deep learning-based monocular depth estimation methods\u2014A state-of-the-art review. Sensors (2020). 
https:\/\/doi.org\/10.3390\/s20082272","journal-title":"Sensors"},{"key":"1350_CR80","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/s20082272","volume":"20","author":"F Khan","year":"2020","unstructured":"Khan, F., Salahuddin, S., Javidnia, H.: Deep learning-based monocular depth estimation methods\u2014A state-of-the-art review. Sensors (Switzerland) 20, 1\u201316 (2020). https:\/\/doi.org\/10.3390\/s20082272","journal-title":"Sensors (Switzerland)"},{"key":"1350_CR81","unstructured":"Unsupervised monocular depth estimation in highly complex environments. https:\/\/en.x-mol.com\/paper\/article\/1420832096734703616 (2022). Accessed 29 Nov 2022"},{"key":"1350_CR82","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1016\/J.NEUCOM.2020.12.089","volume":"438","author":"Y Ming","year":"2021","unstructured":"Ming, Y., Meng, X., Fan, C., Yu, H.: Deep learning for monocular depth estimation: a review. Neurocomputing 438, 14\u201333 (2021). https:\/\/doi.org\/10.1016\/J.NEUCOM.2020.12.089","journal-title":"Neurocomputing"},{"key":"1350_CR83","doi-asserted-by":"publisher","DOI":"10.3390\/s17061371","author":"BS Lin","year":"2017","unstructured":"Lin, B.S., Lee, C.C., Chiang, P.Y.: Simple smartphone-based guiding system for visually impaired people. Sensors (Switzerland) (2017). https:\/\/doi.org\/10.3390\/s17061371","journal-title":"Sensors (Switzerland)"},{"key":"1350_CR84","doi-asserted-by":"publisher","first-page":"1052","DOI":"10.1109\/TPAMI.2007.1049","volume":"29","author":"AJ Davison","year":"2007","unstructured":"Davison, A.J., Reid, I.D., Molton, N.D., Stasse, O.: MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 1052\u20131067 (2007). https:\/\/doi.org\/10.1109\/TPAMI.2007.1049","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1350_CR85","first-page":"479","volume-title":"LiDAR-Based Obstacle Detection and\u00a0Distance Estimation in\u00a0Navigation Assistance for\u00a0Visually Impaired","author":"B Kuriakose","year":"2022","unstructured":"Kuriakose, B., Shrestha, R., Sandnes, F.E.: In: Antona, M., Stephanidis, C. (eds.) LiDAR-Based Obstacle Detection and\u00a0Distance Estimation in\u00a0Navigation Assistance for\u00a0Visually Impaired, pp. 479\u2013491. Springer International Publishing, Cham (2022)"},{"key":"1350_CR86","doi-asserted-by":"publisher","unstructured":"Hakim, H. Fadhil, A. (2019) Navigation system for visually impaired people based on RGB-D camera and ultrasonic sensor. ACM International Conference Proceeding Series 172\u2013177. https:\/\/doi.org\/10.1145\/3321289.3321303","DOI":"10.1145\/3321289.3321303"},{"key":"1350_CR87","doi-asserted-by":"publisher","first-page":"66587","DOI":"10.1109\/ACCESS.2023.3285396","volume":"11","author":"P Xu","year":"2023","unstructured":"Xu, P., Kennedy, G.A., Zhao, F.Y., et al.: Wearable obstacle avoidance electronic travel aids for blind and visually impaired individuals: a systematic review. IEEE Access 11, 66587\u201366613 (2023). https:\/\/doi.org\/10.1109\/ACCESS.2023.3285396","journal-title":"IEEE Access"},{"key":"1350_CR88","doi-asserted-by":"crossref","unstructured":"F, A., NADA, A., A, M., MASHALI, S.: Effective fast response smart stick for blind people. Institute of research engineers and doctors, LLC, pp. 
5\u201311 (2015)","DOI":"10.15224\/978-1-63248-043-9-29"},{"key":"1350_CR89","doi-asserted-by":"publisher","first-page":"26712","DOI":"10.1109\/ACCESS.2021.3052415","volume":"9","author":"S Khan","year":"2021","unstructured":"Khan, S., Nazir, S., Khan, H.U.: Analysis of navigation assistants for blind and visually impaired people: a systematic review. IEEE Access 9, 26712\u201326734 (2021). https:\/\/doi.org\/10.1109\/ACCESS.2021.3052415","journal-title":"IEEE Access"},{"key":"1350_CR90","first-page":"3485","volume":"119","author":"M Vanitha","year":"2018","unstructured":"Vanitha, M., Rajiv, A., Elangovan, K., Kumar, S.V.: A smart walking stick for visually impaired using raspberry pi. Int. J. Appl. Math. 119, 3485\u20133489 (2018)","journal-title":"Int. J. Appl. Math."},{"key":"1350_CR91","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/SYM12010119","volume":"12","author":"SG Jin","year":"2020","unstructured":"Jin, S.G., Ahmed, M.U., Kim, J.W., et al.: Combining obstacle avoidance and visual simultaneous localization and mapping for indoor navigation. Symmetry (Basel) 12, 1\u201313 (2020). https:\/\/doi.org\/10.3390\/SYM12010119","journal-title":"Symmetry (Basel)"},{"key":"1350_CR92","doi-asserted-by":"crossref","unstructured":"Jia, Y., Yan, X., Xu, Y.: A Survey of simultaneous localization and mapping for robot. In: 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). pp. 857\u2013861 (2019)","DOI":"10.1109\/IAEAC47372.2019.8997820"},{"key":"1350_CR93","doi-asserted-by":"publisher","DOI":"10.1063\/1.5121082","author":"O Atoui","year":"2019","unstructured":"Atoui, O., Husni, H., Mat, R.C.: Visual-based semantic simultaneous localization and mapping for Robotic applications a review. AIP Conf. Proc. (2019). https:\/\/doi.org\/10.1063\/1.5121082","journal-title":"AIP Conf. Proc."},{"key":"1350_CR94","doi-asserted-by":"publisher","unstructured":"Rui, C., Liu, Y., Shen, J., et al.: A Multi-Sensory Blind Guidance System Based on YOLO and ORB-SLAM. Proceedings of the 2021 IEEE International Conference on Progress in Informatics and Computing, PIC 2021 409\u2013414 (2021). https:\/\/doi.org\/10.1109\/PIC53636.2021.9687018","DOI":"10.1109\/PIC53636.2021.9687018"},{"key":"1350_CR95","doi-asserted-by":"publisher","first-page":"1874","DOI":"10.1109\/TRO.2021.3075644","volume":"37","author":"C Campos","year":"2021","unstructured":"Campos, C., Elvira, R., Rodriguez, J.J.G., et al.: ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Trans. Robot. 37, 1874\u20131890 (2021). https:\/\/doi.org\/10.1109\/TRO.2021.3075644","journal-title":"IEEE Trans. Robot."},{"key":"1350_CR96","doi-asserted-by":"publisher","DOI":"10.3390\/electronics9050741","author":"T Raj","year":"2020","unstructured":"Raj, T., Hashim, F.H., Huddin, A.B., et al.: A survey on LiDAR scanning mechanisms. Electronics (Basel) (2020). https:\/\/doi.org\/10.3390\/electronics9050741","journal-title":"Electronics (Basel)"},{"key":"1350_CR97","unstructured":"Google Tesseract An optical character recognition (OCR) engine (2015)"},{"key":"1350_CR98","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1007\/978-981-19-1324-2_13","volume-title":"Recent Trends in Communication and Intelligent Systems","author":"N Anwar","year":"2022","unstructured":"Anwar, N., Khan, T., Mollah, A.F.: Text Detection from Scene and Born Images: How Good is Tesseract? In: Pundir, A.K.S., Yadav, N., Sharma, H., Das, S. (eds.) 
Recent Trends in Communication and Intelligent Systems, pp. 115\u2013122. Springer Nature, Singapore (2022)"},{"key":"1350_CR99","doi-asserted-by":"crossref","unstructured":"Neat, L., Peng, R., Qin, S., Manduchi, R.: Scene text access: a comparison of mobile ocr modalities for blind users. In: proceedings of the 24th international conference on intelligent user interfaces. association for computing machinery, New York, pp. 197\u2013207 (2019)","DOI":"10.1145\/3301275.3302271"},{"key":"1350_CR100","doi-asserted-by":"publisher","first-page":"82496","DOI":"10.1109\/ACCESS.2023.3291074","volume":"11","author":"J Madake","year":"2023","unstructured":"Madake, J., Bhatlawande, S., Solanke, A., Shilaskar, S.: A qualitative and quantitative analysis of research in mobility technologies for visually impaired people. IEEE. Access. 11, 82496\u201382520 (2023). https:\/\/doi.org\/10.1109\/ACCESS.2023.3291074","journal-title":"IEEE. Access."},{"key":"1350_CR101","doi-asserted-by":"publisher","DOI":"10.3390\/s17030565","author":"W Elmannai","year":"2017","unstructured":"Elmannai, W., Elleithy, K.: Sensor-based assistive devices for visually-impaired people: current status, challenges, and future directions. Sensors (2017). https:\/\/doi.org\/10.3390\/s17030565","journal-title":"Sensors"},{"key":"1350_CR102","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1177\/0145482X211027492","volume":"115","author":"C Granquist","year":"2021","unstructured":"Granquist, C., Sun, S.Y., Montezuma, S.R., et al.: Evaluation and comparison of Artificial Intelligence vision aids: Orcam myeye 1 and seeing AI. J. Vis. Impair. Blind. 115, 277\u2013285 (2021). https:\/\/doi.org\/10.1177\/0145482X211027492","journal-title":"J. Vis. Impair. Blind."},{"key":"1350_CR103","doi-asserted-by":"crossref","unstructured":"Lee, J., Herskovitz, J., Peng, YH., Guo, A.: ImageExplorer: Multi-layered touch exploration to encourage skepticism towards imperfect AI-generated image captions. In: conference on human factors in computing systems - proceedings. association for computing machinery (2022)","DOI":"10.1145\/3491102.3501966"},{"key":"1350_CR104","doi-asserted-by":"publisher","first-page":"1182","DOI":"10.1109\/TMM.2019.2942478","volume":"22","author":"L Xie","year":"2020","unstructured":"Xie, L., Lee, F., Liu, L., et al.: Hierarchical coding of convolutional features for scene recognition. IEEE. Trans. Multimed. 22, 1182\u20131192 (2020). https:\/\/doi.org\/10.1109\/TMM.2019.2942478","journal-title":"IEEE. Trans. Multimed."},{"key":"1350_CR105","doi-asserted-by":"crossref","unstructured":"Wang, C. Y., Bochkovskiy, A., Liao, H. YM.: YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. Institute of Electrical and Electronics Engineers (IEEE), pp. 7464\u20137475 (2023)","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"1350_CR106","unstructured":"YoloV7-ncnn-Raspberry-Pi-4. https:\/\/github.com\/Qengineering\/YoloV7-ncnn-Raspberry-Pi-4 (2024). Accessed 18 Jan 2024"},{"key":"1350_CR107","unstructured":"Detectron2 Model Zoo and Baselines. https:\/\/github.com\/facebookresearch\/detectron2\/blob\/main\/MODEL_ZOO.md (2024). Accessed 19 Jan 2024"},{"key":"1350_CR108","doi-asserted-by":"crossref","unstructured":"Caesar, H., Uijlings, J., Ferrari, V.: COCO-Stuff: Thing and Stuff Classes in Context. In: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition. pp. 
1209\u20131218 (2018)","DOI":"10.1109\/CVPR.2018.00132"},{"key":"1350_CR109","doi-asserted-by":"publisher","DOI":"10.3390\/s23063002","author":"S Cakic","year":"2023","unstructured":"Cakic, S., Popovic, T., Krco, S., et al.: Developing edge AI computer vision for smart poultry farms using deep learning and HPC. Sensors (2023). https:\/\/doi.org\/10.3390\/s23063002","journal-title":"Sensors"},{"key":"1350_CR110","doi-asserted-by":"crossref","unstructured":"Cabanillas-Carbonell, M., Ch\u00e1vez, A. A., Barrientos, J. B.: Glasses Connected to Google Vision that Inform Blind People about what is in Front of Them. In: 2020 International Conference on e-Health and Bioengineering (EHB). pp. 1\u20135 (2020)","DOI":"10.1109\/EHB50910.2020.9280268"},{"key":"1350_CR111","unstructured":"Intel\u00ae RealSenseTM Depth Camera D455. https:\/\/www.intelrealsense.com\/depth-camera-d455\/. Accessed 19 Jan 2024"},{"key":"1350_CR112","doi-asserted-by":"publisher","first-page":"1623","DOI":"10.1109\/TPAMI.2020.3019967","volume":"44","author":"R Ranftl","year":"2022","unstructured":"Ranftl, R., Lasinger, K., Hafner, D., et al.: Towards robust monocular depth estimation: mixing datasets for zero-shot cross-dataset transfer. IEEE Trans. Pattern Anal. Mach. Intell. 44, 1623\u20131637 (2022). https:\/\/doi.org\/10.1109\/TPAMI.2020.3019967","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1350_CR113","unstructured":"MiDaS GitHub Repository. https:\/\/github.com\/isl-org\/MiDaS. Accessed 21 Jan 2024"},{"key":"1350_CR114","doi-asserted-by":"crossref","unstructured":"Beshley, M., Volodymyr, P., Beshley, H., Gregus, M.: A smartphone-based computer vision assistance system with neural network depth estimation for the visually impaired. pp. 26\u201336 (2023)","DOI":"10.1007\/978-3-031-42508-0_3"},{"key":"1350_CR115","doi-asserted-by":"publisher","first-page":"020010","DOI":"10.1063\/5.0103097","volume":"2520","author":"S Saranya","year":"2022","unstructured":"Saranya, S., Sudha, G., Subbiah, S.: Raspberry Pi based smart walking stick for visually impaired person. AIP Conf. Proc. 2520, 020010 (2022). https:\/\/doi.org\/10.1063\/5.0103097","journal-title":"AIP Conf. Proc."},{"key":"1350_CR116","doi-asserted-by":"crossref","unstructured":"Kunta, V., Tuniki, C., Sairam, U.: Multi-functional blind stick for visually impaired people. In: 2020 5th international conference on communication and electronics systems (ICCES). pp. 895\u2013899 (2020)","DOI":"10.1109\/ICCES48766.2020.9137870"},{"key":"1350_CR117","unstructured":"Intel\u00ae RealSenseTM Depth Camera D415"},{"key":"1350_CR118","doi-asserted-by":"crossref","unstructured":"Hussain, S. S., Durrani, D., Khan, A. A., et al.: In-door obstacle detection and avoidance system for visually impaired people. In: 2020 IEEE Global Humanitarian Technology Conference (GHTC). pp. 1\u20137 (2020)","DOI":"10.1109\/GHTC46280.2020.9342942"},{"key":"1350_CR119","doi-asserted-by":"crossref","unstructured":"Rajesh, M., Rajan, B. K., Roy, A., et al.: Text recognition and face detection aid for visually impaired person using Raspberry PI. In: 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT). pp. 
1\u20135 (2017)","DOI":"10.1109\/ICCPCT.2017.8074355"}],"container-title":["Multimedia Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00530-024-01350-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00530-024-01350-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00530-024-01350-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T10:14:20Z","timestamp":1732011260000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00530-024-01350-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,18]]},"references-count":119,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1350"],"URL":"https:\/\/doi.org\/10.1007\/s00530-024-01350-8","relation":{},"ISSN":["0942-4962","1432-1882"],"issn-type":[{"value":"0942-4962","type":"print"},{"value":"1432-1882","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,5,18]]},"assertion":[{"value":"16 September 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 May 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 May 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"The user research was granted approval by the Ethics Committee of Universidad Politecnica de Madrid under reference number 2022\u2013077. Each participant approved a consent form in order to participate in the interviews.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}}],"article-number":"152"}}
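
The record above is a Crossref REST API "work" response (status "ok", message-type "work", message-version 1.0.0) for DOI 10.1007/s00530-024-01350-8. As a usage note, here is a minimal sketch of fetching and reading the same record from the public Crossref endpoint. It assumes network access and the third-party requests package; the mailto address is a placeholder for Crossref's documented "polite" usage, and the field names read out (title, author, reference-count, abstract) are taken from the record itself.

```python
# Minimal sketch: retrieve a Crossref work record and read a few fields.
# Assumes `pip install requests`; the mailto address below is a placeholder.
import re
import requests

DOI = "10.1007/s00530-024-01350-8"

resp = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    params={"mailto": "you@example.com"},  # placeholder; identifies polite callers
    timeout=30,
)
resp.raise_for_status()
body = resp.json()
assert body["status"] == "ok" and body["message-type"] == "work"

work = body["message"]
print(work["title"][0])
print("; ".join(f"{a.get('given', '')} {a.get('family', '')}".strip()
                for a in work.get("author", [])))
print("References deposited:", work["reference-count"])

# The abstract is JATS-tagged XML; a crude tag strip yields plain text.
plain = " ".join(re.sub(r"<[^>]+>", " ", work.get("abstract", "")).split())
print(plain[:200] + "...")
```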