{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T17:58:13Z","timestamp":1770487093068,"version":"3.49.0"},"reference-count":48,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2019,4,5]],"date-time":"2019-04-05T00:00:00Z","timestamp":1554422400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Peripheral vision loss results in the inability to detect objects in the peripheral visual field which affects the ability to evaluate and avoid potential hazards. A different number of assistive navigation systems have been developed to help people with vision impairments using wearable and portable devices. Most of these systems are designed to search for obstacles and provide safe navigation paths for visually impaired people without any prioritisation of the degree of danger for each hazard. This paper presents a new context-aware hybrid (indoor\/outdoor) hazard classification assistive technology to help people with peripheral vision loss in their navigation using computer-enabled smart glasses equipped with a wide-angle camera. Our proposed system augments users\u2019 existing healthy vision with suitable, meaningful and smart notifications to attract the user\u2019s attention to possible obstructions or hazards in their peripheral field of view. A deep learning object detector is implemented to recognise static and moving objects in real time. After detecting the objects, a Kalman Filter multi-object tracker is used to track these objects over time to determine the motion model. For each tracked object, its motion model represents its way of moving around the user. Motion features are extracted while the object is still in the user\u2019s field of vision. 
These features are then used to quantify the danger, assigning one of five predefined hazard classes using a neural network-based classifier. The classification performance is tested on both publicly available and private datasets, and the system shows promising results, with up to a 90% True Positive Rate (TPR) associated with as low as a 7% False Positive Rate (FPR), a 13% False Negative Rate (FNR) and an average testing Mean Square Error (MSE) of 8.8%. The predicted hazard type is then translated into a smart notification to increase the user\u2019s cognitive perception through the healthy vision within the visual field. A participant study was conducted with a group of patients with different visual field defects to explore their feedback about the proposed system and the notification generation stage. A real-world outdoor evaluation with human subjects is planned as future work.<\/jats:p>","DOI":"10.3390\/s19071630","type":"journal-article","created":{"date-parts":[[2019,4,5]],"date-time":"2019-04-05T11:36:01Z","timestamp":1554464161000},"page":"1630","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":27,"title":["A Smart Context-Aware Hazard Attention System to Help People with Peripheral Vision Loss"],"prefix":"10.3390","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9216-6707","authenticated-orcid":false,"given":"Ola","family":"Younis","sequence":"first","affiliation":[{"name":"Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8927-2368","authenticated-orcid":false,"given":"Waleed","family":"Al-Nuaimy","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, 
UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9210-9131","authenticated-orcid":false,"given":"Fiona","family":"Rowe","sequence":"additional","affiliation":[{"name":"Department of Health Services Research, University of Liverpool, Liverpool L69 3GL, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7874-7679","authenticated-orcid":false,"given":"Mohammad H.","family":"Alomari","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Liverpool L69 3BX, UK"}]}],"member":"1968","published-online":{"date-parts":[[2019,4,5]]},"reference":[{"key":"ref_1","first-page":"19","article-title":"Leading causes of blindness worldwide","volume":"283","author":"Roodhooft","year":"2002","journal-title":"Bull. Soc. Belge Ophtalmol."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"629","DOI":"10.1136\/bjophthalmol-2013-304033","article-title":"Prevalence and causes of vision loss in high-income countries and in Eastern and Central Europe: 1990\u20132010","volume":"98","author":"Bourne","year":"2014","journal-title":"Br. J. Ophthalmol."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1167\/11.5.13","article-title":"Peripheral vision and pattern recognition: A review","volume":"11","author":"Strasburger","year":"2011","journal-title":"J. Vis."},{"key":"ref_4","unstructured":"Anderson, D., and Patella, M. (1992). Automated Static Perimetry, Mosby Year Book."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1241","DOI":"10.3758\/BF03208380","article-title":"The eccentricity effect: Target eccentricity affects performance on conjunction searches","volume":"57","author":"Carrasco","year":"1995","journal-title":"Percept. Psychophys."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"586","DOI":"10.1167\/3.10.1","article-title":"Visual field representations and locations of visual areas V1\/2\/3 in human visual cortex","volume":"3","author":"Dougherty","year":"2003","journal-title":"J. 
Vis."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Hersh, M., and Johnson, M. (2008). Assistive Technology for Visually Impaired and Blind Peoplel, Springer-Verlag. [1st ed.].","DOI":"10.1007\/978-1-84628-867-8"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Ervasti, M., Isomursu, M., and Leibar, I.I. (2011, January 15\u201316). Touch-and audio-based medication management service concept for vision impaired older people. Proceedings of the IEEE International Conference on RFID-Technologies and Applications, Sitges, Spain.","DOI":"10.1109\/RFID-TA.2011.6068645"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"31","DOI":"10.4103\/2320-3897.122659","article-title":"Interpretation of autoperimetry","volume":"2","author":"Nayak","year":"2014","journal-title":"J. Clin. Ophthalmol. Res."},{"key":"ref_10","unstructured":"Woodrow, B., and Thomas, C. (2000). Fundamentals of Wearable Computers and Augmented Reality, Lawrence Erlbaum Associates, Inc."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Szeliski, R. (2010). Computer Vision: Algorithms and Applications, Springer Science & Business Media.","DOI":"10.1007\/978-1-84882-935-0"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1016\/j.procs.2014.09.039","article-title":"Context-Aware Systems: A More Appropriate Response System to Hurricanes and Other Natural Disasters","volume":"36","author":"Millham","year":"2014","journal-title":"Procedia Comput. Sci."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Schilit, B., Adams, N., and Want, R. (1994, January 8\u20139). Context-aware computing applications. Proceedings of the IEEE Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, USA.","DOI":"10.1109\/WMCSA.1994.16"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Korn, O., Funk, M., Abele, S., H\u00f6rz, T., and Schmidt, A. (2014, January 27\u201330). 
Context-aware Assistive Systems at the Workplace: Analyzing the Effects of Projection and Gamification. Proceedings of the 7th International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece.","DOI":"10.1145\/2674396.2674406"},{"key":"ref_15","unstructured":"Ong, S., and Nee, A.Y.C. (2013). Virtual and Augmented Reality Applications in Manufacturing, Springer Science & Business Media."},{"key":"ref_16","unstructured":"Ohta, Y., and Tamura, H. (2014). Mixed Reality: Merging Real and Virtual Worlds, Springer Publishing Company. [1st ed.]."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Barfield, W. (2015). Fundamentals of Wearable Computers and Augmented Reality, CRC Press.","DOI":"10.1201\/b18703"},{"key":"ref_18","first-page":"1","article-title":"A Hazard Detection and Tracking System for People with Peripheral Vision Loss using Smart Glasses and Augmented Reality","volume":"10","author":"Younis","year":"2019","journal-title":"Int. J. Adv. Comput. Sci. Appl."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Younis, O., Al-Nuaimy, W., Rowe, F., and Alomari, M.H. (2018, January 8\u201313). Real-time Detection of Wearable Camera Motion Using Optical Flow. Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil.","DOI":"10.1109\/CEC.2018.8477783"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"25","DOI":"10.1109\/TSMCC.2009.2021255","article-title":"Wearable obstacle avoidance electronic travel aids for blind: A survey","volume":"40","author":"Dakopoulos","year":"2010","journal-title":"IEEE Trans. Syst. Man Cybern. Part C (Appl. 
Rev.)"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"602","DOI":"10.1889\/1.1831932","article-title":"Augmented View for Tunnel Vision: Device Testing by Patients in Real Environments","volume":"Volume 32","author":"Peli","year":"2001","journal-title":"SID Symposium Digest of Technical Papers"},{"key":"ref_22","first-page":"1","article-title":"Wearable real-time stereo vision for the visually impaired","volume":"14","author":"Balakrishnan","year":"2007","journal-title":"Eng. Lett."},{"key":"ref_23","first-page":"205","article-title":"CNN based Augmented Reality Using Numerical Approximation Techniques","volume":"1","author":"Elango","year":"2010","journal-title":"Int. J. Signal Image Process."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Fiannaca, A., Apostolopoulous, I., and Folmer, E. (2014, January 20\u201322). Headlock: A wearable navigation aid that helps blind cane users traverse large open spaces. Proceedings of the 16th international ACM SIGACCESS Conference on Computers & Accessibility, Rochester, NY, USA.","DOI":"10.1145\/2661334.2661453"},{"key":"ref_25","unstructured":"Cloix, S., Weiss, V., Bologna, G., Pun, T., and Hasler, D. (2014, January 5\u20138). Obstacle and planar object detection using sparse 3D information for a smart walker. Proceedings of the International Conference on Computer Vision Theory and Applications, Lisbon, Portugal."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Everding, L., Walger, L., Ghaderi, V.S., and Conradt, J. (2016, January 14\u201316). A mobility device for the blind with improved vertical resolution using dynamic vision sensors. Proceedings of the IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany.","DOI":"10.1109\/HealthCom.2016.7749459"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Li, B., Zhang, X., Mu\u00f1oz, J.P., Xiao, J., Rong, X., and Tian, Y. (2015, January 6\u20139). 
Assisting blind people to avoid obstacles: An wearable obstacle stereo feedback system based on 3D detection. Proceedings of the IEEE International Conference on Robotics and Biomimetics, Zhuhai, China.","DOI":"10.1109\/ROBIO.2015.7419118"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Elmannai, W., and Elleithy, K. (2017). Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions. Sensors, 17.","DOI":"10.3390\/s17030565"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, K., Wang, K., Lin, S., Bai, J., Bergasa, L.M., and Arroyo, R. (2018, January 27\u201329). Long-Range Traversability Awareness and Low-Lying Obstacle Negotiation with RealSense for the Visually Impaired. Proceedings of the 2018 International Conference on Information Science and System, Jeju, Korea.","DOI":"10.1145\/3209914.3209943"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Abowd, G., Dey, A., Brown, P., Davies, N., Smith, M., and Steggles, P. (1999). Towards a better understanding of context and context-awareness. International Symposium on Handheld and Ubiquitous Computing, Springer.","DOI":"10.1007\/3-540-48157-5_29"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Li, B., Munoz, P., Rong, X., Xiao, J., Tian, Y., and Arditi, A. (2016). ISANA: Wearable context-aware indoor assistive navigation with obstacle avoidance for the blind. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-48881-3_31"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Tapu, R., Mocanu, B., and Zaharia, T. (2017). DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance. Sensors, 17.","DOI":"10.3390\/s17112473"},{"key":"ref_33","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (July, January 26). You only look once: Unified, real-time object detection. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Croce, D., Gallo, P., Garlisi, D., Giarr\u00e9, L., Mangione, S., and Tinnirello, I. (2014, January 16\u201319). ARIANNA: A smartphone-based navigation system with human in the loop. Proceedings of the IEEE 22nd Mediterranean Conference on Control and Automation (MED), Palermo, Italy.","DOI":"10.1109\/MED.2014.6961318"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Croce, D., Giarr\u00e9, L., Rosa, F.G.L., Montana, E., and Tinnirello, I. (2016, January 21\u201324). Enhancing tracking performance in a smartphone-based navigation system for visually impaired people. Proceedings of the 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece.","DOI":"10.1109\/MED.2016.7535871"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"702","DOI":"10.1109\/TMC.2018.2842751","article-title":"Vision-based Mobile Indoor Assistive Navigation Aid for Blind People","volume":"18","author":"Li","year":"2019","journal-title":"IEEE Trans. Mobile Comput."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1186\/s13673-018-0134-9","article-title":"User-centered design of a depth data based obstacle detection and avoidance system for the visually impaired","volume":"8","author":"Jafri","year":"2018","journal-title":"Hum.-Cent. Comput. Inf. Sci."},{"key":"ref_38","unstructured":"Stephanidis, C., and Antona, M. (2013). Gathering the Users\u2019 Needs in the Development of Assistive Technology: A Blind Navigation System Use Case. Universal Access in Human-Computer Interaction. Applications and Services for Quality of Life, Springer."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Benabid, A., and AlZuhair, M. (2014, January 2\u20135). User involvement in the development of indoor navigation system for the visually impaired: A needs-finding study. 
Proceedings of the 3rd International Conference on User Science and Engineering, Shah Alam, Malaysia.","DOI":"10.1109\/IUSER.2014.7002684"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.cviu.2016.09.001","article-title":"Computer vision for assistive technologies","volume":"154","author":"Leo","year":"2017","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_41","unstructured":"Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., and Garnett, R. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems 28, Curran Associates, Inc."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). SSD: Single shot multibox detector. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_43","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (arXiv, 2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications, arXiv."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014). Microsoft coco: Common objects in context. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1007\/s11263-014-0733-5","article-title":"The Pascal Visual Object Classes Challenge: A Retrospective","volume":"111","author":"Everingham","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"83","DOI":"10.1002\/nav.3800020109","article-title":"The Hungarian method for the assignment problem","volume":"2","author":"Kuhn","year":"1955","journal-title":"Naval Res. Log. 
Q."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Brostow, G.J., Shotton, J., Fauqueur, J., and Cipolla, R. (2008). Segmentation and Recognition Using Structure from Motion Point Clouds. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-540-88682-2_5"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1016\/j.patrec.2008.04.005","article-title":"Semantic Object Classes in Video: A High-Definition Ground Truth Database","volume":"30","author":"Brostow","year":"2008","journal-title":"Pattern Recogn. Lett."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/7\/1630\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T12:43:09Z","timestamp":1760186589000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/7\/1630"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,4,5]]},"references-count":48,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2019,4]]}},"alternative-id":["s19071630"],"URL":"https:\/\/doi.org\/10.3390\/s19071630","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,4,5]]}}}