{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T18:08:12Z","timestamp":1774375692106,"version":"3.50.1"},"reference-count":26,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2022,2,14]],"date-time":"2022-02-14T00:00:00Z","timestamp":1644796800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","doi-asserted-by":"publisher","award":["UIDB\/50014\/2020"],"award-info":[{"award-number":["UIDB\/50014\/2020"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can help glaucoma screening by extending it to larger populations using retinal images. Low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired by both retinography and mobile devices. The goal was to verify the results of these methods and see if similar results could be achieved using images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2 and Inception ResNet V2 models. The models\u2019 activation maps were produced and analysed to support glaucoma classifier predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a frequent indicator used by specialists. For this reason, additionally, the U-Net architecture was used with the Inception ResNet V2 and Inception V3 models as the backbone to segment and estimate CDR. For both tasks, the performance of the models reached close to that of state-of-the-art methods, and the classification method applied to a low-quality private dataset illustrates the advantage of using cheaper lenses.<\/jats:p>","DOI":"10.3390\/s22041449","type":"journal-article","created":{"date-parts":[[2022,2,14]],"date-time":"2022-02-14T03:46:00Z","timestamp":1644810360000},"page":"1449","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":34,"title":["Evaluations of Deep Learning Approaches for Glaucoma Screening Using Retinal Images from Mobile Device"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4132-3186","authenticated-orcid":false,"given":"Alexandre","family":"Neto","sequence":"first","affiliation":[{"name":"Escola de Ci\u00eancias de Tecnologia, University of Tr\u00e1s-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal"},{"name":"INESC TEC\u2014Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2207-0897","authenticated-orcid":false,"given":"Jos\u00e9","family":"Camara","sequence":"additional","affiliation":[{"name":"INESC TEC\u2014Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal"},{"name":"Departamento de Ci\u00eancias e Tecnologia, University Aberta, 1250-100 Lisboa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3458-7693","authenticated-orcid":false,"given":"Ant\u00f3nio","family":"Cunha","sequence":"additional","affiliation":[{"name":"Escola de Ci\u00eancias de Tecnologia, University of Tr\u00e1s-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal"},{"name":"INESC TEC\u2014Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,2,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"803","DOI":"10.1109\/JBHI.2016.2544961","article-title":"Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted from Fundus Images","volume":"21","author":"Maheshwari","year":"2017","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"892","DOI":"10.1364\/BOE.10.000892","article-title":"Automatic glaucoma classification using color fundus images based on convolutional neural networks and transfer learning","volume":"10","author":"Fatti","year":"2019","journal-title":"Biomed. Opt. Express"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Srivastava, R., Liu, H., Chen, X., Duan, L., Wong, D.W.K., Kwoh, C.K., Wong, T.Y., and Liu, J. (2014). A survey on computer aided diagnosis for ocular diseases. BMC Med. Inform. Decis. Mak., 14.","DOI":"10.1186\/1472-6947-14-80"},{"key":"ref_4","first-page":"43","article-title":"Retinal Fundus Image for Glaucoma Detection: A Review and Study","volume":"28","author":"Kanse","year":"2019","journal-title":"J. Intell. Syst."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Sreng, S., Maneerat, N., Hamamoto, K., and Win, K.Y. (2020). Deep learning for optic disc segmentation and glaucoma diagnosis on retinal images. Appl. Sci., 10.","DOI":"10.3390\/app10144916"},{"key":"ref_6","first-page":"29","article-title":"M\u00e9todos computacionais para segmenta\u00e7\u00e3o do disco \u00f3ptico em imagens de retina: Uma revis\u00e3o","volume":"10","author":"Claro","year":"2018","journal-title":"Rev. Bras. Comput. Apl."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1016\/j.bspc.2018.01.014","article-title":"Survey on segmentation and classification approaches of optic cup and optic disc for diagnosis of glaucoma","volume":"42","author":"Thakur","year":"2018","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Bajwa, M.N., Malik, M.I., Siddiqui, S.A., Dengel, A., Shafait, F., Neumeier, W., and Ahmed, S. (2019). Correction to: Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning. BMC Med. Inform. Decis. Mak., 19.","DOI":"10.1186\/s12911-019-0876-y"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.cmpb.2018.07.012","article-title":"Computer-aided diagnosis of glaucoma using fundus images: A review","volume":"165","author":"Hagiwara","year":"2018","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"20","DOI":"10.1186\/s12938-020-00767-2","article-title":"Machine learning applied to retinal image processing for glaucoma detection: Review and perspective","volume":"19","author":"Barros","year":"2020","journal-title":"Biomed. Eng. Online"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"29","DOI":"10.1186\/s12938-019-0649-y","article-title":"CNNs for automatic glaucoma assessment using fundus images: An extensive validation","volume":"18","author":"Morales","year":"2019","journal-title":"Biomed. Eng. Online"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Serener, A., and Serte, S. (2019, January 3\u20135). Transfer learning for early and advanced glaucoma detection with convolutional neural networks. Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey.","DOI":"10.1109\/TIPTEKNO.2019.8894965"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Norouzifard, M., Nemati, A., Gholamhosseini, H., Klette, R., Nouri-Mahdavi, K., and Yousefi, S. (2018, January 19\u201321). Automated glaucoma diagnosis using deep and transfer learning: Proposal of a system for clinical testing. Proceedings of the 2018 International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand.","DOI":"10.1109\/IVCNZ.2018.8634671"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Al-Bander, B., Williams, B.M., Al-Nuaimy, W., Al-Taee, M.A., Pratt, H., and Zheng, Y. (2018). Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis. Symmetry, 10.","DOI":"10.3390\/sym10040087"},{"key":"ref_15","first-page":"373","article-title":"Retinal Optic Disc Segmentation Using Conditional Generative Adversarial Network","volume":"308","author":"Singh","year":"2018","journal-title":"Front. Artif. Intell. Appl."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Qin, Y., and Hawbani, A. (2019, January 25\u201328). A novel segmentation method for optic disc and optic cup based on deformable U-net. Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China.","DOI":"10.1109\/ICAIBD.2019.8837025"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1016\/j.compmedimag.2019.02.005","article-title":"Robust optic disc and cup segmentation with deep learning for glaucoma detection","volume":"74","author":"Yu","year":"2019","journal-title":"Comput. Med. Imaging Graph."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wong, D.W.K., Liu, J., Lim, J.H., Jia, X., Yin, F., Li, H., and Wong, T.Y. (2008, January 20\u201325). Level-set based automatic cup-to-disc ratio determination using retinal fundus images in argali. Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada.","DOI":"10.1109\/IEMBS.2008.4649648"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"44","DOI":"10.5815\/ijigsp.2012.09.07","article-title":"Techniques of Glaucoma Detection from Color Fundus Images: A Review","volume":"4","author":"Nath","year":"2012","journal-title":"Int. J. Image Graph. Signal Process."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1019","DOI":"10.1109\/TMI.2013.2247770","article-title":"Superpixel classification based optic disc and optic cup segmentation for glaucoma screening","volume":"32","author":"Cheng","year":"2013","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Diaz, A., Morales, S., Naranjo, V., Alcoceryz, P., and Lanzagortayz, A. (September, January 29). Glaucoma diagnosis by means of optic cup feature analysis in color fundus images. Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary.","DOI":"10.1109\/EUSIPCO.2016.7760610"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1299","DOI":"10.1109\/TMI.2016.2535302","article-title":"Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?","volume":"35","author":"Tajbakhsh","year":"2017","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Pashaei, M., Kamangir, H., Starek, M.J., and Tissot, P. (2020). Review and evaluation of deep learning architectures for efficient land cover mapping with UAS hyper-spatial imagery: A case study over a wetland. Remote Sens., 12.","DOI":"10.3390\/rs12060959"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A.A. (2017, January 4\u20139). Inception-v4, inception-ResNet and the impact of residual connections on learning. Proceedings of the Thirty-first AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Carranza-Garc\u00eda, M., Torres-Mateo, J., Lara-Ben\u00edtez, P., and Garc\u00eda-Guti\u00e9rrez, J. (2021). On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data. Remote Sens., 13.","DOI":"10.3390\/rs13010089"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/4\/1449\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:18:59Z","timestamp":1760134739000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/4\/1449"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,14]]},"references-count":26,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2022,2]]}},"alternative-id":["s22041449"],"URL":"https:\/\/doi.org\/10.3390\/s22041449","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,2,14]]}}}