{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T08:38:28Z","timestamp":1775810308238,"version":"3.50.1"},"reference-count":19,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2022,3,7]],"date-time":"2022-03-07T00:00:00Z","timestamp":1646611200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","doi-asserted-by":"publisher","award":["LA\/P\/0063\/2020"],"award-info":[{"award-number":["LA\/P\/0063\/2020"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","doi-asserted-by":"publisher","award":["UIDB\/50008\/2020"],"award-info":[{"award-number":["UIDB\/50008\/2020"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Ideally, screening for eye diseases would be carried out with specialized medical equipment that captures retinal fundus images. However, such equipment is generally expensive and has low portability; with the development of technology and the emergence of smartphones, new portable and cheaper screening options have appeared, one of them being the D-Eye device. Compared to specialized equipment, the D-Eye and similar smartphone-based devices capture retinal video with lower quality and a smaller field of view, yet with sufficient quality to perform a medical pre-screening; individuals can then be referred for specialized screening to obtain a medical diagnosis if necessary. Two methods were proposed to extract the relevant regions (the retinal zone) from these lower-quality videos. The first is based on classical image-processing approaches, such as thresholding and the Hough Circle transform. The second extracts the retinal location with YOLO v4, a neural network reported in the literature as performing well for object detection, and was demonstrated to be the preferred method. A mosaicing technique was then applied to the extracted retinal regions to obtain a single, more informative image with a wider field of view. It was divided into two stages: in the first, the GLAMpoints neural network extracts relevant matching points, and homography transformations bring the overlapping regions of the images into the same reference frame; in the second, a smoothing process is applied to the transitions between images.<\/jats:p>","DOI":"10.3390\/s22052059","type":"journal-article","created":{"date-parts":[[2022,3,9]],"date-time":"2022-03-09T01:50:53Z","timestamp":1646790653000},"page":"2059","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Detection and Mosaicing Techniques for Low-Quality Retinal Videos"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2207-0897","authenticated-orcid":false,"given":"Jos\u00e9","family":"Camara","sequence":"first","affiliation":[{"name":"Departamento de Ci\u00eancias e Tecnologia, University Aberta, 1250-100 Lisboa, Portugal"},{"name":"Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), 4200-465 Porto, Portugal"}]},{"given":"Bruno","family":"Silva","sequence":"additional","affiliation":[{"name":"Polytechnic of Leiria, 2411-901 Leiria, Portugal"}]},{"given":"Ant\u00f3nio","family":"Gouveia","sequence":"additional","affiliation":[{"name":"Escola de Ci\u00eancias e Tecnologias, University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3394-6762","authenticated-orcid":false,"given":"Ivan Miguel","family":"Pires","sequence":"additional","affiliation":[{"name":"Escola de Ci\u00eancias e Tecnologias, University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal"},{"name":"Instituto de Telecomunica\u00e7\u00f5es, Universidade da Beira Interior, 6200-001 Covilh\u00e3, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4383-0472","authenticated-orcid":false,"given":"Paulo","family":"Coelho","sequence":"additional","affiliation":[{"name":"Polytechnic of Leiria, 2411-901 Leiria, Portugal"},{"name":"Institute for Systems Engineering and Computers at Coimbra (INESC Coimbra), DEEC, P\u00f3lo II, 3030-290 Coimbra, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3458-7693","authenticated-orcid":false,"given":"Ant\u00f3nio","family":"Cunha","sequence":"additional","affiliation":[{"name":"Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), 4200-465 Porto, Portugal"},{"name":"Escola de Ci\u00eancias e Tecnologias, University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,7]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1155\/2015\/823139","article-title":"A novel device to exploit the smartphone camera for fundus photography","volume":"2015","author":"Russo","year":"2015","journal-title":"J. Ophthalmol."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"438","DOI":"10.1136\/bjophthalmol-2013-303797","article-title":"A mobile phone-based retinal camera for portable wide field imaging","volume":"98","author":"Maamari","year":"2014","journal-title":"Br. J. Ophthalmol."},{"key":"ref_3","unstructured":"(2021, February 26). Inview\u00ae. Available online: https:\/\/www.volk.com\/collections\/diagnostic-imaging\/products\/inview-for-iphone-6-6s.html."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"201","DOI":"10.3928\/01913913-20180220-01","article-title":"Comparison study of funduscopic examination using a smartphone-based digital ophthalmoscope and the direct ophthalmoscope","volume":"55","author":"Wu","year":"2018","journal-title":"J. Pediatr. Ophthalmol. Strabismus"},{"key":"ref_5","first-page":"16","article-title":"FIRE: Fundus image registration dataset","volume":"1","author":"Zabulis","year":"2017","journal-title":"J. Model. Ophthalmol."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Zengin, H., Camara, J., Coelho, P., Rodrigues, J.M., and Cunha, A. (2020, January 19\u201324). Low-Resolution Retinal Image Vessel Segmentation. Proceedings of the International Conference on Human-Computer Interaction, Copenhagen, Denmark.","DOI":"10.1007\/978-3-030-49108-6_44"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Truong, P., Apostolopoulos, S., Mosinska, A., Stucky, S., Ciller, C., and Zanet, S.D. (2019, January 27\u201328). GLAMpoints: Greedily Learned Accurate Match points. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea.","DOI":"10.1109\/ICCV.2019.01083"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1007\/s11263-019-01247-4","article-title":"Deep learning for generic object detection: A survey","volume":"128","author":"Liu","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"128837","DOI":"10.1109\/ACCESS.2019.2939201","article-title":"A survey of deep learning-based object detection","volume":"7","author":"Jiao","year":"2019","journal-title":"IEEE Access"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.preteyeres.2005.07.001","article-title":"Retinal image analysis: Concepts, applications and potential","volume":"25","author":"Patton","year":"2006","journal-title":"Prog. Retin. Eye Res."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Melo, T., Mendon\u00e7a, A.M., and Campilho, A. (2018, January 27\u201329). Creation of Retinal Mosaics for Diabetic Retinopathy Screening: A Comparative Study. Proceedings of the International Conference Image Analysis and Recognition, P\u00f3voa de Varzim, Portugal.","DOI":"10.1007\/978-3-319-93000-8_76"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.jvcir.2015.10.014","article-title":"A survey on image mosaicing techniques","volume":"34","author":"Ghosh","year":"2016","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft Coco: Common objects in context. Proceedings of the European conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_15","unstructured":"(2021, September 20). WandB. Available online: http:\/\/www.wandb.ai."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"120","DOI":"10.1145\/360666.360677","article-title":"Finding circles by an array of accumulators","volume":"18","author":"Kimme","year":"1975","journal-title":"Commun. ACM"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"62","DOI":"10.1109\/TSMC.1979.4310076","article-title":"A threshold selection method from gray-level histograms","volume":"9","author":"Otsu","year":"1979","journal-title":"IEEE Trans. Syst. Man. Cybern."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"79","DOI":"10.3354\/cr030079","article-title":"Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance","volume":"30","author":"Willmott","year":"2005","journal-title":"Clim. Res."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Zhou, D., Fang, J., Song, X., Guan, C., Yin, J., Dai, Y., and Yang, R. (2019, January 15\u201318). IoU loss for 2d\/3d object detection. Proceedings of the 2019 International Conference on 3D Vision (3DV), Qu\u00e9bec City, QC, Canada.","DOI":"10.1109\/3DV.2019.00019"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/5\/2059\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:33:11Z","timestamp":1760135591000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/5\/2059"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,7]]},"references-count":19,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2022,3]]}},"alternative-id":["s22052059"],"URL":"https:\/\/doi.org\/10.3390\/s22052059","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,7]]}}}