{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:30:32Z","timestamp":1760146232745,"version":"build-2065373602"},"reference-count":33,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T00:00:00Z","timestamp":1729123200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia, IP (FCT)","award":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"],"award-info":[{"award-number":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"]}]},{"name":"Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior (CAPES)","award":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"],"award-info":[{"award-number":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"]}]},{"name":"Conselho Nacional de Desenvolvimento Cient\u00edfico e Tecnol\u00f3gico (CNPq), Brazil","award":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"],"award-info":[{"award-number":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"]}]},{"name":"Funda\u00e7\u00e3o de Amparo \u00e0 Pesquisa Desenvolvimento Cient\u00edfico e Tecnol\u00f3gico do Maranh\u00e3o (FAPEMA) Brazil","award":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"],"award-info":[{"award-number":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"]}]},{"name":"Empresa Brasileira de Servi\u00e7os Hospitalares (Ebserh) Brazil","award":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"],"award-info":[{"award-number":["UIDB\/00319\/2020","001","000527\/2024","409593\/2021-4"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Great advances in stitching high-quality retinal images have been made in recent years. 
On the other hand, very few studies have been carried out on low-resolution retinal imaging. This work investigates the challenges of low-resolution retinal images obtained by the D-EYE smartphone-based fundus camera. The proposed method uses homography estimation to register and stitch low-quality retinal images into a cohesive mosaic. First, a Siamese neural network extracts features from a pair of images, after which the correlation of their feature maps is computed. This correlation map is fed through four independent CNNs to estimate the homography parameters, each specializing in different corner coordinates. Our model was trained on a synthetic dataset generated from the Microsoft Common Objects in Context (MSCOCO) dataset; this work adds a data augmentation phase that improves the quality of the model. The model is then evaluated on the FIRE and D-EYE retinal datasets, with performance measured using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The obtained results are promising: the average PSNR was 26.14 dB, with an SSIM of 0.96 on the D-EYE dataset. Compared to the method that uses a single neural network for homography calculations, our approach improves the PSNR by 7.96 dB and achieves a 7.86% higher SSIM score.<\/jats:p>","DOI":"10.3390\/info15100652","type":"journal-article","created":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T11:19:18Z","timestamp":1729163958000},"page":"652","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Image Stitching of Low-Resolution Retinography Using Fundus Blur Filter and Homography Convolutional Neural Network"],"prefix":"10.3390","volume":"15","author":[{"given":"Levi","family":"Santos","sequence":"first","affiliation":[{"name":"Applied Computing Group (NCA\u2014UFMA), Federal University of Maranh\u00e3o, Av. 
dos Portugueses, 1966\u2014Vila Bacanga, S\u00e3o Lu\u00eds 65080-805, MA, Brazil"}]},{"given":"Maur\u00edcio","family":"Almeida","sequence":"additional","affiliation":[{"name":"Applied Computing Group (NCA\u2014UFMA), Federal University of Maranh\u00e3o, Av. dos Portugueses, 1966\u2014Vila Bacanga, S\u00e3o Lu\u00eds 65080-805, MA, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7013-9700","authenticated-orcid":false,"given":"Jo\u00e3o","family":"Almeida","sequence":"additional","affiliation":[{"name":"Applied Computing Group (NCA\u2014UFMA), Federal University of Maranh\u00e3o, Av. dos Portugueses, 1966\u2014Vila Bacanga, S\u00e3o Lu\u00eds 65080-805, MA, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3731-6431","authenticated-orcid":false,"given":"Geraldo","family":"Braz","sequence":"additional","affiliation":[{"name":"Applied Computing Group (NCA\u2014UFMA), Federal University of Maranh\u00e3o, Av. dos Portugueses, 1966\u2014Vila Bacanga, S\u00e3o Lu\u00eds 65080-805, MA, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2207-0897","authenticated-orcid":false,"given":"Jos\u00e9","family":"Camara","sequence":"additional","affiliation":[{"name":"School of Science and Technology, University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal"},{"name":"ALGORITMI Research Centre, University of Minho, 4800-058 Guimar\u00e3es, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3458-7693","authenticated-orcid":false,"given":"Ant\u00f3nio","family":"Cunha","sequence":"additional","affiliation":[{"name":"School of Science and Technology, University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal"},{"name":"ALGORITMI Research Centre, University of Minho, 4800-058 Guimar\u00e3es, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2024,10,17]]},"reference":[{"key":"ref_1","unstructured":"WHO (2019). 
World Report on Vision, World Health Organization."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Neto, A., Camara, J., and Cunha, A. (2022). Evaluations of deep learning approaches for glaucoma screening using retinal images from mobile device. Sensors, 22.","DOI":"10.3390\/s22041449"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Pachade, S., Porwal, P., Thulkar, D., Kokare, M., Deshmukh, G., Sahasrabuddhe, V., Giancardo, L., Quellec, G., and M\u00e9riaudeau, F. (2021). Retinal fundus multi-disease image dataset (RFMiD): A dataset for multi-disease detection research. Data, 6.","DOI":"10.3390\/data6020014"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"588","DOI":"10.1136\/bjophthalmol-2019-314336","article-title":"Estimated number of ophthalmologists worldwide (International Council of Ophthalmology update): Will we meet the needs?","volume":"104","author":"Resnikoff","year":"2020","journal-title":"Br. J. Ophthalmol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"331","DOI":"10.1080\/09286586.2022.2127784","article-title":"WHO Vision 2020: Have we done it?","volume":"30","author":"Abdulhussein","year":"2023","journal-title":"Ophthalmic Epidemiol."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"e21","DOI":"10.1016\/j.jaapos.2016.07.080","article-title":"D-EYE: A portable and inexpensive option for fundus photography and videography in the pediatric population with telemedicine potential","volume":"20","author":"Pihlblad","year":"2016","journal-title":"J. Am. Assoc. Pediatr. Ophthalmol. Strabismus"},{"key":"ref_7","unstructured":"Barritt, N., Pilon, L., MacLean, A., Lin, A., Cole, A., Faruq, I., and Lakshminarayanan, V. (September, January 24). Development and testing of a stabilization and image processing system for improvement of mobile fundus camera image quality. 
Proceedings of the Novel Optical Systems, Methods, and Applications XXIII, SPIE, Virtual."},{"key":"ref_8","unstructured":"Correia, T.V.S. (2023). Detection and Mosaicing through Deep Learning Models for Low-Quality Retinal Images. [Master\u2019s Thesis, School of Technology and Management of the Polytechnic Institute of Leiria]. Available online: https:\/\/iconline.ipleiria.pt\/handle\/10400.8\/8892."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Liu, J., Li, X., Wei, Q., Xu, J., and Ding, D. (2022, January 23\u201327). Semi-supervised keypoint detector and descriptor for retinal image matching. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19803-8_35"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Hu, R., Chalakkal, R., Linde, G., and Dhupia, J.S. (2022, January 11\u201315). Multi-image stitching for smartphone-based retinal fundus stitching. Proceedings of the 2022 IEEE\/ASME International Conference on Advanced Intelligent Mechatronics (AIM), IEEE, Sapporo, Japan.","DOI":"10.1109\/AIM52237.2022.9863260"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"4516","DOI":"10.1109\/TGRS.2011.2144607","article-title":"Uniform Robust Scale-Invariant Feature Matching for Optical Remote Sensing Images","volume":"49","author":"Sedaghat","year":"2011","journal-title":"IEEE Trans. Geosci. Remote. Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"102950","DOI":"10.1016\/j.jvcir.2020.102950","article-title":"A view-free image stitching network based on global homography","volume":"73","author":"Nie","year":"2020","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_13","unstructured":"DeTone, D., Malisiewicz, T., and Rabinovich, A. (2016). Deep image homography estimation. 
arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1007\/s12204-022-2513-7","article-title":"Unsupervised Oral Endoscope Image Stitching Algorithm","volume":"29","author":"Huang","year":"2024","journal-title":"J. Shanghai Jiaotong Univ. Sci."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C.L., and Doll\u00e1r, P. (2015). Microsoft COCO: Common Objects in Context. arXiv.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_16","unstructured":"Chopra, S., Hadsell, R., and LeCun, Y. (2005, January 20\u201326). Learning a similarity metric discriminatively, with application to face verification. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201905), IEEE, San Diego, CA, USA."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"73","DOI":"10.1007\/978-1-0716-0826-5_3","article-title":"Siamese neural networks: An overview","volume":"2190","author":"Chicco","year":"2021","journal-title":"Artif. Neural Netw."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Hartley, R., and Zisserman, A. (2003). Multiple View Geometry in Computer Vision, Cambridge University Press.","DOI":"10.1017\/CBO9780511811685"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"109460","DOI":"10.1109\/ACCESS.2019.2933635","article-title":"Combining convolutional neural network and photometric refinement for accurate homography estimation","volume":"7","author":"Kang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_20","unstructured":"O\u2019shea, K., and Nash, R. (2015). An introduction to convolutional neural networks. arXiv."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Szeliski, R. (2022). 
Computer Vision: Algorithms and Applications, Springer Nature.","DOI":"10.1007\/978-3-030-34372-9"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"7885","DOI":"10.1109\/TPAMI.2022.3223789","article-title":"Unsupervised global and local homography estimation with motion basis learning","volume":"45","author":"Liu","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zhou, Q., and Li, X. (2019). STN-Homography: Direct estimation of homography parameters for image pairs. Appl. Sci., 9.","DOI":"10.3390\/app9235187"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"413","DOI":"10.1007\/s00530-020-00651-y","article-title":"Review on image-stitching techniques","volume":"26","author":"Wang","year":"2020","journal-title":"Multimed. Syst."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1016\/j.aqpro.2015.02.019","article-title":"A review of quality metrics for fused image","volume":"4","author":"Jagalingam","year":"2015","journal-title":"Aquat. Procedia"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Dissanayake, V., Herath, S., Rasnayaka, S., Seneviratne, S., Vidanaarachchi, R., and Gamage, C. (2015, January 23\u201325). Quantitative and Qualitative Evaluation of Performance and Robustness of Image Stitching Algorithms. Proceedings of the 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, Australia.","DOI":"10.1109\/DICTA.2015.7371297"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1109\/MSP.2008.930649","article-title":"Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures","volume":"26","author":"Wang","year":"2009","journal-title":"IEEE Signal Process. 
Mag."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_29","first-page":"249","article-title":"Unsupervised deep learning image stitching model assisted with infrared images","volume":"Volume 12969","author":"Zhu","year":"2024","journal-title":"Proceedings of the International Conference on Algorithm, Imaging Processing, and Machine Vision (AIPMV 2023)"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1150","DOI":"10.1109\/JSTSP.2023.3250956","article-title":"Attentive deep image quality assessment for omnidirectional stitching","volume":"17","author":"Duan","year":"2023","journal-title":"IEEE J. Sel. Top. Signal Process."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"29452","DOI":"10.1109\/JSEN.2024.3436051","article-title":"A Fast Unsupervised Image Stitching Model Based on Homography Estimation","volume":"24","author":"Ni","year":"2024","journal-title":"IEEE Sens. J."},{"key":"ref_32","unstructured":"The GIMP Development Team (2024, February 10). GIMP: GNU Image Manipulation Program. Version 2.10.36. The GIMP Development Team. Available online: https:\/\/www.gimp.org."},{"key":"ref_33","first-page":"16","article-title":"FIRE: Fundus image registration dataset","volume":"1","author":"Zabulis","year":"2017","journal-title":"Model. Artif. Intell. 
Ophthalmol."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/10\/652\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T16:15:35Z","timestamp":1760112935000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/10\/652"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,17]]},"references-count":33,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2024,10]]}},"alternative-id":["info15100652"],"URL":"https:\/\/doi.org\/10.3390\/info15100652","relation":{},"ISSN":["2078-2489"],"issn-type":[{"type":"electronic","value":"2078-2489"}],"subject":[],"published":{"date-parts":[[2024,10,17]]}}}