{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,8]],"date-time":"2025-09-08T05:35:08Z","timestamp":1757309708964,"version":"3.37.3"},"reference-count":42,"publisher":"Springer Science and Business Media LLC","issue":"36","license":[{"start":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T00:00:00Z","timestamp":1728432000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T00:00:00Z","timestamp":1728432000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"research project RetinaReadRisk, with EIT Health and Horizon Europe funding","award":["220718"],"award-info":[{"award-number":["220718"]}]},{"DOI":"10.13039\/501100007512","name":"Universitat Rovira i Virgili","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100007512","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Comput &amp; Applic"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Widespread eye conditions such as cataracts, diabetic retinopathy, and glaucoma impact people worldwide. Ophthalmology uses fundus photography for diagnosing these retinal disorders, but fundus images are prone to image quality challenges. Accurate diagnosis hinges on high-quality fundus images. Therefore, there is a need for image quality assessment methods to evaluate fundus images before diagnosis. Consequently, this paper introduces a deep learning model tailored for fundus images that supports large images. Our division method centres on preserving the original image\u2019s high-resolution features while maintaining low computational cost and high accuracy. 
The proposed approach encompasses two fundamental components: an autoencoder model for input image reconstruction, and an image classifier that grades image quality from the latent features extracted by the autoencoder, all performed at the original image size, without alteration, before reassembly for the decoding networks. Through post hoc interpretability methods, we verified that our model focuses on key elements of fundus image quality. Additionally, an intrinsic interpretability module has been designed into the network that allows decomposing class scores into underlying quality concepts such as brightness or the presence of anatomical structures. Experimental results of our model on EyeQ, a fundus image dataset with three categories (Good, Usable, and Rejected), demonstrate that our approach produces competitive outcomes compared to other deep learning-based methods, with an overall accuracy of 0.9066, a precision of 0.8843, a recall of 0.8905, and an <jats:italic>F<\/jats:italic>1-score of 0.8868. 
The code is publicly available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/saifalkhaldiurv\/VISTA_-Image-Quality-Assessment\">https:\/\/github.com\/saifalkhaldiurv\/VISTA_-Image-Quality-Assessment<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s00521-024-10174-6","type":"journal-article","created":{"date-parts":[[2024,10,9]],"date-time":"2024-10-09T04:01:42Z","timestamp":1728446502000},"page":"23149-23168","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["VISTA: vision improvement via split and reconstruct deep neural network for fundus image quality assessment"],"prefix":"10.1007","volume":"36","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8707-7291","authenticated-orcid":false,"given":"Saif","family":"Khalid","sequence":"first","affiliation":[]},{"given":"Saddam","family":"Abdulwahab","sequence":"additional","affiliation":[]},{"given":"Oscar Agust\u00edn","family":"Stanchi","sequence":"additional","affiliation":[]},{"given":"Facundo Manuel","family":"Quiroga","sequence":"additional","affiliation":[]},{"given":"Franco","family":"Ronchetti","sequence":"additional","affiliation":[]},{"given":"Domenec","family":"Puig","sequence":"additional","affiliation":[]},{"given":"Hatem A.","family":"Rashwan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,10,9]]},"reference":[{"key":"10174_CR1","doi-asserted-by":"crossref","unstructured":"Khalid S, Abdulwahab S, Rashwan HA, Abdel-Nasser M, Sharaf N, Puig D (2022) Robust yet simple deep learning-based ensemble approach for assessing diabetic retinopathy in fundus images. In: 2022 5th international conference on multimedia, signal processing and communication technologies (IMPACT). 
IEEE, pp 1\u20135","DOI":"10.1109\/IMPACT55510.2022.10029219"},{"key":"10174_CR2","doi-asserted-by":"publisher","DOI":"10.1201\/9781420037005","volume-title":"Automated image detection of retinal pathology","author":"H Jelinek","year":"2009","unstructured":"Jelinek H, Cree MJ (2009) Automated image detection of retinal pathology. CRC Press, Boca Raton"},{"issue":"3","key":"10174_CR3","doi-asserted-by":"publisher","first-page":"1120","DOI":"10.1167\/iovs.05-1155","volume":"47","author":"AD Fleming","year":"2006","unstructured":"Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF (2006) Automated assessment of diabetic retinal image quality based on clarity and field definition. Investig Ophthalmol Vis Sci 47(3):1120\u20131125","journal-title":"Investig Ophthalmol Vis Sci"},{"issue":"5","key":"10174_CR4","doi-asserted-by":"publisher","first-page":"0127914","DOI":"10.1371\/journal.pone.0127914","volume":"10","author":"TJ MacGillivray","year":"2015","unstructured":"MacGillivray TJ, Cameron JR, Zhang Q, El-Medany A, Mulholland C, Sheng Z, Dhillon B, Doubal FN, Foster PJ, Trucco E et al (2015) Suitability of UK biobank retinal images for automatic analysis of morphometric properties of the vasculature. PLoS ONE 10(5):0127914","journal-title":"PLoS ONE"},{"key":"10174_CR5","doi-asserted-by":"crossref","unstructured":"Fu H, Wang B, Shen J, Cui S, Xu Y, Liu J, Shao L (2019) Evaluation of retinal image quality assessment networks in different color-spaces. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 48\u201356","DOI":"10.1007\/978-3-030-32239-7_6"},{"key":"10174_CR6","doi-asserted-by":"crossref","unstructured":"Lee SC, Wang Y (1999) Automatic retinal image quality assessment and enhancement. In: Medical imaging 1999: image processing, vol 3661. 
International Society for Optics and Photonics, pp 1581\u20131590","DOI":"10.1117\/12.348562"},{"key":"10174_CR7","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1016\/j.inffus.2012.08.001","volume":"19","author":"JMP Dias","year":"2014","unstructured":"Dias JMP, Oliveira CM, Silva Cruz LA (2014) Retinal image quality assessment using generic image quality indicators. Inf Fusion 19:73\u201390","journal-title":"Inf Fusion"},{"issue":"4","key":"10174_CR8","doi-asserted-by":"publisher","first-page":"1046","DOI":"10.1109\/TMI.2015.2506902","volume":"35","author":"S Wang","year":"2015","unstructured":"Wang S, Jin K, Lu H, Cheng C, Ye J, Qian D (2015) Human visual system-based fundus image quality assessment of portable fundus camera photographs. IEEE Trans Med Imaging 35(4):1046\u20131055","journal-title":"IEEE Trans Med Imaging"},{"key":"10174_CR9","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"10174_CR10","doi-asserted-by":"publisher","first-page":"101654","DOI":"10.1016\/j.media.2020.101654","volume":"61","author":"Y Shen","year":"2020","unstructured":"Shen Y, Sheng B, Fang R, Li H, Dai L, Stolte S, Qin J, Jia W, Shen D (2020) Domain-invariant interpretable fundus image quality assessment. Med Image Anal 61:101654","journal-title":"Med Image Anal"},{"key":"10174_CR11","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-98131-4","volume-title":"Explainable and interpretable models in computer vision and machine learning","author":"HJ Escalante","year":"2018","unstructured":"Escalante HJ, Escalera S, Guyon I, Bar\u00f3 X, G\u00fc\u00e7l\u00fct\u00fcrk Y, G\u00fc\u00e7l\u00fc U, Gerven M, Lier R (2018) Explainable and interpretable models in computer vision and machine learning. 
Springer, Berlin"},{"issue":"3","key":"10174_CR12","doi-asserted-by":"publisher","first-page":"772","DOI":"10.1016\/j.bbe.2022.06.002","volume":"42","author":"Z Xu","year":"2022","unstructured":"Xu Z, Zou B, Liu Q (2022) A dark and bright channel prior guided deep network for retinal image quality assessment. Biocybern Biomed Eng 42(3):772\u2013783","journal-title":"Biocybern Biomed Eng"},{"key":"10174_CR13","doi-asserted-by":"crossref","unstructured":"Jiang H, Yang K, Gao M, Zhang D, Ma H, Qian W (2019) An interpretable ensemble deep learning model for diabetic retinopathy disease classification. In: 2019 41st annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp 2045\u20132048","DOI":"10.1109\/EMBC.2019.8857160"},{"key":"10174_CR14","doi-asserted-by":"publisher","first-page":"34005","DOI":"10.1007\/s11042-023-14805-3","volume":"82","author":"Z Xu","year":"2023","unstructured":"Xu Z, Zou B, Liu Q (2023) A deep retinal image quality assessment network with salient structure priors. Multimed Tools Appl 82:34005\u201334028","journal-title":"Multimed Tools Appl"},{"key":"10174_CR15","doi-asserted-by":"publisher","first-page":"121644","DOI":"10.1016\/j.eswa.2023.121644","volume":"238","author":"S Khalid","year":"2024","unstructured":"Khalid S, Rashwan HA, Abdulwahab S, Abdel-Nasser M, Quiroga FM, Puig D (2024) FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning. Expert Syst Appl 238:121644","journal-title":"Expert Syst Appl"},{"key":"10174_CR16","doi-asserted-by":"publisher","first-page":"57810","DOI":"10.1109\/ACCESS.2020.2982588","volume":"8","author":"A Raj","year":"2020","unstructured":"Raj A, Shah NA, Tiwari AK, Martini MG (2020) Multivariate regression-based convolutional neural network model for fundus image quality assessment. 
IEEE Access 8:57810\u201357821","journal-title":"IEEE Access"},{"issue":"2","key":"10174_CR17","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s11063-024-11585-1","volume":"56","author":"Q Li","year":"2024","unstructured":"Li Q, Wei H, Hua D, Wang J, Yang J (2024) Stabilization of semi-Markovian jumping uncertain complex-valued networks with time-varying delay: a sliding-mode control approach. Neural Process Lett 56(2):1\u201322","journal-title":"Neural Process Lett"},{"key":"10174_CR18","doi-asserted-by":"publisher","first-page":"204","DOI":"10.1016\/j.matcom.2023.11.028","volume":"218","author":"Q Li","year":"2024","unstructured":"Li Q, Liang J, Gong W, Wang K, Wang J (2024) Nonfragile state estimation for semi-Markovian switching CVNS with general uncertain transition rates: An event-triggered scheme. Math Comput Simul 218:204\u2013222","journal-title":"Math Comput Simul"},{"key":"10174_CR19","doi-asserted-by":"crossref","unstructured":"Muddamsetty SM, Moeslund TB (2021) Multi-level quality assessment of retinal fundus images using deep convolution neural networks. In: 16th international joint conference on computer vision, imaging and computer graphics theory and application. SCITEPRESS Digital Library, pp 661\u2013668","DOI":"10.5220\/0010250506610668"},{"key":"10174_CR20","doi-asserted-by":"crossref","unstructured":"Li S, Wang M, Hou C (2019) No-reference stereoscopic image quality assessment based on shuffle-convolutional neural network. In: 2019 IEEE visual communications and image processing (VCIP). IEEE, pp 1\u20134","DOI":"10.1109\/VCIP47243.2019.8965759"},{"key":"10174_CR21","doi-asserted-by":"crossref","unstructured":"Ou F-Z, Wang Y-G, Zhu G (2019) A novel blind image quality assessment method based on refined natural scene statistics. In: 2019 IEEE international conference on image processing (ICIP). 
IEEE, pp 1004\u20131008","DOI":"10.1109\/ICIP.2019.8803047"},{"issue":"12","key":"10174_CR22","doi-asserted-by":"publisher","first-page":"4695","DOI":"10.1109\/TIP.2012.2214050","volume":"21","author":"A Mittal","year":"2012","unstructured":"Mittal A, Moorthy AK, Bovik AC (2012) No-reference image quality assessment in the spatial domain. IEEE Trans Image Process 21(12):4695\u20134708","journal-title":"IEEE Trans Image Process"},{"issue":"5","key":"10174_CR23","doi-asserted-by":"publisher","first-page":"2200","DOI":"10.1109\/TIP.2018.2883741","volume":"28","author":"Q Yan","year":"2018","unstructured":"Yan Q, Gong D, Zhang Y (2018) Two-stream convolutional networks for blind image quality assessment. IEEE Trans Image Process 28(5):2200\u20132211","journal-title":"IEEE Trans Image Process"},{"key":"10174_CR24","doi-asserted-by":"crossref","unstructured":"P\u00e9rez AD, Perdomo O, Gonz\u00e1lez FA (2020) A lightweight deep learning model for mobile eye fundus image quality assessment. In: 15th international symposium on medical information processing and analysis, vol 11330. SPIE, pp 151\u2013158","DOI":"10.1117\/12.2547126"},{"key":"10174_CR25","doi-asserted-by":"crossref","unstructured":"Zhou X, Wu Y, Xia Y (2020) Retinal image quality assessment via specific structures segmentation. In: Ophthalmic Medical Image Analysis: 7th international workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, 8 Oct 2020, Proceedings 7. Springer, pp 53\u201361","DOI":"10.1007\/978-3-030-63419-3_6"},{"issue":"11","key":"10174_CR26","doi-asserted-by":"publisher","first-page":"2559","DOI":"10.1049\/ipr2.12244","volume":"15","author":"Y-P Liu","year":"2021","unstructured":"Liu Y-P, Lv Y, Li Z, Li J, Liu Y, Chen P, Liang R (2021) Blood vessel and background separation for retinal image quality assessment. 
IET Image Proc 15(11):2559\u20132571","journal-title":"IET Image Proc"},{"key":"10174_CR27","first-page":"31","volume-title":"MICCAI challenge on mitosis domain generalization","author":"Z Chen","year":"2022","unstructured":"Chen Z, Huang L (2022) Deep convolutional neural network for image quality assessment and diabetic retinopathy grading. MICCAI challenge on mitosis domain generalization. Springer, Cham, pp 31\u201337"},{"key":"10174_CR28","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1016\/j.compbiomed.2018.10.004","volume":"103","author":"GT Zago","year":"2018","unstructured":"Zago GT, Andre\u00e3o RV, Dorizzi B, Salles EOT (2018) Retinal image quality assessment using deep learning. Comput Biol Med 103:64\u201370","journal-title":"Comput Biol Med"},{"key":"10174_CR29","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1007\/s11760-019-01544-y","volume":"14","author":"F Zhang","year":"2020","unstructured":"Zhang F, Xu X, Xiao Z, Wu J, Geng L, Wang W, Liu Y (2020) Automated quality classification of colour fundus images based on a modified residual dense block network. Signal Image Video Process 14:215\u2013223","journal-title":"Signal Image Video Process"},{"key":"10174_CR30","doi-asserted-by":"crossref","unstructured":"Hou J, Lin W, Zhao B (2020) Content-dependency reduction with multi-task learning in blind stitched panoramic image quality assessment. In: 2020 IEEE international conference on image processing (ICIP). IEEE, pp 3463\u20133467","DOI":"10.1109\/ICIP40778.2020.9191241"},{"issue":"4","key":"10174_CR31","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1109\/TIP.2003.819861","volume":"13","author":"Z Wang","year":"2004","unstructured":"Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. 
IEEE Trans Image Process 13(4):600\u2013612","journal-title":"IEEE Trans Image Process"},{"key":"10174_CR32","doi-asserted-by":"crossref","unstructured":"Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: Computer vision\u2013ECCV 2014: 13th European conference, Zurich, Switzerland, 6\u201312 Sept 2014, Proceedings, Part I 13. Springer, pp 818\u2013833","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"10174_CR33","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"},{"key":"10174_CR34","doi-asserted-by":"crossref","unstructured":"Stanchi O, Ronchetti F, Quiroga F (2023) The implementation of the RISE algorithm for the Captum framework. In: Conference on cloud computing, big data & emerging topics. Springer, pp 91\u2013104","DOI":"10.1007\/978-3-031-40942-4_7"},{"issue":"12","key":"10174_CR35","doi-asserted-by":"publisher","first-page":"772","DOI":"10.1038\/s42256-020-00265-z","volume":"2","author":"Z Chen","year":"2020","unstructured":"Chen Z, Bei Y, Rudin C (2020) Concept whitening for interpretable image recognition. Nat Mach Intell 2(12):772\u2013782","journal-title":"Nat Mach Intell"},{"key":"10174_CR36","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4510\u20134520","DOI":"10.1109\/CVPR.2018.00474"},{"key":"10174_CR37","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"key":"10174_CR38","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"10174_CR39","doi-asserted-by":"crossref","unstructured":"Xie S, Girshick R, Doll\u00e1r P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1492\u20131500","DOI":"10.1109\/CVPR.2017.634"},{"key":"10174_CR40","first-page":"3965","volume":"34","author":"Z Dai","year":"2021","unstructured":"Dai Z, Liu H, Le QV, Tan M (2021) CoAtNet: marrying convolution and attention for all data sizes. Adv Neural Inf Process Syst 34:3965\u20133977","journal-title":"Adv Neural Inf Process Syst"},{"key":"10174_CR41","doi-asserted-by":"crossref","unstructured":"Szegedy C, Ioffe S, Vanhoucke V, Alemi A (2017) Inception-v4, inception-ResNet and the impact of residual connections on learning. In: Proceedings of the AAAI conference on artificial intelligence, vol 31","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"10174_CR42","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. 
arXiv preprint arXiv:1409.1556"}],"container-title":["Neural Computing and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-024-10174-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00521-024-10174-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-024-10174-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,26]],"date-time":"2024-11-26T20:06:39Z","timestamp":1732651599000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00521-024-10174-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,9]]},"references-count":42,"journal-issue":{"issue":"36","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["10174"],"URL":"https:\/\/doi.org\/10.1007\/s00521-024-10174-6","relation":{},"ISSN":["0941-0643","1433-3058"],"issn-type":[{"type":"print","value":"0941-0643"},{"type":"electronic","value":"1433-3058"}],"subject":[],"published":{"date-parts":[[2024,10,9]]},"assertion":[{"value":"18 February 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 July 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 October 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}}]}}