{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,21]],"date-time":"2025-12-21T07:12:03Z","timestamp":1766301123810,"version":"build-2065373602"},"reference-count":49,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,4,3]],"date-time":"2025-04-03T00:00:00Z","timestamp":1743638400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004739","name":"Youth Innovation Promotion Association of the Chinese Academy of Sciences","doi-asserted-by":"publisher","award":["Y202072","ZR2021QE205","SSD2024013"],"award-info":[{"award-number":["Y202072","ZR2021QE205","SSD2024013"]}],"id":[{"id":"10.13039\/501100004739","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100007129","name":"Natural Science Foundation of Shandong Province","doi-asserted-by":"publisher","award":["Y202072","ZR2021QE205","SSD2024013"],"award-info":[{"award-number":["Y202072","ZR2021QE205","SSD2024013"]}],"id":[{"id":"10.13039\/501100007129","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Suzhou Basic Scientific Research Project","award":["Y202072","ZR2021QE205","SSD2024013"],"award-info":[{"award-number":["Y202072","ZR2021QE205","SSD2024013"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Future Internet"],"abstract":"<jats:p>With the development of the internet, the incidence of myopia is showing a trend towards younger ages, making routine vision screening increasingly essential. This paper designs an online refractive error screening solution centered on the CFGN (Comparative Feature-Guided Network), a refractive error screening network based on the eccentric photorefraction method. Additionally, a training strategy incorporating an objective model-eye pretraining model is introduced to enhance screening accuracy. Specifically, we obtain six-channel infrared eccentric photorefraction pupil images to enrich image information and design a comparative feature-guided module and a multi-channel information fusion module based on the characteristics of each channel image to enhance network performance. Experimental results show that CFGN achieves an accuracy exceeding 92% within a \u00b11.00 D refractive error range across datasets from two regions, with mean absolute errors (MAEs) of 0.168 D and 0.108 D, outperforming traditional models and meeting vision screening requirements. The pretrained model helps achieve better performance with small samples. The vision screening scheme proposed in this study is more efficient and accurate than existing networks, and the cost-effectiveness of the pretrained model with transfer learning provides a technical foundation for subsequent rapid online screening and routine tracking via networking.<\/jats:p>","DOI":"10.3390\/fi17040160","type":"journal-article","created":{"date-parts":[[2025,4,4]],"date-time":"2025-04-04T03:36:45Z","timestamp":1743737805000},"page":"160","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Comparative Feature-Guided Regression Network with a Model-Eye Pretrained Model for Online Refractive Error Screening"],"prefix":"10.3390","volume":"17","author":[{"given":"Jiayi","family":"Wang","sequence":"first","affiliation":[{"name":"School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China"},{"name":"Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2310-2988","authenticated-orcid":false,"given":"Tianyou","family":"Zheng","sequence":"additional","affiliation":[{"name":"Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2615-3585","authenticated-orcid":false,"given":"Yang","family":"Zhang","sequence":"additional","affiliation":[{"name":"Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China"},{"name":"Jinan Guoke Medical Technology Development Co., Ltd., Jinan 250000, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-1115-3037","authenticated-orcid":false,"given":"Tianli","family":"Zheng","sequence":"additional","affiliation":[{"name":"Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7192-3118","authenticated-orcid":false,"given":"Weiwei","family":"Fu","sequence":"additional","affiliation":[{"name":"School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China"},{"name":"Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,4,3]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"7","DOI":"10.1186\/s40101-024-00354-7","article-title":"The influence of the environment and lifestyle on myopia","volume":"43","author":"Biswas","year":"2024","journal-title":"J. Physiol. Anthropol."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Zong, Z., Zhang, Y., Qiao, J., Tian, Y., and Xu, S. (2024). The association between screen time exposure and myopia in children and adolescents: A meta-analysis. BMC Public Health, 24.","DOI":"10.1186\/s12889-024-19113-5"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"739","DOI":"10.1007\/s10792-013-9864-x","article-title":"Comparison of photorefraction, autorefractometry and retinoscopy in children","volume":"34","author":"Demirci","year":"2014","journal-title":"Int. Ophthalmol."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Reali, G., and Femminella, M. (2024). Artificial Intelligence to Reshape the Healthcare Ecosystem. Future Internet, 16.","DOI":"10.3390\/fi16090343"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Gao, X., He, P., Zhou, Y., and Qin, X. (2024). Artificial Intelligence Applications in Smart Healthcare: A Survey. Future Internet, 16.","DOI":"10.3390\/fi16090308"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Priyadarshini, I. (2023). Autism screening in toddlers and adults using deep learning and fair AI techniques. Future Internet, 15.","DOI":"10.3390\/fi15090292"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"2861","DOI":"10.1167\/iovs.18-23887","article-title":"Deep learning for predicting refractive error from retinal fundus images","volume":"59","author":"Varadarajan","year":"2018","journal-title":"Investig. Ophthalmol. Vis. Sci."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zou, H., Shi, S., Yang, X., Ma, J., Fan, Q., Chen, X., Wang, Y., Zhang, M., Song, J., and Jiang, Y. (2022). Identification of ocular refraction based on deep learning algorithm as a novel retinoscopy method. BioMed. Eng. OnLine, 21.","DOI":"10.1186\/s12938-022-01057-9"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"614","DOI":"10.1097\/00006324-198509000-00006","article-title":"Eccentric photorefraction: Optical analysis and empirical measures","volume":"62","author":"Bobier","year":"1985","journal-title":"Optom. Vis. Sci."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1119\/1.4905810","article-title":"Photorefraction of the Eye","volume":"53","author":"Colicchia","year":"2015","journal-title":"Phys. Teach."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"6108","DOI":"10.1364\/BOE.400720","article-title":"Utilizing minicomputer technology for low-cost photorefraction: A feasibility study","volume":"11","author":"Agarwala","year":"2020","journal-title":"Biomed. Opt. Express"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"e16225","DOI":"10.2196\/16225","article-title":"Deep learning-based prediction of refractive error using photorefraction images captured by a smartphone: Model development and validation study","volume":"8","author":"Chun","year":"2020","journal-title":"JMIR Med. Inform."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Fu, E., Yang, Z., Leong, H., Ngai, G., Do, C.W., and Chan, L. (2020, January 12\u201316). Exploiting Active Learning in Novel Refractive Error Detection with Smartphones. Proceedings of the 28th ACM international Conference on Multimedia, Seattle, WA, USA.","DOI":"10.1145\/3394171.3413748"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Yang, C.C., Su, J.J., Li, J.E., Zhu, Z.Y., Tseng, J.S., Cheng, C.M., and Tien, C.H. (2019, January 13\u201314). Accessing refractive errors via eccentric infrared photorefraction based on deep learning. Proceedings of the SPIE Future Sensing Technologies, Tokyo, Japan.","DOI":"10.1117\/12.2542652"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Xu, D., Ding, S., Zheng, T., Zhu, X., Gu, Z., Ye, B., and Fu, W. (2022). Deep learning for predicting refractive error from multiple photorefraction images. BioMed. Eng. OnLine, 21.","DOI":"10.1186\/s12938-022-01025-3"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Linde, G., Chalakkal, R., Zhou, L., Huang, J.L., O\u2019Keeffe, B., Shah, D., Davidson, S., and Hong, S.C. (2023). Automatic refractive error estimation using deep learning-based analysis of red reflex images. Diagnostics, 13.","DOI":"10.3390\/diagnostics13172810"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Chang, K., Gidwani, M., Patel, J.B., Li, M.D., and Kalpathy-Cramer, J. (2021). Data Curation Challenges for Artificial Intelligence. Auto-Segmentation for Radiation Oncology, CRC Press.","DOI":"10.1201\/9780429323782-17"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Razzak, M.I., Naz, S., and Zaib, A. (2018). Deep learning for medical image processing: Overview, challenges and the future. Classification in BioApps: Automation of Decision Making, Springer.","DOI":"10.1007\/978-3-319-65981-7_12"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. (2017, January 21\u201326). Learning from simulated and unsupervised images through adversarial training. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.241"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Nair, N., Kothari, R., Chaudhary, A.K., Yang, Z., Diaz, G.J., Pelz, J.B., and Bailey, R.J. (2020, January 12\u201313). RIT-Eyes: Rendering of near-eye images for eye-tracking applications. Proceedings of the ACM Symposium on Applied Perception 2020, Virtual Event.","DOI":"10.1145\/3385955.3407935"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1237","DOI":"10.1109\/TMI.2023.3332168","article-title":"Edge-Guided Contrastive Adaptation Network for Arteriovenous Nicking Classification Using Synthetic Data","volume":"43","author":"Liu","year":"2023","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Kaspar, M., Osorio, J.D.M., and Bock, J. (2020\u201324, January 24). Sim2real transfer for reinforcement learning without dynamics randomization. Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341260"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 21\u201326). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Honolulu, HI, USA.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1345","DOI":"10.1109\/TKDE.2009.191","article-title":"A survey on transfer learning","volume":"22","author":"Pan","year":"2009","journal-title":"IEEE Trans. Knowl. Data"},{"key":"ref_27","unstructured":"Ridnik, T., Ben-Baruch, E., Noy, A., and Zelnik-Manor, L. (2021). Imagenet-21k pretraining for the masses. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"103145","DOI":"10.1016\/j.jvcir.2021.103145","article-title":"Rethinking pre-training on medical imaging","volume":"78","author":"Wen","year":"2021","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Mishra, S., Panda, R., Phoo, C.P., Chen, C.F.R., Karlinsky, L., Saenko, K., Saligrama, V., and Feris, R.S. (2022, January 18\u201324). Task2Sim: Towards Effective Pre-Training and Transfer From Synthetic Data. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00898"},{"key":"ref_30","unstructured":"Vaswani, A. (2017). Attention is all you need. arXiv."},{"key":"ref_31","unstructured":"Dosovitskiy, A. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","article-title":"A survey on vision transformer","volume":"45","author":"Han","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020, January 23\u201328). End-to-end object detection with transformers. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"ref_34","unstructured":"Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A., and Zhou, Y. (2021). Transunet: Transformers Make Strong Encoders for Medical Image Segmentation, Johns Hopkins University. Technical Report."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Hu, R., and Singh, A. (2021, January 10\u201317). Unit: Multimodal multitask learning with a unified transformer. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00147"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wright, D., and Augenstein, I. (2020). Transformer based multi-source domain adaptation. arXiv.","DOI":"10.18653\/v1\/2020.emnlp-main.639"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Lengyel, A., Garg, S., Milford, M., and van Gemert, J.C. (2021, January 10\u201317). Zero-shot day-night domain adaptation with a physics prior. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00436"},{"key":"ref_38","unstructured":"Chen, H., Wu, C., Xu, Y., and Du, B. (2021). Unsupervised Domain Adaptation for Semantic Segmentation via Low-Level Edge Information Transfer, Wuhan University. Technical Report."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Bay, H., Tuytelaars, T., and Van Gool, L. (2006, January 7\u201313). Surf: Speeded up robust features. Proceedings of the Computer Vision\u2014ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria. Proceedings, Part I 9.","DOI":"10.1007\/11744023_32"},{"key":"ref_41","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015, January 7\u201313). Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.123"},{"key":"ref_43","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019, January 8\u201314). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Proceedings of the NeurIPS, Vancouver, BC, Canada."},{"key":"ref_44","unstructured":"Simonyan, K. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_46","first-page":"323","article-title":"Imagenet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky","year":"2012","journal-title":"NeurIPS"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"}],"container-title":["Future Internet"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-5903\/17\/4\/160\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:09:40Z","timestamp":1760029780000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-5903\/17\/4\/160"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,3]]},"references-count":49,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,4]]}},"alternative-id":["fi17040160"],"URL":"https:\/\/doi.org\/10.3390\/fi17040160","relation":{},"ISSN":["1999-5903"],"issn-type":[{"type":"electronic","value":"1999-5903"}],"subject":[],"published":{"date-parts":[[2025,4,3]]}}}