{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T19:41:25Z","timestamp":1773776485623,"version":"3.50.1"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T00:00:00Z","timestamp":1737072000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T00:00:00Z","timestamp":1737072000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No.82272086"],"award-info":[{"award-number":["No.82272086"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Guangdong Provincial Key Laboratory","award":["No.2020B121201001"],"award-info":[{"award-number":["No.2020B121201001"]}]},{"name":"Shenzhen Natural Science Fund","award":["No.JCYJ20200109140820699"],"award-info":[{"award-number":["No.JCYJ20200109140820699"]}]},{"name":"the Stable Support Plan Program","award":["No. 20200925174052004"],"award-info":[{"award-number":["No. 20200925174052004"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis. Comput. Ind. Biomed. Art"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Cataract is the leading ocular disease of blindness and visual impairment globally. Deep neural networks (DNNs) have achieved promising cataracts recognition performance based on anterior segment optical coherence tomography (AS-OCT) images; however, they have poor explanations, limiting their clinical applications. 
In contrast, visual features extracted from original AS-OCT images and their transformed forms (e.g., AS-OCT-based histograms) have good explainability but have not been fully exploited. Motivated by these observations, an explainable machine learning framework to recognize cataract severity levels automatically using AS-OCT images was proposed, consisting of three stages: visual feature extraction, feature importance explanation and selection, and recognition. First, the intensity histogram and intensity-based statistical methods are applied to extract visual features from original AS-OCT images and AS-OCT-based histograms. Subsequently, the SHapley Additive exPlanations and Pearson correlation coefficient methods are applied to analyze the feature importance and select significant visual features. Finally, an ensemble multi-class ridge regression method is applied to recognize the cataract severity levels based on the selected visual features. Experiments on a clinical AS-OCT-NC dataset demonstrate that the proposed framework not only achieves competitive performance through comparisons with DNNs, but also offers good explainability, meeting the requirements of clinical diagnostic practice.<\/jats:p>","DOI":"10.1186\/s42492-024-00183-6","type":"journal-article","created":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T09:01:03Z","timestamp":1737104463000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Explainable machine learning framework for cataracts recognition using visual 
features"],"prefix":"10.1186","volume":"8","author":[{"given":"Xiao","family":"Wu","sequence":"first","affiliation":[]},{"given":"Lingxi","family":"Hu","sequence":"additional","affiliation":[]},{"given":"Zunjie","family":"Xiao","sequence":"additional","affiliation":[]},{"given":"Xiaoqing","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Risa","family":"Higashita","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6281-6505","authenticated-orcid":false,"given":"Jiang","family":"Liu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,17]]},"reference":[{"issue":"14","key":"183_CR1","doi-asserted-by":"publisher","first-page":"5872","DOI":"10.1167\/iovs.16-19894","volume":"57","author":"W Wang","year":"2017","unstructured":"Wang W, Yan W, Fotis K, Prasad NM, Lansingh VC, Taylor HR et al (2017) Cataract surgical rate and socioeconomics: a global study. Invest Ophthalmol Vis Sci 57(14):5872\u20135881. https:\/\/doi.org\/10.1167\/iovs.16-19894","journal-title":"Invest Ophthalmol Vis Sci"},{"issue":"3","key":"183_CR2","doi-asserted-by":"publisher","first-page":"184","DOI":"10.1007\/s11633-022-1329-0","volume":"19","author":"XQ Zhang","year":"2022","unstructured":"Zhang XQ, Hu Y, Xiao ZJ, Fang JS, Higashita R, Liu J (2022) Machine learning for cataract classification\/grading on ophthalmic imaging modalities: a survey. Mach Intell Res 19(3):184\u2013208. https:\/\/doi.org\/10.1007\/s11633-022-1329-0","journal-title":"Mach Intell Res"},{"issue":"1","key":"183_CR3","doi-asserted-by":"publisher","first-page":"13","DOI":"10.1097\/ICU.0000000000000542","volume":"30","author":"HE Gali","year":"2019","unstructured":"Gali HE, Sella R, Afshari NA (2019) Cataract grading systems: a review of past and present. Curr Opin Ophthalmol 30(1):13\u201318. 
https:\/\/doi.org\/10.1097\/ICU.0000000000000542","journal-title":"Curr Opin Ophthalmol"},{"issue":"10094","key":"183_CR4","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1016\/S0140-6736(17)30544-5","volume":"390","author":"YC Liu","year":"2017","unstructured":"Liu YC, Wilkins M, Kim T, Malyugin B, Mehta JS (2017) Cataracts. Lancet 390(10094):600\u2013612. https:\/\/doi.org\/10.1016\/S0140-6736(17)30544-5","journal-title":"Lancet"},{"issue":"9","key":"183_CR5","doi-asserted-by":"publisher","first-page":"1930","DOI":"10.1109\/TMI.2017.2703147","volume":"36","author":"HZ Fu","year":"2017","unstructured":"Fu HZ, Xu YW, Lin S, Zhang XQ, Wong DWK, Liu J et al (2017) Segmentation and quantification for angle-closure glaucoma assessment in anterior segment OCT. IEEE Trans Med Imaging 36(9):1930\u20131938. https:\/\/doi.org\/10.1109\/TMI.2017.2703147","journal-title":"IEEE Trans Med Imaging"},{"key":"183_CR6","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1016\/j.ajo.2019.02.028","volume":"203","author":"HZ Fu","year":"2019","unstructured":"Fu HZ, Baskaran M, Xu YW, Lin S, Wong DWK, Liu J et al (2019) A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am J Ophthalmol 203:37\u201345. https:\/\/doi.org\/10.1016\/j.ajo.2019.02.028","journal-title":"Am J Ophthalmol"},{"key":"183_CR7","doi-asserted-by":"publisher","unstructured":"Fu HZ, Xu YW, Lin S, Wong DWK, Mani B, Mahesh M et al (2018) Multi-context deep network for angle-closure glaucoma screening in anterior segment OCT. In: Frangi AF, Schnabel JA, Davatzikos C, Alberola-L\u00f3pez C, Fichtinger G (eds) Medical image computing and computer assisted intervention - MICCAI 2018. 21st international conference, Granada, Spain, September 16\u201320, 2018. Lecture notes in computer science, vol 11071. 
Springer, Cham, pp 356\u2013363.\u00a0https:\/\/doi.org\/10.1007\/978-3-030-00934-2_40.","DOI":"10.1007\/978-3-030-00934-2_40"},{"key":"183_CR8","doi-asserted-by":"publisher","first-page":"110069","DOI":"10.1016\/j.patcog.2023.110069","volume":"147","author":"XQ Zhang","year":"2024","unstructured":"Zhang XQ, Xiao ZJ, Yang B, Wu X, Higashita R, Liu J (2024) Regional context-based recalibration network for cataract recognition in AS-OCT. Pattern Recognit 147:110069. https:\/\/doi.org\/10.1016\/j.patcog.2023.110069","journal-title":"Pattern Recognit"},{"key":"183_CR9","doi-asserted-by":"publisher","first-page":"104037","DOI":"10.1016\/j.jbi.2022.104037","volume":"128","author":"XQ Zhang","year":"2022","unstructured":"Zhang XQ, Xiao ZJ, Higashita R, Hu Y, Chen W, Yuan J et al (2022) Adaptive feature squeeze network for nuclear cataract classification in AS-OCT image. J Biomed Inf 128:104037. https:\/\/doi.org\/10.1016\/j.jbi.2022.104037","journal-title":"J Biomed Inf"},{"issue":"1","key":"183_CR10","doi-asserted-by":"publisher","first-page":"61","DOI":"10.1136\/bjo.2008.137653","volume":"93","author":"AL Wong","year":"2009","unstructured":"Wong AL, Leung CKS, Weinreb RN, Cheng AKC, Cheung CYL, Lam PTH et al (2009) Quantitative assessment of lens opacities with anterior segment optical coherence tomography. Br J Ophthalmol 93(1):61\u201365. https:\/\/doi.org\/10.1136\/bjo.2008.137653","journal-title":"Br J Ophthalmol"},{"issue":"6","key":"183_CR11","doi-asserted-by":"publisher","first-page":"790","DOI":"10.1136\/bjophthalmol-2020-318334","volume":"106","author":"W Wang","year":"2022","unstructured":"Wang W, Zhang JQ, Gu XX, Ruan XT, Chen XY, Tan XH et al (2022) Objective quantification of lens nuclear opacities using swept-source anterior segment optical coherence tomography. Br J Ophthalmol 106(6):790\u2013794. 
https:\/\/doi.org\/10.1136\/bjophthalmol-2020-318334","journal-title":"Br J Ophthalmol"},{"key":"183_CR12","doi-asserted-by":"publisher","unstructured":"Zhang XQ, Xiao ZJ, Higashita R, Chen W, Yuan J, Fang JS et al (2020) A novel deep learning method for nuclear cataract classification based on anterior segment optical coherence tomography images. In: Proceedings of the IEEE international conference on systems, man, and cybernetics, IEEE, Toronto, 11\u201314 October 2020. https:\/\/doi.org\/10.1109\/SMC42975.2020.9283218","DOI":"10.1109\/SMC42975.2020.9283218"},{"key":"183_CR13","doi-asserted-by":"publisher","unstructured":"Xiao ZJ, Zhang XQ, Higashita R, Hu Y, Yuan J, Chen W et al (2021) Gated channel attention network for cataract classification on AS-OCT image. In: Mantoro T, Lee M, Ayu MA, Wong KW, Hidayanto AN (eds) Neural information processing. 28th international conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8\u201312, 2021. Lecture notes in computer science, vol 13110. Springer, Cham, pp 357\u2013368. https:\/\/doi.org\/10.1007\/978-3-030-92238-2_30","DOI":"10.1007\/978-3-030-92238-2_30"},{"key":"183_CR14","doi-asserted-by":"publisher","first-page":"105836","DOI":"10.1016\/j.bspc.2023.105836","volume":"90","author":"YY Gu","year":"2024","unstructured":"Gu YY, Fang LX, Mou L, Ma SD, Yan QF, Zhang J et al (2024) A ranking-based multi-scale feature calibration network for nuclear cataract grading in AS-OCT images. Biomed Signal Process Control 90:105836. https:\/\/doi.org\/10.1016\/j.bspc.2023.105836","journal-title":"Biomed Signal Process Control"},{"key":"183_CR15","unstructured":"Lundberg SM, Lee SI (2017) A unified approach to interpreting model predictions. 
In: Proceedings of the 31st international conference on neural information processing systems, ACM, Long Beach, 4\u20139 December 2017."},{"issue":"3","key":"183_CR16","first-page":"204","volume":"49","author":"XQ Zhang","year":"2022","unstructured":"Zhang XQ, Fang JS, Xiao ZJ, Chen B, Higashita R, Chen W et al (2022) Classification algorithm of nuclear cataract based on anterior segment coherence tomography image. Comput Sci 49(3):204\u2013210.","journal-title":"Comput Sci"},{"key":"183_CR17","doi-asserted-by":"publisher","unstructured":"Huang W, Li HQ, Chan KL, Lim JH, Liu J, Wong TY (2009) A computer-aided diagnosis system of nuclear cataract via ranking. In: Yang GZ, Hawkes D, Rueckert D, Noble A, Taylor C (eds) Medical image computing and computer-assisted intervention-MICCAI 2009. 12th international conference, London, UK, September 20\u201324, 2009. Lecture notes in computer science, vol 5762. Springer, Berlin, Heidelberg, pp 803\u2013810. https:\/\/doi.org\/10.1007\/978-3-642-04271-3_97","DOI":"10.1007\/978-3-642-04271-3_97"},{"key":"183_CR18","doi-asserted-by":"publisher","unstructured":"Xu YW, Gao XT, Lin S, Wong DWK, Liu J, Xu D et al (2013) Automatic grading of nuclear cataracts from slit-lamp lens images using group sparsity regression. In: Mori K, Sakuma I, Sato Y, Barillot C, Navab N (eds) Medical image computing and computer-assisted intervention-MICCAI 2013. 16th international conference, Nagoya, Japan, September 22\u201326, 2013. Lecture notes in computer science, vol 8150. Springer, Berlin, Heidelberg, pp 468\u2013475. 
https:\/\/doi.org\/10.1007\/978-3-642-40763-5_58","DOI":"10.1007\/978-3-642-40763-5_58"},{"issue":"11","key":"183_CR19","doi-asserted-by":"publisher","first-page":"2326","DOI":"10.1109\/TBME.2016.2527787","volume":"63","author":"M Caixinha","year":"2016","unstructured":"Caixinha M, Amaro J, Santos M, Perdig\u00e3o F, Gomes M, Santos J (2016) In-vivo automatic nuclear cataract detection and classification in an animal model by ultrasounds. IEEE Trans Biomed Eng 63(11):2326\u20132335. https:\/\/doi.org\/10.1109\/TBME.2016.2527787","journal-title":"IEEE Trans Biomed Eng"},{"key":"183_CR20","doi-asserted-by":"publisher","first-page":"196","DOI":"10.1016\/j.inffus.2019.06.022","volume":"53","author":"LC Cao","year":"2020","unstructured":"Cao LC, Li HQ, Zhang YJ, Zhang L, Xu L (2020) Hierarchical method for cataract grading based on retinal images using improved Haar wavelet. Inf Fusion 53:196\u2013208. https:\/\/doi.org\/10.1016\/j.inffus.2019.06.022","journal-title":"Inf Fusion"},{"issue":"11","key":"183_CR21","doi-asserted-by":"publisher","first-page":"2693","DOI":"10.1109\/TBME.2015.2444389","volume":"62","author":"XT Gao","year":"2015","unstructured":"Gao XT, Lin S, Wong TY (2015) Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng 62(11):2693\u20132701. https:\/\/doi.org\/10.1109\/TBME.2015.2444389","journal-title":"IEEE Trans Biomed Eng"},{"key":"183_CR22","doi-asserted-by":"publisher","unstructured":"Xu CX, Zhu XJ, He WW, Lu Y, He XX, Shang ZJ et al (2019) Fully deep learning for slit-lamp photo based nuclear cataract grading. In: Shen DG, Liu TM, Peters TM, Staib LH, Essert C, Zhou SA et al (eds) Medical image computing and computer assisted intervention - MICCAI 2019. 22nd international conference, Shenzhen, China, October 13\u201317, 2019. Lecture notes in computer science, vol 11767. Springer, Cham, pp 513\u2013521. 
https:\/\/doi.org\/10.1007\/978-3-030-32251-9_56","DOI":"10.1007\/978-3-030-32251-9_56"},{"issue":"2","key":"183_CR23","doi-asserted-by":"publisher","first-page":"1479","DOI":"10.1007\/s40747-022-00869-5","volume":"9","author":"XQ Zhang","year":"2023","unstructured":"Zhang XQ, Xiao ZJ, Wu X, Chen Y, Higashita R, Chen W et al (2023) Nuclear cataract classification in anterior segment OCT based on clinical global-local features. Complex Intell Syst 9(2):1479\u20131493. https:\/\/doi.org\/10.1007\/s40747-022-00869-5","journal-title":"Complex Intell Syst"},{"issue":"1","key":"183_CR24","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/s13755-022-00170-2","volume":"10","author":"XQ Zhang","year":"2022","unstructured":"Zhang XQ, Xiao ZJ, Li XL, Wu X, Sun HX, Yuan J et al (2022) Mixed pyramid attention network for nuclear cataract classification based on anterior segment OCT images. Health Inf Sci Syst 10(1):3. https:\/\/doi.org\/10.1007\/s13755-022-00170-2","journal-title":"Health Inf Sci Syst"},{"key":"183_CR25","doi-asserted-by":"publisher","first-page":"102499","DOI":"10.1016\/j.media.2022.102499","volume":"80","author":"XQ Zhang","year":"2022","unstructured":"Zhang XQ, Xiao ZJ, Fu HZ, Hu Y, Yuan J, Xu YW et al (2022) Attention to region: Region-based integration-and-recalibration networks for nuclear cataract classification using AS-OCT images. Med Image Anal 80:102499. https:\/\/doi.org\/10.1016\/j.media.2022.102499","journal-title":"Med Image Anal"},{"key":"183_CR26","doi-asserted-by":"publisher","first-page":"107958","DOI":"10.1016\/j.cmpb.2023.107958","volume":"244","author":"ZJ Xiao","year":"2024","unstructured":"Xiao ZJ, Zhang XQ, Zheng BF, Guo YT, Higashita R, Liu J (2024) Multi-style spatial attention module for cortical cataract classification in AS-OCT image with supervised contrastive learning. Comput Methods Programs Biomed 244:107958. 
https:\/\/doi.org\/10.1016\/j.cmpb.2023.107958","journal-title":"Comput Methods Programs Biomed"},{"key":"183_CR27","doi-asserted-by":"publisher","unstructured":"Xiao ZJ, Zhang XQ, Sun QY, Wei ZF, Xu GL, Jin Y et al (2022) A novel local-global spatial attention network for cortical cataract classification in AS-OCT. In: Yu SQ, Zhang ZX, Yuen PC, Han JW, Tan TN, Guo YK et al (eds) Pattern recognition and computer vision. 5th Chinese conference, PRCV 2022, Shenzhen, China, November 4\u20137, 2022. Lecture notes in computer science, vol 13535. Springer, Cham, pp 262\u2013273. https:\/\/doi.org\/10.1007\/978-3-031-18910-4_22","DOI":"10.1007\/978-3-031-18910-4_22"},{"key":"183_CR28","doi-asserted-by":"publisher","unstructured":"Wu X, Chen Y, Yan QY, Zhao YH, Zhao JL, Zhang XQ et al (2023) DMINet: A lightweight dual-mixed channel-independent network for cataract recognition. In: Proceedings of the international joint conference on neural networks, IEEE, Gold Coast, 18\u201323 June 2023. https:\/\/doi.org\/10.1109\/IJCNN54540.2023.10191292","DOI":"10.1109\/IJCNN54540.2023.10191292"},{"issue":"2","key":"183_CR29","doi-asserted-by":"publisher","first-page":"319","DOI":"10.1049\/cit2.12246","volume":"9","author":"XQ Zhang","year":"2024","unstructured":"Zhang XQ, Wu X, Xiao ZJ, Hu LX, Qiu ZX, Sun QY et al (2024) Mixed-decomposed convolutional network: A lightweight yet efficient convolutional neural network for ocular disease recognition. CAAI Trans Intell Technol 9(2):319\u2013332. https:\/\/doi.org\/10.1049\/cit2.12246","journal-title":"CAAI Trans Intell Technol"},{"key":"183_CR30","doi-asserted-by":"publisher","unstructured":"Zhang XQ, Xu GL, Shen JY, Xiao ZJ, Yan QY, Yuan J et al (2022) Channel-wise and spatial feature recalibration network for nuclear cataract classification. In: Proceedings of the IEEE international conference on multimedia and expo, IEEE, Taipei, China, 18\u201322 July 2022. 
https:\/\/doi.org\/10.1109\/ICME52920.2022.9860008","DOI":"10.1109\/ICME52920.2022.9860008"},{"key":"183_CR31","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th international conference on neural information processing systems, ACM, Lake Tahoe, 3\u20136 December 2012."},{"key":"183_CR32","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Proceedings of the 3rd international conference on learning representations, ICLR, San Diego, 7\u20139 May 2015."},{"key":"183_CR33","doi-asserted-by":"publisher","unstructured":"He KM, Zhang XY, Ren SQ, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, IEEE, Las Vegas, 27\u201330 June 2016. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"183_CR34","doi-asserted-by":"publisher","unstructured":"Xie SN, Girshick R, Doll\u00e1r P, Tu ZW, He KM (2017) Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, IEEE, Honolulu, 21\u201326 July 2017. https:\/\/doi.org\/10.1109\/CVPR.2017.634","DOI":"10.1109\/CVPR.2017.634"},{"key":"183_CR35","doi-asserted-by":"publisher","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, IEEE, Salt Lake City, 18\u201323 June 2018. https:\/\/doi.org\/10.1109\/CVPR.2018.00745","DOI":"10.1109\/CVPR.2018.00745"},{"key":"183_CR36","doi-asserted-by":"publisher","unstructured":"Wang TT, Borji A, Zhang LH, Zhang PP, Lu HC (2017) A stagewise refinement model for detecting salient objects in images. In: Proceedings of the IEEE international conference on computer vision, IEEE, Venice, 22\u201329 October 2017. 
https:\/\/doi.org\/10.1109\/ICCV.2017.433","DOI":"10.1109\/ICCV.2017.433"},{"key":"183_CR37","unstructured":"Tan MX, Le QV (2019) EfficientNet: rethinking model scaling for convolutional neural networks. In: Proceedings of the 36th international conference on machine learning, ICML, Long Beach, 9\u201315 June 2019."},{"key":"183_CR38","doi-asserted-by":"publisher","unstructured":"Sandler M, Howard A, Zhu ML, Zhmoginov A, Chen LC (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, IEEE, Salt Lake City, 18\u201323 June 2018. https:\/\/doi.org\/10.1109\/CVPR.2018.00474","DOI":"10.1109\/CVPR.2018.00474"},{"key":"183_CR39","doi-asserted-by":"publisher","unstructured":"Ma NN, Zhang XY, Zheng HT, Sun J (2018) ShuffleNet V2: practical guidelines for efficient CNN architecture design. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds) Computer vision - ECCV 2018. 15th European conference, Munich, Germany, September 8\u201314, 2018. Lecture notes in computer science, vol 11218. Springer, Cham, pp 122\u2013138. https:\/\/doi.org\/10.1007\/978-3-030-01264-9_8","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"183_CR40","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai XH, Unterthiner T et al (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: Proceedings of the 9th international conference on learning representations, ICLR, OpenReview.net, Virtual Event, Austria, 3\u20137 May 2021."},{"key":"183_CR41","unstructured":"Tolstikhin I, Houlsby N, Kolesnikov A, Beyer L, Zhai XH, Unterthiner T et al (2021) MLP-mixer: an all-MLP architecture for vision. 
In: Proceedings of the 35th international conference on neural information processing systems, ACM, Red Hook, 6\u201314 December 2021."}],"container-title":["Visual Computing for Industry, Biomedicine, and Art"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42492-024-00183-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42492-024-00183-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42492-024-00183-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,17]],"date-time":"2025-01-17T20:54:38Z","timestamp":1737147278000},"score":1,"resource":{"primary":{"URL":"https:\/\/vciba.springeropen.com\/articles\/10.1186\/s42492-024-00183-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,17]]},"references-count":41,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["183"],"URL":"https:\/\/doi.org\/10.1186\/s42492-024-00183-6","relation":{},"ISSN":["2524-4442"],"issn-type":[{"value":"2524-4442","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,17]]},"assertion":[{"value":"12 May 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 December 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"No potential competing interest was reported by the 
authors.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"3"}}