{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T07:03:14Z","timestamp":1763794994716,"version":"3.45.0"},"reference-count":55,"publisher":"Association for Computing Machinery (ACM)","issue":"12","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62301330 and 62101346"],"award-info":[{"award-number":["62301330 and 62101346"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100021171","name":"Guangdong Basic and Applied Basic Research Foundation","doi-asserted-by":"crossref","award":["2024A1515010496 and 2022A1515110101"],"award-info":[{"award-number":["2024A1515010496 and 2022A1515110101"]}],"id":[{"id":"10.13039\/501100021171","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Stable Support Plan for Shenzhen Higher Education Institutions","award":["20231121103807001"],"award-info":[{"award-number":["20231121103807001"]}]},{"name":"Guangdong Provincial Key Laboratory","award":["2023B1212060076"],"award-info":[{"award-number":["2023B1212060076"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2025,12,31]]},"abstract":"<jats:p>Automatic retinal vessel segmentation is crucial in the diagnosis and treatment of various cardiovascular and eye diseases. Although current vessel segmentation methods have achieved impressive performance, some challenging issues still need to be addressed. For example, existing methods always cannot segment complex capillaries well because they may be interfered with or covered by other components in the retina, and they need to further improve the continuity and consistency of vessel segmentation results. 
Moreover, high-performing vessel segmentation methods are usually built on bulky and cumbersome models, which greatly limits their range of application. In this article, we propose a novel, efficient depthwise separable convolution network with frequency-domain enhancement (dubbed RetiNeXt) for retinal vessel segmentation. Firstly, we design a lightweight vessel enhancement module that extracts global fine topological structure features from the frequency domain to enhance complex capillary vessel details. Secondly, we propose a global feature extraction block to fully capture large-scale spatial information and global characterizations, which enables the model to maintain vessel structural coherence from a global perspective. Thirdly, we construct a local feature mixing block based on the SimAM attention mechanism to highlight tiny capillary topological structure features and optimize the segmentation of low-contrast blood vessels, thereby improving the integrity and continuity of complex capillaries. Comprehensive comparison experiments on three well-benchmarked retinal vessel segmentation datasets verify the effectiveness and superiority of the proposed RetiNeXt. To further demonstrate the universality of RetiNeXt for medical image segmentation, we also conduct extensive comparative experiments on two classical coronary angiography datasets. 
Extensive quantitative and qualitative experiments show that RetiNeXt outperforms other state-of-the-art methods with only 0.4M trainable parameters.<\/jats:p>","DOI":"10.1145\/3767732","type":"journal-article","created":{"date-parts":[[2025,10,13]],"date-time":"2025-10-13T14:50:56Z","timestamp":1760367056000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["A Lightweight Depthwise Separable ConvNet with Frequency-domain Enhancement for Retinal Vessel Segmentation"],"prefix":"10.1145","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6303-8178","authenticated-orcid":false,"given":"Yang","family":"Wen","sequence":"first","affiliation":[{"name":"Guangdong Provincial Key Laboratory of Intelligent Information Processing, College of Electronic and Information Engineering, Shenzhen University, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-0723-7368","authenticated-orcid":false,"given":"Shunzhe","family":"Shen","sequence":"additional","affiliation":[{"name":"Guangdong Provincial Key Laboratory of Intelligent Information Processing, College of Electronic and Information Engineering, Shenzhen University, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6819-0125","authenticated-orcid":false,"given":"Wuzhen","family":"Shi","sequence":"additional","affiliation":[{"name":"Guangdong Provincial Key Laboratory of Intelligent Information Processing, College of Electronic and Information Engineering, Shenzhen University, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8174-6167","authenticated-orcid":false,"given":"Wenming","family":"Cao","sequence":"additional","affiliation":[{"name":"Guangdong Provincial Key Laboratory of Intelligent Information Processing, College of Electronic and Information Engineering, Shenzhen University, Shenzhen, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9759-0200","authenticated-orcid":false,"given":"Lei","family":"Bi","sequence":"additional","affiliation":[{"name":"Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4029-3322","authenticated-orcid":false,"given":"Xiaokang","family":"Yang","sequence":"additional","affiliation":[{"name":"Shanghai Jiao Tong University, Shanghai, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8678-2784","authenticated-orcid":false,"given":"Bin","family":"Sheng","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China"}]}],"member":"320","published-online":{"date-parts":[[2025,11,21]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"Md Zahangir Alom Mahmudul Hasan Chris Yakopcic Tarek M. Taha and Vijayan K. Asari. 2018. Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation. Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:3573942"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIINFS.2017.8300426"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2018.06.007"},{"key":"e_1_3_1_5_2","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1016\/j.apradiso.2017.08.007","article-title":"Coronary artery segmentation in X-ray angiograms using Gabor filters and differential evolution","volume":"138","author":"Cervantes-Sanchez Fernando","year":"2017","unstructured":"Fernando Cervantes-Sanchez, Ivan Cruz-Aceves, Arturo Hernandez-Aguirre, Sergio Solorio-Meza, Teodoro Cordova-Fraga, and Juan Gabriel Avi\u00f1a-Cervantes. 2017. Coronary artery segmentation in X-ray angiograms using Gabor filters and differential evolution. Applied Radiation and Isotopes: including Data, Instrumentation and Methods for Use in Agriculture, Industry and Medicine 138 (2017), 18\u201324. 
Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:30591878","journal-title":"Applied Radiation and Isotopes: including Data, Instrumentation and Methods for Use in Agriculture, Industry and Medicine"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.2174\/1874364101206010004"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2015.11.001"},{"key":"e_1_3_1_8_2","volume-title":"International Conference on Learning Representations","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR48806.2021.9413346"},{"key":"e_1_3_1_10_2","unstructured":"Kaiming He Xiangyu Zhang Shaoqing Ren and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852. 
Retrieved from https:\/\/arxiv.org\/abs\/1502.01852"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/42.845178"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_48"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-022-01564-3"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2019.04.025"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compmedimag.2015.12.004"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10439-022-03058-0"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-16434-7_47"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV45572.2020.9093621"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMI.2022.3151666"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2022.3188710"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2932062"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/3DV.2016.79"},{"key":"e_1_3_1_25_2","unstructured":"Ozan Oktay Jo Schlemper Lo\u00efc Le Folgoc M. J. Lee Mattias P. Heinrich Kazunari Misawa Kensaku Mori Steven G. McDonagh Nils Y. Hammerla Bernhard Kainz Ben Glocker and Daniel Rueckert. 2018. Attention U-Net: Learning Where to Look for the Pancreas. 
Retrieved from https:\/\/api.semanticscholar.org\/CorpusID:4861068"},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1167\/iovs.08-3018"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1145\/3376922"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3430806"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-43901-8_39"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2020.105769"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.displa.2023.102527"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2019.101556"},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.3390\/app13074445"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2950228"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2794463"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMI.2004.825627"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3380743"},{"key":"e_1_3_1_39_2","article-title":"Patches are all you need","volume":"2023","author":"Trockman Asher","year":"2022","unstructured":"Asher Trockman and J. Zico Kolter. 2022. Patches are all you need? Transactions on Machine Learning Research 2023 (2022).","journal-title":"Transactions on Machine Learning Research"},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cmpb.2018.01.002"},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-59722-1_77"},{"key":"e_1_3_1_42_2","unstructured":"Ziyang Wang Jian-Qing Zheng Yichi Zhang Ge Cui and Lei Li. 2024. Mamba-Unet: Unet-like pure visual mamba for medical image segmentation. arXiv:2402.05079. 
Retrieved from https:\/\/arxiv.org\/abs\/2402.05079"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2874285"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-024-3679-2"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01548"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2021.102025"},{"key":"e_1_3_1_47_2","unstructured":"Renkai Wu Yinghao Liu Pengchen Liang and Qing Chang. 2024. H-vmunet: High-order vision mamba UNet for medical image segmentation. arXiv: 2403.13642. Retrieved from https:\/\/arxiv.org\/abs\/2403.13642"},{"key":"e_1_3_1_48_2","unstructured":"Renkai Wu Yinghao Liu Pengchen Liang and Qing Chang. 2024. UltraLight VM-UNet: Parallel vision mamba significantly reduces parameters for skin lesion segmentation. arXiv:2403.20035. Retrieved from https:\/\/arxiv.org\/abs\/2403.20035"},{"key":"e_1_3_1_49_2","first-page":"11863","volume-title":"38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139)","author":"Yang Lingxiao","year":"2021","unstructured":"Lingxiao Yang, Ru-Yuan Zhang, Lida Li, and Xiaohua Xie. 2021. SimAM: A simple, parameter-free attention module for convolutional neural networks. In 38th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, 11863\u201311874. 
Retrieved from https:\/\/proceedings.mlr.press\/v139\/yang21o.html"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3446618"},{"key":"e_1_3_1_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/3592614"},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1145\/3653715"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2799846"},{"key":"e_1_3_1_54_2","doi-asserted-by":"crossref","unstructured":"Shihao Zhang Huazhu Fu Yuguang Yan Yubing Zhang Qingyao Wu Ming Yang Mingkui Tan and Yanwu Xu. 2019. Attention guided network for retinal image segmentation. arXiv:1907.12930. Retrieved from https:\/\/arxiv.org\/abs\/1907.12930","DOI":"10.1007\/978-3-030-32239-7_88"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-87193-2_6"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-00889-5_1"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3767732","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,22]],"date-time":"2025-11-22T06:58:50Z","timestamp":1763794730000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3767732"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,21]]},"references-count":55,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2025,12,31]]}},"alternative-id":["10.1145\/3767732"],"URL":"https:\/\/doi.org\/10.1145\/3767732","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2025,11,21]]},"assertion":[{"value":"2024-09-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-09-01","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-11-21","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}