{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T02:24:07Z","timestamp":1774405447270,"version":"3.50.1"},"reference-count":38,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T00:00:00Z","timestamp":1749168000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["F2022201013"],"award-info":[{"award-number":["F2022201013"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["2024AH051686"],"award-info":[{"award-number":["2024AH051686"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["2023HK037"],"award-info":[{"award-number":["2023HK037"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["22100084"],"award-info":[{"award-number":["22100084"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["2023"],"award-info":[{"award-number":["2023"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Scientific Research Program of Anhui Provincial Ministry of Education","award":["F2022201013"],"award-info":[{"award-number":["F2022201013"]}]},{"name":"Scientific Research Program of Anhui Provincial Ministry of Education","award":["2024AH051686"],"award-info":[{"award-number":["2024AH051686"]}]},{"name":"Scientific Research Program of Anhui Provincial Ministry of Education","award":["2023HK037"],"award-info":[{"award-number":["2023HK037"]}]},{"name":"Scientific Research Program of Anhui Provincial Ministry of Education","award":["22100084"],"award-info":[{"award-number":["22100084"]}]},{"name":"Scientific Research Program of Anhui Provincial Ministry of Education","award":["2023"],"award-info":[{"award-number":["2023"]}]},{"name":"Science and Technology Program of Huaibei","award":["F2022201013"],"award-info":[{"award-number":["F2022201013"]}]},{"name":"Science and Technology Program of Huaibei","award":["2024AH051686"],"award-info":[{"award-number":["2024AH051686"]}]},{"name":"Science and Technology Program of Huaibei","award":["2023HK037"],"award-info":[{"award-number":["2023HK037"]}]},{"name":"Science and Technology Program of Huaibei","award":["22100084"],"award-info":[{"award-number":["22100084"]}]},{"name":"Science and Technology Program of Huaibei","award":["2023"],"award-info":[{"award-number":["2023"]}]},{"name":"Anhui Shenhua Meat Products Co., Ltd., Cooperation Project","award":["F2022201013"],"award-info":[{"award-number":["F2022201013"]}]},{"name":"Anhui Shenhua Meat Products Co., Ltd., Cooperation Project","award":["2024AH051686"],"award-info":[{"award-number":["2024AH051686"]}]},{"name":"Anhui Shenhua Meat Products Co., Ltd., Cooperation 
Project","award":["2023HK037"],"award-info":[{"award-number":["2023HK037"]}]},{"name":"Anhui Shenhua Meat Products Co., Ltd., Cooperation Project","award":["22100084"],"award-info":[{"award-number":["22100084"]}]},{"name":"Anhui Shenhua Meat Products Co., Ltd., Cooperation Project","award":["2023"],"award-info":[{"award-number":["2023"]}]},{"name":"Entrusted Project by Huaibei Mining Group","award":["F2022201013"],"award-info":[{"award-number":["F2022201013"]}]},{"name":"Entrusted Project by Huaibei Mining Group","award":["2024AH051686"],"award-info":[{"award-number":["2024AH051686"]}]},{"name":"Entrusted Project by Huaibei Mining Group","award":["2023HK037"],"award-info":[{"award-number":["2023HK037"]}]},{"name":"Entrusted Project by Huaibei Mining Group","award":["22100084"],"award-info":[{"award-number":["22100084"]}]},{"name":"Entrusted Project by Huaibei Mining Group","award":["2023"],"award-info":[{"award-number":["2023"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>Deep learning techniques play a crucial role in medical image segmentation for diagnostic purposes, with traditional convolutional neural networks (CNNs) and emerging transformers having achieved satisfactory results. CNN-based methods focus on extracting the local features of an image, which are beneficial for handling image details and textural features. However, the receptive fields of CNNs are relatively small, resulting in poor performance when processing images with long-range dependencies. Conversely, transformer-based methods are effective in handling global information; however, they suffer from significant computational complexity arising from the building of long-range dependencies. Additionally, they lack the ability to perceive image details and adopt channel features. These problems can result in unclear image segmentation and blurred boundaries. Accordingly, in this study, a multiscale recombined channel attention (MRCA) module is proposed, which can simultaneously extract both global and local features and has the capability of exploring channel features during feature fusion. Specifically, the proposed MRCA first employs multibranch extraction of image features and performs operations such as blocking, shifting, and aggregating the image at different scales. This step enables the model to recognize multiscale information locally and globally. Feature selection is then performed to enhance the predictive capability of the model. Finally, features from different branches are connected and recombined across channels to complete the feature fusion. Benefiting from fully exploring the channel features, an MRCA-based U-Net (MRCA-UNet) framework is proposed for medical image segmentation. 
Experiments conducted on the Synapse multi-organ segmentation (Synapse) dataset and the International Skin Imaging Collaboration (ISIC-2018) dataset demonstrate the competitive segmentation performance of the proposed MRCA-UNet, achieving an average Dice Similarity Coefficient (DSC) of 81.61% and a Hausdorff Distance (HD) of 23.36 on Synapse and an Accuracy of 95.94% on ISIC-2018.<\/jats:p>","DOI":"10.3390\/sym17060892","type":"journal-article","created":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T06:11:08Z","timestamp":1749190268000},"page":"892","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["MRCA-UNet: A Multiscale Recombined Channel Attention U-Net Model for Medical Image Segmentation"],"prefix":"10.3390","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1962-8856","authenticated-orcid":false,"given":"Lei","family":"Liu","sequence":"first","affiliation":[{"name":"School of Computer Science and Technology, Huaibei Normal University, Huaibei 235000, China"},{"name":"Huaibei Key Laboratory of Digital Multimedia Intelligent Information Processing, Huaibei 235000, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-6207-9874","authenticated-orcid":false,"given":"Xiang","family":"Li","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Huaibei Normal University, Huaibei 235000, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-2206-1047","authenticated-orcid":false,"given":"Shuai","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Huaibei Normal University, Huaibei 235000, China"},{"name":"Huaibei Key Laboratory of Digital Multimedia Intelligent Information Processing, Huaibei 235000, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5901-9019","authenticated-orcid":false,"given":"Jun","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Electronic and Information Engineering, Hebei University, Baoding 071000, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3363-5208","authenticated-orcid":false,"given":"Silas N.","family":"Melo","sequence":"additional","affiliation":[{"name":"Department of Geography, Universidade Estadual do Maranh\u00e3o, S\u00e3o Lu\u00eds 65055-000, Brazil"}]}],"member":"1968","published-online":{"date-parts":[[2025,6,6]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1080\/21681163.2020.1835554","article-title":"Towards markerless computer-aided surgery combining deep segmentation and geometric pose estimation: Application in total knee arthroplasty","volume":"9","author":"Raposo","year":"2021","journal-title":"Comput. Methods Biomech. Biomed. Eng. Imaging Vis."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2252","DOI":"10.1109\/JBHI.2021.3138024","article-title":"MSRF-Net: A multi-scale residual fusion network for biomedical image segmentation","volume":"26","author":"Srivastava","year":"2021","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"954","DOI":"10.1109\/TMI.2023.3327428","article-title":"FoPro-KD: Fourier prompted effective knowledge distillation for long-tailed medical image recognition","volume":"43","author":"Elbatel","year":"2024","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). 
U-Net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Xiao, X., Lian, S., Luo, Z., and Li, S. (2018, January 19\u201321). Weighted res-unet for high-quality retina vessel segmentation. Proceedings of the 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China.","DOI":"10.1109\/ITME.2018.00080"},{"key":"ref_6","unstructured":"Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., and Kainz, B. (2018). Attention U-Net: Learning where to look for the pancreas. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., and Liang, J. (2018, January 20). UNet++: A nested u-net architecture for medical image segmentation. Proceedings of the International Worshops on Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2018 and ML-CDS 2018), held in conjunction with MICCAI 2018, Granada, Spain.","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Hou, Q., Zhang, L., Cheng, M.M., and Feng, J. (2020, January 13\u201319). Strip pooling: Rethinking spatial pooling for scene parsing. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00406"},{"key":"ref_9","unstructured":"Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., and Liu, W. (November, January 27). CCNet: Criss-cross attention for semantic segmentation. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"654","DOI":"10.1038\/s41467-024-44824-z","article-title":"Segment anything in medical images","volume":"15","author":"Ma","year":"2024","journal-title":"Nat. Commun."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Alom, M.Z., Hasan, M., Yakopcic, C., Taha, T.M., and Asari, V.K. (2018). Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. arXiv.","DOI":"10.1109\/NAECON.2018.8556686"},{"key":"ref_14","unstructured":"Zeng, Y.X., Hsieh, J.W., Li, X., and Chang, M.C. (2023). MixNet: Toward accurate detection of challenging scene text in the wild. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-excitation networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., and He, K. (2018, January 18\u201323). Non-local neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00813"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). CBAM: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_18","unstructured":"Tan, M., and Le, Q. (2019, January 9\u201315). EfficientNet: Rethinking model scaling for convolutional neural networks. Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, CA, USA."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022, January 18\u201324). A ConvNet for the 2020s. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_21","unstructured":"Yang, J., Li, C., Dai, X., and Gao, J. (December, January 28). Focal modulation networks. Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS), New Orleans, LA, USA."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"8274","DOI":"10.1109\/TPAMI.2024.3401450","article-title":"Conv2Former: A simple transformer-style convnet for visual recognition","volume":"46","author":"Hou","year":"2024","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_23","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Fu, S., Lu, Y., Wang, Y., Zhou, Y., Shen, W., Fishman, E., and Yuille, A. (2020, January 4\u20138). Domain adaptive relational reasoning for 3d multi-organ segmentation. Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Lima, Peru.","DOI":"10.1007\/978-3-030-59710-8_64"},{"key":"ref_25","unstructured":"Zhou, J., Wang, P., Wang, F., Liu, Q., Li, H., and Jin, R. (2021). ELSA: Enhanced local self-attention for vision transformer. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, A., Chen, H., Lin, Z., Han, J., and Ding, G. (2024, January 16\u201322). RepVit: Revisiting mobile cnn from vit perspective. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.01506"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Huang, H., Lin, L., Tong, R., Hu, H., Zhang, Q., Iwamoto, Y., Han, X., Chen, Y.W., and Wu, J. (2020, January 4\u20138). UNet 3+: A full-scale connected unet for medical image segmentation. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Lou, A., Guan, S., and Loew, M. (2021, January 15\u201319). DC-UNet: Rethinking the U-Net architecture with dual channel efficient CNN for medical image segmentation. Proceedings of the SPIE 11596, Medical Imaging 2021: Image Processing, Online.","DOI":"10.1117\/12.2582338"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., and Lo, W.-Y. (2023, January 2\u20136). Segment anything. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","DOI":"10.1109\/ICCV51070.2023.00371"},{"key":"ref_30","unstructured":"Qi, M., Zhu, P., Li, X., Bi, X., Qi, L., Ma, H., and Yang, M.-H. (2025). DC-SAM: In-context segment anything in images and videos via dual consistency. arXiv."},{"key":"ref_31","unstructured":"Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., and Zhou, Y. (2021). TransUNet: Transformers make strong encoders for medical image segmentation. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., and Wang, M. (2022, January 23\u201327). Swin-Unet: Unet-like pure transformer for medical image segmentation. Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-25066-8_9"},{"key":"ref_33","unstructured":"Huang, X., Deng, Z., Li, D., and Yuan, X. (2021). MISSFormer: An effective medical image segmentation transformer. arXiv."},{"key":"ref_34","unstructured":"Shi, B., Gai, S., Darrell, T., and Wang, X. (2023). Refocusing is key to transfer learning. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Milletari, F., Navab, N., and Ahmadi, S.A. (2016, January 25\u201328). V-Net: Fully convolutional neural networks for volumetric medical image segmentation. Proceedings of the 4th International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.79"},{"key":"ref_36","unstructured":"Valanarasu, J.M.J., Oza, P., Hacihaliloglu, I., and Patel, V.M. (October, January 27). Medical transformer: Gated axial-attention for medical image segmentation. Proceedings of the 24th International Conference on Medical image computing and computer assisted intervention (MICCAI)."},{"key":"ref_37","unstructured":"Cai, P., Lu, J., Li, Y., and Lan, L. (2023). Pubic symphysis-fetal head segmentation using pure transformer with bi-level routing attention. arXiv."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Xu, Q., Ma, Z., Na, H., and Duan, W. (2023). DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation. Comput. Biol. 
Med., 154.","DOI":"10.1016\/j.compbiomed.2023.106626"}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/6\/892\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:47:36Z","timestamp":1760032056000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/6\/892"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,6]]},"references-count":38,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["sym17060892"],"URL":"https:\/\/doi.org\/10.3390\/sym17060892","relation":{},"ISSN":["2073-8994"],"issn-type":[{"value":"2073-8994","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,6]]}}}
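
The record above is the standard envelope returned by the public Crossref REST API for a single work. As an illustrative sketch only (assuming network access; the endpoint https://api.crossref.org/works/{doi} is the documented Crossref route, and field names follow the record shown above), the snippet below fetches the same record and reads back a few of its fields:

# Illustrative only: retrieve this work's metadata from the public Crossref REST API
# and pull a few of the fields visible in the record above.
import json
import urllib.request

DOI = "10.3390/sym17060892"
url = f"https://api.crossref.org/works/{DOI}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)  # {"status": "ok", "message-type": "work", "message": {...}}

work = record["message"]
print(work["title"][0])                               # article title
print(work["container-title"][0])                     # journal: Symmetry
print(work["DOI"], work["issued"]["date-parts"][0])   # DOI and publication date
print(len(work.get("reference", [])), "references deposited")

Polite use of the Crossref API normally also sets a contact address in the User-Agent header; it is omitted here for brevity.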
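
The abstract describes the MRCA module as multibranch, multiscale feature extraction (blocking, shifting, and aggregating at different scales), followed by a feature-selection step and a channel-wise recombination of the concatenated branches. The following is a minimal, hypothetical PyTorch sketch of that general idea only; the module name, the pooling-based scale branches, the SE-style selection gate, and the 1x1 recombination are all assumptions made for illustration and do not reproduce the authors' MRCA implementation.

# Hypothetical sketch (not the authors' code): parallel branches at several scales,
# an SE-style channel gate as a simple "feature selection" step, and a 1x1 convolution
# that recombines the concatenated branch channels, fused residually with the input.
import torch
import torch.nn as nn

class MultiscaleChannelRecombination(nn.Module):
    def __init__(self, channels: int, scales=(1, 2, 4), reduction: int = 4):
        super().__init__()
        # One branch per scale: downsample, mix locally with a depthwise conv,
        # then upsample back so all branch outputs share the input resolution.
        # Assumes H and W are divisible by the largest scale.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AvgPool2d(kernel_size=s, stride=s) if s > 1 else nn.Identity(),
                nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
                nn.BatchNorm2d(channels),
                nn.GELU(),
                nn.Upsample(scale_factor=s, mode="bilinear", align_corners=False)
                if s > 1 else nn.Identity(),
            )
            for s in scales
        ])
        # SE-style gate over the concatenated channels: per-channel selection weights.
        hidden = max(channels // reduction, 1)
        self.select = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * len(scales), hidden, 1),
            nn.GELU(),
            nn.Conv2d(hidden, channels * len(scales), 1),
            nn.Sigmoid(),
        )
        # 1x1 convolution recombines the concatenated branch channels back to `channels`.
        self.recombine = nn.Conv2d(channels * len(scales), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, C*len(scales), H, W)
        feats = feats * self.select(feats)                       # channel-wise selection
        return x + self.recombine(feats)                         # residual fusion

# Usage sketch:
# block = MultiscaleChannelRecombination(64)
# y = block(torch.randn(1, 64, 32, 32))  # y has shape (1, 64, 32, 32)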