{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,19]],"date-time":"2025-12-19T15:54:25Z","timestamp":1766159665903,"version":"build-2065373602"},"reference-count":39,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,3,31]],"date-time":"2025-03-31T00:00:00Z","timestamp":1743379200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["12374440"],"award-info":[{"award-number":["12374440"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>U-Net and its various variants have been widely applied in medical image segmentation in recent years, and significant success has been achieved in addressing complex segmentation tasks. These networks excel in feature extraction and enable efficient identification of key structural features in diverse medical images. However, convolutional neural networks face limitations during feature extraction, especially when modeling long-range contextual dependencies. This limitation hinders their ability to capture global features and may cause a decline in performance for complex segmentation tasks. To address these challenges, a novel architecture called BAG-Net (Boundary And Global Attention Network) is proposed that integrates global contextual information with local features more effectively. The network includes a global context attention component, which helps model long-range contextual features. Furthermore, a U-Net variant is created by introducing SE-Net into the skip connections in order to extract local information. In addition, a boundary self-attention component is employed to capture boundary details. The combined effect of these three components enables BAG-Net to fully exploit both local and global information and achieve high-precision segmentation. Experimental results show that BAG-Net outperforms traditional methods across all performance metrics. 
Thus, new perspectives for the advancement of medical image segmentation techniques are offered, and this provides a valuable reference for clinical applications.<\/jats:p>","DOI":"10.3390\/sym17040531","type":"journal-article","created":{"date-parts":[[2025,4,1]],"date-time":"2025-04-01T04:21:01Z","timestamp":1743481261000},"page":"531","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["BAG-Net: A Novel Architecture for Enhanced Medical Image Segmentation with Global Context Attention and Boundary Self-Attention"],"prefix":"10.3390","volume":"17","author":[{"given":"Yuyang","family":"Lei","sequence":"first","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Shengxian","family":"Yan","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Jing","family":"Zhang","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Xiang","family":"Li","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Penghui","family":"Wang","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Xiao","family":"Gao","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]},{"given":"Hui","family":"Cao","sequence":"additional","affiliation":[{"name":"Shaanxi Key Laboratory of Ultrasonics, School of Physics and Information Technology, Shaanxi Normal University, Xi\u2019an 710119, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,3,31]]},"reference":[{"doi-asserted-by":"crossref","unstructured":"Thawabteh, A.M., Jibreen, A., Karaman, D., Thawabteh, A., and Karaman, R. (2023). Skin Pigmentation Types, Causes and Treatment\u2014A Review. Molecules, 28.","key":"ref_1","DOI":"10.20944\/preprints202305.0751.v1"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"599","DOI":"10.1038\/s41580-024-00715-1","article-title":"Cellular and molecular mechanisms of skin wound healing","volume":"25","author":"Martin","year":"2024","journal-title":"Nat. Rev. Mol. Cell Biol."},{"doi-asserted-by":"crossref","unstructured":"Navab, N., Hornegger, J., Wells, W.M., and Frangi, A.F. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015, Munich, Germany.","key":"ref_3","DOI":"10.1007\/978-3-319-24553-9"},{"doi-asserted-by":"crossref","unstructured":"Adnan, M., Akhter, M.H., Afzal, O., Altamimi, A.S.A., Ahmad, I., Alossaimi, M.A., Jaremko, M., Emwas, A.H., Haider, T., and Haider, M.F. (2023). Exploring Nanocarriers as Treatment Modalities for Skin Cancer. 
Molecules, 28.","key":"ref_4","DOI":"10.3390\/molecules28155905"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"e15707","DOI":"10.7717\/peerj.15707","article-title":"Comparative analysis of automatic segmentation of esophageal cancer using 3D Res-UNet on conventional and 40-keV virtual mono-energetic CT Images: A retrospective study","volume":"11","author":"Zhong","year":"2023","journal-title":"PeerJ"},{"doi-asserted-by":"crossref","unstructured":"Song, H., Wang, Y., Zeng, S., Guo, X., and Li, Z. (2023). OAU-net: Outlined Attention U-net for biomedical image segmentation. Biomed. Signal Process. Control, 79.","key":"ref_6","DOI":"10.1016\/j.bspc.2022.104038"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"e607","DOI":"10.7717\/peerj-cs.607","article-title":"Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures","volume":"7","author":"Abedalla","year":"2021","journal-title":"PeerJ Comput. Sci."},{"doi-asserted-by":"crossref","unstructured":"Uzun, Y., and Bilgin, M. (2025). Medical image enhancement using war strategy optimization algorithm. Biomed. Signal Process. Control, 106.","key":"ref_8","DOI":"10.1016\/j.bspc.2025.107740"},{"doi-asserted-by":"crossref","unstructured":"Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., Tavares, J.M.R., Bradley, A., Papa, J.P., and Belagiannis, V. (2018). UNet++: A Nested U-Net Architecture for Medical Image Segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Proceedings of the 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018, Springer.","key":"ref_9","DOI":"10.1007\/978-3-030-00889-5"},{"doi-asserted-by":"crossref","unstructured":"Zunair, H., and Ben Hamza, A. (2021). Sharp U-Net: Depthwise convolutional network for biomedical image segmentation. Comput. Biol. Med., 136.","key":"ref_10","DOI":"10.1016\/j.compbiomed.2021.104699"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"10076","DOI":"10.1109\/TPAMI.2024.3435571","article-title":"Medical Image Segmentation Review: The Success of U-Net","volume":"46","author":"Azad","year":"2024","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"82031","DOI":"10.1109\/ACCESS.2021.3086020","article-title":"U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications","volume":"9","author":"Siddique","year":"2021","journal-title":"IEEE Access"},{"doi-asserted-by":"crossref","unstructured":"Ma, S., Tang, J., and Guo, F. (2021). Multi-Task Deep Supervision on Attention R2U-Net for Brain Tumor Segmentation. Front. Oncol., 11.","key":"ref_13","DOI":"10.3389\/fonc.2021.704850"},{"doi-asserted-by":"crossref","unstructured":"Chen, D., Ao, Y., and Liu, S. (2020). Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry, 12.","key":"ref_14","DOI":"10.3390\/sym12071067"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"e17005","DOI":"10.7717\/peerj.17005","article-title":"Enhancing medical image segmentation with a multi-transformer U-Net","volume":"12","author":"Dan","year":"2024","journal-title":"PeerJ"},{"doi-asserted-by":"crossref","unstructured":"Wu, S., Zhu, Y., and Liang, P. (2024). DSCU-Net: MEMS Defect Detection Using Dense Skip-Connection U-Net. 
Symmetry, 16.","key":"ref_16","DOI":"10.3390\/sym16030300"},{"unstructured":"Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., and Kainz, B. (2018). Attention U-Net: Learning Where to Look for the Pancreas. arXiv.","key":"ref_17"},{"doi-asserted-by":"crossref","unstructured":"Zhang, Y., Liu, X., Wa, S., Liu, Y., Kang, J., and Lv, C. (2021). GenU-Net++: An Automatic Intracranial Brain Tumors Segmentation Algorithm on 3D Image Series with High Performance. Symmetry, 13.","key":"ref_18","DOI":"10.3390\/sym13122395"},{"doi-asserted-by":"crossref","unstructured":"Luo, H., Zhang, X., Yuan, F., Yu, J., Ding, H., Xu, H., and Hong, S. (2025). MOSSNet: A Lightweight Dual-Branch Multiscale Attention Neural Network for Bryophyte Identification. Symmetry, 17.","key":"ref_19","DOI":"10.3390\/sym17030347"},{"doi-asserted-by":"crossref","unstructured":"Wang, Y., Li, Y., Wang, G., and Liu, X. (2024, January 17\u201318). Multi-scale Attention Network for Single Image Super-Resolution. Proceedings of the 2024 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA.","key":"ref_20","DOI":"10.1109\/CVPRW63382.2024.00602"},{"doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-Excitation Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","key":"ref_21","DOI":"10.1109\/CVPR.2018.00745"},{"doi-asserted-by":"crossref","unstructured":"Yao, Z., Cao, Y., Zheng, S., Huang, G., and Lin, S. (2021, January 20\u201325). Cross-Iteration Batch Normalization. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","key":"ref_22","DOI":"10.1109\/CVPR46437.2021.01215"},{"unstructured":"Bertasius, G., Wang, H., and Torresani, L. (2021). Is Space-Time Attention All You Need for Video Understanding?. arXiv.","key":"ref_23"},{"doi-asserted-by":"crossref","unstructured":"Ouyang, D., He, S., Zhang, G., Luo, M., Guo, H., Zhan, J., and Huang, Z. (2023, January 4\u201310). Efficient Multi-Scale Attention Module with Cross-Spatial Learning. Proceedings of the ICASSP 2023\u20142023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece.","key":"ref_24","DOI":"10.1109\/ICASSP49357.2023.10096516"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"105055","DOI":"10.1016\/j.imavis.2024.105055","article-title":"GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation","volume":"146","author":"Ekenel","year":"2024","journal-title":"Image Vis. Comput."},{"doi-asserted-by":"crossref","unstructured":"Li, Y., Hou, Q., Zheng, Z., Cheng, M.M., Yang, J., and Li, X. (2023, January 1\u20136). Large Selective Kernel Network for Remote Sensing Object Detection. Proceedings of the 2023 IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","key":"ref_26","DOI":"10.1109\/ICCV51070.2023.01540"},{"unstructured":"Salajegheh, F., Asadi, N., Saryazdi, S., and Mudur, S. (2023). DAS: A Deformable Attention to Capture Salient Information in CNNs. arXiv.","key":"ref_27"},{"unstructured":"Yu, J., Wang, Z., Vasudevan, V., Yeung, L., Seyedhosseini, M., and Wu, Y. (2022). CoCa: Contrastive Captioners are Image-Text Foundation Models. arXiv.","key":"ref_28"},{"unstructured":"Fan, Q., Huang, H., Guan, J., and He, R. (2023). 
Rethinking Local Perception in Lightweight Vision Transformer. arXiv.","key":"ref_29"},{"key":"ref_30","first-page":"3523","article-title":"Image Segmentation Using Deep Learning: A Survey","volume":"44","author":"Minaee","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1109\/TETCI.2023.3309626","article-title":"TransAttUnet: Multi-Level Attention-Guided U-Net With Transformer for Medical Image Segmentation","volume":"8","author":"Chen","year":"2024","journal-title":"IEEE Trans. Emerg. Top. Comput. Intell."},{"doi-asserted-by":"crossref","unstructured":"Lian, C., Cao, X., Rekik, I., Xu, X., and Cui, Z. (2022). Contextual Attention Network: Transformer Meets U-Net. Machine Learning in Medical Imaging, Proceedings of the 13th International Workshop, MLMI 2022, Held in Conjunction with MICCAI 2022, Singapore, 18 September 2022, Springer.","key":"ref_32","DOI":"10.1007\/978-3-031-21014-3"},{"doi-asserted-by":"crossref","unstructured":"Subakan, C., Ravanelli, M., Cornell, S., Bronzi, M., and Zhong, J. (2021, January 6\u201311). Attention Is All You Need In Speech Separation. Proceedings of the ICASSP 2021\u20142021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","key":"ref_33","DOI":"10.1109\/ICASSP39728.2021.9413901"},{"doi-asserted-by":"crossref","unstructured":"Wang, H., Zhu, Y., Adam, H., Yuille, A., and Chen, L.C. (2021, January 20\u201325). MaX-DeepLab: End-to-End Panoptic Segmentation With Mask Transformers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","key":"ref_34","DOI":"10.1109\/CVPR46437.2021.00542"},{"unstructured":"(2024, September 27). ISIC 2017: Skin Lesion Analysis Toward Melanoma Detection. Available online: https:\/\/www.isic-archive.com.","key":"ref_35"},{"unstructured":"(2024, September 27). ISIC 2018: Skin Lesion Analysis Toward Melanoma Detection. Available online: https:\/\/www.isic-archive.com.","key":"ref_36"},{"unstructured":"PH2 Database (2024, September 27). PH2: A Dermoscopic Image Database for the Analysis of Skin Lesions. Available online: http:\/\/www.fc.up.pt\/addi\/ph2%20database.html.","key":"ref_37"},{"unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention Is All You Need. arXiv.","key":"ref_38"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"102327","DOI":"10.1016\/j.media.2021.102327","article-title":"FAT-Net: Feature adaptive transformers for automated skin lesion segmentation","volume":"76","author":"Wu","year":"2022","journal-title":"Med. 
Image Anal."}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/4\/531\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:06:49Z","timestamp":1760029609000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/4\/531"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,31]]},"references-count":39,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,4]]}},"alternative-id":["sym17040531"],"URL":"https:\/\/doi.org\/10.3390\/sym17040531","relation":{},"ISSN":["2073-8994"],"issn-type":[{"type":"electronic","value":"2073-8994"}],"subject":[],"published":{"date-parts":[[2025,3,31]]}}}
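
Usage note: the record above is a Crossref REST API "work" message, so it can be fetched and picked apart programmatically. A minimal Python sketch, assuming the public api.crossref.org endpoint and the third-party requests package; every field name used below appears in the payload above.

import requests

DOI = "10.3390/sym17040531"  # DOI of the work shown above

# Fetch the work record from the public Crossref REST API.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # corresponds to the "message" object above

# Read off a few of the fields present in the record.
print(work["title"][0])
print(work["container-title"][0], work["volume"],
      "(" + work["issue"] + "),", work["page"])
print("authors:", ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"]))
print("cited by:", work["is-referenced-by-count"],
      "| references:", work["references-count"])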
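
The abstract describes a U-Net variant in which SE-Net (ref_21) is introduced into the skip connections to extract local information. A minimal PyTorch sketch of that idea: the SE block follows the standard squeeze-and-excitation design, while the SESkipConnection class name and the concatenation wiring are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation (Hu et al., CVPR 2018): global
    average pooling squeezes spatial detail into one value per channel,
    and a small bottleneck MLP produces per-channel gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates  # recalibrated features

class SESkipConnection(nn.Module):
    """Hypothetical skip connection for the U-Net variant in the abstract:
    encoder features are SE-gated, then concatenated with decoder features
    of the same spatial size (the channel dimension doubles)."""
    def __init__(self, channels: int):
        super().__init__()
        self.se = SEBlock(channels)

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.se(enc), dec], dim=1)

# Example: gate a 64-channel encoder map and fuse it with the decoder.
skip = SESkipConnection(64)
enc = torch.randn(2, 64, 128, 128)
dec = torch.randn(2, 64, 128, 128)
out = skip(enc, dec)  # -> torch.Size([2, 128, 128, 128])

The reduction hyperparameter trades parameter count against the expressiveness of the channel gates; 16 is the default from the SE-Net paper, not a value reported for BAG-Net.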