{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,28]],"date-time":"2026-01-28T10:23:14Z","timestamp":1769595794762,"version":"3.49.0"},"reference-count":40,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T00:00:00Z","timestamp":1689638400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T00:00:00Z","timestamp":1689638400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62072345"],"award-info":[{"award-number":["62072345"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["41671382"],"award-info":[{"award-number":["41671382"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Bioinformatics"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Deep learning-based medical image segmentation has made great progress over the past decades. Scholars have proposed many novel transformer-based segmentation networks to solve the problems of building long-range dependencies and global context connections in convolutional neural networks (CNNs). However, these methods usually replace the CNN-based blocks with improved transformer-based structures, which weakens the local feature extraction ability, and these structures require a huge amount of data for training. 
Moreover, those methods paid little attention to edge information, which is essential in medical image segmentation. To address these problems, we proposed a new network structure, called P-TransUNet. This network structure combines the designed efficient P-Transformer and the fusion module, which extract distance-related long-range dependencies and local information respectively and produce the fused features. In addition, we introduced edge loss into training to focus the attention of the network on the edge of the lesion area to improve segmentation performance. Extensive experiments across four tasks of medical image segmentation demonstrated the effectiveness of P-TransUNet, and showed that our network outperforms other state-of-the-art methods.<\/jats:p>","DOI":"10.1186\/s12859-023-05409-7","type":"journal-article","created":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T13:02:19Z","timestamp":1689685339000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["P-TransUNet: an improved parallel network for medical image segmentation"],"prefix":"10.1186","volume":"24","author":[{"given":"Yanwen","family":"Chong","sequence":"first","affiliation":[]},{"given":"Ningdi","family":"Xie","sequence":"additional","affiliation":[]},{"given":"Xin","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Shaoming","family":"Pan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,18]]},"reference":[{"key":"5409_CR1","doi-asserted-by":"publisher","first-page":"117006","DOI":"10.1016\/j.eswa.2022.117006","volume":"200","author":"F Behrad","year":"2022","unstructured":"Behrad F, Abadeh MS. An overview of deep learning methods for multimodal medical data mining. Expert Syst Appl. 2022;200:117006.","journal-title":"Expert Syst Appl"},{"key":"5409_CR2","unstructured":"Stoyanov D, et al. 
Deep learning in medical image analysis and multimodal learning for clinical decision support: 4th international workshop, DLMIA 2018, and 8th international workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, proceedings. 1st ed. Cham: Springer; 2018."},{"issue":"23","key":"5409_CR3","doi-asserted-by":"publisher","first-page":"235001","DOI":"10.1088\/1361-6560\/abc363","volume":"65","author":"WC Chi","year":"2020","unstructured":"Chi WC, Ma L, Wu JJ, Chen ML, Lu WG, Gu XJ. Deep learning-based medical image segmentation with limited labels. Phys Med Biol. 2020;65(23):235001.","journal-title":"Phys Med Biol"},{"key":"5409_CR4","unstructured":"Shuai B, Liu T, Wang G. Improving fully convolution network for semantic segmentation. arXiv preprint arxiv:1611.08986. 2016."},{"key":"5409_CR5","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention, Pt Iii, 2015. vol. 9351, pp. 234-241.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"5409_CR6","doi-asserted-by":"crossref","unstructured":"Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: a nested U-Net architecture for medical image segmentation. In: 4th deep learning in medical image analysis (DLMIA) Workshop. 2018.","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"5409_CR7","doi-asserted-by":"crossref","unstructured":"Shen FL, Gan R, Zeng G. Weighted residuals for very deep networks. In: 2016 3rd international conference on systems and informatics (Icsai), 2016. pp. 
936\u2013941.","DOI":"10.1109\/ICSAI.2016.7811085"},{"issue":"12","key":"5409_CR8","doi-asserted-by":"publisher","first-page":"2663","DOI":"10.1109\/TMI.2018.2845918","volume":"37","author":"XM Li","year":"2018","unstructured":"Li XM, Chen H, Qi XJ, Dou Q, Fu CW, Heng PA. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imaging. 2018;37(12):2663\u201374.","journal-title":"IEEE Trans Med Imaging"},{"key":"5409_CR9","doi-asserted-by":"crossref","unstructured":"Valanarasu JMJ, Oza P, Hacihaliloglu I, Patel VM. Medical transformer: gated axial-attention for medical image segmentation. In: Medical image computing and computer assisted intervention\u2014Miccai 2021, Pt I. 2021. vol. 12901, pp. 36\u201346","DOI":"10.1007\/978-3-030-87193-2_4"},{"key":"5409_CR10","unstructured":"Chen LC, Papandreou G, Schroff F, Adam H. Rethinking atrous convolution for semantic image segmentation. arxiv:1706.05587. 2017."},{"key":"5409_CR11","unstructured":"Oktay O, et al. Attention U-Net: learning where to look for the pancreas. arXiv preprint arxiv:1804.03999. 2018."},{"key":"5409_CR12","unstructured":"Vaswani A, et al. Attention is all you need. In: Advances in neural information processing systems 30 (Nips 2017), 2017. vol. 30."},{"key":"5409_CR13","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Houlsby N. An image is worth 16 x 16 words: transformers for image recognition at scale. arxiv:2010.11929. 2020."},{"key":"5409_CR14","doi-asserted-by":"crossref","unstructured":"Liu Z, et al. Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF international conference on computer vision. 2021.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"5409_CR15","unstructured":"Chen J, Lu Y, Yu Q, Luo X, Zhou Y. TransUNet: transformers make strong encoders for medical image segmentation. arXiv preprint arxiv:2102.04306. 
2021."},{"key":"5409_CR16","doi-asserted-by":"crossref","unstructured":"Zhang Y, Liu H, Hu Q. TransFuse: fusing transformers and CNNs for medical image segmentation. 2021.","DOI":"10.1007\/978-3-030-87193-2_2"},{"key":"5409_CR17","doi-asserted-by":"crossref","unstructured":"Wang Y, et al. Deep attentional features for prostate segmentation in ultrasound. In: Medical image computing and computer assisted intervention\u2014Miccai 2018, Pt Iv, 2018. vol. 11073, pp. 523\u2013530.","DOI":"10.1007\/978-3-030-00937-3_60"},{"issue":"4","key":"5409_CR18","doi-asserted-by":"publisher","first-page":"637","DOI":"10.1109\/LGRS.2020.2983464","volume":"18","author":"SM Pan","year":"2021","unstructured":"Pan SM, Tao YL, Nie CC, Chong YW. PEGNet: progressive edge guidance network for semantic segmentation of remote sensing images. IEEE Geosci Remote Sens Lett. 2021;18(4):637\u201341.","journal-title":"IEEE Geosci Remote Sens Lett"},{"key":"5409_CR19","doi-asserted-by":"crossref","unstructured":"Jha D, Riegler MA, Johansen D, Halvorsen P, Johansen HD. DoubleU-Net: a deep convolutional neural network for medical image segmentation. In: 2020 IEEE 33rd international symposium on computer-based medical systems (Cbms 2020). 2020. pp. 558\u2013564","DOI":"10.1109\/CBMS49503.2020.00111"},{"key":"5409_CR20","doi-asserted-by":"crossref","unstructured":"Lin A, Chen B, Xu J, Zhang Z, Lu G. DS-TransUNet: dual swin transformer U-Net for medical image segmentation. arxiv:2106.06716. 2021.","DOI":"10.1109\/TIM.2022.3178991"},{"key":"5409_CR21","unstructured":"Ho J, Kalchbrenner N, Weissenborn D, Salimans T. Axial attention in multidimensional transformers. https:\/\/arxiv.org\/abs\/1912.12180. 2019."},{"key":"5409_CR22","doi-asserted-by":"crossref","unstructured":"He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (Cvpr). 2016. pp. 
770\u2013778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"5409_CR23","doi-asserted-by":"crossref","unstructured":"Woo SH, Park J, Lee JY, Kweon IS. CBAM: convolutional block attention module. In: Computer Vision\u2014Eccv 2018, Pt Vii. 2018. vol. 11211, pp. 3\u201319.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"5409_CR24","doi-asserted-by":"crossref","unstructured":"Shrivastava A, Gupta A, Girshick R. Training region-based object detectors with online hard example mining. In: 2016 IEEE conference on computer vision and pattern recognition (Cvpr). 2016. pp. 761\u2013769.","DOI":"10.1109\/CVPR.2016.89"},{"key":"5409_CR25","doi-asserted-by":"crossref","unstructured":"Jha D, Smedsrud PH, Riegler MA, Halvorsen P, Johansen HD. Kvasir-SEG: a segmented polyp dataset. In: 26th international conference on multimedia modelling. 2020.","DOI":"10.1007\/978-3-030-37734-2_37"},{"issue":"2","key":"5409_CR26","doi-asserted-by":"publisher","first-page":"283","DOI":"10.1007\/s11548-013-0926-3","volume":"9","author":"J Silva","year":"2013","unstructured":"Silva J, Histace A, Romain O, Dray X, Granado B. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer. Int J Comput Assist Radiol Surg. 2013;9(2):283\u201393.","journal-title":"Int J Comput Assist Radiol Surg"},{"issue":"99","key":"5409_CR27","first-page":"1","volume":"PP","author":"D Jha","year":"2021","unstructured":"Jha D, Ali S, Johansen HD, Johansen D, Halvorsen P. Real-time polyp detection, localization and segmentation in colonoscopy using deep learning. IEEE Access. 2021;PP(99):1\u20131.","journal-title":"IEEE Access"},{"key":"5409_CR28","unstructured":"Jose JM, Sindagi V, Hacihaliloglu I, Patel VM. KiU-Net: towards accurate segmentation of biomedical images using over-complete representations. 
2020."},{"issue":"12","key":"5409_CR29","doi-asserted-by":"publisher","first-page":"1247","DOI":"10.1038\/s41592-019-0612-7","volume":"16","author":"JC Caicedo","year":"2019","unstructured":"Caicedo JC, et al. Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat Methods. 2019;16(12):1247\u201353.","journal-title":"Nat Methods"},{"key":"5409_CR30","doi-asserted-by":"crossref","unstructured":"Fan D-P, et al. PraNet: parallel reverse attention network for polyp segmentation. https:\/\/arxiv.org\/abs\/2006.11392. 2020.","DOI":"10.1007\/978-3-030-59725-2_26"},{"key":"5409_CR31","doi-asserted-by":"crossref","unstructured":"Jha D, Smedsrud PH, Riegler MA, Johansen D, et al. ResUNet++: an advanced architecture for medical image segmentation. In: 21st IEEE international symposium on multimedia, 2019.","DOI":"10.1109\/ISM46123.2019.00049"},{"key":"5409_CR32","doi-asserted-by":"crossref","unstructured":"Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: 2015 IEEE conference on computer vision and pattern recognition (Cvpr), 2015. pp. 3431\u20133440.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"5409_CR33","doi-asserted-by":"publisher","first-page":"7306","DOI":"10.1109\/ACCESS.2020.3046667","volume":"9","author":"SH Yang","year":"2021","unstructured":"Yang SH, Chen WR, Huang WJ, Chen YP. DDaNet: dual-path depth-aware attention network for fingerspelling recognition using RGB-D images. IEEE Access. 2021;9:7306\u201322.","journal-title":"IEEE Access"},{"key":"5409_CR34","doi-asserted-by":"crossref","unstructured":"Tomar NK, et al. FANet: a feedback attention network for improved biomedical image segmentation. 
IEEE Transactions on Neural Networks and Learning Systems, 2022.","DOI":"10.1109\/TNNLS.2022.3159394"},{"issue":"5","key":"5409_CR35","doi-asserted-by":"publisher","first-page":"2252","DOI":"10.1109\/JBHI.2021.3138024","volume":"26","author":"A Srivastava","year":"2022","unstructured":"Srivastava A, et al. MSRF-Net: a multi-scale residual fusion network for biomedical image segmentation. IEEE J Biomed Health Inform. 2022;26(5):2252\u201363.","journal-title":"IEEE J Biomed Health Inform"},{"key":"5409_CR36","doi-asserted-by":"crossref","unstructured":"Sanderson E, Matuszewski BJ. FCN-transformer feature fusion for polyp segmentation. arXiv preprint arxiv:2208.08352. 2022.","DOI":"10.1007\/978-3-031-12053-4_65"},{"key":"5409_CR37","doi-asserted-by":"crossref","unstructured":"Chen LCE, Zhu YK, Papandreou G, Schroff F, Adam H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Computer Vision\u2014Eccv 2018, Pt Vii, 2018. vol. 11211, pp. 833-851.","DOI":"10.1007\/978-3-030-01234-2_49"},{"issue":"5","key":"5409_CR38","doi-asserted-by":"publisher","first-page":"749","DOI":"10.1109\/LGRS.2018.2802944","volume":"15","author":"ZX Zhang","year":"2018","unstructured":"Zhang ZX, Liu QJ, Wang YH. Road extraction by deep residual U-Net. IEEE Geosci Remote Sens Lett. 2018;15(5):749\u201353.","journal-title":"IEEE Geosci Remote Sens Lett"},{"key":"5409_CR39","unstructured":"Chen B, Liu Y, Zhang Z, Lu G, Zhang D. TransAttUnet: multi-level attention-guided U-Net with transformer for medical image segmentation. arXiv preprint arxiv:2107.05274. 2021."},{"key":"5409_CR40","unstructured":"Sun K, Zhao Y, Jiang B, Cheng T, Wang J. High-resolution representations for labeling pixels and regions. arXiv preprint arxiv:1904.04514. 
2019."}],"container-title":["BMC Bioinformatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12859-023-05409-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12859-023-05409-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12859-023-05409-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,24]],"date-time":"2024-10-24T10:50:16Z","timestamp":1729767016000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/s12859-023-05409-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,18]]},"references-count":40,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["5409"],"URL":"https:\/\/doi.org\/10.1186\/s12859-023-05409-7","relation":{},"ISSN":["1471-2105"],"issn-type":[{"value":"1471-2105","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,18]]},"assertion":[{"value":"24 March 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 July 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not 
applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"285"}}