{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,7]],"date-time":"2026-05-07T12:17:56Z","timestamp":1778156276590,"version":"3.51.4"},"reference-count":21,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T00:00:00Z","timestamp":1643241600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T00:00:00Z","timestamp":1643241600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Med Imaging"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:sec>\n                    <jats:title>Background<\/jats:title>\n                    <jats:p>In the encoding part of U-Net3+, the ability to extract brain tumor features is insufficient; as a result, the features cannot be fused well during up-sampling, and segmentation accuracy is reduced.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Methods<\/jats:title>\n                    <jats:p>In this study, we put forward an improved U-Net3+ segmentation network based on stage residuals. In the encoder part, an encoder based on the stage residual structure is used to alleviate the vanishing gradient problem caused by increasing network depth, and it enhances the feature extraction ability of the encoder, which supports full feature fusion during up-sampling. Moreover, we replaced the batch normalization (BN) layer with a filter response normalization (FRN) layer to eliminate the impact of batch size on the network. 
Based on the improved two-dimensional (2D) U-Net3+ model with stage residuals, a three-dimensional (3D) IResUnet3+ model is constructed. We also propose appropriate methods for handling 3D data, which enable accurate segmentation with the 3D network.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Results<\/jats:title>\n                    <jats:p>\n                      The experimental results showed that the sensitivities for WT, TC, and ET increased by 1.34%, 4.6%, and 8.44%, respectively, and the Dice coefficients for ET and WT further increased by 3.43% and 1.03%, respectively. To facilitate further research, the source code is available at:\n                      <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/YuOnlyLookOne\/IResUnet3Plus\">https:\/\/github.com\/YuOnlyLookOne\/IResUnet3Plus<\/jats:ext-link>\n                      .\n                    <\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Conclusion<\/jats:title>\n                    <jats:p>The improved network yields a significant improvement on the brain tumor segmentation task of the BraTS2018 dataset. Compared with the classical networks U-Net, V-Net, ResUnet, and U-Net3+, the proposed network has fewer parameters and significantly higher accuracy.<\/jats:p>\n                  <\/jats:sec>","DOI":"10.1186\/s12880-022-00738-0","type":"journal-article","created":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T06:02:49Z","timestamp":1643263369000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":26,"title":["Improved U-Net3+ with stage residual for brain tumor 
segmentation"],"prefix":"10.1186","volume":"22","author":[{"given":"Chuanbo","family":"Qin","sequence":"first","affiliation":[]},{"given":"Yujie","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Wenbin","family":"Liao","sequence":"additional","affiliation":[]},{"given":"Junying","family":"Zeng","sequence":"additional","affiliation":[]},{"given":"Shufen","family":"Liang","sequence":"additional","affiliation":[]},{"given":"Xiaozhi","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,1,27]]},"reference":[{"key":"738_CR1","doi-asserted-by":"publisher","first-page":"4756","DOI":"10.1049\/iet-ipr.2020.0469","volume":"14","author":"S Pasban","year":"2021","unstructured":"Pasban S, Mohamadzadeh S, Zeraatkar-Moghaddam J, Keivan SA. Infant brain segmentation based on a combination of VGG-16 and U-Net deep neural networks. IET Image Proc. 2021;14:4756\u201365.","journal-title":"IET Image Proc"},{"key":"738_CR2","unstructured":"Liu Z, Chen L, Tong L, Jiang Z, Chen L, Zhou F, Zhang Q, Zhang X, Jin Y, Zhou H. Deep learning based brain tumor segmentation: a survey. arXiv preprint arXiv: 2007.09479, 2020."},{"key":"738_CR3","first-page":"640","volume":"39","author":"J Long","year":"2015","unstructured":"Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2015;39:640\u201351.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"738_CR4","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Cham: Springer; 2015. p. 234\u2013241.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"738_CR5","doi-asserted-by":"crossref","unstructured":"Jiang Z, Ding C, Liu M. Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task. 
In: International MICCAI Brainlesion Workshop. Cham: Springer; 2019. p. 231\u2013241.","DOI":"10.1007\/978-3-030-46640-4_22"},{"key":"738_CR6","doi-asserted-by":"crossref","unstructured":"Zhou Z, Siddiquee M M R, Tajbakhsh N. Unet++: a nested u-net architecture for medical image segmentation. In: Deep learning in medical image analysis and multimodal learning for clinical decision support. Cham: Springer; 2018. p. 3\u201311.","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"738_CR7","doi-asserted-by":"crossref","unstructured":"Huang H, Lin L, Tong R. Unet 3+: a full-scale connected unet for medical image segmentation. In: ICASSP 2020\u20132020 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE; 2020. p. 1055\u20131059.","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"738_CR8","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1016\/j.media.2016.05.004","volume":"35","author":"M Havaei","year":"2017","unstructured":"Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal. 2017;35:18\u201331.","journal-title":"Med Image Anal"},{"issue":"5","key":"738_CR9","doi-asserted-by":"publisher","first-page":"749","DOI":"10.1109\/LGRS.2018.2802944","volume":"15","author":"Z Zhang","year":"2018","unstructured":"Zhang Z, Liu Q, Wang Y. Road extraction by deep residual u-net. IEEE Geosci Remotes. 2018;15(5):749\u201353.","journal-title":"IEEE Geosci Remotes"},{"key":"738_CR10","doi-asserted-by":"crossref","unstructured":"J\u00e9gou S, Drozdzal M, Vazquez D, et al. The one hundred layers tiramisu: fully convolutional densenets for semantic segmentation. In: IEEE conference on computer vision and pattern recognition workshops, Honolulu, HI, USA; 2017. p. 11\u201319.","DOI":"10.1109\/CVPRW.2017.156"},{"key":"738_CR11","doi-asserted-by":"crossref","unstructured":"Milletari F, Navab N, Ahmadi S A. V-net: fully convolutional neural networks for volumetric medical image segmentation. 
In: International conference on 3D vision, Stanford, US; 2016. p. 565\u2013571.","DOI":"10.1109\/3DV.2016.79"},{"key":"738_CR12","doi-asserted-by":"crossref","unstructured":"Colmeiro RGR, Verrastro CA, Grosges T. Multimodal brain tumor segmentation using 3D convolutional networks. In: International conference of MICCAI, Quebec, Canada; 2017. p. 226\u2013240.","DOI":"10.1007\/978-3-319-75238-9_20"},{"key":"738_CR13","doi-asserted-by":"crossref","unstructured":"Singh S, Krishnan S. Filter response normalization layer: eliminating batch dependence in the training of deep neural networks. In: IEEE conference on computer vision and pattern recognition, Seattle, WA, USA; 2020. p. 11237\u201311246.","DOI":"10.1109\/CVPR42600.2020.01125"},{"key":"738_CR14","unstructured":"Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning, Lille, France; 2015. p. 448\u2013456."},{"key":"738_CR15","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA; 2016. p. 770\u2013778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"738_CR16","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, et al. Identity mappings in deep residual networks. In: European conference on computer vision, Amsterdam, Netherlands; 2016. p. 630\u2013645.","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"738_CR17","unstructured":"Duta IC, Liu L, Zhu F, et al. Improved residual networks for image and video recognition. arXiv preprint arXiv: 2004.04989, 2020."},{"key":"738_CR18","doi-asserted-by":"crossref","unstructured":"Wu Y, He K. Group normalization. In: European conference on computer vision, Munich, Germany; 2018. p. 
3\u201319.","DOI":"10.1007\/978-3-030-01261-8_1"},{"key":"738_CR19","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B. Swin transformer: hierarchical vision transformer using shifted windows. arXiv preprint arXiv: 2103.14030, 2021."},{"key":"738_CR20","doi-asserted-by":"crossref","unstructured":"\u00c7i\u00e7ek \u00d6, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International conference on medical image computing and computer-assisted intervention. Cham: Springer; 2016. p. 424\u2013432.","DOI":"10.1007\/978-3-319-46723-8_49"},{"key":"738_CR21","unstructured":"Ulyanov D, Vedaldi A, Lempitsky V. Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv: 1607.08022, 2016."}],"container-title":["BMC Medical Imaging"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12880-022-00738-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12880-022-00738-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12880-022-00738-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,1,27]],"date-time":"2022-01-27T06:03:00Z","timestamp":1643263380000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcmedimaging.biomedcentral.com\/articles\/10.1186\/s12880-022-00738-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,27]]},"references-count":21,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["738"],"URL":"https:\/\/doi.org\/10.1186\/s12880-022-00738-0","relation":{"has-preprint":[{"id-type":"doi","id":"10.21203\/rs.3.rs-898744\/v1","asserted
-by":"object"}]},"ISSN":["1471-2342"],"issn-type":[{"value":"1471-2342","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,27]]},"assertion":[{"value":"12 September 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 January 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 January 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The dataset used in this work is openly accessible and free to the public. No direct interaction with a human or animal entity was conducted in this work. All procedures were performed in accordance with relevant guidelines.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics and consent to participate"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"14"}}