{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:19:02Z","timestamp":1775067542614,"version":"3.50.1"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"20","license":[{"start":{"date-parts":[[2022,6,3]],"date-time":"2022-06-03T00:00:00Z","timestamp":1654214400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,6,3]],"date-time":"2022-06-03T00:00:00Z","timestamp":1654214400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Umea University"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Comput &amp; Applic"],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>U-Net is a widely adopted neural network in the domain of medical image segmentation. Despite its quick embracement by the medical imaging community, its performance suffers on complicated datasets. The problem can be ascribed to its simple feature extracting blocks: encoder\/decoder, and the semantic gap between encoder and decoder. Variants of U-Net (such as R2U-Net) have been proposed to address the problem of simple feature extracting blocks by making the network deeper, but it does not deal with the semantic gap problem. On the other hand, another variant UNET++\u2009deals with the semantic gap problem by introducing dense skip connections but has simple feature extraction blocks. To overcome these issues, we propose a new U-Net based medical image segmentation architecture R2U++. In the proposed architecture, the adapted changes from vanilla U-Net are: (1) the plain convolutional backbone is replaced by a deeper recurrent residual convolution block. 
The increased field of view with these blocks aids in extracting crucial features for segmentation, which is proven by the improvement in the overall performance of the network. (2) The semantic gap between encoder and decoder is reduced by dense skip pathways. These pathways accumulate features coming from multiple scales and apply concatenation accordingly. The modified architecture has embedded multi-depth models, and an ensemble of outputs taken from varying depths improves the performance on foreground objects appearing at various scales in the images. The performance of R2U++ is evaluated on four distinct medical imaging modalities: electron microscopy, X-rays, fundus, and computed tomography. The average gain achieved in IoU score is 1.5\u2009\u00b1\u20090.37% and in dice score is 0.9\u2009\u00b1\u20090.33% over UNET++, whereas the gains over R2U-Net are 4.21\u2009\u00b1\u20092.72 in IoU and 3.47\u2009\u00b1\u20091.89 in dice score, across different medical imaging segmentation datasets.<\/jats:p>","DOI":"10.1007\/s00521-022-07419-7","type":"journal-article","created":{"date-parts":[[2022,6,3]],"date-time":"2022-06-03T17:02:37Z","timestamp":1654275757000},"page":"17723-17739","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":88,"title":["R2U++: a multiscale recurrent residual U-Net with dense skip connections for medical image segmentation"],"prefix":"10.1007","volume":"34","author":[{"given":"Mehreen","family":"Mubashar","sequence":"first","affiliation":[]},{"given":"Hazrat","family":"Ali","sequence":"additional","affiliation":[]},{"given":"Christer","family":"Gr\u00f6nlund","sequence":"additional","affiliation":[]},{"given":"Shoaib","family":"Azmat","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,6,3]]},"reference":[{"key":"7419_CR1","doi-asserted-by":"publisher","first-page":"518","DOI":"10.1002\/mrd.22489","volume":"82","author":"J 
Schindelin","year":"2015","unstructured":"Schindelin J, Rueden CT, Hiner MC, Eliceiri KW (2015) The ImageJ ecosystem: an open platform for biomedical. Mol Reprod Dev 82:518\u2013529","journal-title":"Mol Reprod Dev"},{"key":"7419_CR2","unstructured":"Facts & Figures 2018: Rate of deaths from cancer continues decline. Jan 14, 2018. Accessed on: July 23, 2020. https:\/\/www.cancer.org\/latest-news\/facts-and-figures-2018-rate-of-deaths-from-cancer-continues-decline.html#reviewed_by"},{"key":"7419_CR3","doi-asserted-by":"publisher","first-page":"1525","DOI":"10.1007\/s00417-017-3677-y","volume":"225","author":"SAA Shah","year":"2017","unstructured":"Shah SAA, Tang TB, Faye I, Laude A (2017) Blood vessel segmentation in color fundus images based on regional and Hessian features. Graefes Arch Clin Exp Ophthalmol 225:1525\u20131533","journal-title":"Graefes Arch Clin Exp Ophthalmol"},{"issue":"4","key":"7419_CR4","doi-asserted-by":"publisher","first-page":"543","DOI":"10.1016\/j.media.2009.05.004","volume":"13","author":"T Heimann","year":"2009","unstructured":"Heimann T, Meinzer HP (2009) Statistical shape models for 3D medical image segmentation: a review. Med Image Anal 13(4):543\u2013563","journal-title":"Med Image Anal"},{"key":"7419_CR5","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Part of Advances in Neural Information Processing Systems (NIPS 2012), vol 25. Curran Associates, Inc."},{"key":"7419_CR6","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large scale image recoginition. arXiv:1409.1556"},{"key":"7419_CR7","doi-asserted-by":"crossref","unstructured":"Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"7419_CR8","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","volume":"42","author":"G Litjens","year":"2017","unstructured":"Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, Laak JAWMV, Ginneken B, Sanchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60\u201368","journal-title":"Med Image Anal"},{"key":"7419_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2019.101552","volume":"58","author":"X Yi","year":"2019","unstructured":"Yi X, Walia E, Babyn P (2019) Generative adversarial network in medical imaging: a review. Med Image Anal 58:101552","journal-title":"Med Image Anal"},{"issue":"4","key":"7419_CR10","doi-asserted-by":"publisher","first-page":"640","DOI":"10.1109\/TPAMI.2016.2572683","volume":"39","author":"J Long","year":"2017","unstructured":"Long J, Shelhamer E, Darrell T (2017) Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 39(4):640\u2013651","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"7419_CR11","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. arXiv:1505.04597v1","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"7419_CR12","doi-asserted-by":"publisher","first-page":"74","DOI":"10.1016\/j.neunet.2019.08.025","volume":"121","author":"N Ibtehaz","year":"2019","unstructured":"Ibtehaz N, Rahman MS (2019) MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw 121:74\u201387","journal-title":"Neural Netw"},{"key":"7419_CR13","doi-asserted-by":"crossref","unstructured":"Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J (2018) UNet++: a nested U-Net architecture for medical image segmentation. 
arXiv:1807.10165","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"7419_CR14","doi-asserted-by":"crossref","unstructured":"Chen F, Ding Y, Wu Z, Wu D, Wen J (2018) An improved framework called DU++ applied to brain. In: 15th international computer conference on wavelet active media technology and information processing (ICCWAMTIP)","DOI":"10.1109\/ICCWAMTIP.2018.8632559"},{"key":"7419_CR15","doi-asserted-by":"crossref","unstructured":"Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J (2019) UNet++: redesigning skip connections to exploit multiscale features in image segmentation. J IEEE Trans Med Imaging","DOI":"10.1109\/TMI.2019.2959609"},{"key":"7419_CR16","doi-asserted-by":"crossref","unstructured":"Alom MZ, Hasan M, Yakopcic C, Taha TM, Asari VK (2018) Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation. arXiv:1802.06955","DOI":"10.1109\/NAECON.2018.8556686"},{"key":"7419_CR17","unstructured":"Oktay O, Schlemper J, Folgoc L, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla NY, Kainz B, Glocker B, Rueckert D (2018) Attention U-Net: learning where to look for the pancreas. arXiv:1804.03999v3"},{"key":"7419_CR18","unstructured":"Zhang J, Jin Y, Xu J, Xu X, Zhang Y (2018) MDU-net: multi-scale densely connected U-net for biomedical image segmentation. arXiv:1812.00352"},{"key":"7419_CR19","doi-asserted-by":"crossref","unstructured":"Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2017) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv:1606.00915v2","DOI":"10.1109\/TPAMI.2017.2699184"},{"key":"7419_CR20","unstructured":"Badrinarayanan V, Kendall A, Cipolla R (2015) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. 
arXiv:1511.00561v3"},{"issue":"11","key":"7419_CR21","doi-asserted-by":"publisher","first-page":"3954","DOI":"10.1109\/JSTARS.2018.2833382","volume":"11","author":"R Li","year":"2018","unstructured":"Li R, Liu W, Yang L, Sun S, Hu W, Zhang F, Li W (2018) DeepUNet: a deep fully convolutional network for pixel-level sea-land segmentation. IEEE J Sel Top Appl Earth Observ Remote Sens 11(11):3954\u20133962","journal-title":"IEEE J Sel Top Appl Earth Observ Remote Sens"},{"key":"7419_CR22","unstructured":"Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: International conference on artificial intelligence and statistics (AISTATS)"},{"key":"7419_CR23","unstructured":"Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning. PMLR, pp 448\u2013456"},{"key":"7419_CR24","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas","DOI":"10.1109\/CVPR.2016.90"},{"key":"7419_CR25","doi-asserted-by":"crossref","unstructured":"Liang M, Hu X (2015) Recurrent convolutional neural network for object recognition. In: Proceedings of IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPRW.2015.7301333"},{"key":"7419_CR26","unstructured":"Kayal\u0131bay B, Jensen G, Smag PVD (2017) CNN-based segmentation of medical imaging data. arXiv:1701.03056v2"},{"key":"7419_CR27","doi-asserted-by":"crossref","unstructured":"Soni A, Koner R, Villuri VGK (2019) M-UNet: Modified U-Net segmentation framework with satellite imagery. In: Proceedings of the global AI congress 2019","DOI":"10.1007\/978-981-15-2188-1_4"},{"key":"7419_CR28","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Maaten LVD (2017) Densely connected convolutional networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2017.243"},{"key":"7419_CR29","doi-asserted-by":"crossref","unstructured":"Zhang Z, Wu C, Coleman S, Kerr D (2020) DENSE-INception U-net for medical image segmentation. Comput Methods Programs Biomed 192:105395","DOI":"10.1016\/j.cmpb.2020.105395"},{"issue":"3","key":"7419_CR30","doi-asserted-by":"publisher","first-page":"241","DOI":"10.5614\/itbj.ict.res.appl.2019.13.3.5","volume":"13","author":"I Rubasinghe","year":"2019","unstructured":"Rubasinghe I, Meedeniya D (2019) Ultrasound nerve segmentation using deep probabilistic programming. J ICT Res Appl 13(3):241\u2013256","journal-title":"J ICT Res Appl"},{"key":"7419_CR31","unstructured":"Chen X, Yao L, Zhang Y (2020) Residual attention U-Net for automated multi-class segmentation of COVID-19 chest CT images. arXiv:2004.05645"},{"key":"7419_CR32","unstructured":"Yan Q, Wang B, Gong D, Luo C, Zhao W, Shen J, Shi Q, Jin S, Zhang L, You Z (2020) COVID-19 chest CT image segmentation\u2014a deep convolutional neural network solution. arXiv:2004.10987"},{"key":"7419_CR33","doi-asserted-by":"crossref","unstructured":"Wu Z, Chen F, Wu D (2018) A novel framework called HDU for segmentation of brain tumor. In: 2018 15th international computer conference on wavelet active media technology and information processing (ICCWAMTIP)","DOI":"10.1109\/ICCWAMTIP.2018.8632590"},{"key":"7419_CR34","doi-asserted-by":"crossref","unstructured":"Cicek O, Abdulkadir A, Lienkamp S, Brox T, Ronneberger O (2016) 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International conference on medical image computing and computer-assisted intervention. 
Springer, Cham, pp 424\u2013432","DOI":"10.1007\/978-3-319-46723-8_49"},{"key":"7419_CR35","doi-asserted-by":"crossref","unstructured":"Milletari F, Navab N, Ahmadi SA (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D vision (3DV). IEEE, pp 565\u2013571","DOI":"10.1109\/3DV.2016.79"},{"key":"7419_CR36","doi-asserted-by":"publisher","first-page":"166823","DOI":"10.1109\/ACCESS.2019.2953934","volume":"7","author":"T Song","year":"2019","unstructured":"Song T, Meng F, Paton AR, Li P, Zheng P, Wang X (2019) U-Next: A Novel Convolution Neural Network. IEEE Access 7:166823\u2013166832","journal-title":"IEEE Access"},{"key":"7419_CR37","doi-asserted-by":"crossref","unstructured":"Wu S, Wang Z, Liu C, Zhu C, Wu S, Xiao K (2019) Automatical segmentation of pelvic organs after hysterectomy by using dilated convolutions U-Net++. In: Proceedings of IEEE 19th international conference on software quality, reliability and security companion (QRS-C)","DOI":"10.1109\/QRS-C.2019.00074"},{"key":"7419_CR38","doi-asserted-by":"crossref","unstructured":"Chaurasia A, Culurciello E (2017) LinkNet: exploiting encoder representations for efficient semantic segmentation. In: 2017 IEEE visual communication and image processing (VCIP)","DOI":"10.1109\/VCIP.2017.8305148"},{"key":"7419_CR39","doi-asserted-by":"crossref","unstructured":"Lin G, Milan A, Shen C, Reid I (2017) RefineNet: multi-path refinement networks for high-resolution semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2017.549"},{"key":"7419_CR40","doi-asserted-by":"crossref","unstructured":"Zhao H, Qi X, Shen X, Shi J, Jia J (2018) ICNet for real-time semantic segmentation on high-resolution images. 
In: Proceedings of the European conference on computer vision","DOI":"10.1007\/978-3-030-01219-9_25"},{"key":"7419_CR41","doi-asserted-by":"crossref","unstructured":"Tajbakhsh N, Lai B, Ananth SP, Ding X (2020) ErrorNet: learning error representations from limited data to improve vascular segmentation. In: 2020 IEEE 17th international symposium on biomedical imaging (ISBI). IEEE, pp 1364\u20131368","DOI":"10.1109\/ISBI45749.2020.9098451"},{"key":"7419_CR42","doi-asserted-by":"crossref","unstructured":"Drozdzal M, Vorontsov E, Chartrand G, Kadoury S, Pal C (2016) The importance of skip connections in biomedical image segmentation. In: Deep learning and data labeling for medical applications. Springer, Cham, pp 179\u2013187","DOI":"10.1007\/978-3-319-46976-8_19"},{"key":"7419_CR43","doi-asserted-by":"crossref","unstructured":"Zhou C, Chen S, Ding C, Tao D (2019) Learning contextual and attentive information for brain tumor segmentation. In: International MICCAI brainlesion workshop","DOI":"10.1007\/978-3-030-11726-9_44"},{"issue":"10","key":"7419_CR44","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pbio.1000502","volume":"8","author":"A Cardona","year":"2010","unstructured":"Cardona A, Saalfeld S, Preibisch S, Schmid B, Cheng A, Pulokas J, Tomancak P, Hartenstein V (2010) An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol 8(10):e1000502","journal-title":"PLoS Biol"},{"key":"7419_CR45","unstructured":"COVID-19 CT segmentation dataset. https:\/\/medicalsegmentation.com\/covid19\/"},{"issue":"8","key":"7419_CR46","doi-asserted-by":"publisher","first-page":"2626","DOI":"10.1109\/TMI.2020.2996645","volume":"39","author":"D-P Fan","year":"2020","unstructured":"Fan D-P, Zhou T, Ji G-P, Zhou Y, Chen G, Fu H, Shen J, Shao L (2020) Inf-Net: automatic COVID-19 lung infection segmentation from CT scans. 
IEEE Trans Med Imaging 39(8):2626\u20132637","journal-title":"IEEE Trans Med Imaging"},{"issue":"1","key":"7419_CR47","doi-asserted-by":"publisher","first-page":"71","DOI":"10.2214\/ajr.174.1.1740071","volume":"174","author":"SJS Katsuragawa","year":"2000","unstructured":"Katsuragawa SJS, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K-I, Matsui M, Fujita H, Kodera Y, Doi K (2000) Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists\u2019 detection of pulmonary nodules. Am J Roentgenol 174(1):71\u201374","journal-title":"Am J Roentgenol"},{"key":"7419_CR48","unstructured":"Drive database. https:\/\/drive.grand-challenge.org\/"},{"key":"7419_CR49","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J (2020) An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929"}],"container-title":["Neural Computing and 
Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-022-07419-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00521-022-07419-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00521-022-07419-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,23]],"date-time":"2022-09-23T15:23:08Z","timestamp":1663946588000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00521-022-07419-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,3]]},"references-count":49,"journal-issue":{"issue":"20","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["7419"],"URL":"https:\/\/doi.org\/10.1007\/s00521-022-07419-7","relation":{},"ISSN":["0941-0643","1433-3058"],"issn-type":[{"value":"0941-0643","type":"print"},{"value":"1433-3058","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,3]]},"assertion":[{"value":"29 November 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 May 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 June 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}