{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T18:36:17Z","timestamp":1773945377629,"version":"3.50.1"},"reference-count":33,"publisher":"Springer Science and Business Media LLC","issue":"17","license":[{"start":{"date-parts":[[2022,6,25]],"date-time":"2022-06-25T00:00:00Z","timestamp":1656115200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,6,25]],"date-time":"2022-06-25T00:00:00Z","timestamp":1656115200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"NNSF","doi-asserted-by":"crossref","award":["61771347"],"award-info":[{"award-number":["61771347"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"NNSF","doi-asserted-by":"crossref","award":["62071213"],"award-info":[{"award-number":["62071213"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"SPKAAIGU","award":["2019KZDZX1017"],"award-info":[{"award-number":["2019KZDZX1017"]}]},{"name":"GDDSIPL","award":["2019GDDSIPL-03"],"award-info":[{"award-number":["2019GDDSIPL-03"]}]},{"name":"GDDSIPL","award":["2020GDDSIPL-03"],"award-info":[{"award-number":["2020GDDSIPL-03"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Soft Comput"],"published-print":{"date-parts":[[2022,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Automatic segmentation of breast cancer lesions in dynamic contrast-enhanced magnetic resonance imaging is challenged by low accuracy of delineation of the infiltration area, variable structure and shapes, large intensity heterogeneity changes, and low boundary contrast. 
This study constructs a two-stage breast cancer image segmentation framework and proposes a novel breast cancer lesion segmentation model (TR-IMUnet). The benchmark U-Net model first roughly delineates the breast area in the acquired images, eliminating the influence of unrelated tissues (chest muscle, fat, and heart) on breast tumor segmentation. Based on the extracted region of interest, the rectified linear unit (ReLU) function in the encoding\u2013decoding structure of the model is replaced by an improved ReLU function that retains and adjusts the data dynamically according to the input information. The segmentation accuracy of breast cancer lesions is further improved by embedding a multi-scale fusion block and a transformer module in the coding path of the model, thereby capturing multi-scale and global attention information. The experimental results show that the breast tumor segmentation indexes Dice coefficient (Dice), Intersection over Union (IoU), Sensitivity (SEN), and Positive Predictive Value (PPV) increased by 4.27%, 5.21%, 3.37%, and 3.68%, respectively, relative to the U-Net reference model. 
The proposed model improves the segmentation results of breast cancer lesions and reduces mis-segmentation of small areas and of calcifications.<\/jats:p>","DOI":"10.1007\/s00500-022-07235-0","type":"journal-article","created":{"date-parts":[[2022,6,25]],"date-time":"2022-06-25T04:03:04Z","timestamp":1656129784000},"page":"8317-8334","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":43,"title":["Joint Transformer and Multi-scale CNN for DCE-MRI Breast Cancer Segmentation"],"prefix":"10.1007","volume":"26","author":[{"given":"Chuanbo","family":"Qin","sequence":"first","affiliation":[]},{"given":"Yujie","family":"Wu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7559-0637","authenticated-orcid":false,"given":"Junying","family":"Zeng","sequence":"additional","affiliation":[]},{"given":"Lianfang","family":"Tian","sequence":"additional","affiliation":[]},{"given":"Yikui","family":"Zhai","sequence":"additional","affiliation":[]},{"given":"Fang","family":"Li","sequence":"additional","affiliation":[]},{"given":"Xiaozhi","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,6,25]]},"reference":[{"key":"7235_CR1","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G, et al (2020) End-to-end object detection with transformers. In: European conference on computer vision. Springer, Cham, pp 213\u2013229","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"7235_CR2","doi-asserted-by":"crossref","unstructured":"Chen Y, Dai X, Liu M, et al (2020) Dynamic relu. In: European conference on computer vision. Springer, Cham, pp 351\u2013367","DOI":"10.1007\/978-3-030-58529-7_21"},{"key":"7235_CR3","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, et al (2020) An image is worth 16x16 words: transformers for image recognition at scale. 
arXiv preprint arXiv:2010.11929"},{"key":"7235_CR4","unstructured":"Han K, Xiao A, Wu E, et al (2021) Transformer in transformer. Adv Neural Inf Process Syst 34"},{"key":"7235_CR5","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, et al (2015) Delving deep into rectifiers: Surpassing human-level performance on imageNet classification. In: Proceedings of the IEEE international conference on computer vision, pp 1026\u20131034","DOI":"10.1109\/ICCV.2015.123"},{"key":"7235_CR6","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"7235_CR7","unstructured":"Hu J, Shen L, Sun G (2014) Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7132\u20137141"},{"key":"7235_CR8","doi-asserted-by":"crossref","unstructured":"Huang H, Lin L, Tong R, et al (2020) Unet 3+: A full-scale connected unet for medical image segmentation. In: ICASSP 2020\u20132020 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 1055\u20131059","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"7235_CR9","unstructured":"Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980"},{"issue":"12","key":"7235_CR10","doi-asserted-by":"publisher","first-page":"2663","DOI":"10.1109\/TMI.2018.2845918","volume":"37","author":"X Li","year":"2018","unstructured":"Li X, Chen H, Qi X et al (2018) H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. 
IEEE Trans Med Imaging 37(12):2663\u20132674","journal-title":"IEEE Trans Med Imaging"},{"key":"7235_CR11","doi-asserted-by":"crossref","unstructured":"Li Z, Liu X, Creighton FX, et al (2020) Revisiting stereo depth estimation from a sequence-to-sequence perspective with transformers. arXiv preprint arXiv:2011.02910","DOI":"10.1109\/ICCV48922.2021.00614"},{"key":"7235_CR12","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","volume":"42","author":"G Litjens","year":"2017","unstructured":"Litjens G, Kooi T, Bejnordi BE et al (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60\u201388","journal-title":"Med Image Anal"},{"key":"7235_CR13","doi-asserted-by":"crossref","unstructured":"Liu R, Yuan Z, Liu T, et al (2021) End-to-end Lane shape prediction with transformers. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 3694\u20133702.","DOI":"10.1109\/WACV48630.2021.00374"},{"key":"7235_CR14","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"7235_CR15","unstructured":"Maas AL, Hannun AY, Ng Y (2013) Rectifier nonlinearities improve neural network acoustic models. Proc. Icml. 30(1): 3."},{"key":"7235_CR16","doi-asserted-by":"crossref","unstructured":"Milletari F, Navab N, Ahmadi SA (2016) V-net: Fully convolutional neural networks for volumetric medical image segmentation[C]\/\/2016 fourth international conference on 3D vision (3DV). IEEE, 2016: 565\u2013571.","DOI":"10.1109\/3DV.2016.79"},{"key":"7235_CR17","unstructured":"Nair V, Hinton GE (2010a) Rectified linear units improve restricted boltzmann machines"},{"key":"7235_CR18","unstructured":"Nair V, Hinton GE (2010b) Rectified Linear Units Improve Restricted Boltzmann Machines. 
International Conference on Machine Learning"},{"key":"7235_CR19","unstructured":"Oktay O, Schlemper J, Folgoc L L, et al (2018) Attention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999"},{"issue":"1","key":"7235_CR20","doi-asserted-by":"publisher","first-page":"315","DOI":"10.1146\/annurev.bioeng.2.1.315","volume":"2","author":"DL Pham","year":"2000","unstructured":"Pham DL, Xu C, Prince JL (2000) Current methods in medical image segmentation[J]. Annu Rev Biomed Eng 2(1):315\u2013337","journal-title":"Annu Rev Biomed Eng"},{"key":"7235_CR21","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2019.101781","volume":"103","author":"G Piantadosi","year":"2020","unstructured":"Piantadosi G, Sansone M, Fusco R et al (2020) Multi-planar 3D breast segmentation in MRI via deep convolutional neural networks. Artif Intell Med 103:101781","journal-title":"Artif Intell Med"},{"key":"7235_CR22","doi-asserted-by":"crossref","unstructured":"Prangemeier T, Reich C, Koeppl H (2020) attention-based transformers for instance segmentation of cells in microstructures. In: 2020 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE, pp 700\u2013707","DOI":"10.1109\/BIBM49941.2020.9313305"},{"key":"7235_CR23","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1007\/3-540-49430-8_3","volume-title":"Early stopping-but when? [M]\/\/Neural Networks: Tricks of the trade","author":"L Prechelt","year":"1998","unstructured":"Prechelt L (1998) Early stopping-but when? [M]\/\/Neural Networks: Tricks of the trade. Springer, Berlin, Heidelberg, pp 55\u201369"},{"key":"7235_CR24","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. 
Springer, Cham, pp 234\u2013241","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"7235_CR25","first-page":"379","volume-title":"Tversky loss function for image segmentation using 3D fully convolutional deep networks[C]\/\/International workshop on machine learning in medical imaging","author":"SSM Salehi","year":"2017","unstructured":"Salehi SSM, Erdogmus D, Gholipour A (2017) Tversky loss function for image segmentation using 3D fully convolutional deep networks[C]\/\/International workshop on machine learning in medical imaging. Springer, Cham, pp 379\u2013387"},{"key":"7235_CR26","doi-asserted-by":"publisher","DOI":"10.1007\/s00500-022-06830-5","author":"MN Uddin","year":"2022","unstructured":"Uddin MN, Li B, Ali Z et al (2022) Software defect prediction employing BiLSTM and BERT-based semantic feature. Soft Comput. https:\/\/doi.org\/10.1007\/s00500-022-06830-5","journal-title":"Soft Comput"},{"key":"7235_CR27","unstructured":"Vaswani A, Shazeer N, Parmar N, et al (2017) Attention is all you need. arXiv preprint arXiv:1706.03762"},{"issue":"6","key":"7235_CR28","doi-asserted-by":"publisher","first-page":"1567","DOI":"10.1109\/TBME.2018.2875955","volume":"66","author":"D Wei","year":"2018","unstructured":"Wei D, Weinstein S, Hsieh MK et al (2018) Three-dimensional whole breast segmentation in sagittal and axial breast MRI with dense depth field modeling and localized self-adaptation for chest-wall line detection. 
IEEE Trans Biomed Eng 66(6):1567\u20131579","journal-title":"IEEE Trans Biomed Eng"},{"key":"7235_CR29","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee J Y, et al (2018) Cbam: convolutional block attention module[C]\/\/Proceedings of the European conference on computer vision (ECCV), pp 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"7235_CR30","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1038\/s41523-021-00247-3","volume":"7","author":"J Xiao","year":"2021","unstructured":"Xiao J, Rahbar H, Hippe DS et al (2021) Dynamic contrast-enhanced breast MRI features correlate with invasive breast cancer angiogenesis. NPJ Breast Cancer 7:42","journal-title":"NPJ Breast Cancer"},{"issue":"2","key":"7235_CR31","doi-asserted-by":"publisher","first-page":"435","DOI":"10.1109\/TMI.2018.2865671","volume":"38","author":"J Zhang","year":"2018","unstructured":"Zhang J, Saha A, Zhu Z et al (2018) Hierarchical convolutional neural networks for segmentation of breast tumors in MRI with application to radiogenomics. IEEE Trans Med Imaging 38(2):435\u2013447","journal-title":"IEEE Trans Med Imaging"},{"key":"7235_CR32","doi-asserted-by":"publisher","DOI":"10.1007\/s00500-021-06449-y","author":"K Zhang","year":"2021","unstructured":"Zhang K, Shi Y, Hu C et al (2021) Nucleus image segmentation method based on GAN and FCN model. Soft Comput. https:\/\/doi.org\/10.1007\/s00500-021-06449-y","journal-title":"Soft Comput"},{"key":"7235_CR33","doi-asserted-by":"crossref","unstructured":"Zheng S, Lu J, Zhao H, et al (2020) Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. 
arXiv preprint arXiv:2012.15840","DOI":"10.1109\/CVPR46437.2021.00681"}],"container-title":["Soft Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00500-022-07235-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00500-022-07235-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00500-022-07235-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,9]],"date-time":"2023-02-09T12:10:59Z","timestamp":1675944659000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00500-022-07235-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,25]]},"references-count":33,"journal-issue":{"issue":"17","published-print":{"date-parts":[[2022,9]]}},"alternative-id":["7235"],"URL":"https:\/\/doi.org\/10.1007\/s00500-022-07235-0","relation":{},"ISSN":["1432-7643","1433-7479"],"issn-type":[{"value":"1432-7643","type":"print"},{"value":"1433-7479","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,25]]},"assertion":[{"value":"17 May 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 June 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}