{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T16:13:42Z","timestamp":1772727222929,"version":"3.50.1"},"reference-count":46,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,3,23]],"date-time":"2024-03-23T00:00:00Z","timestamp":1711152000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,3,23]],"date-time":"2024-03-23T00:00:00Z","timestamp":1711152000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"National Key Research and Development Program of China under Grant","award":["2021YFE0102100"],"award-info":[{"award-number":["2021YFE0102100"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62172002"],"award-info":[{"award-number":["62172002"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62303014"],"award-info":[{"award-number":["62303014"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100017128","name":"Science Fund for Distinguished Young Scholars of Anhui Province","doi-asserted-by":"publisher","award":["2308085QF225"],"award-info":[{"award-number":["2308085QF225"]}],"id":[{"id":"10.13039\/100017128","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. 
Syst."],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Although deep learning methods have demonstrated impressive performance and are widely used in histopathological image analysis and diagnosis, existing work cannot fully extract the information in breast cancer images because of the limited resolution of histopathological images. In this study, we construct DMBANet, a network built on a novel intermediate-layer structure that extracts as much feature information as possible from the input image by up-dimensioning the intermediate convolutional layers, thereby improving network performance. Furthermore, we apply depthwise separable convolution to the Spindle Structure, decoupling its intermediate convolutional layers and convolving them separately, which significantly reduces the number of parameters and the computation of the Spindle Structure and improves overall network speed. We also design the Spindle Structure as a multi-branch model and add different attention mechanisms to different branches. This design effectively improves network performance: the branches with attention extract richer and more focused feature information, and the branch with residual connections mitigates the degradation phenomenon in our network and speeds up network optimization. Comprehensive experiments show the superior performance of DMBANet compared to state-of-the-art methods, achieving about 98% classification accuracy. 
The code is available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/Nagi-Dr\/DMBANet-main\">https:\/\/github.com\/Nagi-Dr\/DMBANet-main<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s40747-024-01398-z","type":"journal-article","created":{"date-parts":[[2024,3,23]],"date-time":"2024-03-23T06:35:07Z","timestamp":1711175707000},"page":"4571-4587","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":14,"title":["A deep multi-branch attention model for histopathological breast cancer image classification"],"prefix":"10.1007","volume":"10","author":[{"given":"Rui","family":"Ding","sequence":"first","affiliation":[]},{"given":"Xiaoping","family":"Zhou","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3960-8852","authenticated-orcid":false,"given":"Dayu","family":"Tan","sequence":"additional","affiliation":[]},{"given":"Yansen","family":"Su","sequence":"additional","affiliation":[]},{"given":"Chao","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Guo","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Chunhou","family":"Zheng","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,3,23]]},"reference":[{"issue":"1","key":"1398_CR1","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1038\/s41746-020-00376-2","volume":"4","author":"A Esteva","year":"2021","unstructured":"Esteva A, Chou K, Yeung S et al (2021) Deep learning-enabled medical computer vision. 
NPJ Digit Med 4(1):5","journal-title":"NPJ Digit Med"},{"issue":"1","key":"1398_CR2","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1038\/s41746-021-00438-z","volume":"4","author":"R Aggarwal","year":"2021","unstructured":"Aggarwal R, Sounderajah V, Martin G et al (2021) Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 4(1):65","journal-title":"NPJ Digit Med"},{"issue":"7","key":"1398_CR3","doi-asserted-by":"publisher","first-page":"1455","DOI":"10.1109\/TBME.2015.2496264","volume":"63","author":"FA Spanhol","year":"2015","unstructured":"Spanhol FA, Oliveira LS, Petitjean C, Heutte L (2015) A dataset for breast cancer histopathological image classification. Proc IEEE Trans Biomed Eng 63(7):1455\u20131462","journal-title":"Proc IEEE Trans Biomed Eng"},{"key":"1398_CR4","unstructured":"Wei B, Han Z, He X, Yin Y (2017) Deep learning model based breast cancer histopathological image classification. In: Proc IEEE 2nd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), pp 348\u2013353"},{"key":"1398_CR5","doi-asserted-by":"publisher","first-page":"52","DOI":"10.1016\/j.ymeth.2019.06.014","volume":"173","author":"R Yan","year":"2020","unstructured":"Yan R, Ren F, Wang Z, Wang L, Zhang T, Liu Y, Rao X, Zheng C, Zhang F (2020) Breast cancer histopathological image classification using a hybrid deep neural network. Methods 173:52\u201360","journal-title":"Methods"},{"key":"1398_CR6","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the Inception Architecture for Computer Vision. In: Proc IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"1398_CR7","doi-asserted-by":"crossref","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-Excitation Networks. 
In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 7132\u20137141","DOI":"10.1109\/CVPR.2018.00745"},{"issue":"6","key":"1398_CR8","doi-asserted-by":"publisher","first-page":"1930","DOI":"10.1109\/TMI.2019.2962013","volume":"39","author":"B Xu","year":"2020","unstructured":"Xu B, Liu J, Hou X, Liu B, Garibaldi J et al (2020) Attention by selection: a deep selective attention approach to breast cancer classification. Proc IEEE Trans Med Imaging 39(6):1930\u20131941","journal-title":"Proc IEEE Trans Med Imaging"},{"issue":"9","key":"1398_CR9","doi-asserted-by":"publisher","first-page":"2092","DOI":"10.1109\/TMI.2019.2893944","volume":"38","author":"J Zhang","year":"2019","unstructured":"Zhang J, Xie Y, Xia Y, Shen C (2019) Attention residual learning for skin lesion classification. Proc IEEE Trans Med Imaging 38(9):2092\u20132103","journal-title":"Proc IEEE Trans Med Imaging"},{"issue":"9","key":"1398_CR10","doi-asserted-by":"publisher","first-page":"2354","DOI":"10.1109\/TMI.2021.3077079","volume":"40","author":"W Zhu","year":"2021","unstructured":"Zhu W, Sun L, Huang J, Han L, Zhang D (2021) Dual attention multi-instance deep learning for Alzheimer\u2019s disease diagnosis with structural MRI. Proc IEEE Trans Med Imaging 40(9):2354\u20132366","journal-title":"Proc IEEE Trans Med Imaging"},{"issue":"2","key":"1398_CR11","doi-asserted-by":"publisher","first-page":"699","DOI":"10.1109\/TMI.2020.3035253","volume":"40","author":"R Gu","year":"2021","unstructured":"Gu R et al (2021) CA-Net: comprehensive attention convolutional neural networks for explainable medical image segmentation. 
Proc IEEE Trans Med Imaging 40(2):699\u2013711","journal-title":"Proc IEEE Trans Med Imaging"},{"issue":"6","key":"1398_CR12","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A Krizhevsky","year":"2017","unstructured":"Krizhevsky A, Sutskever I, Hinton G (2017) Imagenet classification with deep convolutional neural networks. Commun ACM 60(6):84\u201390","journal-title":"Commun ACM"},{"key":"1398_CR13","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) ImageNet: a large-scale hierarchical image database. In: Proc IEEE Conference on Computer Vision and Pattern Recognition, pp 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"1398_CR14","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely Connected Convolutional Networks. In: Proc IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2261\u20132269","DOI":"10.1109\/CVPR.2017.243"},{"key":"1398_CR15","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Doll\u00e1r P et al (2017) Feature Pyramid Networks for Object Detection. In: Proc IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 936\u2013944","DOI":"10.1109\/CVPR.2017.106"},{"key":"1398_CR16","unstructured":"Tan D et al (2023) Large-scale data-driven optimization in deep modeling with an intelligent decision-making mechanism. In: Proc IEEE Transactions on Cybernetics"},{"key":"1398_CR17","doi-asserted-by":"crossref","unstructured":"Tan D et al (2023) Deep adaptive fuzzy clustering for evolutionary unsupervised representation learning. In: Proc IEEE Transactions on Neural Networks and Learning Systems","DOI":"10.1109\/TNNLS.2023.3243666"},{"key":"1398_CR18","doi-asserted-by":"crossref","unstructured":"Chollet F (2017) Xception: Deep learning with depthwise separable convolutions. 
In: Proc IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2017.195"},{"key":"1398_CR19","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proc IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"1398_CR20","unstructured":"Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: Efficient convolutional neural networks for mobile vision applications, arXiv preprint arXiv:1704.04861"},{"key":"1398_CR21","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) MobileNetV2: Inverted residuals and linear bottlenecks. In: Proc IEEE Conference on Computer Vision and Pattern Recognition, pp 4510\u20134520","DOI":"10.1109\/CVPR.2018.00474"},{"key":"1398_CR22","doi-asserted-by":"crossref","unstructured":"Ma N, Zhang X, Zheng H, Sun J (2018) Shufflenet v2: Practical guidelines for efficient CNN architecture design. In: Proc European conference on computer vision (ECCV), pp 116\u2013131","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"1398_CR23","unstructured":"Tan M, Le Q (2019) EfficientNet: Rethinking model scaling for convolutional neural networks. In: Proc International Conference on Machine Learning, pp 6105\u20136114"},{"key":"1398_CR24","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X et al (2021) An image is worth 16 x 16 words: transformers for image recognition at scale. In: International Conference on Learning Representations"},{"key":"1398_CR25","doi-asserted-by":"crossref","unstructured":"Liu Z, Mao H, Wu C-Y, Feichtenhofer C, Darrell T, Xie S (2022) A ConvNet for the 2020s. 
In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 11966\u201311976","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"1398_CR26","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin Transformer: hierarchical vision transformer using shifted windows. In: Proc IEEE Conference on Computer Vision and Pattern Recognition, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1398_CR27","doi-asserted-by":"crossref","unstructured":"Fu J, Liu J, Tian H, Li Y, Bao Y, Fang Z, Lu H (2019) Dual attention network for scene segmentation. In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June","DOI":"10.1109\/CVPR.2019.00326"},{"key":"1398_CR28","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. In: Proc. 30th Int. Adv. Neural Inf. Neural Inf. Process. Syst"},{"key":"1398_CR29","doi-asserted-by":"crossref","unstructured":"Xie S et al (2017) Aggregated Residual Transformations for Deep Neural Networks. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 5987\u20135995","DOI":"10.1109\/CVPR.2017.634"},{"key":"1398_CR30","doi-asserted-by":"crossref","unstructured":"Han D, Kim J, Kim J (2017) Deep pyramidal residual networks. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 6307\u20136315","DOI":"10.1109\/CVPR.2017.668"},{"key":"1398_CR31","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee J-Y, Kweon IS (2018) CBAM: Convolutional block attention module. In: Proc. European Conference on Computer Vision (ECCV), pp 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"1398_CR32","unstructured":"Reddi SJ, Kale S, Kumar S (2019) On the Convergence of Adam and Beyond. 
arXiv preprint arXiv:1904.09237"},{"key":"1398_CR33","unstructured":"Wilson AC, Roelofs R, Stern M, Srebro N, Recht B (2017) The Marginal Value of Adaptive Gradient Methods in Machine Learning. arXiv preprint arXiv:1705.08292"},{"key":"1398_CR34","unstructured":"Keskar NS, Socher R (2017) Improving Generalization Performance by Switching from Adam to SGD. arXiv preprint arXiv:1712.07628"},{"key":"1398_CR35","unstructured":"Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning. PMLR"},{"issue":"1","key":"1398_CR36","doi-asserted-by":"publisher","first-page":"71","DOI":"10.1109\/TAI.2021.3074106","volume":"2","author":"Q Liu","year":"2021","unstructured":"Liu Q, Li D, Ge SS, Ouyang Z (2021) Adaptive feedforward neural network control with an optimized hidden node distribution. Proc IEEE Trans Artif Intell 2(1):71\u201382","journal-title":"Proc IEEE Trans Artif Intell"},{"key":"1398_CR37","doi-asserted-by":"crossref","unstructured":"Howard A et al (2019) Searching for MobileNetV3. In: Proc IEEE\/CVF International Conference on Computer Vision (ICCV), pp 1314\u20131324","DOI":"10.1109\/ICCV.2019.00140"},{"key":"1398_CR38","doi-asserted-by":"crossref","unstructured":"Zhang X, Zhou X, Lin M, Sun J (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp 6848\u20136856","DOI":"10.1109\/CVPR.2018.00716"},{"issue":"3","key":"1398_CR39","doi-asserted-by":"publisher","first-page":"331","DOI":"10.1007\/s41095-022-0271-y","volume":"8","author":"M Guo","year":"2022","unstructured":"Guo M, Xu T, Liu J et al (2022) Attention mechanisms in computer vision: a survey. 
Comput Vis Media 8(3):331\u201368","journal-title":"Comput Vis Media"},{"key":"1398_CR40","doi-asserted-by":"crossref","unstructured":"Yang J, Zheng W-S, Yang Q, Chen Y-C, Tian Q (2020) Spatial-temporal graph convolutional network for video-based person re-identification. In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp 3289\u20133299","DOI":"10.1109\/CVPR42600.2020.00335"},{"key":"1398_CR41","doi-asserted-by":"crossref","unstructured":"Ding X et al (2021) RepVGG: Making VGG-style ConvNets Great Again. In: Proc IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 13728\u201313737","DOI":"10.1109\/CVPR46437.2021.01352"},{"key":"1398_CR42","doi-asserted-by":"crossref","unstructured":"Hou L, Samaras D et al (2016) Patch-based convolutional neural network for whole slide tissue image classification. In: Proc IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR.2016.266"},{"issue":"1","key":"1398_CR43","doi-asserted-by":"publisher","first-page":"3358","DOI":"10.1038\/s41598-019-40041-7","volume":"9","author":"JW Wei","year":"2019","unstructured":"Wei JW, Tafe LJ et al (2019) Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci Rep 9(1):3358","journal-title":"Sci Rep"},{"key":"1398_CR44","doi-asserted-by":"crossref","unstructured":"Vente CD et al (2022) Automated COVID-19 grading with convolutional neural networks in computed tomography scans: a systematic comparison. In: Proc IEEE Transactions on Artificial Intelligence, vol. 3, no. 2, pp 129\u2013138","DOI":"10.1109\/TAI.2021.3115093"},{"key":"1398_CR45","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2023.3301067","author":"B Pan","year":"2023","unstructured":"Pan B, Li C, Che H, Leung M-F, Yu K (2023) Low-rank tensor regularized graph fuzzy learning for multi-view data processing. Proc IEEE Trans Consum Electron. 
https:\/\/doi.org\/10.1109\/TCE.2023.3301067","journal-title":"Proc IEEE Trans Consum Electron"},{"key":"1398_CR46","unstructured":"Tan MX, Le Q (2021) Efficientnetv2: smaller models and faster training. In: International conference on machine learning. PMLR"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01398-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01398-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01398-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,16]],"date-time":"2024-05-16T18:32:01Z","timestamp":1715884321000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01398-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,23]]},"references-count":46,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1398"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01398-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,23]]},"assertion":[{"value":"3 November 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 February 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 March 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}