{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T15:45:48Z","timestamp":1774453548336,"version":"3.50.1"},"reference-count":41,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,11,18]],"date-time":"2024-11-18T00:00:00Z","timestamp":1731888000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,11,18]],"date-time":"2024-11-18T00:00:00Z","timestamp":1731888000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["G5000220192"],"award-info":[{"award-number":["G5000220192"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017596","name":"Natural Science Basic Research Program of Shaanxi Province","doi-asserted-by":"publisher","award":["2022JM-206"],"award-info":[{"award-number":["2022JM-206"]}],"id":[{"id":"10.13039\/501100017596","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Xi\u2019an Science and Technology planning project","award":["21RGZN0008"],"award-info":[{"award-number":["21RGZN0008"]}]},{"DOI":"10.13039\/501100001809","name":"he National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61603233"],"award-info":[{"award-number":["61603233"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. 
Syst."],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Capsule networks overcome the two drawbacks of convolutional neural networks: weak rotated object recognition and poor spatial discrimination. However, they still have encountered problems with complex images, including high computational cost and limited accuracy. To address these challenges, this work has developed effective solutions. Specifically, a novel windowed dynamic up-and-down attention routing process is first introduced, which can effectively reduce the computational complexity from quadratic to linear order. A novel deconvolution-based decoder is also used to further reduce the computational complexity. Then, a novel LayerNorm strategy is used to pre-process neuron values in the squash function. This prevents saturation and mitigates the gradient vanishing problem. In addition, a novel gradient-friendly network structure is developed to facilitate the extraction of complex features with deeper networks. 
Experiments show that our methods are effective and competitive, outperforming existing techniques.<\/jats:p>","DOI":"10.1007\/s40747-024-01640-8","type":"journal-article","created":{"date-parts":[[2024,11,18]],"date-time":"2024-11-18T03:37:15Z","timestamp":1731901035000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Enhancing classification efficiency in capsule networks through windowed routing: tackling gradient vanishing, dynamic routing, and computational complexity challenges"],"prefix":"10.1007","volume":"11","author":[{"given":"Gangqi","family":"Chen","sequence":"first","affiliation":[]},{"given":"Zhaoyong","family":"Mao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6563-9206","authenticated-orcid":false,"given":"Junge","family":"Shen","sequence":"additional","affiliation":[]},{"given":"Dongdong","family":"Hou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,11,18]]},"reference":[{"key":"1640_CR1","doi-asserted-by":"publisher","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2017) Imagenet classification with deep convolutional neural networks. Commun ACM 60(6):84\u201390. https:\/\/doi.org\/10.1145\/3065386","DOI":"10.1145\/3065386"},{"key":"1640_CR2","doi-asserted-by":"publisher","first-page":"1865","DOI":"10.1007\/s40747021003474","volume":"8","author":"X Ai","year":"2022","unstructured":"Ai X, Zhuang J, Wang Y et al (2022) Rescaps: an improved capsule network and its application in ultrasonic image classification of thyroid papillary carcinoma. Complex Intell Syst 8:1865\u20131873. 
https:\/\/doi.org\/10.1007\/s40747021003474","journal-title":"Complex Intell Syst"},{"key":"1640_CR3","doi-asserted-by":"publisher","first-page":"2651","DOI":"10.1007\/s40747021003189","volume":"9","author":"G Kalyani","year":"2023","unstructured":"Kalyani G, Janakiramaiah B, Karuna A et al (2023) Diabetic retinopathy detection and classification using capsule networks. Complex Intell Syst 9:2651\u20132664. https:\/\/doi.org\/10.1007\/s40747021003189","journal-title":"Complex Intell Syst"},{"key":"1640_CR4","unstructured":"Vaswani A, Shazeer N, Parmar N, et\u00a0al (2017) Attention is all you need. arXiv: 1706.03762"},{"key":"1640_CR5","unstructured":"Park N, Kim S (2022) How do vision transformers work? arXiv:2202.06709"},{"key":"1640_CR6","unstructured":"Sabour S, Frosst N, Hinton GE (2017) Dynamic routing between capsules. arXiv:1710.09829"},{"key":"1640_CR7","doi-asserted-by":"publisher","unstructured":"Zhu K, Chen Y, Ghamisi P et al (2019) Deep convolutional capsule network for hyperspectral image spectral and spectral-spatial classification. Remote Sens. https:\/\/doi.org\/10.3390\/rs11030223, https:\/\/www.mdpi.com\/2072-4292\/11\/3\/223","DOI":"10.3390\/rs11030223"},{"key":"1640_CR8","doi-asserted-by":"publisher","unstructured":"Peer D, Stabinger S, Rodr\u00edguez-S\u00e1nchez A (2021) Limitation of capsule networks. Pattern Recogn Lett 144:68\u201374. https:\/\/doi.org\/10.1016\/j.patrec.2021.01.017, https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0167865521000301","DOI":"10.1016\/j.patrec.2021.01.017"},{"issue":"12","key":"1640_CR9","doi-asserted-by":"publisher","first-page":"1850","DOI":"10.1109\/LSP.2018.2873892","volume":"25","author":"C Xiang","year":"2018","unstructured":"Xiang C, Zhang L, Tang Y et al (2018) Ms-capsnet: A novel multi-scale capsule network. IEEE Signal Process Lett 25(12):1850\u20131854. 
https:\/\/doi.org\/10.1109\/LSP.2018.2873892","journal-title":"IEEE Signal Process Lett"},{"issue":"5","key":"1640_CR10","doi-asserted-by":"publisher","first-page":"4229","DOI":"10.1007\/s11063-022-10806-9","volume":"54","author":"X Jia","year":"2022","unstructured":"Jia X, Li J, Zhao B et al (2022) Res-capsnet: Residual capsule network for data classification. Neural Process Lett 54(5):4229\u20134245. https:\/\/doi.org\/10.1007\/s11063-022-10806-9","journal-title":"Neural Process Lett"},{"key":"1640_CR11","doi-asserted-by":"publisher","unstructured":"Zhuoran S, Mingyuan Z, Haiyu Z, et\u00a0al (2021) Efficient attention: Attention with linear complexities. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pp 3530\u20133538, https:\/\/doi.org\/10.1109\/WACV48630.2021.00357","DOI":"10.1109\/WACV48630.2021.00357"},{"key":"1640_CR12","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, et\u00a0al (2016) Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 770\u2013778. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"1640_CR13","unstructured":"Ba JL, Kiros JR, Hinton GE (2016) Layer normalization. arXiv:1607.06450"},{"key":"1640_CR14","doi-asserted-by":"publisher","unstructured":"Rajasegaran J, Jayasundara V, Jayasekara S, et\u00a0al (2019) Deepcaps: Going deeper with capsule networks. In: 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 10,717\u201310,725, https:\/\/doi.org\/10.1109\/CVPR.2019.01098","DOI":"10.1109\/CVPR.2019.01098"},{"key":"1640_CR15","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1007\/978-3-642-21735-7_6","volume-title":"Artificial Neural Networks and Machine Learning - ICANN 2011","author":"GE Hinton","year":"2011","unstructured":"Hinton GE, Krizhevsky A, Wang SD (2011) Transforming auto-encoders. 
In: Honkela T, Duch W, Girolami M et al (eds) Artificial Neural Networks and Machine Learning - ICANN 2011. Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 44\u201351"},{"key":"1640_CR16","unstructured":"Hinton GE, Sabour S, Frosst N (2018) Matrix capsules with EM routing. In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30\u2013May 3, 2018, Conference Track Proceedings. OpenReview.net, https:\/\/openreview.net\/forum?id=HJWLfGWRb"},{"issue":"6","key":"1640_CR17","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1109\/MSP.2012.2211477","volume":"29","author":"L Deng","year":"2012","unstructured":"Deng L (2012) The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process Magaz 29(6):141\u2013142. https:\/\/doi.org\/10.1109\/MSP.2012.2211477","journal-title":"IEEE Signal Process Magaz"},{"key":"1640_CR18","doi-asserted-by":"publisher","unstructured":"LeCun Y, Huang FJ, Bottou L (2004) Learning methods for generic object recognition with invariance to pose and lighting. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. pp II\u2013104 Vol.2, https:\/\/doi.org\/10.1109\/CVPR.2004.1315150","DOI":"10.1109\/CVPR.2004.1315150"},{"key":"1640_CR19","unstructured":"Krizhevsky A (2009) Learning multiple layers of features from tiny images. https:\/\/api.semanticscholar.org\/CorpusID:18268744"},{"key":"1640_CR20","unstructured":"Lenssen JE, Fey M, Libuschewski P (2018) Group equivariant capsule networks. arXiv:1806.05086"},{"issue":"5","key":"1640_CR21","doi-asserted-by":"publisher","first-page":"5587","DOI":"10.1007\/s1048902203849x","volume":"53","author":"H Zhou","year":"2023","unstructured":"Zhou H, Zhang C, Zhang X et al (2023) Image classification based on quaternionvalued capsule network. Appl Intell 53(5):5587\u20135606. 
https:\/\/doi.org\/10.1007\/s1048902203849x","journal-title":"Appl Intell"},{"key":"1640_CR22","doi-asserted-by":"publisher","first-page":"85,492","DOI":"10.1109\/ACCESS.2019.2924548","volume":"7","author":"X Cheng","year":"2019","unstructured":"Cheng X, He J, He J et al (2019) Cv-capsnet: Complex-valued capsule network. IEEE Access 7:85,492-85,499. https:\/\/doi.org\/10.1109\/ACCESS.2019.2924548","journal-title":"IEEE Access"},{"key":"1640_CR23","doi-asserted-by":"publisher","unstructured":"Choi J, Seo H, Im S, et\u00a0al. (2019) Attention routing between capsules. In: 2019 IEEE\/CVF International Conference on Computer Vision Workshop (ICCVW), pp 1981\u20131989. https:\/\/doi.org\/10.1109\/ICCVW.2019.00247","DOI":"10.1109\/ICCVW.2019.00247"},{"key":"1640_CR24","unstructured":"Zhao Z, Kleinhans A, Sandhu G, et\u00a0al (2019) Capsule networks with max-min normalization. arXiv: 1903.09662"},{"key":"1640_CR25","doi-asserted-by":"crossref","unstructured":"Edraki M, Rahnavard N, Shah M (2020) Subspace capsule network. arXiv:2002.02924","DOI":"10.1609\/aaai.v34i07.6703"},{"issue":"3","key":"1640_CR26","doi-asserted-by":"publisher","first-page":"3066","DOI":"10.1007\/s10489-021-02630-w","volume":"52","author":"G Sun","year":"2022","unstructured":"Sun G, Ding S, Sun T et al (2022) A novel dense capsule network based on dense capsule layers. Appl Intell 52(3):3066\u20133076. https:\/\/doi.org\/10.1007\/s10489-021-02630-w","journal-title":"Appl Intell"},{"issue":"5","key":"1640_CR27","doi-asserted-by":"publisher","first-page":"4229","DOI":"10.1007\/s11063-022-10806-9","volume":"54","author":"X Jia","year":"2022","unstructured":"Jia X, Li J, Zhao B et al (2022) Res-capsnet: Residual capsule network for data classification. Neural Process Lett 54(5):4229\u20134245. https:\/\/doi.org\/10.1007\/s11063-022-10806-9","journal-title":"Neural Process Lett"},{"key":"1640_CR28","unstructured":"Nair P, Doshi R, Keselj S (2021) Pushing the limits of capsule networks. 
arXiv:2103.08074"},{"key":"1640_CR29","unstructured":"Phaye SSR, Sikka A, Dhall A, et\u00a0al (2018) Dense and diverse capsule networks: Making the capsules learn better. arXiv: 1805.04001"},{"key":"1640_CR30","unstructured":"Agarap AF (2019) Deep learning using rectified linear units (relu). arXiv: 1803.08375"},{"key":"1640_CR31","doi-asserted-by":"crossref","unstructured":"Dubey SR, Singh SK, Chaudhuri BB (2022) Activation functions in deep learning: A comprehensive survey and benchmark. arXiv: 2109.14545","DOI":"10.1016\/j.neucom.2022.06.111"},{"key":"1640_CR32","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, et\u00a0al (2021) Swin transformer: Hierarchical vision transformer using shifted windows. arXiv:2103.14030","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1640_CR33","unstructured":"Xu B, Wang N, Chen T, et\u00a0al (2015) Empirical evaluation of rectified activations in convolutional network. arXiv:1505.00853"},{"key":"1640_CR34","doi-asserted-by":"publisher","unstructured":"Shruthi Bhamidi SB, El-Sharkawy M (2019). Residual capsule network. https:\/\/doi.org\/10.1109\/UEMCON47517.2019.8993019","DOI":"10.1109\/UEMCON47517.2019.8993019"},{"key":"1640_CR35","unstructured":"Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747"},{"key":"1640_CR36","doi-asserted-by":"publisher","DOI":"10.1007\/s1106302311155x","author":"S Abbaasi","year":"2023","unstructured":"Abbaasi S, GhiasiShirazi K, Harati A (2023) A multiprototype capsule network for image recognition with high intraclass variations. Neural Process Lett. https:\/\/doi.org\/10.1007\/s1106302311155x","journal-title":"Neural Process Lett"},{"key":"1640_CR37","doi-asserted-by":"publisher","first-page":"7895","DOI":"10.1007\/s0050002308018x","volume":"27","author":"J Zhang","year":"2023","unstructured":"Zhang J, Xu Q, Guo L et al (2023) A novel capsule network based on deep routing and residual learning. 
Soft Comput 27:7895\u20137906. https:\/\/doi.org\/10.1007\/s0050002308018x","journal-title":"Soft Comput"},{"key":"1640_CR38","doi-asserted-by":"publisher","first-page":"645","DOI":"10.1007\/s11265021017316","volume":"94","author":"P Shiri","year":"2022","unstructured":"Shiri P, Baniasadi A (2022) Convolutional fullyconnected capsule network (cfccapsnet): A novel and fast capsule network. J Signal Process Syst Signal Image Video Technol 94:645\u2013658. https:\/\/doi.org\/10.1007\/s11265021017316","journal-title":"J Signal Process Syst Signal Image Video Technol"},{"key":"1640_CR39","first-page":"208","volume":"48","author":"C Shan","year":"2022","unstructured":"Shan C, Rencheng S, Fengjing S et al (2022) Research and improvement of dynamic routing based on capsule network. Comput Eng 48:208\u2013214","journal-title":"Comput Eng"},{"key":"1640_CR40","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, van\u00a0der Maaten L, et\u00a0al (2018) Densely connected convolutional networks. arXiv: 1608.06993","DOI":"10.1109\/CVPR.2017.243"},{"issue":"3005\u20133008","key":"1640_CR41","first-page":"3039","volume":"38","author":"L Linsong","year":"2021","unstructured":"Linsong L, Minglei T, Dongliang W (2021) Sacapsnet: selfattention capsule network. 
Appl Res Comput 38(3005\u20133008):3039","journal-title":"Appl Res Comput"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01640-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01640-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01640-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,30]],"date-time":"2025-01-30T20:19:32Z","timestamp":1738268372000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01640-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,18]]},"references-count":41,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["1640"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01640-8","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,18]]},"assertion":[{"value":"26 October 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 August 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 November 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All the authors have approved the manuscript for publication, and no conflict of interest exists.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"45"}}