{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,7]],"date-time":"2026-01-07T07:57:45Z","timestamp":1767772665773,"version":"3.37.3"},"reference-count":21,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2021,3,31]],"date-time":"2021-03-31T00:00:00Z","timestamp":1617148800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,3,31]],"date-time":"2021-03-31T00:00:00Z","timestamp":1617148800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Ultrasonic image examination is the first choice for the diagnosis of thyroid papillary carcinoma. However, ultrasonic images of thyroid papillary carcinoma suffer from poor definition, tissue overlap, and low resolution, which make them difficult to diagnose. The capsule network (CapsNet) can effectively address tissue overlap and related problems. This paper investigates a new network model based on the capsule network, named the ResCaps network. The ResCaps network uses residual modules to enhance the abstract expression of the model. 
The experimental results reveal that the characteristic classification accuracy of the ResCaps3 network model on a self-made data set of thyroid papillary carcinoma was <jats:inline-formula><jats:alternatives><jats:tex-math>$$81.06\\%$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>81.06<\/mml:mn>\n                    <mml:mo>%<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula>. Furthermore, the Fashion-MNIST data set is also tested to show the reliability and validity of the ResCaps network model. Notably, the ResCaps network model not only significantly improves the accuracy of CapsNet, but also provides an effective method for classifying the lesion characteristics of thyroid papillary carcinoma ultrasonic images.<\/jats:p>","DOI":"10.1007\/s40747-021-00347-4","type":"journal-article","created":{"date-parts":[[2021,3,31]],"date-time":"2021-03-31T22:45:40Z","timestamp":1617230740000},"page":"1865-1873","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["ResCaps: an improved capsule network and its application in ultrasonic image classification of thyroid papillary 
carcinoma"],"prefix":"10.1007","volume":"8","author":[{"given":"Xiongzhi","family":"Ai","sequence":"first","affiliation":[]},{"given":"Jiawei","family":"Zhuang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0051-7224","authenticated-orcid":false,"given":"Yonghua","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Pin","family":"Wan","sequence":"additional","affiliation":[]},{"given":"Yu","family":"Fu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,3,31]]},"reference":[{"issue":"18","key":"347_CR1","doi-asserted-by":"publisher","first-page":"2164","DOI":"10.1001\/jama.295.18.2164","volume":"295","author":"L Davies","year":"2006","unstructured":"Davies L, Welch H (2006) Increasing incidence of thyroid cancer in the United States. JAMA 295(18):2164\u20132167","journal-title":"JAMA"},{"issue":"6","key":"347_CR2","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1109\/MSP.2012.2205597","volume":"29","author":"G Hinton","year":"2012","unstructured":"Hinton G, Deng L, Yu D, Dahl GE, Mohamed A, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, Kingsbury B (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82\u201397","journal-title":"IEEE Signal Process Mag"},{"key":"347_CR3","first-page":"3856","volume":"2017","author":"S Sabour","year":"2017","unstructured":"Sabour S, Frosst N, Hinton G, Hinton GE (2017) Dynamic routing between capsules. Adv Neural Inf Process Syst 2017:3856\u20133866","journal-title":"Adv Neural Inf Process Syst"},{"key":"347_CR4","first-page":"82","volume":"2017","author":"W Ke","year":"2017","unstructured":"Ke W, Wang Y, Wan P, Liu W, Li H (2017) An ultrasonic image recognition method for papillary thyroid carcinoma based on depth convolution neural network. 
Neural Inf Process 2017:82\u201391","journal-title":"Neural Inf Process"},{"issue":"1","key":"347_CR5","doi-asserted-by":"publisher","first-page":"6600","DOI":"10.1038\/s41598-018-25005-7","volume":"8","author":"H Li","year":"2018","unstructured":"Li H, Weng J, Shi Y, Gu W, Mao Y, Wang Y, Liu W, Zhang J (2018) An improved deep learning approach for detection of thyroid papillary cancer in ultrasound images. Sci Rep 8(1):6600","journal-title":"Sci Rep"},{"key":"347_CR6","doi-asserted-by":"crossref","unstructured":"Sarraf S, Tofighi G (2016) Deep learning-based pipeline to recognize Alzheimer\u2019s disease using fMRI data. In: Future technologies conference","DOI":"10.1101\/066910"},{"issue":"6","key":"347_CR7","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1109\/MSP.2012.2211477","volume":"29","author":"L Deng","year":"2012","unstructured":"Deng L (2012) The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process Mag 29(6):141\u2013142","journal-title":"IEEE Signal Process Mag"},{"key":"347_CR8","unstructured":"Geoffrey E H, Sara S, Nicholas F (2018) Matrix capsules with EM routing. In: Conference on learning representations"},{"key":"347_CR9","doi-asserted-by":"crossref","unstructured":"Mobiny A, Van Nguyen H (2018) Fast CapsNet for lung cancer screening. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, 2018, pp 741\u2013749","DOI":"10.1007\/978-3-030-00934-2_82"},{"key":"347_CR10","doi-asserted-by":"crossref","unstructured":"Shruthi Bhamidi SB, El-Sharkawy M (2019) Residual capsule network. In: IEEE 10th Annual ubiquitous computing, electronics and mobile communication conference (UEMCON). New York, NY, USA 2019, pp 557\u2013560","DOI":"10.1109\/UEMCON47517.2019.8993019"},{"key":"347_CR11","doi-asserted-by":"crossref","unstructured":"Shruthi Bhamidi SB, El-Sharkawy M (2020) 3-Level residual capsule network for complex datasets. 
In: IEEE 11th Latin American symposium on circuits and systems (LASCAS). San Jose, Costa Rica 2020, pp 1\u20134","DOI":"10.1109\/LASCAS45839.2020.9068990"},{"key":"347_CR12","first-page":"12","volume":"2018","author":"K Qiao","year":"2018","unstructured":"Qiao K, Zhang C, Wang L, Chen J, Zeng L, Tong L, Bi Y (2018) Accurate reconstruction of image stimuli from human fMRI based on the decoding model with capsule network architecture. Front Neuroinform 2018:12","journal-title":"Front Neuroinform"},{"key":"347_CR13","unstructured":"Lalonde R, Bagci U (2018) Capsules for object segmentation. arXiv preprint arXiv:1804.04241"},{"key":"347_CR14","doi-asserted-by":"crossref","unstructured":"Jaiswal A, AbdAlmageed W, Wu Y, Premkumar N (2018) Capsulegan: generative adversarial capsule network. In: Proceedings of the European conference on computer vision (ECCV) workshops","DOI":"10.1007\/978-3-030-11015-4_38"},{"key":"347_CR15","unstructured":"Arjovsky M, Chintala S, Bottou l (2017) Wasserstein gan. arXiv preprint arXiv:1701.07875"},{"key":"347_CR16","doi-asserted-by":"crossref","unstructured":"Rajasegaran J, Jayasundara V, Jayasekara S, Jayasekara, H, Seneviratne S, Rodrigo R (2020), DeepCaps: going deeper with capsule networks. In: 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR) (2019), pp 10725\u201310733","DOI":"10.1109\/CVPR.2019.01098"},{"key":"347_CR17","unstructured":"Ho-Phuoc T (2018) CIFAR10 to compare visual recognition performance between deep neural networks and humans. In: Computer vision and pattern recognition"},{"key":"347_CR18","unstructured":"Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747"},{"issue":"7","key":"347_CR19","first-page":"44","volume":"27","author":"S Wang","year":"2008","unstructured":"Wang S, Yang K (2008) Research and implementation of image scaling algorithm based on bilinear interpolation. 
Autom Technol Appl 27(7):44\u201345","journal-title":"Autom Technol Appl"},{"key":"347_CR20","doi-asserted-by":"crossref","unstructured":"Kun B, Feifei H, Cheng W (2011) An image correction method of Fisheye lens based on bilinear interpolation. In: Fourth international conference on intelligent computation technology and automation 2011, pp 428\u2013431","DOI":"10.1109\/ICICTA.2011.391"},{"issue":"9","key":"347_CR21","first-page":"2579","volume":"9","author":"L Maaten","year":"2008","unstructured":"Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9(9):2579\u20132605","journal-title":"J Mach Learn Res"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-021-00347-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-021-00347-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-021-00347-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,5,30]],"date-time":"2022-05-30T01:06:37Z","timestamp":1653872797000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-021-00347-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,3,31]]},"references-count":21,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022,6]]}},"alternative-id":["347"],"URL":"https:\/\/doi.org\/10.1007\/s40747-021-00347-4","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2021,3,31]]},"assertion":[{"value":"18 October 
2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 March 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 March 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All the authors have approved the manuscript for publication, and no conflict of interest exists.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}