{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,29]],"date-time":"2025-11-29T16:22:39Z","timestamp":1764433359738},"reference-count":20,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,7,21]],"date-time":"2020-07-21T00:00:00Z","timestamp":1595289600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,7,21]],"date-time":"2020-07-21T00:00:00Z","timestamp":1595289600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["BMC Med Inform Decis Mak"],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n<jats:title>Background<\/jats:title>\n<jats:p>Radiation therapy requires precision to target and escalate the doses to affected regions while reducing the adjacent normal tissue exposed to high radiotherapy doses. Image guidance has become the start of the art in the treating process. Registering the digital radiographs megavoltage x ray (MV-DRs) and the kilovoltage digital reconstructed radiographs (KV-DRRs) is difficult because of the poor quality of MV-DRs. We simplify the problem by registering between landmarks instead of entire image information, thence we propose a model to estimate the landmark accurately.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Methods<\/jats:title>\n<jats:p>After doctors\u2019 analysis, it is proved that it is effective to register through several physiological features such as spinous process, tracheal bifurcation, Louis angle. We propose the LandmarkNet, a novel keypoint estimation architecture, can automatically detect keypoints in blurred medical images. 
The method applies the idea of the Feature Pyramid Network (FPN) twice, merging cross-scale and cross-layer features first for feature extraction and then for landmark estimation. Intermediate supervision at the end of the first FPN ensures that the lower-layer parameters are updated normally. The network produces a heatmap showing the approximate locations of landmarks, and accurate position estimates are obtained after non-maximum suppression (NMS).<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Results<\/jats:title>\n<jats:p>Our method obtains accurate landmark estimates on a dataset provided by several cancer hospitals and labeled by ourselves. The percentage of correct keypoints (PCK) within 8 pixels is 81.24%, 98.95% and 85.61% for the spinous process, tracheal bifurcation and Louis angle, respectively. For these three landmarks, the mean deviation between the predicted location and the corresponding ground truth is 2.38, 0.98 and 2.64 pixels, respectively.<\/jats:p>\n<\/jats:sec><jats:sec>\n<jats:title>Conclusion<\/jats:title>\n<jats:p>Landmark estimation based on LandmarkNet achieves high accuracy for different kinds of landmarks. Our model estimates the location of the tracheal bifurcation especially accurately because of its distinctive features. For the spinous process, our model performs well in estimating both the number of landmarks and their positions. 
Wide application of our method would assist doctors in image-guided radiotherapy (IGRT) and make truly precise treatment possible.<\/jats:p>\n<\/jats:sec>","DOI":"10.1186\/s12911-020-01164-4","type":"journal-article","created":{"date-parts":[[2020,7,21]],"date-time":"2020-07-21T20:02:47Z","timestamp":1595361767000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["LandmarkNet: a 2D digital radiograph landmark estimator for registration"],"prefix":"10.1186","volume":"20","author":[{"given":"Zhen","family":"Wang","sequence":"first","affiliation":[]},{"given":"Cong","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Longhua","family":"Ma","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,7,21]]},"reference":[{"issue":"11","key":"1164_CR1","doi-asserted-by":"publisher","first-page":"977","DOI":"10.1016\/S0262-8856(03)00137-9","volume":"21","author":"B Zitov\u00e1","year":"2003","unstructured":"Zitov\u00e1 B, Flusser J. Image registration methods: a survey. Image Vis Comput. 2003; 21(11):977\u20131000.","journal-title":"Image Vis Comput"},{"issue":"12","key":"1164_CR2","doi-asserted-by":"publisher","first-page":"762","DOI":"10.1049\/el.2017.4572","volume":"54","author":"L Cong","year":"2018","unstructured":"Cong L, Miao H, M L. Synthesizing kv-drrs from mv-drs with fractal hourglass convolutional network. Electron Lett. 2018; 54(12):762\u20134.","journal-title":"Electron Lett"},{"issue":"11856","key":"1164_CR3","first-page":"1","volume":"1805","author":"T Lan","year":"2018","unstructured":"Lan T, Li Y, Murugi JK, Ding Y, Qin Z. Run: Residual u-net for computer-aided detection of pulmonary nodules without candidate selection. arXiv preprint arXiv:1805.11856. 
2018; 1805(11856):1\u201312.","journal-title":"arXiv preprint arXiv:1805.11856"},{"issue":"1","key":"1164_CR4","doi-asserted-by":"publisher","first-page":"38","DOI":"10.1006\/cviu.1995.1004","volume":"61","author":"TF Cootes","year":"1995","unstructured":"Cootes TF, Taylor CJ, Cooper DH, Graham J. Active shape models-their training and application. Comp Vision Image Underst. 1995; 61(1):38\u201359.","journal-title":"Comp Vision Image Underst"},{"key":"1164_CR5","doi-asserted-by":"crossref","unstructured":"Edwards GJ, Cootes TF, Taylor CJ. Face recognition using active appearance models. In: European Conference on Computer Vision. Springer: 1998. p. 581\u201395.","DOI":"10.1007\/BFb0054766"},{"key":"1164_CR6","doi-asserted-by":"publisher","first-page":"681","DOI":"10.1109\/34.927467","volume":"6","author":"TF Cootes","year":"2001","unstructured":"Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Mach Intell. 2001; 6:681\u20135.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1164_CR7","doi-asserted-by":"crossref","unstructured":"Doll\u00e1r P, Welinder P, Perona P. Cascaded pose regression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE: 2010. p. 1078\u201385.","DOI":"10.1109\/CVPR.2010.5540094"},{"key":"1164_CR8","doi-asserted-by":"crossref","unstructured":"Sun Y, Wang X, Tang X. Deep convolutional network cascade for facial point detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE: 2013. p. 3476\u20133483.","DOI":"10.1109\/CVPR.2013.446"},{"key":"1164_CR9","doi-asserted-by":"crossref","unstructured":"Zhou E, Fan H, Cao Z, Jiang Y, Yin Q. Extensive facial landmark localization with coarse-to-fine convolutional network cascade. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. IEEE: 2013. p. 
386\u2013391.","DOI":"10.1109\/ICCVW.2013.58"},{"key":"1164_CR10","doi-asserted-by":"crossref","unstructured":"Zhang Z, Luo P, Loy CC, Tang X. Facial landmark detection by deep multi-task learning. In: European Conference on Computer Vision. Springer: 2014. p. 94\u2013108.","DOI":"10.1007\/978-3-319-10599-4_7"},{"issue":"12","key":"1164_CR11","doi-asserted-by":"publisher","first-page":"3067","DOI":"10.1109\/TPAMI.2017.2787130","volume":"40","author":"Y Wu","year":"2018","unstructured":"Wu Y, Hassner T, Kim K, Medioni G, Natarajan P. Facial landmark detection with tweaked convolutional neural networks. IEEE Trans Pattern Anal Mach Intell. 2018; 40(12):3067\u201374.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"10","key":"1164_CR12","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang K, Zhang Z, Li Z, Qiao Y. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett. 2016; 23(10):1499\u20131503.","journal-title":"IEEE Signal Process Lett"},{"key":"1164_CR13","doi-asserted-by":"crossref","unstructured":"Kowalski M, Naruniec J, Trzcinski T. Deep alignment network: A convolutional neural network for robust face alignment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE: 2017. p. 88\u201397.","DOI":"10.1109\/CVPRW.2017.254"},{"key":"1164_CR14","doi-asserted-by":"crossref","unstructured":"Pfister T, Charles J, Zisserman A. Flowing convnets for human pose estimation in videos. In: Proceedings of the IEEE International Conference on Computer Vision. IEEE: 2015. p. 1913\u201321.","DOI":"10.1109\/ICCV.2015.222"},{"key":"1164_CR15","doi-asserted-by":"crossref","unstructured":"Newell A, Yang K, Deng J. Stacked hourglass networks for human pose estimation. In: European Conference on Computer Vision. Springer: 2016. p. 
483\u201399.","DOI":"10.1007\/978-3-319-46484-8_29"},{"key":"1164_CR16","doi-asserted-by":"crossref","unstructured":"Kocabas M, Karagoz S, Akbas E. Multiposenet: Fast multi-person pose estimation using pose residual network. In: Proceedings of the European Conference on Computer Vision (ECCV). ECCV: 2018. p. 417\u2013433.","DOI":"10.1007\/978-3-030-01252-6_26"},{"key":"1164_CR17","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Doll\u00e1r P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE: 2017. p. 2117\u201325.","DOI":"10.1109\/CVPR.2017.106"},{"issue":"07372","key":"1164_CR18","first-page":"1","volume":"1801","author":"A Nibali","year":"2018","unstructured":"Nibali A, He Z, Morgan S, Prendergast L. Numerical coordinate regression with convolutional neural networks. arXiv preprint. 2018; 1801(07372):1\u20138.","journal-title":"arXiv preprint"},{"issue":"05587","key":"1164_CR19","first-page":"1","volume":"1706","author":"L-C Chen","year":"2017","unstructured":"Chen L-C, Papandreou G, Schroff F, Adam H. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. 2017; 1706(05587):1\u201312.","journal-title":"arXiv preprint arXiv:1706.05587"},{"key":"1164_CR20","doi-asserted-by":"crossref","unstructured":"Yang W, Li S, Ouyang W, Li H, Wang X. Learning feature pyramids for human pose estimation. In: Proceedings of the IEEE International Conference on Computer Vision. IEEE: 2017. p. 
1281\u20131290.","DOI":"10.1109\/ICCV.2017.144"}],"container-title":["BMC Medical Informatics and Decision Making"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01164-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s12911-020-01164-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s12911-020-01164-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,7,20]],"date-time":"2021-07-20T23:14:47Z","timestamp":1626822887000},"score":1,"resource":{"primary":{"URL":"https:\/\/bmcmedinformdecismak.biomedcentral.com\/articles\/10.1186\/s12911-020-01164-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,21]]},"references-count":20,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["1164"],"URL":"https:\/\/doi.org\/10.1186\/s12911-020-01164-4","relation":{},"ISSN":["1472-6947"],"issn-type":[{"value":"1472-6947","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,21]]},"assertion":[{"value":"25 October 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 June 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 July 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The study was conducted with the approval of the Radiation and Medical Oncology Department (RMOD) of the First Affiliated Hospital of Wenzhou Medical University. 
The RMOD waived the requirement for written informed consent from subjects. Participants were provided with an information sheet detailing the study, the potential benefits and risks of participation, the opportunity and means to ask questions, and their options regarding voluntary participation. Verbal consent was then requested prior to commencement of the survey. This study was conducted as an anonymous survey for which no personal, identifiable information was collected.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"168"}}