{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T17:34:10Z","timestamp":1775324050835,"version":"3.50.1"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,5,14]],"date-time":"2020-05-14T00:00:00Z","timestamp":1589414400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,5,14]],"date-time":"2020-05-14T00:00:00Z","timestamp":1589414400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100000098","name":"U.S. Department of Health & Human Services | NIH | NIH Clinical Center","doi-asserted-by":"publisher","award":["Intramural Research Programs"],"award-info":[{"award-number":["Intramural Research Programs"]}],"id":[{"id":"10.13039\/100000098","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000098","name":"U.S. Department of Health & Human Services | NIH | NIH Clinical Center","doi-asserted-by":"publisher","award":["Intramural Research Program"],"award-info":[{"award-number":["Intramural Research Program"]}],"id":[{"id":"10.13039\/100000098","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000002","name":"U.S. Department of Health & Human Services | National Institutes of Health","doi-asserted-by":"publisher","award":["NIH-PingAn Cooperative Research and Development Agreement"],"award-info":[{"award-number":["NIH-PingAn Cooperative Research and Development Agreement"]}],"id":[{"id":"10.13039\/100000002","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000002","name":"U.S. 
Department of Health & Human Services | National Institutes of Health","doi-asserted-by":"publisher","award":["NIH-PingAn Cooperative Research and Development Agreement"],"award-info":[{"award-number":["NIH-PingAn Cooperative Research and Development Agreement"]}],"id":[{"id":"10.13039\/100000002","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000092","name":"U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine","doi-asserted-by":"publisher","award":["K99LM013001"],"award-info":[{"award-number":["K99LM013001"]}],"id":[{"id":"10.13039\/100000092","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000092","name":"U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine","doi-asserted-by":"publisher","award":["Intramural Research Programs"],"award-info":[{"award-number":["Intramural Research Programs"]}],"id":[{"id":"10.13039\/100000092","id-type":"DOI","asserted-by":"publisher"}]},{"name":"U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine"},{"name":"U.S. Department of Health & Human Services | NIH | NIH Clinical Center"},{"name":"U.S. Department of Health & Human Services | National Institutes of Health"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digit. Med."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>As one of the most ubiquitous diagnostic imaging tests in medical practice, chest radiography requires timely reporting of potential findings and diagnosis of diseases in the images. Automated, fast, and reliable detection of diseases based on chest radiography is a critical step in radiology workflow. 
In this work, we developed and evaluated various deep convolutional neural networks (CNN) for differentiating between normal and abnormal frontal chest radiographs, in order to help alert radiologists and clinicians of potential abnormal findings as a means of work list triaging and reporting prioritization. A CNN-based model achieved an AUC of 0.9824\u2009\u00b1\u20090.0043 (with an accuracy of 94.64\u2009\u00b1\u20090.45%, a sensitivity of 96.50\u2009\u00b1\u20090.36% and a specificity of 92.86\u2009\u00b1\u20090.48%) for normal versus abnormal chest radiograph classification. The CNN model obtained an AUC of 0.9804\u2009\u00b1\u20090.0032 (with an accuracy of 94.71\u2009\u00b1\u20090.32%, a sensitivity of 92.20\u2009\u00b1\u20090.34% and a specificity of 96.34\u2009\u00b1\u20090.31%) for normal versus lung opacity classification. Classification performance on the external dataset showed that the CNN model is likely to be highly generalizable, with an AUC of 0.9444\u2009\u00b1\u20090.0029. The CNN model pre-trained on cohorts of adult patients and fine-tuned on pediatric patients achieved an AUC of 0.9851\u2009\u00b1\u20090.0046 for normal versus pneumonia classification. Pretraining with natural images demonstrates benefit for a moderate-sized training image set of about 8500 images. 
The remarkable performance in diagnostic accuracy observed in this study shows that deep CNNs can accurately and effectively differentiate normal and abnormal chest radiographs, thereby providing potential benefits to radiology workflow and patient care.<\/jats:p>","DOI":"10.1038\/s41746-020-0273-z","type":"journal-article","created":{"date-parts":[[2020,5,14]],"date-time":"2020-05-14T10:03:18Z","timestamp":1589450598000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":199,"title":["Automated abnormality classification of chest radiographs using deep convolutional neural networks"],"prefix":"10.1038","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6452-2751","authenticated-orcid":false,"given":"Yu-Xing","family":"Tang","sequence":"first","affiliation":[]},{"given":"You-Bao","family":"Tang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9309-8331","authenticated-orcid":false,"given":"Yifan","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Ke","family":"Yan","sequence":"additional","affiliation":[]},{"given":"Mohammadhadi","family":"Bagheri","sequence":"additional","affiliation":[]},{"given":"Bernadette A.","family":"Redd","sequence":"additional","affiliation":[]},{"given":"Catherine J.","family":"Brandon","sequence":"additional","affiliation":[]},{"given":"Zhiyong","family":"Lu","sequence":"additional","affiliation":[]},{"given":"Mei","family":"Han","sequence":"additional","affiliation":[]},{"given":"Jing","family":"Xiao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8081-7376","authenticated-orcid":false,"given":"Ronald M.","family":"Summers","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,5,14]]},"reference":[{"key":"273_CR1","doi-asserted-by":"publisher","first-page":"1459","DOI":"10.1016\/S0140-6736(16)31012-1","volume":"388","author":"H 
Wang","year":"2016","unstructured":"Wang, H. et al. Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980\u20132015: a systematic analysis for the global burden of disease study 2015. Lancet 388, 1459\u20131544 (2016).","journal-title":"Lancet"},{"key":"273_CR2","unstructured":"American Lung Association. Trends in lung cancer morbidity and mortality. https:\/\/www.lung.org\/assets\/documents\/research\/lc-trend-report.pdf (2014)."},{"key":"273_CR3","doi-asserted-by":"publisher","first-page":"827","DOI":"10.1016\/j.crad.2018.05.015","volume":"73","author":"E Yates","year":"2018","unstructured":"Yates, E., Yates, L. & Harvey, H. Machine learning-red dot: open-source, cloud, deep convolutional neural networks in chest radiograph binary normality classification. Clin. Radiol. 73, 827\u2013831 (2018).","journal-title":"Clin. Radiol."},{"key":"273_CR4","doi-asserted-by":"publisher","first-page":"537","DOI":"10.1148\/radiol.2018181422","volume":"290","author":"JA Dunnmon","year":"2019","unstructured":"Dunnmon, J. A. et al. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 290, 537\u2013544 (2019).","journal-title":"Radiology"},{"key":"273_CR5","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436\u2013444 (2015).","journal-title":"Nature"},{"key":"273_CR6","doi-asserted-by":"publisher","first-page":"1074","DOI":"10.1073\/pnas.1821594116","volume":"116","author":"MM Waldrop","year":"2019","unstructured":"Waldrop, M. M. News feature: What are the limits of deep learning? Proc. Natl Acad. Sci. 116, 1074\u20131077 (2019).","journal-title":"Proc. Natl Acad. 
Sci."},{"key":"273_CR7","doi-asserted-by":"publisher","first-page":"20170387","DOI":"10.1098\/rsif.2017.0387","volume":"15","author":"T Ching","year":"2018","unstructured":"Ching, T. et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 15, 20170387 (2018).","journal-title":"J. R. Soc. Interface"},{"key":"273_CR8","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision 1026\u20131034 (IEEE, 2015).","DOI":"10.1109\/ICCV.2015.123"},{"key":"273_CR9","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115\u2013118 (2017).","journal-title":"Nature"},{"key":"273_CR10","doi-asserted-by":"publisher","first-page":"2211","DOI":"10.1001\/jama.2017.18152","volume":"318","author":"DSW Ting","year":"2017","unstructured":"Ting, D. S. W. et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 318, 2211\u20132223 (2017).","journal-title":"JAMA"},{"key":"273_CR11","doi-asserted-by":"publisher","first-page":"11591","DOI":"10.1073\/pnas.1806905115","volume":"115","author":"R Lindsey","year":"2018","unstructured":"Lindsey, R. et al. Deep neural network improves fracture detection by clinicians. Proc. Natl Acad. Sci. 115, 11591\u201311596 (2018).","journal-title":"Proc. Natl Acad. Sci."},{"key":"273_CR12","doi-asserted-by":"publisher","first-page":"565","DOI":"10.1016\/j.ophtha.2018.11.015","volume":"126","author":"Y Peng","year":"2019","unstructured":"Peng, Y. et al. 
DeepSeeNet: a deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs. Ophthalmology 126, 565\u2013575 (2019).","journal-title":"Ophthalmology"},{"key":"273_CR13","doi-asserted-by":"publisher","first-page":"574","DOI":"10.1148\/radiol.2017162326","volume":"284","author":"P Lakhani","year":"2017","unstructured":"Lakhani, P. & Sundaram, B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 284, 574\u2013582 (2017).","journal-title":"Radiology"},{"key":"273_CR14","doi-asserted-by":"publisher","first-page":"218","DOI":"10.1148\/radiol.2018180237","volume":"290","author":"JG Nam","year":"2018","unstructured":"Nam, J. G. et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 290, 218\u2013228 (2018).","journal-title":"Radiology"},{"key":"273_CR15","doi-asserted-by":"crossref","unstructured":"Wang, X., Peng, Y., Lu, L., Lu, Z. & Summers, R. M. TieNet: text-image embedding network for common thorax disease classification and reporting in chest x-rays. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 9049\u20139058 (IEEE, 2018).","DOI":"10.1109\/CVPR.2018.00943"},{"key":"273_CR16","doi-asserted-by":"crossref","unstructured":"Wang, X. et al. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2097\u20132106 (IEEE, 2017).","DOI":"10.1109\/CVPR.2017.369"},{"key":"273_CR17","doi-asserted-by":"crossref","unstructured":"Irvin, J. et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. 
In Proceedings of the AAAI Conference on Artificial Intelligence 590\u2013597 (AAAI, 2019).","DOI":"10.1609\/aaai.v33i01.3301590"},{"key":"273_CR18","doi-asserted-by":"crossref","unstructured":"Johnson, A. E. W. et al. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042 (2019).","DOI":"10.1038\/s41597-019-0322-0"},{"key":"273_CR19","doi-asserted-by":"crossref","unstructured":"Li, Z. et al. Thoracic disease identification and localization with limited supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 8290\u20138299 (IEEE, 2018).","DOI":"10.1109\/CVPR.2018.00865"},{"key":"273_CR20","doi-asserted-by":"crossref","unstructured":"Tang, Y. et al. Attention-guided curriculum learning for weakly supervised classification and localization of thoracic diseases on chest radiographs. In International Workshop on Machine Learning in Medical Imaging 249\u2013258 (Springer, 2018).","DOI":"10.1007\/978-3-030-00919-9_29"},{"key":"273_CR21","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pmed.1002686","volume":"15","author":"P Rajpurkar","year":"2018","unstructured":"Rajpurkar, P. et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the chexnext algorithm to practicing radiologists. PLoS Med. 15, e1002686 (2018).","journal-title":"PLoS Med."},{"key":"273_CR22","doi-asserted-by":"publisher","first-page":"106","DOI":"10.1016\/j.acra.2019.10.006","volume":"27","author":"L Oakden-Rayner","year":"2020","unstructured":"Oakden-Rayner, L. Exploring large-scale public medical image datasets. Acad. Radiol. 27, 106\u2013112 (2020).","journal-title":"Acad. Radiol."},{"key":"273_CR23","doi-asserted-by":"publisher","first-page":"196","DOI":"10.1148\/radiol.2018180921","volume":"291","author":"M Annarumma","year":"2019","unstructured":"Annarumma, M. et al. Automated triaging of adult chest radiographs with deep artificial neural networks. 
Radiology 291, 196\u2013202 (2019).","journal-title":"Radiology"},{"key":"273_CR24","doi-asserted-by":"publisher","first-page":"304","DOI":"10.1093\/jamia\/ocv080","volume":"23","author":"D Demner-Fushman","year":"2015","unstructured":"Demner-Fushman, D. et al. Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 23, 304\u2013310 (2015).","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"273_CR25","unstructured":"Krizhevsky, A., Sutskever, I. & Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 1097\u20131105 (NeurIPS, 2012)."},{"key":"273_CR26","unstructured":"Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR, 2015)."},{"key":"273_CR27","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2818\u20132826 (IEEE, 2016)","DOI":"10.1109\/CVPR.2016.308"},{"key":"273_CR28","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 770\u2013778 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.90"},{"key":"273_CR29","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L. & Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 4700\u20134708 (IEEE, 2017).","DOI":"10.1109\/CVPR.2017.243"},{"key":"273_CR30","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky, O. et al. 
Imagenet large scale visual recognition challenge. Int. J. Comput. Vis 115, 211\u2013252 (2015).","journal-title":"Int. J. Comput. Vis"},{"key":"273_CR31","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A. & Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2921\u20132929 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.319"},{"key":"273_CR32","doi-asserted-by":"publisher","first-page":"1965","DOI":"10.1109\/TMI.2015.2418031","volume":"34","author":"RH Philipsen","year":"2015","unstructured":"Philipsen, R. H. et al. Localized energy-based normalization of medical images: application to chest radiography. IEEE Trans. Med. Imaging 34, 1965\u20131975 (2015).","journal-title":"IEEE Trans. Med. Imaging"},{"key":"273_CR33","volume":"1","author":"G Shih","year":"2019","unstructured":"Shih, G. et al. Augmenting the national institutes of health chest radiograph dataset with expert annotations of possible pneumonia. Radiology 1, e180041 (2019).","journal-title":"Radiology"},{"key":"273_CR34","doi-asserted-by":"publisher","first-page":"1122","DOI":"10.1016\/j.cell.2018.02.010","volume":"172","author":"DS Kermany","year":"2018","unstructured":"Kermany, D. S. et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172, 1122\u20131131 (2018).","journal-title":"Cell"},{"key":"273_CR35","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1177\/001316446002000104","volume":"20","author":"J Cohen","year":"1960","unstructured":"Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 20, 37\u201346 (1960).","journal-title":"Educ. Psychol. Meas."},{"key":"273_CR36","doi-asserted-by":"publisher","first-page":"837","DOI":"10.2307\/2531595","volume":"44","author":"ER DeLong","year":"1988","unstructured":"DeLong, E. R., DeLong, D. M. & Clarke-Pearson, D. L. 
Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44, 837\u2013845 (1988).","journal-title":"Biometrics"}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-0273-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-0273-z","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-0273-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,7]],"date-time":"2022-12-07T02:13:39Z","timestamp":1670379219000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-020-0273-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,5,14]]},"references-count":36,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2020,12]]}},"alternative-id":["273"],"URL":"https:\/\/doi.org\/10.1038\/s41746-020-0273-z","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,5,14]]},"assertion":[{"value":"3 September 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 April 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 May 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Y.X.T. and Y.B.T. have been offered employment at PAII Inc. Y.P. is a co-inventor on patents awarded and pending. B.A.R. has no competing interests. 
She is a consultant\/reviewer for the American College of Radiology\u2019s breast MRI accreditation program. K.Y. and M.H. are currently employees of PAII Inc. J.X. is an employee of Ping An Technology. M.B., C.J.B., and Z.L. have no competing interests. R.M.S. receives royalties from PingAn, ScanMed, iCAD, Philips and Translation Holdings. He is a co-inventor on patents awarded and pending. He received research support from PingAn (Cooperative Research and Development Agreement) and NVIDIA (GPU card donations).","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"70"}}