{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T20:03:31Z","timestamp":1777579411206,"version":"3.51.4"},"reference-count":18,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2018,3,21]],"date-time":"2018-03-21T00:00:00Z","timestamp":1521590400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2018,3,21]],"date-time":"2018-03-21T00:00:00Z","timestamp":1521590400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["npj Digital Med"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Echocardiography is essential to cardiology. However, the need for human interpretation has limited echocardiography\u2019s full potential for precision medicine. Deep learning is an emerging tool for analyzing images but has not yet been widely applied to echocardiograms, partly due to their complex multi-view format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize these views. We trained a convolutional neural network to simultaneously classify 15 standard views (12 video, 3 still), based on labeled still images and videos from 267 transthoracic echocardiograms that captured a range of real-world clinical variation. Our model classified among 12 video views with 97.8% overall test accuracy without overfitting. Even on single low-resolution images, accuracy among 15 views was 91.7% vs. 70.2\u201384.0% for board-certified echocardiographers. Data visualization experiments showed that the model recognizes similarities among related views and classifies using clinically relevant image features. Our results provide a foundation for artificial intelligence-assisted echocardiographic interpretation.<\/jats:p>","DOI":"10.1038\/s41746-017-0013-1","type":"journal-article","created":{"date-parts":[[2018,1,30]],"date-time":"2018-01-30T11:00:12Z","timestamp":1517310012000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":472,"title":["Fast and accurate view classification of echocardiograms using deep learning"],"prefix":"10.1038","volume":"1","author":[{"given":"Ali","family":"Madani","sequence":"first","affiliation":[]},{"given":"Ramy","family":"Arnaout","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7004-4859","authenticated-orcid":false,"given":"Mohammad","family":"Mofrad","sequence":"additional","affiliation":[]},{"given":"Rima","family":"Arnaout","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2018,3,21]]},"reference":[{"key":"13_CR1","unstructured":"Karpathy, A. The Unreasonable Effectiveness of Recurrent Neural Networks. http:\/\/karpathy.github.io\/2015\/05\/21\/rnn-effectiveness\/ (2015)."},{"key":"13_CR2","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1038\/nature21056","volume":"542","author":"A Esteva","year":"2017","unstructured":"Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115\u2013118 (2017).","journal-title":"Nature"},{"key":"13_CR3","doi-asserted-by":"publisher","first-page":"2402","DOI":"10.1001\/jama.2016.17216","volume":"316","author":"V Gulshan","year":"2016","unstructured":"Gulshan, V. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316, 2402\u20132410 (2016).","journal-title":"JAMA"},{"key":"13_CR4","unstructured":"Litjens, G. et al. A survey on deep learning in medical image analysis. eprint at https:\/\/arxiv.org\/pdf\/1702.05747.pdf (2017)."},{"key":"13_CR5","doi-asserted-by":"publisher","first-page":"1126","DOI":"10.1016\/j.jacc.2010.11.002","volume":"57","author":"PS Douglas","year":"2011","unstructured":"Douglas, P. S. et al. ACCF\/ASE\/AHA\/ASNC\/HFSA\/HRS\/SCAI\/SCCM\/SCCT\/SCMR 2011 appropriate use criteria for echocardiography. J. Am. Coll. Cardiol. 57, 1126\u20131166 (2011).","journal-title":"J. Am. Coll. Cardiol."},{"key":"13_CR6","doi-asserted-by":"publisher","first-page":"G9","DOI":"10.1530\/ERP-14-0079","volume":"2","author":"G Wharton","year":"2015","unstructured":"Wharton, G. et al. A minimum dataset for a standard adult transthoracic echocardiogram: a guideline protocol from the British Society of Echocardiography. Echo Res. Pract. 2, G9\u2013G24 (2015).","journal-title":"Echo Res. Pract."},{"key":"13_CR7","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1016\/j.media.2016.10.007","volume":"36","author":"H Khamis","year":"2017","unstructured":"Khamis, H. et al. Automatic apical view classification of echocardiograms using a discriminative learning dictionary. Med. Image Anal. 36, 15\u201321 (2017).","journal-title":"Med. Image Anal."},{"key":"13_CR8","doi-asserted-by":"publisher","first-page":"1456","DOI":"10.1016\/j.jacc.2015.07.052","volume":"66","author":"C Knackstedt","year":"2015","unstructured":"Knackstedt, C. et al. Fully automated versus standard tracking of left ventricular ejection fraction and longitudinal strain: the FAST-EFs multicenter study. J. Am. Coll. Cardiol. 66, 1456\u20131466 (2015).","journal-title":"J. Am. Coll. Cardiol."},{"key":"13_CR9","doi-asserted-by":"publisher","first-page":"2287","DOI":"10.1016\/j.jacc.2016.08.062","volume":"68","author":"S Narula","year":"2016","unstructured":"Narula, S. et al. Machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography. J. Am. Coll. Cardiol. 68, 2287\u20132295 (2016).","journal-title":"J. Am. Coll. Cardiol."},{"key":"13_CR10","first-page":"230","volume":"11","author":"J Park","year":"2008","unstructured":"Park, J., Zhou, S. K., Simopoulos, C. & Comaniciu, D. AutoGate: fast and automatic Doppler gate localization in B-mode echocardiogram. Med. Image Comput. Comput. Assist. Interv. 11, 230\u2013237 (2008).","journal-title":"Med. Image Comput. Comput. Assist. Interv."},{"key":"13_CR11","doi-asserted-by":"publisher","unstructured":"Sengupta, P. P. et al. Cognitive machine-learning algorithm for cardiac imaging: a pilot study for differentiating constrictive pericarditis from restrictive cardiomyopathy. Circ. Cardiovasc. Imaging 9, https:\/\/doi.org\/10.1161\/CIRCIMAGING.115.004330 (2016).","DOI":"10.1161\/CIRCIMAGING.115.004330"},{"key":"13_CR12","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1016\/j.inffus.2016.11.007","volume":"36","author":"XH Gao","year":"2017","unstructured":"Gao, X. H., Li, W., Loomes, M. & Wang, L. Y. A fused deep learning architecture for viewpoint classification of echocardiography. Inf. Fusion 36, 103\u2013113 (2017).","journal-title":"Inf. Fusion"},{"key":"13_CR13","doi-asserted-by":"publisher","first-page":"66","DOI":"10.1016\/j.compbiomed.2015.08.004","volume":"66","author":"OA Penatti","year":"2015","unstructured":"Penatti, O. A. et al. Mid-level image representations for real-time heart view plane classification of echocardiograms. Comput. Biol. Med. 66, 66\u201381 (2015).","journal-title":"Comput. Biol. Med."},{"key":"13_CR14","unstructured":"Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems https:\/\/www.tensorflow.org\/about\/bib (2015)."},{"key":"13_CR15","unstructured":"Keras (GitHub, 2015)."},{"key":"13_CR16","unstructured":"Python Language Reference, version 2.7. Python Software Foundation (2017)."},{"key":"13_CR17","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115, 211\u2013252 (2015).","journal-title":"Int. J. Comput. Vision"},{"key":"13_CR18","first-page":"2579","volume":"9","author":"L van der Maaten","year":"2008","unstructured":"van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579\u20132605 (2008).","journal-title":"J. Mach. Learn. Res."}],"container-title":["npj Digital Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s41746-017-0013-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-017-0013-1","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s41746-017-0013-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,21]],"date-time":"2022-12-21T12:35:52Z","timestamp":1671626152000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s41746-017-0013-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,3,21]]},"references-count":18,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2018,12]]}},"alternative-id":["13"],"URL":"https:\/\/doi.org\/10.1038\/s41746-017-0013-1","relation":{},"ISSN":["2398-6352"],"issn-type":[{"value":"2398-6352","type":"electronic"}],"subject":[],"published":{"date-parts":[[2018,3,21]]},"assertion":[{"value":"6 August 2017","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 November 2017","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 November 2017","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 March 2018","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing financial interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"6"}}