{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,16]],"date-time":"2026-03-16T23:14:48Z","timestamp":1773702888762,"version":"3.50.1"},"reference-count":31,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T00:00:00Z","timestamp":1728864000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T00:00:00Z","timestamp":1728864000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100004837","name":"Ministerio de Ciencia e Innovaci\u00f3n","doi-asserted-by":"publisher","award":["PDC2021-121656-I00"],"award-info":[{"award-number":["PDC2021-121656-I00"]}],"id":[{"id":"10.13039\/501100004837","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004587","name":"Instituto de Salud Carlos III","doi-asserted-by":"publisher","award":["PMPTA22\/00121"],"award-info":[{"award-number":["PMPTA22\/00121"]}],"id":[{"id":"10.13039\/501100004587","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004587","name":"Instituto de Salud Carlos III","doi-asserted-by":"publisher","award":["PMPTA22\/00118"],"award-info":[{"award-number":["PMPTA22\/00118"]}],"id":[{"id":"10.13039\/501100004587","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100007601","name":"Horizon 2020","doi-asserted-by":"publisher","award":["801091"],"award-info":[{"award-number":["801091"]}],"id":[{"id":"10.13039\/501100007601","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Digit Imaging. Inform. 
med."],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Radiation dose and image quality in radiology are influenced by the X-ray prime factors: KVp, mAs, and source-detector distance. These parameters are set by the X-ray technician prior to the acquisition, considering the radiographic position. A wrong setting of these parameters may result in exposure errors, forcing the test to be repeated and increasing the radiation dose delivered to the patient. This work presents a novel approach based on deep learning that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database using 66 radiographic positions commonly used in clinical settings, prospectively obtained during 2022 from 75 volunteers in two different X-ray facilities. The architecture for radiographic position classification was a lightweight version of <jats:italic>ConvNeXt<\/jats:italic> trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. Our resulting model achieved an accuracy of 93.17% for radiographic position classification, which increased to 95.58% when considering the correct selection of prime factors, since half of the errors involved positions with the same KVp and mAs values. Most errors occurred for radiographic positions with similar patient pose in the photograph. 
Results suggest the feasibility of the method to facilitate the acquisition workflow, reducing the occurrence of exposure errors while preventing unnecessary radiation dose to patients.<\/jats:p>","DOI":"10.1007\/s10278-024-01256-x","type":"journal-article","created":{"date-parts":[[2024,10,14]],"date-time":"2024-10-14T19:02:05Z","timestamp":1728932525000},"page":"1661-1668","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Deep Learning\u2013Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors"],"prefix":"10.1007","volume":"38","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0112-9709","authenticated-orcid":false,"given":"C. F.","family":"Del Cerro","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3090-126X","authenticated-orcid":false,"given":"R. C.","family":"Gim\u00e9nez","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1452-1918","authenticated-orcid":false,"given":"J.","family":"Garc\u00eda-Blas","sequence":"additional","affiliation":[]},{"given":"K.","family":"Sosenko","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0005-1492-3513","authenticated-orcid":false,"given":"J. M.","family":"Ortega","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0989-3231","authenticated-orcid":false,"given":"M.","family":"Desco","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4847-7233","authenticated-orcid":false,"given":"M.","family":"Abella","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,10,14]]},"reference":[{"key":"1256_CR1","unstructured":"Panzer, W., Shrimpton, P., Jessen, K.: European Guidelines on Quality Criteria for Computed Tomography. 
Office for Official Publications of the European Communities, 2020"},{"key":"1256_CR2","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1007\/s10278-008-9112-5","volume":"22","author":"DH Foos","year":"2009","unstructured":"Foos, D. H., Sehnert, W. J., Reiner, B., Siegel, E. L., Segal, A., Waldman, D. L.: Digital radiography reject analysis: data collection methodology, results, and recommendations from an in-depth investigation at two hospitals.\u00a0Journal of digital imaging:\u00a022, 89\u201398, 2009","journal-title":"Journal of digital imaging"},{"key":"1256_CR3","unstructured":"Bushberg, J. T., Boone, J. M.: The essential physics of medical imaging, Lippincott Williams & Wilkins, 2011"},{"issue":"1","key":"1256_CR4","doi-asserted-by":"publisher","first-page":"66","DOI":"10.1007\/s10278-020-00408-z","volume":"34","author":"X Fang","year":"2021","unstructured":"Fang, X., Harris, L., Zhou, W., Huo, D.: Generalized radiographic view identification with deep learning. Journal of Digital Imaging: 34(1), 66\u201374, 2021","journal-title":"Journal of Digital Imaging"},{"key":"1256_CR5","unstructured":"Mairh\u00f6fer, D., Laufer, M., Simon, P. M., Sieren, M., Bischof, A., K\u00e4ster, T., Barth, E., Barkhausen, J., Martinetz, T.: An AI-based framework for diagnostic quality assessment of ankle radiographs. International Conference on Medical Imaging with Deep Learning, 2021"},{"key":"1256_CR6","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014"},{"key":"1256_CR7","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition: 770\u2013778, 2016","DOI":"10.1109\/CVPR.2016.90"},{"key":"1256_CR8","doi-asserted-by":"crossref","unstructured":"Kim, T. K., Yi, P. H., Wei, J., Shin, J. W., Hager, G., Hui, F. K., Sair, H. 
I., Lin, C. T.: Deep learning method for automated classification of anteroposterior and posteroanterior chest radiographs. Journal of digital imaging: 32, 925\u2013930, 2019","DOI":"10.1007\/s10278-019-00208-0"},{"key":"1256_CR9","doi-asserted-by":"crossref","unstructured":"Hosch, R., Kroll, L., Nensa, F., Koitka, S.: Differentiation between anteroposterior and posteroanterior chest X-ray view position with convolutional neural networks. In R\u00f6Fo-Fortschritte auf dem Gebiet der R\u00f6ntgenstrahlen und der bildgebenden Verfahren: 193(2), 168\u2013176, 2021","DOI":"10.1055\/a-1183-5227"},{"issue":"2","key":"1256_CR10","doi-asserted-by":"publisher","first-page":"75","DOI":"10.1177\/2292550321997012","volume":"29","author":"TJ Saun","year":"2021","unstructured":"Saun, T.J.:\u00a0Automated classification of radiographic positioning of hand X-rays using a deep neural network. Plastic Surgery: 29(2), 75\u201380, 2021","journal-title":"Plastic Surgery"},{"key":"1256_CR11","doi-asserted-by":"crossref","unstructured":"Wang, C. Y., Yeh, I. H., Liao, H. Y. M.: Yolov9: Learning what you want to learn using programmable gradient information. arXiv preprint arXiv:2402.13616, 2024","DOI":"10.1007\/978-3-031-72751-1_1"},{"key":"1256_CR12","doi-asserted-by":"crossref","unstructured":"Medaramatla, S. C., Samhitha, C. V., Pande, S. D., Vinta, S. R.: Detection of Hand Bone Fractures in X-ray Images using Hybrid YOLO NAS. IEEE Access, 2024","DOI":"10.1109\/ACCESS.2024.3379760"},{"key":"1256_CR13","doi-asserted-by":"crossref","unstructured":"Zheng, C., Wu, W., Chen, C., Yang, T., Zhu, S., Shen, J., Kehtarnavaz, N., Shah, M.: Deep learning-based human pose estimation: A survey. 
ACM Computing Surveys: 56(1), 1\u201337, 2023","DOI":"10.1145\/3603618"},{"key":"1256_CR14","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1016\/j.media.2016.07.001","volume":"35","author":"A Kadkhodamohammadi","year":"2017","unstructured":"Kadkhodamohammadi, A., Gangi, A., de Mathelin, M., Padoy, N.: Articulated clinician detection using 3D pictorial structures on RGB-D data. Medical image analysis: 35, 215\u2013224, 2017","journal-title":"Medical image analysis"},{"key":"1256_CR15","doi-asserted-by":"publisher","first-page":"102525","DOI":"10.1016\/j.media.2022.102525","volume":"80","author":"V Srivastav","year":"2022","unstructured":"Srivastav, V., Gangi, A., Padoy, N.: Unsupervised domain adaptation for clinician pose estimation and instance segmentation in the operating room. Medical Image Analysis: 80,\u00a0102525, 2022","journal-title":"Medical Image Analysis"},{"key":"1256_CR16","doi-asserted-by":"publisher","first-page":"102887","DOI":"10.1016\/j.media.2023.102887","volume":"89","author":"A Bigalke","year":"2023","unstructured":"Bigalke, A., Hansen, L., Diesel, J., Hennigs, C., Rostalski, P., Heinrich, M. P.: Anatomy-guided domain adaptation for 3D in-bed human pose estimation. Medical Image Analysis: 89, 102887, 2023","journal-title":"Medical Image Analysis"},{"key":"1256_CR17","doi-asserted-by":"publisher","first-page":"102654","DOI":"10.1016\/j.media.2022.102654","volume":"83","author":"H Ni","year":"2023","unstructured":"Ni, H., Xue, Y., Ma, L., Zhang, Q., Li, X., Huang, S. X.: Semi-supervised body parsing and pose estimation for enhancing infant general movement assessment. Medical Image Analysis: 83, 102654, 2023","journal-title":"Medical Image Analysis"},{"issue":"19","key":"1256_CR18","doi-asserted-by":"publisher","first-page":"10156","DOI":"10.3390\/app121910156","volume":"12","author":"RO Ogundokun","year":"2022","unstructured":"Ogundokun, R. 
O., Maskeli\u016bnas, R., Dama\u0161evi\u010dius, R.: Human posture detection using image augmentation and hyperparameter-optimized transfer learning algorithms. Applied Sciences: 12(19), 10156, 2022","journal-title":"Applied Sciences"},{"key":"1256_CR19","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., Xie, S.: A convnet for the 2020s. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition: 11976\u201311986, 2022","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"1256_CR20","unstructured":"Wightman, R.: PyTorch Image Models. Available at GitHub https:\/\/github.com\/huggingface\/pytorch-image-models. Accessed June 2024"},{"key":"1256_CR21","unstructured":"Wright, L.: New deep learning optimizer, ranger: Synergistic combination of radam+ lookahead for the best of both. Available at GitHub https:\/\/github.com\/lessw2020\/Ranger-Deep-Learning-Optimizer. Accessed Aug 2023"},{"key":"1256_CR22","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition: 248\u2013255, 2009","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"1256_CR23","doi-asserted-by":"crossref","unstructured":"Smith, L. N.: Cyclical learning rates for training neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV): 464\u2013472, 2017","DOI":"10.1109\/WACV.2017.58"},{"key":"1256_CR24","doi-asserted-by":"crossref","unstructured":"Howard, J., Ruder, S.: Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018","DOI":"10.18653\/v1\/P18-1031"},{"key":"1256_CR25","unstructured":"Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. 
In proceedings of the Conference on Neural Information Processing Systems (NIPS), 2017"},{"issue":"2","key":"1256_CR26","doi-asserted-by":"publisher","first-page":"108","DOI":"10.3390\/info11020108","volume":"11","author":"J Howard","year":"2020","unstructured":"Howard, J., Gugger, S.: Fastai: A layered API for deep learning. Information: 11(2), 108, 2020","journal-title":"Information"},{"key":"1256_CR27","unstructured":"Bradski, G.: The OpenCV library. Dr. Dobb's Journal: Software Tools for the Professional Programmer: 25(11), 120\u2013123, 2000"},{"key":"1256_CR28","doi-asserted-by":"crossref","unstructured":"Li, Y., Hu, J., Wen, Y., Evangelidis, G., Salahi, K., Wang, Y., Tulyakov, S., Ren, J.: Rethinking vision transformers for mobilenet size and speed. In Proceedings of the IEEE\/CVF International Conference on Computer Vision: 16889\u201316900, 2023","DOI":"10.1109\/ICCV51070.2023.01549"},{"key":"1256_CR29","doi-asserted-by":"crossref","unstructured":"Qin, D., Leichner, C., Delakis, M., Fornoni, M., Luo, S., Yang, F., Wang, W., Banbury, C., Ye, C., Akin, B., Aggarwal, V., Zhu, T., Moro, D., Howard, A.: MobileNetV4-Universal Models for the Mobile Ecosystem. arXiv preprint arXiv:2404.10518, 2024","DOI":"10.1007\/978-3-031-73661-2_5"},{"key":"1256_CR30","doi-asserted-by":"publisher","first-page":"8","DOI":"10.1016\/j.neunet.2019.04.024","volume":"117","author":"T Bouwmans","year":"2019","unstructured":"Bouwmans, T., Javed, S., Sultana, M., Jung, S. K.: Deep neural network concepts for background subtraction: A systematic review and comparative evaluation. Neural Networks: 117, 8\u201366, 2019","journal-title":"Neural Networks"},{"key":"1256_CR31","unstructured":"NHS England, NHS Improvement: Diagnostic imaging dataset statistical release. London: 
Department of Health, 2023"}],"updated-by":[{"DOI":"10.1007\/s10278-025-01476-9","type":"correction","label":"Correction","source":"publisher","updated":{"date-parts":[[2025,4,24]],"date-time":"2025-04-24T00:00:00Z","timestamp":1745452800000}}],"container-title":["Journal of Imaging Informatics in Medicine"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10278-024-01256-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10278-024-01256-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10278-024-01256-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,5,20]],"date-time":"2025-05-20T17:27:02Z","timestamp":1747762022000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10278-024-01256-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,14]]},"references-count":31,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["1256"],"URL":"https:\/\/doi.org\/10.1007\/s10278-024-01256-x","relation":{"correction":[{"id-type":"doi","id":"10.1007\/s10278-025-01476-9","asserted-by":"object"}]},"ISSN":["2948-2933"],"issn-type":[{"value":"2948-2933","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,14]]},"assertion":[{"value":"11 June 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 August 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 August 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"14 October 2024","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 April 2025","order":5,"name":"change_date","label":"Change Date","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Correction","order":6,"name":"change_type","label":"Change Type","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"A Correction to this paper has been published:","order":7,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"https:\/\/doi.org\/10.1007\/s10278-025-01476-9","URL":"https:\/\/doi.org\/10.1007\/s10278-025-01476-9","order":8,"name":"change_details","label":"Change Details","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"This is an observational study. Approval was granted by the Ethics Committee of the University Carlos III de Madrid.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics Approval"}},{"value":"All participants signed a legal consent which recognizes the protection of their data as established in the \u201cLey Org\u00e1nica de Protecci\u00f3n Jur\u00eddica del Menor\u201d (Ley O. 
1\/96, of January 15th), \u201cLey General de Sanidad\u201d (Article 10.3, Ley 14\/1986, of April 25th), \u201cLey B\u00e1sica de Autonom\u00eda del Paciente y de Informaci\u00f3n y Documentaci\u00f3n Cl\u00ednica\u201d (Chapters I and III Ley 41\/2002, of November 14th), \u201cLey Org\u00e1nica 3\/2018\u201d (of December 5th regarding the Protection of Personal Data and the Guarantee of Digital Rights), and the General Data Protection Regulation (EU Regulation 2016\/679).","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to Participate"}},{"value":"The authors affirm that human research participants provided informed consent for the publication of the images in Figure(s) 1, 4, and 6.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for Publication"}},{"value":"The authors declare no competing interests.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}}]}}