{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T12:22:55Z","timestamp":1775737375268,"version":"3.50.1"},"reference-count":21,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2025,5,26]],"date-time":"2025-05-26T00:00:00Z","timestamp":1748217600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J. Imaging"],"abstract":"<jats:p>As artificial intelligence advances in medical image analysis, its environmental impact remains largely overlooked. This study analyzes the energy demands of AI workflows for medical image segmentation using the popular Kidney Tumor Segmentation-2019 (KiTS-19) dataset. It examines how training and inference differ in energy consumption, focusing on factors that influence resource usage, such as computational complexity, memory access, and I\/O operations. To address these aspects, we evaluated three variants of convolution\u2014Standard Convolution, Depthwise Convolution, and Group Convolution\u2014combined with optimization techniques such as Mixed Precision and Gradient Accumulation. While training is energy-intensive, the recurring nature of inference often results in significantly higher cumulative energy consumption over a model\u2019s life cycle. Depthwise Convolution with Mixed Precision achieves the lowest energy consumption during training while maintaining strong performance, making it the most energy-efficient configuration among those tested. In contrast, Group Convolution fails to achieve energy efficiency due to significant input\/output overhead. 
These findings emphasize the need for GPU-centric strategies and energy-conscious AI practices, offering actionable guidance for designing scalable, sustainable innovation in medical image analysis.<\/jats:p>","DOI":"10.3390\/jimaging11060174","type":"journal-article","created":{"date-parts":[[2025,5,26]],"date-time":"2025-05-26T02:50:05Z","timestamp":1748227805000},"page":"174","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A Study on Energy Consumption in AI-Driven Medical Image Segmentation"],"prefix":"10.3390","volume":"11","author":[{"given":"R.","family":"Prajwal","sequence":"first","affiliation":[{"name":"Radiomics Lab, University of Southern California, Los Angeles, CA 90033, USA"}]},{"given":"S. J.","family":"Pawan","sequence":"additional","affiliation":[{"name":"Radiomics Lab, University of Southern California, Los Angeles, CA 90033, USA"}]},{"given":"Shahin","family":"Nazarian","sequence":"additional","affiliation":[{"name":"Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA"}]},{"given":"Nicholas","family":"Heller","sequence":"additional","affiliation":[{"name":"Glickman Urological Institute, Cleveland Clinic, Cleveland, OH 44125, USA"}]},{"given":"Christopher J.","family":"Weight","sequence":"additional","affiliation":[{"name":"Glickman Urological Institute, Cleveland Clinic, Cleveland, OH 44125, USA"},{"name":"Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH 44106, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4808-5715","authenticated-orcid":false,"given":"Vinay","family":"Duddalwar","sequence":"additional","affiliation":[{"name":"Radiomics Lab, University of Southern California, Los Angeles, CA 90033, USA"},{"name":"Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, 
USA"},{"name":"Institute of Urology, University of Southern California, Los Angeles, CA 90033, USA"},{"name":"Department of Radiology, Los Angeles General Medical Center, Los Angeles, CA 90033, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9474-5035","authenticated-orcid":false,"given":"C.-C. Jay","family":"Kuo","sequence":"additional","affiliation":[{"name":"Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA"}]}],"member":"1968","published-online":{"date-parts":[[2025,5,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"397","DOI":"10.1016\/j.neucom.2022.04.065","article-title":"Medical image segmentation with 3D convolutional neural networks: A survey","volume":"493","author":"Niyas","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Kirkpatrick, K. (2023). The Carbon Footprint of Artificial Intelligence, ACM.","DOI":"10.1145\/3603746"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"423","DOI":"10.1038\/s42256-020-0219-9","article-title":"The carbon impact of artificial intelligence","volume":"2","author":"Dhar","year":"2020","journal-title":"Nat. Mach. Intell."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"The Lancet Digital Health (2023). Curbing the carbon footprint of health care. Lancet Digit Health, 5, e848.","DOI":"10.1016\/S2589-7500(23)00229-7"},{"key":"ref_5","unstructured":"Heikkil\u00e4, M. (2024, August 29). We\u2019re Getting a Better Idea of AI\u2019s True Carbon Footprint. Available online: https:\/\/www.technologyreview.com\/2022\/11\/14\/1063192\/were-getting-a-better-idea-of-ais-true-carbon-footprint\/."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Georgiou, S., Kechagia, M., Sharma, T., Sarro, F., and Zou, Y. (2022, January 25\u201327). Green AI: Do deep learning frameworks have different costs? 
Proceedings of the 44th International Conference on Software Engineering, Pittsburgh, PA, USA.","DOI":"10.1145\/3510003.3510221"},{"key":"ref_7","unstructured":"(2024, January 15). Global Cancer Burden Growing, Amidst Mounting Need for Services. Available online: https:\/\/www.iarc.who.int\/news-events\/global-cancer-burden-growing-amidst-mounting-need-for-services\/."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Xu, Y., Mart\u00ednez-Fern\u00e1ndez, S., Martinez, M., and Franch, X. (2023). Energy efficiency of training neural network architectures: An empirical study. arXiv.","DOI":"10.24251\/HICSS.2023.098"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Yang, T.-J., Chen, Y.-H., Emer, J., and Sze, V. (2017, October 29\u2013November 1). A method to estimate the energy consumption of deep neural networks. Proceedings of the 2017 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA.","DOI":"10.1109\/ACSSC.2017.8335698"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"135334","DOI":"10.1016\/j.jclepro.2022.135334","article-title":"Do we need exotic models? Engineering metrics to enable green machine learning from tackling accuracy-energy trade-offs","volume":"382","author":"Naser","year":"2023","journal-title":"J. Clean. Prod."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"101729","DOI":"10.1016\/j.compmedimag.2020.101729","article-title":"Simplification of neural networks for skin lesion image segmentation using color channel pruning","volume":"82","author":"Hajabdollahi","year":"2020","journal-title":"Comput. Med. Imaging Graph."},{"key":"ref_12","unstructured":"Wen, Y., and Gregg, D. (2020). Exploiting weight redundancy in CNNs: Beyond pruning and quantization. arXiv."},{"key":"ref_13","unstructured":"(2025, January 15). AI and Compute. 
Available online: https:\/\/openai.com\/index\/ai-and-compute\/."},{"key":"ref_14","unstructured":"Heller, N., Isensee, F., Trofimova, D., Tejpaul, R., Zhao, Z., Chen, H., Wang, L., Golts, A., Khapun, D., and Shats, D. (2023). The KiTS21 challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_16","first-page":"1097","article-title":"Imagenet classification with deep convolutional neural networks","volume":"Volume 25","author":"Pereira","year":"2012","journal-title":"Advances in Neural Information Processing Systems"},{"key":"ref_17","unstructured":"Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., and Venkatesh, G. (2017). Mixed precision training. arXiv."},{"key":"ref_18","unstructured":"Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv."},{"key":"ref_19","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention Is All You Need. arXiv."},{"key":"ref_20","first-page":"223","article-title":"Rethinking floating point overheads for mixed precision DNN accelerators","volume":"3","author":"Abdelaziz","year":"2021","journal-title":"Proc. Mach. Learn. Syst."},{"key":"ref_21","unstructured":"Courty, B., Schmidt, V., Kamal, G., Coutarel, M., Feld, B., Lecourt, J., Connell, L., Amine, S. (mlco2\/codecarbon, 2024). 
mlco2\/codecarbon, version v2.4.1."}],"container-title":["Journal of Imaging"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2313-433X\/11\/6\/174\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:40:16Z","timestamp":1760031616000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2313-433X\/11\/6\/174"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,26]]},"references-count":21,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["jimaging11060174"],"URL":"https:\/\/doi.org\/10.3390\/jimaging11060174","relation":{},"ISSN":["2313-433X"],"issn-type":[{"value":"2313-433X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,26]]}}}