{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T14:49:36Z","timestamp":1772722176624,"version":"3.50.1"},"reference-count":45,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2021,1,27]],"date-time":"2021-01-27T00:00:00Z","timestamp":1611705600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J. Imaging"],"abstract":"<jats:p>Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersect-over-the-union (IoU), and dice. Which should be used, is it useful to consider simple variations, such as modifying formula coefficients? How do characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), liver in computer tomography images (CT) and diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and variations, as well as segmentation scores of different targets. We first describe the limitations of metrics, since loss is a metric, then we describe and test alternatives. Experimentally, we observed that DeeplabV3 outperforms UNet and fully convolutional network (FCN) in all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy over all datasets, IoU improved 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false positive vs. false negative weights improved MRI by 12 pp, and assigning zero weight to background improved EFI by 6 pp. Multiclass segmentation scored higher than n-uniclass segmentation in MRI by 8 pp. 
EFI lesions score low compared to more constant structures (e.g., optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, that it is worth assigning zero weight to the background class, and that different weights on false positives and false negatives are worth testing.<\/jats:p>","DOI":"10.3390\/jimaging7020016","type":"journal-article","created":{"date-parts":[[2021,1,27]],"date-time":"2021-01-27T12:20:26Z","timestamp":1611750026000},"page":"16","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems"],"prefix":"10.3390","volume":"7","author":[{"given":"Pedro","family":"Furtado","sequence":"first","affiliation":[{"name":"Dei\/FCT\/CISUC, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2021,1,27]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"633","DOI":"10.1016\/j.nicl.2017.06.016","article-title":"Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks","volume":"15","author":"Chen","year":"2017","journal-title":"NeuroImage Clin."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1016\/j.media.2016.05.004","article-title":"Brain tumor segmentation with deep neural networks","volume":"35","author":"Havaei","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"146","DOI":"10.1016\/j.jneumeth.2016.10.007","article-title":"Fast and robust segmentation of the striatum using deep convolutional neural networks","volume":"274","author":"Choi","year":"2016","journal-title":"J. Neurosci. 
Methods"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"547","DOI":"10.1002\/mp.12045","article-title":"Segmentation of organs-at-risks in head andneck CT images using convolutional neural networks","volume":"44","author":"Ibragimov","year":"2017","journal-title":"Med. Phys."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"442","DOI":"10.1007\/s10278-017-9978-1","article-title":"Performance of an artificial multi-observer deep neural net-work for fully automated segmentation of polycystic kidneys","volume":"30","author":"Kline","year":"2017","journal-title":"J. Digit. Imaging"},{"key":"ref_6","first-page":"1077","article-title":"Deformable MR prostate segmentation viadeep feature learning and sparse patch matching","volume":"35","author":"Guo","year":"2016","journal-title":"IEEE Trans. MedImaging"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/j.media.2018.01.004","article-title":"3D multi-scaleFCN with random modality voxel dropout learning for intervertebraldisc localization and segmentation from multi-modality MR images","volume":"45","author":"Li","year":"2018","journal-title":"Med. Image Anal."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","article-title":"A survey on deep learning in medical image analysis","volume":"42","author":"Litjens","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"20170387","DOI":"10.1098\/rsif.2017.0387","article-title":"Opportunities and obstacles for deep learning in biology andmedicine","volume":"15","author":"Ching","year":"2018","journal-title":"J. R. Soc. Interface"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Porwal, P., Pachade, S., Kamble, R., Kokare, M., Deshmukh, G., Sahasrabuddhe, V., and Meriaudeau, F. (2018). Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research. 
Data, 3.","DOI":"10.3390\/data3030025"},{"key":"ref_11","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs","volume":"40","author":"Chen","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.bspc.2015.04.005","article-title":"Automatic 3D model-based method for liver segmentation in MRI based on active contours and total variation minimization","volume":"20","author":"Bereciartua","year":"2015","journal-title":"Biomed. Sign. Process. 
Control."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1361","DOI":"10.3233\/BME-151434","article-title":"Fully automatic scheme for measuring liver volume in 3D MR images","volume":"26","author":"Le","year":"2015","journal-title":"Bio-Med. Mater. Eng."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"235","DOI":"10.1007\/s11548-016-1498-9","article-title":"Fully automated MR liver volumetry using watershed segmentation coupled with active contouring","volume":"12","author":"Huynh","year":"2018","journal-title":"Int. J. Comput. Assist. Radiol. Surg."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Zhou, X., Takayama, R., Wang, S., Zhou, X., Hara, T., and Fujita, H. (2017, January 11\u201316). Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach. Proceedings of the Medical Imaging 2017: Image Processing, Orlando, FL, USA.","DOI":"10.1117\/12.2254201"},{"key":"ref_20","unstructured":"Bobo, M., Bao, S., Huo, Y., Yao, Y., Virostko, J., Plassard, A., and Landman, B. (2018, January 10\u201315). Fully convolutional neural networks improve abdominal organ segmentation. Proceedings of the Medical Imaging 2018: Image Processing, Houston, TX, USA."},{"key":"ref_21","unstructured":"Larsson, M., Zhang, Y., and Kahl, F. (2016, January 14\u201316). Deepseg: Abdominal organ segmentation using deep convolutional neural networks. Proceedings of the Swedish Symposium on Image Analysis 2016, G\u00f6teborg, Sweden."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Chen, Y., Ruan, D., Xiao, J., Wang, L., Sun, B., Saouaf, R., Yang, W., Li, D., and Fan, Z. (2019). Fully Automated Multi-Organ Segmentation in Abdominal Magnetic Resonance Imaging with Deep Neural Networks. arXiv.","DOI":"10.1002\/mp.14429"},{"key":"ref_23","unstructured":"Groza, V., Brosch, T., Eschweiler, D., Schulz, H., Renisch, S., and Nickisch, H. (2018, January 4\u20136). 
Comparison of deep learning-based techniques for organ segmentation in abdominal CT images. Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Conze, P., Kavur, A., Gall, E., Gezer, N., Meur, Y., Selver, M., and Rousseau, F. (2020). Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. arXiv.","DOI":"10.1016\/j.artmed.2021.102109"},{"key":"ref_25","first-page":"442","article-title":"Pancreas segmentation in MRI using graph-based decision fusion on convolutional neural networks","volume":"Volume 9901","author":"Ourselin","year":"2016","journal-title":"Proceedings of the MICCAI 2016, LNCS"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Prenta\u0161i\u0107, P., and Lon\u010dari\u0107, S. (2015, January 6\u20138). Detection of exudates in fundus photographs using convolutional neural networks. Proceedings of the 2015 9th International Symposium on Image and Signal Processing and Analysis (ISPA), Edinburgh, UK.","DOI":"10.1109\/ISPA.2015.7306056"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Gondal, W.M., K\u00f6hler, J.M., Grzeszick, R., Fink, G.A., and Hirsch, M. (2017, January 17\u201320). Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296646"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"178","DOI":"10.1016\/j.media.2017.04.012","article-title":"Deep image mining for diabetic retinopathy screening","volume":"39","author":"Quellec","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_29","unstructured":"Haloi, M. (2015). Improved microaneurysm detection using deep neural networks. 
arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1273","DOI":"10.1109\/TMI.2016.2526689","article-title":"Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images","volume":"35","author":"Hoyng","year":"2016","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1016\/j.cmpb.2017.10.017","article-title":"An ensemble deep learning based approach for red lesion detection in fundus images","volume":"153","author":"Orlando","year":"2018","journal-title":"Comput. Methods Progr. Biomed."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Shan, J., and Li, L. (2016, January 27\u201329). A deep learning method for microaneurysm detection in fundus images. Proceedings of the 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Washington, DC, USA.","DOI":"10.1109\/CHASE.2016.12"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1026","DOI":"10.1016\/j.media.2014.05.004","article-title":"Exudate detection in color retinal images for mass screening of diabetic retinopathy","volume":"18","author":"Zhang","year":"2014","journal-title":"Med. Image Anal."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Jadon, S. (2020). A survey of loss functions for semantic segmentation. arXiv.","DOI":"10.1109\/CIBCB48159.2020.9277638"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Salehi, S.S., Erdogmus, D., and Gholipour, A. (2017). Tversky loss function for image segmentation using 3D fully convolutional deep networks. International Workshop on Machine Learning in Medical Imaging, Springer.","DOI":"10.1007\/978-3-319-67389-9_44"},{"key":"ref_36","unstructured":"Jurdia, R.E., Petitjean, C., Honeine, P., Cheplygina, V., and Abdallah, F. (2020). 
High-level Prior-based Loss Functions for Medical Image Segmentation: A Survey. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Kavur, A., Sinem, N., Bar\u0131s, M., Conze, P., Groza, V., Pham, D., Chatterjee, S., Ernst, P., Ozkan, S., and Baydar, B. (2020). CHAOS Challenge\u2014Combined (CT-MR) Healthy Abdominal Organ Segmentation. arXiv.","DOI":"10.1016\/j.media.2020.101950"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Deb, K. (2014). Multi-objective optimization. Search Methodologies, Springer.","DOI":"10.1007\/978-1-4614-6940-7_15"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"5129","DOI":"10.1002\/mp.13221","article-title":"A novel MRI segmentation method using CNN-based correction network for MRI-guided adaptive radiotherapy","volume":"45","author":"Fu","year":"2018","journal-title":"Med. Phys."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Chlebus, G., Meine, H., Thoduka, S., Abolmaali, N., van Ginneken, B., Hahn, H., and Schenk, A. (2019). Reducing inter-observer variability and interaction time of MR liver volumetry by combining automatic CNN-based liver segmentation and manual corrections. PLoS ONE, 14.","DOI":"10.1371\/journal.pone.0217228"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"399","DOI":"10.1007\/s11548-016-1501-5","article-title":"Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets","volume":"12","author":"Hu","year":"2017","journal-title":"Int. J. Comput. Assist. Radiol. Surg."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1016\/j.media.2019.04.005","article-title":"Abdominal multi-organ segmentation with organ-attention networks and statistical fusion","volume":"55","author":"Wang","year":"2019","journal-title":"Med. 
Image Anal."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Roth, R., Shen, C., Oda, H., Sugino, T., Oda, M., Hayashi, H., Misawa, K., and Mori, K. (2018, January 16\u201320). A multi-scale pyramid of 3D fully convolutional networks for abdominal multi-organ segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain.","DOI":"10.1007\/978-3-030-00937-3_48"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Gibson, E., Giganti, F., Hu, Y., Bonmati, E., Bandula, S., Gurusamy, K., Davidson, B., Pereira, S., Clarkson, M., and Barratt, D. (2017, January 11\u201313). Towards image-guided pancreas and biliary endoscopy: Automatic multi-organ segmentation on abdominal ct with dense dilated networks. Proceedings of the MICCAI, Quebec City, QC, Canada.","DOI":"10.1007\/978-3-319-66182-7_83"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Kim, J., and Lee, J. (2019, January 7\u20139). Deep-learning-based fast and fully automated segmentation on abdominal multiple organs from CT. 
Proceedings of the International Forum on Medical Imaging in Asia 2019, Singapore.","DOI":"10.1117\/12.2521689"}],"container-title":["Journal of Imaging"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2313-433X\/7\/2\/16\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:16:03Z","timestamp":1760159763000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2313-433X\/7\/2\/16"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,1,27]]},"references-count":45,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2021,2]]}},"alternative-id":["jimaging7020016"],"URL":"https:\/\/doi.org\/10.3390\/jimaging7020016","relation":{},"ISSN":["2313-433X"],"issn-type":[{"value":"2313-433X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1,27]]}}}