{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T21:05:28Z","timestamp":1774386328743,"version":"3.50.1"},"reference-count":62,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2022,8,13]],"date-time":"2022-08-13T00:00:00Z","timestamp":1660348800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Science Foundation of China","doi-asserted-by":"publisher","award":["61875102"],"award-info":[{"award-number":["61875102"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Science Foundation of China","doi-asserted-by":"publisher","award":["JCYJ20180508152528735"],"award-info":[{"award-number":["JCYJ20180508152528735"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Science Foundation of China","doi-asserted-by":"publisher","award":["HW2018007"],"award-info":[{"award-number":["HW2018007"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Science Foundation of China","doi-asserted-by":"publisher","award":["2020Z99CFZ023"],"award-info":[{"award-number":["2020Z99CFZ023"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Science and Technology Research Program of Shenzhen City","award":["61875102"],"award-info":[{"award-number":["61875102"]}]},{"name":"Science and Technology Research Program of Shenzhen City","award":["JCYJ20180508152528735"],"award-info":[{"award-number":["JCYJ20180508152528735"]}]},{"name":"Science and Technology Research Program of Shenzhen City","award":["HW2018007"],"award-info":[{"award-number":["HW2018007"]}]},{"name":"Science and Technology 
Research Program of Shenzhen City","award":["2020Z99CFZ023"],"award-info":[{"award-number":["2020Z99CFZ023"]}]},{"name":"Tsinghua University","award":["61875102"],"award-info":[{"award-number":["61875102"]}]},{"name":"Tsinghua University","award":["JCYJ20180508152528735"],"award-info":[{"award-number":["JCYJ20180508152528735"]}]},{"name":"Tsinghua University","award":["HW2018007"],"award-info":[{"award-number":["HW2018007"]}]},{"name":"Tsinghua University","award":["2020Z99CFZ023"],"award-info":[{"award-number":["2020Z99CFZ023"]}]},{"name":"Tsinghua University Spring Breeze Fund","award":["61875102"],"award-info":[{"award-number":["61875102"]}]},{"name":"Tsinghua University Spring Breeze Fund","award":["JCYJ20180508152528735"],"award-info":[{"award-number":["JCYJ20180508152528735"]}]},{"name":"Tsinghua University Spring Breeze Fund","award":["HW2018007"],"award-info":[{"award-number":["HW2018007"]}]},{"name":"Tsinghua University Spring Breeze Fund","award":["2020Z99CFZ023"],"award-info":[{"award-number":["2020Z99CFZ023"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully-supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of patches in an image are labeled as \u2018tumor\u2019 or \u2018normal\u2019. The framework consists of a patch-wise segmentation model called PSeger, and an innovative semi-supervised algorithm. PSeger has two branches for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduce the risk of overfitting when learning sparsely annotated data. 
We incorporate the idea of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to the fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.<\/jats:p>","DOI":"10.3390\/s22166053","type":"journal-article","created":{"date-parts":[[2022,8,15]],"date-time":"2022-08-15T23:44:03Z","timestamp":1660607043000},"page":"6053","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8157-2814","authenticated-orcid":false,"given":"Yiqing","family":"Liu","sequence":"first","affiliation":[{"name":"Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China"}]},{"given":"Qiming","family":"He","sequence":"additional","affiliation":[{"name":"Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2890-4963","authenticated-orcid":false,"given":"Hufei","family":"Duan","sequence":"additional","affiliation":[{"name":"Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China"}]},{"given":"Huijuan","family":"Shi","sequence":"additional","affiliation":[{"name":"Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3357-5348","authenticated-orcid":false,"given":"Anjia","family":"Han","sequence":"additional","affiliation":[{"name":"Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China"}]},{"given":"Yonghong","family":"He","sequence":"additional","affiliation":[{"name":"Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1301","DOI":"10.1038\/s41591-019-0508-1","article-title":"Clinical-grade computational pathology using weakly supervised deep learning on whole slide images","volume":"25","author":"Campanella","year":"2019","journal-title":"Nat. Med."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"555","DOI":"10.1038\/s41551-020-00682-w","article-title":"Data-efficient and weakly supervised computational pathology on whole-slide images","volume":"5","author":"Lu","year":"2021","journal-title":"Nat. Biomed. Eng."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1559","DOI":"10.1038\/s41591-018-0177-5","article-title":"Classification and mutation prediction from non\u2013small cell lung cancer histopathology images using deep learning","volume":"24","author":"Coudray","year":"2018","journal-title":"Nat. Med."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1519","DOI":"10.1038\/s41591-019-0583-3","article-title":"Deep learning-based classification of mesothelioma improves prediction of patient outcome","volume":"25","author":"Courtiol","year":"2019","journal-title":"Nat. 
Med."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1054","DOI":"10.1038\/s41591-019-0462-y","article-title":"Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer","volume":"25","author":"Kather","year":"2019","journal-title":"Nat. Med."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"106","DOI":"10.1038\/s41586-021-03512-4","article-title":"AI-based pathology predicts origins for cancers of unknown primary","volume":"594","author":"Lu","year":"2021","journal-title":"Nature"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"5727","DOI":"10.1038\/s41467-020-19334-3","article-title":"Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains","volume":"11","author":"Naik","year":"2020","journal-title":"Nat. Commun."},{"key":"ref_8","unstructured":"Wang, D., Khosla, A., Gargeya, R., Irshad, H., and Beck, A.H. (2016). Deep learning for identifying metastatic breast cancer. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.media.2019.03.014","article-title":"Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features","volume":"55","author":"Qaiser","year":"2019","journal-title":"Med. Image Anal."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Ni, H., Liu, H., Wang, K., Wang, X., Zhou, X., and Qian, Y. (2019). WSI-Net: Branch-based and hierarchy-aware network for segmentation and classification of breast histopathological whole-slide images. International Workshop on Machine Learning in Medical Imaging, Springer.","DOI":"10.1007\/978-3-030-32692-0_5"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Hou, L., Samaras, D., Kurc, T.M., Gao, Y., Davis, J.E., and Saltz, J.H. (2016, January 27\u201330). Patch-based convolutional neural network for whole slide tissue image classification. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.266"},{"key":"ref_12","unstructured":"Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G.E., Kohlberger, T., Boyko, A., Venugopalan, S., Timofeev, A., Nelson, P.Q., and Corrado, G.S. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"4605","DOI":"10.2147\/CMAR.S312608","article-title":"Deep learning-based multi-class classification of breast digital pathology images","volume":"13","author":"Mi","year":"2021","journal-title":"Cancer Manag. Res."},{"key":"ref_14","unstructured":"Li, Z., Tao, R., Wu, Q., and Li, B. (2019). Da-refinenet: A dual input whole slide image segmentation algorithm based on attention. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Dong, N., Kampffmeyer, M., Liang, X., Wang, Z., Dai, W., and Xing, E. (2018). Reinforced auto-zoom net: Towards accurate and fast breast cancer segmentation in whole-slide images. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer.","DOI":"10.1007\/978-3-030-00889-5_36"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"101890","DOI":"10.1016\/j.media.2020.101890","article-title":"HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images","volume":"68","author":"Balkenhol","year":"2021","journal-title":"Med. Image Anal."},{"key":"ref_17","unstructured":"Chan, L., Hosseini, M.S., Rowsell, C., Plataniotis, K.N., and Damaskinos, S. (November, January 27). Histosegnet: Semantic segmentation of histological tissue type in whole slide images. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"101914","DOI":"10.1016\/j.media.2020.101914","article-title":"A hybrid network for automatic hepatocellular carcinoma segmentation in H&E-stained whole slide images","volume":"68","author":"Wang","year":"2021","journal-title":"Med. Image Anal."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Cho, S., Jang, H., Tan, J.W., and Jeong, W.K. (2021, January 13\u201316). DeepScribble: Interactive Pathology Image Segmentation Using Deep Neural Networks with Scribbles. Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France.","DOI":"10.1109\/ISBI48211.2021.9434105"},{"key":"ref_20","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_22","unstructured":"Tarvainen, A., and Valpola, H. (2017, January 4\u20139). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_23","unstructured":"Yalniz, I.Z., J\u00e9gou, H., Chen, K., Paluri, M., and Mahajan, D. (2019). Billion-scale semi-supervised learning for image classification. 
arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Belharbi, S., Ben Ayed, I., McCaffrey, L., and Granger, E. (2021, January 3\u20138). Deep active learning for joint classification & segmentation with weak annotator. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV48630.2021.00338"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1817","DOI":"10.1109\/TMI.2021.3066295","article-title":"Detection of prostate cancer in whole-slide images through end-to-end training with image-level labels","volume":"40","author":"Pinckaers","year":"2021","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"101861","DOI":"10.1016\/j.compmedimag.2021.101861","article-title":"Histopathology classification and localization of colorectal cancer using global labels by weakly supervised deep learning","volume":"88","author":"Zhou","year":"2021","journal-title":"Comput. Med. Imaging Graph."},{"key":"ref_27","unstructured":"Lin, D., Dai, J., Jia, J., He, K., and Sun, J. (July, January 26). Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Bearman, A., Russakovsky, O., Ferrari, V., and Fei-Fei, L. (2016, January 11\u201314). What is the point: Semantic segmentation with point supervision. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46478-7_34"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"3655","DOI":"10.1109\/TMI.2020.3002244","article-title":"Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images","volume":"39","author":"Qu","year":"2020","journal-title":"IEEE Trans. Med. 
Imaging"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Mahani, G.K., Li, R., Evangelou, N., Sotiropolous, S., Morgan, P.S., French, A.P., and Chen, X. (2022, January 28\u201331). Bounding Box Based Weakly Supervised Deep Convolutional Neural Network for Medical Image Segmentation Using an Uncertainty Guided and Spatially Constrained Loss. Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India.","DOI":"10.1109\/ISBI52829.2022.9761558"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Liang, Y., Yin, Z., Liu, H., Zeng, H., Wang, J., Liu, J., and Che, N. (IEEE ACM Trans. Comput. Biol. Bioinform., 2022). Weakly Supervised Deep Nuclei Segmentation with Sparsely Annotated Bounding Boxes for DNA Image Cytometry, IEEE ACM Trans. Comput. Biol. Bioinform., early access.","DOI":"10.1109\/TCBB.2021.3138189"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"2376","DOI":"10.1109\/TMI.2017.2724070","article-title":"Constrained deep weak supervision for histopathology image segmentation","volume":"36","author":"Jia","year":"2017","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1016\/j.media.2019.02.009","article-title":"Constrained-CNN losses for weakly supervised segmentation","volume":"54","author":"Kervadec","year":"2019","journal-title":"Med. Image Anal."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"30","DOI":"10.1093\/nsr\/nwx105","article-title":"An overview of multi-task learning","volume":"5","author":"Zhang","year":"2018","journal-title":"Natl. Sci. Rev."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Graham, S., Vu, Q.D., Jahanifar, M., Minhas, F., Snead, D., and Rajpoot, N. (2022). One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification. 
arXiv.","DOI":"10.1016\/j.media.2022.102685"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1520","DOI":"10.1109\/TMI.2022.3142321","article-title":"A Fully Automated Multimodal MRI-based Multi-task Learning for Glioma Segmentation and IDH Genotyping","volume":"41","author":"Cheng","year":"2022","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"882","DOI":"10.1038\/s41598-018-37492-9","article-title":"A fast and refined cancer regions segmentation framework in whole-slide breast pathological images","volume":"9","author":"Guo","year":"2019","journal-title":"Sci. Rep."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"771","DOI":"10.1109\/TMI.2021.3123572","article-title":"Semi-Supervised Deep Transfer Learning for Benign-Malignant Diagnosis of Pulmonary Nodules in Chest CT Images","volume":"41","author":"Shi","year":"2021","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"4346","DOI":"10.1109\/TMI.2020.3017007","article-title":"Semixup: In-and out-of-manifold regularization for deep semi-supervised knee osteoarthritis severity grading from plain radiographs","volume":"39","author":"Nguyen","year":"2020","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"1331","DOI":"10.1109\/TMI.2021.3139999","article-title":"Shadow-consistent Semi-supervised Learning for Prostate Ultrasound Segmentation","volume":"41","author":"Xu","year":"2021","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"2629","DOI":"10.1109\/TMI.2021.3053008","article-title":"Few-shot learning by a Cascaded framework with shape-constrained Pseudo label assessment for whole Heart segmentation","volume":"40","author":"Wang","year":"2021","journal-title":"IEEE Trans. Med. 
Imaging"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"491","DOI":"10.1016\/j.neucom.2021.08.051","article-title":"Twin self-supervision based semi-supervised learning (TS-SSL): Retinal anomaly classification in SD-OCT images","volume":"462","author":"Zhang","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Li, D., Yang, J., Kreis, K., Torralba, A., and Fidler, S. (2021, January 19\u201325). Semantic segmentation with generative models: Semi-supervised learning and strong out-of-domain generalization. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Virtual.","DOI":"10.1109\/CVPR46437.2021.00820"},{"key":"ref_44","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_45","unstructured":"Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and J\u00e9gou, H. (2021, January 18\u201324). Training data-efficient image transformers & distillation through attention. Proceedings of the International Conference on Machine Learning, Vienna, Austria."},{"key":"ref_46","unstructured":"Laine, S., and Aila, T. (2016). Temporal ensembling for semi-supervised learning. arXiv."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"523","DOI":"10.1109\/TNNLS.2020.2995319","article-title":"Transformation-consistent self-ensembling model for semisupervised medical image segmentation","volume":"32","author":"Li","year":"2020","journal-title":"IEEE Trans. Neural Netw. Learn. 
Syst."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"3461","DOI":"10.1093\/bioinformatics\/btz083","article-title":"Structured crowdsourcing enables convolutional segmentation of histology images","volume":"35","author":"Amgad","year":"2019","journal-title":"Bioinformatics"},{"key":"ref_49","unstructured":"Loshchilov, I., and Hutter, F. (2017). Decoupled weight decay regularization. arXiv."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018, January 8\u201314). Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Zhou, Z., Siddiquee, M.M.R., Tajbakhsh, N., and Liang, J. (2018). Unet++: A nested u-net architecture for medical image segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer.","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"ref_52","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_53","unstructured":"Tan, M., and Le, Q. (2019, January 10\u201315). Efficientnet: Rethinking model scaling for convolutional neural networks. Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Radosavovic, I., Kosaraju, R.P., Girshick, R., He, K., and Doll\u00e1r, P. (2020, January 13\u201319). Designing network design spaces. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01044"},{"key":"ref_55","unstructured":"Yakubovskiy, P. (2022, June 01). 
Segmentation Models Pytorch. Available online: https:\/\/github.com\/qubvel\/segmentation_models.pytorch."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"3349","DOI":"10.1109\/TPAMI.2020.2983686","article-title":"Deep high-resolution representation learning for visual recognition","volume":"43","author":"Wang","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Yang, L., Zhang, Y., Chen, J., Zhang, S., and Chen, D.Z. (2017). Suggestive annotation: A deep active learning framework for biomedical image segmentation. International Conference on Medical Image Computing And Computer-Assisted Intervention, Springer.","DOI":"10.1007\/978-3-319-66179-7_46"},{"key":"ref_60","unstructured":"Xie, Y., Zhang, J., Shen, C., and Xia, Y. (October, January 27). Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Dalmaz, O., Yurt, M., and \u00c7ukur, T. (2021). ResViT: Residual vision transformers for multi-modal medical image synthesis. 
arXiv.","DOI":"10.1109\/TMI.2022.3167808"},{"key":"ref_62","unstructured":"Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C.A. (2019, January 8\u201314). Mixmatch: A holistic approach to semi-supervised learning. Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/16\/6053\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:08:11Z","timestamp":1760141291000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/16\/6053"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,13]]},"references-count":62,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2022,8]]}},"alternative-id":["s22166053"],"URL":"https:\/\/doi.org\/10.3390\/s22166053","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,13]]}}}