{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,31]],"date-time":"2026-01-31T04:28:15Z","timestamp":1769833695101,"version":"3.49.0"},"reference-count":41,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2020,10,22]],"date-time":"2020-10-22T00:00:00Z","timestamp":1603324800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2017R1C1B5074062"],"award-info":[{"award-number":["NRF-2017R1C1B5074062"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2020R1A2C1006179"],"award-info":[{"award-number":["NRF-2020R1A2C1006179"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2016M3A9E1915855"],"award-info":[{"award-number":["NRF-2016M3A9E1915855"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In vivo diseases such as colorectal cancer and gastric cancer are increasingly occurring in humans. These are two of the most common types of cancer that cause death worldwide. Therefore, the early detection and treatment of these types of cancer are crucial for saving lives. With the advances in technology and image processing techniques, computer-aided diagnosis (CAD) systems have been developed and applied in several medical systems to assist doctors in diagnosing diseases using imaging technology. In this study, we propose a CAD method to preclassify the in vivo endoscopic images into negative (images without evidence of a disease) and positive (images that possibly include pathological sites such as a polyp or suspected regions including complex vascular information) cases. The goal of our study is to assist doctors in focusing on the positive frames of an endoscopic sequence rather than the negative frames. Consequently, we can help enhance the performance and reduce the effort of doctors in the diagnosis procedure. Although previous studies were conducted to solve this problem, they were mostly based on a single classification model, thus limiting the classification performance. Therefore, we propose the use of multiple classification models based on ensemble learning techniques to enhance the performance of pathological site classification. Through experiments with an open database, we confirmed that an ensemble of multiple deep learning-based models with different network architectures is more efficient for enhancing the performance of pathological site classification using a CAD system than the state-of-the-art methods.<\/jats:p>","DOI":"10.3390\/s20215982","type":"journal-article","created":{"date-parts":[[2020,10,22]],"date-time":"2020-10-22T20:51:00Z","timestamp":1603399860000},"page":"5982","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":25,"title":["Enhanced Image-Based Endoscopic Pathological Site Classification Using an Ensemble of Deep Learning Models"],"prefix":"10.3390","volume":"20","author":[{"given":"Dat Tien","family":"Nguyen","sequence":"first","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Min Beom","family":"Lee","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Tuyen Danh","family":"Pham","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Ganbayar","family":"Batchuluun","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1868-5207","authenticated-orcid":false,"given":"Muhammad","family":"Arsalan","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Kang Ryoung","family":"Park","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2020,10,22]]},"reference":[{"key":"ref_1","unstructured":"(2020, July 30). World Health Organization\u2013Cancer Report. Available online: https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/cancer."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1016\/j.patrec.2019.11.013","article-title":"Deep-learning framework to detect lung abnormality\u2014A study with chest x-ray and lung CT scan images","volume":"129","author":"Bhandary","year":"2020","journal-title":"Pattern Recognit. Lett."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"67905","DOI":"10.1109\/ACCESS.2019.2918224","article-title":"Using a noisy U-Net for detecting lung nodule candidates","volume":"7","author":"Huang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"340","DOI":"10.1016\/j.patcog.2018.02.012","article-title":"Automatic breast ultrasound image segmentation: A survey","volume":"79","author":"Xian","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.ultras.2018.07.006","article-title":"Medical breast ultrasound image segmentation by machine learning","volume":"91","author":"Xu","year":"2019","journal-title":"Ultrasonics"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"2188","DOI":"10.1109\/TMI.2019.2902600","article-title":"High-contrast, low-cost, 3-D visualization of skin cancer using ultra-high-resolution millimeter-wave imaging","volume":"38","author":"Oppelaar","year":"2019","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"2482","DOI":"10.1109\/TMI.2020.2972964","article-title":"A mutual bootstrapping model for automated skin lesion segmentation and classification","volume":"39","author":"Xie","year":"2020","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.gii.2014.02.005","article-title":"Early detection of early gastric cancer using image-enhanced endoscopy: Current trends","volume":"3","author":"Song","year":"2014","journal-title":"Gastrointest. Interv."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"806","DOI":"10.1016\/j.gie.2018.11.011","article-title":"Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy","volume":"89","author":"Zhu","year":"2019","journal-title":"Gastrointest. Endosc."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Li, Y., Li, X., Xie, X., and Shen, L. (2018, January 4\u20137). Deep learning based gastric cancer identification. Proceedings of the IEEE International Symposium on Biomedical Imaging, Washington, DC, USA.","DOI":"10.1109\/ISBI.2018.8363550"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Patino-Barrientos, S., Sierra-Sosa, D., Garcia-Zapirain, B., Castillo-Olea, C., and Elmaghraby, A. (2020). Kudo\u2019s classification for colon polyp assessment using a deep learning approach. Appl. Sci., 10.","DOI":"10.3390\/app10020501"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"30","DOI":"10.4103\/jpi.jpi_34_17","article-title":"Deep Learning for Classification of Colorectal Polyps on Whole-slide Images","volume":"8","author":"Hassanpour","year":"2017","journal-title":"J. Pathol. Inform."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Ribeiro, E., Uhl, A., and Hafner, M. (2016, January 20\u201324). Colonic Polyp Classification with Convolutional Neural Networks. Proceedings of the IEEE 29th International Symposium on Computer-Based Medical Systems, Dublin, Ireland.","DOI":"10.1109\/CBMS.2016.39"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"101619","DOI":"10.1016\/j.media.2019.101619","article-title":"Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps","volume":"60","author":"Wickstrom","year":"2020","journal-title":"Med. Image Anal."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ribeiro, E., Uhl, A., Wimmer, G., and Hafner, M. (2016). Exploring deep learning and transfer learning for colonic polyp classification. Comput. Math. Method Med., 6584725.","DOI":"10.1155\/2016\/6584725"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Fonolla, R., Sommen, F., Schreuder, R.M., Schoon, E.J., and With, P. (2019, January 8\u201311). Multi-modal classification of polyp malignancy using CNN features with balanced class augmentation. Proceedings of the IEEE International Symposium on Biomedical Imaging, Venice, Italy.","DOI":"10.1109\/ISBI.2019.8759320"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1016\/j.ultras.2016.09.011","article-title":"A pretrained convolutional neural network based method for thyroid nodule diagnosis","volume":"73","author":"Ma","year":"2017","journal-title":"Ultrasonics"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Pham, D.T., Batchuluun, G., Yoon, H.S., and Park, K.R. (2019). Artificial Intelligence-based thyroid nodule classification using information from spatial and frequency domains. J. Clin. Med., 8.","DOI":"10.3390\/jcm8111976"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"477","DOI":"10.1007\/s10278-017-9997-y","article-title":"Thyroid nodule classification in ultrasound images by fine-tuning deep convolutional neural network","volume":"30","author":"Chi","year":"2017","journal-title":"J. Digit. Imaging"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1016\/j.media.2016.05.004","article-title":"Brain tumor segmentation with deep neural networks","volume":"35","author":"Havaei","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"92","DOI":"10.1016\/j.cviu.2017.04.002","article-title":"Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound","volume":"16","author":"Milletari","year":"2017","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1016\/j.media.2016.10.004","article-title":"Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation","volume":"36","author":"Kamnitsas","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_24","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012, January 3\u20138). ImageNet classification with deep convolutional neural networks. Proceedings of the Neural Information Processing Systems, Lake Tahoe, NV, USA."},{"key":"ref_25","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional neural networks for large-scale image recognition. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2014). Going deeper with convolutions. arXiv.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_27","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_28","unstructured":"Kowsari, K., Heidarysafa, M., Brown, D.E., Meimandi, K.J., and Barnes, L.E. (2018, January 9\u201311). RMDL: Random multi-model deep learning for classification. Proceedings of the 2nd International Conference on Information System and Data Mining, Lakeland, FL, USA."},{"key":"ref_29","unstructured":"(2020, August 01). Pathological Site Classification Models with Algorithm. Available online: http:\/\/dm.dongguk.edu\/link.html."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"144","DOI":"10.1016\/j.media.2015.10.003","article-title":"Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations","volume":"30","author":"Ye","year":"2016","journal-title":"Med. Image Anal."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Maaten, L., and Weinberger, K.Q. (2016). Densely connected convolutional networks. arXiv.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_32","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv."},{"key":"ref_33","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"1021","DOI":"10.1109\/ACCESS.2018.2886213","article-title":"Generative adversarial network-based method for transforming single RGB image into 3D point cloud","volume":"7","author":"Chu","year":"2018","journal-title":"IEEE Access"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Pham, D.T., Batchuluun, G., Noh, K.J., and Park, K.R. (2020). Presentation attack face image generation based on a deep generative adversarial network. Sensors, 20.","DOI":"10.3390\/s20071810"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Zhu, J.-Y., Park, T., Isola, P., and Efros, A.A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"511","DOI":"10.1542\/pir.31.12.511","article-title":"Research and statistics: Sensitivity, specificity, predictive values, and likelihood ratios","volume":"31","author":"Carvajal","year":"2010","journal-title":"Pediatr. Rev."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Nguyen, D.T., Kang, J.K., Pham, D.T., Batchuluun, G., and Park, K.R. (2020). Ultrasound image-based diagnosis of malignant thyroid nodule using artificial intelligence. Sensors, 20.","DOI":"10.3390\/s20071822"},{"key":"ref_39","unstructured":"(2019, September 20). Tensorflow Deep-Learning Library. Available online: https:\/\/www.tensorflow.org\/."},{"key":"ref_40","unstructured":"(2019, September 20). NVIDIA TitanX GPU. Available online: https:\/\/www.nvidia.com\/en-us\/geforce\/products\/10series\/titan-x-pascal\/."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2016). Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv.","DOI":"10.1109\/ICCV.2017.74"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/21\/5982\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T10:26:09Z","timestamp":1760178369000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/21\/5982"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,22]]},"references-count":41,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2020,11]]}},"alternative-id":["s20215982"],"URL":"https:\/\/doi.org\/10.3390\/s20215982","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,10,22]]}}}