{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,23]],"date-time":"2026-02-23T16:31:27Z","timestamp":1771864287582,"version":"3.50.1"},"reference-count":36,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T00:00:00Z","timestamp":1651104000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T00:00:00Z","timestamp":1651104000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Laparoscopic surgery also know as minimally invasive surgery (MIS), is a type of surgical procedure that allows a surgeon to examine the organs inside of the abdomen without having to make large incisions in the skin. It unifies the competence and skills of highly trained surgeons with the power and precision of machines. Furthermore, surgical instruments are inserted through the abdomen with the help of a laparoscope, which is a tube with a high-intensity light and a high-resolution camera at the end. In addition, recorded videos from this type of surgery have become a steadily more important information source. However, MIS videos are often very long, thereby, navigating through these videos is time and effort consuming. The automatic identification of tool presence in laparoscopic videos leads to detecting what tools are used at each time in surgery and helps in the automatic recognition of surgical workflow. The aim of this paper is to predict surgical tools from laparoscopic videos using three states of the arts CNNs, namely: VGG19, Inception v-4, and NASNet-A. In addition, an ensemble learning method is proposed, combining the three CNNs, to solve the tool presence detection problem as a multi-label classification problem. The proposed methods are evaluated on a dataset of 80 cholecystectomy videos (Cholec80 dataset). The results present an improvement of approximately 6.19% and a mean average precision of 97.84% when the ensemble learning method is applied.<\/jats:p>","DOI":"10.1186\/s40537-022-00602-6","type":"journal-article","created":{"date-parts":[[2022,4,28]],"date-time":"2022-04-28T13:07:37Z","timestamp":1651151257000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["The impact of ensemble learning on surgical tools classification during laparoscopic cholecystectomy"],"prefix":"10.1186","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8825-5427","authenticated-orcid":false,"given":"Jaafar","family":"Jaafari","sequence":"first","affiliation":[]},{"given":"Samira","family":"Douzi","sequence":"additional","affiliation":[]},{"given":"Khadija","family":"Douzi","sequence":"additional","affiliation":[]},{"given":"Badr","family":"Hssina","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,28]]},"reference":[{"issue":"4","key":"602_CR1","doi-asserted-by":"publisher","first-page":"531","DOI":"10.1093\/bja\/87.4.531","volume":"87","author":"F Carli","year":"2001","unstructured":"Carli F, et al. 
Editorial I: Measuring the outcome of surgical procedures: what are the challenges? Br J Anaesth. 2001;87(4):531\u20133.","journal-title":"Br J Anaesth"},{"issue":"3","key":"602_CR2","doi-asserted-by":"publisher","first-page":"828","DOI":"10.1016\/j.jsurg.2017.09.027","volume":"75","author":"P Mota","year":"2018","unstructured":"Mota P, Carvalho N, Carvalho-Dias E, Joao Costa M, Correia-Pinto J, Lima E. Video-based surgical learning: improving trainee education and preparation for surgery. J Surg Educ. 2018;75(3):828\u201335. https:\/\/doi.org\/10.1016\/j.jsurg.2017.09.027.","journal-title":"J Surg Educ"},{"key":"602_CR3","doi-asserted-by":"publisher","first-page":"2909","DOI":"10.1007\/s00464-012-2284-6","volume":"26","author":"KR Henken","year":"2012","unstructured":"Henken KR, Jansen FW, Klein J, Stassen LPS, Dankelman J, van den Dobbelsteen JJ. Implications of the law on video recording in clinical practice. Surg Endosc. 2012;26:2909\u201316. https:\/\/doi.org\/10.1007\/s00464-012-2284-6.","journal-title":"Surg Endosc"},{"key":"602_CR4","unstructured":"Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition; 2014. arXiv:1409.1556."},{"key":"602_CR5","doi-asserted-by":"crossref","unstructured":"Szegedy C, Ioffe S, Vanhoucke V, Alemi A. Inception-v4, inception-ResNet and the impact of residual connections on learning. In: AAAI conference on artificial intelligence. 2016.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"602_CR6","doi-asserted-by":"publisher","unstructured":"Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. In: IEEE\/CVF conference on computer vision and pattern recognition. 2018. p. 8697\u2013710. https:\/\/doi.org\/10.1109\/CVPR.2018.00907.","DOI":"10.1109\/CVPR.2018.00907"},{"key":"602_CR7","doi-asserted-by":"publisher","unstructured":"Li L, Huang H, Jin X. AE-CNN classification of pulmonary tuberculosis based on CT images. In: 2018 9th international conference on information technology in medicine and education (ITME); 2018. https:\/\/doi.org\/10.1109\/itme.2018.00020.","DOI":"10.1109\/itme.2018.00020"},{"key":"602_CR8","doi-asserted-by":"publisher","unstructured":"Xiao Z, Huang R, Ding Y, Lan T, Dong R, Qin Z, Zhang X, Wang W. A deep learning-based segmentation method for brain tumor in MR images. In: 2016 IEEE 6th international conference on computational advances in bio and medical sciences (ICCABS); 2016. https:\/\/doi.org\/10.1109\/iccabs.2016.7802771.","DOI":"10.1109\/iccabs.2016.7802771"},{"key":"602_CR9","doi-asserted-by":"publisher","unstructured":"Joshi S, Gore S. Ishemic stroke lesion segmentation by analyzing MRI images using dilated and transposed convolutions in convolutional neural networks. In: 2018 fourth international conference on computing communication control and automation (ICCUBEA); 2018. https:\/\/doi.org\/10.1109\/iccubea.2018.8697545.","DOI":"10.1109\/iccubea.2018.8697545"},{"key":"602_CR10","doi-asserted-by":"publisher","unstructured":"Ye J, Luo Y, Zhu C, Liu F, Zhang Y. Breast cancer image classification on WSI with spatial correlations. In: ICASSP 2019\u20142019 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2019. https:\/\/doi.org\/10.1109\/icassp.2019.8682560.","DOI":"10.1109\/icassp.2019.8682560"},{"key":"602_CR11","doi-asserted-by":"crossref","unstructured":"Kiruthika M, Swapna TR, Santhosh Kumar C, Peeyush KP. Artery and vein classification for hypertensive retinopathy. 
In: 2019 3rd international conference on trends in electronics and informatics.","DOI":"10.1109\/ICOEI.2019.8862719"},{"key":"602_CR12","doi-asserted-by":"publisher","first-page":"228853","DOI":"10.1109\/ACCESS.2020.3046258","volume":"8","author":"P Shi","year":"2020","unstructured":"Shi P, Zhao Z, Hu S, Chang F. Real-time surgical tool detection in minimally invasive surgery based on attention-guided convolutional neural network. IEEE Access. 2020;8:228853\u201362. https:\/\/doi.org\/10.1109\/ACCESS.2020.3046258.","journal-title":"IEEE Access"},{"key":"602_CR13","doi-asserted-by":"publisher","unstructured":"Wang S, Raju A, Huang J. Deep learning based multi-label classification for surgical tool presence detection in laparoscopic videos. In: 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017); 2017. p. 620\u20133. https:\/\/doi.org\/10.1109\/ISBI.2017.7950597.","DOI":"10.1109\/ISBI.2017.7950597"},{"key":"602_CR14","doi-asserted-by":"publisher","unstructured":"Kletz S, Schoeffmann K, Benois-Pineau J, Husslein H. Identifying surgical instruments in laparoscopy using deep learning instance segmentation. In: International conference on content-based multimedia indexing (CBMI). 2019. p. 1\u20136. https:\/\/doi.org\/10.1109\/CBMI.2019.8877379.","DOI":"10.1109\/CBMI.2019.8877379"},{"key":"602_CR15","doi-asserted-by":"publisher","first-page":"405","DOI":"10.1515\/cdbme-2019-0102","volume":"5","author":"N Jalal","year":"2019","unstructured":"Jalal N, Alshirbaji T, M\u00f6ller K. Predicting surgical phases using CNN-NARX neural network. Curr Dir Biomed Eng. 2019;5:405\u20137. https:\/\/doi.org\/10.1515\/cdbme-2019-0102.","journal-title":"Curr Dir Biomed Eng"},{"key":"602_CR16","doi-asserted-by":"publisher","first-page":"181723","DOI":"10.1109\/ACCESS.2020.3028910","volume":"8","author":"G Wang","year":"2020","unstructured":"Wang G, Wang S. Surgical tools detection based on training sample adaptation in laparoscopic videos. IEEE Access. 2020;8:181723\u201332. https:\/\/doi.org\/10.1109\/ACCESS.2020.3028910.","journal-title":"IEEE Access"},{"key":"602_CR17","doi-asserted-by":"publisher","first-page":"23748","DOI":"10.1109\/ACCESS.2020.2969885","volume":"8","author":"B Zhang","year":"2020","unstructured":"Zhang B, Wang S, Dong L, Chen P. Surgical tools detection based on modulated anchoring network in laparoscopic videos. IEEE Access. 2020;8:23748\u201358. https:\/\/doi.org\/10.1109\/ACCESS.2020.2969885.","journal-title":"IEEE Access"},{"key":"602_CR18","unstructured":"Namazi B, et al. LapTool-Net: a contextual detector of surgical tools in laparoscopic videos based on recurrent convolutional neural networks; 2019. arXiv:1905.08983."},{"key":"602_CR19","doi-asserted-by":"publisher","unstructured":"Chittajallu DR, Dong B, Tunison P, Collins R, Wells K, Fleshman J, Sankaranarayanan G, Schwaitzberg S, Cavuoto L, Enquobahrie A. XAI-CBIR: explainable AI system for content based retrieval of video frames from minimally invasive surgery videos. In: 2019 IEEE 16th international symposium on biomedical imaging (ISBI 2019); 2019. https:\/\/doi.org\/10.1109\/isbi.2019.8759428.","DOI":"10.1109\/isbi.2019.8759428"},{"key":"602_CR20","doi-asserted-by":"crossref","unstructured":"Kletz S, et al. Identifying surgical instruments in laparoscopy using deep learning instance segmentation. In: 2019 international conference on content-based multimedia indexing (CBMI); 2019. p. 
1\u20136.","DOI":"10.1109\/CBMI.2019.8877379"},{"key":"602_CR21","doi-asserted-by":"publisher","unstructured":"Shvets A, Rakhlin A, Kalinin A, Iglovikov V. Automatic instrument segmentation in robot-assisted surgery using deep learning. In: 2018 17th IEEE international conference on machine learning and applications (ICMLA); 2018. p. 624\u20138. https:\/\/doi.org\/10.1109\/ICMLA.2018.00100.","DOI":"10.1109\/ICMLA.2018.00100"},{"key":"602_CR22","doi-asserted-by":"publisher","unstructured":"Kanakatte A, Ramaswamy A, Gubbi J, Ghose A, Purushothaman B. Surgical tool segmentation and localization using spatio-temporal deep network. In: 2020 42nd annual international conference of the IEEE engineering in medicine & biology society (EMBC); 2020. p. 1658\u201361. https:\/\/doi.org\/10.1109\/EMBC44109.2020.9176676.","DOI":"10.1109\/EMBC44109.2020.9176676"},{"key":"602_CR23","doi-asserted-by":"publisher","DOI":"10.1515\/cdbme-2020-0002","author":"TA lshirbaji","year":"2020","unstructured":"lshirbaji TA, et al. A convolutional neural network with a two-stage LSTM model for tool presence detection in laparoscopic videos. Curr Dir Biomed Eng. 2020. https:\/\/doi.org\/10.1515\/cdbme-2020-0002.","journal-title":"Curr Dir Biomed Eng"},{"key":"602_CR24","doi-asserted-by":"publisher","DOI":"10.1007\/s12525-021-00475-2","author":"C Janiesch","year":"2021","unstructured":"Janiesch C, Zschech P, Heinrich K. Machine learning and deep learning. Electron Mark. 2021. https:\/\/doi.org\/10.1007\/s12525-021-00475-2.","journal-title":"Electron Mark"},{"key":"602_CR25","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-014-0007-7","volume":"2","author":"MM Najafabadi","year":"2015","unstructured":"Najafabadi MM, Villanustre F, Khoshgoftaar TM, et al. Deep learning applications and challenges in big data analytics. J Big Data. 2015;2:1. https:\/\/doi.org\/10.1186\/s40537-014-0007-7.","journal-title":"J Big Data"},{"key":"602_CR26","unstructured":"Thompson NC, Greenewald KH, Lee K, Manso GF. The computational limits of deep learning; 2020. arXiv:2007.05558."},{"key":"602_CR27","doi-asserted-by":"publisher","first-page":"85","DOI":"10.1016\/j.neunet.2014.09.003","volume":"61","author":"J\u00fcrgen Schmidhuber","year":"2015","unstructured":"Schmidhuber J\u00fcrgen. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85\u2013117. https:\/\/doi.org\/10.1016\/j.neunet.2014.09.003.","journal-title":"Neural Netw"},{"key":"602_CR28","doi-asserted-by":"publisher","first-page":"9","DOI":"10.1186\/s40537-016-0043-6","volume":"3","author":"K Weiss","year":"2016","unstructured":"Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. J Big Data. 2016;3:9. https:\/\/doi.org\/10.1186\/s40537-016-0043-6.","journal-title":"J Big Data"},{"key":"602_CR29","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. Imagenet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition; 2009. p. 248\u201355.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"602_CR30","series-title":"Lecture notes in computer science","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10602-1_48","volume-title":"Computer vision\u2013ECCV 2014. ECCV 2014","author":"TY Lin","year":"2014","unstructured":"Lin TY, et al. Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, et al., editors. Computer vision\u2013ECCV 2014. ECCV 2014, vol. 8693. Lecture notes in computer science. Cham: Springer; 2014. 
https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48."},{"key":"602_CR31","unstructured":"Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th international conference on neural information processing systems (NIPS\u201912), Vol. 1. Red Hook: Curran Associates Inc.; 2012. p. 1097\u2013105."},{"issue":"1","key":"602_CR32","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/tmi.2016.2593957","volume":"36","author":"AP Twinanda","year":"2017","unstructured":"Twinanda AP, Shehata S, Mutter D, Marescaux J, de Mathelin M, Padoy N. EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imaging. 2017;36(1):86\u201397. https:\/\/doi.org\/10.1109\/tmi.2016.2593957.","journal-title":"IEEE Trans Med Imaging"},{"key":"602_CR33","unstructured":"Sahu M, Mukhopadhyay A, Szengel A, Zachow S. Tool and phase recognition using contextual CNN features; 2016. arXiv:1610.08854."},{"key":"602_CR34","doi-asserted-by":"crossref","unstructured":"Jin A, Yeung S, Jopling J, Krause J, Azagury D, Milstein A, Fei-Fei L. Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks. In: 2018 IEEE winter conference on applications of computer vision (WACV). 2018.","DOI":"10.1109\/WACV.2018.00081"},{"key":"602_CR35","doi-asserted-by":"publisher","first-page":"2865","DOI":"10.3390\/app9142865","volume":"9","author":"K Jo","year":"2019","unstructured":"Jo K, Choi Y, Choi J, Chung JW. Robust real-time detection of laparoscopic instruments in robot surgery using convolutional neural networks with motion vector prediction. Appl Sci. 2019;9:2865.","journal-title":"Appl Sci"},{"issue":"2","key":"602_CR36","doi-asserted-by":"publisher","first-page":"318","DOI":"10.1109\/TPAMI.2018.2858826","volume":"42","author":"TY Lin","year":"2020","unstructured":"Lin TY, Goyal P, Girshick R, He K, Dollar P. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell. 2020;42(2):318\u201327. 
https:\/\/doi.org\/10.1109\/TPAMI.2018.2858826.","journal-title":"IEEE Trans Pattern Anal Mach Intell"}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-022-00602-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-022-00602-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-022-00602-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,3]],"date-time":"2023-02-03T22:06:01Z","timestamp":1675461961000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-022-00602-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,28]]},"references-count":36,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["602"],"URL":"https:\/\/doi.org\/10.1186\/s40537-022-00602-6","relation":{},"ISSN":["2196-1115"],"issn-type":[{"value":"2196-1115","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,28]]},"assertion":[{"value":"13 September 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 April 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 April 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The author confirms sole responsibility for this manuscript. The author read and approved the final manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article\u2019s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article\u2019s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https:\/\/creativecommons.org\/licenses\/by\/4.0.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"49"}}
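The abstract in the record above treats tool presence detection on Cholec80 as a multi-label problem and fuses VGG19, Inception-v4, and NASNet-A via ensemble learning. Below is a minimal sketch of one plausible fusion rule, assuming simple averaging of per-tool sigmoid probabilities over the seven Cholec80 tool classes; the function name, the 0.5 threshold, and the averaging rule are illustrative assumptions, not the paper's confirmed combination scheme.

```python
import numpy as np

# Cholec80 annotates seven surgical tools per frame; several may appear
# at once, so presence detection is a multi-label problem.
TOOLS = ["Grasper", "Bipolar", "Hook", "Scissors",
         "Clipper", "Irrigator", "SpecimenBag"]

def ensemble_predict(p_vgg19, p_inception_v4, p_nasnet_a, threshold=0.5):
    """Fuse per-frame tool probabilities from three CNNs.

    Each input is an (n_frames, 7) array of per-tool sigmoid outputs.
    Simple averaging is an assumed fusion rule, for illustration only.
    """
    fused = np.mean([p_vgg19, p_inception_v4, p_nasnet_a], axis=0)
    present = fused >= threshold  # independent per-tool decision
    return fused, present

# Toy usage with random stand-ins for the three networks' outputs.
rng = np.random.default_rng(0)
p1, p2, p3 = (rng.random((4, len(TOOLS))) for _ in range(3))
scores, present = ensemble_predict(p1, p2, p3)
for frame, row in enumerate(present):
    print(f"frame {frame}: {[t for t, on in zip(TOOLS, row) if on]}")
```

Averaging scores and thresholding each tool independently is a common multi-label ensemble baseline; a ranking metric such as the paper's reported mean average precision would be computed from the fused scores rather than from the thresholded labels.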