{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,18]],"date-time":"2026-04-18T18:06:28Z","timestamp":1776535588194,"version":"3.51.2"},"reference-count":15,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T00:00:00Z","timestamp":1715904000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T00:00:00Z","timestamp":1715904000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/Y01958X\/1"],"award-info":[{"award-number":["EP\/Y01958X\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/W00805X\/1"],"award-info":[{"award-number":["EP\/W00805X\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Purpose<\/jats:title>\n                <jats:p>The recent segment anything model (SAM) has demonstrated impressive performance with point, text or bounding box prompts, in various applications. However, in safety-critical surgical tasks, prompting is not possible due to (1) the lack of per-frame prompts for supervised learning, (2) it is unrealistic to prompt frame-by-frame in a real-time tracking application, and (3) it is expensive to annotate prompts for offline applications.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Methods<\/jats:title>\n                <jats:p>We develop Surgical-DeSAM to generate automatic bounding box prompts for decoupling SAM to obtain instrument segmentation in real-time robotic surgery. We utilise a commonly used detection architecture, DETR, and fine-tuned it to obtain bounding box prompt for the instruments. We then empolyed decoupling SAM (DeSAM) by replacing the image encoder with DETR encoder and fine-tune prompt encoder and mask decoder to obtain instance segmentation for the surgical instruments. To improve detection performance, we adopted the Swin-transformer to better feature representation.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>The proposed method has been validated on two publicly available datasets from the MICCAI surgical instruments segmentation challenge EndoVis 2017 and 2018. 
We also compare our method with SOTA instrument segmentation methods, demonstrating significant improvements with Dice scores of 89.62 and 90.70 on EndoVis 2017 and 2018, respectively.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusion<\/jats:title>\n                <jats:p>Our extensive experiments and validations demonstrate that Surgical-DeSAM enables real-time instrument segmentation without any additional prompting and outperforms other SOTA segmentation methods.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-024-03163-6","type":"journal-article","created":{"date-parts":[[2024,5,17]],"date-time":"2024-05-17T05:01:41Z","timestamp":1715922101000},"page":"1267-1271","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Surgical-DeSAM: decoupling SAM for instrument segmentation in robotic surgery"],"prefix":"10.1007","volume":"19","author":[{"given":"Yuyang","family":"Sheng","sequence":"first","affiliation":[]},{"given":"Sophia","family":"Bano","sequence":"additional","affiliation":[]},{"given":"Matthew J.","family":"Clarkson","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7162-2822","authenticated-orcid":false,"given":"Mobarakol","family":"Islam","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,5,17]]},"reference":[{"key":"3163_CR1","doi-asserted-by":"crossref","unstructured":"Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, Xiao T, Whitehead S, Berg AC, Lo W-Y, et al (2023) Segment anything. arXiv preprint arXiv:2304.02643","DOI":"10.1109\/ICCV51070.2023.00371"},{"key":"3163_CR2","doi-asserted-by":"crossref","unstructured":"Ma J, Wang B (2023) Segment anything in medical images. arXiv preprint arXiv:2304.12306","DOI":"10.1038\/s41467-024-44824-z"},{"key":"3163_CR3","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: European conference on computer vision, pp 213\u2013229. Springer","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"3163_CR4","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"3163_CR5","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"3163_CR6","doi-asserted-by":"crossref","unstructured":"Rezatofighi H, Tsoi N, Gwak J, Sadeghian A, Reid I, Savarese S (2019) Generalized intersection over union: a metric and a loss for bounding box regression. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 658\u2013666","DOI":"10.1109\/CVPR.2019.00075"},{"key":"3163_CR7","doi-asserted-by":"crossref","unstructured":"Gonz\u00e1lez C, Bravo-S\u00e1nchez L, Arbelaez P (2020) Isinet: an instance-based approach for surgical instrument segmentation. In: Conference on medical image computing and computer-assisted intervention, pp 595\u2013605. 
Springer","DOI":"10.1007\/978-3-030-59716-0_57"},{"key":"3163_CR8","unstructured":"Iglovikov V, Shvets A (2018) Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation. arXiv preprint arXiv:1801.05746"},{"key":"3163_CR9","doi-asserted-by":"crossref","unstructured":"Jin Y, Cheng K, Dou Q, Heng P-A (2019) Incorporating temporal prior from motion flow for instrument segmentation in minimally invasive surgery video. In: Medical image computing and computer assisted intervention\u2013MICCAI 2019: 22nd international conference, Shenzhen, China, Proceedings, Part V 22, pp 440\u2013448. Springer","DOI":"10.1007\/978-3-030-32254-0_49"},{"key":"3163_CR10","doi-asserted-by":"crossref","unstructured":"Zhao Z, Jin Y, Gao X, Dou Q, Heng P-A (2020) Learning motion flows for semi-supervised instrument segmentation from robotic surgical video. In: Medical image computing and computer assisted intervention\u2013MICCAI 2020: 23rd International conference, Lima, Peru, Proceedings, Part III 23, pp 679\u2013689. Springer","DOI":"10.1007\/978-3-030-59716-0_65"},{"key":"3163_CR11","doi-asserted-by":"crossref","unstructured":"Meinhardt T, Kirillov A, Leal-Taixe L, Feichtenhofer C (2022) Trackformer: multi-object tracking with transformers. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8844\u20138854","DOI":"10.1109\/CVPR52688.2022.00864"},{"key":"3163_CR12","doi-asserted-by":"crossref","unstructured":"Zhao Z, Jin Y, Heng P-A (2022) Trasetr: track-to-segment transformer with contrastive query for instance-level instrument segmentation in robotic surgery. In: 2022 International conference on robotics and automation (ICRA), pp 11186\u201311193. IEEE","DOI":"10.1109\/ICRA46639.2022.9811873"},{"key":"3163_CR13","doi-asserted-by":"crossref","unstructured":"Baby B, et al (2023) From forks to forceps: a new framework for instance segmentation of surgical instruments. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 6191\u20136201","DOI":"10.1109\/WACV56688.2023.00613"},{"key":"3163_CR14","doi-asserted-by":"crossref","unstructured":"Yue W, Zhang J, Hu K, Xia Y, Luo J, Wang Z (2023) Surgicalsam: efficient class promptable surgical instrument segmentation. arXiv preprint arXiv:2308.08746","DOI":"10.1609\/aaai.v38i7.28514"},{"key":"3163_CR15","doi-asserted-by":"crossref","unstructured":"Wang A, Islam M, Xu M, Zhang Y, Ren H (2023) Sam meets robotic surgery: an empirical study on generalization, robustness and adaptation. Medical image computing and computer assisted intervention\u2014 MICCAI 2023 workshops: ISIC 2023. Care-AI 2023, MedAGI 2023, DeCaF 2023, held in conjunction with MICCAI 2023, Vancouver, BC, Canada, Proceedings. 
Springer, Berlin, Heidelberg, pp 234\u2013244","DOI":"10.1007\/978-3-031-47401-9_23"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-024-03163-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-024-03163-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-024-03163-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,8]],"date-time":"2024-07-08T17:15:35Z","timestamp":1720458935000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-024-03163-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,17]]},"references-count":15,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["3163"],"URL":"https:\/\/doi.org\/10.1007\/s11548-024-03163-6","relation":{},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,5,17]]},"assertion":[{"value":"4 March 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 April 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 May 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any studies with human participants or animals performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"This article does not contain patient data.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}}]}}