{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,5]],"date-time":"2026-05-05T12:04:07Z","timestamp":1777982647737,"version":"3.51.4"},"reference-count":19,"publisher":"Springer Science and Business Media LLC","issue":"8","license":[{"start":{"date-parts":[[2025,6,24]],"date-time":"2025-06-24T00:00:00Z","timestamp":1750723200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,24]],"date-time":"2025-06-24T00:00:00Z","timestamp":1750723200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:sec>\n                    <jats:title>Purpose<\/jats:title>\n                    <jats:p>Automating suturing in robotic-assisted surgery offers significant benefits including enhanced precision, reduced operative time, and alleviated surgeon fatigue. Achieving this requires robust computer vision (CV) models. Still, their development is hindered by the scarcity of task-specific datasets and the complexity of acquiring and annotating real surgical data. This work addresses these challenges using a sim-to-real approach to create synthetic datasets and a data-driven methodology for model training and evaluation.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Methods<\/jats:title>\n                    <jats:p>Existing 3D models of Da Vinci tools were modified and new models\u2013needle and tissue cuts\u2013were created to account for diverse data scenarios, enabling the generation of three synthetic datasets with increasing realism using Unity and the Perception package. 
These datasets were then employed to train several YOLOv8-m models for object detection to evaluate the generalizability of synthetic-trained models in real scenarios and the impact of dataset realism on model performance. Additionally, a real-time instance segmentation model was developed through a hybrid training strategy combining synthetic and a minimal set of real images.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Results<\/jats:title>\n                    <jats:p>Synthetic-trained models showed improved performance on real test sets as training dataset realism increased, but realism levels remained insufficient for complete generalization. Instead, the hybrid approach significantly increased performance in real scenarios. Indeed, the hybrid instance segmentation model exhibited real-time capabilities and robust accuracy, achieving the best Dice coefficient (0.92) with minimal dependence on real training data (30\u201350 images).<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Conclusions<\/jats:title>\n                    <jats:p>This study demonstrates the potential of sim-to-real synthetic datasets to advance robotic suturing automation through a simple and reproducible framework. By sharing 3D models, Unity environments and annotated datasets, this work provides resources for creating additional images, expanding datasets, and enabling fine-tuning or semi-supervised learning. 
By facilitating further exploration, this work lays a foundation for advancing suturing automation and addressing task-specific dataset scarcity.<\/jats:p>\n                  <\/jats:sec>","DOI":"10.1007\/s11548-025-03460-8","type":"journal-article","created":{"date-parts":[[2025,6,24]],"date-time":"2025-06-24T08:09:00Z","timestamp":1750752540000},"page":"1567-1576","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A reproducible framework for synthetic data generation and instance segmentation in robotic suturing"],"prefix":"10.1007","volume":"20","author":[{"given":"Pietro","family":"Leoncini","sequence":"first","affiliation":[]},{"given":"Francesco","family":"Marzola","sequence":"additional","affiliation":[]},{"given":"Matteo","family":"Pescio","sequence":"additional","affiliation":[]},{"given":"Maura","family":"Casadio","sequence":"additional","affiliation":[]},{"given":"Alberto","family":"Arezzo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1613-1051","authenticated-orcid":false,"given":"Giulio","family":"Dagnino","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,24]]},"reference":[{"issue":"1","key":"3460_CR1","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1146\/annurev-bioeng-060418-052502","volume":"21","author":"J Troccaz","year":"2019","unstructured":"Troccaz J, Dagnino G, Yang G-Z (2019) Frontiers of medical robotics: from concept to systems to clinical translation. Annu Rev Biomed Eng 21(1):193\u2013218. 
https:\/\/doi.org\/10.1146\/annurev-bioeng-060418-052502","journal-title":"Annu Rev Biomed Eng"},{"issue":"5","key":"3460_CR2","doi-asserted-by":"publisher","first-page":"2383","DOI":"10.1007\/s00464-024-10788-w","volume":"38","author":"BT Ostrander","year":"2024","unstructured":"Ostrander BT, Massillon D, Meller L, Chiu Z-Y, Yip M, Orosco RK (2024) The current state of autonomous suturing: a systematic review. Surg Endosc 38(5):2383\u20132397. https:\/\/doi.org\/10.1007\/s00464-024-10788-w","journal-title":"Surg Endosc"},{"issue":"1","key":"3460_CR3","doi-asserted-by":"publisher","first-page":"651","DOI":"10.1146\/annurev-control-062420-090543","volume":"4","author":"A Attanasio","year":"2021","unstructured":"Attanasio A, Scaglioni B, De Momi E, Fiorini P, Valdastri P (2021) Autonomy in surgical robotics. Ann Rev Control Robot Autonom Syst 4(1):651\u2013679. https:\/\/doi.org\/10.1146\/annurev-control-062420-090543","journal-title":"Ann Rev Control Robot Autonom Syst"},{"issue":"4","key":"3460_CR4","doi-asserted-by":"publisher","first-page":"812","DOI":"10.1007\/s41315-024-00341-2","volume":"8","author":"G Dagnino","year":"2024","unstructured":"Dagnino G, Kundrat D (2024) Robot-assistive minimally invasive surgery: trends and future directions. Int J Intell Robot Appl 8(4):812\u2013826. https:\/\/doi.org\/10.1007\/s41315-024-00341-2","journal-title":"Int J Intell Robot Appl"},{"key":"3460_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-59716-067","author":"E Colleoni","year":"2020","unstructured":"Colleoni E, Edwards P, Stoyanov D (2020) Synthetic and real inputs for tool segmentation in robotic surgery. Int Conf Med Image Comput Comput-Assist Intervent. 
https:\/\/doi.org\/10.1007\/978-3-030-59716-067","journal-title":"Int Conf Med Image Comput Comput-Assist Intervent"},{"issue":"4","key":"3460_CR6","doi-asserted-by":"publisher","first-page":"56","DOI":"10.1109\/MRA.2021.3101646","volume":"28","author":"C Dettorre","year":"2021","unstructured":"Dettorre C, Mariani A, Stilli A et al (2021) Accelerating surgical robotics research: a review of 10\u00a0years with the da vinci research kit. IEEE Robot Automat Mag 28(4):56\u201378. https:\/\/doi.org\/10.1109\/MRA.2021.3101646","journal-title":"IEEE Robot Automat Mag"},{"issue":"9","key":"3460_CR7","doi-asserted-by":"publisher","first-page":"2222","DOI":"10.1007\/s11263-022-01640-6","volume":"130","author":"M Rodrigues","year":"2022","unstructured":"Rodrigues M, Mayo M, Patros P (2022) Surgical tool datasets for machine learning research: a survey. Int J Comput Vision 130(9):2222\u20132248. https:\/\/doi.org\/10.1007\/s11263-022-01640-6","journal-title":"Int J Comput Vision"},{"issue":"11","key":"3460_CR8","doi-asserted-by":"publisher","first-page":"310","DOI":"10.3390\/jimaging8110310","volume":"8","author":"K Man","year":"2022","unstructured":"Man K, Chahl J (2022) A review of synthetic image data and its use in computer vision. J Imag 8(11):310. https:\/\/doi.org\/10.3390\/jimaging8110310","journal-title":"J Imag"},{"issue":"1","key":"3460_CR9","doi-asserted-by":"publisher","first-page":"163","DOI":"10.1038\/s41746-022-00707-5","volume":"5","author":"P Mascagni","year":"2022","unstructured":"Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Bo\u0161koski I, Padoy N, Hashimoto DA (2022) Computer vision in surgery: from potential to clinical value. npj Digital Med 5(1):163. 
https:\/\/doi.org\/10.1038\/s41746-022-00707-5","journal-title":"npj Digital Med"},{"issue":"9","key":"3460_CR10","doi-asserted-by":"publisher","first-page":"9221","DOI":"10.1007\/s10462-022-10358-3","volume":"56","author":"G Paulin","year":"2023","unstructured":"Paulin G, Ivasic-Kos M (2023) Review and analysis of synthetic dataset generation methods and techniques for application in computer vision. Artif Intell Rev 56(9):9221\u20139265. https:\/\/doi.org\/10.1007\/s10462-022-10358-3","journal-title":"Artif Intell Rev"},{"issue":"5","key":"3460_CR11","doi-asserted-by":"publisher","first-page":"961","DOI":"10.1007\/s11548-022-02598-z","volume":"17","author":"T Dowrick","year":"2022","unstructured":"Dowrick T, Davidson B, Gurusamy K, Clarkson MJ (2022) Large scale simulation of labeled intraoperative scenes in unity. Int J Comput Assist Radiol Surg 17(5):961\u2013963. https:\/\/doi.org\/10.1007\/s11548-022-02598-z","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3460_CR12","doi-asserted-by":"publisher","first-page":"107929","DOI":"10.1016\/j.compbiomed.2024.107929","volume":"169","author":"T Rueckert","year":"2024","unstructured":"Rueckert T, Rueckert D, Palm C (2024) Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: a review of the state of the art. Comput Biol Med 169:107929. https:\/\/doi.org\/10.1016\/j.compbiomed.2024.107929","journal-title":"Comput Biol Med"},{"key":"3460_CR13","doi-asserted-by":"publisher","unstructured":"Hinterstoisser S, Pauly O, Heibel H, Martina M, Bokeloh M, (2019) An annotation saved is an annotation earned: Using fully synthetic training for object detection. 
2019 IEEE\/CVF International Conference on Computer Vision Workshop (ICCVW), 2787\u20132796 https:\/\/doi.org\/10.1109\/ICCVW.2019.00340","DOI":"10.1109\/ICCVW.2019.00340"},{"issue":"7","key":"3460_CR14","doi-asserted-by":"publisher","first-page":"1167","DOI":"10.1007\/s11548-019-01962-w","volume":"14","author":"A Rau","year":"2019","unstructured":"Rau A, Edwards PJE, Ahmad OF, Riordan P, Janatka M, Lovat LB, Stoyanov D (2019) Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy. Int J Comput Assist Radiol Surg 14(7):1167\u20131176. https:\/\/doi.org\/10.1007\/s11548-019-01962-w","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3460_CR15","doi-asserted-by":"publisher","unstructured":"Le HB, Kim TD, Ha MH, Tran ALQ, Nguyen DT, Dinh XM (2023) Robust surgical tool detection in laparoscopic surgery using YOLOv8 Model. 2023 International Conference on System Science and Engineering (ICSSE) 537\u2013542 https:\/\/doi.org\/10.1109\/ICSSE58758.2023.10227217","DOI":"10.1109\/ICSSE58758.2023.10227217"},{"issue":"11","key":"3460_CR16","doi-asserted-by":"publisher","first-page":"2215","DOI":"10.1007\/s11548-024-03115-0","volume":"19","author":"X Pan","year":"2024","unstructured":"Pan X, Bi M, Wang H, Ma C, He X (2024) DBHYOLO: a surgical instrument detection method based on feature separation in laparoscopic surgery. Int J Comput Assist Radiol Surg 19(11):2215\u20132225. https:\/\/doi.org\/10.1007\/s11548-024-03115-0","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3460_CR17","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-024-20016-1","author":"SS Sengar","year":"2024","unstructured":"Sengar SS, Hasan AB, Kumar S, Carroll F (2024) Generative artificial intelligence: a systematic review and applications. Multimed Tools Appl. 
https:\/\/doi.org\/10.1007\/s11042-024-20016-1","journal-title":"Multimed Tools Appl"},{"key":"3460_CR18","doi-asserted-by":"publisher","unstructured":"Ou Y and Tavakoli M (2024) Learning autonomous surgical irrigation and suction with the da vinci research kit using reinforcement learning. arXiv preprint arXiv:2411.14622. https:\/\/doi.org\/10.48550\/arXiv.2411.14622","DOI":"10.48550\/arXiv.2411.14622"},{"key":"3460_CR19","doi-asserted-by":"publisher","unstructured":"Ou Y, Zargarzadeh S, Sedighi P, Tavakoli M (2024) A realistic surgical simulator for non-rigid and contact-rich manipulation in surgeries with the da vinci research kit. 2024 21st International Conference on Ubiquitous Robots (UR), 64\u201370 https:\/\/doi.org\/10.1109\/ur61395.2024.10597513","DOI":"10.1109\/ur61395.2024.10597513"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03460-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-025-03460-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03460-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,13]],"date-time":"2025-08-13T05:34:05Z","timestamp":1755063245000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-025-03460-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,24]]},"references-count":19,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2025,8]]}},"alternative-id":["3460"],"URL":"https:\/\/doi.org\/10.1007\/s11548-025-03460-8","relation":{"has-preprint":[{"id-type":"doi","id"
:"10.36227\/techrxiv.174439990.08139657\/v1","asserted-by":"object"}]},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,24]]},"assertion":[{"value":"10 January 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 June 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interest"}},{"value":"This article is accompanied by a supplementary video demonstrating the performance of the instance segmentation model developed in this study, providing visual evidence of the results discussed.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Supplementary information"}}]}}