{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T21:17:54Z","timestamp":1774387074702,"version":"3.50.1"},"reference-count":37,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,12,2]],"date-time":"2025-12-02T00:00:00Z","timestamp":1764633600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,12,2]],"date-time":"2025-12-02T00:00:00Z","timestamp":1764633600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:sec>\n                    <jats:title>\n                      <jats:bold>Purpose:<\/jats:bold>\n                    <\/jats:title>\n                    <jats:p>Surgical video review is essential for minimally invasive surgical training, but manual annotation of surgical steps is time-consuming and limits scalability. We propose a weakly supervised pre-training framework that leverages unannotated or heterogeneously labeled surgical videos to improve automated surgical step recognition.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>\n                      <jats:bold>Methods:<\/jats:bold>\n                    <\/jats:title>\n                    <jats:p>\n                      We evaluate three types of weak labels derived from unannotated datasets: (1) surgical phases from the same or other procedures, (2) surgical steps from different procedure types, and (3) intraoperative time progression. 
Using datasets from four robotic-assisted procedures (sleeve gastrectomy, hysterectomy, cholecystectomy, and radical prostatectomy), we simulate real-world annotation scarcity by varying the proportion of available step annotations (\n                      <jats:inline-formula>\n                        <jats:alternatives>\n                          <jats:tex-math>$$\\alpha $$<\/jats:tex-math>\n                          <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                            <mml:mi>\u03b1<\/mml:mi>\n                          <\/mml:math>\n                        <\/jats:alternatives>\n                      <\/jats:inline-formula>\n                      <jats:inline-formula>\n                        <jats:alternatives>\n                          <jats:tex-math>$$\\in $$<\/jats:tex-math>\n                          <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                            <mml:mo>\u2208<\/mml:mo>\n                          <\/mml:math>\n                        <\/jats:alternatives>\n                      <\/jats:inline-formula>\n                      0.25, 0.5, 0.75, 1.0). 
We benchmark the performance of a 2D CNN model trained with and without weak label pre-training.\n                    <\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>\n                      <jats:bold>Results:<\/jats:bold>\n                    <\/jats:title>\n                    <jats:p>\n                      Pre-training with surgical phase labels\u2014particularly from the same procedure type (\n                      <jats:sc>Phase-Within<\/jats:sc>\n                      )\u2014consistently improved step recognition performance, with gains up to 6.4 f1-score points over standard ImageNet-based models under limited annotation conditions (\n                      <jats:inline-formula>\n                        <jats:alternatives>\n                          <jats:tex-math>$$\\alpha $$<\/jats:tex-math>\n                          <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                            <mml:mi>\u03b1<\/mml:mi>\n                          <\/mml:math>\n                        <\/jats:alternatives>\n                      <\/jats:inline-formula>\n                      = 0.25 on SLG). Cross-procedure step pre-training was beneficial for some procedures, and time-based labels provided moderate gains depending on procedure structure. 
Label efficiency analysis shows the baseline model would require labeling an additional 30\u201360 videos at\n                      <jats:inline-formula>\n                        <jats:alternatives>\n                          <jats:tex-math>$$\\alpha $$<\/jats:tex-math>\n                          <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                            <mml:mi>\u03b1<\/mml:mi>\n                          <\/mml:math>\n                        <\/jats:alternatives>\n                      <\/jats:inline-formula>\n                      = 0.25 to match the performance achieved by the best weak-pretraining strategy across procedures.\n                    <\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>\n                      <jats:bold>Conclusion:<\/jats:bold>\n                    <\/jats:title>\n                    <jats:p>Weakly supervised pre-training offers a practical strategy to improve surgical step recognition when annotated data is scarce. 
This approach can support scalable feedback and assessment in surgical training workflows where comprehensive annotations are infeasible.<\/jats:p>\n                  <\/jats:sec>","DOI":"10.1007\/s11548-025-03555-2","type":"journal-article","created":{"date-parts":[[2025,12,2]],"date-time":"2025-12-02T06:00:54Z","timestamp":1764655254000},"page":"267-277","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Weakly supervised pre-training for surgical step recognition using unannotated and heterogeneously labeled videos"],"prefix":"10.1007","volume":"21","author":[{"given":"Sreeram","family":"Kamabattula","sequence":"first","affiliation":[]},{"given":"Kai","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1776-4782","authenticated-orcid":false,"given":"Kiran","family":"Bhattacharyya","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,2]]},"reference":[{"issue":"6","key":"3555_CR1","doi-asserted-by":"publisher","first-page":"334","DOI":"10.1016\/j.surge.2018.10.004","volume":"17","author":"V Celentano","year":"2019","unstructured":"Celentano V, Smart N, Cahill RA, McGrath JS, Gupta S, Griffith JP, Acheson AG, Cecil TD, Coleman MG (2019) Use of laparoscopic videos amongst surgical trainees in the United Kingdom. Surgeon 17(6):334\u2013339","journal-title":"Surgeon"},{"key":"3555_CR2","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1016\/j.jss.2018.09.015","volume":"235","author":"JL Green","year":"2019","unstructured":"Green JL, Suresh V, Bittar P, Ledbetter L, Mithani SK, Allori A (2019) The utilization of video technology in surgical education: a systematic review. 
J Surg Res 235:171\u2013180","journal-title":"J Surg Res"},{"issue":"2","key":"3555_CR3","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1002\/jso.26496","volume":"124","author":"TM Ward","year":"2021","unstructured":"Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA (2021) Surgical data science and artificial intelligence for surgical education. J Surg Oncol 124(2):221\u2013230","journal-title":"J Surg Oncol"},{"issue":"1","key":"3555_CR4","doi-asserted-by":"publisher","first-page":"70","DOI":"10.1097\/SLA.0000000000002693","volume":"268","author":"DA Hashimoto","year":"2018","unstructured":"Hashimoto DA, Rosman G, Rus D, Meireles OR (2018) Artificial intelligence in surgery: promises and perils. Ann Surg 268(1):70\u201376","journal-title":"Ann Surg"},{"issue":"4","key":"3555_CR5","doi-asserted-by":"publisher","first-page":"684","DOI":"10.1097\/SLA.0000000000004425","volume":"273","author":"CR Garrow","year":"2021","unstructured":"Garrow CR, Kowalewski K-F, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S et al (2021) Machine learning for surgical phase recognition: a systematic review. Ann Surg 273(4):684\u2013693","journal-title":"Ann Surg"},{"key":"3555_CR6","doi-asserted-by":"crossref","unstructured":"Meireles OR, Rosman G, Altieri MS, Carin L, Hager G, Madani A, Padoy N, Pugh CM, Sylla P, Ward TM et al (2021) Sages consensus recommendations on an annotation framework for surgical video. Surg Endosc 35(9):4918\u20134929","DOI":"10.1007\/s00464-021-08578-9"},{"issue":"3","key":"3555_CR7","doi-asserted-by":"publisher","first-page":"151","DOI":"10.1002\/bjs5.47","volume":"2","author":"T Nazari","year":"2018","unstructured":"Nazari T, Vlieger E, Dankbaar M, Merri\u00ebnboer J, Lange J, Wiggers T (2018) Creation of a universal language for surgical procedures using the step-by-step framework. 
BJS Open 2(3):151\u2013157","journal-title":"BJS Open"},{"issue":"6","key":"3555_CR8","doi-asserted-by":"publisher","first-page":"532","DOI":"10.1097\/UPJ.0000000000000344","volume":"9","author":"TF Haque","year":"2022","unstructured":"Haque TF, Hui A, You J, Ma R, Nguyen JH, Lei X, Cen S, Aron M, Collins JW, Djaladat H et al (2022) An assessment tool to provide targeted feedback to robotic surgical trainees: development and validation of the end-to-end assessment of suturing expertise (ease). Urol Pract 9(6):532\u2013539","journal-title":"Urol Pract"},{"key":"3555_CR9","doi-asserted-by":"crossref","unstructured":"Mlambo B, Shields M, Bach S, Bauer A, Hung A, Kudsi OY, Neis F, Lazar J, Oh D, Perez R, et al. (2025) A standardized temporal segmentation framework and annotation resource library in robotic surgery. Mayo clinic proceedings: digital health, 100257","DOI":"10.1016\/j.mcpdig.2025.100257"},{"issue":"1","key":"3555_CR10","doi-asserted-by":"publisher","first-page":"58","DOI":"10.1080\/24699322.2021.1937320","volume":"26","author":"TM Ward","year":"2021","unstructured":"Ward TM, Fer DM, Ban Y, Rosman G, Meireles OR, Hashimoto DA (2021) Challenges in surgical video annotation. Comput Assist Surg 26(1):58\u201368","journal-title":"Comput Assist Surg"},{"key":"3555_CR11","doi-asserted-by":"publisher","first-page":"102888","DOI":"10.1016\/j.media.2023.102888","volume":"89","author":"CI Nwoye","year":"2023","unstructured":"Nwoye CI, Yu T, Sharma S, Murali A, Alapatt D, Vardazaryan A, Yuan K, Hajek J, Reiter W, Yamlahi A et al (2023) Cholectriplet 2022: Show me a tool and tell me the triplet-an endoscopic vision challenge for surgical action triplet detection. 
Med Image Anal 89:102888","journal-title":"Med Image Anal"},{"issue":"9","key":"3555_CR12","doi-asserted-by":"publisher","first-page":"2592","DOI":"10.1109\/TMI.2023.3262847","volume":"42","author":"S Ramesh","year":"2023","unstructured":"Ramesh S, Dall\u2019Alba D, Gonzalez C, Yu T, Mascagni P, Mutter D, Marescaux J, Fiorini P, Padoy N (2023) Weakly supervised temporal convolutional networks for fine-grained surgical activity recognition. IEEE Trans Med Imaging 42(9):2592\u20132602. https:\/\/doi.org\/10.1109\/TMI.2023.3262847","journal-title":"IEEE Trans Med Imaging"},{"issue":"4","key":"3555_CR13","doi-asserted-by":"publisher","first-page":"1069","DOI":"10.1109\/TMI.2018.2878055","volume":"38","author":"AP Twinanda","year":"2018","unstructured":"Twinanda AP, Yengera G, Mutter D, Marescaux J, Padoy N (2018) Rsdnet: learning to predict remaining surgery duration from laparoscopic videos without manual annotations. IEEE Trans Med Imaging 38(4):1069\u20131078","journal-title":"IEEE Trans Med Imaging"},{"key":"3555_CR14","doi-asserted-by":"crossref","unstructured":"Mahajan D, Girshick R, Ramanathan V, He K, Paluri M, Li Y, Bharambe A, Van Der Maaten L (2018) Exploring the limits of weakly supervised pretraining. In: Proceedings of the European conference on computer vision (ECCV), pp. 181\u2013196","DOI":"10.1007\/978-3-030-01216-8_12"},{"key":"3555_CR15","doi-asserted-by":"publisher","DOI":"10.1016\/j.compmedimag.2023.102297","volume":"109","author":"B Felfeliyan","year":"2023","unstructured":"Felfeliyan B, Forkert ND, Hareendranathan A, Cornel D, Zhou Y, Kuntze G, Jaremko JL, Ronsky JL (2023) Self-supervised-rcnn for medical image segmentation with limited data annotation. Comput Med Imaging Graph 109:102297","journal-title":"Comput Med Imaging Graph"},{"key":"3555_CR16","doi-asserted-by":"crossref","unstructured":"Funke I, Jenke A, Mees ST, Weitz J, Speidel S, Bodenstedt S (2018) Temporal coherence-based self-supervised learning for laparoscopic workflow analysis. 
In: International workshop on computer-assisted and robotic endoscopy, pp. 85\u201393. Springer","DOI":"10.1007\/978-3-030-01201-4_11"},{"key":"3555_CR17","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2023.102844","volume":"88","author":"S Ramesh","year":"2023","unstructured":"Ramesh S, Srivastav V, Alapatt D, Yu T, Murali A, Sestini L, Nwoye CI, Hamoud I, Sharma S, Fleurentin A et al (2023) Dissecting self-supervised learning methods for surgical computer vision. Med Image Anal 88:102844","journal-title":"Med Image Anal"},{"key":"3555_CR18","unstructured":"Li J, Quaranto BR, Xu C, Mishra I, Qin R, Liu D, Kim PC, Xiong J (2025) Recognize any surgical object: unleashing the power of weakly-supervised data. In: The Thirteenth international conference on learning representations"},{"key":"3555_CR19","doi-asserted-by":"crossref","unstructured":"Lee H-Y, Huang J-B, Singh M, Yang M-H (2017) Unsupervised representation learning by sorting sequences. In: Proceedings of the IEEE international conference on computer vision, pp. 667\u2013676","DOI":"10.1109\/ICCV.2017.79"},{"key":"3555_CR20","doi-asserted-by":"publisher","first-page":"1059","DOI":"10.1007\/s11548-019-01958-6","volume":"14","author":"CI Nwoye","year":"2019","unstructured":"Nwoye CI, Mutter D, Marescaux J, Padoy N (2019) Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. Int J Comput Assist Radiol Surg 14:1059\u20131067","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3555_CR21","doi-asserted-by":"crossref","unstructured":"Yang S, Zhou F, Mayer L, Huang F, Chen Y, Wang Y, He S, Nie Y, Wang X, S\u00fcmer \u00d6, et al. (2025) Large-scale self-supervised video foundation model for intelligent surgery. 
arXiv preprint arXiv:2506.02692","DOI":"10.1038\/s41746-026-02403-0"},{"issue":"1","key":"3555_CR22","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41597-025-05093-7","volume":"12","author":"Z Ye","year":"2025","unstructured":"Ye Z, Zhou R, Deng Z, Wang D, Zhu Y, Jin X, Zhang L, Chen T, Zhang H, Wang M (2025) A comprehensive video dataset for surgical laparoscopic action analysis. Sci Data 12(1):1\u201310","journal-title":"Sci Data"},{"key":"3555_CR23","doi-asserted-by":"crossref","unstructured":"Jaspers TJ, Jong RL, Al Khalil Y, Zeelenberg T, Kusters CH, Li Y, Jaarsveld RC, Bakker FH, Ruurda JP, Brinkman WM, et al. (2024) Exploring the effect of dataset diversity in self-supervised learning for surgical computer vision. In: MICCAI workshop on data engineering in medical imaging, pp. 43\u201353. Springer","DOI":"10.1007\/978-3-031-73748-0_5"},{"key":"3555_CR24","unstructured":"Tan M, Le Q (2021) Efficientnetv2: Smaller models and faster training. In: international conference on machine learning, pp. 10096\u201310106. PMLR"},{"key":"3555_CR25","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, pp. 248\u2013255. IEEE","DOI":"10.1109\/CVPR.2009.5206848"},{"issue":"1","key":"3555_CR26","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/TMI.2016.2593957","volume":"36","author":"AP Twinanda","year":"2016","unstructured":"Twinanda AP, Shehata S, Mutter D, Marescaux J, De Mathelin M, Padoy N (2016) Endonet: a deep architecture for recognition tasks on laparoscopic videos. 
IEEE Trans Med Imaging 36(1):86\u201397","journal-title":"IEEE Trans Med Imaging"},{"issue":"4","key":"3555_CR27","doi-asserted-by":"publisher","first-page":"673","DOI":"10.1007\/s11548-019-02108-8","volume":"15","author":"G Lecuyer","year":"2020","unstructured":"Lecuyer G, Ragot M, Martin N, Launay L, Jannin P (2020) Assisted phase and step annotation for surgical videos. Int J Comput Assist Radiol Surg 15(4):673\u2013680","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3555_CR28","doi-asserted-by":"crossref","unstructured":"Czempiel T, Paschali M, Keicher M, Simson W, Feussner H, Kim ST, Navab N (2020) Tecno: Surgical phase recognition with multi-stage temporal convolutional networks. In: Medical image computing and computer assisted intervention\u2013MICCAI 2020: 23rd international conference, Lima, Peru, October 4\u20138, 2020, Proceedings, Part III 23, pp. 343\u2013352. Springer","DOI":"10.1007\/978-3-030-59716-0_33"},{"issue":"17","key":"3555_CR29","doi-asserted-by":"publisher","first-page":"8746","DOI":"10.3390\/app12178746","volume":"12","author":"K Kirtac","year":"2022","unstructured":"Kirtac K, Aydin N, Lavanchy JL, Beldi G, Smit M, Woods MS, Aspart F (2022) Surgical phase recognition: from public datasets to real-world data. Appl Sci 12(17):8746","journal-title":"Appl Sci"},{"key":"3555_CR30","doi-asserted-by":"crossref","unstructured":"Wisotzky EL, Renz-Kiefel L, Beckmann S, L\u00fcnse S, Mantke R, Hilsmann A, Eisert P (2023) Surgical phase recognition for different hospitals. In: Current directions in biomedical engineering, 9:315\u2013318. De Gruyter","DOI":"10.1515\/cdbme-2023-1079"},{"key":"3555_CR31","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2024.103366","volume":"99","author":"Y Liu","year":"2025","unstructured":"Liu Y, Boels M, Garcia-Peraza-Herrera LC, Vercauteren T, Dasgupta P, Granados A, Ourselin S (2025) LOVIT: long video transformer for surgical phase recognition. 
Med Image Anal 99:103366","journal-title":"Med Image Anal"},{"key":"3555_CR32","doi-asserted-by":"crossref","unstructured":"Liu Y, Huo J, Peng J, Sparks R, Dasgupta P, Granados A, Ourselin S (2023) Skit: a fast key information video transformer for online surgical phase recognition. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp. 21074\u201321084","DOI":"10.1109\/ICCV51070.2023.01927"},{"key":"3555_CR33","doi-asserted-by":"crossref","unstructured":"Yang S, Luo L, Wang Q, Chen H (2024) Surgformer: Surgical transformer with hierarchical temporal attention for surgical phase recognition. In: International conference on medical image computing and computer-assisted intervention, pp. 606\u2013616. Springer","DOI":"10.1007\/978-3-031-72089-5_57"},{"issue":"11","key":"3555_CR34","doi-asserted-by":"publisher","first-page":"2249","DOI":"10.1007\/s11548-024-03166-3","volume":"19","author":"JL Lavanchy","year":"2024","unstructured":"Lavanchy JL, Ramesh S, Dall\u2019Alba D, Gonzalez C, Fiorini P, M\u00fcller-Stich BP, Nett PC, Marescaux J, Mutter D, Padoy N (2024) Challenges in multi-centric generalization: phase and step recognition in roux-en-y gastric bypass surgery. Int J Comput Assist Radiol Surg 19(11):2249\u20132257","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"3555_CR35","doi-asserted-by":"crossref","unstructured":"Mottaghi A, Sharghi A, Yeung S, Mohareri O (2022) Adaptation of surgical activity recognition models across operating rooms. In: International conference on medical image computing and computer-assisted intervention, pp. 530\u2013540. 
Springer","DOI":"10.1007\/978-3-031-16449-1_51"},{"issue":"1","key":"3555_CR36","doi-asserted-by":"publisher","first-page":"12575","DOI":"10.1038\/s41598-022-16923-8","volume":"12","author":"D Kitaguchi","year":"2022","unstructured":"Kitaguchi D, Fujino T, Takeshita N, Hasegawa H, Mori K, Ito M (2022) Limited generalizability of single deep neural network for surgical instrument segmentation in different surgical environments. Sci Rep 12(1):12575","journal-title":"Sci Rep"},{"key":"3555_CR37","doi-asserted-by":"crossref","unstructured":"Chen K, Kamabattula S, Bhattacharyya K (2024) Surgical site-specific ensemble model for surgical procedure segmentation. In Medical Imaging 2024: Image-Guided Procedures, Robotic Interventions, and Modeling (Vol. 12928, p. 129281L). SPIE","DOI":"10.1117\/12.2691631"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03555-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-025-03555-2","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03555-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T18:02:55Z","timestamp":1774375375000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-025-03555-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,2]]},"references-count":37,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2026,2]]}},"alternative-id":["3555"],"URL":"https:\/\/doi.org\/10.1007\/s11548-025-03555-2","relation":{},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type
":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,2]]},"assertion":[{"value":"7 July 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Sreeram Kamabattula, Kai Chen, and Kiran Bhattacharyya are all employed by Intuitive Surgical, Inc.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any experiments with human participants or animals performed by any of the authors. The procedure data used for this study was de-identified. The authors did not have access to any identifiable information when conducting analyses.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical Approvals"}},{"value":"All procedure data and activities included in this study were collected with approval from Western IRB (now part of WCG Clinical) with appropriate consents from surgeons and patients. Protocol number 20182083.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed Consent"}}]}}