{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,2]],"date-time":"2026-02-02T19:39:03Z","timestamp":1770061143666,"version":"3.49.0"},"reference-count":15,"publisher":"Springer Science and Business Media LLC","issue":"12","license":[{"start":{"date-parts":[[2025,5,11]],"date-time":"2025-05-11T00:00:00Z","timestamp":1746921600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,5,11]],"date-time":"2025-05-11T00:00:00Z","timestamp":1746921600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:sec>\n                    <jats:title>Purpose<\/jats:title>\n                    <jats:p>The principal objective of this study was to develop and evaluate a deep learning model for segmenting the common iliac vein (CIV) from intraoperative endoscopic videos during oblique lateral interbody fusion for L5\/S1 (OLIF51), a minimally invasive surgical procedure for degenerative lumbosacral spine diseases. The study aimed to address the challenge of intraoperative differentiation of the CIV from surrounding tissues to minimize the risk of vascular damage during the surgery.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Methods<\/jats:title>\n                    <jats:p>We employed two convolutional neural network (CNN) architectures: U-Net and U-Net++ with a ResNet18 backbone, for semantic segmentation. Gamma correction was applied during image preprocessing to improve luminance contrast between the CIV and adjacent tissues. 
We used a dataset of 614 endoscopic images from OLIF51 surgeries for model training, validation, and testing.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Results<\/jats:title>\n                    <jats:p>The U-Net++\/ResNet18 model outperformed the U-Net\/ResNet18 model, achieving a Dice score of 0.70 versus 0.59 and indicating superior ability in delineating the position and shape of the CIV. Gamma correction increased the differentiation between the CIV and the artery, improving the Dice score from 0.44 to 0.70.<\/jats:p>\n                  <\/jats:sec>\n                  <jats:sec>\n                    <jats:title>Conclusion<\/jats:title>\n                    <jats:p>The findings demonstrate that deep learning models, especially the U-Net++ with ResNet18 enhanced by gamma correction preprocessing, can effectively segment the CIV in intraoperative videos. This approach has the potential to significantly improve intraoperative assistance and reduce the risk of vascular injury during OLIF51 procedures, despite the need for further research and refinement of the model for clinical application.<\/jats:p>\n                  <\/jats:sec>","DOI":"10.1007\/s11548-025-03388-z","type":"journal-article","created":{"date-parts":[[2025,5,11]],"date-time":"2025-05-11T10:29:59Z","timestamp":1746959399000},"page":"2461-2467","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Enhancing segmentation accuracy of the common iliac vein in OLIF51 surgery in intraoperative endoscopic video through gamma correction: a deep learning 
approach"],"prefix":"10.1007","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-5524-7504","authenticated-orcid":false,"given":"Kaori","family":"Yamamoto","sequence":"first","affiliation":[]},{"given":"Reoto","family":"Ueda","sequence":"additional","affiliation":[]},{"given":"Kazuhide","family":"Inage","sequence":"additional","affiliation":[]},{"given":"Yawara","family":"Eguchi","sequence":"additional","affiliation":[]},{"given":"Miyako","family":"Narita","sequence":"additional","affiliation":[]},{"given":"Yasuhiro","family":"Shiga","sequence":"additional","affiliation":[]},{"given":"Masahiro","family":"Inoue","sequence":"additional","affiliation":[]},{"given":"Noriyasu","family":"Toshi","sequence":"additional","affiliation":[]},{"given":"Soichiro","family":"Tokeshi","sequence":"additional","affiliation":[]},{"given":"Kohei","family":"Okuyama","sequence":"additional","affiliation":[]},{"given":"Shuhei","family":"Ohyama","sequence":"additional","affiliation":[]},{"given":"Satoshi","family":"Maki","sequence":"additional","affiliation":[]},{"given":"Takeo","family":"Furuya","sequence":"additional","affiliation":[]},{"given":"Seiji","family":"Ohtori","sequence":"additional","affiliation":[]},{"given":"Sumihisa","family":"Orita","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,5,11]]},"reference":[{"key":"3388_CR1","doi-asserted-by":"publisher","first-page":"545","DOI":"10.1016\/j.spinee.2016.10.026","volume":"17","author":"W Kamal","year":"2017","unstructured":"Kamal W, James B, Richard H (2017) Technical description of oblique lateral interbody fusion at L1\u2013L5 (OLIF25) and at L5\u2013S1 (OLIF51) and evaluation of complication and fusion rates. Spine J 17:545\u2013553. 
https:\/\/doi.org\/10.1016\/j.spinee.2016.10.026","journal-title":"Spine J"},{"key":"3388_CR2","doi-asserted-by":"publisher","first-page":"223","DOI":"10.1053\/j.oto.2017.09.004","volume":"27","author":"S Orita","year":"2017","unstructured":"Orita S, Kazuhide I, Takeo F, Masao K, Yasuchika A, Go K, Junichi N et al (2017) Oblique lateral interbody fusion (OLIF): indications and techniques. Oper Tech Orthop 27:223\u2013230. https:\/\/doi.org\/10.1053\/j.oto.2017.09.004","journal-title":"Oper Tech Orthop"},{"key":"3388_CR3","doi-asserted-by":"publisher","first-page":"723","DOI":"10.3340\/jkns.2018.0215","volume":"63","author":"M Hah","year":"2020","unstructured":"Hah M, Myeong K, Young K, Seung P (2020) Usefulness of oblique lateral interbody fusion at L5\u2013S1 level compared to transforaminal lumbar interbody fusion. J Korean Neurosurg Soc 63:723\u2013729. https:\/\/doi.org\/10.3340\/jkns.2018.0215","journal-title":"J Korean Neurosurg Soc"},{"key":"3388_CR4","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1038\/s41591-018-0316-z","volume":"25","author":"E Andre","year":"2019","unstructured":"Andre E, Alexandre R, Bharath R, Volodymyr K, Mark D, Katherine C, Claire C, Greg C, Sebastian T, Jeff D (2019) A guide to deep learning in healthcare. Nat Med 25:24\u201329. https:\/\/doi.org\/10.1038\/s41591-018-0316-z","journal-title":"Nat Med"},{"key":"3388_CR5","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2022.102444","volume":"79","author":"C Xuxin","year":"2022","unstructured":"Xuxin C, Ximin W, Ke Z, Kar-Ming F, Theresa C, Kathleen M, Robert S, Hong L, Bin Z, Yuchen Q (2022) Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 79:102444. 
https:\/\/doi.org\/10.1016\/j.media.2022.102444","journal-title":"Med Image Anal"},{"key":"3388_CR6","doi-asserted-by":"publisher","unstructured":"Alessandro C, Sara M, Chiara C, Emanuele F, Elena M, Leonardo S (2021) NephCNN: A Deep-Learning Framework for Vessel Segmentation in Nephrectomy Laparoscopic Videos. In: 2020 25th International Conference on Pattern Recognition, 6144\u201349. https:\/\/doi.org\/10.1109\/ICPR48806.2021.9412810","DOI":"10.1109\/ICPR48806.2021.9412810"},{"key":"3388_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.jpi.2023.100197","volume":"14","author":"N Roi","year":"2023","unstructured":"Roi N, Issa N, Dror R, Mustafa Y, David A (2023) Segmentation of polyps based on pyramid vision transformers and residual block for real-time endoscopy imaging. J Pathol Inform 14:100197. https:\/\/doi.org\/10.1016\/j.jpi.2023.100197","journal-title":"J Pathol Inform"},{"key":"3388_CR8","doi-asserted-by":"publisher","first-page":"ooad079","DOI":"10.1093\/jamiaopen\/ooad079","volume":"6","author":"K Tejal","year":"2023","unstructured":"Tejal K, David C, Christine A, David C, Nikki G, Andrew R, Frank F (2023) How can artificial intelligence decrease cognitive and work burden for front line practitioners? JAMIA Open 6:ooad079. https:\/\/doi.org\/10.1093\/jamiaopen\/ooad079","journal-title":"JAMIA Open"},{"key":"3388_CR9","doi-asserted-by":"publisher","unstructured":"Olaf R, Philipp F, Thomas B (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham, pp 234\u2013241. https:\/\/doi.org\/10.1007\/978-3-319-24574-4_28","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"3388_CR10","doi-asserted-by":"publisher","unstructured":"Zongwei Z, Mahfuzur S, Nima T, Jianming L (2018) UNet++: A Nested U-Net Architecture for Medical Image Segmentation. 
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support, pp 3\u201311. https:\/\/doi.org\/10.1007\/978-3-030-00889-5_1","DOI":"10.1007\/978-3-030-00889-5_1"},{"key":"3388_CR11","doi-asserted-by":"publisher","unstructured":"Kaiming H, Xiangyu Z, Shaoqing R, Jian S (2016) Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 770\u201378. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"3388_CR12","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"H Sepp","year":"1997","unstructured":"Sepp H, J\u00fcrgen S (1997) Long short-term memory. Neural Comput 9:1735\u201380. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735","journal-title":"Neural Comput"},{"key":"3388_CR13","unstructured":"Junyoung C, Caglar G, KyungHyun C, Yoshua B (2014) Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. In: Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014), vol 2, pp 1786\u20131794. https:\/\/nyuscholars.nyu.edu\/en\/publications\/empirical-evaluation-of-gated-recurrent-neural-networks-on-sequen"},{"key":"3388_CR14","doi-asserted-by":"publisher","first-page":"1859","DOI":"10.3390\/land12101859","volume":"12","author":"Y Lirong","year":"2023","unstructured":"Lirong Y, Lei W, Tingqiao L, Siyu L, Jiawei T, Zhengtong Y, Xiaolu L, Wenfeng Z (2023) U-Net-LSTM: time series-enhanced lake boundary prediction model. Land 12:1859. https:\/\/doi.org\/10.3390\/land12101859","journal-title":"Land"},{"key":"3388_CR15","doi-asserted-by":"publisher","first-page":"3791","DOI":"10.1007\/s00371-021-02221-3","volume":"38","author":"S Eisuke","year":"2022","unstructured":"Eisuke S, Kazuhiro H (2022) Cell image segmentation by using feedback and convolutional LSTM. Vis Comput 38:3791\u20133801. 
https:\/\/doi.org\/10.1007\/s00371-021-02221-3","journal-title":"Vis Comput"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03388-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-025-03388-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-025-03388-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T13:39:39Z","timestamp":1765287579000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-025-03388-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,11]]},"references-count":15,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["3388"],"URL":"https:\/\/doi.org\/10.1007\/s11548-025-03388-z","relation":{},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,11]]},"assertion":[{"value":"31 March 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 May 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests. 
We did not receive payments or other benefits or a commitment or agreement to provide such benefits from any commercial entities.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"We declare that all protocols involving humans have been approved by Chiba University Hospital and have been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. We declare that all participants provided written informed consent before their inclusion in this study.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics and consent to participate"}}]}}