{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T01:46:47Z","timestamp":1773193607609,"version":"3.50.1"},"reference-count":35,"publisher":"Springer Science and Business Media LLC","issue":"9","license":[{"start":{"date-parts":[[2021,6,24]],"date-time":"2021-06-24T00:00:00Z","timestamp":1624492800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,6,24]],"date-time":"2021-06-24T00:00:00Z","timestamp":1624492800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100013000","name":"Politecnico di Torino","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100013000","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"published-print":{"date-parts":[[2021,9]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Purpose<\/jats:title>\n                <jats:p>The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for an in-vivo robot-assisted radical prostatectomy (RARP), to improve the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient\u2019s organ with its 2D endoscopic image, to assist surgeons during the procedure.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Methods<\/jats:title>\n                <jats:p>This approach was carried out using a Convolutional Neural Network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produced the parameters needed to anchor the 3D model. 
We used a dataset obtained from 5 endoscopic videos (<jats:italic>A, B, C, D, E<\/jats:italic>), selected and tagged by our team\u2019s specialists. We then evaluated the best-performing combination of segmentation architecture and neural network and tested the overlay performance.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed the results from the former work, obtaining an average IoU for the catheter of 0.894 (<jats:italic>\u03c3<\/jats:italic> = 0.076) compared to 0.339 (<jats:italic>\u03c3<\/jats:italic> = 0.195). These modifications also led to an improvement in the 3D overlay performance, in particular in the Euclidean Distance between the predicted and actual model\u2019s anchor point, from 12.569 (<jats:italic>\u03c3<\/jats:italic> = 4.456) to 4.160 (<jats:italic>\u03c3<\/jats:italic> = 1.448), and in the Geodesic Distance between the predicted and actual model\u2019s rotations, from 0.266 (<jats:italic>\u03c3<\/jats:italic> = 0.131) to 0.169 (<jats:italic>\u03c3<\/jats:italic> = 0.073).<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusion<\/jats:title>\n                <jats:p>This work is a further step toward the adoption of DL and AR in the surgical domain. 
In future work, we will overcome the limits of this approach and further improve every step of the surgical procedure.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-021-02432-y","type":"journal-article","created":{"date-parts":[[2021,6,24]],"date-time":"2021-06-24T07:03:07Z","timestamp":1624518187000},"page":"1435-1445","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":78,"title":["Real-time deep learning semantic segmentation during intra-operative surgery for 3D augmented reality assistance"],"prefix":"10.1007","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8813-6388","authenticated-orcid":false,"given":"Leonardo","family":"Tanzi","sequence":"first","affiliation":[]},{"given":"Pietro","family":"Piazzolla","sequence":"additional","affiliation":[]},{"given":"Francesco","family":"Porpiglia","sequence":"additional","affiliation":[]},{"given":"Enrico","family":"Vezzetti","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,6,24]]},"reference":[{"issue":"21","key":"2432_CR1","doi-asserted-by":"publisher","first-page":"4550","DOI":"10.3390\/app9214550","volume":"9","author":"EC Olivetti","year":"2019","unstructured":"Olivetti EC, Nicotera S, Marcolin F, Vezzetti E, Sotong JPA, Zavattero E, Ramieri G (2019) 3D soft-tissue prediction methodologies for orthognathic surgery\u2014a literature review. Appl Sci. 9(21):4550","journal-title":"Appl Sci."},{"issue":"7553","key":"2432_CR2","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature. 
521(7553):436\u2013444","journal-title":"Nature."},{"issue":"4","key":"2432_CR3","doi-asserted-by":"publisher","first-page":"1507","DOI":"10.3390\/app10041507","volume":"10","author":"L Tanzi","year":"2020","unstructured":"Tanzi L, Vezzetti E, Moreno R, Moos S (2020) X-Ray Bone fracture classification using deep learning: a baseline for designing a reliable approach. Appl Sci. 10(4):1507","journal-title":"Appl Sci."},{"key":"2432_CR4","doi-asserted-by":"publisher","first-page":"109373","DOI":"10.1016\/j.ejrad.2020.109373","volume":"133","author":"L Tanzi","year":"2020","unstructured":"Tanzi L, Vezzetti E, Moreno R, Aprato A, Audisio A, Mass\u00e8 A (2020) Hierarchical fracture classification of proximal femur X-Ray images using a multistage Deep Learning approach. Eur J Radiol. 133:109373","journal-title":"Eur J Radiol."},{"key":"2432_CR5","doi-asserted-by":"publisher","first-page":"8","DOI":"10.1016\/j.csbj.2014.11.005","volume":"13","author":"K Kourou","year":"2015","unstructured":"Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI (2015) Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J. 13:8\u201317","journal-title":"Comput Struct Biotechnol J."},{"issue":"1","key":"2432_CR6","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1038\/s41591-018-0316-z","volume":"25","author":"A Esteva","year":"2019","unstructured":"Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, Cui C, Corrado G, Thrun S, Dean J (2019) A guide to deep learning in healthcare. Nature Med. 25(1):24\u20139","journal-title":"Nature Med."},{"key":"2432_CR7","doi-asserted-by":"publisher","first-page":"66","DOI":"10.1016\/j.media.2017.01.007","volume":"37","author":"S Bernhardt","year":"2017","unstructured":"Bernhardt S, Nicolau SA, Soler L, Doignon C (2017) The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal. 
37:66\u201390","journal-title":"Med Image Anal."},{"key":"2432_CR8","doi-asserted-by":"publisher","first-page":"816","DOI":"10.1007\/11866565_100","volume-title":"Medical image computing and computer-assisted intervention \u2013 MICCAI 2006","author":"C Wengert","year":"2006","unstructured":"Wengert C, Cattin PC, Duff JM, Baur C, Sz\u00e9kely G (2006) Markerless endoscopic registration and referencing. In: Larsen R, Nielsen M, Sporring J (eds) Medical image computing and computer-assisted intervention \u2013 MICCAI 2006. Springer, Berlin, Heidelberg, pp 816\u2013823"},{"issue":"11","key":"2432_CR9","doi-asserted-by":"publisher","first-page":"1082","DOI":"10.1109\/42.896784","volume":"19","author":"PJ Edwards","year":"2000","unstructured":"Edwards PJ, King AP, Maurer CR, de Cunha DA, Hawkes DJ, Hill DL, Gaston RP, Fenlon MR, Jusczyzck A, Strong AJ, Chandler CL, Gleeson MJ (2000) Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans Med Imaging. 19(11):1082\u20131093","journal-title":"IEEE Trans Med Imaging."},{"issue":"1","key":"2432_CR10","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/TMI.2016.2593957","volume":"36","author":"AP Twinanda","year":"2017","unstructured":"Twinanda AP, Shehata S, Mutter D, Marescaux J, de Mathelin M, Padoy N (2017) EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imaging. 36(1):86\u201397","journal-title":"IEEE Trans Med Imaging."},{"issue":"5","key":"2432_CR11","doi-asserted-by":"publisher","first-page":"1114","DOI":"10.1109\/TMI.2017.2787657","volume":"37","author":"Y Jin","year":"2018","unstructured":"Jin Y, Dou Q, Chen H, Yu L, Qin J, Fu C-W, Heng P-A (2018) SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network. IEEE Trans Med Imaging. 
37(5):1114\u20131126","journal-title":"IEEE Trans Med Imaging."},{"issue":"11","key":"2432_CR12","doi-asserted-by":"publisher","first-page":"1871","DOI":"10.1007\/s11548-019-02044-7","volume":"14","author":"L Hansen","year":"2019","unstructured":"Hansen L, Siebert M, Diesel J, Heinrich MP (2019) Fusing information from multiple 2D depth cameras for 3D human pose estimation in the operating room. Int J CARS. 14(11):1871\u20131879","journal-title":"Int J CARS."},{"issue":"7","key":"2432_CR13","doi-asserted-by":"publisher","first-page":"1035","DOI":"10.1007\/s00138-016-0792-4","volume":"27","author":"V Belagiannis","year":"2016","unstructured":"Belagiannis V, Wang X, Shitrit HBB, Hashimoto K, Stauder R, Aoki Y, Kranzfelder M, Schneider A, Fua P, Ilic S, Feussner H, Navab N (2016) Parsing human skeletons in an operating room. Mach Vis Appl. 27(7):1035\u20131046","journal-title":"Mach Vis Appl."},{"issue":"14","key":"2432_CR14","doi-asserted-by":"publisher","first-page":"1619","DOI":"10.1177\/0278364919872252","volume":"38","author":"T Zhou","year":"2019","unstructured":"Zhou T, Wachs JP (2019) Spiking Neural Networks for early prediction in human\u2013robot collaboration. Int J Robot Res 38(14):1619\u20131643","journal-title":"Int J Robot Res"},{"issue":"4","key":"2432_CR15","doi-asserted-by":"publisher","first-page":"242","DOI":"10.7599\/hmr.2016.36.4.242","volume":"36","author":"HG Ha","year":"2016","unstructured":"Ha HG, Hong J (2016) Augmented reality in medicine. Hanyang Med Rev. 36(4):242\u2013247","journal-title":"Hanyang Med Rev."},{"issue":"5","key":"2432_CR16","doi-asserted-by":"publisher","first-page":"e2136","DOI":"10.1002\/rcs.2136","volume":"16","author":"L Tanzi","year":"2020","unstructured":"Tanzi L, Piazzolla P, Vezzetti E (2020) Intraoperative surgery room management: A deep learning perspective. Int J Med Robot Comput Assist Surg. 
16(5):e2136","journal-title":"Int J Med Robot Comput Assist Surg."},{"key":"2432_CR17","doi-asserted-by":"publisher","first-page":"105505","DOI":"10.1016\/j.cmpb.2020.105505","volume":"191","author":"M Gribaudo","year":"2020","unstructured":"Gribaudo M, Piazzolla P, Porpiglia F, Vezzetti E, Violante MG (2020) 3D augmentation of the surgical video stream: toward a modular approach. Comput Method Program Biomed. 191:105505","journal-title":"Comput Method Program Biomed."},{"issue":"suppl_1","key":"2432_CR18","doi-asserted-by":"publisher","first-page":"i72","DOI":"10.1093\/bja\/aex383","volume":"119","author":"H Ashrafian","year":"2017","unstructured":"Ashrafian H, Clancy O, Grover V, Darzi A (2017) The evolution of robotic surgery: surgical and anaesthetic aspects. Br J Anaesth. 119(suppl_1):i72-84","journal-title":"Br J Anaesth."},{"issue":"3","key":"2432_CR19","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1016\/j.aju.2018.07.001","volume":"16","author":"NNP Buchholz","year":"2018","unstructured":"Buchholz NNP, Bach C (2018) The age of robotic surgery \u2013 Is laparoscopy dead? Arab J Urol. 16(3):261","journal-title":"Arab J Urol."},{"key":"2432_CR20","unstructured":"Fischer J, Neff M, Freudenstein D, Bartz D (2004) Medical augmented reality based on commercial image guided surgery. In: Proceedings of the tenth eurographics conference on virtual environments. goslar, DEU: Eurographics Association pp 83\u201386. (EGVE\u201904)."},{"issue":"2","key":"2432_CR21","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1097\/MOU.0b013e3283501774","volume":"22","author":"M Nakamoto","year":"2012","unstructured":"Nakamoto M, Ukimura O, Faber K, Gill IS (2012) Current progress on augmented reality visualization in endoscopic surgery. Curr Opin Urol. 
22(2):121\u2013126","journal-title":"Curr Opin Urol."},{"issue":"Suppl 1","key":"2432_CR22","doi-asserted-by":"publisher","first-page":"S-28","DOI":"10.1089\/end.2017.0723","volume":"32","author":"LM Huynh","year":"2018","unstructured":"Huynh LM, Ahlering TE (2018) Robot-assisted radical prostatectomy: a step-by-step guide. J Endourol. 32(Suppl 1):S-28","journal-title":"J Endourol."},{"issue":"12","key":"2432_CR23","doi-asserted-by":"publisher","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","volume":"39","author":"V Badrinarayanan","year":"2017","unstructured":"Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 39(12):2481\u20132495","journal-title":"IEEE Trans Pattern Anal Mach Intell."},{"key":"2432_CR24","first-page":"234","volume-title":"MICCAI 2015","author":"O Ronneberger","year":"2015","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF (eds) MICCAI 2015. Springer, Berlin, pp 234\u2013241"},{"key":"2432_CR25","doi-asserted-by":"crossref","unstructured":"Zhao H, Shi J, Qi X, Wang X, Jia J (2017) Pyramid scene parsing network. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). pp 6230\u20136239.","DOI":"10.1109\/CVPR.2017.660"},{"key":"2432_CR26","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR) [Internet]. IEEE pp 770\u20138. Available from: http:\/\/ieeexplore.ieee.org\/document\/7780459\/","DOI":"10.1109\/CVPR.2016.90"},{"key":"2432_CR27","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Bengio Y, LeCun Y, (eds). 
3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA"},{"key":"2432_CR28","unstructured":"Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. CoRR [Internet]. Available from: http:\/\/arxiv.org\/abs\/1704.04861"},{"key":"2432_CR29","unstructured":"Wada K (2016) Labelme: image polygonal annotation with python [Internet]. Available from: https:\/\/github.com\/wkentaro\/labelme"},{"issue":"1","key":"2432_CR30","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1016\/0734-189X(85)90016-7","volume":"30","author":"S Suzuki","year":"1985","unstructured":"Suzuki S, Be K (1985) Topological structural analysis of digitized binary images by border following. Comput Vis Graph Image Process. 30(1):32\u201346","journal-title":"Comput Vis Graph Image Process."},{"issue":"2","key":"2432_CR31","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1016\/0167-8655(82)90016-2","volume":"1","author":"J Sklansky","year":"1982","unstructured":"Sklansky J (1982) Finding the convex hull of a simple polygon. Pattern Recogn Lett. 1(2):79\u201383","journal-title":"Pattern Recogn Lett."},{"key":"2432_CR32","unstructured":"Kingma DP, Ba J (2017) Adam: a method for stochastic optimization. [cs] [Internet]. [cited 2019 Nov 27]; Available from: http:\/\/arxiv.org\/abs\/1412.6980"},{"key":"2432_CR33","unstructured":"Chollet F et al (2015) Keras [Internet]. Available from: https:\/\/keras.io"},{"key":"2432_CR34","unstructured":"Gupta D (2019) keras-segmentation [Internet]. 
Available from: https:\/\/github.com\/divamgupta\/image-segmentation-keras"},{"issue":"4","key":"2432_CR35","doi-asserted-by":"publisher","first-page":"505","DOI":"10.1016\/j.eururo.2019.03.037","volume":"76","author":"F Porpiglia","year":"2019","unstructured":"Porpiglia F, Checcucci E, Amparore D, Manfredi M, Massa F, Piazzolla P, Manfrin D, Piana A, Tota D, Bollito E, Fiori C (2019) Three-dimensional elastic augmented-reality robot-assisted radical prostatectomy using hyperaccuracy three-dimensional reconstruction technology: a step further in the identification of capsular involvement. Eur Urol. 76(4):505\u2013514","journal-title":"Eur Urol."}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-021-02432-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-021-02432-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-021-02432-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,8,10]],"date-time":"2021-08-10T12:21:29Z","timestamp":1628598089000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-021-02432-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,24]]},"references-count":35,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2021,9]]}},"alternative-id":["2432"],"URL":"https:\/\/doi.org\/10.1007\/s11548-021-02432-y","relation":{},"ISSN":["1861-6410","1861-6429"],"issn-type":[{"value":"1861-6410","type":"print"},{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,6,24]]},"assertion":[{"value":
"20 January 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 May 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 June 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declaration"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}