{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,14]],"date-time":"2026-01-14T00:18:30Z","timestamp":1768349910844,"version":"3.49.0"},"reference-count":26,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2020,4,29]],"date-time":"2020-04-29T00:00:00Z","timestamp":1588118400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,4,29]],"date-time":"2020-04-29T00:00:00Z","timestamp":1588118400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Wellcome\/EPSRC","award":["203145Z\/16\/Z"],"award-info":[{"award-number":["203145Z\/16\/Z"]}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P027938\/1"],"award-info":[{"award-number":["EP\/P027938\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/R004080\/1"],"award-info":[{"award-number":["EP\/R004080\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["NS\/A000027\/1"],"award-info":[{"award-number":["NS\/A000027\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010664","name":"H2020 Future and Emerging Technologies","doi-asserted-by":"publisher","award":["GA 863146"],"award-info":[{"award-number":["GA 863146"]}],"id":[{"id":"10.13039\/100010664","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Royal Academy of Engineering Chair in Emerging 
Technologies","award":["CiET1819\/2\/36"],"award-info":[{"award-number":["CiET1819\/2\/36"]}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P012841\/1"],"award-info":[{"award-number":["EP\/P012841\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Medtronic\/Royal Academy of Engineering Research Chair","award":["RCSRF1819\/7\/34"],"award-info":[{"award-number":["RCSRF1819\/7\/34"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"published-print":{"date-parts":[[2020,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Purpose<\/jats:title>\n                <jats:p>Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). By using a lens\/fibre-optic scope, inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. A limited field of view, occlusions due to fetal presence, and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide a better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may facilitate improving mosaics from fetoscopic videos.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Methods<\/jats:title>\n                <jats:p>We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. 
We introduce differential learning rates during model training to effectively utilise the pre-trained CNN weights. This may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>We perform a quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation where each video is treated as a hold-out or test set and training is performed using the remaining videos.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusion<\/jats:title>\n                <jats:p>FetNet achieved superior performance compared to the existing CNN-based methods and provided improved inference because of the spatio-temporal information modelling. Online testing of FetNet, using a Tesla V100-DGXS-32GB GPU, achieved a frame rate of 114 fps. 
These results show that our method could potentially provide a real-time solution for CAI, automating occlusion and photocoagulation identification during fetoscopic procedures.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-020-02169-0","type":"journal-article","created":{"date-parts":[[2020,4,29]],"date-time":"2020-04-29T09:03:44Z","timestamp":1588151024000},"page":"791-801","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos"],"prefix":"10.1007","volume":"15","author":[{"given":"Sophia","family":"Bano","sequence":"first","affiliation":[]},{"given":"Francisco","family":"Vasconcelos","sequence":"additional","affiliation":[]},{"given":"Emmanuel","family":"Vander Poorten","sequence":"additional","affiliation":[]},{"given":"Tom","family":"Vercauteren","sequence":"additional","affiliation":[]},{"given":"Sebastien","family":"Ourselin","sequence":"additional","affiliation":[]},{"given":"Jan","family":"Deprest","sequence":"additional","affiliation":[]},{"given":"Danail","family":"Stoyanov","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,4,29]]},"reference":[{"key":"2169_CR1","unstructured":"Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: Proceedings of the international conference on learning representations"},{"key":"2169_CR2","first-page":"311","volume-title":"Lecture Notes in Computer Science","author":"Sophia Bano","year":"2019","unstructured":"Bano S, Vasconcelos F, Amo MT, Dwyer G, Gruijthuijsen C, Deprest J, Ourselin S, Vander\u00a0Poorten E, Vercauteren T, Stoyanov D (2019) Deep sequential mosaicking of fetoscopic videos. In: International conference on medical image computing and computer-assisted intervention. 
Springer, New York, pp 311\u2013319"},{"issue":"2","key":"2169_CR3","first-page":"107","volume":"39","author":"A Baschat","year":"2011","unstructured":"Baschat A, Chmait RH, Deprest J, Gratac\u00f3s E, Hecher K, Kontopoulos E, Quintero R, Skupski DW, Valsky DV, Ville Y (2011) Twin-to-twin transfusion syndrome (TTTS). J Perinat Med 39(2):107\u2013112","journal-title":"J Perinat Med"},{"issue":"3","key":"2169_CR4","doi-asserted-by":"publisher","first-page":"e1-197","DOI":"10.1016\/j.ajog.2012.11.027","volume":"208","author":"D Baud","year":"2013","unstructured":"Baud D, Windrim R, Keunen J, Kelly EN, Shah P, Van Mieghem T, Seaward PGR, Ryan G (2013) Fetoscopic laser therapy for twin-twin transfusion syndrome before 17 and after 26 weeks\u2019 gestation. Am J Obstet Gynecol 208(3):e1-197","journal-title":"Am J Obstet Gynecol"},{"key":"2169_CR5","unstructured":"Cadene R, Robert T, Thome N, Cord M (2016) M2cai workflow challenge: convolutional neural networks with time smoothing and hidden Markov model for video frames classification. arXiv preprint arXiv:1610.05541"},{"key":"2169_CR6","doi-asserted-by":"crossref","unstructured":"Daga P, Chadebecq F, Shakir DI, Herrera LCGP, Tella M, Dwyer G, David AL, Deprest J, Stoyanov D, Vercauteren T (2016) Real-time mosaicing of fetoscopic videos using sift. In: Medical imaging 2016: image-guided procedures, robotic interventions, and modeling, vol 9786. International Society for Optics and Photonics, p 97861R","DOI":"10.1117\/12.2217172"},{"key":"2169_CR7","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image database. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, IEEE, pp 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"issue":"5","key":"2169_CR8","doi-asserted-by":"publisher","first-page":"347","DOI":"10.1046\/j.1469-0705.1998.11050347.x","volume":"11","author":"J Deprest","year":"1998","unstructured":"Deprest J, Van Schoubroeck D, Van Ballaer P, Flageole H, Van Assche FA, Vandenberghe K (1998) Alternative technique for Nd: YAG laser coagulation in twin-to-twin transfusion syndrome with anterior placenta. Ultrasound Obstet Gynecol J 11(5):347\u2013352","journal-title":"Ultrasound Obstet Gynecol J"},{"key":"2169_CR9","doi-asserted-by":"publisher","first-page":"551","DOI":"10.1007\/978-3-319-46720-7_64","volume-title":"Medical Image Computing and Computer-Assisted Intervention \u2013 MICCAI 2016","author":"Robert DiPietro","year":"2016","unstructured":"DiPietro R, Lea C, Malpani A, Ahmidi N, Vedula SS, Lee GI, Lee MR, Hager GD (2016) Recognizing surgical activities with recurrent neural networks. In: International conference on medical image computing and computer-assisted intervention. Springer, New York, pp 551\u2013558"},{"key":"2169_CR10","doi-asserted-by":"crossref","unstructured":"Donahue J, Anne\u00a0Hendricks L, Guadarrama S, Rohrbach M, Venugopalan S, Saenko K, Darrell T (2015) Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2625\u20132634","DOI":"10.1109\/CVPR.2015.7298878"},{"key":"2169_CR11","unstructured":"Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the international conference on artificial intelligence and statistics, pp 249\u2013256"},{"key":"2169_CR12","unstructured":"Goodfellow I, Bengio Y, Courville A (2016) Deep learning, Chapter 15. 
Representation Learning, MIT press"},{"key":"2169_CR13","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"issue":"5","key":"2169_CR14","doi-asserted-by":"publisher","first-page":"1114","DOI":"10.1109\/TMI.2017.2787657","volume":"37","author":"Y Jin","year":"2017","unstructured":"Jin Y, Dou Q, Chen H, Yu L, Qin J, Fu CW, Heng PA (2017) SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network. IEEE Trans Med Imaging 37(5):1114\u20131126","journal-title":"IEEE Trans Med Imaging"},{"issue":"1","key":"2169_CR15","doi-asserted-by":"publisher","first-page":"19","DOI":"10.1016\/j.ajog.2012.09.025","volume":"208","author":"L Lewi","year":"2013","unstructured":"Lewi L, Deprest J, Hecher K (2013) The vascular anastomoses in monochorionic twin pregnancies and their clinical consequences. Am J Obstet Gynecol 208(1):19\u201330","journal-title":"Am J Obstet Gynecol"},{"issue":"2\u20133","key":"2169_CR16","doi-asserted-by":"publisher","first-page":"204","DOI":"10.1016\/j.placenta.2006.03.005","volume":"28","author":"E Lopriore","year":"2007","unstructured":"Lopriore E, Middeldorp JM, Oepkes D, Klumper FJ, Walther FJ, Vandenbussche FP (2007) Residual anastomoses after fetoscopic laser surgery in twin-to-twin transfusion syndrome: frequency, associated risks and outcome. Placenta 28(2\u20133):204\u2013208","journal-title":"Placenta"},{"issue":"5","key":"2169_CR17","doi-asserted-by":"publisher","first-page":"713","DOI":"10.1007\/s11548-018-1728-4","volume":"13","author":"L Peter","year":"2018","unstructured":"Peter L, Tella-Amo M, Shakir DI, Attilakos G, Wimalasundera R, Deprest J, Ourselin S, Vercauteren T (2018) Retrieval and registration of long-range overlapping frames for scalable mosaicking of in vivo fetoscopy. 
Int J Comput Assist Radiol Surg 13(5):713\u2013720","journal-title":"Int J Comput Assist Radiol Surg"},{"issue":"10","key":"2169_CR18","doi-asserted-by":"publisher","first-page":"763","DOI":"10.1080\/14767050701591827","volume":"20","author":"RA Quintero","year":"2007","unstructured":"Quintero RA, Ishii K, Chmait RH, Bornick PW, Allen MH, Kontopoulos EV (2007) Sequential selective laser photocoagulation of communicating vessels in twin-twin transfusion syndrome. J Mater Fetal Neonatal Med 20(10):763\u2013768","journal-title":"J Mater Fetal Neonatal Med"},{"issue":"2","key":"2169_CR19","doi-asserted-by":"publisher","first-page":"227","DOI":"10.1007\/s11548-018-1886-4","volume":"14","author":"P Sadda","year":"2019","unstructured":"Sadda P, Imamoglu M, Dombrowski M, Papademetris X, Bahtiyar MO, Onofrey J (2019) Deep-learned placental vessel segmentation for intraoperative video enhancement in fetoscopic surgery. Int J Comput Assist Radiol Surg 14(2):227\u2013235","journal-title":"Int J Comput Assist Radiol Surg"},{"issue":"2","key":"2169_CR20","doi-asserted-by":"publisher","first-page":"136","DOI":"10.1056\/NEJMoa032597","volume":"351","author":"MV Senat","year":"2004","unstructured":"Senat MV, Deprest J, Boulvain M, Paupe A, Winer N, Ville Y (2004) Endoscopic laser surgery versus serial amnioreduction for severe twin-to-twin transfusion syndrome. N Engl J Med 351(2):136\u2013144","journal-title":"N Engl J Med"},{"key":"2169_CR21","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: Proceedings of the international conference on learning representations"},{"key":"2169_CR22","unstructured":"Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. 
In: Advances in neural information processing systems, pp 3104\u20133112"},{"issue":"3","key":"2169_CR23","doi-asserted-by":"publisher","first-page":"035001","DOI":"10.1117\/1.JMI.6.3.035001","volume":"6","author":"M Tella-Amo","year":"2019","unstructured":"Tella-Amo M, Peter L, Shakir DI, Deprest J, Stoyanov D, Vercauteren T, Ourselin S (2019) Pruning strategies for efficient online globally consistent mosaicking in fetoscopy. J Med Imaging 6(3):035001","journal-title":"J Med Imaging"},{"issue":"1","key":"2169_CR24","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/TMI.2016.2593957","volume":"36","author":"AP Twinanda","year":"2017","unstructured":"Twinanda AP, Shehata S, Mutter D, Marescaux J, De Mathelin M, Padoy N (2017) Endonet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imaging 36(1):86\u201397","journal-title":"IEEE Trans Med Imaging"},{"issue":"10","key":"2169_CR25","doi-asserted-by":"publisher","first-page":"1661","DOI":"10.1007\/s11548-018-1813-8","volume":"13","author":"F Vasconcelos","year":"2018","unstructured":"Vasconcelos F, Brand\u00e3o P, Vercauteren T, Ourselin S, Deprest J, Peebles D, Stoyanov D (2018) Towards computer-assisted TTTS: laser ablation detection for workflow segmentation from fetoscopic video. Int J Comput Assist Radiol Surg 13(10):1661\u20131670","journal-title":"Int J Comput Assist Radiol Surg"},{"key":"2169_CR26","doi-asserted-by":"publisher","first-page":"818","DOI":"10.1007\/978-3-319-10590-1_53","volume-title":"Computer Vision \u2013 ECCV 2014","author":"Matthew D. Zeiler","year":"2014","unstructured":"Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: European conference on computer vision. 
Springer, New York, pp 818\u2013833"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-020-02169-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-020-02169-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-020-02169-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,4,29]],"date-time":"2021-04-29T00:54:35Z","timestamp":1619657675000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-020-02169-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,4,29]]},"references-count":26,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2020,5]]}},"alternative-id":["2169"],"URL":"https:\/\/doi.org\/10.1007\/s11548-020-02169-0","relation":{},"ISSN":["1861-6410","1861-6429"],"issn-type":[{"value":"1861-6410","type":"print"},{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,4,29]]},"assertion":[{"value":"16 November 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 April 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 April 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Compliance with ethical standards"}},{"value":"The authors declare that they have no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"For this type of study formal consent is not required.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"No animals or humans were involved in this research. All videos were anonymised before delivery to the researchers.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}}]}}