{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T14:11:24Z","timestamp":1775743884997,"version":"3.50.1"},"reference-count":19,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,3,31]],"date-time":"2023-03-31T00:00:00Z","timestamp":1680220800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,3,31]],"date-time":"2023-03-31T00:00:00Z","timestamp":1680220800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010269","name":"Wellcome Trust","doi-asserted-by":"publisher","award":["[203145\/Z\/16\/Z"],"award-info":[{"award-number":["[203145\/Z\/16\/Z"]}],"id":[{"id":"10.13039\/100010269","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P027938\/1"],"award-info":[{"award-number":["EP\/P027938\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/R004080\/1"],"award-info":[{"award-number":["EP\/R004080\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P012841\/1"],"award-info":[{"award-number":["EP\/P012841\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000272","name":"National Institute for Health and Care Research","doi-asserted-by":"publisher","award":["NIHR UCLH\/UCL BRC Neuroscience"],"award-info":[{"award-number":["NIHR UCLH\/UCL BRC Neuroscience"]}],"id":[{"id":"10.13039\/501100000272","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000287","name":"Royal Academy of Engineering","doi-asserted-by":"publisher","award":["CiET1819\/2\/36"],"award-info":[{"award-number":["CiET1819\/2\/36"]}],"id":[{"id":"10.13039\/501100000287","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Purpose<\/jats:title>\n                <jats:p>Microsurgical Aneurysm Clipping Surgery (MACS) carries a high risk for intraoperative aneurysm rupture. Automated recognition of instances when the aneurysm is exposed in the surgical video would be a valuable reference point for neuronavigation, indicating phase transitioning and more importantly designating moments of high risk for rupture. 
This article introduces the MACS dataset containing 16 surgical videos with frame-level expert annotations and proposes a learning methodology for surgical scene understanding that identifies video frames with the aneurysm present in the operating microscope\u2019s field-of-view.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Methods<\/jats:title>\n                <jats:p>Despite the dataset imbalance (80% no presence, 20% presence), and although developed without explicit spatial annotations, we demonstrate the applicability of Transformer-based deep learning architectures (MACSSwin-T, vidMACSSwin-T) to detect the aneurysm and classify MACS frames accordingly. We evaluate the proposed models in multiple-fold cross-validation experiments with independent sets and in an unseen set of 15 images against 10 human experts (neurosurgeons).<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>Average (across folds) accuracies of 80.8% (range 78.5\u201382.4%) and 87.1% (range 85.1\u201391.3%) are obtained for the image- and video-level approaches, respectively, demonstrating that the models effectively learn the classification task. Qualitative evaluation of the models\u2019 class activation maps shows them to be localized on the aneurysm\u2019s actual location. Depending on the decision threshold, MACSSwin-T achieves 66.7\u201386.7% accuracy on the unseen images, compared to 82% for human raters, with moderate to strong correlation.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusions<\/jats:title>\n                <jats:p>The proposed architectures show robust performance and, with an adjusted decision threshold promoting detection of the underrepresented (aneurysm presence) class, achieve accuracy comparable to that of human experts.
Our work represents a first step towards landmark detection in MACS, with the aim of alerting surgical teams to high-risk moments so that precautionary measures can be taken to avoid rupture.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-023-02871-9","type":"journal-article","created":{"date-parts":[[2023,3,31]],"date-time":"2023-03-31T18:03:20Z","timestamp":1680285800000},"page":"1033-1041","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Shifted-windows transformers for the detection of cerebral aneurysms in microsurgery"],"prefix":"10.1007","volume":"18","author":[{"given":"Jinfan","family":"Zhou","sequence":"first","affiliation":[]},{"given":"William","family":"Muirhead","sequence":"additional","affiliation":[]},{"given":"Simon C.","family":"Williams","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0980-3227","authenticated-orcid":false,"given":"Danail","family":"Stoyanov","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8000-392X","authenticated-orcid":false,"given":"Hani J.","family":"Marcus","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0357-5996","authenticated-orcid":false,"given":"Evangelos B.","family":"Mazomenos","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,3,31]]},"reference":[{"key":"2871_CR1","doi-asserted-by":"crossref","unstructured":"Lee D, Yu HW, Kim S, Yoon J, Lee K, Chai YJ, Choi JY, Kong H-J, Lee KE, Cho HS, Kim HC (2020) Vision-based tracking system for augmented reality to localize recurrent laryngeal nerve during robotic thyroid surgery. Sci Rep 10(1)","DOI":"10.1038\/s41598-020-65439-6"},{"issue":"2","key":"2871_CR2","doi-asserted-by":"publisher","first-page":"252","DOI":"10.1227\/NEU.0000000000000328","volume":"10","author":"IPB Cabrilo","year":"2014","unstructured":"Cabrilo I, Bijlenga P, Schaller K (2014) Augmented reality in the surgery of cerebral aneurysms: a technical report. Oper Neurosurg 10(2):252\u2013261","journal-title":"Oper Neurosurg"},{"issue":"4","key":"2871_CR3","doi-asserted-by":"publisher","first-page":"504","DOI":"10.1227\/NEU.0000000000000921","volume":"11","author":"SR Kantelhardt","year":"2015","unstructured":"Kantelhardt SR, Gutenberg A, Neulen A, Keric N, Renovanz M, Giese A (2015) Video-assisted navigation for adjustment of image-guidance accuracy to slight brain shift. Oper Neurosurg 11(4):504\u2013511","journal-title":"Oper Neurosurg"},{"issue":"4","key":"2871_CR4","doi-asserted-by":"publisher","first-page":"537","DOI":"10.1007\/s10143-016-0732-9","volume":"40","author":"A Meola","year":"2017","unstructured":"Meola A, Cutolo F, Carbone M, Cagnazzo F, Ferrari M, Ferrari V (2017) Augmented reality in neurosurgery: a systematic review. Neurosurg Rev 40(4):537\u2013548","journal-title":"Neurosurg Rev"},{"key":"2871_CR5","doi-asserted-by":"publisher","first-page":"456","DOI":"10.1159\/000511934","volume":"36","author":"F Chadebecq","year":"2020","unstructured":"Chadebecq F, Vasconcelos F, Mazomenos E, Stoyanov D (2020) Computer vision in the surgical operating room.
Visc Med 36:456\u2013462","journal-title":"Visc Med"},{"issue":"2","key":"2871_CR6","doi-asserted-by":"publisher","first-page":"363","DOI":"10.1097\/SLA.0000000000004594","volume":"276","author":"A Madani","year":"2022","unstructured":"Madani A, Namazi B, Altieri MS, Hashimoto DA, Rivera AM, Pucher PH, Navarrete-Welton A, Sankaranarayanan G, Brunt LM, Okrainec A, Alseidi A (2022) Artificial intelligence for intraoperative guidance: using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann Surg 276(2):363\u2013369","journal-title":"Ann Surg"},{"key":"2871_CR7","doi-asserted-by":"publisher","first-page":"1651","DOI":"10.1007\/s00464-020-07548-x","volume":"35","author":"T Tokuyasu","year":"2020","unstructured":"Tokuyasu T, Iwashita Y, Matsunobu Y, Kamiyama T, Ishikake M, Sakaguchi S, Ebe K, Tada K, Endo Y, Etoh T, Nakashima M, Inomata M (2020) Development of an artificial intelligence system using deep learning to indicate anatomical landmarks during laparoscopic cholecystectomy. Surg Endosc 35:1651\u20131658","journal-title":"Surg Endosc"},{"issue":"1","key":"2871_CR8","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41598-021-93202-y","volume":"11","author":"J Gong","year":"2021","unstructured":"Gong J, Holsinger FC, Noel JE, Mitani S, Jopling J, Bedi N, Koh YW, Orloff LA, Cernea CR, Yeung S (2021) Using deep learning to identify the recurrent laryngeal nerve during thyroidectomy. Sci Rep 11(1):1\u201311","journal-title":"Sci Rep"},{"issue":"3","key":"2871_CR9","doi-asserted-by":"publisher","first-page":"1273","DOI":"10.1007\/s10143-020-01312-4","volume":"44","author":"WR Muirhead","year":"2021","unstructured":"Muirhead WR, Grover PJ, Toma AK, Stoyanov D, Marcus HJ, Murphy M (2021) Adverse intraoperative events during surgical repair of ruptured cerebral aneurysms: a systematic review. Neurosurg Rev 44(3):1273\u20131285","journal-title":"Neurosurg Rev"},{"key":"2871_CR10","first-page":"1","volume":"1","author":"DZ Khan","year":"2021","unstructured":"Khan DZ, Luengo I, Barbarisi S, Addis C, Culshaw L, Dorward NL, Haikka P, Jain A, Kerr K, Koh CH, Layard-Horsfall H, Muirhead W, Palmisciano P, Vasey B, Stoyanov D, Marcus HJ (2021) Automated operative workflow analysis of endoscopic pituitary surgery using machine learning: development and preclinical evaluation (IDEAL stage 0). J Neurosurg 1:1\u20138","journal-title":"J Neurosurg"},{"key":"2871_CR11","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: ICCV, pp 10012\u201310022"},{"key":"2871_CR12","doi-asserted-by":"crossref","unstructured":"Long Y, Li Z, Yee CH, Ng CF, Taylor RH, Unberath M, Dou Q (2021) E-DSSR: efficient dynamic surgical scene reconstruction with transformer-based stereoscopic depth perception. In: MICCAI. Springer, Berlin, pp 415\u2013425","DOI":"10.1007\/978-3-030-87202-1_40"},{"key":"2871_CR13","doi-asserted-by":"crossref","unstructured":"Czempiel T, Paschali M, Ostler D, Kim ST, Busam B, Navab N (2021) OperA: attention-regularized transformers for surgical phase recognition. In: MICCAI. Springer, Berlin, pp 604\u2013614","DOI":"10.1007\/978-3-030-87202-1_58"},{"key":"2871_CR14","doi-asserted-by":"crossref","unstructured":"Zhang J, Nie Y, Chang J, Zhang JJ (2021) Surgical instruction generation with transformers. In: MICCAI.
Springer, Berlin, pp 290\u2013299","DOI":"10.1007\/978-3-030-87202-1_28"},{"key":"2871_CR15","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. NIPS 30"},{"key":"2871_CR16","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2021) An image is worth $$16\\times 16$$ words: transformers for image recognition at scale. In: ICLR"},{"key":"2871_CR17","doi-asserted-by":"crossref","unstructured":"Liu Z, Ning J, Cao Y, Wei Y, Zhang Z, Lin S, Hu H (2022) Video Swin transformer. In: CVPR, pp 3202\u20133211","DOI":"10.1109\/CVPR52688.2022.00320"},{"key":"2871_CR18","doi-asserted-by":"crossref","unstructured":"Carreira J, Zisserman A (2017) Quo vadis, action recognition? A new model and the kinetics dataset. In: CVPR, pp 6299\u20136308","DOI":"10.1109\/CVPR.2017.502"},{"key":"2871_CR19","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE ICCV, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-023-02871-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-023-02871-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-023-02871-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,6,21]],"date-time":"2023-06-21T14:40:20Z","timestamp":1687358420000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-023-02871-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,31]]},"references-count":19,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["2871"],"URL":"https:\/\/doi.org\/10.1007\/s11548-023-02871-9","relation":{},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,3,31]]},"assertion":[{"value":"10 February 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 March 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 March 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any studies with human participants performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}},{"value":"This article does not contain patient data.
Human assessors consented to participate anonymously in the survey presented.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}}]}}
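
Note: the record above is a Crossref REST API response envelope ("status", "message-type", "message"). A minimal sketch of how one might retrieve and inspect it in Python follows, assuming the third-party requests package; the mailto contact in the User-Agent header is a hypothetical placeholder following Crossref's "polite pool" convention.

    import requests

    # DOI taken from the record above.
    DOI = "10.1007/s11548-023-02871-9"

    # Crossref's public works endpoint returns the same envelope as the record:
    # {"status": "ok", "message-type": "work", ..., "message": {...}}
    resp = requests.get(
        f"https://api.crossref.org/works/{DOI}",
        headers={"User-Agent": "metadata-check/0.1 (mailto:user@example.org)"},  # hypothetical contact
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    assert payload["status"] == "ok" and payload["message-type"] == "work"

    work = payload["message"]
    print(work["title"][0])            # article title (Crossref stores it as a one-element list)
    print(work["container-title"][0])  # journal name
    print(work["DOI"], "pages", work["page"])
    print(len(work.get("reference", [])), "references")  # 19, matching "references-count"

Fields such as "title", "container-title", and "reference" are read exactly as they appear in the record above; other works may omit some of them, which is why the reference list is read with .get().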