{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T15:27:39Z","timestamp":1773847659928,"version":"3.50.1"},"reference-count":17,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2022,3,30]],"date-time":"2022-03-30T00:00:00Z","timestamp":1648598400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,30]],"date-time":"2022-03-30T00:00:00Z","timestamp":1648598400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"WELCOME\/EPSRC","award":["203145\/Z\/16\/Z"],"award-info":[{"award-number":["203145\/Z\/16\/Z"]}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P027938\/1"],"award-info":[{"award-number":["EP\/P027938\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/R004080\/1"],"award-info":[{"award-number":["EP\/R004080\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/P012841\/1"],"award-info":[{"award-number":["EP\/P012841\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J CARS"],"published-print":{"date-parts":[[2022,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Purpose<\/jats:title>\n                <jats:p>Robotic-assisted laparoscopic surgery has become 
the trend in medicine thanks to its convenience and lower risk of infection compared with traditional open surgery. However, visibility during these procedures may severely deteriorate due to electrocauterisation, which generates smoke in the operating cavity. This decreased visibility prolongs procedural time and hinders surgical performance. Recent deep learning-based techniques have shown potential for smoke and glare removal, but few target laparoscopic videos.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Method<\/jats:title>\n                <jats:p>We propose DeSmoke-LAP, a new method for removing smoke from real robotic laparoscopic hysterectomy videos. The proposed method is based on an unpaired image-to-image cycle-consistent generative adversarial network in which two novel loss functions, namely inter-channel discrepancies and dark channel prior, are integrated to facilitate smoke removal while maintaining the true semantics and illumination of the scene.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>DeSmoke-LAP is compared with several state-of-the-art desmoking methods qualitatively and quantitatively using referenceless image quality metrics on 10 laparoscopic hysterectomy videos through 5-fold cross-validation.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusion<\/jats:title>\n                <jats:p>DeSmoke-LAP outperformed existing methods and generated smoke-free images without requiring ground truths (paired images) or an atmospheric scattering model. This is a distinctive achievement in dehazing for surgery, even in scenarios with partial inhomogeneous smoke. 
Our code and hysterectomy dataset will be made publicly available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/www.ucl.ac.uk\/interventional-surgical-sciences\/weiss-open-research\/weiss-open-data-server\/desmoke-lap\">https:\/\/www.ucl.ac.uk\/interventional-surgical-sciences\/weiss-open-research\/weiss-open-data-server\/desmoke-lap<\/jats:ext-link>.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1007\/s11548-022-02595-2","type":"journal-article","created":{"date-parts":[[2022,3,30]],"date-time":"2022-03-30T18:04:23Z","timestamp":1648663463000},"page":"885-893","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":34,"title":["DeSmoke-LAP: improved unpaired image-to-image translation for desmoking in laparoscopic surgery"],"prefix":"10.1007","volume":"17","author":[{"given":"Yirou","family":"Pan","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1329-4565","authenticated-orcid":false,"given":"Sophia","family":"Bano","sequence":"additional","affiliation":[]},{"given":"Francisco","family":"Vasconcelos","sequence":"additional","affiliation":[]},{"given":"Hyun","family":"Park","sequence":"additional","affiliation":[]},{"given":"Taikyeong Ted.","family":"Jeong","sequence":"additional","affiliation":[]},{"given":"Danail","family":"Stoyanov","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,30]]},"reference":[{"issue":"2","key":"2595_CR1","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1177\/1553350614537564","volume":"22","author":"L Gu","year":"2015","unstructured":"Gu L, Liu P, Jiang C, Luo M, Xu Q (2015) Virtual digital defogging technology improves laparoscopic imaging quality. 
Surgical innov 22(2):171\u2013176","journal-title":"Surgical innov"},{"key":"2595_CR2","doi-asserted-by":"crossref","unstructured":"Morales P, Klinghoffer T, Jae\u00a0Lee S (2019) Feature forwarding for efficient single image dehazing. In: IEEE computer vision and pattern recognition workshops, pp 0\u20130","DOI":"10.1109\/CVPRW.2019.00260"},{"key":"2595_CR3","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1016\/j.scs.2018.02.001","volume":"39","author":"X Cheng","year":"2018","unstructured":"Cheng X, Yang B, Liu G, Olofsson T, Li H (2018) A variational approach to atmospheric visibility estimation in the weather of fog and haze. Sustain Cities Soc 39:215\u2013224","journal-title":"Sustain Cities Soc"},{"issue":"7","key":"2595_CR4","doi-asserted-by":"publisher","first-page":"961","DOI":"10.1587\/transinf.2021EDP7033","volume":"104","author":"H Zhou","year":"2021","unstructured":"Zhou H, Xiong H, Li C, Jiang W, Lu K, Chen N, Liu Y (2021) Single image dehazing based on weighted variational regularized model. IEICE Trans Inf Syst 104(7):961\u2013969","journal-title":"IEICE Trans Inf Syst"},{"key":"2595_CR5","doi-asserted-by":"publisher","first-page":"208898","DOI":"10.1109\/ACCESS.2020.3038437","volume":"8","author":"S Salazar-Colores","year":"2020","unstructured":"Salazar-Colores S, Jim\u00e9nez HM, Ortiz-Echeverri CJ, Flores G (2020) Desmoking laparoscopy surgery images using image-to-image translation guided by embedded dark channel. IEEE Access 8:208898\u2013208909","journal-title":"IEEE Access"},{"key":"2595_CR6","doi-asserted-by":"crossref","unstructured":"Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Computer vision (ICCV), 2017 IEEE international conference","DOI":"10.1109\/ICCV.2017.244"},{"key":"2595_CR7","doi-asserted-by":"crossref","unstructured":"Engin D, Gen\u00e7 A, Kemal\u00a0Ekenel H (2018) Cycle-dehaze: enhanced cyclegan for single image dehazing. 
In: IEEE computer vision and pattern recognition workshops, pp 825\u2013833","DOI":"10.1109\/CVPRW.2018.00127"},{"key":"2595_CR8","doi-asserted-by":"crossref","unstructured":"Zhang H, Patel VM (2018) Densely connected pyramid dehazing network. In: IEEE computer vision and pattern recognition, pp 3194\u20133203","DOI":"10.1109\/CVPR.2018.00337"},{"issue":"12","key":"2595_CR9","first-page":"2341","volume":"33","author":"K He","year":"2010","unstructured":"He K, Sun J, Tang X (2010) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33(12):2341\u20132353","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"2595_CR10","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst 27"},{"issue":"2","key":"2595_CR11","doi-asserted-by":"publisher","first-page":"431","DOI":"10.1007\/s11045-019-00670-7","volume":"31","author":"M Kang","year":"2020","unstructured":"Kang M, Jung M (2020) A single image dehazing model using total variation and inter-channel correlation. Multidimens Syst Signal Process 31(2):431\u2013464","journal-title":"Multidimens Syst Signal Process"},{"key":"2595_CR12","unstructured":"Wang C, Cheikh FA, Kaaniche M, Elle OJ (2018) A smoke removal method for laparoscopic images. arXiv preprint arXiv:1803.08410"},{"issue":"6","key":"2595_CR13","doi-asserted-by":"publisher","first-page":"2856","DOI":"10.1109\/TIP.2018.2813092","volume":"27","author":"Y-T Peng","year":"2018","unstructured":"Peng Y-T, Cao K, Cosman PC (2018) Generalization of the dark channel prior for single image restoration. 
IEEE Trans Image Process 27(6):2856\u20132868","journal-title":"IEEE Trans Image Process"},{"issue":"11","key":"2595_CR14","doi-asserted-by":"publisher","first-page":"3888","DOI":"10.1109\/TIP.2015.2456502","volume":"24","author":"LK Choi","year":"2015","unstructured":"Choi LK, You J, Bovik AC (2015) Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans Image Process 24(11):3888\u20133901","journal-title":"IEEE Trans Image Process"},{"issue":"4","key":"2595_CR15","doi-asserted-by":"publisher","first-page":"717","DOI":"10.1109\/TIP.2008.2011760","volume":"18","author":"R Ferzli","year":"2009","unstructured":"Ferzli R, Karam LJ (2009) A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans Image Process 18(4):717\u2013728","journal-title":"IEEE Trans Image Process"},{"issue":"2","key":"2595_CR16","doi-asserted-by":"publisher","first-page":"87","DOI":"10.5566\/ias.v27.p87-95","volume":"27","author":"N Hauti\u00e8re","year":"2008","unstructured":"Hauti\u00e8re N, Tarel J-P, Aubert D, Dumont E (2008) Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal Stereol J 27(2):87\u201395","journal-title":"Image Anal Stereol J"},{"key":"2595_CR17","doi-asserted-by":"crossref","unstructured":"Park T, Efros AA, Zhang R, Zhu J-Y (2020) Contrastive learning for unpaired image-to-image translation. In: European conference on computer vision, pp 319\u2013345. 
Springer","DOI":"10.1007\/978-3-030-58545-7_19"}],"container-title":["International Journal of Computer Assisted Radiology and Surgery"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-022-02595-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11548-022-02595-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11548-022-02595-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,5,16]],"date-time":"2022-05-16T11:34:42Z","timestamp":1652700882000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11548-022-02595-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,30]]},"references-count":17,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,5]]}},"alternative-id":["2595"],"URL":"https:\/\/doi.org\/10.1007\/s11548-022-02595-2","relation":{},"ISSN":["1861-6429"],"issn-type":[{"value":"1861-6429","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,30]]},"assertion":[{"value":"3 March 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Code to 
be released with this paper.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code availability"}},{"value":"For this type of study, formal consent is not required.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"This article does not contain patient data.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}}]}}