{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:17:20Z","timestamp":1775578640571,"version":"3.50.1"},"reference-count":84,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2023,2,27]],"date-time":"2023-02-27T00:00:00Z","timestamp":1677456000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2023,7,31]]},"abstract":"<jats:p>\n            Achieving subjective and objective quality assessment of underwater images is of high significance in underwater visual perception and image\/video processing. However, the development of underwater image quality assessment (UIQA) is limited by the lack of publicly available underwater image datasets with human subjective scores and reliable objective UIQA metrics. To address this issue, we establish a large-scale underwater image dataset, dubbed UID2021, for evaluating no-reference (NR) UIQA metrics. The constructed dataset contains 60 multiply degraded underwater images collected from various sources, covering six common underwater scenes (i.e., bluish scene, blue-green scene, greenish scene, hazy scene, low-light scene, and turbid scene), and their corresponding 900 quality-improved versions are generated by employing 15 state-of-the-art underwater image enhancement and restoration algorithms. Mean opinion scores from 52 observers for each image of UID2021 are also obtained by using the pairwise comparison sorting method. Both in-air and underwater-specific NR IQA algorithms are tested on our constructed dataset to fairly compare their performance and analyze their strengths and weaknesses. 
Our proposed UID2021 dataset enables one to evaluate NR UIQA algorithms comprehensively and paves the way for further research on UIQA. The dataset is available at\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/Hou-Guojia\/UID2021\">https:\/\/github.com\/Hou-Guojia\/UID2021<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3578584","type":"journal-article","created":{"date-parts":[[2023,1,6]],"date-time":"2023-01-06T13:17:06Z","timestamp":1673011026000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":100,"title":["UID2021: An Underwater Image Dataset for Evaluation of No-Reference Quality Assessment Metrics"],"prefix":"10.1145","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6509-6259","authenticated-orcid":false,"given":"Guojia","family":"Hou","sequence":"first","affiliation":[{"name":"Qingdao University, Qingdao, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2925-1423","authenticated-orcid":false,"given":"Yuxuan","family":"Li","sequence":"additional","affiliation":[{"name":"Ocean University of China, Qingdao, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5810-0248","authenticated-orcid":false,"given":"Huan","family":"Yang","sequence":"additional","affiliation":[{"name":"Qingdao University, Qingdao, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9831-6457","authenticated-orcid":false,"given":"Kunqian","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0197-1119","authenticated-orcid":false,"given":"Zhenkuan","family":"Pan","sequence":"additional","affiliation":[]}],"member":"320","published-online":{"date-parts":[[2023,2,27]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2932611"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSMC.2017.2788902"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2020.115978"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2015.2491020"},{"issue":"3","key":"e_1_3_1_6_2","doi-asserted-by":"crossref","first-page":"541","DOI":"10.1109\/JOE.2015.2469915","article-title":"Human-visual-system-inspired underwater image quality measures","volume":"41","author":"Panetta Karen","year":"2016","unstructured":"Karen Panetta, Chen Gao, and Sos Agaian. 2016. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 41, 3 (July 2016), 541\u2013551.","journal-title":"IEEE J. Ocean. Eng."},{"key":"e_1_3_1_7_2","doi-asserted-by":"crossref","first-page":"904","DOI":"10.1016\/j.compeleceng.2017.12.006","article-title":"An imaging-inspired no-reference underwater color image quality assessment metric","volume":"70","author":"Wang Yan","year":"2017","unstructured":"Yan Wang, Na Li, Zongying Li, Zhaorui Gu, Haiyong Zheng, Bing Zheng, and Mengnan Sun. 2017. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electron. Eng. 70 (Dec. 2017), 904\u2013913.","journal-title":"Comput. Electron. 
Eng."},{"key":"e_1_3_1_8_2","doi-asserted-by":"crossref","first-page":"116218","DOI":"10.1016\/j.image.2021.116218","article-title":"A reference-free underwater image quality assessment metric in frequency domain","volume":"94","author":"Yang Ning","year":"2021","unstructured":"Ning Yang, Qihang Zhong, Kun Li, Runmin Cong, Yao Zhao, and Sam Kwong. 2021. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 94 (March 2021), 116218.","journal-title":"Signal Process. Image Commun."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2955241"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2019.2963772"},{"issue":"8","key":"e_1_3_1_11_2","first-page":"2822","article-title":"Underwater single image color restoration using haze-lines and a new quantitative dataset","volume":"43","author":"Berman Dana","year":"2021","unstructured":"Dana Berman, Deborah Levy, Shai Avidan, and Tali Treibitz. 2021. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE Trans. Pattern Anal. Mach. Intell. 43, 8 (Aug. 2021), 2822\u20132837.","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"e_1_3_1_12_2","first-page":"1","volume-title":"Proceedings of the 2016 MTS\/IEEE OCEANS Conference (OCEANS\u201916)","author":"Duarte Amanda","year":"2016","unstructured":"Amanda Duarte, Felipe Codevilla, Joel De O. Gaya, and Silvia S. C. Botelho. 2016. A dataset to evaluate underwater image restoration methods. In Proceedings of the 2016 MTS\/IEEE OCEANS Conference (OCEANS\u201916). 1\u20136."},{"key":"e_1_3_1_13_2","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/j.image.2019.05.015","article-title":"Bio-inspired optimization algorithms for real underwater image restoration","volume":"77","author":"S\u00e1nchez-Ferreira C.","year":"2019","unstructured":"C. S\u00e1nchez-Ferreira, L. S. Coelho, H. V. H. Ayala, M. C. Q. 
Farias, and C. H. Llanos. 2019. Bio-inspired optimization algorithms for real underwater image restoration. Signal Process. Image Commun. 77 (Sept. 2019), 49\u201365.","journal-title":"Signal Process. Image Commun."},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2019.107038"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3006359"},{"key":"e_1_3_1_16_2","first-page":"1","article-title":"Underwater no-reference image quality assessment for display module of ROV","volume":"2","author":"Wu Di","year":"2020","unstructured":"Di Wu, Fei Yuan, and En Cheng. 2020. Underwater no-reference image quality assessment for display module of ROV. Sci. Program. 2 (Aug. 2020), 1\u201315.","journal-title":"Sci. Program."},{"key":"e_1_3_1_17_2","first-page":"1980","article-title":"Underwater image quality assessment: Subjective and objective methods","volume":"24","author":"Guo Pengfei","year":"2021","unstructured":"Pengfei Guo, Lang He, Shuangyin Liu, Delu Zeng, and Hantao Liu. 2021. Underwater image quality assessment: Subjective and objective methods. IEEE Trans. Multimedia 24 (April 2021), 1980\u20131989.","journal-title":"IEEE Trans. Multimedia"},{"issue":"11","key":"e_1_3_1_18_2","doi-asserted-by":"crossref","first-page":"3440","DOI":"10.1109\/TIP.2006.881959","article-title":"A statistical evaluation of recent full reference image quality assessment algorithms","volume":"15","author":"Conrad Bovik Alan","year":"2006","unstructured":"Alan Conrad Bovik, Muhammad Farooq Sabir, and Hamid R. Sheikh. 2006. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 15, 11 (Nov. 2006), 3440\u20133451.","journal-title":"IEEE Trans. Image Process."},{"key":"e_1_3_1_19_2","unstructured":"Y. Horita, K. Shibata, and Y. Kawayoke. 2023. MICT Image Quality Evaluation Database. 
Retrieved January 10 2023 from https:\/\/computervisiononline.com\/dataset\/1105138668."},{"issue":"4","key":"e_1_3_1_20_2","first-page":"30","article-title":"TID2008: A database for evaluation of full-reference visual quality assessment metrics","volume":"10","author":"Ponomarenko Nikolay","year":"2009","unstructured":"Nikolay Ponomarenko, Vladimir Lukin, Alexander Zelensky, Karen Egiazarian, Jaakko Astola, Marco Carli, and Federica Battisti. 2009. TID2008: A database for evaluation of full-reference visual quality assessment metrics. Adv. Modern Radioelectron. 10, 4 (2009), 30\u201345.","journal-title":"Adv. Modern Radioelectron."},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2014.2378061"},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2016.07.033"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2631888"},{"key":"e_1_3_1_24_2","doi-asserted-by":"crossref","first-page":"1693","DOI":"10.1109\/ACSSC.2012.6489321","volume-title":"Proceedings of the 2012 Conference Record of the 46th Asilomar Conference on Signals, Systems, and Computers (ASILOMAR\u201912)","author":"Jayaraman Dinesh","year":"2012","unstructured":"Dinesh Jayaraman, Anish Mittal, Anush K. Moorthy, and Alan C. Bovik. 2012. Objective quality assessment of multiply distorted images. In Proceedings of the 2012 Conference Record of the 46th Asilomar Conference on Signals, Systems, and Computers (ASILOMAR\u201912). 1693\u20131697."},{"key":"e_1_3_1_25_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2014.10.009"},{"key":"e_1_3_1_26_2","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1007\/978-3-319-56010-6_8","volume-title":"Proceedings of the International Workshop on Computational Color Imaging (CCIW\u201917)","author":"Corchs Silvia","year":"2017","unstructured":"Silvia Corchs and Francesca Gasparini. 2017. A multidistortion database for image quality. 
In Proceedings of the International Workshop on Computational Color Imaging (CCIW\u201917). 95\u2013104. http:\/\/www.mmsp.unimib.it\/image-quality\/."},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.5555\/2319093.2321747"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2015.2500021"},{"key":"e_1_3_1_29_2","unstructured":"Hanhe Lin Vlad Hosu and Dietmar Saupe. 2018. KonIQ-10k: Towards an ecologically valid and large-scale IQA database. arXiv:1803.08489."},{"key":"e_1_3_1_30_2","doi-asserted-by":"publisher","DOI":"10.1117\/12.650780"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1117\/1.3267105"},{"issue":"2","key":"e_1_3_1_32_2","first-page":"91","article-title":"Achieving turbidity robustness on underwater images local feature detection","volume":"60","author":"Codevilla Felipe","year":"2015","unstructured":"Felipe Codevilla, Joel De O. Gaya, Nelson Duarte Filho, and Silvia S. C. Costa Botelho. 2015. Achieving turbidity robustness on underwater images local feature detection. Int. J. Comput. Vis. 60, 2 (Sept. 2015), 91\u2013110.","journal-title":"Int. J. Comput. Vis."},{"key":"e_1_3_1_33_2","first-page":"1","volume-title":"Proceedings of the 2018 8th International Conference on Image Processing Theory, Tools, and Applications (IPTA\u201918)","author":"Ma Yupeng","year":"2018","unstructured":"Yupeng Ma, Xiaoyi Feng, Lujing Chao, Dong Huang, Zhaoqiang Xia, and Xiaoyue Jiang. 2018. A new database for evaluating underwater image processing methods. In Proceedings of the 2018 8th International Conference on Image Processing Theory, Tools, and Applications (IPTA\u201918). 
1\u20136."},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2010.2043888"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2012.2214050"},{"key":"e_1_3_1_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2012.2227726"},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2012.2191563"},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2774045"},{"key":"e_1_3_1_39_2","first-page":"14131","volume-title":"Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920)","author":"Zhu Hancheng","year":"2020","unstructured":"Hancheng Zhu, Leida Li, Jinjian Wu, Weisheng Dong, and Guangming Shi. 2020. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920). 14131\u201314140."},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2020.3002478"},{"key":"e_1_3_1_41_2","doi-asserted-by":"crossref","first-page":"1733","DOI":"10.1109\/CVPR.2014.224","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201914)","author":"Kang Le","year":"2014","unstructured":"Le Kang, Peng Ye, Yi Li, and David Doermann. 2014. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR\u201914). 1733\u20131740."},{"key":"e_1_3_1_42_2","first-page":"3664","volume-title":"Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920)","author":"Su Shaolin","year":"2020","unstructured":"Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. 2020. Blindly assess image quality in the wild guided by a self-adaptive hyper network. 
In Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920). 3664\u20133673."},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2018.2886771"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2760518"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3061932"},{"key":"e_1_3_1_46_2","article-title":"Continual learning for blind image quality assessment","author":"Zhang Weixia","year":"2022","unstructured":"Weixia Zhang, Dingquan Li, Chao Ma, Guangtao Zhai, Xiaokang Yang, and Kede Ma. 2022. Continual learning for blind image quality assessment. IEEE Trans. Pattern Anal. Mach. Intell. Early access, 2022.","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"e_1_3_1_47_2","doi-asserted-by":"crossref","first-page":"1268","DOI":"10.1109\/CVPRW56347.2022.00133","volume-title":"Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW\u201922)","author":"Wang Jing","year":"2022","unstructured":"Jing Wang, Haotian Fan, Xiaoxia Hou, Yitian Xu, Tao Li, Xuechao Lu, and Lean Fu. 2022. MSTRIQ: No reference image quality assessment based on swin transformer with multi-stage fusion. In Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW\u201922). 1268\u20131277."},{"issue":"4","key":"e_1_3_1_48_2","doi-asserted-by":"crossref","first-page":"717","DOI":"10.1109\/TIP.2008.2011760","article-title":"A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB)","volume":"18","author":"Ferzli Rony","year":"2009","unstructured":"Rony Ferzli and Lina J. Karam. 2009. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 18, 4 (April 2009), 717\u2013728.","journal-title":"IEEE Trans. 
Image Process."},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2011.2131660"},{"key":"e_1_3_1_50_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2015.2392129"},{"issue":"11","key":"e_1_3_1_51_2","doi-asserted-by":"crossref","first-page":"3888","DOI":"10.1109\/TIP.2015.2456502","article-title":"Referenceless prediction of perceptual fog density and perceptual image defogging","volume":"24","author":"Kwon Choi Lark","year":"2015","unstructured":"Lark Kwon Choi, Jaehee You, and Alan Conrad Bovik. 2015. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 24, 11 (Nov. 2015), 3888\u20133901.","journal-title":"IEEE Trans. Image Process."},{"key":"e_1_3_1_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2900941"},{"issue":"110","key":"e_1_3_1_53_2","first-page":"1","article-title":"Precise no-reference image quality evaluation based on distortion identification","volume":"17","author":"Yan Chenggang","year":"2021","unstructured":"Chenggang Yan, Tong Teng, Yutao Liu, Yongbing Zhang, Haoqian Wang, and Xiangyang Ji. 2021. Precise no-reference image quality evaluation based on distortion identification. ACM Trans. Multimed. Comput. Commun. Appl. 17, 110 (Oct. 2021), 1\u201321.","journal-title":"ACM Trans. Multimed. Comput. Commun. Appl."},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3196815"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2022.3164918"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2021.116622"},{"key":"e_1_3_1_57_2","unstructured":"Chunle Guo Ruiqi Wu Xin Jin Linghao Han Zhi Chai Weidong Zhang and Chongyi Li. 2022. Underwater Ranker: Learn which is better and how to be better. 
arXiv:2208.06857."},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jvcir.2014.11.006"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.08.041"},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2612882"},{"key":"e_1_3_1_61_2","first-page":"2169","article-title":"A hybrid framework for underwater image enhancement","volume":"8","author":"Li Xinjie","year":"2020","unstructured":"Xinjie Li, Guojia Hou, Lu Tan, and Wanquan Liu. 2020. A hybrid framework for underwater image enhancement. IEEE Access 8 (Oct. 2020), 2169\u20133536.","journal-title":"IEEE Access"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3177129"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2759252"},{"key":"e_1_3_1_64_2","unstructured":"Nicholas Hope. n.d. Bubble Vision Underwater Imaging. Retrieved January 10 2023 from https:\/\/bubblevision.com."},{"key":"e_1_3_1_65_2","doi-asserted-by":"crossref","first-page":"104171","DOI":"10.1016\/j.engappai.2021.104171","article-title":"Bayesian retinex underwater image enhancement","volume":"101","author":"Zhuang Peixian","year":"2021","unstructured":"Peixian Zhuang, Chongyi Li, and Jiamin Wu. 2021. Bayesian retinex underwater image enhancement. Eng. Appl. Artif. Intell. 101 (May 2021), 104171.","journal-title":"Eng. Appl. Artif. Intell."},{"issue":"2","key":"e_1_3_1_66_2","first-page":"239","article-title":"Underwater image enhancement using an integrated colour model","volume":"34","author":"Iqbal Kashif","year":"2007","unstructured":"Kashif Iqbal, Rosalina Abdul Salam, Azam Osman, and Abdullah Zawawi Talib. 2007. Underwater image enhancement using an integrated colour model. IAENG Int. J. Comput. Sci. 34, 2 (March 2007), 239\u2013244.","journal-title":"IAENG Int. J. Comput. 
Sci."},{"key":"e_1_3_1_67_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2020.115892"},{"issue":"2","key":"e_1_3_1_68_2","doi-asserted-by":"crossref","first-page":"292","DOI":"10.1049\/iet-ipr.2017.0359","article-title":"Hue preserving-based approach for underwater colour image enhancement","volume":"12","author":"Hou Guojia","year":"2018","unstructured":"Guojia Hou, Zhenkuan Pan, Baoxiang Huang, Guodong Wang, and Xin Luan. 2018. Hue preserving-based approach for underwater colour image enhancement. IET Image Process. 12, 2 (Feb. 2018), 292\u2013298.","journal-title":"IET Image Process"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2663846"},{"key":"e_1_3_1_70_2","first-page":"538","volume-title":"Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW\u201920)","author":"Marques Tunai Porto","year":"2020","unstructured":"Tunai Porto Marques and Alexandra Branzan Albu. 2020. L2UWE: A framework for the efficient enhancement of low-light underwater images using local contrast and multi-scale fusion. In Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW\u201920). 538\u2013539."},{"key":"e_1_3_1_71_2","first-page":"789","volume-title":"Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS\u201917)","author":"Fu Xueyang","year":"2017","unstructured":"Xueyang Fu, Zhiwen Fan, Mei Ling, Yue Huang, and Xinghao Ding. 2017. Two-step approach for single underwater image enhancement. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS\u201917). 
789\u2013794."},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3076367"},{"key":"e_1_3_1_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2021.3115791"},{"key":"e_1_3_1_74_2","doi-asserted-by":"crossref","first-page":"102732","DOI":"10.1016\/j.jvcir.2019.102732","article-title":"A novel dark channel prior guided variational framework for underwater image restoration","volume":"66","author":"Hou Guojia","year":"2020","unstructured":"Guojia Hou, Jingming Li, Guodong Wang, Huan Yang, Baoxiang Huang, and Zhenkuan Pan. 2020. A novel dark channel prior guided variational framework for underwater image restoration. J. Vis. Commun. Image Represent. 66 (Jan. 2020), 102732.","journal-title":"J. Vis. Commun. Image Represent."},{"key":"e_1_3_1_75_2","first-page":"4572","volume-title":"Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP\u201914)","author":"Fu Xueyang","year":"2014","unstructured":"Xueyang Fu, Peixian Zhuang, Yue Huang, Yinghao Liao, Xiao-Ping Zhang, and Xinghao Ding. 2014. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP\u201914). 4572\u20134576."},{"key":"e_1_3_1_76_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2012.2215007"},{"issue":"116","key":"e_1_3_1_77_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3489520","article-title":"Leveraging deep statistics for underwater image enhancement","volume":"17","author":"Wang Yang","year":"2021","unstructured":"Yang Wang, Yang Cao, Jing Zhang, Feng Wu, and Zhengjun Zha. 2021. Leveraging deep statistics for underwater image enhancement. ACM Trans. Multimed. Comput. Commun. Appl. 17, 116 (Oct. 2021), 1\u201320.","journal-title":"ACM Trans. Multimed. Comput. Commun. 
Appl."},{"key":"e_1_3_1_78_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2022.104759"},{"key":"e_1_3_1_79_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3216208"},{"key":"e_1_3_1_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3196546"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2022.104785"},{"key":"e_1_3_1_82_2","first-page":"4204315","article-title":"TEBCF: Real-world underwater image texture enhancement model based on blurriness and color fusion","volume":"60","author":"Yuan Jieyu","year":"2022","unstructured":"Jieyu Yuan, Zhanchuan Cai, and Wei Cao. 2022. TEBCF: Real-world underwater image texture enhancement model based on blurriness and color fusion. IEEE Trans. Geosci. Remote Sens. 60 (Oct. 2021), 4204315.","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"e_1_3_1_83_2","article-title":"Beyond single reference for training: Underwater image enhancement via comparative learning","author":"Li Kunqian","year":"2022","unstructured":"Kunqian Li, Li Wu, Qi Qi, Wenjie Liu, Xiang Gao, Liqin Zhou, and Dalei Song. 2022. Beyond single reference for training: Underwater image enhancement via comparative learning. IEEE Trans. Circuits Syst. Video Technol. Early access, November 28, 2022.","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"e_1_3_1_84_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dsp.2022.103660"},{"key":"e_1_3_1_85_2","doi-asserted-by":"crossref","first-page":"3989","DOI":"10.1109\/WACV51458.2022.00404","volume-title":"Proceedings of the 2022 IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV\u201922)","author":"Alireza Golestaneh S.","year":"2022","unstructured":"S. Alireza Golestaneh, Saba Dadsetan, and Kris M. Kitani. 2022. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the 2022 IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV\u201922). 
3989\u20133999."}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3578584","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3578584","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T18:08:38Z","timestamp":1750183718000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3578584"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,27]]},"references-count":84,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,7,31]]}},"alternative-id":["10.1145\/3578584"],"URL":"https:\/\/doi.org\/10.1145\/3578584","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"value":"1551-6857","type":"print"},{"value":"1551-6865","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,27]]},"assertion":[{"value":"2022-04-19","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-12-23","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-02-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}