{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T07:31:30Z","timestamp":1768289490429,"version":"3.49.0"},"reference-count":37,"publisher":"Cambridge University Press (CUP)","issue":"12","license":[{"start":{"date-parts":[[2023,9,8]],"date-time":"2023-09-08T00:00:00Z","timestamp":1694131200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Robotica"],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Underwater archaeology is of great significance for the transmission of history and culture and for the preservation of underwater heritage, but it is also a challenging task. Underwater heritage sites lie in environments with high sediment content: objects are mostly buried and the water is turbid, so some object features are missing or blurred, making it difficult to accurately identify and understand the semantics of the various objects in a scene. To tackle these issues, this paper proposes an underwater scene parsing method based on a global enhancement network (GENet). We introduce adaptive dilated convolution by adding an extra regression layer, which automatically infers adaptive dilation coefficients for the different scene objects. In addition, considering that blurred features are easily confused during classification, we propose an enhancement classification network that increases the differences between class probabilities by reducing the loss function. We verified the validity of the proposed model through extensive experiments on the Underwater Shipwreck Scenes (USS) dataset. The proposed method outperforms current state-of-the-art algorithms under three conditions: conventional scenes, semi-buried relics, and turbid water. The experimental results show that the proposed algorithm performs best in each of these situations. To verify its generalizability, we conducted comparative experiments on the publicly available Cityscapes and ADE20K datasets and on the underwater dataset SUIM. The results show that the proposed method also performs well on these public datasets, indicating that it generalizes.<\/jats:p>","DOI":"10.1017\/s026357472300098x","type":"journal-article","created":{"date-parts":[[2023,9,8]],"date-time":"2023-09-08T08:23:34Z","timestamp":1694161414000},"page":"3541-3564","source":"Crossref","is-referenced-by-count":3,"title":["Global enhancement network underwater archaeology scene parsing method"],"prefix":"10.1017","volume":"41","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5355-0984","authenticated-orcid":false,"given":"Junyan","family":"Pan","sequence":"first","affiliation":[]},{"given":"Jishen","family":"Jia","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4811-5854","authenticated-orcid":false,"given":"Lei","family":"Cai","sequence":"additional","affiliation":[]}],"member":"56","published-online":{"date-parts":[[2023,9,8]]},"reference":[{"key":"S026357472300098X_ref11","first-page":"9522","volume-title":"33rd IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Li","year":"2019"},
{"key":"S026357472300098X_ref29","first-page":"1","article-title":"CEGFNet: Common extraction and gate fusion network for scene parsing of remote sensing images","volume":"60","author":"Zhou","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},
{"key":"S026357472300098X_ref20","first-page":"14114","volume-title":"2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Chen","year":"2020"},
{"key":"S026357472300098X_ref32","first-page":"1","article-title":"Knowledge-guided semantic transfer network for few-shot image recognition","volume":"34","author":"Li","year":"2023","journal-title":"IEEE Trans. Neural. Netw. Learn. Syst."},
{"key":"S026357472300098X_ref2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.3043808"},
{"key":"S026357472300098X_ref28","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00068"},
{"key":"S026357472300098X_ref5","doi-asserted-by":"publisher","DOI":"10.1080\/2150704X.2021.1910362"},
{"key":"S026357472300098X_ref35","doi-asserted-by":"publisher","DOI":"10.1142\/S0218001421520200"},
{"key":"S026357472300098X_ref17","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3132068"},
{"key":"S026357472300098X_ref1","first-page":"3431","volume-title":"IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Long","year":"2015"},
{"key":"S026357472300098X_ref10","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3042254"},
{"key":"S026357472300098X_ref12","first-page":"435","volume-title":"European Conference on Computer Vision (ECCV)","author":"Li","year":"2020"},
{"key":"S026357472300098X_ref36","doi-asserted-by":"publisher","DOI":"10.1007\/s11045-019-00652-9"},
{"key":"S026357472300098X_ref23","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1155\/2021\/4193625","article-title":"Underwater distortion target recognition network (UDTRNet) via enhanced image features","volume":"2021","author":"Cai","year":"2021","journal-title":"Comput. Intell. Neurosci."},
{"key":"S026357472300098X_ref24","doi-asserted-by":"publisher","DOI":"10.1155\/2022\/5456818"},
{"key":"S026357472300098X_ref3","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2021.3086618"},
{"key":"S026357472300098X_ref4","doi-asserted-by":"publisher","DOI":"10.1007\/s11432-019-2738-y"},
{"key":"S026357472300098X_ref33","doi-asserted-by":"publisher","DOI":"10.1109\/IROS45743.2020.9340821"},
{"key":"S026357472300098X_ref8","first-page":"3991","volume-title":"IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Xiong","year":"2020"},
{"key":"S026357472300098X_ref7","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107707"},
{"key":"S026357472300098X_ref31","article-title":"Singular value fine-tuning: Few-shot segmentation requires few-parameters fine-tuning","author":"Sun","year":"2022","journal-title":"arXiv preprint"},
{"key":"S026357472300098X_ref37","article-title":"Multi-scale context aggregation by dilated convolutions","author":"Yu","year":"2015","journal-title":"arXiv preprint"},
{"key":"S026357472300098X_ref6","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2020.2987819"},
{"key":"S026357472300098X_ref16","first-page":"5280","article-title":"Object-level scene context prediction","volume":"44","author":"Qiao","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"S026357472300098X_ref15","doi-asserted-by":"publisher","DOI":"10.1017\/S0263574722001059"},
{"key":"S026357472300098X_ref21","article-title":"Adaptive dilated convolution for human pose estimation","author":"Luo","year":"2021","journal-title":"arXiv preprint"},
{"key":"S026357472300098X_ref14","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2020.3045627"},
{"key":"S026357472300098X_ref27","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2021.3131645"},
{"key":"S026357472300098X_ref30","unstructured":"[30] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M. and Luo, P., \u201cSegFormer: Simple and efficient design for semantic segmentation with transformers,\u201d arXiv preprint arXiv:2105.15203 (2021)."},
{"key":"S026357472300098X_ref22","doi-asserted-by":"crossref","first-page":"128","DOI":"10.1109\/MSP.2020.3016143","article-title":"Graphs, convolutions, and neural networks: From graph filters to graph neural networks","volume":"37","author":"Niu","year":"2020","journal-title":"IEEE Signal Process. Mag."},
{"key":"S026357472300098X_ref25","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2022.3190209"},
{"key":"S026357472300098X_ref34","first-page":"8961456","article-title":"Image semantic segmentation method based on deep fusion network and conditional random field","volume":"2022","author":"Wang","year":"2022","journal-title":"Comput. Intell. Neurosci."},
{"key":"S026357472300098X_ref18","article-title":"SSA: Semantic structure aware inference for weakly pixel-wise dense predictions without cost","author":"Sun","year":"2021","journal-title":"arXiv preprint"},
{"key":"S026357472300098X_ref13","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2020.3033165"},
{"key":"S026357472300098X_ref19","article-title":"Dilated convolution with learnable spacings","author":"Khalfaoui-Hassani","year":"2021","journal-title":"arXiv preprint"},
{"key":"S026357472300098X_ref26","doi-asserted-by":"publisher","DOI":"10.1002\/col.22728"},
{"key":"S026357472300098X_ref9","first-page":"52","volume-title":"33rd IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Nie","year":"2020"}],"container-title":["Robotica"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S026357472300098X","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,27]],"date-time":"2024-10-27T19:18:50Z","timestamp":1730056730000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S026357472300098X\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,8]]},"references-count":37,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["S026357472300098X"],"URL":"https:\/\/doi.org\/10.1017\/s026357472300098x","relation":{},"ISSN":["0263-5747","1469-8668"],"issn-type":[{"value":"0263-5747","type":"print"},{"value":"1469-8668","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,8]]}}}