{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,18]],"date-time":"2026-01-18T12:24:34Z","timestamp":1768739074978,"version":"3.49.0"},"reference-count":35,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,5,24]],"date-time":"2021-05-24T00:00:00Z","timestamp":1621814400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"NSF of China","award":["51979210"],"award-info":[{"award-number":["51979210"]}]},{"name":"NSF of China","award":["51879210"],"award-info":[{"award-number":["51879210"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Complex marine environment has an adverse effect on the object detection algorithm based on the vision sensor for the smart ship sailing at sea. In order to eliminate the motion blur in the images during the navigation of the smart ship and ensure safety, we propose SharpGAN, a new image deblurring method based on the generative adversarial network (GAN). First of all, we introduce the receptive field block net (RFBNet) to the deblurring network to enhance the network\u2019s ability to extract blurred image features. Secondly, we propose a feature loss that combines different levels of image features to guide the network to perform higher-quality deblurring and improve the feature similarity between the restored images and the sharp images. Besides, we use the lightweight RFB-s module to significantly improve the real-time performance of the deblurring network. Compared with the existing deblurring methods, the proposed method not only has better deblurring performance in subjective visual effects and objective evaluation criteria, but also has higher deblurring efficiency. 
Finally, the experimental results reveal that SharpGAN correlates strongly with deblurring methods based on the physical model.<\/jats:p>","DOI":"10.3390\/s21113641","type":"journal-article","created":{"date-parts":[[2021,5,24]],"date-time":"2021-05-24T23:35:05Z","timestamp":1621899305000},"page":"3641","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":24,"title":["SharpGAN: Dynamic Scene Deblurring Method for Smart Ship Based on Receptive Field Block and Generative Adversarial Networks"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6696-3094","authenticated-orcid":false,"given":"Hui","family":"Feng","sequence":"first","affiliation":[{"name":"Key Laboratory of High Performance Ship Technology, Wuhan University of Technology, Ministry of Education, Wuhan 430063, China"},{"name":"School of Transportation, Wuhan University of Technology, Wuhan 430063, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8344-105X","authenticated-orcid":false,"given":"Jundong","family":"Guo","sequence":"additional","affiliation":[{"name":"Key Laboratory of High Performance Ship Technology, Wuhan University of Technology, Ministry of Education, Wuhan 430063, China"},{"name":"School of Transportation, Wuhan University of Technology, Wuhan 430063, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6598-3413","authenticated-orcid":false,"given":"Haixiang","family":"Xu","sequence":"additional","affiliation":[{"name":"Key Laboratory of High Performance Ship Technology, Wuhan University of Technology, Ministry of Education, Wuhan 430063, China"},{"name":"School of Transportation, Wuhan University of Technology, Wuhan 430063, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5549-312X","authenticated-orcid":false,"given":"Shuzhi Sam","family":"Ge","sequence":"additional","affiliation":[{"name":"Department of Electrical & Computer Engineering, National University of Singapore, 
Singapore 117576, Singapore"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1364\/JOSA.62.000055","article-title":"Bayesian-based iterative method of image restoration","volume":"62","author":"Richardson","year":"1972","journal-title":"J. Opt. Soc. Am."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"745","DOI":"10.1086\/111605","article-title":"An iterative technique for the rectification of observed distributions","volume":"79","author":"Lucy","year":"1974","journal-title":"Astron. J."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"787","DOI":"10.1145\/1141911.1141956","article-title":"Removing camera shake from a single photograph","volume":"25","author":"Fergus","year":"2006","journal-title":"ACM Trans. Graph."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"73","DOI":"10.1145\/1360612.1360672","article-title":"High-quality motion deblurring from a single image","volume":"27","author":"Shan","year":"2008","journal-title":"ACM Trans. Graph."},{"key":"ref_5","unstructured":"Krishnan, D., and Fergus, R. (2009, January 7\u201310). Fast image deconvolution using hyper-laplacian priors. Proceedings of the Annual Conference on Neural Information Processing Systems 2009, Vancouver, BC, Canada."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"168","DOI":"10.1007\/s11263-011-0502-7","article-title":"Non-uniform deblurring for shaken images","volume":"98","author":"Whyte","year":"2012","journal-title":"Int. J. Comput. Vis."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Xu, L., Zheng, S., and Jia, J. (2013, January 23\u201328). Unnatural l0 sparse representation for natural image deblurring. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA.","DOI":"10.1109\/CVPR.2013.147"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Liu, D., Chen, X., Liu, X., and Shi, C. (2019). Star image prediction and restoration under dynamic conditions. Sensors, 19.","DOI":"10.3390\/s19081890"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Yan, Y., Ren, W., Guo, Y., Wang, R., and Cao, X. (2017, January 21\u201326). Image deblurring via extreme channels prior. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.738"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Sun, J., Cao, W., Xu, Z., and Ponce, J. (2015, January 8\u201312). Learning a convolutional neural network for non-uniform motion blur removal. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298677"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Noroozi, M., Chandramouli, P., and Favaro, P. (2017, January 13\u201315). Motion deblurring in the wild. Proceedings of the German Conference on Pattern Recognition, Basel, Switzerland.","DOI":"10.1007\/978-3-319-66709-6_6"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Nah, S., Kim, T.H., and Lee, K.M. (2017, January 21\u201326). Deep multi-scale convolutional neural network for dynamic scene deblurring. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.35"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Gong, D., Yang, J., Liu, L., Zhang, Y., Reid, I., Shen, C., Van Den Hengel, A., and Shi, Q. (2017, January 21\u201326). From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.405"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., and Matas, J. (2018, January 18\u201323). DeblurGAN: Blind motion deblurring using conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00854"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, R., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_16","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv, Available online: http:\/\/arxiv.org\/abs\/1409.1556."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"297","DOI":"10.1364\/JOSA.57.000297","article-title":"Image restoration by the method of least squares","volume":"57","author":"Helstrom","year":"1967","journal-title":"JOSA"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1016\/0167-2789(92)90242-F","article-title":"Nonlinear total variation based noise removal algorithms","volume":"60","author":"Rudin","year":"1992","journal-title":"Phys. D Nonlinear Phenom."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Zoran, D., and Weiss, Y. (2011, January 21\u201325). From learning models of natural image patches to whole image restoration. Proceedings of the International Conference on Computer Vision, Springs, CO, USA.","DOI":"10.1109\/ICCV.2011.6126278"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Li, J., and Liu, Z. (2019). Ensemble Dictionary Learning for Single Image Deblurring via Low-Rank Regularization. 
Sensors, 19.","DOI":"10.3390\/s19051143"},{"key":"ref_21","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks. arXiv, Available online: https:\/\/arxiv.org\/abs\/1406.2661."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_23","unstructured":"Kupyn, O., Martyniuk, T., Wu, J., and Wang, Z. (November, January 17). Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. Proceedings of the International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_24","unstructured":"Arjovsky, M., Chintala, S., and Bottou, L. (2017, January 9\u201312). Wasserstein generative adversarial networks. Proceedings of the International Conference on Machine Learning, Ningbo, China."},{"key":"ref_25","unstructured":"Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. (2017). Improved Training of Wasserstein Gans. arXiv, Available online: http:\/\/arxiv.org\/abs\/1704.00028."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Liu, S., and Huang, D. (2018, January 8\u201314). Receptive field block net for accurate and fast object detection. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01252-6_24"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 8\u201312). Going deeper with convolutions. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2017, January 4\u20139). Inception-v4, inception-resnet and the impact of residual connections on learning. Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_30","unstructured":"Yu, F., and Koltun, V. (2015). Multi-Scale Context Aggregation by Dilated Convolutions. arXiv, Available online: https:\/\/arxiv.org\/abs\/1511.07122."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zeiler, M.D., and Fergus, R. (2014, January 6\u201312). Visualizing and understanding convolutional networks. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1993","DOI":"10.1109\/TITS.2016.2634580","article-title":"Video processing from electro-optical sensors for object detection and tracking in a maritime environment: A survey","volume":"18","author":"Prasad","year":"2017","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Li, Y., Tofighi, M., Geng, J., Monga, V., and Eldar, Y.C. (2019). Deep Algorithm Unrolling for Blind Image Deblurring. 
arXiv, Available online: http:\/\/arxiv.org\/abs\/1902.03493.","DOI":"10.1109\/ICASSP.2019.8682542"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Mustaniemi, J., Kannala, J., S\u00e4rkk\u00e4, S., Matas, J., and Heikkila, J. (2019, January 7\u201311). Gyroscope-aided motion deblurring with deep networks. Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Honolulu, HI, USA.","DOI":"10.1109\/WACV.2019.00208"},{"key":"ref_35","unstructured":"(2021, April 15). Singapore Maritime Dataset Trained Deep Learning Models. Available online: https:\/\/github.com\/tilemmpon\/Singapore-Maritime-Dataset-Trained-Deep-Learning-Models."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/11\/3641\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:06:45Z","timestamp":1760162805000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/11\/3641"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,24]]},"references-count":35,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,6]]}},"alternative-id":["s21113641"],"URL":"https:\/\/doi.org\/10.3390\/s21113641","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,24]]}}}