{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T08:32:53Z","timestamp":1773909173426,"version":"3.50.1"},"reference-count":59,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2022,10,20]],"date-time":"2022-10-20T00:00:00Z","timestamp":1666224000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62101041"],"award-info":[{"award-number":["62101041"]}]},{"name":"National Natural Science Foundation of China","award":["T2012122"],"award-info":[{"award-number":["T2012122"]}]},{"name":"National Natural Science Foundation of China","award":["Z141101001514005"],"award-info":[{"award-number":["Z141101001514005"]}]},{"name":"Chang Jiang Scholars Program","award":["62101041"],"award-info":[{"award-number":["62101041"]}]},{"name":"Chang Jiang Scholars Program","award":["T2012122"],"award-info":[{"award-number":["T2012122"]}]},{"name":"Chang Jiang Scholars Program","award":["Z141101001514005"],"award-info":[{"award-number":["Z141101001514005"]}]},{"name":"Hundred Leading Talent Project of Beijing Science and Technology","award":["62101041"],"award-info":[{"award-number":["62101041"]}]},{"name":"Hundred Leading Talent Project of Beijing Science and Technology","award":["T2012122"],"award-info":[{"award-number":["T2012122"]}]},{"name":"Hundred Leading Talent Project of Beijing Science and Technology","award":["Z141101001514005"],"award-info":[{"award-number":["Z141101001514005"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Ship detection in synthetic aperture radar (SAR) images has witnessed rapid development in recent years, especially after the adoption of convolutional neural network (CNN)-based methods. Recently, a transformer using self-attention and a feed forward neural network with a encoder-decoder structure has received much attention from researchers, due to its intrinsic characteristics of global-relation modeling between pixels and an enlarged global receptive field. However, when adapting transformers to SAR ship detection, one challenging issue cannot be ignored. Background clutter, such as a coast, an island, or a sea wave, made previous object detectors easily miss ships with a blurred contour. Therefore, in this paper, we propose a local-sparse-information-aggregation transformer with explicit contour guidance for ship detection in SAR images. Based on the Swin Transformer architecture, in order to effectively aggregate sparse meaningful cues of small-scale ships, a deformable attention mechanism is incorporated to change the original self-attention mechanism. Moreover, a novel contour-guided shape-enhancement module is proposed to explicitly enforce the contour constraints on the one-dimensional transformer architecture. 
Experimental results show that our proposed method achieves superior performance on the challenging HRSID and SSDD datasets.<\/jats:p>","DOI":"10.3390\/rs14205247","type":"journal-article","created":{"date-parts":[[2022,10,21]],"date-time":"2022-10-21T00:34:30Z","timestamp":1666312470000},"page":"5247","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":22,"title":["A Local-Sparse-Information-Aggregation Transformer with Explicit Contour Guidance for SAR Ship Detection"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2013-6592","authenticated-orcid":false,"given":"Hao","family":"Shi","sequence":"first","affiliation":[{"name":"Radar Research Lab, School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"},{"name":"Chongqing Innovation Center, Beijing Institute of Technology, Chongqing 401120, China"}]},{"given":"Bingqian","family":"Chai","sequence":"additional","affiliation":[{"name":"Radar Research Lab, School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9771-6229","authenticated-orcid":false,"given":"Yupei","family":"Wang","sequence":"additional","affiliation":[{"name":"Radar Research Lab, School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"},{"name":"Chongqing Innovation Center, Beijing Institute of Technology, Chongqing 401120, China"}]},{"given":"Liang","family":"Chen","sequence":"additional","affiliation":[{"name":"Radar Research Lab, School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China"},{"name":"Chongqing Innovation Center, Beijing Institute of Technology, Chongqing 401120, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,10,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1109\/MGRS.2013.2248301","article-title":"A tutorial on synthetic aperture radar","volume":"1","author":"Moreira","year":"2013","journal-title":"IEEE Geosci. Remote Sens. Mag."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1685","DOI":"10.1109\/TGRS.2008.2006504","article-title":"An adaptive and fast CFAR algorithm based on automatic censoring for target detection in high-resolution SAR images","volume":"47","author":"Gao","year":"2009","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"5146","DOI":"10.1109\/TGRS.2019.2897139","article-title":"ORSIm Detector: A novel object detection framework in optical remote sensing imagery using spatial-frequency channel features","volume":"57","author":"Wu","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intel."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A.C. (2016, January 11\u201314). SSD: Single shot multibox detector. 
Proceedings of the European Conference on Computer Vision\u2014ECCV 2016, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_7","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_8","unstructured":"Zhou, X., Wang, D., and Kr\u00e4henb\u00fchl, P. (2019). Objects as points. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Tian, Z., Shen, C., Chen, H., and He, T. (November, January 27). FCOS: Fully convolutional one-stage object detection. Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), IEEE, Seoul, Korea.","DOI":"10.1109\/ICCV.2019.00972"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Dollar, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature pyramid networks for object detection. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"302","DOI":"10.1109\/LGRS.2019.2919755","article-title":"Fourier-based rotation-invariant feature boosting: An efficient framework for geospatial object detection","volume":"17","author":"Wu","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"4340","DOI":"10.1109\/TGRS.2020.3016820","article-title":"More diverse means better: Multimodal deep learning meets remote-sensing imagery classification","volume":"59","author":"Hong","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"937","DOI":"10.1109\/TGRS.2017.2756851","article-title":"Multisource remote sensing data classification based on convolutional neural network","volume":"56","author":"Xu","year":"2018","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_14","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Luong, M.-T., Pham, H., and Manning, C.D. (2015). Effective approaches to attention-based neural machine translation. arXiv.","DOI":"10.18653\/v1\/D15-1166"},{"key":"ref_16","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021). An image is worth 16 \u00d7 16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. 
Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Dai, W., Mao, Y., Yuan, R., Liu, Y., Pu, X., and Li, C. (2020). A novel detector based on convolution neural networks for multiscale SAR ship detection in complex background. Sensors, 20.","DOI":"10.3390\/s20092547"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Kang, M., Ji, K., Leng, X., and Lin, Z. (2017). Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection. Remote Sens., 9.","DOI":"10.3390\/rs9080860"},{"key":"ref_20","first-page":"7381","article-title":"Regional attention-based single shot detector for SAR ship detection","volume":"2019","author":"Shiqi","year":"2019","journal-title":"J. Eng."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Li, J.W., Qu, C.W., and Shao, J.Q. (2017, January 13\u201314). Ship detection in sar images based on an improved faster R-CNN. Proceedings of the Conference on SAR in Big Data Era\u2014Models, Methods and Applications (BIGSARDATA), Beijing, China.","DOI":"10.1109\/BIGSARDATA.2017.8124934"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"662","DOI":"10.1109\/LGRS.2020.2981255","article-title":"Pyramid attention dilated network for aircraft detection in SAR images","volume":"18","author":"Zhao","year":"2021","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1331","DOI":"10.1109\/TGRS.2020.3005151","article-title":"An anchor-free method based on feature balancing and refinement network for multiscale ship detection in SAR images","volume":"59","author":"Fu","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_24","first-page":"1","article-title":"BANet: A balance attention network for anchor-free ship detection in SAR images","volume":"60","author":"Hu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_25","first-page":"1","article-title":"Multiscale and dense ship detection in SAR images based on key-point estimation and attention mechanism","volume":"60","author":"Ma","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_26","first-page":"1","article-title":"Power transformations and feature alignment guided network for SAR ship detection","volume":"19","author":"Xiao","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/LGRS.2022.3145790","article-title":"Efficient encoder-decoder network with estimated direction for SAR ship detection","volume":"19","author":"Niu","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"379","DOI":"10.1109\/TGRS.2020.2997200","article-title":"Ship detection in large-scale SAR images via spatial shuffle-group enhance attention","volume":"59","author":"Cui","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., and Shao, L. (2021, January 10\u201317). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. 
Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), IEEE, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Guo, J., Han, K., Wu, H., Tang, Y., Chen, X., Wang, Y., and Xu, C. (2022). CMT: Convolutional neural networks meet vision transformers. arXiv.","DOI":"10.1109\/CVPR52688.2022.01186"},{"key":"ref_31","unstructured":"Wang, W., Yao, L., Chen, L., Lin, B., Cai, D., He, X., and Liu, W. (2021). CrossFormer: A versatile vision transformer hinging on cross-scale attention. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Peng, Z., Huang, W., Gu, S., Xie, L., Wang, Y., Jiao, J., and Ye, Q. (2021, January 10\u201317). Conformer: Local features coupling global representations for visual recognition. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00042"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1109\/TPAMI.2018.2858826","article-title":"Focal loss for dense object detection","volume":"42","author":"Lin","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intel."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Dollar, P., and Girshick, R. (2017, January 22\u201329). Mask R-CNN. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Cai, Z., and Vasconcelos, N. (2018, January 18\u201323). Cascade R-CNN: Delving into high quality object detection. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00644"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Zhang, S., Chi, C., Yao, Y., Lei, Z., and Li, S.Z. (2020, January 13\u201319). Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00978"},{"key":"ref_37","unstructured":"Chen, Y., Zhang, Z., Cao, Y., Wang, L., Lin, S., and Hu, H. (2020). RepPoints V2: Verification meets regression for object detection. arXiv."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Sun, P., Zhang, R., Jiang, Y., Kong, T., Xu, C., Zhan, W., Tomizuka, M., Li, L., Yuan, Z., and Wang, C. (2021, January 20\u201325). Sparse R-CNN: End-to-end object detection with learnable proposals. Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01422"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., and He, K. (2018, January 18\u201323). Non-local neural networks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00813"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Cao, Y., Xu, J., Lin, S., Wei, F., and Hu, H. (2019, January 27\u201328). GCNet: Non-local networks meet squeeze-excitation networks and beyond. 
Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea.","DOI":"10.1109\/ICCVW.2019.00246"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Hu, H., Gu, J., Zhang, Z., Dai, J., and Wei, Y. (2018, January 18\u201323). Relation networks for object detection. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00378"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"392","DOI":"10.1007\/978-3-030-01258-8_24","article-title":"Learning region features for object detection","volume":"Volume 11216","author":"Ferrari","year":"2018","journal-title":"Computer Vision\u2014ECCV 2018"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020). End-to-end object detection with transformers. arXiv.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Sun, Z., Cao, S., Yang, Y., and Kitani, K. (2021, January 10\u201317). Rethinking transformer-based set prediction for object detection. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00359"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 7\u201313). Fast R-CNN. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_46","unstructured":"Yao, Z., Ai, J., Li, B., and Zhang, C. (2021). Efficient DETR: Improving end-to-end object detector with dense prior. arXiv."},{"key":"ref_47","unstructured":"Zhu, X., Su, W., Lu, L., Li, B., Wang, X., and Dai, J. (2021). Deformable DETR: Deformable transformers for end-to-end object detection. arXiv."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Gao, P., Zheng, M., Wang, X., Dai, J., and Li, H. (2021). Fast convergence of DETR with spatially modulated co-attention. arXiv.","DOI":"10.1109\/ICCV48922.2021.00360"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Meng, D., Chen, X., Fan, Z., Zeng, G., Li, H., Yuan, Y., Sun, L., and Wang, J. (2021, January 10\u201317). Conditional DETR for fast training convergence. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00363"},{"key":"ref_50","first-page":"1","article-title":"Multifeature transformation and fusion-based ship detection with small targets and complex backgrounds","volume":"19","author":"Zha","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"666","DOI":"10.1109\/JSTARS.2021.3137390","article-title":"Ships detection in SAR images based on anchor-free model with mask guidance features","volume":"15","author":"Qu","year":"2022","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_52","first-page":"1","article-title":"An anchor-free vehicle detection algorithm in aerial image based on context information and transformer","volume":"19","author":"Zhou","year":"2022","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Cheng, B., Duan, H., Hou, S., Karim, A., Jia, W., and Zheng, Y. (2021, January 17\u201319). 
An effective anchor-free model with transformer for logo detection efficient logo detection via transformer. Proceedings of the 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), Kunming, China.","DOI":"10.1109\/CISAI54367.2021.00045"},{"key":"ref_54","unstructured":"Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and J\u00e9gou, H. (2021). Training data-efficient image transformers & distillation through attention. arXiv."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Dollar, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Zhang, T., Zhang, X., and Ke, X. (2021). Quad-FPN: A novel quad feature pyramid network for SAR ship detection. Remote Sens., 13.","DOI":"10.3390\/rs13142771"},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"8983","DOI":"10.1109\/TGRS.2019.2923988","article-title":"Dense attention pyramid networks for multi-scale ship detection in SAR images","volume":"57","author":"Cui","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Shi, H., Fang, Z., Wang, Y., and Chen, L. (2022). An adaptive sample assignment strategy based on feature enhancement for ship detection in SAR images. Remote Sens., 14.","DOI":"10.3390\/rs14092238"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/20\/5247\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:58:07Z","timestamp":1760144287000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/20\/5247"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,20]]},"references-count":59,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["rs14205247"],"URL":"https:\/\/doi.org\/10.3390\/rs14205247","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,20]]}}}
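The abstract in this record describes replacing the Swin Transformer's self-attention with a deformable attention mechanism so that each query aggregates only a few sparse, informative locations around small ships. Below is a minimal illustrative sketch of that general deformable-attention idea in PyTorch. It is not the authors' implementation, and all names (DeformableAttentionSketch, n_points, etc.) are assumptions introduced here for illustration only.

```python
# Minimal sketch (assumed, not from the paper) of deformable attention over
# flattened image tokens: each query predicts a small set of sampling offsets,
# gathers features at those sparse locations via bilinear interpolation, and
# mixes them with learned attention weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableAttentionSketch(nn.Module):
    def __init__(self, dim: int, n_heads: int = 4, n_points: int = 4):
        super().__init__()
        self.n_heads = n_heads
        self.n_points = n_points
        self.head_dim = dim // n_heads
        # Per-head (x, y) sampling offsets and per-point attention weights.
        self.offset_proj = nn.Linear(dim, n_heads * n_points * 2)
        self.weight_proj = nn.Linear(dim, n_heads * n_points)
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) flattened feature map with N = h * w
        b, n, c = x.shape
        v = self.value_proj(x)
        v = v.transpose(1, 2).reshape(b * self.n_heads, self.head_dim, h, w)

        # Reference point of every query in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        ref = torch.stack([xs, ys], dim=-1).reshape(1, n, 1, 2)

        # Predicted offsets, scaled to stay local (sparse neighbourhood sampling).
        offsets = self.offset_proj(x).reshape(b, n, self.n_heads, self.n_points, 2)
        offsets = offsets.tanh() * (2.0 / max(h, w))
        loc = (ref.unsqueeze(2) + offsets).permute(0, 2, 1, 3, 4)
        loc = loc.reshape(b * self.n_heads, n, self.n_points, 2)

        # Bilinear sampling of value features at the predicted sparse locations.
        sampled = F.grid_sample(v, loc, align_corners=False)  # (B*H, hd, N, P)

        # Softmax-normalized weights over the sampled points.
        w_attn = self.weight_proj(x).reshape(b, n, self.n_heads, self.n_points)
        w_attn = w_attn.softmax(dim=-1).permute(0, 2, 1, 3)
        w_attn = w_attn.reshape(b * self.n_heads, 1, n, self.n_points)

        out = (sampled * w_attn).sum(dim=-1)                   # (B*H, hd, N)
        out = out.reshape(b, self.n_heads * self.head_dim, n).transpose(1, 2)
        return self.out_proj(out)


if __name__ == "__main__":
    feat = torch.randn(2, 32 * 32, 96)       # tokens from a hypothetical 32x32 SAR feature map
    block = DeformableAttentionSketch(dim=96)
    print(block(feat, h=32, w=32).shape)     # torch.Size([2, 1024, 96])
```

Sampling only n_points locations per query keeps the cost linear in the number of tokens and lets the attention concentrate on the few meaningful pixels of a small, low-contrast ship instead of the surrounding clutter, which is the motivation the abstract gives for swapping in deformable attention.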