{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T02:21:36Z","timestamp":1760235696407,"version":"build-2065373602"},"reference-count":66,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2021,9,22]],"date-time":"2021-09-22T00:00:00Z","timestamp":1632268800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["17D110408"],"award-info":[{"award-number":["17D110408"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62001099","11972115"],"award-info":[{"award-number":["62001099","11972115"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Image inpainting aims to fill in corrupted regions with visually realistic and semantically plausible contents. In this paper, we propose a progressive image inpainting method based on a forked-then-fused decoder network. A unit called PC-RN, which combines partial convolution and region normalization, serves as the basic component for constructing the inpainting network. The PC-RN unit can extract useful features from the valid surroundings while suppressing incompleteness-caused interference. The forked-then-fused decoder network consists of a local reception branch, a long-range attention branch, and a squeeze-and-excitation-based fusing module. 
Two multi-scale contextual attention modules are deployed in the long-range attention branch for adaptively borrowing features from distant spatial positions. The progressive inpainting strategy allows the attention modules to use the previously filled region, reducing the risk of allocating wrong attention. We conduct extensive experiments on three benchmark databases: Places2, Paris StreetView, and CelebA. Qualitative and quantitative results show that the proposed inpainting model is superior to state-of-the-art works. Moreover, we perform ablation studies to reveal the functionality of each module for the image inpainting task.<\/jats:p>","DOI":"10.3390\/s21196336","type":"journal-article","created":{"date-parts":[[2021,9,22]],"date-time":"2021-09-22T22:50:48Z","timestamp":1632351048000},"page":"6336","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Progressively Inpainting Images Based on a Forked-Then-Fused Decoder Network"],"prefix":"10.3390","volume":"21","author":[{"given":"Shuai","family":"Yang","sequence":"first","affiliation":[{"name":"College of Information Science and Technology, Donghua University, Shanghai 201620, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3248-4620","authenticated-orcid":false,"given":"Rong","family":"Huang","sequence":"additional","affiliation":[{"name":"College of Information Science and Technology, Donghua University, Shanghai 201620, China"},{"name":"Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China"},{"name":"Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2021,9,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"2007","DOI":"10.1007\/s11063-019-10163-0","article-title":"Image inpainting: A review","volume":"51","author":"Elharrouss","year":"2020","journal-title":"Neural Process. Lett."},{"key":"ref_2","first-page":"7717","article-title":"Adversarial scene editing: Automatic object removal from weak supervision","volume":"31","author":"Shetty","year":"2018","journal-title":"Proc. Adv. Neural Inf. Process. Syst. (NIPS)"},{"key":"ref_3","first-page":"2506","article-title":"Geometry-aware face completion and editing","volume":"33","author":"Song","year":"2019","journal-title":"Proc. Assoc. Adv. Artif. Intell. (AAAI)"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"13457","DOI":"10.1109\/ACCESS.2019.2963675","article-title":"Deep representation calibrated bayesian neural network for semantically explainable face inpainting and editing","volume":"8","author":"Xiong","year":"2020","journal-title":"IEEE Access"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"605","DOI":"10.1016\/j.sigpro.2012.07.022","article-title":"Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece","volume":"93","author":"Cornelis","year":"2013","journal-title":"Signal Process."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"416","DOI":"10.1109\/TIP.2003.821347","article-title":"Virtual restoration of ancient Chinese paintings using color contrast enhancement and lacuna texture synthesis","volume":"13","author":"Pei","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_7","first-page":"1","article-title":"Damaged region filling and evaluation by symmetrical exemplar-based image inpainting for Thangka","volume":"38","author":"Wang","year":"2017","journal-title":"EURASIP J. Image Vid. 
Process."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Jo, I.S., Choi, D.B., and Park, Y.B. (2021). Chinese character image completion using a generative latent variable model. Appl. Sci., 11.","DOI":"10.3390\/app11020624"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ehsani, K., Mottaghi, R., and Farhadi, A. (2018, January 18\u201322). SeGAN: Segmenting and generating the invisible. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00643"},{"key":"ref_10","unstructured":"Yan, X.S., Wang, F.G.G., Liu, W.X., Yu, Y.L., He, S.F., and Pan, J. (November, January 27). Visualizing the invisible: Occluded vehicle segmentation and recovery. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Upenik, E., Akyazi, P., Tuzmen, M., and Ebrahimi, T. (2019, January 12\u201317). Inpainting in omnidirectional images for privacy protection. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8683346"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Sun, Q.R., Ma, L.Q., Oh, S.J., Gool, L.V., Schiele, B., and Fritz, M. (2018, January 18\u201322). Natural and effective obfuscation by head inpainting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00530"},{"key":"ref_13","unstructured":"Gong, M.G., Liu, J.L., Li, H., Xie, Y., and Tang, Z.D. (2020). Disentangled representation learning for multiple attributes preserving face deidentification. IEEE Transactions on Neural Networks and Learning Systems, IEEE."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ching, J.H., See, J., and Wong, L.K. (2020, January 25\u201328). 
Learning image aesthetics by learning inpainting. Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Virtual, Abu Dhabi, UAE.","DOI":"10.1109\/ICIP40778.2020.9191130"},{"key":"ref_15","unstructured":"Han, X.T., Wu, Z.X., Huang, W.L., Scott, M.R., and Davis, L.S. (November, January 27). FiNet: Compatible and diverse fashion image inpainting. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"134125","DOI":"10.1109\/ACCESS.2019.2941378","article-title":"Inpainting-based virtual try-on network for selective garment transfer","volume":"7","author":"Yu","year":"2019","journal-title":"IEEE Access"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"260","DOI":"10.1049\/iet-ipr.2012.0592","article-title":"Two anisotropic forth-order partial differential equations for image inpainting","volume":"7","author":"Li","year":"2013","journal-title":"IET Image Process."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"870","DOI":"10.1049\/iet-ipr.2016.0898","article-title":"Novel image inpainting algorithm based on adaptive fourth-order partial differential equation","volume":"11","author":"Li","year":"2017","journal-title":"IET Image Process."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1007\/s40314-019-0768-x","article-title":"A linear fourth-order PDE-based gray-scale image inpainting model","volume":"38","author":"Kumar","year":"2019","journal-title":"Comput. Appl. Math."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"2701","DOI":"10.1016\/j.camwa.2019.12.002","article-title":"An anisotropic PDE model for image inpainting","volume":"79","author":"Halim","year":"2020","journal-title":"Comput. Math. 
Appl."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1200","DOI":"10.1109\/TIP.2004.833105","article-title":"Region filling and object removal by exemplar-based image inpainting","volume":"13","author":"Criminisi","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2423","DOI":"10.1109\/TPAMI.2014.2330611","article-title":"Image completion approaches using the statistics of similar patches","volume":"36","author":"He","year":"2014","journal-title":"IEEE Pattern Anal. Mach. Intell."},{"key":"ref_23","first-page":"1809","article-title":"Exemplar-based inpainting: Technical review and new heuristics for better geometric reconstructions","volume":"24","author":"Buyssens","year":"2015","journal-title":"IEEE Trans. Image Process."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"782","DOI":"10.1109\/TIP.2016.2623481","article-title":"Sparsity-based image error concealment via adaptive dual dictionary learning and regularization","volume":"26","author":"Liu","year":"2017","journal-title":"IEEE Trans. Image Process."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2023","DOI":"10.1109\/TVCG.2017.2702738","article-title":"Patch-based image inpainting via two-stage low rank approximation","volume":"24","author":"Guo","year":"2018","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1705","DOI":"10.1109\/TIP.2018.2880681","article-title":"Image inpainting using nonlocal texture matching and nonlinear filtering","volume":"28","author":"Ding","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Pathak, D., Kr\u00e4henb\u00fchl, P., Donahue, J., Darrell, T., and Efros, A.A. (2016, January 27\u201330). Context encoders: Feature learning by inpainting. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.278"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"107","DOI":"10.1145\/3072959.3073659","article-title":"Globally and locally consistent image completion","volume":"36","author":"Iizuka","year":"2017","journal-title":"ACM Trans. Graph."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H. (2017, January 21\u201326). High-resolution image inpainting using multi-scale neural patch synthesis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.434"},{"key":"ref_30","first-page":"89","article-title":"Image inpainting for irregular holes using partial convolutions","volume":"11215","author":"Liu","year":"2018","journal-title":"Proc. Eur. Conf. Comput. Vis. (ECCV)"},{"key":"ref_31","first-page":"725","article-title":"Rethinking image inpainting via a mutual encoder-decoder with feature equalizations","volume":"12347","author":"Liu","year":"2020","journal-title":"Proc. Eur. Conf. Comput. Vis. (ECCV)"},{"key":"ref_32","unstructured":"Yu, J.H., Lin, Z., Yang, J.M., Shen, X.H., Lu, X., and Huang, T. (November, January 27). Free-form image inpainting with gated convolution. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Ma, Y.Q., Liu, X.L., Bai, S.H., Wang, L., He, D.L., and Liu, A.S. (2019, January 10\u201316). Coarse-to-fine image inpainting via region-wise convolutions and non-local correlation. Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China.","DOI":"10.24963\/ijcai.2019\/433"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Song, Y.H., Yang, C., Lin, Z., Liu, X.F., Huang, Q., Li, H., and Kuo, C.C.J. 
(2018, January 8\u201314). Contextual-based image inpainting: Infer, match, and translate. Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01216-8_1"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Yan, Z.Y., Li, X.M., Li, M., Zuo, W.M., and Shan, S.G. (2018, January 8\u201314). Shift-net: Image inpainting via deep feature rearrangement. Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01264-9_1"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Yu, J.H., Lin, Z., Yang, J.M., Shen, X.H., Lu, X., and Huang, T.S. (2018, January 18\u201322). Generative image inpainting with contextual attention. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00577"},{"key":"ref_37","first-page":"12605","article-title":"Learning to incorporate structure knowledge for image inpainting","volume":"34","author":"Yang","year":"2020","journal-title":"Proc. Assoc. Adv. Artif. Intell. (AAAI)"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Sagong, M.C., Shin, Y.G., Kim, S.W., Park, S., and Ko, S.J. (2019, January 16\u201320). PEPSI: Fast image inpainting with parallel decoding network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01162"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"252","DOI":"10.1109\/TNNLS.2020.2978501","article-title":"PEPSI++: Fast and lightweight network for image","volume":"32","author":"Shin","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Uddin, S.M.N., and Jung, Y.J. (2020). Global and local attention-based free-form image inpainting. 
Sensors, 20.","DOI":"10.3390\/s20113204"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Wang, N., Li, J.Y., Zhang, L.F., and Du, B. (2019, January 10\u201316). MUSICAL: Multi-scale image contextual attention learning for inpainting. Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China.","DOI":"10.24963\/ijcai.2019\/520"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"107448","DOI":"10.1016\/j.patcog.2020.107448","article-title":"Multistage attention network for image inpainting","volume":"106","author":"Wang","year":"2020","journal-title":"Pattern Recognit."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Zeng, Y.H., Fu, J.L., Chao, H.Y., and Guo, B.N. (2019, January 16\u201320). Learning pyramid-context encoder network for high-quality image inpainting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00158"},{"key":"ref_44","unstructured":"Liu, H.Y., Jiang, B., Xiao, Y., and Yang, C. (November, January 27). Coherent semantic attention for image inpainting. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Li, J.Y., Wang, N., Zhang, L.F., Du, B., and Tao, D.C. (2021, August 22). Recurrent Feature Reasoning for Image Inpainting. Available online: https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Li_Recurrent_Feature_Reasoning_for_Image_Inpainting_CVPR_2020_paper.pdf.","DOI":"10.1109\/CVPR42600.2020.00778"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Xiong, W., Yu, J.H., Lin, Z., Yang, J.M., Lu, X., Barnes, C., and Luo, J.B. (2019, January 16\u201320). Foreground-aware image inpainting. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00599"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Zhang, H.R., Hu, Z.Z., Luo, C.Z., Zuo, W.M., and Wang, M. (2018, January 22\u201326). Semantic image inpainting with progressive generative networks. Proceedings of the 26th ACM international conference on Multimedia, Seoul, Korea.","DOI":"10.1145\/3240508.3240625"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Guo, Z.Y., Chen, Z.B., Yu, T., Chen, J.L., and Liu, S. (2019, January 21\u201325). Progressive image inpainting with full-resolution residual network. Proceedings of the 26th ACM international conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3351022"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1355","DOI":"10.1007\/s11063-018-9877-6","article-title":"An improved method for semantic image inpainting with GANs: Progressive inpainting","volume":"49","author":"Chen","year":"2019","journal-title":"Neural Process. Lett."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zeng, Y., Lin, Z., Yang, J.M., Zhang, J.M., Shechtman, E., and Lu, H.C. (2020). High-resolution image inpainting with iterative confidence feedback and guided upsampling. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-030-58529-7_1"},{"key":"ref_51","first-page":"2672","article-title":"Generative adversarial nets","volume":"2","author":"Goodfellow","year":"2014","journal-title":"Proc. Adv. Neural Inf. Process. Syst. (NIPS)"},{"key":"ref_52","unstructured":"Kingma, D.P., and Welling, M. (2014). Auto-encoding variational bayes. 
arXiv."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long short-term memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Comput."},{"key":"ref_54","first-page":"234","article-title":"U-Net: Convolutional networks for biomedical image segmentation","volume":"9351","author":"Ronneberger","year":"2015","journal-title":"Proc. Med. Image Comput. Comput. Assist Interv. (MICCAI)"},{"key":"ref_55","first-page":"12733","article-title":"Region normalization for image inpainting","volume":"34","author":"Yu","year":"2020","journal-title":"Proc. Assoc. Adv. Artif. Intell. (AAAI)"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"2011","DOI":"10.1109\/TPAMI.2019.2913372","article-title":"Squeeze-and-excitation networks","volume":"42","author":"Hu","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"1452","DOI":"10.1109\/TPAMI.2017.2723009","article-title":"Places: A 10 million image database for scene recognition","volume":"40","author":"Zhou","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"101","DOI":"10.1145\/2185520.2185597","article-title":"What makes Paris look like Paris","volume":"31","author":"Doersch","year":"2012","journal-title":"ACM Trans. Graph."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Liu, Z.W., Luo, P., Wang, X.G., and Tang, X.O. (2015, January 7\u201313). Deep learning face attributes in the wild. Proceedings of 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.425"},{"key":"ref_60","unstructured":"Simonyan, K., and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. 
arXiv."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T.H., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs","volume":"40","author":"Chen","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","article-title":"ImageNet large scale visual recognition challenge","volume":"115","author":"Russakovsky","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_64","unstructured":"Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. arXiv."},{"key":"ref_65","unstructured":"Kingma, D.P., and Ba, J.L. (2015). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_66","first-page":"6629","article-title":"GANs trained by a two time-scale update rule converge to a local Nash equilibrium","volume":"30","author":"Hensel","year":"2017","journal-title":"Proc. Adv. Neural Inf. Process. Syst. 
(NIPS)"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/19\/6336\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:03:28Z","timestamp":1760166208000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/19\/6336"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,22]]},"references-count":66,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2021,10]]}},"alternative-id":["s21196336"],"URL":"https:\/\/doi.org\/10.3390\/s21196336","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2021,9,22]]}}}