{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,11]],"date-time":"2026-05-11T10:11:06Z","timestamp":1778494266778,"version":"3.51.4"},"reference-count":63,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2023,10,20]],"date-time":"2023-10-20T00:00:00Z","timestamp":1697760000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Medical image segmentation is crucial for medical image processing and the development of computer-aided diagnostics. In recent years, deep Convolutional Neural Networks (CNNs) have been widely adopted for medical image segmentation and have achieved significant success. UNet, which is based on CNNs, is the mainstream method used for medical image segmentation. However, its performance suffers owing to its inability to capture long-range dependencies. Transformers were initially designed for Natural Language Processing (NLP), and sequence-to-sequence applications have demonstrated the ability to capture long-range dependencies. However, their abilities to acquire local information are limited. Hybrid architectures of CNNs and Transformer, such as TransUNet, have been proposed to benefit from Transformer\u2019s long-range dependencies and CNNs\u2019 low-level details. Nevertheless, automatic medical image segmentation remains a challenging task due to factors such as blurred boundaries, the low-contrast tissue environment, and in the context of ultrasound, issues like speckle noise and attenuation. In this paper, we propose a new model that combines the strengths of both CNNs and Transformer, with network architectural improvements designed to enrich the feature representation captured by the skip connections and the decoder. 
To this end, we devised a new attention module called Three-Level Attention (TLA). This module is composed of an Attention Gate (AG), channel attention, and spatial normalization mechanism. The AG preserves structural information, whereas channel attention helps to model the interdependencies between channels. Spatial normalization employs the spatial coefficient of the Transformer to improve spatial attention akin to TransNorm. To further improve the skip connection and reduce the semantic gap, skip connections between the encoder and decoder were redesigned in a manner similar to that of the UNet++ dense connection. Moreover, deep supervision using a side-output channel was introduced, analogous to BASNet, which was originally used for saliency predictions. Two datasets from different modalities, a CT scan dataset and an ultrasound dataset, were used to evaluate the proposed UNet architecture. The experimental results showed that our model consistently improved the prediction performance of the UNet across different datasets.<\/jats:p>","DOI":"10.3390\/s23208589","type":"journal-article","created":{"date-parts":[[2023,10,20]],"date-time":"2023-10-20T07:25:22Z","timestamp":1697786722000},"page":"8589","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":95,"title":["Improved UNet with Attention for Medical Image Segmentation"],"prefix":"10.3390","volume":"23","author":[{"given":"Ahmed","family":"AL Qurri","sequence":"first","affiliation":[{"name":"School of Electrical Engineering and Computer Science, Pennsylvania State University, University Park, PA 16802, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9222-3003","authenticated-orcid":false,"given":"Mohamed","family":"Almekkawy","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering and Computer Science, Pennsylvania State University, University Park, PA 16802, 
USA"}]}],"member":"1968","published-online":{"date-parts":[[2023,10,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Gao, Q., and Almekkawy, M. (2021). ASUNet++: A nested UNet with adaptive feature extractions for liver tumor segmentation. Comput. Biol. Med., 136.","DOI":"10.1016\/j.compbiomed.2021.104688"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"545","DOI":"10.1109\/TRPMS.2023.3265863","article-title":"Current and emerging trends in medical image segmentation with deep learning","volume":"7","author":"Conze","year":"2023","journal-title":"IEEE Trans. Radiat. Plasma Med. Sci."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"543","DOI":"10.1016\/j.media.2009.05.004","article-title":"Statistical shape models for 3D medical image segmentation: A review","volume":"13","author":"Heimann","year":"2009","journal-title":"Med. Image Anal."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Kakumani, A.K., Sree, L.P., Kumar, B.V., Rao, S.K., Garrepally, M., and Chandrakanth, M. (2022, January 7\u20139). Segmentation of Cell Nuclei in Microscopy Images using Modified ResUNet. Proceedings of the 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), Bangalore, India.","DOI":"10.1109\/GCAT55367.2022.9971978"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"107","DOI":"10.1016\/j.neucom.2015.12.073","article-title":"Active contour model based on local and global intensity information for medical image segmentation","volume":"186","author":"Zhou","year":"2016","journal-title":"Neurocomputing"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"A115","DOI":"10.1121\/10.0004693","article-title":"Ultrasound liver tumor segmentation with nested UNet and dynamic feature extraction","volume":"149","author":"Gao","year":"2021","journal-title":"J. Acoust. Soc. Am."},{"key":"ref_7","unstructured":"Pereira, F., Burges, C., Bottou, L., and Weinberger, K. (2012). 
Proceedings of the Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1607","DOI":"10.1007\/s11760-021-02115-w","article-title":"Multiscale transUNet++: Dense hybrid UNet with Transformer for medical image segmentation","volume":"16","author":"Wang","year":"2022","journal-title":"Signal Image Video Process."},{"key":"ref_9","unstructured":"Chen, B., Liu, Y., Zhang, Z., Lu, G., and Kong, A.W.K. (2021). TransattUNet: Multi-level attention-guided UNet with Transformer for medical image segmentation. arXiv."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). UNet: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Huang, H., Lin, L., Tong, R., Hu, H., Zhang, Q., Iwamoto, Y., Han, X., Chen, Y.W., and Wu, J. (2020, January 4\u20138). UNet 3+: A full-scale connected UNet for medical image segmentation. 
Proceedings of the ICASSP 2020\u20142020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs","volume":"40","author":"Chen","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Jumutc, V., B\u013ciz\u0146uks, D., and Lihachev, A. (2022). Multi-Path UNet architecture for cell and colony-forming unit image segmentation. Sensors, 22.","DOI":"10.3390\/s22030990"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Mohammad, U.F., and Almekkawy, M. (2021, January 11\u201316). Automated detection of liver steatosis in ultrasound images using convolutional neural networks. Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS), Xi\u2019an, China.","DOI":"10.1109\/IUS52206.2021.9593420"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1441","DOI":"10.3390\/s21041441","article-title":"A-DenseUNet: Adaptive densely connected UNet for polyp segmentation in colonoscopy images with atrous convolution","volume":"21","author":"Safarov","year":"2021","journal-title":"Sensors"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Tao, S., Jiang, Y., Cao, S., Wu, C., and Ma, Z. (2021). Attention-guided network with densely connected convolution for skin lesion segmentation. Sensors, 21.","DOI":"10.3390\/s21103462"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Liu, H., Li, Z., Lin, S., and Cheng, L. (2023). A Residual UNet Denoising Network Based on Multi-Scale Feature Extraction and Attention-Guided Filter. 
Sensors, 23.","DOI":"10.3390\/s23167044"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Mohammad, U.F., and Almekkawy, M. (2021, January 11\u201316). A substitution of convolutional layers by fft layers-a low computational cost version. Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS), Xi\u2019an, China.","DOI":"10.1109\/IUS52206.2021.9593687"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Jiang, Y., Yao, H., Tao, S., and Liang, J. (2021). Gated skip-connection network with adaptive upsampling for retinal vessel segmentation. Sensors, 21.","DOI":"10.3390\/s21186177"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Li, S., Sultonov, F., Ye, Q., Bai, Y., Park, J.H., Yang, C., Song, M., Koo, S., and Kang, J.M. (2022). TA-UNet: Integrating triplet attention module for drivable road region segmentation. Sensors, 22.","DOI":"10.3390\/s22124438"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Chen, S., Qiu, C., Yang, W., and Zhang, Z. (2022). Multiresolution aggregation Transformer UNet based on multiscale input and coordinate attention for medical image segmentation. Sensors, 22.","DOI":"10.3390\/s22103820"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Thirusangu, N., and Almekkawy, M. (2021, January 4\u20135). Segmentation of Breast Ultrasound Images using Densely Connected Deep Convolutional Neural Network and Attention Gates. Proceedings of the 2021 IEEE UFFC Latin America Ultrasonics Symposium (LAUS), Gainesville, FL, USA.","DOI":"10.1109\/LAUS53676.2021.9639178"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"2636","DOI":"10.1121\/1.5147329","article-title":"Segmentation of induced substantia nigra from transcranial ultrasound images using deep convolutional neural network","volume":"148","author":"Thirusangu","year":"2020","journal-title":"J. Acoust. Soc. 
Am."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2198","DOI":"10.1109\/TMI.2019.2900516","article-title":"Deep learning for segmentation using an open large-scale dataset in 2D echocardiography","volume":"38","author":"Leclerc","year":"2019","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Arsenescu, T., Chifor, R., Marita, T., Santoma, A., Lebovici, A., Duma, D., Vacaras, V., and Badea, A.F. (2023). 3D Ultrasound Reconstructions of the Carotid Artery and Thyroid Gland Using Artificial-Intelligence-Based Automatic Segmentation\u2014Qualitative and Quantitative Evaluation of the Segmentation Results via Comparison with CT Angiography. Sensors, 23.","DOI":"10.3390\/s23052806"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Katakis, S., Barotsis, N., Kakotaritis, A., Economou, G., Panagiotopoulos, E., and Panayiotakis, G. (2022). Automatic Extraction of Muscle Parameters with Attention UNet in Ultrasonography. Sensors, 22.","DOI":"10.3390\/s22145230"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"109512","DOI":"10.1016\/j.knosys.2022.109512","article-title":"ConvUNeXt: An efficient convolution neural network for medical image segmentation","volume":"253","author":"Han","year":"2022","journal-title":"Knowl.-Based Syst."},{"key":"ref_29","unstructured":"Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., and Liang, J. (2018). Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer."},{"key":"ref_30","unstructured":"Zeng, Z., Hu, Q., Xie, Z., Zhou, J., and Xu, Y. (2023). Small but Mighty: Enhancing 3D Point Clouds Semantic Segmentation with U-Next Framework. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Qin, X., Zhang, Z., Huang, C., Gao, C., Dehghan, M., and Jagersand, M. (2019, January 15\u201320). Basnet: Boundary-aware salient object detection. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00766"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1328","DOI":"10.1109\/TPAMI.2022.3145427","article-title":"Vision permutator: A permutable mlp-like architecture for visual recognition","volume":"45","author":"Hou","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"59037","DOI":"10.1109\/ACCESS.2019.2914873","article-title":"Attention dense-UNet for automatic breast mass segmentation in digital mammogram","volume":"7","author":"Li","year":"2019","journal-title":"IEEE Access"},{"key":"ref_35","unstructured":"Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., and Kainz, B. (2018). Attention UNet: Learning where to look for the pancreas. arXiv."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1110","DOI":"10.3389\/fgene.2019.01110","article-title":"Channel-UNet: A spatial channelwise convolutional neural network for liver and tumors segmentation","volume":"10","author":"Chen","year":"2019","journal-title":"Front. Genet."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). Cbam: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Zhao, P., Zhang, J., Fang, W., and Deng, S. (2020). 
SCAUNet: Spatial-channel attention UNet for gland segmentation. Front. Bioeng. Biotechnol., 8.","DOI":"10.3389\/fbioe.2020.00670"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"661","DOI":"10.1007\/s11517-022-02723-9","article-title":"Dual encoder network with Transformer-CNN for multi-organ segmentation","volume":"61","author":"Hong","year":"2023","journal-title":"Med Biol. Eng. Comput."},{"key":"ref_40","first-page":"1","article-title":"Attention is all you need","volume":"30","author":"Vaswani","year":"2017","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_41","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16\u00d716 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"108205","DOI":"10.1109\/ACCESS.2022.3211501","article-title":"Transnorm: Transformer provides a strong spatial normalization mechanism for a deep segmentation model","volume":"10","author":"Azad","year":"2022","journal-title":"IEEE Access"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"102327","DOI":"10.1016\/j.media.2021.102327","article-title":"FAT-Net: Feature adaptive Transformers for automated skin lesion segmentation","volume":"76","author":"Wu","year":"2022","journal-title":"Med. Image Anal."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"109552","DOI":"10.1016\/j.knosys.2022.109552","article-title":"Vision Transformers for dense prediction: A survey","volume":"253","author":"Zuo","year":"2022","journal-title":"Knowl.-Based Syst."},{"key":"ref_45","unstructured":"Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A.L., and Zhou, Y. (2021). TransUNet: Transformers make strong encoders for medical image segmentation. 
arXiv."},{"key":"ref_46","unstructured":"Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., and Wang, M. (2021). Swin-UNet: UNet-like pure Transformer for medical image segmentation. arXiv."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"8320","DOI":"10.3934\/mbe.2023364","article-title":"CoT-UNet++: A medical image segmentation method based on contextual Transformer and dense connection","volume":"20","author":"Yin","year":"2023","journal-title":"Math. Biosci. Eng."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Balachandran, S., Qin, X., Jiang, C., Blouri, E.S., Forouzandeh, A., Dehghan, M., Zonoobi, D., Kapur, J., Jaremko, J., and Punithakumar, K. (2023). ACU2E-Net: A novel predict\u2013refine attention network for segmentation of soft-tissue structures in ultrasound images. Comput. Biol. Med., 157.","DOI":"10.1016\/j.compbiomed.2023.106792"},{"key":"ref_49","unstructured":"Zhang, S., Fu, H., Yan, Y., Zhang, Y., Wu, Q., Yang, M., Tan, M., and Xu, Y. (2019, January 13\u201317). Attention guided network for retinal image segmentation. Proceedings of the Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2019: 22nd International Conference, Shenzhen, China. Proceedings, Part I 22."},{"key":"ref_50","unstructured":"Xie, Y., Yang, B., Guan, Q., Zhang, J., Wu, Q., and Xia, Y. (2023). Attention Mechanisms in Medical Image Segmentation: A Survey. arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"17723","DOI":"10.1007\/s00521-022-07419-7","article-title":"R2U++: A multiscale recurrent residual UNet with dense skip connections for medical image segmentation","volume":"34","author":"Mubashar","year":"2022","journal-title":"Neural Comput. Appl."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., Albanie, S., Sun, G., and Wu, E. (2019). Squeeze-and-Excitation Networks. 
arXiv.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"109203","DOI":"10.1016\/j.sigpro.2023.109203","article-title":"An end-to-end multiple side-outputs fusion deep supervision network based remote sensing image change detection algorithm","volume":"213","author":"Wu","year":"2023","journal-title":"Signal Process."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Fu, S., Lu, Y., Wang, Y., Zhou, Y., Shen, W., Fishman, E., and Yuille, A. (2020, January 4\u20138). Domain adaptive relational reasoning for 3D multi-organ segmentation. Proceedings of the Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2020: 23rd International Conference, Lima, Peru. Proceedings, Part I 23.","DOI":"10.1007\/978-3-030-59710-8_64"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., and Wang, M. (2022, January 23\u201327). Swin-UNet: UNet-like pure Transformer for medical image segmentation. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-25066-8_9"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"102035","DOI":"10.1016\/j.media.2021.102035","article-title":"Loss odyssey in medical image segmentation","volume":"71","author":"Ma","year":"2021","journal-title":"Med Image Anal."},{"key":"ref_57","unstructured":"Wang, H., Cao, P., Wang, J., and Zaiane, O.R. (March, January 22). Uctransnet: Rethinking the skip connections in UNet from a channelwise perspective with Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, Virtually."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Wang, H., Xie, S., Lin, L., Iwamoto, Y., Han, X.H., Chen, Y.W., and Tong, R. (2022, January 23\u201327). Mixed Transformer UNet for medical image segmentation. 
Proceedings of the ICASSP 2022\u20142022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore.","DOI":"10.1109\/ICASSP43922.2022.9746172"},{"key":"ref_59","unstructured":"Lei, T., Sun, R., Wan, Y., Xia, Y., Du, X., and Nandi, A.K. (2023). TEC-Net: Vision Transformer Embrace Convolutional Neural Networks for Medical Image Segmentation. arXiv."},{"key":"ref_60","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional Transformers for language understanding. arXiv."},{"key":"ref_61","unstructured":"Loshchilov, I., and Hutter, F. (2017). Decoupled weight decay regularization. arXiv."},{"key":"ref_62","unstructured":"Roux, N., Schmidt, M., and Bach, F. (2012). A stochastic gradient method with an exponential convergence rate for finite training sets. Adv. Neural Inf. Process. Syst., 25."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"3668","DOI":"10.1109\/TCYB.2019.2950779","article-title":"A survey of optimization methods from a machine learning perspective","volume":"50","author":"Sun","year":"2019","journal-title":"IEEE Trans. Cybern."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/20\/8589\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:10:09Z","timestamp":1760130609000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/20\/8589"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,20]]},"references-count":63,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2023,10]]}},"alternative-id":["s23208589"],"URL":"https:\/\/doi.org\/10.3390\/s23208589","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,20]]}}}