{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T09:16:01Z","timestamp":1771233361691,"version":"3.50.1"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T00:00:00Z","timestamp":1765324800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T00:00:00Z","timestamp":1768262400000},"content-version":"vor","delay-in-days":34,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No. 62373164"],"award-info":[{"award-number":["No. 62373164"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Natural Science Foundation on Computational Intelligence of Shandong Province","award":["No.SDCI202403"],"award-info":[{"award-number":["No.SDCI202403"]}]},{"name":"Shandong Province\u2019s Technology-based Small and Medium-sized Enterprises (SMEs) Innovation Capacity Enhancement Project","award":["No. 2023TSGC0149"],"award-info":[{"award-number":["No. 2023TSGC0149"]}]},{"name":"Project of Central Government Guides Local Program","award":["No. YDZX2024075"],"award-info":[{"award-number":["No. YDZX2024075"]}]},{"name":"Project of Central Government Guides Local Program","award":["No. 62273164"],"award-info":[{"award-number":["No. 62273164"]}]},{"name":"Taishan Scholars Program of Shandong Province","award":["tsqn202507271"],"award-info":[{"award-number":["tsqn202507271"]}]},{"name":"Taishan Experts Program","award":["tscy20241154"],"award-info":[{"award-number":["tscy20241154"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. 
Syst."],"published-print":{"date-parts":[[2026,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Skin cancer research is essential to finding new treatments and improving survival rates in computer-aided medicine. Within this research, the accurate segmentation of skin lesion images is an important step for both early diagnosis and personalized treatment strategies. However, while current popular Transformer-based models have achieved competitive segmentation results, they often ignore the computational complexity and the high costs associated with their training. In this paper, we propose a lightweight network, a multi-scale atrous attention network for skin lesion segmentation (MAAN). Firstly, we optimize the residual basic block by constructing a dual-path framework with both high- and low-resolution paths, which reduces the number of parameters while maintaining effective feature extraction capability. Secondly, to better capture the information in the skin lesion images and further improve the model performance, we design an adaptive multi-scale atrous attention module at the final stage of the low-resolution path. The experiments conducted on the ISIC 2017 and ISIC 2018 datasets show that the proposed model MAAN achieves mIoU of 85.20% and 85.67%, respectively, outperforming the recent MHorUNet while maintaining only 0.37M parameters and 0.23G FLOPs computational complexity. 
Additionally, through ablation studies, we demonstrate that the AMAA module can work as a plug-and-play module for performance improvement on CNN-based methods.<\/jats:p>","DOI":"10.1007\/s40747-025-02186-z","type":"journal-article","created":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T11:38:10Z","timestamp":1765366690000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["MAAN: multi-scale atrous attention network for skin lesion segmentation"],"prefix":"10.1007","volume":"12","author":[{"given":"Yang","family":"Lian","sequence":"first","affiliation":[]},{"given":"Ruizhi","family":"Han","sequence":"additional","affiliation":[]},{"given":"Shiyuan","family":"Han","sequence":"additional","affiliation":[]},{"given":"Defu","family":"Qiu","sequence":"additional","affiliation":[]},{"given":"Jin","family":"Zhou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,10]]},"reference":[{"key":"2186_CR1","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1016\/j.compmedimag.2016.05.002","volume":"52","author":"A Pennisi","year":"2016","unstructured":"Pennisi A, Bloisi DD, Nardi D, Giampetruzzi AR, Mondino C, Facchiano A (2016) Skin lesion image segmentation using Delaunay triangulation for melanoma detection. Comput Med Imaging Graph 52:89\u2013103. https:\/\/doi.org\/10.1016\/j.compmedimag.2016.05.002","journal-title":"Comput Med Imaging Graph"},{"issue":"15","key":"2186_CR2","doi-asserted-by":"publisher","first-page":"1290","DOI":"10.1001\/jama.2023.4342","volume":"329","author":"CM Mangione","year":"2023","unstructured":"Mangione CM, Barry MJ, Nicholson WK, Chelmow D, Coker TR, Davis EM, Donahue KE, Ja\u00e9n CR, Kubik M, Li L (2023) Screening for skin cancer: US preventive services task force recommendation statement. JAMA 329(15):1290\u20131295. 
https:\/\/doi.org\/10.1001\/jama.2023.4342","journal-title":"JAMA"},{"key":"2186_CR3","doi-asserted-by":"publisher","DOI":"10.1016\/j.oor.2024.100365","author":"K Vayadande","year":"2024","unstructured":"Vayadande K (2024) Innovative approaches for skin disease identification in machine learning: a comprehensive study. Oral Oncol Rep. https:\/\/doi.org\/10.1016\/j.oor.2024.100365","journal-title":"Oral Oncol Rep"},{"issue":"2","key":"2186_CR4","doi-asserted-by":"publisher","first-page":"1487","DOI":"10.1007\/s40747-021-00587-4","volume":"8","author":"X He","year":"2022","unstructured":"He X, Wang Y, Zhao S, Yao C (2022) Deep metric attention learning for skin lesion classification in dermoscopy images. Complex Intell Syst 8(2):1487\u20131504. https:\/\/doi.org\/10.1007\/s40747-021-00587-4","journal-title":"Complex Intell Syst"},{"issue":"5","key":"2186_CR5","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s40747-025-01847-3","volume":"11","author":"B Li","year":"2025","unstructured":"Li B, Zhou J, Gou F, Wu J (2025) TransRNetFuse: a highly accurate and precise boundary FCN-transformer feature integration for medical image segmentation. Complex Intell Syst 11(5):1\u201319. https:\/\/doi.org\/10.1007\/s40747-025-01847-3","journal-title":"Complex Intell Syst"},{"issue":"1","key":"2186_CR6","doi-asserted-by":"publisher","first-page":"24356","DOI":"10.1038\/s41598-025-09101-z","volume":"15","author":"M Abdelsattar","year":"2025","unstructured":"Abdelsattar M, AbdelMoety A, Emad-Eldeen A (2025) ResNet-based image processing approach for precise detection of cracks in photovoltaic panels. Sci Rep 15(1):24356. 
https:\/\/doi.org\/10.1038\/s41598-025-09101-z","journal-title":"Sci Rep"},{"issue":"1","key":"2186_CR7","doi-asserted-by":"publisher","first-page":"14236","DOI":"10.1038\/s41598-025-96945-0","volume":"15","author":"A Rabee","year":"2025","unstructured":"Rabee A, Anwar Z, AbdelMoety A, Abdelsallam A, Ali M (2025) Comparative analysis of automated foul detection in football using deep learning architectures. Sci Rep 15(1):14236. https:\/\/doi.org\/10.1038\/s41598-025-96945-0","journal-title":"Sci Rep"},{"issue":"15","key":"2186_CR8","doi-asserted-by":"publisher","first-page":"8825","DOI":"10.1007\/s00521-025-11035-6","volume":"37","author":"M Abdelsattar","year":"2025","unstructured":"Abdelsattar M, AbdelMoety A, Emad-Eldeen A (2025) Advanced machine learning techniques for predicting power generation and fault detection in solar photovoltaic systems. Neural Comput Appl 37(15):8825\u20138844. https:\/\/doi.org\/10.1007\/s00521-025-11035-6","journal-title":"Neural Comput Appl"},{"key":"2186_CR9","doi-asserted-by":"crossref","unstructured":"Gongwen X, Zhijun Z, Weihua Y, Li\u2019Na X (2014) On medical image segmentation based on wavelet transform. In: 2014 fifth international conference on intelligent systems design and engineering applications. IEEE, pp 671\u2013674","DOI":"10.1109\/ISDEA.2014.155"},{"issue":"21","key":"2186_CR10","doi-asserted-by":"publisher","first-page":"8027","DOI":"10.1016\/j.eswa.2015.06.032","volume":"42","author":"K Wu","year":"2015","unstructured":"Wu K, Zhang D (2015) Robust tongue segmentation by fusing region-based and edge-based approaches. Expert Syst Appl 42(21):8027\u20138038. 
https:\/\/doi.org\/10.1016\/j.eswa.2015.06.032","journal-title":"Expert Syst Appl"},{"issue":"7587","key":"2186_CR11","doi-asserted-by":"publisher","first-page":"484","DOI":"10.1038\/nature16961","volume":"529","author":"D Silver","year":"2016","unstructured":"Silver D, Huang A, Maddison CJ, Guez A, Sifre L, Van Den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484\u2013489. https:\/\/doi.org\/10.1038\/nature16961","journal-title":"Nature"},{"key":"2186_CR12","unstructured":"Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla NY, Kainz B et al (2018) Attention U-Net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999"},{"issue":"6","key":"2186_CR13","doi-asserted-by":"publisher","first-page":"1856","DOI":"10.1109\/TMI.2019.2959609","volume":"39","author":"Z Zhou","year":"2019","unstructured":"Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J (2019) UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging 39(6):1856\u20131867. https:\/\/doi.org\/10.1109\/TMI.2019.2959609","journal-title":"IEEE Trans Med Imaging"},{"key":"2186_CR14","doi-asserted-by":"publisher","unstructured":"Huang H, Lin L, Tong R, Hu H, Zhang Q, Iwamoto Y, Han X, Chen YW, Wu J (2020) UNet 3+: a full-scale connected UNet for medical image segmentation. In: IEEE international conference on acoustics, speech, and signal processing. IEEE, pp 1055\u20131059. https:\/\/doi.org\/10.1109\/ICASSP40776.2020.9053405","DOI":"10.1109\/ICASSP40776.2020.9053405"},{"key":"2186_CR15","doi-asserted-by":"publisher","unstructured":"Valanarasu JMJ, Patel VM (2022) UNeXt: MLP-based rapid medical image segmentation network. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 23\u201333. 
https:\/\/doi.org\/10.1007\/978-3-031-16443-9_3","DOI":"10.1007\/978-3-031-16443-9_3"},{"key":"2186_CR16","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2023.107120","volume":"162","author":"Y Yin","year":"2023","unstructured":"Yin Y, Han Z, Jian M, Wang GG, Chen L, Wang R (2023) AMSUnet: a neural network using atrous multi-scale convolution for medical image segmentation. Comput Biol Med 162:107120. https:\/\/doi.org\/10.1016\/j.compbiomed.2023.107120","journal-title":"Comput Biol Med"},{"key":"2186_CR17","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2024.3509504","author":"Z Guo","year":"2024","unstructured":"Guo Z, Bian L, Wei H, Li J, Ni H, Huang X (2024) DSNet: a novel way to use atrous convolutions in semantic segmentation. IEEE Trans Circuits Syst Video Technol. https:\/\/doi.org\/10.1109\/TCSVT.2024.3509504","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"2186_CR18","doi-asserted-by":"publisher","first-page":"11358","DOI":"10.1109\/TMM.2024.3453059","volume":"26","author":"D Qiu","year":"2024","unstructured":"Qiu D, Cheng Y, Wong KKL, Zhang W, Yi Z, Wang X (2024) DBSR: quadratic conditional diffusion model for blind cardiac MRI super-resolution. IEEE Trans Multimed 26:11358\u201311371. https:\/\/doi.org\/10.1109\/TMM.2024.3453059","journal-title":"IEEE Trans Multimed"},{"key":"2186_CR19","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2024.106285","volume":"94","author":"Y Feng","year":"2024","unstructured":"Feng Y, Zhu X, Zhang X, Li Y, Lu H (2024) PAMSNet: a medical image segmentation network based on spatial pyramid and attention mechanism. Biomed Signal Process Control 94:106285. 
https:\/\/doi.org\/10.1016\/j.bspc.2024.106285","journal-title":"Biomed Signal Process Control"},{"key":"2186_CR20","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.1706.03762","author":"A Vaswani","year":"2017","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst. https:\/\/doi.org\/10.48550\/ARXIV.1706.03762","journal-title":"Adv Neural Inf Process Syst"},{"issue":"1","key":"2186_CR21","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","volume":"45","author":"K Han","year":"2022","unstructured":"Han K, Wang Y, Chen H, Chen X, Guo J, Liu Z, Tang Y, Xiao A, Xu C, Xu Y (2022) A survey on vision transformer. IEEE Trans Pattern Anal Mach Intell 45(1):87\u2013110. https:\/\/doi.org\/10.1109\/TPAMI.2022.3152247","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"8","key":"2186_CR22","doi-asserted-by":"publisher","first-page":"4374","DOI":"10.1109\/TPAMI.2021.3065086","volume":"44","author":"J Mei","year":"2021","unstructured":"Mei J, Cheng M-M, Xu G, Wan L-R, Zhang H (2021) SANet: a slice-aware network for pulmonary nodule detection. IEEE Trans Pattern Anal Mach Intell 44(8):4374\u20134387. https:\/\/doi.org\/10.1109\/TPAMI.2021.3065086","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"2186_CR23","doi-asserted-by":"publisher","unstructured":"Zhang Y, Liu H, Hu Q (2021) Transfuse: fusing transformers and CNNs for medical image segmentation. In: Medical image computing and computer-assisted intervention\u2014MICCAI 2021: 24th international conference, Strasbourg, France, September 27\u2013October 1, 2021, proceedings, Part I 24. Springer, pp 14\u201324. 
https:\/\/doi.org\/10.1007\/978-3-030-87193-2_2","DOI":"10.1007\/978-3-030-87193-2_2"},{"key":"2186_CR24","unstructured":"Gao Y, Zhou M, Liu D, Yan Z, Zhang S, Metaxas DN (2022) A data-scalable transformer for medical image segmentation: architecture, model efficiency, and benchmark. arXiv preprint arXiv:2203.00131"},{"key":"2186_CR25","doi-asserted-by":"publisher","unstructured":"Aghdam EK, Azad R, Zarvani M, Merhof D (2023) Attention Swin U-Net: cross-contextual attention mechanism for skin lesion segmentation. In: 2023 IEEE 20th international symposium on biomedical imaging (ISBI). IEEE, pp 1\u20135. https:\/\/doi.org\/10.1109\/ISBI53787.2023.10230337","DOI":"10.1109\/ISBI53787.2023.10230337"},{"issue":"1","key":"2186_CR26","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/s42452-024-05655-1","volume":"6","author":"K Rezaee","year":"2024","unstructured":"Rezaee K, Zadeh HG (2024) Self-attention transformer unit-based deep learning framework for skin lesions classification in smart healthcare. Discov Appl Sci 6(1):3. https:\/\/doi.org\/10.1007\/s42452-024-05655-1","journal-title":"Discov Appl Sci"},{"key":"2186_CR27","doi-asserted-by":"publisher","first-page":"64305","DOI":"10.1109\/ACCESS.2025.3556889","volume":"13","author":"AA Salam","year":"2025","unstructured":"Salam AA, Usman Akram M, Haroon Yousaf M, Rao B (2025) DermaTransNet: where transformer attention meets U-Net for skin image segmentation. IEEE Access 13:64305\u201364329. https:\/\/doi.org\/10.1109\/ACCESS.2025.3556889","journal-title":"IEEE Access"},{"key":"2186_CR28","doi-asserted-by":"publisher","DOI":"10.1109\/TMI.2023.3247814","author":"X Lin","year":"2023","unstructured":"Lin X, Yu L, Cheng K-T, Yan Z (2023) The lighter the better: rethinking transformers in medical image segmentation through adaptive pruning. IEEE Trans Med Imaging. 
https:\/\/doi.org\/10.1109\/TMI.2023.3247814","journal-title":"IEEE Trans Med Imaging"},{"key":"2186_CR29","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2023.105517","volume":"88","author":"R Wu","year":"2024","unstructured":"Wu R, Liang P, Huang X, Shi L, Gu Y, Zhu H, Chang Q (2024) MHorUNet: high-order spatial interaction UNet for skin lesion segmentation. Biomed Signal Process Control 88:105517. https:\/\/doi.org\/10.1016\/j.bspc.2023.105517","journal-title":"Biomed Signal Process Control"},{"key":"2186_CR30","doi-asserted-by":"crossref","unstructured":"Yu C, Gao C, Wang J, Yu G, Shen C, Sang N (2021) BiSeNet V2: bilateral network with guided aggregation for real-time semantic segmentation. Int J Comput Vis 129:3051\u20133068. https:\/\/doi.org\/10.1007\/s11263-021-01515-2","DOI":"10.1007\/s11263-021-01515-2"},{"key":"2186_CR31","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778. https:\/\/doi.org\/10.1109\/cvpr.2016.90","DOI":"10.1109\/cvpr.2016.90"},{"key":"2186_CR32","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1016\/j.neunet.2017.12.012","volume":"107","author":"S Elfwing","year":"2018","unstructured":"Elfwing S, Uchibe E, Doya K (2018) Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Netw 107:3\u201311. https:\/\/doi.org\/10.1016\/j.neunet.2017.12.012","journal-title":"Neural Netw"},{"key":"2186_CR33","doi-asserted-by":"publisher","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7132\u20137141. 
https:\/\/doi.org\/10.1109\/CVPR.2018.00745","DOI":"10.1109\/CVPR.2018.00745"},{"key":"2186_CR34","doi-asserted-by":"publisher","unstructured":"Zhang H, Zu K, Lu J, Zou Y, Meng D (2022) EPSANet: an efficient pyramid squeeze attention block on convolutional neural network. In: Proceedings of the Asian conference on computer vision, pp 1161\u20131177. https:\/\/doi.org\/10.1007\/978-3-031-26313-2_33","DOI":"10.1007\/978-3-031-26313-2_33"},{"key":"2186_CR35","doi-asserted-by":"publisher","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention\u2014MICCAI 2015: 18th international conference, Munich, Germany, October 5\u20139, 2015, proceedings, Part III 18. Springer, pp 234\u2013241. https:\/\/doi.org\/10.1007\/978-3-319-24574-4_28","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"2186_CR36","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2022.109512","volume":"253","author":"Z Han","year":"2022","unstructured":"Han Z, Jian M, Wang GG (2022) ConvuNeXt: an efficient convolution neural network for medical image segmentation. Knowl-Based Syst 253:109512. https:\/\/doi.org\/10.1016\/j.knosys.2022.109512","journal-title":"Knowl-Based Syst"},{"key":"2186_CR37","doi-asserted-by":"publisher","unstructured":"Zhao H, Shi J, Qi X, Wang X, Jia J (2017) Pyramid scene parsing network. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), pp 2881\u20132890. https:\/\/doi.org\/10.1109\/CVPR.2017.660","DOI":"10.1109\/CVPR.2017.660"},{"key":"2186_CR38","doi-asserted-by":"publisher","unstructured":"Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H (2018) Encoder\u2013decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European conference on computer vision (ECCV), pp 801\u2013818. 
https:\/\/doi.org\/10.1007\/978-3-030-01234-2_49","DOI":"10.1007\/978-3-030-01234-2_49"},{"issue":"3","key":"2186_CR39","doi-asserted-by":"publisher","first-page":"3448","DOI":"10.1109\/TITS.2022.3228042","volume":"24","author":"H Pan","year":"2022","unstructured":"Pan H, Hong Y, Sun W, Jia Y (2022) Deep dual-resolution networks for real-time and accurate semantic segmentation of traffic scenes. IEEE Trans Intell Transp Syst 24(3):3448\u20133460. https:\/\/doi.org\/10.1109\/TITS.2022.3228042","journal-title":"IEEE Trans Intell Transp Syst"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-02186-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-02186-z","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-02186-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T08:24:07Z","timestamp":1771230247000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-02186-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,10]]},"references-count":39,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,2]]}},"alternative-id":["2186"],"URL":"https:\/\/doi.org\/10.1007\/s40747-025-02186-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,10]]},"assertion":[{"value":"16 April 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 November 
2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Institutional review board"}}],"article-number":"70"}}