{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,10]],"date-time":"2026-02-10T19:03:20Z","timestamp":1770750200978,"version":"3.50.0"},"reference-count":38,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2025,5,22]],"date-time":"2025-05-22T00:00:00Z","timestamp":1747872000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,5,22]],"date-time":"2025-05-22T00:00:00Z","timestamp":1747872000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"\u201cPioneer\u201d and \u201cLeading Goose\u201d R&D Program of Zhejiang","award":["2023C01143"],"award-info":[{"award-number":["2023C01143"]}]},{"DOI":"10.13039\/100014718","name":"Innovative Research Group Project of the National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62072121"],"award-info":[{"award-number":["62072121"]}],"id":[{"id":"10.13039\/100014718","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2025,7]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>DETR (DEtection TRansformer) is a CV model for object detection that replaces traditional complex methods with a Transformer architecture, and has achieved significant improvement over previous methods, particularly in handling small and medium-sized objects. However, the attention mechanism-based detection framework of DETR exhibits limitations in small and medium-sized object detection. It struggles to extract fine-grained details of small and medium-sized objects from low-resolution features, and its computational complexity increases significantly with the input scale, thereby constraining real-time detection efficiency. To address these limitations, we introduce the Cross Feature Attention (XFA) mechanism and propose XFCOS (XFA-based with FCOS), a novel object detection model built upon it. XFA simplifies the attention mechanism\u2019s computational process and reduces complexity through L2 normalization and two one-dimensional convolutions applied in different directions. This design reduces the computational complexity from quadratic to linear while preserving spatial context awareness. XFCOS enhances the original TSP-FCOS (Transformer-based Set Prediction with FCOS) model by integrating XFA into the transformer encoder, creating a CNN-ViT hybrid architecture, significantly reducing computational costs without sacrificing accuracy. Extensive experiments demonstrate that XFCOS achieves state-of-the-art performance while addressing DETR\u2019s convergence and efficiency limitations. On Pascal VOC 2007, XFCOS attains 54.7 AP and 60.7 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_\\textrm{75}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>75<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> - surpassing DETR by 4.5 AP and 7.9 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_\\textrm{75}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>75<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> respectively, establishing new benchmarks among ResNet-50-based detectors. The model shows particular strength in small object detection, achieving 24.0 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_\\textrm{S}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>S<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> and 43.9 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_\\textrm{M}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>M<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> on COCO 2017, representing 3.3 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_\\textrm{S}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>S<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> and 3.8 AP<jats:inline-formula>\n              <jats:alternatives>\n                <jats:tex-math>$$_{\\textrm{M}}$$<\/jats:tex-math>\n                <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mmultiscripts>\n                    <mml:mrow\/>\n                    <mml:mtext>M<\/mml:mtext>\n                    <mml:mrow\/>\n                  <\/mml:mmultiscripts>\n                <\/mml:math>\n              <\/jats:alternatives>\n            <\/jats:inline-formula> improvements over DETR. Through computational optimization, XFCOS reduces encoder FLOPs to 13.5G, representing a 17.2% decrease compared to TSP-FCOS\u2019s 16.3G, while cutting activation memory from 285.78 to 264.64M, a reduction of 7.4%. This significantly enhances computational efficiency.<\/jats:p>","DOI":"10.1007\/s40747-025-01904-x","type":"journal-article","created":{"date-parts":[[2025,5,22]],"date-time":"2025-05-22T07:28:53Z","timestamp":1747898933000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["A lightweight mechanism for vision-transformer-based object detection"],"prefix":"10.1007","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-8461-9956","authenticated-orcid":false,"given":"Yanming","family":"Ye","sequence":"first","affiliation":[]},{"given":"Qiang","family":"Sun","sequence":"additional","affiliation":[]},{"given":"Kailong","family":"Cheng","sequence":"additional","affiliation":[]},{"given":"Xingfa","family":"Shen","sequence":"additional","affiliation":[]},{"given":"Dongjing","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,5,22]]},"reference":[{"key":"1904_CR1","doi-asserted-by":"publisher","unstructured":"Tan M, Pang R, Le QV (2020) EfficientDet: scalable and efficient object detection. In: 2020 IEEE\/CVF Conference on computer vision and pattern recognition (CVPR). IEEE, pp 10778\u201310787. https:\/\/doi.org\/10.1109\/CVPR42600.2020.01079","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"1904_CR2","unstructured":"Chen Y, Zhang Z, Cao Y, Wang L, Lin S, Hu H (2020) Reppoints v2: verification meets regression for object detection. In: Conference on neural information processing systems 33:5621\u20135631"},{"key":"1904_CR3","doi-asserted-by":"publisher","unstructured":"Zhang D, Zhang H, Tang J, Wang M, Hua X, Sun Q (2020) Feature pyramid transformer. In: Vedaldi A, Bischof H, Brox T, Frahm JM (eds) Computer Vision-ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12373. Springer, Cham. https:\/\/doi.org\/10.1007\/978-3-030-58604-1_20","DOI":"10.1007\/978-3-030-58604-1_20"},{"key":"1904_CR4","doi-asserted-by":"publisher","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: Vedaldi A, Bischof H, Brox T, Frahm, JM (eds) Computer Vision-ECCV 2020. ECCV 2020. Lecture Notes in Computer Science(), vol 12346. Springer, Cham. https:\/\/doi.org\/10.1007\/978-3-030-58452-8_13","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"1904_CR5","doi-asserted-by":"crossref","unstructured":"Rajat R, Anand M, Andrew YN (2009) Large-scale deep unsuper vised learning using graphics processors. In: International Conference on machine learning, pp 873-880. https:\/\/doi.org\/10.1145\/1553374.1553486","DOI":"10.1145\/1553374.1553486"},{"key":"1904_CR6","doi-asserted-by":"publisher","unstructured":"Wang S, Li BZ, Khabsa M, Fang H, Ma H (2020) Linformer: self-attention with linear complexity. https:\/\/doi.org\/10.48550\/arXiv.2006.04768","DOI":"10.48550\/arXiv.2006.04768"},{"key":"1904_CR7","doi-asserted-by":"publisher","unstructured":"Child R, Gray S, Radford A, Sutskever I (2019) Generating long sequences with sparse transformers. https:\/\/doi.org\/10.48550\/arXiv.1904.10509","DOI":"10.48550\/arXiv.1904.10509"},{"key":"1904_CR8","doi-asserted-by":"publisher","unstructured":"Xie E, Wang W, Yu Z, Anandkumar A, Alvarez JM, Luo P (2021) Segformer: simple and efficient design for semantic segmentation with transformers. In: NeuraIPS 2021. https:\/\/doi.org\/10.48550\/arXiv.2105.15203","DOI":"10.48550\/arXiv.2105.15203"},{"key":"1904_CR9","doi-asserted-by":"crossref","unstructured":"Sun Z, Cao S, Yang Y, Kitani KM (2021) Rethinking transformer-based set prediction for object detection. In: International Conference on computer vision, pp 3611\u20133620","DOI":"10.1109\/ICCV48922.2021.00359"},{"key":"1904_CR10","doi-asserted-by":"crossref","unstructured":"Tian Z, Shen C, Chen H, He T (2019) FCOS: fully convolutional one-stage object detection. In: 2019 IEEE\/CVF International Conference on Computer Vision (ICCV). IEEE. pp 9626\u20139635","DOI":"10.1109\/ICCV.2019.00972"},{"key":"1904_CR11","doi-asserted-by":"publisher","unstructured":"Zhao Y, Tang H, Jiang Y, Wu Q (2023) Lightweight vision transformer with cross feature attention. In: ICIP 2023. https:\/\/doi.org\/10.48550\/arXiv.2207.07268","DOI":"10.48550\/arXiv.2207.07268"},{"issue":"6","key":"1904_CR12","doi-asserted-by":"publisher","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","volume":"39","author":"S Ren","year":"2017","unstructured":"Ren S, He K, Girshick R, Sun J (2017) Faster r-cnn: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137\u20131149. https:\/\/doi.org\/10.1109\/TPAMI.2016.2577031","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1904_CR13","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2844175","author":"K He","year":"2017","unstructured":"He K, Gkioxari G, Doll\u00e1r Piotr, Girshick R (2017) Mask r-cnn. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2018.2844175","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1904_CR14","doi-asserted-by":"publisher","unstructured":"Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: IEEE Computer Vision & Pattern Recognition. in CVPR 2016, pp 779\u2013788. https:\/\/doi.org\/10.1109\/CVPR.2016.91","DOI":"10.1109\/CVPR.2016.91"},{"key":"1904_CR15","doi-asserted-by":"publisher","DOI":"10.3390\/electronics12102323","author":"H Lou","year":"2023","unstructured":"Lou H, Duan X, Guo J, Liu H, Gu J, Bi L et al (2023) Dc-yolov8: small-size object detection algorithm based on camera sensor. Electronics (2079-9292). https:\/\/doi.org\/10.3390\/electronics12102323","journal-title":"Electronics (2079-9292)"},{"issue":"1","key":"1904_CR16","doi-asserted-by":"publisher","first-page":"012047","DOI":"10.1088\/1742-6596\/2171\/1\/012047","volume":"2171","author":"C Li","year":"2022","unstructured":"Li C, Cao Y, Peng Y (2022) Research on automatic driving target detection based on yolov5s. J Phys Conf Ser 2171(1):012047\u2013012047","journal-title":"J Phys Conf Ser"},{"key":"1904_CR17","doi-asserted-by":"publisher","DOI":"10.54097\/fcis.v3i1.6024","author":"X He","year":"2023","unstructured":"He X (2023) Vehicle target detection algorithm based on yolov5. Front Comput Intell Syst. https:\/\/doi.org\/10.54097\/fcis.v3i1.6024","journal-title":"Front Comput Intell Syst"},{"issue":"11","key":"1904_CR18","doi-asserted-by":"publisher","first-page":"2922","DOI":"10.3390\/rs15112922","volume":"15","author":"Y Chen","year":"2023","unstructured":"Chen Y, Wang H, Pang Y, Han J, Mou E, Cao E (2023) An infrared small target detection method based on a weighted human visual comparison mechanism for safety monitoring. Remote Sens 15(11):2922\u20132922. https:\/\/doi.org\/10.3390\/rs15112922","journal-title":"Remote Sens"},{"issue":"6","key":"1904_CR19","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0217647","volume":"14","author":"S Pang","year":"2019","unstructured":"Pang S, Ding T, Qiao S, Meng F, Wang S, Li P et al (2019) A novel yolov3-arch model for identifying cholelithiasis and classifying gallstones on ct images. PLoS One 14(6):e0217647. https:\/\/doi.org\/10.1371\/journal.pone.0217647","journal-title":"PLoS One"},{"key":"1904_CR20","doi-asserted-by":"publisher","unstructured":"Meinhardt T, Kirillov A, Leal-Taixe L, Feichtenhofer C (2021) Trackformer: multi-object tracking with transformers. In: 2021 IEEE\/CVF Conference on computer vision and pattern recognition. CVPR 2021, pp 8844-8854. https:\/\/doi.org\/10.48550\/arXiv.2101.02702","DOI":"10.48550\/arXiv.2101.02702"},{"key":"1904_CR21","doi-asserted-by":"publisher","unstructured":"Dai Z, Cai B, Lin Y, Chen J (2021) Up-detr: unsupervised pre-training for object detection with transformers. In: 2021 IEEE\/CVF Conference on computer vision and pattern recognition. CVPR 2021, pp 1601\u20131610. https:\/\/doi.org\/10.1109\/CVPR46437.2021","DOI":"10.1109\/CVPR46437.2021"},{"key":"1904_CR22","unstructured":"Bablani D, Mckinstry JL, Esser SK, Appuswamy R, Modha DS (2023) Efficient and effective methods for mixed precision neural network quantization for faster, energy-efficient inference. arxiv:2301.13330"},{"key":"1904_CR23","doi-asserted-by":"publisher","unstructured":"Zuo S, Zhang Q, Liang C, He P, Zhao T, Chen W (2022) Moebert: from bert to mixture-of-experts via importance-guided adaptation. In: NAACL 2022. https:\/\/doi.org\/10.48550\/arXiv.2204.07675","DOI":"10.48550\/arXiv.2204.07675"},{"key":"1904_CR24","unstructured":"Wu C, Wu F, Qi T, Huang Y, Xie X (2021) Fastformer: additive attention can be all you need. arxiv:2108.09084"},{"key":"1904_CR25","doi-asserted-by":"publisher","unstructured":"Kitaev N, Kaiser L, Levskaya A (2020) Reformer: the efficient transformer. In: ICLR 2020. https:\/\/doi.org\/10.48550\/arXiv.2001.04451","DOI":"10.48550\/arXiv.2001.04451"},{"key":"1904_CR26","first-page":"5998","volume":"30","author":"A Vaswani","year":"2017","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN et al (2017) Attention is all you need. Adv Neural Inf Process Syst 30:5998\u20136008","journal-title":"Adv Neural Inf Process Syst"},{"key":"1904_CR27","doi-asserted-by":"publisher","unstructured":"Liu YC, Ma CY, He Z, Kuo CW, Vajda P (2021) Unbiased teacher for semi-supervised object detection. In: ICLR 2021. https:\/\/doi.org\/10.48550\/arXiv.2102.09480","DOI":"10.48550\/arXiv.2102.09480"},{"key":"1904_CR28","doi-asserted-by":"publisher","unstructured":"Sohn K, Zhang Z, Li CL, Zhang H, Lee CY, Pfister T (2020) A simple semi-supervised learning framework for object detection. https:\/\/doi.org\/10.48550\/arXiv.2005.04757","DOI":"10.48550\/arXiv.2005.04757"},{"key":"1904_CR29","doi-asserted-by":"publisher","unstructured":"Zhou Q, Yu C , Wang Z, Qian Q, Li H (2021) Instant-teaching: an end-to-end semi-supervised object detection framework. In: 2021 IEEE\/CVF Conference on computer vision and pattern recognition, CVPR 2021, pp 4079\u20134088. https:\/\/doi.org\/10.48550\/arXiv.2103.11402","DOI":"10.48550\/arXiv.2103.11402"},{"key":"1904_CR30","doi-asserted-by":"publisher","unstructured":"Tang Y, Chen W, Luo Y, Zhang, Y (2021) Humble teachers teach better students for semi-supervised object detection. In: 2021 IEEE Computer Conference on computer vision and pattern recognition, CVPR 2021, pp 3131\u20133140. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00315","DOI":"10.1109\/CVPR46437.2021.00315"},{"key":"1904_CR31","doi-asserted-by":"publisher","unstructured":"Zhang H, Chang H, Ma B, Wang N, Chen, X (2020) Dynamic r-cnn: towards high quality object detection via dynamic training. In: ECCV2020. https:\/\/doi.org\/10.48550\/arXiv.2004.06002","DOI":"10.48550\/arXiv.2004.06002"},{"key":"1904_CR32","doi-asserted-by":"publisher","unstructured":"Zhu X, Su W, Lu L, Li B, Wang X, Dai J (2021) Deformable DETR: deformable transformers for end-to-end object detection. In: ICLR 2021. https:\/\/doi.org\/10.48550\/arXiv.2010.04159","DOI":"10.48550\/arXiv.2010.04159"},{"issue":"22","key":"1904_CR33","doi-asserted-by":"publisher","first-page":"12174","DOI":"10.3390\/app132212174","volume":"13","author":"Y Wang","year":"2023","unstructured":"Wang Y, Wang Q, Zou R, Wen F, Liu F, Zhang Y et al (2023) Advancing image object detection: enhanced feature pyramid network and gradient density loss for improved performance. Appl Sci (2076-3417) 13(22):12174\u201312174. https:\/\/doi.org\/10.3390\/app132212174","journal-title":"Appl Sci (2076-3417)"},{"key":"1904_CR34","doi-asserted-by":"publisher","unstructured":"Chen B, Chen W, Yang S, Xuan Y, Song J, Xie D, Pu S, Song M, Zhuang Y (2022) Label matching semi-supervised object detection. In: IEEE\/CVF Computer Vision and Pattern Recognition Conference. in CVPR 2022: 14361-14370. https:\/\/doi.org\/10.48550\/arXiv.2206.06608","DOI":"10.48550\/arXiv.2206.06608"},{"issue":"99","key":"1904_CR35","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TPAMI.2021.3050494","volume":"PP","author":"X Zhang","year":"2021","unstructured":"Zhang X, Wan F, Liu C, Ji X, Ye Q (2021) FreeAnchor: learning to match anchors for visual object detection. IEEE Trans Pattern Anal Mach Intell PP(99):1\u20131. https:\/\/doi.org\/10.1109\/TPAMI.2021.3050494","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1904_CR36","doi-asserted-by":"publisher","unstructured":"Zhang H, Wang Y, Dayoub F, Niko S\u00fcnderhauf (2021) Varifocalnet: an iou-aware dense object detector. In: 2021 IEEE\/CVF Conference on computer vision and pattern recognition. in CVPR 2021. https:\/\/doi.org\/10.48550\/arXiv.2008.1336","DOI":"10.48550\/arXiv.2008.1336"},{"key":"1904_CR37","doi-asserted-by":"publisher","unstructured":"Mark E, Ali SME, Van Luc G, Christopher KIW, John W, Andrew Z (2014) The Pascal Visual Object Classes (VOC) challenge. Int J Comput Vis 111(1):98\u2013136. https:\/\/doi.org\/10.1007\/s11263-009-0275-4","DOI":"10.1007\/s11263-009-0275-4"},{"key":"1904_CR38","doi-asserted-by":"publisher","unstructured":"Tsung-Yi L, Michael M, Serge B, Lubomir B, Ross G, James H, Pietro P, Deva R, C Lawrence Z, Piotr D (2015) Microsoft COCO: common objects in context. In: Lecture Notes in Computer Science, 8693.: 740-755. https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48","DOI":"10.1007\/978-3-319-10602-1_48"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01904-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-01904-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01904-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T11:09:37Z","timestamp":1750331377000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-01904-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,22]]},"references-count":38,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,7]]}},"alternative-id":["1904"],"URL":"https:\/\/doi.org\/10.1007\/s40747-025-01904-x","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,22]]},"assertion":[{"value":"31 July 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 May 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding authors state that there is no Conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article is licensed under a Creative Commons Attribution-Non Commercial-No Derivatives 4.0 International License, which permits any non-commercial use,sharing,distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed mate rial. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article\u2019s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article\u2019s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit .","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Open Access"}},{"value":"no","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"none","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Permission to reproduce materials from other sources"}}],"article-number":"302"}}