{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T22:21:49Z","timestamp":1773786109990,"version":"3.50.1"},"reference-count":59,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2024,7,10]],"date-time":"2024-07-10T00:00:00Z","timestamp":1720569600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,7,10]],"date-time":"2024-07-10T00:00:00Z","timestamp":1720569600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Few-shot Semantic Segmentation (FSS) was proposed to segment unseen classes in a query image, referring to only a few annotated examples named support images. One of the characteristics of FSS is spatial inconsistency between query and support targets, e.g., texture or appearance. This greatly challenges the generalization ability of methods for FSS, which requires to effectively exploit the dependency of the query image and the support examples. Most existing methods abstracted support features into prototype vectors and implemented the interaction with query features using cosine similarity or feature concatenation. However, this simple interaction may not capture spatial details in query features. To address this limitation, some methods utilized pixel-level support information by computing pixel-level correlations between paired query and support features implemented with the attention mechanism of Transformer. Nevertheless, these approaches suffer from heavy computation due to dot-product attention between all pixels of support and query features. In this paper, we propose a novel framework, termed ProtoFormer, built upon the Transformer architecture, to fully capture spatial details in query features. ProtoFormer treats the abstracted prototype of the target class in support features as the Query and the query features as Key and Value embeddings, which are input to the Transformer decoder. This approach enables better capture of spatial details and focuses on the semantic features of the target class in the query image. The output of the Transformer-based module can be interpreted as semantic-aware dynamic kernels that filter the segmentation mask from the enriched query features. 
Extensive experiments conducted on PASCAL-<jats:inline-formula><jats:alternatives><jats:tex-math>$$5^{i}$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:msup>\n                    <mml:mn>5<\/mml:mn>\n                    <mml:mi>i<\/mml:mi>\n                  <\/mml:msup>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula> and COCO-<jats:inline-formula><jats:alternatives><jats:tex-math>$$20^{i}$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:msup>\n                    <mml:mn>20<\/mml:mn>\n                    <mml:mi>i<\/mml:mi>\n                  <\/mml:msup>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula> datasets demonstrate that ProtoFormer significantly outperforms the state-of-the-art methods in FSS.<\/jats:p>","DOI":"10.1007\/s40747-024-01539-4","type":"journal-article","created":{"date-parts":[[2024,7,10]],"date-time":"2024-07-10T02:01:57Z","timestamp":1720576917000},"page":"7265-7278","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Prototype as query for few shot semantic segmentation"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0336-9295","authenticated-orcid":false,"given":"Leilei","family":"Cao","sequence":"first","affiliation":[]},{"given":"Yibo","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Ye","family":"Yuan","sequence":"additional","affiliation":[]},{"given":"Qiangguo","family":"Jin","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,7,10]]},"reference":[{"key":"1539_CR1","doi-asserted-by":"crossref","unstructured":"Chen L, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2018) \u201cDeeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,\u201d TPAMI,","DOI":"10.1109\/TPAMI.2017.2699184"},{"key":"1539_CR2","doi-asserted-by":"crossref","unstructured":"Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H (2018) \u201cEncoder-decoder with atrous separable convolution for semantic image segmentation,\u201d in ECCV","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"1539_CR3","doi-asserted-by":"crossref","unstructured":"Long J, Shelhamer E, Darrell T (2015) \u201cFully convolutional networks for semantic segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"1539_CR4","doi-asserted-by":"crossref","unstructured":"Shelhamer E, Long J, Darrell T (2017) \u201cFully convolutional networks for semantic segmentation,\u201d TPAMI,","DOI":"10.1109\/TPAMI.2016.2572683"},{"key":"1539_CR5","unstructured":"Cheng B, Schwing AG, Kirillov A (2021) \u201cPer-pixel classification is not all you need for semantic segmentation,\u201d in NeurIPS,"},{"key":"1539_CR6","unstructured":"Xie E, Wang W, Yu Z, Anandkumar A, Alvarez JM, Luo P (2021) \u201cSegFormer: Simple and efficient design for semantic segmentation with transformers,\u201d in NeurIPS,"},{"key":"1539_CR7","doi-asserted-by":"crossref","unstructured":"Zhou B, Zhao H, Puig X, Fidler S, Barriuso A,\u00a0Torralba A (2017) \u201cScene parsing through ade20k dataset,\u201d in CVPR,","DOI":"10.1109\/CVPR.2017.544"},{"key":"1539_CR8","doi-asserted-by":"crossref","unstructured":"Caesar H, Uijlings J, Ferrari V (2018) \u201cCoco-stuff: Thing and stuff classes in context,\u201d in CVPR, , pp. 
{"key":"1539_CR9","doi-asserted-by":"crossref","unstructured":"Shaban A, Bansal S, Liu Z, Essa I, Boots B (2017) \u201cOne-shot learning for semantic segmentation,\u201d in BMVC,","DOI":"10.5244\/C.31.167"},{"key":"1539_CR10","unstructured":"Snell J, Swersky K, Zemel R (2017) \u201cPrototypical networks for few-shot learning,\u201d in NeurIPS,"},{"key":"1539_CR11","unstructured":"Dong N, Xing E (2018) \u201cFew-shot semantic segmentation with prototype learning,\u201d in BMVC,"},{"key":"1539_CR12","doi-asserted-by":"crossref","unstructured":"Lu Z, He S, Zhu X, Zhang L, Song Y-Z, Xiang T (2021) \u201cSimple is better: Few-shot semantic segmentation with classifier weight transformer,\u201d in ICCV,","DOI":"10.1109\/ICCV48922.2021.00862"},{"key":"1539_CR13","doi-asserted-by":"crossref","unstructured":"Wang K, Liew J, Zou Y, Zhou D, Feng J (2019) \u201cPanet: Few-shot image semantic segmentation with prototype alignment,\u201d in ICCV,","DOI":"10.1109\/ICCV.2019.00929"},{"issue":"9","key":"1539_CR14","doi-asserted-by":"publisher","first-page":"3855","DOI":"10.1109\/TCYB.2020.2992433","volume":"50","author":"X Zhang","year":"2020","unstructured":"Zhang X, Wei Y, Yang Y, Huang T (2020) Sg-one: Similarity guidance network for one-shot semantic segmentation. IEEE Transactions on Cybernetics 50(9):3855\u20133865","journal-title":"IEEE Transactions on Cybernetics"},{"issue":"2","key":"1539_CR15","doi-asserted-by":"publisher","first-page":"1050","DOI":"10.1109\/TPAMI.2020.3013717","volume":"44","author":"Z Tian","year":"2022","unstructured":"Tian Z, Zhao H, Shu M, Yang Z, Li R, Jia J (2022) Prior guided feature enrichment network for few-shot segmentation. TPAMI 44(2):1050\u20131065","journal-title":"TPAMI"},{"key":"1539_CR16","doi-asserted-by":"crossref","unstructured":"Wang H, Zhang X, Hu Y, Yang Y, Cao X, Zhen X (2020) \u201cFew-shot semantic segmentation with democratic attention networks,\u201d in ECCV,","DOI":"10.1007\/978-3-030-58601-0_43"},{"key":"1539_CR17","unstructured":"Zhang G, Kang G, Yang Y, Wei Y (2021) \u201cFew-shot segmentation via cycle-consistent transformer,\u201d in NeurIPS,"},{"key":"1539_CR18","doi-asserted-by":"crossref","unstructured":"Shi X, Wei D, Zhang Y, Lu D, Ning M, Chen J, Ma K, Zheng Y (2022) \u201cDense cross-query-and-support attention weighted mask aggregation for few-shot segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-031-20044-1_9"},{"key":"1539_CR19","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) \u201cAttention is all you need,\u201d in NIPS, pp. 5998\u20136008"},{"key":"1539_CR20","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2021) \u201cAn image is worth 16x16 words: Transformers for image recognition at scale,\u201d in ICLR,"},{"key":"1539_CR21","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) \u201cEnd-to-end object detection with transformers,\u201d in ECCV, pp. 213\u2013229","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"1539_CR22","doi-asserted-by":"crossref","unstructured":"Strudel R, Garcia R, Laptev I, Schmid C (2021) \u201cSegmenter: Transformer for semantic segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV48922.2021.00717"},{"key":"1539_CR23","doi-asserted-by":"crossref","unstructured":"Zheng S, Lu J, Zhao H, Zhu X, Luo Z, Wang Y, Fu Y, Feng J, Xiang T, Torr PH, et al. (2021) \u201cRethinking semantic segmentation from a sequence-to-sequence perspective with transformers,\u201d in CVPR, pp. 6881\u20136890","DOI":"10.1109\/CVPR46437.2021.00681"},{"key":"1539_CR24","doi-asserted-by":"crossref","unstructured":"Wang Y, Xu Z, Wang X, Shen C, Cheng B, Shen H, Xia H (2021) \u201cEnd-to-end video instance segmentation with transformers,\u201d in CVPR, pp. 8741\u20138750","DOI":"10.1109\/CVPR46437.2021.00863"},{"key":"1539_CR25","doi-asserted-by":"crossref","unstructured":"Wu J, Jiang Y, Sun P, Yuan Z, Luo P (2022) \u201cLanguage as queries for referring video object segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR52688.2022.00492"},{"key":"1539_CR26","doi-asserted-by":"crossref","unstructured":"Meng D, Chen X, Fan Z, Zeng G, Li H, Yuan Y, Sun L, Wang J (2021) \u201cConditional detr for fast training convergence,\u201d in ICCV,","DOI":"10.1109\/ICCV48922.2021.00363"},{"key":"1539_CR27","doi-asserted-by":"crossref","unstructured":"Zhao H, Shi J, Qi X, Wang X, Jia J (2017) \u201cPyramid scene parsing network,\u201d in CVPR,","DOI":"10.1109\/CVPR.2017.660"},{"key":"1539_CR28","doi-asserted-by":"crossref","unstructured":"Fu J, Liu J, Tian H, Li Y, Bao Y, Fang Z, Lu H (2019) \u201cDual attention network for scene segmentation,\u201d in CVPR, pp. 3146\u20133154","DOI":"10.1109\/CVPR.2019.00326"},{"key":"1539_CR29","doi-asserted-by":"crossref","unstructured":"Huang Z, Wang X, Huang L, Huang C, Wei Y, Liu W (2019) \u201cCcnet: Criss-cross attention for semantic segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV.2019.00069"},{"key":"1539_CR30","doi-asserted-by":"crossref","unstructured":"Huang Z, Wang X, Wei Y, Huang L, Shi H, Liu W, Huang TS (2020) \u201cCcnet: Criss-cross attention for semantic segmentation,\u201d TPAMI,","DOI":"10.1109\/ICCV.2019.00069"},{"key":"1539_CR31","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) \u201cSwin Transformer: Hierarchical vision transformer using shifted windows,\u201d in ICCV, pp. 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1539_CR32","unstructured":"Bao H, Dong L, Piao S, Wei F (2022) \u201cBEit: BERT pre-training of image transformers,\u201d in ICLR,"},{"key":"1539_CR33","doi-asserted-by":"crossref","unstructured":"Cheng B, Misra I, Schwing AG, Kirillov A, Girdhar R (2022) \u201cMasked-attention mask transformer for universal image segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR52688.2022.00135"},{"key":"1539_CR34","doi-asserted-by":"crossref","unstructured":"Zhang C, Lin G, Liu F, Yao R, Shen C (2019) \u201cCanet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning,\u201d in CVPR,","DOI":"10.1109\/CVPR.2019.00536"},{"key":"1539_CR35","doi-asserted-by":"crossref","unstructured":"Yang B, Liu C, Li B, Jiao J, Ye Q (2020) \u201cPrototype mixture models for few-shot semantic segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-030-58598-3_45"},{"key":"1539_CR36","doi-asserted-by":"crossref","unstructured":"Liu J, Bao Y, Xie G-S, Xiong H, Sonke J-J, Gavves E (2022) \u201cDynamic prototype convolution network for few-shot semantic segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR52688.2022.01126"},{"key":"1539_CR37","doi-asserted-by":"crossref","unstructured":"Fan Q, Pei W, Tai Y-W, Tang C-K (2022) \u201cSelf-support few-shot semantic segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-031-19800-7_41"},{"key":"1539_CR38","doi-asserted-by":"crossref","unstructured":"Lang C, Cheng G, Tu B, Han J (2022) \u201cLearning what not to segment: A new perspective on few-shot segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR52688.2022.00789"},{"key":"1539_CR39","doi-asserted-by":"crossref","unstructured":"Liu Y, Liu N, Cao Q, Yao X, Han J, Shao L (2022) \u201cLearning non-target knowledge for few-shot semantic segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR52688.2022.01128"},{"key":"1539_CR40","doi-asserted-by":"crossref","unstructured":"Yang Y, Chen Q, Feng Y, Huang T (2023) \u201cMianet: Aggregating unbiased instance and general information for few-shot semantic segmentation,\u201d in CVPR, pp. 7131\u20137140","DOI":"10.1109\/CVPR52729.2023.00689"},{"issue":"11","key":"1539_CR41","doi-asserted-by":"publisher","first-page":"6609","DOI":"10.1109\/TCSVT.2023.3265075","volume":"33","author":"L Zhang","year":"2023","unstructured":"Zhang L, Zhang X, Wang Q, Wu W, Chang X, Liu J (2023) Rpmg-fss: Robust prior mask guided few-shot semantic segmentation. IEEE Trans Circuits Syst Video Technol 33(11):6609\u20136621","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"1539_CR42","doi-asserted-by":"publisher","first-page":"1432","DOI":"10.1109\/TIP.2024.3364056","volume":"33","author":"Y Chen","year":"2024","unstructured":"Chen Y, Jiang R, Zheng Y, Sheng B, Yang Z-X, Wu E (2024) Dual branch multi-level semantic learning for few-shot segmentation. IEEE Trans Image Process 33:1432\u20131447","journal-title":"IEEE Trans Image Process"},
{"key":"1539_CR43","doi-asserted-by":"crossref","unstructured":"Zhang C, Lin G, Liu F, Guo J, Wu Q, Yao R (2019) \u201cPyramid graph networks with connection attentions for region-based one-shot semantic segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV.2019.00968"},{"key":"1539_CR44","doi-asserted-by":"crossref","unstructured":"Min J, Kang D, Cho M (2021) \u201cHypercorrelation squeeze for few-shot segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV48922.2021.00686"},{"key":"1539_CR45","doi-asserted-by":"crossref","unstructured":"Hong S, Cho S, Nam J, Lin S, Kim S (2022) \u201cCost aggregation with 4d convolutional swin transformer for few-shot segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-031-19818-2_7"},{"key":"1539_CR46","doi-asserted-by":"publisher","first-page":"8580","DOI":"10.1109\/TMM.2023.3238521","volume":"25","author":"H Liu","year":"2023","unstructured":"Liu H, Peng P, Chen T, Wang Q, Yao Y, Hua X-S (2023) Fecanet: Boosting few-shot semantic segmentation with feature-enhanced context-aware network. IEEE Trans Multimedia 25:8580\u20138592","journal-title":"IEEE Trans Multimedia"},{"key":"1539_CR47","doi-asserted-by":"crossref","unstructured":"Chang Z, Gao X, Li N, Zhou H, Lu Y (2024) \u201cDrnet: Disentanglement and recombination network for few-shot semantic segmentation,\u201d IEEE Transactions on Circuits and Systems for Video Technology, pp. 1\u20131,","DOI":"10.1109\/TCSVT.2024.3358679"},{"key":"1539_CR48","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Li F-F (2009) \u201cImagenet: A large-scale hierarchical image database,\u201d in CVPR,","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"1539_CR49","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) \u201cDeep residual learning for image recognition,\u201d in CVPR,","DOI":"10.1109\/CVPR.2016.90"},{"key":"1539_CR50","doi-asserted-by":"crossref","unstructured":"Liu Y, Zhang X, Zhang S, He X (2020) \u201cPart-aware prototype network for few-shot semantic segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-030-58545-7_9"},{"key":"1539_CR51","doi-asserted-by":"crossref","unstructured":"Boudiaf M, Kervadec H, Masud ZI, Piantanida P, Ayed IB, Dolz J (2021) \u201cFew-shot segmentation without meta-learning: A good transductive inference is all you need?\u201d in CVPR,","DOI":"10.1109\/CVPR46437.2021.01376"},{"key":"1539_CR52","doi-asserted-by":"crossref","unstructured":"Liu Y, Lu N, Yao X, Han J (2022) \u201cIntermediate prototype mining transformer for few-shot semantic segmentation,\u201d in NeurIPS,","DOI":"10.1109\/CVPR52688.2022.01128"},{"key":"1539_CR53","doi-asserted-by":"crossref","unstructured":"Nguyen K, Todorovic S (2019) \u201cFeature weighting and boosting for few-shot segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV.2019.00071"},{"key":"1539_CR54","doi-asserted-by":"crossref","unstructured":"Nguyen K, Todorovic S (2019) \u201cFeature weighting and boosting for few-shot segmentation,\u201d in ICCV,","DOI":"10.1109\/ICCV.2019.00071"},{"key":"1539_CR55","doi-asserted-by":"crossref","unstructured":"Everingham M, Gool LV, Williams CKI, Winn J, Zisserman A (2010) \u201cThe pascal visual object classes (VOC) challenge,\u201d IJCV,","DOI":"10.1007\/s11263-009-0275-4"},{"key":"1539_CR56","doi-asserted-by":"crossref","unstructured":"Hariharan B, Arbel\u00e1ez P, Girshick R, Malik J (2014) \u201cSimultaneous detection and segmentation,\u201d in ECCV,","DOI":"10.1007\/978-3-319-10584-0_20"},{"key":"1539_CR57","doi-asserted-by":"crossref","unstructured":"Lin T, Maire M, Belongie SJ, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL (2014) \u201cMicrosoft COCO: common objects in context,\u201d in ECCV,","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"1539_CR58","doi-asserted-by":"crossref","unstructured":"Zhang B, Xiao J, Qin T (2021) \u201cSelf-guided and cross-guided learning for few-shot segmentation,\u201d in CVPR,","DOI":"10.1109\/CVPR46437.2021.00821"},{"key":"1539_CR59","doi-asserted-by":"crossref","unstructured":"Xie G-S, Liu J, Xiong H, Shao L (2021) \u201cScale-aware graph neural network for few-shot semantic segmentation,\u201d in CVPR, pp. 5471\u20135480","DOI":"10.1109\/CVPR46437.2021.00543"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01539-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01539-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01539-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,14]],"date-time":"2024-09-14T15:24:55Z","timestamp":1726327495000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01539-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,10]]},"references-count":59,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2024,10]]}},"alternative-id":["1539"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01539-4","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,10]]},"assertion":[{"value":"20 March 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 June 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 July 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}