{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T07:27:37Z","timestamp":1740122857182,"version":"3.37.3"},"reference-count":53,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,6,2]],"date-time":"2024-06-02T00:00:00Z","timestamp":1717286400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,6,2]],"date-time":"2024-06-02T00:00:00Z","timestamp":1717286400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"the National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["No. U1804147"],"award-info":[{"award-number":["No. U1804147"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Innovative Scientists and Technicians Team of Henan Provincial High Education","award":["20IRTSTHN019"],"award-info":[{"award-number":["20IRTSTHN019"]}]},{"name":"Science and Technology Project of Henan Province","award":["No.212102210508"],"award-info":[{"award-number":["No.212102210508"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Process Lett"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Lower versions of EfficientDet (such as D0, D1) have smaller network structures and parameter sizes, but lower detection accuracy. Higher versions exhibit higher accuracy, but the increase in network complexity poses challenges for real-time processing and hardware requirements. To meet the higher accuracy requirements under limited computational resources, this paper introduces SpanEffiDet based on the channel adaptive frequency filter (CAFF) and the Span-Path Bidirectional Feature Pyramid structure. 
Firstly, the proposed CAFF module transforms channel information into the frequency domain through the Fourier transform and extracts key features through semantically adaptive frequency filtering, thus eliminating redundant channel information in EfficientNet. Simultaneously, the module computes weights across channels at fine granularity and captures the detailed information of element features. Secondly, a bidirectional feature pyramid network, the multi-level cross-BiFPN, with multiple layers and nodes, is proposed to build cross-level information transmission that incorporates both the semantic and positional information of the target. This design enables the network to detect objects with significant size differences in complex environments more effectively. Finally, by introducing Generalized Focal Loss V2, reliable localization quality estimation scores are predicted from the distribution statistics of bounding boxes, thereby improving localization accuracy. The experimental results indicate that on the MS COCO dataset, SpanEffiDet-D0 achieves an AP improvement of 3.3% over the original EfficientDet series algorithms. 
Similarly, on the PASCAL VOC 2007 and 2012 datasets, the mAP of SpanEffiDet-D0 is 1.66% and 2.65% higher, respectively, than that of EfficientDet-D0.<\/jats:p>","DOI":"10.1007\/s11063-024-11653-6","type":"journal-article","created":{"date-parts":[[2024,6,2]],"date-time":"2024-06-02T17:01:37Z","timestamp":1717347697000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["SpanEffiDet: Span-Scale and Span-Path Feature Fusion for Object Detection"],"prefix":"10.1007","volume":"56","author":[{"given":"Qunpo","family":"Liu","sequence":"first","affiliation":[]},{"given":"Yi","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Ruxin","family":"Gao","sequence":"additional","affiliation":[]},{"given":"Xuhui","family":"Bu","sequence":"additional","affiliation":[]},{"given":"Naohiko","family":"Hanajima","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,6,2]]},"reference":[{"key":"11653_CR1","doi-asserted-by":"publisher","unstructured":"Lin TY, Doll\u00e1r P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2117\u20132125) https:\/\/doi.org\/10.1109\/CVPR.2017.106","DOI":"10.1109\/CVPR.2017.106"},{"key":"11653_CR2","doi-asserted-by":"publisher","unstructured":"Liu S, Qi L, Qin H, Shi J, Jia J (2018) Path aggregation network for instance segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8759\u20138768) https:\/\/doi.org\/10.1109\/CVPR.2018.00913","DOI":"10.1109\/CVPR.2018.00913"},{"key":"11653_CR3","doi-asserted-by":"publisher","unstructured":"Ghiasi G, Lin TY, Le QV (2019) Nas-fpn: learning scalable feature pyramid architecture for object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 
7036\u20137045) https:\/\/doi.org\/10.1109\/CVPR.2019.00720","DOI":"10.1109\/CVPR.2019.00720"},{"key":"11653_CR4","doi-asserted-by":"publisher","unstructured":"Qiao S, Chen LC, Yuille A (2021) Detectors: detecting objects with recursive feature pyramid and switchable atrous convolution. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 10213\u201310224) https:\/\/doi.org\/10.1109\/CVPR46437.2021.01008","DOI":"10.1109\/CVPR46437.2021.01008"},{"key":"11653_CR5","unstructured":"Liu S, Huang D, Wang Y (2019) Learning spatial fusion for single-shot object detection. arXiv preprint arXiv:1911.09516"},{"key":"11653_CR6","doi-asserted-by":"publisher","unstructured":"Tan M, Pang R, Le QV (2020) Efficientdet: scalable and efficient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 10781\u201310790) https:\/\/doi.org\/10.1109\/CVPR42600.2020.01079","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"11653_CR7","unstructured":"Tan M, Le Q (2019) Efficientnet: rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105\u20136114). PMLR"},{"key":"11653_CR8","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770\u2013778) https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"11653_CR9","doi-asserted-by":"publisher","unstructured":"Zagoruyko S, Komodakis N (2016) Wide residual networks. arXiv preprint arXiv:1605.07146https:\/\/doi.org\/10.5244\/C.30.87","DOI":"10.5244\/C.30.87"},{"key":"11653_CR10","doi-asserted-by":"publisher","unstructured":"Wang B, Lu T, Zhang Y (2020) Feature-driven super-resolution for object detection. In 2020 5th International conference on control, robotics and cybernetics (CRC) (pp. 211\u2013215). 
IEEEhttps:\/\/doi.org\/10.1109\/CRC51253.2020.9253468","DOI":"10.1109\/CRC51253.2020.9253468"},{"key":"11653_CR11","doi-asserted-by":"publisher","unstructured":"Tan M, Chen B, Pang R, Vasudevan V, Sandler M, Howard A, Le QV (2019) Mnasnet: Platform-aware neural architecture search for mobile. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 2820\u20132828) https:\/\/doi.org\/10.1109\/CVPR.2019.00293","DOI":"10.1109\/CVPR.2019.00293"},{"key":"11653_CR12","first-page":"14541","volume":"35","author":"Z Pan","year":"2022","unstructured":"Pan Z, Cai J, Zhuang B (2022) Fast vision transformers with hilo attention. Adv Neural Inf Process Syst 35:14541\u201314554","journal-title":"Adv Neural Inf Process Syst"},{"key":"11653_CR13","unstructured":"Li J, Xia X, Li W, Li H, Wang X, Xiao X, Pan X (2022) Next-vit: next generation vision transformer for efficient deployment in realistic industrial scenarios. arXiv preprint arXiv:2207.05501"},{"key":"11653_CR14","unstructured":"Mehta S, Rastegari M (2021). Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer. arXiv preprint arXiv:2110.02178"},{"key":"11653_CR15","unstructured":"Katharopoulos A, Vyas A, Pappas N, Fleuret F (2020) Transformers are rnns: fast autoregressive transformers with linear attention. In International conference on machine learning (pp. 5156\u20135165). PMLR"},{"key":"11653_CR16","doi-asserted-by":"publisher","unstructured":"Xiong Y, Zeng Z, Chakraborty R, Tan M, Fung G, Li Y, Singh V (2021) Nystr\u00f6mformer: a nystr\u00f6m-based algorithm for approximating self-attention. In: Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 16, pp. 14138\u201314148). https:\/\/doi.org\/10.1609\/aaai.v35i16.17664","DOI":"10.1609\/aaai.v35i16.17664"},{"key":"11653_CR17","volume-title":"Discrete-time signal processing","author":"AV Oppenheim","year":"1999","unstructured":"Oppenheim AV (1999) Discrete-time signal processing. 
Pearson Education India, London"},{"key":"11653_CR18","doi-asserted-by":"publisher","unstructured":"Zhang L, Zhou S, Guan J, Zhang J (2021) Accurate few-shot object detection with support-query mutual guidance and hybrid loss. In: Proceedings of the IEEE\/CVF Conference on computer vision and pattern recognition (pp. 14424\u201314432). https:\/\/doi.org\/10.1109\/CVPR46437.2021.01419","DOI":"10.1109\/CVPR46437.2021.01419"},{"key":"11653_CR19","unstructured":"McGillem CD, Cooper GR (1991) Continuous and discrete signal and system analysis. (No Title)"},{"key":"11653_CR20","doi-asserted-by":"publisher","unstructured":"Li X, Wang W, Hu X, Li J, Tang J, Yang J (2021) Generalized focal loss v2: learning reliable localization quality estimation for dense object detection. In: Proceedings of the IEEE\/CVF Conference on computer vision and pattern recognition (pp. 11632\u201311641) https:\/\/doi.org\/10.1109\/CVPR46437.2021.01146","DOI":"10.1109\/CVPR46437.2021.01146"},{"key":"11653_CR21","doi-asserted-by":"publisher","unstructured":"Lin TY, Goyal P, Girshick R, He K, Doll\u00e1r P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision (pp. 2980\u20132988) https:\/\/doi.org\/10.1109\/ICCV.2017.324","DOI":"10.1109\/ICCV.2017.324"},{"key":"11653_CR22","first-page":"21002","volume":"33","author":"X Li","year":"2020","unstructured":"Li X, Wang W, Wu L, Chen S, Hu X, Li J, Yang J (2020) Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Advn Neural Inf Process Syst 33:21002\u201321012","journal-title":"Advn Neural Inf Process Syst"},{"key":"11653_CR23","doi-asserted-by":"publisher","unstructured":"Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Zitnick, CL (2014) Microsoft coco: Common objects in context. In: Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6\u201312, 2014, Proceedings, Part V 13 (pp. 740\u2013755). 
Springer International Publishing https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"11653_CR24","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1007\/s11263-014-0733-5","volume":"111","author":"M Everingham","year":"2015","unstructured":"Everingham M, Eslami SA, Van Gool L, Williams CK, Winn J, Zisserman A (2015) The pascal visual object classes challenge: a retrospective. Int J Comput Vis 111:98\u2013136. https:\/\/doi.org\/10.1007\/s11263-014-0733-5","journal-title":"Int J Comput Vis"},{"key":"11653_CR25","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","volume":"88","author":"M Everingham","year":"2010","unstructured":"Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A (2010) The pascal visual object classes voc challenge. Int J Comput Vis 88:303\u2013338. https:\/\/doi.org\/10.1007\/s11263-009-0275-4","journal-title":"Int J Comput Vis"},{"key":"11653_CR26","doi-asserted-by":"publisher","unstructured":"Duan K, Bai S, Xie L, Qi H, Huang Q, Tian Q (2019) Centernet: keypoint triplets for object detection. In: Proceedings of the IEEE\/CVF international conference on computer vision (pp. 6569\u20136578) https:\/\/doi.org\/10.1109\/ICCV.2019.00667","DOI":"10.1109\/ICCV.2019.00667"},{"key":"11653_CR27","doi-asserted-by":"publisher","unstructured":"Li S, Yang L, Huang J, Hua XS, Zhang L (2019) Dynamic anchor feature selection for single-shot object detection. In: Proceedings of the IEEE\/CVF international conference on computer vision (pp. 6609\u20136618) https:\/\/doi.org\/10.1109\/ICCV.2019.00671","DOI":"10.1109\/ICCV.2019.00671"},{"key":"11653_CR28","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556"},{"key":"11653_CR29","doi-asserted-by":"publisher","unstructured":"Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC (2016) Ssd: single shot multibox detector. 
In: Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11\u201314, 2016, Proceedings, Part I 14 (pp. 21\u201337). Springer International Publishing https:\/\/doi.org\/10.1007\/978-3-319-46448-0_2","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"11653_CR30","doi-asserted-by":"publisher","unstructured":"Chao P, Kao CY, Ruan YS, Huang CH, Lin YL (2019) Hardnet: a low memory traffic network. In: Proceedings of the IEEE\/CVF international conference on computer vision (pp. 3552\u20133561) https:\/\/doi.org\/10.1109\/ICCV.2019.00365","DOI":"10.1109\/ICCV.2019.00365"},{"key":"11653_CR31","doi-asserted-by":"publisher","unstructured":"Zhao Q, Sheng T, Wang Y, Tang Z, Chen Y, Cai L, Ling H (2019) M2det: a single-shot object detector based on multi-level feature pyramid network. In: Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 9259\u20139266) https:\/\/doi.org\/10.1609\/aaai.v33i01.33019259","DOI":"10.1609\/aaai.v33i01.33019259"},{"key":"11653_CR32","doi-asserted-by":"publisher","unstructured":"Liu S, Huang D (2018). Receptive field block net for accurate and fast object detection. In: Proceedings of the European conference on computer vision (ECCV) (pp. 385\u2013400) https:\/\/doi.org\/10.1007\/978-3-030-01252-6_24","DOI":"10.1007\/978-3-030-01252-6_24"},{"key":"11653_CR33","doi-asserted-by":"publisher","unstructured":"Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7263\u20137271) https:\/\/doi.org\/10.1109\/CVPR.2017.690","DOI":"10.1109\/CVPR.2017.690"},{"key":"11653_CR34","unstructured":"Redmon J, Farhadi A (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767"},{"key":"11653_CR35","unstructured":"Bochkovskiy A, Wang CY, Liao HYM (2020) Yolov4: optimal speed and accuracy of object detection. 
arXiv preprint arXiv:2004.10934"},{"key":"11653_CR36","doi-asserted-by":"publisher","unstructured":"Zhang S, Wen L, Bian X, Lei Z, Li SZ (2018) Single-shot refinement neural network for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4203\u20134212) https:\/\/doi.org\/10.1109\/CVPR.2018.00442","DOI":"10.1109\/CVPR.2018.00442"},{"key":"11653_CR37","unstructured":"Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 28"},{"key":"11653_CR38","doi-asserted-by":"publisher","unstructured":"He K, Gkioxari G, Doll\u00e1r P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision (pp. 2961\u20132969) https:\/\/doi.org\/10.1109\/ICCV.2017.322","DOI":"10.1109\/ICCV.2017.322"},{"key":"11653_CR39","unstructured":"Wang RJ, Li X, Ling CX (2018) Pelee: a real-time object detection system on mobile devices. Adv Neural Inf Process Syst 31"},{"key":"11653_CR40","doi-asserted-by":"publisher","unstructured":"Liu Z, Zheng T, Xu G, Yang Z, Liu H, Cai D (2020) Training-time-friendly network for real-time object detection. In: proceedings of the AAAI conference on artificial intelligence (Vol. 34, No. 07, pp. 11685\u201311692) https:\/\/doi.org\/10.1609\/aaai.v34i07.6838","DOI":"10.1609\/aaai.v34i07.6838"},{"key":"11653_CR41","doi-asserted-by":"publisher","unstructured":"Wang CY, Bochkovskiy A, Liao HYM (2023) YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 7464\u20137475). https:\/\/doi.org\/10.1109\/CVPR52729.2023.00721","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"11653_CR42","unstructured":"Jocher, G., Chaurasia, A., & Qiu, J. (2023). YOLO by Ultralytics. 
URL: https:\/\/github.com\/ultralytics\/ultralytics"},{"key":"11653_CR43","doi-asserted-by":"publisher","unstructured":"Chen Q, Wang Y, Yang T, Zhang X, Cheng J, Sun J (2021) You only look one-level feature. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 13039\u201313048). https:\/\/doi.org\/10.1109\/CVPR46437.2021.01284","DOI":"10.1109\/CVPR46437.2021.01284"},{"key":"11653_CR44","doi-asserted-by":"publisher","unstructured":"Zhang H, Wang Y, Dayoub F, Sunderhauf N (2021) Varifocalnet: an iou-aware dense object detector. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 8514\u20138523). https:\/\/doi.org\/10.1109\/CVPR46437.2021.00841","DOI":"10.1109\/CVPR46437.2021.00841"},{"key":"11653_CR45","doi-asserted-by":"publisher","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: European conference on computer vision (pp. 213\u2013229). Cham: Springer International Publishing. https:\/\/doi.org\/10.1007\/978-3-030-58452-8_13","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"11653_CR46","doi-asserted-by":"publisher","unstructured":"Meng D, Chen X, Fan Z, Zeng G, Li H, Yuan Y, Wang J (2021) Conditional detr for fast training convergence. In: Proceedings of the IEEE\/CVF International conference on computer vision (pp. 3651\u20133660). https:\/\/doi.org\/10.1109\/ICCV48922.2021.00363","DOI":"10.1109\/ICCV48922.2021.00363"},{"key":"11653_CR47","doi-asserted-by":"publisher","unstructured":"Wang Y, Zhang X, Yang T, Sun J (2021) Anchor DETR: query design for transformer-based object detection. arXiv preprint arXiv:2109.07107, 3(6). https:\/\/doi.org\/10.1609\/aaai.v36i3.20158","DOI":"10.1609\/aaai.v36i3.20158"},{"key":"11653_CR48","doi-asserted-by":"publisher","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700\u20134708) https:\/\/doi.org\/10.1109\/CVPR.2017.243","DOI":"10.1109\/CVPR.2017.243"},{"key":"11653_CR49","unstructured":"Dai J, Li Y, He K, Sun J (2016) R-fcn: object detection via region-based fully convolutional networks. Adv Neural Inf Process Syst 29"},{"key":"11653_CR50","doi-asserted-by":"publisher","unstructured":"Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779\u2013788) https:\/\/doi.org\/10.1109\/CVPR.2016.91","DOI":"10.1109\/CVPR.2016.91"},{"key":"11653_CR51","doi-asserted-by":"publisher","unstructured":"Shen Z, Liu Z, Li J, Jiang YG, Chen Y, Xue X (2017) Dsod: learning deeply supervised object detectors from scratch. In: Proceedings of the IEEE international conference on computer vision (pp. 1919\u20131927) https:\/\/doi.org\/10.1109\/ICCV.2017.212","DOI":"10.1109\/ICCV.2017.212"},{"key":"11653_CR52","unstructured":"Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and< 0.5 MB model size. arXiv preprint arXiv:1602.07360"},{"key":"11653_CR53","unstructured":"Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Adam H (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. 
arXiv preprint arXiv:1704.04861"}],"container-title":["Neural Processing Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-024-11653-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11063-024-11653-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-024-11653-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,15]],"date-time":"2024-07-15T11:29:01Z","timestamp":1721042941000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11063-024-11653-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,2]]},"references-count":53,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2024,6]]}},"alternative-id":["11653"],"URL":"https:\/\/doi.org\/10.1007\/s11063-024-11653-6","relation":{},"ISSN":["1573-773X"],"issn-type":[{"type":"electronic","value":"1573-773X"}],"subject":[],"published":{"date-parts":[[2024,6,2]]},"assertion":[{"value":"9 May 2024","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 June 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"193"}}