{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T09:39:13Z","timestamp":1774949953057,"version":"3.50.1"},"reference-count":63,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2022,11,4]],"date-time":"2022-11-04T00:00:00Z","timestamp":1667520000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,11,4]],"date-time":"2022-11-04T00:00:00Z","timestamp":1667520000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62002369"],"award-info":[{"award-number":["62002369"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Scientific Research Project of National University of Defense Technology","award":["ZK19-03"],"award-info":[{"award-number":["ZK19-03"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61872378"],"award-info":[{"award-number":["61872378"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In the vanilla federated learning (FL) framework, the central server distributes a globally unified model to each client and uses labeled samples for training. However, in most cases, clients are equipped with different devices and are exposed to a variety of situations. 
There are great differences between clients in storage, computing, communication, and other resources, so the unified deep models used in traditional FL cannot fit clients\u2019 personalized resource conditions. Furthermore, traditional FL needs a great deal of labeled data, whereas data labeling requires a large investment of time and resources, which individual clients can hardly afford. As a result, clients often hold only vast amounts of unlabeled data, which conflicts with the requirements of federated learning. To address the aforementioned two issues, we propose Semi-HFL, a semi-supervised federated learning approach for heterogeneous devices, which divides a deep model into a series of small submodels by inserting early exit branches to meet the resource requirements of different devices. Furthermore, considering the limited availability of labeled data, Semi-HFL introduces semi-supervised techniques for training in the above heterogeneous learning process. Specifically, the semi-supervised learning process includes two training phases, unsupervised learning on clients and supervised learning on the server, which makes full use of clients\u2019 unlabeled data. 
Through image classification, text classification, next-word prediction, and multi-task FL experiments on five datasets, it is verified that, compared with the traditional homogeneous learning method, Semi-HFL not only achieves higher accuracy but also significantly reduces the global resource overhead.<\/jats:p>","DOI":"10.1007\/s40747-022-00894-4","type":"journal-article","created":{"date-parts":[[2022,11,4]],"date-time":"2022-11-04T07:03:27Z","timestamp":1667545407000},"page":"1995-2017","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["Semi-HFL: semi-supervised federated learning for heterogeneous devices"],"prefix":"10.1007","volume":"9","author":[{"given":"Zhengyi","family":"Zhong","sequence":"first","affiliation":[]},{"given":"Ji","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Weidong","family":"Bao","sequence":"additional","affiliation":[]},{"given":"Jingxuan","family":"Zhou","sequence":"additional","affiliation":[]},{"given":"Xiaomin","family":"Zhu","sequence":"additional","affiliation":[]},{"given":"Xiongtao","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,11,4]]},"reference":[{"key":"894_CR1","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1109\/OJCS.2020.2993259","volume":"1","author":"Q Wu","year":"2020","unstructured":"Wu Q, He K, Chen X (2020) Personalized federated learning for intelligent iot applications: a cloud-edge based framework. IEEE Open J Comput Soc 1:35\u201344","journal-title":"IEEE Open J Comput Soc"},{"key":"894_CR2","unstructured":"Diao E, Ding J, Tarokh V (2020) Heterofl: Computation and communication efficient federated learning for heterogeneous clients. 
In: International Conference on Learning Representations"},{"key":"894_CR3","unstructured":"Wang J, Charles Z, Xu Z, Joshi G, McMahan HB, Al-Shedivat M, Andrew G, Avestimehr S, Daly K, Data D, et al (2021) A field guide to federated optimization. arXiv preprint arXiv:2107.06917"},{"key":"894_CR4","doi-asserted-by":"crossref","unstructured":"Nishio T, Yonetani R (2019) Client selection for federated learning with heterogeneous resources in mobile edge. In: ICC 2019-2019 IEEE International Conference on Communications (ICC), pp. 1\u20137. IEEE","DOI":"10.1109\/ICC.2019.8761315"},{"key":"894_CR5","doi-asserted-by":"crossref","unstructured":"Teerapittayanon S, McDanel B, Kung H-T (2016) Branchynet: fast inference via early exiting from deep neural networks. In: 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2464\u20132469. IEEE","DOI":"10.1109\/ICPR.2016.7900006"},{"key":"894_CR6","first-page":"429","volume":"2","author":"T Li","year":"2020","unstructured":"Li T, Sahu AK, Zaheer M, Sanjabi M, Talwalkar A, Smith V (2020) Federated optimization in heterogeneous networks. Proc Mach Learn Syst 2:429\u2013450","journal-title":"Proc Mach Learn Syst"},{"issue":"2","key":"894_CR7","doi-asserted-by":"publisher","first-page":"639","DOI":"10.1007\/s40747-020-00247-z","volume":"7","author":"H Zhu","year":"2021","unstructured":"Zhu H, Zhang H, Jin Y (2021) From federated learning to federated neural architecture search: a survey. Complex Intell Syst 7(2):639\u2013657","journal-title":"Complex Intell Syst"},{"issue":"6","key":"894_CR8","doi-asserted-by":"publisher","first-page":"3289","DOI":"10.1007\/s40747-021-00519-2","volume":"7","author":"L Zhang","year":"2021","unstructured":"Zhang L, Zhang Z, Guan C (2021) Accelerating privacy-preserving momentum federated learning for industrial cyber-physical systems. 
Complex Intell Syst 7(6):3289\u20133301","journal-title":"Complex Intell Syst"},{"issue":"1","key":"894_CR9","doi-asserted-by":"publisher","first-page":"439","DOI":"10.1007\/s40747-020-00212-w","volume":"7","author":"Q Zhang","year":"2021","unstructured":"Zhang Q, Lu J, Jin Y (2021) Artificial intelligence in recommender systems. Complex Intell Syst 7(1):439\u2013457","journal-title":"Complex Intell Syst"},{"key":"894_CR10","unstructured":"Wang L, Xu S, Wang X, Zhu Q (2020) Addressing class imbalance in federated learning. arXiv preprint arXiv:2008.06217"},{"key":"894_CR11","volume-title":"Dubhe: towards data unbiasedness with homomorphic encryption in federated learning client selection","author":"S Zhang","year":"2021","unstructured":"Zhang S, Li Z, Chen Q, Zheng W, Leng J, Guo M (2021) Dubhe: towards data unbiasedness with homomorphic encryption in federated learning client selection. Association for Computing Machinery, New York"},{"key":"894_CR12","unstructured":"Collins L, Hassani H, Mokhtari A, Shakkottai S (2021) Exploiting shared representations for personalized federated learning. arXiv preprint arXiv:2102.07078"},{"key":"894_CR13","unstructured":"Bonawitz K, Eichner H, Grieskamp W, Huba D, Ingerman A, Ivanov V, Kiddon C, Kone\u010dn\u1ef3 J, Mazzocchi S, McMahan HB et al (2019) Towards federated learning at scale: system design. arXiv preprint arXiv:1902.01046"},{"key":"894_CR14","unstructured":"Xie C, Koyejo S, Gupta I (2020) Asynchronous federated optimization"},{"key":"894_CR15","unstructured":"Dinh CT, Tran NH, Nguyen TD (2020) Personalized federated learning with moreau envelopes. arXiv preprint arXiv:2006.08848"},{"key":"894_CR16","unstructured":"Mansour Y, Mohri M, Ro J, Suresh AT (2020) Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619"},{"key":"894_CR17","unstructured":"Hanzely F, Richt\u00e1rik P (2020) Federated learning of a mixture of global and local models. 
arXiv preprint arXiv:2002.05516"},{"key":"894_CR18","unstructured":"Smith V, Chiang C-K, Sanjabi M, Talwalkar A (2017) Federated multi-task learning. arXiv preprint arXiv:1705.10467"},{"key":"894_CR19","unstructured":"Jiang Y, Kone\u010dn\u1ef3 J, Rush K, Kannan S (2019) Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488"},{"key":"894_CR20","unstructured":"Li D, Wang J (2019) Fedmd: heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581"},{"key":"894_CR21","unstructured":"Arivazhagan MG, Aggarwal V, Singh AK, Choudhary S (2019) Federated learning with personalization layers. arXiv preprint arXiv:1912.00818"},{"key":"894_CR22","doi-asserted-by":"crossref","unstructured":"Schneider J, Vlachos M (2020) Personalization of deep learning","DOI":"10.1007\/978-3-658-32182-6_14"},{"key":"894_CR23","doi-asserted-by":"crossref","unstructured":"Wang M, Mo J, Lin J, Wang Z, Du L (2019) Dynexit: a dynamic early-exit strategy for deep residual networks. In: 2019 IEEE International Workshop on Signal Processing Systems (SiPS), pp. 178\u2013183. IEEE","DOI":"10.1109\/SiPS47522.2019.9020551"},{"issue":"4","key":"894_CR24","doi-asserted-by":"publisher","first-page":"623","DOI":"10.1109\/JSTSP.2020.2979669","volume":"14","author":"Y Wang","year":"2020","unstructured":"Wang Y, Shen J, Hu T-K, Xu P, Nguyen T, Baraniuk R, Wang Z, Lin Y (2020) Dual dynamic inference: enabling more efficient, adaptive, and controllable deep inference. IEEE J Select Top Signal Process 14(4):623\u2013633","journal-title":"IEEE J Select Top Signal Process"},{"key":"894_CR25","doi-asserted-by":"crossref","unstructured":"Yang L, Han Y, Chen X, Song S, Dai J, Huang G (2020) Resolution adaptive networks for efficient inference. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
2369\u20132378","DOI":"10.1109\/CVPR42600.2020.00244"},{"key":"894_CR26","doi-asserted-by":"crossref","unstructured":"Soldaini L, Moschitti A (2020) The cascade transformer: an application for efficient answer sentence selection. arXiv preprint arXiv:2005.02534","DOI":"10.18653\/v1\/2020.acl-main.504"},{"key":"894_CR27","doi-asserted-by":"crossref","unstructured":"Xin J, Nogueira R, Yu Y, Lin J (2020) Early exiting bert for efficient document ranking. In: Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pp. 83\u201388","DOI":"10.18653\/v1\/2020.sustainlp-1.11"},{"key":"894_CR28","doi-asserted-by":"crossref","unstructured":"Liu W, Zhou P, Zhao Z, Wang Z, Deng H, Ju Q (2020) Fastbert: a self-distilling bert with adaptive inference time. arXiv preprint arXiv:2004.02178","DOI":"10.18653\/v1\/2020.acl-main.537"},{"key":"894_CR29","unstructured":"Elbayad M, Gu J, Grave E, Auli M (2019) Depth-adaptive transformer. arXiv preprint arXiv:1910.10073"},{"key":"894_CR30","doi-asserted-by":"crossref","unstructured":"Matsubara Y, Levorato M (2021) Neural compression and filtering for edge-assisted real-time object detection in challenged networks. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 2272\u20132279. IEEE","DOI":"10.1109\/ICPR48806.2021.9412388"},{"key":"894_CR31","doi-asserted-by":"crossref","unstructured":"Laskaridis S, Kouris A, Lane ND (2021) Adaptive inference through early-exit networks: design, challenges and directions. arXiv preprint arXiv:2106.05022","DOI":"10.1145\/3469116.3470012"},{"key":"894_CR32","doi-asserted-by":"crossref","unstructured":"Teerapittayanon S, McDanel B, Kung H-T (2017) Distributed deep neural networks over the cloud, the edge and end devices. In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), pp. 328\u2013339. 
IEEE","DOI":"10.1109\/ICDCS.2017.226"},{"key":"894_CR33","unstructured":"Zhou W, Xu C, Ge T, McAuley J, Xu K, Wei F (2020) Bert loses patience: fast and robust inference with early exit. arXiv preprint arXiv:2006.04152"},{"key":"894_CR34","doi-asserted-by":"crossref","unstructured":"Leontiadis I, Laskaridis S, Venieris SI, Lane ND (2021) It\u2019s always personal: using early exits for efficient on-device cnn personalisation. In: Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications, pp. 15\u201321","DOI":"10.1145\/3446382.3448359"},{"key":"894_CR35","doi-asserted-by":"crossref","unstructured":"Li H, Zhang H, Qi X, Yang R, Huang G (2019) Improved techniques for training adaptive deep networks. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 1891\u20131900","DOI":"10.1109\/ICCV.2019.00198"},{"key":"894_CR36","doi-asserted-by":"crossref","unstructured":"Berestizshevsky K, Even G (2019) Dynamically sacrificing accuracy for reduced computation: Cascaded inference based on softmax confidence. In: International Conference on Artificial Neural Networks, pp. 306\u2013320. Springer","DOI":"10.1007\/978-3-030-30484-3_26"},{"key":"894_CR37","unstructured":"Gormez A, Koyuncu E (2021) Class means as an early exit decision mechanism. arXiv preprint arXiv:2103.01148"},{"key":"894_CR38","unstructured":"Chen X, Dai H, Li Y, Gao X, Song L (2020) Learning to stop while learning to predict. In: International Conference on Machine Learning, pp. 1520\u20131530. PMLR"},{"key":"894_CR39","doi-asserted-by":"crossref","unstructured":"Dai X, Kong X, Guo T (2020) Epnet: Learning to exit with flexible multi-branch network. In: Proceedings of the 29th ACM International Conference on Information and Knowledge Management, pp. 
235\u2013244","DOI":"10.1145\/3340531.3411973"},{"key":"894_CR40","doi-asserted-by":"crossref","unstructured":"Scardapane S, Comminiello D, Scarpiniti M, Baccarelli E, Uncini A (2020) Differentiable branching in deep networks for fast inference. In: ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4167\u20134171. IEEE","DOI":"10.1109\/ICASSP40776.2020.9054209"},{"key":"894_CR41","doi-asserted-by":"crossref","unstructured":"Yang X, Song Z, King I, Xu Z (2021) A survey on deep semi-supervised learning. arXiv preprint arXiv:2103.00550","DOI":"10.1109\/TKDE.2022.3220219"},{"issue":"4","key":"894_CR42","doi-asserted-by":"publisher","first-page":"373","DOI":"10.1109\/TIT.1970.1054472","volume":"16","author":"A Agrawala","year":"1970","unstructured":"Agrawala A (1970) Learning with a probabilistic teacher. IEEE Trans Inform Theory 16(4):373\u2013379","journal-title":"IEEE Trans Inform Theory"},{"issue":"1","key":"894_CR43","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1109\/TIT.1967.1053952","volume":"13","author":"S Fralick","year":"1967","unstructured":"Fralick S (1967) Learning to recognize patterns without a teacher. IEEE Trans Inform Theory 13(1):57\u201364","journal-title":"IEEE Trans Inform Theory"},{"issue":"3","key":"894_CR44","doi-asserted-by":"publisher","first-page":"363","DOI":"10.1109\/TIT.1965.1053799","volume":"11","author":"H Scudder","year":"1965","unstructured":"Scudder H (1965) Probability of error of some adaptive pattern-recognition machines. IEEE Trans Inform Theory 11(3):363\u2013371","journal-title":"IEEE Trans Inform Theory"},{"key":"894_CR45","doi-asserted-by":"crossref","unstructured":"Zhang B, Wang Y, Hou W, Wu H, Wang J, Okumura M, Shinozaki T (2021) Flexmatch: boosting semi-supervised learning with curriculum pseudo labeling. 
Adv Neural Inform Process Syst 34","DOI":"10.1007\/978-3-030-92270-2_1"},{"key":"894_CR46","doi-asserted-by":"crossref","unstructured":"Li D, Dick S (2022) Semi-supervised multi-label classification using an extended graph-based manifold regularization. Complex Intell Syst:1\u201317","DOI":"10.1007\/s40747-021-00611-7"},{"key":"894_CR47","doi-asserted-by":"crossref","unstructured":"Mandapati S, Kadry S, Kumar RL, Sutham K, Thinnukool O (2022) Deep learning model construction for a semi-supervised classification with feature learning. Complex Intell Syst:1\u201311","DOI":"10.1007\/s40747-022-00641-9"},{"key":"894_CR48","unstructured":"Miller DJ, Uyar H (1996) A mixture of experts classifier with learning based on both labelled and unlabelled data. Adv Neural Inform Process Syst 9"},{"key":"894_CR49","unstructured":"Odena A (2016) Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583"},{"key":"894_CR50","first-page":"585","volume":"14","author":"M Belkin","year":"2001","unstructured":"Belkin M, Niyogi P (2001) Laplacian eigenmaps and spectral techniques for embedding and clustering. Nips 14:585\u2013591","journal-title":"Nips"},{"key":"894_CR51","doi-asserted-by":"crossref","unstructured":"Ke Z, Wang D, Yan Q, Ren J, Lau RW (2019) Dual student: breaking the limits of the teacher in semi-supervised learning. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 6728\u20136736","DOI":"10.1109\/ICCV.2019.00683"},{"key":"894_CR52","doi-asserted-by":"crossref","unstructured":"Chen P, Ma T, Qin X, Xu W, Zhou S (2020) Data-efficient semi-supervised learning by reliable edge mining. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 9192\u20139201","DOI":"10.1109\/CVPR42600.2020.00921"},{"key":"894_CR53","doi-asserted-by":"crossref","unstructured":"Li S, Liu B, Chen D, Chu Q, Yuan L, Yu N (2020) Density-aware graph for deep semi-supervised visual recognition. 
In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 13400\u201313409","DOI":"10.1109\/CVPR42600.2020.01341"},{"issue":"3","key":"894_CR54","doi-asserted-by":"publisher","first-page":"415","DOI":"10.1007\/s10115-009-0209-z","volume":"24","author":"Z-H Zhou","year":"2010","unstructured":"Zhou Z-H, Li M (2010) Semi-supervised learning by disagreement. Knowl Inform Syst 24(3):415\u2013439","journal-title":"Knowl Inform Syst"},{"key":"894_CR55","doi-asserted-by":"crossref","unstructured":"Qiao S, Shen W, Zhang Z, Wang B, Yuille A (2018) Deep co-training for semi-supervised image recognition. In: Proceedings of the European Conference on Computer Vision (eccv), pp. 135\u2013152","DOI":"10.1007\/978-3-030-01267-0_9"},{"key":"894_CR56","unstructured":"Berthelot D, Carlini N, Cubuk ED, Kurakin A, Sohn K, Zhang H, Raffel C (2019) Remixmatch: semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785"},{"key":"894_CR57","unstructured":"Li J, Socher R, Hoi SC (2020) Dividemix: learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394"},{"key":"894_CR58","doi-asserted-by":"crossref","unstructured":"Xie Q, Luong M-T, Hovy E, Le QV (2020) Self-training with noisy student improves imagenet classification. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687\u201310698","DOI":"10.1109\/CVPR42600.2020.01070"},{"key":"894_CR59","unstructured":"Sohn K, Berthelot D, Li C-L, Zhang Z, Carlini N, Cubuk ED, Kurakin A, Zhang H, Raffel C (2020) Fixmatch: simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685"},{"key":"894_CR60","unstructured":"McMahan B, Moore E, Ramage D, Hampson S, Arcas BAy (2017) Communication-Efficient Learning of Deep Networks from Decentralized Data. 
In: Singh A, Zhu J (eds) Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 54, pp. 1273\u20131282. PMLR"},{"key":"894_CR61","doi-asserted-by":"crossref","unstructured":"Kim Y (2014) Convolutional neural networks for sentence classification","DOI":"10.3115\/v1\/D14-1181"},{"key":"894_CR62","unstructured":"Caldas S, Duddu SMK, Wu P, Li T, Kone\u010dn\u00fd J, McMahan HB, Smith V, Talwalkar A (2018) Leaf: a benchmark for federated settings. arXiv preprint arXiv:1812.01097"},{"key":"894_CR63","first-page":"3","volume":"1","author":"Y Tan","year":"2022","unstructured":"Tan Y, Long G, Liu L, Zhou T, Lu Q, Jiang J, Zhang C (2022) Fedproto: federated prototype learning across heterogeneous clients. AAAI Conf Artif Intell 1:3","journal-title":"AAAI Conf Artif Intell"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00894-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00894-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00894-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,18]],"date-time":"2023-04-18T09:41:45Z","timestamp":1681810905000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00894-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,4]]},"references-count":63,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["894"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00894-4","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,4]]},"assertion":[{"value":"12 March 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 October 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 November 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}