{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T17:51:48Z","timestamp":1769017908084,"version":"3.49.0"},"reference-count":51,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,7,3]],"date-time":"2023-07-03T00:00:00Z","timestamp":1688342400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,3]],"date-time":"2023-07-03T00:00:00Z","timestamp":1688342400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61941116"],"award-info":[{"award-number":["61941116"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U1936119"],"award-info":[{"award-number":["U1936119"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012165","name":"Key Technologies Research and Development Program","doi-asserted-by":"publisher","award":["2019QY(Y)0602"],"award-info":[{"award-number":["2019QY(Y)0602"]}],"id":[{"id":"10.13039\/501100012165","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Recently, deep neural networks have been shown to be vulnerable to backdoor attacks. A backdoor is inserted into neural networks via this attack paradigm, thus compromising the integrity of the network. 
As soon as an attacker presents a trigger during the testing phase, the backdoor in the model is activated, allowing the network to make specific wrong predictions. It is extremely important to defend against backdoor attacks since they are very stealthy and dangerous. In this paper, we propose a novel defense mechanism, Neural Behavioral Alignment (NBA), for backdoor removal. NBA optimizes the distillation process in terms of knowledge form and distillation samples to improve defense performance according to the characteristics of backdoor defense. NBA builds high-level representations of neural behavior within networks in order to facilitate the transfer of knowledge. Additionally, NBA crafts pseudo samples to induce student models to exhibit backdoor neural behavior. By aligning the backdoor neural behavior from the student network with the benign neural behavior from the teacher network, NBA enables the proactive removal of backdoors. Extensive experiments show that NBA can effectively defend against six different backdoor attacks and outperform five state-of-the-art defenses.<\/jats:p>","DOI":"10.1186\/s42400-023-00154-z","type":"journal-article","created":{"date-parts":[[2023,7,3]],"date-time":"2023-07-03T01:01:28Z","timestamp":1688346088000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["NBA: defensive distillation for backdoor removal via neural behavior alignment"],"prefix":"10.1186","volume":"6","author":[{"given":"Zonghao","family":"Ying","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8103-0468","authenticated-orcid":false,"given":"Bin","family":"Wu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,3]]},"reference":[{"key":"154_CR1","doi-asserted-by":"crossref","unstructured":"Barni M, Kallas K, Tondi B (2019) A new backdoor attack in CNNs by training set corruption without label poisoning. 
In: 2019 IEEE international conference on image processing, ICIP 2019, Taipei, China, 22\u201325 Sep 2019. pp 101\u2013105. IEEE","DOI":"10.1109\/ICIP.2019.8802997"},{"issue":"4","key":"154_CR2","doi-asserted-by":"publisher","first-page":"122","DOI":"10.3390\/info10040122","volume":"10","author":"DS Berman","year":"2019","unstructured":"Berman DS, Buczak AL, Chavis JS, Corbett CL (2019) A survey of deep learning methods for cyber security. Information 10(4):122","journal-title":"Information"},{"key":"154_CR3","unstructured":"Chen X, Liu C, Li B, Lu K, Song D (2017) Targeted backdoor attacks on deep learning systems using data poisoning. CoRR arXiv:1712.05526"},{"key":"154_CR4","doi-asserted-by":"crossref","unstructured":"Costa-juss\u00e0 MR (2018) From feature to paradigm: deep learning in machine translation (extended abstract). In: Lang J (ed) Proceedings of the Twenty-Seventh international joint conference on artificial intelligence, IJCAI 2018, Stockholm, Sweden, 13\u201319 July 2018. pp 5583\u20135587. ijcai.org","DOI":"10.24963\/ijcai.2018\/789"},{"key":"154_CR5","doi-asserted-by":"crossref","unstructured":"Doan BG, Abbasnejad E, Ranasinghe DC (2020) Februus: input purification defense against trojan attacks on deep neural network systems. In: ACSAC \u201920: annual computer security applications conference, virtual event, Austin, TX, USA, 7\u201311 Dec, 2020, pp 897\u2013912. ACM","DOI":"10.1145\/3427228.3427264"},{"key":"154_CR6","unstructured":"Furlanello T, Lipton ZC, Tschannen M, Itti L, Anandkumar A (2018) Born-again neural networks. In: Dy JG and Krause A (eds), Proceedings of the 35th international conference on machine learning, ICML 2018, Stockholmsm\u00e4ssan, Stockholm, Sweden, 10\u201315 July 2018, vol 80 of Proceedings of machine learning research, pp 1602\u20131611. PMLR"},{"key":"154_CR7","unstructured":"Gao Y, Doan BG, Zhang Z et al (2020) Backdoor attacks and countermeasures on deep learning: a comprehensive review. 
CoRR arXiv:2007.10760"},{"key":"154_CR8","doi-asserted-by":"crossref","unstructured":"Gatys LA, Ecker AS, Bethge M (2016) Image style transfer using convolutional neural networks. In: 2016 IEEE conference on computer vision and pattern recognition, CVPR 2016, Las Vegas, NV, USA, 27\u201330 June, 2016, pp 2414\u20132423. IEEE Computer Society","DOI":"10.1109\/CVPR.2016.265"},{"key":"154_CR9","unstructured":"Hinton GE, Vinyals O, Dean J (2015) Distilling the knowledge in a neural network. CoRR arXiv:1503.02531"},{"key":"154_CR10","doi-asserted-by":"crossref","unstructured":"Ge Y, Wang Q, Zheng B et al (2021) Anti-distillation backdoor attacks: backdoors can really survive in knowledge distillation. In: Shen HT, Zhuang Y, Smith JR et al (eds) MM \u201921: ACM multimedia conference, virtual event, China, 20\u201324 Oct 2021, pp 826\u2013834. ACM","DOI":"10.1145\/3474085.3475254"},{"key":"154_CR11","unstructured":"Goodfellow IJ, Mirza M, Xiao D, Courville AC, Bengio Y (2014) An empirical investigation of catastrophic forgetting in gradient-based neural networks. In: Bengio Y and LeCun Y (eds) 2nd international conference on learning representations, ICLR 2014, Banff, AB, Canada, 14\u201316 April 2014, conference track proceedings"},{"issue":"3","key":"154_CR12","doi-asserted-by":"publisher","first-page":"362","DOI":"10.1002\/rob.21918","volume":"37","author":"SM Grigorescu","year":"2020","unstructured":"Grigorescu SM, Trasnea B, Cocias TT, Macesanu G (2020) A survey of deep learning techniques for autonomous driving. J Field Robot 37(3):362\u2013386","journal-title":"J Field Robot"},{"key":"154_CR13","unstructured":"Gu T, Dolan-Gavitt B, Garg S (2017) Badnets: identifying vulnerabilities in the machine learning model supply chain. 
CoRR arXiv:1708.06733"},{"issue":"11","key":"154_CR14","doi-asserted-by":"publisher","first-page":"116","DOI":"10.1109\/MCOM.001.1900091","volume":"57","author":"G Xu","year":"2019","unstructured":"Xu G, Li H, Ren H, Yang K, Deng RH (2019) Data security issues in deep learning: attacks, countermeasures, and opportunities. IEEE Commun Mag 57(11):116\u2013122","journal-title":"IEEE Commun Mag"},{"key":"154_CR15","unstructured":"Hayase J, Kong W, Somani R, Oh S (2021) SPECTRE: defending against backdoor attacks using robust statistics. CoRR arXiv:2104.11315"},{"key":"154_CR16","doi-asserted-by":"crossref","unstructured":"Hu G, Yang Y, Yi D et al (2015) When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition. In: 2015 IEEE international conference on computer vision workshop, ICCV Workshops 2015, Santiago, Chile, 7\u201313 Dec 2015, pp 384\u2013392. IEEE Computer Society","DOI":"10.1109\/ICCVW.2015.58"},{"key":"154_CR17","doi-asserted-by":"crossref","unstructured":"Jia J, Liu Y, Cao X, Gong NZ (2022) Certified robustness of nearest neighbors against data poisoning and backdoor attacks. In: Thirty-Sixth AAAI conference on artificial intelligence, AAAI 2022, Thirty-Fourth conference on innovative applications of artificial intelligence, IAAI 2022, The Twelfth symposium on educational advances in artificial intelligence, EAAI 2022 Virtual Event, February 22\u2013March 1, 2022, pp 9575\u20139583. AAAI Press, USA","DOI":"10.1609\/aaai.v36i9.21191"},{"issue":"13","key":"154_CR18","doi-asserted-by":"publisher","first-page":"3521","DOI":"10.1073\/pnas.1611835114","volume":"114","author":"J Kirkpatrick","year":"2017","unstructured":"Kirkpatrick J, Pascanu R, Rabinowitz N et al (2017) Overcoming catastrophic forgetting in neural networks. 
Proc Natl Acad Sci 114(13):3521\u20133526","journal-title":"Proc Natl Acad Sci"},{"key":"154_CR19","doi-asserted-by":"publisher","DOI":"10.1017\/9781108608480","volume-title":"Neural machine translation","author":"P Koehn","year":"2020","unstructured":"Koehn P (2020) Neural machine translation. Cambridge University Press, Cambridge"},{"key":"154_CR20","unstructured":"Li Y, Lyu X, Koren N et al (2021) Neural attention distillation: erasing backdoor triggers from deep neural networks. In: 9th international conference on learning representations, ICLR 2021, Virtual Event, Austria, 3\u20137 May 2021. OpenReview.net"},{"key":"154_CR21","doi-asserted-by":"publisher","first-page":"4566","DOI":"10.1109\/ACCESS.2020.3045078","volume":"9","author":"X Liu","year":"2021","unstructured":"Liu X, Xie L, Wang Y et al (2021) Privacy and security issues in deep learning: a survey. IEEE Access 9:4566\u20134593","journal-title":"IEEE Access"},{"key":"154_CR22","doi-asserted-by":"crossref","unstructured":"Liu K, Dolan-Gavitt B, Garg S (2018) Fine-pruning: defending against backdooring attacks on deep neural networks. In: Bailey M, Holz T, Stamatogiannakis M and Ioannidis S (eds) Research in attacks, intrusions, and defenses\u201421st international symposium, RAID 2018, Heraklion, Crete, Greece, 10\u201312 Sep 2018, Proceedings, vol 11050 of Lecture Notes in Computer Science, pp 273\u2013294. Springer","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"154_CR23","doi-asserted-by":"crossref","unstructured":"Liu K, Dolan-Gavitt B, Garg S (2018) Fine-pruning: defending against backdooring attacks on deep neural networks. In: Bailey M, Holz T, Stamatogiannakis M and Ioannidis S (eds), Research in attacks, intrusions, and defenses\u201421st international symposium, RAID 2018, Heraklion, Crete, Greece, 10\u201312 Sep, 2018, Proceedings, vol 11050 of Lecture Notes in Computer Science, pp 273\u2013294. 
Springer","DOI":"10.1007\/978-3-030-00470-5_13"},{"key":"154_CR24","doi-asserted-by":"crossref","unstructured":"Liu Y, Ma S, Aafer Y et al (2018) Trojaning attack on neural networks. In: 25th annual network and distributed system security symposium, NDSS 2018, San Diego, California, USA, 18\u201321 Feb 2018. The Internet Society","DOI":"10.14722\/ndss.2018.23291"},{"key":"154_CR25","doi-asserted-by":"crossref","unstructured":"Liu Y, Ma X, Bailey J, Lu F (2020) Reflection backdoor: a natural backdoor attack on deep neural networks. In: Vedaldi A, Bischof H, Brox T and Frahm J-M (eds) Computer vision\u2014ECCV 2020\u201416th European conference, Glasgow, UK, 23\u201328 Aug 2020, Proceedings, Part X, vol 12355 of Lecture Notes in Computer Science, pp 182\u2013199. Springer, Berlin","DOI":"10.1007\/978-3-030-58607-2_11"},{"key":"154_CR26","unstructured":"Liu Y, Shu C, Wang J, Shen C (2020) Structured knowledge distillation for dense prediction. IEEE Trans Pattern Anal Mach Intell"},{"key":"154_CR27","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: 6th international conference on learning representations, ICLR 2018, Vancouver, BC, Canada, April 30\u2013May 3, 2018, conference track proceedings. OpenReview.net"},{"issue":"7","key":"154_CR28","doi-asserted-by":"publisher","first-page":"4316","DOI":"10.1109\/TITS.2020.3032227","volume":"22","author":"K Muhammad","year":"2021","unstructured":"Muhammad K, Ullah A, Lloret J, Ser JD, de Albuquerque VHC (2021) Deep learning for safe autonomous driving: current challenges and future directions. IEEE Trans Intell Transp Syst 22(7):4316\u20134336","journal-title":"IEEE Trans Intell Transp Syst"},{"key":"154_CR29","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel PD, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. 
In: IEEE symposium on security and privacy, SP 2016, San Jose, CA, USA, 22\u201326 May, 2016","DOI":"10.1109\/SP.2016.41"},{"key":"154_CR30","doi-asserted-by":"crossref","unstructured":"Park W, Kim D, Lu Y, Cho M (2019) Relational knowledge distillation. In: IEEE conference on computer vision and pattern recognition, CVPR 2019, Long Beach, CA, USA, 16\u201320 June 2019, pp 3967\u20133976. Computer Vision Foundation\/IEEE","DOI":"10.1109\/CVPR.2019.00409"},{"key":"154_CR31","unstructured":"Qiao X, Yang Y, Li H (2019) Defending neural backdoors via generative distribution modeling. In: Wallach HM, Larochelle H, Beygelzimer A et al (eds) Advances in neural information processing systems 32: annual conference on neural information processing systems 2019, NeurIPS 2019, 8\u201314 Dec 2019, Vancouver, BC, Canada, pp 14004\u201314013"},{"key":"154_CR32","doi-asserted-by":"crossref","unstructured":"Qiu H, Zeng Y, Guo S et al (2021) Deepsweep: an evaluation framework for mitigating DNN backdoor attacks using data augmentation. In: Cao J, Au MH, Lin Z and Yung M (eds) ASIA CCS \u201921: ACM Asia conference on computer and communications security, virtual event, Hong Kong, 7\u201311 June 2021, pp 363\u2013377. ACM","DOI":"10.1145\/3433210.3453108"},{"key":"154_CR33","doi-asserted-by":"crossref","unstructured":"Ribeiro M, Grolinger K, Capretz MAM (2015) Mlaas: machine learning as a service. In: Li T, Kurgan LA, Palade V et al (eds), 14th IEEE international conference on machine learning and applications, ICMLA 2015, Miami, FL, USA, 9\u201311 Dec 2015","DOI":"10.1109\/ICMLA.2015.152"},{"key":"154_CR34","unstructured":"Romero A, Ballas N, Ebrahimi Kahou S et al (2015) Fitnets: hints for thin deep nets. In: Bengio Y and LeCun Y (eds) 3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA, 7\u20139 May 2015, conference track proceedings"},{"key":"154_CR35","unstructured":"Romero A, Ballas N, Ebrahimi Kahou S et al (2015) Fitnets: hints for thin deep nets. 
In: Bengio Y and LeCun Y (eds), 3rd international conference on learning representations, ICLR 2015, San Diego, CA, USA, 7\u20139 May 2015, conference track proceedings"},{"issue":"5","key":"154_CR36","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206\u2013215","journal-title":"Nat Mach Intell"},{"key":"154_CR37","doi-asserted-by":"crossref","unstructured":"Tao G, Shen G, Liu Y et al (2022) Better trigger inversion optimization in backdoor scanning. In: IEEE\/CVF conference on computer vision and pattern recognition, CVPR 2022, New Orleans, LA, USA, 18\u201324 June 2022, pp 13358\u201313368. IEEE","DOI":"10.1109\/CVPR52688.2022.01301"},{"key":"154_CR38","unstructured":"Turner A, Tsipras D, Madry A (2019) Label-consistent backdoor attacks. CoRR arXiv:1912.02771"},{"key":"154_CR39","doi-asserted-by":"crossref","unstructured":"Wang H, Guo L (2021) Research on face recognition based on deep learning. In: 3rd international conference on artificial intelligence and advanced manufacture, AIAM 2021, Manchester, UK, 23\u201325 Oct, 2021, pp 540\u2013546. IEEE","DOI":"10.1109\/AIAM54119.2021.00113"},{"key":"154_CR40","doi-asserted-by":"crossref","unstructured":"Wang B, Yao Y, Shan S et al (2019) Neural cleanse: identifying and mitigating backdoor attacks in neural networks. In: 2019 IEEE symposium on security and privacy, SP 2019, San Francisco, CA, USA, 19\u201323 May 2019, pp 707\u2013723. IEEE","DOI":"10.1109\/SP.2019.00031"},{"key":"154_CR41","unstructured":"Weber M, Xu X, Karlas B, Zhang C, Li B (2020) RAB: provable robustness against backdoor attacks. CoRR arXiv:2003.08904"},{"key":"154_CR42","unstructured":"Wu D, Wang Y (2021) Adversarial neuron pruning purifies backdoored deep models. 
In: Ranzato M, Beygelzimer A, Dauphin YN, Liang P and Vaughan JW (eds), Advances in Neural Information Processing Systems 34: annual conference on neural information processing systems 2021, NeurIPS 2021, 6\u201314 Dec 2021, virtual, pp 16913\u201316925"},{"key":"154_CR43","doi-asserted-by":"crossref","unstructured":"Xia J, Wang T, Ding J, Wei X, Chen M (2022) Eliminating backdoor triggers for deep neural networks using attention relation graph distillation. In: De Raedt L (ed), Proceedings of the Thirty-First international joint conference on artificial intelligence, IJCAI 2022, Vienna, Austria, 23\u201329 July 2022, pp 1481\u20131487. ijcai.org","DOI":"10.24963\/ijcai.2022\/206"},{"key":"154_CR44","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1109\/LSP.2020.2975426","volume":"27","author":"X Xu","year":"2020","unstructured":"Xu X, Zou Q, Lin X, Huang Y, Tian Y (2020) Integral knowledge distillation for multi-person pose estimation. IEEE Signal Process Lett 27:436\u2013440","journal-title":"IEEE Signal Process Lett"},{"key":"154_CR45","doi-asserted-by":"crossref","unstructured":"Xu X, Wang Q, Li H et al (2021) Detecting AI trojans using meta neural analysis. In: 42nd IEEE symposium on security and privacy, SP 2021, San Francisco, CA, USA, 24\u201327 May 2021, pp 103\u2013120. IEEE","DOI":"10.1109\/SP40001.2021.00034"},{"key":"154_CR46","doi-asserted-by":"crossref","unstructured":"Yim J, Joo D, Bae J-H, Kim J (2017) A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In: 2017 IEEE Conference on computer vision and pattern recognition, CVPR 2017, Honolulu, HI, USA, 21\u201326 July 2017, pp 7130\u20137138. IEEE Computer Society","DOI":"10.1109\/CVPR.2017.754"},{"key":"154_CR47","doi-asserted-by":"crossref","unstructured":"Zagoruyko S, Komodakis N (2016) Wide residual networks. 
In: Wilson RC, Hancock ER and Smith WAP (eds) Proceedings of the British machine vision conference 2016, BMVC 2016, York, UK, 19\u201322 Sep 2016. BMVA Press, UK","DOI":"10.5244\/C.30.87"},{"key":"154_CR48","unstructured":"Zagoruyko S, Komodakis N (2017) Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. In: 5th international conference on learning representations, ICLR 2017, Toulon, France, 24\u201326 April 2017, conference track proceedings. OpenReview.net"},{"key":"154_CR49","doi-asserted-by":"crossref","unstructured":"Zeng Y, Park W, Mao ZM, Jia R (2021) Rethinking the backdoor attacks\u2019 triggers: a frequency perspective. In: 2021 IEEE\/CVF international conference on computer vision, ICCV 2021, Montreal, QC, Canada, 10\u201317 Oct 2021, pp 16453\u201316461. IEEE","DOI":"10.1109\/ICCV48922.2021.01616"},{"key":"154_CR50","unstructured":"Zhao P, Chen P-Y, Das P, Ramamurthy KN, Lin X (2020) Bridging mode connectivity in loss landscapes and adversarial robustness. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26\u201330 April 2020. OpenReview.net"},{"key":"154_CR51","unstructured":"Zheng S, Zhang Y, Wagner H, Goswami M, Chen C (2021) Topological detection of trojaned neural networks. 
In: Ranzato M, Beygelzimer A, Dauphin YN, Liang P and Vaughan JW (eds), Advances in neural information processing systems 34: annual conference on neural information processing systems 2021, NeurIPS 2021, 6\u201314 Dec 2021, virtual, pp 17258\u201317272"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-023-00154-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-023-00154-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-023-00154-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,3]],"date-time":"2023-07-03T01:02:46Z","timestamp":1688346166000},"score":1,"resource":{"primary":{"URL":"https:\/\/cybersecurity.springeropen.com\/articles\/10.1186\/s42400-023-00154-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,3]]},"references-count":51,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["154"],"URL":"https:\/\/doi.org\/10.1186\/s42400-023-00154-z","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,3]]},"assertion":[{"value":"12 December 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 March 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"20"}}