{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T11:00:29Z","timestamp":1773745229454,"version":"3.50.1"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,4,15]],"date-time":"2024-04-15T00:00:00Z","timestamp":1713139200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,4,15]],"date-time":"2024-04-15T00:00:00Z","timestamp":1713139200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100012542","name":"Sichuan Province Science and Technology Support Program","doi-asserted-by":"publisher","award":["2023YFG0099"],"award-info":[{"award-number":["2023YFG0099"]}],"id":[{"id":"10.13039\/100012542","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100012542","name":"Sichuan Province Science and Technology Support Program","doi-asserted-by":"publisher","award":["2023YFG0261"],"award-info":[{"award-number":["2023YFG0261"]}],"id":[{"id":"10.13039\/100012542","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The integration of convolutional neural networks (CNNs) and transformers enhances a network\u2019s capacity for concurrent modeling of texture details and global structures. However, training challenges with transformers limit their effectiveness to low-resolution images, leading to increased artifacts in slightly larger images. In this paper, we propose a single-stage network utilizing large kernel attention (LKA) to restore high-resolution damaged images. 
LKA enables the capture of both global and local details, akin to transformer and CNN networks, resulting in high-quality inpainting. Our method excels in: (1) reducing parameters, improving inference speed, and enabling direct training on 1024<jats:inline-formula><jats:alternatives><jats:tex-math>$$\\times $$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mo>\u00d7<\/mml:mo>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula>1024 resolution images; (2) utilizing LKA for enhanced extraction of global high-frequency and local details; (3) demonstrating excellent generalization with irregular masks on common datasets such as Places2, CelebA-HQ, FFHQ, and the random irregular mask dataset Pconv from NVIDIA.<\/jats:p>","DOI":"10.1007\/s40747-024-01411-5","type":"journal-article","created":{"date-parts":[[2024,4,15]],"date-time":"2024-04-15T09:01:59Z","timestamp":1713171719000},"page":"4921-4938","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["NLKFill: high-resolution image inpainting with a novel large kernel attention"],"prefix":"10.1007","volume":"10","author":[{"given":"Ting","family":"Wang","sequence":"first","affiliation":[]},{"given":"Dong","family":"Xiang","sequence":"additional","affiliation":[]},{"given":"Chuan","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Jiaying","family":"Liang","sequence":"additional","affiliation":[]},{"given":"Canghong","family":"Shi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,4,15]]},"reference":[{"issue":"8","key":"1411_CR1","doi-asserted-by":"publisher","first-page":"1200","DOI":"10.1109\/83.935036","volume":"10","author":"C Ballester","year":"2001","unstructured":"Ballester C, Bertalmio M, Caselles V, Sapiro G, Verdera J (2001) Filling-in by joint interpolation of vector fields and gray levels. 
IEEE Trans Image Process 10(8):1200\u20131211","journal-title":"IEEE Trans Image Process"},{"issue":"3","key":"1411_CR2","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1145\/1531326.1531330","volume":"28","author":"C Barnes","year":"2009","unstructured":"Barnes C, Shechtman E, Finkelstein A, Goldman DB (2009) Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans Graph 28(3):24","journal-title":"ACM Trans Graph"},{"issue":"9","key":"1411_CR3","doi-asserted-by":"publisher","first-page":"1200","DOI":"10.1109\/TIP.2004.833105","volume":"13","author":"A Criminisi","year":"2004","unstructured":"Criminisi A, P\u00e9rez P, Toyama K (2004) Region filling and object removal by exemplar-based image inpainting. IEEE Trans Image Process 13(9):1200\u20131212","journal-title":"IEEE Trans Image Process"},{"issue":"4","key":"1411_CR4","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3072959.3073659","volume":"36","author":"S Iizuka","year":"2017","unstructured":"Iizuka S, Simo-Serra E, Ishikawa H (2017) Globally and locally consistent image completion. ACM Trans Graph (ToG) 36(4):1\u201314","journal-title":"ACM Trans Graph (ToG)"},{"key":"1411_CR5","doi-asserted-by":"crossref","unstructured":"Liu G, Reda FA, Shih KJ, Wang T-C, Tao A, Catanzaro B (2018) Image inpainting for irregular holes using partial convolutions. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 85\u2013100","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"1411_CR6","doi-asserted-by":"crossref","unstructured":"Liu H, Jiang B, Song Y, Huang W, Yang C (2020) Rethinking image inpainting via a mutual encoder-decoder with feature equalizations. In: Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part II 16, pp. 725\u2013741. 
Springer","DOI":"10.1007\/978-3-030-58536-5_43"},{"key":"1411_CR7","doi-asserted-by":"crossref","unstructured":"Nazeri K, Ng E, Joseph T, Qureshi F, Ebrahimi M (2019) Edgeconnect: Structure guided image inpainting using edge prediction. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops, pp. 0\u20130","DOI":"10.1109\/ICCVW.2019.00408"},{"key":"1411_CR8","doi-asserted-by":"crossref","unstructured":"Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros AA (2016) Context encoders: Feature learning by inpainting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536\u20132544","DOI":"10.1109\/CVPR.2016.278"},{"key":"1411_CR9","doi-asserted-by":"crossref","unstructured":"Peng J, Liu D, Xu S, Li H (2021) Generating diverse structure for image inpainting with hierarchical vq-vae. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10775\u201310784","DOI":"10.1109\/CVPR46437.2021.01063"},{"key":"1411_CR10","doi-asserted-by":"crossref","unstructured":"Wan Z, Zhang J, Chen D, Liao J (2021) High-fidelity pluralistic image completion with transformers. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 4692\u20134701","DOI":"10.1109\/ICCV48922.2021.00465"},{"key":"1411_CR11","doi-asserted-by":"crossref","unstructured":"Yi Z, Tang Q, Azizi S, Jang D, Xu Z (2020) Contextual residual aggregation for ultra high-resolution image inpainting. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 7508\u20137517","DOI":"10.1109\/CVPR42600.2020.00753"},{"key":"1411_CR12","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2018) Generative image inpainting with contextual attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
5505\u20135514","DOI":"10.1109\/CVPR.2018.00577"},{"key":"1411_CR13","doi-asserted-by":"crossref","unstructured":"Zeng Y, Lin Z, Lu H, Patel VM (2021) Cr-fill: Generative image inpainting with auxiliary contextual reconstruction. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 14164\u201314173","DOI":"10.1109\/ICCV48922.2021.01390"},{"key":"1411_CR14","doi-asserted-by":"crossref","unstructured":"Zheng C, Cham T-J, Cai J (2019) Pluralistic image completion. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1438\u20131447","DOI":"10.1109\/CVPR.2019.00153"},{"key":"1411_CR15","doi-asserted-by":"crossref","unstructured":"Zheng C, Cham T-J, Cai J, Phung D (2022) Bridging global context interactions for high-fidelity image completion. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 11512\u201311522","DOI":"10.1109\/CVPR52688.2022.01122"},{"key":"1411_CR16","doi-asserted-by":"crossref","unstructured":"Li Y, Liu S, Yang J, Yang M-H (2017) Generative face completion. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3911\u20133919","DOI":"10.1109\/CVPR.2017.624"},{"key":"1411_CR17","doi-asserted-by":"crossref","unstructured":"Liu H, Jiang B, Xiao Y, Yang C (2019) Coherent semantic attention for image inpainting. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 4170\u20134179","DOI":"10.1109\/ICCV.2019.00427"},{"key":"1411_CR18","unstructured":"Azulay A, Weiss Y (2018) Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177"},{"key":"1411_CR19","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. 
Adv Neural Inform Process Syst 30"},{"key":"1411_CR20","unstructured":"Chen M, Radford A, Child R, Wu J, Jun H, Luan D, Sutskever I (2020) Generative pretraining from pixels. In: International Conference on Machine Learning, pp. 1691\u20131703. PMLR"},{"key":"1411_CR21","unstructured":"Parmar N, Vaswani A, Uszkoreit J, Kaiser L, Shazeer N, Ku A, Tran D (2018) Image transformer. In: International Conference on Machine Learning, pp. 4055\u20134064. PMLR"},{"key":"1411_CR22","unstructured":"Guo M-H, Lu C-Z, Liu Z-N, Cheng M-M, Hu S-M (2022) Visual attention network. arXiv preprint arXiv:2202.09741"},{"issue":"8","key":"1411_CR23","doi-asserted-by":"publisher","first-page":"882","DOI":"10.1109\/TIP.2003.815261","volume":"12","author":"M Bertalmio","year":"2003","unstructured":"Bertalmio M, Vese L, Sapiro G, Osher S (2003) Simultaneous structure and texture image inpainting. IEEE Trans Image Process 12(8):882\u2013889","journal-title":"IEEE Trans Image Process"},{"key":"1411_CR24","doi-asserted-by":"crossref","unstructured":"Levin A, Zomet A, Weiss Y (2003) Learning how to inpaint from global image statistics. In: ICCV, vol. 1, pp. 305\u2013312","DOI":"10.1109\/ICCV.2003.1238360"},{"key":"1411_CR25","doi-asserted-by":"crossref","unstructured":"Jia J, Tang C-K (2004) Inference of segmented color and texture description by tensor voting. IEEE Trans Pattern Anal Mach Intellig 26(6):771\u2013786","DOI":"10.1109\/TPAMI.2004.10"},{"key":"1411_CR26","doi-asserted-by":"crossref","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139\u2013144","DOI":"10.1145\/3422622"},{"key":"1411_CR27","unstructured":"Mirza M, Osindero S (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784"},{"key":"1411_CR28","unstructured":"Kingma DP, Welling M (2013) Auto-encoding variational bayes. 
arXiv preprint arXiv:1312.6114"},{"key":"1411_CR29","doi-asserted-by":"crossref","unstructured":"Yu J, Lin Z, Yang J, Shen X, Lu X, Huang TS (2019) Free-form image inpainting with gated convolution. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 4471\u20134480","DOI":"10.1109\/ICCV.2019.00457"},{"key":"1411_CR30","doi-asserted-by":"crossref","unstructured":"Portenier T, Hu Q, Szabo A, Bigdeli SA, Favaro P, Zwicker M (2018) Faceshop: Deep sketch-based face image editing. arXiv preprint arXiv:1804.08972","DOI":"10.1145\/3197517.3201393"},{"key":"1411_CR31","doi-asserted-by":"crossref","unstructured":"Jo Y, Park J (2019) Sc-fegan: Face editing generative adversarial network with user\u2019s sketch and color. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 1745\u20131753","DOI":"10.1109\/ICCV.2019.00183"},{"key":"1411_CR32","doi-asserted-by":"crossref","unstructured":"Cao C, Fu Y (2021) Learning a sketch tensor space for image inpainting of man-made scenes. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 14509\u201314518","DOI":"10.1109\/ICCV48922.2021.01424"},{"key":"1411_CR33","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part I 16, pp. 213\u2013229. Springer","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"1411_CR34","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S et al (2020) An image is worth 16x16 words: Transformers for image recognition at scale. 
arXiv preprint arXiv:2010.11929"},{"issue":"3","key":"1411_CR35","doi-asserted-by":"publisher","first-page":"331","DOI":"10.1007\/s41095-022-0271-y","volume":"8","author":"M-H Guo","year":"2022","unstructured":"Guo M-H, Xu T-X, Liu J-J, Liu Z-N, Jiang P-T, Mu T-J, Zhang S-H, Martin RR, Cheng M-M, Hu S-M (2022) Attention mechanisms in computer vision: A survey. Comput Visual Media 8(3):331\u2013368","journal-title":"Comput Visual Media"},{"key":"1411_CR36","doi-asserted-by":"crossref","unstructured":"Chen L, Zhang H, Xiao J, Nie L, Shao J, Liu W, Chua T-S (2017) Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5659\u20135667","DOI":"10.1109\/CVPR.2017.667"},{"key":"1411_CR37","doi-asserted-by":"crossref","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132\u20137141","DOI":"10.1109\/CVPR.2018.00745"},{"key":"1411_CR38","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee J-Y, Kweon IS (2018) Cbam: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"1411_CR39","doi-asserted-by":"crossref","unstructured":"Wang Q, Wu B, Zhu P, Li P, Zuo W, Hu Q (2020) Eca-net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 11534\u201311542","DOI":"10.1109\/CVPR42600.2020.01155"},{"key":"1411_CR40","doi-asserted-by":"crossref","unstructured":"Qin Z, Zhang P, Wu F, Li X (2021) Fcanet: Frequency channel attention networks. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 
783\u2013792","DOI":"10.1109\/ICCV48922.2021.00082"},{"key":"1411_CR41","unstructured":"Yuan Y, Huang L, Guo J, Zhang C, Chen X, Wang J (2018) Ocnet: Object context network for scene parsing. arXiv preprint arXiv:1809.00916"},{"key":"1411_CR42","unstructured":"Zhang H, Goodfellow I, Metaxas D, Odena A (2019) Self-attention generative adversarial networks. In: International Conference on Machine Learning, pp. 7354\u20137363. PMLR"},{"key":"1411_CR43","doi-asserted-by":"crossref","unstructured":"Wang F, Jiang M, Qian C, Yang S, Li C, Zhang H, Wang X, Tang X (2017) Residual attention network for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156\u20133164","DOI":"10.1109\/CVPR.2017.683"},{"key":"1411_CR44","unstructured":"Hu J, Shen L, Albanie S, Sun G, Vedaldi A (2018) Gather-excite: Exploiting feature context in convolutional neural networks. Adv Neur Inform Process Syst 31"},{"key":"1411_CR45","unstructured":"Park J, Woo S, Lee J-Y, Kweon IS (2018) Bam: Bottleneck attention module. arXiv preprint arXiv:1807.06514"},{"key":"1411_CR46","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"1411_CR47","unstructured":"Hendrycks D, Gimpel K (2016) Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415"},{"key":"1411_CR48","doi-asserted-by":"crossref","unstructured":"Esser P, Rombach R, Ommer B (2021) Taming transformers for high-resolution image synthesis. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 12873\u201312883","DOI":"10.1109\/CVPR46437.2021.01268"},{"key":"1411_CR49","doi-asserted-by":"crossref","unstructured":"Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401\u20134410","DOI":"10.1109\/CVPR.2019.00453"},{"key":"1411_CR50","doi-asserted-by":"crossref","unstructured":"Zhang B, Gu S, Zhang B, Bao J, Chen D, Wen F, Wang Y, Guo B (2022) Styleswin: Transformer-based gan for high-resolution image generation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 11304\u201311314","DOI":"10.1109\/CVPR52688.2022.01102"},{"issue":"4","key":"1411_CR51","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3450626.3459836","volume":"40","author":"R Gal","year":"2021","unstructured":"Gal R, Hochberg DC, Bermano A, Cohen-Or D (2021) Swagan: a style-based wavelet-driven generative model. ACM Trans Graph (TOG) 40(4):1\u201311","journal-title":"ACM Trans Graph (TOG)"},{"issue":"1","key":"1411_CR52","doi-asserted-by":"publisher","first-page":"47","DOI":"10.1109\/TCI.2016.2644865","volume":"3","author":"H Zhao","year":"2016","unstructured":"Zhao H, Gallo O, Frosio I, Kautz J (2016) Loss functions for image restoration with neural networks. IEEE Trans Comput Imag 3(1):47\u201357","journal-title":"IEEE Trans Comput Imag"},{"key":"1411_CR53","doi-asserted-by":"crossref","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pp. 694\u2013711. Springer","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"1411_CR54","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556"},{"key":"1411_CR55","unstructured":"Liu Z, Luo P, Wang X, Tang X (2018) Large-scale celebfaces attributes (celeba) dataset. 
Retrieved August 14: 11"},{"issue":"6","key":"1411_CR56","doi-asserted-by":"publisher","first-page":"1452","DOI":"10.1109\/TPAMI.2017.2723009","volume":"40","author":"B Zhou","year":"2017","unstructured":"Zhou B, Lapedriza A, Khosla A, Oliva A, Torralba A (2017) Places: a 10 million image database for scene recognition. IEEE Trans Pattern Anal Mach Intellig 40(6):1452\u20131464","journal-title":"IEEE Trans Pattern Anal Mach Intellig"},{"key":"1411_CR57","doi-asserted-by":"crossref","unstructured":"Zhang R, Isola P, Efros AA, Shechtman E, Wang O (2018) The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586\u2013595","DOI":"10.1109\/CVPR.2018.00068"},{"key":"1411_CR58","unstructured":"Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. Adv Neur Inform Process Syst 30"}],"container-title":["Complex &amp; Intelligent 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01411-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01411-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01411-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,17]],"date-time":"2024-07-17T17:17:32Z","timestamp":1721236652000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01411-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,15]]},"references-count":58,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1411"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01411-5","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,4,15]]},"assertion":[{"value":"8 August 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 March 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 April 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}}]}}