{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T02:14:21Z","timestamp":1776132861051,"version":"3.50.1"},"reference-count":45,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,3,28]],"date-time":"2025-03-28T00:00:00Z","timestamp":1743120000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,28]],"date-time":"2025-03-28T00:00:00Z","timestamp":1743120000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2021YFF0704100"],"award-info":[{"award-number":["2021YFF0704100"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Artif Intell Rev"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Previous advances in pathology image understanding primarily involved developing models tailored to specific tasks. Recent studies have demonstrated that large vision-language models can enhance the performance of various downstream tasks in medical image understanding. In this study, we developed a domain-specific large vision-language model (PathologyVLM) for pathology image understanding. Specifically, (1) we first construct a human pathology image-text dataset by cleaning public medical image-text data for domain-specific alignment; (2) using the proposed image-text data, we train a pathology language-image pretraining (PLIP) model as a specialized visual encoder to extract features from pathology images, and then develop a scale-invariant connector to avoid the information loss caused by image scaling; (3) we adopt two-stage learning to train PathologyVLM: the first stage for domain alignment and the second stage for the end-to-end visual question answering (VQA) task. In experiments, we evaluated PathologyVLM on both supervised and zero-shot VQA datasets, and it achieved the best overall performance among multimodal models of similar scale. Ablation experiments also confirmed the effectiveness of our design. We posit that our PathologyVLM model and the datasets presented in this work can promote research in the field of computational pathology. 
All code is available at: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/ddw2AIGROUP2CQUPT\/PA-LLaVA\" ext-link-type=\"uri\">https:\/\/github.com\/ddw2AIGROUP2CQUPT\/PA-LLaVA<\/jats:ext-link>\n          <\/jats:p>","DOI":"10.1007\/s10462-025-11190-1","type":"journal-article","created":{"date-parts":[[2025,3,31]],"date-time":"2025-03-31T09:34:11Z","timestamp":1743413651000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Pathologyvlm: a large vision-language model for pathology image understanding"],"prefix":"10.1007","volume":"58","author":[{"given":"Dawei","family":"Dai","sequence":"first","affiliation":[]},{"given":"Yuanhui","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Qianlan","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Long","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Xiaojing","family":"Shen","sequence":"additional","affiliation":[]},{"given":"Shuyin","family":"Xia","sequence":"additional","affiliation":[]},{"given":"Guoyin","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,28]]},"reference":[{"key":"11190_CR1","doi-asserted-by":"publisher","first-page":"122","DOI":"10.1016\/j.media.2019.05.010","volume":"56","author":"G Aresta","year":"2019","unstructured":"Aresta G, Ara\u00fajo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, Marami B, Prastawa M, Chan M, Donovan M et al (2019) Bach: grand challenge on breast cancer histology images. 
Med Image Anal 56:122\u2013139","journal-title":"Med Image Anal"},{"issue":"9","key":"11190_CR2","doi-asserted-by":"publisher","first-page":"1477","DOI":"10.3390\/rs16091477","volume":"16","author":"Y Bazi","year":"2024","unstructured":"Bazi Y, Bashmal L, Al Rahhal MM, Ricci R, Melgani F (2024) Rs-llava: a large vision-language model for joint captioning and question answering in remote sensing imagery. Remote Sens 16(9):1477","journal-title":"Remote Sens"},{"issue":"22","key":"11190_CR3","doi-asserted-by":"publisher","first-page":"2199","DOI":"10.1001\/jama.2017.14585","volume":"318","author":"BE Bejnordi","year":"2017","unstructured":"Bejnordi BE, Veta M, Van Diest PJ, Van Ginneken B, Karssemeijer N, Litjens G, Van Der Laak JA, Hermsen M, Manson QF, Balkenhol M et al (2017) Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22):2199\u20132210","journal-title":"JAMA"},{"key":"11190_CR4","doi-asserted-by":"crossref","unstructured":"Caffagni D, Cocchi F, Moratelli N, Sarto S, Cornia M, Baraldi L, Cucchiara R (2024) Wiki-llava: Hierarchical retrieval-augmented generation for multimodal llms. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1818\u20131826","DOI":"10.1109\/CVPRW63382.2024.00188"},{"key":"11190_CR5","doi-asserted-by":"crossref","unstructured":"Cai M, Liu H, Mustikovela SK, Meyer GP, Chai Y, Park D, Lee YJ (2024) Making large multimodal models understand arbitrary visual prompts. 
In: IEEE Conference on Computer Vision and Pattern Recognition","DOI":"10.1109\/CVPR52733.2024.01227"},{"issue":"8","key":"11190_CR6","doi-asserted-by":"publisher","first-page":"865","DOI":"10.1016\/j.ccell.2022.07.004","volume":"40","author":"RJ Chen","year":"2022","unstructured":"Chen RJ, Lu MY, Williamson DF, Chen TY, Lipkova J, Noor Z, Shaban M, Shady M, Williams M, Joo B et al (2022a) Pan-cancer integrative histology-genomic analysis via multimodal deep learning. Cancer Cell 40(8):865\u2013878","journal-title":"Cancer Cell"},{"issue":"12","key":"11190_CR7","doi-asserted-by":"publisher","first-page":"1420","DOI":"10.1038\/s41551-022-00929-8","volume":"6","author":"C Chen","year":"2022","unstructured":"Chen C, Lu MY, Williamson DF, Chen TY, Schaumberg AJ, Mahmood F (2022b) Fast and scalable search of whole-slide images via self-supervised deep learning. Nature Biomed Eng 6(12):1420\u20131434","journal-title":"Nature Biomed Eng"},{"key":"11190_CR8","doi-asserted-by":"crossref","unstructured":"Chen J, Ouyang R, Gao A, Chen S, Chen GH, Wang X, Zhang R, Cai Z, Ji K, Yu G, et al (2024) Huatuogpt-vision, towards injecting medical visual knowledge into multimodal llms at scale. Preprint at arXiv:2406.19280","DOI":"10.18653\/v1\/2024.emnlp-main.418"},{"issue":"10","key":"11190_CR9","doi-asserted-by":"publisher","first-page":"1519","DOI":"10.1038\/s41591-019-0583-3","volume":"25","author":"P Courtiol","year":"2019","unstructured":"Courtiol P, Maussion C, Moarii M, Pronier E, Pilcer S, Sefta M, Manceron P, Toldo S, Zaslavskiy M, Le Stang N et al (2019) Deep learning-based classification of mesothelioma improves prediction of patient outcome. Nat Med 25(10):1519\u20131525","journal-title":"Nat Med"},{"key":"11190_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2024.102535","volume":"112","author":"D Dai","year":"2024","unstructured":"Dai D, Fu S, Liu Y, Wang G (2024) Vision-language joint representation learning for sketch less facial image retrieval. 
Inform Fusion 112:102535. https:\/\/doi.org\/10.1016\/j.inffus.2024.102535","journal-title":"Inform Fusion"},{"key":"11190_CR11","unstructured":"Du N, Huang Y, Dai AM, Tong S, Lepikhin D, Xu Y, Krikun M, Zhou Y, Yu AW, Firat O, et al (2022) Glam: Efficient scaling of language models with mixture-of-experts. In: International Conference on Machine Learning, pp. 5547\u20135569. PMLR"},{"key":"11190_CR12","doi-asserted-by":"crossref","unstructured":"He X (2021) Towards visual question answering on pathology images. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2","DOI":"10.18653\/v1\/2021.acl-short.90"},{"issue":"9","key":"11190_CR13","doi-asserted-by":"publisher","first-page":"2307","DOI":"10.1038\/s41591-023-02504-3","volume":"29","author":"Z Huang","year":"2023","unstructured":"Huang Z, Bianchi F, Yuksekgonul M, Montine TJ, Zou J (2023) A visual-language foundation model for pathology image analysis using medical twitter. Nat Med 29(9):2307\u20132316","journal-title":"Nat Med"},{"key":"11190_CR14","unstructured":"Ikezogwo W, Seyfioglu S, Ghezloo F, Geva D, Sheikh Mohammed F, Anand PK, Krishna R, Shapiro L (2024) Quilt-1m: One million image-text pairs for histopathology. Adv Neural Inform Process Syst. 36"},{"issue":"8","key":"11190_CR15","doi-asserted-by":"publisher","first-page":"789","DOI":"10.1038\/s43018-020-0087-6","volume":"1","author":"JN Kather","year":"2020","unstructured":"Kather JN, Heij LR, Grabsch HI, Loeffler C, Echle A, Muti HS, Krause J, Niehues JM, Sommer KA, Bankhead P et al (2020) Pan-cancer image-based detection of clinically actionable genetic alterations. Nature Cancer 1(8):789\u2013799","journal-title":"Nature Cancer"},{"key":"11190_CR16","doi-asserted-by":"crossref","unstructured":"Kuckreja K, Danish MS, Naseer M, Das A, Khan S, Khan FS (2024) Geochat: Grounded large vision-language model for remote sensing. 
In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 27831\u201327840","DOI":"10.1109\/CVPR52733.2024.02629"},{"key":"11190_CR19","unstructured":"Li J, Li D, Xiong C, Hoi S (2022) Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In: International Conference on Machine Learning, pp. 12888\u201312900. PMLR"},{"key":"11190_CR18","doi-asserted-by":"crossref","unstructured":"Li P, Liu G, He J, Zhao Z, Zhong S (2023a) Masked vision and language pre-training with unimodal and multimodal contrastive losses for medical visual question answering. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 374\u2013383. Springer","DOI":"10.1007\/978-3-031-43907-0_36"},{"key":"11190_CR20","doi-asserted-by":"crossref","unstructured":"Li P, Liu G, Tan L, Liao J, Zhong S (2023b) Self-supervised vision-language pretraining for medical visual question answering. In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), pp. 1\u20135. IEEE","DOI":"10.1109\/ISBI53787.2023.10230743"},{"key":"11190_CR17","doi-asserted-by":"crossref","unstructured":"Li C, Wong C, Zhang S, Usuyama N, Liu H, Yang J, Naumann T, Poon H, Gao J (2024) Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Adv Neural Inform Process Syst. 36","DOI":"10.32388\/VLXB6M"},{"key":"11190_CR21","doi-asserted-by":"crossref","unstructured":"Lin W, Zhao Z, Zhang X, Wu C, Zhang Y, Wang Y, Xie W (2023) Pmc-clip: contrastive language-image pre-training using biomedical documents. In: MICCAI","DOI":"10.1007\/978-3-031-43993-3_51"},{"key":"11190_CR22","doi-asserted-by":"crossref","unstructured":"Liu H, Li C, Wu Q, Lee YJ (2024) Visual instruction tuning. Adv Neural Inform Process Syst. 
36","DOI":"10.1007\/978-981-99-8079-6_1"},{"issue":"7861","key":"11190_CR23","doi-asserted-by":"publisher","first-page":"106","DOI":"10.1038\/s41586-021-03512-4","volume":"594","author":"MY Lu","year":"2021","unstructured":"Lu MY, Chen TY, Williamson DF, Zhao M, Shady M, Lipkova J, Mahmood F (2021a) Ai-based pathology predicts origins for cancers of unknown primary. Nature 594(7861):106\u2013110","journal-title":"Nature"},{"issue":"6","key":"11190_CR24","doi-asserted-by":"publisher","first-page":"555","DOI":"10.1038\/s41551-020-00682-w","volume":"5","author":"MY Lu","year":"2021","unstructured":"Lu MY, Williamson DF, Chen TY, Chen RJ, Barbieri M, Mahmood F (2021b) Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomed Eng 5(6):555\u2013570","journal-title":"Nature Biomed Eng"},{"key":"11190_CR25","doi-asserted-by":"crossref","unstructured":"Lu MY, Chen B, Williamson DF, Chen RJ, Zhao M, Chow AK, Ikemura K, Kim A, Pouli D, Patel A et al (2024) A multimodal generative ai copilot for human pathology. Nature. 56","DOI":"10.1038\/s41586-024-07618-3"},{"key":"11190_CR26","unstructured":"Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J et al (2021) Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning 56:8748\u20138763. PMLR"},{"key":"11190_CR27","unstructured":"Rahman TY (2019) A histopathological image repository of normal epithelium of oral cavity and oral squamous cell carcinoma. Mendeley Data. 1"},{"key":"11190_CR28","doi-asserted-by":"crossref","unstructured":"Rajbhandari S, Rasley J, Ruwase O, He Y (2020) Zero: memory optimizations toward training trillion parameter models. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1\u201316. 
IEEE","DOI":"10.1109\/SC41405.2020.00024"},{"issue":"1","key":"11190_CR29","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1038\/s41698-023-00365-0","volume":"7","author":"OL Saldanha","year":"2023","unstructured":"Saldanha OL, Loeffler CM, Niehues JM, Treeck M, Seraphin TP, Hewitt KJ, Cifci D, Veldhuizen GP, Ramesh S, Pearson AT et al (2023) Self-supervised attention-based deep learning for pan-cancer mutation prediction from histopathology. NPJ Precision Oncol 7(1):35","journal-title":"NPJ Precision Oncol"},{"key":"11190_CR30","doi-asserted-by":"crossref","unstructured":"Seyfioglu MS, Ikezogwo WO, Ghezloo F, Krishna R, Shapiro L (2024) Quilt-llava: Visual instruction tuning by extracting localized narratives from open-source histopathology videos. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 13183\u201313192","DOI":"10.1109\/CVPR52733.2024.01252"},{"key":"11190_CR31","doi-asserted-by":"crossref","unstructured":"Sharma D, Dhiman C, Kumar D (2024) Control with style: Style embedding-based variational autoencoder for controlled stylized caption generation framework. IEEE Transactions on Cognitive and Developmental Systems","DOI":"10.1109\/TCDS.2024.3405573"},{"issue":"10221","key":"11190_CR32","doi-asserted-by":"publisher","first-page":"350","DOI":"10.1016\/S0140-6736(19)32998-8","volume":"395","author":"O-J Skrede","year":"2020","unstructured":"Skrede O-J, De Raedt S, Kleppe A, Hveem TS, Liest\u00f8l K, Maddison J, Askautrud HA, Pradhan M, Nesheim JA, Albregtsen F et al (2020) Deep learning for prediction of colorectal cancer outcome: a discovery and validation study. 
Lancet 395(10221):350\u2013360","journal-title":"Lancet"},{"issue":"1","key":"11190_CR33","doi-asserted-by":"publisher","first-page":"15561","DOI":"10.1038\/s41598-024-66658-x","volume":"14","author":"Y Tan","year":"2024","unstructured":"Tan Y, Zhang W-H, Huang Z, Tan Q-X, Zhang Y-M, Wei C-Y, Feng Z-B (2024) Ai models predicting breast cancer distant metastasis using lightgbm with clinical blood markers and ultrasound maximum diameter. Sci Rep 14(1):15561","journal-title":"Sci Rep"},{"key":"11190_CR34","unstructured":"Team G, Zeng A, Xu B, Wang B, Zhang C, Yin D, Rojas D, Feng G, Zhao H, Lai H, et al (2024) Chatglm: a family of large language models from glm-130b to glm-4 all tools. arXiv e-prints, 2406"},{"key":"11190_CR35","doi-asserted-by":"crossref","unstructured":"Van Sonsbeek T, Derakhshani MM, Najdenkoska I, Snoek CG, Worring M (2023) Open-ended medical visual question answering through prefix tuning of language models. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 726\u2013736. Springer","DOI":"10.1007\/978-3-031-43904-9_70"},{"key":"11190_CR36","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2022.102645","volume":"83","author":"X Wang","year":"2023","unstructured":"Wang X, Du Y, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X (2023) Retccl: clustering-guided contrastive learning for whole-slide image retrieval. Med Image Anal 83:102645","journal-title":"Med Image Anal"},{"issue":"1","key":"11190_CR37","doi-asserted-by":"publisher","first-page":"148","DOI":"10.1109\/TMI.2022.3206605","volume":"42","author":"X Wei","year":"2022","unstructured":"Wei X, Liu Q, Liu M, Wang Y, Meijering E (2022) 3d soma detection in large-scale whole brain images via a two-stage neural network. 
IEEE Trans Med Imaging 42(1):148\u2013157","journal-title":"IEEE Trans Med Imaging"},{"key":"11190_CR38","unstructured":"Yang A, Yang B, Hui B, Zheng B, Yu B, Zhou C, Li C, Li C, Liu D, Huang F, et al (2024) Qwen2 technical report. Preprint at arXiv:2407.10671"},{"issue":"1","key":"11190_CR39","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1109\/TMI.2022.3204538","volume":"42","author":"W Yu","year":"2022","unstructured":"Yu W, Zheng H, Gu Y, Xie F, Yang J, Sun J, Yang G-Z (2022) Tnn: Tree neural network for airway anatomical labeling. IEEE Trans Med Imaging 42(1):103\u2013118","journal-title":"IEEE Trans Med Imaging"},{"key":"11190_CR40","unstructured":"Zhang S, Xu Y, Usuyama N, Xu H, Bagga J, Tinn R, Preston S, Rao R, Wei M, Valluri N, et al (2023a) Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. Preprint at arXiv:2303.00915"},{"key":"11190_CR41","unstructured":"Zhang X, Wu C, Zhao Z, Lin W, Zhang Y, Wang Y, Xie W (2023b) Pmc-vqa: Visual instruction tuning for medical visual question answering. Preprint at arXiv:2305.10415"},{"key":"11190_CR42","unstructured":"Zhang K, Yu J, Yan Z, Liu Y, Adhikarla E, Fu S, Chen X, Chen C, Zhou Y, Li X, et al (2023c) Biomedgpt: a unified and generalist biomedical generative pre-trained transformer for vision, language, and multimodal tasks. Preprint at arXiv:2305.17100"},{"issue":"1","key":"11190_CR43","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1109\/TMI.2022.3207093","volume":"42","author":"G Zhao","year":"2022","unstructured":"Zhao G, Liang K, Pan C, Zhang F, Wu X, Hu X, Yu Y (2022) Graph convolution based cross-network multiscale feature fusion for deep vessel segmentation. 
IEEE Trans Med Imaging 42(1):183\u2013195","journal-title":"IEEE Trans Med Imaging"},{"issue":"1","key":"11190_CR44","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1109\/TMI.2022.3204551","volume":"42","author":"R Zheng","year":"2022","unstructured":"Zheng R, Zhong Y, Yan S, Sun H, Shen H, Huang K (2022) Msvrl: self-supervised multiscale visual representation learning via cross-level consistency for medical image segmentation. IEEE Trans Med Imaging 42(1):91\u2013102","journal-title":"IEEE Trans Med Imaging"},{"key":"11190_CR45","doi-asserted-by":"publisher","DOI":"10.1016\/j.ebiom.2022.104426","volume":"87","author":"L Zhu","year":"2023","unstructured":"Zhu L, Shi H, Wei H, Wang C, Shi S, Zhang F, Yan R, Liu Y, He T, Wang L et al (2023) An accurate prediction of the origin for bone metastatic cancer using deep learning on digital pathological images. EBioMedicine 87:104426","journal-title":"EBioMedicine"}],"container-title":["Artificial Intelligence Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-025-11190-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10462-025-11190-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10462-025-11190-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,17]],"date-time":"2025-04-17T19:34:26Z","timestamp":1744918466000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10462-025-11190-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,28]]},"references-count":45,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["11190"],"URL":"https:\/\/doi.org\/1
0.1007\/s10462-025-11190-1","relation":{},"ISSN":["1573-7462"],"issn-type":[{"value":"1573-7462","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,28]]},"assertion":[{"value":"6 March 2025","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 March 2025","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"186"}}