{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,3]],"date-time":"2026-04-03T15:38:37Z","timestamp":1775230717338,"version":"3.50.1"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,12,30]],"date-time":"2024-12-30T00:00:00Z","timestamp":1735516800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,12,30]],"date-time":"2024-12-30T00:00:00Z","timestamp":1735516800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100013076","name":"National Major Science and Technology Projects of China","doi-asserted-by":"publisher","award":["2021ZD0110503"],"award-info":[{"award-number":["2021ZD0110503"]}],"id":[{"id":"10.13039\/501100013076","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Swiss National Science Foundation (SNSF) project","award":["200021E_219943"],"award-info":[{"award-number":["200021E_219943"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["92367204"],"award-info":[{"award-number":["92367204"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62306025"],"award-info":[{"award-number":["62306025"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Baidu Scholarship"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis. 
Intell."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language models (LLMs) and a popular LLM backbone of multi-modal large language models (MLLMs), widely used in computer vision and natural language understanding tasks. In particular, LLaMA3 models have recently been released and have achieved impressive performance in various domains with super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3\u2019s capabilities when quantized to low bit-widths. This exploration can potentially provide new insights and challenges for the low-bit quantization of LLaMA3 and other future LLMs, especially in addressing the performance degradation commonly suffered in LLM compression. Specifically, we comprehensively evaluate 10 existing post-training quantization and LoRA fine-tuning (LoRA-FT) methods on LLaMA3 at 1-8 bits across various datasets to reveal its low-bit quantization performance. To uncover the capabilities of low-bit quantized MLLMs, we assess the performance of the LLaMA3-based LLaVA-Next-8B model at 2-4 ultra-low bit-widths with post-training quantization methods. Our experimental results indicate that LLaMA3 still suffers from non-negligible degradation in linguistic and visual contexts, particularly under ultra-low bit-widths. This highlights the significant performance gap at low bit-widths that needs to be addressed in future developments. 
We expect that this empirical study will prove valuable in advancing future models, driving LLMs and MLLMs to achieve higher accuracy at lower bit-widths to enhance practicality.<\/jats:p>","DOI":"10.1007\/s44267-024-00070-x","type":"journal-article","created":{"date-parts":[[2024,12,30]],"date-time":"2024-12-30T10:29:55Z","timestamp":1735554595000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":22,"title":["An empirical study of LLaMA3 quantization: from LLMs to MLLMs"],"prefix":"10.1007","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0009-0007-9885-0028","authenticated-orcid":false,"given":"Wei","family":"Huang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0009-6283-7635","authenticated-orcid":false,"given":"Xingyu","family":"Zheng","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7985-3350","authenticated-orcid":false,"given":"Xudong","family":"Ma","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7391-7539","authenticated-orcid":false,"given":"Haotong","family":"Qin","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9599-6557","authenticated-orcid":false,"given":"Chengtao","family":"Lv","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0001-9658-0593","authenticated-orcid":false,"given":"Hong","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4157-9931","authenticated-orcid":false,"given":"Jie","family":"Luo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4285-1626","authenticated-orcid":false,"given":"Xiaojuan","family":"Qi","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7618-3275","authenticated-orcid":false,"given":"Xianglong","family":"Liu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-
0003-0368-8923","authenticated-orcid":false,"given":"Michele","family":"Magno","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,12,30]]},"reference":[{"key":"70_CR1","unstructured":"Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., et\u00a0al. (2023). LLaMA: open and efficient foundation language models. arXiv preprint. arXiv:2302.13971."},{"key":"70_CR2","first-page":"5998","volume-title":"Proceedings of the 31st international conference on neural information processing systems","author":"A. Vaswani","year":"2017","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., et al. (2017). Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, et al. (Eds.), Proceedings of the 31st international conference on neural information processing systems (pp. 5998\u20136008). Red Hook: Curran Associates."},{"key":"70_CR3","unstructured":"Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et\u00a0al. (2024). The llama 3 herd of models. arXiv preprint. arXiv:2407.21783."},{"key":"70_CR4","first-page":"1","volume-title":"Proceedings of the 37th international conference on neural information processing systems","author":"H. Liu","year":"2023","unstructured":"Liu, H., Li, C., Wu, Q., & Lee, Y.J. (2023). Visual instruction tuning. In A. Oh, T. Neumann, A. Globerson, et al. (Eds.), Proceedings of the 37th international conference on neural information processing systems (pp. 1\u201325). Red Hook: Curran Associates."},{"key":"70_CR5","first-page":"38087","volume-title":"Proceedings of the international conference on machine learning","author":"G. Xiao","year":"2023","unstructured":"Xiao, G., Lin, J., Seznec, M., Wu, H., Demouth, J., & Han, S. (2023). SmoothQuant: accurate and efficient post-training quantization for large language models. 
In Proceedings of the international conference on machine learning (pp. 38087\u201338099). Retrieved November 10, 2024, from https:\/\/proceedings.mlr.press\/v202\/xiao23c.html."},{"key":"70_CR6","first-page":"1","volume-title":"Proceedings of the 37th international conference on neural information processing systems","author":"H. Qin","year":"2023","unstructured":"Qin, H., Zhang, Y., Ding, Y., Liu, X., Danelljan, M., Yu, F., et al. (2023). QuantSR: accurate low-bit quantization for efficient image super-resolution. In A. Oh, T. Neumann, A. Globerson, et al. (Eds.), Proceedings of the 37th international conference on neural information processing systems (pp. 1\u201311). Red Hook: Curran Associates."},{"key":"70_CR7","first-page":"2704","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"B. Jacob","year":"2018","unstructured":"Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., et al. (2018). Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 2704\u20132713). Piscataway: IEEE."},{"key":"70_CR8","unstructured":"Huang, W., Qin, H., Liu, Y., Li, Y., Liu, X., Benini, L., et\u00a0al. (2024). SliM-LLM: Salience-driven mixed-precision quantization for large language models. arXiv preprint. arXiv:2405.14917."},{"key":"70_CR9","first-page":"7197","volume-title":"International conference on machine learning","author":"M. Nagel","year":"2020","unstructured":"Nagel, M., Amjad, R.A., Van Baalen, M., Louizos, C., & Blankevoort, T. (2020). Up or down? Adaptive rounding for post-training quantization. In International conference on machine learning (pp. 7197\u20137206). PMLR."},{"key":"70_CR10","unstructured":"Frantar, E., Ashkboos, S., Hoefler, T., & Alistarh, D. (2022). GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint. 
arXiv:2210.17323."},{"key":"70_CR11","first-page":"87","volume-title":"Proceedings of machine learning and systems","author":"J. Lin","year":"2024","unstructured":"Lin, J., Tang, J., Tang, H., Yang, S., Chen, W.-M., Xiao, G., et al. (2024). AWQ: activation-aware weight quantization for on-device LLM compression and acceleration. In P. B. Gibbons, G. Pekhimenko, & C. de Sa (Eds.), Proceedings of machine learning and systems (pp. 87\u2013100). Retrieved November 10, 2024, from https:\/\/proceedings.mlsys.org\/paper_files\/paper\/2024\/hash\/42a452cbafa9dd64e9ba4aa95cc1ef21-Abstract-Conference.html."},{"key":"70_CR12","first-page":"1","volume-title":"Proceedings of the 12th international conference on learning representations","author":"Y. Shang","year":"2024","unstructured":"Shang, Y., Yuan, Z., Wu, Q., & Dong, Z. (2024). PB-LLM: partially binarized large language models. In Proceedings of the 12th international conference on learning representations (pp. 1\u201314). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=BifeBRhikU."},{"key":"70_CR13","first-page":"1","volume-title":"Proceedings of the 37th international conference on neural information processing systems","author":"J. Chee","year":"2024","unstructured":"Chee, J., Cai, Y., Kuleshov, V., & De Sa, C. (2024). QuIP: 2-bit quantization of large language models with guarantees. In A. Oh, T. Neumann, A. Globerson, et al. (Eds.), Proceedings of the 37th international conference on neural information processing systems (pp. 1\u201334). Red Hook: Curran Associates."},{"key":"70_CR14","first-page":"8719","volume-title":"Findings of the association for computational linguistics","author":"H. Chen","year":"2024","unstructured":"Chen, H., Lv, C., Ding, L., Qin, H., Zhou, X., Ding, Y., et al. (2024). DB-LLM: accurate dual-binarization for efficient LLMs. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the association for computational linguistics (pp. 8719\u20138730). 
Stroudsburg: ACL."},{"key":"70_CR15","first-page":"1","volume-title":"Proceedings of the 41st international conference on machine learning","author":"W. Huang","year":"2024","unstructured":"Huang, W., Liu, Y., Qin, H., Li, Y., Zhang, S., Liu, X., et al. (2024). BiLLM: pushing the limit of post-training quantization for LLMs. In Proceedings of the 41st international conference on machine learning (pp. 1\u201320). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=qOl2WWOqFg."},{"key":"70_CR16","first-page":"1","volume-title":"Proceedings of the 37th international conference on neural information processing systems","author":"T. Dettmers","year":"2024","unstructured":"Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2024). QLoRA: efficient finetuning of quantized LLMs. In A. Oh, T. Neumann, A. Globerson, et al. (Eds.), Proceedings of the 37th international conference on neural information processing systems (pp. 1\u201328). Red Hook: Curran Associates."},{"key":"70_CR17","first-page":"1","volume-title":"Proceedings of the 41st international conference on machine learning","author":"H. Qin","year":"2024","unstructured":"Qin, H., Ma, X., Zheng, X., Li, X., Zhang, Y., Liu, S., et al. (2024). Accurate LoRA-finetuning quantization of LLMs via information retention. In Proceedings of the 41st international conference on machine learning (pp. 1\u201319). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=jQ92egz5Ym."},{"key":"70_CR18","first-page":"1","volume-title":"Proceedings of the 5th international conference on learning representations","author":"S. Merity","year":"2016","unstructured":"Merity, S., Xiong, C., Bradbury, J., & Socher, R. (2016). Pointer sentinel mixture models. In Proceedings of the 5th international conference on learning representations (pp. 1\u201315). 
Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=Byj72udxe."},{"issue":"1","key":"70_CR19","first-page":"5485","volume":"21","author":"C. Raffel","year":"2020","unstructured":"Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., et al. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(1), 5485\u20135551.","journal-title":"Journal of Machine Learning Research"},{"key":"70_CR20","doi-asserted-by":"publisher","first-page":"114","DOI":"10.3115\/1075812.1075835","volume-title":"Proceedings of human language technology workshop","author":"M. Marcus","year":"1994","unstructured":"Marcus, M., Kim, G., Marcinkiewicz, M. A., MacIntyre, R., Bies, A., Ferguson, M., et al. (1994). The Penn Treebank: annotating predicate argument structure. In Proceedings of human language technology workshop (pp. 114\u2013119). San Francisco: Morgan Kaufmann."},{"key":"70_CR21","first-page":"7432","volume-title":"Proceedings of the 34th AAAI conference on artificial intelligence","author":"Y. Bisk","year":"2020","unstructured":"Bisk, Y., Zellers, R., Le Bras, R., Gao, J., & Choi, Y. (2020). PIQA: reasoning about physical commonsense in natural language. In Proceedings of the 34th AAAI conference on artificial intelligence (pp. 7432\u20137439). Palo Alto: AAAI Press."},{"key":"70_CR22","unstructured":"Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., et\u00a0al. (2018). Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint. arXiv:1803.05457."},{"key":"70_CR23","first-page":"4791","volume-title":"Proceedings of the 57th conference of the association for computational linguistics","author":"R. Zellers","year":"2019","unstructured":"Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., & Choi, Y. (2019). HellaSwag: can a machine really finish your sentence? In A. Korhonen, D. R. Traum, & L. 
M\u00e0rquez (Eds.), Proceedings of the 57th conference of the association for computational linguistics (pp. 4791\u20134800). Stroudsburg: ACL."},{"issue":"9","key":"70_CR24","doi-asserted-by":"publisher","first-page":"99","DOI":"10.1145\/3474381","volume":"64","author":"K. Sakaguchi","year":"2021","unstructured":"Sakaguchi, K., Le Bras, R., Bhagavatula, C., & Choi, Y. (2021). WinoGrande: an adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9), 99\u2013106.","journal-title":"Communications of the ACM"},{"key":"70_CR25","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR)","author":"D. Hendrycks","year":"2021","unstructured":"Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J. (2021). Measuring massive multitask language understanding. In Proceedings of the International Conference on Learning Representations (ICLR)."},{"key":"70_CR26","doi-asserted-by":"crossref","unstructured":"Kembhavi, A., Salvato, M., Kolve, E., Seo, M., Hajishirzi, H., & Farhadi, A. (2016). A diagram is worth a dozen images.","DOI":"10.1007\/978-3-319-46493-0_15"},{"key":"70_CR27","doi-asserted-by":"crossref","unstructured":"Masry, A., Long, D.X., Tan, J.Q., Joty, S., & Hoque, E. (2022). ChartQA: a benchmark for question answering about charts with visual and logical reasoning. arXiv preprint. arXiv:2203.10244.","DOI":"10.18653\/v1\/2022.findings-acl.177"},{"key":"70_CR28","first-page":"2200","volume-title":"Proceedings of the IEEE\/CVF winter conference on applications of computer vision","author":"M. Mathew","year":"2021","unstructured":"Mathew, M., Karatzas, D., & Jawahar, C. V. (2021). DocVQA: a dataset for VQA on document images. In Proceedings of the IEEE\/CVF winter conference on applications of computer vision (pp. 2200\u20132209)."},{"key":"70_CR29","unstructured":"Fu, C., Chen, P., Shen, Y., Qin, Y., Zhang, M., Lin, X., Yang, J., Zheng, X., Li, K., Sun, X., Wu, Y., & Ji, R. (2024). 
MME: a comprehensive evaluation benchmark for multimodal large language models."},{"key":"70_CR30","first-page":"216","volume-title":"European conference on computer vision","author":"Y. Liu","year":"2025","unstructured":"Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C., Liu, Z., et al. (2025). MMBench: is your multi-modal model an all-around player? In European conference on computer vision (pp. 216\u2013233). Berlin: Springer."},{"key":"70_CR31","first-page":"1","volume-title":"Proceedings of the 9th international conference on learning representations","author":"D. Hendrycks","year":"2021","unstructured":"Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., et al. (2021). Measuring massive multitask language understanding. In Proceedings of the 9th international conference on learning representations (pp. 1\u201327). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=d7KBjmI3GmQ."},{"key":"70_CR32","first-page":"467","volume-title":"Findings of the association for computational linguistics","author":"Z. Liu","year":"2024","unstructured":"Liu, Z., Oguz, B., Zhao, C., Chang, E., Stock, P., Mehdad, Y., et al. (2024). LLM-QAT: data-free quantization aware training for large language models. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the association for computational linguistics (pp. 467\u2013484). Stroudsburg: ACL."},{"key":"70_CR33","unstructured":"Shao, W., Chen, M., Zhang, Z., Xu, P., Zhao, L., Li, Z., Zhang, K., Gao, P., Qiao, Y., & Luo, P. (2023). OmniQuant: omnidirectionally calibrated quantization for large language models. arXiv preprint. arXiv:2308.13137."},{"key":"70_CR34","unstructured":"Hu, X., Cheng, Y., Yang, D., Yuan, Z., Yu, J., Xu, C., & Zhou, S. (2024). I-LLM: efficient integer-only inference for fully-quantized low-bit large language models. arXiv preprint. 
arXiv:2405.17849."},{"key":"70_CR35","unstructured":"Liu, Z., Zhao, C., Fedorov, I., Soran, B., Choudhary, D., Krishnamoorthi, R., Chandra, V., Tian, Y., & Blankevoort, T. (2024). SpinQuant: LLM quantization with learned rotations. arXiv preprint. arXiv:2405.16406."},{"key":"70_CR36","unstructured":"Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., et\u00a0al. (2023). Stanford Alpaca: an instruction-following LLaMA model. Retrieved November 10, 2024, from https:\/\/github.com\/tatsu-lab\/stanford_alpaca."},{"key":"70_CR37","first-page":"1","volume-title":"Proceedings of the 10th international conference on learning representations","author":"E. J. Hu","year":"2022","unstructured":"Hu, E. J., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al. (2022). LoRA: low-rank adaptation of large language models. In Proceedings of the 10th international conference on learning representations (pp. 1\u201313). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=nZeVKeeFYf9."},{"key":"70_CR38","first-page":"1","volume-title":"Proceedings of the 12th international conference on learning representations","author":"Y. Xu","year":"2024","unstructured":"Xu, Y., Xie, L., Gu, X., Chen, X., Chang, H., Zhang, H., et al. (2024). QA-LoRA: quantization-aware low-rank adaptation of large language models. In Proceedings of the 12th international conference on learning representations (pp. 1\u201318). Retrieved November 10, 2024, from https:\/\/openreview.net\/forum?id=WvFoJccpo8."},{"key":"70_CR39","first-page":"26689","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"J. Lin","year":"2024","unstructured":"Lin, J., Yin, H., Ping, W., Molchanov, P., Shoeybi, M., & Song, H. (2024). VILA: on pre-training for visual language models. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 26689\u201326699). 
Piscataway: IEEE."}],"container-title":["Visual Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-024-00070-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44267-024-00070-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44267-024-00070-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,30]],"date-time":"2024-12-30T11:03:21Z","timestamp":1735556601000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44267-024-00070-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,30]]},"references-count":39,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["70"],"URL":"https:\/\/doi.org\/10.1007\/s44267-024-00070-x","relation":{},"ISSN":["2731-9008"],"issn-type":[{"value":"2731-9008","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,30]]},"assertion":[{"value":"27 August 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 December 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 December 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 December 2024","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors certify that they have no affiliations with or involvement in any organization 
or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"36"}}