{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,9]],"date-time":"2026-03-09T13:30:59Z","timestamp":1773063059738,"version":"3.50.1"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2025,4,2]],"date-time":"2025-04-02T00:00:00Z","timestamp":1743552000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"},{"start":{"date-parts":[[2025,4,2]],"date-time":"2025-04-02T00:00:00Z","timestamp":1743552000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"}],"funder":[{"name":"Liaoning Provincial Science and Technology Innovation Project in the Field of Artificial Intelligence","award":["Grant no. 2023JH26\/10100005"],"award-info":[{"award-number":["Grant no. 2023JH26\/10100005"]}]},{"DOI":"10.13039\/501100018559","name":"Shenyang Young and Middle-aged Science and Technology Innovation Talent Support Program","doi-asserted-by":"publisher","award":["RC220414"],"award-info":[{"award-number":["RC220414"]}],"id":[{"id":"10.13039\/501100018559","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. 
Syst."],"published-print":{"date-parts":[[2025,5]]},"DOI":"10.1007\/s40747-025-01833-9","type":"journal-article","created":{"date-parts":[[2025,4,4]],"date-time":"2025-04-04T05:55:56Z","timestamp":1743746156000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Reducing hallucinations of large language models via hierarchical semantic piece"],"prefix":"10.1007","volume":"11","author":[{"given":"Yanyi","family":"Liu","sequence":"first","affiliation":[]},{"given":"Qingwen","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Jiawei","family":"Tang","sequence":"additional","affiliation":[]},{"given":"Tiezheng","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Chen","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Pan","family":"Li","sequence":"additional","affiliation":[]},{"given":"Sai","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Xianlin","family":"Gao","sequence":"additional","affiliation":[]},{"given":"Zhi","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jun","family":"Liu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6659-1785","authenticated-orcid":false,"given":"Yingyou","family":"Wen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,2]]},"reference":[{"key":"1833_CR1","unstructured":"Zhao WX, Zhou K, Li J, Tang T, Wang X, Hou Y, Min Y, Zhang B, Zhang J, Dong Z, Du Y Yang C, Chen Y, Chen Z, Jiang J, Ren R, Li Y, Tang X, Liu Z, Liu P, Nie J-Y, Wen J-R (2023) A survey of large language models. 
arXiv:2303.18223"},{"issue":"2","key":"1833_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3605943","volume":"56","author":"B Min","year":"2023","unstructured":"Min B, Ross H, Sulem E, Veyseh A, Nguyen TH, Sainz O, Agirre E, Heintz I, Roth D (2023) Recent advances in natural language processing via large pre-trained language models: a survey. ACM Comput Surv 56(2):1\u201340","journal-title":"ACM Comput Surv"},{"key":"1833_CR3","unstructured":"Morris MR, Sohl-Dickstein J, Fiedel N, Warkentin T, Dafoe A, Faust A, Farabet C, Legg S (2023) Levels of agi: operationalizing progress on the path to agi. arXiv:2311.02462"},{"key":"1833_CR4","unstructured":"Zhang C, Zhang C, Li C, Qiao Y, Zheng S, Dam SK, Zhang M, Kim JU, Kim ST, Choi J et al (2023) One small step for generative ai, one giant leap for agi: a complete survey on chatgpt in aigc era. arXiv:2304.06488"},{"key":"1833_CR5","unstructured":"Kaddour J, Harris J, Mozes M, Bradley H, Raileanu R, McHardy R (2023) Challenges and applications of large language models. arXiv:2307.10169"},{"issue":"12","key":"1833_CR6","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3571730","volume":"55","author":"Z Ji","year":"2023","unstructured":"Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang YJ, Madotto A, Fung P (2023) Survey of hallucination in natural language generation. ACM Comput Surv 55(12):1\u201338","journal-title":"ACM Comput Surv"},{"key":"1833_CR7","unstructured":"Zhang Y, Li Y, Cui L, Cai D, Liu L, Fu T, Huang X, Zhao E, Zhang Y, Chen Y et al (2023) Siren\u2019s song in the ai ocean: a survey on hallucination in large language models. arXiv:2309.01219"},{"key":"1833_CR8","unstructured":"Achiam J, Adler S, Agarwal S, Ahmad L, Akkaya I, Aleman FL, Almeida D, Altenschmidt J, Altman S, Anadkat S et al (2023) Gpt-4 technical report. 
arXiv:2303.08774"},{"key":"1833_CR9","unstructured":"Team G, Anil R, Borgeaud S, Wu Y, Alayrac J-B, Yu J, Soricut R, Schalkwyk J, Dai AM, Hauth A et al (2023) Gemini: a family of highly capable multimodal models. arXiv:2312.11805"},{"key":"1833_CR10","unstructured":"Tonmoy S, Zaman S, Jain V, Rani A, Rawte V, Chadha A, Das A (2024) A comprehensive survey of hallucination mitigation techniques in large language models. arXiv:2401.01313"},{"key":"1833_CR11","first-page":"79155","volume":"36","author":"G Penedo","year":"2023","unstructured":"Penedo G, Malartic Q, Hesslow D, Cojocaru R, Alobeidli H, Cappelli A, Pannier B, Almazrouei E, Launay J (2023) The refinedweb dataset for falcon llm: outperforming curated corpora with web data only. Adv Neural Inf Process Syst 36:79155\u201379172","journal-title":"Adv Neural Inf Process Syst"},{"key":"1833_CR12","unstructured":"Tian K, Mitchell E, Yao H, Manning CD, Finn C (2023) Fine-tuning language models for factuality. arXiv:2311.08401"},{"key":"1833_CR13","unstructured":"Wan F, Huang X, Cui L, Quan X, Bi W, Shi S (2024) Mitigating hallucinations of large language models via knowledge consistent alignment. arXiv:2401.10768"},{"key":"1833_CR14","doi-asserted-by":"crossref","unstructured":"Sun Z, Shen S, Cao S, Liu H, Li C, Shen Y, Gan C, Gui L-Y, Wang Y-X, Yang Y et al (2023) Aligning large multimodal models with factually augmented rlhf. arXiv:2309.14525","DOI":"10.18653\/v1\/2024.findings-acl.775"},{"key":"1833_CR15","unstructured":"Li K, Patel O, Vi\u00e9gas F, Pfister H, Wattenberg M (2024) Inference-time intervention: eliciting truthful answers from a language model. In: Advances in neural information processing systems, vol 36"},{"key":"1833_CR16","unstructured":"Gou Z, Shao Z, Gong Y, Shen Y, Yang Y, Duan N, Chen W (2024) CRITIC: large language models can self-correct with tool-interactive critiquing. In: The 12th international conference on learning representations. 
https:\/\/openreview.net\/forum?id=Sx038qxjek"},{"key":"1833_CR17","doi-asserted-by":"crossref","unstructured":"Manakul P, Liusie A, Gales MJ (2023) Selfcheckgpt: zero-resource black-box hallucination detection for generative large language models. arXiv:2303.08896","DOI":"10.18653\/v1\/2023.emnlp-main.557"},{"key":"1833_CR18","doi-asserted-by":"crossref","unstructured":"Hu L, Liu Z, Zhao Z, Hou L, Nie L, Li J (2023) A survey of knowledge enhanced pre-trained language models. IEEE Trans Knowl Data Eng","DOI":"10.1109\/TKDE.2023.3310002"},{"key":"1833_CR19","unstructured":"Wang C, Liu X, Yue Y, Tang X, Zhang T, Jiayang C, Yao Y, Gao W, Hu X, Qi Z et al (2023) Survey on factuality in large language models: knowledge, retrieval and domain-specificity. arXiv:2310.07521"},{"key":"1833_CR20","doi-asserted-by":"crossref","unstructured":"Shuster K, Poff S, Chen M, Kiela D, Weston J (2021) Retrieval augmentation reduces hallucination in conversation. arXiv:2104.07567","DOI":"10.18653\/v1\/2021.findings-emnlp.320"},{"key":"1833_CR21","doi-asserted-by":"crossref","unstructured":"Novelli C, Casolari F, Hacker P, Spedicato G, Floridi L (2024) Generative ai in eu law: liability, privacy, intellectual property, and cybersecurity. arXiv:2401.07348","DOI":"10.2139\/ssrn.4821952"},{"issue":"8","key":"1833_CR22","doi-asserted-by":"publisher","first-page":"1930","DOI":"10.1038\/s41591-023-02448-8","volume":"29","author":"AJ Thirunavukarasu","year":"2023","unstructured":"Thirunavukarasu AJ, Ting D, Elangovan K, Gutierrez L, Tan TF, Ting D (2023) Large language models in medicine. Nat Med 29(8):1930\u20131940","journal-title":"Nat Med"},{"issue":"2","key":"1833_CR23","first-page":"550","volume":"3","author":"D Paul","year":"2023","unstructured":"Paul D, Namperumal G, Surampudi Y (2023) Optimizing llm training for financial services: best practices for model accuracy, risk management, and compliance in ai-powered financial applications. 
J Artif Intell Res Appl 3(2):550\u2013588","journal-title":"J Artif Intell Res Appl"},{"key":"1833_CR24","doi-asserted-by":"crossref","unstructured":"Yao Y, Duan J, Xu K, Cai Y, Sun Z, Zhang Y (2024) A survey on large language model (llm) security and privacy: the good, the bad, and the ugly. High-Confid Comput 100211","DOI":"10.1016\/j.hcc.2024.100211"},{"key":"1833_CR25","unstructured":"Urlana A, Kumar CV, Singh AK, Garlapati BM, Chalamala SR, Mishra R (2024) Llms with industrial lens: deciphering the challenges and prospects\u2013a survey. arXiv:2402.14558"},{"key":"1833_CR26","unstructured":"Touvron H, Martin L, Stone K, Albert P, Almahairi A, Babaei Y, Bashlykov N, Batra S, Bhargava P, Bhosale S et al (2023) Llama 2: open foundation and fine-tuned chat models. arXiv:2307.09288"},{"key":"1833_CR27","unstructured":"Jiang AQ, Sablayrolles A, Mensch A, Bamford C, Chaplot DS, Casas Ddl, Bressand F, Lengyel G, Lample G, Saulnier L et al (2023) Mistral 7b. arXiv:2310.06825"},{"key":"1833_CR28","unstructured":"Bai J, Bai S, Chu Y, Cui Z, Dang K, Deng X, Fan Y, Ge W, Han Y, Huang F et al (2023) Qwen technical report. arXiv:2309.16609"},{"key":"1833_CR29","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805"},{"key":"1833_CR30","doi-asserted-by":"crossref","unstructured":"Marvin G, Hellen N, Jjingo D, Nakatumba-Nabende J (2023) Prompt engineering in large language models. In: International conference on data intelligence and cognitive informatics. Springer, pp 387\u2013402","DOI":"10.1007\/978-981-99-7962-2_30"},{"key":"1833_CR31","first-page":"9459","volume":"33","author":"P Lewis","year":"2020","unstructured":"Lewis P, Perez E, Piktus A, Petroni F, Karpukhin V, Goyal N, K\u00fcttler H, Lewis M, Yih W-T, Rockt\u00e4schel T et al (2020) Retrieval-augmented generation for knowledge-intensive nlp tasks. 
Adv Neural Inf Process Syst 33:9459\u20139474","journal-title":"Adv Neural Inf Process Syst"},{"key":"1833_CR32","unstructured":"Li H, Su Y, Cai D, Wang Y, Liu L (2022) A survey on retrieval-augmented text generation. arXiv:2202.01110"},{"key":"1833_CR33","unstructured":"Xi Z, Chen W, Guo X, He W, Ding Y, Hong B, Zhang M, Wang J, Jin S, Zhou E et al (2023) The rise and potential of large language model based agents: a survey. arXiv:2309.07864"},{"key":"1833_CR34","doi-asserted-by":"crossref","unstructured":"Guo T, Chen X, Wang Y, Chang R, Pei S, Chawla NV, Wiest O, Zhang X (2024) Large language model based multi-agents: a survey of progress and challenges. arXiv:2402.01680","DOI":"10.24963\/ijcai.2024\/890"},{"key":"1833_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2023.111165","volume":"151","author":"W Chen","year":"2024","unstructured":"Chen W, Yan-yi L, Tie-zheng G, Da-peng L, Tao H, Zhi L, Qing-wen Y, Hui-han W, Ying-you W (2024) Systems engineering issues for industry applications of large language model. Appl Soft Comput 151:111165","journal-title":"Appl Soft Comput"},{"key":"1833_CR36","unstructured":"Asai A, Wu Z, Wang Y, Sil A, Hajishirzi H (2023) Self-rag: learning to retrieve, generate, and critique through self-reflection. arXiv:2310.11511"},{"key":"1833_CR37","doi-asserted-by":"crossref","unstructured":"Ji Z, Yu T, Xu Y, Lee N, Ishii E, Fung P (2023) Towards mitigating llm hallucination via self reflection. In: Findings of the association for computational linguistics: EMNLP 2023, pp 1827\u20131843","DOI":"10.18653\/v1\/2023.findings-emnlp.123"},{"key":"1833_CR38","first-page":"24824","volume":"35","author":"J Wei","year":"2022","unstructured":"Wei J, Wang X, Schuurmans D, Bosma M, Xia F, Chi E, Le QV, Zhou D et al (2022) Chain-of-thought prompting elicits reasoning in large language models. 
Adv Neural Inf Process Syst 35:24824\u201324837","journal-title":"Adv Neural Inf Process Syst"},{"key":"1833_CR39","doi-asserted-by":"crossref","unstructured":"Lyu Q, Havaldar S, Stein A, Zhang L, Rao D, Wong E, Apidianaki M, Callison-Burch C (2023) Faithful chain-of-thought reasoning. arXiv:2301.13379","DOI":"10.18653\/v1\/2023.ijcnlp-main.20"},{"key":"1833_CR40","unstructured":"Waldendorf J, Haddow B, Birch A (2024) Contrastive decoding reduces hallucinations in large multilingual machine translation models. In: Proceedings of the 18th conference of the European chapter of the association for computational linguistics (volume 1: long papers), pp 2526\u20132539"},{"key":"1833_CR41","unstructured":"Li D, Sun Z, Hu X, Liu Z, Chen Z, Hu B, Wu A, Zhang M (2023) A survey of large language models attribution. arXiv:2311.03731"},{"key":"1833_CR42","doi-asserted-by":"crossref","unstructured":"Min S, Krishna K, Lyu X, Lewis M, Yih W-t, Koh PW, Iyyer M, Zettlemoyer L, Hajishirzi H (2023) Factscore: fine-grained atomic evaluation of factual precision in long form text generation. arXiv:2305.14251","DOI":"10.18653\/v1\/2023.emnlp-main.741"},{"key":"1833_CR43","unstructured":"Wei J, Yang C, Song X, Lu Y, Hu N, Tran D, Peng D, Liu R, Huang D, Du C et al (2024) Long-form factuality in large language models. arXiv:2403.18802"},{"key":"1833_CR44","doi-asserted-by":"publisher","unstructured":"Gao L, Dai Z, Pasupat P, Chen A, Chaganty AT, Fan Y, Zhao V, Lao N, Lee H, Juan D-C, Guu K (2023) RARR: researching and revising what language models say, using language models. In: Rogers A, Boyd-Graber J, Okazaki N (eds) Proceedings of the 61st annual meeting of the association for computational linguistics (volume 1: long papers). Association for Computational Linguistics, Toronto, pp 16477\u201316508. https:\/\/doi.org\/10.18653\/v1\/2023.acl-long.910. 
https:\/\/aclanthology.org\/2023.acl-long.910","DOI":"10.18653\/v1\/2023.acl-long.910"},{"key":"1833_CR45","unstructured":"Lei D, Li Y, Hu M, Wang M, Yun X (2023) Chain of natural language inference for reducing large language model hallucinations. In: NeurIPS 2023 workshop on instruction tuning and instruction following"},{"key":"1833_CR46","doi-asserted-by":"crossref","unstructured":"Lu Y, Liu Q, Dai D, Xiao X, Lin H, Han X, Sun L, Wu H (2022) Unified structure generation for universal information extraction. arXiv:2203.12277","DOI":"10.18653\/v1\/2022.acl-long.395"},{"key":"1833_CR47","unstructured":"Yang A, Yang B, Hui B, Zheng B, Yu B, Zhou C, Li C, Li C, Liu D, Huang F et al (2024) Qwen2 technical report. arXiv:2407.10671"},{"key":"1833_CR48","unstructured":"Zhang T, Kishore V, Wu F, Weinberger KQ, Artzi Y (2019) Bertscore: evaluating text generation with bert. arXiv:1904.09675"},{"key":"1833_CR49","doi-asserted-by":"publisher","unstructured":"Zha Y, Yang Y, Li R, Hu Z (2023) AlignScore: evaluating factual consistency with a unified alignment function. In: Rogers A, Boyd-Graber J, Okazaki N (eds) Proceedings of the 61st annual meeting of the association for computational linguistics (volume 1: long papers). Association for Computational Linguistics, Toronto, pp 11328\u201311348. https:\/\/doi.org\/10.18653\/v1\/2023.acl-long.634. https:\/\/aclanthology.org\/2023.acl-long.634","DOI":"10.18653\/v1\/2023.acl-long.634"},{"key":"1833_CR50","doi-asserted-by":"crossref","unstructured":"Kry\u015bci\u0144ski W, McCann B, Xiong C, Socher R (2019) Evaluating the factual consistency of abstractive text summarization. arXiv:1910.12840","DOI":"10.18653\/v1\/2020.emnlp-main.750"},{"key":"1833_CR51","doi-asserted-by":"publisher","unstructured":"Wang A, Cho K, Lewis M (2020) Asking and answering questions to evaluate the factual consistency of summaries. 
In: Jurafsky D, Chai J, Schluter N, Tetreault J (eds) Proceedings of the 58th annual meeting of the association for computational linguistics. Association for Computational Linguistics, pp 5008\u20135020. https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.450. https:\/\/aclanthology.org\/2020.acl-main.450","DOI":"10.18653\/v1\/2020.acl-main.450"},{"key":"1833_CR52","doi-asserted-by":"crossref","unstructured":"Fabbri AR, Kry\u015bci\u0144ski W, McCann B, Xiong C, Socher R, Radev D (2020) Summeval: re-evaluating summarization evaluation. arXiv:2007.12626","DOI":"10.1162\/tacl_a_00373"},{"key":"1833_CR53","unstructured":"Ravi SS, Mielczarek B, Kannappan A, Kiela D, Qian R (2024) Lynx: an open source hallucination evaluation model. arXiv:2407.08488"},{"key":"1833_CR54","doi-asserted-by":"crossref","unstructured":"Honovich O, Aharoni R, Herzig J, Taitelbaum H, Kukliansy D, Cohen V, Scialom T, Szpektor I, Hassidim A, Matias Y (2022) True: re-evaluating factual consistency evaluation. arXiv:2204.04991","DOI":"10.18653\/v1\/2022.naacl-main.287"},{"key":"1833_CR55","doi-asserted-by":"crossref","unstructured":"Chen J, Xiao S, Zhang P, Luo K, Lian D, Liu Z (2024) Bge m3-embedding: multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. arXiv:2402.03216","DOI":"10.18653\/v1\/2024.findings-acl.137"},{"key":"1833_CR56","unstructured":"Meng R, Liu Y, Joty SR, Xiong C, Zhou Y, Yavuz S (2024) Sfrembedding-mistral: enhance text retrieval with transfer learning. Salesforce AI Research Blog, vol 3"},{"key":"1833_CR57","doi-asserted-by":"publisher","unstructured":"Li J, Cheng X, Zhao X, Nie J-Y, Wen J-R (2023) HaluEval: a large-scale hallucination evaluation benchmark for large language models. In: Bouamor H, Pino J, Bali K (eds) Proceedings of the 2023 conference on empirical methods in natural language processing. Association for Computational Linguistics, Singapore, pp 6449\u20136464. 
https:\/\/doi.org\/10.18653\/v1\/2023.emnlp-main.397. https:\/\/aclanthology.org\/2023.emnlp-main.397","DOI":"10.18653\/v1\/2023.emnlp-main.397"},{"key":"1833_CR58","doi-asserted-by":"crossref","unstructured":"Lyu Y, Li Z, Niu S, Xiong F, Tang B, Wang W, Wu H, Liu H, Xu T, Chen E (2024) Crud-rag: a comprehensive Chinese benchmark for retrieval-augmented generation of large language models. arXiv:2401.17043","DOI":"10.1145\/3701228"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01833-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-01833-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01833-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,29]],"date-time":"2025-04-29T10:37:44Z","timestamp":1745923064000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-01833-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,2]]},"references-count":58,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,5]]}},"alternative-id":["1833"],"URL":"https:\/\/doi.org\/10.1007\/s40747-025-01833-9","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,2]]},"assertion":[{"value":"8 August 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"2 April 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"231"}}