{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,2]],"date-time":"2026-05-02T04:42:06Z","timestamp":1777696926142,"version":"3.51.4"},"reference-count":97,"publisher":"SAGE Publications","issue":"2","license":[{"start":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T00:00:00Z","timestamp":1750118400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Intelligent Data Analysis: An International Journal"],"published-print":{"date-parts":[[2026,3]]},"abstract":"<jats:p>In this cognitive era, vast amount of data are accumulated every day. Analysing such unstructured information and obtaining insights will be challenging. To address this, Large language models have been developed to support analysis of extensive data corpora. However, it tends to cause hallucination due to a lack of proper knowledge sources. If the analysis has to be performed with respect to the health care domain or finance domain, the challenge is raised because of the lack of domain specificity. COVID-19 sentiment analysis is one of the complex responsibilities of the government since it needs to know the opinions of people to take necessary measures. This paper presents COVID-19 retrieval augmented and fine-tuning (RAFT), a novel framework that includes the analysis of COVID-19 vaccine tweets through retrieval augmented-based approaches. This integrated domain-specific knowledge through a retrieval-augmented generation-based approach with external knowledge sources. We employed a transformer-based semantic approach in embedding generation via vector database. Furthermore, this framework exhibited generalizability when integrated with domain knowledge. It uses parameter efficient fine tuning with quantization to use a large language model with a reduced number of parameters, which will allow a model to be used in low-resource-constrained devices. 
This framework achieved an accuracy of 0.886 on the Twitter dataset containing tweets specific to the Indian region and 0.912 on the Twitter dataset with tweets from the global region.<\/jats:p>","DOI":"10.1177\/1088467x251348350","type":"journal-article","created":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T04:20:48Z","timestamp":1750134048000},"page":"478-495","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":0,"title":["A cognitive domain specific framework integrating large language model for COVID-19 vaccine sentiment analysis"],"prefix":"10.1177","volume":"30","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7801-8364","authenticated-orcid":false,"given":"Lingeswaran","family":"Prasika","sequence":"first","affiliation":[{"name":"Department of Artificial Intelligence and Data Science, Mepco Schlenk Engineering College (Autonomous), Sivakasi, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6815-7587","authenticated-orcid":false,"given":"Samuel Nadar","family":"Edward Rajan","sequence":"additional","affiliation":[{"name":"Electrical and Electronics Engineering, Mepco Schlenk Engineering College (Autonomous), Sivakasi, India"}]}],"member":"179","published-online":{"date-parts":[[2025,6,17]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.2196\/26627"},{"key":"e_1_3_3_3_2","article-title":"COVID-19 vaccine\u2013related discussion on twitter: topic modeling and sentiment analysis","volume":"24","author":"Lyu J","year":"2022","unstructured":"Lyu J, Han EL, Luli GK. COVID-19 vaccine\u2013related discussion on twitter: topic modeling and sentiment analysis. J Med Internet Res 2022; 24: e31726.","journal-title":"J Med Internet Res"},{"key":"e_1_3_3_4_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown T","year":"2020","unstructured":"Brown T, Mann B, Ryder N, et\u00a0al. Language models are few-shot learners. Adv Neural Inf Process Syst 2020; 33: 1877\u20131901.","journal-title":"Adv Neural Inf Process Syst"},{"key":"e_1_3_3_5_2","volume-title":"LLaMA: Open and efficient foundation language models. Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)","author":"Touvron H","year":"2023","unstructured":"Touvron H, Martin J, Stone K, et\u00a0al. LLaMA: Open and efficient foundation language models. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2023. https:\/\/doi.org\/10.48550\/arXiv.2302.13971"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_3_7_2","first-page":"2027","volume-title":"Knowledge-enhanced retrieval-augmented generation. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics","author":"Gao Y","year":"2023","unstructured":"Gao Y, Ge T, He J, et\u00a0al. Knowledge-enhanced retrieval-augmented generation. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 2023, pp. 2027\u20132039. https:\/\/doi.org\/10.48550\/arXiv.2305.13242"},{"key":"e_1_3_3_8_2","first-page":"9459","article-title":"Retrieval-augmented generation for knowledge-intensive NLP tasks","volume":"33","author":"Lewis P","year":"2020","unstructured":"Lewis P, Perez E, Piktus A, et\u00a0al. Retrieval-augmented generation for knowledge-intensive NLP tasks. 
Adv Neural Inf Process Syst 2020; 33: 9459\u20139474.","journal-title":"Adv Neural Inf Process Syst"},{"key":"e_1_3_3_9_2","first-page":"3457","article-title":"LoRA: low-rank adaptation of large language models","volume":"34","author":"Hu EJ","year":"2021","unstructured":"Hu EJ, Shen Y, Wallis P, et\u00a0al. LoRA: low-rank adaptation of large language models. Adv Neural Inf Process Syst 2021; 34: 3457\u20133470.","journal-title":"Adv Neural Inf Process Syst"},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2023.102758"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/OJCS.2024.3396518"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3353692"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3236527"},{"key":"e_1_3_3_14_2","unstructured":"Xu Y Xia B Wan Y et\u00a0al. CDCAT: A multi-language cross-document entity and event coreference annotation tool http:\/\/dh.fbk.eu\/resources\/cat-content-annotation-tool (n.d.)."},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2023.3301956"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3259107"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.aei.2023.102020"},{"key":"e_1_3_3_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3296447"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2024.123981"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3267183"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3309229"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2019.2934444"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2023.122936"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2021.3121909"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3359430"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3349970"},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3179808"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3271894"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3187406"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2023.3315143"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/TTS.2021.3088800"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3246162"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCBB.2022.3165592"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3152266"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3273729"},{"key":"e_1_3_3_36_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2023.123075"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3289715"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jksuci.2021.07.013"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3183108"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asej.2024.102736"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3291072"},{"key":"e_1_3_3_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2023.3234427"},{"key":"e_1_3_3_43
_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.127423"},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.autcon.2024.105458"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3320738"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2021.3063259"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3268243"},{"key":"e_1_3_3_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3189956"},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2023.3264114"},{"key":"e_1_3_3_50_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.110951"},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3285536"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3238366"},{"key":"e_1_3_3_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3234433"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.csi.2024.103850"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.wpi.2020.101965"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3257283"},{"key":"e_1_3_3_57_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3340520"},{"key":"e_1_3_3_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2021.3094987"},{"key":"e_1_3_3_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2021.3122439"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2022.3146633"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3146712"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3150329"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2023.106876"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3278979"},{"key":"e_1_3_3_65_2","article-title":"Repetitive action counting with hybrid temporal relation modeling","author":"Li K","year":"2025","unstructured":"Li K, Peng X, Guo D, et\u00a0al. Repetitive action counting with hybrid temporal relation modeling. IEEE Trans Multimed 2025. DOI: 10.48550\/arXiv.2412.07233","journal-title":"IEEE Trans Multimed"},{"issue":"10","key":"e_1_3_3_66_2","article-title":"ViGT: Proposal-Free Video Grounding with a Learnable Token in the Transformer","volume":"66","author":"Li K","year":"2023","unstructured":"Li K, Guo D, Wang M. ViGT: Proposal-Free Video Grounding with a Learnable Token in the Transformer. Proc Int Conf Smart Comput Inf Sci (SCIS) 2023; 66(10).","journal-title":"Proc Int Conf Smart Comput Inf Sci (SCIS)"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i6.28435"},{"key":"e_1_3_3_68_2","unstructured":"Shi L et al. Prompt-aware controllable shadow removal. Axiver https:\/\/arxiv.org\/abs\/2501.15043 (2025)."},{"key":"e_1_3_3_69_2","unstructured":"Wu Z Chen K Li K et\u00a0al. BVINet: Unlocking Blind Video Inpainting with Zero Annotations Axiver (2025)."},{"key":"e_1_3_3_70_2","volume-title":"Prototypical Calibrating Ambiguous Samples for Micro-Action Recognitionin Proc. AAAI Conf. on Artificial Intelligence","author":"Rao S","unstructured":"Rao S, et\u00a0al. Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition. In: in Proc. AAAI Conf. on Artificial Intelligence, 2025."},{"key":"e_1_3_3_71_2","volume-title":"KILT: A benchmark for knowledge-intensive language tasksProc. 2021 Conf. 
of the North American Chapter of the Association for Computational Linguistics (NAACL)","author":"Petroni F","year":"2021","unstructured":"Petroni F, et al. KILT: A benchmark for knowledge-intensive language tasks. In Proc. 2021 Conf. of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021."},{"key":"e_1_3_3_72_2","volume-title":"REALM: Retrieval-Augmented Language Model Pre-Training. Proc. 37th Int. Conf. on Machine Learning (ICML)","author":"Guu K","unstructured":"Guu K, et\u00a0al. REALM: Retrieval-Augmented Language Model Pre-Training. In: Proc. 37th Int. Conf. on Machine Learning (ICML), 2020."},{"key":"e_1_3_3_73_2","volume-title":"Improving language models by retrieving from trillions of tokens (RETRO). Advances in Neural Information Processing Systems (NeurIPS)","volume":"35","author":"Borgeaud M","year":"2022","unstructured":"Borgeaud M, et al. Improving language models by retrieving from trillions of tokens (RETRO). In Advances in Neural Information Processing Systems (NeurIPS), Vol. 35, 2022."},{"key":"e_1_3_3_74_2","volume-title":"Fact or fiction: Verifying scientific claims. Proc. 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP)","author":"Wadden I","year":"2020","unstructured":"Wadden I, et al. Fact or fiction: Verifying scientific claims. In Proc. 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP), 2020."},{"key":"e_1_3_3_75_2","volume-title":"Pre-train prompt tuning for few-shot sentiment analysis. Proc. 60th Annual Meeting of the Association for Computational Linguistics (ACL)","author":"Liu P","year":"2022","unstructured":"Liu P, et al. Pre-train prompt tuning for few-shot sentiment analysis. In Proc. 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022."},{"key":"e_1_3_3_76_2","doi-asserted-by":"crossref","unstructured":"Lu J Yu L Li X et\u00a0al. 
LLaMA-reviewer: Advancing code review automation with large language models through parameter-efficient fine-tuning http:\/\/arxiv.org\/abs\/2308.11148 (2023).","DOI":"10.1109\/ISSRE59848.2023.00026"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3365742"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3015854"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.111148"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3350638"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.apenergy.2024.123431"},{"key":"e_1_3_3_82_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2024.112043"},{"key":"e_1_3_3_83_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patter.2023.100729"},{"key":"e_1_3_3_84_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2024.111975"},{"key":"e_1_3_3_85_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2024.111740"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.110605"},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAI.2023.3275132"},{"key":"e_1_3_3_88_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2023.3313881"},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbi.2023.104580"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.111347"},{"key":"e_1_3_3_91_2","doi-asserted-by":"publisher","DOI":"10.1016\/S2589-7500(23)00202-9"},{"key":"e_1_3_3_92_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.nlp.2024.100056"},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2023.103476"},{"key":"e_1_3_3_94_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.irfa.2024.103291"},{"key":"e_1_3_3_95_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2023.105864"},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.caeai.2024.100210"},{"key":"e_1_3_3_97_2","unstructured":"Preda G. All COVID-19 vaccines Tweets dataset. Kaggle https:\/\/www.kaggle.com\/datasets\/gpreda\/all-covid19-vaccines-tweets\/data (2021)."},{"key":"e_1_3_3_98_2","doi-asserted-by":"crossref","unstructured":"Reimers N Gurevych I. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. 
arXiv preprint arXiv:1908.10084 https:\/\/arxiv.org\/abs\/1908.10084 (2019).","DOI":"10.18653\/v1\/D19-1410"}],"container-title":["Intelligent Data Analysis: An International Journal"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/1088467X251348350","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.1177\/1088467X251348350","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/1088467X251348350","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T09:21:20Z","timestamp":1777454480000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.1177\/1088467X251348350"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,17]]},"references-count":97,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,3]]}},"alternative-id":["10.1177\/1088467X251348350"],"URL":"https:\/\/doi.org\/10.1177\/1088467x251348350","relation":{},"ISSN":["1088-467X","1571-4128"],"issn-type":[{"value":"1088-467X","type":"print"},{"value":"1571-4128","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,17]]}}}