{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T19:22:14Z","timestamp":1776108134691,"version":"3.50.1"},"reference-count":79,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,11,27]],"date-time":"2025-11-27T00:00:00Z","timestamp":1764201600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,11,27]],"date-time":"2025-11-27T00:00:00Z","timestamp":1764201600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005866","name":"Far Eastern Memorial Hospital","doi-asserted-by":"publisher","award":["112086-F"],"award-info":[{"award-number":["112086-F"]}],"id":[{"id":"10.13039\/501100005866","id-type":"DOI","asserted-by":"publisher"}]},{"name":"NIH NINDS","award":["R21NS135482"],"award-info":[{"award-number":["R21NS135482"]}]},{"name":"NIH NIBIB","award":["R21EB033455"],"award-info":[{"award-number":["R21EB033455"]}]},{"DOI":"10.13039\/501100024990","name":"National Yang Ming Chiao Tung University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100024990","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Health Inf Sci Syst"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Electronic health records (EHRs) are designed to synthesize diverse data types, including unstructured clinical notes, structured lab tests, and time-series visit data. Physicians draw on these multimodal and temporal sources of EHR data to form a comprehensive view of a patient\u2019s health, which is crucial for informed therapeutic decision-making. 
Yet, most predictive models fail to fully capture the interactions, redundancies, and temporal patterns across multiple data modalities, often focusing on a single data type or overlooking these complexities. In this paper, we present CURENet, a multimodal model (Combining Unified Representations for Efficient chronic disease prediction) that integrates unstructured clinical notes, lab tests, and patients\u2019 time-series data by utilizing large language models (LLMs) for clinical text processing and textual lab tests, as well as transformer encoders for longitudinal sequential visits. CURENet captures the intricate interactions between different forms of clinical data, yielding a more reliable predictive model for chronic illnesses. We evaluated CURENet using the public MIMIC-III and private FEMH datasets, where it achieved over 94% accuracy in predicting the top 10 chronic conditions in a multi-label framework. Our findings highlight the potential of multimodal EHR integration to enhance clinical decision-making and improve patient outcomes.<\/jats:p>","DOI":"10.1007\/s13755-025-00396-w","type":"journal-article","created":{"date-parts":[[2025,11,27]],"date-time":"2025-11-27T07:46:28Z","timestamp":1764229588000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["CURENet: combining unified representations for efficient chronic disease prediction"],"prefix":"10.1007","volume":"14","author":[{"given":"Cong-Tinh","family":"Dao","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6814-9459","authenticated-orcid":false,"given":"Nguyen Minh 
Thao","family":"Phan","sequence":"additional","affiliation":[]},{"given":"Jun-En","family":"Ding","sequence":"additional","affiliation":[]},{"given":"Chenwei","family":"Wu","sequence":"additional","affiliation":[]},{"given":"David","family":"Restrepo","sequence":"additional","affiliation":[]},{"given":"Dongsheng","family":"Luo","sequence":"additional","affiliation":[]},{"given":"Fanyi","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Chun-Chieh","family":"Liao","sequence":"additional","affiliation":[]},{"given":"Wen-Chih","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Chi-Te","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Pei-Fu","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Ling","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Xinglong","family":"Ju","sequence":"additional","affiliation":[]},{"given":"Feng","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Fang-Ming","family":"Hung","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,11,27]]},"reference":[{"key":"396_CR1","unstructured":"World Health Organization: Noncommunicable Diseases. Accessed: 2024-11-10 (2023). https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/noncommunicable-diseases"},{"issue":"10","key":"396_CR2","doi-asserted-by":"publisher","first-page":"2784","DOI":"10.1080\/10408398.2020.1858751","volume":"62","author":"A Fardet","year":"2022","unstructured":"Fardet A, Rock E. Exclusive reductionism, chronic diseases and nutritional confusion: the degree of processing as a lever for improving public health. Crit Rev Food Sci Nutr. 2022;62(10):2784\u201399.","journal-title":"Crit Rev Food Sci Nutr"},{"key":"396_CR3","doi-asserted-by":"crossref","unstructured":"Ding J-E, Phan NMT, Peng W-C, Wang J-Z, Chug C-C, Hsieh M-C, Tseng Y-C, Chen L, Luo D, Wu C, et al. 
Large language multimodal models for new-onset type 2 diabetes prediction using five-year cohort electronic health records, 2024","DOI":"10.21203\/rs.3.rs-4414387\/v1"},{"key":"396_CR4","doi-asserted-by":"crossref","unstructured":"Restrepo D, Wu C, V\u00e1squez-Venegas C, Nakayama LF, Celi LA, L\u00f3pez DM. Df-dm: A foundational process model for multimodal data fusion in the artificial intelligence era. Res Sq. 2024","DOI":"10.21203\/rs.3.rs-4277992\/v1"},{"key":"396_CR5","doi-asserted-by":"crossref","unstructured":"Thao PNM, Dao C-T, Wu C, Wang J-Z, Liu S, Ding J-E, Restrepo D, Liu F, Hung F-M, Peng W-C. Medfuse: Multimodal ehr data fusion with masked lab-test modeling and large language models. In: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 3974\u20133978, 2024","DOI":"10.1145\/3627673.3679962"},{"key":"396_CR6","doi-asserted-by":"crossref","unstructured":"Luo J, Ye M, Xiao C, Ma F. Hitanet: Hierarchical time-aware attention networks for risk prediction on electronic health records. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 647\u2013656, 2020.","DOI":"10.1145\/3394486.3403107"},{"key":"396_CR7","doi-asserted-by":"crossref","unstructured":"Ma F, Chitta R, Zhou J, You Q, Sun T, Gao J. Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1903\u20131911, 2017.","DOI":"10.1145\/3097983.3098088"},{"key":"396_CR8","unstructured":"Choi E, Bahadori MT, Sun J, Kulas J, Schuetz A, Stewart W. Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. Adv Neural Inf Proc Syst. 
2016;29"},{"issue":"1","key":"396_CR9","doi-asserted-by":"publisher","first-page":"7155","DOI":"10.1038\/s41598-020-62922-y","volume":"10","author":"Y Li","year":"2020","unstructured":"Li Y, Rao S, Solares JRA, Hassaine A, Ramakrishnan R, Canoy D, et al. Behrt: transformer for electronic health records. Sci Rep. 2020;10(1):7155.","journal-title":"Sci Rep"},{"key":"396_CR10","doi-asserted-by":"crossref","unstructured":"Zhang X, Qian B, Cao S, Li Y, Chen H, Zheng Y, Davidson I. Inprem: An interpretable and trustworthy predictive model for healthcare. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 450\u2013460, 2020.","DOI":"10.1145\/3394486.3403087"},{"key":"396_CR11","unstructured":"Devlin J, Chang M-W, Lee K, Toutanova K. Bert: Pre-training of deep bidirectional transformers for language understanding. 2018. arXiv preprint arXiv:1810.04805"},{"issue":"8","key":"396_CR12","doi-asserted-by":"publisher","first-page":"1930","DOI":"10.1038\/s41591-023-02448-8","volume":"29","author":"AJ Thirunavukarasu","year":"2023","unstructured":"Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930\u201340.","journal-title":"Nat Med"},{"key":"396_CR13","first-page":"507","volume":"35","author":"L Grinsztajn","year":"2022","unstructured":"Grinsztajn L, Oyallon E, Varoquaux G. Why do tree-based models still outperform deep learning on typical tabular data? Adv Neural Inf Process Syst. 2022;35:507\u201320.","journal-title":"Adv Neural Inf Process Syst"},{"key":"396_CR14","unstructured":"Bellamy D.R, Kumar B, Wang C, Beam A.: Labrador: Exploring the limits of masked language modeling for laboratory data. arXiv preprint arXiv:2312.11502 (2023)"},{"key":"396_CR15","first-page":"5549","volume-title":"Int Conf Artif Intel Stat","author":"S Hegselmann","year":"2023","unstructured":"Hegselmann S, Buendia A, Lang H, Agrawal M, Jiang X, Sontag D. 
Tabllm: few-shot classification of tabular data with large language models. In: Int Conf Artif Intel Stat. PMLR; 2023. p. 5549\u201381."},{"key":"396_CR16","doi-asserted-by":"publisher","DOI":"10.1001\/jamasurg.2024.1621","volume-title":"Large language model capabilities in perioperative risk prediction and prognostication","author":"P Chung","year":"2024","unstructured":"Chung P, Fong CT, Walters AM, Aghaeepour N, Yetisgen M, O\u2019Reilly-Shah VN. Large language model capabilities in perioperative risk prediction and prognostication. JAMA surgery; 2024."},{"key":"396_CR17","first-page":"301","volume-title":"Machine Learning for Healthcare Conference","author":"E Choi","year":"2016","unstructured":"Choi E, Bahadori MT, Schuetz A, Stewart WF, Sun J. Doctor ai: Predicting clinical events via recurrent neural networks. In: Machine Learning for Healthcare Conference. PMLR; 2016. p. 301\u201318."},{"issue":"1","key":"396_CR18","doi-asserted-by":"publisher","first-page":"299","DOI":"10.1109\/TVCG.2018.2865027","volume":"25","author":"BC Kwon","year":"2018","unstructured":"Kwon BC, Choi M-J, Kim JT, Choi E, Kim YB, Kwon S, et al. Retainvis: Visual analytics with interpretable and interactive recurrent neural networks on electronic medical records. IEEE Trans Visual Comput Graphics. 2018;25(1):299\u2013309.","journal-title":"IEEE Trans Visual Comput Graphics"},{"key":"396_CR19","doi-asserted-by":"crossref","unstructured":"Baytas IM, Xiao C, Zhang X, Wang F, Jain AK, Zhou J. Patient subtyping via time-aware lstm networks. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 65\u201374.","DOI":"10.1145\/3097983.3097997"},{"key":"396_CR20","doi-asserted-by":"crossref","unstructured":"Bai T, Zhang S, Egleston BL, Vucetic S. Interpretable representation learning for healthcare via capturing disease progression through time. 
In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 43\u201351.","DOI":"10.1145\/3219819.3219904"},{"key":"396_CR21","unstructured":"Wang J, Luo J, Ye M, Wang X, Zhong Y, Chang A, Huang G, Yin Z, Xiao C, Sun J, et al. Recent advances in predictive modeling with electronic health records. 2024, arXiv preprint arXiv:2402.01077"},{"key":"396_CR22","unstructured":"Chen Y. Convolutional neural network for sentence classification. Master\u2019s thesis, University of Waterloo, 2015."},{"key":"396_CR23","doi-asserted-by":"crossref","unstructured":"Chen J, Hu Y, Liu J, Xiao Y, Jiang H. Deep short text classification with knowledge powered attention. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2019; vol. 33, pp. 6252\u20136259","DOI":"10.1609\/aaai.v33i01.33016252"},{"key":"396_CR24","doi-asserted-by":"crossref","unstructured":"Wu H, Chen W, Xu S, Xu B. Counterfactual supporting facts extraction for explainable medical record based diagnosis with graph network. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021, pp. 1942\u20131955.","DOI":"10.18653\/v1\/2021.naacl-main.156"},{"issue":"9","key":"396_CR25","doi-asserted-by":"publisher","first-page":"1801","DOI":"10.1093\/jamia\/ocae202","volume":"31","author":"Z Lu","year":"2024","unstructured":"Lu Z, Peng Y, Cohen T, Ghassemi M, Weng C, Tian S. Large language models in biomedicine and health: current research landscape and future directions. J Am Med Inform Assoc. 2024;31(9):1801\u201311.","journal-title":"J Am Med Inform Assoc"},{"issue":"7972","key":"396_CR26","doi-asserted-by":"publisher","first-page":"172","DOI":"10.1038\/s41586-023-06291-2","volume":"620","author":"K Singhal","year":"2023","unstructured":"Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large language models encode clinical knowledge. Nature. 
2023;620(7972):172\u201380.","journal-title":"Nature"},{"issue":"1","key":"396_CR27","doi-asserted-by":"publisher","first-page":"127","DOI":"10.1038\/s41746-024-01126-4","volume":"7","author":"L Peng","year":"2024","unstructured":"Peng L, Luo G, Zhou S, Chen J, Xu Z, Sun J, et al. An in-depth evaluation of federated learning on biomedical natural language processing for information extraction. NPJ Dig Med. 2024;7(1):127.","journal-title":"NPJ Dig Med"},{"issue":"1","key":"396_CR28","doi-asserted-by":"publisher","first-page":"226","DOI":"10.1038\/s41746-023-00952-2","volume":"6","author":"F Liu","year":"2023","unstructured":"Liu F, Zhu T, Wu X, Yang B, You C, Wang C, et al. A medical multimodal large language model for future pandemics. NPJ Dig Med. 2023;6(1):226.","journal-title":"NPJ Dig Med"},{"issue":"6","key":"396_CR29","doi-asserted-by":"publisher","first-page":"409","DOI":"10.1093\/bib\/bbac409","volume":"23","author":"R Luo","year":"2022","unstructured":"Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, et al. Biogpt: generative pre-trained transformer for biomedical text generation and mining. Brief Bioinform. 2022;23(6):409.","journal-title":"Brief Bioinform"},{"key":"396_CR30","unstructured":"Bolton E, Venigalla A, Yasunaga M, Hall D, Xiong B, Lee T, Daneshjou R, Frankle J, Liang P, Carbin M, et al. Biomedlm: A 2.7 b parameter language model trained on biomedical text. 2024. arXiv preprint arXiv:2403.18421"},{"issue":"1","key":"396_CR31","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/srep26094","volume":"6","author":"R Miotto","year":"2016","unstructured":"Miotto R, Li L, Kidd BA, Dudley JT. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci Rep. 2016;6(1):1\u201310.","journal-title":"Sci Rep"},{"key":"396_CR32","doi-asserted-by":"crossref","unstructured":"Cheng Y, Wang F, Zhang P, Hu J. Risk prediction with electronic health records: A deep learning approach. 
In: Proceedings of the 2016 SIAM International Conference on Data Mining, 2016, pp. 432\u2013440. SIAM","DOI":"10.1137\/1.9781611974348.49"},{"key":"396_CR33","doi-asserted-by":"crossref","unstructured":"Choi E, Bahadori MT, Song L, Stewart WF, Sun J. Gram: graph-based attention model for healthcare representation learning. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 787\u2013795.","DOI":"10.1145\/3097983.3098126"},{"key":"396_CR34","unstructured":"Suresh H, Hunt N, Johnson A, Celi LA, Szolovits P, Ghassemi M. Clinical intervention prediction and understanding with deep neural networks. In: Machine Learning for Healthcare Conference, 2017, pp. 322\u2013337. PMLR"},{"key":"396_CR35","doi-asserted-by":"crossref","unstructured":"Ma F, Wang Y, Xiao H, Yuan Y, Chitta R, Zhou J, et al. A general framework for diagnosis prediction via incorporating medical code descriptions. In: 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE; 2018. p. 1070\u20135.","DOI":"10.1109\/BIBM.2018.8621395"},{"key":"396_CR36","doi-asserted-by":"crossref","unstructured":"Jin B, Yang H, Sun L, Liu C, Qu Y, Tong J. A treatment engine by predicting next-period prescriptions. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 1608\u20131616.","DOI":"10.1145\/3219819.3220095"},{"key":"396_CR37","doi-asserted-by":"crossref","unstructured":"Lee W, Park S, Joo W, Moon I-C. Diagnosis prediction via medical context attention networks using deep generative modeling. In: 2018 IEEE International Conference on Data Mining (ICDM). IEEE; 2018. p. 1104\u20139.","DOI":"10.1109\/ICDM.2018.00143"},{"issue":"7","key":"396_CR38","doi-asserted-by":"publisher","first-page":"2053","DOI":"10.1109\/JBHI.2019.2962079","volume":"24","author":"H Duan","year":"2019","unstructured":"Duan H, Sun Z, Dong W, He K, Huang Z. 
On clinical event prediction in patient treatment trajectory using longitudinal electronic health records. IEEE J Biomed Health Inform. 2019;24(7):2053\u201363.","journal-title":"IEEE J Biomed Health Inform"},{"issue":"11","key":"396_CR39","doi-asserted-by":"publisher","first-page":"3268","DOI":"10.1109\/JBHI.2020.2984931","volume":"24","author":"S Darabi","year":"2020","unstructured":"Darabi S, Kachuee M, Fazeli S, Sarrafzadeh M. Taper: Time-aware patient ehr representation. IEEE J Biomed Health Inform. 2020;24(11):3268\u201375.","journal-title":"IEEE J Biomed Health Inform"},{"key":"396_CR40","doi-asserted-by":"crossref","unstructured":"Choi E, Xu Z, Li Y, Dusenberry M, Flores G, Xue E, Dai A. Learning the graphical structure of electronic health records with graph convolutional transformer. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2020;34:606\u2013613","DOI":"10.1609\/aaai.v34i01.5400"},{"key":"396_CR41","first-page":"566","volume-title":"Machine Learning for Healthcare Conference","author":"D Zhang","year":"2020","unstructured":"Zhang D, Thadajarassiri J, Sen C, Rundensteiner E. Time-aware transformer-based network for clinical notes series prediction. In: Machine Learning for Healthcare Conference. PMLR; 2020. p. 566\u201388."},{"key":"396_CR42","doi-asserted-by":"crossref","unstructured":"McDermott M, Nestor B, Kim E, Zhang W, Goldenberg A, Szolovits P, Ghassemi M. A comprehensive ehr timeseries pre-training benchmark. In: Proceedings of the Conference on Health, Inference, and Learning, 2021, pp. 257\u2013278","DOI":"10.1145\/3450439.3451877"},{"key":"396_CR43","doi-asserted-by":"crossref","unstructured":"Sprint G, Schmitter-Edgecombe M, Weaver R, Wiese L, Cook DJ. Cogprog: utilizing large language models to forecast in-the-moment health assessment. 
ACM Transactions on Computing for Healthcare 2024.","DOI":"10.1145\/3709153"},{"key":"396_CR44","doi-asserted-by":"crossref","unstructured":"Xu Y, Biswal S, Deshpande SR, Maher KO, Sun J.: Raim: Recurrent attentive and intensive model of multimodal patient monitoring data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 2565\u20132573","DOI":"10.1145\/3219819.3220051"},{"key":"396_CR45","doi-asserted-by":"crossref","unstructured":"Feng Y, Xu Z, Gan L, Chen N, Yu B, Chen T, Wang F.: Dcmn: Double core memory network for patient outcome prediction with multimodal data. In: 2019 IEEE International Conference on Data Mining (ICDM), pp. 200\u2013209 (2019). IEEE","DOI":"10.1109\/ICDM.2019.00030"},{"key":"396_CR46","doi-asserted-by":"crossref","unstructured":"Huang K, Singh A, Chen S, Moseley E.T, Deng C.-Y, George N, Lindvall C.: Clinical xlnet: Modeling sequential clinical notes and predicting prolonged mechanical ventilation. arXiv preprint arXiv:1912.11975 (2019)","DOI":"10.18653\/v1\/2020.clinicalnlp-1.11"},{"key":"396_CR47","doi-asserted-by":"crossref","unstructured":"Khadanga S, Aggarwal K, Joty S, Srivastava J.: Using clinical notes with time series data for icu management. arXiv preprint arXiv:1909.09702 (2019)","DOI":"10.18653\/v1\/D19-1678"},{"key":"396_CR48","doi-asserted-by":"crossref","unstructured":"Wang W, Park Y, Lee T, Molloy I, Tang P, Xiong L.: Utilizing multimodal feature consistency to detect adversarial examples on clinical summaries. In: Proceedings of the 3rd Clinical Natural Language Processing Workshop, pp. 259\u2013268 (2020)","DOI":"10.18653\/v1\/2020.clinicalnlp-1.29"},{"key":"396_CR49","doi-asserted-by":"crossref","unstructured":"Zhang Z, Liu J, Razavian N.: Bert-xml: Large scale automated icd coding using bert pretraining. 
arXiv preprint arXiv:2006.03685 (2020)","DOI":"10.18653\/v1\/2020.clinicalnlp-1.3"},{"key":"396_CR50","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2021.102112","volume":"117","author":"B Bardak","year":"2021","unstructured":"Bardak B, Tan M. Improving clinical outcome predictions using convolution over medical entities with multimodal learning. Artif Intell Med. 2021;117:102112.","journal-title":"Artif Intell Med"},{"key":"396_CR51","doi-asserted-by":"crossref","unstructured":"Yang B, Wu L.: How to leverage multimodal ehr data for better medical predictions? arXiv preprint arXiv:2110.15763 (2021)","DOI":"10.18653\/v1\/2021.emnlp-main.329"},{"key":"396_CR52","doi-asserted-by":"crossref","unstructured":"Cui S, Wang J, Gui X, Wang T, Ma F.: Automed: automated medical risk predictive modeling on electronic health records. In: 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 948\u2013953 (2022). IEEE","DOI":"10.1109\/BIBM55620.2022.9995209"},{"issue":"3","key":"396_CR53","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3506719","volume":"3","author":"M Gupta","year":"2022","unstructured":"Gupta M, Phan T-LT, Bunnell HT, Beheshti R. Obesity prediction with ehr data: a deep learning approach with interpretable elements. ACM Trans Comput Healthcare (HEALTH). 2022;3(3):1\u201319.","journal-title":"ACM Trans Comput Healthcare (HEALTH)"},{"issue":"1","key":"396_CR54","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1038\/s41746-021-00455-y","volume":"4","author":"L Rasmy","year":"2021","unstructured":"Rasmy L, Xiang Y, Xie Z, Tao C, Zhi D. Med-bert: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. NPJ Dig Med. 
2021;4(1):86.","journal-title":"NPJ Dig Med"},{"issue":"2","key":"396_CR55","doi-asserted-by":"publisher","first-page":"1106","DOI":"10.1109\/JBHI.2022.3224727","volume":"27","author":"Y Li","year":"2022","unstructured":"Li Y, Mamouei M, Salimi-Khorshidi G, Rao S, Hassaine A, Canoy D, et al. Hi-behrt: hierarchical transformer-based model for accurate prediction of clinical events using multimodal longitudinal electronic health records. IEEE J Biomed Health Inform. 2022;27(2):1106\u201317.","journal-title":"IEEE J Biomed Health Inform"},{"issue":"1","key":"396_CR56","doi-asserted-by":"publisher","first-page":"7857","DOI":"10.1038\/s41467-023-43715-z","volume":"14","author":"Z Yang","year":"2023","unstructured":"Yang Z, Mitra A, Liu W, Berlowitz D, Yu H. Transformehr: transformer-based encoder-decoder generative model to enhance prediction of disease outcomes using electronic health records. Nat Commun. 2023;14(1):7857.","journal-title":"Nat Commun"},{"issue":"3","key":"396_CR57","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3665899","volume":"5","author":"A Moore","year":"2024","unstructured":"Moore A, Orset B, Yassaee A, Irving B, Morelli D. Healthrecordbert (herbert): leveraging transformers on electronic health records for chronic kidney disease risk stratification. ACM Trans Comput Healthcare. 2024;5(3):1\u201318.","journal-title":"ACM Trans Comput Healthcare"},{"key":"396_CR58","unstructured":"Cai T, Huang F, Nakada R, Zhang L, Zhou D.: Contrastive learning on multimodal analysis of electronic health records. arXiv preprint arXiv:2403.14926 (2024)"},{"key":"396_CR59","unstructured":"Brown T.B.: Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)"},{"key":"396_CR60","unstructured":"Touvron H, Martin L, Stone K, Albert P, Almahairi A, Babaei Y, Bashlykov N, Batra S, Bhargava P, Bhosale S, et al.: Llama 2: Open foundation and fine-tuned chat models. 
arXiv preprint arXiv:2307.09288 (2023)"},{"key":"396_CR61","doi-asserted-by":"crossref","unstructured":"Zhou S, Xu Z, Zhang M, Xu C, Guo Y, Zhan Z, Ding S, Wang J, Xu K, Fang Y, et al.: Large language models for disease diagnosis: A scoping review. arXiv preprint arXiv:2409.00097 (2024)","DOI":"10.1038\/s44387-025-00011-z"},{"key":"396_CR62","unstructured":"Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, Levy O, Lewis M, Zettlemoyer L, Stoyanov V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)"},{"key":"396_CR63","unstructured":"ruslanmv: Medical-llama3-8b-16bit: Fine-tuned llama3 for medical q &a (2024)"},{"key":"396_CR64","unstructured":"AI@Meta: Llama 3 model card (2024)"},{"key":"396_CR65","doi-asserted-by":"crossref","unstructured":"Zerveas G, Jayaraman S, Patel D, Bhamidipaty A, Eickhoff C.: A transformer-based framework for multivariate time series representation learning. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 2114\u20132124 (2021)","DOI":"10.1145\/3447548.3467401"},{"key":"396_CR66","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A.N, Kaiser \u0141, Polosukhin I.: Attention is all you need. Adv Neural Inf Proc Syst. 2017:30"},{"key":"396_CR67","unstructured":"Shen S, Yao Z, Gholami A, Mahoney M, Keutzer K.: Powernorm: Rethinking batch normalization in transformers. In: International Conference on Machine Learning, pp. 8741\u20138751 (2020). PMLR"},{"key":"396_CR68","doi-asserted-by":"crossref","unstructured":"Nam J, Kim J, Loza\u00a0Menc\u00eda E, Gurevych I, F\u00fcrnkranz J.: Large-scale multi-label text classification\u2014revisiting neural networks. In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part II 14, pp. 437\u2013452 (2014). 
Springer","DOI":"10.1007\/978-3-662-44851-9_28"},{"key":"396_CR69","unstructured":"Dosovitskiy A, Djolonga J.: You only train once: Loss-conditional training of deep networks. In: International Conference on Learning Representations (2019)"},{"issue":"1","key":"396_CR70","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/sdata.2016.35","volume":"3","author":"AE Johnson","year":"2016","unstructured":"Johnson AE, Pollard TJ, Shen L, Lehman L-WH, Feng M, Ghassemi M, et al. Mimic-iii, a freely accessible critical care database. Sci Data. 2016;3(1):1\u20139.","journal-title":"Sci Data"},{"issue":"2","key":"396_CR71","first-page":"1","volume":"5","author":"M Hossin","year":"2015","unstructured":"Hossin M, Sulaiman MN. A review on evaluation metrics for data classification evaluations. Int J Data Min Knowl Manag Proc. 2015;5(2):1.","journal-title":"Int J Data Min Knowl Manag Proc"},{"key":"396_CR72","unstructured":"Palacio-Ni\u00f1o J.-O, Berzal F.: Evaluation metrics for unsupervised learning algorithms. arXiv preprint arXiv:1905.05667 (2019)"},{"key":"396_CR73","doi-asserted-by":"crossref","unstructured":"Lu C, Reddy C.K, Chakraborty P, Kleinberg S, Ning Y.: Collaborative graph learning with auxiliary text for temporal event prediction in healthcare. arXiv preprint arXiv:2105.07542 (2021)","DOI":"10.24963\/ijcai.2021\/486"},{"key":"396_CR74","doi-asserted-by":"crossref","unstructured":"Tan Y, Zhou Z, Yu L, Liu W, Chen C, Ma G, Hu X, Hertzberg V.S, Yang C.: Enhancing personalized healthcare via capturing disease severity, interaction, and progression. In: 2023 IEEE International Conference on Data Mining (ICDM), pp. 1349\u20131354 (2023). IEEE","DOI":"10.1109\/ICDM58522.2023.00173"},{"key":"396_CR75","doi-asserted-by":"crossref","unstructured":"Lu C, Han T, Ning Y.: Context-aware health event prediction via transition functions on dynamic disease graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pp. 
4567\u20134574 (2022)","DOI":"10.1609\/aaai.v36i4.20380"},{"key":"396_CR76","first-page":"10088","volume":"36","author":"T Dettmers","year":"2023","unstructured":"Dettmers T, Pagnoni A, Holtzman A, Zettlemoyer L. Qlora: Efficient finetuning of quantized llms. Adv Neural Inf Process Syst. 2023;36:10088\u2013115.","journal-title":"Adv Neural Inf Process Syst"},{"issue":"2","key":"396_CR77","first-page":"3","volume":"1","author":"EJ Hu","year":"2022","unstructured":"Hu EJ, Shen Y, Wallis P, Allen-Zhu Z, Li Y, Wang S, et al. Lora: Low-rank adaptation of large language models. ICLR. 2022;1(2):3.","journal-title":"ICLR"},{"key":"396_CR78","unstructured":"Jiang A.Q, Sablayrolles A, Mensch A, Bamford C, Chaplot D.S, Casas D.d.l, Bressand F, Lengyel G, Lample G, Saulnier L, et al.: Mistral 7b. arXiv preprint arXiv:2310.06825 (2023)"},{"key":"396_CR79","unstructured":"AI@Meta: Llama 2 model card (2024)"}],"container-title":["Health Information Science and Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13755-025-00396-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s13755-025-00396-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13755-025-00396-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,27]],"date-time":"2025-11-27T07:46:38Z","timestamp":1764229598000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s13755-025-00396-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,27]]},"references-count":79,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2026,12]]}},"alternative-id":["396"],"URL":"https:\/\/doi.org\/10.1007\/s13755
-025-00396-w","relation":{},"ISSN":["2047-2501"],"issn-type":[{"value":"2047-2501","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,27]]},"assertion":[{"value":"28 April 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 November 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 November 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"7"}}