{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T23:57:53Z","timestamp":1773446273360,"version":"3.50.1"},"reference-count":76,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2024,10,22]],"date-time":"2024-10-22T00:00:00Z","timestamp":1729555200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Understanding and explaining legal systems is very challenging due to their complex structure, specialized terminology, and multiple interpretations. Legal AI models are currently undergoing drastic advancements due to the development of Large Language Models (LLMs) that have achieved state-of-the-art performance on a wide range of tasks and are currently undergoing very rapid iterations. As an emerging field, the application of LLMs in the legal field is still in its early stages, with multiple challenges that need to be addressed. Our objective is to provide a comprehensive survey of legal LLMs, not only reviewing the models themselves but also analyzing their applications within the legal systems in different geographies. The paper begins by providing a high-level overview of AI technologies in the legal field and showcasing recent research advancements in LLMs, followed by practical implementations of legal LLMs. Two databases (i.e., SCOPUS and Web of Science) were considered alongside additional related studies that met our selection criteria. We used the PRISMA for Scoping Reviews (PRISMA-ScR) guidelines as the methodology to extract relevant studies and report our findings. The paper discusses and analyses the limitations and challenges faced by legal LLMs, including issues related to data, algorithms, and judicial practices. Moreover, we examine the extent to which such systems can be effectively deployed. The paper summarizes recommendations and future directions to address challenges, aiming to help stakeholders overcome limitations and integrate legal LLMs into the judicial system.<\/jats:p>","DOI":"10.3390\/info15110662","type":"journal-article","created":{"date-parts":[[2024,10,22]],"date-time":"2024-10-22T04:10:14Z","timestamp":1729570214000},"page":"662","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["To What Extent Have LLMs Reshaped the Legal Domain So Far? A Scoping Literature Review"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-1142-9409","authenticated-orcid":false,"given":"Bogdan","family":"Padiu","sequence":"first","affiliation":[{"name":"Computer Science & Engineering Department, National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-4953-1180","authenticated-orcid":false,"given":"Radu","family":"Iacob","sequence":"additional","affiliation":[{"name":"Computer Science & Engineering Department, National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7255-5537","authenticated-orcid":false,"given":"Traian","family":"Rebedea","sequence":"additional","affiliation":[{"name":"Computer Science & Engineering Department, National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania"},{"name":"NVIDIA, Santa Clara, CA 95051, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4815-9227","authenticated-orcid":false,"given":"Mihai","family":"Dascalu","sequence":"additional","affiliation":[{"name":"Computer Science & Engineering Department, National University of Science and Technology POLITEHNICA Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania"},{"name":"Academy of Romanian Scientists, Str. Ilfov, Nr. 3, 050044 Bucharest, Romania"}]}],"member":"1968","published-online":{"date-parts":[[2024,10,22]]},"reference":[{"key":"ref_1","unstructured":"Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., and Anadkat, S. (2023). GPT-4 Technical Report. arXiv."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Sobkowicz, P. (2022). Hammering with the telescope. Front. Artif. Intell., 5.","DOI":"10.3389\/frai.2022.1010219"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"561","DOI":"10.1007\/s10506-022-09327-6","article-title":"Thirty years of artificial intelligence and law: The third decade","volume":"30","author":"Villata","year":"2022","journal-title":"Artif. Intell. Law"},{"key":"ref_4","unstructured":"Ridnik, T., Kredo, D., and Friedman, I. (2024). Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Dhuliawala, S., Komeili, M., Xu, J., Raileanu, R., Li, X., Celikyilmaz, A., and Weston, J. (2024, January 8). Chain-of-Verification Reduces Hallucination in Large Language Models. Proceedings of the Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand.","DOI":"10.18653\/v1\/2024.findings-acl.212"},{"key":"ref_6","unstructured":"Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D. (2022). Self-Consistency Improves Chain of Thought Reasoning in Language Models. arXiv."},{"key":"ref_7","unstructured":"Fei, Z., Shen, X., Zhu, D., Zhou, F., Han, Z., Zhang, S., Chen, K., Shen, Z., and Ge, J. (2023). LawBench: Benchmarking Legal Knowledge of Large Language Models. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Lai, J., Gan, W., Wu, J., Qi, Z., and Yu, P.S. (2023). Large Language Models in Law: A Survey. arXiv.","DOI":"10.1016\/j.aiopen.2024.09.002"},{"key":"ref_9","unstructured":"Huang, Q., Tao, M., Zhang, C., An, Z., Jiang, C., Chen, Z., Wu, Z., and Feng, Y. (2023). Lawyer LLaMA Technical Report. arXiv."},{"key":"ref_10","unstructured":"Cui, J., Li, Z., Yan, Y., Chen, B., and Yuan, L. (2023). ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases. arXiv."},{"key":"ref_11","first-page":"242","article-title":"Developing artificially intelligent justice","volume":"22","author":"Re","year":"2019","journal-title":"Stanf. Technol. Law Rev."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"467","DOI":"10.7326\/M18-0850","article-title":"PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation","volume":"169","author":"Tricco","year":"2018","journal-title":"Ann. Intern. Med."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"n71","DOI":"10.1136\/bmj.n71","article-title":"The PRISMA 2020 statement: An updated guideline for reporting systematic reviews","volume":"372","author":"Page","year":"2021","journal-title":"BMJ"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"e1230","DOI":"10.1002\/cl2.1230","article-title":"PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis","volume":"18","author":"Haddaway","year":"2022","journal-title":"Campbell Syst. Rev."},{"key":"ref_15","unstructured":"Honnibal, M., Montani, I., Van Landeghem, S., and Boyd, A. (2024, March 28). spaCy: Industrial-strength Natural Language Processing in Python; Explosion: 2020. Available online: https:\/\/spacy.io."},{"key":"ref_16","unstructured":"Grootendorst, M. (2024, March 28). MaartenGr\/KeyBERT: BibTeX (Version v0.1.3); Zenodo: 2021, Available online: https:\/\/zenodo.org\/records\/4461265."},{"key":"ref_17","unstructured":"Li, H., Su, W., Wang, C., Wu, Y., Ai, Q., and Liu, Y. (2023). THUIR@COLIEE 2023: Incorporating Structural Knowledge into Pre-trained Language Models for Legal Case Retrieval. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Louis, A., van Dijck, G., and Spanakis, G. (2024, January 20\u201327). Interpretable long-form legal question answering with retrieval-augmented large language models. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v38i20.30232"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Guan, J., Yu, Z., Liao, Y., Tang, R., Duan, M., and Han, G. (2024). Predicting Critical Path of Labor Dispute Resolution in Legal Domain by Machine Learning Models Based on SHapley Additive exPlanations and Soft Voting Strategy. Mathematics, 12.","DOI":"10.3390\/math12020272"},{"key":"ref_20","first-page":"555","article-title":"Groups of experts often differ in their decisions: What are the implications for AI and machine learning? A commentary on Noise: A Flaw in Human Judgment, by Kahneman, Sibony, and Sunstein (2021)","volume":"44","author":"Sleeman","year":"2023","journal-title":"AI Mag."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"e904","DOI":"10.7717\/peerj-cs.904","article-title":"Predicting Brazilian Court Decisions","volume":"8","author":"Santana","year":"2022","journal-title":"Peerj Comput. Sci."},{"key":"ref_22","unstructured":"Weller, O., Chang, B., MacAvaney, S., Lo, K., Cohan, A., Durme, B.V., Lawrie, D., and Soldaini, L. (2024). FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Savelka, J., and Ashley, K.D. (2023). The unreasonable effectiveness of large language models in zero-shot semantic annotation of legal texts. Front. Artif. Intell., 6.","DOI":"10.3389\/frai.2023.1279794"},{"key":"ref_24","unstructured":"Bornstein Matt, R.R. (2023, June 20). Emerging Architectures for LLM Applications. Available online: https:\/\/a16z.com."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2013652","DOI":"10.1080\/08839514.2021.2013652","article-title":"Human Judges in the Era of Artificial Intelligence: Challenges and Opportunities","volume":"36","author":"Xu","year":"2022","journal-title":"Appl. Artif. Intell."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"100","DOI":"10.58881\/jlps.v2i2.25","article-title":"Investigating the Listening and Transcription Performance in Court: Experiences from Stenographers in Philippine Courtrooms","volume":"2","author":"Etulle","year":"2023","journal-title":"J. Lang. Pragmat. Stud."},{"key":"ref_27","unstructured":"Haitao, L. (2024, May 01). LexiLaw. Available online: https:\/\/github.com\/CSHaitao\/LexiLaw."},{"key":"ref_28","unstructured":"GLM, T., Zeng, A., Xu, B., Wang, B., Zhang, C., Yin, D., Rojas, D., Feng, G., Zhao, H., and Lai, H. (2024). ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools. arXiv."},{"key":"ref_29","unstructured":"Wu, S., Liu, Z., Zhang, Z., Chen, Z., Deng, W., Zhang, W., Yang, J., Yao, Z., Lyu, Y., and Xin, X. (2024, March 28). fuzi.mingcha. Available online: https:\/\/github.com\/irlab-sdu\/fuzi.mingcha."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Deng, W., Pei, J., Kong, K., Chen, Z., Wei, F., Li, Y., Ren, Z., Chen, Z., and Ren, P. (2023, January 6\u201310). Syllogistic Reasoning for Legal Judgment Analysis. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore.","DOI":"10.18653\/v1\/2023.emnlp-main.864"},{"key":"ref_31","unstructured":"Cui, Y., Yang, Z., and Yao, X. (2023). Efficient and effective text encoding for chinese llama and alpaca. arXiv."},{"key":"ref_32","unstructured":"Huang, X., Zhang, L.L., Cheng, K.T., Yang, F., and Yang, M. (2023). Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning. arXiv."},{"key":"ref_33","unstructured":"(2024, March 28). JurisLMs. Available online: https:\/\/github.com\/seudl\/JurisLMs."},{"key":"ref_34","first-page":"9","article-title":"Language Models are Unsupervised Multitask Learners","volume":"1","author":"Radford","year":"2019","journal-title":"OpenAI Blog"},{"key":"ref_35","unstructured":"He, W., Wen, J., Zhang, L., Cheng, H., Qin, B., Li, Y., Jiang, F., Chen, J., Wang, B., and Yang, M. (2024, March 28). HanFei-1.0. Available online: https:\/\/github.com\/siat-nlp\/HanFei."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Muennighoff, N., Wang, T., Sutawika, L., Roberts, A., Biderman, S., Scao, T.L., Bari, M.S., Shen, S., Yong, Z.X., and Schoelkopf, H. (2022). Crosslingual generalization through multitask finetuning. arXiv.","DOI":"10.18653\/v1\/2023.acl-long.891"},{"key":"ref_37","unstructured":"Zhang, J., Gan, R., Wang, J., Zhang, Y., Zhang, L., Yang, P., Gao, X., Wu, Z., Dong, X., and He, J. (2022). Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence. arXiv."},{"key":"ref_38","unstructured":"Shen, X., Zhu, D., Fei, Z., Li, Q., Shen, Z., and Ge, J. (2024, March 28). Lychee. Available online: https:\/\/github.com\/davidpig\/lychee_law."},{"key":"ref_39","unstructured":"Muresan, S., Nakov, P., and Villavicencio, A. (2022, January 22\u201327). GLM: General Language Model Pretraining with Autoregressive Blank Infilling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland."},{"key":"ref_40","unstructured":"Chiang, W.L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S., Zhuang, Y., and Gonzalez, J.E. (2024, March 28). Vicuna: An Open-Source Chatbot Impressing Gpt-4 with 90%* Chatgpt Quality. Available online: https:\/\/lmsys.org\/blog\/2023-03-30-vicuna."},{"key":"ref_41","unstructured":"Xu, C., Sun, Q., Zheng, K., Geng, X., Zhao, P., Feng, J., Tao, C., and Jiang, D. (2023). Wizardlm: Empowering large language models to follow complex instructions. arXiv."},{"key":"ref_42","unstructured":"Wang, Y., Ivison, H., Dasigi, P., Hessel, J., Khot, T., Chandu, K.R., Wadden, D., MacMillan, K., Smith, N.A., and Beltagy, I. (2023, January 10\u201316). How far can camels go?. exploring the state of instruction tuning on open resources. In Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA."},{"key":"ref_43","unstructured":"Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs. arXiv."},{"key":"ref_44","unstructured":"Jiang, A.Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D.S., Casas, D.d.l., Bressand, F., Lengyel, G., Lample, G., and Saulnier, L. (2023). Mistral 7B. arXiv."},{"key":"ref_45","unstructured":"Woolrych, D. (2024, March 28). How To Build The Ultimate Legal LLM Stack. Available online: https:\/\/lawpath.com.au\/blog\/how-to-build-the-ultimate-legal-llm-stack."},{"key":"ref_46","unstructured":"OpenAI (2024, May 01). GPT-3.5-turbo-16k. Available online: https:\/\/openai.com."},{"key":"ref_47","unstructured":"Nguyen, H.T. (2023). A Brief Report on LawGPT 1.0: A Virtual Legal Assistant Based on GPT-3. arXiv."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Moens, M.F., Boiy, E., Palau, R.M., and Reed, C. (2007, January 4\u20138). Automatic detection of arguments in legal texts. Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL \u201907, New York, NY, USA.","DOI":"10.1145\/1276318.1276362"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Zubaer, A.A., Granitzer, M., and Mitrovi\u0107, J. (2023). Performance analysis of large language models in the domain of legal argument mining. Front. Artif. Intell., 6.","DOI":"10.3389\/frai.2023.1278796"},{"key":"ref_50","unstructured":"Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M., Feng, Y., Han, X., Hu, Z., and Wang, H. (2018). CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction. arXiv."},{"key":"ref_51","unstructured":"Webber, B., Cohn, T., He, Y., and Liu, Y. (2020, January 16\u201320). An Element-aware Multi-representation Model for Law Article Prediction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Zhong, H., Xiao, C., Tu, C., Zhang, T., Liu, Z., and Sun, M. (2020, January 7\u201312). JEC-QA: A legal-domain question answering dataset. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i05.6519"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"65","DOI":"10.1016\/j.aiopen.2021.06.001","article-title":"WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models","volume":"2","author":"Yuan","year":"2021","journal-title":"AI Open"},{"key":"ref_54","unstructured":"Xu, L., Zhang, X., and Dong, Q. (2020). CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language Model. arXiv."},{"key":"ref_55","first-page":"1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel","year":"2020","journal-title":"J. Mach. Learn. Res."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Chen, S., Hou, Y., Cui, Y., Che, W., Liu, T., and Yu, X. (2020, January 16\u201320). Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.634"},{"key":"ref_57","unstructured":"Chen, F. (2024, March 28). The Legal Consultation Data and Corpus of the Thesis from China Law Network (Version V1); Peking University Open Research Data Platform. 2018. Available online: https:\/\/opendata.pku.edu.cn\/dataset.xhtml?persistentId=doi:10.18170\/DVN\/OLO4G8."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Louis, A., and Spanakis, G. (2022). A Statutory Article Retrieval Dataset in French. arXiv.","DOI":"10.18653\/v1\/2022.acl-long.468"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3571730","article-title":"Survey of hallucination in natural language generation","volume":"55","author":"Ji","year":"2023","journal-title":"ACM Comput. Surv."},{"key":"ref_60","unstructured":"Gou, Z., Shao, Z., Gong, Y., Shen, Y., Yang, Y., Duan, N., and Chen, W. (2023). CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing. arXiv."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"105871","DOI":"10.1016\/j.clsr.2023.105871","article-title":"The European AI liability directives\u2014Critique of a half-hearted approach and lessons for the future","volume":"51","author":"Hacker","year":"2023","journal-title":"Comput. Law Secur. Rev."},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N.A., Khashabi, D., and Hajishirzi, H. (2023, January 9\u201314). Self-Instruct: Aligning Language Models with Self-Generated Instructions. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, ON, Canada.","DOI":"10.18653\/v1\/2023.acl-long.754"},{"key":"ref_63","unstructured":"Peng, B., Li, C., He, P., Galley, M., and Gao, J. (2023). Instruction Tuning with GPT-4. arXiv."},{"key":"ref_64","unstructured":"Li, X., Zhang, T., Dubois, Y., Taori, R., Ishaan Gulrajani, C.G., Liang, P., and Hashimoto, T.B. (2024, March 28). Alpacaeval: An Automatic Evaluator of Instruction-Following Models. Available online: https:\/\/github.com\/tatsu-lab\/alpaca_eval."},{"key":"ref_65","unstructured":"Ng, A. (2024, March 28). The Batch Issue 242: Four Design Patterns for AI Agentic Workflows Blog Post. The Batch. Available online: https:\/\/www.deeplearning.ai\/the-batch\/issue-242\/."},{"key":"ref_66","unstructured":"Huang, X., Liu, W., Chen, X., Wang, X., Wang, H., Lian, D., Wang, Y., Tang, R., and Chen, E. (2024). Understanding the planning of LLM agents: A survey. arXiv."},{"key":"ref_67","unstructured":"Wei, J., Yang, C., Song, X., Lu, Y., Hu, N., Tran, D., Peng, D., Liu, R., Huang, D., and Du, C. (2024). Long-form factuality in large language models. arXiv."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"1402","DOI":"10.1017\/S1351324923000463","article-title":"Emerging trends: Smooth-talking machines","volume":"29","author":"Church","year":"2023","journal-title":"Nat. Lang. Eng."},{"key":"ref_69","first-page":"189","article-title":"Cultural Dimensions Of Legal Discourse","volume":"38","author":"Sierocka","year":"2014","journal-title":"Stud. Log."},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"47","DOI":"10.1111\/j.1468-0386.2009.00496.x","article-title":"Beyond Multilingualism: On Different Approaches to the Handling of Diverging Language Versions of a Community Law","volume":"16","author":"Schilling","year":"2010","journal-title":"Eur. Law J."},{"key":"ref_71","unstructured":"Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., and Fan, A. (2024). The Llama 3 Herd of Models. arXiv."},{"key":"ref_72","first-page":"305","article-title":"Semantics of the verb shall in legal discourse","volume":"18","author":"Boginskaya","year":"2017","journal-title":"Jezikoslovlje"},{"key":"ref_73","unstructured":"Basmov, V., Goldberg, Y., and Tsarfaty, R. (2024). LLMs\u2019 Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements. arXiv."},{"key":"ref_74","first-page":"1250","article-title":"Iteratively Questioning and Answering for Interpretable Legal Judgment Prediction","volume":"34","author":"Zhong","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_75","doi-asserted-by":"crossref","unstructured":"Zhang, D., Finckenberg-Broman, P., Hoang, T., Pan, S., Xing, Z., Staples, M., and Xu, X. (2024). Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions. arXiv.","DOI":"10.1007\/s43681-024-00573-9"},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Ali, A., Al-rimy, B.A.S., Alsubaei, F.S., Almazroi, A.A., and Almazroi, A.A. (2023). HealthLock: Blockchain-Based Privacy Preservation Using Homomorphic Encryption in Internet of Things Healthcare Applications. Sensors, 23.","DOI":"10.3390\/s23156762"}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/11\/662\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T16:17:46Z","timestamp":1760113066000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/15\/11\/662"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,22]]},"references-count":76,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2024,11]]}},"alternative-id":["info15110662"],"URL":"https:\/\/doi.org\/10.3390\/info15110662","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,22]]}}}