{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,24]],"date-time":"2025-11-24T13:29:03Z","timestamp":1763990943242,"version":"3.45.0"},"reference-count":61,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2025,11,23]],"date-time":"2025-11-23T00:00:00Z","timestamp":1763856000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computers"],"abstract":"<jats:p>The rapid growth of digital journalism has heightened the need for reliable multi-document summarization (MDS) systems, particularly in underrepresented, low-resource, and culturally distinct contexts. However, current progress is hindered by a lack of large-scale, high-quality non-Western datasets. Existing benchmarks\u2014such as CNN\/DailyMail, XSum, and MultiNews\u2014are limited by language, regional focus, or reliance on noisy, auto-generated summaries. We introduce NewsSumm, the largest human-annotated MDS dataset for Indian English, curated by over 14,000 expert annotators through the Suvidha Foundation. Spanning 36 Indian English newspapers from 2000 to 2025 and covering more than 20 topical categories, NewsSumm includes over 317,498 articles paired with factually accurate, professionally written abstractive summaries. We detail its robust collection, annotation, and quality control pipelines, and present extensive statistical, linguistic, and temporal analyses that underscore its scale and diversity. To establish benchmarks, we evaluate PEGASUS, BART, and T5 models on NewsSumm, reporting aggregate and category-specific ROUGE scores, as well as factual consistency metrics. All NewsSumm dataset materials are openly released via Zenodo. 
NewsSumm offers a foundational resource for advancing research in summarization, factuality, timeline synthesis, and domain adaptation for Indian English and other low-resource language settings.<\/jats:p>","DOI":"10.3390\/computers14120508","type":"journal-article","created":{"date-parts":[[2025,11,24]],"date-time":"2025-11-24T13:09:25Z","timestamp":1763989765000},"page":"508","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["NewsSumm: The World\u2019s Largest Human-Annotated Multi-Document News Summarization Dataset for Indian English"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3239-0205","authenticated-orcid":false,"given":"Manish","family":"Motghare","sequence":"first","affiliation":[{"name":"Shri Ramdeobaba College of Engineering and Management, Affiliated to Rashtrasant Tukdoji Maharaj Nagpur University, Nagpur 440013, India"}]},{"given":"Megha","family":"Agarwal","sequence":"additional","affiliation":[{"name":"School of Medicine, Stanford University, Stanford, CA 94305, USA"}]},{"given":"Avinash","family":"Agrawal","sequence":"additional","affiliation":[{"name":"Shri Ramdeobaba College of Engineering and Management, Affiliated to Rashtrasant Tukdoji Maharaj Nagpur University, Nagpur 440013, India"},{"name":"Department of Artificial Intelligence and Cyber Security, Ramdeobaba University, Nagpur 440013, India"}]}],"member":"1968","published-online":{"date-parts":[[2025,11,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1043","DOI":"10.1162\/tacl_a_00687","article-title":"Do Multi-Document Summarization Models Synthesize?","volume":"12","author":"DeYoung","year":"2024","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Ahuja, O., Xu, J., Gupta, A., Horecka, K., and Durrett, G. (2022, January 22\u201327). 
ASPECTNEWS: Aspect-Oriented Summarization of News Documents. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland.","DOI":"10.18653\/v1\/2022.acl-long.449"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Alambo, A., Lohstroh, C., Madaus, E., Padhee, S., Foster, B., Banerjee, T., Thirunarayan, K., and Raymer, M. (2020, January 10\u201313). Topic-Centric Unsupervised Multi-Document Summarization of Scientific and News Articles. Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA.","DOI":"10.1109\/BigData50022.2020.9378403"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Benedetto, I., Cagliero, L., Ferro, M., Tarasconi, F., Bernini, C., and Giacalone, G. (2025). Leveraging large language models for abstractive summarization of Italian legal news. Artif. Intell. Law.","DOI":"10.1007\/s10506-025-09431-3"},{"key":"ref_5","unstructured":"See, A., Liu, P.J., and Manning, C.D. (August, January 30). Get To The Point: Summarization with Pointer-Generator Networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, BC, Canada."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3700639","article-title":"Single-Document Abstractive Text Summarization: A Systematic Literature Review","volume":"57","author":"Rao","year":"2025","journal-title":"ACM Comput. Surv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Narayan, S., Cohen, S.B., and Lapata, M. (November, January 31). Don\u2019t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization. 
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1206"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Li, H., Zhang, Y., Zhang, R., and Chaturvedi, S. (May, January 29). Coverage-Based Fairness in Multi-Document Summarization. Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Albuquerque, NM, USA.","DOI":"10.18653\/v1\/2025.naacl-long.494"},{"key":"ref_9","unstructured":"Fabbri, A., Li, I., She, T., Li, S., and Radev, D. (August, January 28). Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Langston, O., and Ashford, B. (2024). Automated Summarization of Multiple Document Abstracts and Contents Using Large Language Models. TechRxiv.","DOI":"10.36227\/techrxiv.172262754.45577350\/v1"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2287","DOI":"10.1093\/jamia\/ocab143","article-title":"A Systematic Review of Automatic Text Summarization for Biomedical Literature and EHRs","volume":"28","author":"Wang","year":"2021","journal-title":"J. Am. Med. Inform. Assoc."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Yu, Z., Sun, N., Wu, S., and Wang, Y. (2025, January 21). Research on Automatic Text Summarization Using Transformer and Pointer-Generator Networks. Proceedings of the 2025 4th International Symposium on Computer Applications and Information Technology (ISCAIT), Xi\u2019an, China.","DOI":"10.1109\/ISCAIT64916.2025.11010564"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Gliwa, B., Mochol, I., Biesek, M., and Wawer, A. (2019, January 4). 
SAMSum Corpus: A Human-Annotated Dialogue Dataset for Abstractive Summarization. Proceedings of the 2nd Workshop on New Frontiers in Summarization, Hong Kong, China.","DOI":"10.18653\/v1\/D19-5409"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Cao, M., Dong, Y., Wu, J., and Cheung, J.C.K. (2020, January 16\u201320). Factual Error Correction for Abstractive Summarization Models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.506"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Kryscinski, W., McCann, B., Xiong, C., and Socher, R. (2020, January 16\u201320). Evaluating the Factual Consistency of Abstractive Text Summarization. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.750"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"124456","DOI":"10.1016\/j.eswa.2024.124456","article-title":"Factual Consistency Evaluation of Summarization in the Era of Large Language Models","volume":"254","author":"Luo","year":"2024","journal-title":"Expert Syst. Appl."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Pagnoni, A., Balachandran, V., and Tsvetkov, Y. (2021, January 6\u201311). Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online.","DOI":"10.18653\/v1\/2021.naacl-main.383"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wang, A., Cho, K., and Lewis, M. (2020, January 5\u201310). Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.acl-main.450"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3748325","article-title":"A Bilingual Legal NER Dataset and Semantics-Aware Cross-Lingual Label Transfer Method for Low-Resource Languages","volume":"24","author":"Tulajiang","year":"2025","journal-title":"ACM Trans. Asian Low-Resour. Lang. Inf. Process."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3748317","article-title":"Loanword Identification in Social Media Texts with Extended Code-Switching Datasets","volume":"24","author":"Mi","year":"2025","journal-title":"ACM Trans. Asian Low-Resour. Lang. Inf. Process."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3748648","article-title":"A Novel Benchmark for Persian Table-to-Text Generation: A New Dataset and Baseline Experiments","volume":"24","author":"Mohammadalizadeh","year":"2025","journal-title":"ACM Trans. Asian Low-Resour. Lang. Inf. Process."},{"key":"ref_22","unstructured":"Beltagy, I., Peters, M.E., and Cohan, A. (2020). Longformer: The Long-Document Transformer. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1162\/coli_a_00536","article-title":"Compositionality and Sentence Meaning: Comparing Semantic Parsing and Transformers on a Challenging Sentence Similarity Dataset","volume":"51","author":"Fodor","year":"2025","journal-title":"Comput. Linguist."},{"key":"ref_24","first-page":"1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel","year":"2020","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"337","DOI":"10.1017\/S1351324922000031","article-title":"Topical Language Generation Using Transformers","volume":"29","author":"Zandie","year":"2023","journal-title":"Nat. Lang. Eng."},{"key":"ref_26","first-page":"1","article-title":"A Novel Dataset for Arabic Domain Specific Term Extraction and Comparative Evaluation of BERT-Based Models for Arabic Term Extraction","volume":"24","year":"2025","journal-title":"ACM Trans. Asian Low-Resour. Lang. Inf. Process."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kurniawan, K., and Louvan, S. (2018, January 15\u201317). IndoSum: A New Benchmark Dataset for Indonesian Text Summarization. Proceedings of the 2018 International Conference on Asian Language Processing (IALP), Bandung, Indonesia.","DOI":"10.1109\/IALP.2018.8629109"},{"key":"ref_28","unstructured":"Sharma, E., Li, C., and Wang, L. (August, January 28). BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Malik, M., Zhao, Z., Fonseca, M., Rao, S., and Cohen, S.B. (2024, January 10). CivilSum: A Dataset for Abstractive Summarization of Indian Court Decisions. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, Washington, DC, USA.","DOI":"10.1145\/3626772.3657859"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wang, H., Li, T., Du, S., and Wei, X. (2025). Mixed Information Bottleneck for Location Metonymy Resolution Using Pre-trained Language Models. ACM Trans. Asian Low-Resour. Lang. Inf. Process.","DOI":"10.1145\/3774933"},{"key":"ref_31","unstructured":"Guti\u00e9rrez-Hinojosa, S.J., Calvo, H., Moreno-Armend\u00e1riz, M.A., and Duchanoy, C.A. (2018). 
Sentence Embeddings for Document Sets in DUC 2002 Summarization Task, IEEE Dataport."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Rush, A.M., Chopra, S., and Weston, J. (2015, January 17\u201321). A Neural Attention Model for Abstractive Sentence Summarization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal.","DOI":"10.18653\/v1\/D15-1044"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"104977","DOI":"10.1109\/ACCESS.2025.3575610","article-title":"Abstractive Text Summarization in Arabic-Like Script Using Multi-Encoder Architecture and Semantic Extraction Techniques","volume":"13","author":"Fatima","year":"2025","journal-title":"IEEE Access"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Grusky, M., Naaman, M., and Artzi, Y. (2018, January 1\u20136). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, LA, USA.","DOI":"10.18653\/v1\/N18-1065"},{"key":"ref_35","unstructured":"Kim, B., Kim, H., and Kim, G. (2019, January 2\u20137). Abstractive Summarization of Reddit Posts with Multi-level Memory Networks. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Gupta, V., Bharti, P., Nokhiz, P., and Karnick, H. (2021, January 1\u20136). SumPubMed: Summarization Dataset of PubMed Scientific Articles. 
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, Online.","DOI":"10.18653\/v1\/2021.acl-srw.30"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Xia, T.C., Bertini, F., and Montesi, D. (2025). Large Language Models Evaluation for PubMed Extractive Summarisation. ACM Trans. Comput. Healthc., 3766905.","DOI":"10.1145\/3766905"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Cohan, A., Dernoncourt, F., Kim, D.S., Bui, T., Kim, S., Chang, W., and Goharian, N. (2018, January 1\u20136). A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, LA, USA.","DOI":"10.18653\/v1\/N18-2097"},{"key":"ref_39","unstructured":"Koupaee, M., and Wang, W.Y. (2018). WikiHow: A Large Scale Text Summarization Dataset. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Zhang, N., Liu, Y., Fabbri, A., Liu, J., Kamoi, R., Lu, X., Xiong, C., Zhao, J., and Radev, D. (2024, January 16\u201321). Fair Abstractive Summarization of Diverse Perspectives. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Mexico City, Mexico.","DOI":"10.18653\/v1\/2024.naacl-long.187"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Huang, K.-H., Laban, P., Fabbri, A., Choubey, P.K., Joty, S., Xiong, C., and Wu, C.-S. (2024, January 16\u201321). Embrace Divergence for Richer Insights: A Multi-Document Summarization Benchmark and a Case Study on Summarizing Diverse Information from News Articles. 
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Mexico City, Mexico.","DOI":"10.18653\/v1\/2024.naacl-long.32"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Datta, D., Soni, S., Mukherjee, R., and Ghosh, S. (2023, January 6\u201310). MILDSum: A Novel Benchmark Dataset for Multilingual Summarization of Indian Legal Case Judgments. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore.","DOI":"10.18653\/v1\/2023.emnlp-main.321"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Zhao, C., Zhou, X., Xie, X., and Zhang, Y. (2024). Hierarchical Attention Graph for Scientific Document Summarization in Global and Local Level. arXiv.","DOI":"10.18653\/v1\/2024.findings-naacl.45"},{"key":"ref_44","unstructured":"To, H.Q., Liu, M., Huang, G., Tran, H.-N., Greiner-Petter, A., Beierle, F., and Aizawa, A. (2024). SKT5SciSumm\u2014Revisiting Extractive-Generative Approach for Multi-Document Scientific Summarization. arXiv."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Lu, Y., Dong, Y., and Charlin, L. (2020, January 16\u201320). Multi-XScience: A Large-Scale Dataset for Extreme Multi-Document Summarization of Scientific Articles. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.","DOI":"10.18653\/v1\/2020.emnlp-main.648"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"1132","DOI":"10.1162\/tacl_a_00417","article-title":"A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods","volume":"9","author":"Deutsch","year":"2021","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Jiang, X., and Dreyer, M. (2024, January 16\u201321). CCSum: A Large-Scale and High-Quality Dataset for Abstractive News Summarization. 
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Mexico City, Mexico.","DOI":"10.18653\/v1\/2024.naacl-long.406"},{"key":"ref_48","first-page":"26138","article-title":"EventSum: A Large-Scale Event-Centric Summarization Dataset for Chinese Multi-News Documents","volume":"39","author":"Zhu","year":"2025","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_49","unstructured":"Li, M., Qi, J., and Lau, J.H. (2022). PeerSum: A Peer Review Dataset for Abstractive Multi-Document Summarization. arXiv."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zhong, M., Yin, D., Yu, T., Zaidi, A., Mutuma, M., Jha, R., Awadallah, A.H., Celikyilmaz, A., Liu, Y., and Qiu, X. (2021, January 6\u201311). QMSum: A New Benchmark for Query-Based Multi-Domain Meeting Summarization. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online.","DOI":"10.18653\/v1\/2021.naacl-main.472"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Giarelis, N., Mastrokostas, C., and Karacapilidis, N. (2023). Abstractive vs. Extractive Summarization: An Experimental Review. Appl. 
Sci., 13.","DOI":"10.3390\/app13137620"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"128255","DOI":"10.1016\/j.neucom.2024.128255","article-title":"Abstractive Text Summarization: State of the Art, Challenges, and Improvements","volume":"603","author":"Shakil","year":"2024","journal-title":"Neurocomputing"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"625","DOI":"10.1038\/s41586-024-07421-0","article-title":"Detecting Hallucinations in Large Language Models Using Semantic Entropy","volume":"630","author":"Farquhar","year":"2024","journal-title":"Nature"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1163","DOI":"10.1162\/tacl_a_00695","article-title":"Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization","volume":"12","author":"Chrysostomou","year":"2024","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_55","unstructured":"Alansari, A., and Luqman, H. (2025). Large Language Models Hallucination: A Comprehensive Survey. arXiv."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1145\/3744558","article-title":"Corpus Fusion and Text Summarization Extraction for Multi-Feature Enhanced Entity Alignment","volume":"24","author":"Gang","year":"2025","journal-title":"ACM Trans. Asian Low-Resour. Lang. Inf. Process."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"391","DOI":"10.1162\/tacl_a_00373","article-title":"SummEval: Re-Evaluating Summarization Evaluation","volume":"9","author":"Fabbri","year":"2021","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_58","first-page":"120","article-title":"Continual Learning of Large Language Models: A Comprehensive Survey","volume":"58","author":"Shi","year":"2025","journal-title":"ACM Comput. Surv."},{"key":"ref_59","unstructured":"(2025, November 05). Hindustan Times. 
Available online: https:\/\/www.hindustantimes.com\/."},{"key":"ref_60","unstructured":"Zhang, J., Zhao, Y., Saleh, M., and Liu, P.J. (2019, January 13\u201318). PEGASUS: Pre-Training with Extracted Gap-Sentences for Abstractive Summarization. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. (2020, January 5\u201310). BART: Denoising Sequence-to-Sequence Pre-Training for Natural Language Generation, Translation, and Comprehension. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online.","DOI":"10.18653\/v1\/2020.acl-main.703"}],"container-title":["Computers"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/12\/508\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,24]],"date-time":"2025-11-24T13:26:33Z","timestamp":1763990793000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/12\/508"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,23]]},"references-count":61,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["computers14120508"],"URL":"https:\/\/doi.org\/10.3390\/computers14120508","relation":{},"ISSN":["2073-431X"],"issn-type":[{"value":"2073-431X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,23]]}}}