{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T16:10:28Z","timestamp":1775837428395,"version":"3.50.1"},"reference-count":43,"publisher":"Walter de Gruyter GmbH","issue":"1","license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,3,29]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Automatic text summarization (ATS) extracts information from a source text and presents it to the user in a condensed form while preserving its primary content. Many text summarization approaches have been investigated in the literature for highly resourced languages. At the same time, ATS is a complicated and challenging task for under-resourced languages like Malayalam. The lack of a standard corpus and of adequate processing tools poses challenges for language processing. In the absence of a standard corpus, we have developed a dataset consisting of Malayalam news articles. This article proposes an extractive topic modeling-based multi-document text summarization approach for Malayalam news documents. We first cluster the contents based on latent topics identified using the latent Dirichlet allocation topic modeling technique. Then, by adopting the vector space model, the topic vector and sentence vectors of the given document are generated. Sentences are ranked according to the relevance status value computed between the document\u2019s topic and sentence vectors. The summary obtained is optimized for non-redundancy. 
Evaluation results on Malayalam news articles show that the summary generated by the proposed method is closer to the human-generated summaries than the existing text summarization methods.<\/jats:p>","DOI":"10.1515\/jisys-2022-0027","type":"journal-article","created":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T03:04:07Z","timestamp":1648523047000},"page":"393-406","source":"Crossref","is-referenced-by-count":4,"title":["Extractive summarization of Malayalam documents using latent Dirichlet allocation: An experience"],"prefix":"10.1515","volume":"31","author":[{"given":"Manju","family":"Kondath","sequence":"first","affiliation":[{"name":"Department of Computer Science, Cochin University of Science and Technology , Kochi 682022 , Kerala , India"}]},{"given":"David Peter","family":"Suseelan","sequence":"additional","affiliation":[{"name":"Department of Computer Science, SOE Campus, Cochin University of Science and Technology , Kochi 682022 , Kerala , India"}]},{"given":"Sumam Mary","family":"Idicula","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Muthoot Institute of Technology and Science , Kochi 682308 , Kerala , India"}]}],"member":"374","published-online":{"date-parts":[[2022,3,29]]},"reference":[{"key":"2025073113140067711_j_jisys-2022-0027_ref_001","unstructured":"Widyassari AP, Rustad S, Shidik GF, Noersasongko E, Syukur A, Affandy A, et al. Review of automatic text summarization techniques and methods. J King Saud Univ Comput Inform Sci. 2020. 10.1016\/j.jksuci.2020.05.006."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_002","doi-asserted-by":"crossref","unstructured":"Radev DR, Jing H, Sty\u015b M, Tam D. Centroid-based summarization of multiple documents. Inform Process Manag. 2004;40(6):919\u201338. ISSN 0306-4573. 
10.1016\/j.ipm.2003.10.006.","DOI":"10.1016\/j.ipm.2003.10.006"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_003","doi-asserted-by":"crossref","unstructured":"Mao X, Yang H, Huang S, Liu Y, Li R. Extractive summarization using supervised and unsupervised learning. Expert Syst Appl. 2019;133:173\u201381, ISSN 0957-4174. 10.1016\/j.eswa.2019.05.011.","DOI":"10.1016\/j.eswa.2019.05.011"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_004","doi-asserted-by":"crossref","unstructured":"Yau CK, Porter A, Newman N, Suominen A. Clustering scientific documents with topic modeling. Scientometrics 2014 Sep 1;100(3):767\u201386.","DOI":"10.1007\/s11192-014-1321-8"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_005","doi-asserted-by":"crossref","unstructured":"Jelodar H, Wang Y, Yuan C, Feng X, Jiang X, Li Y, et al. Latent Dirichlet allocation (LDA) and topic modeling: models, applications, a survey. Multimedia Tools Appl. 2019 Jun;78(11):15169\u2013211.","DOI":"10.1007\/s11042-018-6894-4"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_006","unstructured":"Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Machine Learn Res. 2003;3:993\u20131022."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_007","doi-asserted-by":"crossref","unstructured":"Arora R, Ravindran B. Latent Dirichlet allocation based multi-document summarization. In: Proceedings of the Second Workshop on Analytics for Noisy Unstructured Text Data; 2008 Jul 24. p. 91\u20137.","DOI":"10.1145\/1390749.1390764"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_008","doi-asserted-by":"crossref","unstructured":"Twinandilla S, Adhy S, Surarso B, Kusumaningrum R. Multi-document summarization using k-means and latent Dirichlet allocation (LDA)-significance sentences. Proc Comput Sci. 2018;135:663\u201370.","DOI":"10.1016\/j.procs.2018.08.220"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_009","doi-asserted-by":"crossref","unstructured":"Yang G, Wen D, Chen NS, Sutinen E. 
A novel contextual topic model for multi-document summarization. Expert Syst Appl. 2015 Feb 15;42(3):1340\u201352.","DOI":"10.1016\/j.eswa.2014.09.015"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_010","doi-asserted-by":"crossref","unstructured":"Rani R, Lobiyal DK. An extractive text summarization approach using tagged-LDA based topic modeling. Multimedia Tools Appl. 2021 Jan;80(3):3275\u2013305.","DOI":"10.1007\/s11042-020-09549-3"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_011","doi-asserted-by":"crossref","unstructured":"Rani U, Bidhan K. Comparative assessment of extractive summarization: textrank TF-IDF and LDA. J Sci Res. 2021;65(1):304\u201311.","DOI":"10.37398\/JSR.2021.650140"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_012","unstructured":"Radev DR, Allison T, Blair-Goldensohn S, Blitzer J, Celebi A, Dimitrov S, et al. MEAD-a platform for multidocument multilingual text summarization. In: Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC\u201904). Lisbon, Portugal: European Language Resources Association (ELRA); 2004."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_013","doi-asserted-by":"crossref","unstructured":"Al-Radaideh QA, Bataineh DQ. A hybrid approach for Arabic text summarization using domain knowledge and genetic algorithms. Cognitive Comput. 2018 Aug;10(4):651\u201369.","DOI":"10.1007\/s12559-018-9547-z"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_014","doi-asserted-by":"crossref","unstructured":"Elbarougy R, Behery G, ElKhatib A. Extractive Arabic text summarization using modified PageRank algorithm. Egypt. Inform J 2020 Jul 1;21(2):73\u201381.","DOI":"10.1016\/j.eij.2019.11.001"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_015","doi-asserted-by":"crossref","unstructured":"Xi X, Pi Z, Zhou G. Global encoding for long Chinese text summarization. ACM Trans Asian Low-Resource Language Inform Process (TALLIP). 
2020 Oct 6;19(6):1\u20137.","DOI":"10.1145\/3407911"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_016","doi-asserted-by":"crossref","unstructured":"Kumar Y, Kaur K, Kaur S. Study of automatic text summarization approaches in different languages. Artif Intell Rev. 2021 Feb 12;54(8):1\u201333.","DOI":"10.1007\/s10462-021-09964-4"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_017","doi-asserted-by":"crossref","unstructured":"Bouscarrat L, Bonnefoy A, Peel T, Pereira C. STRASS: A light and effective method for extractive summarization based on sentence embeddings. 2019 Jul 16. arXiv preprint: http:\/\/arXiv.org\/abs\/arXiv:1907.07323.","DOI":"10.18653\/v1\/P19-2034"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_018","doi-asserted-by":"crossref","unstructured":"Gambhir M, Gupta V. Recent automatic text summarization techniques: a survey, Artif Intell Rev. 2017;47:1\u201366. 10.1007\/s10462-016-9475-9.","DOI":"10.1007\/s10462-016-9475-9"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_019","doi-asserted-by":"crossref","unstructured":"Gupta V, Kaur N. A novel hybrid text summarization system for Punjabi text. Cognitive Comput. 2016 Apr 1;8(2):261\u201377.","DOI":"10.1007\/s12559-015-9359-3"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_020","doi-asserted-by":"crossref","unstructured":"Nawaz A, Bakhtyar M, Baber J, Ullah I, Noor W, Basit A. Extractive text summarization models for Urdu language. Inform Process Manag. 2020 Nov 1 57(6):102383.","DOI":"10.1016\/j.ipm.2020.102383"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_021","unstructured":"Rathod TV. Extractive text summarization of Marathi news articles. IRJET. 2018;5:1204\u201310."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_022","doi-asserted-by":"crossref","unstructured":"Banu M, Karthika C, Sudarmani P, Geetha TV. Tamil document summarization using semantic graph method. In: International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007). vol. 2. 
IEEE; 2007 Dec 13. p. 128\u201334.","DOI":"10.1109\/ICCIMA.2007.247"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_023","doi-asserted-by":"crossref","unstructured":"Manju K, DavidPeter S, Idicula SM. A framework for generating extractive summary from multiple Malayalam documents. Information. 2021 Jan;12(1):41.","DOI":"10.3390\/info12010041"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_024","unstructured":"Hovy E, Lin CY. Automated text summarization in SUMMARIST. Adv Automatic Text Summarization. 1999;14:81\u201394."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_025","doi-asserted-by":"crossref","unstructured":"Zhang J, Tan J, Wan X. Adapting neural single-document summarization model for abstractive multi-document summarization: a pilot study. In: Proceedings of the 11th International Conference on Natural Language Generation; 2018 Nov. p. 381\u201390.","DOI":"10.18653\/v1\/W18-6545"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_026","doi-asserted-by":"crossref","unstructured":"Belwal RC, Rai S, Gupta A. Text summarization using topic-based vector space model and semantic measure. Inform Process Manag. 2021 May 1;58(3):102536.","DOI":"10.1016\/j.ipm.2021.102536"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_027","unstructured":"Kumar R, Raghuveer K. Legal document summarization using latent Dirichlet allocation. Int J Comput Sci Telecommun. 2012;3(114\u2013117):8\u201323."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_028","unstructured":"Erkan G, Radev D. Lexpagerank: Prestige in multi-document text summarization. In: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing; 2004 Jul. p. 365\u201371."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_029","unstructured":"Mihalcea R, Tarau P. Textrank: bringing order into text. In: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing; 2004 Jul. p. 
404\u201311."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_030","doi-asserted-by":"crossref","unstructured":"Barrera A, Verma R. Combining syntax and semantics for automatic extractive single-document summarization. In: International Conference on Intelligent Text Processing and Computational Linguistics. Berlin, Heidelberg: Springer; 2012 Mar 11. p. 366\u201377.","DOI":"10.1007\/978-3-642-28601-8_31"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_031","doi-asserted-by":"crossref","unstructured":"U\u00e7kan T, Karc\u0131 A. Extractive multi-document text summarization based on graph independent sets. Egypt Inform J. 2020 Sep 1;21(3):145\u201357.","DOI":"10.1016\/j.eij.2019.12.002"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_032","doi-asserted-by":"crossref","unstructured":"Nallapati R, Zhai F, Zhou B. Summarunner: a recurrent neural network based sequence model for extractive summarization of documents. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. February 4\u20139, 2017, San Francisco, California, USA. p. 3075\u201381.","DOI":"10.1609\/aaai.v31i1.10958"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_033","doi-asserted-by":"crossref","unstructured":"Al-Sabahi K, Zuping Z, Nadher M. A hierarchical structured self-attentive model for extractive document summarization (HSSAS). IEEE Access. 2018 Apr 23;6:24205\u201312.","DOI":"10.1109\/ACCESS.2018.2829199"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_034","unstructured":"Lin CY. Rouge: a package for automatic evaluation of summaries. In: Text summarization branches out; 2004, Jul. p. 74\u201381."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_035","doi-asserted-by":"crossref","unstructured":"Tran NT, Nghiem MQ, Nguyen NT, Nguyen NLT, Van Chi N, Dinh D. ViMs: a high-quality Vietnamese dataset for abstractive multi-document summarization. Language Resour Evaluat. 
2020;54(4):893\u2013920.","DOI":"10.1007\/s10579-020-09495-4"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_036","doi-asserted-by":"crossref","unstructured":"Radev D, Teufel S, Saggion H, Lam W, Blitzer J, Qi H, et al. Evaluation challenges in large-scale document summarization. In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. 2003 Jul. p. 375\u201382.","DOI":"10.3115\/1075096.1075144"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_037","unstructured":"Thottungal S. Indic Stemmer. 2019. Available online: https:\/\/silpa.readthedocs.io\/projects\/indicstemmer (accessed on March 12, 2019)."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_038","doi-asserted-by":"crossref","unstructured":"Gialitsis N, Pittaras N, Stamatopoulos P. A topic-based sentence representation for extractive text summarization. In: Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources; 2019 Sep. p. 26\u201334.","DOI":"10.26615\/978-954-452-058-8_005"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_039","doi-asserted-by":"crossref","unstructured":"Goldstein J, Carbonell JG. Summarization: (1) using MMR for diversity-based reranking and (2) evaluating summaries. In TIPSTER TEXT PROGRAM PHASE III: Proceedings of a Workshop held at Baltimore, Maryland; October 13\u201315, 1998. p. 181\u2013195.","DOI":"10.3115\/1119089.1119120"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_040","doi-asserted-by":"crossref","unstructured":"Radev DR, Jing H, Sty\u015b M, Tam D. Centroid-based summarization of multiple documents. Inform Process Manag. 2004 Nov 1;40(6):919\u201338.","DOI":"10.1016\/j.ipm.2003.10.006"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_041","unstructured":"Saziyabegum S, Sajja PS. Review on text summarization evaluation methods. Indian J Comput Sci Eng. 
2017;8(4):497\u2013500."},{"key":"2025073113140067711_j_jisys-2022-0027_ref_042","doi-asserted-by":"crossref","unstructured":"Verma P, Verma A. Accountability of NLP tools in text summarization for Indian languages. J Sci Res. 2020;64(1):358\u201363.","DOI":"10.37398\/JSR.2020.640149"},{"key":"2025073113140067711_j_jisys-2022-0027_ref_043","doi-asserted-by":"crossref","unstructured":"Kumar KV, Yadav D. An improvised extractive approach to Hindi text summarization. In: Information Systems Design and Intelligent Applications. New Delhi: Springer; 2015. p. 291\u2013300.","DOI":"10.1007\/978-81-322-2250-7_28"}],"container-title":["Journal of Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2022-0027\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2022-0027\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,31]],"date-time":"2025-07-31T13:14:15Z","timestamp":1753967655000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2022-0027\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,1]]},"references-count":43,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2022,3,29]]},"published-print":{"date-parts":[[2022,3,29]]}},"alternative-id":["10.1515\/jisys-2022-0027"],"URL":"https:\/\/doi.org\/10.1515\/jisys-2022-0027","relation":{},"ISSN":["2191-026X"],"issn-type":[{"value":"2191-026X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,1]]}}}