{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T10:28:38Z","timestamp":1771064918212,"version":"3.50.1"},"reference-count":28,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,2,28]],"date-time":"2023-02-28T00:00:00Z","timestamp":1677542400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2023,2,28]]},"abstract":"<jats:p>Over the past few years, researchers have shown great interest in sentiment analysis and the summarization of documents, primarily because huge volumes of information are available in textual format and this data has proven useful for real-world applications and challenges. The sentiment analysis of a document helps the user comprehend the content\u2019s emotional intent. Abstractive summarization algorithms generate a condensed version of the text, which can then be analyzed with sentiment analysis to determine the emotion it expresses. Recent research in abstractive summarization concentrates on neural network-based models rather than conjunction-based approaches, which may improve overall efficiency. Neural network models such as the attention mechanism have been applied to complex tasks with promising results. The proposed work presents a novel framework that incorporates a part-of-speech (POS) tagging feature into the word embedding layer, which is then used as the input to the attention mechanism. With the POS feature as part of the input layer, the framework can handle words carrying contextual and morphological information. 
POS tagging is relevant here because it relies strongly on the language\u2019s syntactic, contextual, and morphological information. The three main elements of the work are pre-processing, the POS tagging feature in the embedding phase, and its incorporation into the attention mechanism. The word embedding conveys the semantics of a word, while the POS tags indicate how significant the words are within the context of the content, corresponding to syntactic information. The proposed work was carried out in Malayalam, one of the prominent Indian languages; a widely used and accepted English-language dataset was translated to Malayalam for the experiments. The proposed framework achieves a ROUGE score of 28, outperforming the baseline models.<\/jats:p>","DOI":"10.1145\/3561819","type":"journal-article","created":{"date-parts":[[2022,9,10]],"date-time":"2022-09-10T11:41:15Z","timestamp":1662810075000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":13,"title":["Abstractive Summarization of Text Document in Malayalam Language: Enhancing Attention Model Using POS Tagging Feature"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6560-0010","authenticated-orcid":false,"given":"Sindhya","family":"K. 
Nambiar","sequence":"first","affiliation":[{"name":"Department of Computer Science, Cochin University of Science and Technology, Kerala, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1929-8324","authenticated-orcid":false,"given":"David","family":"Peter S.","sequence":"additional","affiliation":[{"name":"School of Engineering, Cochin University of Science and Technology, Kerala, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7088-6909","authenticated-orcid":false,"given":"Sumam","family":"Mary Idicula","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Muthoot Institute of Technology and Science, Ernakulam, Kerala, India"}]}],"member":"320","published-online":{"date-parts":[[2023,3,23]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"crossref","unstructured":"A. P. Ajees and Sumam Mary Idicula. 2018. A POS tagger for Malayalam using conditional random fields. Int. J. Appl. Eng. Res. 13 3 (2018).","DOI":"10.1109\/ICDSE.2018.8527814"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/P15-1034"},{"key":"e_1_3_2_4_2","unstructured":"Dzmitry Bahdanau Kyunghyun Cho and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)."},{"key":"e_1_3_2_5_2","doi-asserted-by":"crossref","unstructured":"Lidong Bing Piji Li Yi Liao Wai Lam Weiwei Guo and Rebecca J. Passonneau. 2015. Abstractive multi-document summarization via phrase selection and merging. arXiv preprint arXiv:1506.01597 (2015).","DOI":"10.3115\/v1\/P15-1153"},{"key":"e_1_3_2_6_2","article-title":"Deep communicating agents for abstractive summarization","author":"Celikyilmaz Asli","year":"2018","unstructured":"Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. 
arXiv preprint arXiv:1803.10357 (2018).","journal-title":"arXiv preprint arXiv:1803.10357"},{"key":"e_1_3_2_7_2","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).","journal-title":"arXiv preprint arXiv:1810.04805"},{"key":"e_1_3_2_8_2","unstructured":"Deepali K. Gaikwad and C. Namrata Mahender. 2016. A review paper on text summarization. International Journal of Advanced Research in Computer and Communication Engineering 5 3 (2016) 154\u2013160."},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1177\/026272800702700203"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2018.10.496"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2015.01.070"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2015.01.070"},{"key":"e_1_3_2_13_2","unstructured":"Renu Khandelwal. 2020. Attention: Sequence 2 Sequence model with Attention Mechanism. Retrieved from https:\/\/towardsdatascience.com\/sequence-2-sequence-model-with-attention-mechanism-9e9ca2a613a."},{"key":"e_1_3_2_14_2","article-title":"Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension","author":"Lewis Mike","year":"2019","unstructured":"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019).","journal-title":"arXiv preprint arXiv:1910.13461"},{"key":"e_1_3_2_15_2","unstructured":"Chin-Yew Lin. 2004. 
Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out . 74\u201381."},{"key":"e_1_3_2_16_2","article-title":"Fine-tune BERT for extractive summarization","author":"Liu Yang","year":"2019","unstructured":"Yang Liu. 2019. Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318 (2019).","journal-title":"arXiv preprint arXiv:1903.10318"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1147\/rd.22.0159"},{"key":"e_1_3_2_18_2","article-title":"Efficient estimation of word representations in vector space","author":"Mikolov Tomas","year":"2013","unstructured":"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).","journal-title":"arXiv preprint arXiv:1301.3781"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCES.2012.6408498"},{"key":"e_1_3_2_20_2","unstructured":"Karvanuur P. Mohanan. 1997. Grammatical relations and clause structure in Malayalam."},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2021.05.088"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICACCS51430.2021.9442060"},{"key":"e_1_3_2_23_2","first-page":"1","article-title":"A deep reinforced model for abstractive summarization","author":"Paulus Romain","year":"2018","unstructured":"Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations. 1\u201312. arxiv:1705.04304.","journal-title":"I"},{"key":"e_1_3_2_24_2","volume-title":"Proceedings of the 38th All India Conference of Dravidian Linguists","author":"Sebastian Mary Priya","year":"2010","unstructured":"Mary Priya Sebastian, K. Sheena Kurian, and G. Santhosh Kumar. 2010. A classification of Sandhi rules for suffix separation in Malayalam. 
In Proceedings of the 38th All India Conference of Dravidian Linguists."},{"key":"e_1_3_2_25_2","article-title":"Get to the point: Summarization with pointer-generator networks","author":"See Abigail","year":"2017","unstructured":"Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368 (2017).","journal-title":"arXiv preprint arXiv:1704.04368"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1162\/153244303322533223"},{"key":"e_1_3_2_27_2","unstructured":"George Tsatsaronis Iraklis Varlamis and Kjetil N\u00f8rv\u00e5g. 2010. SemanticRank: ranking keywords and sentences using semantic graphs. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling\u201910) . 1074\u20131082."},{"key":"e_1_3_2_28_2","first-page":"11328","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Zhang Jingqing","year":"2020","unstructured":"Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the International Conference on Machine Learning. PMLR, 11328\u201311339."},{"key":"e_1_3_2_29_2","article-title":"Neural document summarization by jointly learning to score and select sentences","author":"Zhou Qingyu","year":"2018","unstructured":"Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. 
arXiv preprint arXiv:1807.02305 (2018).","journal-title":"arXiv preprint arXiv:1807.02305"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3561819","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3561819","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:02:24Z","timestamp":1750186944000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3561819"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,28]]},"references-count":28,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,2,28]]}},"alternative-id":["10.1145\/3561819"],"URL":"https:\/\/doi.org\/10.1145\/3561819","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"value":"2375-4699","type":"print"},{"value":"2375-4702","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,28]]},"assertion":[{"value":"2022-04-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-08-29","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-03-23","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}