{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T07:25:32Z","timestamp":1772004332223,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":39,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,27]],"date-time":"2020-10-27T00:00:00Z","timestamp":1603756800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,10,27]]},"DOI":"10.1145\/3437802.3437832","type":"proceedings-article","created":{"date-parts":[[2021,1,5]],"date-time":"2021-01-05T06:24:42Z","timestamp":1609827882000},"page":"176-184","source":"Crossref","is-referenced-by-count":20,"title":["Survey on Automatic Text Summarization and Transformer Models Applicability"],"prefix":"10.1145","author":[{"given":"Wang","family":"Guan","sequence":"first","affiliation":[{"name":"ITMO University, Russia"}]},{"given":"Ivan","family":"Smetannikov","sequence":"additional","affiliation":[{"name":"ITMO University, Russia"}]},{"given":"Man","family":"Tianxing","sequence":"additional","affiliation":[{"name":"ITMO University, Russia"}]}],"member":"320","published-online":{"date-parts":[[2021,1,4]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"crossref","unstructured":"Cohan A 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 615\u2013621.  Cohan A 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 615\u2013621.","DOI":"10.18653\/v1\/N18-2097"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1233912.1233913"},{"key":"e_1_3_2_1_3_1","unstructured":"Radford A Narasimhan K Salimans T and Sutskever I. 2018. Improving language understanding by generative pre-training. www.cs.ubc.ca\/~amuham01\/LING530\/papers\/radford2018improving.pdf  Radford A Narasimhan K Salimans T and Sutskever I. 2018. Improving language understanding by generative pre-training. www.cs.ubc.ca\/~amuham01\/LING530\/papers\/radford2018improving.pdf"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2012.09.014"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2011.05.033"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295349"},{"key":"e_1_3_2_1_7_1","volume-title":"C 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910","author":"Raffel","year":"2019","unstructured":"Raffel C 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910 .10683 ( 2019 ). Raffel C 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683 (2019)."},{"key":"e_1_3_2_1_8_1","unstructured":"Bahdanau D Cho K and Bengio Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473 (2014).  Bahdanau D Cho K and Bengio Y. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv:1409.0473 (2014)."},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.5555\/1622487.1622501"},{"key":"e_1_3_2_1_10_1","volume-title":"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). 789\u2013797","author":"Zhang H","unstructured":"Zhang H , Jingjing C , Jianjun , and Ji W . 2019. Pretraining-Based Natural Language Generation for Text Summarization . In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). 789\u2013797 . Zhang H, Jingjing C, Jianjun, and Ji W. 2019. Pretraining-Based Natural Language Generation for Text Summarization. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). 789\u2013797."},{"key":"e_1_3_2_1_11_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171\u20134186","author":"Devlin J","unstructured":"Devlin J , Ming-Wei C , Kenton L , and Kristina T . 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171\u20134186 . Devlin J, Ming-Wei C, Kenton L, and Kristina T. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171\u20134186."},{"key":"e_1_3_2_1_12_1","volume-title":"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 328\u2013339","author":"Howard J","unstructured":"Howard J and Sebastian R . 2018. Universal Language Model Fine-tuning for Text Classification . In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 328\u2013339 . Howard J and Sebastian R. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 328\u2013339."},{"key":"e_1_3_2_1_13_1","volume-title":"PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. arXiv:1912.08777","author":"Zhang J","year":"2019","unstructured":"Zhang J , Zhao Y , Saleh M , and Liu\u00a0 P J . 2019 . PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. arXiv:1912.08777 (2019). Zhang J, Zhao Y, Saleh M, and Liu\u00a0P J. 2019. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. arXiv:1912.08777 (2019)."},{"key":"e_1_3_2_1_14_1","unstructured":"Kaikhah K. 2004. Text summarization using neural networks. In Proceeding of second conference on intelligent system. 40\u201344.  Kaikhah K. 2004. Text summarization using neural networks. In Proceeding of second conference on intelligent system. 40\u201344."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.5555\/3045118.3045336"},{"key":"e_1_3_2_1_16_1","volume-title":"Proceedings of ACL Workshop \u201cText Summarization Branches Out\u201d. 8.","author":"Chin-Yew L.","year":"2004","unstructured":"Chin-Yew L. 2004 . ROUGE: A package for automatic evaluation of summaries . In Proceedings of ACL Workshop \u201cText Summarization Branches Out\u201d. 8. Chin-Yew L. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of ACL Workshop \u201cText Summarization Branches Out\u201d. 
8."},{"key":"e_1_3_2_1_17_1","volume-title":"Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461","author":"Lewis","year":"2019","unstructured":"M. Lewis 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461 ( 2019 ). M. Lewis 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461 (2019)."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/CSNT.2011.65"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/383952.384042"},{"key":"e_1_3_2_1_20_1","volume-title":"Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2227\u20132237","author":"Peters M","unstructured":"Peters M , Mark N , Mohit I , Matt G , Christopher C , Kenton L , and Luke Z . 2018. Deep Contextualized Word Representations . In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2227\u20132237 . Peters M, Mark N, Mohit I, Matt G, Christopher C, Kenton L, and Luke Z. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2227\u20132237."},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"crossref","unstructured":"Rush\u00a0A M Sumit C and Jason W. 2015. A neural attention model for abstractive sentence summarization. arXiv:1509.00685 (2015).  Rush\u00a0A M Sumit C and Jason W. 2015. A neural attention model for abstractive sentence summarization. arXiv:1509.00685 (2015).","DOI":"10.18653\/v1\/D15-1044"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.5555\/2969442.2969540"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2003.10.006"},{"key":"e_1_3_2_1_24_1","volume-title":"Proceedings of the 2004 conference on empirical methods in natural language processing. 404\u2013411","author":"Mihalcea R","unstructured":"Mihalcea R and Paul T . 2004. Textrank: Bringing order into text . In Proceedings of the 2004 conference on empirical methods in natural language processing. 404\u2013411 . Mihalcea R and Paul T. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. 404\u2013411."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"crossref","unstructured":"Nallapati R Bowen Z Caglar G and Bing X. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv:1602.06023 (2016).  Nallapati R Bowen Z Caglar G and Bing X. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv:1602.06023 (2016).","DOI":"10.18653\/v1\/K16-1028"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.15680\/IJIRCCE.2016.0403099"},{"key":"e_1_3_2_1_27_1","unstructured":"Parker R Graff D Cong J Chen K and Maeda K. 2011. English Gigaword. https:\/\/catalog.ldc.upenn.edu\/LDC2011T07  Parker R Graff D Cong J Chen K and Maeda K. 2011. English Gigaword. https:\/\/catalog.ldc.upenn.edu\/LDC2011T07"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1012"},{"key":"e_1_3_2_1_29_1","volume-title":"The New York Times Annotated Corpus. 
https:\/\/catalog.ldc.upenn.edu\/LDC2008T19","author":"Evan S.","unstructured":"Evan S. 2008. The New York Times Annotated Corpus. https:\/\/catalog.ldc.upenn.edu\/LDC2008T19 Evan S. 2008. The New York Times Annotated Corpus. https:\/\/catalog.ldc.upenn.edu\/LDC2008T19"},{"key":"e_1_3_2_1_30_1","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 4052\u20134059","author":"Edunov S","unstructured":"Edunov S , Alexei B , and Michael A . 2019. Pre-trained language model representations for language generation . In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 4052\u20134059 . Edunov S, Alexei B, and Michael A. 2019. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 4052\u20134059."},{"key":"e_1_3_2_1_31_1","volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 1797\u20131807","author":"Narayan S","unstructured":"Narayan S , Shay BC , and Mirella L . 2018. Pretraining-Based Natural Language Generation for Text Summarization . In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 1797\u20131807 . Narayan S, Shay BC, and Mirella L. 2018. Pretraining-Based Natural Language Generation for Text Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 1797\u20131807."},{"key":"e_1_3_2_1_32_1","volume-title":"Get to the point: Summarization with pointer-generator networks. arXiv:1704.04368","author":"Peter J\u00a0 L","year":"2017","unstructured":"Peter J\u00a0 L See\u00a0 A and Christopher\u00a0 D M . 2017. Get to the point: Summarization with pointer-generator networks. arXiv:1704.04368 ( 2017 ). Peter J\u00a0L See\u00a0A and Christopher\u00a0D M. 2017. Get to the point: Summarization with pointer-generator networks. arXiv:1704.04368 (2017)."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.4304\/jetwi.2.3.258-268"},{"key":"e_1_3_2_1_34_1","unstructured":"Sanh V Lysandre D Julien C and Thomas W. 2019. DistilBERT a distilled version of BERT: smaller faster cheaper and lighter. arxiv.org\/pdf\/1910.01108(2019).  Sanh V Lysandre D Julien C and Thomas W. 2019. DistilBERT a distilled version of BERT: smaller faster cheaper and lighter. arxiv.org\/pdf\/1910.01108(2019)."},{"key":"e_1_3_2_1_35_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692","author":"Liu Y","year":"2019","unstructured":"Liu Y , Myle O , Naman G , Jingfei D , Mandar J , Danqi C , Omer L , Mike L , Luke Z , and Veselin S . 2019 . Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692 (2019). Liu Y, Myle O, Naman G, Jingfei D, Mandar J, Danqi C, Omer L, Mike L, Luke Z, and Veselin S. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692 (2019)."},{"key":"e_1_3_2_1_36_1","unstructured":"Yan Y Qi W Gong Y Liu D Duan N Chen J Zhang R and Zhou M. 2020. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training. arXiv:2001.04063 (2020).  Yan Y Qi W Gong Y Liu D Duan N Chen J Zhang R and Zhou M. 2020. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training. 
arXiv:2001.04063 (2020)."},{"key":"e_1_3_2_1_37_1","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2978\u20132988","author":"Dai Z","unstructured":"Dai Z , Zhilin Y , Yiming Y , Carbonell\u00a0 G J , Quoc L , and Ruslan S . 2019. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context . In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2978\u20132988 . Dai Z, Zhilin Y, Yiming Y, Carbonell\u00a0G J, Quoc L, and Ruslan S. 2019. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2978\u20132988."},{"key":"e_1_3_2_1_38_1","volume-title":"Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942","author":"Lan Z","year":"2019","unstructured":"Lan Z , Mingda C , Sebastian G , Kevin G , Piyush S , and Radu S . 2019 . Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942 (2019). Lan Z, Mingda C, Sebastian G, Kevin G, Piyush S, and Radu S. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942 (2019)."},{"key":"e_1_3_2_1_39_1","volume-title":"Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems","author":"Yang Z","year":"2019","unstructured":"Yang Z , Zihang D , Yiming Y , Jaime C , Russ\u00a0 R S , and Quoc\u00a0 V L . 2019 . Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems (2019), 5754\u20135764. Yang Z, Zihang D, Yiming Y, Jaime C, Russ\u00a0R S, and Quoc\u00a0V L. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems (2019), 5754\u20135764."}],"event":{"name":"CCRIS 2020: 2020 International Conference on Control, Robotics and Intelligent System","location":"Xiamen China","acronym":"CCRIS 2020"},"container-title":["2020 International Conference on Control, Robotics and Intelligent System"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3437802.3437832","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3437802.3437832","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:17:26Z","timestamp":1750191446000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3437802.3437832"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,27]]},"references-count":39,"alternative-id":["10.1145\/3437802.3437832","10.1145\/3437802"],"URL":"https:\/\/doi.org\/10.1145\/3437802.3437832","relation":{},"subject":[],"published":{"date-parts":[[2020,10,27]]}}}