{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T01:53:33Z","timestamp":1773194013589,"version":"3.50.1"},"reference-count":28,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2021,5,8]],"date-time":"2021-05-08T00:00:00Z","timestamp":1620432000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,5,8]],"date-time":"2021-05-08T00:00:00Z","timestamp":1620432000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Propaganda is a rhetorical technique designed to serve a specific topic, which is often used purposefully in news article to achieve our intended purpose because of its specific psychological effect. Therefore, it is significant to be clear where and what propaganda techniques are used in the news for people to understand its theme efficiently during our daily lives. Recently, some relevant researches are proposed for propaganda detection but unsatisfactorily. As a result, detection of propaganda techniques in news articles is badly in need of research. In this paper, we are going to introduce our systems for detection of propaganda techniques in news articles, which is split into two tasks, Span Identification and Technique Classification. For these two tasks, we design a system based on the popular pretrained BERT model, respectively. Furthermore, we adopt the over-sampling and EDA strategies, propose a sentence-level feature concatenating method in our systems. 
Experiments on a dataset of about 550 news articles provided by SemEval show that our systems achieve state-of-the-art performance.<\/jats:p>","DOI":"10.1007\/s40747-021-00393-y","type":"journal-article","created":{"date-parts":[[2021,5,8]],"date-time":"2021-05-08T08:05:33Z","timestamp":1620461133000},"page":"3603-3612","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Span identification and technique classification of propaganda in news articles"],"prefix":"10.1007","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2738-4350","authenticated-orcid":false,"given":"Wei","family":"Li","sequence":"first","affiliation":[]},{"given":"Shiqian","family":"Li","sequence":"additional","affiliation":[]},{"given":"Chenhao","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Longfei","family":"Lu","sequence":"additional","affiliation":[]},{"given":"Ziyu","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Shiping","family":"Wen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,5,8]]},"reference":[{"key":"393_CR1","doi-asserted-by":"crossref","unstructured":"Barron-Cedeno A, Da\u00a0San\u00a0Martino G, Jaradat I, Nakov P (2019) Proppy: a system to unmask propaganda in online news. In: Proceedings of the AAAI conference on artificial intelligence, pp 9847\u20139848","DOI":"10.1609\/aaai.v33i01.33019847"},{"key":"393_CR2","unstructured":"Corney D, Albakour D, Martinez-Alvarez M, Moussa S (2016) What do a million news articles look like? In: NewsIR@ ECIR, pp 42\u201347"},{"key":"393_CR3","unstructured":"Devlin J, Chang M.W, Lee K, Toutanova K (2018) Bert: pre-training of deep bidirectional transformers for language understanding. 
North American chapter of the association for computational linguistics"},{"key":"393_CR4","doi-asserted-by":"crossref","unstructured":"Fadel A, Tuffaha I, Al-Ayyoub M (2019) Pretrained ensemble learning for fine-grained propaganda detection. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda, pp 139\u2013142","DOI":"10.18653\/v1\/D19-5020"},{"key":"393_CR5","doi-asserted-by":"crossref","unstructured":"Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput:1735\u20131780","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"393_CR6","doi-asserted-by":"crossref","unstructured":"Hou W, Chen Y (2019) Caunlp at nlp4if 2019 shared task: context-dependent bert for sentence-level propaganda detection. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda, pp 83\u201386","DOI":"10.18653\/v1\/D19-5010"},{"key":"393_CR7","doi-asserted-by":"crossref","unstructured":"Huang Y, Wang W, Wang L, Tan T (2013) Multi-task deep neural network for multi-label learning. In: 2013 IEEE International conference on image processing, pp 2897\u20132900","DOI":"10.1109\/ICIP.2013.6738596"},{"key":"393_CR8","doi-asserted-by":"publisher","first-page":"569","DOI":"10.1016\/j.compeleceng.2018.02.029","volume":"74","author":"A Khalid","year":"2019","unstructured":"Khalid A, Khan FA, Imran M, Alharbi M, Khan M, Ahmad A, Jeon G (2019) Reference terms identification of cited articles as topics from citation contexts. Comput Electr Eng 74:569\u2013580","journal-title":"Comput Electr Eng"},{"key":"393_CR9","doi-asserted-by":"crossref","unstructured":"Kobayashi, S (2018) Contextual augmentation: data augmentation by words with paradigmatic relations. 
North American chapter of the association for computational linguistics, pp 452\u2013457","DOI":"10.18653\/v1\/N18-2072"},{"key":"393_CR10","doi-asserted-by":"crossref","unstructured":"Kurian D, Sattari F, Lefsrud L, Ma Y (2020) Using machine learning and keyword analysis to analyze incidents and reduce risk in oil sands operations. Saf Sci","DOI":"10.1016\/j.ssci.2020.104873"},{"key":"393_CR11","doi-asserted-by":"crossref","unstructured":"Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R (2020) Albert: a lite bert for self-supervised learning of language representations. ICLR","DOI":"10.1109\/SLT48900.2021.9383575"},{"key":"393_CR12","doi-asserted-by":"crossref","unstructured":"Liu S, Lee K, Lee I (2020) Document-level multi-topic sentiment classification of email data with bilstm and data augmentation. Knowl Based Syst:105918","DOI":"10.1016\/j.knosys.2020.105918"},{"key":"393_CR13","doi-asserted-by":"crossref","unstructured":"Mapes N, White A, Medury R, Dua S (2019) Divisive language and propaganda detection using multi-head attention transformers with deep learning bert-based language models for binary classification. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda, pp 103\u2013106","DOI":"10.18653\/v1\/D19-5014"},{"key":"393_CR14","first-page":"5635","volume":"1","author":"DSG Martino","year":"2019","unstructured":"Martino DSG, Yu S, Barron-Cedeno A, Petrov R, Nakov P (2019) Fine-grained analysis of propaganda in news articles. EMNLP\/IJCNLP 1:5635\u20135645","journal-title":"EMNLP\/IJCNLP"},{"key":"393_CR15","unstructured":"Pankaj G, Khushbu S, Usama Y, Thomas R, Hinrich S (2019) Neural architectures for fine-grained propaganda detection in news. 
In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda"},{"key":"393_CR16","doi-asserted-by":"crossref","unstructured":"Peters E.M, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer S.L (2018) Deep contextualized word representations. North American chapter of the association for computational linguistics","DOI":"10.18653\/v1\/N18-1202"},{"key":"393_CR17","unstructured":"Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving language understanding by generative pretraining"},{"key":"393_CR18","unstructured":"San G.D.M, Alberto B.C, Preslav N (2019) Findings of the nlp4if-2019 shared task on fine-grained propaganda detection. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda"},{"key":"393_CR19","doi-asserted-by":"crossref","unstructured":"Tchiehe N.D, Gauthier F (2017) Classification of risk acceptability and risk tolerability factors in occupational health and safety. Saf Sci:138\u2013147","DOI":"10.1016\/j.ssci.2016.10.003"},{"key":"393_CR20","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez N.A, Kaiser L, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems 30 (NIPS 2017), pp 5998\u20136008"},{"key":"393_CR21","doi-asserted-by":"crossref","unstructured":"Vlad G.A, Tanase M.A, Onose C, Cercel D.C (2019) Sentence-level propaganda detection in news articles with transfer learning and bert-bilstm-capsule model. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda, pp 148\u2013154","DOI":"10.18653\/v1\/D19-5022"},{"key":"393_CR22","doi-asserted-by":"crossref","unstructured":"Wang A, Singh A, Michael J, Hill F, Levy O, Bowman R.S (2018) Glue: A multi-task benchmark and analysis platform for natural language understanding. 
In: International conference on learning representations","DOI":"10.18653\/v1\/W18-5446"},{"key":"393_CR23","doi-asserted-by":"crossref","unstructured":"Wei WJ, Zou K (2019) Eda: easy data augmentation techniques for boosting performance on text classification tasks. EMNLP\/IJCNLP 1:6381\u20136387","DOI":"10.18653\/v1\/D19-1670"},{"key":"393_CR24","unstructured":"Xie Z, Wang I.S, Li J, L\u00e9vy D, Nie A, Jurafsky D, Ng Y.A (2017) Data noising as smoothing in neural network language models. ICLR"},{"key":"393_CR25","unstructured":"Yang Z, Dai Z, Yang Y, Carbonell GJ, Salakhutdinov R, Le VQ (2019) Xlnet: generalized autoregressive pretraining for language understanding. In: Advances in neural information processing systems 32 (NIPS 2019), pp 5754\u20135764"},{"key":"393_CR26","doi-asserted-by":"crossref","unstructured":"Yoosuf S, Yang Y (2019) Fine-grained propaganda detection with fine-tuned bert. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda, pp 87\u201391","DOI":"10.18653\/v1\/D19-5011"},{"key":"393_CR27","doi-asserted-by":"crossref","unstructured":"Zhan Z, Hou Z, Yang Q, Zhao J, Zhang Y, Hu C (2020) Knowledge attention sandwich neural network for text classification. Neurocomputing:1\u201311","DOI":"10.1016\/j.neucom.2020.03.093"},{"key":"393_CR28","unstructured":"Zhang Z, Sabuncu M (2018) Generalized cross entropy loss for training deep neural networks with noisy labels. 
In: Advances in neural information processing systems, pp 8778\u20138788"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-021-00393-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-021-00393-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-021-00393-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T13:37:09Z","timestamp":1664285829000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-021-00393-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,8]]},"references-count":28,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["393"],"URL":"https:\/\/doi.org\/10.1007\/s40747-021-00393-y","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,8]]},"assertion":[{"value":"18 March 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 April 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 May 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection 
with the work submitted.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}