{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,23]],"date-time":"2026-04-23T00:02:34Z","timestamp":1776902554572,"version":"3.51.2"},"reference-count":110,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,10,5]],"date-time":"2023-10-05T00:00:00Z","timestamp":1696464000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,3,31]]},"abstract":"<jats:p>The proliferation of harmful content on online platforms is a major societal problem, which comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others. Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users. Researchers have developed different methods for automatically detecting harmful content, often focusing on specific sub-problems or on narrow communities, as what is considered harmful often depends on the platform and on the context. We argue that there is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content. 
We thus survey existing methods as well as content moderation policies by online platforms in this light and suggest directions for future work.<\/jats:p>","DOI":"10.1145\/3603399","type":"journal-article","created":{"date-parts":[[2023,6,7]],"date-time":"2023-06-07T11:59:57Z","timestamp":1686139197000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":53,"title":["Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4891-2677","authenticated-orcid":false,"given":"Arnav","family":"Arora","sequence":"first","affiliation":[{"name":"Checkstep Research, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3600-1510","authenticated-orcid":false,"given":"Preslav","family":"Nakov","sequence":"additional","affiliation":[{"name":"Checkstep Research, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8095-3570","authenticated-orcid":false,"given":"Momchil","family":"Hardalov","sequence":"additional","affiliation":[{"name":"Checkstep Research, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4820-9201","authenticated-orcid":false,"given":"Sheikh Muhammad","family":"Sarwar","sequence":"additional","affiliation":[{"name":"Checkstep Research, UK"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-9552-3471","authenticated-orcid":false,"given":"Vibha","family":"Nayak","sequence":"additional","affiliation":[{"name":"Checkstep, UK"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-1415-8963","authenticated-orcid":false,"given":"Yoan","family":"Dinkov","sequence":"additional","affiliation":[{"name":"Checkstep Research, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-6766-4559","authenticated-orcid":false,"given":"Dimitrina","family":"Zlatkova","sequence":"additional","affiliation":[{"name":"Checkstep Research, 
Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0543-6375","authenticated-orcid":false,"given":"Kyle","family":"Dent","sequence":"additional","affiliation":[{"name":"Checkstep, UK"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-0705-9127","authenticated-orcid":false,"given":"Ameya","family":"Bhatawdekar","sequence":"additional","affiliation":[{"name":"Checkstep, UK"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-5332-0923","authenticated-orcid":false,"given":"Guillaume","family":"Bouchard","sequence":"additional","affiliation":[{"name":"Checkstep, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1562-7909","authenticated-orcid":false,"given":"Isabelle","family":"Augenstein","sequence":"additional","affiliation":[{"name":"Checkstep Research, UK"}]}],"member":"320","published-online":{"date-parts":[[2023,10,5]]},"reference":[{"key":"e_1_3_3_2_2","unstructured":"Gelo Gonzales. 2021. Facebook moderators call for better mental health support end to NDAs. Rappler . Retrieved June 12 2023 from https:\/\/www.rappler.com\/technology\/social-media\/facebook-moderators-call-end-nondisclosure-agreements-better-mental-health-support\/."},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jnca.2016.11.030"},{"key":"e_1_3_3_4_2","doi-asserted-by":"crossref","first-page":"611","DOI":"10.18653\/v1\/2021.findings-emnlp.56","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2021","author":"Alam Firoj","year":"2021","unstructured":"Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, et\u00a0al. 2021. Fighting the COVID-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society. In Findings of the Association for Computational Linguistics: EMNLP 2021. 
Association for Computational Linguistics, Cedarville, OH, 611\u2013649."},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10207-016-0321-5"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/K19-1096"},{"key":"e_1_3_3_7_2","unstructured":"Kim Barker and Olga Jurasz. 2019. Online Harms White Paper Consultation Response. Stirling Law School and the Open University Law School."},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2007"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.4000\/books.aaccademia.3085"},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.paid.2014.01.016"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.21428\/e3990ae6.483f18da"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1002\/poi3.85"},{"key":"e_1_3_3_13_2","volume-title":"Proceedings of the 20th Annual Conference of the HDS (SDoW\u201910)","author":"Cambria Erik","year":"2010","unstructured":"Erik Cambria, Praphul Chandra, Avinash Sharma, and Amir Hussain. 2010. Do not feel the trolls. In Proceedings of the 20th Annual Conference of the HDS (SDoW\u201910)."},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3274301"},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/SocialCom-PASSAT.2012.55"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1179"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P19-1271"},{"key":"e_1_3_3_18_2","first-page":"6174","volume-title":"Proceedings of the 12th Language Resources and Evaluation Conference","author":"\u00c7\u00f6ltekin \u00c7a\u011fr\u0131","year":"2020","unstructured":"\u00c7a\u011fr\u0131 \u00c7\u00f6ltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the 12th Language Resources and Evaluation Conference. 6174\u20136184. 
https:\/\/aclanthology.org\/2020.lrec-1.758"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.747"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2015.09.003"},{"key":"e_1_3_3_21_2","doi-asserted-by":"crossref","first-page":"693","DOI":"10.1007\/978-3-642-36973-5_62","volume-title":"Advances in Information Retrieval","author":"Dadvar Maral","year":"2013","unstructured":"Maral Dadvar, Dolf Trieschnigg, Roeland Ordelman, and Franciska de Jong. 2013. Improving cyberbullying detection with user context. In Advances in Information Retrieval. Lecture Notes in Computer Science, Vol. 7814. Springer, 693\u2013696."},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1145\/775152.775226"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v11i1.14955"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.516"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.semeval-1.7"},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1007\/s13278-014-0194-4"},{"key":"e_1_3_3_28_2","first-page":"128","volume-title":"Proceedings of the 1st Workshop on Trolling, Aggression, and Cyberbullying (TRAC\u201918)","author":"Fortuna Paula","year":"2018","unstructured":"Paula Fortuna, Jos\u00e9 Ferreira, Luiz Pires, Guilherme Routar, and S\u00e9rgio Nunes. 2018. Merging datasets for aggressive text identification. In Proceedings of the 1st Workshop on Trolling, Aggression, and Cyberbullying (TRAC\u201918). 128\u2013139. https:\/\/aclanthology.org\/W18-4416."},{"key":"e_1_3_3_29_2","article-title":"A survey on automatic detection of hate speech in text","volume":"51","author":"Fortuna Paula","year":"2018","unstructured":"Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on automatic detection of hate speech in text. 
ACM Computing Surveys 51, 4 (2018), Article 85, 30 pages.","journal-title":"ACM Computing Surveys"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v12i1.14991"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292522.3326028"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00344251"},{"issue":"1","key":"e_1_3_3_33_2","first-page":"42","article-title":"Supervised machine learning for the detection of troll profiles in Twitter social network: Application to a real case of cyberbullying","volume":"24","author":"Gal\u00e1n-Garc\u00eda Patxi","year":"2016","unstructured":"Patxi Gal\u00e1n-Garc\u00eda, Jos\u00e9 Gaviria de la Puerta, Carlos Laorden G\u00f3mez, Igor Santos, and Pablo Garc\u00eda Bringas. 2016. Supervised machine learning for the detection of troll profiles in Twitter social network: Application to a real case of cyberbullying. Logic Journal of the IGPL 24, 1 (2016), 42\u201353.","journal-title":"Logic Journal of the IGPL"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.5555\/3495724.3496279"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3200947.3208069"},{"key":"e_1_3_3_36_2","doi-asserted-by":"publisher","DOI":"10.12987\/9780300235029-003"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.coling-main.559"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00454"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-naacl.94"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1080\/01972240290108186"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/1014052.1014073"},{"key":"e_1_3_3_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3406865.3418312"},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0256762"},{"key":"e_1_3_3_44_2","unstructured":"Jigsaw. 2018. Toxic Comment Classification Challenge. 
Retrieved April 28 2021 from https:\/\/www.kaggle.com\/c\/jigsaw-toxic-comment-classification-challenge\/."},{"key":"e_1_3_3_45_2","unstructured":"Jigsaw Multilingual. 2020. Jigsaw Multilingual Toxic Comment Classification. Retrieved April 28 2021 from https:\/\/www.kaggle.com\/c\/jigsaw-multilingual-toxic-comment-classification\/."},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.269"},{"key":"e_1_3_3_47_2","unstructured":"Douwe Kiela Hamed Firooz Aravind Mohan Vedanuj Goswami Amanpreet Singh Pratik Ringshia and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS\u201920) . 1\u201314. https:\/\/proceedings.neurips.cc\/paper\/2020\/hash\/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html"},{"key":"e_1_3_3_48_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.nlp4pi-1.20"},{"key":"e_1_3_3_49_2","first-page":"1","volume-title":"Proceedings of the 1st Workshop on Trolling, Aggression, and Cyberbullying (TRAC\u201918)","author":"Kumar Ritesh","year":"2018","unstructured":"Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the 1st Workshop on Trolling, Aggression, and Cyberbullying (TRAC\u201918). 1\u201311. https:\/\/aclanthology.org\/W18-4401."},{"key":"e_1_3_3_50_2","first-page":"1","volume-title":"Proceedings of the 2nd Workshop on Trolling, Aggression, and Cyberbullying","author":"Kumar Ritesh","year":"2020","unstructured":"Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2020. Evaluating aggression identification in social media. In Proceedings of the 2nd Workshop on Trolling, Aggression, and Cyberbullying. 1\u20135. 
https:\/\/aclanthology.org\/2020.trac-1.1."},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASONAM.2014.6921581"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v27i1.8539"},{"key":"e_1_3_3_53_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920)","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920). https:\/\/openreview.net\/forum?id=H1eA7AEtvS."},{"key":"e_1_3_3_54_2","article-title":"VisualBERT: A simple and performant baseline for vision and language","volume":"1908","author":"Li Liunian Harold","year":"2019","unstructured":"Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:abs\/1908.03557 (2019). https:\/\/arxiv.org\/abs\/1908.03557.","journal-title":"arXiv preprint"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1007\/11875604_81"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58577-8_8"},{"key":"e_1_3_3_57_2","article-title":"A multimodal framework for the detection of hateful memes","volume":"2012","author":"Lippe Phillip","year":"2020","unstructured":"Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova, and Helen Yannakoudakis. 2020. A multimodal framework for the detection of hateful memes. arXiv preprint arXiv:abs\/2012.12871 (2020). 
https:\/\/arxiv.org\/abs\/2012.12871.","journal-title":"arXiv preprint"},{"key":"e_1_3_3_58_2","article-title":"RoBERTa: A robustly optimized BERT pretraining approach","volume":"1907","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:abs\/1907.11692 (2019). https:\/\/arxiv.org\/abs\/1907.11692.","journal-title":"arXiv preprint"},{"key":"e_1_3_3_59_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368567.3368584"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.5555\/2035700.2035717"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/K15-1032"},{"key":"e_1_3_3_62_2","first-page":"443","volume-title":"Proceedings of the International Conference on Recent Advances in Natural Language Processing","author":"Mihaylov Todor","year":"2015","unstructured":"Todor Mihaylov, Ivan Koychev, Georgi Georgiev, and Preslav Nakov. 2015. Exposing paid opinion manipulation trolls. In Proceedings of the International Conference on Recent Advances in Natural Language Processing. 443\u2013450. https:\/\/aclanthology.org\/R15-1058."},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1108\/IntR-03-2017-0118"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-2065"},{"key":"e_1_3_3_65_2","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v14i1.7314"},{"key":"e_1_3_3_66_2","first-page":"126","volume-title":"Proceedings of the 6th Arabic Natural Language Processing Workshop","author":"Mubarak Hamdy","year":"2021","unstructured":"Hamdy Mubarak, Ammar Rashed, Kareem Darwish, Younes Samih, and Ahmed Abdelali. 2021. Arabic offensive language on Twitter: Analysis and experiments. In Proceedings of the 6th Arabic Natural Language Processing Workshop. 126\u2013135. 
https:\/\/aclanthology.org\/2021.wanlp-1.13."},{"key":"e_1_3_3_67_2","article-title":"Vilio: State-of-the-art visio-linguistic models applied to hateful memes","volume":"2012","author":"Muennighoff Niklas","year":"2020","unstructured":"Niklas Muennighoff. 2020. Vilio: State-of-the-art visio-linguistic models applied to hateful memes. arXiv preprint arXiv:abs\/2012.07788 (2020). https:\/\/arxiv.org\/abs\/2012.07788.","journal-title":"arXiv preprint"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-99739-7_52"},{"key":"e_1_3_3_69_2","doi-asserted-by":"publisher","DOI":"10.26615\/978-954-452-049-6_072"},{"key":"e_1_3_3_70_2","unstructured":"BBC News. 2021. TikTok and Twitch Face Fines Under New Ofcom Rules. Retrieved June 12 2023 from https:\/\/www.bbc.com\/news\/technology-58809169."},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2012.05.002"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1474"},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1202"},{"key":"e_1_3_3_74_2","first-page":"5113","volume-title":"Proceedings of the 12th Language Resources and Evaluation Conference","author":"Pitenis Zesis","year":"2020","unstructured":"Zesis Pitenis, Marcos Zampieri, and Tharindu Ranasinghe. 2020. Offensive language identification in Greek. In Proceedings of the 12th Language Resources and Evaluation Conference. 5113\u20135119. https:\/\/aclanthology.org\/2020.lrec-1.629."},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-acl.246"},{"key":"e_1_3_3_76_2","doi-asserted-by":"crossref","first-page":"4439","DOI":"10.18653\/v1\/2021.findings-emnlp.379","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2021","author":"Pramanick Shraman","year":"2021","unstructured":"Shraman Pramanick, Shivam Sharma, Dimitar Dimitrov, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2021. 
MOMENTA: A multimodal framework for detecting harmful memes and their targets. In Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics, Cedarville, OH, 4439\u20134455."},{"key":"e_1_3_3_77_2","article-title":"Language models are unsupervised multitask learners","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 1\u201324.","journal-title":"OpenAI Blog"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-main.470"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2018.12.021"},{"key":"e_1_3_3_80_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-acl.80"},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1038\/323533a0"},{"key":"e_1_3_3_82_2","article-title":"Hate speech in pixels: Detection of offensive memes towards automatic moderation","volume":"1910","author":"Sabat Benet Oriol","year":"2019","unstructured":"Benet Oriol Sabat, Cristian Canton Ferrer, and Xavier Giro-i-Nieto. 2019. Hate speech in pixels: Detection of offensive memes towards automatic moderation. ArXiv preprint arXiv:abs\/1910.02334 (2019). https:\/\/arxiv.org\/abs\/1910.02334.","journal-title":"ArXiv preprint"},{"key":"e_1_3_3_83_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2017.2761757"},{"key":"e_1_3_3_84_2","doi-asserted-by":"crossref","unstructured":"Geetika Sarna and M. P. S. Bhatia. 2017. Content based approach to find the credibility of user in social networks: an application of cyberbullying. 
International Journal of Machine Learning and Cybernetics 2 (2017) 677\u2013689.","DOI":"10.1007\/s13042-015-0463-1"},{"key":"e_1_3_3_85_2","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v16i1.19340"},{"key":"e_1_3_3_86_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00472"},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-1101"},{"key":"e_1_3_3_88_2","first-page":"792","volume-title":"Proceedings of the 2015 18th International Conference on Information Fusion (FUSION\u201915)","author":"Seah Chun Wei","year":"2015","unstructured":"Chun Wei Seah, Hai Leong Chieu, Kian Ming A. Chai, Loo-Nin Teow, and Lee Wei Yeong. 2015. Troll detection by domain-adapting sentiment analysis. In Proceedings of the 2015 18th International Conference on Information Fusion (FUSION\u201915). 792\u2013799."},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1145\/505282.505283"},{"key":"e_1_3_3_90_2","doi-asserted-by":"publisher","DOI":"10.1108\/IntR-01-2014-0023"},{"key":"e_1_3_3_91_2","volume-title":"Findings of the Association for Computational Linguistics: NAACL 2022","author":"Sharma Shivam","year":"2022","unstructured":"Shivam Sharma, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2022. DISARM: Detecting the victims targeted by harmful memes. In Findings of the Association for Computational Linguistics: NAACL 2022. Association for Computational Linguistics, Cedarville, OH, 1572\u20131588."},{"key":"e_1_3_3_92_2","first-page":"3498","volume-title":"Proceedings of the 12th Language Resources and Evaluation Conference","author":"Sigurbergsson Gudbjartur Ingi","year":"2020","unstructured":"Gudbjartur Ingi Sigurbergsson and Leon Derczynski. 2020. Offensive language and hate speech detection for Danish. In Proceedings of the 12th Language Resources and Evaluation Conference. 3498\u20133508. 
https:\/\/aclanthology.org\/2020.lrec-1.430."},{"key":"e_1_3_3_93_2","volume-title":"Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920)","author":"Su Weijie","year":"2020","unstructured":"Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pre-training of generic visual-linguistic representations. In Proceedings of the 8th International Conference on Learning Representations (ICLR\u201920). https:\/\/openreview.net\/forum?id=SygXPaEYvH."},{"key":"e_1_3_3_94_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1514"},{"key":"e_1_3_3_95_2","doi-asserted-by":"crossref","unstructured":"Sahana Udupa Antonis Maronikolakis Hinrich Sch\u00fctze and Axel Wisiorek. 2022. Ethical Scaling for Content Moderation: Extreme Speech and the (In)Significance of Artificial Intelligence . Shorenstein Center on Media Politics and Public Policy Cambridge MA.","DOI":"10.1177\/20539517231172424"},{"key":"e_1_3_3_96_2","first-page":"5998","volume-title":"Advances in Neural Information Processing Systems 30 (NIPS\u201917)","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS\u201917), Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). Curran Associates, Red Hook, NY, 5998\u20136008. https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html."},{"key":"e_1_3_3_97_2","unstructured":"Munsif Vengattil and Elizabeth Culliford. 2022. Facebook allows war posts urging violence against Russian invaders. Reuters . 
Retrieved June 12 2023 from https:\/\/www.reuters.com\/world\/europe\/exclusive-facebook-instagram-temporarily-allow-calls-violence-against-russians-2022-03-10\/."},{"issue":"12","key":"e_1_3_3_98_2","first-page":"1","article-title":"Directions in abusive language training data, a systematic review: Garbage in, garbage out","volume":"15","author":"Vidgen Bertie","year":"2021","unstructured":"Bertie Vidgen and Leon Derczynski. 2021. Directions in abusive language training data, a systematic review: Garbage in, garbage out. PLOS ONE 15, 12 (2021), 1\u201332.","journal-title":"PLOS ONE"},{"key":"e_1_3_3_99_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-3012"},{"key":"e_1_3_3_100_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-2013"},{"key":"e_1_3_3_101_2","volume-title":"Proceedings of the 14th Conference on Natural Language Processing (KONVENS\u201918)","author":"Wiegand Michael","year":"2018","unstructured":"Michael Wiegand, Melanie Siegel, and Josef Ruppenhofer. 2018. Overview of the GermEval 2018 shared task on the identification of offensive language. 
In Proceedings of the 14th Conference on Natural Language Processing (KONVENS\u201918)."},{"key":"e_1_3_3_102_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2017.10.022"},{"key":"e_1_3_3_103_2","doi-asserted-by":"publisher","DOI":"10.1145\/3038912.3052591"},{"key":"e_1_3_3_104_2","doi-asserted-by":"publisher","DOI":"10.5555\/2382029.2382139"},{"key":"e_1_3_3_105_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i4.16431"},{"key":"e_1_3_3_106_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1144"},{"key":"e_1_3_3_107_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2010"},{"key":"e_1_3_3_108_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.semeval-1.188"},{"key":"e_1_3_3_109_2","article-title":"Hateful memes detection via complementary visual and linguistic networks","volume":"2012","author":"Zhang Weibo","year":"2020","unstructured":"Weibo Zhang, Guihua Liu, Zhuohua Li, and Fuqing Zhu. 2020. Hateful memes detection via complementary visual and linguistic networks. arXiv preprint arXiv:abs\/2012.04977 (2020). https:\/\/arxiv.org\/abs\/2012.04977.","journal-title":"arXiv preprint"},{"key":"e_1_3_3_110_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i07.7005"},{"key":"e_1_3_3_111_2","article-title":"Enhance multimodal transformer with external label and in-domain pretrain: Hateful meme challenge winning solution","volume":"2012","author":"Zhu Ron","year":"2020","unstructured":"Ron Zhu. 2020. Enhance multimodal transformer with external label and in-domain pretrain: Hateful meme challenge winning solution. arXiv preprint arXiv:abs\/2012.08290 (2020). 
https:\/\/arxiv.org\/abs\/2012.08290.","journal-title":"arXiv preprint"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3603399","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3603399","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:46:25Z","timestamp":1750178785000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3603399"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,5]]},"references-count":110,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,3,31]]}},"alternative-id":["10.1145\/3603399"],"URL":"https:\/\/doi.org\/10.1145\/3603399","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,5]]},"assertion":[{"value":"2022-04-09","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-05-02","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-10-05","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}