{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T21:57:41Z","timestamp":1773525461677,"version":"3.50.1"},"reference-count":70,"publisher":"SAGE Publications","issue":"2","license":[{"start":{"date-parts":[[2024,3,29]],"date-time":"2024-03-29T00:00:00Z","timestamp":1711670400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Intelligent &amp; Fuzzy Systems: Applications in Engineering and Technology"],"published-print":{"date-parts":[[2026,2]]},"abstract":"<jats:p>The rise of social media and micro-blogging platforms has led to concerns about hate speech, its potential to incite violence, psychological trauma, extremist beliefs, and self-harm. We have proposed a novel model, Odio-BERT for detecting hate speech using a pretrained BERT language model. This specialized model is specifically designed for detecting hate speech in the Spanish language, and when compared to existing models, it consistently outperforms them. The study provides valuable insights into addressing hate speech in the Spanish language and explores the impact of domain tasks.<\/jats:p>","DOI":"10.3233\/jifs-219349","type":"journal-article","created":{"date-parts":[[2024,3,29]],"date-time":"2024-03-29T12:39:24Z","timestamp":1711715964000},"page":"336-347","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":2,"title":["Odio-BERT: Evaluating domain task impact in hate speech detection"],"prefix":"10.1177","volume":"50","author":[{"given":"Mesay Gemeda","family":"Yigezu","sequence":"first","affiliation":[{"name":"Instituto Polit\u00e9cnico Nacional (IPN)","place":["Mexico"]}]},{"given":"Olga","family":"Kolesnikova","sequence":"additional","affiliation":[{"name":"Instituto Polit\u00e9cnico Nacional (IPN)","place":["Mexico"]}]},{"given":"Alexander","family":"Gelbukh","sequence":"additional","affiliation":[{"name":"Instituto Polit\u00e9cnico Nacional (IPN)","place":["Mexico"]}]},{"given":"Grigori","family":"Sidorov","sequence":"additional","affiliation":[{"name":"Instituto Polit\u00e9cnico Nacional (IPN)","place":["Mexico"]}]}],"member":"179","published-online":{"date-parts":[[2024,3,29]]},"reference":[{"key":"e_1_3_3_2_1","unstructured":"BymanD.L. How hateful rhetoric connects to real-world violence (2021)."},{"key":"e_1_3_3_3_1","first-page":"244","article-title":"Habesha@ DravidianLangTech: Abusive Comment Detection using Deep Learning Approach","author":"Yigezu M.G.","year":"2023","unstructured":"YigezuM.G.KantaS.KolesnikovaO.SidorovG.GelbukhA., Habesha@ DravidianLangTech: Abusive Comment Detection using Deep Learning Approach. In Proceedings of the Third Workshop onSpeech and Language Technologies for Dravidian Languages (2023), pp. 
244\u2013249.","journal-title":"Proceedings of the Third Workshop onSpeech and Language Technologies for Dravidian Languages"},{"key":"e_1_3_3_4_1","doi-asserted-by":"publisher","DOI":"10.1016\/0022-1031(85)90006-X"},{"key":"e_1_3_3_5_1","doi-asserted-by":"publisher","DOI":"10.1177\/0146167203254505"},{"key":"e_1_3_3_6_1","doi-asserted-by":"publisher","DOI":"10.1001\/jamapediatrics.2013.4143"},{"key":"e_1_3_3_7_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-2013"},{"issue":"1","key":"e_1_3_3_8_1","article-title":"Characterizing and detecting hateful users on twitter","volume":"12","author":"Ribeiro M.","year":"2018","unstructured":"RibeiroM.CalaisP.SantosY.AlmeidaV.MeiraW.Jr, Characterizing and detecting hateful users on twitter. In Proceedings of the International AAAI Conference on Web and SocialMedia (Vol. 12, No. 1) (2018).","journal-title":"Proceedings of the International AAAI Conference on Web and SocialMedia"},{"key":"e_1_3_3_9_1","first-page":"1","article-title":"Identification of hate speech and abusive language on indonesian Twitter using the Word2vec, part of speech and emoji features","author":"Ibrohim M.O.","year":"2019","unstructured":"IbrohimM.O.SetiadiM.A.BudiI., Identification of hate speech and abusive language on indonesian Twitter using the Word2vec, part of speech and emoji features. In Proceedings of the 1st International Conference on Advanced Information Science and System (2019), pp. 1\u20135.","journal-title":"Proceedings of the 1st International Conference on Advanced Information Science and System"},{"key":"e_1_3_3_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/s41870-022-01096-4"},{"key":"e_1_3_3_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3037073"},{"key":"e_1_3_3_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3041021.3054223"},{"key":"e_1_3_3_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/IALP.2018.8629154"},{"key":"e_1_3_3_14_1","article-title":"Predictive embeddings for hate speech detection on twitter","author":"Kshirsagar R.","year":"2018","unstructured":"KshirsagarR.CukuvacT.McKeownK.McGregorS., Predictive embeddings for hate speech detection on twitter. arXiv preprint arXiv:1809.10644 (2018).","journal-title":"arXiv preprint arXiv:1809.10644"},{"key":"e_1_3_3_15_1","article-title":"Offensive language identification in Greek","author":"Pitenis Z.","year":"2020","unstructured":"PitenisZ.ZampieriM.RanasingheT., Offensive language identification in Greek. arXiv preprint arXiv:2003.07459 (2020).","journal-title":"arXiv preprint arXiv:2003.07459"},{"key":"e_1_3_3_16_1","unstructured":"DevlinJ.ChangM.W.LeeK.ToutanovaK. Bert: Pre-training of deep bidirectional transformers for language understanding. arXivpreprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_3_17_1","article-title":"Roberta: A robustly optimized bert pretraining approach","author":"Liu Y.","year":"2019","unstructured":"LiuY.OttM.GoyalN.DuJ.JoshiM.Chen...D.StoyanovV., Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 (2019).","journal-title":"arXiv preprint arXiv:1907.11692"},{"issue":"1","key":"e_1_3_3_18_1","first-page":"5485","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel C.","year":"2020","unstructured":"RaffelC.ShazeerN.RobertsA.LeeK.NarangS.Matena...M.LiuP.J., Exploring the limits of transfer learning with a unified text-to-text transformer, The Journal of Machine Learning Research21(1) (2020), 5485\u20135551.","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_3_3_19_1","article-title":"Xlnet: Generalized autoregressive pretraining for language understanding","volume":"32","author":"Yang Z.","year":"2019","unstructured":"YangZ.DaiZ.YangY.CarbonellJ.SalakhutdinovR.R., ... LeQ.V., Xlnet: Generalized autoregressive pretraining for language understanding, Advances in Neural Information Processing Systems32 (2019).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_20_1","article-title":"Electra: Pre-training text encoders as discriminators rather than generators","author":"Clark K.","year":"2020","unstructured":"ClarkK.LuongM.T.Le...Q.V.ManningC.D., Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020).","journal-title":"arXiv preprint arXiv:2003.10555"},{"issue":"8","key":"e_1_3_3_21_1","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford A.","year":"2019","unstructured":"RadfordA.WuJ.ChildR.LuanD.Amodei...D.SutskeverI., Language models are unsupervised multitask learners, OpenAI Blog1(8) (2019), 9.","journal-title":"OpenAI Blog"},{"key":"e_1_3_3_22_1","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown T.","year":"2020","unstructured":"BrownT.MannB.RyderN.SubbiahM.KaplanJ.D.DhariwalandP.AmodeiD., Language models are few-shot learners, Advancesin Neural Information Processing Systems33 (2020), 1877\u20131901.","journal-title":"Advancesin Neural Information Processing Systems"},{"key":"e_1_3_3_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3446776"},{"key":"e_1_3_3_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICT4DA59526.2023.10302243"},{"key":"e_1_3_3_25_1","unstructured":"YigezuM.G.BadeG.Y.KolesnikovaO.Sidorov...G.GelbukhA. Multilingual Hope Speech Detection using Machine Learning (2023)."},{"key":"e_1_3_3_26_1","doi-asserted-by":"publisher","DOI":"10.1093\/bib\/bbac409"},{"key":"e_1_3_3_27_1","article-title":"Dialogpt: Large-scale generative pre-training for conversational response generation","author":"Zhang Y.","year":"2019","unstructured":"ZhangY.SunS.GalleyM.ChenY.C.BrockettC.Gao...X.DolanB., Dialogpt: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536 (2019).","journal-title":"arXiv preprint arXiv:1911.00536"},{"key":"e_1_3_3_28_1","doi-asserted-by":"crossref","unstructured":"JiaY.CaoS.NiuC.MaY.ZanH.Chao...R.ZhangW. EmoDialoGPT: enhancing DialoGPT with emotion. In Natural Language Processing ... Chinese Computing: 10th CCF International Conference NLPCC 2021 Qingdao China October 13\u201317 Proceedings Part II 10 (pp. 219\u2013231). 
Springer International Publishing (2021).","DOI":"10.1007\/978-3-030-88483-3_17"},{"key":"e_1_3_3_29_1","first-page":"4513","article-title":"Finbert: A pretrained financial language representation model for financial text mining","author":"Liu Z.","year":"2021","unstructured":"LiuZ.HuangD.HuangK.Li...Z.ZhaoJ., Finbert: A pretrained financial language representation model for financial text mining. In Proceedings of the twenty-ninth international conference on international joint conferences on artificial intelligence (2021), pp. 4513\u20134519.","journal-title":"Proceedings of the twenty-ninth international conference on international joint conferences on artificial intelligence"},{"key":"e_1_3_3_30_1","article-title":"Finbert: A pretrained language model for financial communications","author":"Yang Y.","year":"2020","unstructured":"YangY.Uy...M.C.S.HuangA., Finbert: A pretrained language model for financial communications. arXiv preprint arXiv:2006.08097 (2020).","journal-title":"arXiv preprint arXiv:2006.08097"},{"key":"e_1_3_3_31_1","article-title":"Finbert: Financial sentiment analysis with pretrained language models","author":"Araci D.","year":"2019","unstructured":"AraciD., Finbert: Financial sentiment analysis with pretrained language models. arXiv preprint arXiv:1908.10063 (2019).","journal-title":"arXiv preprint arXiv:1908.10063"},{"key":"e_1_3_3_32_1","article-title":"I., ... routsopoulos, LEGAL-BERT: The muppets straight out of law school","author":"Chalkidis I.","year":"2020","unstructured":"ChalkidisI.FergadiotisM.MalakasiotisP.Aletras...N., I., ... routsopoulos, LEGAL-BERT: The muppets straight out of law school. arXiv preprint arXiv:2010.02559 (2020).","journal-title":"arXiv preprint arXiv:2010.02559"},{"key":"e_1_3_3_33_1","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/btz682"},{"key":"e_1_3_3_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ITME.2019.00022"},{"key":"e_1_3_3_35_1","article-title":"Hatebert: Retraining bert for abusive language detection in english","author":"Caselli T.","year":"2020","unstructured":"CaselliT.BasileV.Mitrovi\u0107J., ... GranitzerM., Hatebert: Retraining bert for abusive language detection in english. arXiv preprint arXiv:2010.12472 (2020).","journal-title":"arXiv preprint arXiv:2010.12472"},{"key":"e_1_3_3_36_1","article-title":"Universal language model fine-tuning for text classification","author":"Howard... J.","year":"2018","unstructured":"Howard...J.RuderS., Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146 (2018).","journal-title":"arXiv preprint arXiv:1801.06146"},{"key":"e_1_3_3_37_1","first-page":"239","article-title":"Habesha@ DravidianLangTech: Utilizing Deep,... Transfer Learning Approaches for Sentiment Analysis","author":"Yigezu M.G.","year":"2023","unstructured":"YigezuM.G.KebedeT.KolesnikovaO.Sidorov...G.GelbukhA., Habesha@ DravidianLangTech: Utilizing Deep,... Transfer Learning Approaches for Sentiment Analysis. In Proceedings of the Third Workshop on Speech,... Language Technologies for Dravidian Languages (2023), pp. 239\u2013243.","journal-title":"Proceedings of the Third Workshop on Speech,... Language Technologies for Dravidian Languages"},{"key":"e_1_3_3_38_1","article-title":"Toward word embedding for personalized information retrieval","author":"Amer N.O.","year":"2016","unstructured":"AmerN.O.Mulhem...P.G\u00e9ryM., Toward word embedding for personalized information retrieval. 
In Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval (2016).","journal-title":"Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval"},{"key":"e_1_3_3_39_1","article-title":"Robertuito: a pretrained language model for social media text in spanish","author":"P\u00e9rez J.M.","year":"2021","unstructured":"P\u00e9rezJ.M.FurmanD.A.Alemany...L.A.LuqueF., Robertuito: a pretrained language model for social media text in spanish. arXiv preprint arXiv:2111.09453 (2021).","journal-title":"arXiv preprint arXiv:2111.09453"},{"key":"e_1_3_3_40_1","article-title":"Lightweight spanish language models","author":"Ca\u00f1ete J.","year":"2022","unstructured":"Ca\u00f1eteJ.DonosoS.Bravo-MarquezF.Carvallo...A.AraujoV.distilbeto:Albeto..., Lightweight spanish language models. arXiv preprint arXiv:2204.09145 (2022).","journal-title":"arXiv preprint arXiv:2204.09145"},{"key":"e_1_3_3_41_1","article-title":"ARBERT & MARBERT: deep bidirectional transformers for Arabic","author":"Abdul-Mageed M.","year":"2020","unstructured":"Abdul-MageedM.ElmadanyA.,... NagoudiE.M.B., ARBERT & MARBERT: deep bidirectional transformers for Arabic. arXiv preprint arXiv:2101.01785 (2020).","journal-title":"arXiv preprint arXiv:2101.01785"},{"key":"e_1_3_3_42_1","article-title":"BanglaBERT: Language model pretraining,... benchmarks for low-resource language understanding evaluation in Bangla","author":"Bhattacharjee A.","year":"2021","unstructured":"BhattacharjeeA.HasanT.AhmadW.U.SaminK.IslamM.S.Iqbal...A.ShahriyarR., BanglaBERT: Language model pretraining,... benchmarks for low-resource language understanding evaluation in Bangla. arXiv preprint arXiv:2101.00204 (2021).","journal-title":"arXiv preprint arXiv:2101.00204"},{"key":"e_1_3_3_43_1","first-page":"26176","article-title":"Multilingual abusive comment detection at scale for indic languages","volume":"35","author":"Gupta V.","year":"2022","unstructured":"GuptaV.RoychowdhuryS.DasM.BanerjeeS.SahaP.MathewandB.MukherjeeA., Multilingual abusive comment detection at scale for indic languages, Advances in Neural Information Processing Systems. 35 (2022), 26176\u201326191.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_44_1","article-title":"Unsupervised cross-lingual representation learning at scale","author":"Conneau A.","year":"2019","unstructured":"ConneauA.KhandelwalK.GoyalN.ChaudharyV.WenzekG.Guzm\u00e1nF.,... StoyanovV., Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019).","journal-title":"arXiv preprint arXiv:1911.02116"},{"key":"e_1_3_3_45_1","article-title":"Muril: Multilingual representations for indian languages","author":"Khanuja S.","year":"2021","unstructured":"KhanujaS.BansalD.MehtaniS.KhoslaS.DeyA.Gopalan...B.TalukdarP., Muril: Multilingual representations for indian languages. arXiv preprint arXiv:2103.10730 (2021).","journal-title":"arXiv preprint arXiv:2103.10730"},{"key":"e_1_3_3_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICMLA52953.2021.00043"},{"key":"e_1_3_3_47_1","doi-asserted-by":"crossref","unstructured":"Garc\u0131a-D\u0131azJ.A.Jim\u00e9nez-ZafraS.M.Garc\u0131a-Cumbreras...M.A.Valencia-Garc\u0131aR. Evaluating feature combination strategies for hate-speech detection in Spanish using linguistic features ... 
transformers Complex & Intelligent Systems (2023).","DOI":"10.1007\/s40747-022-00693-x"},{"key":"e_1_3_3_48_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2079"},{"key":"e_1_3_3_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-34058-2_3"},{"key":"e_1_3_3_50_1","doi-asserted-by":"publisher","DOI":"10.3390\/su142013094"},{"key":"e_1_3_3_51_1","doi-asserted-by":"publisher","DOI":"10.7717\/peerj-cs.742"},{"key":"e_1_3_3_52_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2020.114120"},{"issue":"3","key":"e_1_3_3_53_1","first-page":"1179","article-title":"Data-driven, ... psycholinguistics-motivated approaches to hate speech detection","volume":"24","author":"Silva S.C.D.","year":"2020","unstructured":"SilvaS.C.D.FerreiraT.C.Ramos...R.M.S.ParaboniI., Data-driven, ... psycholinguistics-motivated approaches to hate speech detection, Computing,... Systems24(3) (2020), 1179\u20131188.","journal-title":"Computing,... Systems"},{"key":"e_1_3_3_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3258973"},{"key":"e_1_3_3_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3103697"},{"key":"e_1_3_3_56_1","unstructured":"Shahiki-TashM.Armenta-SeguraJ.AhaniZ.KolesnikovaO.Sidorov...G.GelbukhA. Lidoma at homomex@ iberlef: Hate speech detection towards the mexican spanish-speaking lgbt+ population. the importance of preprocessing before using bert-based models. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2023) (2023)."},{"key":"e_1_3_3_57_1","unstructured":"YigezuM.G.KolesnikovaO.Sidorov...G.GelbukhA. Transformer-Based Hate Speech Detection for Multi-Class ... Multi-Label Classification (2023)."},{"key":"e_1_3_3_58_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2020.08.032"},{"key":"e_1_3_3_59_1","first-page":"361","article-title":"Overview of HOMO-MEX at Iberlef: Hate speech detection in Online Messages directed Towards the MEXican Spanish speaking LGBTQ+ population","volume":"71","author":"Bel-Enguix G.","year":"2023","unstructured":"Bel-EnguixG.G\u00f3mez-AdornoH.SierraG.V\u00e1squezJ.Andersen...S.T.Ojeda-TruebaS., Overview of HOMO-MEX at Iberlef: Hate speech detection in Online Messages directed Towards the MEXican Spanish speaking LGBTQ+ population, Natural Language Processing. 71 (2023), 361\u2013370.","journal-title":"Natural Language Processing"},{"key":"e_1_3_3_60_1","first-page":"1","article-title":"Analyzing Zero-Shot transfer Scenarios across Spanish variants for Hate Speech Detection. In Tenth Workshop on NLP for Similar Languages","author":"Castillo-L\u00f3pez G.","year":"2023","unstructured":"Castillo-L\u00f3pezG.Riabi...A.SeddahD., Analyzing Zero-Shot transfer Scenarios across Spanish variants for Hate Speech Detection. In Tenth Workshop on NLP for Similar Languages, Varieties,... Dialects (VarDial 2023) (2023), pp. 1\u201313.","journal-title":"Varieties,... Dialects (VarDial 2023)"},{"key":"e_1_3_3_61_1","doi-asserted-by":"publisher","DOI":"10.3390\/s19214654"},{"key":"e_1_3_3_62_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2007"},{"key":"e_1_3_3_63_1","article-title":"Spanish pretrained bert model and evaluation data","author":"Ca\u00f1ete J.","year":"2023","unstructured":"Ca\u00f1eteJ.ChaperonG.FuentesR.HoJ.H.Kang...H.P\u00e9rezJ., Spanish pretrained bert model and evaluation data. arXiv preprint arXiv:2308.02976 (2023).","journal-title":"arXiv preprint arXiv:2308.02976"},{"key":"e_1_3_3_64_1","unstructured":"De la RosaJ.PonferradaE.G.VillegasP.SalasP.G.D.P.RomeroM. and GranduryM. 
Bertin Efficientre-training of a spanish language model using perplexity sampling. arXiv preprintarXiv:2207.06814 (2022)."},{"key":"e_1_3_3_65_1","article-title":"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)","author":"Bansal M.","year":"2019","unstructured":"BansalM.VillavicencioA., Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) (2019).","journal-title":"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)"},{"key":"e_1_3_3_66_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S19-2011"},{"key":"e_1_3_3_67_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i17.17745"},{"key":"e_1_3_3_68_1","unstructured":"DevlinJ.ChangM.W.LeeK.ToutanovaK. Bert: Pre-training of deep bidirectional transformers for language understanding. arXivpreprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_3_69_1","article-title":"Unsupervised cross-lingual representation learning at scale","author":"Conneau A.","year":"2019","unstructured":"ConneauA.KhandelwalK.GoyalN.ChaudharyV.WenzekG.Guzm\u00e1nF. and StoyanovV., Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019).","journal-title":"arXiv preprint arXiv:1911.02116"},{"key":"e_1_3_3_70_1","article-title":"MultiFiT: Efficient multi-lingual language model fine-tuning","author":"Eisenschlos J.M.","year":"2019","unstructured":"EisenschlosJ.M.RuderS.CzaplaP.KardasM.GuggerS. and HowardJ., MultiFiT: Efficient multi-lingual language model fine-tuning. arXiv preprint arXiv:1909.04761 (2019).","journal-title":"arXiv preprint arXiv:1909.04761"},{"key":"e_1_3_3_71_1","doi-asserted-by":"crossref","unstructured":"ChungH.W.GarretteD.TanK.C.RiesaJ. Improving multilingual models with language-clustered vocabularies. arXivpreprint arXiv:2010.12777 (2020).","DOI":"10.18653\/v1\/2020.emnlp-main.367"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems: Applications in Engineering and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-219349","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.3233\/JIFS-219349","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-219349","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,14]],"date-time":"2026-03-14T21:47:37Z","timestamp":1773524857000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.3233\/JIFS-219349"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,29]]},"references-count":70,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,2]]}},"alternative-id":["10.3233\/JIFS-219349"],"URL":"https:\/\/doi.org\/10.3233\/jifs-219349","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,29]]}}}
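
Illustration: the abstract describes building a hate speech detector on top of a pretrained BERT model for Spanish. The sketch below is not the authors' Odio-BERT code; it is a minimal, hypothetical example of the generic fine-tuning pattern the abstract alludes to, using Hugging Face transformers and assuming BETO (dccuchile/bert-base-spanish-wwm-cased) as a stand-in pretrained Spanish checkpoint and a toy two-example batch.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Hypothetical stand-in checkpoint: BETO, a pretrained Spanish BERT.
    MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # Binary classification head: 0 = not hate speech, 1 = hate speech.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Toy batch; real training would iterate over a labeled Spanish hate-speech corpus.
    texts = ["Que tengas un buen dia", "Texto ofensivo de ejemplo"]
    labels = torch.tensor([0, 1])

    enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    loss = model(**enc, labels=labels).loss  # cross-entropy over the two classes
    loss.backward()
    optimizer.step()

The actual Odio-BERT setup (domain-adaptive pretraining, datasets, and hyperparameters) is described in the paper itself; the snippet only shows the standard sequence-classification fine-tuning loop on which such a model would typically be built.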