{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,20]],"date-time":"2025-06-20T04:08:52Z","timestamp":1750392532058,"version":"3.41.0"},"reference-count":89,"publisher":"Association for Computing Machinery (ACM)","issue":"FSE","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Softw. Eng."],"published-print":{"date-parts":[[2025,6,19]]},"abstract":"<jats:p>\n            In the realm of natural language processing (NLP), the rising computational demands of modern models bring energy efficiency to the forefront of sustainable computing. Preprocessing tasks, such as tokenization, stemming, and POS tagging, are critical steps in transforming raw text into structured formats suitable for machine learning models. However, despite their widespread use in numerous NLP pipelines, little attention has been given to their energy consumption. This empirical study evaluates and compares the energy consumption and runtime performance of three popular NLP libraries\u2014\n            <jats:italic toggle=\"yes\">NLTK<\/jats:italic>\n            ,\n            <jats:italic toggle=\"yes\">spaCy<\/jats:italic>\n            , and\n            <jats:italic toggle=\"yes\">Gensim<\/jats:italic>\n            \u2014across six common preprocessing tasks. We conducted a comprehensive comparison using three distinct datasets and six preprocessing tasks. Energy consumption was measured using the Intel-RAPL and NVIDIA-SMI interfaces, while runtime performance was recorded across all library-task combinations. 
The results reveal substantial discrepancies in energy consumption across the three libraries, with up to 93% of cases exhibiting significant variations.\n            <jats:italic toggle=\"yes\">Gensim<\/jats:italic>\n            showed superior efficiency in tokenization and stemming, while\n            <jats:italic toggle=\"yes\">spaCy<\/jats:italic>\n            excelled in tasks like POS tagging and Named Entity Recognition (NER). These findings underscore the potential for optimizing NLP preprocessing tasks for energy efficiency. Our study highlights the untapped potential for improving energy efficiency in NLP pipelines. These insights emphasize the need for more focused research into energy-efficient NLP techniques, especially in the preprocessing phase, to support the development of greener, more sustainable computational models.\n          <\/jats:p>","DOI":"10.1145\/3729396","type":"journal-article","created":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T15:16:02Z","timestamp":1750346162000},"page":"2850-2873","source":"Crossref","is-referenced-by-count":0,"title":["NLP Libraries, Energy Consumption and Runtime: An Empirical Study"],"prefix":"10.1145","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-8291-7133","authenticated-orcid":false,"given":"Rajrupa","family":"Chattaraj","sequence":"first","affiliation":[{"name":"IIT Tirupati, Tirupati, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0818-8178","authenticated-orcid":false,"given":"Sridhar","family":"Chimalakonda","sequence":"additional","affiliation":[{"name":"IIT Tirupati, Tirupati, India"}]}],"member":"320","published-online":{"date-parts":[[2025,6,19]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Diogo Almeida, Janko Altenschmidt, Sam Altman, and Shyamal Anadkat.","author":"Achiam Josh","year":"2023","unstructured":"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, and Shyamal 
Anadkat. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.2196\/44977"},{"key":"e_1_2_1_3_1","doi-asserted-by":"crossref","first-page":"7","DOI":"10.1007\/s10515-022-00371-9","article-title":"An NLP-based quality attributes extraction and prioritization framework in Agile-driven software development","volume":"30","author":"Ahmed Mohsin","year":"2023","unstructured":"Mohsin Ahmed, Saif Ur Rehman Khan, and Khubaib Amjad Alam. 2023. An NLP-based quality attributes extraction and prioritization framework in Agile-driven software development. Automated Software Engineering, 30, 1 (2023), 7.","journal-title":"Automated Software Engineering"},{"key":"e_1_2_1_4_1","first-page":"39205","article-title":"Mallet vs GenSim: Topic modeling for 20 news groups report","volume":"2","author":"Akef Islam","year":"2016","unstructured":"Islam Akef, Juan S Munoz Arango, and Xiaowei Xu. 2016. Mallet vs GenSim: Topic modeling for 20 news groups report. Univ. Ark. Little Rock Law J, 2 (2016), 39205.","journal-title":"Univ. Ark. Little Rock Law J"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR.2017.42"},{"key":"e_1_2_1_6_1","doi-asserted-by":"crossref","unstructured":"Daniel Andor Chris Alberti David Weiss Aliaksei Severyn Alessandro Presta Kuzman Ganchev Slav Petrov and Michael Collins. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042.","DOI":"10.18653\/v1\/P16-1231"},{"key":"e_1_2_1_7_1","volume-title":"Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv preprint arXiv:2007.03051.","author":"Wolff Anthony Lasse F","year":"2020","unstructured":"Lasse F Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. 
arXiv preprint arXiv:2007.03051."},{"key":"e_1_2_1_8_1","volume-title":"Finbert: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063.","author":"Araci Dogu","year":"2019","unstructured":"Dogu Araci. 2019. Finbert: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063."},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3387902.3392613"},{"key":"e_1_2_1_10_1","unstructured":"Arthur Asuncion and David Newman. 2007. UCI machine learning repository."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCONS.2017.8250563"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00041"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/BigData.2016.7841060"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.4337\/9781788972826.00017"},{"key":"e_1_2_1_16_1","doi-asserted-by":"crossref","unstructured":"Yelysei Bondarenko Markus Nagel and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization. arXiv preprint arXiv:2109.12948.","DOI":"10.18653\/v1\/2021.emnlp-main.627"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/HOTCHIPS.2012.7476509"},{"key":"e_1_2_1_18_1","volume-title":"Language models are few-shot learners. Advances in neural information processing systems, 33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33 (2020), 1877\u20131901."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASE56229.2023.00207"},{"key":"e_1_2_1_20_1","unstructured":"Man Yan Miranda Chong. 2013. 
A study on plagiarism detection and plagiarism direction identification using natural language processing techniques."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1017\/S1351324916000334"},{"key":"e_1_2_1_22_1","volume-title":"Proceedings of the 16th ACM\/IEEE international symposium on Low power electronics and design. 189\u2013194","author":"David Howard","year":"2010","unstructured":"Howard David, Eugene Gorbatov, Ulf R Hanebutte, Rahul Khanna, and Christian Le. 2010. RAPL: Memory power estimation and capping. In Proceedings of the 16th ACM\/IEEE international symposium on Low power electronics and design. 189\u2013194. https:\/\/doi.org\/10.1145\/1840845.1840883 10.1145\/1840845.1840883"},{"key":"e_1_2_1_23_1","volume-title":"Introduction to natural language processing","author":"Eisenstein Jacob","unstructured":"Jacob Eisenstein. 2019. Introduction to natural language processing. The MIT Press."},{"key":"e_1_2_1_24_1","unstructured":"Explosion AI. 2020. spaCy Universe. https:\/\/github.com\/explosion\/spacy-universe"},{"key":"e_1_2_1_25_1","volume-title":"AMIA Annual Symposium Proceedings.","author":"Eyre Hannah","year":"2021","unstructured":"Hannah Eyre, Alec B Chapman, Kelly S Peterson, Jianlin Shi, Patrick R Alba, Makoto M Jones, Tamara L Box, Scott L DuVall, and Olga V Patterson. 2021. Launching into clinical space with medspaCy: a new clinical text processing toolkit in Python. In AMIA Annual Symposium Proceedings. 2021, 438."},{"key":"e_1_2_1_26_1","unstructured":"fast.ai. 2021. fastai. https:\/\/github.com\/fastai\/fastai"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510221"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.15439\/2020F20"},{"key":"e_1_2_1_29_1","volume-title":"Proceedings of annual conference on artificial intelligence (ACAI).","author":"Giorgos Orphanos","year":"1999","unstructured":"Orphanos Giorgos, Kalles Dimitris, Papagelis Thanasis, and Christodoulakis Dimitris. 
1999. Decision trees and NLP: A case study in POS tagging. In Proceedings of annual conference on artificial intelligence (ACAI)."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.14569\/IJACSA.2011.020508"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/2425248.2425252"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1002\/cpe.5971"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2884781.2884869"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2023.3279255"},{"key":"e_1_2_1_35_1","first-page":"1930","article-title":"A comparative study of stemming algorithms","volume":"2","author":"Jivani Anjali Ganesh","year":"2011","unstructured":"Anjali Ganesh Jivani. 2011. A comparative study of stemming algorithms. Int. J. Comp. Tech. Appl, 2, 6 (2011), 1930\u20131938.","journal-title":"Int. J. Comp. Tech. Appl"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/PuneCon50868.2020.9362395"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/FiCloud58648.2023.00022"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3177754"},{"key":"e_1_2_1_39_1","first-page":"350","article-title":"An interpretation of lemmatization and stemming in natural language processing","volume":"22","author":"Khyani Divya","year":"2021","unstructured":"Divya Khyani, BS Siddhartha, NM Niveditha, and BM Divya. 2021. An interpretation of lemmatization and stemming in natural language processing. 
Journal of University of Shanghai for Science and Technology, 22, 10 (2021), 350\u2013357.","journal-title":"Journal of University of Shanghai for Science and Technology"},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/PACT52795.2021.00013"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.4097\/kjae.2017.70.1.22"},{"key":"e_1_2_1_42_1","volume-title":"2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT). 1\u20137. https:\/\/doi.org\/10","author":"Koragoankar Ranjit","year":"2023","unstructured":"Ranjit Koragoankar, Varad Kulkarni, and Deepali Naik. 2023. Search Engine Using NLP Text Processing Techniques to Extract Most Relevant Search Results. In 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT). 1\u20137. https:\/\/doi.org\/10.1109\/ICCCNT56998.2023.10307392 10.1109\/ICCCNT56998.2023.10307392"},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2021.102086"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.5120\/20752-3148"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3342827.3342828"},{"key":"e_1_2_1_46_1","volume-title":"Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering. 272\u2013281","author":"Lella Hemasri Sai","year":"2024","unstructured":"Hemasri Sai Lella, Rajrupa Chattaraj, Sridhar Chimalakonda, and Manasa Kurra. 2024. Towards Comprehending Energy Consumption of Database Management Systems-A Tool and Empirical Study. In Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering. 272\u2013281. 
https:\/\/doi.org\/10.1145\/3661167.3661174 10.1145\/3661167.3661174"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-018-3865-7"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2024.112005"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-46675-9_21"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.3115\/977035.977037"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1002\/0471667196.ess5050"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1075\/li.30.1.03nad"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3661167.3661203"},{"key":"e_1_2_1_55_1","unstructured":"David Patterson Joseph Gonzalez Quoc Le Chen Liang Lluis-Miquel Munguia Daniel Rothchild David So Maud Texier and Jeff Dean. 2021. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00450-015-0300-5"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.3390\/ai4010004"},{"key":"e_1_2_1_59_1","unstructured":"Saurabhsingh Rajput Tim Widmayer Ziyuan Shang Maria Kechagia Federica Sarro and Tushar Sharma. 2023. FECoM: A Step towards Fine-Grained Energy Measurement for Deep Learning. arXiv preprint arXiv:2308.12264."},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3680470"},{"key":"e_1_2_1_61_1","unstructured":"In VJ Reddi A Smith and L Tang. [n. d.]. Synthesizing benchmarks for predictive modeling."},{"key":"e_1_2_1_62_1","unstructured":"Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2011. Gensim\u2014statistical semantics in python. Retrieved from gensim.org."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","unstructured":"Philip Resnik and Jimmy Lin. 2010. Evaluation of NLP systems. 
The handbook of computational linguistics and natural language processing 271\u2013295. https:\/\/doi.org\/10.1002\/9781444324044.ch11 10.1002\/9781444324044.ch11","DOI":"10.1002\/9781444324044.ch11"},{"key":"e_1_2_1_64_1","volume-title":"Proceedings of the 2011 conference on empirical methods in natural language processing. 1524\u20131534","author":"Ritter Alan","year":"2011","unstructured":"Alan Ritter, Sam Clark, and Oren Etzioni. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the 2011 conference on empirical methods in natural language processing. 1524\u20131534."},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/MM.2012.12"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1044"},{"key":"e_1_2_1_67_1","volume-title":"Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC\u201916)","author":"Sammons Mark","year":"2016","unstructured":"Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, and Dan Roth. 2016. Edison: Feature extraction for nlp, simplified. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC\u201916). 4085\u20134092."},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/SNAMS.2019.8931850"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR59073.2023.00048"},{"key":"e_1_2_1_70_1","volume-title":"NLTK, Spacy, and Sumy Libraries","author":"Sharma Abhilasha","unstructured":"Abhilasha Sharma, Raghav Aggarwal, and Raghav Alawadhi. [n. d.]. A Comparative Study of Text Summarization using Gensim, NLTK, Spacy, and Sumy Libraries. Journal of Xi\u2019an Shiyou University, Natural Science Edition."},{"key":"e_1_2_1_71_1","unstructured":"Steven Sloria. 2013. TextBlob. https:\/\/github.com\/sloria\/TextBlob"},{"key":"e_1_2_1_72_1","volume-title":"International conference on machine learning. 
5877\u20135886","author":"So David","year":"2019","unstructured":"David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In International conference on machine learning. 5877\u20135886."},{"key":"e_1_2_1_73_1","doi-asserted-by":"crossref","unstructured":"Emma Strubell Ananya Ganesh and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.","DOI":"10.18653\/v1\/P19-1355"},{"key":"e_1_2_1_74_1","doi-asserted-by":"crossref","unstructured":"Emma Strubell Patrick Verga Daniel Andor David Weiss and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. arXiv preprint arXiv:1804.08199.","DOI":"10.18653\/v1\/D18-1548"},{"key":"e_1_2_1_75_1","volume-title":"27th International Conference on Intelligent User Interfaces. 212\u2013228","author":"Sun Jiao","year":"2022","unstructured":"Jiao Sun, Q Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, and Justin D Weisz. 2022. Investigating explainability of generative AI for code through scenario-based design. In 27th International Conference on Intelligent User Interfaces. 212\u2013228. https:\/\/doi.org\/10.1145\/3490099.3511119 10.1145\/3490099.3511119"},{"key":"e_1_2_1_76_1","volume-title":"Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27","author":"Sutskever Ilya","year":"2014","unstructured":"Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27 (2014)."},{"key":"e_1_2_1_77_1","first-page":"4864","article-title":"A survey on text pre-processing & feature extraction techniques in natural language processing","volume":"7","author":"Tabassum Ayisha","year":"2020","unstructured":"Ayisha Tabassum and Rajendra R Patil. 2020. A survey on text pre-processing & feature extraction techniques in natural language processing. 
International Research Journal of Engineering and Technology (IRJET), 7, 06 (2020), 4864\u20134867.","journal-title":"International Research Journal of Engineering and Technology (IRJET)"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1145\/3466752.3480095"},{"key":"e_1_2_1_79_1","volume-title":"\u0141ukasz Kaiser, and Illia Polosukhin","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30 (2017)."},{"key":"e_1_2_1_80_1","volume-title":"2022 international conference on ICT for sustainability (ICT4S). 35\u201345","author":"Verdecchia Roberto","year":"2022","unstructured":"Roberto Verdecchia, Lu\u00eds Cruz, June Sallou, Michelle Lin, James Wickenden, and Estelle Hotellier. 2022. Data-centric green ai an exploratory empirical study. In 2022 international conference on ICT for sustainability (ICT4S). 35\u201345. https:\/\/doi.org\/10.1109\/ICT4S55073.2022.00015 10.1109\/ICT4S55073.2022.00015"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.3115\/992424.992434"},{"key":"e_1_2_1_82_1","volume-title":"Proceedings of the 4th international conference on the practical applications of knowledge discovery and data mining. 1, 29\u201339","author":"Wirth R\u00fcdiger","year":"2000","unstructured":"R\u00fcdiger Wirth and Jochen Hipp. 2000. CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th international conference on the practical applications of knowledge discovery and data mining. 1, 29\u201339."},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.02.034"},{"key":"e_1_2_1_85_1","volume-title":"Proceedings of the AAAI conference on artificial intelligence. 
33","author":"Yao Liang","year":"2019","unstructured":"Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI conference on artificial intelligence. 33, 7370\u20137377. https:\/\/doi.org\/10.1609\/aaai.v33i01.33017370 10.1609\/aaai.v33i01.33017370"},{"key":"e_1_2_1_86_1","doi-asserted-by":"crossref","unstructured":"Ye Yuan Jingzhi Zhang Zongyao Zhang Kaiwei Chen Jiacheng Shi Vincenzo Stoico and Ivano Malavolta. 2024. The Impact of Knowledge Distillation on the Energy Consumption and Runtime Efficiency of NLP Models.","DOI":"10.1145\/3644815.3644966"},{"key":"e_1_2_1_87_1","volume-title":"2019 5th international conference on optimization and applications (ICOA). 1\u201310","author":"Youssra ZAHIDI","year":"2019","unstructured":"Youssra ZAHIDI, Yacine EL YOUNOUSSI, and Chaimae AZROUMAHLI. 2019. Comparative study of the most useful Arabic-supporting natural language processing and deep learning libraries. In 2019 5th international conference on optimization and applications (ICOA). 
1\u201310."},{"key":"e_1_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2022.3186248"},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eng.2019.12.014"}],"container-title":["Proceedings of the ACM on Software Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3729396","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T15:17:59Z","timestamp":1750346279000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3729396"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,19]]},"references-count":89,"journal-issue":{"issue":"FSE","published-print":{"date-parts":[[2025,6,19]]}},"alternative-id":["10.1145\/3729396"],"URL":"https:\/\/doi.org\/10.1145\/3729396","relation":{},"ISSN":["2994-970X"],"issn-type":[{"value":"2994-970X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,19]]}}}