{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,9]],"date-time":"2026-05-09T16:34:29Z","timestamp":1778344469088,"version":"3.51.4"},"reference-count":75,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2025,2,23]],"date-time":"2025-02-23T00:00:00Z","timestamp":1740268800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001381","name":"National Research Foundation Singapore","doi-asserted-by":"crossref","award":["NRF-NRFI08-2022-0002"],"award-info":[{"award-number":["NRF-NRFI08-2022-0002"]}],"id":[{"id":"10.13039\/501100001381","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>Software development involves collaborative interactions where stakeholders express opinions across various platforms. Recognizing the sentiments conveyed in these interactions is crucial for the effective development and ongoing maintenance of software systems. For software products, analyzing the sentiment of user feedback, e.g., reviews, comments, and forum posts can provide valuable insights into user satisfaction and areas for improvement. This can guide the development of future updates and features. However, accurately identifying sentiments in software engineering datasets remains challenging.<\/jats:p>\n          <jats:p>This study investigates bigger large language models (bLLMs) in addressing the labeled data shortage that hampers fine-tuned smaller large language models (sLLMs) in software engineering tasks. We conduct a comprehensive empirical study using five established datasets to assess three open source bLLMs in zero-shot and few-shot scenarios. 
Additionally, we compare them with fine-tuned sLLMs, using sLLMs to learn contextual embeddings of text from software platforms.<\/jats:p>\n          <jats:p>Our experimental findings demonstrate that bLLMs exhibit state-of-the-art performance on datasets marked by limited training data and imbalanced distributions. bLLMs can also achieve excellent performance under a zero-shot setting. However, when ample training data are available or the dataset exhibits a more balanced distribution, fine-tuned sLLMs can still achieve superior results.<\/jats:p>","DOI":"10.1145\/3697009","type":"journal-article","created":{"date-parts":[[2024,9,24]],"date-time":"2024-09-24T15:53:36Z","timestamp":1727193216000},"page":"1-30","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":25,"title":["Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models"],"prefix":"10.1145","volume":"34","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6001-1372","authenticated-orcid":false,"given":"Ting","family":"Zhang","sequence":"first","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6350-2700","authenticated-orcid":false,"given":"Ivana Clairine","family":"Irsan","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5566-3819","authenticated-orcid":false,"given":"Ferdian","family":"Thung","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4367-7201","authenticated-orcid":false,"given":"David","family":"Lo","sequence":"additional","affiliation":[{"name":"Singapore Management University, Singapore, Singapore"}]}],"member":"320","published-online":{"date-parts":[[2025,2,23]]},"reference":[{"key":"e_1_3_2_2_2","first-page":"106","volume-title":"Proceedings 
of the 2017 32nd IEEE\/ACM International Conference on Automated Software Engineering (ASE \u201917).","author":"Ahmed Toufique","year":"2017","unstructured":"Toufique Ahmed, Amiangshu Bosu, Anindya Iqbal, and Shahram Rahimi. 2017. SentiCR: A customized sentiment analysis tool for code review interactions. In Proceedings of the 2017 32nd IEEE\/ACM International Conference on Automated Software Engineering (ASE \u201917). IEEE, 106\u2013111."},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3639183"},{"key":"e_1_3_2_4_2","doi-asserted-by":"crossref","unstructured":"Eeshita Biswas Mehmet Efruz Karabulut Lori Pollock and K. Vijay-Shanker. 2020. Achieving reliable sentiment analysis in the software engineering domain using bert. In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME \u201920). IEEE 162\u2013173.","DOI":"10.1109\/ICSME46990.2020.00025"},{"key":"e_1_3_2_5_2","unstructured":"Rishi Bommasani Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill Erik Brynjolfsson Shyamal Buch Dallas Card Rodrigo Castellon Niladri Chatterji Annie Chen Kathleen Creel Jared Quincy Davis Dora Demszky Chris Donahue Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin Ethayarajh Li Fei-Fei Chelsea Finn Trevor Gale Lauren Gillespie Karan Goel Noah Goodman Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas Icard Saahil Jain Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling Fereshte Khani Omar Khattab Pang Wei Koh Mark Krass Ranjay Krishna Rohith Kuditipudi Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. 
Manning Suvir Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan Deepak Narayanan Ben Newman Allen Nie Juan Carlos Niebles Hamed Nilforoshan Julian Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park Chris Piech Eva Portelance Christopher Potts Aditi Raghunathan Rob Reich Hongyu Ren Frieda Rong Yusuf Roohani Camilo Ruiz Jack Ryan Christopher R\u00e9 Dorsa Sadigh Shiori Sagawa Keshav Santhanam Andy Shih Krishnan Srinivasan Alex Tamkin Rohan Taori Armin W. Thomas Florian Tram\u00e8r Rose E. Wang William Wang et al. 2021. On the opportunities and risks of foundation models. arXiv:2108.07258."},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1010933404324"},{"key":"e_1_3_2_7_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, Vol. 33, 1877\u20131901.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/3180155.3182519"},{"key":"e_1_3_2_9_2","doi-asserted-by":"crossref","unstructured":"Fabio Calefato Filippo Lanubile Nicole Novielli and Luigi Quaranta. 2019. Emtk-the emotion mining toolkit. In Proceedings of the 2019 IEEE\/ACM 4th International Workshop on Emotion Awareness in Software Engineering (SEmotion \u201919). 
IEEE 34\u201337.","DOI":"10.1109\/SEmotion.2019.00014"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/2684822.2685305"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3338906.3338977"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3424308"},{"key":"e_1_3_2_13_2","unstructured":"Wei-Lin Chiang Zhuohan Li Zi Lin Ying Sheng Zhanghao Wu Hao Zhang Lianmin Zheng Siyuan Zhuang Yonghao Zhuang Joseph E. Gonzalez Ion Stoica and Eric P. Xing. 2023. Vicuna: An open-source ChatBot impressing GPT-4 with 90%* ChatGPT quality. Retrieved from https:\/\/lmsys.org\/blog\/2023-03-30-vicuna\/"},{"key":"e_1_3_2_14_2","unstructured":"Aakanksha Chowdhery Sharan Narang Jacob Devlin Maarten Bosma Gaurav Mishra Adam Roberts Paul Barham Hyung Won Chung Charles Sutton Sebastian Gehrmann Parker Schuh Kensen Shi Sasha Tsvyashchenko Joshua Maynez Abhishek Rao Parker Barnes Yi Tay Noam Shazeer Vinodkumar Prabhakaran Emily Reif Nan Du Ben Hutchinson Reiner Pope James Bradbury Jacob Austin Michael Isard Guy Gur-Ari Pengcheng Yin Toju Duke Anselm Levskaya Sanjay Ghemawat Sunipa Dev Henryk Michalewski Xavier Garcia Vedant Misra Kevin Robinson Liam Fedus Denny Zhou Daphne Ippolito David Luan Hyeontaek Lim Barret Zoph Alexander Spiridonov Ryan Sepassi David Dohan Shivani Agrawal Mark Omernick Andrew M. Dai Thanumalayan Sankaranarayana Pillai Marie Pellat Aitor Lewkowycz Erica Moreira Rewon Child Oleksandr Polozov Katherine Lee Zongwei Zhou Xuezhi Wang Brennan Saeta Mark Diaz Orhan Firat Michele Catasta Jason Wei Kathy Meier-Hellstern Douglas Eck Jeff Dean Slav Petrov and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv:2204.02311."},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3543873.3587605"},{"key":"e_1_3_2_16_2","first-page":"4171","volume-title":"Proceedings of NAACL-HLT","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, 4171\u20134186."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2019.11.174"},{"key":"e_1_3_2_18_2","first-page":"11704","volume-title":"Proceedings of the 41st International Conference on Machine Learning","author":"Du Cunxiao","year":"2024","unstructured":"Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, and Yang You. 2024. GliDe with a CaPE: A low-hassle method to accelerate speculative decoding. In Proceedings of the 41st International Conference on Machine Learning. R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, and F. Berkenkamp (Eds.), PMLR, 11704\u201311720. Retrieved from https:\/\/proceedings.mlr.press\/v235\/du24c.html"},{"key":"e_1_3_2_19_2","first-page":"2849","volume-title":"International Conference on Machine Learning","author":"Du Cunxiao","year":"2021","unstructured":"Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. In International Conference on Machine Learning. PMLR, 2849\u20132859."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2005.10.010"},{"key":"e_1_3_2_21_2","first-page":"1","volume-title":"Proceedings of the 46th IEEE\/ACM International Conference on Software Engineering","author":"Feng Sidong","year":"2024","unstructured":"Sidong Feng and Chunyang Chen. 2024. Prompting is all you need: Automated android bug replay with large language models. 
In Proceedings of the 46th IEEE\/ACM International Conference on Software Engineering, 1\u201313."},{"key":"e_1_3_2_22_2","first-page":"1","volume-title":"Proceedings of the 46th IEEE\/ACM International Conference on Software Engineering","author":"Geng Mingyang","year":"2024","unstructured":"Mingyang Geng, Shangwen Wang, Dezun Dong, Haotian Wang, Ge Li, Zhi Jin, Xiaoguang Mao, and Xiangke Liao. 2024. Large language models are few-shot summarizers: Multi-intent comment generation via in-context learning. In Proceedings of the 46th IEEE\/ACM International Conference on Software Engineering, 1\u201313."},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3551349.3556925"},{"key":"e_1_3_2_24_2","first-page":"272","volume-title":"Proceedings of the 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER \u201923)","author":"Irsan Ivana Clairine","year":"2023","unstructured":"Ivana Clairine Irsan, Ting Zhang, Ferdian Thung, Kisub Kim, and David Lo. 2023. Multi-modal API recommendation. In Proceedings of the 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER \u201923). IEEE, 272\u2013283."},{"key":"e_1_3_2_25_2","doi-asserted-by":"crossref","first-page":"1786","DOI":"10.1145\/3297280.3297455","volume-title":"Proceedings of the 34th ACM\/SIGAPP Symposium on Applied Computing","author":"Islam Md Rakibul","year":"2019","unstructured":"Md Rakibul Islam, Md Kauser Ahmmed, and Minhaz F. Zibran. 2019. MarValous: Machine learning based detection of emotions in the valence-arousal space in software engineering text. In Proceedings of the 34th ACM\/SIGAPP Symposium on Applied Computing, 1786\u20131793."},{"key":"e_1_3_2_26_2","doi-asserted-by":"crossref","first-page":"1536","DOI":"10.1145\/3167132.3167296","volume-title":"Proceedings of the 33rd Annual ACM Symposium on Applied Computing","author":"Islam Md Rakibul","year":"2018","unstructured":"Md Rakibul Islam and Minhaz F. Zibran. 2018. 
DEVA: Sensing emotions in the valence arousal space in software engineering text. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 1536\u20131543."},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2018.08.030"},{"key":"e_1_3_2_28_2","doi-asserted-by":"crossref","first-page":"531","DOI":"10.1109\/ICSM.2015.7332508","volume-title":"Proceedings of the 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME \u201915).","author":"Jongeling Robbert","year":"2015","unstructured":"Robbert Jongeling, Subhajit Datta, and Alexander Serebrenik. 2015. Choosing your weapons: On sentiment analysis tools for software engineering research. In Proceedings of the 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME \u201915). IEEE, 531\u2013535."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-016-9493-x"},{"key":"e_1_3_2_30_2","unstructured":"Zhenzhong Lan Mingda Chen Sebastian Goodman Kevin Gimpel Piyush Sharma and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations. OpenReview.net. Retrieved from https:\/\/openreview.net\/forum?id=H1eA7AEtvS"},{"issue":"3","key":"e_1_3_2_31_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3490388","article-title":"Opinion mining for software development: A systematic literature review","volume":"31","author":"Lin Bin","year":"2022","unstructured":"Bin Lin, Nathan Cassee, Alexander Serebrenik, Gabriele Bavota, Nicole Novielli, and Michele Lanza. 2022. Opinion mining for software development: A systematic literature review. 
ACM Transactions on Software Engineering and Methodology 31, 3 (2022), 1\u201341.","journal-title":"ACM Transactions on Software Engineering and Methodology"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1145\/3180155.3180195"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1017\/9781108639286"},{"issue":"2010","key":"e_1_3_2_34_2","first-page":"627","article-title":"Sentiment analysis and subjectivity","volume":"2","author":"Liu Bing","year":"2010","unstructured":"Bing Liu. 2010. Sentiment analysis and subjectivity. Handbook of Natural Language Processing 2, 2010 (2010), 627\u2013666.","journal-title":"Handbook of Natural Language Processing"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-3223-4_13"},{"key":"e_1_3_2_36_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692."},{"key":"e_1_3_2_37_2","first-page":"8086","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics,","volume":"1","author":"Lu Yao","year":"2022","unstructured":"Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Long Papers, Vol. 1, 8086\u20138098."},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3639150"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.emnlp-main.759"},{"key":"e_1_3_2_40_2","doi-asserted-by":"crossref","unstructured":"Moran Mizrahi Guy Kaplan Dan Malkin Rotem Dror Dafna Shahaf and Gabriel Stanovsky. 2023. State of what art? A call for multi-prompt llm evaluation. 
arXiv:2401.00595.","DOI":"10.1162\/tacl_a_00681"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-017-9526-0"},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3379597.3387446"},{"issue":"4","key":"e_1_3_2_43_2","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1007\/s10664-021-09960-w","article-title":"Assessment of off-the-shelf SE-specific sentiment analysis tools: An extended replication study","volume":"26","author":"Novielli Nicole","year":"2021","unstructured":"Nicole Novielli, Fabio Calefato, Filippo Lanubile, and Alexander Serebrenik. 2021. Assessment of off-the-shelf SE-specific sentiment analysis tools: An extended replication study. Empirical Software Engineering 26, 4 (2021), 77.","journal-title":"Empirical Software Engineering"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3196398.3196403"},{"key":"e_1_3_2_45_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2022.107018"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/2901739.2903505"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.5555\/1953048.2078195"},{"key":"e_1_3_2_48_2","first-page":"11054","article-title":"True few-shot learning with language models","volume":"34","author":"Perez Ethan","year":"2021","unstructured":"Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. In Advances in Neural Information Processing Systems, Vol. 34, 11054\u201311070.","journal-title":"Advances in Neural Information Processing Systems,"},{"issue":"2","key":"e_1_3_2_49_2","first-page":"334","article-title":"Comparative analysis of decision tree classification algorithms","volume":"3","author":"Priyam Anuja","year":"2013","unstructured":"Anuja Priyam, Gupta R. Abhijeeta, Anju Rathee, and Saurabh Srivastava. 2013. Comparative analysis of decision tree classification algorithms. 
International Journal of Current Engineering and Technology 3, 2 (2013), 334\u2013337.","journal-title":"International Journal of Current Engineering and Technology"},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411763.3451760"},{"key":"e_1_3_2_51_2","first-page":"41","article-title":"An empirical study of the naive Bayes classifier","volume":"3","author":"Rish Irina","year":"2001","unstructured":"Irina Rish. 2001. An empirical study of the naive Bayes classifier. In Proceedings of the IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, Vol. 3, 41\u201346.","journal-title":"Proceedings of the IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence"},{"key":"e_1_3_2_52_2","unstructured":"Victor Sanh Lysandre Debut Julien Chaumond and Thomas Wolf. 2019. DistilBERT a distilled version of BERT: Smaller faster cheaper and lighter. arXiv:1910.01108."},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1177\/0539018404047701"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W17-1101"},{"key":"e_1_3_2_55_2","unstructured":"Noam Shazeer. 2020. Glu variants improve transformer. arXiv:2002.05202."},{"key":"e_1_3_2_56_2","doi-asserted-by":"crossref","unstructured":"Nan Song Hongjie Cai Rui Xia Jianfei Yu Zhen Wu and Xinyu Dai. 2023. A sequence-to-structure approach to document-level targeted sentiment analysis. In Findings of the Association for Computational Linguistics (EMNLP \u201923) 7687\u20137698.","DOI":"10.18653\/v1\/2023.findings-emnlp.515"},{"key":"e_1_3_2_57_2","unstructured":"Jianlin Su Yu Lu Shengfeng Pan Ahmed Murtadha Bo Wen and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv:2104.09864."},{"key":"e_1_3_2_58_2","unstructured":"Rohan Taori Ishaan Gulrajani Tianyi Zhang Yann Dubois Xuechen Li Carlos Guestrin Percy Liang and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. 
Retrieved from https:\/\/github.com\/tatsu-lab\/stanford_alpaca"},{"key":"e_1_3_2_59_2","first-page":"34","article-title":"Monitoring sentiment in open source mailing lists: Exploratory study on the apache ecosystem","volume":"14","author":"Tourani Parastou","year":"2014","unstructured":"Parastou Tourani, Yujuan Jiang, and Bram Adams. 2014. Monitoring sentiment in open source mailing lists: Exploratory study on the apache ecosystem. In CASCON, Vol. 14, 34\u201344.","journal-title":"CASCON"},{"key":"e_1_3_2_60_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar Aurelien Rodriguez Armand Joulin Edouard Grave and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv:2302.13971."},{"key":"e_1_3_2_61_2","unstructured":"Hugo Touvron Louis Martin Kevin Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic Sergey Edunov and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. 
arXiv:2307.09288."},{"key":"e_1_3_2_62_2","first-page":"5998","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.), Vol. 30, 5998\u20136008.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/2884781.2884818"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.emnlp-demos.6"},{"key":"e_1_3_2_65_2","unstructured":"Can Xu Qingfeng Sun Kai Zheng Xiubo Geng Pu Zhao Jiazhan Feng Chongyang Tao and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv:2304.12244."},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3597503.3623326"},{"key":"e_1_3_2_67_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657719"},{"key":"e_1_3_2_68_2","unstructured":"Zhilin Yang Zihang Dai Yiming Yang Jaime Carbonell Russ R. Salakhutdinov and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems. Hanna M. Wallach Hugo Larochelle Alina Beygelzimer Florence d\u2019Alch\u00e9-Buc Emily B. Fox and Roman Garnett (Eds.) Vol. 32 5754\u20135764."},{"key":"e_1_3_2_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657720"},{"key":"e_1_3_2_70_2","unstructured":"Susan Zhang Stephen Roller Naman Goyal Mikel Artetxe Moya Chen Shuohui Chen Christopher Dewan Mona Diab Xian Li Xi Victoria Lin Todor Mihaylov Myle Ott Sam Shleifer Kurt Shuster Daniel Simig Punit Singh Koura Anjali Sridhar Tianlu Wang and Luke Zettlemoyer. 2022. 
OPT: Open pre-trained transformer language models. arXiv:2205.01068."},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1145\/3524610.3527916"},{"key":"e_1_3_2_72_2","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1109\/ICSME46990.2020.00017","volume-title":"Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME \u201920).","author":"Zhang Ting","year":"2020","unstructured":"Ting Zhang, Bowen Xu, Ferdian Thung, Stefanus Agus Haryono, David Lo, and Lingxiao Jiang. 2020. Sentiment analysis for software engineering: How far can pre-trained transformer models go? In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME \u201920). IEEE, 70\u201380."},{"key":"e_1_3_2_73_2","doi-asserted-by":"crossref","unstructured":"Wenxuan Zhang Yue Deng Bing Liu Sinno Jialin Pan and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv:2305.15005.","DOI":"10.18653\/v1\/2024.findings-naacl.246"},{"key":"e_1_3_2_74_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2022.3230975"},{"key":"e_1_3_2_75_2","doi-asserted-by":"crossref","first-page":"142","DOI":"10.1109\/ICPC.2013.6613842","volume-title":"Proceedings of the 2013 21st International Conference on Program Comprehension (ICPC \u201913).","author":"Zhang Yingying","year":"2013","unstructured":"Yingying Zhang and Daqing Hou. 2013. Extracting problematic API features from forum discussions. In Proceedings of the 2013 21st International Conference on Program Comprehension (ICPC \u201913). 
IEEE, 142\u2013151."},{"key":"e_1_3_2_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/3639476.3639762"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3697009","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3697009","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T18:43:15Z","timestamp":1750272195000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3697009"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,23]]},"references-count":75,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3697009"],"URL":"https:\/\/doi.org\/10.1145\/3697009","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,23]]},"assertion":[{"value":"2023-10-19","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-09-04","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-02-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}