{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T04:22:24Z","timestamp":1769746944160,"version":"3.49.0"},"reference-count":55,"publisher":"Association for Computing Machinery (ACM)","issue":"FSE","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62232003"],"award-info":[{"award-number":["62232003"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Proc. ACM Softw. Eng."],"published-print":{"date-parts":[[2025,6,19]]},"abstract":"<jats:p>\n            Accurate method naming is crucial for code readability and maintainability. However, manually creating concise and meaningful names remains a significant challenge. To this end, in this paper, we propose an approach based on Large Language Model (LLMs) to suggest method names according to function descriptions. The key of the approach is\n            <jats:italic toggle=\"yes\">ContextCraft<\/jats:italic>\n            , an automated algorithm for generating context-rich prompts for LLM that suggests the expected method names according to the prompts. For a given query (functional description), it retrieves a few best examples whose functional descriptions have the greatest similarity with the query. From the examples, it identifies tokens that are likely to appear in the final method name as well as their likely positions, picks up pivot words that are semantically related to tokens in the according method names, and specifies the evaluation results of the LLM on the selected examples. All such outputs (tokens with probabilities and position information, pivot words accompanied by associated name tokens and similarity scores, and evaluation results) together with the query and the selected examples are then filled in a predefined prompt template, resulting in a context-rich prompt. This context-rich prompt reduces the randomness of LLMs by focusing the LLM\u2019s attention on relevant contexts, constraining the solution space, and anchoring results to meaningful semantic relationships.                Consequently, the LLM leverages this prompt to generate the expected method name, producing a more accurate and relevant suggestion. We evaluated the proposed approach with 43k real-world Java and Python methods accompanied by functional descriptions. Our evaluation results suggested that it significantly outperforms the state-of-the-art approach\n            <jats:italic toggle=\"yes\">RNN-att-Copy<\/jats:italic>\n            , improving the chance of exact match by 52% and decreasing the edit distance between generated and expected method names by 32%. 
Our evaluation results also suggest that the proposed approach works well for various LLMs, including ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, Gemini-1.5, and Llama-3.\n          <\/jats:p>","DOI":"10.1145\/3715753","type":"journal-article","created":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T15:15:34Z","timestamp":1750346134000},"page":"779-800","source":"Crossref","is-referenced-by-count":1,"title":["LLM-Based Method Name Suggestion with Automatically Generated Context-Rich Prompts"],"prefix":"10.1145","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-9987-1238","authenticated-orcid":false,"given":"Waseem","family":"Akram","sequence":"first","affiliation":[{"name":"Beijing Institute of Technology, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6404-9143","authenticated-orcid":false,"given":"Yanjie","family":"Jiang","sequence":"additional","affiliation":[{"name":"Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9371-5931","authenticated-orcid":false,"given":"Yuxia","family":"Zhang","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-5423-2827","authenticated-orcid":false,"given":"Haris Ali","family":"Khan","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3267-6801","authenticated-orcid":false,"given":"Hui","family":"Liu","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2025,6,19]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.2196\/45312"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/2786805.2786844"},{"key":"e_1_2_1_3_1","volume-title":"Proceedings of the 33rd International Conference on Machine Learning (ICML). PMLR","author":"Allamanis Miltiadis","year":"2016","unstructured":"Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A Convolutional Attention Network for Extreme Summarization of Source Code. In Proceedings of the 33rd International Conference on Machine Learning (ICML). PMLR, New York, NY, USA. 2091\u20132100. http:\/\/proceedings.mlr.press\/v48\/allamanis16.html"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3192366.3192412"},{"key":"e_1_2_1_5_1","volume-title":"International Conference on Learning Representations (ICLR). OpenReview, 1701\u20131711","author":"Alon Uri","year":"2019","unstructured":"Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. 2019. code2seq: Generating Sequences from Structured Representations of Code. In International Conference on Learning Representations (ICLR). OpenReview, 1701\u20131711."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290353"},{"key":"e_1_2_1_7_1","volume-title":"Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS). Neural Information Processing Systems Foundation","author":"Madaan Aman","year":"2023","unstructured":"Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, and Yiming Yang. 2023. SELF-REFINE: Iterative Refinement with Self-Feedback. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS). Neural Information Processing Systems Foundation, New Orleans, Louisiana, USA. 
46534\u201346594."},{"key":"e_1_2_1_8_1","volume-title":"Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of its Successes and Shortcomings. Ophthalmology Science, 2","author":"Antaki Fares","year":"2023","unstructured":"Fares Antaki, Samir Touma, Daniel Milad, Jonathan El-Khoury, and Renaud Duval. 2023. Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of its Successes and Shortcomings. Ophthalmology Science, 2 (2023), Online first."},{"key":"e_1_2_1_9_1","volume-title":"Proceedings of the 2000 International Conference on Software Maintenance. IEEE","author":"Caprile","year":"2000","unstructured":"Caprile and Tonella. 2000. Restructuring Program Identifier Names. In Proceedings of the 2000 International Conference on Software Maintenance. IEEE, Piscataway, NJ, USA. 97\u2013107."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11432-023-4127-5"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-42819-7_4"},{"key":"e_1_2_1_12_1","unstructured":"ContextCraft. 2024. ContextCraft. https:\/\/github.com\/contextcraft\/contextcraft"},{"key":"e_1_2_1_13_1","unstructured":"ContextCraft. 2024. Impact of Length. https:\/\/github.com\/contextcraft\/contextcraft\/blob\/main\/Extra_Material\/RQ4.pdf"},{"key":"e_1_2_1_14_1","unstructured":"ContextCraft. 2024. RNN-Copy-Attention Source Code. https:\/\/github.com\/contextcraft\/contextcraft\/tree\/main\/Source_Code\/RNN_Copy_Attn Accessed: 2024-09-12"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11219-006-9219-1"},{"key":"e_1_2_1_17_1","volume-title":"Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. arXiv preprint arXiv:2002.06305","author":"Dodge Jesse","year":"2020","unstructured":"Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. arXiv preprint arXiv:2002.06305, 1536\u20131547."},{"key":"e_1_2_1_18_1","volume-title":"The Llama 3 Herd of Models. arXiv preprint arXiv:2407.21783, 33, 3","author":"Dubey Abhimanyu","year":"2024","unstructured":"Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, and Angela Fan. 2024. The Llama 3 Herd of Models. arXiv preprint arXiv:2407.21783, 33, 3 (2024), 1\u201335."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2020.2976920"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_2_1_21_1","unstructured":"Python Software Foundation. 2023. inspect: Get Useful Information from Live Objects. Version 3.10. Available at: https:\/\/docs.python.org\/3\/library\/inspect.html"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCC54389.2021.9674736"},{"key":"e_1_2_1_23_1","volume-title":"Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE","author":"Gao Sa","year":"2019","unstructured":"Sa Gao, Chunyang Chen, Zhenchang Xing, Yukun Ma, Wen Song, and Shang-Wei Lin. 2019. A Neural Model for Method Name Generation from Functional Description. In Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, Piscataway, NJ, USA. 
414\u2013421."},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPC52881.2021.00027"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.5753\/jserd.2023.2582"},{"key":"e_1_2_1_26_1","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics (ACL)","author":"Guo Daya","year":"2022","unstructured":"Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics (ACL), Dublin, Ireland. 7212\u20137225."},{"key":"e_1_2_1_27_1","volume-title":"GraphCodeBERT: Pre-training Code Representations with Data Flow. In 9th International Conference on Learning Representations (ICLR 2021)","author":"Guo Daya","year":"2021","unstructured":"Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, and Shengyu Fu. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In 9th International Conference on Learning Representations (ICLR 2021). IEEE, Online. 48\u201365. https:\/\/openreview.net\/forum?id=jLoC4ez43PZ"},{"key":"e_1_2_1_28_1","volume-title":"Debugging Method Names. In European Conference on Object-Oriented Programming. Springer","author":"H\u00f8st Einar W.","year":"2009","unstructured":"Einar W. H\u00f8st and Bjarte M. \u00d8stvold. 2009. Debugging Method Names. In European Conference on Object-Oriented Programming. Springer, Berlin, Heidelberg, Germany. 294\u2013317."},{"key":"e_1_2_1_29_1","volume-title":"Proceedings of the 10th International Conference on Learning Representations. Association for Computational Learning, Virtual Event. 59\u201370","author":"Hu Edward J.","year":"2022","unstructured":"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In Proceedings of the 10th International Conference on Learning Representations. Association for Computational Learning, Virtual Event. 59\u201370. https:\/\/openreview.net\/forum?id=nZeVKeeFYf9"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00330-023-10213-1"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3548606.3560616"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASE56229.2023.00218"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPC.2006.51"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE48619.2023.00085"},{"key":"e_1_2_1_35_1","volume-title":"RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692, 30","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692, 30 (2019), arXiv:1907.11692. arxiv:1907.11692"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR52588.2021.00072"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380926"},{"key":"e_1_2_1_38_1","first-page":"1","article-title":"Generative Pre-trained Transformer 4 (GPT-4)","volume":"4","author":"OpenAI","year":"2023","unstructured":"OpenAI. 2023. 
Generative Pre-trained Transformer 4 (GPT-4). OpenAI, 4, 1 (2023), 1\u20132. https:\/\/cdn.openai.com\/papers\/gpt-4.pdf","journal-title":"OpenAI"},{"key":"e_1_2_1_39_1","unstructured":"OpenAI. 2024. New Embedding Models and API Updates. https:\/\/openai.com\/index\/new-embedding-models-and-api-updates\/"},{"key":"e_1_2_1_40_1","volume-title":"NLP Evaluation in Trouble: On the Need to Measure LLM Data Contamination for Each Benchmark. arXiv preprint arXiv:2310.18018","author":"Sainz Oscar","year":"2023","unstructured":"Oscar Sainz, Jon Ander Campos, Iker Garc\u00eda-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP Evaluation in Trouble: On the Need to Measure LLM Data Contamination for Each Benchmark. arXiv preprint, 30, 5 (2023), arXiv:2310.18018."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPEC58863.2023.10363447"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3597203"},{"key":"e_1_2_1_43_1","first-page":"16857","article-title":"MPNet: Masked and Permuted Pre-training for Language Understanding","volume":"33","author":"Song Kaitao","year":"2020","unstructured":"Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and Permuted Pre-training for Language Understanding. Advances in Neural Information Processing Systems, 33 (2020), 16857\u201316867.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_44_1","volume-title":"Gemini: A Family of Highly Capable Multimodal Models. arXiv:2312","author":"Team Gemini","year":"2023","unstructured":"Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, and Anja Hauth. 2023. Gemini: A Family of Highly Capable Multimodal Models. arXiv:2312"},{"key":"e_1_2_1_45_1","volume-title":"Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study. In 2023 IEEE\/ACM 20th International Conference on Mining Software Repositories (MSR)","author":"van Dam Tim","year":"2023","unstructured":"Tim van Dam, Maliheh Izadi, and Arie van Deursen. 2023. Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study. In 2023 IEEE\/ACM 20th International Conference on Mining Software Repositories (MSR). IEEE, Virtual Conference. 170\u2013182."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1021\/acs.jcim.7b00403"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1155\/2023"},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3294032.3294079"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3630010"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISSRE55969.2022.00041"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1"},{"key":"e_1_2_1_53_1","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics, 4965\u20134975","author":"Yu Zichun","year":"2022","unstructured":"Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2022. Automatic label sequence generation for prompting sequence-to-sequence models. Proceedings of the 29th International Conference on Computational Linguistics, 4965\u20134975. https:\/\/aclanthology.org\/2022.coling-1.440\/"},{"key":"e_1_2_1_54_1","volume-title":"Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code. 
arXiv preprint, 23","author":"Zhang Ziyin","year":"2023","unstructured":"Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, and Rui Wang. 2023. Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code. arXiv preprint, 23 (2023), arXiv:2311.07989. arxiv:2311.07989"},{"key":"e_1_2_1_55_1","volume-title":"Proceedings of the Eleventh International Conference on Learning Representations (ICLR). IEEE, 35\u201360","author":"Zhou Yongchao","year":"2023","unstructured":"Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large Language Models are Human-Level Prompt Engineers. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR). IEEE, 35\u201360. https:\/\/openreview.net\/forum?id=YdqwNaCLCx"}],"container-title":["Proceedings of the ACM on Software Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3715753","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T15:25:03Z","timestamp":1750346703000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3715753"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,19]]},"references-count":55,"journal-issue":{"issue":"FSE","published-print":{"date-parts":[[2025,6,19]]}},"alternative-id":["10.1145\/3715753"],"URL":"https:\/\/doi.org\/10.1145\/3715753","relation":{},"ISSN":["2994-970X"],"issn-type":[{"value":"2994-970X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,19]]}}}