{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T15:42:37Z","timestamp":1773330157195,"version":"3.50.1"},"reference-count":78,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2024,9,27]],"date-time":"2024-09-27T00:00:00Z","timestamp":1727395200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62262031, 62376110"],"award-info":[{"award-number":["62262031, 62376110"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"National Social Science Foundation Major Bidding Project","award":["20 & ZD068"],"award-info":[{"award-number":["20 & ZD068"]}]},{"DOI":"10.13039\/501100009102","name":"Jiangxi Provincial Department of Education","doi-asserted-by":"crossref","award":["GJJ2200303"],"award-info":[{"award-number":["GJJ2200303"]}],"id":[{"id":"10.13039\/501100009102","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Thousand Talents Plan of Jiangxi Province","award":["jxsq2019201124"],"award-info":[{"award-number":["jxsq2019201124"]}]},{"name":"Jiangxi Provincial Natural Science Foundation for Distinguished Young Scholars","award":["20224ACB212004"],"award-info":[{"award-number":["20224ACB212004"]}]},{"name":"Natural Science Foundation of Jiangxi, China","award":["20232ACB212001 and 20224BAB212001"],"award-info":[{"award-number":["20232ACB212001 and 20224BAB212001"]}]},{"name":"Young Elite Scientists Sponsorship Program by Jiangxi Association for Science and Technology","award":["2023QT12"],"award-info":[{"award-number":["2023QT12"]}]},{"DOI":"10.13039\/100017355","name":"Graduate Innovative Special Fund Projects of Jiangxi Province","doi-asserted-by":"crossref","award":["YC2022-s258"],"award-info":[{"award-number":["YC2022-s258"]}],"id":[{"id":"10.13039\/100017355","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2024,9,30]]},"abstract":"<jats:p>Dataflow graphs (DFGs) capture definitions (defs) and uses across program blocks, which is a fundamental program representation for program analysis, testing and maintenance. However, dynamically typed programming languages like Python present implicit dataflow issues that make it challenging to determine def-use flow information at compile time. Static analysis methods like Soot and WALA are inadequate for handling these issues, and manually enumerating comprehensive heuristic rules is impractical. Large pre-trained language models (LLMs) offer a potential solution, as they have powerful language understanding and pattern matching abilities, allowing them to predict implicit dataflow by analyzing code context and relationships between variables, functions, and statements in code. We propose leveraging LLMs\u2019 in-context learning ability to learn implicit rules and patterns from code representation and contextual information to solve implicit dataflow problems. To further enhance the accuracy of LLMs, we design a five-step chain of thought (CoT) and break it down into an Artificial Intelligence (AI) chain, with each step corresponding to a separate AI unit to generate accurate DFGs for Python code. 
Our approach\u2019s performance is thoroughly assessed, demonstrating the effectiveness of each AI unit in the AI Chain. Compared to static analysis, our method achieves 82% higher def coverage and 58% higher use coverage in DFG generation on implicit dataflow. We also prove the indispensability of each unit in the AI Chain. Overall, our approach offers a promising direction for building software engineering tools by utilizing foundation models, eliminating significant engineering and maintenance effort, but focusing on identifying problems for AI to solve.<\/jats:p>","DOI":"10.1145\/3672458","type":"journal-article","created":{"date-parts":[[2024,6,12]],"date-time":"2024-06-12T21:28:32Z","timestamp":1718227712000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Revealing the Unseen: AI Chain on LLMs for Predicting Implicit Dataflows to Generate Dataflow Graphs in Dynamically Typed Code"],"prefix":"10.1145","volume":"33","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8877-4267","authenticated-orcid":false,"given":"Qing","family":"Huang","sequence":"first","affiliation":[{"name":"Jiangxi Normal University, School of Computer Information Engineering, Nanchang, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-2009-5637","authenticated-orcid":false,"given":"Zhiwen","family":"Luo","sequence":"additional","affiliation":[{"name":"Jiangxi Normal University, School of Computer Information Engineering, Nanchang, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7663-1421","authenticated-orcid":false,"given":"Zhenchang","family":"Xing","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Eveleigh, Australia and Australian National University, College of Engineering and Computer Science, Canberra, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1719-3358","authenticated-orcid":false,"given":"Jinshan","family":"Zeng","sequence":"additional","affiliation":[{"name":"Jiangxi Normal University, School of Computer Information Engineering, Nanchang, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2700-7478","authenticated-orcid":false,"given":"Jieshan","family":"Chen","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Canberra, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2540-973X","authenticated-orcid":false,"given":"Xiwei","family":"Xu","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Sydney, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5052-5919","authenticated-orcid":false,"given":"Yong","family":"Chen","sequence":"additional","affiliation":[{"name":"Jiangxi Normal University, School of Computer Information Engineering, Nanchang, China"}]}],"member":"320","published-online":{"date-parts":[[2024,9,27]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/120807.120820"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/2187671.2187672"},{"issue":"8","key":"e_1_3_2_4_2","first-page":"2055","article-title":"Automatic software testing framework for all def-use with genetic algorithm","volume":"8","author":"Khan Rijwan","year":"2019","unstructured":"Rijwan Khan and Akhilesh Kumar Srivastava. 2019. Automatic software testing framework for all def-use with genetic algorithm. 
International Journal of Innovative Technology and Exploring Engineering (IJITEE) 8, 8 (2019), 2055\u20132060.","journal-title":"International Journal of Innovative Technology and Exploring Engineering (IJITEE)"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3020266"},{"key":"e_1_3_2_6_2","first-page":"46","volume-title":"Proceedings of the 18th PhD Mini-Symposium","author":"Ujhelyi Zolt\u00e1n","year":"2011","unstructured":"Zolt\u00e1n Ujhelyi and D\u00e1niel Varr\u00f3. 2011. Def-use analysis of model transformation programs with program slicing. In Proceedings of the 18th PhD Mini-Symposium, Budapest University of Technology and Economics, 46\u201349."},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/CGO.2011.5764696"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TrustCom56396.2022.00042"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503222.3507764"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/1806596.1806601"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-018-9637-2"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3524842.3528467"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMPSAC.2014.30"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/MS.2014.101"},{"key":"e_1_3_2_15_2","volume-title":"LAPS: A General Framework for Modeling Alias Management Using Access Permission Sets","author":"Castegren Elias","year":"2012","unstructured":"Elias Castegren. 2012. LAPS: A General Framework for Modeling Alias Management Using Access Permission Sets. Uppsala University."},{"key":"e_1_3_2_16_2","first-page":"178","volume-title":"Proceedings of Machine Learning and Systems","volume":"1","author":"Agrawal Akshay","year":"2019","unstructured":"Akshay Agrawal, Akshay Modi, Alexandre Passos, Allen Lavoie, Ashish Agarwal, Asim Shankar, Igor Ganichev, Josh Levenberg, Mingsheng Hong, Rajat Monga, and Shanqing Cai. 2019. TensorFlow eager: A multi-stage, python-embedded DSL for machine learning. Proceedings of Machine Learning and Systems 1 (2019), 178\u2013189."},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/1925805.1925818"},{"key":"e_1_3_2_18_2","unstructured":"IBM. n.d. WALA - Static Analysis Framework for Java. Retrieved from http:\/\/wala.sourceforge.net\/"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/WCRE.2013.6671303"},{"key":"e_1_3_2_20_2","first-page":"1","volume-title":"Proceedings of the Cetus Users and Compiler Infrastructure Workshop, in Conjunction with PACT","volume":"2011","author":"Quinlan Dan","year":"2011","unstructured":"Dan Quinlan and Chunhua Liao. 2011. The rose source-to-source compiler infrastructure. In Proceedings of the Cetus Users and Compiler Infrastructure Workshop, in Conjunction with PACT, Vol. 2011. Citeseer, 1."},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/1411203.1411233"},{"key":"e_1_3_2_22_2","first-page":"630","volume-title":"Proceedings of the 2013 Science and Information Conference","author":"Mehboob Fozia","year":"2013","unstructured":"Fozia Mehboob, Atif Aftab Ahmed Jilani, and M Abbass. 2013. State based testing using swarm intelligence. In Proceedings of the 2013 Science and Information Conference, IEEE. 
630\u2013635."},{"issue":"1","key":"e_1_3_2_23_2","first-page":"64","article-title":"A new software data-flow testing approach via ant colony algorithms","volume":"1","author":"Ghiduk Ahmed S.","year":"2010","unstructured":"Ahmed S. Ghiduk. 2010. A new software data-flow testing approach via ant colony algorithms. Universal Journal of Computer Science and Engineering Technology 1, 1 (2010), 64\u201372.","journal-title":"Universal Journal of Computer Science and Engineering Technology"},{"key":"e_1_3_2_24_2","doi-asserted-by":"crossref","unstructured":"Yue Wang Weishi Wang Shafiq Joty and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv:2109.00859. Retrieved from https:\/\/arxiv.org\/abs\/2109.00859.","DOI":"10.18653\/v1\/2021.emnlp-main.685"},{"key":"e_1_3_2_25_2","first-page":"1877","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","volume":"33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33. 1877\u20131901."},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASE51524.2021.9678927"},{"key":"e_1_3_2_27_2","unstructured":"OpenAI. 2023. Gpt-4 Technical Report."},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510050"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3551349.3556912"},{"key":"e_1_3_2_30_2","unstructured":"Qing Huang Dianshu Liao Zhenchang Xing Zhiqiang Yuan Qinghua Lu Xiwei Xu and Jiaxing Lu. 2022. SE factual knowledge in frozen giant code model: A study on FQN and its retrieval. arXiv:2212.08221. Retrieved from https:\/\/arxiv.org\/abs\/2212.08221."},{"key":"e_1_3_2_31_2","unstructured":"Rishi Bommasani Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill Erik Brynjolfsson Shyamal Buch Dallas Card Rodrigo Castellon Niladri Chatterji Annie Chen Kathleen Creel Jared Quincy Davis Dora Demszky Chris Donahue Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin Ethayarajh Li Fei-Fei Chelsea Finn Trevor Gale Lauren Gillespie Karan Goel Noah Goodman Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas Icard Saahil Jain Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling Fereshte Khani Omar Khattab Pang Wei Koh Mark Krass Ranjay Krishna Rohith Kuditipudi Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. 
Manning Suvir Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan Deepak Narayanan Ben Newman Allen Nie Juan Carlos Niebles Hamed Nilforoshan Julian Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park Chris Piech Eva Portelance Christopher Potts Aditi Raghunathan Rob Reich Hongyu Ren Frieda Rong Yusuf Roohani Camilo Ruiz Jack Ryan Christopher R\u00e9 Dorsa Sadigh Shiori Sagawa Keshav Santhanam Andy Shih Krishnan Srinivasan Alex Tamkin Rohan Taori Armin W. Thomas Florian Tram\u00e8r Rose E. Wang William Wang Bohan Wu Jiajun Wu Yuhuai Wu Sang Michael Xie Michihiro Yasunaga Jiaxuan You Matei Zaharia Michael Zhang Tianyi Zhang Xikun Zhang Yuhui Zhang Lucia Zheng Kaitlyn Zhou and Percy Liang. On the opportunities and risks of foundation models. arXiv:2108.07258. Retrieved from https:\/\/arxiv.org\/abs\/2108.07258."},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455856"},{"key":"e_1_3_2_33_2","unstructured":"Antonia Creswell and Murray Shanahan. 2022. Faithful reasoning using large language models. arXiv:2208.14271. Retrieved from https:\/\/arxiv.org\/abs\/2208.14271."},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3571730"},{"key":"e_1_3_2_35_2","unstructured":"Shunyu Yao Jeffrey Zhao Dian Yu Nan Du Izhak Shafran Karthik Narasimhan and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. arXiv:2210.03629. Retrieved from https:\/\/arxiv.org\/abs\/2210.03629."},{"key":"e_1_3_2_36_2","unstructured":"Xuezhi Wang Jason Wei Dale Schuurmans Quoc Le Ed Chi and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv:2203.11171. Retrieved from https:\/\/arxiv.org\/abs\/2203.11171."},{"key":"e_1_3_2_37_2","unstructured":"Jason Wei Xuezhi Wang Dale Schuurmans Maarten Bosma Ed Chi Quoc Le and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903. Retrieved from https:\/\/arxiv.org\/abs\/2201.11903."},{"key":"e_1_3_2_38_2","unstructured":"Justin Reppert Ben Rachbach Charlie George Luke Stebbing Jungwon Byun Maggie Appleton and Andreas Stuhlm\u00fcller. 2023. Iterated decomposition: Improving science Q&A by supervising reasoning processes. arXiv:2301.01751. Retrieved from https:\/\/arxiv.org\/abs\/2301.01751."},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCIS.2012.91"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3519729"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517582"},{"key":"e_1_3_2_42_2","unstructured":"Jiachang Liu Dinghan Shen Yizhe Zhang Bill Dolan Lawrence Carin and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? arXiv:2101.06804. Retrieved from https:\/\/arxiv.org\/abs\/2101.06804."},{"key":"e_1_3_2_43_2","doi-asserted-by":"crossref","unstructured":"Sewon Min Xinxi Lyu Ari Holtzman Mikel Artetxe Mike Lewis Hannaneh Hajishirzi and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv:2202.12837. Retrieved from https:\/\/arxiv.org\/abs\/2202.12837.","DOI":"10.18653\/v1\/2022.emnlp-main.759"},{"key":"e_1_3_2_44_2","unstructured":"Benjamin Antunes and David R. C. Hill. 2024. Reproducibility energy efficiency and performance of pseudorandom number generators in machine learning: A comparative study of python numpy tensorflow and pytorch implementations. arXiv:2401.17345. 
Retrieved from https:\/\/arxiv.org\/abs\/2401.17345."},{"key":"e_1_3_2_45_2","unstructured":"CFG-Generator. 2020. Retrieved from https:\/\/github.com\/Tiankai-Jiang\/CFG-Generator"},{"key":"e_1_3_2_46_2","unstructured":"Baptiste Rozi\u00e8re Jonas Gehring Fabian Gloeckle Sten Sootla Itai Gat Xiaoqing Ellen Tan Yossi Adi Jingyu Liu Romain Sauvestre Tal Remez J\u00e9r\u00e9my Rapin Artyom Kozhevnikov Ivan Evtimov Joanna Bitton Manish Bhatt Cristian Canton Ferrer Aaron Grattafiori Wenhan Xiong Alexandre D\u00e9fossez Jade Copet Faisal Azhar Hugo Touvron Louis Martin Nicolas Usunier Thomas Scialom and Gabriel Synnaeve. 2024. Code Llama: Open foundation models for code. arXiv:2308.12950. Retrieved from https:\/\/arxiv.org\/abs\/2308.12950."},{"key":"e_1_3_2_47_2","unstructured":"Ruchir Puri David S. Kung Geert Janssen Wei Zhang Giacomo Domeniconi Vladimir Zolotov Julian Dolby Jie Chen Mihir Choudhury Lindsey Decker Veronika Thost Luca Buratti Saurabh Pujar Shyam Ramji Ulrich Finkler Susan Malaika and Frederick Reiss. 2021. CodeNet: A large-scale AI for code dataset for learning a diversity of coding tasks. arXiv:2105.12655. Retrieved from https:\/\/arxiv.org\/abs\/2105.12655."},{"key":"e_1_3_2_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSME.2018.00028"},{"key":"e_1_3_2_49_2","first-page":"97","volume-title":"Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering","author":"Wang Chong","year":"2019","unstructured":"Chong Wang, Xin Peng, Mingwei Liu, Zhenchang Xing, Xue Bai, Bing Xie, and Tuo Wang. 2019. A learning-based approach for automatic construction of domain glossary from source code and documentation. Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 97\u2013108."},{"key":"e_1_3_2_50_2","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416628"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-94-017-1404-4"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.11613\/BM.2012.031"},{"key":"e_1_3_2_53_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Retrieved from https:\/\/www.cs.ubc.ca\/~amuham01\/LING530\/papers\/radford2018improving.pdf."},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.nlp.2023.100048"},{"key":"e_1_3_2_55_2","first-page":"15908","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","volume":"34","author":"Han Kai","year":"2021","unstructured":"Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. 2021. Transformer in transformer. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 34. 15908\u201315919."},{"key":"e_1_3_2_56_2","article-title":"GPT-4\u2019s leaked details shed light on its massive scale and impressive architecture","volume":"11","author":"Yalalov Damir","year":"2023","unstructured":"Damir Yalalov and D. Myakin. 2023. GPT-4\u2019s leaked details shed light on its massive scale and impressive architecture. Metaverse Post, 11.","journal-title":"Metaverse Post"},{"key":"e_1_3_2_57_2","first-page":"1","volume":"10","author":"Patel Dylan","year":"2023","unstructured":"Dylan Patel and Gerald Wong. 2023. GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE. 
Demystifying GPT-4: The Engineering Tradeoffs That Led OpenAI to Their Architecture. SemiAnalysis, Vol. 10, 1\u201317.","journal-title":"GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE. Demystifying GPT-4: The Engineering Tradeoffs That Led OpenAI to Their Architecture"},{"key":"e_1_3_2_58_2","first-page":"669","volume-title":"Proceedings of the 29th International Conference on Computational Linguistics","author":"Lee Young-Jun","year":"2022","unstructured":"Young-Jun Lee, Chae-Gyun Lim, and Ho-Jin Choi. 2022. Does GPT-3 generate empathetic dialogues? A novel in-context example selection method and automatic evaluation metric for empathetic dialogue generation. In Proceedings of the 29th International Conference on Computational Linguistics. 669\u2013683."},{"key":"e_1_3_2_59_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i3.20215"},{"key":"e_1_3_2_60_2","unstructured":"Jinlan Fu See-Kiong Ng Zhengbao Jiang and Pengfei Liu. 2023. GPTScore: Evaluate as you desire. arXiv:2302.04166. Retrieved from https:\/\/arxiv.org\/abs\/2302.04166."},{"key":"e_1_3_2_61_2","unstructured":"Tianhui Ma Yuan Cheng Hengshu Zhu and Hui Xiong. 2023. Large language models are not stable recommender systems. arXiv:2312.15746. Retrieved from https:\/\/arxiv.org\/abs\/2312.15746."},{"key":"e_1_3_2_62_2","unstructured":"Weixuan Wang Barry Haddow Alexandra Birch and Wei Peng. 2023. Assessing the reliability of large language model knowledge. arXiv:2310.09820. Retrieved from https:\/\/arxiv.org\/abs\/2310.09820."},{"key":"e_1_3_2_63_2","unstructured":"Renat Aksitov Chung-Ching Chang David Reitter Siamak Shakeri and Yunhsuan Sung. 2023. Characterizing attribution and fluency tradeoffs for retrieval-augmented large language models. arXiv:2302.05578. Retrieved from https:\/\/arxiv.org\/abs\/2302.05578."},{"key":"e_1_3_2_64_2","unstructured":"Ming Wang Yuanzhong Liu Xiaoming Zhang Songlian Li Yijie Huang Chi Zhang Daling Wang Shi Feng and Jigang Li. 2024. LangGPT: Rethinking structured reusable prompt design framework for LLMs from the programming language. arXiv:2402.16929. Retrieved from https:\/\/arxiv.org\/abs\/2402.16929."},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.3390\/fi15120375"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3540250.3549143"},{"key":"e_1_3_2_67_2","unstructured":"pycfg. 2020. Retrieved from https:\/\/github.com\/vrthra\/pycfg"},{"key":"e_1_3_2_68_2","unstructured":"Mark Chen Jerry Tworek Heewoo Jun Qiming Yuan Henrique Ponde de Oliveira Pinto Jared Kaplan Harri Edwards Yuri Burda Nicholas Joseph Greg Brockman Alex Ray Raul Puri Gretchen Krueger Michael Petrov Heidy Khlaaf Girish Sastry Pamela Mishkin Brooke Chan Scott Gray Nick Ryder Mikhail Pavlov Alethea Power Lukasz Kaiser Mohammad Bavarian Clemens Winter Philippe Tillet Felipe Petroski Such Dave Cummings Matthias Plappert Fotios Chantzis Elizabeth Barnes Ariel Herbert-Voss William Hebgen Guss Alex Nichol Alex Paino Nikolas Tezak Jie Tang Igor Babuschkin Suchir Balaji Shantanu Jain William Saunders Christopher Hesse Andrew N. Carr Jan Leike Josh Achiam Vedant Misra Evan Morikawa Alec Radford Matthew Knight Miles Brundage Mira Murati Katie Mayer Peter Welinder Bob McGrew Dario Amodei Sam McCandlish Ilya Sutskever and Wojciech Zaremba. Evaluating large language models trained on code. arXiv:2107.03374. Retrieved from https:\/\/arxiv.org\/abs\/2107.03374."},{"key":"e_1_3_2_69_2","unstructured":"OpenAI. 2023. OpenAI ChatGPT. 
Retrieved from https:\/\/chat.openai.com\/chat"},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE.2019.00086"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-95289-0_11"},{"key":"e_1_3_2_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP46214.2022.9833571"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-03251-7_3"},{"key":"e_1_3_2_74_2","doi-asserted-by":"crossref","unstructured":"Qing Huang Yanbang Sun Zhenchang Xing Mingming Yu Xiwei Xu and Qinghua Lu. 2023. API entity and relation joint extraction from text via dynamic prompt-tuned language model. arXiv:2301.03987. Retrieved from https:\/\/arxiv.org\/abs\/2301.03987.","DOI":"10.1145\/3607188"},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3560815"},{"key":"e_1_3_2_76_2","doi-asserted-by":"crossref","unstructured":"Timo Schick Helmut Schmid and Hinrich Sch\u00fctze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. arXiv:2010.13641. Retrieved from https:\/\/arxiv.org\/abs\/2010.13641.","DOI":"10.18653\/v1\/2020.coling-main.488"},{"key":"e_1_3_2_77_2","unstructured":"Bei Chen Fengji Zhang Anh Nguyen Daoguang Zan Zeqi Lin Jian-Guang Lou and Weizhu Chen. 2022. CodeT: Code generation with generated tests. arXiv:2207.10397. Retrieved from https:\/\/arxiv.org\/abs\/2207.10397."},{"key":"e_1_3_2_78_2","doi-asserted-by":"crossref","unstructured":"Antonio Mastropaolo Luca Pascarella Emanuela Guglielmi Matteo Ciniselli Simone Scalabrino Rocco Oliveto and Gabriele Bavota. 2023. On the robustness of code generation techniques: An empirical study on github copilot. arXiv:2302.00438. Retrieved from https:\/\/arxiv.org\/abs\/2302.00438.","DOI":"10.1109\/ICSE48619.2023.00181"},{"key":"e_1_3_2_79_2","unstructured":"Hai Dang Lukas Mecke Florian Lehmann Sven Goller and Daniel Buschek. 2022. How to prompt? Opportunities and challenges of zero-and few-shot learning for human-ai interaction in creative applications of generative models. arXiv:2209.01390. 
Retrieved from https:\/\/arxiv.org\/abs\/2209.01390."}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672458","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3672458","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:58:01Z","timestamp":1750294681000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672458"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,9,27]]},"references-count":78,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,9,30]]}},"alternative-id":["10.1145\/3672458"],"URL":"https:\/\/doi.org\/10.1145\/3672458","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,9,27]]},"assertion":[{"value":"2023-11-04","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-05-27","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-09-27","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}