{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T00:20:52Z","timestamp":1774398052904,"version":"3.50.1"},"reference-count":59,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2024,6,3]],"date-time":"2024-06-03T00:00:00Z","timestamp":1717372800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Nature Science Foundation of China","doi-asserted-by":"crossref","award":["6226201"],"award-info":[{"award-number":["6226201"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>\n            The emergence of foundation models, such as large language models (LLMs) GPT-4 and text-to-image models DALL-E, has opened up numerous possibilities across various domains. People can now use natural language (i.e., prompts) to communicate with AI to perform tasks. While people can use foundation models through chatbots (e.g., ChatGPT), chat, regardless of the capabilities of the underlying models, is not a production tool for building reusable AI services. APIs like LangChain allow for LLM-based application development but require substantial programming knowledge, thus posing a barrier. To mitigate this, we systematically review, summarise, refine and extend the concept of AI chain by incorporating the best principles and practices that have been accumulated in software engineering for decades into AI chain engineering, to systematize AI chain engineering methodology. 
We also develop a no-code integrated development environment,\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/YuCheng1106\/PromptSapper\">\n              <jats:styled-content style=\"color:#0000FF\">Prompt Sapper<\/jats:styled-content>\n            <\/jats:ext-link>\n            , which embodies these AI chain engineering principles and patterns naturally in the process of building AI chains, thereby improving the performance and quality of AI chains. With Prompt Sapper, AI chain engineers can compose prompt-based AI services on top of foundation models through chat-based requirement analysis and visual programming. Our user study evaluated and demonstrated the efficiency and correctness of Prompt Sapper.\n          <\/jats:p>","DOI":"10.1145\/3638247","type":"journal-article","created":{"date-parts":[[2023,12,21]],"date-time":"2023-12-21T11:53:05Z","timestamp":1703159585000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":31,"title":["Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains"],"prefix":"10.1145","volume":"33","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-5243-8705","authenticated-orcid":false,"given":"Yu","family":"Cheng","sequence":"first","affiliation":[{"name":"Jiangxi Normal University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2700-7478","authenticated-orcid":false,"given":"Jieshan","family":"Chen","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8877-4267","authenticated-orcid":false,"given":"Qing","family":"Huang","sequence":"additional","affiliation":[{"name":"Jiangxi Normal University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7663-1421","authenticated-orcid":false,"given":"Zhenchang","family":"Xing","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, 
Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2273-1862","authenticated-orcid":false,"given":"Xiwei","family":"Xu","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7783-5183","authenticated-orcid":false,"given":"Qinghua","family":"Lu","sequence":"additional","affiliation":[{"name":"CSIRO\u2019s Data61, Australia"}]}],"member":"320","published-online":{"date-parts":[[2024,6,3]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"OpenAI. 2023. ChatGPT Generative AI and GPT-3 Apps and use cases. https:\/\/gpt3demo.com\/. Accessed 16 June 2023."},{"key":"e_1_3_2_3_2","unstructured":"OpenAI. 2023. ChatGPT plugins. https:\/\/openai.com\/blog\/chatgpt-plugins. Accessed 16 June 2023."},{"key":"e_1_3_2_4_2","unstructured":"2023. Figma: The collaborative interface design tool. Retrieved from https:\/\/www.figma.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_5_2","unstructured":"2023. Jupyter Notebook: The Classic Notebook Interface. Retrieved from https:\/\/jupyter.org\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_6_2","unstructured":"2023. Replit: The collaborative browser based IDE. Retrieved from https:\/\/replit.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_7_2","unstructured":"2023. Sapper IDE. Retrieved from https:\/\/www.promptsapper.tech\/sapperpro\/workspace"},{"key":"e_1_3_2_8_2","unstructured":"Simran Arora Avanika Narayan Mayee F. Chen Laurel Orr Neel Guha Kush Bhatia Ines Chami and Christopher Re. 2023. Ask Me Anything: A simple strategy for prompting language models. In The 11th International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=bhUPJnS2g0X"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1995.tb02031.x"},{"key":"e_1_3_2_10_2","volume-title":"Proceedings of the PPIG","volume":"13","author":"Blackwell Alan F.","year":"2000","unstructured":"Alan F. Blackwell and Thomas R. G. Green. 2000. 
A cognitive dimensions questionnaire optimised for users. In Proceedings of the PPIG, Vol. 13. Citeseer."},{"key":"e_1_3_2_11_2","unstructured":"Rishi Bommasani Drew A. Hudson Ehsan Adeli Russ Altman Simran Arora Sydney von Arx Michael S. Bernstein Jeannette Bohg Antoine Bosselut Emma Brunskill Erik Brynjolfsson S. Buch Dallas Card Rodrigo Castellon Niladri S. Chatterji Annie S. Chen Kathleen A. Creel Jared Davis Dora Demszky Chris Donahue Moussa Doumbouya Esin Durmus Stefano Ermon John Etchemendy Kawin Ethayarajh Li Fei-Fei Chelsea Finn Trevor Gale Lauren E. Gillespie Karan Goel Noah D. Goodman Shelby Grossman Neel Guha Tatsunori Hashimoto Peter Henderson John Hewitt Daniel E. Ho Jenny Hong Kyle Hsu Jing Huang Thomas F. Icard Saahil Jain Dan Jurafsky Pratyusha Kalluri Siddharth Karamcheti Geoff Keeling Fereshte Khani O. Khattab Pang Wei Koh Mark S. Krass Ranjay Krishna Rohith Kuditipudi Ananya Kumar Faisal Ladhak Mina Lee Tony Lee Jure Leskovec Isabelle Levent Xiang Lisa Li Xuechen Li Tengyu Ma Ali Malik Christopher D. Manning Suvir P. Mirchandani Eric Mitchell Zanele Munyikwa Suraj Nair Avanika Narayan Deepak Narayanan Benjamin Newman Allen Nie Juan Carlos Niebles Hamed Nilforoshan J. F. Nyarko Giray Ogut Laurel Orr Isabel Papadimitriou Joon Sung Park Chris Piech Eva Portelance Christopher Potts Aditi Raghunathan Robert Reich Hongyu Ren Frieda Rong Yusuf H. Roohani Camilo Ruiz Jack Ryan Christopher R\u2019e Dorsa Sadigh Shiori Sagawa Keshav Santhanam Andy Shih Krishna Parasuram Srinivasan Alex Tamkin Rohan Taori Armin W. Thomas Florian Tram\u00e8r Rose E. Wang William Wang Bohan Wu Jiajun Wu Yuhuai Wu Sang Michael Xie Michihiro Yasunaga Jiaxuan You Matei A. Zaharia Michael Zhang Tianyi Zhang Xikun Zhang Yuhui Zhang Lucia Zheng Kaitlyn Zhou and Percy Liang. 2021. On the opportunities and risks of foundation models. ArXiv (2021). 
https:\/\/crfm.stanford.edu\/assets\/report.pdf"},{"key":"e_1_3_2_12_2","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D. Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel Ziegler Jeffrey Wu Clemens Winter Chris Hesse Mark Chen Eric Sigler Mateusz Litwin Scott Gray Benjamin Chess Jack Clark Christopher Berner Sam McCandlish Alec Radford Ilya Sutskever and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems H. Larochelle M. Ranzato R. Hadsell M. F. Balcan and H. Lin (Eds.). Vol. 33. Curran Associates Inc. 1877\u20131901. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"key":"e_1_3_2_13_2","unstructured":"Harrison Chase. 2023. Welcome to LangChain. Retrieved from https:\/\/python.langchain.com\/en\/latest\/index.html. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_14_2","volume-title":"Experimental and Quasi-experimental Designs for Generalized Causal Inference","author":"Cook Thomas D.","year":"2002","unstructured":"Thomas D. Cook, Donald Thomas Campbell, and William Shadish. 2002. Experimental and Quasi-experimental Designs for Generalized Causal Inference. Vol. 1195. Houghton Mifflin Boston, MA."},{"key":"e_1_3_2_15_2","unstructured":"Antonia Creswell and Murray Shanahan. 2022. Faithful reasoning using large language models. arXiv:2208.14271. Retrieved from https:\/\/arxiv.org\/abs\/2208.14271"},{"key":"e_1_3_2_16_2","unstructured":"Antonia Creswell Murray Shanahan and Irina Higgins. 2023. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The 11th International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=3Pf3Wg6o-A4"},{"key":"e_1_3_2_17_2","volume-title":"Design Patterns: Elements of Reusable Object-oriented Software","author":"Gamma Erich","year":"1995","unstructured":"Erich Gamma, Richard Helm, Ralph Johnson, Ralph E. Johnson, and John Vlissides. 1995. Design Patterns: Elements of Reusable Object-oriented Software. Pearson Deutschland GmbH."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","unstructured":"Hila Gonen Srini Iyer Terra Blevins Noah Smith and Luke Zettlemoyer. 2023. Demystifying prompts in language models via perplexity estimation. In Findings of the Association for Computational Linguistics: (EMNLP\u201923) Houda Bouamor Juan Pino and Kalika Bali (Eds.). Association for Computational Linguistics Singapore 10136\u201310148. 10.18653\/v1\/2023.findings-emnlp.679","DOI":"10.18653\/v1\/2023.findings-emnlp.679"},{"key":"e_1_3_2_19_2","unstructured":"Google. 2023. Blockly. Retrieved from https:\/\/github.com\/google\/blockly. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376670"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE-Companion58688.2023.00013"},{"key":"e_1_3_2_22_2","unstructured":"Qing Huang Zhou Zou Zhenchang Xing Zhenkang Zuo Xiwei Xu and Qinghua Lu. 2023. AI chain on large language model for unsupervised control flow graph generation for statically-typed partial code. arXiv:2306.00757 [cs.SE] https:\/\/arxiv.org\/abs\/2306.00757"},{"key":"e_1_3_2_23_2","unstructured":"ifttt. 2023. short for if this then that. Retrieved from https:\/\/ifttt.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_24_2","unstructured":"Workflow Patterns Initiative. 2023. Workflow Patterns home page. Retrieved from http:\/\/www.workflowpatterns.com\/. Accessed: 02\/04\/2023."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3503564"},{"key":"e_1_3_2_26_2","unstructured":"Subbarao Kambhampati. 2022. 
AI as an ersatz natural science. Retrieved from https:\/\/cacm.acm.org\/blogs\/blog-cacm\/261732-ai-as-an-ersatz-natural-science\/fulltext. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","unstructured":"Mehran Kazemi Najoung Kim Deepti Bhatia Xin Xu and Deepak Ramachandran. 2023. LAMBADA: Backward chaining for automated reasoning in natural language. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) Anna Rogers Jordan Boyd-Graber and Naoaki Okazaki (Eds.). Association for Computational Linguistics Toronto 6547\u20136568. 10.18653\/v1\/2023.acl-long.361","DOI":"10.18653\/v1\/2023.acl-long.361"},{"key":"e_1_3_2_28_2","first-page":"22199","volume-title":"Advances in Neural Information Processing Systems","volume":"35","author":"Kojima Takeshi","year":"2022","unstructured":"Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems. S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 22199\u201322213. Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/file\/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf"},{"key":"e_1_3_2_29_2","unstructured":"Yoav Levine Itay Dalmedigos Ori Ram Yoel Zeldes Daniel Jannai Dor Muhlgay Yoni Osin Opher Lieber Barak Lenz Shai Shalev-Shwartz Amnon Shashua Kevin Leyton-Brown and Yoav Shoham. 2022. Standing on the shoulders of giant frozen language models. arXiv:2204.10019. Retrieved from https:\/\/arxiv.org\/abs\/2204.10019"},{"key":"e_1_3_2_30_2","doi-asserted-by":"crossref","unstructured":"Aman Madaan Shuyan Zhou Uri Alon Yiming Yang and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. 
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing Yoav Goldberg Zornitsa Kozareva and Yue Zhang (Eds.). Association for Computational Linguistics Abu Dhabi United Arab Emirates 1384\u20131403. https:\/\/aclanthology.org\/2022.emnlp-main.90","DOI":"10.18653\/v1\/2022.emnlp-main.90"},{"key":"e_1_3_2_31_2","unstructured":"Bertrand Meyer. 2022. What Do ChatGPT and AI-based Automatic Program Generation Mean for the Future of Software. Retrieved from https:\/\/cacm.acm.org\/blogs\/blog-cacm\/268103-what-do-chatgpt-and-ai-based-automatic-program-generation-mean-for-the-future-of-software\/fulltext. Accessed: 02\/04\/2023."},{"key":"e_1_3_2_32_2","unstructured":"Microsoft. 2023. Microsoft AI Builder. Retrieved from https:\/\/powerautomate.microsoft.com\/zh-cn\/ai-builder\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_33_2","unstructured":"Midjourney. 2023. Midjourney. Retrieved from https:\/\/www.midjourney.com\/home\/. Accessed: 2\/04\/2023."},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","unstructured":"Swaroop Mishra Daniel Khashabi Chitta Baral Yejin Choi and Hannaneh Hajishirzi. 2022. Reframing instructional prompts to GPTk\u2019s language. In Findings of the Association for Computational Linguistics: (ACL\u201922) Smaranda Muresan Preslav Nakov and Aline Villavicencio (Eds.). Association for Computational Linguistics Dublin 589\u2013612. 10.18653\/v1\/2022.findings-acl.50","DOI":"10.18653\/v1\/2022.findings-acl.50"},{"key":"e_1_3_2_35_2","unstructured":"OpenAI. 2023. ChatGPT. Retrieved from https:\/\/chat.openai.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_36_2","unstructured":"OpenAI. 2023. GPT-4 Technical Report. (2023). arXiv:2303.08774 [cs.CL] https:\/\/arxiv.org\/abs\/2303.08774"},{"key":"e_1_3_2_37_2","unstructured":"OpenAI. 2023. Introducing ChatGPT. Retrieved from https:\/\/openai.com\/blog\/chatgpt. Accessed: 2\/04\/2023."},{"key":"e_1_3_2_38_2","unstructured":"OpenAI. 2023. Prompts as programming. 
Retrieved from https:\/\/gwern.net\/gpt-3#prompts-as-programming. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","unstructured":"Ofir Press Muru Zhang Sewon Min Ludwig Schmidt Noah Smith and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: (EMNLP\u201923) Houda Bouamor Juan Pino and Kalika Bali (Eds.). Association for Computational Linguistics Singapore 5687\u20135711. 10.18653\/v1\/2023.findings-emnlp.378","DOI":"10.18653\/v1\/2023.findings-emnlp.378"},{"issue":"8","key":"e_1_3_2_40_2","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.","journal-title":"OpenAI Blog"},{"key":"e_1_3_2_41_2","first-page":"8821","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Ramesh Aditya","year":"2021","unstructured":"Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In Proceedings of the International Conference on Machine Learning. PMLR, 8821\u20138831."},{"key":"e_1_3_2_42_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_3_2_43_2","unstructured":"Timo Schick Jane Dwivedi-Yu Zhengbao Jiang Fabio Petroni Patrick Lewis Gautier Izacard Qingfei You Christoforos Nalmpantis Edouard Grave and Sebastian Riedel. 2022. PEER: A collaborative language model. arXiv preprint arXiv:2208.11663 (2022). https:\/\/arxiv.org\/abs\/2208.11663"},{"key":"e_1_3_2_44_2","unstructured":"Noah Shinn Beck Labash and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366 (2023). 
https:\/\/arxiv.org\/abs\/2303.11366"},{"key":"e_1_3_2_45_2","doi-asserted-by":"crossref","unstructured":"Ishika Singh Valts Blukis Arsalan Mousavian Ankit Goyal Danfei Xu Jonathan Tremblay Dieter Fox Jesse Thomason and Animesh Garg. 2022. ProgPrompt: Generating situated robot task plans using large language models. In 2nd Workshop on Language and Reinforcement Learning. https:\/\/openreview.net\/forum?id=aflRdmGOhw1","DOI":"10.1109\/ICRA48891.2023.10161317"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/LRA.2018.2798300"},{"key":"e_1_3_2_47_2","unstructured":"Stork Tech, Inc. 2023. Collaboration for Hybrid Teams. https:\/\/www.stork.ai\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_48_2","unstructured":"structuredprompt. 2023. structuredprompt. Retrieved from https:\/\/structuredprompt.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_49_2","unstructured":"superbio. 2023. superbio.ai. Retrieved from https:\/\/www.superbio.ai\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_50_2","unstructured":"Xuezhi Wang Jason Wei Dale Schuurmans Quoc Le Ed Chi and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 (2022). https:\/\/arxiv.org\/abs\/2203.11171"},{"key":"e_1_3_2_51_2","first-page":"24824","volume-title":"Proceedings of the Advances in Neural Information Processing Systems","volume":"35","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the Advances in Neural Information Processing Systems. S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 24824\u201324837. 
Retrieved from https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2022\/file\/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.5555\/2543993"},{"key":"e_1_3_2_53_2","unstructured":"Wikipedia. 2022. Cooperative principle. Retrieved from https:\/\/en.wikipedia.org\/wiki\/Cooperative_principle#cite_note-6. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_54_2","unstructured":"Wix. 2023. Wix: Create a website without limits. Retrieved from https:\/\/www.wix.com\/. Accessed: 16\/06\/2023."},{"key":"e_1_3_2_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3519729"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517582"},{"key":"e_1_3_2_57_2","doi-asserted-by":"crossref","unstructured":"Kevin Yang Yuandong Tian Nanyun Peng and Dan Klein. 2022. Re3: Generating longer stories with recursive reprompting and revision. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing Yoav Goldberg Zornitsa Kozareva and Yue Zhang (Eds.). Association for Computational Linguistics Abu Dhabi 4393\u20134479. https:\/\/aclanthology.org\/2022.emnlp-main.296","DOI":"10.18653\/v1\/2022.emnlp-main.296"},{"key":"e_1_3_2_58_2","unstructured":"Seonghyeon Ye Hyeonbin Hwang Sohee Yang Hyeongu Yun Yireun Kim and Minjoon Seo. 2023. Investigating the effectiveness of task-agnostic prefix prompt for instruction following. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following. https:\/\/openreview.net\/forum?id=1TFhamIXNn"},{"key":"e_1_3_2_59_2","series-title":"Proceedings of Machine Learning Research","first-page":"12697","volume-title":"Proceedings of the 38th International Conference on Machine Learning","volume":"139","author":"Zhao Zihao","year":"2021","unstructured":"Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. 
In Proceedings of the 38th International Conference on Machine Learning(Proceedings of Machine Learning Research, Vol. 139), Marina Meila and Tong Zhang (Eds.). PMLR, 12697\u201312706. Retrieved from https:\/\/proceedings.mlr.press\/v139\/zhao21c.html"},{"key":"e_1_3_2_60_2","unstructured":"Yongchao Zhou Andrei Ioan Muresanu Ziwen Han Keiran Paster Silviu Pitis Harris Chan and Jimmy Ba. 2023. Large language models are human-level prompt engineers. In The 11th International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=92gvk82DE-"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3638247","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3638247","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:53:35Z","timestamp":1750287215000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3638247"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,3]]},"references-count":59,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3638247"],"URL":"https:\/\/doi.org\/10.1145\/3638247","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,3]]},"assertion":[{"value":"2023-06-20","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-12-03","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2024-06-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}