{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T04:37:14Z","timestamp":1775018234956,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":27,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T00:00:00Z","timestamp":1746662400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,5,8]]},"DOI":"10.1145\/3701716.3717814","type":"proceedings-article","created":{"date-parts":[[2025,5,23]],"date-time":"2025-05-23T16:06:11Z","timestamp":1748016371000},"page":"1605-1613","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["LLM Shots: Best Fired at System or User Prompts?"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0003-7620-0105","authenticated-orcid":false,"given":"Umut","family":"Halil","sequence":"first","affiliation":[{"name":"Huawei Research Ltd., Edinburgh, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-7524-9487","authenticated-orcid":false,"given":"Jin","family":"Huang","sequence":"additional","affiliation":[{"name":"Huawei Research Ltd., Edinburgh, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3392-3162","authenticated-orcid":false,"given":"Damien","family":"Graux","sequence":"additional","affiliation":[{"name":"Huawei Research Ltd., Edinburgh, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9779-2088","authenticated-orcid":false,"given":"Jeff Z.","family":"Pan","sequence":"additional","affiliation":[{"name":"University of Edinburgh, Edinburgh, United Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2025,5,23]]},"reference":[{"key":"e_1_3_2_2_1_1","unstructured":"Eshaan Agarwal Joykirat Singh Vivek Dani Raghav Magazine Tanuja Ganu and Akshay Nambi. 2024a. PromptWizard: Task-Aware Prompt Optimization Framework. arxiv: 2405.18369 [cs.CL] https:\/\/arxiv.org\/abs\/2405.18369"},{"key":"e_1_3_2_2_2_1","unstructured":"Rishabh Agarwal Avi Singh Lei M. Zhang Bernd Bohnet Luis Rosias Stephanie Chan Biao Zhang Ankesh Anand Zaheer Abbas Azade Nova John D. Co-Reyes Eric Chu Feryal Behbahani Aleksandra Faust and Hugo Larochelle. 2024b. Many-Shot In-Context Learning. arxiv: 2404.11018"},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6239"},{"key":"e_1_3_2_2_4_1","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al. 2020. Language models are few-shot learners. Advances in neural information processing systems Vol. 33 (2020) 1877--1901."},{"key":"e_1_3_2_2_5_1","unstructured":"Manish Chandra Debasis Ganguly and Iadh Ounis. 2024. One size doesn't fit all: Predicting the number of examples for in-context learning. (2024). https:\/\/arxiv.org\/abs\/2403.06402"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","unstructured":"Manish Chandra Debasis Ganguly and Iadh Ounis. 2025. One size doesn't fit all: Predicting the Number of Examples for In-Context Learning. 
https:\/\/doi.org\/10.48550\/arXiv.2403.06402 arXiv:2403.06402.","DOI":"10.48550\/arXiv.2403.06402"},{"key":"e_1_3_2_2_7_1","volume-title":"BoolQ: Exploring the surprising difficulty of natural yes\/no questions. arXiv preprint arXiv:1905.10044","author":"Clark Christopher","year":"2019","unstructured":"Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes\/no questions. arXiv preprint arXiv:1905.10044 (2019)."},{"key":"e_1_3_2_2_8_1","unstructured":"Jia He Mukund Rungta David Koleczek Arshdeep Sekhon Franklin X Wang and Sadid Hasan. 2024. Does Prompt Formatting Have Any Impact on LLM Performance? arxiv: 2411.10541 [cs.CL] https:\/\/arxiv.org\/abs\/2411.10541"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"crossref","unstructured":"Roee Hendel Mor Geva and Amir Globerson. 2023. In-Context Learning Creates Task Vectors. arxiv: 2310.15916 [cs.CL] https:\/\/arxiv.org\/abs\/2310.15916","DOI":"10.18653\/v1\/2023.findings-emnlp.624"},{"key":"e_1_3_2_2_10_1","volume-title":"Mistral 7B. arXiv preprint arXiv:2310.06825","author":"Jiang Albert Q","year":"2023","unstructured":"Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7B. arXiv:2310.06825 (2023)."},{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE48619.2023.00194"},{"key":"e_1_3_2_2_12_1","unstructured":"Mukai Li Shansan Gong Jiangtao Feng Yiheng Xu Jun Zhang Zhiyong Wu and Lingpeng Kong. 2023. In-Context Learning with Many Demonstration Examples. arxiv: 2302.04931 [cs.CL] https:\/\/arxiv.org\/abs\/2302.04931"},{"key":"e_1_3_2_2_13_1","unstructured":"Nelson F. Liu Kevin Lin John Hewitt Ashwin Paranjape Michele Bevilacqua Fabio Petroni and Percy Liang. 2023a. Lost in the Middle: How Language Models Use Long Contexts. arxiv: 2307.03172 [cs.CL] https:\/\/arxiv.org\/abs\/2307.03172"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560815"},{"key":"e_1_3_2_2_15_1","volume-title":"Large Language Models are Few-Shot Health Learners. arXiv preprint arXiv:2305.15525","author":"Liu Xin","year":"2023","unstructured":"Xin Liu, Daniel McDuff, Geza Kovacs, Isaac Galatzer-Levy, Jacob Sunshine, Jiening Zhan, Ming-Zher Poh, Shun Liao, Paolo Di Achille, and Shwetak Patel. 2023b. Large Language Models are Few-Shot Health Learners. arxiv: 2305.15525"},{"key":"e_1_3_2_2_16_1","unstructured":"Long Ouyang Jeff Wu Xu Jiang Diogo Almeida Carroll L. Wainwright Pamela Mishkin Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens Amanda Askell Peter Welinder Paul Christiano Jan Leike and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. arxiv: 2203.02155"},{"key":"e_1_3_2_2_17_1","volume-title":"SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250","author":"Rajpurkar Pranav","year":"2016","unstructured":"Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016)."},{"key":"e_1_3_2_2_18_1","volume-title":"Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822","author":"Rajpurkar Pranav","year":"2018","unstructured":"Pranav Rajpurkar, Robin Jia, and Percy Liang. 
2018. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822 (2018)."},{"key":"e_1_3_2_2_19_1","unstructured":"Laria Reynolds and Kyle McDonell. 2021. Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. arxiv: 2102.07350 [cs.CL] https:\/\/arxiv.org\/abs\/2102.07350"},{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00280"},{"key":"e_1_3_2_2_21_1","unstructured":"Hugo Touvron Louis Martin Kevin Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)."},{"key":"e_1_3_2_2_22_1","unstructured":"Xinyi Wang Wanrong Zhu Michael Saxon Mark Steyvers and William Yang Wang. 2024b. Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. arxiv: 2301.11916 [cs.CL] https:\/\/arxiv.org\/abs\/2301.11916"},{"key":"e_1_3_2_2_23_1","volume-title":"MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574","author":"Wang Yubo","year":"2024","unstructured":"Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. 2024a. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574 (2024)."},{"key":"e_1_3_2_2_24_1","unstructured":"Sang Michael Xie Aditi Raghunathan Percy Liang and Tengyu Ma. 2022. An Explanation of In-context Learning as Implicit Bayesian Inference. arxiv: 2111.02080 [cs.CL] https:\/\/arxiv.org\/abs\/2111.02080"},{"key":"e_1_3_2_2_25_1","unstructured":"Benfeng Xu An Yang Junyang Lin Quan Wang Chang Zhou Yongdong Zhang and Zhendong Mao. 2023. ExpertPrompting: Instructing Large Language Models to be Distinguished Experts. arxiv: 2305.14688 https:\/\/arxiv.org\/abs\/2305.14688"},{"key":"e_1_3_2_2_26_1","unstructured":"Derek Xu Tong Xie Botao Xia Haoyu Li Yunsheng Bai Yizhou Sun and Wei Wang. 2024. Does Few-Shot Learning Help LLM Performance in Code Synthesis? arxiv: 2412.02906 [cs.SE] https:\/\/arxiv.org\/abs\/2412.02906"},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"crossref","unstructured":"Mingqian Zheng Jiaxin Pei Lajanugen Logeswaran Moontae Lee and David Jurgens. 2024. When “A Helpful Assistant” Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models. 
arxiv: 2311.10054 [cs.CL] https:\/\/arxiv.org\/abs\/2311.10054","DOI":"10.18653\/v1\/2024.findings-emnlp.888"}],"event":{"name":"WWW '25: The ACM Web Conference 2025","location":"Sydney NSW Australia","acronym":"WWW '25","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"]},"container-title":["Companion Proceedings of the ACM on Web Conference 2025"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3717814","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3701716.3717814","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T17:48:38Z","timestamp":1759859318000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3717814"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":27,"alternative-id":["10.1145\/3701716.3717814","10.1145\/3701716"],"URL":"https:\/\/doi.org\/10.1145\/3701716.3717814","relation":{},"subject":[],"published":{"date-parts":[[2025,5,8]]},"assertion":[{"value":"2025-05-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}