{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T20:30:22Z","timestamp":1770496222636,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":26,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T00:00:00Z","timestamp":1746662400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,5,8]]},"DOI":"10.1145\/3701716.3717818","type":"proceedings-article","created":{"date-parts":[[2025,5,23]],"date-time":"2025-05-23T16:06:11Z","timestamp":1748016371000},"page":"2574-2578","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["LLM as Auto-Prompt Engineer: Automated NER Prompt Optimisation"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0223-659X","authenticated-orcid":false,"given":"Can","family":"Yang","sequence":"first","affiliation":[{"name":"Australian National University, Canberra, ACT, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9764-9401","authenticated-orcid":false,"given":"Bernardo","family":"Pereira Nunes","sequence":"additional","affiliation":[{"name":"Australian National University, Canberra, ACT, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7203-8399","authenticated-orcid":false,"given":"Sergio","family":"Rodr\u00edguez M\u00e9ndez","sequence":"additional","affiliation":[{"name":"Australian National University, Canberra, ACT, Australia"}]}],"member":"320","published-online":{"date-parts":[[2025,5,23]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Sergei Bogdanov Alexandre Constantin Timoth\u00e9e Bernard Benoit Crabb\u00e9 and Etienne Bernard. 2024. NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data. arXiv:2402.15343 [cs.CL] https:\/\/arxiv.org\/abs\/2402.15343","DOI":"10.18653\/v1\/2024.emnlp-main.660"},{"key":"e_1_3_2_2_2_1","unstructured":"Tom B. Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel M. Ziegler Jeffrey Wu Clemens Winter Christopher Hesse Mark Chen Eric Sigler Mateusz Litwin Scott Gray Benjamin Chess Jack Clark Christopher Berner Sam McCandlish Alec Radford Ilya Sutskever and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs.CL] https:\/\/arxiv.org\/abs\/2005.14165"},{"key":"e_1_3_2_2_3_1","unstructured":"Weizhe Chen Sven Koenig and Bistra Dilkina. 2024. RePrompt: Planning by Automatic Prompt Engineering for Large Language Models Agents. arXiv:2406.11132 [cs.CL] https:\/\/arxiv.org\/abs\/2406.11132"},{"key":"e_1_3_2_2_4_1","volume-title":"Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base. In Advances in Neural Information Processing Systems","author":"Guo Daya","year":"2018","unstructured":"Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2018. Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2018\/file\/d63fbf8c3173730f82b150c5ef38b8ff-Paper.pdf"},{"key":"e_1_3_2_2_5_1","unstructured":"Qingyan Guo Rui Wang Junliang Guo Bei Li Kaitao Song Xu Tan Guoqing Liu Jiang Bian and Yujiu Yang. 2024. Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers. arXiv:2309.08532 [cs.CL] https:\/\/arxiv.org\/abs\/2309.08532"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-acl.947"},{"key":"e_1_3_2_2_7_1","doi-asserted-by":"crossref","unstructured":"Guochao Jiang Zepeng Ding Yuchen Shi and Deqing Yang. 2024. P-ICL: Point In-Context Learning for Named Entity Recognition with Large Language Models. arXiv:2405.04960 [cs.CL] https:\/\/arxiv.org\/abs\/2405.04960","DOI":"10.1016\/j.procs.2024.04.228"},{"key":"e_1_3_2_2_8_1","unstructured":"Daan Kepel and Konstantina Valogianni. 2024. Autonomous Prompt Engineering in Large Language Models. arXiv:2407.11000 [cs.CL] https:\/\/arxiv.org\/abs\/2407.11000"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"crossref","unstructured":"Minchan Kwon Gaeun Kim Jongsuk Kim Haeil Lee and Junmo Kim. 2024. StablePrompt: Automatic Prompt Tuning using Reinforcement Learning for Large Language Models. arXiv:2410.07652 [cs.CL] https:\/\/arxiv.org\/abs\/2410.07652","DOI":"10.18653\/v1\/2024.emnlp-main.551"},{"key":"e_1_3_2_2_10_1","volume-title":"GEIC: Universal and Multilingual Named Entity Recognition with Large Language Models. arXiv:2409.11022 [cs.CL] https:\/\/arxiv.org\/abs\/2409.11022","author":"Luo Hanjun","year":"2024","unstructured":"Hanjun Luo, Yingbin Jin, Xuecheng Liu, Tong Shang, Ruizhe Chen, and Zuozhu Liu. 2024. GEIC: Universal and Multilingual Named Entity Recognition with Large Language Models. arXiv:2409.11022 [cs.CL] https:\/\/arxiv.org\/abs\/2409.11022"},{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW63382.2024.00230"},{"key":"e_1_3_2_2_12_1","unstructured":"Mirac Suzgun and Adam Tauman Kalai. 2024. Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding. arXiv:2401.12954 [cs.CL] https:\/\/arxiv.org\/abs\/2401.12954"},{"key":"e_1_3_2_2_13_1","unstructured":"Fabi\u00e1n Villena Luis Miranda and Claudio Aracena. 2024. llmNER: (Zero|Few)-Shot Named Entity Recognition Exploiting the Power of Large Language Models. arXiv:2406.04528 [cs.CL] https:\/\/arxiv.org\/abs\/2406.04528"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3269206.3271739"},{"key":"e_1_3_2_2_15_1","unstructured":"Shuhe Wang Xiaofei Sun Xiaoya Li Rongbin Ouyang Fei Wu Tianwei Zhang Jiwei Li and Guoyin Wang. 2023. GPT-NER: Named Entity Recognition via Large Language Models. arXiv:2304.10428 [cs.CL] https:\/\/arxiv.org\/abs\/2304.10428"},{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330989"},{"key":"e_1_3_2_2_17_1","doi-asserted-by":"crossref","unstructured":"Yike Wu Jiatao Zhang Nan Hu Lanling Tang Guilin Qi Jun Shao Jie Ren and Wei Song. 2024. MLDT: Multi-Level Decomposition for Complex Long-Horizon Robotic Task Planning with Open-Source Large Language Model. In Database Systems for Advanced Applications Makoto Onizuka Jae-Gil Lee Yongxin Tong Chuan Xiao Yoshiharu Ishikawa Sihem Amer-Yahia H. V. Jagadish and Kejing Lu (Eds.). Springer Nature Singapore Singapore 251--267.","DOI":"10.1007\/978-981-97-5569-1_16"},{"key":"e_1_3_2_2_18_1","unstructured":"Tingyu Xie Qi Li Yan Zhang Zuozhu Liu and Hongwei Wang. 2024. Self-Improving for Zero-Shot Named Entity Recognition with Large Language Models. arXiv:2311.08921 [cs.CL] https:\/\/arxiv.org\/abs\/2311.08921"},{"key":"e_1_3_2_2_19_1","unstructured":"Derong Xu Wei Chen Wenjun Peng Chao Zhang Tong Xu Xiangyu Zhao Xian Wu Yefeng Zheng Yang Wang and Enhong Chen. 2024. Large Language Models for Generative Information Extraction: A Survey. arXiv:2312.17617 [cs.CL] https:\/\/arxiv.org\/abs\/2312.17617"},{"key":"e_1_3_2_2_20_1","volume-title":"LTNER: Large Language Model Tagging for Named Entity Recognition with Contextualized Entity Marking. arXiv:2404.05624 [cs.CL] https:\/\/arxiv.org\/abs\/2404.05624","author":"Yan Faren","year":"2024","unstructured":"Faren Yan, Peng Yu, and Xin Chen. 2024. LTNER: Large Language Model Tagging for Named Entity Recognition with Contextualized Entity Marking. arXiv:2404.05624 [cs.CL] https:\/\/arxiv.org\/abs\/2404.05624"},{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3677389.3702517"},{"key":"e_1_3_2_2_22_1","unstructured":"Urchade Zaratiana Nadi Tomeh Pierre Holat and Thierry Charnois. 2023. GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer. arXiv:2311.08526 [cs.CL] https:\/\/arxiv.org\/abs\/2311.08526"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-naacl.3"},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.32604\/cmes.2023.031513"},{"key":"e_1_3_2_2_25_1","unstructured":"Shijia Zhou Leonie Weissweiler Taiqi He Hinrich Sch\u00fctze David R. Mortensen and Lori Levin. 2024. Constructions Are So Difficult That Even Large Language Models Get Them Right for the Wrong Reasons. arXiv:2403.17760 [cs.CL] https:\/\/arxiv.org\/abs\/2403.17760"},{"key":"e_1_3_2_2_26_1","volume-title":"Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba.","author":"Zhou Yongchao","year":"2023","unstructured":"Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large Language Models Are Human-Level Prompt Engineers. arXiv:2211.01910 [cs.LG] https:\/\/arxiv.org\/abs\/2211.01910"}],"event":{"name":"WWW '25: The ACM Web Conference 2025","location":"Sydney NSW Australia","acronym":"WWW '25","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"]},"container-title":["Companion Proceedings of the ACM on Web Conference 2025"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3717818","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3701716.3717818","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T17:49:50Z","timestamp":1759859390000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3717818"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":26,"alternative-id":["10.1145\/3701716.3717818","10.1145\/3701716"],"URL":"https:\/\/doi.org\/10.1145\/3701716.3717818","relation":{},"subject":[],"published":{"date-parts":[[2025,5,8]]},"assertion":[{"value":"2025-05-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}