{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:45:55Z","timestamp":1773801955234,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"13","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Multimodal Large Language Models (MLLMs) have enabled a wide range of advanced vision-language applications, including fine-grained object recognition and contextual understanding. When querying specific regions or objects in an image, human users naturally use \"Visual Prompts\" (VP) like bounding boxes to provide a reference. However, no existing benchmark systematically evaluates the ability of MLLMs to interpret such VPs. This gap raises uncertainty about whether current MLLMs can effectively recognize VPs, an intuitive prompting method for humans, and utilize them to solve problems. To address this limitation, we introduce VP-Bench, aiming to assess MLLMs\u2019 capability in VP perception and utilization. VP-Bench employs a two-stage evaluation framework: Stage 1 examines models\u2019 ability to perceive VPs in natural scenes, utilizing 100K visualized prompts spanning 8 shapes and 355 attribute combinations. Stage 2 investigates the impact of VPs on downstream tasks, measuring their effectiveness in real-world problem-solving scenarios. Using VP-Bench, we evaluate 21 MLLMs, including proprietary systems (e.g., GPT-4o) and open-source models (e.g., InternVL-2.5 and Qwen2.5-VL). In addition, we conduct a comprehensive analysis of the factors influencing VP understanding, such as attribute variations and model scale. VP-Bench establishes a new reference framework for studying MLLMs\u2019 ability to comprehend and resolve grounded referring questions.<\/jats:p>","DOI":"10.1609\/aaai.v40i13.38114","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:06:00Z","timestamp":1773792360000},"page":"11332-11341","source":"Crossref","is-referenced-by-count":0,"title":["VP-Bench: A Comprehensive Benchmark for Visual Prompting in Multimodal Large Language Models"],"prefix":"10.1609","volume":"40","author":[{"given":"Mingjie","family":"Xu","sequence":"first","affiliation":[]},{"given":"Jinpeng","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Yuzhi","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Jason Chun Lok","family":"Li","sequence":"additional","affiliation":[]},{"given":"Yue","family":"Qiu","sequence":"additional","affiliation":[]},{"given":"Zekang","family":"Du","sequence":"additional","affiliation":[]},{"given":"Mengyang","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Pingping","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Kun","family":"Li","sequence":"additional","affiliation":[]},{"given":"Hongzheng","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Wenao","family":"Ma","sequence":"additional","affiliation":[]},{"given":"Jiaheng","family":"Wei","sequence":"additional","affiliation":[]},{"given":"Qinbin","family":"Li","sequence":"additional","affiliation":[]},{"given":"Kangcheng","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Wenqiang","family":"Lei","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38114\/42076","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38114\/42076","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:06:01Z","timestamp":1773792361000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/38114"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"13","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i13.38114","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}