{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:07:25Z","timestamp":1773803245045,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"29","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Large Vision-Language Models (VLMs) exhibit impressive multi-modal capabilities but suffer from prohibitive computational and memory demands due to their long visual token sequences and massive parameter sizes. To address these issues, recent works have proposed training-free compression methods. However, existing efforts often suffer from three major limitations: (1) Current approaches do not decompose techniques into comparable modules, hindering fair evaluation across spatial and temporal redundancy. (2) Evaluation is confined to simple single-turn tasks, failing to reflect performance in realistic scenarios. (3) Individual compression techniques are applied in isolation, without exploring their joint potential. To overcome these gaps, we introduce LLMC+, a comprehensive VLM compression benchmark with a versatile, plug-and-play toolkit. LLMC+ supports over 20 algorithms across five representative VLM families and enables systematic study of token-level and model-level compression. Our benchmark reveals that: (1) Spatial and temporal redundancies demand distinct technical strategies. (2) Token reduction methods degrade significantly in multi-turn dialogue and detail-sensitive tasks. (3) Combining token and model compression achieves extreme compression with minimal performance loss. We believe LLMC+ will facilitate fair evaluation and inspire future research in efficient VLMs.<\/jats:p>","DOI":"10.1609\/aaai.v40i29.39598","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:45:48Z","timestamp":1773798348000},"page":"24189-24197","source":"Crossref","is-referenced-by-count":0,"title":["LLMC+: Benchmarking Vision-Language Model Compression with a plug-and-play Toolkit"],"prefix":"10.1609","volume":"40","author":[{"given":"Chengtao","family":"Lv","sequence":"first","affiliation":[]},{"given":"Bilang","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yang","family":"Yong","sequence":"additional","affiliation":[]},{"given":"Ruihao","family":"Gong","sequence":"additional","affiliation":[]},{"given":"Yushi","family":"Huang","sequence":"additional","affiliation":[]},{"given":"Shiqiao","family":"Gu","sequence":"additional","affiliation":[]},{"given":"Jiajun","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Yumeng","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Jinyang","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Wenya","family":"Wang","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/39598\/43559","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/39598\/43559","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:45:49Z","timestamp":1773798349000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/39598"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"29","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i29.39598","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}