{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:48:16Z","timestamp":1773802096577,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"14","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>To identify objects beyond predefined categories, open-vocabulary aerial object detection (OVAD) leverages the zero-shot capabilities of visual-language models (VLMs) to generalize from base to novel categories. Existing approaches typically utilize self-learning mechanisms with weak text supervision to generate region-level pseudo-labels that align detectors with VLMs' semantic spaces. However, text dependence induces semantic bias, restricting open-vocabulary expansion to text-specified concepts. We propose VK-Det, a visual knowledge-guided open-vocabulary object detection framework that requires no extra supervision. First, we discover and leverage the vision encoder's inherent informative region perception to attain fine-grained localization and adaptive distillation. Second, we introduce a novel prototype-aware pseudo-labeling strategy. It models inter-class decision boundaries through feature clustering and maps detection regions to latent categories via prototype matching. This enhances attention to novel objects while compensating for missing supervision. Extensive experiments show state-of-the-art performance, achieving 30.1 mAP\u1d3a on DIOR and 23.3 mAP\u1d3a on DOTA, outperforming even methods that use extra supervision.<\/jats:p>","DOI":"10.1609\/aaai.v40i14.38174","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:11:37Z","timestamp":1773792697000},"page":"11875-11882","source":"Crossref","is-referenced-by-count":0,"title":["VK-Det: Visual Knowledge Guided Prototype Learning for Open-Vocabulary Aerial Object Detection"],"prefix":"10.1609","volume":"40","author":[{"given":"Jianhang","family":"Yao","sequence":"first","affiliation":[]},{"given":"Yongbin","family":"Zheng","sequence":"additional","affiliation":[]},{"given":"Siqi","family":"Lu","sequence":"additional","affiliation":[]},{"given":"Wanying","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Peng","family":"Sun","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38174\/42136","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38174\/42136","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:11:37Z","timestamp":1773792697000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/38174"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i14.38174","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}