{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,18]],"date-time":"2025-04-18T05:12:26Z","timestamp":1744953146527},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,7]]},"abstract":"<jats:p>Zero-shot object detection (ZSD) has received considerable attention from the computer vision community in recent years. It aims to simultaneously locate and categorize previously unseen objects during inference. One crucial problem in ZSD is how to accurately predict the label of each object proposal, i.e. how to categorize object proposals, when conducting ZSD for unseen categories.\n\nPrevious ZSD models generally relied on learning an embedding from the visual space to the semantic space, or on learning a joint embedding between semantic descriptions and visual representations. Features in the learned semantic space or the joint projected space tend to suffer from the hubness problem, i.e. feature vectors are likely to be embedded in an area of incorrect labels, which leads to lower detection precision. In this paper, we instead propose to learn a deep embedding from the semantic space to the visual space, which effectively alleviates the hubness problem because, compared with the semantic space or a joint embedding space, the distribution in the visual space has smaller variance. After learning the deep embedding model, we perform $k$-nearest-neighbor search in the visual space of unseen categories to determine the category for each semantic description. Extensive experiments on two public datasets show that our approach significantly outperforms existing methods.<\/jats:p>","DOI":"10.24963\/ijcai.2020\/126","type":"proceedings-article","created":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T12:12:10Z","timestamp":1594210330000},"page":"906-912","source":"Crossref","is-referenced-by-count":14,"title":["Zero-Shot Object Detection via Learning an Embedding from Semantic Space to Visual Space"],"prefix":"10.24963","author":[{"given":"Licheng","family":"Zhang","sequence":"first","affiliation":[{"name":"Department of Computer Science and Technology, Southern University of Science and Technology"},{"name":"Research Institute of Trustworthy Autonomous Systems"}]},{"given":"Xianzhi","family":"Wang","sequence":"additional","affiliation":[{"name":"University of Technology Sydney"}]},{"given":"Lina","family":"Yao","sequence":"additional","affiliation":[{"name":"University of New South Wales"}]},{"given":"Lin","family":"Wu","sequence":"additional","affiliation":[{"name":"Hefei University of Technology"}]},{"given":"Feng","family":"Zheng","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Technology, Southern University of Science and Technology"},{"name":"Research Institute of Trustworthy Autonomous Systems"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-PRICAI-2020","name":"Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}","start":{"date-parts":[[2020,7,11]]},"theme":"Artificial Intelligence","location":"Yokohama, Japan","end":{"date-parts":[[2020,7,17]]}},"container-title":["Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T02:13:26Z","timestamp":1594260806000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2020\/126"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2020,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2020\/126","relation":{},"subject":[],"published":{"date-parts":[[2020,7]]}}}