{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:41:57Z","timestamp":1773801717868,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"11","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Zero-shot Composed Image Retrieval (ZS-CIR) involves diverse tasks with varied visual manipulation intents across domains, scenes, objects, and attributes. A key challenge is that existing datasets contain limited intent-relevant annotations, making it hard for models to infer human intent from textual modifications. We introduce an intent-centric image\u2013text dataset generated via reasoning by a Multimodal Large Language Model (MLLM) to better train ZS-CIR models for human manipulation intent understanding. Building on this dataset, we propose De-MINDS, a framework that distills the MLLM\u2019s reasoning ability to capture manipulation intent and enhance models\u2019 comprehension of modified text. A simple mapping network translates image information into language space and combines it with the manipulation text to form a query. De-MINDS then extracts intention-relevant information from this query and encodes it as pseudo-word tokens for accurate ZS-CIR. Across four ZS-CIR tasks, De-MINDS shows strong generalization and improves over existing methods by 2.15% to 4.05%, establishing new state-of-the-art results with comparable inference time.<\/jats:p>","DOI":"10.1609\/aaai.v40i11.37907","type":"journal-article","created":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T23:51:31Z","timestamp":1773791491000},"page":"9466-9474","source":"Crossref","is-referenced-by-count":0,"title":["Manipulation Intention Understanding for Zero-Shot Composed Image Retrieval"],"prefix":"10.1609","volume":"40","author":[{"given":"Yuanmin","family":"Tang","sequence":"first","affiliation":[]},{"given":"Jing","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Keke","family":"Gai","sequence":"additional","affiliation":[]},{"given":"Gang","family":"Xiong","sequence":"additional","affiliation":[]},{"given":"Gaopeng","family":"Gou","sequence":"additional","affiliation":[]},{"given":"Meikang","family":"Qiu","sequence":"additional","affiliation":[]},{"given":"Qi","family":"Wu","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/37907\/41869","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/37907\/41869","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T23:51:31Z","timestamp":1773791491000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/37907"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i11.37907","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}