{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:26:12Z","timestamp":1773804372845,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"33","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Cross-modal hashing has emerged as a pivotal solution for efficient retrieval across diverse modalities, such as images and texts, by mapping them into compact binary hash spaces. However, in real-world scenarios, modality data are often missing or misaligned. Most existing methods rely on fully paired training data and ignore missing or misaligned modality data, resulting in semantic inconsistencies. To address these challenges, we propose an Adaptive Graph Attention-Based Discrete Hashing (AGADH) method, which consists of three parts. First, to solve the problem of missing modalities, AGADH employs a masked completion strategy to reconstruct the missing modalities. Second, to mitigate semantic misalignment, AGADH leverages a Graph Attention Network (GAT) encoder-decoder architecture with an alignment module to construct features from different modalities. Additionally, to enhance fusion performance, an adaptive fusion module is proposed that dynamically adjusts the contributions of the image and text modalities via learnable weighting coefficients. 
Extensive experiments on three benchmark datasets, MS-COCO, NUS-WIDE, and MIRFlickr-25K, demonstrate that AGADH outperforms state-of-the-art methods in both fully paired and incompletely paired scenarios, showing its robustness and effectiveness in cross-modal retrieval tasks.<\/jats:p>","DOI":"10.1609\/aaai.v40i33.40067","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:20:04Z","timestamp":1773800404000},"page":"28382-28390","source":"Crossref","is-referenced-by-count":0,"title":["Adaptive Graph Attention Based Discrete Hashing for Incomplete Cross-modal Retrieval"],"prefix":"10.1609","volume":"40","author":[{"given":"Shuang","family":"Zhang","sequence":"first","affiliation":[]},{"given":"Yue","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Lei","family":"Shi","sequence":"additional","affiliation":[]},{"given":"Huilong","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Feifei","family":"Kou","sequence":"additional","affiliation":[]},{"given":"Pengfei","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Mingying","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Pengtao","family":"Lv","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial 
Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40067\/44028","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40067\/44028","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:20:05Z","timestamp":1773800405000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/40067"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"33","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i33.40067","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}