{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:24:59Z","timestamp":1773804299531,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"33","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Multimodal learning has shown significant superiority on various tasks by integrating multiple modalities.\nHowever, the interdependencies among modalities increase the susceptibility of multimodal models to adversarial attacks.\nExisting methods mainly focus on attacking specific modalities or indiscriminately attacking all modalities.\nIn this paper, we find that these approaches ignore the differences among modalities in their contributions to final robustness, resulting in suboptimal robustness.\nTo bridge this gap, we introduce Vulnerability-Aware Robust Multimodal Adversarial Training (VARMAT), a probe-in-training adversarial training method that improves multimodal robustness by identifying the vulnerability of each modality.\nSpecifically, VARMAT first explicitly quantifies the vulnerability of each modality, grounded in a first-order approximation of the attack objective (Probe).\nThen, we propose a targeted regularization term that penalizes modalities with high vulnerability, guiding robust learning while maintaining task accuracy (Training).\nWe demonstrate the enhanced robustness of our method across multiple multimodal datasets involving diverse modalities.\nFinally, we achieve robustness improvements of 12.73%, 22.21%, and 11.19% on three multimodal datasets, revealing a significant blind spot in multimodal adversarial training.<\/jats:p>","DOI":"10.1609\/aaai.v40i33.40054","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:18:20Z","timestamp":1773800300000},"page":"28265-28273","source":"Crossref","is-referenced-by-count":0,"title":["Vulnerability-Aware Robust Multimodal Adversarial Training"],"prefix":"10.1609","volume":"40","author":[{"given":"Junrui","family":"Zhang","sequence":"first","affiliation":[]},{"given":"Xinyu","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Jie","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Chenjie","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Jianmin","family":"Ji","sequence":"additional","affiliation":[]},{"given":"Tianlong","family":"Chen","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40054\/44015","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40054\/44015","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:18:21Z","timestamp":1773800301000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/40054"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"33","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i33.40054","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}