{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,23]],"date-time":"2026-04-23T15:00:46Z","timestamp":1776956446173,"version":"3.51.4"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:p>Audio-visual speech recognition (AVSR) research has recently achieved great success by improving the noise-robustness of audio-only automatic speech recognition (ASR) with noise-invariant visual information. However, most existing AVSR approaches simply fuse the audio and visual features by concatenation, without explicit interactions to capture the deep correlations between them, which results in sub-optimal multimodal representations for the downstream speech recognition task. In this paper, we propose a cross-modal global interaction and local alignment (GILA) approach for AVSR, which captures the deep audio-visual (A-V) correlations from both global and local perspectives. Specifically, we design a global interaction model to capture the A-V complementary relationship on the modality level, as well as a local alignment approach to model the A-V temporal consistency on the frame level. Such a holistic view of cross-modal correlations enables better multimodal representations for AVSR. Experiments on the public benchmarks LRS3 and LRS2 show that our GILA outperforms the supervised learning state-of-the-art. Code is at https:\/\/github.com\/YUCHEN005\/GILA.<\/jats:p>","DOI":"10.24963\/ijcai.2023\/564","type":"proceedings-article","created":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T04:31:30Z","timestamp":1691728290000},"page":"5076-5084","source":"Crossref","is-referenced-by-count":9,"title":["Cross-Modal Global Interaction and Local Alignment for Audio-Visual Speech Recognition"],"prefix":"10.24963","author":[{"given":"Yuchen","family":"Hu","sequence":"first","affiliation":[{"name":"Nanyang Technological University, Singapore"}]},{"given":"Ruizhe","family":"Li","sequence":"additional","affiliation":[{"name":"University of Aberdeen, UK"}]},{"given":"Chen","family":"Chen","sequence":"additional","affiliation":[{"name":"Nanyang Technological University, Singapore"}]},{"given":"Heqing","family":"Zou","sequence":"additional","affiliation":[{"name":"Nanyang Technological University, Singapore"}]},{"given":"Qiushi","family":"Zhu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, China"}]},{"given":"Eng Siong","family":"Chng","sequence":"additional","affiliation":[{"name":"Nanyang Technological University, Singapore"}]}],"member":"10584","event":{"name":"Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}","theme":"Artificial Intelligence","location":"Macau, SAR China","acronym":"IJCAI-2023","number":"32","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"start":{"date-parts":[[2023,8,19]]},"end":{"date-parts":[[2023,8,25]]}},"container-title":["Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T04:51:35Z","timestamp":1691729495000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2023\/564"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2023,8]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2023\/564","relation":{},"subject":[],"published":{"date-parts":[[2023,8]]}}}