{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T16:14:25Z","timestamp":1772727265728,"version":"3.50.1"},"reference-count":78,"publisher":"Springer Science and Business Media LLC","issue":"10","license":[{"start":{"date-parts":[[2025,7,3]],"date-time":"2025-07-03T00:00:00Z","timestamp":1751500800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,3]],"date-time":"2025-07-03T00:00:00Z","timestamp":1751500800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2022ZD0160102"],"award-info":[{"award-number":["2022ZD0160102"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Shanghai Committee of Science and Technology","award":["22YF1461500"],"award-info":[{"award-number":["22YF1461500"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,10]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Audio-Visual Question Answering (AVQA) requires the model to answer questions with complex dynamic audio-visual information. Prior works on this task mainly consider only using single question-answer pairs during training, overlooking the rich semantic associations between questions. In this work, we propose a novel Collective Question-Guided Network (CoQo), which accepts multiple question-answer pairs as input and leverages the reasoning over these questions to assist the model training process. The core module is the proposed Question Guided Transformer (QGT), which uses collective question reasoning to perform question-guided feature extraction. Since multiple question-answer pairs are not always available, especially during inference, our QGT uses a set of learnable tokens to learn the collective information from multiple questions during training. At inference time, these learnable tokens bring additional reasoning information even when only one question is used as input. We employ QGT in both spatial and temporal dimensions to extract question-related features effectively and efficiently. To better capture detailed audio-visual associations, we train the model in a finer level by distinguishing feature pairs of different questions within the same video. Extensive experiments demonstrate that our method can achieve state-of-the-art performance on three AVQA datasets while reducing training time significantly. We also observe strong performances of our method on three VQA benchmarks. 
Detailed ablation studies further confirm the effectiveness of our proposed collective question reasoning scheme, both quantitatively and qualitatively.<\/jats:p>","DOI":"10.1007\/s11263-025-02510-7","type":"journal-article","created":{"date-parts":[[2025,7,3]],"date-time":"2025-07-03T06:18:16Z","timestamp":1751523496000},"page":"6912-6929","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Guiding Audio-Visual Question Answering with Collective Question Reasoning"],"prefix":"10.1007","volume":"133","author":[{"given":"Baoqi","family":"Pei","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8067-6227","authenticated-orcid":false,"given":"Yifei","family":"Huang","sequence":"additional","affiliation":[]},{"given":"Guo","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Jilan","family":"Xu","sequence":"additional","affiliation":[]},{"given":"Yali","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Limin","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Tong","family":"Lu","sequence":"additional","affiliation":[]},{"given":"Yu","family":"Qiao","sequence":"additional","affiliation":[]},{"given":"Fei","family":"Wu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,3]]},"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02510-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02510-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02510-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T08:53:36Z","timestamp":1760086416000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02510-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,3]]},"references-count":78,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2025,10]]}},"alternative-id":["2510"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02510-7","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,3]]},"assertion":[{"value":"21 August 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 June 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 July 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}
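The abstract above describes two mechanisms concretely enough to sketch: a Question Guided Transformer whose learnable tokens absorb collective information from multiple questions during training and still contribute when only a single question is available at inference, and a finer-level objective that distinguishes features of different questions within the same video. Below is a minimal, hypothetical PyTorch sketch of those two ideas; all module names, tensor shapes, the single attention layers, and the InfoNCE-style loss are illustrative assumptions inferred from the abstract, not the published CoQo implementation.

```python
# Hypothetical sketch of the collective question-reasoning idea from the
# abstract. Names, shapes, and the loss are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedBlock(nn.Module):
    def __init__(self, dim=512, num_tokens=4, num_heads=8):
        super().__init__()
        # Learnable tokens meant to accumulate collective question context.
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)
        self.token_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.guide_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, av_feats, q_feats):
        # av_feats: (B, N, dim) audio-visual features (spatial or temporal axis)
        # q_feats:  (B, Q, dim) Q question embeddings; Q > 1 only in training
        B, Q, _ = q_feats.shape
        tokens = self.tokens.expand(B, -1, -1)
        # Tokens read from every available question (collective reasoning).
        tokens, _ = self.token_attn(tokens, q_feats, q_feats)
        # Questions, enriched by the tokens, guide attention over AV features.
        queries = torch.cat([q_feats, tokens], dim=1)
        out, _ = self.guide_attn(queries, av_feats, av_feats)
        return self.norm(out[:, :Q])  # one guided AV feature per question

def question_contrastive_loss(guided, q_feats, temperature=0.07):
    # guided, q_feats: (Q, dim) for ONE video. The i-th guided feature should
    # match its own question and be pushed apart from the other questions',
    # a plausible reading of "distinguishing feature pairs of different
    # questions within the same video".
    g = F.normalize(guided, dim=-1)
    q = F.normalize(q_feats, dim=-1)
    logits = g @ q.t() / temperature
    return F.cross_entropy(logits, torch.arange(g.size(0)))

# Training batches several QA pairs per video; inference degrades to Q = 1.
block = QuestionGuidedBlock()
q_train = torch.randn(2, 3, 512)                   # 3 questions per video
guided = block(torch.randn(2, 60, 512), q_train)   # -> (2, 3, 512)
loss = question_contrastive_loss(guided[0], q_train[0])
single = block(torch.randn(2, 60, 512), torch.randn(2, 1, 512))  # -> (2, 1, 512)
```

In this reading, the learnable tokens are the only pathway carrying multi-question context, which is why the same block can run unchanged at inference with one question, consistent with the abstract's claim that the tokens "bring additional reasoning information even when only one question is used as input".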