{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:14:10Z","timestamp":1760148850194,"version":"build-2065373602"},"reference-count":29,"publisher":"MDPI AG","issue":"6","license":[{"start":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T00:00:00Z","timestamp":1686528000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Social Science Planning Foundation of Liaoning Province","award":["L21CXW003","KFKT2022B41","2021RQ056"],"award-info":[{"award-number":["L21CXW003","KFKT2022B41","2021RQ056"]}]},{"name":"State Key Laboratory of Novel Software Technology, Nanjing University","award":["L21CXW003","KFKT2022B41","2021RQ056"],"award-info":[{"award-number":["L21CXW003","KFKT2022B41","2021RQ056"]}]},{"name":"Dalian High-level Talent Innovation Support Plan","award":["L21CXW003","KFKT2022B41","2021RQ056"],"award-info":[{"award-number":["L21CXW003","KFKT2022B41","2021RQ056"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Video question answering (QA) is a cross-modal task that requires understanding the video content to answer questions. Current techniques address this challenge by employing stacked modules, such as attention mechanisms and graph convolutional networks. These methods reason about the semantics of video features and their interaction with text-based questions, yielding excellent results. However, these approaches often learn and fuse features representing different aspects of the video separately, neglecting the intra-interaction and overlooking the latent complex correlations between the extracted features. Additionally, the stacking of modules introduces a large number of parameters, making model training more challenging. 
To address these issues, we propose a novel multimodal knowledge distillation method that leverages the strengths of knowledge distillation for model compression and feature enhancement. Specifically, the fused features in the larger teacher model are distilled into knowledge, which guides the learning of appearance and motion features in the smaller student model. By incorporating cross-modal information in the early stages, the appearance and motion features can discover their related and complementary potential relationships, thus improving the overall model performance. Despite its simplicity, our extensive experiments on the widely used video QA datasets, MSVD-QA and MSRVTT-QA, demonstrate clear performance improvements over prior methods. These results validate the effectiveness of the proposed knowledge distillation approach.<\/jats:p>","DOI":"10.3390\/info14060328","type":"journal-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T01:59:07Z","timestamp":1686535147000},"page":"328","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A Video Question Answering Model Based on Knowledge Distillation"],"prefix":"10.3390","volume":"14","author":[{"given":"Zhuang","family":"Shao","sequence":"first","affiliation":[{"name":"China Academy of Space Technology, Beijing 100094, China"}]},{"given":"Jiahui","family":"Wan","sequence":"additional","affiliation":[{"name":"Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software, Dalian University of Technology, Dalian 116620, China"}]},{"given":"Linlin","family":"Zong","sequence":"additional","affiliation":[{"name":"Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software, Dalian University of Technology, Dalian 116620, China"},{"name":"State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Xu, D., Zhao, Z., Xiao, J., Wu, F., Zhang, H., He, X., and Zhuang, Y. (2017, January 23\u201327). Video Question Answering via Gradually Refined Attention over Appearance and Motion. Proceedings of the 25th ACM International Conference on Multimedia, San Francisco, CA, USA.","DOI":"10.1145\/3123266.3123427"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Antol, S., Agrawal, A., Lu, J., Mitchell, M., and Parikh, D. (2015, January 13\u201316). Vqa: Visual Question Answering. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.279"},{"key":"ref_3","first-page":"1","article-title":"A Survey of Text Question Answering Techniques","volume":"53","author":"Gupta","year":"2012","journal-title":"J. Comput. Appl."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Gao, J., Ge, R., Chen, K., and Nevatia, R. (2018, January 18\u201322). Motion-appearance Co-memory Networks for Video Question Answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00688"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Wang, X., and Gupta, A. (2018, January 8\u201314). Videos as Space-time Region Graphs. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01228-1_25"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"3369","DOI":"10.1109\/TMM.2021.3097171","article-title":"DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering","volume":"24","author":"Wang","year":"2022","journal-title":"IEEE Trans. Multimed."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Zhao, Z., Lin, Z., Song, J., and He, X. (2019, January 10\u201316). 
Open-ended Long-form Video Question Answering via Hierarchical Convolutional Self-attention Networks. Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China.","DOI":"10.24963\/ijcai.2019\/609"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"108959","DOI":"10.1016\/j.patcog.2022.108959","article-title":"Dynamic Self-Attention with Vision Synchronization Networks for Video Question Answering","volume":"132","author":"Liu","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Jiang, P., and Han, Y. (2020, January 7\u201312). Reasoning with Heterogeneous Graph Alignment for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6767"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Huang, D., Chen, P., Zeng, R., Du, Q., Tan, M., and Gan, C. (2020, January 7\u201312). Location-Aware Graph Convolutional Networks for Video Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6737"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Wang, X., Zhu, M., Bo, D., Cui, P., Shi, C., and Pei, J. (2020, January 23\u201327). AM-GCN: Adaptive Multi-channel Graph Convolutional Networks. Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA.","DOI":"10.1145\/3394486.3403177"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1789","DOI":"10.1007\/s11263-021-01453-z","article-title":"Knowledge Distillation: A Survey","volume":"129","author":"Gou","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Tapaswi, M., Zhu, Y., Stiefelhagen, R., Torralba, A., Urtasun, R., and Fidler, S. (July, January 26). MovieQA: Understanding Stories in Movies through Question-Answering. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.501"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lei, J., Yu, L., Bansal, M., and Berg, T.L. (November, January 31). Tvqa: Localized, Compositional Video Question Answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.","DOI":"10.18653\/v1\/D18-1167"},{"key":"ref_15","unstructured":"Castro, S., Azab, M., Stroud, J., Noujaim, C., Wang, R., Deng, J., and Mihalcea, R. (2020, January 11\u201316). LifeQA: A Real-life Dataset for Video Question Answering. Proceedings of the 12th Language Resources and Evaluation Conference, Marseille, France."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Song, X., Shi, Y., Chen, X., and Han, Y. (2018, January 22\u201326). Explore Multi-step Reasoning in Video Question Answering. Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea.","DOI":"10.1145\/3240508.3240563"},{"key":"ref_17","unstructured":"Jia, D., Wei, D., Socher, R., Li, L.J., Kai, L., and Li, F.F. (2009, January 20\u201325). Imagenet: A Large-Scale Hierarchical Image Database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA."},{"key":"ref_18","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Carreira, J., and Zisserman, A. (2017, January 21\u201326). Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. 
Proceedings of the IEEE Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.502"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L., and Paluri, M. (2015, January 13\u201316). Learning Spatiotemporal Features with 3D Convolutional Networks. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.510"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Pennington, J., Socher, R., and Manning, C. (2014, January 25\u201329). Glove: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar.","DOI":"10.3115\/v1\/D14-1162"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Jang, Y., Song, Y., Yu, Y., Kim, Y., and Kim, G. (2017, January 21\u201326). Tgif-qa: Toward Spatio-temporal Reasoning in Visual Question Answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.149"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Kim, K.M., Heo, M.O., Choi, S.H., and Zhang, B.T. (2017, January 19\u201325). Deepstory: Video Story QA by Deep Embedded Memory Networks. Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia.","DOI":"10.24963\/ijcai.2017\/280"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Le, V.M., Le, V., Venkatesh, S., and Tran, T. (2020, January 13\u201319). Hierarchical Conditional Relation Networks for Video Question Answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00999"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Yu, Z., Yu, J., Fan, J., and Tao, D. (2017, January 24\u201327). 
Multi-modal Factorized Bilinear Pooling with Co-attention Learning for Visual Question Answering. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.202"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1109\/TNNLS.2020.2978386","article-title":"A Comprehensive Survey on Graph Neural Networks","volume":"32","author":"Wu","year":"2020","journal-title":"Trans. Neural Netw. Learn. Syst."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Song, L., Smola, A., Gretton, A., Borgwardt, K., and Bedo, J. (2007, January 20\u201324). Supervised Feature Selection via Dependence Estimation. Proceedings of the 24th Annual International Conference on Machine Learning, Corvallis, OR, USA.","DOI":"10.1145\/1273496.1273600"},{"key":"ref_28","unstructured":"Chen, D., and Dolan, W.B. (2011, January 19\u201324). Collecting Highly Parallel Data for Paraphrase Evaluation. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA."},{"key":"ref_29","unstructured":"Xu, J., Mei, T., Yao, T., and Rui, Y. (July, January 26). Msr-vtt: A large video description dataset for bridging video and language. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/6\/328\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:52:51Z","timestamp":1760125971000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/6\/328"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":29,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["info14060328"],"URL":"https:\/\/doi.org\/10.3390\/info14060328","relation":{},"ISSN":["2078-2489"],"issn-type":[{"type":"electronic","value":"2078-2489"}],"subject":[],"published":{"date-parts":[[2023,6,12]]}}}