{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T22:34:47Z","timestamp":1757543687155,"version":"3.41.0"},"reference-count":66,"publisher":"Association for Computing Machinery (ACM)","issue":"2s","license":[{"start":{"date-parts":[[2019,4,30]],"date-time":"2019-04-30T00:00:00Z","timestamp":1556582400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100004731","name":"Zhejiang Natural Science Foundation","doi-asserted-by":"crossref","award":["LR19F020002,LZ17F020001"],"award-info":[{"award-number":["LR19F020002,LZ17F020001"]}],"id":[{"id":"10.13039\/501100004731","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No.61602405,No.61836002,No.61572431,No.61751209"],"award-info":[{"award-number":["No.61602405,No.61836002,No.61572431,No.61751209"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2019,4,30]]},"abstract":"<jats:p>Visual Question Answering (VQA) is a challenging task that has gained increasing attention from both the computer vision and the natural language processing communities in recent years. Given a question in natural language, a VQA system is designed to automatically generate the answer according to the referenced visual content. Though there has recently been much interest in this topic, the existing work on visual question answering mainly focuses on a single static image, which is only a small part of the dynamic and sequential visual data in the real world. 
As a natural extension, video question answering (VideoQA) is less explored. Because of the inherent temporal structure in video, the approaches of ImageQA may not be effectively applied to video question answering. In this article, we not only take the spatial and temporal dimensions of video content into account but also employ an external knowledge base to improve the answering ability of the network. More specifically, we propose a knowledge-based progressive spatial-temporal attention network to tackle this problem. We obtain both object and region features of the video frames from a region proposal network. The knowledge representation is generated by a word-level attention mechanism using the comment information of each object that is extracted from DBpedia. Then, we develop a question-knowledge-guided progressive spatial-temporal attention network to learn the joint video representation for the video question answering task. We construct a large-scale video question answering dataset. 
The extensive experiments based on two different datasets validate the effectiveness of our method.<\/jats:p>","DOI":"10.1145\/3321505","type":"journal-article","created":{"date-parts":[[2019,7,3]],"date-time":"2019-07-03T13:47:53Z","timestamp":1562161673000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":15,"title":["Video Question Answering via Knowledge-based Progressive Spatial-Temporal Attention Network"],"prefix":"10.1145","volume":"15","member":"320","published-online":{"date-parts":[[2019,7,3]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.12"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.279"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.5555\/1785162.1785216"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.5555\/1625275.1625705"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/1376616.1376746"},{"key":"e_1_2_1_6_1","volume-title":"Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075","author":"Bordes Antoine","year":"2015","unstructured":"Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. 
arXiv preprint arXiv:1506.02075 (2015)."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-662-44848-9_11"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.178"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298878"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/P15-1026"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D16-1044"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00688"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1422953112"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.337"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2710635"},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2043612.2043613"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P17-1167"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.149"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","unstructured":"Andrej Karpathy Armand Joulin and Li F Fei-Fei. 2014. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in Neural Information Processing Systems. 1889--1897.","DOI":"10.5555\/2969033.2969038"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","unstructured":"Jin-Hwa Kim Sang-Woo Lee Donghyun Kwak Min-Oh Heo Jeonghee Kim Jung-Woo Ha and Byoung-Tak Zhang. 2016. Multimodal residual learning for visual qa. In Advances in Neural Information Processing Systems. 
361--369.","DOI":"10.5555\/3157096.3157137"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.5555\/3044805.3045025"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","unstructured":"Ruiyu Li and Jiaya Jia. 2016. Visual question answering with question representation update (qru). In Advances in Neural Information Processing Systems. 4655--4663.","DOI":"10.5555\/3157382.3157618"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00642"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3240508.3240605"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46475-6_17"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:BTTJ.0000047600.45421.6d"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","unstructured":"Jiasen Lu Jianwei Yang Dhruv Batra and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems. 289--297.","DOI":"10.5555\/3157096.3157129"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.5555\/3016387.3016405"},{"volume-title":"Proceedings of the Conference on Innovative Data Systems Research (CIDR\u201913)","author":"Mahdisoltani Farzaneh","key":"e_1_2_1_32_1","unstructured":"Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. 2013. Yago3: A knowledge base from multilingual wikipedias. In Proceedings of the Conference on Innovative Data Systems Research (CIDR\u201913)."},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","unstructured":"Mateusz Malinowski and Mario Fritz. 2014. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems. 
1682--1690.","DOI":"10.5555\/2968826.2969014"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.9"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.80"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.11"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00088"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","unstructured":"Shaoqing Ren Kaiming He Ross Girshick and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. 91--99.","DOI":"10.5555\/2969239.2969250"},{"key":"e_1_2_1_39_1","volume-title":"Proceedings of the International Conference on Learning Representations.","author":"Simonyan Karen","year":"2015","unstructured":"Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3240508.3240563"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","unstructured":"Sainbayar Sukhbaatar Jason Weston Rob Fergus et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems. 2440--2448.","DOI":"10.5555\/2969442.2969512"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"volume-title":"Proceedings of the Conference on Computer Vision and Pattern Recognition. 3233--3241","author":"Teney Damien","key":"e_1_2_1_43_1","unstructured":"Damien Teney, Lingqiao Liu, and Anton van den Hengel. 2017. Graph-structured representations for visual question answering. In Proceedings of the Conference on Computer Vision and Pattern Recognition. 
3233--3241."},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2629489"},{"key":"e_1_2_1_45_1","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence (AAAI\u201918)","author":"Wang Bo","year":"2018","unstructured":"Bo Wang, Youjiang Xu, Yahong Han, and Richang Hong. 2018. Movie question answering: Remembering the textual cues for layered visual contents. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI\u201918). 7380--7387."},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.5555\/3304415.3304561"},{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2708709"},{"volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4622--4630","author":"Wu Qi","key":"e_1_2_1_48_1","unstructured":"Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2016. Ask me anything: Free-form visual question answering based on knowledge from external sources. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4622--4630."},{"key":"e_1_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.3115\/981732.981751"},{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P16-1127"},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.5555\/3045390.3045643"},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","unstructured":"Dejing Xu Zhou Zhao Jun Xiao Fei Wu Hanwang Zhang Xiangnan He and Yueting Zhuang. 2017. Video question answering via gradually refined attention over appearance and motion. In ACM Multimedia. 1645--1653. 
10.1145\/3123266.3123427","DOI":"10.1145\/3123266.3123427"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46478-7_28"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.5555\/3045118.3045336"},{"key":"e_1_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2846664"},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.10"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.512"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3077136.3080655"},{"key":"e_1_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.446"},{"key":"e_1_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.347"},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.5555\/3298023.3298196"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.5555\/3020336.3020416"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3123266.3123364"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.5555\/3304222.3304280"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-017-1033-7"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.540"},{"key":"e_1_2_1_67_1","volume-title":"Building a large-scale multimodal knowledge base system for answering visual queries. arXiv preprint arXiv:1507.05670","author":"Zhu Yuke","year":"2015","unstructured":"Yuke Zhu, Ce Zhang, Christopher R\u00e9, and Li Fei-Fei. 2015. Building a large-scale multimodal knowledge base system for answering visual queries. 
arXiv preprint arXiv:1507.05670 (2015)."}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3321505","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3321505","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:54:38Z","timestamp":1750204478000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3321505"}},"subtitle":[],"editor":[{"given":"Weike","family":"Jin","sequence":"first","affiliation":[]},{"given":"Zhou","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Yimeng","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jie","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jun","family":"Xiao","sequence":"additional","affiliation":[]},{"given":"Yueting","family":"Zhuang","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2019,4,30]]},"references-count":66,"journal-issue":{"issue":"2s","published-print":{"date-parts":[[2019,4,30]]}},"alternative-id":["10.1145\/3321505"],"URL":"https:\/\/doi.org\/10.1145\/3321505","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2019,4,30]]},"assertion":[{"value":"2018-06-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-02-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-07-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication 
History"}}]}}