{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:20:35Z","timestamp":1750220435033,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":59,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,12]],"date-time":"2020-10-12T00:00:00Z","timestamp":1602460800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Beijing Natural Science Foundation","award":["4192059"],"award-info":[{"award-number":["4192059"]}]},{"name":"National Natural Science Foundation of China","award":["61922086, 61932009, 61722204"],"award-info":[{"award-number":["61922086, 61932009, 61722204"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,12]]},"DOI":"10.1145\/3394171.3413649","type":"proceedings-article","created":{"date-parts":[[2020,10,12]],"date-time":"2020-10-12T13:10:18Z","timestamp":1602508218000},"page":"4253-4261","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Dual Hierarchical Temporal Convolutional Network with QA-Aware Dynamic Normalization for Video Story Question Answering"],"prefix":"10.1145","author":[{"given":"Fei","family":"Liu","sequence":"first","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China"}]},{"given":"Jing","family":"Liu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China"}]},{"given":"Xinxin","family":"Zhu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences, Beijing, China"}]},{"given":"Richang","family":"Hong","sequence":"additional","affiliation":[{"name":"Hefei University of Technology, Hefei, China"}]},{"given":"Hanqing","family":"Lu","sequence":"additional","affiliation":[{"name":"Institute of Automation, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2020,10,12]]},"reference":[{"volume-title":"Don't just assume","author":"Agrawal Aishwarya","key":"e_1_3_2_2_1_1","unstructured":"Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In CVPR. 4971--4980."},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"crossref","unstructured":"Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. 6077--6086.","DOI":"10.1109\/CVPR.2018.00636"},{"key":"e_1_3_2_2_3_1","volume-title":"Vqa: Visual question answering. In ICCV. 2425--2433.","author":"Antol Stanislaw","year":"2015","unstructured":"Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV. 2425--2433."},{"key":"e_1_3_2_2_4_1","volume-title":"Jamie Ryan Kiros, and Geoffrey E Hinton","author":"Ba Jimmy Lei","year":"2016","unstructured":"Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016)."},{"key":"e_1_3_2_2_5_1","volume-title":"Mutan: Multimodal tucker fusion for visual question answering. In ICCV. 2612--2620.","author":"Ben-Younes Hedi","year":"2017","unstructured":"Hedi Ben-Younes, R\u00e9mi Cadene, Matthieu Cord, and Nicolas Thome. 2017. Mutan: Multimodal tucker fusion for visual question answering. In ICCV. 2612--2620."},{"key":"e_1_3_2_2_6_1","volume-title":"Murel: Multimodal relational reasoning for visual question answering. In CVPR. 1989--1998.","author":"Cadene Remi","year":"2019","unstructured":"Remi Cadene, Hedi Ben-Younes, Matthieu Cord, and Nicolas Thome. 2019. Murel: Multimodal relational reasoning for visual question answering. In CVPR. 1989--1998."},{"key":"e_1_3_2_2_7_1","volume-title":"Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960","author":"Chen Kan","year":"2015","unstructured":"Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960 (2015)."},{"key":"e_1_3_2_2_8_1","unstructured":"Harm De Vries, Florian Strub, J\u00e9r\u00e9mie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. 2017. Modulating early visual processing by language. In NeurIPS. 6594--6604."},{"key":"e_1_3_2_2_9_1","volume-title":"Imagenet: A large-scale hierarchical image database. In CVPR. 248--255.","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR. 248--255."},{"key":"e_1_3_2_2_10_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_2_2_11_1","unstructured":"Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, and Heng Huang. 2019. Heterogeneous memory enhanced multimodal attention model for video question answering. In CVPR. 1999--2007."},{"key":"e_1_3_2_2_12_1","volume-title":"Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach.","author":"Fukui Akira","year":"2016","unstructured":"Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. In EMNLP. 457--468."},{"key":"e_1_3_2_2_13_1","unstructured":"Jiyang Gao, Runzhou Ge, Kan Chen, and Ram Nevatia. 2018. Motion-appearance co-memory networks for video question answering. In CVPR. 6576--6585."},{"key":"e_1_3_2_2_14_1","volume-title":"Xiaogang Wang, and Hongsheng Li.","author":"Gao Peng","year":"2019","unstructured":"Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven CH Hoi, Xiaogang Wang, and Hongsheng Li. 2019. Dynamic fusion with intra-and inter-modality attention flow for visual question answering. In CVPR. 6639--6648."},{"key":"e_1_3_2_2_15_1","volume-title":"Edward Grefenstette, Tiago Ramalho, John Agapiou, et al.","author":"Graves Alex","year":"2016","unstructured":"Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi\u0144ska, Sergio G\u00f3mez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, Vol. 538, 7626 (2016), 471--476."},{"key":"e_1_3_2_2_16_1","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. 770--778."},{"key":"e_1_3_2_2_17_1","volume-title":"Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167","author":"Ioffe Sergey","year":"2015","unstructured":"Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)."},{"key":"e_1_3_2_2_18_1","volume-title":"Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In CVPR. 2758--2766.","author":"Jang Yunseok","year":"2017","unstructured":"Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In CVPR. 2758--2766."},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"crossref","unstructured":"Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In ICCV. 1965--1973.","DOI":"10.1109\/ICCV.2017.217"},{"key":"e_1_3_2_2_20_1","unstructured":"Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang D Yoo. 2019. Progressive attention memory network for movie story question answering. In CVPR. 8337--8346."},{"key":"e_1_3_2_2_21_1","unstructured":"Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018b. Bilinear attention networks. In NeurIPS. 1564--1574."},{"key":"e_1_3_2_2_22_1","volume-title":"Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325","author":"Kim Jin-Hwa","year":"2016","unstructured":"Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. 2016. Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325 (2016)."},{"key":"e_1_3_2_2_23_1","unstructured":"Kyung-Min Kim, Seong-Ho Choi, Jin-Hwa Kim, and Byoung-Tak Zhang. 2018a. Multimodal dual attention memory for video story question answering. In ECCV. 673--688."},{"key":"e_1_3_2_2_24_1","volume-title":"Deepstory: Video story qa by deep embedded memory networks. In IJCAI. 2016--2022.","author":"Kim Kyung-Min","year":"2017","unstructured":"Kyung-Min Kim, Min-Oh Heo, Seong-Ho Choi, and Byoung-Tak Zhang. 2017. Deepstory: Video story qa by deep embedded memory networks. In IJCAI. 2016--2022."},{"key":"e_1_3_2_2_25_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_3_2_2_26_1","volume-title":"TVQA: Localized, Compositional Video Question Answering. In EMNLP. 1369--1379.","author":"Lei Jie","year":"2018","unstructured":"Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. TVQA: Localized, Compositional Video Question Answering. In EMNLP. 1369--1379."},{"key":"e_1_3_2_2_27_1","volume-title":"TVQA+: Spatio-temporal grounding for video question answering. arXiv preprint arXiv:1904.11574","author":"Lei Jie","year":"2019","unstructured":"Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2019. TVQA+: Spatio-temporal grounding for video question answering. arXiv preprint arXiv:1904.11574 (2019)."},{"key":"e_1_3_2_2_28_1","doi-asserted-by":"crossref","unstructured":"Junwei Liang, Lu Jiang, Liangliang Cao, Li-Jia Li, and Alexander G Hauptmann. 2018. Focal visual-text attention for visual question answering. In CVPR. 6135--6143.","DOI":"10.1109\/CVPR.2018.00642"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"crossref","unstructured":"Fei Liu, Jing Liu, Zhiwei Fang, Richang Hong, and Hanqing Lu. 2019a. Densely Connected Attention Flow for Visual Question Answering. In IJCAI. 869--875.","DOI":"10.24963\/ijcai.2019\/122"},{"key":"e_1_3_2_2_30_1","doi-asserted-by":"crossref","unstructured":"Fei Liu, Jing Liu, Richang Hong, and Hanqing Lu. 2019b. Erasing-based Attention Learning for Visual Question Answering. In ACM MM. 1175--1183.","DOI":"10.1145\/3343031.3350993"},{"key":"e_1_3_2_2_31_1","unstructured":"Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In NeurIPS. 289--297."},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"crossref","unstructured":"Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron Courville, and Christopher Pal. 2017. A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering. In CVPR. 6884--6893.","DOI":"10.1109\/CVPR.2017.778"},{"key":"e_1_3_2_2_33_1","doi-asserted-by":"crossref","unstructured":"Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In ICCV. 1--9.","DOI":"10.1109\/ICCV.2015.9"},{"key":"e_1_3_2_2_34_1","volume-title":"End-to-end Learning of Flexible Activation Functions in Deep Networks. arXiv preprint arXiv:1907.06732","author":"Molina Alejandro","year":"2019","unstructured":"Alejandro Molina, Patrick Schramowski, and Kristian Kersting. 2019. Pad\u00e9 Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks. arXiv preprint arXiv:1907.06732 (2019)."},{"key":"e_1_3_2_2_35_1","volume-title":"Ilchae Jung, and Bohyung Han.","author":"Mun Jonghwan","year":"2017","unstructured":"Jonghwan Mun, Paul Hongsuck Seo, Ilchae Jung, and Bohyung Han. 2017. Marioqa: Answering questions by watching gameplay videos. In ICCV. 2867--2875."},{"key":"e_1_3_2_2_36_1","unstructured":"Seil Na, Sangho Lee, Jisung Kim, and Gunhee Kim. 2017. A read-write memory network for movie story understanding. In ICCV. 677--685."},{"key":"e_1_3_2_2_37_1","doi-asserted-by":"crossref","unstructured":"Duy-Kien Nguyen and Takayuki Okatani. 2018. Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering. In CVPR. 6087--6096.","DOI":"10.1109\/CVPR.2018.00637"},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"crossref","unstructured":"Kevin J Shih, Saurabh Singh, and Derek Hoiem. 2016. Where to look: Focus regions for visual question answering. In CVPR. 4613--4621.","DOI":"10.1109\/CVPR.2016.499"},{"key":"e_1_3_2_2_39_1","volume-title":"et al.","author":"Sukhbaatar Sainbayar","year":"2015","unstructured":"Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NeurIPS. 2440--2448."},{"key":"e_1_3_2_2_40_1","volume-title":"Social anchor-unit graph regularized tensor completion for large-scale image retagging","author":"Tang Jinhui","year":"2019","unstructured":"Jinhui Tang, Xiangbo Shu, Zechao Li, Yu-Gang Jiang, and Qi Tian. 2019. Social anchor-unit graph regularized tensor completion for large-scale image retagging. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)."},{"key":"e_1_3_2_2_41_1","volume-title":"Tri-clustered tensor completion for social-aware image tag refinement","author":"Tang Jinhui","year":"2016","unstructured":"Jinhui Tang, Xiangbo Shu, Guo-Jun Qi, Zechao Li, Meng Wang, Shuicheng Yan, and Ramesh Jain. 2016. Tri-clustered tensor completion for social-aware image tag refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, 8 (2016), 1662--1674."},{"key":"e_1_3_2_2_42_1","volume-title":"Movieqa: Understanding stories in movies through question-answering. In CVPR. 4631--4640.","author":"Tapaswi Makarand","year":"2016","unstructured":"Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In CVPR. 4631--4640."},{"key":"e_1_3_2_2_43_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. 5998--6008."},{"key":"e_1_3_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2931534"},{"key":"e_1_3_2_2_45_1","doi-asserted-by":"crossref","unstructured":"Bo Wang, Youjiang Xu, Yahong Han, and Richang Hong. 2018. Movie question answering: Remembering the textual cues for layered visual contents. In AAAI.","DOI":"10.1609\/aaai.v32i1.12253"},{"key":"e_1_3_2_2_46_1","volume-title":"Memory networks. arXiv preprint arXiv:1410.3916","author":"Weston Jason","year":"2014","unstructured":"Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916 (2014)."},{"key":"e_1_3_2_2_47_1","unstructured":"Jialin Wu, Zeyuan Hu, and Raymond Mooney. 2019. Generating Question Relevant Captions to Aid Visual Question Answering. In ACL. 3585--3594."},{"key":"e_1_3_2_2_48_1","unstructured":"Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In ICML. 2397--2406."},{"key":"e_1_3_2_2_49_1","unstructured":"Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. 2017. Video question answering via gradually refined attention over appearance and motion. In ACM MM. 1645--1653."},{"key":"e_1_3_2_2_50_1","doi-asserted-by":"crossref","unstructured":"Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV. 451--466.","DOI":"10.1007\/978-3-319-46478-7_28"},{"key":"e_1_3_2_2_51_1","doi-asserted-by":"crossref","unstructured":"Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, and Haruo Takemura. 2020. BERT representations for Video Question Answering. In WACV. 1556--1565.","DOI":"10.1109\/WACV45572.2020.9093596"},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"crossref","unstructured":"Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR. 21--29.","DOI":"10.1109\/CVPR.2016.10"},{"key":"e_1_3_2_2_53_1","unstructured":"Yunan Ye, Zhou Zhao, Yimeng Li, Long Chen, Jun Xiao, and Yueting Zhuang. 2017. Video question answering via attribute-augmented attention network learning. In ACM SIGIR. 829--832."},{"key":"e_1_3_2_2_54_1","doi-asserted-by":"crossref","unstructured":"Youngjae Yu, Jongseok Kim, and Gunhee Kim. 2018. A joint sequence fusion model for video question answering and retrieval. In ECCV. 471--487.","DOI":"10.1007\/978-3-030-01234-2_29"},{"key":"e_1_3_2_2_55_1","unstructured":"Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In CVPR. 6281--6290."},{"key":"e_1_3_2_2_56_1","unstructured":"Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In ICCV. 1821--1830."},{"key":"e_1_3_2_2_57_1","volume-title":"Context-aware visual policy network for fine-grained image captioning","author":"Zha Zheng-Jun","year":"2019","unstructured":"Zheng-Jun Zha, Daqing Liu, Hanwang Zhang, Yongdong Zhang, and Feng Wu. 2019. Context-aware visual policy network for fine-grained image captioning. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)."},{"key":"e_1_3_2_2_58_1","volume-title":"Adversarial attribute-text embedding for person search with natural language query","author":"Zha Zheng-Jun","year":"2020","unstructured":"Zheng-Jun Zha, Jiawei Liu, Di Chen, and Feng Wu. 2020. Adversarial attribute-text embedding for person search with natural language query. IEEE Transactions on Multimedia (2020)."},{"key":"e_1_3_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-017-1033-7"}],"event":{"name":"MM '20: The 28th ACM International Conference on Multimedia","sponsor":["SIGMM ACM Special Interest Group on Multimedia"],"location":"Seattle WA USA","acronym":"MM '20"},"container-title":["Proceedings of the 28th ACM International Conference on Multimedia"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394171.3413649","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3394171.3413649","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:47:15Z","timestamp":1750193235000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394171.3413649"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,12]]},"references-count":59,"alternative-id":["10.1145\/3394171.3413649","10.1145\/3394171"],"URL":"https:\/\/doi.org\/10.1145\/3394171.3413649","relation":{},"subject":[],"published":{"date-parts":[[2020,10,12]]},"assertion":[{"value":"2020-10-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}