{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T17:54:19Z","timestamp":1764784459854},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,7]]},"abstract":"<jats:p>Generating natural language descriptions for videos, i.e., video captioning, essentially requires step-by-step reasoning along the generation process. For example, to generate the sentence \u201ca man is shooting a basketball\u201d, we need to first locate and describe the subject \u201cman\u201d, next reason out the man is \u201cshooting\u201d, then describe the object \u201cbasketball\u201d of shooting. However, existing visual reasoning methods designed for visual question answering are not appropriate to video captioning, for it requires more complex visual reasoning on videos over both space and time, and dynamic module composition along the generation process. In this paper, we propose a novel visual reasoning approach for video captioning, named Reasoning Module Networks (RMN), to equip the existing encoder-decoder framework with the above reasoning capacity. Specifically, our RMN employs 1) three sophisticated spatio-temporal reasoning modules, and 2) a dynamic and discrete module selector trained by a linguistic loss with a Gumbel approximation. Extensive experiments on MSVD and MSR-VTT datasets demonstrate the proposed RMN outperforms the state-of-the-art methods while providing an explicit and explainable generation process. Our code is available at https:\/\/github.com\/tgc1997\/RMN.<\/jats:p>","DOI":"10.24963\/ijcai.2020\/104","type":"proceedings-article","created":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T08:12:10Z","timestamp":1594195930000},"page":"745-752","source":"Crossref","is-referenced-by-count":54,"title":["Learning to Discretely Compose Reasoning Module Networks for Video Captioning"],"prefix":"10.24963","author":[{"given":"Ganchao","family":"Tan","sequence":"first","affiliation":[{"name":"University of Science and Technology of China"}]},{"given":"Daqing","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]},{"given":"Meng","family":"Wang","sequence":"additional","affiliation":[{"name":"Hefei University of Technology"}]},{"given":"Zheng-Jun","family":"Zha","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-PRICAI-2020","name":"Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}","start":{"date-parts":[[2020,7,11]]},"theme":"Artificial Intelligence","location":"Yokohama, Japan","end":{"date-parts":[[2020,7,17]]}},"container-title":["Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T22:13:18Z","timestamp":1594246398000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2020\/104"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research 
Articles","short-title":[],"issued":{"date-parts":[[2020,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2020\/104","relation":{},"subject":[],"published":{"date-parts":[[2020,7]]}}}