{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T17:32:18Z","timestamp":1772040738669,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":213,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,5,8]]},"DOI":"10.1145\/3701716.3717744","type":"proceedings-article","created":{"date-parts":[[2025,6,23]],"date-time":"2025-06-23T14:10:32Z","timestamp":1750687832000},"page":"1855-1868","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Do Language Models Understand Time?"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-5309-4340","authenticated-orcid":false,"given":"Xi","family":"Ding","sequence":"first","affiliation":[{"name":"Australian National University, Canberra, Australian Capital Territory, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8600-7099","authenticated-orcid":false,"given":"Lei","family":"Wang","sequence":"additional","affiliation":[{"name":"Griffith University, Brisbane, Queensland, Australia and Australian National University, Canberra, Australian Capital Territory, Australia"}]}],"member":"320","published-online":{"date-parts":[[2025,5,23]]},"reference":[{"key":"e_1_3_2_2_1_1","unstructured":"Marah Abdin Jyoti Aneja Hany Awadalla Ahmed Awadallah Ammar Ahmad Awan Nguyen Bach Amit Bahree Arash Bakhtiari Jianmin Bao Harkirat Behl Alon Benhaim Misha Bilenko Johan Bjorck S\u00e9bastien Bubeck Martin Cai Qin Cai Vishrav Chaudhary Dong Chen Dongdong Chen Weizhu Chen YenChun Chen Yi-Ling Chen Hao Cheng Parul Chopra Xiyang Dai Matthew Dixon Ronen Eldan Victor Fragoso Jianfeng Gao Mei Gao Min Gao Amit Garg Allie Del Giorno Abhishek Goswami Suriya Gunasekar Emman Haider Junheng Hao Russell J. Hewett Wenxiang Hu Jamie Huynh Dan Iter Sam Ade Jacobs Mojan Javaheripi Xin Jin Nikos Karampatziakis Piero Kauffmann Mahoud Khademi Dongwoo Kim Young Jin Kim Lev Kurilenko James R. Lee Yin Tat Lee Yuanzhi Li Yunsheng Li Chen Liang Lars Liden Xihui Lin Zeqi Lin Ce Liu Liyuan Liu Mengchen Liu Weishung Liu Xiaodong Liu Chong Luo Piyush Madan Ali Mahmoudzadeh David Majercak Matt Mazzola Caio C\u00e9sar Teodoro Mendes Arindam Mitra Hardik Modi Anh Nguyen Brandon Norick Barun Patra Daniel Perez-Becker Thomas Portet Reid Pryzant Heyang Qin Marko Radmilac Liliang Ren Gustavo de Rosa Corby Rosset Sambudha Roy Olatunji Ruwase Olli Saarikivi Amin Saied Adil Salim Michael Santacroce Shital Shah Ning Shang Hiteshi Sharma Yelong Shen Swadheen Shukla Xia Song Masahiro Tanaka Andrea Tupini Praneetha Vaddamanu Chunyu Wang Guanhua Wang Lijuan Wang Shuohang Wang Xin Wang Yu Wang Rachel Ward Wen Wen Philipp Witte Haiping Wu Xiaoxia Wu Michael Wyatt Bin Xiao Can Xu Jiahang Xu Weijian Xu Jilong Xue Sonali Yadav Fan Yang Jianwei Yang Yifan Yang Ziyi Yang Donghan Yu Lu Yuan Chenruidong Zhang Cyril Zhang Jianwen Zhang Li Lyna Zhang Yi Zhang Yue Zhang Yunan Zhang and Xiren Zhou. 2024. Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone. arXiv:2404.14219 [cs.CL] https:\/\/arxiv.org\/abs\/ 2404.14219"},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2007.70825"},{"key":"e_1_3_2_2_3_1","volume-title":"Yi: Open Foundation Models by 01.AI. arXiv:2403.04652 [cs.CL] https:\/\/arxiv.org\/abs\/2403.04652","author":"AI","year":"2024","unstructured":"01. 
AI, :, Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, and Zonghong Dai. 2024. Yi: Open Foundation Models by 01.AI. arXiv:2403.04652 [cs.CL] https:\/\/arxiv.org\/abs\/2403.04652"},{"key":"e_1_3_2_2_4_1","unstructured":"Meta AI. 2024. Introducing Meta Llama 3: The most capable openly available LLM to date. https:\/\/ai.meta.com\/blog\/meta-llama-3\/"},{"key":"e_1_3_2_2_5_1","unstructured":"Jean-Baptiste Alayrac Jeff Donahue Pauline Luc Antoine Miech Iain Barr Yana Hasson Karel Lenc Arthur Mensch Katherine Millican Malcolm Reynolds et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems 35 (2022) 23716--23736."},{"key":"e_1_3_2_2_6_1","volume-title":"Minigpt4-video: Advancing multimodal llms for video understanding with interleaved visual-textual tokens. arXiv preprint arXiv:2404.03413","author":"Ataallah Kirolos","year":"2024","unstructured":"Kirolos Ataallah, Xiaoqian Shen, Eslam Abdelrahman, Essam Sleiman, Deyao Zhu, Jian Ding, and Mohamed Elhoseiny. 2024. Minigpt4-video: Advancing multimodal llms for video understanding with interleaved visual-textual tokens. arXiv preprint arXiv:2404.03413 (2024)."},{"key":"e_1_3_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00247"},{"key":"e_1_3_2_2_8_1","unstructured":"Jinze Bai Shuai Bai Yunfei Chu Zeyu Cui Kai Dang Xiaodong Deng Yang Fan Wenbin Ge Yu Han Fei Huang et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609 (2023)."},{"key":"e_1_3_2_2_9_1","first-page":"4","article-title":"Is space-time attention all you need for video understanding?","volume":"2","author":"Bertasius Gedas","year":"2021","unstructured":"Gedas Bertasius, Heng Wang, and Lorenzo Torresani. 2021. Is space-time attention all you need for video understanding?. In ICML, Vol. 2. 4.","journal-title":"ICML"},{"key":"e_1_3_2_2_10_1","volume-title":"International conference on machine learning. PMLR, 1059--1071","author":"Brock Andy","year":"2021","unstructured":"Andy Brock, Soham De, Samuel L Smith, and Karen Simonyan. 2021. Highperformance large-scale image recognition without normalization. In International conference on machine learning. PMLR, 1059--1071."},{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00408"},{"key":"e_1_3_2_2_12_1","unstructured":"Minwoo Byeon Beomhee Park Haecheon Kim Sungjun Lee Woonhyuk Baek and Saehoon Kim. 2022. COYO-700M: Image-Text Pair Dataset. https:\/\/github. com\/kakaobrain\/coyo-dataset."},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298698"},{"key":"e_1_3_2_2_14_1","volume-title":"Reversible column networks. arXiv preprint arXiv:2212.11696","author":"Cai Yuxuan","year":"2022","unstructured":"Yuxuan Cai, Yizhuang Zhou, Qi Han, Jianjian Sun, Xiangwen Kong, Jun Li, and Xiangyu Zhang. 2022. Reversible column networks. 
arXiv preprint arXiv:2212.11696 (2022)."},{"key":"e_1_3_2_2_15_1","unstructured":"Zheng Cai Maosong Cao Haojiong Chen Kai Chen Keyu Chen Xin Chen Xun Chen Zehui Chen Zhi Chen Pei Chu Xiaoyi Dong Haodong Duan Qi Fan Zhaoye Fei Yang Gao Jiaye Ge Chenya Gu Yuzhe Gu Tao Gui Aijia Guo Qipeng Guo Conghui He Yingfan Hu Ting Huang Tao Jiang Penglong Jiao Zhenjiang Jin Zhikai Lei Jiaxing Li Jingwen Li Linyang Li Shuaibin Li Wei Li Yining Li Hongwei Liu Jiangning Liu Jiawei Hong Kaiwen Liu Kuikun Liu Xiaoran Liu Chengqi Lv Haijun Lv Kai Lv Li Ma Runyuan Ma Zerun Ma Wenchang Ning Linke Ouyang Jiantao Qiu Yuan Qu Fukai Shang Yunfan Shao Demin Song Zifan Song Zhihao Sui Peng Sun Yu Sun Huanze Tang Bin Wang Guoteng Wang Jiaqi Wang Jiayu Wang Rui Wang Yudong Wang Ziyi Wang Xingjian Wei Qizhen Weng Fan Wu Yingtong Xiong Chao Xu Ruiliang Xu Hang Yan Yirong Yan Xiaogui Yang Haochen Ye Huaiyuan Ying Jia Yu Jing Yu Yuhang Zang Chuyu Zhang Li Zhang Pan Zhang Peng Zhang Ruijie Zhang Shuo Zhang Songyang Zhang Wenjian Zhang Wenwei Zhang Xingcheng Zhang Xinyue Zhang Hui Zhao Qian Zhao Xiaomeng Zhao Fengzhe Zhou Zaida Zhou Jingming Zhuo Yicheng Zou Xipeng Qiu Yu Qiao and Dahua Lin. 2024. InternLM2 Technical Report. arXiv:2403.17297 [cs.CL]"},{"key":"e_1_3_2_2_16_1","volume-title":"A short note about kinetics-600. arXiv preprint arXiv:1808.01340","author":"Carreira Joao","year":"2018","unstructured":"Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. 2018. A short note about kinetics-600. arXiv preprint arXiv:1808.01340 (2018)."},{"key":"e_1_3_2_2_17_1","volume-title":"A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987","author":"Carreira Joao","year":"2019","unstructured":"Joao Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. 2019. A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987 (2019)."},{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.502"},{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00356"},{"key":"e_1_3_2_2_20_1","volume-title":"Videollm: Modeling video sequence with large language models. arXiv preprint arXiv:2305.13292","author":"Chen Guo","year":"2023","unstructured":"Guo Chen, Yin-Dong Zheng, Jiahao Wang, Jilan Xu, Yifei Huang, Junting Pan, Yi Wang, Yali Wang, Yu Qiao, Tong Lu, et al. 2023. Videollm: Modeling video sequence with large language models. arXiv preprint arXiv:2305.13292 (2023)."},{"key":"e_1_3_2_2_21_1","volume-title":"When Spatial meets Temporal in Action Recognition. arXiv preprint arXiv:2411.15284","author":"Chen Huilin","year":"2024","unstructured":"Huilin Chen, Lei Wang, Yifan Chen, Tom Gedeon, and Piotr Koniusz. 2024. When Spatial meets Temporal in Action Recognition. arXiv preprint arXiv:2411.15284 (2024)."},{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01742"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i2.16187"},{"key":"e_1_3_2_2_24_1","unstructured":"Lin Chen Xilin Wei Jinsong Li Xiaoyi Dong Pan Zhang Yuhang Zang Zehui Chen Haodong Duan Bin Lin Zhenyu Tang et al. 2024. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325 (2024)."},{"key":"e_1_3_2_2_25_1","volume-title":"MotionLLM: Understanding Human Behaviors from Human Motions and Videos. 
arXiv preprint arXiv:2405.20340","author":"Chen Ling-Hao","year":"2024","unstructured":"Ling-Hao Chen, Shunlin Lu, Ailing Zeng, Hao Zhang, Benyou Wang, Ruimao Zhang, and Lei Zhang. 2024. MotionLLM: Understanding Human Behaviors from Human Motions and Videos. arXiv preprint arXiv:2405.20340 (2024)."},{"key":"e_1_3_2_2_26_1","volume-title":"The 16th Asian Conference on Machine Learning (Conference Track).","author":"Chen Qixiang","unstructured":"Qixiang Chen, Lei Wang, Piotr Koniusz, and Tom Gedeon. [n. d.]. Motion meets attention: Video motion prompts. In The 16th Asian Conference on Machine Learning (Conference Track)."},{"key":"e_1_3_2_2_27_1","first-page":"72842","article-title":"Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset","volume":"36","author":"Chen Sihan","year":"2023","unstructured":"Sihan Chen, Handong Li, Qunbo Wang, Zijia Zhao, Mingzhen Sun, Xinxin Zhu, and Jing Liu. 2023. Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset. Advances in Neural Information Processing Systems 36 (2023), 72842--72866.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_28_1","volume-title":"Beats: Audio pre-training with acoustic tokenizers. arXiv preprint arXiv:2212.09058","author":"Chen Sanyuan","year":"2022","unstructured":"Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei. 2022. Beats: Audio pre-training with acoustic tokenizers. arXiv preprint arXiv:2212.09058 (2022)."},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3664647.3681034"},{"key":"e_1_3_2_2_30_1","volume-title":"Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794","author":"Chen Xi","year":"2022","unstructured":"Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. 2022. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794 (2022)."},{"key":"e_1_3_2_2_31_1","unstructured":"Zhe Chen Weiyun Wang Yue Cao Yangzhou Liu Zhangwei Gao Erfei Cui Jinguo Zhu Shenglong Ye Hao Tian Zhaoyang Liu et al. 2024. Expanding Performance Boundaries of Open-Source Multimodal Models with Model Data and Test-Time Scaling. arXiv preprint arXiv:2412.05271 (2024)."},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"crossref","unstructured":"Zhe Chen Weiyun Wang Hao Tian Shenglong Ye Zhangwei Gao Erfei Cui Wenwen Tong Kongzhi Hu Jiapeng Luo Zheng Ma et al. 2024. How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites. arXiv preprint arXiv:2404.16821 (2024).","DOI":"10.1007\/s11432-024-4231-5"},{"key":"e_1_3_2_2_33_1","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 24185--24198","author":"Chen Zhe","year":"2024","unstructured":"Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. 2024. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 24185--24198."},{"key":"e_1_3_2_2_34_1","unstructured":"Zesen Cheng Sicong Leng Hang Zhang Yifei Xin Xin Li Guanzheng Chen Yongxin Zhu Wenqi Zhang Ziyang Luo Deli Zhao et al. 2024. VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs. 
arXiv preprint arXiv:2406.07476 (2024)."},{"key":"e_1_3_2_2_35_1","volume-title":"Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https:\/\/vicuna. lmsys. org (accessed","author":"Chiang Wei-Lin","year":"2023","unstructured":"Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https:\/\/vicuna. lmsys. org (accessed 14 April 2023) 2, 3 (2023), 6."},{"key":"e_1_3_2_2_36_1","volume-title":"Gurpreet Gosal, et al.","author":"Christophe Cl\u00e9ment","year":"2024","unstructured":"Cl\u00e9ment Christophe, Praveen K Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, et al. 2024. Med42--Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches. arXiv preprint arXiv:2404.14779 (2024)."},{"key":"e_1_3_2_2_37_1","unstructured":"StableLM contributors. 2023. StableLM: Stability AI language models. https:\/\/github.com\/stability-AI\/stableLM"},{"key":"e_1_3_2_2_38_1","volume-title":"Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray.","author":"Damen Dima","year":"2018","unstructured":"Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. 2018. Scaling Egocentric Vision: The EPICKITCHENS Dataset. ArXiv abs\/1804.02748 (2018). https:\/\/api.semanticscholar.org\/CorpusID:4710439"},{"key":"e_1_3_2_2_39_1","volume-title":"Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al.","author":"Damen Dima","year":"2022","unstructured":"Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. 2022. Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100. International Journal of Computer Vision (2022), 1--23."},{"key":"e_1_3_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2013.340"},{"key":"e_1_3_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/N19--1423"},{"key":"e_1_3_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00459"},{"key":"e_1_3_2_2_44_1","volume-title":"Lego: Learnable expansion of graph operators for multi-modal feature fusion. arXiv preprint arXiv:2410.01506","author":"Ding Dexuan","year":"2024","unstructured":"Dexuan Ding, Lei Wang, Liyun Zhu, Tom Gedeon, and Piotr Koniusz. 2024. Lego: Learnable expansion of graph operators for multi-modal feature fusion. arXiv preprint arXiv:2410.01506 (2024)."},{"key":"e_1_3_2_2_45_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. 
https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"e_1_3_2_2_46_1","unstructured":"Hang Du Sicheng Zhang Binzhu Xie Guoshun Nan Jiayang Zhang Junrui Xu Hangyu Liu Sicong Leng Jiangming Liu Hehe Fan Dajiu Huang Jing Feng Linli Chen Can Zhang Xuhuan Li Hao Zhang Jianhang Chen Qimei Cui and Xiaofeng Tao. 2024. Uncovering What Why and How: A Comprehensive Benchmark for Causation Understanding of Video Anomaly. arXiv:2405.00181 [cs.CV] https:\/\/arxiv.org\/abs\/2405.00181"},{"key":"e_1_3_2_2_47_1","volume-title":"Video-ccam: Enhancing video-language understanding with causal crossattention masks for short and long videos. arXiv preprint arXiv:2408.14023","author":"Fei Jiajun","year":"2024","unstructured":"Jiajun Fei, Dian Li, Zhidong Deng, Zekun Wang, Gang Liu, and Hui Wang. 2024. Video-ccam: Enhancing video-language understanding with causal crossattention masks for short and long videos. arXiv preprint arXiv:2408.14023 (2024)."},{"key":"e_1_3_2_2_48_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00630"},{"key":"e_1_3_2_2_49_1","volume-title":"Videomme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075","author":"Fu Chaoyou","year":"2024","unstructured":"Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. 2024. Videomme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075 (2024)."},{"key":"e_1_3_2_2_50_1","volume-title":"Vita: Towards opensource interactive omni multimodal llm. arXiv preprint arXiv:2408.05211","author":"Fu Chaoyou","year":"2024","unstructured":"Chaoyou Fu, Haojia Lin, Zuwei Long, Yunhang Shen, Meng Zhao, Yifan Zhang, Shaoqi Dong, Xiong Wang, Di Yin, Long Ma, et al. 2024. Vita: Towards opensource interactive omni multimodal llm. arXiv preprint arXiv:2408.05211 (2024)."},{"key":"e_1_3_2_2_51_1","volume-title":"Mini-internvl: A flexibletransfer pocket multimodal model with 5% parameters and 90% performance. arXiv preprint arXiv:2410.16261","author":"Gao Zhangwei","year":"2024","unstructured":"Zhangwei Gao, Zhe Chen, Erfei Cui, Yiming Ren, Weiyun Wang, Jinguo Zhu, Hao Tian, Shenglong Ye, Junjun He, Xizhou Zhu, et al. 2024. Mini-internvl: A flexibletransfer pocket multimodal model with 5% parameters and 90% performance. arXiv preprint arXiv:2410.16261 (2024)."},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2017.7952261"},{"key":"e_1_3_2_2_53_1","volume-title":"Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214","author":"Ghosh Akash","year":"2024","unstructured":"Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. 2024. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214 (2024)."},{"key":"e_1_3_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01457"},{"key":"e_1_3_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01003"},{"key":"e_1_3_2_2_56_1","volume-title":"CATER: A diagnostic dataset for Compositional Actions and TEmporal Reasoning. arXiv preprint arXiv:1910.04744","author":"Girdhar Rohit","year":"2019","unstructured":"Rohit Girdhar and Deva Ramanan. 2019. CATER: A diagnostic dataset for Compositional Actions and TEmporal Reasoning. 
arXiv preprint arXiv:1910.04744 (2019)."},{"key":"e_1_3_2_2_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01563"},{"key":"e_1_3_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.622"},{"key":"e_1_3_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01842"},{"key":"e_1_3_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00633"},{"key":"e_1_3_2_2_61_1","first-page":"26418","article-title":"Wukong: A 100 million large-scale chinese cross-modal pre-training benchmark","volume":"35","author":"Gu Jiaxi","year":"2022","unstructured":"Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Niu Minzhe, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, et al. 2022. Wukong: A 100 million large-scale chinese cross-modal pre-training benchmark. Advances in Neural Information Processing Systems 35 (2022), 26418--26431.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_62_1","volume-title":"VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. arXiv preprint arXiv:2405.13382","author":"Guo Yongxin","year":"2024","unstructured":"Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Xi Chen, and Bo Zhao. 2024. VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding. arXiv preprint arXiv:2405.13382 (2024)."},{"key":"e_1_3_2_2_63_1","volume-title":"Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643","author":"Guo Yongxin","year":"2024","unstructured":"Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, and Xi Chen. 2024. Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643 (2024)."},{"key":"e_1_3_2_2_64_1","volume-title":"Language Models Represent Space and Time. In The Twelfth International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=jE8xbmvFin","author":"Gurnee Wes","year":"2024","unstructured":"Wes Gurnee and Max Tegmark. 2024. Language Models Represent Space and Time. In The Twelfth International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=jE8xbmvFin"},{"key":"e_1_3_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01255"},{"key":"e_1_3_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01720"},{"key":"e_1_3_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01282"},{"key":"e_1_3_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_2_2_69_1","doi-asserted-by":"crossref","unstructured":"Lisa Anne Hendricks Oliver Wang Eli Shechtman Josef Sivic Trevor Darrell and Bryan Russell. 2017. Localizing Moments in Video with Natural Language. arXiv:1708.01641 [cs.CV] https:\/\/arxiv.org\/abs\/1708.01641","DOI":"10.1109\/ICCV.2017.618"},{"key":"e_1_3_2_2_70_1","volume-title":"Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al.","author":"Hoffmann Jordan","year":"2022","unstructured":"Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556 (2022)."},{"key":"e_1_3_2_2_71_1","volume-title":"V2xum-llm: Cross-modal video summarization with temporal prompt instruction tuning. 
arXiv preprint arXiv:2404.12353","author":"Hua Hang","year":"2024","unstructured":"Hang Hua, Yunlong Tang, Chenliang Xu, and Jiebo Luo. 2024. V2xum-llm: Cross-modal video summarization with temporal prompt instruction tuning. arXiv preprint arXiv:2404.12353 (2024)."},{"key":"e_1_3_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01353"},{"key":"e_1_3_2_2_73_1","volume-title":"EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval. arXiv preprint arXiv:2407.16658","author":"Hummel Thomas","year":"2024","unstructured":"Thomas Hummel, Shyamgopal Karthik, Mariana-Iuliana Georgescu, and Zeynep Akata. 2024. EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval. arXiv preprint arXiv:2407.16658 (2024)."},{"key":"e_1_3_2_2_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01723"},{"key":"e_1_3_2_2_75_1","volume-title":"Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al.","author":"Iyer Srinivasan","year":"2022","unstructured":"Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. 2022. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017 (2022)."},{"key":"e_1_3_2_2_76_1","volume-title":"Revisiting Temporal Commonsense Reasoning in the Era of Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing. https:\/\/openreview.net\/forum?id=akJUrevmwI","author":"Jain Raghav","year":"2023","unstructured":"Raghav Jain, Daivik Sojitra, Arkadeep Acharya, Sriparna Saha, Adam Jatowt, and Sandipan Dandapat. 2023. Do Language Models Have a Common Sense regarding Time? Revisiting Temporal Commonsense Reasoning in the Era of Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing. https:\/\/openreview.net\/forum?id=akJUrevmwI"},{"key":"e_1_3_2_2_77_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-019-01189-x"},{"key":"e_1_3_2_2_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.149"},{"key":"e_1_3_2_2_79_1","volume-title":"Fine-tuning and utilization methods of domain-specific llms. arXiv preprint arXiv:2401.02981","author":"Jeong Cheonsu","year":"2024","unstructured":"Cheonsu Jeong. 2024. Fine-tuning and utilization methods of domain-specific llms. arXiv preprint arXiv:2401.02981 (2024)."},{"key":"e_1_3_2_2_80_1","volume-title":"Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.","author":"Jiang Albert Q","year":"2023","unstructured":"Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023)."},{"key":"e_1_3_2_2_81_1","volume-title":"Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al.","author":"Jiang Albert Q","year":"2024","unstructured":"Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. 
arXiv preprint arXiv:2401.04088 (2024)."},{"key":"e_1_3_2_2_82_1","unstructured":"Will Kay Joao Carreira Karen Simonyan Brian Zhang Chloe Hillier Sudheendra Vijayanarasimhan Fabio Viola Tim Green Trevor Back Paul Natsev et al. 2017. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017)."},{"key":"e_1_3_2_2_83_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW63382.2024.00187"},{"key":"e_1_3_2_2_84_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3107160"},{"key":"e_1_3_2_2_85_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2019.2905741"},{"key":"e_1_3_2_2_86_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.83"},{"key":"e_1_3_2_2_87_1","volume-title":"Proceedings of the International Conference on Computer Vision (ICCV).","author":"Kuehne H.","unstructured":"H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. 2011. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV)."},{"key":"e_1_3_2_2_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126543"},{"key":"e_1_3_2_2_89_1","volume-title":"Tvqa: Localized, compositional video question answering. arXiv preprint arXiv:1809.01696","author":"Lei Jie","year":"2018","unstructured":"Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. 2018. Tvqa: Localized, compositional video question answering. arXiv preprint arXiv:1809.01696 (2018)."},{"key":"e_1_3_2_2_90_1","volume-title":"Proceedings, Part XXI 16","author":"Lei Jie","year":"2020","unstructured":"Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2020. Tvr: A large-scale dataset for video-subtitle moment retrieval. In Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XXI 16. Springer, 447--463."},{"key":"e_1_3_2_2_91_1","volume-title":"Annual Meeting of the Association for Computational Linguistics. https:\/\/api.semanticscholar.org\/CorpusID:204960716","author":"Lewis Mike","year":"2019","unstructured":"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel rahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Annual Meeting of the Association for Computational Linguistics. https:\/\/api.semanticscholar.org\/CorpusID:204960716"},{"key":"e_1_3_2_2_92_1","volume-title":"International conference on machine learning. PMLR","author":"Li Junnan","year":"2023","unstructured":"Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning. PMLR, 19730-- 19742."},{"key":"e_1_3_2_2_93_1","volume-title":"Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355","author":"Li KunChang","year":"2023","unstructured":"KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 2023. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355 (2023)."},{"key":"e_1_3_2_2_94_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02095"},{"key":"e_1_3_2_2_95_1","volume-title":"Hero: Hierarchical encoder for video language omni-representation pre-training. arXiv preprint arXiv:2005.00200","author":"Li Linjie","year":"2020","unstructured":"Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. 2020. 
Hero: Hierarchical encoder for video language omni-representation pre-training. arXiv preprint arXiv:2005.00200 (2020)."},{"key":"e_1_3_2_2_96_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-72952-2_19"},{"key":"e_1_3_2_2_97_1","volume-title":"VideoINSTA: Zero-shot Long Video Understanding via Informative Spatial-Temporal Reasoning with LLMs. arXiv preprint arXiv:2409.20365","author":"Liao Ruotong","year":"2024","unstructured":"Ruotong Liao, Max Erler, Huiyu Wang, Guangyao Zhai, Gengyuan Zhang, Yunpu Ma, and Volker Tresp. 2024. VideoINSTA: Zero-shot Long Video Understanding via Informative Spatial-Temporal Reasoning with LLMs. arXiv preprint arXiv:2409.20365 (2024)."},{"key":"e_1_3_2_2_98_1","volume-title":"Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122","author":"Lin Bin","year":"2023","unstructured":"Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, and Li Yuan. 2023. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122 (2023)."},{"key":"e_1_3_2_2_99_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02520"},{"key":"e_1_3_2_2_100_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i2.20053"},{"key":"e_1_3_2_2_101_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2820807"},{"key":"e_1_3_2_2_102_1","unstructured":"Haotian Liu Chunyuan Li Yuheng Li and Yong Jae Lee. 2024. Improved Baselines with Visual Instruction Tuning. arXiv:2310.03744 [cs.CV] https:\/\/arxiv.org\/abs\/2310.03744"},{"key":"e_1_3_2_2_103_1","unstructured":"Haotian Liu Chunyuan Li Qingyang Wu and Yong Jae Lee. 2023. Visual Instruction Tuning. arXiv:2304.08485 [cs.CV] https:\/\/arxiv.org\/abs\/2304.08485"},{"key":"e_1_3_2_2_104_1","volume-title":"Kangaroo: A powerful video-language model supporting long-context video input. arXiv preprint arXiv:2408.15542","author":"Liu Jiajun","year":"2024","unstructured":"Jiajun Liu, Yibing Wang, Hanghang Ma, Xiaoping Wu, Xiaoqi Ma, Xiaoming Wei, Jianbin Jiao, Enhua Wu, and Jie Hu. 2024. Kangaroo: A powerful video-language model supporting long-context video input. arXiv preprint arXiv:2408.15542 (2024)."},{"key":"e_1_3_2_2_105_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-72998-0_1"},{"key":"e_1_3_2_2_106_1","volume-title":"Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476","author":"Liu Yuanxin","year":"2024","unstructured":"Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. 2024. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476 (2024)."},{"key":"e_1_3_2_2_107_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00305"},{"key":"e_1_3_2_2_108_1","volume-title":"Oryx mllm: On-demand spatial-temporal understanding at arbitrary resolution. arXiv preprint arXiv:2409.12961","author":"Liu Zuyan","year":"2024","unstructured":"Zuyan Liu, Yuhao Dong, Ziwei Liu, Winston Hu, Jiwen Lu, and Yongming Rao. 2024. Oryx mllm: On-demand spatial-temporal understanding at arbitrary resolution. arXiv preprint arXiv:2409.12961 (2024)."},{"key":"e_1_3_2_2_109_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00320"},{"key":"e_1_3_2_2_110_1","volume-title":"Videodrafter: Contentconsistent multi-scene video generation with llm. arXiv preprint arXiv:2401.01256","author":"Long Fuchen","year":"2024","unstructured":"Fuchen Long, Zhaofan Qiu, Ting Yao, and Tao Mei. 2024. 
Videodrafter: Contentconsistent multi-scene video generation with llm. arXiv preprint arXiv:2401.01256 (2024)."},{"key":"e_1_3_2_2_111_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.338"},{"key":"e_1_3_2_2_112_1","volume-title":"Valley: Video assistant with large language model enhanced ability. arXiv preprint arXiv:2306.07207","author":"Luo Ruipu","year":"2023","unstructured":"Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Da Li, Pengcheng Lu, Tao Wang, Linmei Hu, Minghui Qiu, and Zhongyu Wei. 2023. Valley: Video assistant with large language model enhanced ability. arXiv preprint arXiv:2306.07207 (2023)."},{"key":"e_1_3_2_2_113_1","volume-title":"Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093","author":"Lyu Chenyang","year":"2023","unstructured":"Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. 2023. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093 (2023)."},{"key":"e_1_3_2_2_114_1","volume-title":"VideoGPT: Integrating Image and Video Encoders for Enhanced Video Understanding. arXiv preprint arXiv:2406.09418","author":"Maaz Muhammad","year":"2024","unstructured":"Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Khan. 2024. VideoGPT: Integrating Image and Video Encoders for Enhanced Video Understanding. arXiv preprint arXiv:2406.09418 (2024)."},{"key":"e_1_3_2_2_115_1","volume-title":"Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424","author":"Maaz Muhammad","year":"2023","unstructured":"Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. 2023. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424 (2023)."},{"key":"e_1_3_2_2_116_1","first-page":"46212","article-title":"Egoschema: A diagnostic benchmark for very long-form video language understanding","volume":"36","author":"Mangalam Karttikeya","year":"2023","unstructured":"Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. 2023. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems 36 (2023), 46212--46244.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_117_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00272"},{"key":"e_1_3_2_2_118_1","volume-title":"Cong-Duy Nguyen, See-Kiong Ng, and Luu Anh Tuan.","author":"Nguyen Thong","year":"2024","unstructured":"Thong Nguyen, Yi Bin, Junbin Xiao, Leigang Qu, Yicong Li, Jay Zhangjie Wu, Cong-Duy Nguyen, See-Kiong Ng, and Luu Anh Tuan. 2024. Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives. arXiv preprint arXiv:2406.05615 (2024)."},{"key":"e_1_3_2_2_119_1","volume-title":"SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. https:\/\/openreview.net\/forum?id=FOkKndty5B","author":"Nie Ming","year":"2024","unstructured":"Ming Nie, Dan Ding, Chunwei Wang, Yuanfan Guo, Jianhua Han, Hang Xu, and Li Zhang. 2024. SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. 
https:\/\/openreview.net\/forum?id=FOkKndty5B"},{"key":"e_1_3_2_2_120_1","volume-title":"Keeping your eye on the ball: Trajectory attention in video transformers. Advances in neural information processing systems 34","author":"Patrick Mandela","year":"2021","unstructured":"Mandela Patrick, Dylan Campbell, Yuki Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, and Joao F Henriques. 2021. Keeping your eye on the ball: Trajectory attention in video transformers. Advances in neural information processing systems 34 (2021), 12493--12506."},{"key":"e_1_3_2_2_121_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2022.3201518"},{"key":"e_1_3_2_2_122_1","volume-title":"International conference on machine learning. PMLR, 8748--8763","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning. PMLR, 8748--8763."},{"key":"e_1_3_2_2_123_1","unstructured":"Alec Radford Jeffrey Wu Rewon Child David Luan Dario Amodei Ilya Sutskever et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1 8 (2019) 9."},{"key":"e_1_3_2_2_124_1","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research 21, 140 (2020), 1--67.","journal-title":"Journal of machine learning research"},{"key":"e_1_3_2_2_125_1","volume-title":"TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention Maps. arXiv preprint arXiv:2409.14543","author":"Raj Arjun","year":"2024","unstructured":"Arjun Raj, Lei Wang, and Tom Gedeon. 2024. TrackNetV4: Enhancing Fast Sports Object Tracking with Motion Attention Maps. arXiv preprint arXiv:2409.14543 (2024)."},{"key":"e_1_3_2_2_126_1","volume-title":"Street Scene: A new dataset and evaluation protocol for video anomaly detection. arXiv:1902.05872 [cs.CV] https:\/\/arxiv.org\/abs\/1902.05872","author":"Ramachandra Bharathkumar","year":"2020","unstructured":"Bharathkumar Ramachandra and Michael Jones. 2020. Street Scene: A new dataset and evaluation protocol for video anomaly detection. arXiv:1902.05872 [cs.CV] https:\/\/arxiv.org\/abs\/1902.05872"},{"key":"e_1_3_2_2_127_1","unstructured":"Tal Ridnik Emanuel Ben-Baruch Asaf Noy and Lihi Zelnik-Manor. 2021. ImageNet-21K Pretraining for the Masses. arXiv:2104.10972 [cs.CV]"},{"key":"e_1_3_2_2_128_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-016-0987-1"},{"key":"e_1_3_2_2_129_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00554"},{"key":"e_1_3_2_2_130_1","first-page":"25278","article-title":"Laion-5b: An open large-scale dataset for training next generation image-text models","volume":"35","author":"Schuhmann Christoph","year":"2022","unstructured":"Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. Laion-5b: An open large-scale dataset for training next generation image-text models. 
Advances in Neural Information Processing Systems 35 (2022), 25278--25294.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_2_131_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1238"},{"key":"e_1_3_2_2_132_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.physd.2019.132306"},{"key":"e_1_3_2_2_133_1","volume-title":"Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998","author":"Shi Min","year":"2024","unstructured":"Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, DeAn Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, et al. 2024. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998 (2024)."},{"key":"e_1_3_2_2_134_1","volume-title":"Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720","author":"Shu Fangxun","year":"2023","unstructured":"Fangxun Shu, Lei Zhang, Hao Jiang, and Cihang Xie. 2023. Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720 (2023)."},{"key":"e_1_3_2_2_135_1","volume-title":"Video-xl: Extra-long vision language model for hour-scale video understanding. arXiv preprint arXiv:2409.14485","author":"Shu Yan","year":"2024","unstructured":"Yan Shu, Peitian Zhang, Zheng Liu, Minghao Qin, Junjie Zhou, Tiejun Huang, and Bo Zhao. 2024. Video-xl: Extra-long vision language model for hour-scale video understanding. arXiv preprint arXiv:2409.14485 (2024)."},{"key":"e_1_3_2_2_136_1","volume-title":"Proceedings, Part I 14","author":"Sigurdsson Gunnar A","year":"2016","unstructured":"Gunnar A Sigurdsson, G\u00fcl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Computer Vision--ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11--14, 2016, Proceedings, Part I 14. Springer, 510--526."},{"key":"e_1_3_2_2_137_1","volume-title":"UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402","author":"Soomro K","year":"2012","unstructured":"K Soomro. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)."},{"key":"e_1_3_2_2_138_1","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3463257"},{"key":"e_1_3_2_2_139_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00756"},{"key":"e_1_3_2_2_140_1","volume-title":"Eva-clip: Improved training techniques for clip at scale. arXiv preprint arXiv:2303.15389","author":"Sun Quan","year":"2023","unstructured":"Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao. 2023. Eva-clip: Improved training techniques for clip at scale. arXiv preprint arXiv:2303.15389 (2023)."},{"key":"e_1_3_2_2_141_1","volume-title":"The Thirty-eighth Annual Conference on Neural Information Processing Systems. https:\/\/openreview.net\/forum?id=DV15UbHCY1","author":"Tan Mingtian","year":"2024","unstructured":"Mingtian Tan, Mike A Merrill, Vinayak Gupta, Tim Althoff, and Thomas Hartvigsen. 2024. Are Language Models Actually Useful for Time Series Forecasting?. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. https:\/\/openreview.net\/forum?id=DV15UbHCY1"},{"key":"e_1_3_2_2_142_1","unstructured":"Yunlong Tang Jing Bi Siting Xu Luchuan Song Susan Liang Teng Wang Daoan Zhang Jie An Jingyang Lin Rongyi Zhu et al. 2023. 
Video understanding with large language models: A survey. arXiv preprint arXiv:2312.17432 (2023)."},{"key":"e_1_3_2_2_143_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00130"},{"key":"e_1_3_2_2_144_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.501"},{"key":"e_1_3_2_2_145_1","unstructured":"Together.xyz. 2023. Releasing 3b and 7b redpajama incite family of models including base instruction-tuned and chat models. https:\/\/www.together.xyz\/blog\/redpajama-models-v1"},{"key":"e_1_3_2_2_146_1","volume-title":"Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_3_2_2_147_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.510"},{"key":"e_1_3_2_2_148_1","volume-title":"\u0141 ukasz Kaiser, and Illia Polosukhin","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"},{"key":"e_1_3_2_2_149_1","volume-title":"Generating videos with scene dynamics. Advances in neural information processing systems 29","author":"Vondrick Carl","year":"2016","unstructured":"Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. 2016. Generating videos with scene dynamics. Advances in neural information processing systems 29 (2016)."},{"key":"e_1_3_2_2_150_1","volume-title":"Jianfeng Wang, Kevin Lin, Zhengyuan Yang, Lijuan Wang, and Mike Zheng Shou.","author":"Wang Alex Jinpeng","year":"2024","unstructured":"Alex Jinpeng Wang, Linjie Li, Kevin Qinghong Lin, Jianfeng Wang, Kevin Lin, Zhengyuan Yang, Lijuan Wang, and Mike Zheng Shou. 2024. COSMO: COntrastive Streamlined MultimOdal Model with Interleaved Pre-Training. arXiv preprint arXiv:2401.00849 (2024)."},{"key":"e_1_3_2_2_151_1","volume-title":"Chatvideo: A tracklet-centric multimodal and versatile video understanding system. arXiv preprint arXiv:2304.14407","author":"Wang Junke","year":"2023","unstructured":"Junke Wang, Dongdong Chen, Chong Luo, Xiyang Dai, Lu Yuan, Zuxuan Wu, and Yu-Gang Jiang. 2023. Chatvideo: A tracklet-centric multimodal and versatile video understanding system. arXiv preprint arXiv:2304.14407 (2023)."},{"key":"e_1_3_2_2_152_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01724"},{"key":"e_1_3_2_2_153_1","volume-title":"Omnivl: One foundation model for image-language and video-language tasks. Advances in neural information processing systems 35","author":"Wang Junke","year":"2022","unstructured":"Junke Wang, Dongdong Chen, Zuxuan Wu, Chong Luo, Luowei Zhou, Yucheng Zhao, Yujia Xie, Ce Liu, Yu-Gang Jiang, and Lu Yuan. 2022. Omnivl: One foundation model for image-language and video-language tasks. 
Advances in neural information processing systems 35 (2022), 5696--5710."},{"key":"e_1_3_2_2_154_1","volume-title":"Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models. arXiv preprint arXiv:2406.01863","author":"Wang Jiexin","year":"2024","unstructured":"Jiexin Wang, Adam Jatowt, and Yi Cai. 2024. Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models. arXiv preprint arXiv:2406.01863 (2024)."},{"key":"e_1_3_2_2_155_1","unstructured":"Jiaqi Wang Hanqi Jiang Yiheng Liu Chong Ma Xu Zhang Yi Pan Mengyuan Liu Peiran Gu Sichen Xia Wenjun Li et al. 2024. A comprehensive review of multimodal large language models: Performance and challenges across different tasks. arXiv preprint arXiv:2408.01319 (2024)."},{"key":"e_1_3_2_2_156_1","volume-title":"Analysis and evaluation of Kinect-based action recognition algorithms. arXiv preprint arXiv:2112.08626","author":"Wang Lei","year":"2021","unstructured":"Lei Wang. 2021. Analysis and evaluation of Kinect-based action recognition algorithms. arXiv preprint arXiv:2112.08626 (2021)."},{"key":"e_1_3_2_2_157_1","volume-title":"Robust human action modelling. Ph. D. Dissertation","author":"Wang Lei","unstructured":"Lei Wang. 2023. Robust human action modelling. Ph. D. Dissertation. The Australian National University (Australia)."},{"key":"e_1_3_2_2_158_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01398"},{"key":"e_1_3_2_2_159_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2925285"},{"key":"e_1_3_2_2_160_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2019.8803051"},{"key":"e_1_3_2_2_161_1","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3475572"},{"key":"e_1_3_2_2_162_1","volume-title":"Proceedings of the Asian Conference on Computer Vision. 4176--4193","author":"Wang Lei","year":"2022","unstructured":"Lei Wang and Piotr Koniusz. 2022. Temporal-viewpoint transportation plan for skeletal few-shot action recognition. In Proceedings of the Asian Conference on Computer Vision. 4176--4193."},{"key":"e_1_3_2_2_163_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19803-8_11"},{"key":"e_1_3_2_2_164_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00544"},{"key":"e_1_3_2_2_165_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP48485.2024.10446223"},{"key":"e_1_3_2_2_166_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00879"},{"key":"e_1_3_2_2_167_1","volume-title":"3D Skeleton-based Few-shot Action Recognition with JEANIE is not so Na\u00efve. arXiv preprint arXiv:2112.12668","author":"Wang Lei","year":"2021","unstructured":"Lei Wang, Jun Liu, and Piotr Koniusz. 2021. 3D Skeleton-based Few-shot Action Recognition with JEANIE is not so Na\u00efve. arXiv preprint arXiv:2112.12668 (2021)."},{"key":"e_1_3_2_2_168_1","volume-title":"Meet JEANIE: a Similarity Measure for 3D Skeleton Sequences via TemporalViewpoint Alignment. International Journal of Computer Vision","author":"Wang Lei","year":"2024","unstructured":"Lei Wang, Jun Liu, Liang Zheng, Tom Gedeon, and Piotr Koniusz. 2024. Meet JEANIE: a Similarity Measure for 3D Skeleton Sequences via TemporalViewpoint Alignment. International Journal of Computer Vision (2024), 1--32."},{"key":"e_1_3_2_2_169_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP48485.2024.10446900"},{"key":"e_1_3_2_2_170_1","volume-title":"Taylor Videos for Action Recognition. 
In Forty-first International Conference on Machine Learning.","author":"Wang Lei","unstructured":"Lei Wang, Xiuyuan Yuan, Tom Gedeon, and Liang Zheng. [n. d.]. Taylor Videos for Action Recognition. In Forty-first International Conference on Machine Learning."},{"key":"e_1_3_2_2_171_1","unstructured":"Peng Wang Shuai Bai Sinan Tan Shijie Wang Zhihao Fan Jinze Bai Keqin Chen Xuejing Liu Jialin Wang Wenbin Ge et al. 2024. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191 (2024)."},{"key":"e_1_3_2_2_172_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV48630.2021.00117"},{"key":"e_1_3_2_2_173_1","volume-title":"Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. Advances in Neural Information Processing Systems 36","author":"Wang Wenhai","year":"2024","unstructured":"Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. 2024. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. Advances in Neural Information Processing Systems 36 (2024)."},{"key":"e_1_3_2_2_174_1","volume-title":"Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization. arXiv preprint arXiv:2411.10442","author":"Wang Weiyun","year":"2024","unstructured":"Weiyun Wang, Zhe Chen, Wenhai Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Jinguo Zhu, Xizhou Zhu, Lewei Lu, Yu Qiao, and Jifeng Dai. 2024. Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization. arXiv preprint arXiv:2411.10442 (2024)."},{"key":"e_1_3_2_2_175_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00468"},{"key":"e_1_3_2_2_176_1","unstructured":"Yi Wang Yinan He Yizhuo Li Kunchang Li Jiashuo Yu Xin Ma Xinhao Li Guo Chen Xinyuan Chen Yaohui Wang Conghui He Ping Luo Ziwei Liu Yali Wang Limin Wang and Yu Qiao. 2024. InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation. arXiv:2307.06942 [cs.CV] https:\/\/arxiv.org\/abs\/2307.06942"},{"key":"e_1_3_2_2_177_1","doi-asserted-by":"crossref","unstructured":"Yi Wang Kunchang Li Xinhao Li Jiashuo Yu Yinan He Guo Chen Baoqi Pei Rongkun Zheng Jilan Xu Zun Wang et al. 2024. Internvideo2: Scaling video foundation models for multimodal video understanding. arXiv e-prints (2024) arXiv--2403.","DOI":"10.1007\/978-3-031-73013-9_23"},{"key":"e_1_3_2_2_178_1","volume-title":"Internvideo: General video foundation models via generative and discriminative learning. arXiv preprint arXiv:2212.03191","author":"Wang Yi","year":"2022","unstructured":"Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang, Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang, et al. 2022. Internvideo: General video foundation models via generative and discriminative learning. arXiv preprint arXiv:2212.03191 (2022)."},{"key":"e_1_3_2_2_179_1","volume-title":"Loong: Generating Minute-level Long Videos with Autoregressive Language Models. arXiv preprint arXiv:2410.02757","author":"Wang Yuqing","year":"2024","unstructured":"Yuqing Wang, Tianwei Xiong, Daquan Zhou, Zhijie Lin, Yang Zhao, Bingyi Kang, Jiashi Feng, and Xihui Liu. 2024. Loong: Generating Minute-level Long Videos with Autoregressive Language Models. 
arXiv preprint arXiv:2410.02757 (2024)."},{"key":"e_1_3_2_2_180_1","doi-asserted-by":"publisher","DOI":"10.1145\/3664647.3681464"},{"key":"e_1_3_2_2_181_1","volume-title":"Star: A benchmark for situated reasoning in real-world videos. arXiv preprint arXiv:2405.09711","author":"Wu Bo","year":"2024","unstructured":"Bo Wu, Shoubin Yu, Zhenfang Chen, Joshua B Tenenbaum, and Chuang Gan. 2024. Star: A benchmark for situated reasoning in real-world videos. arXiv preprint arXiv:2405.09711 (2024)."},{"key":"e_1_3_2_2_182_1","unstructured":"Chenfei Wu Shengming Yin Weizhen Qi Xiaodong Wang Zecheng Tang and Nan Duan. 2023. Visual ChatGPT: Talking Drawing and Editing with Visual Foundation Models. arXiv:2303.04671 [cs.CV] https:\/\/arxiv.org\/abs\/2303.04671"},{"key":"e_1_3_2_2_183_1","unstructured":"Peng Wu Jing Liu Yujia Shi Yujia Sun Fangtao Shao Zhaoyang Wu and Zhiwei Yang. 2020. Not only Look but also Listen: Learning Multimodal Violence Detection under Weak Supervision. arXiv:2007.04687 [cs.CV] https:\/\/arxiv.org\/abs\/2007.04687"},{"key":"e_1_3_2_2_184_1","volume-title":"Next-gpt: Any-to-any multimodal llm. arXiv preprint arXiv:2309.05519","author":"Wu Shengqiong","year":"2023","unstructured":"Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and Tat-Seng Chua. 2023. Next-gpt: Any-to-any multimodal llm. arXiv preprint arXiv:2309.05519 (2023)."},{"key":"e_1_3_2_2_185_1","volume-title":"Mike Zheng Shou, and Xiang Bai","author":"Wu Weijia","year":"2023","unstructured":"Weijia Wu, Yuzhong Zhao, Zhuang Li, Jiahong Li, Hong Zhou, Mike Zheng Shou, and Xiang Bai. 2023. A Large Cross-Modal Video Retrieval Dataset with Reading Comprehension. arXiv:2305.03347 [cs.CV] https:\/\/arxiv.org\/abs\/2305.03347"},{"key":"e_1_3_2_2_186_1","doi-asserted-by":"publisher","DOI":"10.1093\/nsr\/nwad267"},{"key":"e_1_3_2_2_187_1","unstructured":"Dejing Xu Zhou Zhao Jun Xiao Fei Wu Hanwang Zhang Xiangnan He and Yueting Zhuang. [n. d.]. Video Question Answering via Gradually Refined Attention over Appearance and Motion. In ACM Multimedia."},{"key":"e_1_3_2_2_188_1","volume-title":"International Conference on Machine Learning. PMLR, 38728--38748","author":"Xu Haiyang","year":"2023","unstructured":"Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, et al. 2023. mplug-2: A modularized multi-modal foundation model across text, image and video. In International Conference on Machine Learning. PMLR, 38728--38748."},{"key":"e_1_3_2_2_189_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.571"},{"key":"e_1_3_2_2_190_1","volume-title":"See Kiong Ng, and Jiashi Feng","author":"Xu Lin","year":"2024","unstructured":"Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, and Jiashi Feng. 2024. Pllava: Parameter-free llava extension from images to videos for video dense captioning. arXiv preprint arXiv:2404.16994 (2024)."},{"key":"e_1_3_2_2_191_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01032"},{"key":"e_1_3_2_2_192_1","unstructured":"An Yang Baosong Yang Binyuan Hui Bo Zheng Bowen Yu Chang Zhou Chengpeng Li Chengyuan Li Dayiheng Liu Fei Huang et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671 (2024)."},{"key":"e_1_3_2_2_193_1","volume-title":"Vript: A Video Is Worth Thousands of Words. arXiv preprint arXiv:2406.06040","author":"Yang Dongjie","year":"2024","unstructured":"Dongjie Yang, Suyuan Huang, Chengqiang Lu, Xiaodong Han, Haoxin Zhang, Yan Gao, Yao Hu, and Hai Zhao. 2024. Vript: A Video Is Worth Thousands of Words. 
arXiv preprint arXiv:2406.06040 (2024)."},{"key":"e_1_3_2_2_194_1","volume-title":"Emollm: Multimodal emotional understanding meets large language models. arXiv preprint arXiv:2406.16442","author":"Yang Qu","year":"2024","unstructured":"Qu Yang, Mang Ye, and Bo Du. 2024. Emollm: Multimodal emotional understanding meets large language models. arXiv preprint arXiv:2406.16442 (2024)."},{"key":"e_1_3_2_2_195_1","unstructured":"Lijun Yu Jos\u00e9 Lezama Nitesh B. Gundavarapu Luca Versari Kihyuk Sohn David Minnen Yong Cheng Vighnesh Birodkar Agrim Gupta Xiuye Gu Alexander G. Hauptmann Boqing Gong Ming-Hsuan Yang Irfan Essa David A. Ross and Lu Jiang. 2024. Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation. arXiv:2310.05737 [cs.CV] https:\/\/arxiv.org\/abs\/2310.05737"},{"key":"e_1_3_2_2_196_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33019127"},{"key":"e_1_3_2_2_197_1","volume-title":"Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432","author":"Yuan Lu","year":"2021","unstructured":"Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. 2021. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432 (2021)."},{"key":"e_1_3_2_2_198_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01753"},{"key":"e_1_3_2_2_199_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01100"},{"key":"e_1_3_2_2_200_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19772-7_29"},{"key":"e_1_3_2_2_201_1","doi-asserted-by":"crossref","unstructured":"Hang Zhang Xin Li and Lidong Bing. 2023. Video-LLaMA: An Instructiontuned Audio-Visual Language Model for Video Understanding. In Conference on Empirical Methods in Natural Language Processing. https:\/\/api.semanticscholar.org\/CorpusID:259075356","DOI":"10.18653\/v1\/2023.emnlp-demo.49"},{"key":"e_1_3_2_2_202_1","volume-title":"Holmes-VAD: Towards Unbiased and Explainable Video Anomaly Detection via Multi-modal LLM. arXiv preprint arXiv:2406.12235","author":"Zhang Huaxin","year":"2024","unstructured":"Huaxin Zhang, Xiaohao Xu, Xiang Wang, Jialong Zuo, Chuchu Han, Xiaonan Huang, Changxin Gao, Yuehuan Wang, and Nong Sang. 2024. Holmes-VAD: Towards Unbiased and Explainable Video Anomaly Detection via Multi-modal LLM. arXiv preprint arXiv:2406.12235 (2024)."},{"key":"e_1_3_2_2_203_1","volume-title":"Task Me Anything. arXiv preprint arXiv:2406.11775","author":"Zhang Jieyu","year":"2024","unstructured":"Jieyu Zhang, Weikai Huang, Zixian Ma, Oscar Michel, Dong He, Tanmay Gupta, Wei-Chiu Ma, Ali Farhadi, Aniruddha Kembhavi, and Ranjay Krishna. 2024. Task Me Anything. arXiv preprint arXiv:2406.11775 (2024)."},{"key":"e_1_3_2_2_204_1","unstructured":"Jianrong Zhang Yangsong Zhang Xiaodong Cun Shaoli Huang Yong Zhang Hongwei Zhao Hongtao Lu and Xi Shen. 2023. T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations. arXiv:2301.06052 [cs.CV] https:\/\/arxiv.org\/abs\/2301.06052"},{"key":"e_1_3_2_2_205_1","volume-title":"Long Context Understanding. arXiv preprint arXiv:2406.02472","author":"Zhang Zhihan","year":"2024","unstructured":"Zhihan Zhang, Yixin Cao, Chenchen Ye, Yunshan Ma, Lizi Liao, and Tat-Seng Chua. 2024. Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding. 
arXiv preprint arXiv:2406.02472 (2024)."},{"key":"e_1_3_2_2_206_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i17.29936"},{"key":"e_1_3_2_2_207_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00637"},{"key":"e_1_3_2_2_208_1","unstructured":"Lianmin Zheng Wei-Lin Chiang Ying Sheng Siyuan Zhuang Zhanghao Wu Yonghao Zhuang Zi Lin Zhuohan Li Dacheng Li Eric P. Xing Hao Zhang Joseph E. Gonzalez and Ion Stoica. 2023. Judging LLM-as-a-Judge with MTBench and Chatbot Arena. arXiv:2306.05685 [cs.CL] https:\/\/arxiv.org\/abs\/2306.05685"},{"key":"e_1_3_2_2_209_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01246-5_49"},{"key":"e_1_3_2_2_210_1","volume-title":"A survey on generative ai and llm for video generation, understanding, and streaming. arXiv preprint arXiv:2404.16038","author":"Zhou Pengyuan","year":"2024","unstructured":"Pengyuan Zhou, Lin Wang, Zhi Liu, Yanbin Hao, Pan Hui, Sasu Tarkoma, and Jussi Kangasharju. 2024. A survey on generative ai and llm for video generation, understanding, and streaming. arXiv preprint arXiv:2404.16038 (2024)."},{"key":"e_1_3_2_2_211_1","volume-title":"Languagebind: Extending video-language pretraining to n-modality by language-based semantic alignment. arXiv preprint arXiv:2310.01852","author":"Zhu Bin","year":"2023","unstructured":"Bin Zhu, Bin Lin, Munan Ning, Yang Yan, Jiaxi Cui, HongFa Wang, Yatian Pang, Wenhao Jiang, Junwu Zhang, Zongwei Li, et al. 2023. Languagebind: Extending video-language pretraining to n-modality by language-based semantic alignment. arXiv preprint arXiv:2310.01852 (2023)."},{"key":"e_1_3_2_2_212_1","volume-title":"The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.","author":"Zhu Liyun","year":"2024","unstructured":"Liyun Zhu, Lei Wang, Arjun Raj, Tom Gedeon, and Chen Chen. 2024. Advancing Video Anomaly Detection: A Concise Review and a New Dataset. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track."},{"key":"e_1_3_2_2_213_1","volume-title":"Apollo: An Exploration of Video Understanding in Large Multimodal Models. arXiv:2412.10360 [cs.CV] https:\/\/arxiv.org\/abs\/2412.10360","author":"Zohar Orr","year":"2024","unstructured":"Orr Zohar, Xiaohan Wang, Yann Dubois, Nikhil Mehta, Tong Xiao, Philippe Hansen-Estruch, Licheng Yu, Xiaofang Wang, Felix Juefei-Xu, Ning Zhang, Serena Yeung-Levy, and Xide Xia. 2024. Apollo: An Exploration of Video Understanding in Large Multimodal Models. 
arXiv:2412.10360 [cs.CV] https:\/\/arxiv.org\/abs\/2412.10360"}],"event":{"name":"WWW '25: The ACM Web Conference 2025","location":"Sydney NSW Australia","acronym":"WWW '25","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"]},"container-title":["Companion Proceedings of the ACM on Web Conference 2025"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3701716.3717744","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T18:21:45Z","timestamp":1759861305000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3701716.3717744"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":213,"alternative-id":["10.1145\/3701716.3717744","10.1145\/3701716"],"URL":"https:\/\/doi.org\/10.1145\/3701716.3717744","relation":{},"subject":[],"published":{"date-parts":[[2025,5,8]]},"assertion":[{"value":"2025-05-23","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}