{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:48:26Z","timestamp":1775580506375,"version":"3.50.1"},"reference-count":1284,"publisher":"Springer Science and Business Media LLC","issue":"9","license":[{"start":{"date-parts":[[2025,5,30]],"date-time":"2025-05-30T00:00:00Z","timestamp":1748563200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,5,30]],"date-time":"2025-05-30T00:00:00Z","timestamp":1748563200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>We have witnessed impressive advances in video action understanding. Increased dataset sizes, variability, and computation availability have enabled leaps in performance and task diversification. Current systems can provide coarse- and fine-grained descriptions of video scenes, extract segments corresponding to queries, synthesize unobserved parts of videos, and predict context across multiple modalities. This survey comprehensively reviews advances in uni- and multi-modal action understanding across a range of tasks. We focus on prevalent challenges, overview widely adopted datasets, and survey seminal works with an emphasis on recent advances. We broadly distinguish between three temporal scopes: (1) recognition tasks of actions observed in full, (2) prediction tasks for ongoing partially observed actions, and (3) forecasting tasks for subsequent unobserved action(s). This division allows us to identify specific action modeling and video representation challenges. Finally, we outline future directions to address current shortcomings.<\/jats:p>","DOI":"10.1007\/s11263-025-02478-4","type":"journal-article","created":{"date-parts":[[2025,5,30]],"date-time":"2025-05-30T06:45:35Z","timestamp":1748587535000},"page":"6251-6315","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["About Time: Advances, Challenges, and Outlooks of Action Understanding"],"prefix":"10.1007","volume":"133","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4706-4231","authenticated-orcid":false,"given":"Alexandros","family":"Stergiou","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0843-7878","authenticated-orcid":false,"given":"Ronald","family":"Poppe","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,5,30]]},"reference":[{"key":"2478_CR1","doi-asserted-by":"crossref","unstructured":"Aafaq, N., Akhtar, N., Liu, W., Gilani, S. Z., & Mian, A. (2019). Spatio-temporal dynamics and semantic attribute enriched visual encoding for video captioning. In: CVPR","DOI":"10.1109\/CVPR.2019.01277"},{"key":"2478_CR2","doi-asserted-by":"crossref","unstructured":"Aakur, S. N., & Sarkar, S. (2019). A perceptual prediction framework for self supervised event segmentation. In: CVPR","DOI":"10.1109\/CVPR.2019.00129"},{"key":"2478_CR3","doi-asserted-by":"crossref","unstructured":"Abati, D., Ben Yahia, H., Nagel, M., & Habibian, A. (2023). Resq: Residual quantization for video perception. 
In: ICCV","DOI":"10.1109\/ICCV51070.2023.01570"},{"key":"2478_CR4","doi-asserted-by":"crossref","unstructured":"Abdelsalam, M. A., Rangrej, S. B., Hadji, I., Dvornik, N., Derpanis, K. G., & Fazly, A. (2023). Gepsan: Generative procedure step anticipation in cooking videos. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00279"},{"key":"2478_CR5","unstructured":"Abu, Y., Ke, Q., Schiele, B., & Gall, J. (2021). Long-term anticipation of activities with cycle consistency. In: DAGM GCPR"},{"key":"2478_CR6","unstructured":"Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., & Vijayanarasimhan, S. (2016), Youtube-8m: A large-scale video classification benchmark. arXiv:1609.08675"},{"key":"2478_CR7","doi-asserted-by":"crossref","unstructured":"Abu Farha, Y., Richard, A., & Gall, J. (2018). When will you do what?-anticipating temporal occurrences of activities. In: CVPR","DOI":"10.1109\/CVPR.2018.00560"},{"key":"2478_CR8","doi-asserted-by":"crossref","unstructured":"Acsintoae, A., Florescu, A., Georgescu, M. I., Mare, T., Sumedrea, P., Ionescu, R. T., Khan, F. S., & Shah, M. (2022). Ubnormal: New benchmark for supervised open-set video anomaly detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01951"},{"key":"2478_CR9","unstructured":"Adnan, M., Ioannou, Y., Tsai, C. Y., Galloway, A., Tizhoosh, H. R., & Taylor, G. W. (2022). Monitoring shortcut learning using mutual information. In: ICMLw"},{"key":"2478_CR10","doi-asserted-by":"crossref","unstructured":"Agarwal, N., Chen, Y. T., Dariush, B., & Yang, M. H. (2020). Unsupervised domain adaptation for spatio-temporal action localization. In: BMVC","DOI":"10.5244\/C.34.46"},{"issue":"3","key":"2478_CR11","first-page":"428","volume":"73","author":"JK Aggarwal","year":"1999","unstructured":"Aggarwal, J. K., & Cai, Q. (1999). Human motion analysis: A review. CVIU, 73(3), 428\u2013440.","journal-title":"Human motion analysis: A review. CVIU"},{"key":"2478_CR12","unstructured":"Aggarwal, J. K., Cai, Q., Liao, W., & Sabata, B. (1994). Articulated and elastic non-rigid motion: A review. In: Workshop on Motion of Non-rigid and Articulated Objects"},{"issue":"2","key":"2478_CR13","first-page":"142","volume":"70","author":"JK Aggarwal","year":"1998","unstructured":"Aggarwal, J. K., Cai, Q., Liao, W., & Sabata, B. (1998). Nonrigid motion analysis: Articulated and elastic motion. CVIU, 70(2), 142\u2013156.","journal-title":"CVIU"},{"key":"2478_CR14","unstructured":"Akbari, H., Yuan, L., Qian, R., Chuang, W. H., Chang, S. F., Cui, Y., & Gong, B. (2021). Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. In: NeurIPS"},{"key":"2478_CR15","unstructured":"Aklilu, J., Wang, X., & Yeung-Levy, S. (2024). Zero-shot action localization via the confidence of large vision-language models. arxiv:2410.14340"},{"key":"2478_CR16","unstructured":"Al-Tahan, H., Garrido, Q., Balestriero, R., Bouchacourt, D., Hazirbas, C., & Ibrahim, M. (2024). Unibench: Visual reasoning requires rethinking vision-language beyond scaling. arXiv:2408.04810"},{"key":"2478_CR17","doi-asserted-by":"crossref","unstructured":"Alayrac, J. B., Bojanowski, P., Agrawal, N., Sivic, J., Laptev, I., & Lacoste-Julien, S. (2016). Unsupervised learning from narrated instruction videos. In: CVPR","DOI":"10.1109\/CVPR.2016.495"},{"key":"2478_CR18","doi-asserted-by":"crossref","unstructured":"Alayrac, J. B., Laptev, I., Sivic, J., & Lacoste-Julien, S. (2017), Joint discovery of object states and manipulation actions. 
In: ICCV","DOI":"10.1109\/ICCV.2017.234"},{"key":"2478_CR19","unstructured":"Alayrac, J. B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et\u00a0al (2022). Flamingo: a visual language model for few-shot learning. In: NeurIPS"},{"issue":"12","key":"2478_CR20","doi-asserted-by":"crossref","first-page":"2246","DOI":"10.1109\/TPAMI.2010.33","volume":"32","author":"M Albanese","year":"2010","unstructured":"Albanese, M., Chellappa, R., Cuntoor, N., Moscato, V., Picariello, A., Subrahmanian, V., & Udrea, O. (2010). Pads: A probabilistic activity detection framework for video data. IEEE TPAMI, 32(12), 2246\u20132261.","journal-title":"IEEE TPAMI"},{"key":"2478_CR21","unstructured":"Albanie, S., Liu, Y., Nagrani, A., Miech, A., Coto, E., Laptev, I., Sukthankar, R., Ghanem, B., Zisserman, A., Gabeur, V., et\u00a0al. (2020). The end-of-end-to-end: A video understanding pentathlon challenge (2020). arXiv:2008.00744"},{"key":"2478_CR22","doi-asserted-by":"crossref","unstructured":"Albu, A. B., Bergevin, R., & Quirion, S. (2008). Generic Temporal Segmentation of Cyclic Human Motion. PR 41:6\u201321","DOI":"10.1016\/j.patcog.2007.03.013"},{"key":"2478_CR23","doi-asserted-by":"crossref","unstructured":"Ali, M. K., Kim, D., & Kim, T. H. (2023). Task agnostic restoration of natural video dynamics. In: CVPR","DOI":"10.1109\/ICCV51070.2023.01245"},{"issue":"3","key":"2478_CR24","doi-asserted-by":"crossref","first-page":"587","DOI":"10.1145\/882262.882311","volume":"22","author":"B Allen","year":"2003","unstructured":"Allen, B., Curless, B., & Popovi\u0107, Z. (2003). The space of human body shapes: reconstruction and parameterization from range scans. ACM TOG, 22(3), 587\u2013594.","journal-title":"ACM TOG"},{"key":"2478_CR25","unstructured":"Allen, B., Curless, B., Popovi\u0107, Z., & Hertzmann, A. (2006), Learning a correlated model of identity and pose-dependent body shape variation for real-time synthesis. In: SIGGRAPH"},{"key":"2478_CR26","doi-asserted-by":"crossref","unstructured":"AlMarri, S., Zaheer, M. Z., & Nandakumar, K. (2024). A multi-head approach with shuffled segments for weakly-supervised video anomaly detection. In: WACVw","DOI":"10.1109\/WACVW60836.2024.00022"},{"key":"2478_CR27","unstructured":"Alonso, E., Jelley, A., Micheli, V., Kanervisto, A., Storkey, A., Pearce, T., & Fleuret, F. (2024). Diffusion for world modeling: Visual details matter in atari. arXiv:2405.12399"},{"key":"2478_CR28","doi-asserted-by":"crossref","unstructured":"Alper, M., & Averbuch-Elor, H. (2024). Emergent visual-semantic hierarchies in image-text representations. In: ECCV","DOI":"10.1007\/978-3-031-72943-0_13"},{"key":"2478_CR29","doi-asserted-by":"crossref","unstructured":"Alwassel, H., Heilbron, F. C., Escorcia, V., & Ghanem, B. (2018), Diagnosing error in temporal action detectors. In: ECCV","DOI":"10.1007\/978-3-030-01219-9_16"},{"key":"2478_CR30","doi-asserted-by":"crossref","unstructured":"Alwassel, H., Giancola, S., & Ghanem, B. (2021). Tsp: Temporally-sensitive pretraining of video encoders for localization tasks. In: ICCV","DOI":"10.1109\/ICCVW54120.2021.00356"},{"key":"2478_CR31","doi-asserted-by":"crossref","unstructured":"Amer, M. R., & Todorovic, S. (2012). Sum-product networks for modeling activities with stochastic structure. In: CVPR","DOI":"10.1109\/CVPR.2012.6247816"},{"key":"2478_CR32","doi-asserted-by":"crossref","unstructured":"Amrani, E., Ben-Ari, R., Rotman, D., & Bronstein, A. (2021). 
Noise estimation using density estimation for self-supervised multimodal learning. In: AAAI","DOI":"10.1609\/aaai.v35i8.16822"},{"key":"2478_CR33","unstructured":"An, J., Zhang, S., Yang, H., Gupta, S., Huang, J. B., Luo, J., & Yin, X. (2023). Latent-shift: Latent diffusion with temporal shift for efficient text-to-video generation. arXiv:2304.08477"},{"key":"2478_CR34","doi-asserted-by":"crossref","unstructured":"Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., S\u00fcnderhauf, N., Reid, I., Gould, S., & Van Den\u00a0Hengel, A. (2018). Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In: CVPR","DOI":"10.1109\/CVPR.2018.00387"},{"key":"2478_CR35","unstructured":"Andrew, G., Arora, R., Bilmes, J., & Livescu, K. (2013). Deep canonical correlation analysis. In: ICML"},{"key":"2478_CR36","doi-asserted-by":"crossref","unstructured":"Anguelov, D., Srinivasan, P., Koller, D., Thrun, S., Rodgers, J., & Davis, J. (2005). Scape: shape completion and animation of people. In: SIGGRAPH","DOI":"10.1145\/1186822.1073207"},{"key":"2478_CR37","doi-asserted-by":"crossref","unstructured":"Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., & Parikh, D. (2015). Vqa: Visual question answering. In: ICCV","DOI":"10.1109\/ICCV.2015.279"},{"key":"2478_CR38","doi-asserted-by":"crossref","unstructured":"Arandjelovic, R., & Zisserman, A. (2018). Objects that sound. In: ECCV","DOI":"10.1007\/978-3-030-01246-5_27"},{"key":"2478_CR39","doi-asserted-by":"crossref","unstructured":"Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lu\u010di\u0107, M., & Schmid, C. (2021a) Vivit: A video vision transformer. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00676"},{"key":"2478_CR40","doi-asserted-by":"crossref","unstructured":"Arnab, A., Sun, C., & Schmid, C. (2021b). Unified graph structured models for video understanding. In: CVPR","DOI":"10.1109\/ICCV48922.2021.00801"},{"key":"2478_CR41","doi-asserted-by":"crossref","unstructured":"Ashutosh, K., Girdhar, R., Torresani, L., & Grauman, K. (2023a). Hiervl: Learning hierarchical video-language embeddings. In: CVPR","DOI":"10.1109\/CVPR52729.2023.02209"},{"key":"2478_CR42","unstructured":"Ashutosh, K., Ramakrishnan, S. K., Afouras, T., & Grauman, K. (2023b). Video-mined task graphs for keystep recognition in instructional videos. In: NeurIPS"},{"key":"2478_CR43","doi-asserted-by":"crossref","unstructured":"Astrid, M., Zaheer. M. Z., Lee, J. Y., & Lee, S. I. (2021a). Learning not to reconstruct anomalies. In: BMVC","DOI":"10.5244\/C.35.205"},{"key":"2478_CR44","doi-asserted-by":"crossref","unstructured":"Astrid, M., Zaheer, M. Z., & Lee, S. I. (2021b). Synthetic temporal anomaly guided end-to-end video anomaly detection. In: ICCVw","DOI":"10.1109\/ICCVW54120.2021.00028"},{"key":"2478_CR45","doi-asserted-by":"crossref","unstructured":"Aytar, Y., Vondrick, C., & Torralba, A. (2016). Soundnet: Learning sound representations from unlabeled video. In: NeurIPS","DOI":"10.1109\/CVPR.2016.18"},{"key":"2478_CR46","doi-asserted-by":"crossref","unstructured":"Azy, O., & Ahuja, N. (2008). Segmentation of Periodically Moving Objects. In: ICPR","DOI":"10.1109\/ICPR.2008.4760949"},{"key":"2478_CR47","doi-asserted-by":"crossref","unstructured":"Baade, A., Peng, P., & Harwath, D. (2022). Mae-ast: Masked autoencoding audio spectrogram transformer. In: Interspeech","DOI":"10.21437\/Interspeech.2022-10961"},{"key":"2478_CR48","unstructured":"Babaeizadeh, M., Finn, C., Erhan, D., Campbell, R. H., & Levine, S. (2018). 
Stochastic variational video prediction. In: ICLR"},{"key":"2478_CR49","doi-asserted-by":"crossref","unstructured":"Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., & Baskurt, A. (2011). Sequential deep learning for human action recognition. In: HBU","DOI":"10.1007\/978-3-642-25446-8_4"},{"key":"2478_CR50","doi-asserted-by":"crossref","unstructured":"Bacharidis, K., & Argyros, A. (2023). Repetition-aware Image Sequence Sampling for Recognizing Repetitive Human Actions. In: ICCVw","DOI":"10.1109\/ICCVW60793.2023.00202"},{"key":"2478_CR51","doi-asserted-by":"crossref","unstructured":"Bachmann, R., Mizrahi, D., Atanov, A., & Zamir, A. (2022). Multimae: Multi-modal multi-task masked autoencoders. In: ECCV","DOI":"10.1007\/978-3-031-19836-6_20"},{"key":"2478_CR52","doi-asserted-by":"crossref","unstructured":"Badamdorj, T., Rochan, M., Wang, Y., & Cheng, L. (2022). Contrastive learning for unsupervised video highlight detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01365"},{"key":"2478_CR53","unstructured":"Baevski, A., Hsu, W. N., Xu, Q., Babu, A., Gu, J., & Auli, M. (2022). Data2vec: A general framework for self-supervised learning in speech, vision and language. In: ICML"},{"key":"2478_CR54","doi-asserted-by":"crossref","unstructured":"Bagad, P., Tapaswi, M., & Snoek, C. G. M. (2023). Test of time: Instilling video-language models with a sense of time. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00247"},{"key":"2478_CR55","doi-asserted-by":"crossref","unstructured":"Bai, J., Gao, K., Min, S., Xia, S. T., Li, Z., & Liu, W. (2024a). Badclip: Trigger-aware prompt learning for backdoor attacks on clip. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02288"},{"key":"2478_CR56","doi-asserted-by":"crossref","unstructured":"Bai, S., Ma, B., Chang, H., Huang, R., & Chen, X. (2022). Salient-to-broad transition for video person re-identification. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00719"},{"key":"2478_CR57","doi-asserted-by":"crossref","unstructured":"Bai, Y., Wang, Y., Tong, Y., Yang, Y., Liu, Q., & Liu, J. (2020). Boundary content graph neural network for temporal action proposal generation. In: ECCV","DOI":"10.1007\/978-3-030-58604-1_8"},{"key":"2478_CR58","unstructured":"Bai, Y., Zhou, Y., Zhou, J., Goh, R. S. M., Ting, D. S. W., & Liu, Y. (2024b). From generalist to specialist: Adapting vision language models via task-specific visual instruction tuning. arXiv:2410.06456"},{"key":"2478_CR59","doi-asserted-by":"crossref","unstructured":"Bain, M., Nagrani, A., Varol, G., & Zisserman, A. (2021). Frozen in time: A joint video and image encoder for end-to-end retrieval. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00175"},{"key":"2478_CR60","doi-asserted-by":"crossref","unstructured":"Baldassini, F. B., Shukor, M., Cord, M., Soulier, L., & Piwowarski, B. (2024). What makes multimodal in-context learning work? In: CVPRw","DOI":"10.1109\/CVPRW63382.2024.00161"},{"key":"2478_CR61","unstructured":"Ballas, N., Yao, L., Pal, C., & Courville, A. (2015). Delving deeper into convolutional networks for learning video representations. In: ICLR"},{"key":"2478_CR62","doi-asserted-by":"crossref","unstructured":"Bandara, W. G. C., Patel, N., Gholami, A., Nikkhah, M., Agrawal, M., & Patel, V. M. (2023). Adamae: Adaptive masking for efficient spatiotemporal learning with masked autoencoders. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01394"},{"key":"2478_CR63","doi-asserted-by":"crossref","unstructured":"Bansal, H., Gopalakrishnan, K., Dingliwal, S., Bodapati, S., Kirchhoff, K., & Roth, D. (2023). 
Rethinking the role of scale for in-context learning: An interpretability-based case study at 66 billion scale. In: ACL","DOI":"10.18653\/v1\/2023.acl-long.660"},{"key":"2478_CR64","doi-asserted-by":"crossref","unstructured":"Bansal, S., Arora, C., & Jawahar, C. (2022). My view is the best view: Procedure learning from egocentric videos. In: ECCV","DOI":"10.1007\/978-3-031-19778-9_38"},{"key":"2478_CR65","unstructured":"Bao, H., Dong, L., Piao, S., & Wei, F. (2021). Beit: Bert pre-training of image transformers. arXiv:2106.08254"},{"key":"2478_CR66","unstructured":"Bao, H., Wang, W., Dong, L., Liu, Q., Mohammed, O. K., Aggarwal, K., Som, S., Piao, S., & Wei, F. (2022). Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. In: NeurIPS"},{"key":"2478_CR67","doi-asserted-by":"crossref","unstructured":"Baqu\u00e9, P., Fleuret, F., & Fua, P. (2017). Deep occlusion reasoning for multi-camera multi-target detection. In: ICCV","DOI":"10.1109\/ICCV.2017.38"},{"key":"2478_CR68","unstructured":"Bardes, A., Ponce, J., & LeCun, Y. (2021). Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In: ICLR"},{"key":"2478_CR69","unstructured":"Bardes, A., Ponce, J., & LeCun, Y. (2023). Mc-jepa: A joint-embedding predictive architecture for self-supervised learning of motion and content features. arXiv:2307.12698"},{"key":"2478_CR70","doi-asserted-by":"crossref","unstructured":"Barekatain, M., Mart\u00ed, M., Shih, H. F., Murray, S., Nakayama, K., Matsuo, Y., & Prendinger, H. (2017). Okutama-action: An aerial view video dataset for concurrent human action detection. In: ICCVw","DOI":"10.1109\/CVPRW.2017.267"},{"key":"2478_CR71","unstructured":"Barnard, K., & Forsyth, D. (2001). Learning the semantics of words and pictures. In: ICCV"},{"key":"2478_CR72","unstructured":"Barnard, K., Duygulu, P., Forsyth, D., De\u00a0Freitas, N., Blei, D. M., & Jordan, M. I. (2003). Matching words and pictures. JMLR 3(Feb):1107\u20131135"},{"issue":"4","key":"2478_CR73","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3402447","volume":"16","author":"F Becattini","year":"2020","unstructured":"Becattini, F., Uricchio, T., Seidenari, L., Ballan, L., & Bimbo, A. D. (2020). Am i done? predicting action progress in videos. TOMM, 16(4), 1\u201324.","journal-title":"TOMM"},{"key":"2478_CR74","first-page":"30509","volume":"79","author":"DR Beddiar","year":"2020","unstructured":"Beddiar, D. R., Nini, B., Sabokrou, M., & Hadid, A. (2020). Vision-based human activity recognition: a survey. MTA, 79, 30509\u201330555.","journal-title":"MTA"},{"key":"2478_CR75","doi-asserted-by":"crossref","unstructured":"Ben-Shabat, Y., Yu, X., Saleh, F., Campbell, D., Rodriguez-Opazo, C., Li, H., & Gould, S. (2021). The ikea asm dataset: Understanding people assembling furniture through actions, objects and pose. In: WACV","DOI":"10.1109\/WACV48630.2021.00089"},{"key":"2478_CR76","doi-asserted-by":"crossref","unstructured":"BenAbdelkader, C., Cutler, R. G., & Davis, L. S. (2004). Gait recognition using image self-similarity. In: EURASIP","DOI":"10.1155\/S1110865704309236"},{"key":"2478_CR77","doi-asserted-by":"crossref","unstructured":"Benaim, S., Ephrat, A., Lang, O., Mosseri, I., Freeman, W. T., Rubinstein, M., Irani, M., & Dekel, T. (2020). Speednet: Learning the speediness in videos. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00994"},{"key":"2478_CR78","doi-asserted-by":"crossref","unstructured":"Benfold, B., & Reid, I. (2011). Stable multi-target tracking in real-time surveillance video. 
In: CVPR","DOI":"10.1109\/CVPR.2011.5995667"},{"issue":"8","key":"2478_CR79","doi-asserted-by":"crossref","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","volume":"35","author":"Y Bengio","year":"2013","unstructured":"Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE TPAMI, 35(8), 1798\u20131828.","journal-title":"IEEE TPAMI"},{"key":"2478_CR80","unstructured":"Bengio, Y., L\u00e9onard, N., & Courville, A. (2013b). Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432"},{"key":"2478_CR81","unstructured":"Bertasius, G., Wang, H., & Torresani, L. (2021). Is space-time attention all you need for video understanding? In: ICML"},{"key":"2478_CR82","doi-asserted-by":"crossref","unstructured":"Bhatnagar, B. L., Xie, X., Petrov, I. A., Sminchisescu, C., Theobalt, C., & Pons-Moll, G. (2022). Behave: Dataset and method for tracking human object interactions. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01547"},{"key":"2478_CR83","doi-asserted-by":"crossref","unstructured":"Bilen, H., Fernando. B., Gavves, E., Vedaldi, A., & Gould, S. (2016). Dynamic image networks for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2016.331"},{"key":"2478_CR84","doi-asserted-by":"crossref","unstructured":"Black, M. J., Patel, P., Tesch, J., & Yang, J. (2023). Bedlam: A synthetic dataset of bodies exhibiting detailed lifelike animated motion. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00843"},{"key":"2478_CR85","doi-asserted-by":"crossref","unstructured":"Blank, M., Gorelick, L., Shechtman, E., Irani, M., & Basri, R. (2005). Actions as space-time shapes. In: ICCV","DOI":"10.1109\/ICCV.2005.28"},{"key":"2478_CR86","doi-asserted-by":"crossref","unstructured":"Blattmann, A., Rombach, R., Ling, H., Dockhorn, T., Kim, S. W., Fidler, S., & Kreis, K. (2023). Align your latents: High-resolution video synthesis with latent diffusion models. In: CVPR","DOI":"10.1109\/CVPR52729.2023.02161"},{"key":"2478_CR87","unstructured":"Bleeker, M., Hendriksen, M., Yates, A., & de\u00a0Rijke, M. (2024). Demonstrating and reducing shortcuts in vision-language representation learning. TMLR"},{"key":"2478_CR88","doi-asserted-by":"crossref","first-page":"257","DOI":"10.1109\/34.910878","volume":"23","author":"AF Bobick","year":"2001","unstructured":"Bobick, A. F., & Davis, J. W. (2001). The recognition of human movement using temporal templates. IEEE TPAMI, 23, 257\u2013267.","journal-title":"IEEE TPAMI"},{"key":"2478_CR89","doi-asserted-by":"crossref","unstructured":"de\u00a0Boer, F., van Gemert, J. C., Dijkstra, J., & Pintea, S. L. (2023). Is there progress in activity progress prediction? In: ICCVw","DOI":"10.1109\/ICCVW60793.2023.00318"},{"key":"2478_CR90","doi-asserted-by":"crossref","unstructured":"Bogo, F., Romero, J., Loper, M., & Black, M. J. (2014). Faust: Dataset and evaluation for 3d mesh registration. In: CVPR","DOI":"10.1109\/CVPR.2014.491"},{"key":"2478_CR91","doi-asserted-by":"crossref","unstructured":"Bogo, F., Romero, J., Pons-Moll, G., & Black, M. J. (2017). Dynamic faust: Registering human bodies in motion. In: CVPR","DOI":"10.1109\/CVPR.2017.591"},{"key":"2478_CR92","doi-asserted-by":"crossref","unstructured":"Bokhari, S. Z., & Kitani, K. M. (2017). Long-term activity forecasting using first-person vision. In: ACCV","DOI":"10.1007\/978-3-319-54193-8_22"},{"key":"2478_CR93","doi-asserted-by":"crossref","unstructured":"Bordt, S., Upadhyay, U., Akata, Z., & von Luxburg, U. (2023). 
The manifold hypothesis for gradient-based explanations. In: CVPRw","DOI":"10.1109\/CVPRW59228.2023.00378"},{"issue":"1","key":"2478_CR94","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1109\/TPAMI.2012.89","volume":"35","author":"A Borji","year":"2012","unstructured":"Borji, A., & Itti, L. (2012). State-of-the-art in visual attention modeling. IEEE TPAMI, 35(1), 185\u2013207.","journal-title":"IEEE TPAMI"},{"key":"2478_CR95","doi-asserted-by":"crossref","unstructured":"Bottou, L. (1998). Online algorithms and stochastic approximations. Online learning in neural networks","DOI":"10.1017\/CBO9780511569920.003"},{"key":"2478_CR96","doi-asserted-by":"crossref","first-page":"1244","DOI":"10.1109\/TPAMI.2007.1042","volume":"29","author":"A Briassouli","year":"2007","unstructured":"Briassouli, A., & Ahuja, N. (2007). Extraction and Analysis of Multiple Periodic Motions in Video Sequences. IEEE TPAMI, 29, 1244\u20131261.","journal-title":"IEEE TPAMI"},{"key":"2478_CR97","unstructured":"Bronstein, A., Bronstein, M., Castellani, U., Dubrovina, A., Guibas, L., Horaud, R., Kimmel, R., Knossow, D., Von\u00a0Lavante, E., Mateus, D., et\u00a0al. (2010). Shrec 2010: robust correspondence benchmark. In: Eurographicsw 3D-OR"},{"key":"2478_CR98","unstructured":"Brooks, T., Hellsten, J., Aittala, M., Wang, T. C., Aila, T., Lehtinen, J., Liu, M. Y., Efros, A., & Karras, T. (2022). Generating long videos of dynamic scenes. In: NeurIPS"},{"key":"2478_CR99","unstructured":"Brooks, T., Peebles, B., Holmes, C., DePue, W., Guo, Y., Jing, L., Schnurr, D., Taylor, J., Luhman, T., Luhman, E., Ng, C., Wang, R., & Ramesh, A. (2024). Video generation models as world simulators. https:\/\/openai.com\/research\/video-generation-models-as-world-simulators"},{"key":"2478_CR100","unstructured":"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et\u00a0al. (2020). Language models are few-shot learners. In: NeurIPS"},{"issue":"4","key":"2478_CR101","doi-asserted-by":"crossref","first-page":"86","DOI":"10.1145\/3386569.3392485","volume":"39","author":"M Broxton","year":"2020","unstructured":"Broxton, M., Flynn, J., Overbeck, R., Erickson, D., Hedman, P., Duvall, M., Dourgarian, J., Busch, J., Whalen, M., & Debevec, P. (2020). Immersive light field video with a layered mesh representation. ACM TOG, 39(4), 86\u20131.","journal-title":"ACM TOG"},{"key":"2478_CR102","doi-asserted-by":"crossref","unstructured":"Bugarin, N., Bugaric, J., Barusco, M., Pezze, D. D., & Susto, G. A. (2024). Unveiling the anomalies in an ever-changing world: A benchmark for pixel-level anomaly detection in continual learning. In: CVPRw","DOI":"10.1109\/CVPRW63382.2024.00410"},{"key":"2478_CR103","unstructured":"Bulat, A., Perez\u00a0Rua, J. M., Sudhakaran, S., Martinez, B., & Tzimiropoulos, G. (2021). Space-time mixing attention for video transformer. In: NeurIPS"},{"key":"2478_CR104","doi-asserted-by":"crossref","unstructured":"Buxton, H. (2003). Learning and understanding dynamic scene activity: a review. IVC 21(1)","DOI":"10.1016\/S0262-8856(02)00127-0"},{"key":"2478_CR105","doi-asserted-by":"crossref","unstructured":"Caba\u00a0Heilbron, F., Escorcia, V., Ghanem, B., & Carlos\u00a0Niebles, J. (2015). Activitynet: A large-scale video benchmark for human activity understanding. 
In: CVPR","DOI":"10.1109\/CVPR.2015.7298698"},{"key":"2478_CR106","first-page":"2761","volume":"25","author":"D Cai","year":"2022","unstructured":"Cai, D., Qian, S., Fang, Q., Hu, J., Ding, W., & Xu, C. (2022). Heterogeneous graph contrastive learning network for personalized micro-video recommendation. IEEE TMM, 25, 2761\u20132773.","journal-title":"IEEE TMM"},{"key":"2478_CR107","doi-asserted-by":"crossref","unstructured":"Cai, Y., Li, H., Hu, J. F., & Zheng, W. S. (2019). Action knowledge transfer for action prediction with partial videos. In: AAAI","DOI":"10.1609\/aaai.v33i01.33018118"},{"key":"2478_CR108","doi-asserted-by":"crossref","unstructured":"Cai, Z., Ren, D., Zeng, A., Lin, Z., Yu, T., Wang, W., Fan, X., Gao, Y., Yu, Y., Pan, L., et\u00a0al. (2022b). Humman: Multi-modal 4d human dataset for versatile sensing and modeling. In: ECCV","DOI":"10.1007\/978-3-031-20071-7_33"},{"issue":"8","key":"2478_CR109","doi-asserted-by":"crossref","first-page":"1243","DOI":"10.1093\/cercor\/bhi007","volume":"15","author":"B Calvo-Merino","year":"2005","unstructured":"Calvo-Merino, B., Glaser, D. E., Gr\u00e8zes, J., Passingham, R. E., & Haggard, P. (2005). Action observation and acquired motor skills: an fmri study with expert dancers. Cerebral cortex, 15(8), 1243\u20131249.","journal-title":"Cerebral cortex"},{"key":"2478_CR110","doi-asserted-by":"crossref","unstructured":"Cao, M., Chen, L., Shou, M. Z., Zhang, C., & Zou, Y. (2021). On pursuit of designing multi-modal transformer for video grounding. In: EMNLP","DOI":"10.18653\/v1\/2021.emnlp-main.773"},{"key":"2478_CR111","doi-asserted-by":"crossref","unstructured":"Cao, Y., Barrett, D., Barbu, A., Narayanaswamy, S., Yu, H., Michaux, A., Lin, Y., Dickinson, S., Mark\u00a0Siskind, J., & Wang, S. (2013). Recognize human activities from partially observed videos. In: CVPR","DOI":"10.1109\/CVPR.2013.343"},{"key":"2478_CR112","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-end object detection with transformers. In: ECCV","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"2478_CR113","unstructured":"Carlini, N., & Terzis, A. (2022). Poisoning and backdooring contrastive learning. In: ICLR"},{"key":"2478_CR114","unstructured":"Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., & Joulin, A. (2020). Unsupervised learning of visual features by contrasting cluster assignments. In: NeurIPS"},{"key":"2478_CR115","doi-asserted-by":"crossref","unstructured":"Caron, M., Touvron, H., Misra, I., J\u00e9gou, H., Mairal, J., Bojanowski, P., & Joulin, A. (2021). Emerging properties in self-supervised vision transformers. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00951"},{"key":"2478_CR116","doi-asserted-by":"crossref","unstructured":"Carreira, J., & Zisserman, A. (2017). Quo vadis, action recognition? a new model and the kinetics dataset. In: CVPR","DOI":"10.1109\/CVPR.2017.502"},{"key":"2478_CR117","unstructured":"Carreira, J., Noland, E., Banki-Horvath, A., Hillier, C., & Zisserman, A. (2018). A short note about kinetics-600. arXiv:1808.01340"},{"key":"2478_CR118","unstructured":"Carreira, J., Noland. E., Hillier, C., & Zisserman, A. (2019). A short note on the kinetics-700 human action dataset. arXiv:1907.06987"},{"key":"2478_CR119","doi-asserted-by":"crossref","unstructured":"Castrejon, L., Ballas, N., & Courville, A. (2019). Improved conditional vrnns for video prediction. 
In: ICCV","DOI":"10.1109\/ICCV.2019.00770"},{"issue":"2","key":"2478_CR120","first-page":"129","volume":"13","author":"C Cedras","year":"1995","unstructured":"Cedras, C., & Shah, M. (1995). Motion-based recognition a survey. IVC, 13(2), 129\u2013155.","journal-title":"Motion-based recognition a survey. IVC"},{"key":"2478_CR121","doi-asserted-by":"crossref","unstructured":"Chaabane, M., Trabelsi, A., Blanchard, N., & Beveridge, R. (2020). Looking ahead: Anticipating pedestrians crossing with future frames prediction. In: WACV","DOI":"10.1109\/WACV45572.2020.9093426"},{"issue":"12","key":"2478_CR122","first-page":"10873","volume":"39","author":"AA Chaaraoui","year":"2012","unstructured":"Chaaraoui, A. A., Climent-P\u00e9rez, P., & Fl\u00f3rez-Revuelta, F. (2012). A review on vision techniques applied to human behaviour analysis for ambient-assisted living. ESWA, 39(12), 10873\u201310888.","journal-title":"ESWA"},{"key":"2478_CR123","unstructured":"Chandrasegaran, K., Gupta, A., Hadzic, L. M., Kota, T., He, J., Eyzaguirre, C., Durante, Z., Li, M., Wu, J., & Fei-Fei, L. (2024). Hourvideo: 1-hour video-language understanding. In: NeurIPS"},{"key":"2478_CR124","doi-asserted-by":"crossref","unstructured":"Chang, C. Y., Huang, D. A., Sui, Y., Fei-Fei, L., & Niebles, J. C. (2019). D3tw: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. In: CVPR","DOI":"10.1109\/CVPR.2019.00366"},{"key":"2478_CR125","unstructured":"Chang, M., Prakash, A., & Gupta, S. (2024). Look ma, no hands! agent-environment factorization of egocentric videos. In: NeurIPS"},{"key":"2478_CR126","unstructured":"Chang, Z., Zhang, X., Wang, S., Ma, S., Ye, Y., Xinguang, X., & Gao, W. (2021). Mau: A motion-aware unit for video prediction and beyond. In: NeurIPS"},{"key":"2478_CR127","doi-asserted-by":"crossref","unstructured":"Chang, Z., Zhang, X., Wang, S., Ma, S., & Gao, W. (2022). Strpm: A spatiotemporal residual predictive model for high-resolution video prediction. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01356"},{"key":"2478_CR128","doi-asserted-by":"crossref","unstructured":"Chao, Y. W., Vijayanarasimhan, S., Seybold, B., Ross, D. A., Deng, J., & Sukthankar, R. (2018). Rethinking the faster r-cnn architecture for temporal action localization. In: CVPR","DOI":"10.1109\/CVPR.2018.00124"},{"key":"2478_CR129","doi-asserted-by":"crossref","unstructured":"Chao, Y. W., Yang, W., Xiang, Y., Molchanov, P., Handa, A., Tremblay, J., Narang, Y. S., Van\u00a0Wyk, K., Iqbal, U., Birchfield, S., et\u00a0al. (2021). Dexycb: A benchmark for capturing hand grasping of objects. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00893"},{"key":"2478_CR130","doi-asserted-by":"crossref","unstructured":"Chatterjee, M., Ahuja, N., & Cherian, A. (2021). A hierarchical variational neural uncertainty model for stochastic video prediction. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00961"},{"key":"2478_CR131","doi-asserted-by":"crossref","unstructured":"Chen, C., Ashutosh, K., Girdhar, R., Harwath, D., & Grauman, K. (2024a). Soundingactions: Learning how actions sound from narrated egocentric videos. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02573"},{"key":"2478_CR132","unstructured":"Chen, D., & Dolan, W. B. (2011). Collecting highly parallel data for paraphrase evaluation. In: ACL"},{"key":"2478_CR133","doi-asserted-by":"crossref","unstructured":"Chen, G., Zheng, Y. D., Wang, L., & Lu, T. (2022a). Dcan: improving temporal action detection via dual context aggregation. 
In: AAAI","DOI":"10.1609\/aaai.v36i1.19900"},{"key":"2478_CR134","unstructured":"Chen, G., Huang, Y., Xu, J., Pei, B., Chen, Z., Li, Z., Wang, J., Li, K., Lu, T., & Wang, L. (2024b). Video mamba suite: State space model as a versatile alternative for video understanding. arXiv:2403.09626"},{"key":"2478_CR135","doi-asserted-by":"crossref","unstructured":"Chen, H., Xie, W., Vedaldi, A., & Zisserman, A. (2020a). Vggsound: A large-scale audio-visual dataset. In: ICASSP","DOI":"10.1109\/ICASSP40776.2020.9053174"},{"key":"2478_CR136","doi-asserted-by":"crossref","unstructured":"Chen, H., Huang, Z., Hong, Y., Wang, Y., Lyu, Z., Xu, Z., Lan, J., & Gu, Z. (2024c). Efficient transfer learning for video-language foundation models. arXiv:2411.11223","DOI":"10.1109\/CVPR52734.2025.02712"},{"key":"2478_CR137","doi-asserted-by":"crossref","unstructured":"Chen, J., Chen, X., Ma, L., Jie, Z., & Chua, T. S. (2018a). Temporally grounding natural sentence in video. In: EMNLP","DOI":"10.18653\/v1\/D18-1015"},{"key":"2478_CR138","doi-asserted-by":"crossref","unstructured":"Chen, L., Yan, X., Xiao, J., Zhang, H., Pu, S., & Zhuang, Y. (2020b). Counterfactual samples synthesizing for robust visual question answering. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01081"},{"issue":"9","key":"2478_CR139","first-page":"6058","volume":"32","author":"L Chen","year":"2022","unstructured":"Chen, L., Lu, J., Song, Z., & Zhou, J. (2022). Ambiguousness-aware state evolution for action prediction. IEEE TCSVT, 32(9), 6058\u20136072.","journal-title":"IEEE TCSVT"},{"key":"2478_CR140","doi-asserted-by":"crossref","unstructured":"Chen, M., Wei, F., Li, C., & Cai, D. (2022c). Frame-wise action representations for long videos via sequence contrastive learning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01343"},{"key":"2478_CR141","doi-asserted-by":"crossref","unstructured":"Chen, M., Wen, C., Zheng, F., He, F., & Shao, L. (2022d). Vita: A multi-source vicinal transfer augmentation method for out-of-distribution generalization. In: AAAI","DOI":"10.1609\/aaai.v36i1.19908"},{"key":"2478_CR142","doi-asserted-by":"crossref","unstructured":"Chen, P., Huang, D., He, D., Long, X., Zeng, R., Wen, S., Tan, M., & Gan, C. (2021a). Rspnet: Relative speed perception for unsupervised video representation learning. In: AAAI","DOI":"10.1609\/aaai.v35i2.16189"},{"key":"2478_CR143","doi-asserted-by":"crossref","unstructured":"Chen, S., & Jiang, Y. G. (2021). Towards bridging event captioner and sentence localizer for weakly supervised dense event captioning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00832"},{"key":"2478_CR144","doi-asserted-by":"crossref","unstructured":"Chen, S., Chen, J., & Jin, Q. (2017a). Generating video descriptions with topic guidance. In: ICMR","DOI":"10.1145\/3078971.3079000"},{"key":"2478_CR145","doi-asserted-by":"crossref","unstructured":"Chen, S., Sun, P., Xie, E., Ge, C., Wu, J., Ma, L., Shen, J., & Luo, P. (2021b). Watch only once: An end-to-end video action detection framework. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00807"},{"key":"2478_CR146","unstructured":"Chen, S., Li, H., Wang, Q., Zhao, Z., Sun, M., Zhu, X., & Liu, J. (2023a). Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset. In: NeurIPS"},{"key":"2478_CR147","unstructured":"Chen, S., Han, Z., He, B., Buckley, M., Torr, P., Tresp, V., & Gu, J. (2024d). Understanding and improving in-context learning on vision-language models. In: ICLRw"},{"key":"2478_CR148","unstructured":"Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020c). 
A simple framework for contrastive learning of visual representations. In: ICML"},{"key":"2478_CR149","unstructured":"Chen, T., Luo, C., & Li, L. (2021c). Intriguing properties of contrastive losses. In: NeurIPS"},{"key":"2478_CR150","doi-asserted-by":"crossref","unstructured":"Chen, T. S., Siarohin, A., Menapace, W., Deyneka, E., Chao, H. w., Jeon, B. E., Fang, Y., Lee, H. Y., Ren, J., Yang, M. H., et\u00a0al. (2024e). Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01265"},{"key":"2478_CR151","doi-asserted-by":"crossref","unstructured":"Chen, X., Wang, W., Wang, J., & Li, W. (2017b). Learning object-centric transformation for video prediction. In: MM","DOI":"10.1145\/3123266.3123349"},{"key":"2478_CR152","unstructured":"Chen, Y., Kalantidis, Y., Li, J., Yan, S., & Feng, J. (2018b). $${\\text{A}}^{\\wedge }$$ 2-nets: Double attention networks. In: NeurIPS"},{"key":"2478_CR153","doi-asserted-by":"crossref","unstructured":"Chen, Y., Kalantidis, Y., Li, J., Yan, S., & Feng, J. (2018c). Multi-fiber networks for video recognition. In: ECCV","DOI":"10.1007\/978-3-030-01246-5_22"},{"key":"2478_CR154","doi-asserted-by":"crossref","unstructured":"Chen, Y., Fan, H., Xu, B., Yan, Z., Kalantidis, Y., Rohrbach, M., Yan, S., & Feng, J. (2019). Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution. In: CVPR","DOI":"10.1109\/ICCV.2019.00353"},{"key":"2478_CR155","doi-asserted-by":"crossref","unstructured":"Chen, Y., Liu, Z., Zhang, B., Fok, W., Qi, X., & Wu, Y. C. (2023b). Mgfn: Magnitude-contrastive glance-and-focus network for weakly-supervised video anomaly detection. In: AAAI","DOI":"10.1609\/aaai.v37i1.25112"},{"key":"2478_CR156","doi-asserted-by":"crossref","unstructured":"Cheng, F., & Bertasius, G. (2022). Tallformer: Temporal action localization with a long-memory transformer. In: ECCV","DOI":"10.1007\/978-3-031-19830-4_29"},{"key":"2478_CR157","doi-asserted-by":"crossref","unstructured":"Cheng, F., Xu, M., Xiong, Y., Chen, H., Li, X., Li, W., & Xia, W. (2022). Stochastic backpropagation: A memory efficient strategy for training video models. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00812"},{"key":"2478_CR158","doi-asserted-by":"crossref","unstructured":"Cheng, F., Wang, X., Lei, J., Crandall, D., Bansal, M., & Bertasius, G. (2023). Vindlu: A recipe for effective video-and-language pretraining. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01034"},{"key":"2478_CR159","doi-asserted-by":"crossref","unstructured":"Cheng, S., Guo, Z., Wu, J., Fang, K., Li, P., Liu, H., & Liu, Y. (2024). Egothink: Evaluating first-person perspective thinking capability of vision-language models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01355"},{"key":"2478_CR160","doi-asserted-by":"crossref","unstructured":"Cherian, A., Hori, C., Marks, T. K., & Le\u00a0Roux, J. (2022). (2.5+ 1) d spatio-temporal scene graphs for video question answering. In: AAAI","DOI":"10.1609\/aaai.v36i1.19922"},{"key":"2478_CR161","doi-asserted-by":"crossref","unstructured":"Chi, H. g., Lee, K., Agarwal, N., Xu, Y., Ramani, K., & Choi, C. (2023). Adamsformer for spatial action localization in the future. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01715"},{"key":"2478_CR162","doi-asserted-by":"crossref","unstructured":"Cho, M., Kim, T., Kim, W. J., Cho, S., & Lee, S. (2022). Unsupervised video anomaly detection via normalizing flows with implicit latent features. 
PR 129:108703","DOI":"10.1016\/j.patcog.2022.108703"},{"key":"2478_CR163","unstructured":"Choi, J., Gao, C., Messou, J. C., & Huang, J. B. (2019). Why can\u2019t i dance in the mall? learning to mitigate scene bias in action recognition. In: NeurIPS"},{"key":"2478_CR164","doi-asserted-by":"crossref","unstructured":"Choi, W., & Savarese, S. (2012). A unified framework for multi-target tracking and collective activity recognition. In: ECCV","DOI":"10.1007\/978-3-642-33765-9_16"},{"issue":"1","key":"2478_CR165","doi-asserted-by":"crossref","first-page":"6386","DOI":"10.1038\/s41467-020-19712-x","volume":"11","author":"E Chong","year":"2020","unstructured":"Chong, E., Clark-Whitney, E., Southerland, A., Stubbs, E., Miller, C., Ajodan, E. L., Silverman, M. R., Lord, C., Rozga, A., Jones, R. M., & Rehg, J. M. (2020). Detection of eye contact with deep neural networks is as accurate as human experts. Nature Communications, 11(1), 6386.","journal-title":"Nature Communications"},{"key":"2478_CR166","doi-asserted-by":"crossref","unstructured":"Chong, E., Wang, Y., Ruiz, N., & Rehg, J. M. (2020b). Detecting attended visual targets in video. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00544"},{"key":"2478_CR167","unstructured":"Chu, W. H., Ke, L., & Fragkiadaki, K. (2024). Dreamscene4d: Dynamic multi-object scene generation from monocular videos. In: NeurIPS"},{"key":"2478_CR168","doi-asserted-by":"crossref","unstructured":"Chun, S., Oh, S. J., De\u00a0Rezende, R. S., Kalantidis, Y., & Larlus, D. (2021). Probabilistic embeddings for cross-modal retrieval. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00831"},{"key":"2478_CR169","unstructured":"Chung, J., & Zisserman, A. (2016). Signs in time: Encoding human motion as a temporal image. In: ECCVw"},{"key":"2478_CR170","doi-asserted-by":"crossref","unstructured":"Chung, J., Wuu, C. h., Yang, Hr., Tai, Y. W., & Tang, C. K. (2021). Haa500: Human-centric atomic action dataset with curated videos. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01321"},{"key":"2478_CR171","unstructured":"Cipolla, R., & Blake, A. (1990). The dynamic analysis of apparent contours. In: ICCV"},{"key":"2478_CR172","unstructured":"Clark, A., Donahue, J., & Simonyan, K. (2019). Adversarial video generation on complex datasets. arXiv:1907.06571"},{"key":"2478_CR173","doi-asserted-by":"crossref","unstructured":"Cole, E., Yang, X., Wilber, K., Mac\u00a0Aodha, O., & Belongie, S. (2022). When does contrastive visual representation learning work? In: CVPR","DOI":"10.1109\/CVPR52688.2022.01434"},{"key":"2478_CR174","doi-asserted-by":"crossref","unstructured":"Corona, E., Pumarola, A., Alenya, G., Moreno-Noguer, F., & Rogez, G. (2020). Ganhand: Predicting human grasp affordances in multi-object scenes. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00508"},{"key":"2478_CR175","doi-asserted-by":"crossref","unstructured":"Coskun, H., Zareian, A., Moore, J. L., Tombari, F., & Wang, C. (2022). Goca: Guided online cluster assignment for self-supervised video representation learning. In: ECCV","DOI":"10.1007\/978-3-031-19821-2_1"},{"key":"2478_CR176","doi-asserted-by":"crossref","unstructured":"Cui, Y., Zeng, C., Zhao, X., Yang, Y., Wu, G., & Wang, L. (2023). Sportsmot: A large multi-object tracking dataset in multiple sports scenes. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00910"},{"key":"2478_CR177","doi-asserted-by":"crossref","first-page":"781","DOI":"10.1109\/34.868681","volume":"22","author":"R Cutler","year":"2000","unstructured":"Cutler, R., & Davis, L. S. (2000). 
Robust Real-Time Periodic Motion Detection, Analysis, and Applications. IEEE TPAMI, 22, 781\u2013796.","journal-title":"IEEE TPAMI"},{"key":"2478_CR178","unstructured":"Czolbe, S., Krause, O., Cox, I., & Igel, C. (2020). A loss function for generative neural networks based on watson\u2019s perceptual model. In: NeurIPS"},{"key":"2478_CR179","doi-asserted-by":"crossref","unstructured":"Da\u00a0Costa, V. G. T., Zara, G., Rota, P., Oliveira-Santos, T., Sebe, N., Murino, V., & Ricci, E. (2022). Unsupervised domain adaptation for video transformers in action recognition. In: ICPR","DOI":"10.1109\/ICPR56361.2022.9956679"},{"key":"2478_CR180","doi-asserted-by":"crossref","unstructured":"Dai, R., Das, S., Minciullo, L., Garattoni, L., Francesca, G., & Bremond, F. (2021). Pdan: Pyramid dilated attention network for action detection. In: WACV","DOI":"10.1109\/WACV48630.2021.00301"},{"key":"2478_CR181","doi-asserted-by":"crossref","first-page":"2533","DOI":"10.1109\/TPAMI.2022.3169976","volume":"45","author":"R Dai","year":"2022","unstructured":"Dai, R., Das, S., Sharma, S., Minciullo, L., Garattoni, L., Bremond, F., & Francesca, G. (2022). Toyota smarthome untrimmed: Real-world untrimmed videos for activity detection. IEEE TPAMI, 45, 2533\u20132550.","journal-title":"IEEE TPAMI"},{"key":"2478_CR182","unstructured":"Dai, Y., Tang, D., Liu, L., Tan, M., Zhou, C., Wang, J., Feng, Z., Zhang, F., Hu, X., & Shi, S. (2022b). One model, multiple modalities: A sparsely activated approach for text, sound, image, video and code. arXiv:2205.06126,"},{"key":"2478_CR183","doi-asserted-by":"crossref","unstructured":"Damen, D., Leelasawassuk, T., Haines, O., Calway, A., & Mayol-Cuevas, W. W. (2014). You-do, i-learn: Discovering task relevant objects and their modes of interaction from multi-user egocentric video. In: BMVC","DOI":"10.5244\/C.28.30"},{"key":"2478_CR184","first-page":"98","volume":"149","author":"D Damen","year":"2016","unstructured":"Damen, D., Leelasawassuk, T., & Mayol-Cuevas, W. (2016). You-do, i-learn: Egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance. CVIU, 149, 98\u2013112.","journal-title":"CVIU"},{"key":"2478_CR185","doi-asserted-by":"crossref","unstructured":"Damen, D., Doughty, H., Farinella, G. M., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W., et\u00a0al. (2018). Scaling egocentric vision: The epic-kitchens dataset. In: ECCV","DOI":"10.1007\/978-3-030-01225-0_44"},{"key":"2478_CR186","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11263-021-01531-2","volume":"130","author":"D Damen","year":"2022","unstructured":"Damen, D., Doughty, H., Farinella, G. M., Furnari, A., Kazakos, E., Ma, J., Moltisanti, D., Munro, J., Perrett, T., Price, W., et al. (2022). Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100. IJCV, 130, 1\u201323.","journal-title":"IJCV"},{"key":"2478_CR187","doi-asserted-by":"crossref","unstructured":"Dang, L. H., Le, T. M., Le, V., & Tran, T. (2021). Hierarchical object-oriented spatio-temporal reasoning for video question answering. In: IJCAI","DOI":"10.24963\/ijcai.2021\/88"},{"key":"2478_CR188","unstructured":"Dao, T., Fu, D., Ermon, S., Rudra, A., & R\u00e9, C. (2022). Flashattention: Fast and memory-efficient exact attention with io-awareness. In: NeurIPS"},{"key":"2478_CR189","volume":"219","author":"I Dave","year":"2022","unstructured":"Dave, I., Gupta, R., Rizve, M. N., & Shah, M. (2022). 
Tclr: Temporal contrastive learning for video representation. CVIU, 219, Article 103406.","journal-title":"CVIU"},{"key":"2478_CR190","doi-asserted-by":"crossref","unstructured":"Davtyan, A., Sameni, S., & Favaro, P. (2023). Efficient video prediction via sparsely conditioned flow matching. In: ICCV","DOI":"10.1109\/ICCV51070.2023.02126"},{"key":"2478_CR191","doi-asserted-by":"crossref","unstructured":"De\u00a0Geest, R., Gavves, E., Ghodrati, A., Li, Z., Snoek, C. G. M., & Tuytelaars, T. (2016). Online action detection. In: ECCV","DOI":"10.1007\/978-3-319-46454-1_17"},{"key":"2478_CR192","doi-asserted-by":"crossref","unstructured":"Delmas, G., Weinzaepfel, P., Lucas, T., Moreno-Noguer, F., & Rogez, G. (2022). Posescript: 3d human poses from natural language. In: ECCV","DOI":"10.1007\/978-3-031-20068-7_20"},{"key":"2478_CR193","doi-asserted-by":"crossref","unstructured":"Deng, C., Chen, S., Chen, D., He, Y., & Wu, Q. (2021). Sketch, ground, and refine: Top-down dense video captioning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00030"},{"key":"2478_CR194","unstructured":"Denton, E., & Fergus, R. (2018). Stochastic video generation with a learned prior. In: ICML"},{"issue":"6","key":"2478_CR195","doi-asserted-by":"crossref","first-page":"6703","DOI":"10.1109\/TPAMI.2021.3055233","volume":"45","author":"E Dessalene","year":"2021","unstructured":"Dessalene, E., Devaraj, C., Maynord, M., Ferm\u00fcller, C., & Aloimonos, Y. (2021). Forecasting action through contact representations from first person video. IEEE TPAMI, 45(6), 6703\u20136714.","journal-title":"IEEE TPAMI"},{"key":"2478_CR196","doi-asserted-by":"crossref","unstructured":"Destro, M., & Gygli, M. (2024). CycleCL: Self-supervised Learning for Periodic Videos. In: WACV","DOI":"10.1109\/WACV57701.2024.00284"},{"key":"2478_CR197","unstructured":"Dhariwal, P., & Nichol, A. (2021) Diffusion models beat gans on image synthesis. In: NeurIPS"},{"key":"2478_CR198","unstructured":"Dhiman, A., Srinath, R., Sarkar, S., Boregowda, L. R., & Babu, R. V. (2023). Corf: Colorizing radiance fields using knowledge distillation. In: ICCVw"},{"key":"2478_CR199","first-page":"21","volume":"77","author":"C Dhiman","year":"2019","unstructured":"Dhiman, C., & Vishwakarma, D. K. (2019). A review of state-of-the-art techniques for abnormal human activity recognition. EAAI, 77, 21\u201345.","journal-title":"EAAI"},{"key":"2478_CR200","doi-asserted-by":"crossref","unstructured":"Diba, A., Fayyaz, M., Sharma, V., Paluri, M., Gall, J., Stiefelhagen, R., & Van\u00a0Gool, L. (2020). Large scale holistic video understanding. In: ECCV","DOI":"10.1007\/978-3-030-58558-7_35"},{"key":"2478_CR201","doi-asserted-by":"crossref","unstructured":"Diba, A., Sharma, V., Safdari, R., Lotfi, D., Sarfraz, S., Stiefelhagen, R., & Van\u00a0Gool, L. (2021). Vi2clr: Video and image for visual contrastive learning of representation. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00153"},{"issue":"1\u20132","key":"2478_CR202","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1016\/S0004-3702(96)00034-3","volume":"89","author":"TG Dietterich","year":"1997","unstructured":"Dietterich, T. G., Lathrop, R. H., & Lozano-P\u00e9rez, T. (1997). Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence, 89(1\u20132), 31\u201371.","journal-title":"Artificial intelligence"},{"key":"2478_CR203","doi-asserted-by":"crossref","unstructured":"Diko, A., Avola, D., Prenkaj, B., Fontana, F., & Cinque, L. (2024). 
Semantically guided representation learning for action anticipation. In: ECCV","DOI":"10.1007\/978-3-031-73390-1_26"},{"key":"2478_CR204","doi-asserted-by":"crossref","first-page":"1011","DOI":"10.1109\/TPAMI.2023.3327284","volume":"46","author":"G Ding","year":"2023","unstructured":"Ding, G., Sener, F., & Yao, A. (2023). Temporal action segmentation: An analysis of modern techniques. IEEE TPAMI, 46, 1011\u20131030.","journal-title":"IEEE TPAMI"},{"issue":"5","key":"2478_CR205","first-page":"2567","volume":"44","author":"K Ding","year":"2020","unstructured":"Ding, K., Ma, K., Wang, S., & Simoncelli, E. P. (2020). Image quality assessment: Unifying structure and texture similarity. IEEE TPAMI, 44(5), 2567\u20132581.","journal-title":"IEEE TPAMI"},{"key":"2478_CR206","doi-asserted-by":"crossref","unstructured":"Ding, S., Qian, R., Xu, H., Lin, D., & Xiong, H. (2024). Betrayed by attention: A simple yet effective approach for self-supervised video object segmentation. In: ECCV","DOI":"10.1007\/978-3-031-72995-9_13"},{"key":"2478_CR207","unstructured":"Doll\u00e1r, P., Rabaud, V., Cottrell, G., & Belongie, S. (2005). Behavior recognition via sparse spatio-temporal features. In: VS-PETS"},{"key":"2478_CR208","doi-asserted-by":"crossref","unstructured":"Donahue, G., & Elhamifar, E. (2024). Learning to predict activity progress by self-supervised video alignment. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01766"},{"key":"2478_CR209","doi-asserted-by":"crossref","unstructured":"Donahue, J., Hendricks, L. A., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., & Darrell, T. (2015). Long-term recurrent convolutional networks for visual recognition and description. In: CVPR","DOI":"10.21236\/ADA623249"},{"key":"2478_CR210","unstructured":"Dong, H., Chharia, A., Gou, W., Vicente\u00a0Carrasco, F., & De\u00a0la Torre, F. D. (2024). Hamba: Single-view 3d hand reconstruction with graph-guided bi-scanning mamba. In: NeurIPS"},{"issue":"12","key":"2478_CR211","first-page":"3377","volume":"20","author":"J Dong","year":"2018","unstructured":"Dong, J., Li, X., & Snoek, C. G. M. (2018). Predicting visual features from text for image and video caption retrieval. IEEE TM, 20(12), 3377\u20133388.","journal-title":"IEEE TM"},{"key":"2478_CR212","doi-asserted-by":"crossref","unstructured":"Dorkenwald, M., Milbich, T., Blattmann, A., Rombach, R., Derpanis, K. G., & Ommer, B. (2021). Stochastic image-to-video synthesis using cinns. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00374"},{"key":"2478_CR213","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et\u00a0al. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR"},{"key":"2478_CR214","doi-asserted-by":"crossref","unstructured":"Doughty, H., & Snoek, C. G. M. (2022). How do you do it? fine-grained action understanding with pseudo-adverbs. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01346"},{"key":"2478_CR215","doi-asserted-by":"crossref","unstructured":"Doughty, H., Damen, D., & Mayol-Cuevas, W. (2018). Who\u2019s better? who\u2019s best? pairwise deep ranking for skill determination. In: CVPR","DOI":"10.1109\/CVPR.2018.00634"},{"key":"2478_CR216","doi-asserted-by":"crossref","unstructured":"Doughty, H., Laptev, I., Mayol-Cuevas, W., & Damen, D. (2020). Action modifiers: Learning from adverbs in instructional videos. 
In: CVPR","DOI":"10.1109\/CVPR42600.2020.00095"},{"key":"2478_CR217","unstructured":"Du, C., Li, Y., Qiu, Z., & Xu, C. (2023). Stable diffusion is unstable. In: NeurIPS"},{"issue":"3","key":"2478_CR218","doi-asserted-by":"crossref","first-page":"1347","DOI":"10.1109\/TIP.2017.2778563","volume":"27","author":"W Du","year":"2017","unstructured":"Du, W., Wang, Y., & Qiao, Y. (2017). Recurrent spatial-temporal attention network for action recognition in videos. IEEE T-IP, 27(3), 1347\u20131360.","journal-title":"IEEE T-IP"},{"key":"2478_CR219","doi-asserted-by":"crossref","unstructured":"Dubey, S., Boragule, A., & Jeon, M. (2019). 3d resnet with ranking loss function for abnormal activity detection in videos. In: ICCAIS","DOI":"10.1109\/ICCAIS46528.2019.9074586"},{"key":"2478_CR220","unstructured":"Dvornik, M., Hadji, I., Derpanis, K. G., Garg, A., & Jepson, A. (2021). Drop-dtw: Aligning common signal between sequences while dropping outliers. In: NeurIPS"},{"key":"2478_CR221","unstructured":"Dwibedi, D., Sermanet, P., & Tompson, J. (2018). Temporal reasoning in videos using convolutional gated recurrent units. In: CVPRw"},{"key":"2478_CR222","doi-asserted-by":"crossref","unstructured":"Dwibedi, D., Aytar, Y., Tompson, J., Sermanet, P., & Zisserman, A. (2020). Counting out time: Class agnostic video repetition counting in the wild. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01040"},{"key":"2478_CR223","unstructured":"Dwibedi, D., Aytar, Y., Tompson, J., & Zisserman, A. (2024). Ovr: A dataset for open vocabulary temporal repetition counting in videos. arXiv:2407.17085"},{"key":"2478_CR224","doi-asserted-by":"crossref","unstructured":"Dwivedi, S. K., Sun, Y., Patel, P., Feng, Y., & Black, M. J. (2024). Tokenhmr: Advancing human mesh recovery with a tokenized pose representation. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00132"},{"key":"2478_CR225","doi-asserted-by":"crossref","unstructured":"Eagleman, D. M. (2010). How does the timing of neural signals map onto the timing of perception. Space and time in perception and action pp 216\u2013231","DOI":"10.1017\/CBO9780511750540.014"},{"key":"2478_CR226","first-page":"73","volume":"144","author":"M Edwards","year":"2016","unstructured":"Edwards, M., Deng, J., & Xie, X. (2016). From pose to activity: Surveying datasets and introducing converse. CVIU, 144, 73\u2013105.","journal-title":"CVIU"},{"key":"2478_CR227","doi-asserted-by":"crossref","unstructured":"Efros, A., Berg, A., Mori, G., & Malik, J. (2003). Recognizing action at a distance. In: ICCV","DOI":"10.1109\/ICCV.2003.1238420"},{"key":"2478_CR228","doi-asserted-by":"crossref","unstructured":"Engel, J., Sch\u00f6ps, T., & Cremers, D. (2014). Lsd-slam: Large-scale direct monocular slam. In: ECCV","DOI":"10.1007\/978-3-319-10605-2_54"},{"key":"2478_CR229","doi-asserted-by":"crossref","unstructured":"Epstein, D., Chen, B., Vondrick, C. (2020). Oops! predicting unintentional action in video. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00100"},{"key":"2478_CR230","doi-asserted-by":"crossref","unstructured":"Epstein, D., Wu, J., Schmid, C., & Sun, C. (2021). Learning temporal dynamics from cycles in narrated video. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00151"},{"key":"2478_CR231","unstructured":"Escontrela, A., Adeniji, A., Yan, W., Jain, A., Peng, X. B., Goldberg, K., Lee, Y., Hafner, D., & Abbeel, P. (2023). Video prediction models as rewards for reinforcement learning. In: NeurIPS"},{"key":"2478_CR232","unstructured":"Escorcia, V., Soldan, M., Sivic, J., Ghanem, B., & Russell, B. (2019). 
Temporal localization of moments in video collections with natural language. arXiv:1907.12763"},{"key":"2478_CR233","doi-asserted-by":"crossref","unstructured":"Esser, P., Rombach, R., & Ommer, B. (2021). Taming transformers for high-resolution image synthesis. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01268"},{"key":"2478_CR234","unstructured":"Eyzaguirre, C., Tang, E., Buch, S., Gaidon, A., Wu, J., & Niebles, J. C. (2024). Streaming detection of queried event start. In: NeurIPS"},{"key":"2478_CR235","doi-asserted-by":"crossref","unstructured":"Fan, C., Zhang, X., Zhang, S., Wang, W., Zhang, C., & Huang, H. (2019). Heterogeneous memory enhanced multimodal attention model for video question answering. In: CVPR","DOI":"10.1109\/CVPR.2019.00210"},{"key":"2478_CR236","doi-asserted-by":"crossref","unstructured":"Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., & Feichtenhofer, C. (2021). Multiscale vision transformers. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00675"},{"key":"2478_CR237","doi-asserted-by":"crossref","unstructured":"Fan, K., Bai, Z., Xiao, T., Zietlow, D., Horn, M., Zhao, Z., Simon-Gabriel, C. J., Shou, M. Z., Locatello, F., Schiele, B., et\u00a0al. (2023). Unsupervised open-vocabulary object localization in videos. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01264"},{"key":"2478_CR238","doi-asserted-by":"crossref","unstructured":"Fan, Z., Parelli, M., Kadoglou, M. E., Chen, X., Kocabas, M., Black, M. J., & Hilliges, O. (2024). Hold: Category-agnostic 3d reconstruction of interacting hands and objects from video. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00054"},{"key":"2478_CR239","doi-asserted-by":"crossref","unstructured":"Fang, H., Chen, B., Wang, X., Wang, Z., & Xia, S. T. (2023). Gifd: A generative gradient inversion method with feature domain optimization. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00458"},{"key":"2478_CR240","doi-asserted-by":"crossref","unstructured":"Fathi, A., & Rehg, J. M. (2013). Modeling actions through state changes. In: CVPR","DOI":"10.1109\/CVPR.2013.333"},{"key":"2478_CR241","doi-asserted-by":"crossref","unstructured":"Fathi, A., Li, Y., & Rehg, J. M. (2012). Learning to recognize daily actions using gaze. In: ECCV","DOI":"10.1007\/978-3-642-33718-5_23"},{"key":"2478_CR242","doi-asserted-by":"crossref","unstructured":"Faure, G. J., Chen, M. H., & Lai, S. H. (2023). Holistic interaction transformer network for action detection. In: WACV","DOI":"10.1109\/WACV56688.2023.00334"},{"key":"2478_CR243","doi-asserted-by":"crossref","unstructured":"Fayek, H. M., & Kumar, A. (2020). Large scale audiovisual learning of sounds with weakly labeled data. In: IJCAI","DOI":"10.24963\/ijcai.2020\/78"},{"key":"2478_CR244","doi-asserted-by":"crossref","unstructured":"Fei, H., Wu, S., Ji, W., Zhang, H., & Chua, T. S. (2024a). Dysen-vdm: Empowering dynamics-aware text-to-video diffusion with llms. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00730"},{"key":"2478_CR245","unstructured":"Fei, H., Wu, S., Ji, W., Zhang, H., Zhang, M., Lee, M. L., & Hsu, W. (2024b). Video-of-thought: Step-by-step video reasoning from perception to cognition. In: ICML"},{"key":"2478_CR246","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C. (2020). X3d: Expanding architectures for efficient video recognition. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00028"},{"key":"2478_CR247","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C., Pinz, A., & Zisserman, A. (2016). Convolutional two-stream network fusion for video action recognition. 
In: CVPR","DOI":"10.1109\/CVPR.2016.213"},{"key":"2478_CR248","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C., Pinz, A., & Wildes, R. P. (2017). Spatiotemporal multiplier networks for video action recognition. In: CVPR","DOI":"10.1109\/CVPR.2017.787"},{"key":"2478_CR249","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C., Fan, H., Malik, J., & He, K. (2019). Slowfast networks for video recognition. In: ICCV","DOI":"10.1109\/ICCV.2019.00630"},{"key":"2478_CR250","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C., Fan, H., Xiong, B., Girshick, R., & He, K. (2021). A large-scale study on unsupervised spatiotemporal representation learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00331"},{"key":"2478_CR251","unstructured":"Feichtenhofer, C., Li, Y., He, K., et\u00a0al. (2022). Masked autoencoders as spatiotemporal learners. In: NeurIPS"},{"key":"2478_CR252","doi-asserted-by":"crossref","unstructured":"Feng, J., Erol, M. H., Chung, J. S., & Senocak, A. (2024). From coarse to fine: Efficient training for audio spectrogram transformers. In: ICASSP","DOI":"10.1109\/ICASSP48485.2024.10448376"},{"key":"2478_CR253","doi-asserted-by":"crossref","unstructured":"Feng, J. C., Hong, F. T., & Zheng, W. S. (2021a). Mist: Multiple instance self-training framework for video anomaly detection. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01379"},{"key":"2478_CR254","doi-asserted-by":"crossref","unstructured":"Feng, R., Gao, Y., Ma, X., Tse, T. H. E., & Chang, H. J. (2023). Mutual information-based temporal difference learning for human pose estimation in video. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01643"},{"key":"2478_CR255","unstructured":"Feng, Y., Jiang, J., Huang, Z., Qing, Z., Wang, X., Zhang, S., Tang, M., & Gao, Y. (2021b). Relation modeling in spatio-temporal action localization. In: CVPRw"},{"key":"2478_CR256","doi-asserted-by":"crossref","unstructured":"Fernando, B., & Herath, S. (2021). Anticipating human actions by correlating past with the future with jaccard similarity measures. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01302"},{"key":"2478_CR257","doi-asserted-by":"crossref","unstructured":"Fernando, B., Gavves, E., Oramas, J. M., Ghodrati, A., & Tuytelaars, T. (2015). Modeling video evolution for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2015.7299176"},{"issue":"4","key":"2478_CR258","doi-asserted-by":"crossref","first-page":"773","DOI":"10.1109\/TPAMI.2016.2558148","volume":"39","author":"B Fernando","year":"2016","unstructured":"Fernando, B., Gavves, E., Oramas, J., Ghodrati, A., & Tuytelaars, T. (2016). Rank pooling for action recognition. IEEE TPAMI, 39(4), 773\u2013787.","journal-title":"IEEE TPAMI"},{"key":"2478_CR259","doi-asserted-by":"crossref","unstructured":"Fernando, B., Bilen, H., Gavves, E., & Gould, S. (2017). Self-supervised video representation learning with odd-one-out networks. In: CVPR","DOI":"10.1109\/CVPR.2017.607"},{"key":"2478_CR260","first-page":"259","volume":"151","author":"B Ferreira","year":"2021","unstructured":"Ferreira, B., Ferreira, P. M., Pinheiro, G., Figueiredo, N., Carvalho, F., Menezes, P., & Batista, J. (2021). Deep Learning Approaches for Workout Repetition Counting and Validation. PRL, 151, 259\u2013266.","journal-title":"PRL"},{"key":"2478_CR261","doi-asserted-by":"crossref","unstructured":"Fiche, G., Leglaive, S., Alameda-Pineda, X., Agudo, A., & Moreno-Noguer, F. (2024). Vq-hps: Human pose and shape estimation in a vector-quantized latent space. 
In: ECCV","DOI":"10.1007\/978-3-031-72943-0_27"},{"key":"2478_CR262","doi-asserted-by":"crossref","unstructured":"Fieraru, M., Zanfir, M., Oneata, E., Popa, A. I., Olaru, V., & Sminchisescu, C. (2020). Three-dimensional reconstruction of human interactions. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00724"},{"key":"2478_CR263","doi-asserted-by":"crossref","unstructured":"Fieraru, M., Zanfir, M., Oneata, E., Popa, A. I., Olaru, V., & Sminchisescu, C. (2021). Learning complex 3d human self-contact. In: AAAI","DOI":"10.1609\/aaai.v35i2.16223"},{"key":"2478_CR264","unstructured":"Finn, C., Goodfellow, I., & Levine, S. (2016). Unsupervised learning for physical interaction through video prediction. In: NeurIPS"},{"key":"2478_CR265","doi-asserted-by":"crossref","unstructured":"Fioresi, J., Dave, I. R., & Shah, M. (2023). Ted-spad: Temporal distinctiveness for self-supervised privacy-preservation for video anomaly detection. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01251"},{"key":"2478_CR266","doi-asserted-by":"crossref","unstructured":"Flaborea, A., Collorone, L., Di\u00a0Melendugno, G. M. D., D\u2019Arrigo, S., Prenkaj, B., & Galasso, F. (2023). Multimodal motion conditioned diffusion model for skeleton-based video anomaly detection. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00947"},{"key":"2478_CR267","unstructured":"Flanagan, K., Damen, D., & Wray, M. (2023). Learning temporal sentence grounding from narrated egovideos. In: BMVC"},{"issue":"5722","key":"2478_CR268","doi-asserted-by":"crossref","first-page":"662","DOI":"10.1126\/science.1106138","volume":"308","author":"L Fogassi","year":"2005","unstructured":"Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: from action organization to intention understanding. Science, 308(5722), 662\u2013667.","journal-title":"Science"},{"key":"2478_CR269","doi-asserted-by":"crossref","unstructured":"Foo, L. G., Li, T., Rahmani, H., Ke, Q., & Liu, J. (2022). Era: Expert retrieval and assembly for early action prediction. In: ECCV","DOI":"10.1007\/978-3-031-19830-4_38"},{"key":"2478_CR270","unstructured":"F\u00f6rstner, W., & G\u00fclch, E. (1987). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In: ICFPPD"},{"key":"2478_CR271","doi-asserted-by":"crossref","unstructured":"Fouhey, D. F., Kuo, Wc., Efros, A. A., & Malik, J. (2018). From lifestyle vlogs to everyday interactions. In: CVPR","DOI":"10.1109\/CVPR.2018.00524"},{"key":"2478_CR272","unstructured":"Fragkiadaki, K., Huang, J., Alemi, A., Vijayanarasimhan, S., Ricco, S., & Sukthankar, R. (2017). Motion prediction under multimodality with conditional stochastic networks. arXiv:1705.02082"},{"key":"2478_CR273","unstructured":"Franceschi, J. Y., Delasalles, E., Chen, M., Lamprier, S., & Gallinari, P. (2020). Stochastic latent residual video prediction. In: ICML"},{"key":"2478_CR274","doi-asserted-by":"crossref","unstructured":"Fu, C., Dai, Y., Luo, Y., Li, L., Ren, S., Zhang, R., Wang, Z., Zhou, C., Shen, Y., Zhang, M., et\u00a0al. (2024). Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv:2405.21075","DOI":"10.1109\/CVPR52734.2025.02245"},{"key":"2478_CR275","unstructured":"Fu, Q., Liu, X., & Kitani, K. M. (2022). Sequential decision-making for active object detection from hand. In: CVPR"},{"key":"2478_CR276","unstructured":"Fu, T. J., Li, L., Gan, Z., Lin, K., Wang, W. Y., Wang, L., & Liu, Z. (2021). 
Violet: End-to-end video-language transformers with masked visual-token modeling. arXiv:2111.12681"},{"key":"2478_CR277","doi-asserted-by":"crossref","unstructured":"Fu, T. J., Yu, L., Zhang, N., Fu, C. Y., Su, J. C., Wang, W. Y., & Bell, S. (2023). Tell me what happened: Unifying text-guided video completion via multimodal masked video generation. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01029"},{"key":"2478_CR278","doi-asserted-by":"crossref","unstructured":"Furnari, A., & Farinella, G. M. (2019). What would you expect? anticipating egocentric actions with rolling-unrolling lstms and modality attention. In: ICCV","DOI":"10.1109\/ICCV.2019.00635"},{"key":"2478_CR279","doi-asserted-by":"crossref","unstructured":"Furnari, A., & Farinella, G. M. (2022). Towards streaming egocentric action anticipation. In: ICPR","DOI":"10.1109\/ICPR56361.2022.9956090"},{"key":"2478_CR280","doi-asserted-by":"crossref","unstructured":"Furnari, A., Battiato, S., & Maria\u00a0Farinella, G. (2018). Leveraging uncertainty to rethink loss functions and evaluation measures for egocentric action anticipation. In: ECCVw","DOI":"10.1007\/978-3-030-11021-5_24"},{"key":"2478_CR281","doi-asserted-by":"crossref","unstructured":"Gabeur, V., Sun, C., Alahari, K., & Schmid, C. (2020). Multi-modal transformer for video retrieval. In: ECCV","DOI":"10.1007\/978-3-030-58548-8_13"},{"issue":"11","key":"2478_CR282","doi-asserted-by":"crossref","first-page":"2782","DOI":"10.1109\/TPAMI.2013.65","volume":"35","author":"A Gaidon","year":"2013","unstructured":"Gaidon, A., Harchaoui, Z., & Schmid, C. (2013). Temporal localization of actions with actoms. IEEE TPAMI, 35(11), 2782\u20132795.","journal-title":"IEEE TPAMI"},{"issue":"2","key":"2478_CR283","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1093\/brain\/119.2.593","volume":"119","author":"V Gallese","year":"1996","unstructured":"Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119(2), 593\u2013609.","journal-title":"Brain"},{"key":"2478_CR284","doi-asserted-by":"crossref","unstructured":"Gammulle, H., Denman, S., Sridharan, S., & Fookes, C. (2019). Predicting the future: A jointly learnt model for action anticipation. In: ICCV","DOI":"10.1109\/ICCV.2019.00566"},{"key":"2478_CR285","doi-asserted-by":"crossref","unstructured":"Gan, Z., Gan, C., He, X., Pu, Y., Tran, K., Gao, J., Carin, L., & Deng, L. (2017). Semantic compositional networks for visual captioning. In: CVPR","DOI":"10.1109\/CVPR.2017.127"},{"key":"2478_CR286","doi-asserted-by":"crossref","unstructured":"Gao, D., Zhou, L., Ji, L., Zhu, L., Yang, Y., & Shou, M. Z. (2023). Mist: Multi-modal iterative spatial-temporal transformer for long-form video question answering. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01419"},{"key":"2478_CR287","doi-asserted-by":"crossref","unstructured":"Gao, J., Sun, C., Yang, Z., & Nevatia, R. (2017a). Tall: Temporal activity localization via language query. In: ICCV","DOI":"10.1109\/ICCV.2017.563"},{"key":"2478_CR288","doi-asserted-by":"crossref","unstructured":"Gao, J., Yang, Z., & Nevatia, R. (2017b). Red: Reinforced encoder-decoder networks for action anticipation. arXiv:1707.04818","DOI":"10.5244\/C.31.92"},{"key":"2478_CR289","doi-asserted-by":"crossref","unstructured":"Gao, J., Ge, R., Chen, K., & Nevatia, R. (2018). Motion-appearance co-memory networks for video question answering. In: CVPR","DOI":"10.1109\/CVPR.2018.00688"},{"key":"2478_CR290","doi-asserted-by":"crossref","unstructured":"Gao, R., Oh, T. 
H., Grauman, K., & Torresani, L. (2020). Listen to look: Action recognition by previewing audio. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01047"},{"key":"2478_CR291","unstructured":"Gao, Y., Liu, J., Xu, Z., Zhang, J., Li, K., Ji, R., & Shen, C. (2022a). Pyramidclip: Hierarchical feature alignment for vision-language model pretraining. In: NeurIPS"},{"key":"2478_CR292","doi-asserted-by":"crossref","unstructured":"Gao, Z., Tan, C., Wu, L., & Li, S. Z. (2022b). Simvp: Simpler yet better video prediction. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00317"},{"key":"2478_CR293","doi-asserted-by":"crossref","unstructured":"Garcia-Hernando, G., Yuan, S., Baek, S., & Kim, T. K. (2018). First-person hand action benchmark with rgb-d videos and 3d hand pose annotations. In: CVPR","DOI":"10.1109\/CVPR.2018.00050"},{"key":"2478_CR294","unstructured":"Gat, I., Schwartz, I., & Schwing, A. (2021). Perceptual score: What data modalities does your model perceive? In: NeurIPS"},{"key":"2478_CR295","doi-asserted-by":"crossref","unstructured":"Ge, R., Gao, J., Chen, K., & Nevatia, R. (2019). Mac: Mining activity concepts for language-based temporal localization. In: WACV","DOI":"10.1109\/WACV.2019.00032"},{"key":"2478_CR296","doi-asserted-by":"crossref","unstructured":"Ge, S., Hayes, T., Yang, H., Yin, X., Pang, G., Jacobs, D., Huang, J. B., & Parikh, D. (2022a). Long video generation with time-agnostic vqgan and time-sensitive transformer. In: ECCV","DOI":"10.1007\/978-3-031-19790-1_7"},{"key":"2478_CR297","doi-asserted-by":"crossref","unstructured":"Ge, Y., Ge, Y., Liu, X., Li, D., Shan, Y., Qie, X., & Luo, P. (2022b). Bridging video-text retrieval with multiple choice questions. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01569"},{"key":"2478_CR298","doi-asserted-by":"crossref","unstructured":"Gemmeke, J. F., Ellis, D. P., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., Plakal, M., & Ritter, M. (2017). Audio set: An ontology and human-labeled dataset for audio events. In: ICASSP","DOI":"10.1109\/ICASSP.2017.7952261"},{"issue":"1","key":"2478_CR299","first-page":"1263","volume":"13","author":"CR Genovese","year":"2012","unstructured":"Genovese, C. R., Perone Pacifico, M., Verdinelli, I., Wasserman, L., et al. (2012). Minimax manifold estimation. JMLR, 13(1), 1263\u20131291.","journal-title":"JMLR"},{"key":"2478_CR300","doi-asserted-by":"crossref","unstructured":"Georgescu, M. I., Barbalau, A., Ionescu, R. T., Khan, F. S., Popescu, M., & Shah, M. (2021). Anomaly detection in video via self-supervised and multi-task learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01255"},{"key":"2478_CR301","doi-asserted-by":"crossref","unstructured":"Georgescu, M. I., Fonseca, E., Ionescu, R. T., Lucic, M., Schmid, C., & Arnab, A. (2023). Audiovisual masked autoencoders. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01479"},{"key":"2478_CR302","doi-asserted-by":"crossref","unstructured":"Ghadiyaram, D., Tran, D., & Mahajan, D. (2019). Large-scale weakly-supervised pre-training for video action recognition. In: CVPR","DOI":"10.1109\/CVPR.2019.01232"},{"key":"2478_CR303","doi-asserted-by":"crossref","unstructured":"Ghodrati, A., Bejnordi, B. E., & Habibian, A. (2021). Frameexit: Conditional early exiting for efficient video recognition. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01535"},{"key":"2478_CR304","doi-asserted-by":"crossref","unstructured":"Girase, H., Agarwal, N., Choi, C., & Mangalam, K. (2023). Latency matters: Real-time action forecasting transformer. 
In: CVPR","DOI":"10.1109\/CVPR52729.2023.01799"},{"key":"2478_CR305","doi-asserted-by":"crossref","unstructured":"Girdhar, R., & Grauman, K. (2021). Anticipative video transformer. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01325"},{"key":"2478_CR306","unstructured":"Girdhar, R., & Ramanan, D. (2017). Attentional pooling for action recognition. In: NeurIPS"},{"key":"2478_CR307","doi-asserted-by":"crossref","unstructured":"Girdhar, R., Carreira, J., Doersch, C., & Zisserman, A. (2019). Video action transformer network. In: CVPR","DOI":"10.1109\/CVPR.2019.00033"},{"key":"2478_CR308","doi-asserted-by":"crossref","unstructured":"Girdhar, R., Singh, M., Ravi, N., Van Der\u00a0Maaten, L., Joulin, A., & Misra, I. (2022). Omnivore: A single model for many visual modalities. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01563"},{"key":"2478_CR309","doi-asserted-by":"crossref","unstructured":"Girdhar, R., El-Nouby, A., Liu, Z., Singh, M., Alwala, K. V., Joulin, A., & Misra, I. (2023a). Imagebind: One embedding space to bind them all. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01457"},{"key":"2478_CR310","doi-asserted-by":"crossref","unstructured":"Girdhar, R., El-Nouby, A., Singh, M., Alwala, K. V., Joulin, A., & Misra, I. (2023b). Omnimae: Single model masked pretraining on images and videos. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01003"},{"key":"2478_CR311","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015). Fast r-cnn. In: ICCV","DOI":"10.1109\/ICCV.2015.169"},{"key":"2478_CR312","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR","DOI":"10.1109\/CVPR.2014.81"},{"key":"2478_CR313","doi-asserted-by":"crossref","unstructured":"Godard, C., Mac\u00a0Aodha, O., Firman, M., & Brostow, G. J. (2019). Digging into self-supervised monocular depth estimation. In: ICCV","DOI":"10.1109\/ICCV.2019.00393"},{"key":"2478_CR314","doi-asserted-by":"crossref","unstructured":"Goletto, G., Nagarajan, T., Averta, G., & Damen, D. (2024). Amego: Active memory from long egocentric videos. In: ECCV","DOI":"10.1007\/978-3-031-72624-8_6"},{"key":"2478_CR315","doi-asserted-by":"crossref","unstructured":"Gong, D., Lee, J., Kim, M., Ha, S. J., & Cho, M. (2022a). Future transformer for long-term action anticipation. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00306"},{"key":"2478_CR316","first-page":"3292","volume":"29","author":"Y Gong","year":"2021","unstructured":"Gong, Y., Chung, Y. A., & Glass, J. (2021). Psla: Improving audio tagging with pretraining, sampling, labeling, and aggregation. IEEE\/ACM TASLP, 29, 3292\u20133306.","journal-title":"IEEE\/ACM TASLP"},{"key":"2478_CR317","first-page":"2437","volume":"29","author":"Y Gong","year":"2022","unstructured":"Gong, Y., Liu, A. H., Rouditchenko, A., & Glass, J. (2022). Uavm: Towards unifying audio and visual models. IEEE SPL, 29, 2437\u20132441.","journal-title":"IEEE SPL"},{"key":"2478_CR318","unstructured":"Gong, Y., Rouditchenko, A., Liu, A. H., Harwath, D., Karlinsky, L., Kuehne, H., & Glass, J. (2023). Contrastive audio-visual masked autoencoder. In: ICLR"},{"key":"2478_CR319","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In: NeurIPS"},{"key":"2478_CR320","doi-asserted-by":"crossref","unstructured":"Gordo, A., & Larlus, D. (2017). 
Beyond instance-level image retrieval: Leveraging captions to learn a global visual representation for semantic retrieval. In: CVPR","DOI":"10.1109\/CVPR.2017.560"},{"key":"2478_CR321","doi-asserted-by":"crossref","unstructured":"Gordon, A., Li, H., Jonschkowski, R., & Angelova, A. (2019). Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras. In: ICCV","DOI":"10.1109\/ICCV.2019.00907"},{"key":"2478_CR322","unstructured":"Gordon, D., Ehsani, K., Fox, D., & Farhadi, A. (2020). Watching the world go by: Representation learning from unlabeled videos. arXiv:2003.07990"},{"key":"2478_CR323","doi-asserted-by":"crossref","first-page":"1991","DOI":"10.1109\/TPAMI.2006.253","volume":"28","author":"L Gorelick","year":"2006","unstructured":"Gorelick, L., Galun, M., Sharon, E., Basri, R., & Brandt, A. (2006). Shape representation and classification using the poisson equation. IEEE TPAMI, 28, 1991\u20132005.","journal-title":"IEEE TPAMI"},{"issue":"12","key":"2478_CR324","first-page":"2247","volume":"29","author":"L Gorelick","year":"2007","unstructured":"Gorelick, L., Blank, M., Shechtman, E., Irani, M., & Basri, R. (2007). Actions as space-time shapes. IEEE TPAMI, 29(12), 2247\u20132253.","journal-title":"IEEE TPAMI"},{"key":"2478_CR325","doi-asserted-by":"crossref","unstructured":"Goroshin, R., Bruna, J., Tompson, J., Eigen, D., & LeCun, Y. (2015). Unsupervised learning of spatiotemporally coherent metrics. In: ICCV","DOI":"10.1109\/ICCV.2015.465"},{"key":"2478_CR326","doi-asserted-by":"crossref","unstructured":"Gouidis, F., Patkos, T., Argyros, A., & Plexousakis, D. (2023). Leveraging knowledge graphs for zero-shot object-agnostic state classification. arXiv:2307.12179","DOI":"10.1109\/ICPRS62101.2024.10677802"},{"key":"2478_CR327","doi-asserted-by":"crossref","unstructured":"Gowda, S. N., Rohrbach, M., & Sevilla-Lara, L. (2021). Smart frame selection for action recognition. In: AAAI","DOI":"10.1609\/aaai.v35i2.16235"},{"key":"2478_CR328","doi-asserted-by":"crossref","unstructured":"Goyal, R., Ebrahimi\u00a0Kahou, S., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fruend, I., Yianilos, P., Mueller-Freitag, M., et\u00a0al. (2017a). The \"something something\" video database for learning and evaluating visual common sense. In: ICCV","DOI":"10.1109\/ICCV.2017.622"},{"key":"2478_CR329","doi-asserted-by":"crossref","unstructured":"Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., & Parikh, D. (2017b). Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In: CVPR","DOI":"10.1109\/CVPR.2017.670"},{"key":"2478_CR330","doi-asserted-by":"crossref","unstructured":"Grady, P., Tang, C., Twigg, C. D., Vo, M., Brahmbhatt, S., & Kemp, C. C. (2021). Contactopt: Optimizing contact to improve grasps. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00152"},{"key":"2478_CR331","doi-asserted-by":"crossref","unstructured":"Grauman, K., Westbury, A., Byrne, E., Chavis, Z., Furnari, A., Girdhar, R., Hamburger, J., Jiang, H., Liu, M., Liu, X., et\u00a0al. (2022). Ego4d: Around the world in 3,000 hours of egocentric video. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01842"},{"key":"2478_CR332","doi-asserted-by":"crossref","unstructured":"Grauman, K., Westbury, A., Torresani, L., Kitani, K., Malik, J., Afouras, T., Ashutosh, K., Baiyya, V., Bansal, S., Boote, B., et\u00a0al. (2024). Ego-exo4d: Understanding skilled human activity from first- and third-person perspectives. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.01834"},{"key":"2478_CR333","unstructured":"Grill, J. B., Strub, F., Altch\u00e9, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila\u00a0Pires, B., Guo, Z., Gheshlaghi\u00a0Azar, M., et\u00a0al. (2020). Bootstrap your own latent-a new approach to self-supervised learning. In: NeurIPS"},{"key":"2478_CR334","doi-asserted-by":"crossref","unstructured":"Gritsenko, A. A., Xiong, X., Djolonga, J., Dehghani, M., Sun, C., Lucic, M., Schmid, C., & Arnab, A. (2024). End-to-end spatio-temporal action localisation with video transformers. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01739"},{"key":"2478_CR335","unstructured":"Gu, A., & Dao, T. (2023). Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752"},{"key":"2478_CR336","doi-asserted-by":"crossref","unstructured":"Gu, C., Sun, C., Ross, D. A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et\u00a0al. (2018). Ava: A video dataset of spatio-temporally localized atomic visual actions. In: CVPR","DOI":"10.1109\/CVPR.2018.00633"},{"key":"2478_CR337","doi-asserted-by":"crossref","unstructured":"Gu, X., Fan, H., Huang, Y., Luo, T., & Zhang, L. (2024a). Context-guided spatio-temporal video grounding. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01735"},{"key":"2478_CR338","unstructured":"Gu, X., Wen, C., Ye, W., Song, J., & Gao, Y. (2024b). Seer: Language instructed video prediction with latent diffusion models. In: ICLR"},{"key":"2478_CR339","doi-asserted-by":"crossref","unstructured":"Guadarrama, S., Krishnamoorthy, N., Malkarnenkar, G., Venugopalan, S., Mooney, R., Darrell, T., & Saenko, K. (2013). Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In: ICCV","DOI":"10.1109\/ICCV.2013.337"},{"key":"2478_CR340","unstructured":"Guen, V. L., & Thome, N. (2020). Disentangling physical dynamics from unknown factors for unsupervised video prediction. In: CVPR"},{"key":"2478_CR341","doi-asserted-by":"crossref","unstructured":"Gulati, A., Qin, J., Chiu, C. C., Parmar, N., Zhang, Y., Yu, J., Han, W., Wang, S., Zhang, Z., Wu, Y., et\u00a0al. (2020). Conformer: Convolution-augmented transformer for speech recognition. Interspeech","DOI":"10.21437\/Interspeech.2020-3015"},{"key":"2478_CR342","doi-asserted-by":"crossref","unstructured":"Guo, C., Zuo, X., Wang, S., Zou, S., Sun, Q., Deng, A., Gong, M., & Cheng, L. (2020). Action2motion: Conditioned generation of 3d human motions. In: ACM MM","DOI":"10.1145\/3394171.3413635"},{"key":"2478_CR343","doi-asserted-by":"crossref","unstructured":"Guo, H., Agarwal, N., Lo, S. Y., Lee, K., & Ji, Q. (2024a). Uncertainty-aware action decoupling transformer for action anticipation. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01764"},{"key":"2478_CR344","doi-asserted-by":"crossref","unstructured":"Guo, Y., Sun, S., Ma, S., Zheng, K., Bao, X., Ma, S., Zou, W., & Zheng, Y. (2024b). Crossmae: Cross-modality masked autoencoders for region-aware audio-visual pre-training. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02523"},{"key":"2478_CR345","doi-asserted-by":"crossref","unstructured":"Guo, Z., Zhao, J., Jiao, L., Liu, X., & Li, L. (2021). Multi-scale progressive attention network for video question answering. 
In: ACL","DOI":"10.18653\/v1\/2021.acl-short.122"},{"key":"2478_CR346","doi-asserted-by":"crossref","first-page":"1775","DOI":"10.1109\/TPAMI.2009.83","volume":"31","author":"A Gupta","year":"2009","unstructured":"Gupta, A., Kembhavi, A., & Davis, L. S. (2009). Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE TPAMI, 31, 1775\u20131789.","journal-title":"IEEE TPAMI"},{"key":"2478_CR347","doi-asserted-by":"crossref","unstructured":"Gupta, A., Yu, L., Sohn, K., Gu, X., Hahn, M., Fei-Fei, L., Essa, I., Jiang, L., & Lezama, J. (2023). Photorealistic video generation with diffusion models. arXiv:2312.06662","DOI":"10.1007\/978-3-031-72986-7_23"},{"key":"2478_CR348","doi-asserted-by":"crossref","unstructured":"Gupta, S., Keshari, A., & Das, S. (2022). Rv-gan: Recurrent gan for unconditional video generation. In: CVPR","DOI":"10.1109\/CVPRW56347.2022.00220"},{"key":"2478_CR349","doi-asserted-by":"crossref","unstructured":"Hadji, I., Derpanis, K. G., & Jepson, A. D. (2021). Representation learning via global temporal alignment and cycle-consistency. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01092"},{"key":"2478_CR350","doi-asserted-by":"crossref","unstructured":"Hakeem, A., & Shah, M. (2004). Ontology and taxonomy collaborated framework for meeting classification. In: ICPR","DOI":"10.1109\/ICPR.2004.1333743"},{"key":"2478_CR351","doi-asserted-by":"crossref","unstructured":"Hampali, S., Rad, M., Oberweger, M., & Lepetit, V. (2020). Honnotate: A method for 3d annotation of hand and object poses. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00326"},{"key":"2478_CR352","doi-asserted-by":"crossref","unstructured":"Han, L., Ren, J., Lee, H. Y., Barbieri, F., Olszewski, K., Minaee, S., Metaxas, D., & Tulyakov, S. (2022). Show me what and tell me how: Video synthesis via multimodal conditioning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00360"},{"key":"2478_CR353","doi-asserted-by":"crossref","unstructured":"Han, T., Xie, W., & Zisserman, A. (2020a). Memory-augmented dense predictive coding for video representation learning. In: ECCV","DOI":"10.1007\/978-3-030-58580-8_19"},{"key":"2478_CR354","unstructured":"Han, T., Xie, W., & Zisserman, A. (2020b). Self-supervised co-training for video representation learning. In: NeurIPS"},{"key":"2478_CR355","doi-asserted-by":"crossref","unstructured":"Han, T., Bain, M., Nagrani, A., Varol, G., Xie, W., & Zisserman, A. (2023a). Autoad ii: The sequel-who, when, and what in movie audio description. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01255"},{"key":"2478_CR356","doi-asserted-by":"crossref","unstructured":"Han, T., Bain, M., Nagrani, A., Varol, G., Xie, W., & Zisserman, A. (2023b). Autoad: Movie description in context. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01815"},{"key":"2478_CR357","doi-asserted-by":"crossref","unstructured":"Han, T., Bain, M., Nagrani, A., Varol, G., Xie, W., & Zisserman, A. (2024). Autoad iii: The prequel-back to the pixels. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01720"},{"key":"2478_CR358","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.neucom.2022.01.085","volume":"483","author":"J Hao","year":"2022","unstructured":"Hao, J., Sun, H., Ren, P., Wang, J., Qi, Q., & Liao, J. (2022). Query-aware video encoder for video moment retrieval. Neurocomputing, 483, 72\u201386.","journal-title":"Neurocomputing"},{"key":"2478_CR359","unstructured":"Hao, X., & Zhang, W. (2024). Uncertainty-aware alignment network for cross-domain video-text retrieval. 
In: NeurIPS"},{"key":"2478_CR360","doi-asserted-by":"crossref","unstructured":"Hara, K., Kataoka, H., & Satoh, Y. (2018). Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? In: CVPR","DOI":"10.1109\/CVPR.2018.00685"},{"key":"2478_CR361","doi-asserted-by":"crossref","unstructured":"Haresh, S., Kumar, S., Coskun, H., Syed, S. N., Konin, A., Zia, Z., & Tran, Q. H. (2021). Learning by aligning videos in time. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00550"},{"key":"2478_CR362","doi-asserted-by":"crossref","unstructured":"Harris, C., Stephens, M., et\u00a0al. (1988). A combined corner and edge detector. In: AVC","DOI":"10.5244\/C.2.23"},{"key":"2478_CR363","unstructured":"Harvey, W., Naderiparizi, S., Masrani, V., Weilbach, C., & Wood, F. (2022). Flexible diffusion modeling of long videos. In: NeurIPS"},{"key":"2478_CR364","doi-asserted-by":"crossref","unstructured":"Hasan, M., Choi, J., Neumann, J., Roy-Chowdhury, A. K., & Davis, L. S. (2016). Learning temporal regularity in video sequences. In: CVPR","DOI":"10.1109\/CVPR.2016.86"},{"key":"2478_CR365","doi-asserted-by":"crossref","unstructured":"Hassan, M., Choutas, V., Tzionas, D., & Black, M. J. (2019). Resolving 3d human pose ambiguities with 3d scene constraints. In: CVPR","DOI":"10.1109\/ICCV.2019.00237"},{"key":"2478_CR366","doi-asserted-by":"crossref","unstructured":"Hasson, Y., Varol, G., Tzionas, D., Kalevatykh, I., Black, M. J., Laptev, I., & Schmid, C. (2019). Learning joint reconstruction of hands and manipulated objects. In: CVPR","DOI":"10.1109\/CVPR.2019.01208"},{"key":"2478_CR367","doi-asserted-by":"crossref","unstructured":"Hasson, Y., Tekin, B., Bogo, F., Laptev, I., Pollefeys, M., & Schmid, C. (2020). Leveraging photometric consistency over time for sparsely supervised hand-object reconstruction. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00065"},{"key":"2478_CR368","doi-asserted-by":"crossref","unstructured":"Hatamizadeh, A., Yin, H., Roth, H. R., Li, W., Kautz, J., Xu, D., & Molchanov, P. (2022). Gradvit: Gradient inversion of vision transformers. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00978"},{"key":"2478_CR369","doi-asserted-by":"crossref","unstructured":"He, B., Yang, X., Kang, L., Cheng, Z., Zhou, X., & Shrivastava, A. (2022a). Asm-loc: Action-aware segment modeling for weakly-supervised temporal action localization. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01355"},{"key":"2478_CR370","doi-asserted-by":"crossref","unstructured":"He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"2478_CR371","doi-asserted-by":"crossref","unstructured":"He, K., Chen, X., Xie, S., Li, Y., Doll\u00e1r, P., & Girshick, R. (2022b). Masked autoencoders are scalable vision learners. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01553"},{"key":"2478_CR372","unstructured":"He, Y., Yang, T., Zhang, Y., Shan, Y., & Chen, Q. (2022c). Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv:2211.13221"},{"key":"2478_CR373","unstructured":"He, Y., Murata, N., Lai, C. H., Takida, Y., Uesaka, T., Kim, D., Liao, W. H., Mitsufuji, Y., Kolter, J. Z., Salakhutdinov, R., et\u00a0al. (2023). Manifold preserving guided diffusion. In: NeurIPS"},{"key":"2478_CR374","doi-asserted-by":"crossref","unstructured":"Hegde, K., Agrawal, R., Yao, Y., & Fletcher, C. W. (2018). Morph: Flexible acceleration for 3d cnn-based video understanding. 
In: MICRO","DOI":"10.1109\/MICRO.2018.00080"},{"key":"2478_CR375","doi-asserted-by":"crossref","unstructured":"Heidarivincheh, F., Mirmehdi, M., & Damen, D. (2016). Beyond action recognition: Action completion in rgb-d data. In: BMVC","DOI":"10.5244\/C.30.142"},{"key":"2478_CR376","unstructured":"Heidarivincheh, F., Mirmehdi, M., & Damen, D. (2018). Action completion: A temporal model for moment detection. In: BMVC"},{"key":"2478_CR377","doi-asserted-by":"crossref","unstructured":"Hendricks, L. A., Wang, O., Shechtman, E., Sivic, J., Darrell, T., & Russell, B. (2017). Localizing moments in video with natural language. In: ICCV","DOI":"10.1109\/ICCV.2017.618"},{"key":"2478_CR378","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1016\/j.imavis.2017.01.010","volume":"60","author":"S Herath","year":"2017","unstructured":"Herath, S., Harandi, M., & Porikli, F. (2017). Going deeper into action recognition: A survey. IVC, 60, 4\u201321.","journal-title":"IVC"},{"key":"2478_CR379","unstructured":"Hjelm, R. D., & Bachman, P. (2020). Representation learning with video deep infomax. arXiv:2007.13278"},{"key":"2478_CR380","unstructured":"Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., & Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. In: ICLR"},{"key":"2478_CR381","unstructured":"Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. In: NeurIPS"},{"key":"2478_CR382","unstructured":"Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko, A., Kingma, D. P., Poole, B., Norouzi, M., Fleet, D. J., et\u00a0al. (2022a). Imagen video: High definition video generation with diffusion models. arXiv:2210.02303"},{"key":"2478_CR383","unstructured":"Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., & Fleet, D. J. (2022b). Video diffusion models. In: NeurIPS"},{"key":"2478_CR384","first-page":"191","volume":"107","author":"M Hoai","year":"2014","unstructured":"Hoai, M., & De la Torre, F. (2014). Max-margin early event detectors. IJCV, 107, 191\u2013202.","journal-title":"Max-margin early event detectors. IJCV"},{"key":"2478_CR385","doi-asserted-by":"crossref","unstructured":"Hoai, M., & Zisserman, A. (2015). Improving human action recognition using score distribution and ranking. In: ACCV","DOI":"10.1007\/978-3-319-16814-2_1"},{"key":"2478_CR386","unstructured":"Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de\u00a0Las\u00a0Casas, D., Hendricks, L. A., Welbl, J., Clark, A., et\u00a0al. (2022). An empirical analysis of compute-optimal large language model training. In: NeurIPS"},{"key":"2478_CR387","doi-asserted-by":"crossref","unstructured":"Hong, J., Zhang, H., Gharbi, M., Fisher, M., & Fatahalian, K. (2022a). Spotting temporally precise, fine-grained events in video. In: ECCV","DOI":"10.1007\/978-3-031-19833-5_3"},{"key":"2478_CR388","unstructured":"Hong, W., Ding, M., Zheng, W., Liu, X., & Tang, J. (2022b). Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv:2205.15868"},{"key":"2478_CR389","doi-asserted-by":"crossref","unstructured":"Hong, X., Lan, Y., Pang, L., Guo, J., & Cheng, X. (2021). Transformation driven visual reasoning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00683"},{"key":"2478_CR390","unstructured":"H\u00f6ppe, T., Mehrjou, A., Bauer, S., Nielsen, D., & Dittadi, A. (2024). Diffusion models for video prediction and infilling. 
IEEE TMLR"},{"key":"2478_CR391","doi-asserted-by":"crossref","first-page":"6017","DOI":"10.1109\/TIP.2020.2987425","volume":"29","author":"J Hou","year":"2020","unstructured":"Hou, J., Wu, X., Wang, R., Luo, J., & Jia, Y. (2020). Confidence-guided self refinement for action prediction in untrimmed videos. IEEE T-IP, 29, 6017\u20136031.","journal-title":"IEEE T-IP"},{"key":"2478_CR392","doi-asserted-by":"crossref","unstructured":"Hou, Q., Ghildyal, A., & Liu, F. (2022). A perceptual quality metric for video frame interpolation. In: ECCV","DOI":"10.1007\/978-3-031-19784-0_14"},{"key":"2478_CR393","doi-asserted-by":"crossref","unstructured":"Hou, R., Chen, C., & Shah, M. (2017). Tube convolutional neural network (t-cnn) for action detection in videos. In: ICCV","DOI":"10.1109\/ICCV.2017.620"},{"key":"2478_CR394","doi-asserted-by":"crossref","unstructured":"Hu, D., Nie, F., & Li, X. (2019). Deep multimodal clustering for unsupervised audiovisual learning. In: CVPR","DOI":"10.1109\/CVPR.2019.00947"},{"key":"2478_CR395","unstructured":"Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2021). Lora: Low-rank adaptation of large language models. arXiv:2106.09685"},{"key":"2478_CR396","doi-asserted-by":"crossref","unstructured":"Hu, H., Dong, S., Zhao, Y., Lian, D., Li, Z., & Gao, S. (2022a). Transrac: Encoding multi-scale temporal correlation with transformers for repetitive action counting. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01843"},{"issue":"11","key":"2478_CR397","first-page":"2568","volume":"41","author":"JF Hu","year":"2018","unstructured":"Hu, J. F., Zheng, W. S., Ma, L., Wang, G., Lai, J., & Zhang, J. (2018). Early action prediction by soft regression. IEEE TPAMI, 41(11), 2568\u20132583.","journal-title":"IEEE TPAMI"},{"key":"2478_CR398","doi-asserted-by":"crossref","unstructured":"Hu, X., Chen, Z., & Owens, A. (2022b). Mix and localize: Localizing sound sources in mixtures. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01023"},{"key":"2478_CR399","doi-asserted-by":"crossref","first-page":"395","DOI":"10.1016\/j.neucom.2022.03.069","volume":"491","author":"X Hu","year":"2022","unstructured":"Hu, X., Dai, J., Li, M., Peng, C., Li, Y., & Du, S. (2022). Online human action detection and anticipation in videos: A survey. Neurocomputing, 491, 395\u2013413.","journal-title":"Neurocomputing"},{"key":"2478_CR400","doi-asserted-by":"crossref","unstructured":"Hu, X., Huang, Z., Huang, A., Xu, J., & Zhou, S. (2023). A dynamic multi-scale voxel flow network for video prediction. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00593"},{"key":"2478_CR401","doi-asserted-by":"crossref","unstructured":"Hu, Y., Luo, C., & Chen, Z. (2022d). Make it move: controllable image-to-video generation with text descriptions. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01768"},{"key":"2478_CR402","doi-asserted-by":"crossref","unstructured":"Huang, B., Zhao, Z., Zhang, G., Qiao, Y., & Wang, L. (2023a). Mgmae: Motion guided masking for video masked autoencoding. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01241"},{"key":"2478_CR403","doi-asserted-by":"crossref","unstructured":"Huang, B., Li, C., Xu, C., Pan, L., Wang, Y., & Lee, G. H. (2024a). Closely interactive human reconstruction with proxemics and physics-guided adaption. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00102"},{"key":"2478_CR404","doi-asserted-by":"crossref","unstructured":"Huang, C. H. P., Yi, H., H\u00f6schle, M., Safroshkin, M., Alexiadis, T., Polikovsky, S., Scharstein, D., & Black, M. J. (2022a). 
Capturing and inferring dense full-body human-scene contact. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01292"},{"key":"2478_CR405","doi-asserted-by":"crossref","unstructured":"Huang, D., Chen, P., Zeng, R., Du, Q., Tan, M., & Gan, C. (2020a). Location-aware graph convolutional networks for video question answering. In: AAAI","DOI":"10.1609\/aaai.v34i07.6737"},{"key":"2478_CR406","doi-asserted-by":"crossref","unstructured":"Huang, D. A., & Kitani, K. M. (2014). Action-reaction: Forecasting the dynamics of human interaction. In: ECCV","DOI":"10.1007\/978-3-319-10584-0_32"},{"key":"2478_CR407","doi-asserted-by":"crossref","unstructured":"Huang, D. A., Ramanathan, V., Mahajan, D., Torresani, L., Paluri, M., Fei-Fei, L., & Niebles, J. C. (2018a). What makes a video a video: Analyzing temporal information in video understanding models and datasets. In: CVPR","DOI":"10.1109\/CVPR.2018.00769"},{"key":"2478_CR408","unstructured":"Huang, P. Y., Xu, H., Li, J., Baevski, A., Auli, M., Galuba, W., Metze, F., & Feichtenhofer, C. (2022b). Masked autoencoders that listen. In: NeurIPS"},{"key":"2478_CR409","unstructured":"Huang, P. Y., Sharma, V., Xu, H., Ryali, C., Li, Y., Li, S. W, Ghosh, G., Malik, J., Feichtenhofer, C., et\u00a0al. (2023b). Mavil: Masked audio-video learners. In: NeurIPS"},{"key":"2478_CR410","doi-asserted-by":"crossref","unstructured":"Huang, S., Suri, S., Gupta, K., Rambhatla, S. S., Lim, Sn., & Shrivastava, A. (2024b). Uvis: Unsupervised video instance segmentation. In: CVPR","DOI":"10.1109\/CVPRW63382.2024.00274"},{"key":"2478_CR411","doi-asserted-by":"crossref","unstructured":"Huang, Y., Cai, M., Li, Z., & Sato, Y. (2018b). Predicting gaze in egocentric video by learning task-dependent attention transition. In: ECCV","DOI":"10.1007\/978-3-030-01225-0_46"},{"key":"2478_CR412","doi-asserted-by":"crossref","unstructured":"Huang, Y., Dai, Q., & Lu, Y. (2019). Decoupling localization and classification in single shot temporal action detection. In: ICME","DOI":"10.1109\/ICME.2019.00224"},{"key":"2478_CR413","first-page":"7795","volume":"29","author":"Y Huang","year":"2020","unstructured":"Huang, Y., Cai, M., Li, Z., Lu, F., & Sato, Y. (2020). Mutual context network for jointly estimating egocentric gaze and action. IEEE TIP, 29, 7795\u20137806.","journal-title":"IEEE TIP"},{"key":"2478_CR414","doi-asserted-by":"crossref","unstructured":"Huang, Y., Zhang, Y., Elachqar, O., & Cheng, Y. (2020c). Inset: Sentence infilling with inter-sentential transformer. In: ACL","DOI":"10.18653\/v1\/2020.acl-main.226"},{"key":"2478_CR415","doi-asserted-by":"crossref","unstructured":"Huang, Y., Chen, G., Xu, J., Zhang, M., Yang, L., Pei, B., Zhang, H., Dong, L., Wang, Y., Wang, L., et\u00a0al. (2024c). Egoexolearn: A dataset for bridging asynchronous ego-and exo-centric view of procedural activities in real world. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02084"},{"issue":"7","key":"2478_CR416","doi-asserted-by":"crossref","first-page":"2551","DOI":"10.1007\/s11263-024-01984-1","volume":"132","author":"Y Huang","year":"2024","unstructured":"Huang, Y., Taheri, O., Black, M. J., & Tzionas, D. (2024). Intercap: joint markerless 3d tracking of humans and objects in interaction from multi-view rgb-d images. IJCV, 132(7), 2551\u20132566.","journal-title":"IJCV"},{"key":"2478_CR417","doi-asserted-by":"crossref","unstructured":"Huang, Z., He, Y., Yu, J., Zhang, F., Si, C., Jiang, Y., Zhang, Y., Wu, T., Jin, Q., Chanpaisit, N., et\u00a0al. (2024e). Vbench: Comprehensive benchmark suite for video generative models. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.02060"},{"key":"2478_CR418","doi-asserted-by":"crossref","unstructured":"Huh, J., Chalk, J., Kazakos, E., Damen, D., & Zisserman, A. (2023). Epic-sounds: A large-scale dataset of actions that sound. In: ICASSP","DOI":"10.1109\/ICASSP49357.2023.10096198"},{"key":"2478_CR419","unstructured":"Hussain, Z., Sheng, M., & Zhang, W. E. (2019). Different approaches for human activity recognition: A survey. arXiv:1906.05074"},{"key":"2478_CR420","doi-asserted-by":"crossref","unstructured":"Hussein, N., Gavves, E., & Smeulders, A. W. (2019). Timeception for complex action recognition. In: CVPR","DOI":"10.1109\/CVPR.2019.00034"},{"key":"2478_CR421","doi-asserted-by":"crossref","unstructured":"Hwang, J. J., Ke, T. W., Shi, J., & Yu, S. X. (2019). Adversarial structure matching for structured prediction tasks. In: CVPR","DOI":"10.1109\/CVPR.2019.00418"},{"key":"2478_CR422","doi-asserted-by":"crossref","unstructured":"Iashin, V., & Rahtu, E. (2020a). A better use of audio-visual cues: Dense video captioning with bi-modal transformer. In: BMVC","DOI":"10.5244\/C.34.29"},{"key":"2478_CR423","doi-asserted-by":"crossref","unstructured":"Iashin, V., & Rahtu, E. (2020b). Multi-modal dense video captioning. In: CVPRw","DOI":"10.1109\/CVPRW50498.2020.00487"},{"key":"2478_CR424","doi-asserted-by":"crossref","unstructured":"Ibrahim, M. S., Muralidharan, S., Deng, Z., Vahdat, A., & Mori, G. (2016). A hierarchical deep temporal model for group activity recognition. In: CVPR","DOI":"10.1109\/CVPR.2016.217"},{"key":"2478_CR425","doi-asserted-by":"crossref","unstructured":"Ikizler-Cinbis, N., & Sclaroff, S. (2010). Object, scene and actions: Combining multiple features for human action recognition. In: ECCV","DOI":"10.1007\/978-3-642-15549-9_36"},{"key":"2478_CR426","doi-asserted-by":"crossref","unstructured":"Iofinova, E., Peste, A., & Alistarh, D. (2023). Bias in pruned vision models: In-depth analysis and countermeasures. In: CVPR","DOI":"10.1109\/CVPR52729.2023.02334"},{"key":"2478_CR427","doi-asserted-by":"crossref","first-page":"1325","DOI":"10.1109\/TPAMI.2013.248","volume":"36","author":"C Ionescu","year":"2013","unstructured":"Ionescu, C., Papava, D., Olaru, V., & Sminchisescu, C. (2013). Human3. 6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE TPAMI, 36, 1325\u20131339.","journal-title":"IEEE TPAMI"},{"issue":"3","key":"2478_CR428","first-page":"412","volume":"23","author":"A Iosifidis","year":"2012","unstructured":"Iosifidis, A., Tefas, A., & Pitas, I. (2012). View-invariant action recognition based on artificial neural networks. IEEE TNNLS, 23(3), 412\u2013424.","journal-title":"IEEE TNNLS"},{"key":"2478_CR429","doi-asserted-by":"crossref","unstructured":"Ippolito, D., Grangier, D., Callison-Burch, C., & Eck, D. (2019). Unsupervised hierarchical story infilling. In: WNU","DOI":"10.18653\/v1\/W19-2405"},{"key":"2478_CR430","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1008078328650","volume":"29","author":"M Isard","year":"1998","unstructured":"Isard, M., & Blake, A. (1998). Condensation-conditional density propagation for visual tracking. IJCV, 29, 5\u201328.","journal-title":"IJCV"},{"key":"2478_CR431","doi-asserted-by":"crossref","unstructured":"Islam, M. M., Ho, N., Yang, X., Nagarajan, T., Torresani, L., & Bertasius, G. (2024). Video recap: Recursive captioning of hour-long videos. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.01723"},{"issue":"11","key":"2478_CR432","doi-asserted-by":"crossref","first-page":"1254","DOI":"10.1109\/34.730558","volume":"20","author":"L Itti","year":"2002","unstructured":"Itti, L., Koch, C., & Niebur, E. (2002). A model of saliency-based visual attention for rapid scene analysis. IEEE TPAMI, 20(11), 1254\u20131259.","journal-title":"IEEE TPAMI"},{"key":"2478_CR433","unstructured":"Jabri, A., Owens, A., & Efros, A. (2020). Space-time correspondence as a contrastive random walk. In: NeurIPS"},{"issue":"1","key":"2478_CR434","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1162\/neco.1991.3.1.79","volume":"3","author":"RA Jacobs","year":"1991","unstructured":"Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural computation, 3(1), 79\u201387.","journal-title":"Neural computation"},{"key":"2478_CR435","unstructured":"Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., & Carreira, J. (2021). Perceiver: General perception with iterative attention. In: ICML"},{"key":"2478_CR436","doi-asserted-by":"crossref","unstructured":"Jain, A., Tompson, J., LeCun, Y., & Bregler, C. (2015a). Modeep: A deep learning framework using motion features for human pose estimation. In: ACCV","DOI":"10.1007\/978-3-319-16808-1_21"},{"key":"2478_CR437","doi-asserted-by":"crossref","unstructured":"Jain, M., Van\u00a0Gemert, J., J\u00e9gou, H., Bouthemy, P., & Snoek, C. G. M. (2014). Action localization with tubelets from motion. In: CVPR","DOI":"10.1109\/CVPR.2014.100"},{"key":"2478_CR438","doi-asserted-by":"crossref","unstructured":"Jain, M., van Gemert, J. C., & Snoek, C. G. M. (2015b). What do 15,000 object categories tell us about classifying and localizing actions. In: CVPR","DOI":"10.1109\/CVPR.2015.7298599"},{"key":"2478_CR439","doi-asserted-by":"crossref","unstructured":"Jang, J., Kong, C., Jeon, D., Kim, S., & Kwak, N. (2023). Unifying vision-language representation space with single-tower transformer. In: AAAI","DOI":"10.1609\/aaai.v37i1.25178"},{"key":"2478_CR440","doi-asserted-by":"crossref","unstructured":"Jang, Y., Song, Y., Yu, Y., Kim, Y., & Kim, G. (2017). Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In: CVPR","DOI":"10.1109\/CVPR.2017.149"},{"key":"2478_CR441","doi-asserted-by":"crossref","unstructured":"Janocha, K., & Czarnecki, W. M. (2017). On loss functions for deep neural networks in classification. TFML","DOI":"10.4467\/20838476SI.16.004.6185"},{"issue":"2","key":"2478_CR442","first-page":"187","volume":"17","author":"M Jeannerod","year":"1994","unstructured":"Jeannerod, M. (1994). The representing brain: Neural correlates of motor intention and imagery. BBS, 17(2), 187\u2013202.","journal-title":"BBS"},{"key":"2478_CR443","doi-asserted-by":"crossref","unstructured":"Jenni, S., & Jin, H. (2021). Time-equivariant contrastive video representation learning. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00982"},{"key":"2478_CR444","doi-asserted-by":"crossref","unstructured":"Jhuang, H., Gall, J., Zuffi, S., Schmid, C., & Black, M. J. (2013). Towards understanding action recognition. In: ICCV","DOI":"10.1109\/ICCV.2013.396"},{"key":"2478_CR445","doi-asserted-by":"crossref","unstructured":"Ji, J., Krishna, R., Fei-Fei, L., & Niebles, J. C. (2020). Action genome: Actions as compositions of spatio-temporal scene graphs. 
In: CVPR","DOI":"10.1109\/CVPR42600.2020.01025"},{"key":"2478_CR446","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","volume":"35","author":"S Ji","year":"2012","unstructured":"Ji, S., Xu, W., Yang, M., & Yu, K. (2012). 3d convolutional neural networks for human action recognition. IEEE TPAMI, 35, 221\u2013231.","journal-title":"IEEE TPAMI"},{"key":"2478_CR447","unstructured":"Jia, K., & Yeung, D. Y. (2008). Human action recognition using local spatio-temporal discriminant embedding. In: CVPR"},{"key":"2478_CR448","doi-asserted-by":"crossref","unstructured":"Jiang, B., Huang, X., Yang, C., & Yuan, J. (2019a). Cross-modal video moment retrieval with spatial and language-temporal attention. In: ICMR","DOI":"10.1145\/3323873.3325019"},{"key":"2478_CR449","doi-asserted-by":"crossref","unstructured":"Jiang, B., Wang, M., Gan, W., Wu, W., & Yan, J. (2019b). Stm: Spatiotemporal and motion encoding for action recognition. In: ICCV","DOI":"10.1109\/ICCV.2019.00209"},{"key":"2478_CR450","unstructured":"Jiang, H., Kim, B., Guan, M., Gupta, M. (2018). To trust or not to trust a classifier. In: NeurIPS"},{"key":"2478_CR451","doi-asserted-by":"crossref","first-page":"212","DOI":"10.1016\/j.neucom.2020.12.069","volume":"433","author":"J Jiang","year":"2021","unstructured":"Jiang, J., Nan, Z., Chen, H., Chen, S., & Zheng, N. (2021). Predicting short-term next-active-object through visual attention and hand position. Neurocomputing, 433, 212\u2013222.","journal-title":"Neurocomputing"},{"key":"2478_CR452","doi-asserted-by":"crossref","unstructured":"Jiang, N., Liu, T., Cao, Z., Cui, J., Zhang, Z., Chen, Y., Wang, H., Zhu, Y., & Huang, S. (2023). Full-body articulated human-object interaction. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00859"},{"key":"2478_CR453","doi-asserted-by":"crossref","unstructured":"Jiang, P., & Han, Y. (2020). Reasoning with heterogeneous graph alignment for video question answering. In: AAAI","DOI":"10.1609\/aaai.v34i07.6767"},{"key":"2478_CR454","doi-asserted-by":"crossref","unstructured":"Jiang, W., Yi, K. M., Samei, G., Tuzel, O., & Ranjan, A. (2022). Neuman: Neural human radiance field from a single video. In: ECCV","DOI":"10.1007\/978-3-031-19824-3_24"},{"key":"2478_CR455","doi-asserted-by":"crossref","unstructured":"Jiang, Y. G., Ye, G., Chang, S. F., Ellis, D., & Loui, A. C. (2011). Consumer video understanding: A benchmark database and an evaluation of human and machine performance. In: ICMR","DOI":"10.1145\/1991996.1992025"},{"key":"2478_CR456","doi-asserted-by":"crossref","unstructured":"Jin, B., Hu, Y., Tang, Q., Niu, J., Shi, Z., Han, Y., & Li, X. (2020). Exploring spatial-temporal multi-frequency analysis for high-fidelity and temporal-consistency video prediction. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00461"},{"key":"2478_CR457","doi-asserted-by":"crossref","unstructured":"Jin, S., Choi, H., Noh, T., & Han, K. (2024). Integration of global and local representations for fine-grained cross-modal alignment. In: ECCV","DOI":"10.1007\/978-3-031-73010-8_4"},{"key":"2478_CR458","doi-asserted-by":"crossref","unstructured":"Jin, X., Li, X., Xiao, H., Shen, X., Lin, Z., Yang, J., Chen, Y., Dong, J., Liu, L., Jie, Z., et\u00a0al. (2017). Video scene parsing with predictive feature learning. In: ICCV","DOI":"10.1109\/ICCV.2017.595"},{"key":"2478_CR459","doi-asserted-by":"crossref","unstructured":"Joo, H., Liu, H., Tan, L., Gui, L., Nabbe, B., Matthews, I., Kanade, T., Nobuhara, S., & Sheikh, Y. (2015). 
Panoptic studio: A massively multiview system for social motion capture. In: ICCV","DOI":"10.1109\/ICCV.2015.381"},{"key":"2478_CR460","doi-asserted-by":"crossref","first-page":"190","DOI":"10.1109\/TPAMI.2017.2782743","volume":"41","author":"H Joo","year":"2017","unstructured":"Joo, H., Simon, T., Li, X., Liu, H., Tan, L., Gui, L., Banerjee, S., Godisart, T. S., Nabbe, B., Matthews, I., Kanade, T., Nobuhara, S., & Sheikh, Y. (2017). Panoptic studio: A massively multiview system for social interaction capture. IEEE TPAMI, 41, 190\u2013204.","journal-title":"IEEE TPAMI"},{"key":"2478_CR461","doi-asserted-by":"crossref","unstructured":"Joo, H., Simon, T., & Sheikh, Y. (2018). Total capture: A 3d deformation model for tracking faces, hands, and bodies. In: CVPR","DOI":"10.1109\/CVPR.2018.00868"},{"key":"2478_CR462","doi-asserted-by":"crossref","unstructured":"Joo, H., Neverova, N., & Vedaldi, A. (2021). Exemplar fine-tuning for 3d human model fitting towards in-the-wild 3d human pose estimation. In: 3DV","DOI":"10.1109\/3DV53792.2021.00015"},{"key":"2478_CR463","doi-asserted-by":"crossref","unstructured":"Ju, C., Han, T., Zheng, K., Zhang, Y., & Xie, W. (2022). Prompting visual-language models for efficient video understanding. In: ECCV","DOI":"10.1007\/978-3-031-19833-5_7"},{"key":"2478_CR464","doi-asserted-by":"crossref","unstructured":"Ju, C., Zheng, K., Liu, J., Zhao, P., Zhang, Y., Chang, J., Tian, Q., & Wang, Y. (2023). Distilling vision-language pre-training to collaborate with weakly-supervised temporal action localization. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01417"},{"key":"2478_CR465","doi-asserted-by":"crossref","unstructured":"Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). Learning to predict where humans look. In: ICCV","DOI":"10.1109\/ICCV.2009.5459462"},{"issue":"1","key":"2478_CR466","doi-asserted-by":"crossref","first-page":"172","DOI":"10.1109\/TPAMI.2010.68","volume":"33","author":"IN Junejo","year":"2010","unstructured":"Junejo, I. N., Dexter, E., Laptev, I., & Perez, P. (2010). View-independent action recognition from temporal self-similarities. IEEE TPAMI, 33(1), 172\u2013185.","journal-title":"IEEE TPAMI"},{"key":"2478_CR467","unstructured":"Kahana, J., Cohen, N., & Hoshen, Y. (2022). Improving zero-shot models with label distribution priors. arXiv:2212.00784"},{"key":"2478_CR468","doi-asserted-by":"crossref","unstructured":"Kahatapitiya, K., Arnab, A., Nagrani, A., & Ryoo, M. S. (2024). Victr: Video-conditioned text representations for activity recognition. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01755"},{"key":"2478_CR469","unstructured":"Kaiser, L., Gomez, A. N., Shazeer, N., Vaswani, A., Parmar, N., Jones, L., & Uszkoreit, J. (2017). One model to learn them all. arXiv:1706.05137"},{"key":"2478_CR470","doi-asserted-by":"crossref","unstructured":"Kalogeiton, V., Weinzaepfel, P., Ferrari, V., & Schmid, C. (2017). Action tubelet detector for spatio-temporal action localization. In: ICCV","DOI":"10.1109\/ICCV.2017.472"},{"key":"2478_CR471","doi-asserted-by":"crossref","unstructured":"Kanazawa, A., Black, M. J., Jacobs, D. W., & Malik, J. (2018). End-to-end recovery of human shape and pose. In: CVPR","DOI":"10.1109\/CVPR.2018.00744"},{"key":"2478_CR472","doi-asserted-by":"crossref","first-page":"6618","DOI":"10.1109\/TPAMI.2021.3061479","volume":"45","author":"G Kapidis","year":"2023","unstructured":"Kapidis, G., Poppe, R., & Veltkamp, R. C. (2023). Multi-dataset, multitask learning of egocentric vision tasks. 
IEEE TPAMI, 45, 6618\u20136630.","journal-title":"IEEE TPAMI"},{"key":"2478_CR473","unstructured":"Kariyappa, S., Guo, C., Maeng, K., Xiong, W., Suh, G. E., Qureshi, M. K., & Lee, H. H. S. (2023). Cocktail party attack: Breaking aggregation-based privacy in federated learning using independent component analysis. In: ICML"},{"key":"2478_CR474","doi-asserted-by":"crossref","unstructured":"Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In: CVPR","DOI":"10.1109\/CVPR.2014.223"},{"key":"2478_CR475","doi-asserted-by":"crossref","unstructured":"Kataoka, H., Miyashita, Y., Hayashi, M., Iwata, K., & Satoh, Y. (2016). Recognition of transitional action for short-term action prediction using discriminative temporal CNN feature. In: BMVC","DOI":"10.5244\/C.30.12"},{"key":"2478_CR476","unstructured":"Kaufmann, T., Weng, P., Bengs, V., & H\u00fcllermeier, E. (2023). A survey of reinforcement learning from human feedback. arXiv:2312.14925"},{"key":"2478_CR477","unstructured":"Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et\u00a0al. (2017). The kinetics human action video dataset. arXiv:1705.06950"},{"key":"2478_CR478","doi-asserted-by":"crossref","unstructured":"Kazakos, E., Nagrani, A., Zisserman, A., & Damen, D. (2021). Slow-fast auditory streams for audio recognition. In: ICASSP","DOI":"10.1109\/ICASSP39728.2021.9413376"},{"key":"2478_CR479","doi-asserted-by":"crossref","unstructured":"Ke, Q., Fritz, M., & Schiele, B. (2019). Time-conditioned action anticipation in one shot. In: CVPR","DOI":"10.1109\/CVPR.2019.01016"},{"key":"2478_CR480","doi-asserted-by":"crossref","unstructured":"Ke, Y., Sukthankar, R., & Hebert, M. (2007). Spatio-temporal shape and flow correlation for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2007.383512"},{"key":"2478_CR481","doi-asserted-by":"crossref","unstructured":"Khamis, S., & Davis, L. S. (2015). Walking and talking: A bilinear approach to multi-label action recognition. In: CVPRw","DOI":"10.1109\/CVPRW.2015.7301277"},{"key":"2478_CR482","doi-asserted-by":"crossref","unstructured":"Khattak, M. U., Rasheed, H., Maaz, M., Khan, S., & Khan, F. S. (2023). Maple: Multi-modal prompt learning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01832"},{"issue":"8","key":"2478_CR483","doi-asserted-by":"crossref","first-page":"352","DOI":"10.1016\/j.tics.2011.06.005","volume":"15","author":"JM Kilner","year":"2011","unstructured":"Kilner, J. M. (2011). More than one pathway to action understanding. Trends in cognitive sciences, 15(8), 352\u2013357.","journal-title":"Trends in cognitive sciences"},{"key":"2478_CR484","doi-asserted-by":"crossref","unstructured":"Kim, B., Lee, J., Kang, J., Kim, E. S., & Kim, H. J. (2021a). Hotr: End-to-end human-object interaction detection with transformers. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00014"},{"key":"2478_CR485","doi-asserted-by":"crossref","unstructured":"Kim, D., & Kim, T. (2024). Missing modality prediction for unpaired multimodal learning via joint embedding of unimodal models. In: ECCV","DOI":"10.1007\/978-3-031-73016-0_11"},{"key":"2478_CR486","doi-asserted-by":"crossref","unstructured":"Kim, D., Cho, D., & Kweon, I. S. (2019). Self-supervised video representation learning with space-time cubic puzzles. In: AAAI","DOI":"10.1609\/aaai.v33i01.33018545"},{"key":"2478_CR487","doi-asserted-by":"crossref","unstructured":"Kim, H., Jain, M., Lee, J. 
T., Yun, S., & Porikli, F. (2021b). Efficient action recognition via dynamic knowledge propagation. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01346"},{"key":"2478_CR488","doi-asserted-by":"crossref","unstructured":"Kim, J., & Grauman, K. (2009). Observe locally, infer globally: a space-time mrf for detecting abnormal activities with incremental updates. In: CVPR","DOI":"10.1109\/CVPR.2009.5206569"},{"key":"2478_CR489","unstructured":"Kim, J., Kang, J., Choi, J., & Han, B. (2024a). Fifo-diffusion: Generating infinite videos from text without training. In: NeurIPS"},{"key":"2478_CR490","doi-asserted-by":"crossref","unstructured":"Kim, J. M., Koepke, A., Schmid, C., & Akata, Z. (2023). Exposing and mitigating spurious correlations for cross-modal retrieval. In: CVPRw","DOI":"10.1109\/CVPRW59228.2023.00257"},{"key":"2478_CR491","unstructured":"Kim, K., Moltisanti, D., Mac\u00a0Aodha, O., & Sevilla-Lara, L. (2022). An action is worth multiple words: Handling ambiguity in action recognition. In: BMVC"},{"key":"2478_CR492","unstructured":"Kim, M., Kwon, H., Wang, C., Kwak, S., & Cho, M. (2021c). Relational self-attention: What\u2019s missing in attention for video understanding. In: NeurIPS"},{"key":"2478_CR493","doi-asserted-by":"crossref","unstructured":"Kim, M., Gao, S., Hsu, Y. C., Shen, Y., & Jin, H. (2024b). Token fusion: Bridging the gap between token pruning and token merging. In: WACV","DOI":"10.1109\/WACV57701.2024.00141"},{"key":"2478_CR494","doi-asserted-by":"crossref","unstructured":"Kim, M., Kim, H. B., Moon, J., Choi, J., & Kim, S. T. (2024c). Do you remember? Dense video captioning with cross-modal memory retrieval. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01318"},{"key":"2478_CR495","unstructured":"Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. In: ICLR"},{"key":"2478_CR496","doi-asserted-by":"crossref","unstructured":"Kitani, K. M., Ziebart, B. D., Bagnell, J. A., & Hebert, M. (2012). Activity forecasting. In: ECCV","DOI":"10.1007\/978-3-642-33765-9_15"},{"key":"2478_CR497","doi-asserted-by":"crossref","unstructured":"Ko, D., Lee, J. S., Choi, M., Chu, J., Park, J., & Kim, H. J. (2023). Open-vocabulary video question answering: A new benchmark for evaluating the generalizability of video question answering models. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00288"},{"issue":"5582","key":"2478_CR498","doi-asserted-by":"crossref","first-page":"846","DOI":"10.1126\/science.1070311","volume":"297","author":"E Kohler","year":"2002","unstructured":"Kohler, E., Keysers, C., Umilta, M. A., Fogassi, L., Gallese, V., & Rizzolatti, G. (2002). Hearing sounds, understanding actions: action representation in mirror neurons. Science, 297(5582), 846\u2013848.","journal-title":"Science"},{"key":"2478_CR499","doi-asserted-by":"crossref","unstructured":"Kolotouros, N., Pavlakos, G., Black, M. J., & Daniilidis, K. (2019). Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In: ICCV","DOI":"10.1109\/ICCV.2019.00234"},{"key":"2478_CR500","doi-asserted-by":"crossref","unstructured":"Kondratyuk, D., Yuan, L., Li, Y., Zhang, L., Tan, M., Brown, M., & Gong, B. (2021). Movinets: Mobile video networks for efficient video recognition. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01576"},{"key":"2478_CR501","unstructured":"Kone\u010dn\u00fd, J., McMahan, H. B., Ramage, D., & Richt\u00e1rik, P. (2016). Federated optimization: Distributed machine learning for on-device intelligence. 
arXiv:1610.02527"},{"key":"2478_CR502","first-page":"2880","volume":"28","author":"Q Kong","year":"2020","unstructured":"Kong, Q., Cao, Y., Iqbal, T., Wang, Y., Wang, W., & Plumbley, M. D. (2020). Panns: Large-scale pretrained audio neural networks for audio pattern recognition. IEEE\/ACM TASLP, 28, 2880\u20132894.","journal-title":"IEEE\/ACM TASLP"},{"key":"2478_CR503","doi-asserted-by":"crossref","first-page":"1366","DOI":"10.1007\/s11263-022-01594-9","volume":"130","author":"Y Kong","year":"2022","unstructured":"Kong, Y., & Fu, Y. (2022). Human action recognition and prediction: A survey. IJCV, 130, 1366\u20131401.","journal-title":"IJCV"},{"key":"2478_CR504","doi-asserted-by":"crossref","unstructured":"Kong, Y., Kit, D., & Fu, Y. (2014). A discriminative model with multiple temporal scales for action prediction. In: ECCV","DOI":"10.1007\/978-3-319-10602-1_39"},{"key":"2478_CR505","doi-asserted-by":"crossref","unstructured":"Kong, Y., Gao, S., Sun, B., & Fu, Y. (2018). Action prediction from videos via memorizing hard-to-predict samples. In: AAAI","DOI":"10.1609\/aaai.v32i1.12324"},{"key":"2478_CR506","doi-asserted-by":"crossref","unstructured":"Kopf, J., Rong, X., & Huang, J. B. (2021). Robust consistent video depth estimation. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00166"},{"issue":"1","key":"2478_CR507","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1109\/TPAMI.2015.2430335","volume":"38","author":"HS Koppula","year":"2015","unstructured":"Koppula, H. S., & Saxena, A. (2015). Anticipating human activities using object affordances for reactive robotic response. IEEE TPAMI, 38(1), 14\u201329.","journal-title":"IEEE TPAMI"},{"issue":"8","key":"2478_CR508","first-page":"951","volume":"32","author":"HS Koppula","year":"2013","unstructured":"Koppula, H. S., Gupta, R., & Saxena, A. (2013). Learning human activities and object affordances from rgb-d videos. IJRR, 32(8), 951\u2013970.","journal-title":"IJRR"},{"key":"2478_CR509","doi-asserted-by":"crossref","unstructured":"Korbar, B., Tran, D., & Torresani, L. (2019). Scsampler: Sampling salient clips from video for efficient action recognition. In: ICCV","DOI":"10.1109\/ICCV.2019.00633"},{"key":"2478_CR510","doi-asserted-by":"crossref","unstructured":"K\u00f6rner, M., & Denzler, J. (2013). Temporal self-similarity for appearance-based action recognition in multi-view setups. In: CAIP","DOI":"10.1007\/978-3-642-40261-6_19"},{"key":"2478_CR511","doi-asserted-by":"crossref","unstructured":"Koutini, K., Schl\u00fcter, J., Eghbal-Zadeh, H., & Widmer, G. (2022). Efficient training of audio transformers with patchout. In: Interspeech","DOI":"10.21437\/Interspeech.2022-227"},{"key":"2478_CR512","doi-asserted-by":"crossref","unstructured":"Kowal, M., Dave, A., Ambrus, R., Gaidon, A., Derpanis, K. G., & Tokmakov, P. (2024a). Understanding video transformers via universal concept discovery. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01041"},{"key":"2478_CR513","doi-asserted-by":"crossref","unstructured":"Kowal, M., Wildes, R. P., & Derpanis, K. G. (2024b). Visual concept connectome (vcc): Open world concept discovery and their interlayer connections in deep models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01036"},{"key":"2478_CR514","doi-asserted-by":"crossref","unstructured":"Krishna, R., Hata, K., Ren, F., Fei-Fei, L., & Carlos\u00a0Niebles, J. (2017). Dense-captioning events in videos. 
In: ICCV","DOI":"10.1109\/ICCV.2017.83"},{"key":"2478_CR515","doi-asserted-by":"crossref","unstructured":"Kuang, H., Zhu, Y., Zhang, Z., Li, X., Tighe, J., Schwertfeger, S., Stachniss, C., & Li, M. (2021). Video contrastive learning with global context. In: ICCVw","DOI":"10.1109\/ICCVW54120.2021.00358"},{"key":"2478_CR516","doi-asserted-by":"crossref","unstructured":"Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., & Serre, T. (2011). Hmdb: a large video database for human motion recognition. In: ICCV","DOI":"10.1109\/ICCV.2011.6126543"},{"key":"2478_CR517","doi-asserted-by":"crossref","unstructured":"Kuehne, H., Arslan, A., & Serre, T. (2014). The language of actions: Recovering the syntax and semantics of goal-directed human activities. In: CVPR","DOI":"10.1109\/CVPR.2014.105"},{"key":"2478_CR518","doi-asserted-by":"crossref","unstructured":"Kumar, A., & Rawat, Y. S. (2022). End-to-end semi-supervised learning for video action detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01429"},{"key":"2478_CR519","unstructured":"Kumar, M., Babaeizadeh, M., Erhan, D., Finn, C., Levine, S., Dinh, L., & Kingma, D. (2020). Videoflow: A conditional flow-based model for stochastic video generation. In: ICLR"},{"key":"2478_CR520","unstructured":"Kun, L., He, Z., Lu, C., Hu, K., Gao, Y., & Xu, H. (2024). Uni-o4: Unifying online and offline deep reinforcement learning with multi-step on-policy optimization. In: The Twelfth International Conference on Learning Representations"},{"key":"2478_CR521","first-page":"15","volume":"129","author":"I Kviatkovsky","year":"2014","unstructured":"Kviatkovsky, I., Rivlin, E., & Shimshoni, I. (2014). Online action recognition using covariance of shape and motion. CVIU, 129, 15\u201326.","journal-title":"CVIU"},{"key":"2478_CR522","doi-asserted-by":"crossref","unstructured":"Kwon, T., Tekin, B., St\u00fchmer, J., Bogo, F., & Pollefeys, M. (2021). H2o: Two hands manipulating objects for first person interaction recognition. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00998"},{"issue":"3","key":"2478_CR523","doi-asserted-by":"crossref","first-page":"854","DOI":"10.1007\/s11263-023-01879-7","volume":"132","author":"B Lai","year":"2024","unstructured":"Lai, B., Liu, M., Ryan, F., & Rehg, J. M. (2024). In the eye of transformer: Global-local correlation for egocentric gaze estimation and beyond. IJCV, 132(3), 854\u2013871.","journal-title":"IJCV"},{"key":"2478_CR524","doi-asserted-by":"crossref","unstructured":"Lai, B., Ryan, F., Jia, W., Liu, M., & Rehg, J. M. (2024b). Listen to look into the future: Audio-visual egocentric gaze anticipation. In: ECCV","DOI":"10.1007\/978-3-031-72673-6_11"},{"key":"2478_CR525","unstructured":"Lai, B., Toyer, S., Nagarajan, T., Girdhar, R., Zha, S., Rehg, J. M., Kitani, K., Grauman, K., Desai, R., & Liu, M. (2024c). Human action anticipation: A survey. axiv:241014045"},{"issue":"25\u201326","key":"2478_CR526","doi-asserted-by":"crossref","first-page":"3559","DOI":"10.1016\/S0042-6989(01)00102-X","volume":"41","author":"MF Land","year":"2001","unstructured":"Land, M. F., & Hayhoe, M. (2001). In what ways do eye movements contribute to everyday activities? Vision research, 41(25\u201326), 3559\u20133565.","journal-title":"Vision research"},{"key":"2478_CR527","doi-asserted-by":"crossref","unstructured":"Laptev, I., & Lindeberg, T. (2003). Space-time interest points. In: ICCV","DOI":"10.1109\/ICCV.2003.1238378"},{"key":"2478_CR528","doi-asserted-by":"crossref","unstructured":"Laptev, I., & P\u00e9rez, P. (2007). Retrieving actions in movies. 
In: ICCV","DOI":"10.1109\/ICCV.2007.4409105"},{"key":"2478_CR529","doi-asserted-by":"crossref","unstructured":"Laptev, I., Marszalek, M., Schmid, C., & Rozenfeld, B. (2008). Learning realistic human actions from movies. In: CVPR","DOI":"10.1109\/CVPR.2008.4587756"},{"key":"2478_CR530","unstructured":"Larochelle, H., Bengio, Y., Louradour, J., & Lamblin, P. (2009). Exploring strategies for training deep neural networks. JMLR 10(1)"},{"key":"2478_CR531","doi-asserted-by":"crossref","unstructured":"Le, Q. V., Zou, W. Y., Yeung, S. Y., & Ng, A. Y. (2011). Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In: CVPR","DOI":"10.1109\/CVPR.2011.5995496"},{"key":"2478_CR532","unstructured":"Lee, A. X., Zhang, R., Ebert, F., Abbeel, P., Finn, C., & Levine, S. (2018). Stochastic adversarial video prediction. arXiv:1804.01523"},{"key":"2478_CR533","doi-asserted-by":"crossref","unstructured":"Lee, H., Battle, A., Raina, R., & Ng, A. (2006). Efficient sparse coding algorithms. In: NeurIPS","DOI":"10.7551\/mitpress\/7503.003.0105"},{"key":"2478_CR534","doi-asserted-by":"crossref","unstructured":"Lee, H. Y., Huang, J. B., Singh, M., & Yang, M. H. (2017). Unsupervised representation learning by sorting sequences. In: ICCV","DOI":"10.1109\/ICCV.2017.79"},{"key":"2478_CR535","unstructured":"Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019). Set transformer: A framework for attention-based permutation-invariant neural networks. In: ICML"},{"key":"2478_CR536","doi-asserted-by":"crossref","unstructured":"Lee, T., Kwon, S., & Kim, T. (2024). Grid diffusion models for text-to-video generation. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00834"},{"key":"2478_CR537","doi-asserted-by":"crossref","unstructured":"Lei, J., Yu, L., Bansal, M., & Berg, T. L. (2018). Tvqa: Localized, compositional video question answering. arXiv:1809.01696","DOI":"10.18653\/v1\/D18-1167"},{"key":"2478_CR538","unstructured":"Lei, J., Berg, T. L., & Bansal, M. (2021a). Detecting moments and highlights in videos via natural language queries. In: NeurIPS"},{"key":"2478_CR539","doi-asserted-by":"crossref","unstructured":"Lei, J., Li, L., Zhou, L., Gan, Z., Berg, T. L., Bansal, M., & Liu, J. (2021b). Less is more: Clipbert for video-and-language learning via sparse sampling. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00725"},{"key":"2478_CR540","doi-asserted-by":"crossref","unstructured":"Lei, J., Weng, Y., Harley, A., Guibas, L., & Daniilidis, K. (2024). Mosca: Dynamic gaussian fusion from casual videos via 4d motion scaffolds. arXiv:2405.17421","DOI":"10.1109\/CVPR52734.2025.00578"},{"key":"2478_CR541","doi-asserted-by":"crossref","unstructured":"Leng, Z., Wu, S. C., Saleh, M., Montanaro, A., Yu, H., Wang, Y., Navab, N., Liang, X., & Tombari, F. (2023). Dynamic hyperbolic attention network for fine hand-object reconstruction. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01368"},{"key":"2478_CR542","doi-asserted-by":"crossref","unstructured":"Li, D., Qiu, Z., Dai, Q., Yao, T., & Mei, T. (2018a). Recurrent tubelet proposal and recognition networks for action detection. In: ECCV","DOI":"10.1007\/978-3-030-01231-1_19"},{"key":"2478_CR543","doi-asserted-by":"crossref","unstructured":"Li, D., Jiang, T., & Jiang, M. (2019a). Quality assessment of in-the-wild videos. In: MM","DOI":"10.1145\/3343031.3351028"},{"key":"2478_CR544","doi-asserted-by":"crossref","unstructured":"Li, D., Li, J., Li, H., Niebles, J. C., & Hoi, S. C. (2022a). 
Align and prompt: Video-and-language pre-training with entity prompts. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00490"},{"key":"2478_CR545","unstructured":"Li, F., Zhang, R., Zhang, H., Zhang, Y., Li, B., Li, W., Ma, Z., & Li, C. (2024a). Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv:2407.07895"},{"key":"2478_CR546","doi-asserted-by":"crossref","unstructured":"Li, G., Cai, G., Zeng, X., & Zhao, R. (2022b). Scale-aware spatio-temporal relation learning for video anomaly detection. In: ECCV","DOI":"10.1007\/978-3-031-19772-7_20"},{"key":"2478_CR547","volume":"566","author":"H Li","year":"2024","unstructured":"Li, H., Zhu, G., Zhang, L., Jiang, Y., Dang, Y., Hou, H., Shen, P., Zhao, X., Shah, S. A. A., & Bennamoun, M. (2024). Scene graph generation: A comprehensive survey. Neurocomputing, 566, Article 127052.","journal-title":"Neurocomputing"},{"issue":"2","key":"2478_CR548","first-page":"554","volume":"22","author":"J Li","year":"2019","unstructured":"Li, J., Wong, Y., Zhao, Q., & Kankanhalli, M. S. (2019). Video storytelling: Textual summaries for events. IEEE TM, 22(2), 554\u2013565.","journal-title":"IEEE TM"},{"key":"2478_CR549","unstructured":"Li, J., Li, D., Savarese, S., & Hoi, S. (2023a). Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In: ICML"},{"key":"2478_CR550","doi-asserted-by":"crossref","unstructured":"Li, J., Wei, P., Han, W., & Fan, L. (2023b). Intentqa: Context-aware video intent reasoning. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01099"},{"key":"2478_CR551","unstructured":"Li, J., Gao, K., Bai, Y., Zhang, J., Xia, S. T., & Wang, Y. (2024c). Fmm-attack: A flow-based multi-modal adversarial attack on video-based llms. arXiv:2403.13507"},{"key":"2478_CR552","doi-asserted-by":"crossref","unstructured":"Li, J., Yuan, Y., Rempe, D., Zhang, H., Molchanov, P., Lu, C., Kautz, J., & Iqbal, U. (2024d). Coin: Control-inpainting diffusion prior for human and camera motion estimation. In: ECCV","DOI":"10.1007\/978-3-031-72640-8_24"},{"issue":"8","key":"2478_CR553","doi-asserted-by":"crossref","first-page":"1644","DOI":"10.1109\/TPAMI.2013.2297321","volume":"36","author":"K Li","year":"2014","unstructured":"Li, K., & Fu, Y. (2014). Prediction of human activity by discovering temporal sequence patterns. IEEE TPAMI, 36(8), 1644\u20131657.","journal-title":"IEEE TPAMI"},{"key":"2478_CR554","doi-asserted-by":"crossref","unstructured":"Li, K., Hu, J., & Fu, Y. (2012). Modeling complex temporal composition of actionlets for activity prediction. In: ECCV","DOI":"10.1007\/978-3-642-33718-5_21"},{"key":"2478_CR555","unstructured":"Li, K., Wang, Y., Peng, G., Song, G., Liu, Y., Li, H., & Qiao, Y. (2022c). Uniformer: Unified transformer for efficient spatial-temporal representation learning. In: ICLR"},{"key":"2478_CR556","doi-asserted-by":"crossref","unstructured":"Li, K., Wang, Y., Li, Y., Wang, Y., He, Y., Wang, L., & Qiao, Y. (2023c). Unmasked teacher: Towards training-efficient video foundation models. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01826"},{"key":"2478_CR557","doi-asserted-by":"crossref","unstructured":"Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Liu, Y., Wang, Z., Xu, J., Chen, G., Luo, P., et\u00a0al. (2024e). Mvbench: A comprehensive multi-modal video understanding benchmark. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02095"},{"key":"2478_CR558","doi-asserted-by":"crossref","unstructured":"Li, L., Chen, Y. C., Cheng, Y., Gan, Z., Yu, L., & Liu, J. (2020a). 
Hero: Hierarchical encoder for video+language omni-representation pre-training. In: EMNLP","DOI":"10.18653\/v1\/2020.emnlp-main.161"},{"key":"2478_CR559","doi-asserted-by":"crossref","unstructured":"Li, P., Xie, C. W., Zhao, L., Xie, H., Ge, J., Zheng, Y., Zhao, D., & Zhang, Y. (2023d). Progressive spatio-temporal prototype matching for text-video retrieval. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00379"},{"key":"2478_CR560","doi-asserted-by":"crossref","unstructured":"Li, R., Zhang, Y., Qiu, Z., Yao, T., Liu, D., & Mei, T. (2021a). Motion-focused contrastive learning of video representations. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00211"},{"key":"2478_CR561","doi-asserted-by":"crossref","unstructured":"Li, T., Wang, Z., Liu, S., & Lin, W. Y. (2021b). Deep unsupervised anomaly detection. In: WACV","DOI":"10.1109\/WACV48630.2021.00368"},{"key":"2478_CR562","doi-asserted-by":"crossref","unstructured":"Li, T., Slavcheva, M., Zollhoefer, M., Green, S., Lassner, C., Kim, C., Schmidt, T., Lovegrove, S., Goesele, M., Newcombe, R., et\u00a0al. (2022d). Neural 3d video synthesis from multi-view video. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00544"},{"key":"2478_CR563","doi-asserted-by":"crossref","unstructured":"Li, T., Fan, L., Yuan, Y., He, H., Tian, Y., Feris, R., Indyk, P., & Katabi, D. (2023e). Addressing feature suppression in unsupervised visual representations. In: WACV","DOI":"10.1109\/WACV56688.2023.00146"},{"key":"2478_CR564","doi-asserted-by":"crossref","unstructured":"Li, T., Ma, M., & Peng, X. (2024f). Deal: Disentangle and localize concept-level explanations for vlms. In: ECCV","DOI":"10.1007\/978-3-031-72933-1_22"},{"key":"2478_CR565","doi-asserted-by":"crossref","unstructured":"Li, W., & Fritz, M. (2016). Recognition of ongoing complex activities by sequence prediction over a hierarchical label space. In: WACV","DOI":"10.1109\/WACV.2016.7477586"},{"key":"2478_CR566","doi-asserted-by":"crossref","unstructured":"Li, W., Zhang, Z., & Liu, Z. (2010). Action recognition based on a bag of 3d points. In: CVPRw","DOI":"10.1109\/CVPRW.2010.5543273"},{"key":"2478_CR567","doi-asserted-by":"crossref","unstructured":"Li, X., & Xu, H. (2024). Repetitive action counting with motion feature learning. In: WACV","DOI":"10.1109\/WACV57701.2024.00637"},{"key":"2478_CR568","doi-asserted-by":"crossref","unstructured":"Li, X., Song, J., Gao, L., Liu, X., Huang, W., He, X., & Gan, C. (2019c). Beyond rnns: Positional self-attention with co-attention for video question answering. In: AAAI","DOI":"10.1609\/aaai.v33i01.33018658"},{"key":"2478_CR569","doi-asserted-by":"crossref","unstructured":"Li, Y., Ye, Z., & Rehg, J. M. (2015). Delving into egocentric actions. In: CVPR","DOI":"10.1109\/CVPR.2015.7298625"},{"key":"2478_CR570","doi-asserted-by":"crossref","unstructured":"Li, Y., Li, Y., & Vasconcelos, N. (2018b). Resound: Towards action recognition without representation bias. In: ECCV","DOI":"10.1007\/978-3-030-01231-1_32"},{"key":"2478_CR571","doi-asserted-by":"crossref","unstructured":"Li, Y., Liu, M., & Rehg, J. M. (2018c). In the eye of beholder: Joint learning of gaze and actions in first person video. In: ECCV","DOI":"10.1007\/978-3-030-01228-1_38"},{"key":"2478_CR572","doi-asserted-by":"crossref","unstructured":"Li, Y., Yao, T., Pan, Y., Chao, H., & Mei, T. (2018d). Jointly localizing and describing events for dense video captioning. In: CVPR","DOI":"10.1109\/CVPR.2018.00782"},{"key":"2478_CR573","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, Z., Wang, L., & Wu, G. (2020b). 
Actions as moving points. In: ECCV","DOI":"10.1007\/978-3-030-58517-4_5"},{"key":"2478_CR574","doi-asserted-by":"crossref","unstructured":"Li, Y., Chen, L., He, R., Wang, Z., Wu, G., & Wang, L. (2021c). Multisports: A multi-person video dataset of spatio-temporally localized sports actions. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01328"},{"key":"2478_CR575","doi-asserted-by":"crossref","unstructured":"Li, Y., Wu, C. Y., Fan, H., Mangalam, K., Xiong, B., Malik, J., & Feichtenhofer, C. (2022e). Mvitv2: Improved multiscale vision transformers for classification and detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00476"},{"key":"2478_CR576","doi-asserted-by":"crossref","unstructured":"Li, Y., Min, K., Tripathi, S., & Vasconcelos, N. (2023f). Svitt: Temporal learning of sparse video-text transformers. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01814"},{"key":"2478_CR577","doi-asserted-by":"crossref","unstructured":"Li, Y., Xiao, J., Feng, C., Wang, X., & Chua, T. S. (2023g). Discovering spatio-temporal rationales for video question answering. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01275"},{"key":"2478_CR578","unstructured":"Li, Y., Chen, X., Hu, B., Wang, L., Shi, H., & Zhang, M. (2024g). Videovista: A versatile benchmark for video understanding and reasoning. arXiv:2406.11303"},{"key":"2478_CR579","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, C., & Jia, J. (2024h). Llama-vid: An image is worth 2 tokens in large language models. In: ECCV","DOI":"10.1007\/978-3-031-72952-2_19"},{"key":"2478_CR580","unstructured":"Li, Y., Zhang, Y., Wang, C., Zhong, Z., Chen, Y., Chu, R., Liu, S., & Jia, J. (2024i). Mini-gemini: Mining the potential of multi-modality vision language models. arXiv:2403.18814"},{"key":"2478_CR581","doi-asserted-by":"crossref","unstructured":"Li, Z., Liu, J., Zhang, Z., Xu, S., & Yan, Y. (2022f). Cliff: Carrying location information in full frames into human pose and shape estimation. In: ECCV","DOI":"10.1007\/978-3-031-20065-6_34"},{"key":"2478_CR582","unstructured":"Li, Z., Ma, X., Shang, Q., Zhu, W., Ci, H., Qiao, Y., & Wang, Y. (2024j). Efficient action counting with dynamic queries. arXiv:2403.01543"},{"key":"2478_CR583","doi-asserted-by":"crossref","unstructured":"Li, Z., Tucker, R., Cole, F., Wang, Q., Jin, L., Ye, V., Kanazawa, A., Holynski, A., & Snavely, N. (2024k). Megasam: Accurate, fast, and robust structure and motion from casual dynamic videos. arXiv:2412.04463","DOI":"10.1109\/CVPR52734.2025.00981"},{"key":"2478_CR584","doi-asserted-by":"crossref","unstructured":"Lian, J., Baevski, A., Hsu, W. N., & Auli, M. (2023). Av-data2vec: Self-supervised learning of audio-visual speech representations with contextualized target representations. In: ASRUw","DOI":"10.1109\/ASRU57964.2023.10389642"},{"key":"2478_CR585","doi-asserted-by":"crossref","unstructured":"Liang, C., Wang, W., Zhou, T., & Yang, Y. (2022a). Visual abductive reasoning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01512"},{"key":"2478_CR586","unstructured":"Liang, H., Ren, J., Mirzaei, A., Torralba, A., Liu, Z., Gilitschenski, I., Fidler, S., Oztireli, C., Ling, H., Gojcic, Z., et\u00a0al. (2024a). Feed-forward bullet-time reconstruction of dynamic scenes from monocular videos. arXiv:2412.03526"},{"key":"2478_CR587","unstructured":"Liang, J., Wu, C., Hu, X., Gan, Z., Wang, J., Wang, L., Liu, Z., Fang, Y., & Duan, N. (2022b). Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. 
In: NeurIPS"},{"key":"2478_CR588","doi-asserted-by":"crossref","unstructured":"Liang, J., Liang, S., Luo, M., Liu, A., Han, D., Chang, E. C., & Cao, X. (2024b). Vl-trojan: Multimodal instruction backdoor attacks against autoregressive visual language models. arXiv:2402.13851","DOI":"10.1007\/s11263-025-02368-9"},{"key":"2478_CR589","doi-asserted-by":"crossref","unstructured":"Liang, P. P., Zadeh, A., & Morency, L. P. (2022c). Foundations and trends in multimodal machine learning: Principles, challenges, and open questions. arXiv:2209.03430","DOI":"10.1145\/3610661.3617602"},{"issue":"10","key":"2478_CR590","first-page":"1","volume":"56","author":"PP Liang","year":"2024","unstructured":"Liang, P. P., Zadeh, A., & Morency, L. P. (2024). Foundations & trends in multimodal machine learning: Principles, challenges, and open questions. ACM Computing Surveys, 56(10), 1\u201342.","journal-title":"ACM Computing Surveys"},{"key":"2478_CR591","unstructured":"Liang, V. W., Zhang, Y., Kwon, Y., Yeung, S., & Zou, J. Y. (2022d). Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. In: NeurIPS"},{"key":"2478_CR592","doi-asserted-by":"crossref","unstructured":"Liang, X., Lee, L., Dai, W., & Xing, E. P. (2017). Dual motion gan for future-flow embedded video prediction. In: ICCV","DOI":"10.1109\/ICCV.2017.194"},{"key":"2478_CR593","doi-asserted-by":"crossref","unstructured":"Liberatori, B., Conti, A., Rota, P., Wang, Y., & Ricci, E. (2024). Test-time zero-shot temporal action localization. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01771"},{"key":"2478_CR594","unstructured":"Lin, B., Tang, Z., Ye, Y., Cui, J., Zhu, B., Jin, P., Zhang, J., Ning, M., & Yuan, L. (2024). Moe-llava: Mixture of experts for large vision-language models. arXiv:2401.15947"},{"key":"2478_CR595","doi-asserted-by":"crossref","unstructured":"Lin, C., Xu, C., Luo, D., Wang, Y., Tai, Y., Wang, C., Li, J., Huang, F., & Fu, Y. (2021a). Learning salient boundary feature for anchor-free temporal action localization. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00333"},{"key":"2478_CR596","doi-asserted-by":"crossref","unstructured":"Lin, J., Gan, C., & Han, S. (2019). Tsm: Temporal shift module for efficient video understanding. In: ICCV","DOI":"10.1109\/ICCV.2019.00718"},{"key":"2478_CR597","first-page":"5548","volume":"26","author":"J Lin","year":"2023","unstructured":"Lin, J., Hua, H., Chen, M., Li, Y., Hsiao, J., Ho, C., & Luo, J. (2023). Videoxum: Cross-modal visual and textural summarization of videos. IEEE TM, 26, 5548\u20135560.","journal-title":"IEEE TM"},{"key":"2478_CR598","unstructured":"Lin, J., Zeng, A., Lu, S., Cai, Y., Zhang, R., Wang, H., & Zhang, L. (2023b). Motion-x: A large-scale 3d expressive whole-body human motion dataset. In: NeurIPS"},{"key":"2478_CR599","doi-asserted-by":"crossref","unstructured":"Lin, K., Li, L., Lin, C. C., Ahmed, F., Gan, Z., Liu, Z., Lu, Y., & Wang, L. (2022a). Swinbert: End-to-end transformers with sparse attention for video captioning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01742"},{"key":"2478_CR600","doi-asserted-by":"crossref","unstructured":"Lin, K. E., Xiao, L., Liu, F., Yang, G., & Ramamoorthi, R. (2021b). Deep 3d mask volume for view synthesis of dynamic scenes. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00177"},{"key":"2478_CR601","unstructured":"Lin, K. Q., Wang, J., Soldan, M., Wray, M., Yan, R., Xu, E. Z., Gao, D., Tu, R. C., Zhao, W., Kong, W., et\u00a0al. (2022b). Egocentric video-language pretraining. 
In: NeurIPS"},{"key":"2478_CR602","doi-asserted-by":"crossref","unstructured":"Lin, T., Zhao, X., Su, H., Wang, C., & Yang, M. (2018). Bsn: Boundary sensitive network for temporal action proposal generation. In: ECCV","DOI":"10.1007\/978-3-030-01225-0_1"},{"key":"2478_CR603","doi-asserted-by":"crossref","unstructured":"Lin, Y., Wei, C., Wang, H., Yuille, A., & Xie, C. (2023c). Smaug: Sparse masked autoencoder for efficient video-language pre-training. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00233"},{"key":"2478_CR604","doi-asserted-by":"crossref","unstructured":"Lin, Y. B., & Bertasius, G. (2024). Siamese vision transformers are scalable audio-visual learners. arXiv:2403.19638","DOI":"10.1007\/978-3-031-72630-9_18"},{"key":"2478_CR605","doi-asserted-by":"crossref","unstructured":"Lin, Y. B., Sung, Y. L., Lei, J., Bansal, M., & Bertasius, G. (2023d). Vision transformers are parameter-efficient audio-visual learners. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00228"},{"key":"2478_CR606","doi-asserted-by":"crossref","unstructured":"Lin, Z., Geng, S., Zhang, R., Gao, P., De\u00a0Melo, G., Wang, X., Dai, J., Qiao, Y., & Li, H. (2022c). Frozen clip models are efficient video learners. In: ECCV","DOI":"10.1007\/978-3-031-19833-5_23"},{"key":"2478_CR607","unstructured":"Lipman, Y., Chen, R. T., Ben-Hamu, H., Nickel, M., & Le, M. (2023). Flow matching for generative modeling. In: ICLR"},{"key":"2478_CR608","doi-asserted-by":"crossref","unstructured":"Liu, D., Qu, X., Dong, J., Zhou, P., Cheng, Y., Wei, W., Xu, Z., & Xie, Y. (2021a). Context-aware biaffine localizing network for temporal sentence grounding. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01108"},{"key":"2478_CR609","doi-asserted-by":"crossref","unstructured":"Liu, D., Qu, X., Di, X., Cheng, Y., Xu, Z., & Zhou, P. (2022a). Memory-guided semantic learning network for temporal sentence grounding. In: AAAI","DOI":"10.1609\/aaai.v36i2.20058"},{"key":"2478_CR610","doi-asserted-by":"crossref","unstructured":"Liu, F., Liu, J., Wang, W., & Lu, H. (2021b). Hair: Hierarchical visual-semantic relational reasoning for video question answering. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00172"},{"key":"2478_CR611","unstructured":"Liu, H., Liu, X., Kong, Q., Wang, W., & Plumbley, M. D. (2022b). Learning the spectrogram temporal resolution for audio classification. In: AAAI"},{"key":"2478_CR612","unstructured":"Liu, H., Li, C., Wu, Q., & Lee, Y. J. (2024a). Visual instruction tuning. In: NeurIPS"},{"key":"2478_CR613","unstructured":"Liu, J., & Shah, M. (2008). Learning human actions via information maximization. In: CVPR"},{"key":"2478_CR614","unstructured":"Liu, J., Ali, S., & Shah, M. (2008). Recognizing human actions using multiple features. In: CVPR"},{"key":"2478_CR615","doi-asserted-by":"crossref","unstructured":"Liu, J., Luo, J., & Shah, M. (2009). Recognizing realistic actions from videos \u201cin the wild\u201d\u2019. In: CVPR","DOI":"10.1109\/CVPR.2009.5206744"},{"key":"2478_CR616","doi-asserted-by":"crossref","unstructured":"Liu, J., Teshome, W., Ghimire, S., Sznaier, M., & Camps, O. (2024b). Solving masked jigsaw puzzles with diffusion vision transformers. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02171"},{"key":"2478_CR617","doi-asserted-by":"crossref","unstructured":"Liu, M., Wang, X., Nie, L., Tian, Q., Chen, B., & Chua, T. S. (2018a). Cross-modal moment localization in videos. In: MM","DOI":"10.1145\/3240508.3240549"},{"key":"2478_CR618","doi-asserted-by":"crossref","unstructured":"Liu, M., Tang, S., Li, Y., & Rehg, J. M. (2020). 
Forecasting human-object interaction: joint prediction of motor attention and actions in first person video. In: ECCV","DOI":"10.1007\/978-3-030-58452-8_41"},{"key":"2478_CR619","unstructured":"Liu, M., Zhang, M., Liu, J., Dai, H., Yang, M. H., Ji, S., Feng, Z., & Gong, B. (2023a). Video timeline modeling for news story understanding. In: NeurIPS"},{"key":"2478_CR620","doi-asserted-by":"crossref","unstructured":"Liu, Q., & Wang, Z. (2020). Progressive boundary refinement network for temporal action detection. In: AAAI","DOI":"10.1609\/aaai.v34i07.6829"},{"key":"2478_CR621","unstructured":"Liu, Q., Liu, Y., Wang, J., Lyv, X., Wang, P., Wang, W., & Hou, J. (2024c). Modgs: Dynamic gaussian splatting from casually-captured monocular videos. In: ICLR"},{"key":"2478_CR622","doi-asserted-by":"crossref","unstructured":"Liu, S., Fan, H., Qian, S., Chen, Y., Ding, W., & Wang, Z. (2021c). Hit: Hierarchical transformer with momentum contrast for video-text retrieval. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01170"},{"key":"2478_CR623","doi-asserted-by":"crossref","unstructured":"Liu, S., Jiang, H., Xu, J., Liu, S., & Wang, X. (2021d). Semi-supervised 3d hand-object poses estimation with interactions in time. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01445"},{"key":"2478_CR624","doi-asserted-by":"crossref","unstructured":"Liu, S., Tripathi, S., Majumdar, S., & Wang, X. (2022c). Joint hand motion and interaction hotspots prediction from egocentric videos. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00328"},{"key":"2478_CR625","doi-asserted-by":"crossref","unstructured":"Liu, S., Zhou, Y., Yang, J., Gupta, S., & Wang, S. (2023b). Contactgen: Generative contact modeling for grasp generation. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01884"},{"key":"2478_CR626","doi-asserted-by":"crossref","unstructured":"Liu, S., Ren, Z., Gupta, S., & Wang, S. (2024d). Physgen: Rigid-body physics-grounded image-to-video generation. In: ECCV","DOI":"10.1007\/978-3-031-73007-8_21"},{"key":"2478_CR627","doi-asserted-by":"crossref","unstructured":"Liu, S., Zhang, C. L., Zhao, C., & Ghanem, B. (2024e). End-to-end temporal action detection with 1b parameters across 1000 frames. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01759"},{"key":"2478_CR628","doi-asserted-by":"crossref","unstructured":"Liu, T., & Lam, K. M. (2022). A hybrid egocentric activity anticipation framework via memory-augmented recurrent and one-shot representation forecasting. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01353"},{"key":"2478_CR629","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). Ssd: Single shot multibox detector. In: ECCV","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"2478_CR630","doi-asserted-by":"crossref","unstructured":"Liu, W., Luo, W., Lian, D., & Gao, S. (2018b). Future frame prediction for anomaly detection\u2013a new baseline. In: CVPR","DOI":"10.1109\/CVPR.2018.00684"},{"key":"2478_CR631","doi-asserted-by":"crossref","unstructured":"Liu, W., Tekin, B., Coskun, H., Vineet, V., Fua, P., & Pollefeys, M. (2022d). Learning to align sequential actions in the wild. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00222"},{"key":"2478_CR632","doi-asserted-by":"crossref","unstructured":"Liu, X., Bai, S., & Bai, X. (2022e). An empirical study of end-to-end temporal action detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01938"},{"key":"2478_CR633","doi-asserted-by":"crossref","unstructured":"Liu, Y., Wei, P., & Zhu, S. C. (2017a). 
Jointly recognizing object fluents and tasks in egocentric videos. In: ICCV","DOI":"10.1109\/ICCV.2017.318"},{"key":"2478_CR634","unstructured":"Liu, Y., Albanie, S., Nagrani, A., & Zisserman, A. (2019). Use what you have: Video retrieval using representations from collaborative experts. In: BMVC"},{"key":"2478_CR635","doi-asserted-by":"crossref","unstructured":"Liu, Y., Zhou, L., Bai, X., Huang, Y., Gu, L., Zhou, J., & Harada, T. (2021e). Goal-oriented gaze estimation for zero-shot learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00379"},{"key":"2478_CR636","doi-asserted-by":"crossref","unstructured":"Liu, Y., Liu, Y., Jiang, C., Lyu, K., Wan, W., Shen, H., Liang, B., Fu, Z., Wang, H., & Yi, L. (2022f). Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In: CVPR","DOI":"10.1109\/CVPR52688.2022.02034"},{"key":"2478_CR637","doi-asserted-by":"crossref","first-page":"6937","DOI":"10.1109\/TIP.2022.3217368","volume":"31","author":"Y Liu","year":"2022","unstructured":"Liu, Y., Wang, L., Wang, Y., Ma, X., & Qiao, Y. (2022). Fineaction: A fine-grained video dataset for temporal action localization. IEEE TIP, 31, 6937\u20136950.","journal-title":"IEEE TIP"},{"key":"2478_CR638","unstructured":"Liu, Y., Li, L., Ren, S., Gao, R., Li, S., Chen, S., Sun, X., & Hou, L. (2023c). Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. In: NeurIPS"},{"key":"2478_CR639","doi-asserted-by":"crossref","unstructured":"Liu, Y., Cun, X., Liu, X., Wang, X., Zhang, Y., Chen, H., Liu, Y., Zeng, T., Chan, R., & Shan, Y. (2024f). Evalcrafter: Benchmarking and evaluating large video generation models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02090"},{"key":"2478_CR640","doi-asserted-by":"crossref","unstructured":"Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C., Liu, Z., et\u00a0al. (2024g). Mmbench: Is your multi-modal model an all-around player? In: ECCV","DOI":"10.1007\/978-3-031-72658-3_13"},{"key":"2478_CR641","unstructured":"Liu, Y., Eyzaguirre, C., Li, M., Khanna, S., Niebles, J. C., Ravi, V., Mishra, S., Liu, W., & Wu, J. (2024h). Ikea manuals at work: 4d grounding of assembly instructions on internet videos. In: NeurIPS"},{"key":"2478_CR642","first-page":"375","volume":"10","author":"Y Liu","year":"2024","unstructured":"Liu, Y., Zhao, H., Chan, K. C., Wang, X., Loy, C. C., Qiao, Y., & Dong, C. (2024). Temporally consistent video colorization with deep feature propagation and self-regularization learning. CVM, 10, 375\u2013395.","journal-title":"CVM"},{"key":"2478_CR643","doi-asserted-by":"crossref","unstructured":"Liu, Z., Yeh, R. A., Tang, X., Liu, Y., & Agarwala, A. (2017b). Video frame synthesis using deep voxel flow. In: ICCV","DOI":"10.1109\/ICCV.2017.478"},{"key":"2478_CR644","doi-asserted-by":"crossref","unstructured":"Liu, Z., Wang, L., Tang, W., Yuan, J., Zheng, N., & Hua, G. (2021f). Weakly supervised temporal action localization through learning explicit subspaces for action and context. In: AAAI","DOI":"10.1609\/aaai.v35i3.16323"},{"key":"2478_CR645","doi-asserted-by":"crossref","unstructured":"Liu, Z., Courant, R., & Kalogeiton, V. (2022h). Funnynet: Audiovisual learning of funny moments in videos. In: ACCV","DOI":"10.1007\/978-3-031-26316-3_26"},{"key":"2478_CR646","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022i). A convnet for the 2020s. 
In: CVPR","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"2478_CR647","doi-asserted-by":"crossref","unstructured":"Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., & Hu, H. (2022j). Video swin transformer. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00320"},{"key":"2478_CR648","unstructured":"Liu, Z., Lin, J., Wu, W., & Zhou, B. (2025). Joint optimization for 4d human-scene reconstruction in the wild. arXiv:2501.02158"},{"key":"2478_CR649","unstructured":"Long, F., Qiu, Z., Yao, T., & Mei, T. (2024). Videodrafter: Content-consistent multi-scene video generation with llm. In: ECCV"},{"key":"2478_CR650","doi-asserted-by":"crossref","unstructured":"Long, T., & van Noord, N. (2023). Cross-modal scalable hyperbolic hierarchical clustering. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01527"},{"key":"2478_CR651","doi-asserted-by":"crossref","unstructured":"Long, T., Mettes, P., Shen, H. T., & Snoek, C. G. M. (2020). Searching for actions on the hyperbole. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00122"},{"key":"2478_CR652","doi-asserted-by":"crossref","unstructured":"Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., & Black, M. J. (2015). Smpl: A skinned multi-person linear model. ACM-TOG pp 851\u2013866","DOI":"10.1145\/3596711.3596800"},{"issue":"2","key":"2478_CR653","doi-asserted-by":"crossref","first-page":"258","DOI":"10.1109\/TPAMI.2004.1262196","volume":"26","author":"C Lu","year":"2004","unstructured":"Lu, C., & Ferrier, N. J. (2004). Repetitive Motion Analysis: Segmentation and Event Classification. IEEE TPAMI, 26(2), 258\u2013263.","journal-title":"IEEE TPAMI"},{"key":"2478_CR654","doi-asserted-by":"crossref","unstructured":"Lu, C., Shi, J., & Jia, J. (2013). Abnormal event detection at 150 fps in matlab. In: ICCV","DOI":"10.1109\/ICCV.2013.338"},{"key":"2478_CR655","unstructured":"Lu, H., Poppe, R., & Salah, A. A. (2024a). Improving the generalization of vits for action understanding with vlm pre-training. arXiv:2403.16128"},{"key":"2478_CR656","doi-asserted-by":"crossref","unstructured":"Lu, H., Poppe, R., & Salah, A. A. (2024b). Tcnet: Continuous sign language recognition from trajectories and correlated regions. In: ECCV","DOI":"10.1609\/aaai.v38i4.28181"},{"key":"2478_CR657","doi-asserted-by":"crossref","unstructured":"Lu, J., Huang, T., Li, P., Dou, Z., Lin, C., Cui, Z., Dong, Z., Yeung, S. K., Wang, W., & Liu, Y. (2025). Align3r: Aligned monocular depth estimation for dynamic videos. In: CVPR","DOI":"10.1109\/CVPR52734.2025.02125"},{"issue":"8","key":"2478_CR658","first-page":"3703","volume":"28","author":"M Lu","year":"2019","unstructured":"Lu, M., Li, Z. N., Wang, Y., & Pan, G. (2019). Deep attention network for egocentric action recognition. IEEE TIP, 28(8), 3703\u20133713.","journal-title":"IEEE TIP"},{"key":"2478_CR659","doi-asserted-by":"crossref","unstructured":"Luc, P., Couprie, C., Lecun, Y., & Verbeek, J. (2018). Predicting future instance segmentation by forecasting convolutional features. In: ECCV","DOI":"10.1007\/978-3-030-01240-3_36"},{"key":"2478_CR660","unstructured":"Luc, P., Clark, A., Dieleman, S., Casas, Dd. L., Doron, Y., Cassirer, A., & Simonyan, K. (2020). Transformation-based adversarial video prediction on large-scale data. arXiv:2003.04035"},{"key":"2478_CR661","doi-asserted-by":"crossref","unstructured":"Luo, C., & Yuille, A. L. (2019). Grouped spatial-temporal aggregation for efficient action recognition. In: ICCV","DOI":"10.1109\/ICCV.2019.00561"},{"key":"2478_CR662","doi-asserted-by":"crossref","unstructured":"Luo, R., Zhang, H., Chen, L., Lin, T. 
E., Liu, X., Wu, Y., Yang, M., Wang, M., Zeng, P., Gao, L., et\u00a0al. (2024). Mmevol: Empowering multimodal large language models with evol-instruct. arXiv:2409.05840","DOI":"10.18653\/v1\/2025.findings-acl.1009"},{"key":"2478_CR663","doi-asserted-by":"crossref","unstructured":"Luo, W., Liu, W., & Gao, S. (2017). A revisit of sparse coding based anomaly detection in stacked rnn framework. In: ICCV","DOI":"10.1109\/ICCV.2017.45"},{"key":"2478_CR664","doi-asserted-by":"crossref","unstructured":"Luo, Z., Guillory, D., Shi, B., Ke, W., Wan, F., Darrell, T., & Xu, H. (2020). Weakly-supervised action localization with expectation-maximization multi-instance learning. In: ECCV","DOI":"10.1007\/978-3-030-58526-6_43"},{"key":"2478_CR665","unstructured":"Luo, Z., Xie, W., Kapoor, S., Liang, Y., Cooper, M., Niebles, J. C., Adeli, E., & Li, F. F. (2021). Moma: Multi-object multi-actor activity parsing. In: NeurIPS"},{"key":"2478_CR666","unstructured":"Lv, Z., Charron, N., Moulon, P., Gamino, A., Peng, C., Sweeney, C., Miller, E., Tang, H., Meissner, J., Dong, J., et\u00a0al. (2024). Aria everyday activities dataset. arXiv:2402.13349"},{"key":"2478_CR667","unstructured":"Ma, C., Guo, Q., Jiang, Y., Luo, P., Yuan, Z., & Qi, X. (2022a). Rethinking resolution in the context of efficient video recognition. In: NeurIPS"},{"key":"2478_CR668","doi-asserted-by":"crossref","unstructured":"Ma, M., Ren, J., Zhao, L., Testuggine, D., & Peng, X. (2022b). Are multimodal transformers robust to missing modality? In: CVPR","DOI":"10.1109\/CVPR52688.2022.01764"},{"key":"2478_CR669","unstructured":"Ma, S., Zeng, Z., McDuff, D., & Song, Y. (2021). Active contrastive learning of audio-visual video representations. In: ICLR"},{"key":"2478_CR670","doi-asserted-by":"crossref","unstructured":"Maaz, M., Rasheed, H., Khan, S., & Khan, F. S. (2023). Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv:2306.05424","DOI":"10.18653\/v1\/2024.acl-long.679"},{"key":"2478_CR671","doi-asserted-by":"crossref","unstructured":"Madan, N., Moegelmose, A., Modi, R., Rawat, Y. S., & Moeslund, T. B. (2024). Foundation models for video understanding: A survey. arXiv:2405.03770","DOI":"10.36227\/techrxiv.171769139.99464428\/v1"},{"key":"2478_CR672","doi-asserted-by":"crossref","unstructured":"Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., & Black, M. J. (2019). Amass: Archive of motion capture as surface shapes. In: ICCV","DOI":"10.1109\/ICCV.2019.00554"},{"key":"2478_CR673","doi-asserted-by":"crossref","unstructured":"Majumder, S., Nagarajan, T., Al-Halah, Z., Pradhan, R., & Grauman, K. (2024). Which viewpoint shows it best? Language for weakly supervising view selection in multi-view videos. arXiv:2411.08753","DOI":"10.1109\/CVPR52734.2025.02702"},{"key":"2478_CR674","unstructured":"Mangalam, K., Akshulakov, R., & Malik, J. (2023). Egoschema: A diagnostic benchmark for very long-form video language understanding. In: NeurIPS"},{"key":"2478_CR675","doi-asserted-by":"crossref","unstructured":"Markovitz, A., Sharir, G., Friedman, I., Zelnik-Manor, L., & Avidan, S. (2020). Graph embedded pose clustering for anomaly detection. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01055"},{"key":"2478_CR676","doi-asserted-by":"crossref","unstructured":"Marszalek, M., Laptev, I., & Schmid, C. (2009). Actions in context. 
In: CVPR","DOI":"10.1109\/CVPRW.2009.5206557"},{"key":"2478_CR677","doi-asserted-by":"crossref","unstructured":"Martin, M., Roitberg, A., Haurilet, M., Horne, M., Rei\u00df, S., Voit, M., & Stiefelhagen, R. (2019). Drive &act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In: ICCV","DOI":"10.1109\/ICCV.2019.00289"},{"key":"2478_CR678","doi-asserted-by":"crossref","first-page":"3069","DOI":"10.1109\/TPAMI.2020.3048482","volume":"44","author":"MJ Mar\u00edn-Jim\u00e9nez","year":"2021","unstructured":"Mar\u00edn-Jim\u00e9nez, M. J., Kalogeiton, V., Medina-Su\u00e1rez, P., & Zisserman, A. (2021). Laeo-net++: Revisiting people looking at each other in videos. IEEE TPAMI, 44, 3069\u20133081.","journal-title":"IEEE TPAMI"},{"key":"2478_CR679","doi-asserted-by":"crossref","unstructured":"Mascar\u00f3, E. V., Ahn, H., & Lee, D. (2023). Intention-conditioned long-term human egocentric action anticipation. In: WACV","DOI":"10.1109\/WACV56688.2023.00599"},{"key":"2478_CR680","doi-asserted-by":"crossref","unstructured":"Mavroudi, E., Afouras, T., & Torresani, L. (2023). Learning to ground instructional articles in videos through narrations. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01395"},{"key":"2478_CR681","doi-asserted-by":"crossref","unstructured":"Mazzamuto, M., Furnari, A., Sato, Y., & Farinella, G. M. (2025). Gazing into missteps: Leveraging eye-gaze for unsupervised mistake detection in egocentric videos of skilled human activities. In: CVPR","DOI":"10.1109\/CVPR52734.2025.00778"},{"key":"2478_CR682","doi-asserted-by":"crossref","unstructured":"Menapace, W., Lathuiliere, S., Tulyakov, S., Siarohin, A., & Ricci, E. (2021). Playable video generation. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00993"},{"key":"2478_CR683","doi-asserted-by":"crossref","unstructured":"Meng, Y., Lin, C. C., Panda, R., Sattigeri, P., Karlinsky, L., Oliva, A., Saenko, K., & Feris, R. (2020). Ar-net: Adaptive frame resolution for efficient action recognition. In: ECCV","DOI":"10.1007\/978-3-030-58571-6_6"},{"key":"2478_CR684","unstructured":"Menick, J., & Kalchbrenner, N. (2019). Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In: ICLR"},{"issue":"6\u20137","key":"2478_CR685","doi-asserted-by":"crossref","first-page":"421","DOI":"10.1016\/j.imavis.2013.03.005","volume":"31","author":"D Metaxas","year":"2013","unstructured":"Metaxas, D., & Zhang, S. (2013). A review of motion analysis methods for human nonverbal communication computing. IVC, 31(6\u20137), 421\u2013433.","journal-title":"IVC"},{"key":"2478_CR686","doi-asserted-by":"crossref","unstructured":"Mettes, P., Van\u00a0Gemert, J. C., & Snoek, C. G. M. (2016). Spot on: Action localization from pointly-supervised proposals. In: ECCV","DOI":"10.1007\/978-3-319-46454-1_27"},{"issue":"9","key":"2478_CR687","doi-asserted-by":"crossref","first-page":"3484","DOI":"10.1007\/s11263-024-02043-5","volume":"132","author":"P Mettes","year":"2024","unstructured":"Mettes, P., Ghadimi Atigh, M., Keller-Ressel, M., Gu, J., & Yeung, S. (2024). Hyperbolic deep learning in computer vision: A survey. IJCV, 132(9), 3484\u20133508.","journal-title":"IJCV"},{"key":"2478_CR688","doi-asserted-by":"crossref","unstructured":"Micorek, J., Possegger, H., Narnhofer, D., Bischof, H., & Kozinski, M. (2024). Mulde: Multiscale log-density estimation via denoising score matching for video anomaly detection. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.01785"},{"key":"2478_CR689","doi-asserted-by":"crossref","unstructured":"Miech, A., Zhukov, D., Alayrac, J. B., Tapaswi, M., Laptev, I., & Sivic, J. (2019). Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In: CVPR","DOI":"10.1109\/ICCV.2019.00272"},{"key":"2478_CR690","unstructured":"Miech, A., Alayrac, J. B., Laptev, I., Sivic, J., & Zisserman, A. (2020a). Rareact: A video dataset of unusual interactions. arXiv:2008.01018"},{"key":"2478_CR691","doi-asserted-by":"crossref","unstructured":"Miech, A., Alayrac, J. B., Smaira, L., Laptev, I., Sivic, J., & Zisserman, A. (2020b). End-to-end learning of visual representations from uncurated instructional videos. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00990"},{"key":"2478_CR692","doi-asserted-by":"crossref","unstructured":"Mikolajczyk, K., & Uemura, H. (2008). Action recognition with motion-appearance vocabulary forest. In: CVPR","DOI":"10.1109\/CVPR.2008.4587628"},{"key":"2478_CR693","doi-asserted-by":"crossref","unstructured":"Min, J., Buch, S., Nagrani, A., Cho, M., & Schmid, C. (2024). Morevqa: Exploring modular reasoning models for video question answering. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01257"},{"key":"2478_CR694","doi-asserted-by":"crossref","unstructured":"Min, K., & Corso, J. J. (2021). Integrating human gaze into attention for egocentric activity recognition. In: WACV","DOI":"10.1109\/WACV48630.2021.00111"},{"key":"2478_CR695","unstructured":"Ming, R., Huang, Z., Ju, Z., Hu, J., Peng, L., & Zhou, S. (2024). A survey on video prediction: From deterministic to generative approaches. arXiv:2401.14718"},{"key":"2478_CR696","doi-asserted-by":"crossref","unstructured":"Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable artificial intelligence: A comprehensive review. Artificial Intelligence Review pp 1\u201366","DOI":"10.1007\/s10462-021-10088-y"},{"key":"2478_CR697","doi-asserted-by":"crossref","unstructured":"Misra, I., Zitnick, C. L., & Hebert, M. (2016). Shuffle and learn: unsupervised learning using temporal order verification. In: ECCV","DOI":"10.1007\/978-3-319-46448-0_32"},{"key":"2478_CR698","doi-asserted-by":"crossref","unstructured":"Mistretta, M., Baldrati, A., Bertini, M., & Bagdanov, A. D. (2024). Improving zero-shot generalization of learned prompts via unsupervised knowledge distillation. In: ECCV","DOI":"10.1007\/978-3-031-72907-2_27"},{"key":"2478_CR699","doi-asserted-by":"crossref","unstructured":"Mithun, N. C., Li, J., Metze, F., & Roy-Chowdhury, A. K. (2018). Learning joint embedding with multimodal cues for cross-modal video-text retrieval. In: ICMR","DOI":"10.1145\/3206025.3206064"},{"key":"2478_CR700","doi-asserted-by":"crossref","unstructured":"Mittal, H., Agarwal, N., Lo, S. Y., & Lee, K. (2024). Can\u2019t make an omelette without breaking some eggs: Plausible action anticipation using large video-language models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01758"},{"key":"2478_CR701","unstructured":"Mizrahi, D., Bachmann, R., Kar, O., Yeo, T., Gao, M., Dehghan, A., & Zamir, A. (2023). 4m: Massively multimodal masked modeling. In: NeurIPS"},{"key":"2478_CR702","doi-asserted-by":"crossref","unstructured":"Mo, K., Guibas, L. J., Mukadam, M., Gupta, A., & Tulsiani, S. (2021). Where2act: From pixels to actions for articulated 3d objects. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00674"},{"key":"2478_CR703","unstructured":"Mo, S., & Morgado, P. (2023). 
A unified audio-visual learning framework for localization, separation, and recognition. In: ICML"},{"key":"2478_CR704","first-page":"231","volume":"81","author":"TB Moeslund","year":"2001","unstructured":"Moeslund, T. B., & Granum, E. (2001). A survey of computer vision-based human motion capture. CVIU, 81, 231\u2013268.","journal-title":"CVIU"},{"key":"2478_CR705","doi-asserted-by":"crossref","unstructured":"Moeslund, T. B., Hilton, A., & Kr\u00fcger, V. (2006). A survey of advances in vision-based human motion capture and analysis. CVIU 104(2-3)","DOI":"10.1016\/j.cviu.2006.08.002"},{"key":"2478_CR706","unstructured":"Mokady, R., Hertz, A., & Bermano, A. H. (2021). Clipcap: Clip prefix for image captioning. arXiv:2111.09734"},{"key":"2478_CR707","doi-asserted-by":"crossref","unstructured":"Moltisanti, D., Wray, M., Mayol-Cuevas, W., & Damen, D. (2017). Trespassing the boundaries: Labeling temporal bounds for object interactions in egocentric video. In: ICCV","DOI":"10.1109\/ICCV.2017.314"},{"key":"2478_CR708","doi-asserted-by":"crossref","unstructured":"Moltisanti, D., Fidler, S., & Damen, D. (2019). Action recognition from single timestamp supervision in untrimmed videos. In: CVPR","DOI":"10.1109\/CVPR.2019.01015"},{"key":"2478_CR709","doi-asserted-by":"crossref","unstructured":"Moltisanti, D., Keller, F., Bilen, H., & Sevilla-Lara, L. (2023). Learning action changes by measuring verb-adverb textual relationships. In: CVPR","DOI":"10.1109\/CVPR52729.2023.02213"},{"issue":"2","key":"2478_CR710","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1109\/TPAMI.2019.2901464","volume":"42","author":"M Monfort","year":"2019","unstructured":"Monfort, M., Andonian, A., Zhou, B., Ramakrishnan, K., Bargal, S. A., Yan, T., Brown, L., Fan, Q., Gutfreund, D., Vondrick, C., et al. (2019). Moments in time dataset: one million videos for event understanding. IEEE TPAMI, 42(2), 502\u2013508.","journal-title":"IEEE TPAMI"},{"key":"2478_CR711","doi-asserted-by":"crossref","unstructured":"Monfort, M., Jin, S., Liu, A., Harwath, D., Feris, R., Glass, J., & Oliva, A. (2021). Spoken moments: Learning joint audio-visual representations from video descriptions. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01463"},{"key":"2478_CR712","doi-asserted-by":"crossref","unstructured":"Moon, G., Yu, S. I., Wen, H., Shiratori, T., & Lee, K. M. (2020). Interhand2.6m: A dataset and baseline for 3d interacting hand pose estimation from a single rgb image. In: ECCV","DOI":"10.1007\/978-3-030-58565-5_33"},{"key":"2478_CR713","doi-asserted-by":"crossref","unstructured":"Moon, G., Choi, H., & Lee, K. M. (2022). Neuralannot: Neural annotator for 3d human mesh training sets. In: CVPRw","DOI":"10.1109\/CVPRW56347.2022.00256"},{"key":"2478_CR714","doi-asserted-by":"crossref","unstructured":"Morais, R., Le, V., Tran, T., Saha, B., Mansour, M., & Venkatesh, S. (2019). Learning regularity in skeleton trajectories for anomaly detection in videos. In: CVPR","DOI":"10.1109\/CVPR.2019.01227"},{"key":"2478_CR715","doi-asserted-by":"crossref","unstructured":"Morales, J., Murrugarra-Llerena, N., & Saavedra, J. M. (2022). Leveraging unlabeled data for sketch-based understanding. In: CVPRw","DOI":"10.1109\/CVPRW56347.2022.00563"},{"key":"2478_CR716","doi-asserted-by":"crossref","unstructured":"Morgado, P., Vasconcelos, N., & Misra, I. (2021). Audio-visual instance discrimination with cross-modal agreement. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01274"},{"key":"2478_CR717","unstructured":"Mounir, R., Vijayaraghavan, S., & Sarkar, S. (2023). 
Streamer: Streaming representation learning and event segmentation in a hierarchical manner. In: NeurIPS"},{"key":"2478_CR718","doi-asserted-by":"crossref","unstructured":"Mueller, F., Mehta, D., Sotnychenko, O., Sridhar, S., Casas, D., & Theobalt, C. (2017). Real-time hand tracking under occlusion from an egocentric rgb-d sensor. In: ICCV","DOI":"10.1109\/ICCV.2017.131"},{"key":"2478_CR719","doi-asserted-by":"crossref","unstructured":"Muller, L., Osman, A. A., Tang, S., Huang, C. H. P., & Black, M. J. (2021). On self-contact and human pose. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00986"},{"key":"2478_CR720","doi-asserted-by":"crossref","unstructured":"Mun, J., Yang, L., Ren, Z., Xu, N., & Han, B. (2019). Streamlined dense video captioning. In: CVPR","DOI":"10.1109\/CVPR.2019.00675"},{"key":"2478_CR721","doi-asserted-by":"crossref","unstructured":"Mun, J., Cho, M., & Han, B. (2020). Local-global video-text interactions for temporal grounding. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01082"},{"key":"2478_CR722","doi-asserted-by":"crossref","unstructured":"Munoz, A., Zolfaghari, M., Argus, M., & Brox, T. (2021). Temporal shift gan for large scale video generation. In: WACV","DOI":"10.1109\/WACV48630.2021.00322"},{"key":"2478_CR723","doi-asserted-by":"crossref","unstructured":"Munro, J., & Damen, D. (2020). Multi-modal domain adaptation for fine-grained action recognition. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00020"},{"issue":"5","key":"2478_CR724","first-page":"1255","volume":"33","author":"R Mur-Artal","year":"2017","unstructured":"Mur-Artal, R., & Tard\u00f3s, J. D. (2017). Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE TR, 33(5), 1255\u20131262.","journal-title":"IEEE TR"},{"key":"2478_CR725","doi-asserted-by":"crossref","unstructured":"Mur-Labadia, L., Martinez-Cantin, R., Guerrero, J., Farinella, G. M., & Furnari, A. (2024). Aff-ttention! affordances and attention models for short-term object interaction anticipation. arXiv:2406.01194","DOI":"10.1007\/978-3-031-73337-6_10"},{"key":"2478_CR726","doi-asserted-by":"crossref","unstructured":"Nag, S., Zhu, X., Song, Y. Z., & Xiang, T. (2022). Zero-shot temporal action detection via vision-language prompting. In: ECCV","DOI":"10.1007\/978-3-031-20062-5_39"},{"key":"2478_CR727","doi-asserted-by":"crossref","unstructured":"Nag, S., Zhu, X., Deng, J., Song, Y. Z., & Xiang, T. (2023). Difftad: Temporal action detection with proposal denoising diffusion. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00951"},{"key":"2478_CR728","doi-asserted-by":"crossref","unstructured":"Nag, S., Goswami, K., & Karanam, S. (2024). Safari: Adaptive sequence transformer for weakly supervised referring expression segmentation. In: ECCV","DOI":"10.1007\/978-3-031-72784-9_27"},{"key":"2478_CR729","doi-asserted-by":"crossref","unstructured":"Nagarajan, T., & Grauman, K. (2018). Attributes as operators: factorizing unseen attribute-object compositions. In: ECCV","DOI":"10.1007\/978-3-030-01246-5_11"},{"key":"2478_CR730","doi-asserted-by":"crossref","unstructured":"Nagarajan, T., Feichtenhofer, C., & Grauman, K. (2019). Grounded human-object interaction hotspots from video. In: ICCV","DOI":"10.1109\/ICCV.2019.00878"},{"key":"2478_CR731","unstructured":"Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., & Sun, C. (2021). Attention bottlenecks for multimodal fusion. In: NeurIPS"},{"key":"2478_CR732","doi-asserted-by":"crossref","unstructured":"Nam, H., Jung, D. S., Moon, G., & Lee, K. M. (2024). 
Joint reconstruction of 3d human and object via contact-based refinement transformer. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00973"},{"key":"2478_CR733","doi-asserted-by":"crossref","unstructured":"Nan, G., Qiao, R., Xiao, Y., Liu, J., Leng, S., Zhang, H., & Lu, W. (2021). Interventional video grounding with dual contrastive learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00279"},{"key":"2478_CR734","doi-asserted-by":"crossref","unstructured":"Nawhal, M., Jyothi, A. A., & Mori, G. (2022). Rethinking learning approaches for long-term action anticipation. In: ECCV","DOI":"10.1007\/978-3-031-19830-4_32"},{"key":"2478_CR735","unstructured":"Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., & Ng, A. Y. (2011). Multimodal deep learning. In: ICML"},{"key":"2478_CR736","doi-asserted-by":"crossref","unstructured":"Nguyen, T. N., & Meunier, J. (2019). Anomaly detection in video sequence with appearance-motion correspondence. In: ICCV","DOI":"10.1109\/ICCV.2019.00136"},{"key":"2478_CR737","doi-asserted-by":"crossref","unstructured":"Ni, B., Paramathayalan, V. R., & Moulin, P. (2014). Multiple granularity analysis for fine-grained action detection. In: CVPR","DOI":"10.1109\/CVPR.2014.102"},{"key":"2478_CR738","doi-asserted-by":"crossref","unstructured":"Nie, X., Chen, X., Jin, H., Zhu, Z., Yan, Y., & Qi, D. (2024), Triplet attention transformer for spatiotemporal predictive learning. In: WACV","DOI":"10.1109\/WACV57701.2024.00688"},{"key":"2478_CR739","doi-asserted-by":"crossref","first-page":"299","DOI":"10.1007\/s11263-007-0122-4","volume":"79","author":"JC Niebles","year":"2008","unstructured":"Niebles, J. C., Wang, H., & Fei-Fei, L. (2008). Unsupervised learning of human action categories using spatial-temporal words. IJCV, 79, 299\u2013318.","journal-title":"IJCV"},{"key":"2478_CR740","first-page":"971","volume":"24","author":"JC Niebles","year":"2010","unstructured":"Niebles, J. C., Chen, C. W., & Fei-Fei, L. (2010). Modeling temporal structure of decomposable motion segments for activity classification. ECCV, 24, 971\u2013981.","journal-title":"ECCV"},{"key":"2478_CR741","unstructured":"Nikankin, Y., Haim, N., & Irani, M. (2023). Sinfusion: training diffusion models on a single image or video. In: ICML"},{"key":"2478_CR742","doi-asserted-by":"crossref","unstructured":"Nowozin, S., Bakir, G., & Tsuda, K. (2007). Discriminative subsequence mining for action classification. In: ICCV","DOI":"10.1109\/ICCV.2007.4409049"},{"key":"2478_CR743","doi-asserted-by":"crossref","unstructured":"Ntinou, I., Sanchez, E., & Tzimiropoulos, G. (2024). Multiscale vision transformers meet bipartite matching for efficient single-stage action localization. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01781"},{"key":"2478_CR744","doi-asserted-by":"crossref","unstructured":"Nugroho, M. A., Woo, S., Lee, S., & Kim, C. (2023). Audio-visual glance network for efficient video recognition. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00931"},{"key":"2478_CR745","doi-asserted-by":"crossref","unstructured":"Ohkawa, T., He, K., Sener, F., Hodan, T., Tran, L., Keskin, C. (2023). AssemblyHands: towards egocentric activity understanding via 3d hand pose estimation. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01249"},{"key":"2478_CR746","unstructured":"Oikonomopoulos, A., Patras, I., & Pantic, M. (2005). Spatiotemporal saliency for human action recognition. In: ICME"},{"key":"2478_CR747","doi-asserted-by":"crossref","unstructured":"Omran, M., Lassner, C., Pons-Moll, G., Gehler, P., & Schiele, B. (2018). 
Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In: 3DV","DOI":"10.1109\/3DV.2018.00062"},{"key":"2478_CR748","doi-asserted-by":"crossref","unstructured":"Oncescu, A. M., Henriques, J. F., Liu, Y., Zisserman, A., & Albanie, S. (2021). Queryd: A video dataset with high-quality text and audio narrations. In: ICASSP","DOI":"10.1109\/ICASSP39728.2021.9414640"},{"key":"2478_CR749","doi-asserted-by":"crossref","unstructured":"Oneata, D., Verbeek, J., & Schmid, C. (2013). Action and event recognition with fisher vectors on a compact feature set. In: ICCV","DOI":"10.1109\/ICCV.2013.228"},{"key":"2478_CR750","unstructured":"van den Oord, A., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv:1807.03748"},{"key":"2478_CR751","doi-asserted-by":"crossref","first-page":"2806","DOI":"10.1109\/TPAMI.2020.3045007","volume":"44","author":"S Oprea","year":"2022","unstructured":"Oprea, S., Martinez-Gonzalez, P., Garcia-Garcia, A., Castro-Vargas, J. A., Orts-Escolano, S., Garcia-Rodriguez, J., & Argyros, A. (2022). A review on deep learning techniques for video prediction. IEEE TPAMI, 44, 2806\u20132826.","journal-title":"IEEE TPAMI"},{"key":"2478_CR752","doi-asserted-by":"crossref","unstructured":"Oreifej, O., & Liu, Z. (2013). Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences. In: CVPR","DOI":"10.1109\/CVPR.2013.98"},{"key":"2478_CR753","doi-asserted-by":"crossref","unstructured":"Ortega, J. D., Kose, N., Ca\u00f1as, P., Chao, M. A., Unnervik, A., Nieto, M., Otaegui, O., & Salgado, L. (2020). Dmd: A large-scale multi-modal driver monitoring dataset for attention and alertness analysis. In: ECCV","DOI":"10.1007\/978-3-030-66823-5_23"},{"key":"2478_CR754","doi-asserted-by":"crossref","unstructured":"Oshima, Y., Taniguchi, S., Suzuki, M., & Matsuo, Y. (2024). Ssm meets video diffusion models: Efficient video generation with structured state spaces. In: ICLRw","DOI":"10.2139\/ssrn.4999610"},{"key":"2478_CR755","doi-asserted-by":"crossref","unstructured":"Otani, M., Nakashima, Y., Rahtu, E., Heikkil\u00e4, J., & Yokoya, N. (2016). Learning joint representations of videos and sentences with web image search. In: ECCVw","DOI":"10.1007\/978-3-319-46604-0_46"},{"key":"2478_CR756","doi-asserted-by":"crossref","unstructured":"Owens, A., Isola, P., McDermott, J., Torralba, A., Adelson, E. H., & Freeman, W. T. (2016). Visually indicated sounds. In: CVPR","DOI":"10.1109\/CVPR.2016.264"},{"key":"2478_CR757","doi-asserted-by":"crossref","unstructured":"Pan, B., Cao, Z., Adeli, E., & Niebles, J. C. (2020). Adversarial cross-domain action recognition with co-attention. In: AAAI","DOI":"10.1609\/aaai.v34i07.6854"},{"key":"2478_CR758","doi-asserted-by":"crossref","unstructured":"Pan, J., Chen, S., Shou, M. Z., Liu, Y., Shao, J., & Li, H. (2021a). Actor-context-actor relation network for spatio-temporal action localization. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00053"},{"key":"2478_CR759","doi-asserted-by":"crossref","unstructured":"Pan, T., Song, Y., Yang, T., Jiang, W., & Liu, W. (2021b). Videomoco: Contrastive video representation learning with temporally adversarial examples. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01105"},{"key":"2478_CR760","doi-asserted-by":"crossref","unstructured":"Pan, X., Charron, N., Yang, Y., Peters, S., Whelan, T., Kong, C., Parkhi, O., Newcombe, R., & Ren, Y. C. (2023). Aria digital twin: A new benchmark dataset for egocentric 3d machine perception. 
In: ICCV","DOI":"10.1109\/ICCV51070.2023.01842"},{"key":"2478_CR761","doi-asserted-by":"crossref","unstructured":"Pan, X., Qin, P., Li, Y., Xue, H., & Chen, W. (2024). Synthesizing coherent story with auto-regressive latent diffusion models. In: WACV","DOI":"10.1109\/WACV57701.2024.00290"},{"key":"2478_CR762","doi-asserted-by":"crossref","unstructured":"Pan, Y., Yao, T., Li, H., & Mei, T. (2017). Video captioning with transferred semantic attributes. In: CVPR","DOI":"10.1109\/CVPR.2017.111"},{"key":"2478_CR763","doi-asserted-by":"crossref","unstructured":"Panagiotakis, C., Karvounas, G., & Argyros, A. (2018). Unsupervised Detection of Periodic Segments in Videos. In: ICIP","DOI":"10.1109\/ICIP.2018.8451336"},{"key":"2478_CR764","unstructured":"Romera-Paredes, B., Argyriou, A., Berthouze, N., & Pontil, M. (2012). Exploiting unrelated tasks in multi-task learning. In: AISTATS"},{"key":"2478_CR765","first-page":"2259","volume":"54","author":"P Pareek","year":"2021","unstructured":"Pareek, P., & Thakkar, A. (2021). A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. AIR, 54, 2259\u20132322.","journal-title":"AIR"},{"key":"2478_CR766","doi-asserted-by":"crossref","unstructured":"Park, H., Noh, J., & Ham, B. (2020). Learning memory-guided normality for anomaly detection. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01438"},{"key":"2478_CR767","doi-asserted-by":"crossref","unstructured":"Park, J., Lee, J., & Sohn, K. (2021a). Bridge to answer: Structure-aware graph interaction network for video question answering. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01527"},{"key":"2478_CR768","doi-asserted-by":"crossref","unstructured":"Park, J., Lee, J., Kim, I. J., & Sohn, K. (2022a). Probabilistic representations for video contrastive learning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01430"},{"key":"2478_CR769","doi-asserted-by":"crossref","unstructured":"Park, J. S., Shen, S., Farhadi, A., Darrell, T., Choi, Y., & Rohrbach, A. (2022b). Exposing the limits of video-text models through contrast sets. In: NAACL","DOI":"10.18653\/v1\/2022.naacl-main.261"},{"key":"2478_CR770","unstructured":"Park, N., Kim, W., Heo, B., Kim, T., & Yun, S. (2023). What do self-supervised vision transformers learn? In: ICLR"},{"key":"2478_CR771","doi-asserted-by":"crossref","unstructured":"Park, S., Kim, K., Lee, J., Choo, J., Lee, J., Kim, S., & Choi, E. (2021b). Vid-ode: Continuous-time video generation with neural ordinary differential equation. In: AAAI","DOI":"10.1609\/aaai.v35i3.16342"},{"key":"2478_CR772","doi-asserted-by":"crossref","unstructured":"Park, W., Kim, D., Lu, Y., & Cho, M. (2019). Relational knowledge distillation. In: CVPR","DOI":"10.1109\/CVPR.2019.00409"},{"key":"2478_CR773","doi-asserted-by":"crossref","unstructured":"Parmar, P., & Morris, B. T. (2019). What and how well you performed? a multitask learning approach to action quality assessment. In: CVPR","DOI":"10.1109\/CVPR.2019.00039"},{"key":"2478_CR774","unstructured":"Parthasarathy, N., Eslami, S., Carreira, J., & Henaff, O. (2023). Self-supervised video pretraining yields robust and more human-aligned visual representations. In: NeurIPS"},{"key":"2478_CR775","unstructured":"Patrick, M., Huang, P. Y., Asano, Y., Metze, F., Hauptmann, A., Henriques, J., & Vedaldi, A. (2020). Support-set bottlenecks for video-text representation learning. In: ICLR"},{"key":"2478_CR776","doi-asserted-by":"crossref","unstructured":"Patron-Perez, A., Marszalek, M., Zisserman, A., & Reid, I. (2010). 
High five: Recognising human interactions in tv shows. In: BMVC","DOI":"10.5244\/C.24.50"},{"key":"2478_CR777","doi-asserted-by":"crossref","unstructured":"Patsch, C., Zhang, J., Wu, Y., Zakour, M., Salihu, D., & Steinbach, E. (2024). Long-term action anticipation based on contextual alignment. In: ICASSP","DOI":"10.1109\/ICASSP48485.2024.10445978"},{"key":"2478_CR778","doi-asserted-by":"crossref","unstructured":"Paul, S., Roy, S., & Roy-Chowdhury, A. K. (2018). W-talc: Weakly-supervised temporal activity localization and classification. In: ECCV","DOI":"10.1007\/978-3-030-01225-0_35"},{"key":"2478_CR779","doi-asserted-by":"crossref","unstructured":"Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A., Tzionas, D., & Black, M. J. (2019). Expressive body capture: 3d hands, face, and body from a single image. In: CVPR","DOI":"10.1109\/CVPR.2019.01123"},{"key":"2478_CR780","doi-asserted-by":"crossref","unstructured":"Pavlakos, G., Shan, D., Radosavovic, I., Kanazawa, A., Fouhey, D., & Malik, J. (2024). Reconstructing hands in 3d with transformers. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00938"},{"key":"2478_CR781","doi-asserted-by":"crossref","unstructured":"Peh, E., Parmar, P., & Fernando, B. (2024). Learning to visually connect actions and their effects. arXiv:2401.10805","DOI":"10.1109\/WACV61041.2025.00151"},{"key":"2478_CR782","doi-asserted-by":"crossref","unstructured":"Pei, G., Chen, T., Jiang, X., Liu, H., Sun, Z., & Yao, Y. (2024). Videomac: Video masked autoencoders meet convnets. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02145"},{"key":"2478_CR783","doi-asserted-by":"crossref","unstructured":"Pei, M., Jia, Y., & Zhu, S. C. (2011). Parsing video events with goal inference and intent prediction. In: ICCV","DOI":"10.1109\/ICCV.2011.6126279"},{"key":"2478_CR784","doi-asserted-by":"crossref","unstructured":"Peng, S., Zhang, Y., Xu, Y., Wang, Q., Shuai, Q., Bao, H., & Zhou, X. (2021). Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00894"},{"key":"2478_CR785","doi-asserted-by":"crossref","unstructured":"Peng, X., Schmid, C. (2016). Multi-region two-stream r-cnn for action detection. In: ECCV","DOI":"10.1007\/978-3-319-46493-0_45"},{"key":"2478_CR786","doi-asserted-by":"crossref","unstructured":"Perrett, T., & Damen, D. (2019). Ddlstm: dual-domain lstm for cross-dataset action recognition. In: CVPR","DOI":"10.1109\/CVPR.2019.00804"},{"key":"2478_CR787","doi-asserted-by":"crossref","unstructured":"Perrett, T., Han, T., Damen, D., & Zisserman, A. (2024). It\u2019s just another day: Unique video captioning by discriminative prompting. In: ACCV","DOI":"10.1007\/978-981-96-0908-6_16"},{"key":"2478_CR788","doi-asserted-by":"crossref","unstructured":"Perrett, T., Darkhalil, A., Sinha, S., Emara, O., Pollard, S., Parida, K., Liu, K., Gatti, P., Bansal, S., Flanagan, K., et\u00a0al. (2025). Hd-epic: A highly-detailed egocentric video dataset. arXiv:2502.04144","DOI":"10.1109\/CVPR52734.2025.02226"},{"key":"2478_CR789","doi-asserted-by":"crossref","unstructured":"Phan, T., Vo, K., Le, D., Doretto, G., Adjeroh, D., & Le, N. (2024). Zeetad: Adapting pretrained vision-language model for zero-shot end-to-end temporal action detection. In: WACV","DOI":"10.1109\/WACV57701.2024.00689"},{"key":"2478_CR790","doi-asserted-by":"crossref","unstructured":"Pian, W., Mo, S., Guo, Y., & Tian, Y. (2023). Audio-visual class-incremental learning. 
In: ICCV","DOI":"10.1109\/ICCV51070.2023.00717"},{"key":"2478_CR791","doi-asserted-by":"crossref","unstructured":"Pickup, L. C., Pan, Z., Wei, D., Shih, Y., Zhang, C., Zisserman, A., Scholkopf, B., & Freeman, W. T. (2014). Seeing the arrow of time. In: CVPR","DOI":"10.1109\/CVPR.2014.262"},{"key":"2478_CR792","unstructured":"Piergiovanni, A., & Ryoo, M. (2020). Avid dataset: Anonymized videos from diverse countries. In: NeurIPS"},{"key":"2478_CR793","doi-asserted-by":"crossref","unstructured":"Piergiovanni, A., Angelova, A., Toshev, A., & Ryoo, M. S. (2020). Adversarial generative grammars for human activity prediction. In: ECCV","DOI":"10.1007\/978-3-030-58536-5_30"},{"key":"2478_CR794","doi-asserted-by":"crossref","unstructured":"Piergiovanni, A., Kuo, W., & Angelova, A. (2023). Rethinking video vits: Sparse video tubes for joint image and video learning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00220"},{"key":"2478_CR795","doi-asserted-by":"crossref","unstructured":"Piergiovanni, A., Noble, I., Kim, D., Ryoo, M. S., Gomes, V., & Angelova, A. (2024). Mirasol3b: A multimodal autoregressive model for time-aligned and contextual modalities. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02531"},{"key":"2478_CR796","doi-asserted-by":"crossref","unstructured":"Pirsiavash, H., & Ramanan, D. (2012). Detecting activities of daily living in first-person camera views. In: CVPR","DOI":"10.1109\/CVPR.2012.6248010"},{"key":"2478_CR797","doi-asserted-by":"crossref","unstructured":"Pishchulin, L., Andriluka, M., Gehler, P., & Schiele, B. (2013). Strong appearance and expressive spatial models for human pose estimation. In: ICCV","DOI":"10.1109\/ICCV.2013.433"},{"key":"2478_CR798","doi-asserted-by":"crossref","first-page":"4880","DOI":"10.1007\/s11263-024-02095-7","volume":"132","author":"C Plizzari","year":"2024","unstructured":"Plizzari, C., Goletto, G., Furnari, A., Bansal, S., Ragusa, F., Farinella, G. M., Damen, D., & Tommasi, T. (2024). An outlook into the future of egocentric vision. IJCV, 132, 4880\u20134936.","journal-title":"IJCV"},{"key":"2478_CR799","doi-asserted-by":"crossref","unstructured":"Pogalin, E., Smeulders, A. W., & Thean, A. H. (2008). Visual Quasi-Periodicity. In: CVPR","DOI":"10.1109\/CVPR.2008.4587509"},{"key":"2478_CR800","unstructured":"Poli, M., Massaroli, S., Nguyen, E., Fu, D. Y., Dao, T., Baccus, S., Bengio, Y., Ermon, S., & R\u00e9, C. (2023). Hyena hierarchy: Towards larger convolutional language models. In: ICML"},{"key":"2478_CR801","doi-asserted-by":"crossref","unstructured":"Ponimatkin, G., Samet, N., Xiao, Y., Du, Y., Marlet, R., & Lepetit, V. (2023). A simple and powerful global optimization for unsupervised video object segmentation. In: WACV","DOI":"10.1109\/WACV56688.2023.00584"},{"issue":"4","key":"2478_CR802","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2766993","volume":"34","author":"G Pons-Moll","year":"2015","unstructured":"Pons-Moll, G., Romero, J., Mahmood, N., & Black, M. J. (2015). Dyna: A model of dynamic human shape in motion. ACM TOG, 34(4), 1\u201314.","journal-title":"ACM TOG"},{"issue":"6","key":"2478_CR803","doi-asserted-by":"crossref","first-page":"976","DOI":"10.1016\/j.imavis.2009.11.014","volume":"28","author":"R Poppe","year":"2010","unstructured":"Poppe, R. (2010). A survey on vision-based human action recognition. IVC, 28(6), 976\u2013990.","journal-title":"IVC"},{"key":"2478_CR804","doi-asserted-by":"crossref","unstructured":"Price, W., Vondrick, C., & Damen, D. (2022). Unweavenet: Unweaving activity stories. 
In: CVPR","DOI":"10.1109\/CVPR52688.2022.01340"},{"key":"2478_CR805","doi-asserted-by":"crossref","first-page":"4923","DOI":"10.1109\/TIP.2024.3451935","volume":"33","author":"Y Pu","year":"2024","unstructured":"Pu, Y., Wu, X., Yang, L., & Wang, S. (2024). Learning prompt-enhanced context features for weakly-supervised video anomaly detection. IEEE T-IP, 33, 4923\u20134936.","journal-title":"IEEE T-IP"},{"key":"2478_CR806","doi-asserted-by":"crossref","unstructured":"Purwanto, D., Chen, Y. T., & Fang, W. H. (2021). Dance with self-attention: A new look of conditional random fields on anomaly detection in videos. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00024"},{"key":"2478_CR807","unstructured":"Qian, L., Li, J., Wu, Y., Ye, Y., Fei, H., Chua, T. S., Zhuang, Y., & Tang, S. (2024). Momentor: Advancing video large language model with fine-grained temporal reasoning. In: ICML"},{"key":"2478_CR808","doi-asserted-by":"crossref","unstructured":"Qian, R., Meng, T., Gong, B., Yang, M. H., Wang, H., Belongie, S., & Cui, Y. (2021). Spatiotemporal contrastive video representation learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00689"},{"key":"2478_CR809","doi-asserted-by":"crossref","unstructured":"Qing, Z., Su, H., Gan, W., Wang, D., Wu, W., Wang, X., Qiao, Y., Yan, J., Gao, C., & Sang, N. (2021). Temporal context aggregation network for temporal action proposal refinement. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00055"},{"key":"2478_CR810","doi-asserted-by":"crossref","unstructured":"Qiu, Z., Yao, T., & Mei, T. (2017). Learning spatio-temporal representation with pseudo-3d residual networks. In: ICCV","DOI":"10.1109\/ICCV.2017.590"},{"key":"2478_CR811","doi-asserted-by":"crossref","unstructured":"Qiu, Z., Yao, T., Ngo, C. W., Tian, X., & Mei, T. (2019). Learning spatio-temporal representation with local and global diffusion. In: CVPR","DOI":"10.1109\/CVPR.2019.01233"},{"key":"2478_CR812","doi-asserted-by":"crossref","unstructured":"Qu, X., Tang, P., Zou, Z., Cheng, Y., Dong, J., Zhou, P., & Xu, Z. (2020). Fine-grained iterative attention network for temporal language localization in videos. In: MM","DOI":"10.1145\/3394171.3414053"},{"key":"2478_CR813","doi-asserted-by":"crossref","unstructured":"Radevski, G., Grujicic, D., Blaschko, M., Moens, M. F., & Tuytelaars, T. (2023). Multimodal distillation for egocentric action recognition. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00481"},{"key":"2478_CR814","unstructured":"Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et\u00a0al. (2021). Learning transferable visual models from natural language supervision. In: ICML"},{"key":"2478_CR815","doi-asserted-by":"crossref","unstructured":"Ragusa, F., Furnari, A., Livatino, S., & Farinella, G. M. (2021). The meccano dataset: Understanding human-object interactions from egocentric videos in an industrial-like domain. In: WACV","DOI":"10.1109\/WACV48630.2021.00161"},{"key":"2478_CR816","doi-asserted-by":"crossref","unstructured":"Rahaman, R., Singhania, D., Thiery, A., & Yao, A. (2022). A generalized and robust framework for timestamp supervision in temporal action segmentation. In: ECCV","DOI":"10.1007\/978-3-031-19772-7_17"},{"key":"2478_CR817","doi-asserted-by":"crossref","unstructured":"Rahmani, H., Mahmood, A., Huynh, D. Q., & Mian, A. (2014). Real time action recognition using histograms of depth gradients and random decision forests. 
In: WACV","DOI":"10.1109\/WACV.2014.6836044"},{"key":"2478_CR818","doi-asserted-by":"crossref","unstructured":"Rahmanzadehgervi, P., Bolton, L., Taesiri, M. R., & Nguyen, A. T. (2024). Vision language models are blind. In: ACCV","DOI":"10.1007\/978-981-96-0917-8_17"},{"key":"2478_CR819","doi-asserted-by":"crossref","unstructured":"Rai, N., Chen, H., Ji, J., Desai, R., Kozuka, K., Ishizaka, S., Adeli, E., & Niebles, J. C. (2021). Home action genome: Cooperative compositional action understanding. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01103"},{"key":"2478_CR820","first-page":"2293","volume":"44","author":"B Ramachandra","year":"2020","unstructured":"Ramachandra, B., Jones, M. J., & Vatsavai, R. R. (2020). A survey of single-scene video anomaly detection. IEEE TPAMI, 44, 2293\u20132312.","journal-title":"IEEE TPAMI"},{"key":"2478_CR821","unstructured":"Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. arXiv:2204.06125"},{"key":"2478_CR822","unstructured":"Ranasinghe, K., & Ryoo, M. S. (2023). Language-based action concept spaces improve video self-supervised learning. In: NeurIPS"},{"key":"2478_CR823","doi-asserted-by":"crossref","unstructured":"Ranasinghe, K., Naseer, M., Khan, S., Khan, F. S., & Ryoo, M. S. (2022). Self-supervised video transformer. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00289"},{"key":"2478_CR824","unstructured":"Munroe, R. (2009). Movie narrative charts. https:\/\/xkcd.com\/657\/"},{"key":"2478_CR825","doi-asserted-by":"crossref","unstructured":"Rangrej, S. B., Liang, K. J., Hassner, T., & Clark, J. J. (2023). Glitr: Glimpse transformers with spatiotemporal consistency for online action prediction. In: WACV","DOI":"10.1109\/WACV56688.2023.00341"},{"key":"2478_CR826","unstructured":"Rasouli, A. (2020). Deep learning for vision-based prediction: A survey. arXiv:2007.00095"},{"key":"2478_CR827","unstructured":"Rawal, R., Saifullah, K., Farr\u00e9, M., Basri, R., Jacobs, D., Somepalli, G., & Goldstein, T. (2024). Cinepile: A long video question answering dataset and benchmark. arXiv:2405.08813"},{"key":"2478_CR828","doi-asserted-by":"crossref","unstructured":"Recasens, A., Vondrick, C., Khosla, A., & Torralba, A. (2017). Following gaze in video. In: ICCV","DOI":"10.1109\/ICCV.2017.160"},{"key":"2478_CR829","doi-asserted-by":"crossref","unstructured":"Recasens, A., Luc, P., Alayrac, J. B., Wang, L., Strub, F., Tallec, C., Malinowski, M., P\u0103tr\u0103ucean, V., Altch\u00e9, F., Valko, M., et\u00a0al. (2021). Broaden your views for self-supervised video learning. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00129"},{"key":"2478_CR830","unstructured":"Recasens, A., Lin, J., Carreira, J., Jaegle, D., Wang, L., Alayrac, J. B., Luc, P., Miech, A., Smaira, L., Hemsley, R., et\u00a0al. (2023). Zorro: the masked multimodal transformer. arXiv:2301.09595"},{"key":"2478_CR831","doi-asserted-by":"crossref","unstructured":"Reddy, K. K., & Shah, M. (2013). Recognizing 50 human action categories of web videos. MVA","DOI":"10.1007\/s00138-012-0450-4"},{"key":"2478_CR832","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In: CVPR","DOI":"10.1109\/CVPR.2016.91"},{"key":"2478_CR833","first-page":"25","volume":"1","author":"M Regneri","year":"2013","unstructured":"Regneri, M., Rohrbach, M., Wetzel, D., Thater, S., Schiele, B., & Pinkal, M. (2013). Grounding action descriptions in videos. 
TACL, 1, 25\u201336.","journal-title":"TACL"},{"key":"2478_CR834","doi-asserted-by":"crossref","unstructured":"Rehg, J. M., Abowd, G. D., Rozga, A., Romero, M., Clements, M. A., Sclaroff, S., Essa, I., Ousley, O. Y., Li, Y., Kim, C., Rao, H., Kim, J. C., Presti, L. L., Zhang, J., Lantsman, D., Bidwell, J., & Ye, Z. (2013). Decoding children\u2019s social behavior. In: CVPR","DOI":"10.1109\/CVPR.2013.438"},{"key":"2478_CR835","doi-asserted-by":"crossref","unstructured":"Ren, S., Yao, L., Li, S., Sun, X., & Hou, L. (2024a). Timechat: A time-sensitive multimodal large language model for long video understanding. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01357"},{"key":"2478_CR836","unstructured":"Ren, W., Yang, H., Zhang, G., Wei, C., Du, X., Huang, W., & Chen, W. (2024b). Consisti2v: Enhancing visual consistency for image-to-video generation. TMLR"},{"key":"2478_CR837","doi-asserted-by":"crossref","unstructured":"Rizve, M. N., Mittal, G., Yu, Y., Hall, M., Sajeev, S., Shah, M., & Chen, M. (2023). Pivotal: Prior-driven supervision for weakly-supervised temporal action localization. In: CVPR","DOI":"10.1109\/CVPR52729.2023.02202"},{"issue":"9","key":"2478_CR838","doi-asserted-by":"crossref","first-page":"661","DOI":"10.1038\/35090060","volume":"2","author":"G Rizzolatti","year":"2001","unstructured":"Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2(9), 661\u2013670.","journal-title":"Nature Reviews Neuroscience"},{"key":"2478_CR839","unstructured":"Robinson, J., Sun, L., Yu, K., Batmanghelich, K., Jegelka, S., & Sra, S. (2021). Can contrastive learning avoid shortcut solutions? In: NeurIPS"},{"key":"2478_CR840","volume":"211","author":"I Rodin","year":"2021","unstructured":"Rodin, I., Furnari, A., Mavroeidis, D., & Farinella, G. M. (2021). Predicting the future from first person (egocentric) vision: A survey. CVIU, 211, Article 103252.","journal-title":"CVIU"},{"key":"2478_CR841","doi-asserted-by":"crossref","unstructured":"Rodriguez, M. D., Ahmed, J., & Shah, M. (2008). Action mach a spatio-temporal maximum average correlation height filter for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2008.4587727"},{"key":"2478_CR842","doi-asserted-by":"crossref","first-page":"94","DOI":"10.1006\/ciun.1994.1006","volume":"59","author":"K Rohr","year":"1994","unstructured":"Rohr, K. (1994). Towards model-based recognition of human movements in image sequences. CVGIP, 59, 94\u2013115.","journal-title":"CVGIP"},{"key":"2478_CR843","doi-asserted-by":"crossref","unstructured":"Rohrbach, A., Rohrbach, M., Tandon, N., & Schiele, B. (2015). A dataset for movie description. In: CVPR","DOI":"10.1109\/CVPR.2015.7298940"},{"key":"2478_CR844","doi-asserted-by":"crossref","unstructured":"Rohrbach, A., Rohrbach, M., Hu, R., Darrell, T., & Schiele, B. (2016). Grounding of textual phrases in images by reconstruction. In: ECCV","DOI":"10.1007\/978-3-319-46448-0_49"},{"key":"2478_CR845","doi-asserted-by":"crossref","unstructured":"Rohrbach, M., Amin, S., Andriluka, M., & Schiele, B. (2012). A database for fine grained activity detection of cooking activities. In: CVPR","DOI":"10.1109\/CVPR.2012.6247801"},{"key":"2478_CR846","doi-asserted-by":"crossref","unstructured":"Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. 
In: CVPR","DOI":"10.1109\/CVPR52688.2022.01042"},{"issue":"6","key":"2478_CR847","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3130800.3130883","volume":"36","author":"J Romero","year":"2017","unstructured":"Romero, J., Tzionas, D., & Black, M. J. (2017). Embodied hands: modeling and capturing hands and bodies together. ACM TOG, 36(6), 1\u201317.","journal-title":"ACM TOG"},{"key":"2478_CR848","doi-asserted-by":"crossref","first-page":"8116","DOI":"10.1109\/TIP.2021.3113114","volume":"30","author":"D Roy","year":"2021","unstructured":"Roy, D., & Fernando, B. (2021). Action anticipation using pairwise human-object interactions and transformers. IEEE T-IP, 30, 8116\u20138129.","journal-title":"IEEE T-IP"},{"key":"2478_CR849","doi-asserted-by":"crossref","unstructured":"Roy, D., Fernando, B. (2022). Action anticipation using latent goal learning. In: WACV","DOI":"10.1109\/WACV51458.2022.00088"},{"key":"2478_CR850","doi-asserted-by":"crossref","unstructured":"Roy, D., Rajendiran, R., & Fernando, B. (2024). Interaction region visual transformer for egocentric action anticipation. In: WACV","DOI":"10.1109\/WACV57701.2024.00660"},{"key":"2478_CR851","doi-asserted-by":"crossref","unstructured":"Runia, T. F. H., Snoek, C. G. M., & Smeulders, A. W. (2018). Real-World Repetition Estimation by Div, Grad and Curl. In: CVPR","DOI":"10.1109\/CVPR.2018.00939"},{"key":"2478_CR852","unstructured":"Ryali, C., Hu, Y. T., Bolya, D., Wei, C., Fan, H., Huang, P. Y., Aggarwal, V., Chowdhury, A., Poursaeed, O., Hoffman, J., et\u00a0al. (2023). Hiera: A hierarchical vision transformer without the bells-and-whistles. In: ICML"},{"key":"2478_CR853","unstructured":"Ryoo, M., Piergiovanni, A., Arnab, A., Dehghani, M., & Angelova, A. (2021). Tokenlearner: Adaptive space-time tokenization for videos. In: NeurIPS"},{"key":"2478_CR854","doi-asserted-by":"crossref","unstructured":"Ryoo, M. S. (2011). Human activity prediction: Early recognition of ongoing activities from streaming videos. In: ICCV","DOI":"10.1109\/ICCV.2011.6126349"},{"key":"2478_CR855","doi-asserted-by":"crossref","unstructured":"Ryoo, M. S., & Aggarwal, J. K. (2009). Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities. In: ICCV","DOI":"10.1109\/ICCV.2009.5459361"},{"key":"2478_CR856","doi-asserted-by":"crossref","unstructured":"Sadanand, S., & Corso, J. J. (2012). Action bank: A high-level representation of activity in video. In: CVPR","DOI":"10.1109\/CVPR.2012.6247806"},{"key":"2478_CR857","doi-asserted-by":"crossref","unstructured":"Saini, N., Pham, K., & Shrivastava, A. (2022). Disentangling visual embeddings for attributes and objects. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01329"},{"key":"2478_CR858","doi-asserted-by":"crossref","unstructured":"Saini, N., Wang, H., Swaminathan, A., Jayasundara, V., He, B., Gupta, K., Shrivastava, A. (2023). Chop & learn: Recognizing and generating object-state compositions. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01852"},{"key":"2478_CR859","doi-asserted-by":"crossref","unstructured":"Saito, M., Matsumoto, E., & Saito, S. (2017). Temporal generative adversarial nets with singular value clipping. In: ICCV","DOI":"10.1109\/ICCV.2017.308"},{"issue":"10","key":"2478_CR860","doi-asserted-by":"crossref","first-page":"2586","DOI":"10.1007\/s11263-020-01333-y","volume":"128","author":"M Saito","year":"2020","unstructured":"Saito, M., Saito, S., Koyama, M., & Kobayashi, S. (2020). 
Train sparsely, generate densely: Memory-efficient unsupervised training of high-resolution temporal gan. IJCV, 128(10), 2586\u20132606.","journal-title":"IJCV"},{"issue":"1","key":"2478_CR861","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1109\/TASSP.1978.1163055","volume":"26","author":"H Sakoe","year":"1978","unstructured":"Sakoe, H., & Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE TASSP, 26(1), 43\u201349.","journal-title":"IEEE TASSP"},{"key":"2478_CR862","doi-asserted-by":"crossref","unstructured":"Salehi, M., Gavves, E., Snoek, C. G. M., & Asano, Y. M. (2023). Time does tell: Self-supervised time-tuning of dense image representations. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01516"},{"key":"2478_CR863","unstructured":"Salimans, T., Karpathy, A., Chen, X., & Kingma, D. P. (2017). Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. In: ICLR"},{"key":"2478_CR864","doi-asserted-by":"crossref","unstructured":"Sameni, S., Jenni, S., & Favaro, P. (2023). Spatio-temporal crop aggregation for video representation learning. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00521"},{"key":"2478_CR865","unstructured":"Sarkar, P., Beirami, A., & Etemad, A. (2023). Uncovering the hidden dynamics of video self-supervised learning under distribution shifts. In: NeurIPS"},{"key":"2478_CR866","unstructured":"Saxena, V., Ba, J., & Hafner, D. (2021). Clockwork variational autoencoders. In: NeurIPS"},{"key":"2478_CR867","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3577925","volume":"55","author":"MC Schiappa","year":"2023","unstructured":"Schiappa, M. C., Rawat, Y. S., & Shah, M. (2023). Self-supervised learning for videos: A survey. CSUR, 55, 1\u201337.","journal-title":"CSUR"},{"key":"2478_CR868","unstructured":"Schlimmer, J. C., & Fisher, D. (1986). A case study of incremental concept induction. In: AAAI"},{"key":"2478_CR869","doi-asserted-by":"crossref","unstructured":"Schonberger, J. L., & Frahm, J. M. (2016). Structure-from-motion revisited. In: CVPR","DOI":"10.1109\/CVPR.2016.445"},{"key":"2478_CR870","doi-asserted-by":"crossref","unstructured":"Schuldt, C., Laptev, I., & Caputo, B. (2004). Recognizing human actions: a local svm approach. In: ICPR","DOI":"10.1109\/ICPR.2004.1334462"},{"key":"2478_CR871","first-page":"12922","volume":"45","author":"J Selva","year":"2023","unstructured":"Selva, J., Johansen, A. S., Escalera, S., Nasrollahi, K., Moeslund, T. B., & Clap\u00e9s, A. (2023). Video transformers: A survey. IEEE TPAMI, 45, 12922\u201312943.","journal-title":"IEEE TPAMI"},{"key":"2478_CR872","doi-asserted-by":"crossref","unstructured":"Sener, F., Chatterjee, D., Shelepov, D., He, K., Singhania, D., Wang, R., & Yao, A. (2022). Assembly101: A large-scale multi-view video dataset for understanding procedural activities. In: CVPR","DOI":"10.1109\/CVPR52688.2022.02042"},{"key":"2478_CR873","doi-asserted-by":"crossref","unstructured":"Sengupta, A., Budvytis, I., & Cipolla, R. (2023). Humaniflow: Ancestor-conditioned normalising flows on so(3) manifolds for human pose and shape distribution estimation. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00463"},{"key":"2478_CR874","doi-asserted-by":"crossref","unstructured":"Seo, P. H., Nagrani, A., Arnab, A., & Schmid, C. (2022). End-to-end generative pretraining for multimodal video captioning. 
In: CVPR","DOI":"10.1109\/CVPR52688.2022.01743"},{"key":"2478_CR875","doi-asserted-by":"crossref","unstructured":"Sermanet, P., Xu, K., & Levine, S. (2017). Unsupervised perceptual rewards for imitation learning. In: ICLRw","DOI":"10.15607\/RSS.2017.XIII.050"},{"key":"2478_CR876","doi-asserted-by":"crossref","unstructured":"Sermanet, P., Lynch, C., Chebotar, Y., Hsu, J., Jang, E., Schaal, S., Levine, S., & Brain, G. (2018). Time-contrastive networks: Self-supervised learning from video. In: ICRA","DOI":"10.1109\/ICRA.2018.8462891"},{"key":"2478_CR877","doi-asserted-by":"crossref","unstructured":"Sevilla-Lara, L., Liao, Y., G\u00fcney, F., Jampani, V., Geiger, A., & Black, M. J. (2019). On the integration of optical flow and action recognition. In: GCPR","DOI":"10.1007\/978-3-030-12939-2_20"},{"key":"2478_CR878","doi-asserted-by":"crossref","unstructured":"Shahroudy, A., Liu, J., Ng, T. T., & Wang, G. (2016). Ntu rgb+d: A large scale dataset for 3d human activity analysis. In: CVPR","DOI":"10.1109\/CVPR.2016.115"},{"key":"2478_CR879","doi-asserted-by":"crossref","unstructured":"Shang, X., Ren, T., Guo, J., Zhang, H., & Chua, T. S. (2017). Video visual relation detection. In: MM","DOI":"10.1145\/3123266.3123380"},{"key":"2478_CR880","doi-asserted-by":"crossref","unstructured":"Shao, D., Zhao, Y., Dai, B., & Lin, D. (2020). Finegym: A hierarchical video dataset for fine-grained action understanding. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00269"},{"key":"2478_CR881","doi-asserted-by":"crossref","unstructured":"Shao, J., Wang, X., Quan, R., Zheng, J., Yang, J., & Yang, Y. (2023). Action sensitivity learning for temporal action localization. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01238"},{"key":"2478_CR882","unstructured":"Sharma, S., Kiros, R., & Salakhutdinov, R. (2015). Action recognition using visual attention. In: ICLR"},{"key":"2478_CR883","unstructured":"Shechtman, E., & Irani, M. (2005). Space-time behavior based correlation. In: CVPR"},{"key":"2478_CR884","doi-asserted-by":"crossref","unstructured":"Sheikh, Y., Sheikh, M., & Shah, M. (2005). Exploring the space of a human action. In: ICCV","DOI":"10.1109\/ICCV.2005.90"},{"key":"2478_CR885","unstructured":"Shen, J., Tenenholtz, N., Hall, J. B., Alvarez-Melis, D., & Fusi, N. (2024). Tag-llm: Repurposing general-purpose llms for specialized domains. arXiv:2402.05140"},{"key":"2478_CR886","doi-asserted-by":"crossref","unstructured":"Shen, X., Li, X., & Elhoseiny, M. (2023a). Mostgan-v: Video generation with temporal motion styles. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00547"},{"key":"2478_CR887","doi-asserted-by":"crossref","unstructured":"Shen, Y., & Elhamifar, E. (2024). Progress-aware online action segmentation for egocentric procedural task videos. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01722"},{"key":"2478_CR888","doi-asserted-by":"crossref","unstructured":"Shen, Y., Ni, B., Li, Z., & Zhuang, N. (2018). Egocentric activity prediction via event modulated attention. In: ECCV","DOI":"10.1007\/978-3-030-01216-8_13"},{"key":"2478_CR889","doi-asserted-by":"crossref","unstructured":"Shen, Y., Gu, X., Xu, K., Fan, H., Wen, L., & Zhang, L. (2023b). Accurate and fast compressed video captioning. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01426"},{"key":"2478_CR890","doi-asserted-by":"crossref","unstructured":"Shen, Z., Li, J., Su, Z., Li, M., Chen, Y., Jiang, Y. G., & Xue, X. (2017). Weakly supervised dense video captioning. 
In: CVPR","DOI":"10.1109\/CVPR.2017.548"},{"key":"2478_CR891","doi-asserted-by":"crossref","unstructured":"Shi, B., Ji, L., Liang, Y., Duan, N., Chen, P., Niu, Z., & Zhou, M. (2019). Dense procedure captioning in narrated instructional videos. In: ACL","DOI":"10.18653\/v1\/P19-1641"},{"key":"2478_CR892","doi-asserted-by":"crossref","unstructured":"Shi, D., Zhong, Y., Cao, Q., Ma, L., Li, J., & Tao, D. (2023). Tridet: Temporal action detection with relative boundary modeling. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01808"},{"key":"2478_CR893","doi-asserted-by":"crossref","unstructured":"Shin, W., Lee, J., Lee, T., Lee, S., & Yun, J. P. (2023). Anomaly detection using score-based perturbation resilience. In: ICCV","DOI":"10.1109\/ICCV51070.2023.02136"},{"key":"2478_CR894","doi-asserted-by":"crossref","unstructured":"Shou, M. Z., Lei, S. W., Wang, W., Ghadiyaram, D., & Feiszli, M. (2021). Generic event boundary detection: A benchmark for event segmentation. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00797"},{"key":"2478_CR895","doi-asserted-by":"crossref","unstructured":"Shou, Z., Wang, D., & Chang, S. F. (2016). Temporal action localization in untrimmed videos via multi-stage cnns. In: CVPR","DOI":"10.1109\/CVPR.2016.119"},{"key":"2478_CR896","doi-asserted-by":"crossref","unstructured":"Shou, Z., Chan, J., Zareian, A., Miyazawa, K., & Chang, S. F. (2017). Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In: CVPR","DOI":"10.1109\/CVPR.2017.155"},{"key":"2478_CR897","doi-asserted-by":"crossref","unstructured":"Shou, Z., Gao, H., Zhang, L., Miyazawa, K., & Chang, S. F. (2018). Autoloc: Weakly-supervised temporal action localization in untrimmed videos. In: ECCV","DOI":"10.1007\/978-3-030-01270-0_10"},{"key":"2478_CR898","doi-asserted-by":"crossref","unstructured":"Shrivastava, G., & Shrivastava, A. (2024). Video prediction by modeling videos as continuous multi-dimensional processes. In: CVPR","DOI":"10.32388\/DM98UZ"},{"key":"2478_CR899","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1007\/s11263-009-0273-6","volume":"87","author":"L Sigal","year":"2010","unstructured":"Sigal, L., Balan, A. O., & Black, M. J. (2010). Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. IJCV, 87, 4\u201327.","journal-title":"IJCV"},{"key":"2478_CR900","doi-asserted-by":"crossref","unstructured":"Sigurdsson, G. A., Varol, G., Wang, X., Farhadi, A., Laptev, I., & Gupta, A. (2016). Hollywood in homes: Crowdsourcing data collection for activity understanding. In: ECCV","DOI":"10.1007\/978-3-319-46448-0_31"},{"key":"2478_CR901","doi-asserted-by":"crossref","unstructured":"Sigurdsson, G. A., Russakovsky, O., & Gupta, A. (2017). What actions are needed for understanding human actions in videos? In: ICCV","DOI":"10.1109\/ICCV.2017.235"},{"key":"2478_CR902","unstructured":"Sigurdsson, G. A., Gupta, A., Schmid, C., Farhadi, A., & Alahari, K. (2018). Charades-ego: A large-scale dataset of paired third and first person videos. arXiv:1804.09626"},{"key":"2478_CR903","unstructured":"Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In: NeurIPS"},{"key":"2478_CR904","unstructured":"Singer, U., Polyak, A., Hayes, T., Yin, X., An, J., Zhang, S., Hu, Q., Yang, H., Ashual, O., Gafni, O., et\u00a0al. (2023). Make-a-video: Text-to-video generation without text-video data. 
In: ICLR"},{"key":"2478_CR905","doi-asserted-by":"crossref","unstructured":"Singh, B., Marks, T. K., Jones, M., Tuzel, O., & Shao, M. (2016). A multi-stream bi-directional recurrent neural network for fine-grained action detection. In: CVPR","DOI":"10.1109\/CVPR.2016.216"},{"key":"2478_CR906","doi-asserted-by":"crossref","unstructured":"Singh, G., Saha, S., Sapienza, M., Torr, P. H., & Cuzzolin, F. (2017). Online real-time multiple spatiotemporal action localisation and prediction. In: ICCV","DOI":"10.1109\/ICCV.2017.393"},{"key":"2478_CR907","doi-asserted-by":"crossref","unstructured":"Singh, N., Wu, C. W., Orife, I., & Kalayeh, M. (2024). Looking similar sounding different: Leveraging counterfactual cross-modal pairs for audiovisual representation learning. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02541"},{"key":"2478_CR908","doi-asserted-by":"crossref","unstructured":"Sinha, S., Stergiou, A., & Damen, D. (2024). Every shot counts: Using exemplars for repetition counting in videos. In: ACCV","DOI":"10.1007\/978-981-96-0908-6_22"},{"key":"2478_CR909","doi-asserted-by":"crossref","unstructured":"Skorokhodov, I., Tulyakov, S., & Elhoseiny, M. (2022). Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00361"},{"key":"2478_CR910","unstructured":"Smaira, L., Carreira, J., Noland, E., Clancy, E., Wu, A., & Zisserman, A. (2020). A short note on the kinetics-700-2020 human action dataset. arXiv:2010.10864"},{"key":"2478_CR911","unstructured":"Smith, J., De\u00a0Mello, S., Kautz, J., Linderman, S., & Byeon, W. (2024). Convolutional state space models for long-range spatiotemporal modeling. In: NeurIPS"},{"key":"2478_CR912","unstructured":"Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In: ICML"},{"key":"2478_CR913","doi-asserted-by":"crossref","unstructured":"Song, E., Chai, W., Wang, G., Zhang, Y., Zhou, H., Wu, F., Chi, H., Guo, X., Ye, T., Zhang, Y., et\u00a0al. (2024). Moviechat: From dense token to sparse memory for long video understanding. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01725"},{"key":"2478_CR914","doi-asserted-by":"crossref","unstructured":"Song, J., Yang, Y., Huang, Z., Shen, H. T., & Hong, R. (2011). Multiple feature hashing for real-time large scale near-duplicate video retrieval. In: MM","DOI":"10.1145\/2072298.2072354"},{"key":"2478_CR915","doi-asserted-by":"crossref","unstructured":"Song, L., Zhang, S., Yu, G., & Sun, H. (2019). Tacnet: Transition-aware context network for spatio-temporal action detection. In: CVPR","DOI":"10.1109\/CVPR.2019.01226"},{"key":"2478_CR916","volume":"76","author":"L Song","year":"2021","unstructured":"Song, L., Yu, G., Yuan, J., & Liu, Z. (2021). Human pose estimation and its application to action recognition: A survey. JVCIR, 76, Article 103055.","journal-title":"JVCIR"},{"key":"2478_CR917","unstructured":"Song, Y., & Ermon, S. (2019). Generative modeling by estimating gradients of the data distribution. In: NeurIPS, vol\u00a032"},{"key":"2478_CR918","unstructured":"Soomro, K., Zamir, A. R., & Shah, M. (2012). Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv:1212.0402"},{"key":"2478_CR919","doi-asserted-by":"crossref","unstructured":"Soomro, K., Idrees, H., & Shah, M. (2015). Action localization in videos through context walk. 
In: ICCV","DOI":"10.1109\/ICCV.2015.375"},{"key":"2478_CR920","doi-asserted-by":"crossref","unstructured":"Sou\u010dek, T., Alayrac, J. B., Miech, A., Laptev, I., & Sivic, J. (2022). Look for the change: Learning object states and state-modifying actions from untrimmed web videos. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01357"},{"key":"2478_CR921","doi-asserted-by":"crossref","unstructured":"Sou\u010dek, T., Alayrac, J. B., Miech, A., Laptev, I., & Sivic, J. (2024a). Multi-task learning of object states and state-modifying actions from web videos. IEEE TPAMI 46(7)","DOI":"10.1109\/TPAMI.2024.3362288"},{"key":"2478_CR922","doi-asserted-by":"crossref","unstructured":"Sou\u010dek, T., Damen, D., Wray, M., Laptev, I., Sivic, J., et\u00a0al. (2024b). Genhowto: Learning to generate actions and state transformations from instructional videos. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00627"},{"issue":"11","key":"2478_CR923","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1038\/s42256-023-00743-0","volume":"5","author":"A Spielberg","year":"2023","unstructured":"Spielberg, A., Zhong, F., Rematas, K., Jatavallabhula, K. M., Oztireli, C., Li, T. M., & Nowrouzezahrai, D. (2023). Differentiable visual computing for inverse problems and machine learning. Nature Machine Intelligence, 5(11), 1189\u20131199.","journal-title":"Nature Machine Intelligence"},{"issue":"1","key":"2478_CR924","first-page":"63","volume":"23","author":"RP Spunt","year":"2011","unstructured":"Spunt, R. P., Satpute, A. B., & Lieberman, M. D. (2011). Identifying the what, why, and how of an observed action: an fmri study of mentalizing and mechanizing during action observation. JCN, 23(1), 63\u201374.","journal-title":"JCN"},{"key":"2478_CR925","unstructured":"Srivastava, N., Mansimov, E., & Salakhutdinov, R. (2015). Unsupervised learning of video representations using lstms. In: ICML"},{"key":"2478_CR926","doi-asserted-by":"crossref","unstructured":"Srivastava, S., & Sharma, G. (2024a). Omnivec: Learning robust representations with cross modal sharing. In: WACV","DOI":"10.1109\/WACV57701.2024.00127"},{"key":"2478_CR927","doi-asserted-by":"crossref","unstructured":"Srivastava, S., & Sharma, G. (2024b). Omnivec2: A novel transformer based network for large scale multimodal and multitask learning. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02588"},{"key":"2478_CR928","doi-asserted-by":"crossref","unstructured":"Stathopoulos, A., Han, L., & Metaxas, D. (2024). Score-guided diffusion for 3d human recovery. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00092"},{"key":"2478_CR929","unstructured":"van Steenkiste, S., Zoran, D., Yang, Y., Rubanova, Y., Kabra, R., Doersch, C., Gokay, D., Heyward, J., Pot, E., Greff, K., et\u00a0al. (2024). Moving off-the-grid: Scene-grounded video representations. In: NeurIPS"},{"key":"2478_CR930","doi-asserted-by":"crossref","unstructured":"Stein, S., & McKenna, S. J. (2013). Combining embedded accelerometers with computer vision for recognizing food preparation activities. In: UbiComp","DOI":"10.1145\/2493432.2493482"},{"key":"2478_CR931","unstructured":"Stergiou, A. (2024). Lavib: A large-scale video interpolation benchmark. In: NeurIPS"},{"key":"2478_CR932","doi-asserted-by":"crossref","unstructured":"Stergiou, A., & Damen, D. (2023a). Play it back: Iterative attention for audio recognition. In: ICASSP","DOI":"10.1109\/ICASSP49357.2023.10096532"},{"key":"2478_CR933","doi-asserted-by":"crossref","unstructured":"Stergiou, A., & Damen, D. (2023b). 
The wisdom of crowds: Temporal progressive attention for early action prediction. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01413"},{"key":"2478_CR934","doi-asserted-by":"crossref","unstructured":"Stergiou, A., & Deligiannis, N. (2023). Leaping into memories: Space-time deep feature synthesis. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00188"},{"key":"2478_CR935","volume":"188","author":"A Stergiou","year":"2019","unstructured":"Stergiou, A., & Poppe, R. (2019). Analyzing human-human interactions: A survey. CVIU, 188, Article 102799.","journal-title":"CVIU"},{"key":"2478_CR936","first-page":"1","volume":"141","author":"A Stergiou","year":"2021","unstructured":"Stergiou, A., & Poppe, R. (2021a). Learn to cycle: Time-consistent feature discovery for action recognition. PRL, 141, 1\u20137.","journal-title":"PRL"},{"key":"2478_CR937","doi-asserted-by":"crossref","unstructured":"Stergiou, A., & Poppe, R. (2021b). Multi-temporal convolutions for human action recognition in videos. In: IJCNN","DOI":"10.1109\/IJCNN52387.2021.9533515"},{"key":"2478_CR938","doi-asserted-by":"crossref","unstructured":"Stergiou, A., De\u00a0Weerdt, B., & Deligiannis, N. (2024). Holistic representation learning for multitask trajectory anomaly detection. In: WACV","DOI":"10.1109\/WACV57701.2024.00659"},{"key":"2478_CR939","unstructured":"Straub, J., DeTone, D., Shen, T., Yang, N., Sweeney, C., & Newcombe, R. (2024). Efm3d: A benchmark for measuring progress towards 3d egocentric foundation models. arXiv:2406.10224"},{"key":"2478_CR940","doi-asserted-by":"crossref","unstructured":"Sudhakaran, S., Escalera, S., & Lanz, O. (2020). Gate-shift networks for video action recognition. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00118"},{"key":"2478_CR941","doi-asserted-by":"crossref","unstructured":"Sultani, W., Chen, C., & Shah, M. (2018). Real-world anomaly detection in surveillance videos. In: CVPR","DOI":"10.1109\/CVPR.2018.00678"},{"key":"2478_CR942","doi-asserted-by":"crossref","unstructured":"Sun, C., Shrivastava, A., Vondrick, C., Murphy, K., Sukthankar, R., & Schmid, C. (2018). Actor-centric relation network. In: ECCV","DOI":"10.1007\/978-3-030-01252-6_20"},{"key":"2478_CR943","unstructured":"Sun, C., Baradel, F., Murphy, K., & Schmid, C. (2019a). Learning video representations using contrastive bidirectional transformer. arXiv:1906.05743"},{"key":"2478_CR944","doi-asserted-by":"crossref","unstructured":"Sun, C., Myers, A., Vondrick, C., Murphy, K., & Schmid, C. (2019b). Videobert: A joint model for video and language representation learning. In: ICCV","DOI":"10.1109\/ICCV.2019.00756"},{"key":"2478_CR945","doi-asserted-by":"crossref","unstructured":"Sun, C., Shrivastava, A., Vondrick, C., Sukthankar, R., Murphy, K., & Schmid, C. (2019c). Relational action forecasting. In: CVPR","DOI":"10.1109\/CVPR.2019.00036"},{"key":"2478_CR946","doi-asserted-by":"crossref","unstructured":"Sun, C., Nagrani, A., Tian, Y., & Schmid, C. (2021a). Composable augmentation encoding for video representation learning. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00871"},{"key":"2478_CR947","doi-asserted-by":"crossref","unstructured":"Sun, L., Jia, K., Yeung, D. Y., & Shi, B. E. (2015). Human action recognition using factorized spatio-temporal convolutional networks. In: ICCV","DOI":"10.1109\/ICCV.2015.522"},{"key":"2478_CR948","doi-asserted-by":"crossref","unstructured":"Sun, P., Cao, J., Jiang, Y., Yuan, Z., Bai, S., Kitani, K., & Luo, P. (2022a). 
Dancetrack: Multi-object tracking in uniform appearance and diverse motion. In: CVPR","DOI":"10.1109\/CVPR52688.2022.02032"},{"key":"2478_CR949","doi-asserted-by":"crossref","unstructured":"Sun, S., Liu, D., Dong, J., Qu, X., Gao, J., Yang, X., Wang, X., & Wang, M. (2023). Unified multi-modal unsupervised representation learning for skeleton-based action understanding. In: ACM MM","DOI":"10.1145\/3581783.3612449"},{"key":"2478_CR950","unstructured":"Sun, X., Chen, M., & Hauptmann, A. (2009). Action recognition via local descriptors and holistic features. In: CVPRw"},{"key":"2478_CR951","doi-asserted-by":"crossref","unstructured":"Sun, X., Panda, R., Chen, C. F. R., Oliva, A., Feris, R., & Saenko, K. (2021b). Dynamic network quantization for efficient video inference. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00728"},{"key":"2478_CR952","first-page":"3200","volume":"45","author":"Z Sun","year":"2022","unstructured":"Sun, Z., Ke, Q., Rahmani, H., Bennamoun, M., Wang, G., & Liu, J. (2022). Human action recognition from various data modalities: A review. IEEE TPAMI, 45, 3200\u20133225.","journal-title":"IEEE TPAMI"},{"key":"2478_CR953","unstructured":"Sung, J., Ponce, C., Selman, B., & Saxena, A. (2012). Unstructured human activity detection from rgbd images. In: ICRA"},{"key":"2478_CR954","doi-asserted-by":"crossref","unstructured":"Sung, Y. L., Cho, J., & Bansal, M. (2022). Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00516"},{"key":"2478_CR955","doi-asserted-by":"crossref","unstructured":"Sur\u00eds, D., Liu, R., & Vondrick, C. (2021). Learning the predictability of the future. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01242"},{"key":"2478_CR956","doi-asserted-by":"crossref","unstructured":"Tafasca, S., Gupta, A., & Odobez, J. M. (2024). Sharingan: A transformer architecture for multi-person gaze following. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00196"},{"key":"2478_CR957","doi-asserted-by":"crossref","unstructured":"Taheri, O., Ghorbani, N., Black, M. J., & Tzionas, D. (2020). Grab: A dataset of whole-body human grasping of objects. In: ECCV","DOI":"10.1007\/978-3-030-58548-8_34"},{"key":"2478_CR958","doi-asserted-by":"crossref","unstructured":"Takano, W., & Nakamura, Y. (2015). Statistical mutual conversion between whole body motion primitives and linguistic sentences for human motions. IJRR","DOI":"10.1177\/0278364915587923"},{"key":"2478_CR959","doi-asserted-by":"crossref","unstructured":"Tan, C., Gao, Z., Wu, L., Xu, Y., Xia, J., Li, S., & Li, S. Z. (2023a). Temporal attention unit: Towards efficient spatiotemporal predictive learning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01800"},{"key":"2478_CR960","unstructured":"Tan, H., Lei, J., Wolf, T., & Bansal, M. (2021a). Vimpac: Video pre-training via masked token prediction and contrastive learning. arXiv:2106.11250"},{"key":"2478_CR961","doi-asserted-by":"crossref","unstructured":"Tan, J., Tang, J., Wang, L., & Wu, G. (2021b). Relaxed transformer decoders for direct action proposal generation. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01327"},{"key":"2478_CR962","unstructured":"Tan, S., Nagarajan, T., & Grauman, K. (2023b). Egodistill: Egocentric head motion distillation for efficient video understanding. In: NeurIPS"},{"key":"2478_CR963","doi-asserted-by":"crossref","unstructured":"Tang, J., Xia, J., Mu, X., Pang, B., & Lu, C. (2020a). Asynchronous interaction aggregation for action detection. 
In: ECCV","DOI":"10.1007\/978-3-030-58555-6_5"},{"key":"2478_CR964","doi-asserted-by":"crossref","unstructured":"Tang, Y., Ding, D., Rao, Y., Zheng, Y., Zhang, D., Zhao, L., Lu, J., & Zhou, J. (2019). Coin: A large-scale dataset for comprehensive instructional video analysis. In: CVPR","DOI":"10.1109\/CVPR.2019.00130"},{"key":"2478_CR965","doi-asserted-by":"crossref","unstructured":"Tang Y, Ni Z, Zhou J, Zhang D, Lu J, Wu Y, & Zhou J (2020b) Uncertainty-aware score distribution learning for action quality assessment. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00986"},{"key":"2478_CR966","unstructured":"Tang, Y., Bi, J., Xu, S., Song, L., Liang, S., Wang, T., Zhang, D., An, J., Lin, J., Zhu, R., et\u00a0al. (2023). Video understanding with large language models: A survey. arXiv:2312.17432"},{"key":"2478_CR967","doi-asserted-by":"crossref","unstructured":"Tang, Y., Dong, P., Tang, Z., Chu, X., & Liang, J. (2024). Vmrnn: Integrating vision mamba and lstm for efficient and accurate spatiotemporal forecasting. In: CVPR","DOI":"10.1109\/CVPRW63382.2024.00575"},{"key":"2478_CR968","doi-asserted-by":"crossref","unstructured":"Tavakoli, H. R., Rahtu, E., Kannala, J., & Borji, A. (2019). Digging deeper into egocentric gaze prediction. In: WACV","DOI":"10.1109\/WACV.2019.00035"},{"key":"2478_CR969","doi-asserted-by":"crossref","unstructured":"Taylor, G. W., Fergus, R., LeCun, Y., & Bregler, C. (2010). Convolutional learning of spatio-temporal features. In: ECCV","DOI":"10.1007\/978-3-642-15567-3_11"},{"key":"2478_CR970","unstructured":"Teed, Z., & Deng, J. (2021). Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. In: NeurIPS"},{"key":"2478_CR971","doi-asserted-by":"crossref","unstructured":"Teeti, I., Bhargav, R. S., Singh, V., Bradley, A., Banerjee, B., & Cuzzolin, F. (2023). Temporal dino: A self-supervised video strategy to enhance action prediction. In: ICCVW","DOI":"10.1109\/ICCVW60793.2023.00352"},{"key":"2478_CR972","doi-asserted-by":"crossref","unstructured":"Tekin, B., Bogo, F., & Pollefeys, M. (2019). H+ o: Unified egocentric recognition of 3d hand-object poses and interactions. In: CVPR","DOI":"10.1109\/CVPR.2019.00464"},{"key":"2478_CR973","unstructured":"Tewari, A., Yin, T., Cazenavette, G., Rezchikov, S., Tenenbaum, J., Durand, F., Freeman, B., & Sitzmann, V. (2023). Diffusion with forward models: Solving stochastic inverse problems without direct supervision. In: NeurIPS"},{"key":"2478_CR974","unstructured":"Tewel, Y., Shalev, Y., Nadler, R., Schwartz, I., & Wolf, L. (2022). Zero-shot video captioning with evolving pseudo-tokens. arXiv:2207.11100"},{"key":"2478_CR975","doi-asserted-by":"crossref","first-page":"61767","DOI":"10.1109\/ACCESS.2024.3395282","volume":"12","author":"S Thakur","year":"2024","unstructured":"Thakur, S., Beyan, C., Morerio, P., Murino, V., & Del Bue, A. (2024). Anticipating next active objects for egocentric videos. IEEE Access, 12, 61767\u201361779.","journal-title":"IEEE Access"},{"key":"2478_CR976","doi-asserted-by":"crossref","unstructured":"Thangali, A., & Sclaroff, S. (2005). Periodic motion detection and estimation via space-time sampling. In: WACV","DOI":"10.1109\/ACVMOT.2005.91"},{"key":"2478_CR977","doi-asserted-by":"crossref","unstructured":"Thoker, F. M., Doughty, H., & Snoek, C. G. M. (2023). Tubelet-contrastive self-supervision for video-efficient generalization. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01270"},{"key":"2478_CR978","first-page":"106","volume":"105","author":"EL Thompson","year":"2019","unstructured":"Thompson, E. 
L., Bird, G., & Catmur, C. (2019). Conceptualizing and testing action understanding. NBR, 105, 106\u2013114.","journal-title":"Conceptualizing and testing action understanding. NBR"},{"key":"2478_CR979","doi-asserted-by":"crossref","unstructured":"Thurau, C., & Hlav\u00e1c, V. (2008). Pose primitive based human action recognition in videos or still images. In: CVPR","DOI":"10.1109\/CVPR.2008.4587721"},{"key":"2478_CR980","doi-asserted-by":"crossref","unstructured":"Tian, X., Zou, S., Yang, Z., & Zhang, J. (2024a). Argue: Attribute-guided prompt tuning for vision-language models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02700"},{"key":"2478_CR981","doi-asserted-by":"crossref","unstructured":"Tian, Y., Li, D., & Xu, C. (2020). Unified multisensory perception: Weakly-supervised audio-visual video parsing. In: ECCV","DOI":"10.1007\/978-3-030-58580-8_26"},{"key":"2478_CR982","doi-asserted-by":"crossref","unstructured":"Tian, Y., Pang, G., Chen, Y., Singh, R., Verjans, J. W., & Carneiro, G. (2021a). Weakly-supervised video anomaly detection with robust temporal feature magnitude learning. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00493"},{"key":"2478_CR983","unstructured":"Tian, Y., Ren, J., Chai, M., Olszewski, K., Peng, X., Metaxas, D. N., & Tulyakov, S. (2021b). A good image generator is what you need for high-resolution video synthesis. In: ICLR"},{"key":"2478_CR984","unstructured":"Tian, Y., Yang, L., Yang, H., Gao, Y., Deng, Y., Chen, J., Wang, X., Yu, Z., Tao, X., Wan, P., et\u00a0al. (2024b). Videotetris: Towards compositional text-to-video generation. arXiv:2406.04277"},{"key":"2478_CR985","doi-asserted-by":"crossref","unstructured":"Tirupattur, P., Duarte, K., Rawat, Y. S., & Shah, M. (2021). Modeling multi-label action dependencies for temporal action localization. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00151"},{"key":"2478_CR986","doi-asserted-by":"crossref","unstructured":"Toering, M., Gatopoulos, I., Stol, M., & Hu, V. T. (2022). Self-supervised video representation learning with cross-stream prototypical contrasting. In: WACV","DOI":"10.1109\/WACV51458.2022.00092"},{"key":"2478_CR987","unstructured":"Tong, Z., Song, Y., Wang, J., & Wang, L. (2022). Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In: NeurIPS"},{"key":"2478_CR988","unstructured":"Torabi, A., Tandon, N., Sigal, L. (2016). Learning language-visual embedding for movie understanding with natural-language. arXiv:1609.08124"},{"issue":"4","key":"2478_CR989","doi-asserted-by":"crossref","first-page":"766","DOI":"10.1037\/0033-295X.113.4.766","volume":"113","author":"A Torralba","year":"2006","unstructured":"Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological review, 113(4), 766.","journal-title":"Psychological review"},{"key":"2478_CR990","volume-title":"Guide to the carnegie mellon university multimodal activity (cmu-mmac) database","author":"FD la Torre Frade","year":"2008","unstructured":"la Torre Frade, F. D., Hodgins, J. K., Bargteil, A. W., Artal, X. M., Macey, J. C., Castells, A. C. I., & Beltran, J. (2008). Guide to the carnegie mellon university multimodal activity (cmu-mmac) database. CMU: Tech. rep."},{"key":"2478_CR991","unstructured":"Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M. A., Lacroix, T., Rozi\u00e8re, B., Goyal, N., Hambro, E., Azhar, F., et\u00a0al. (2023). 
Llama: Open and efficient foundation language models. arXiv:2302.13971"},{"key":"2478_CR992","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning spatiotemporal features with 3d convolutional networks. In: ICCV","DOI":"10.1109\/ICCV.2015.510"},{"key":"2478_CR993","doi-asserted-by":"crossref","unstructured":"Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., & Paluri, M. (2018). A closer look at spatiotemporal convolutions for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2018.00675"},{"key":"2478_CR994","doi-asserted-by":"crossref","unstructured":"Tran, D., Wang, H., Torresani, L., & Feiszli, M. (2019) Video classification with channel-separated convolutional networks. In: ICCV","DOI":"10.1109\/ICCV.2019.00565"},{"key":"2478_CR995","doi-asserted-by":"crossref","unstructured":"Tran, K.N., Kakadiaris, I.A., & Shah, S.K. (2012) Part-based motion descriptor image for human action recognition. PR, 45(7), 2562\u20132572","DOI":"10.1016\/j.patcog.2011.12.028"},{"key":"2478_CR996","unstructured":"Tschernezki, V., Darkhalil, A., Zhu, Z., Fouhey, D., Laina, I., Larlus, D., Damen, D., & Vedaldi, A. (2024) Epic fields: Marrying 3d geometry and video understanding. In: NeurIPS"},{"key":"2478_CR997","doi-asserted-by":"crossref","unstructured":"Tse, T.H.E., Kim, K.I., Leonardis, A., & Chang, H.J. (2022) Collaborative learning for hand and object reconstruction with attention-guided graph convolution. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00171"},{"key":"2478_CR998","unstructured":"Tsuchida, S., Fukayama, S., Hamasaki, M., & Goto, M. (2019) Aist dance video database: Multi-genre, multi-dancer, and multi-camera database for dance information processing. In: ISMIR"},{"key":"2478_CR999","first-page":"42","volume":"21","author":"K Tu","year":"2014","unstructured":"Tu, K., Meng, M., Lee, M. W., Choe, T. E., & Zhu, S. C. (2014). Joint video and text parsing for understanding events and answering queries. IEEE MM, 21, 42\u201370.","journal-title":"IEEE MM"},{"key":"2478_CR1000","doi-asserted-by":"crossref","unstructured":"Tu, Z., Xie, W., Qin, Q., Poppe, R., Veltkamp, R.C., Li, B., & Yuan, J. (2018) Multi-stream CNN: Learning representations based on human-related regions for action recognition. PR, 79, 32\u201343","DOI":"10.1016\/j.patcog.2018.01.020"},{"issue":"11","key":"2478_CR1001","first-page":"1473","volume":"18","author":"P Turaga","year":"2008","unstructured":"Turaga, P., Chellappa, R., Subrahmanian, V. S., & Udrea, O. (2008). Machine recognition of human activities: A survey. IEEE TCSVT, 18(11), 1473\u20131488.","journal-title":"IEEE TCSVT"},{"key":"2478_CR1002","doi-asserted-by":"crossref","unstructured":"Uithol, S., van Rooij, I., Bekkering, H., & Haselager, P. (2011) Understanding motor resonance. Social neuroscience, 6(4), 388\u2013397","DOI":"10.1080\/17470919.2011.559129"},{"key":"2478_CR1003","doi-asserted-by":"crossref","first-page":"1155","DOI":"10.1109\/ACCESS.2017.2778011","volume":"6","author":"A Ullah","year":"2017","unstructured":"Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M., & Baik, S. W. (2017). Action recognition in video sequences using deep bi-directional lstm with cnn features. IEEE access, 6, 1155\u20131166.","journal-title":"IEEE access"},{"key":"2478_CR1004","doi-asserted-by":"crossref","unstructured":"Ulutan, O., Rallapalli, S., Srivatsa, M., Torres, C., & Manjunath, B. (2020) Actor conditioned attention maps for video action detection. 
In: WACV","DOI":"10.1109\/WACV45572.2020.9093617"},{"key":"2478_CR1005","unstructured":"Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., & Gelly, S. (2019) Fvd: A new metric for video generation. In: ICLR"},{"key":"2478_CR1006","doi-asserted-by":"crossref","unstructured":"Upadhyay, U., Karthik, S., Mancini, M., & Akata, Z. (2023) Probvlm: Probabilistic adapter for frozen vision-language models. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00182"},{"key":"2478_CR1007","doi-asserted-by":"crossref","first-page":"161","DOI":"10.1023\/A:1022699900025","volume":"4","author":"PE Utgoff","year":"1989","unstructured":"Utgoff, P. E. (1989). Incremental induction of decision trees. Machine learning, 4, 161\u2013186.","journal-title":"Machine learning"},{"issue":"3","key":"2478_CR1008","first-page":"313","volume":"6","author":"LM Vaina","year":"1991","unstructured":"Vaina, L. M., & Jaulent, M. C. (1991). Object structure and action requirements: A compatibility model for functional recognition. IJIS, 6(3), 313\u2013336.","journal-title":"IJIS"},{"key":"2478_CR1009","unstructured":"Valevski, D., Leviathan, Y., Arar, M., & Fruchter, S. (2024) Diffusion models are real-time game engines. arXiv:2408.14837"},{"key":"2478_CR1010","doi-asserted-by":"crossref","unstructured":"Van\u00a0Gemeren, C., Poppe, R., & Veltkamp, R.C. (2016) Spatio-temporal detection of fine-grained dyadic human interactions. In: HBU","DOI":"10.1007\/978-3-319-46843-3_8"},{"issue":"6","key":"2478_CR1011","doi-asserted-by":"crossref","first-page":"1510","DOI":"10.1109\/TPAMI.2017.2712608","volume":"40","author":"G Varol","year":"2017","unstructured":"Varol, G., Laptev, I., & Schmid, C. (2017). Long-term temporal convolutions for action recognition. IEEE TPAMI, 40(6), 1510\u20131517.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1012","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1016\/S0079-6123(09)01322-3","volume":"174","author":"JN Vickers","year":"2009","unstructured":"Vickers, J. N. (2009). Advances in coupling perception and action: The quiet eye as a bidirectional link between gaze, attention, and action. Progress in Brain Research, 174, 279\u2013288.","journal-title":"Progress in Brain Research"},{"key":"2478_CR1013","unstructured":"Villegas, R., Yang, J., Zou, Y., Sohn, S., Lin, X., & Lee, H. (2017) Learning to generate long-term future via hierarchical prediction. In: ICML"},{"key":"2478_CR1014","unstructured":"Villegas, R., Erhan, D., Lee, H., et\u00a0al. (2018) Hierarchical long-term video prediction without supervision. In: ICML"},{"key":"2478_CR1015","unstructured":"Villegas, R., Babaeizadeh, M., Kindermans, P.J., Moraldo, H., Zhang, H., Saffar, M.T., Castro, S., Kunze, J., & Erhan, D. (2022) Phenaki: Variable length video generation from open domain textual descriptions. In: ICLR"},{"issue":"1","key":"2478_CR1016","first-page":"69","volume":"3","author":"A Vinciarelli","year":"2012","unstructured":"Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D\u2019Errico, F., & Schroeder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE TAFFC, 3(1), 69\u201387.","journal-title":"IEEE TAFFC"},{"key":"2478_CR1017","doi-asserted-by":"crossref","first-page":"983","DOI":"10.1007\/s00371-012-0752-6","volume":"29","author":"S Vishwakarma","year":"2013","unstructured":"Vishwakarma, S., & Agrawal, A. (2013). A survey on activity recognition and behavior understanding in video surveillance. 
TVC, 29, 983\u20131009.","journal-title":"TVC"},{"key":"2478_CR1018","unstructured":"Voleti, V., Jolicoeur-Martineau, A., & Pal, C. (2022) Mcvd-masked conditional video diffusion for prediction, generation, and interpolation. In: NeurIPS"},{"key":"2478_CR1019","doi-asserted-by":"crossref","unstructured":"Vondrick, C., Pirsiavash, H., & Torralba, A. (2016a) Anticipating visual representations from unlabeled video. In: CVPR","DOI":"10.1109\/CVPR.2016.18"},{"key":"2478_CR1020","unstructured":"Vondrick, C., Pirsiavash, H., & Torralba, A. (2016b) Generating videos with scene dynamics. In: NeurIPS"},{"key":"2478_CR1021","doi-asserted-by":"crossref","unstructured":"Vondrick, C., Shrivastava, A., Fathi, A., Guadarrama, S., & Murphy, K. (2018) Tracking emerges by colorizing videos. In: ECCV","DOI":"10.1007\/978-3-030-01261-8_24"},{"key":"2478_CR1022","doi-asserted-by":"crossref","unstructured":"Walker, J., Doersch, C., Gupta, A., & Hebert, M. (2016) An uncertain future: Forecasting from static images using variational autoencoders. In: ECCV","DOI":"10.1007\/978-3-319-46478-7_51"},{"key":"2478_CR1023","doi-asserted-by":"crossref","unstructured":"Walmer, M., Suri, S., Gupta, K., & Shrivastava, A. (2023) Teaching matters: Investigating the role of supervision in vision transformers. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00723"},{"key":"2478_CR1024","doi-asserted-by":"crossref","unstructured":"Wang, B., Ma, L., Zhang, W., & Liu, W. (2018a) Reconstruction network for video captioning. In: CVPR","DOI":"10.1109\/CVPR.2018.00795"},{"key":"2478_CR1025","doi-asserted-by":"crossref","first-page":"2171","DOI":"10.1109\/TPAMI.2023.3330794","volume":"46","author":"B Wang","year":"2023","unstructured":"Wang, B., Zhao, Y., Yang, L., Long, T., & Li, X. (2023). Temporal action localization in the deep learning era: A survey. IEEE TPAMI, 46, 2171\u20132190.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1026","unstructured":"Wang, F.Y., Chen, W., Song, G., Ye, H.J., Liu, Y., & Li, H. (2023b) Gen-l-video: Multi-text to long video generation via temporal co-denoising. arXiv:2305.18264"},{"key":"2478_CR1027","doi-asserted-by":"crossref","unstructured":"Wang, G., Wang, Y., Qin, J., Zhang, D., Bao, X., & Huang, D. (2022a) Video anomaly detection by solving decoupled spatio-temporal jigsaw puzzles. In: ECCV","DOI":"10.1007\/978-3-031-20080-9_29"},{"key":"2478_CR1028","doi-asserted-by":"crossref","unstructured":"Wang, H., & Schmid, C. (2013) Action recognition with improved trajectories. In: ICCV","DOI":"10.1109\/ICCV.2013.441"},{"key":"2478_CR1029","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1007\/s11263-012-0594-8","volume":"103","author":"H Wang","year":"2013","unstructured":"Wang, H., Kl\u00e4ser, A., Schmid, C., & Liu, C. L. (2013). Dense trajectories and motion boundary descriptors for action recognition. IJCV, 103, 60\u201379.","journal-title":"IJCV"},{"key":"2478_CR1030","doi-asserted-by":"crossref","unstructured":"Wang, J., & Cherian, A. (2019) Gods: Generalized one-class discriminative subspaces for anomaly detection. In: ICCV","DOI":"10.1109\/ICCV.2019.00829"},{"issue":"5","key":"2478_CR1031","doi-asserted-by":"crossref","first-page":"914","DOI":"10.1109\/TPAMI.2013.198","volume":"36","author":"J Wang","year":"2014","unstructured":"Wang, J., Liu, Z., Wu, Y., & Yuan, J. (2014). Learning actionlet ensemble for 3d human action recognition. 
IEEE TPAMI, 36(5), 914\u2013927.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1032","doi-asserted-by":"crossref","unstructured":"Wang, J., Jiang, W., Ma, L., Liu, W., & Xu, Y. (2018b) Bidirectional attentive fusion with context gating for dense video captioning. In: CVPR","DOI":"10.1109\/CVPR.2018.00751"},{"key":"2478_CR1033","doi-asserted-by":"crossref","unstructured":"Wang, J., Jiao, J., & Liu, Y.H. (2020a) Self-supervised video representation learning by pace prediction. In: ECCV","DOI":"10.1007\/978-3-030-58520-4_30"},{"key":"2478_CR1034","doi-asserted-by":"crossref","unstructured":"Wang, J., Ma, L., & Jiang, W. (2020b) Temporally grounding language queries in videos by contextual boundary-aware prediction. In: AAAI","DOI":"10.1609\/aaai.v34i07.6897"},{"key":"2478_CR1035","doi-asserted-by":"crossref","unstructured":"Wang, J., Gao, Y., Li, K., Hu, J., Jiang, X., Guo, X., Ji, R., & Sun, X. (2021a) Enhancing unsupervised video representation learning by decoupling the scene and the motion. In: AAAI","DOI":"10.1609\/aaai.v35i11.17215"},{"key":"2478_CR1036","doi-asserted-by":"crossref","unstructured":"Wang, J., Dasari, S., Srirama, M.K., Tulsiani, S., & Gupta, A. (2023c) Manipulate by seeing: Creating manipulation controllers from pre-trained representations. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00357"},{"key":"2478_CR1037","doi-asserted-by":"crossref","unstructured":"Wang, J., Chen, D., Luo, C., He, B., Yuan, L., Wu, Z., & Jiang, Y.G. (2024a) Omnivid: A generative framework for universal video understanding. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01724"},{"key":"2478_CR1038","doi-asserted-by":"crossref","unstructured":"Wang, L., Li, Y., & Lazebnik, S. (2016a) Learning deep structure-preserving image-text embeddings. In: CVPR","DOI":"10.1109\/CVPR.2016.541"},{"key":"2478_CR1039","doi-asserted-by":"crossref","unstructured":"Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., & Van\u00a0Gool, L. (2016b) Temporal segment networks: Towards good practices for deep action recognition. In: ECCV","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"2478_CR1040","doi-asserted-by":"crossref","unstructured":"Wang, L., Xiong, Y., Lin, D., & Van\u00a0Gool, L. (2017a) Untrimmednets for weakly supervised action recognition and detection. In: CVPR","DOI":"10.1109\/CVPR.2017.678"},{"key":"2478_CR1041","doi-asserted-by":"crossref","unstructured":"Wang, L., Li, W., Li, W., & Van\u00a0Gool, L. (2018c) Appearance-and-relation networks for video classification. In: CVPR","DOI":"10.1109\/CVPR.2018.00155"},{"key":"2478_CR1042","doi-asserted-by":"crossref","unstructured":"Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., & Qiao, Y. (2023d) Videomae v2: Scaling video masked autoencoders with dual masking. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01398"},{"key":"2478_CR1043","doi-asserted-by":"crossref","unstructured":"Wang, M., Ni, B., & Yang, X. (2017b) Recurrent modeling of interaction context for collective activity recognition. In: CVPR","DOI":"10.1109\/CVPR.2017.783"},{"key":"2478_CR1044","first-page":"118","volume":"171","author":"P Wang","year":"2018","unstructured":"Wang, P., Li, W., Ogunbona, P., Wan, J., & Escalera, S. (2018). Rgb-d-based human motion recognition with deep learning: A survey. CVIU, 171, 118\u2013139.","journal-title":"CVIU"},{"key":"2478_CR1045","doi-asserted-by":"crossref","unstructured":"Wang, Q., Zhao, L., Yuan, L., Liu, T., & Peng, X. (2023e) Learning from semantic alignment between unpaired multiviews for egocentric video recognition. 
In: ICCV","DOI":"10.1109\/ICCV51070.2023.00306"},{"key":"2478_CR1046","doi-asserted-by":"crossref","unstructured":"Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y.G., Zhou, L., & Yuan, L. (2022b) Bevt: Bert pretraining of video transformers. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01432"},{"key":"2478_CR1047","doi-asserted-by":"crossref","unstructured":"Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Yuan, L., & Jiang, Y.G. (2023f) Masked video distillation: Rethinking masked feature modeling for self-supervised video representation learning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00611"},{"key":"2478_CR1048","doi-asserted-by":"crossref","unstructured":"Wang, S., Leroy, V., Cabon, Y., Chidlovskii, B., & Revaud, J. (2024b) Dust3r: Geometric 3d vision made easy. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01956"},{"key":"2478_CR1049","doi-asserted-by":"crossref","unstructured":"Wang, W., Tran, D., & Feiszli, M. (2020c) What makes training multi-modal classification networks hard? In: CVPR","DOI":"10.1109\/CVPR42600.2020.01271"},{"key":"2478_CR1050","doi-asserted-by":"crossref","unstructured":"Wang, W., Bao, H., Dong, L., Bjorck, J., Peng, Z., Liu, Q., Aggarwal, K., Mohammed, O.K., Singhal, S., Som, S., et\u00a0al (2022c) Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv:2208.10442","DOI":"10.1109\/CVPR52729.2023.01838"},{"key":"2478_CR1051","doi-asserted-by":"crossref","first-page":"3254","DOI":"10.1109\/TIP.2023.3279991","volume":"32","author":"W Wang","year":"2023","unstructured":"Wang, W., Chang, F., Zhang, J., Yan, R., Liu, C., Wang, B., & Shou, M. Z. (2023). Magi-net: Meta negative network for early activity prediction. IEEE T-IP, 32, 3254\u20133265.","journal-title":"IEEE T-IP"},{"key":"2478_CR1052","doi-asserted-by":"crossref","unstructured":"Wang, X., & Gupta, A. (2018) Videos as space-time region graphs. In: ECCV","DOI":"10.1007\/978-3-030-01228-1_25"},{"key":"2478_CR1053","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., & He, K. (2018e) Non-local neural networks. In: CVPR","DOI":"10.1109\/CVPR.2018.00813"},{"key":"2478_CR1054","doi-asserted-by":"crossref","unstructured":"Wang, X., Hu, J.F., Lai, J.H., Zhang, J., & Zheng, W.S. (2019a) Progressive teacher-student learning for early action prediction. In: CVPR","DOI":"10.1109\/CVPR.2019.00367"},{"key":"2478_CR1055","doi-asserted-by":"crossref","unstructured":"Wang, X., Wu, J., Chen, J., Li, L., Wang, Y.F., & Wang, W.Y. (2019b) Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In: CVPR","DOI":"10.1109\/ICCV.2019.00468"},{"key":"2478_CR1056","doi-asserted-by":"crossref","unstructured":"Wang, X., Kwon, T., Rad, M., Pan, B., Chakraborty, I., Andrist, S., Bohus, D., Feniello, A., Tekin, B., & Frujeri, FV. et\u00a0al (2023h) Holoassist: an egocentric human interaction dataset for interactive ai assistants in the real world. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01854"},{"key":"2478_CR1057","unstructured":"Wang, X., Yuan, H., Zhang, S., Chen, D., Wang, J., Zhang, Y., Shen, Y., Zhao, D., & Zhou, J. (2023i) Videocomposer: Compositional video synthesis with motion controllability. In: NeurIPS"},{"key":"2478_CR1058","doi-asserted-by":"crossref","unstructured":"Wang, X., Misra, I., Zeng, Z., Girdhar, R., & Darrell, T. (2024c) Videocutler: Surprisingly simple unsupervised video instance segmentation. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.02147"},{"key":"2478_CR1059","doi-asserted-by":"crossref","unstructured":"Wang, X., Zhang, S., Yuan, H., Qing, Z., Gong, B., Zhang, Y., Shen, Y., Gao, C., & Sang, N. (2024d) A recipe for scaling up text-to-video generation with text-free videos. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00628"},{"key":"2478_CR1060","doi-asserted-by":"crossref","unstructured":"Wang, Y., Huang, K., & Tan, T. (2007) Human activity recognition based on r transform. In: CVPR","DOI":"10.1109\/CVPR.2007.383505"},{"key":"2478_CR1061","doi-asserted-by":"crossref","unstructured":"Wang, Y., Long, M., Wang, J., & Yu, P.S. (2017c) Spatiotemporal pyramid network for video action recognition. In: CVPR","DOI":"10.1109\/CVPR.2017.226"},{"key":"2478_CR1062","unstructured":"Wang, Y., Gao, Z., Long, M., Wang, J., & Philip, S.Y. (2018f) Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning. In: ICML"},{"key":"2478_CR1063","doi-asserted-by":"crossref","unstructured":"Wang, Y., Wu, J., Long, M., & Tenenbaum, J.B. (2020d) Probabilistic video prediction from noisy data with a posterior confidence. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01084"},{"key":"2478_CR1064","doi-asserted-by":"crossref","unstructured":"Wang, Y., Chen, Z., Jiang, H., Song, S., Han, Y., & Huang, G. (2021b) Adaptive focus for efficient video recognition. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01594"},{"key":"2478_CR1065","doi-asserted-by":"crossref","unstructured":"Wang, Y., Yue, Y., Lin, Y., Jiang, H., Lai, Z., Kulikov, V., Orlov, N., Shi, H., & Huang, G. (2022d) Adafocus v2: End-to-end training of spatial dynamic networks for video recognition. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01943"},{"key":"2478_CR1066","doi-asserted-by":"crossref","unstructured":"Wang, Y., Yue, Y., Xu, X., Hassani, A., Kulikov, V., Orlov, N., Song, S., Shi, H., & Huang, G. (2022e) Adafocusv3: On unified spatial-temporal dynamic video recognition. In: ECCV","DOI":"10.1007\/978-3-031-19772-7_14"},{"key":"2478_CR1067","doi-asserted-by":"crossref","unstructured":"Wang, Y., Cui, Z., & Li, Y. (2023j) Distribution-consistent modal recovering for incomplete multimodal learning. In: ICCV","DOI":"10.1109\/ICCV51070.2023.02013"},{"key":"2478_CR1068","doi-asserted-by":"crossref","unstructured":"Wang, Y., Jiang, L., & Loy, C.C. (2023k) Styleinv: A temporal style modulated inversion network for unconditional video generation. In: ICCV","DOI":"10.1109\/ICCV51070.2023.02089"},{"key":"2478_CR1069","doi-asserted-by":"crossref","unstructured":"Wang, Y., Li, K., Li, X., Yu, J., He, Y., Chen, G., Pei, B., Zheng, R., Xu, J., & Wang, Z., et\u00a0al (2024e) Internvideo2: Scaling video foundation models for multimodal video understanding. arXiv:2403.15377","DOI":"10.1007\/978-3-031-73013-9_23"},{"key":"2478_CR1070","doi-asserted-by":"crossref","unstructured":"Wang, Z., Wang, L., Wu, T., Li, T., & Wu, G. (2022f) Negative sample matters: A renaissance of metric learning for temporal grounding. In: AAAI","DOI":"10.1609\/aaai.v36i3.20163"},{"key":"2478_CR1071","doi-asserted-by":"crossref","unstructured":"Wei, C., Fan, H., Xie, S., Wu, C.Y., Yuille, A., & Feichtenhofer, C. (2022a) Masked feature prediction for self-supervised visual pre-training. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01426"},{"key":"2478_CR1072","doi-asserted-by":"crossref","unstructured":"Wei, D., Lim, J.J., Zisserman, A., & Freeman, W.T. (2018) Learning and using the arrow of time. 
In: CVPR","DOI":"10.1109\/CVPR.2018.00840"},{"key":"2478_CR1073","doi-asserted-by":"crossref","unstructured":"Wei, J., Luo, G., Li, B., & Hu, W. (2022b) Inter-intra cross-modality self-supervised video representation learning by contrastive clustering. In: ICPR","DOI":"10.1109\/ICPR56361.2022.9956697"},{"key":"2478_CR1074","unstructured":"Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., & Zhou, D., et\u00a0al (2022c) Chain-of-thought prompting elicits reasoning in large language models. In: NeurIPS"},{"key":"2478_CR1075","doi-asserted-by":"crossref","unstructured":"Wei, Y., Zhang, S., Qing, Z., Yuan, H., Liu, Z., Liu, Y., Zhang, Y., Zhou, J., & Shan, H. (2024) Dreamvideo: Composing your dream videos with customized subject and motion. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00625"},{"key":"2478_CR1076","doi-asserted-by":"crossref","unstructured":"Weinland, D., \u00d6zuysal, M., & Fua, P. (2010) Making action recognition robust to occlusions and viewpoint changes. In: ECCV","DOI":"10.1007\/978-3-642-15558-1_46"},{"issue":"2","key":"2478_CR1077","first-page":"224","volume":"115","author":"D Weinland","year":"2011","unstructured":"Weinland, D., Ronfard, R., & Boyer, E. (2011). A survey of vision-based methods for action representation, segmentation and recognition. CVIU, 115(2), 224\u2013241.","journal-title":"CVIU"},{"key":"2478_CR1078","doi-asserted-by":"crossref","unstructured":"Weinzaepfel, P., Harchaoui, Z., & Schmid, C. (2015) Learning to track for spatio-temporal action localization. In: ICCV","DOI":"10.1109\/ICCV.2015.362"},{"key":"2478_CR1079","unstructured":"Weinzaepfel, P., Martin, X., & Schmid, C. (2016) Towards weakly-supervised action localization. arXiv:1605.05197"},{"key":"2478_CR1080","doi-asserted-by":"crossref","unstructured":"Weinzaepfel, P., Lucas, T., Leroy, V., Cabon, Y., Arora, V., Br\u00e9gier, R., Csurka, G., Antsfeld, L., Chidlovskii, B., & Revaud, J. (2023) Croco v2: Improved cross-view completion pre-training for stereo matching and optical flow. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01647"},{"key":"2478_CR1081","unstructured":"Weissenborn, D., T\u00e4ckstr\u00f6m, O., & Uszkoreit, J. (2020) Scaling autoregressive video models. In: ICLR"},{"key":"2478_CR1082","doi-asserted-by":"crossref","unstructured":"Wong, S.F., & Cipolla, R. (2007) Extracting spatiotemporal interest points using global information. In: ICCV","DOI":"10.1109\/ICCV.2007.4408923"},{"key":"2478_CR1083","doi-asserted-by":"crossref","unstructured":"Woo, S., Lee, S., Park, Y., Nugroho, M.A., & Kim, C. (2023) Towards good practices for missing modality robust action recognition. In: AAAI","DOI":"10.1609\/aaai.v37i3.25378"},{"key":"2478_CR1084","unstructured":"Wray, M., & Damen, D. (2019) Learning visual actions using multiple verb-only labels. In: BMVC"},{"key":"2478_CR1085","doi-asserted-by":"crossref","unstructured":"Wray, M., Larlus, D., Csurka, G., & Damen, D. (2019) Fine-grained action retrieval through multiple parts-of-speech embeddings. In: CVPR","DOI":"10.1109\/ICCV.2019.00054"},{"key":"2478_CR1086","doi-asserted-by":"crossref","unstructured":"Wray, M., Doughty, H., & Damen, D. (2021) On semantic similarity in video retrieval. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00365"},{"issue":"7","key":"2478_CR1087","doi-asserted-by":"crossref","first-page":"780","DOI":"10.1109\/34.598236","volume":"19","author":"CR Wren","year":"1997","unstructured":"Wren, C. R., Azarbayejani, A., Darrell, T., & Pentland, A. P. (1997). Pfinder: Real-time tracking of the human body. 
IEEE TPAMI, 19(7), 780\u2013785.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1088","doi-asserted-by":"crossref","unstructured":"Wu, C., Zhang, J., Savarese, S., & Saxena, A. (2015) Watch-n-patch: Unsupervised understanding of actions and relations. In: CVPR","DOI":"10.1109\/CVPR.2015.7299065"},{"key":"2478_CR1089","unstructured":"Wu, C., Huang, L., Zhang, Q., Li, B., Ji, L., Yang, F., Sapiro, G., & Duan, N. (2021a) Godiva: Generating open-domain videos from natural descriptions. arXiv:2104.14806"},{"key":"2478_CR1090","doi-asserted-by":"crossref","unstructured":"Wu, C., Liang, J., Ji, L., Yang, F., Fang, Y., Jiang, D., & Duan, N. (2022a) N\u00fcwa: Visual synthesis pre-training for neural visual world creation. In: ECCV","DOI":"10.1007\/978-3-031-19787-1_41"},{"key":"2478_CR1091","doi-asserted-by":"crossref","unstructured":"Wu, C.Y., Feichtenhofer, C., Fan, H., He, K., Krahenbuhl, P., & Girshick, R. (2019a) Long-term feature banks for detailed video understanding. In: CVPR","DOI":"10.1109\/CVPR.2019.00037"},{"key":"2478_CR1092","doi-asserted-by":"crossref","unstructured":"Wu, C.Y., Li, Y., Mangalam, K., Fan, H., Xiong, B., Malik, J., & Feichtenhofer, C. (2022b) Memvit: Memory-augmented multiscale vision transformer for efficient long-term video recognition. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01322"},{"key":"2478_CR1093","doi-asserted-by":"crossref","unstructured":"Wu, H., & Wang, X. (2021) Contrastive learning of image representations with cross-video cycle-consistency. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00999"},{"key":"2478_CR1094","doi-asserted-by":"crossref","unstructured":"Wu, H., Yao, Z., Wang, J., & Long, M. (2021b) Motionrnn: A flexible model for video prediction with spacetime-varying motions. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01518"},{"key":"2478_CR1095","doi-asserted-by":"crossref","unstructured":"Wu, H., Chen, K., Liu, H., Zhuge, M., Li, B., Qiao, R., Shu, X., Gan, B., Xu, L., Ren, B., et\u00a0al. (2023a) Newsnet: A novel dataset for hierarchical temporal segmentation. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01028"},{"key":"2478_CR1096","unstructured":"Wu, H., Li, D., Chen, B., & Li, J. (2024a) Longvideobench: A benchmark for long-context interleaved video-language understanding. In: NeurIPS"},{"key":"2478_CR1097","doi-asserted-by":"crossref","unstructured":"Wu, J.Z., Ge, Y., Wang, X., Lei, S.W., Gu, Y., Shi, Y., Hsu, W., Shan, Y., Qie, X., & Shou, M.Z. (2023b) Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00701"},{"key":"2478_CR1098","doi-asserted-by":"crossref","unstructured":"Wu, P., Liu, J., Shi, Y., Sun, Y., Shao, F., Wu, Z., & Yang, Z. (2020a) Not only look, but also listen: Learning multimodal violence detection under weak supervision. In: ECCV","DOI":"10.1007\/978-3-030-58577-8_20"},{"key":"2478_CR1099","doi-asserted-by":"crossref","unstructured":"Wu, Q., Yang, T., Liu, Z., Wu, B., Shan, Y., & Chan, A.B. (2023c) Dropmae: Masked autoencoders with spatial-attention dropout for tracking tasks. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01399"},{"key":"2478_CR1100","doi-asserted-by":"crossref","unstructured":"Wu, Q., Cui, R., Li, Y., & Zhu, H. (2024b) Haltingvt: Adaptive token halting transformer for efficient video recognition. In: ICASSP","DOI":"10.1109\/ICASSP48485.2024.10447548"},{"key":"2478_CR1101","doi-asserted-by":"crossref","unstructured":"Wu, R., Lin, H., Qi, X., & Jia, J. (2020b) Memory selection network for video propagation. 
In: ECCV","DOI":"10.1007\/978-3-030-58555-6_11"},{"key":"2478_CR1102","doi-asserted-by":"crossref","unstructured":"Wu, T., Cao, M,, Gao, Z., Wu, G., & Wang, L. (2023d) Stmixer: A one-stage sparse action detector. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01414"},{"issue":"5","key":"2478_CR1103","doi-asserted-by":"crossref","first-page":"1484","DOI":"10.1007\/s11263-020-01409-9","volume":"129","author":"X Wu","year":"2021","unstructured":"Wu, X., Wang, R., Hou, J., Lin, H., & Luo, J. (2021). Spatial-temporal relation reasoning for action prediction in videos. IJCV, 129(5), 1484\u20131505.","journal-title":"IJCV"},{"key":"2478_CR1104","doi-asserted-by":"crossref","unstructured":"Wu, X., Zhao, J., & Wang, R. (2021d) Anticipating future relations via graph growing for action prediction. In: AAAI","DOI":"10.1609\/aaai.v35i4.16402"},{"key":"2478_CR1105","doi-asserted-by":"crossref","unstructured":"Wu, Y., & Yang, Y. (2021) Exploring heterogeneous clues for weakly-supervised audio-visual video parsing. In: CVPR","DOI":"10.1109\/CVPR46437.2021.00138"},{"key":"2478_CR1106","doi-asserted-by":"crossref","first-page":"1143","DOI":"10.1109\/TIP.2020.3040521","volume":"30","author":"Y Wu","year":"2020","unstructured":"Wu, Y., Zhu, L., Wang, X., Yang, Y., & Wu, F. (2020). Learning to anticipate egocentric actions by imagination. IEEE T-IP, 30, 1143\u20131152.","journal-title":"IEEE T-IP"},{"key":"2478_CR1107","unstructured":"Wu, Z., Xiong, C., Jiang, Y.G., & Davis, L.S. (2019b) Liteeval: A coarse-to-fine framework for resource efficient video recognition. In: NeurIPS"},{"key":"2478_CR1108","doi-asserted-by":"crossref","unstructured":"Wu, Z., Xiong, C., Ma, C.Y., Socher, R., & Davis, L.S. (2019c) Adaframe: Adaptive frame selection for fast video recognition. In: CVPR","DOI":"10.1109\/CVPR.2019.00137"},{"issue":"4","key":"2478_CR1109","doi-asserted-by":"crossref","first-page":"1699","DOI":"10.1109\/TPAMI.2020.3029425","volume":"44","author":"Z Wu","year":"2020","unstructured":"Wu, Z., Li, H., Xiong, C., Jiang, Y. G., & Davis, L. S. (2020). A dynamic frame selection framework for fast video recognition. IEEE TPAMI, 44(4), 1699\u20131711.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1110","doi-asserted-by":"crossref","unstructured":"Xia, B., Wang, Z., Wu, W., Wang, H., & Han, J. (2022a). Temporal saliency query network for efficient video recognition. In: ECCV","DOI":"10.1007\/978-3-031-19830-4_42"},{"key":"2478_CR1111","doi-asserted-by":"crossref","unstructured":"Xia, B., Wu, W., Wang, H., Su, R., He, D., Yang, H., Fan, X., & Ouyang, W. (2022b). Nsnet: Non-saliency suppression sampler for efficient video recognition. In: ECCV","DOI":"10.1007\/978-3-031-19830-4_40"},{"key":"2478_CR1112","unstructured":"Xiao, F., Lee, Y. J., Grauman, K., Malik, J., & Feichtenhofer, C. (2020). Audiovisual slowfast networks for video recognition. arXiv:2001.08740"},{"key":"2478_CR1113","doi-asserted-by":"crossref","unstructured":"Xiao, F., Kundu, K., Tighe, J., & Modolo, D. (2022). Hierarchical self-supervised representation learning for movie understanding. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00950"},{"key":"2478_CR1114","doi-asserted-by":"crossref","unstructured":"Xiao, J., Shang, X., Yao, A., & Chua, T. S. (2021). Next-qa: Next phase of question-answering to explaining temporal actions. 
In: CVPR","DOI":"10.1109\/CVPR46437.2021.00965"},{"issue":"11","key":"2478_CR1115","doi-asserted-by":"crossref","first-page":"13265","DOI":"10.1109\/TPAMI.2023.3292266","volume":"45","author":"J Xiao","year":"2023","unstructured":"Xiao, J., Zhou, P., Yao, A., Li, Y., Hong, R., Yan, S., & Chua, T. S. (2023). Contrastive video question answering via video graph transformer. IEEE TPAMI, 45(11), 13265\u201313280.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1116","doi-asserted-by":"crossref","unstructured":"Xiao, J., Yao, A., Li, Y., & Chua, T. S. (2024). Can i trust your answer? visually grounded video question answering. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01254"},{"key":"2478_CR1117","doi-asserted-by":"crossref","unstructured":"Xie, J., Han, T., Bain, M., Nagrani, A., Varol, G., Xie, W., & Zisserman, A. (2024). Autoad-zero: A training-free framework for zero-shot audio description. In: ACCV","DOI":"10.1007\/978-981-96-0908-6_5"},{"key":"2478_CR1118","doi-asserted-by":"crossref","unstructured":"Xie, X., Bhatnagar, B. L., & Pons-Moll, G. (2022). Chore: Contact, human and object reconstruction from a single rgb image. In: ECCV","DOI":"10.1007\/978-3-031-20086-1_8"},{"key":"2478_CR1119","doi-asserted-by":"crossref","unstructured":"Xing, Z., Dai, Q., Hu, H., Chen, J., Wu, Z., & Jiang, Y. G. (2023) Svformer: Semi-supervised video transformer for action recognition. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01804"},{"key":"2478_CR1120","unstructured":"Xiong, Y., Zhao, Y., Wang, L., Lin, D., & Tang, X. (2017). A pursuit of temporal accuracy in general activity detection. arXiv:1703.02716"},{"key":"2478_CR1121","doi-asserted-by":"crossref","unstructured":"Xiong, Y., Ren, M., Zeng, W., & Urtasun, R. (2021). Self-supervised representation learning from flow equivariance. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01003"},{"key":"2478_CR1122","doi-asserted-by":"crossref","unstructured":"Xu, D., Ricci, E., Yan, Y., Song, J., & Sebe, N. (2015a). Learning deep representations of appearance and motion for anomalous event detection. In: BMVC","DOI":"10.5244\/C.29.8"},{"key":"2478_CR1123","doi-asserted-by":"crossref","unstructured":"Xu, D., Zhao, Z., Xiao, J., Wu, F., Zhang, H., He, X., & Zhuang, Y. (2017a). Video question answering via gradually refined attention over appearance and motion. In: MM","DOI":"10.1145\/3123266.3123427"},{"key":"2478_CR1124","doi-asserted-by":"crossref","unstructured":"Xu, D., Xiao, J., Zhao, Z., Shao, J., Xie, D., & Zhuang, Y. (2019a). Self-supervised spatiotemporal learning via video clip order prediction. In: CVPR","DOI":"10.1109\/CVPR.2019.01058"},{"key":"2478_CR1125","doi-asserted-by":"crossref","unstructured":"Xu, H., Das, A., & Saenko, K. (2017b). R-c3d: Region convolutional 3d network for temporal activity detection. In: ICCV","DOI":"10.1109\/ICCV.2017.617"},{"key":"2478_CR1126","doi-asserted-by":"crossref","unstructured":"Xu, H., He, K., Plummer, B. A., Sigal, L., Sclaroff, S., & Saenko, K. (2019b). Multilevel language and vision integration for text-to-clip retrieval. In: AAAI","DOI":"10.1609\/aaai.v33i01.33019062"},{"key":"2478_CR1127","doi-asserted-by":"crossref","unstructured":"Xu, H., Bazavan, E. G., Zanfir, A., Freeman, W. T., Sukthankar, R., & Sminchisescu, C. (2020). Ghum & ghuml: Generative 3d human shape and articulated pose models. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00622"},{"key":"2478_CR1128","doi-asserted-by":"crossref","unstructured":"Xu, H., Ghosh, G., Huang, P. Y., Okhonko, D., Aghajanyan, A., Metze, F., Zettlemoyer, L., & Feichtenhofer, C. 
(2021). Videoclip: Contrastive pre-training for zero-shot video-text understanding. In: EMNLP","DOI":"10.18653\/v1\/2021.emnlp-main.544"},{"key":"2478_CR1129","doi-asserted-by":"crossref","unstructured":"Xu, H., Wang, T., Tang, X., & Fu, C. W. (2023a). H2onet: Hand-occlusion-and-orientation-aware network for real-time 3d hand mesh reconstruction. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01635"},{"key":"2478_CR1130","unstructured":"Xu, H., Ye, Q., Yan, M., Shi, Y., Ye, J., Xu, Y., Li, C., Bi, B., Qian, Q., Wang, W., et\u00a0al. (2023b). mplug-2: A modularized multi-modal foundation model across text, image and video. In: ICML"},{"key":"2478_CR1131","doi-asserted-by":"crossref","unstructured":"Xu, J., & Wang, X. (2021). Rethinking self-supervised correspondence learning: A video frame-level similarity perspective. In: ICCV","DOI":"10.1109\/ICCV48922.2021.00992"},{"key":"2478_CR1132","doi-asserted-by":"crossref","unstructured":"Xu, J., Mukherjee, L., Li, Y., Warner, J., Rehg, J. M., & Singh, V. (2015b). Gaze-enabled egocentric video summarization via constrained submodular maximization. In: CVPR","DOI":"10.1109\/CVPR.2015.7298836"},{"key":"2478_CR1133","doi-asserted-by":"crossref","unstructured":"Xu, J., Mei, T., Yao, T., & Rui, Y. (2016). Msr-vtt: A large video description dataset for bridging video and language. In: CVPR","DOI":"10.1109\/CVPR.2016.571"},{"key":"2478_CR1134","doi-asserted-by":"crossref","unstructured":"Xu, R., Xiong, C., Chen, W., & Corso, J. (2015c). Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. In: AAAI","DOI":"10.1609\/aaai.v29i1.9512"},{"key":"2478_CR1135","doi-asserted-by":"crossref","unstructured":"Xu, S., Li, Z., Wang, Y. X., & Gui, L. Y. (2023c). Interdiff: Generating 3d human-object interactions with physics-informed diffusion. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01371"},{"key":"2478_CR1136","unstructured":"Xu, S., Wang, Y. X., Gui, L., et\u00a0al. (2025). Interdreamer: Zero-shot text to 3d dynamic human-object interaction. In: NeurIPS"},{"key":"2478_CR1137","doi-asserted-by":"crossref","unstructured":"Xu, W., Yu, J., Miao, Z., Wan, L., & Ji, Q. (2019c). Prediction-cgan: Human action prediction with conditional generative adversarial networks. In: MM","DOI":"10.1145\/3343031.3351073"},{"issue":"12","key":"2478_CR1138","doi-asserted-by":"crossref","first-page":"3272","DOI":"10.1007\/s11263-023-01850-6","volume":"131","author":"X Xu","year":"2023","unstructured":"Xu, X., Li, Y. L., & Lu, C. (2023). Dynamic context removal: A general training strategy for robust models on video action predictive tasks. IJCV, 131(12), 3272\u20133288.","journal-title":"IJCV"},{"key":"2478_CR1139","doi-asserted-by":"crossref","unstructured":"Xu, Z., Qing, L., & Miao, J. (2015d). Activity auto-completion: Predicting human activities from partial videos. In: ICCV","DOI":"10.1109\/ICCV.2015.365"},{"key":"2478_CR1140","doi-asserted-by":"crossref","unstructured":"Xue, H., Hang, T., Zeng, Y., Sun, Y., Liu, B., Yang, H., Fu, J., & Guo, B. (2022). Advancing high-resolution video-language representation with large-scale video transcriptions. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00498"},{"key":"2478_CR1141","doi-asserted-by":"crossref","unstructured":"Xue, Z., & Marculescu, R. (2023). Dynamic multimodal fusion. In: CVPRw","DOI":"10.1109\/CVPRW59228.2023.00256"},{"key":"2478_CR1142","doi-asserted-by":"crossref","unstructured":"Xue, Z., Song, Y., Grauman, K., & Torresani, L. (2023). Egocentric video task translation. 
In: CVPR","DOI":"10.1109\/CVPR52729.2023.00229"},{"key":"2478_CR1143","doi-asserted-by":"crossref","unstructured":"Xue, Z., Ashutosh, K., & Grauman, K. (2024). Learning object state changes in videos: An open-world perspective. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01750"},{"key":"2478_CR1144","doi-asserted-by":"crossref","unstructured":"Yan, L., Han, C., Xu, Z., Liu. D., & Wang, Q. (2023a). Prompt learns prompt: Exploring knowledge-aware generative prompt collaboration for video captioning. In: IJCAI","DOI":"10.24963\/ijcai.2023\/180"},{"key":"2478_CR1145","doi-asserted-by":"crossref","unstructured":"Yan, S., Xiong, X., Arnab, A., Lu, Z., Zhang, M., Sun, C., & Schmid, C. (2022). Multiview transformers for video recognition. In: CVPR","DOI":"10.1109\/CVPR52688.2022.00333"},{"key":"2478_CR1146","doi-asserted-by":"crossref","unstructured":"Yan, S., Xiong, X., Nagrani, A., Arnab, A., Wang, Z., Ge, W., Ross, D., & Schmid, C. (2023b). Unloc: A unified framework for video localization tasks. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01253"},{"key":"2478_CR1147","unstructured":"Yan, W., Zhang, Y., Abbeel, P., & Srinivas, A. (2021). Videogpt: Video generation using vq-vae and transformers. arXiv:2104.10157"},{"key":"2478_CR1148","unstructured":"Yan, W., Hafner, D., James, S., & Abbeel, P. (2023c). Temporally consistent transformers for video generation. In: ICML"},{"key":"2478_CR1149","doi-asserted-by":"crossref","unstructured":"Yan, X., Rastogi, A., Villegas, R., Sunkavalli, K., Shechtman, E., Hadap, S., Yumer, E., & Lee, H. (2018). Mt-vae: Learning motion transformations to generate multimodal human dynamics. In: ECCV","DOI":"10.1007\/978-3-030-01228-1_17"},{"key":"2478_CR1150","doi-asserted-by":"crossref","unstructured":"Yan, X., Misra, I., Gupta, A., Ghadiyaram, D., & Mahajan, D. (2020). Clusterfit: Improving generalization of visual representations. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00654"},{"key":"2478_CR1151","doi-asserted-by":"crossref","unstructured":"Yang, A., Miech, A., Sivic, J., Laptev, I., & Schmid, C. (2021a). Just ask: Learning to answer questions from millions of narrated videos. In: CVPR","DOI":"10.1109\/ICCV48922.2021.00171"},{"key":"2478_CR1152","doi-asserted-by":"crossref","unstructured":"Yang, A., Miech, A., Sivic, J., Laptev, I., & Schmid, C. (2022a). Tubedetr: Spatio-temporal video grounding with transformers. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01595"},{"key":"2478_CR1153","unstructured":"Yang, A., Miech, A., Sivic, J., Laptev, I., & Schmid, C. (2022b). Zero-shot video question answering via frozen bidirectional language models. In: NeurIPS"},{"key":"2478_CR1154","doi-asserted-by":"crossref","unstructured":"Yang, A., Nagrani, A., Seo, P. H., Miech, A., Pont-Tuset, J., Laptev, I., Sivic, J., & Schmid, C. (2023a). Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01032"},{"key":"2478_CR1155","unstructured":"Yang, A., Nagrani, A., Laptev, I., Sivic, J., & Schmid, C. (2024a). Vidchapters-7m: Video chapters at scale. In: NeurIPS"},{"key":"2478_CR1156","unstructured":"Yang, C., Xu, Y., Dai, B., & Zhou, B. (2020a). Video representation learning with visual tempo consistency. arXiv:2006.15489"},{"key":"2478_CR1157","doi-asserted-by":"crossref","unstructured":"Yang, C., Xu, Y., Shi, J., Dai, B., & Zhou, B. (2020b). Temporal pyramid network for action recognition. 
In: CVPR","DOI":"10.1109\/CVPR42600.2020.00067"},{"key":"2478_CR1158","doi-asserted-by":"crossref","unstructured":"Yang, D., & Liu, Y. (2024). Active object detection with knowledge aggregation and distillation from large models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01573"},{"key":"2478_CR1159","doi-asserted-by":"crossref","first-page":"1225312","DOI":"10.3389\/fnins.2023.1225312","volume":"17","author":"H Yang","year":"2023","unstructured":"Yang, H., Ren, Z., Yuan, H., Xu, Z., & Zhou, J. (2023). Contrastive self-supervised representation learning without negative samples for multimodal human action recognition. Frontiers in Neuroscience, 17, 1225312.","journal-title":"Frontiers in Neuroscience"},{"key":"2478_CR1160","doi-asserted-by":"crossref","unstructured":"Yang, J., Bisk, Y., & Gao, J. (2021b). Taco: Token-aware cascade contrastive learning for video-text alignment. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01136"},{"key":"2478_CR1161","unstructured":"Yang, M., Du, Y., Ghasemipour, K., Tompson, J., Schuurmans, D., & Abbeel, P. (2023c). Learning interactive real-world simulators. arXiv:2310.06114"},{"key":"2478_CR1162","doi-asserted-by":"crossref","unstructured":"Yang, P., Hu, V. T., Mettes, P., & Snoek, C. G. M. (2020c). Localizing the common action among a few videos. In: ECCV","DOI":"10.1007\/978-3-030-58571-6_30"},{"key":"2478_CR1163","doi-asserted-by":"crossref","unstructured":"Yang S, Zhang L, Liu Y, Jiang Z, & He Y (2023d) Video diffusion models with local-global context guidance. In: IJCAI","DOI":"10.24963\/ijcai.2023\/182"},{"key":"2478_CR1164","doi-asserted-by":"crossref","unstructured":"Yang, X., Yang, X., Liu, M. Y., Xiao, F., Davis, L. S., & Kautz, J. (2019). Step: Spatio-temporal progressive learning for video action detection. In: CVPR","DOI":"10.1109\/CVPR.2019.00035"},{"key":"2478_CR1165","doi-asserted-by":"crossref","unstructured":"Yang, Y., Zhai, W., Luo, H., Cao, Y., & Zha, Z. J. (2024b). Lemon: Learning 3d human-object interaction relation from 2d images. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01541"},{"key":"2478_CR1166","doi-asserted-by":"crossref","unstructured":"Yang, Z., Liu, J., & Wu, P. (2024c). Text prompt with normality guidance for weakly supervised video anomaly detection. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01788"},{"key":"2478_CR1167","doi-asserted-by":"crossref","unstructured":"Yao, B., & Fei-Fei, L. (2010). Modeling mutual context of object and human pose in human-object interaction activities. In: CVPR","DOI":"10.1109\/CVPR.2010.5540235"},{"key":"2478_CR1168","first-page":"14","volume":"118","author":"G Yao","year":"2019","unstructured":"Yao, G., Lei, T., & Zhong, J. (2019). A review of convolutional-neural-network-based action recognition. PRL, 118, 14\u201322.","journal-title":"PRL"},{"key":"2478_CR1169","doi-asserted-by":"crossref","unstructured":"Yao, T., Zhang, Y., Qiu, Z., Pan, Y., & Mei, T. (2021). Seco: Exploring sequence supervision for unsupervised representation learning. In: AAAI","DOI":"10.1609\/aaai.v35i12.17274"},{"key":"2478_CR1170","unstructured":"Yao, Z., Cheng, X., & Zou, Y. (2023). PoseRAC: Pose Saliency Transformer for Repetitive Action Counting. arXiv:2303.08450"},{"key":"2478_CR1171","doi-asserted-by":"crossref","unstructured":"Ye, H., Li, G., Qi, Y., Wang, S., Huang, Q., & Yang, M. H. (2022). Hierarchical modular network for video captioning. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01741"},{"key":"2478_CR1172","doi-asserted-by":"crossref","unstructured":"Ye, X., & Bilodeau, G. A. (2022). 
Vptr: Efficient transformers for video prediction. In: ICPR","DOI":"10.1109\/ICPR56361.2022.9956707"},{"key":"2478_CR1173","doi-asserted-by":"crossref","unstructured":"Ye, X., & Bilodeau, G. A. (2023). A unified model for continuous conditional video prediction. In: CVPRw","DOI":"10.1109\/CVPRW59228.2023.00368"},{"key":"2478_CR1174","doi-asserted-by":"crossref","unstructured":"Ye, X., & Bilodeau, G. A. (2024). Stdiff: Spatio-temporal diffusion for continuous stochastic video prediction. In: AAAI","DOI":"10.1609\/aaai.v38i7.28489"},{"key":"2478_CR1175","doi-asserted-by":"crossref","unstructured":"Ye, Y., Zhao, Z., Li, Y., Chen, L., Xiao, J., & Zhuang, Y. (2017). Video question answering via attribute-augmented attention network learning. In: SIGIR","DOI":"10.1145\/3077136.3080655"},{"key":"2478_CR1176","doi-asserted-by":"crossref","unstructured":"Yeung, S., Russakovsky, O., Mori, G., & Fei-Fei, L. (2016). End-to-end learning of action detection from frame glimpses in videos. In: CVPR","DOI":"10.1109\/CVPR.2016.293"},{"key":"2478_CR1177","doi-asserted-by":"crossref","first-page":"375","DOI":"10.1007\/s11263-017-1013-y","volume":"126","author":"S Yeung","year":"2018","unstructured":"Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., & Fei-Fei, L. (2018). Every moment counts: Dense detailed labeling of actions in complex videos. IJCV, 126, 375\u2013389.","journal-title":"IJCV"},{"key":"2478_CR1178","first-page":"221","volume":"104","author":"A Yilmaz","year":"2006","unstructured":"Yilmaz, A., & Shah, M. (2006). Matching actions in presence of camera motion. CVIU, 104, 221\u2013231.","journal-title":"CVIU"},{"key":"2478_CR1179","first-page":"13","volume":"38","author":"A Yilmaz","year":"2006","unstructured":"Yilmaz, A., Javed, O., & Shah, M. (2006). Object tracking: A survey. CSUR, 38, 13\u201358.","journal-title":"CSUR"},{"key":"2478_CR1180","doi-asserted-by":"crossref","unstructured":"Yin, S., Wu, C., Yang, H., Wang, J., Wang, X., Ni, M., Yang, Z., Li, L., Liu, S., Yang, F., et\u00a0al. (2023a). Nuwa-xl: Diffusion over diffusion for extremely long video generation. arXiv:2303.12346","DOI":"10.18653\/v1\/2023.acl-long.73"},{"key":"2478_CR1181","doi-asserted-by":"crossref","unstructured":"Yin, Y., Guo, C., Kaufmann, M., Zarate, J. J., Song, J., & Hilliges, O. (2023b). Hi4d: 4d instance segmentation of close human interaction. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01632"},{"key":"2478_CR1182","unstructured":"Ying, K., Meng, F., Wang, J., Li, Z., Lin, H., Yang, Y., Zhang, H., Zhang, W., Lin, Y., Liu, S., et\u00a0al. (2024). Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask agi. arXiv:2404.16006"},{"key":"2478_CR1183","unstructured":"Yoon, J. S., Kim, K., Gallo, O., Park, H. S., & Kautz, J. (2020). Novel view synthesis of dynamic scenes with globally coherent depths from a monocular camera. In: CVPR"},{"key":"2478_CR1184","unstructured":"Yu, J., Wang, Z., Vasudevan, V., Yeung, L., Seyedhosseini, M., & Wu, Y. (2022a). Coca: Contrastive captioners are image-text foundation models. arXiv:2205.01917"},{"key":"2478_CR1185","doi-asserted-by":"crossref","unstructured":"Yu, J., Li, X., Zhao, X., Zhang, H., & Wang, Y. X. (2023a). Video state-changing object segmentation. In: ICCV","DOI":"10.1109\/ICCV51070.2023.01869"},{"key":"2478_CR1186","doi-asserted-by":"crossref","unstructured":"Yu, J., Zhuge, Y., Zhang, L., Hu, P., Wang, D., Lu, H., & He, Y. (2024a). 
Boosting continual learning of vision-language models via mixture-of-experts adapters. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02191"},{"key":"2478_CR1187","unstructured":"Yu, S., Tack, J., Mo, S., Kim, H., Kim, J., Ha, J. W., & Shin, J. (2022b). Generating videos with dynamics-aware implicit generative adversarial networks. In: ICLR"},{"key":"2478_CR1188","unstructured":"Yu, S., Cho, J., Yadav, P., & Bansal, M. (2023b). Self-chained image-language model for video localization and question answering. In: NeurIPS"},{"key":"2478_CR1189","doi-asserted-by":"crossref","unstructured":"Yu, S., Sohn, K., Kim, S., & Shin, J. (2023c). Video probabilistic diffusion models in projected latent space. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01770"},{"key":"2478_CR1190","unstructured":"Yu, S., Nie, W., Huang, D. A., Li, B., Shin, J., & Anandkumar, A. (2024b). Efficient video diffusion models via content-frame motion-latent decomposition. In: ICLR"},{"key":"2478_CR1191","doi-asserted-by":"crossref","unstructured":"Yu, X., Rosing, T., & Guo, Y. (2024c). Evolve: Enhancing unsupervised continual learning with multiple experts. In: WACV","DOI":"10.1109\/WACV57701.2024.00236"},{"key":"2478_CR1192","doi-asserted-by":"crossref","unstructured":"Yu, Y., Chung, J., Yun, H., Kim, J., & Kim, G. (2021). Transitional adaptation of pretrained models for visual storytelling. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01247"},{"key":"2478_CR1193","doi-asserted-by":"crossref","unstructured":"Yue-Hei\u00a0Ng, J., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., & Toderici, G. (2015). Beyond short snippets: Deep networks for video classification. In: CVPR","DOI":"10.1109\/CVPR.2015.7299101"},{"key":"2478_CR1194","doi-asserted-by":"crossref","unstructured":"Zaheer, M. Z., Mahmood, A., Astrid, M., & Lee, S. I. (2020a). Claws: Clustering assisted weakly supervised learning with normalcy suppression for anomalous event detection. In: ECCV","DOI":"10.1007\/978-3-030-58542-6_22"},{"key":"2478_CR1195","first-page":"1705","volume":"27","author":"MZ Zaheer","year":"2020","unstructured":"Zaheer, M. Z., Mahmood, A., Shin, H., & Lee, S. I. (2020). A self-reasoning framework for anomaly detection using video-level labels. IEEE SPL, 27, 1705\u20131709.","journal-title":"IEEE SPL"},{"key":"2478_CR1196","doi-asserted-by":"crossref","unstructured":"Zanella, L., Menapace, W., Mancini, M., Wang, Y., & Ricci, E. (2024). Harnessing large language models for training-free video anomaly detection. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01753"},{"key":"2478_CR1197","doi-asserted-by":"crossref","unstructured":"Zatsarynna, O., Abu\u00a0Farha, Y., & Gall, J. (2021). Multi-modal temporal convolutional network for anticipating actions in egocentric videos. In: CVPRw, pp 2249\u20132258","DOI":"10.1109\/CVPRW53098.2021.00254"},{"key":"2478_CR1198","doi-asserted-by":"crossref","unstructured":"Zatsarynna, O., Bahrami, E., Farha, Y. A., Francesca, G., & Gall, J. (2024). Gated temporal diffusion for stochastic long-term dense anticipation. In: ECCV","DOI":"10.1007\/978-3-031-73001-6_26"},{"key":"2478_CR1199","unstructured":"Zbontar, J., Jing, L., Misra, I., LeCun, Y., & Deny, S. (2021). Barlow twins: Self-supervised learning via redundancy reduction. In: ICML"},{"key":"2478_CR1200","unstructured":"Zellers, R., Lu, X., Hessel, J., Yu, Y., Park, J. S., Cao, J., Farhadi, A., & Choi, Y. (2021). Merlot: Multimodal neural script knowledge models. 
In: NeurIPS"},{"key":"2478_CR1201","doi-asserted-by":"crossref","unstructured":"Zellers, R., Lu, J., Lu, X., Yu, Y., Zhao, Y., Salehi, M., Kusupati, A., Hessel, J., Farhadi, A., & Choi, Y. (2022). Merlot reserve: Neural script knowledge through vision and language and sound. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01589"},{"key":"2478_CR1202","unstructured":"Zelnik-Manor, L., & Irani, M. (2001). Event-based analysis of video. In: CVPR"},{"key":"2478_CR1203","doi-asserted-by":"crossref","unstructured":"Zeng, K. H., Chen, T. H., Chuang, C. Y., Liao, Y. H., Niebles, J. C., & Sun, M. (2017). Leveraging video descriptions to learn video question answering. In: AAAI","DOI":"10.1609\/aaai.v31i1.11238"},{"key":"2478_CR1204","doi-asserted-by":"crossref","unstructured":"Zeng, R., Huang, W., Tan, M., Rong, Y., Zhao, P., Huang, J., & Gan, C. (2019). Graph convolutional networks for temporal action localization. In: ICCV","DOI":"10.1109\/ICCV.2019.00719"},{"key":"2478_CR1205","doi-asserted-by":"crossref","unstructured":"Zeng, Y., Wei, G., Zheng, J., Zou, J., Wei, Y., Zhang, Y., & Li, H. (2024). Make pixels dance: High-dynamic video generation. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00845"},{"key":"2478_CR1206","unstructured":"Zha, X., Zhu, W., Xun, L., Yang, S., & Liu, J. (2021). Shifted chunk transformer for spatio-temporal representational learning. In: NeurIPS"},{"issue":"3","key":"2478_CR1207","doi-asserted-by":"crossref","first-page":"750","DOI":"10.1007\/s11263-023-01919-2","volume":"132","author":"W Zhai","year":"2024","unstructured":"Zhai, W., Wu, P., Zhu, K., Cao, Y., Wu, F., & Zha, Z. J. (2024). Background activation suppression for weakly supervised object localization and semantic segmentation. IJCV, 132(3), 750\u2013775.","journal-title":"IJCV"},{"key":"2478_CR1208","doi-asserted-by":"crossref","unstructured":"Zhai, Y., Wang, L., Tang, W., Zhang, Q., Yuan, J., & Hua, G. (2020). Two-stream consensus network for weakly-supervised temporal action localization. In: ECCV","DOI":"10.1007\/978-3-030-58539-6_3"},{"key":"2478_CR1209","doi-asserted-by":"crossref","unstructured":"Zhang, B., Wang, L., Wang, Z., Qiao, Y., & Wang, H. (2016). Real-time action recognition with enhanced motion vector cnns. In: CVPR","DOI":"10.1109\/CVPR.2016.297"},{"key":"2478_CR1210","doi-asserted-by":"crossref","unstructured":"Zhang, C., Cao, M., Yang, D., Chen, J., & Zou, Y. (2021a). Cola: Weakly-supervised temporal action localization with snippet contrastive learning. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01575"},{"key":"2478_CR1211","doi-asserted-by":"crossref","unstructured":"Zhang, C., Yang, T., Weng, J., Cao, M., Wang, J., & Zou, Y. (2022a). Unsupervised pre-training for temporal action localization tasks. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01364"},{"key":"2478_CR1212","doi-asserted-by":"crossref","unstructured":"Zhang, C. L., Wu, J., & Li, Y. (2022b). Actionformer: Localizing moments of actions with transformers. In: ECCV","DOI":"10.1007\/978-3-031-19772-7_29"},{"key":"2478_CR1213","doi-asserted-by":"crossref","unstructured":"Zhang, D., Dai, X., Wang, X., Wang, Y. F., & Davis, L. S. (2019a). Man: Moment alignment network for natural language moment retrieval via iterative graph adjustment. In: CVPR","DOI":"10.1109\/CVPR.2019.00134"},{"key":"2478_CR1214","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, X., Han, G., & He, S. (2020a). Context-aware and scale-insensitive temporal repetition counting. 
In: CVPR","DOI":"10.1109\/CVPR42600.2020.00075"},{"key":"2478_CR1215","doi-asserted-by":"crossref","unstructured":"Zhang, H., Sun, A., Jing, W., Nan, G., Zhen, L., Zhou, J. T., & Goh, R. S. M. (2021b). Video corpus moment retrieval with contrastive learning. In: SIGIR","DOI":"10.1145\/3404835.3462874"},{"issue":"8","key":"2478_CR1216","first-page":"4252","volume":"44","author":"H Zhang","year":"2021","unstructured":"Zhang, H., Sun, A., Jing, W., Zhen, L., Zhou, J. T., & Goh, R. S. M. (2021). Natural language video localization: A revisit in span-based question answering framework. IEEE TPAMI, 44(8), 4252\u20134266.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1217","doi-asserted-by":"crossref","unstructured":"Zhang, H., Li, X., & Bing, L. (2023a). Video-llama: An instruction-tuned audio-visual language model for video understanding. In: EMNLP","DOI":"10.18653\/v1\/2023.emnlp-demo.49"},{"key":"2478_CR1218","doi-asserted-by":"crossref","unstructured":"Zhang, H., Liu, D., Zheng, Q., & Su, B. (2023b). Modeling video as stochastic processes for fine-grained video representation learning. In: CVPR","DOI":"10.1109\/CVPR52729.2023.00221"},{"key":"2478_CR1219","doi-asserted-by":"crossref","unstructured":"Zhang, H., Christen, S., Fan, Z., Hilliges, O., & Song, J. (2024a). Graspxl: Generating grasping motions for diverse objects at scale. In: ECCV","DOI":"10.1007\/978-3-031-73347-5_22"},{"issue":"5","key":"2478_CR1220","doi-asserted-by":"crossref","first-page":"1005","DOI":"10.3390\/s19051005","volume":"19","author":"HB Zhang","year":"2019","unstructured":"Zhang, H. B., Zhang, Y. X., Zhong, B., Lei, Q., Yang, L., Du, J. X., & Chen, D. S. (2019). A comprehensive survey of vision-based human action recognition methods. Sensors, 19(5), 1005.","journal-title":"Sensors"},{"key":"2478_CR1221","doi-asserted-by":"crossref","unstructured":"Zhang, J., Qing, L., & Miao, J. (2019c). Temporal convolutional network with complementary inner bag loss for weakly supervised anomaly detection. In: ICIP","DOI":"10.1109\/ICIP.2019.8803657"},{"key":"2478_CR1222","unstructured":"Zhang, J., Herrmann, C., Hur, J., Jampani, V., Darrell, T., Cole, F., Sun, D., & Yang, M. H. (2025). Monst3r: A simple approach for estimating geometry in the presence of motion. In: ICLR"},{"key":"2478_CR1223","doi-asserted-by":"crossref","unstructured":"Zhang, J. Y., Pepose, S., Joo, H., Ramanan, D., Malik, J., & Kanazawa, A. (2020b). Perceiving 3d human-object spatial arrangements from a single image in the wild. In: ECCV","DOI":"10.1007\/978-3-030-58610-2_3"},{"key":"2478_CR1224","doi-asserted-by":"crossref","unstructured":"Zhang, M., Ma, K. T., Lim, J. H., Zhao, Q., & Feng, J. (2017). Deep future gaze: Gaze anticipation on egocentric videos using adversarial networks. In: CVPR","DOI":"10.1109\/CVPR.2017.377"},{"key":"2478_CR1225","doi-asserted-by":"crossref","unstructured":"Zhang, L., Rao, A., & Agrawala, M. (2023c). Adding conditional control to text-to-image diffusion models. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"2478_CR1226","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In: CVPR","DOI":"10.1109\/CVPR.2018.00068"},{"key":"2478_CR1227","unstructured":"Zhang, R., Fang, R., Zhang, W., Gao, P., Li, K., Dai, J., Qiao, Y., & Li, H. (2021d). Tip-adapter: Training-free clip-adapter for better vision-language modeling. 
arXiv:2111.03930"},{"key":"2478_CR1228","unstructured":"Zhang, R., Han, J., Liu, C., Gao, P., Zhou, A., Hu, X., Yan, S., Lu, P., Li, H., & Qiao, Y. (2024b). Llama-adapter: Efficient fine-tuning of language models with zero-init attention. In: ICLR"},{"key":"2478_CR1229","doi-asserted-by":"crossref","unstructured":"Zhang, S., Ma, Q., Zhang, Y., Qian, Z., Kwon, T., Pollefeys, M., Bogo, F., & Tang, S. (2022c). Egobody: Human body shape and motion of interacting people from head-mounted devices. In: ECCV","DOI":"10.1007\/978-3-031-20068-7_11"},{"key":"2478_CR1230","doi-asserted-by":"crossref","unstructured":"Zhang, S., Ma, Q., Zhang, Y., Aliakbarian, S., Cosker, D., & Tang, S. (2023d). Probabilistic human mesh recovery in 3d scenes from egocentric views. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00734"},{"key":"2478_CR1231","doi-asserted-by":"crossref","unstructured":"Zhang, W., Zhu, M., & Derpanis, K. G. (2013). From actemes to action: A strongly-supervised representation for detailed action understanding. In: ICCV","DOI":"10.1109\/ICCV.2013.280"},{"key":"2478_CR1232","doi-asserted-by":"crossref","unstructured":"Zhang, W., Wan, C., Liu, T., Tian, X., Shen, X., & Ye, J. (2024c). Enhanced motion-text alignment for image-to-video transfer learning. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01751"},{"key":"2478_CR1233","doi-asserted-by":"crossref","unstructured":"Zhang, X., Yoon, J., Bansal, M., & Yao, H. (2024d). Multimodal representation learning by alternating unimodal adaptation. In: CVPR","DOI":"10.1109\/CVPR52733.2024.02592"},{"key":"2478_CR1234","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tokmakov, P., Hebert, M., & Schmid, C. (2019d). A structured model for action detection. In: CVPR","DOI":"10.1109\/CVPR.2019.01021"},{"key":"2478_CR1235","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Shao, L., & Snoek, C. G. M. (2021e). Repetitive activity counting by sight and sound. In: CVPR","DOI":"10.1109\/CVPR46437.2021.01385"},{"key":"2478_CR1236","unstructured":"Zhang, Y., Bai, Y., Wang, H., Xu, Y., & Fu, Y. (2022d). Look more but care less in video recognition. In: NeurIPS"},{"key":"2478_CR1237","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Po, L. M., Xu, X., Liu, M., Wang, Y., Ou, W., Zhao, Y., & Yu, W. Y. (2022e). Contrastive spatio-temporal pretext learning for self-supervised video representation. In: AAAI","DOI":"10.1609\/aaai.v36i3.20248"},{"key":"2478_CR1238","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Chen, S., Wang, M., Zhang, X., Zhu, C., Zhang, Y., & Li, X. (2023e). Temporal consistent automatic video colorization via semantic correspondence. In: CVPRw","DOI":"10.1109\/CVPRW59228.2023.00182"},{"key":"2478_CR1239","unstructured":"Zhang, Y., Doughty, H., & Snoek, C. G. M. (2023f). Learning unseen modality interaction. In: NeurIPS"},{"key":"2478_CR1240","unstructured":"Zhang, Y., Li, J., Liu, L., & Qiang, W. (2024e). Rethinking misalignment in vision-language model adaptation from a causal perspective. In: NeurIPS"},{"key":"2478_CR1241","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Cole, F., Li, Z., Rubinstein, M., Snavely, N., & Freeman, W. T. (2022f). Structure and motion from casual videos. In: ECCV","DOI":"10.1007\/978-3-031-19827-4_2"},{"key":"2478_CR1242","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Hu, J., Cheng, W., Paudel, D., & Yang, J. (2024f). Extdm: Distribution extrapolation diffusion model for video prediction. 
In: CVPR","DOI":"10.1109\/CVPR52733.2024.01827"},{"key":"2478_CR1243","doi-asserted-by":"crossref","unstructured":"Zhao, B., Fei-Fei, L., & Xing, E. P. (2011). Online detection of unusual events in videos via dynamic sparse coding. In: CVPR","DOI":"10.1109\/CVPR.2011.5995524"},{"key":"2478_CR1244","unstructured":"Zhao, B., Dirac, L. P., & Varshavskaya, P. (2024a). Can vision language models learn from visual demonstrations of ambiguous spatial reasoning? arXiv:2409.17080"},{"key":"2478_CR1245","doi-asserted-by":"crossref","unstructured":"Zhao, C., Thabet, A. K., & Ghanem, B. (2021). Video self-stitching graph network for temporal action localization. In: ICCV","DOI":"10.1109\/ICCV48922.2021.01340"},{"key":"2478_CR1246","doi-asserted-by":"crossref","unstructured":"Zhao, C., Liu, S., Mangalam, K., & Ghanem, B. (2023). Re2tal: Rewiring pretrained video backbones for reversible temporal action localization. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01025"},{"key":"2478_CR1247","doi-asserted-by":"crossref","unstructured":"Zhao, H., & Wildes, R. P. (2019). Spatiotemporal feature residual propagation for action prediction. In: ICCV","DOI":"10.1109\/ICCV.2019.00710"},{"key":"2478_CR1248","doi-asserted-by":"crossref","unstructured":"Zhao, H., & Wildes, R. P. (2020). On diverse asynchronous activity anticipation. In: ECCV","DOI":"10.1007\/978-3-030-58526-6_46"},{"key":"2478_CR1249","doi-asserted-by":"crossref","unstructured":"Zhao, H., Gan, C., Rouditchenko, A., Vondrick, C., McDermott, J., & Torralba, A. (2018). The sound of pixels. In: ECCV","DOI":"10.1007\/978-3-030-01246-5_35"},{"key":"2478_CR1250","doi-asserted-by":"crossref","unstructured":"Zhao, H., Torralba, A., Torresani, L., & Yan, Z. (2019). Hacs: Human action clips and segments dataset for recognition and temporal localization. In: ICCV","DOI":"10.1109\/ICCV.2019.00876"},{"key":"2478_CR1251","doi-asserted-by":"crossref","unstructured":"Zhao, J., & Snoek, C. G. M. (2019). Dance with flow: Two-in-one stream action detection. In: CVPR","DOI":"10.1109\/CVPR.2019.01017"},{"key":"2478_CR1252","doi-asserted-by":"crossref","unstructured":"Zhao, J., Zhang, Y., Li, X., Chen, H., Shuai, B., Xu, M., Liu, C., Kundu, K., Xiong, Y., Modolo, D., et\u00a0al. (2022). Tuber: Tubelet transformer for video action detection. In: CVPR","DOI":"10.1109\/CVPR52688.2022.01323"},{"key":"2478_CR1253","unstructured":"Zhao, L., Gundavarapu, N. B., Yuan, L., Zhou, H., Yan, S., Sun, J. J., Friedman, L., Qian, R., Weyand, T., Zhao, Y., et\u00a0al. (2024b). Videoprism: A foundational visual encoder for video understanding. In: ICML"},{"key":"2478_CR1254","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Huang, B., Xing, S., Wu, G., Qiao, Y., & Wang, L. (2024c). Asymmetric masked distillation for pre-training small foundation models. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01752"},{"key":"2478_CR1255","doi-asserted-by":"crossref","unstructured":"Zhao, Z., Huang, X., Zhou, H., Yao, K., Ding, E., Wang, J., Wang, X., Liu, W., & Feng, B. (2024d). Skim then focus: Integrating contextual and fine-grained views for repetitive action counting. arXiv:2406.08814","DOI":"10.1007\/s11263-025-02471-x"},{"key":"2478_CR1256","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3603618","volume":"56","author":"C Zheng","year":"2020","unstructured":"Zheng, C., Wu, W., Chen, C., Yang, T., Zhu, S., Shen, J., Kehtarnavaz, N., & Shah, M. (2020). Deep learning-based human pose estimation: A survey. 
CSUR, 56, 1\u201337.","journal-title":"CSUR"},{"issue":"2","key":"2478_CR1257","first-page":"1","volume":"19","author":"N Zheng","year":"2023","unstructured":"Zheng, N., Song, X., Su, T., Liu, W., Yan, Y., & Nie, L. (2023). Egocentric early action prediction via adversarial knowledge distillation. ACM TOMM, 19(2), 1\u201321.","journal-title":"ACM TOMM"},{"key":"2478_CR1258","doi-asserted-by":"crossref","unstructured":"Zheng, Q., Wang, C., & Tao, D. (2020b). Syntax-aware action targeting for video captioning. In: CVPR","DOI":"10.1109\/CVPR42600.2020.01311"},{"key":"2478_CR1259","doi-asserted-by":"crossref","unstructured":"Zhong, J. X., Li, N., Kong, W., Liu, S., Li, T. H., & Li, G. (2019). Graph convolutional label noise cleaner: Train a plug-and-play action classifier for anomaly detection. In: CVPR","DOI":"10.1109\/CVPR.2019.00133"},{"key":"2478_CR1260","doi-asserted-by":"crossref","unstructured":"Zhong, Y., Liang, L., Zharkov, I., & Neumann, U. (2023a). Mmvp: Motion-matrix-based video prediction. In: ICCV","DOI":"10.1109\/ICCV51070.2023.00394"},{"key":"2478_CR1261","unstructured":"Zhong, Z., Martin, M., Voit, M., Gall, J., & Beyerer, J. (2023b). A survey on deep learning techniques for action anticipation. arXiv:2309.17257"},{"key":"2478_CR1262","doi-asserted-by":"crossref","unstructured":"Zhong, Z., Schneider, D., Voit, M., Stiefelhagen, R., & Beyerer, J. (2023c). Anticipative feature fusion transformer for multi-modal action anticipation. In: WACV","DOI":"10.1109\/WACV56688.2023.00601"},{"key":"2478_CR1263","doi-asserted-by":"crossref","unstructured":"Zhou, B., Andonian, A., Oliva, A., & Torralba, A. (2018a). Temporal relational reasoning in videos. In: ECCV","DOI":"10.1007\/978-3-030-01246-5_49"},{"issue":"3","key":"2478_CR1264","doi-asserted-by":"crossref","first-page":"582","DOI":"10.1109\/TPAMI.2012.137","volume":"35","author":"F Zhou","year":"2013","unstructured":"Zhou, F., De la Torre, F., & Hodgins, J. K. (2013). Hierarchical aligned cluster analysis for temporal clustering of human motion. IEEE TPAMI, 35(3), 582\u2013596.","journal-title":"IEEE TPAMI"},{"key":"2478_CR1265","doi-asserted-by":"crossref","unstructured":"Zhou, H., Mart\u00edn-Mart\u00edn, R., Kapadia, M., Savarese, S., & Niebles, J. C. (2023a). Procedure-aware pretraining for instructional video understanding. In: CVPR","DOI":"10.1109\/CVPR52729.2023.01033"},{"key":"2478_CR1266","doi-asserted-by":"crossref","unstructured":"Zhou, J., Wang, J., Zhang, J., Sun, W., Zhang, J., Birchfield, S., Guo, D., Kong, L., Wang, M., & Zhong, Y. (2022). Audio\u2013visual segmentation. In: ECCV","DOI":"10.1007\/978-3-031-19836-6_22"},{"key":"2478_CR1267","doi-asserted-by":"crossref","unstructured":"Zhou, L., Xu, C., & Corso, J. J. (2018b). Towards automatic learning of procedures from web instructional videos. In: AAAI","DOI":"10.1609\/aaai.v32i1.12342"},{"key":"2478_CR1268","doi-asserted-by":"crossref","unstructured":"Zhou, L., Zhou, Y., Corso, J. J., Socher, R., & Xiong, C. (2018c). End-to-end dense video captioning with masked transformer. In: CVPR","DOI":"10.1109\/CVPR.2018.00911"},{"key":"2478_CR1269","doi-asserted-by":"crossref","unstructured":"Zhou, X., Arnab, A., Buch, S., Yan, S., Myers, A., Xiong, X., Nagrani, A., & Schmid, C. (2024). Streaming dense video captioning. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01727"},{"key":"2478_CR1270","doi-asserted-by":"crossref","unstructured":"Zhou, Y., & Berg, T. L. (2015). Temporal perception and prediction in ego-centric video. 
In: ICCV","DOI":"10.1109\/ICCV.2015.511"},{"key":"2478_CR1271","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Sun, X., Zha, Z. J., & Zeng, W. (2018d). Mict: Mixed 3d\/2d convolutional tube for human action recognition. In: CVPR","DOI":"10.1109\/CVPR.2018.00054"},{"key":"2478_CR1272","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Duan, H., Rao, A., Su, B., & Wang, J. (2023b). Self-supervised action representation learning from partial spatio-temporal skeleton sequences. In: AAAI","DOI":"10.1609\/aaai.v37i3.25495"},{"key":"2478_CR1273","doi-asserted-by":"crossref","unstructured":"Zhu, B., Flanagan, K., Fragomeni, A., Wray, M., & Damen, D. (2024a). Video editing for video retrieval. arXiv:2402.02335","DOI":"10.1007\/978-3-031-92591-7_15"},{"key":"2478_CR1274","unstructured":"Zhu, B., Lin, B., Ning, M., Yan, Y., Cui, J., Wang, H., Pang, Y., Jiang, W., Zhang, J., Li, Z., et\u00a0al. (2024b). Languagebind: Extending video-language pretraining to n-modality by language-based semantic alignment. In: ICLR"},{"key":"2478_CR1275","doi-asserted-by":"crossref","unstructured":"Zhu, L., & Yang, Y. (2020). Actbert: Learning global-local video-text representations. In: CVPR","DOI":"10.1109\/CVPR42600.2020.00877"},{"key":"2478_CR1276","doi-asserted-by":"crossref","unstructured":"Zhu, W., Hu, J., Sun, G., Cao, X., & Qiao, Y. (2016). A key volume mining deep framework for action recognition. In: CVPR","DOI":"10.1109\/CVPR.2016.219"},{"key":"2478_CR1277","unstructured":"Zhu, Y., & Newsam, S. (2019). Motion-aware feature for improved video anomaly detection. In: BMVC"},{"key":"2478_CR1278","unstructured":"Zhu, Y., Shen, X., & Xia, R. (2023). Personality-aware human-centric multimodal reasoning: A new task, dataset and baselines. arXiv:2304.02313"},{"key":"2478_CR1279","doi-asserted-by":"crossref","unstructured":"Zhu, Y., Zhang, G., Tan, J., Wu, G., & Wang, L. (2024c). Dual detrs for multi-label temporal action detection. In: CVPR","DOI":"10.1109\/CVPR52733.2024.01756"},{"key":"2478_CR1280","unstructured":"Zhu, Z., & Damen, D. (2023). Get a grip: Reconstructing hand-object stable grasps in egocentric videos. arXiv:2312.15719"},{"key":"2478_CR1281","doi-asserted-by":"crossref","unstructured":"Zhuang, S., Li, K., Chen, X., Wang, Y., Liu, Z., Qiao, Y., & Wang, Y. (2024). Vlogger: Make your dream a vlog. In: CVPR","DOI":"10.1109\/CVPR52733.2024.00841"},{"key":"2478_CR1282","doi-asserted-by":"crossref","unstructured":"Zhuo, T., Cheng, Z., Zhang, P., Wong, Y., & Kankanhalli, M. (2019). Explainable video action reasoning via prior knowledge and state transitions. In: MM","DOI":"10.1145\/3343031.3351040"},{"key":"2478_CR1283","doi-asserted-by":"crossref","DOI":"10.1016\/j.imavis.2021.104108","volume":"107","author":"M Zong","year":"2021","unstructured":"Zong, M., Wang, R., Chen, X., Chen, Z., & Gong, Y. (2021). Motion saliency based multi-stream multiplier resnets for action recognition. IVC, 107, Article 104108.","journal-title":"IVC"},{"key":"2478_CR1284","doi-asserted-by":"crossref","unstructured":"Zou, S., Zuo, X., Qian, Y., Wang, S., Xu, C., Gong, M., & Cheng, L. (2020). 3d human shape reconstruction from a polarization image. 
In: ECCV","DOI":"10.1007\/978-3-030-58568-6_21"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02478-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02478-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02478-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,9]],"date-time":"2025-09-09T08:12:15Z","timestamp":1757405535000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02478-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,30]]},"references-count":1284,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["2478"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02478-4","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,30]]},"assertion":[{"value":"23 November 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 May 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 May 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}