{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T20:05:54Z","timestamp":1773950754188,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":46,"publisher":"ACM","license":[{"start":{"date-parts":[[2024,8,24]],"date-time":"2024-08-24T00:00:00Z","timestamp":1724457600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2024,8,25]]},"DOI":"10.1145\/3637528.3671945","type":"proceedings-article","created":{"date-parts":[[2024,8,25]],"date-time":"2024-08-25T04:54:55Z","timestamp":1724561695000},"page":"2663-2673","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0318-0911","authenticated-orcid":false,"given":"Jiang-Xin","family":"Shi","sequence":"first","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, School of Artificial Intelligence, Nanjing University, Nanjing, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-4745-4453","authenticated-orcid":false,"given":"Chi","family":"Zhang","sequence":"additional","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2766-8209","authenticated-orcid":false,"given":"Tong","family":"Wei","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Key Laboratory of Computer Network and Information, Integration of Ministry of Education, Southeast University, Nanjing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7727-4304","authenticated-orcid":false,"given":"Yu-Feng","family":"Li","sequence":"additional","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, School of Artificial Intelligence, Nanjing University, Nanjing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,8,24]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Lukas Bossard Matthieu Guillaumin and Luc Van Gool. 2014. Food-101 -- Mining Discriminative Components with Random Forests. In ECCV.","DOI":"10.1007\/978-3-319-10599-4_29"},{"key":"e_1_3_2_2_2_1","unstructured":"Tom Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared D Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell et al. 2020. Language models are few-shot learners. In NeurIPS."},{"key":"e_1_3_2_2_3_1","unstructured":"Kaidi Cao Colin Wei Adrien Gaidon Nikos Arechiga and Tengyu Ma. 2019. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. In NeurIPS."},{"key":"e_1_3_2_2_4_1","doi-asserted-by":"crossref","unstructured":"Mircea Cimpoi Subhransu Maji Iasonas Kokkinos Sammy Mohamed and Andrea Vedaldi. 2014. Describing Textures in the Wild. In CVPR.","DOI":"10.1109\/CVPR.2014.461"},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"crossref","unstructured":"Jia Deng Wei Dong Richard Socher Li-Jia Li Kai Li and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR.","DOI":"10.1109\/CVPRW.2009.5206848"},{"key":"e_1_3_2_2_6_1","volume-title":"Don't Stop Learning: Towards Continual Learning for the CLIP Model. arXiv preprint arXiv:2207.09248","author":"Ding Yuxuan","year":"2022","unstructured":"Yuxuan Ding, Lingqiao Liu, Chunna Tian, Jingyuan Yang, and Haoxuan Ding. 2022. Don't Stop Learning: Towards Continual Learning for the CLIP Model. 
arXiv preprint arXiv:2207.09248 (2022)."},{"key":"e_1_3_2_2_7_1","volume-title":"LPT: Long-tailed Prompt Tuning for Image Classification. In ICLR.","author":"Dong Bowen","year":"2023","unstructured":"Bowen Dong, Pan Zhou, Shuicheng Yan, and Wangmeng Zuo. 2023. LPT: Long-tailed Prompt Tuning for Image Classification. In ICLR."},{"key":"e_1_3_2_2_8_1","unstructured":"Alexey Dosovitskiy Lucas Beyer Alexander Kolesnikov Dirk Weissenborn Xiaohua Zhai Thomas Unterthiner Mostafa Dehghani Matthias Minderer Georg Heigold Sylvain Gelly Jakob Uszkoreit and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR."},{"key":"e_1_3_2_2_9_1","volume-title":"CVPR Workshops.","author":"Fei-Fei Li","unstructured":"Li Fei-Fei, R. Fergus, and P. Perona. 2004. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In CVPR Workshops."},{"key":"e_1_3_2_2_10_1","volume-title":"CLIP-Adapter: Better Vision-Language Models with Feature Adapters. arXiv preprint arXiv:2110.04544","author":"Gao Peng","year":"2021","unstructured":"Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. 2021. CLIP-Adapter: Better Vision-Language Models with Feature Adapters. arXiv preprint arXiv:2110.04544 (2021)."},{"key":"e_1_3_2_2_11_1","unstructured":"Kaiming He Xiangyu Zhang Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR."},{"key":"e_1_3_2_2_12_1","volume-title":"EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification","author":"Helber Patrick","year":"2019","unstructured":"Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. 
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2019)."},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"crossref","unstructured":"Dan Hendrycks Steven Basart Norman Mu Saurav Kadavath Frank Wang Evan Dorundo Rahul Desai Tyler Zhu Samyak Parajuli Mike Guo et al. 2021. The many faces of robustness: A critical analysis of out-of-distribution generalization. In ICCV.","DOI":"10.1109\/ICCV48922.2021.00823"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"crossref","unstructured":"Dan Hendrycks Kevin Zhao Steven Basart Jacob Steinhardt and Dawn Song. 2021. Natural Adversarial Examples. In CVPR.","DOI":"10.1109\/CVPR46437.2021.01501"},{"key":"e_1_3_2_2_15_1","unstructured":"Chao Jia Yinfei Yang Ye Xia Yi-Ting Chen Zarana Parekh Hieu Pham Quoc Le Yun-Hsuan Sung Zhen Li and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML."},{"key":"e_1_3_2_2_16_1","volume-title":"Maple: Multi-modal prompt learning. In CVPR.","author":"Khattak Muhammad Uzair","year":"2023","unstructured":"Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. 2023. Maple: Multi-modal prompt learning. In CVPR."},{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2013.77"},{"key":"e_1_3_2_2_18_1","volume-title":"Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML.","author":"Li Junnan","year":"2022","unstructured":"Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML."},{"key":"e_1_3_2_2_19_1","volume-title":"Yu","author":"Liu Ziwei","year":"2019","unstructured":"Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X. Yu. 2019. Large-Scale Long-Tailed Recognition in an Open World. 
In CVPR."},{"key":"e_1_3_2_2_20_1","volume-title":"A Simple Long-Tailed Recognition Baseline via Vision-Language Model. arXiv preprint arXiv:2111.14745","author":"Ma Teli","year":"2021","unstructured":"Teli Ma, Shijie Geng, Mengmeng Wang, Jing Shao, Jiasen Lu, Hongsheng Li, Peng Gao, and Yu Qiao. 2021. A Simple Long-Tailed Recognition Baseline via Vision-Language Model. arXiv preprint arXiv:2111.14745 (2021)."},{"key":"e_1_3_2_2_21_1","volume-title":"Fine-Grained Visual Classification of Aircraft. arXiv preprint arXiv:1306.5151","author":"Maji Subhransu","year":"2013","unstructured":"Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. 2013. Fine-Grained Visual Classification of Aircraft. arXiv preprint arXiv:1306.5151 (2013)."},{"key":"e_1_3_2_2_22_1","unstructured":"Chengzhi Mao Scott Geng Junfeng Yang Xin Wang and Carl Vondrick. 2023. Understanding zero-shot adversarial robustness for large-scale models. In ICLR."},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"crossref","unstructured":"Maria-Elena Nilsback and Andrew Zisserman. 2008. Automated Flower Classification over a Large Number of Classes. In ICVGIP.","DOI":"10.1109\/ICVGIP.2008.47"},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"crossref","unstructured":"Yassine Ouali Adrian Bulat Brais Martinez and Georgios Tzimiropoulos. 2023. Black Box Few-Shot Adaptation for Vision-Language Models. In ICCV.","DOI":"10.1109\/ICCV51070.2023.01424"},{"key":"e_1_3_2_2_25_1","volume-title":"Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning. arXiv preprint arXiv:2306.15955","author":"Jishnu Jaykumar","year":"2023","unstructured":"Jishnu Jaykumar P, Kamalesh Palanisamy, Yu-Wei Chao, Xinya Du, and Yu Xiang. 2023. Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning. arXiv preprint arXiv:2306.15955 (2023)."},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"crossref","unstructured":"Omkar M Parkhi Andrea Vedaldi Andrew Zisserman and C. V. Jawahar. 2012. Cats and dogs. 
In CVPR.","DOI":"10.1109\/CVPR.2012.6248092"},{"key":"e_1_3_2_2_27_1","volume-title":"Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML."},{"key":"e_1_3_2_2_28_1","unstructured":"Benjamin Recht Rebecca Roelofs Ludwig Schmidt and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet?. In ICML."},{"key":"e_1_3_2_2_29_1","unstructured":"Jiawei Ren Cunjun Yu Xiao Ma Haiyu Zhao Shuai Yi et al. 2020. Balanced meta-softmax for long-tailed visual recognition. In NeurIPS."},{"key":"e_1_3_2_2_30_1","unstructured":"Christoph Schuhmann Romain Beaumont Richard Vencu Cade Gordon Ross Wightman Mehdi Cherti Theo Coombes Aarush Katta Clayton Mullis Mitchell Wortsman Patrick Schramowski Srivatsa Kundurthy Katherine Crowson Ludwig Schmidt Robert Kaczmarczyk and Jenia Jitsev. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. In NeurIPS."},{"key":"e_1_3_2_2_31_1","volume-title":"NeurIPS Workshop Datacentric AI.","author":"Schuhmann Christoph","year":"2021","unstructured":"Christoph Schuhmann, Robert Kaczmarczyk, Aran Komatsuzaki, Aarush Katta, Richard Vencu, Romain Beaumont, Jenia Jitsev, Theo Coombes, and Clayton Mullis. 2021. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. In NeurIPS Workshop Datacentric AI."},{"key":"e_1_3_2_2_32_1","volume-title":"Parameter-Efficient Long-Tailed Recognition. arXiv preprint arXiv:2309.10019","author":"Shi Jiang-Xin","year":"2023","unstructured":"Jiang-Xin Shi, Tong Wei, Zhi Zhou, Xin-Yan Han, Jie-Jing Shao, and Yu-Feng Li. 2023. Parameter-Efficient Long-Tailed Recognition. 
arXiv preprint arXiv:2309.10019 (2023)."},{"key":"e_1_3_2_2_33_1","volume-title":"Clip models are few-shot learners: Empirical studies on vqa and visual entailment. arXiv preprint arXiv:2203.07190","author":"Song Haoyu","year":"2022","unstructured":"Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, and Furu Wei. 2022. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. arXiv preprint arXiv:2203.07190 (2022)."},{"key":"e_1_3_2_2_34_1","volume-title":"Amir Roshan Zamir, and Mubarak Shah","author":"Soomro Khurram","year":"2012","unstructured":"Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. arXiv preprint arXiv:1212.0402 (2012)."},{"key":"e_1_3_2_2_35_1","volume-title":"GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis. In CVPR.","author":"Tao Ming","year":"2023","unstructured":"Ming Tao, Bing-Kun Bao, Hao Tang, and Changsheng Xu. 2023. GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis. In CVPR."},{"key":"e_1_3_2_2_36_1","doi-asserted-by":"crossref","unstructured":"Changyao Tian Wenhai Wang Xizhou Zhu Jifeng Dai and Yu Qiao. 2022. VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition. In ECCV.","DOI":"10.1007\/978-3-031-19806-9_5"},{"key":"e_1_3_2_2_37_1","volume-title":"Łukasz Kaiser, and Illia Polosukhin","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In NeurIPS."},{"key":"e_1_3_2_2_38_1","unstructured":"Haohan Wang Songwei Ge Zachary Lipton and Eric P Xing. 2019. Learning robust global representations by penalizing local predictive power. In NeurIPS."},{"key":"e_1_3_2_2_39_1","volume-title":"Exploring Vision-Language Models for Imbalanced Learning. 
IJCV","author":"Wang Yidong","year":"2023","unstructured":"Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, and Shikun Zhang. 2023. Exploring Vision-Language Models for Imbalanced Learning. IJCV (2023)."},{"key":"e_1_3_2_2_40_1","unstructured":"Zhengbo Wang Jian Liang Ran He Nan Xu Zilei Wang and Tieniu Tan. 2023. Improving Zero-Shot Generalization for CLIP with Synthesized Prompts. In ICCV."},{"key":"e_1_3_2_2_41_1","doi-asserted-by":"crossref","unstructured":"Jianxiong Xiao James Hays Krista A. Ehinger Aude Oliva and Antonio Torralba. 2010. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR.","DOI":"10.1109\/CVPR.2010.5539970"},{"key":"e_1_3_2_2_42_1","volume-title":"Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917","author":"Yu Jiahui","year":"2022","unstructured":"Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917 (2022)."},{"key":"e_1_3_2_2_43_1","volume-title":"Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling. arXiv preprint arXiv:2111.03930","author":"Zhang Renrui","year":"2021","unstructured":"Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2021. Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling. arXiv preprint arXiv:2111.03930 (2021)."},{"key":"e_1_3_2_2_44_1","volume-title":"Learning without Forgetting for Vision-Language Models. arXiv preprint arXiv: 2305.19270","author":"Zhou Da-Wei","year":"2023","unstructured":"Da-Wei Zhou, Yuanhan Zhang, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan, and Ziwei Liu. 2023. Learning without Forgetting for Vision-Language Models. 
arXiv preprint arXiv: 2305.19270 (2023)."},{"key":"e_1_3_2_2_45_1","volume-title":"Chen Change Loy, and Ziwei Liu","author":"Zhou Kaiyang","year":"2022","unstructured":"Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022. Conditional Prompt Learning for Vision-Language Models. In CVPR."},{"key":"e_1_3_2_2_46_1","volume-title":"Chen Change Loy, and Ziwei Liu","author":"Zhou Kaiyang","year":"2022","unstructured":"Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022. Learning to Prompt for Vision-Language Models. IJCV (2022)."}],"event":{"name":"KDD '24: The 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining","location":"Barcelona Spain","acronym":"KDD '24","sponsor":["SIGMOD ACM Special Interest Group on Management of Data","SIGKDD ACM Special Interest Group on Knowledge Discovery in Data"]},"container-title":["Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637528.3671945","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3637528.3671945","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:06:05Z","timestamp":1750291565000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3637528.3671945"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,24]]},"references-count":46,"alternative-id":["10.1145\/3637528.3671945","10.1145\/3637528"],"URL":"https:\/\/doi.org\/10.1145\/3637528.3671945","relation":{},"subject":[],"published":{"date-parts":[[2024,8,24]]},"assertion":[{"value":"2024-08-24","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}