{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T12:19:58Z","timestamp":1776082798308,"version":"3.50.1"},"reference-count":601,"publisher":"Emerald","issue":"1-2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,5,6]]},"abstract":"<jats:p>This monograph presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language capabilities, focusing on the transition from specialist models to generalpurpose assistants. The research landscape encompasses five core topics, categorized into two classes. (i) We start with a survey of well-established research areas: multimodal foundation models pre-trained for specific purposes, including two topics \u2013 methods of learning vision backbones for visual understanding and text-to-image generation. (ii) Then, we present recent advances in exploratory, open research areas: multimodal foundation models that aim to play the role of general-purpose assistants, including three topics \u2013 unified vision models inspired by large language models (LLMs), end-to-end training of multimodal LLMs, and chaining multimodal tools with LLMs. The target audiences of the monograph are researchers, graduate students, and professionals in computer vision and vision-language multimodal communities who are eager to learn the basics and recent advances in multimodal foundation models.<\/jats:p>","DOI":"10.1561\/0600000110","type":"journal-article","created":{"date-parts":[[2024,5,6]],"date-time":"2024-05-06T04:29:37Z","timestamp":1714969777000},"page":"1-214","source":"Crossref","is-referenced-by-count":103,"title":["Multimodal Foundation Models: From Specialists to General-Purpose Assistants"],"prefix":"10.1561","volume":"16","author":[{"given":"Chunyuan","family":"Li","sequence":"first","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Zhe","family":"Gan","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Zhengyuan","family":"Yang","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Jianwei","family":"Yang","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Linjie","family":"Li","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Lijuan","family":"Wang","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]},{"given":"Jianfeng","family":"Gao","sequence":"additional","affiliation":[{"name":"Microsoft Corporation ,","place":["USA"]}]}],"member":"140","published-online":{"date-parts":[[2024,5,6]]},"reference":[{"key":"2026032614365299900_ref001","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.00217","article-title":"A-star: Test-time attention segregation and retention for text-to-image synthesis","volume-title":"arXiv preprint arXiv:2306.14544","author":"Agarwal","year":"2023"},{"key":"2026032614365299900_ref002","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2019.00904","article-title":"Nocaps: Novel object captioning at scale","volume-title":"ICCV","author":"Agrawal","year":"2019"},{"key":"2026032614365299900_ref003","article-title":"Do as i can, not as i say: Grounding language in robotic affordances","volume-title":"arXiv preprint 
arXiv:2204.01691","author":"Ahn","year":"2022"},{"key":"2026032614365299900_ref004","doi-asserted-by":"crossref","DOI":"10.52202\/068431-1723","article-title":"Flamingo: A visual language model for few-shot learning","volume-title":"arXiv preprint arXiv:2204.14198","author":"Alayrac","year":"2022"},{"key":"2026032614365299900_ref005","doi-asserted-by":"crossref","DOI":"10.14569\/IJACSA.2017.081052","article-title":"Text summarization techniques: A brief survey","volume-title":"arXiv preprint arXiv:1707.02268","author":"Allahyari","year":"2017"},{"key":"2026032614365299900_ref006","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-19821-2_7","article-title":"Self-supervised classification network","volume-title":"ECCV","author":"Amrani","year":"2022"},{"key":"2026032614365299900_ref007","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2018.00636","article-title":"Bottom-up and top-down attention for image captioning and visual question answering","volume-title":"CVPR","author":"Anderson","year":"2018"},{"key":"2026032614365299900_ref008","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2015.279","article-title":"Vqa: Visual question answering","volume-title":"ICCV","author":"Antol","year":"2015"},{"key":"2026032614365299900_ref009","article-title":"A theoretical analysis of contrastive unsupervised representation learning","volume-title":"arXiv preprint arXiv:1902.09229","author":"Arora","year":"2019"},{"key":"2026032614365299900_ref010","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-19821-2_26","article-title":"Masked siamese networks for label-efficient learning","volume-title":"ECCV","author":"Assran","year":"2022"},{"key":"2026032614365299900_ref011","doi-asserted-by":"crossref","DOI":"10.1145\/3610548.3618154","article-title":"Break-a-scene: Extracting multiple concepts from a single image","volume-title":"arXiv preprint arXiv:2305.16311","author":"Avrahami","year":"2023"},{"key":"2026032614365299900_ref012","article-title":"Blended latent diffusion","volume-title":"arXiv preprint arXiv:2206.02779","author":"Avrahami","year":"2022"},{"key":"2026032614365299900_ref013","first-page":"18 370","article-title":"Spatext: Spatiotextual representation for controllable image generation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Avrahami","year":"2023"},{"key":"2026032614365299900_ref014","first-page":"18 208","article-title":"Blended diffusion for text-driven editing of natural images","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Avrahami","year":"2022"},{"key":"2026032614365299900_ref015","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.7733589","volume-title":"Openflamingo","author":"Awadalla","year":"2023"},{"key":"2026032614365299900_ref016","article-title":"Foundational models defining a new era in vision: A survey and outlook","volume-title":"arXiv preprint arXiv:2307.13721","author":"Awais","year":"2023"},{"key":"2026032614365299900_ref017","article-title":"Learning representations by maximizing mutual information across views","volume-title":"NeurIPS","author":"Bachman","year":"2019"},{"key":"2026032614365299900_ref018","article-title":"Data2vec: A general framework for self-supervised learning in speech, vision and language","volume-title":"ICML","author":"Baevski","year":"2022"},{"key":"2026032614365299900_ref019","article-title":"Neural machine translation by jointly learning to align and 
translate","volume-title":"ICLR","author":"Bahdanau","year":"2015"},{"key":"2026032614365299900_ref020","article-title":"Qwen-vl: A frontier large vision-language model with versatile abilities","volume-title":"arXiv preprint arXiv:2308.12966","author":"Bai","year":"2023"},{"key":"2026032614365299900_ref021","unstructured":"S.\n              Bai\n            , S.Yang, J.Bai, P.Wang, X.Zhang, J.Lin, X.Wang, C.Zhou, and J.Zhou, Touchstone: Evaluating vision-language models by language models, 2023. URL: https:\/\/arxiv.org\/abs\/2308.16890."},{"key":"2026032614365299900_ref022","article-title":"Ediffi: Text-to-image diffusion models with an ensemble of expert denoisers","volume-title":"arXiv preprint arXiv:2211.01324","author":"Balaji","year":"2022"},{"key":"2026032614365299900_ref023","article-title":"Towards in-context scene understanding","volume-title":"arXiv preprint arXiv:2306.01667","author":"Bala\u017eevi\u0107","year":"2023"},{"key":"2026032614365299900_ref024","first-page":"384","article-title":"Zero-shot object detection","volume-title":"Proceedings of the European conference on computer vision (ECCV)","author":"Bansal","year":"2018"},{"key":"2026032614365299900_ref025","first-page":"843","article-title":"Universal guidance for diffusion models","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Bansal","year":"2023"},{"key":"2026032614365299900_ref026","article-title":"BEiT: Bert pre-training of image transformers","volume-title":"ICLR","author":"Bao","year":"2022"},{"key":"2026032614365299900_ref027","doi-asserted-by":"crossref","first-page":"25 005","DOI":"10.52202\/068431-1813","article-title":"Visual prompting via image inpainting","volume":"35","author":"Bar","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2026032614365299900_ref028","article-title":"Vicreg: Variance-invariance-covariance regularization for self-supervised learning","volume-title":"arXiv preprint arXiv:2105.04906","author":"Bardes","year":"2021"},{"key":"2026032614365299900_ref029","doi-asserted-by":"crossref","first-page":"1533","DOI":"10.18653\/v1\/D13-1160","article-title":"Semantic parsing on freebase from question-answer pairs","volume-title":"Proceedings of the 2013 conference on empirical methods in natural language processing","author":"Berant","year":"2013"},{"key":"2026032614365299900_ref030","unstructured":"Y.\n              Bitton\n            , H.Bansal, J.Hessel, R.Shao, W.Zhu, A.Awadalla, J.Gardner, R.Taori, and L.Schimdt, Visit-bench: A benchmark for vision-language instruction following inspired by real-world use, 2023. 
arXiv: 2308.06595."},{"key":"2026032614365299900_ref031","article-title":"Training diffusion models with reinforcement learning","volume-title":"arXiv preprint arXiv:2305.13301","author":"Black","year":"2023"},{"key":"2026032614365299900_ref032","article-title":"Stable video diffusion: Scaling latent video diffusion models to large datasets","author":"Blattmann","year":"2023"},{"key":"2026032614365299900_ref033","doi-asserted-by":"crossref","DOI":"10.52202\/068431-1114","article-title":"Retrieval-augmented diffusion models","volume-title":"arXiv preprint arXiv:2204.11824","author":"Blattmann","year":"2022"},{"key":"2026032614365299900_ref034","first-page":"9157","article-title":"Yolact: Real-time instance segmentation","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Bolya","year":"2019"},{"key":"2026032614365299900_ref035","article-title":"On the opportunities and risks of foundation models","volume-title":"arXiv preprint arXiv:2108.07258","author":"Bommasani","year":"2021"},{"key":"2026032614365299900_ref036","first-page":"18 392","article-title":"Instructpix2pix: Learning to follow image editing instructions","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Brooks","year":"2023"},{"key":"2026032614365299900_ref037","article-title":"Language models are few-shot learners","volume-title":"NeuIPS","author":"Brown","year":"2020"},{"key":"2026032614365299900_ref038","unstructured":"M.\n              Byeon\n            , B.Park, H.Kim, S.Lee, W.Baek, and S.Kim, Coyo-700m: Image-text pair dataset, 2022. URL: https:\/\/github.com\/kakaobrain\/coyo-dataset."},{"key":"2026032614365299900_ref039","article-title":"Making large multimodal models understand arbitrary visual prompts","volume-title":"arXiv:2312.00784","author":"Cai","year":"2023"},{"key":"2026032614365299900_ref040","article-title":"Large language models as tool makers","volume-title":"arXiv preprint arXiv:2305.17126","author":"Cai","year":"2023"},{"key":"2026032614365299900_ref041","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-20059-5_17","article-title":"X-detr: A versatile architecture for instance-wise vision-language tasks","volume-title":"ECCV","author":"Cai","year":"2022"},{"key":"2026032614365299900_ref042","article-title":"Less is more: Removing text-regions improves clip training efficiency and robustness","volume-title":"arXiv preprint arXiv:2305.05095","author":"Cao","year":"2023"},{"key":"2026032614365299900_ref043","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58452-8_13","article-title":"End-to-end object detection with transformers","volume-title":"ECCV","author":"Carion","year":"2020"},{"key":"2026032614365299900_ref044","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-01264-9_9","article-title":"Deep clustering for unsupervised learning of visual features","volume-title":"ECCV","author":"Caron","year":"2018"},{"key":"2026032614365299900_ref045","article-title":"Unsupervised learning of visual features by contrasting cluster assignments","volume-title":"NeurIPS","author":"Caron","year":"2020"},{"key":"2026032614365299900_ref046","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV48922.2021.00951","article-title":"Emerging properties in self-supervised vision transformers","volume-title":"ICCV","author":"Caron","year":"2021"},{"key":"2026032614365299900_ref047","first-page":"5230","article-title":"Annotating object instances with a polygon-rnn","volume-title":"Proceedings of the IEEE 
conference on computer vision and pattern recognition","author":"Castrejon","year":"2017"},{"key":"2026032614365299900_ref048","article-title":"Muse: Text-to-image generation via masked generative transformers","volume-title":"arXiv preprint arXiv:2301.00704","author":"Chang","year":"2023"},{"key":"2026032614365299900_ref049","first-page":"11 315","article-title":"Maskgit: Masked generative image transformer","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Chang","year":"2022"},{"key":"2026032614365299900_ref050","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.00356","article-title":"Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts","volume-title":"CVPR","author":"Changpinyo","year":"2021"},{"key":"2026032614365299900_ref051","doi-asserted-by":"crossref","DOI":"10.1145\/3592116","article-title":"Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models","volume-title":"arXiv preprint arXiv:2301.13826","author":"Chefer","year":"2023"},{"key":"2026032614365299900_ref052","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.emnlp-main.932","article-title":"Stair: Learning sparse text and image representation in grounded tokens","volume-title":"arXiv preprint arXiv:2301.13081","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref053","article-title":"Visual instruction tuning with polite flamingo","volume-title":"arXiv preprint arXiv:2307.01003","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref054","article-title":"Vlp: A survey on vision-language pre-training","volume-title":"arXiv preprint arXiv:2202.09061","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref055","article-title":"Shikra: Unleashing multimodal llm\u2019s referential dialogue magic","volume-title":"arXiv preprint arXiv:2306.15195","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref056","first-page":"0","article-title":"Object grounding via iterative context reasoning","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops","author":"Chen","year":"2019"},{"key":"2026032614365299900_ref057","article-title":"Rethinking atrous convolution for semantic image segmentation","volume-title":"arXiv preprint arXiv:1706.05587","author":"Chen","year":"2017"},{"key":"2026032614365299900_ref058","article-title":"Sharegpt4v: Improving large multi-modal models with better captions","volume-title":"arXiv preprint arXiv:2311.12793","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref059","article-title":"Evaluating large language models trained on code","volume-title":"arXiv preprint arXiv:2107.03374","author":"Chen","year":"2021"},{"key":"2026032614365299900_ref060","article-title":"Training-free layout control with cross-attention guidance","volume-title":"arXiv preprint arXiv:2304.03373","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref061","article-title":"Group detr: Fast training convergence with decoupled one-to-many label assignment","volume-title":"arXiv preprint arXiv:2207.13085","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref062","article-title":"A simple framework for contrastive learning of visual representations","volume-title":"ICML","author":"Chen","year":"2020"},{"key":"2026032614365299900_ref063","article-title":"Big self-supervised models are strong semi-supervised 
learners","volume-title":"NeurIPS","author":"Chen","year":"2020"},{"key":"2026032614365299900_ref064","article-title":"Pix2seq: A language modeling framework for object detection","volume-title":"ICLR","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref065","doi-asserted-by":"crossref","DOI":"10.52202\/068431-2272","article-title":"A unified sequence interface for vision tasks","volume-title":"arXiv preprint arXiv:2206.07669","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref066","article-title":"Llava-interactive: An all-in-one demo for image chat, segmentation, generation and editing","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref067","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2022.emnlp-main.375","article-title":"Murag: Multimodal retrieval-augmented generator for open question answering over images and text","volume-title":"arXiv preprint arXiv:2210.02928","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref068","article-title":"Subject-driven text-to-image generation via apprenticeship learning","volume-title":"arXiv preprint arXiv:2304.00186","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref069","article-title":"Re-imagen: Retrieval-augmented text-to-image generator","volume-title":"arXiv preprint arXiv:2209.14491","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref070","article-title":"Pali-x: On scaling up a multilingual vision and language model","volume-title":"arXiv preprint arXiv:2305.18565","author":"Chen","year":"2023"},{"key":"2026032614365299900_ref071","article-title":"Pali: A jointly-scaled multilingual language-image model","volume-title":"arXiv preprint arXiv:2209.06794","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref072","first-page":"7345","article-title":"Conditional diffusion for interactive segmentation","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Chen","year":"2021"},{"key":"2026032614365299900_ref073","first-page":"1300","article-title":"Focalclick: Towards practical interactive image segmentation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref074","article-title":"Context autoencoder for self-supervised representation learning","volume-title":"arXiv preprint arXiv:2202.03026","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref075","article-title":"Improved baselines with momentum contrastive learning","volume-title":"arXiv preprint arXiv:2003.04297","author":"Chen","year":"2020"},{"key":"2026032614365299900_ref076","article-title":"Microsoft COCO captions: Data collection and evaluation server","volume-title":"arXiv preprint arXiv:1504.00325","author":"Chen","year":"2015"},{"key":"2026032614365299900_ref077","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.01549","article-title":"Exploring simple siamese representation learning","volume-title":"CVPR","author":"Chen","year":"2021"},{"key":"2026032614365299900_ref078","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV48922.2021.00950","article-title":"An empirical study of training self-supervised vision transformers","volume-title":"ICCV","author":"Chen","year":"2021"},{"key":"2026032614365299900_ref079","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58577-8_7","article-title":"UNITER: Universal image-text representation 
learning","volume-title":"ECCV","author":"Chen","year":"2020"},{"key":"2026032614365299900_ref080","article-title":"Vision transformer adapter for dense predictions","volume-title":"arXiv preprint arXiv:2205.08534","author":"Chen","year":"2022"},{"key":"2026032614365299900_ref081","first-page":"1290","article-title":"Masked-attention mask transformer for universal image segmentation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Cheng","year":"2022"},{"key":"2026032614365299900_ref082","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.00276","article-title":"Reproducible scaling laws for contrastive language-image learning","volume-title":"CVPR","author":"Cherti","year":"2023"},{"key":"2026032614365299900_ref083","article-title":"Unifying vision-and- language tasks via text generation","volume-title":"ICML","author":"Cho","year":"2021"},{"key":"2026032614365299900_ref084","article-title":"Diagnostic benchmark and iterative inpainting for layout-guided image generation","volume-title":"arXiv preprint arXiv:2304.06671","author":"Cho","year":"2023"},{"key":"2026032614365299900_ref085","article-title":"Palm: Scaling language modeling with pathways","volume-title":"arXiv preprint arXiv:2204.02311","author":"Chowdhery","year":"2022"},{"key":"2026032614365299900_ref086","unstructured":"T.\n              Computer\n            \n          , Redpajama-data: An open source recipe to reproduce llama training dataset, 2023. URL: https:\/\/github.com\/togethercomputer\/RedPajama-Data."},{"key":"2026032614365299900_ref087","article-title":"Multi-task learning with deep neural networks: A survey","volume-title":"arXiv preprint arXiv:2009.09796","author":"Crawshaw","year":"2020"},{"issue":"1","key":"2026032614365299900_ref088","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1109\/MSP.2017.2765202","article-title":"Generative adversarial networks: An overview","volume":"35","author":"Creswell","year":"2018","journal-title":"IEEE signal processing magazine"},{"key":"2026032614365299900_ref089","article-title":"Samaug: Point prompt augmentation for segment anything model","volume-title":"arXiv preprint arXiv:2307.01187","author":"Dai","year":"2023"},{"key":"2026032614365299900_ref090","doi-asserted-by":"crossref","DOI":"10.52202\/075280-2142","article-title":"Instructblip: Towards general-purpose vision-language models with instruction tuning","volume-title":"arXiv preprint arXiv:2305.06500","author":"Dai","year":"2023"},{"key":"2026032614365299900_ref091","first-page":"7373","article-title":"Dynamic head: Unifying object detection heads with attentions","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Dai","year":"2021"},{"issue":"2","key":"2026032614365299900_ref092","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/1348246.1348248","article-title":"Image retrieval: Ideas, influences, and trends of the new age","volume":"40","author":"Datta","year":"2008","journal-title":"ACM Computing Surveys (Csur)"},{"key":"2026032614365299900_ref093","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2018.00808","article-title":"Visual grounding via accumulated attention","volume-title":"CVPR","author":"Deng","year":"2018"},{"key":"2026032614365299900_ref094","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2009.5206848","article-title":"Imagenet: A large-scale hierarchical image 
database","volume-title":"CVPR","author":"Deng","year":"2009"},{"key":"2026032614365299900_ref095","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.01101","article-title":"Virtex: Learning visual representations from textual annotations","volume-title":"CVPR","author":"Desai","year":"2021"},{"key":"2026032614365299900_ref096","article-title":"Redcaps: Web-curated image-text data created by the people, for the people","volume-title":"NeurIPS, Track on Datasets and Benchmarks","author":"Desai","year":"2021"},{"key":"2026032614365299900_ref097","doi-asserted-by":"crossref","DOI":"10.52202\/075280-0441","article-title":"Qlora: Efficient finetuning of quantized llms","volume-title":"arXiv preprint arXiv:2305.14314","author":"Dettmers","year":"2023"},{"key":"2026032614365299900_ref098","article-title":"Bert: Pre-training of deep bidirectional transformers for language understanding","volume-title":"NAACL","author":"Devlin","year":"2019"},{"key":"2026032614365299900_ref099","article-title":"Diffusion models beat gans on image synthesis","volume-title":"NeurIPS","author":"Dhariwal","year":"2021"},{"key":"2026032614365299900_ref100","doi-asserted-by":"crossref","unstructured":"J.\n              Ding\n            , N.Xue, G.-S.Xia, and D.Dai, Decoupling zero-shot semantic segmentation, 2022. arXiv: 2112.07910 [cs.CV].","DOI":"10.1109\/CVPR52688.2022.01129"},{"key":"2026032614365299900_ref101","article-title":"Open-vocabulary panoptic segmentation with maskclip","volume-title":"arXiv preprint arXiv:2208.08984","author":"Ding","year":"2022"},{"key":"2026032614365299900_ref102","first-page":"21 898","article-title":"Solq: Segmenting objects by learning queries","volume":"34","author":"Dong","year":"2021","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2026032614365299900_ref103","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-20056-4_15","article-title":"Bootstrapped masked autoencoders for vision bert pretraining","volume-title":"ECCV","author":"Dong","year":"2022"},{"key":"2026032614365299900_ref104","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v37i1.25130","article-title":"Peco: Perceptual codebook for bert pre-training of vision transformers","volume-title":"AAAI","author":"Dong","year":"2023"},{"key":"2026032614365299900_ref105","article-title":"An image is worth 16x16 words: Transformers for image recognition at scale","volume-title":"ICLR","author":"Dosovitskiy","year":"2021"},{"key":"2026032614365299900_ref106","doi-asserted-by":"crossref","DOI":"10.52202\/068431-2387","article-title":"Coarse-to-fine vision-language pre-training with fusion in the backbone","volume-title":"NeurIPS","author":"Dou","year":"2022"},{"key":"2026032614365299900_ref107","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01763","article-title":"An empirical study of training end-to-end vision-and-language transformers","volume-title":"CVPR","author":"Dou","year":"2022"},{"key":"2026032614365299900_ref108","article-title":"PaLME: An embodied multimodal language model","volume-title":"arXiv preprint arXiv:2303.03378","author":"Driess","year":"2023"},{"key":"2026032614365299900_ref109","doi-asserted-by":"crossref","DOI":"10.24963\/ijcai.2022\/762","article-title":"A survey of vision-language pre-trained models","volume-title":"IJCAI survey track","author":"Du","year":"2022"},{"key":"2026032614365299900_ref110","doi-asserted-by":"crossref","first-page":"2007","DOI":"10.1007\/s11063-019-10163-0","article-title":"Image inpainting: A 
review","volume":"51","author":"Elharrouss","year":"2020","journal-title":"Neural Processing Letters"},{"key":"2026032614365299900_ref111","article-title":"Whitening for self-supervised representation learning","volume-title":"ICML","author":"Ermolov","year":"2021"},{"key":"2026032614365299900_ref112","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.01268","article-title":"Taming transformers for high-resolution image synthesis","volume-title":"CVPR","author":"Esser","year":"2021"},{"issue":"5","key":"2026032614365299900_ref113","article-title":"The pascal visual object classes challenge 2012 (voc2012) development kit","volume":"8","author":"Everingham","year":"2011","journal-title":"Pattern Analysis, Statistical Modelling and Computational Learning, Tech. Rep"},{"key":"2026032614365299900_ref114","doi-asserted-by":"crossref","DOI":"10.52202\/075280-1544","article-title":"Improving clip training with language rewrites","volume-title":"arXiv preprint arXiv:2305.20088","author":"Fan","year":"2023"},{"key":"2026032614365299900_ref115","article-title":"Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models","volume-title":"arXiv preprint arXiv:2305.16381","author":"Fan","year":"2023"},{"key":"2026032614365299900_ref116","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01855","article-title":"Eva: Exploring the limits of masked visual representation learning at scale","volume-title":"CVPR","author":"Fang","year":"2023"},{"key":"2026032614365299900_ref117","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01748","article-title":"Injecting semantic concepts into end-to-end image captioning","volume-title":"CVPR","author":"Fang","year":"2022"},{"key":"2026032614365299900_ref118","article-title":"Masked autoencoders as spatiotemporal learners","volume-title":"NeurIPS","author":"Feichtenhofer","year":"2022"},{"key":"2026032614365299900_ref119","first-page":"701","volume-title":"European Conference on Computer Vision","author":"Feng","year":"2022"},{"key":"2026032614365299900_ref120","article-title":"Training-free structured diffusion guidance for compositional text-to-image synthesis","volume-title":"The Eleventh International Conference on Learning Representations","author":"Feng","year":"2022"},{"key":"2026032614365299900_ref121","article-title":"Layoutgpt: Compositional visual planning and generation with large language models","volume-title":"arXiv preprint arXiv:2305.15393","author":"Feng","year":"2023"},{"key":"2026032614365299900_ref122","article-title":"Devise: A deep visual-semantic embedding model","volume-title":"NeurIPS","author":"Frome","year":"2013"},{"key":"2026032614365299900_ref123","article-title":"Mme: A comprehensive evaluation benchmark for multimodal large language models","volume-title":"arXiv preprint arXiv:2306.13394","author":"Fu","year":"2023"},{"key":"2026032614365299900_ref124","article-title":"Guiding instruction-based image editing via multimodal large language models","author":"Fu","year":"2023"},{"key":"2026032614365299900_ref125","article-title":"Datacomp: In search of the next generation of multimodal datasets","volume-title":"arXiv preprint arXiv:2304.14108","author":"Gadre","year":"2023"},{"key":"2026032614365299900_ref126","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-19784-0_6","article-title":"Make-a-scene: Scene-based text-to-image generation with human priors","volume-title":"arXiv preprint arXiv:2203.13131","author":"Gafni","year":"2022"},{"key":"2026032614365299900_ref127","article-title":"An 
image is worth one word: Personalizing text-to-image generation using textual inversion","volume-title":"arXiv preprint arXiv:2208.01618","author":"Gal","year":"2022"},{"key":"2026032614365299900_ref128","article-title":"Largescale adversarial training for vision-and-language representation learning","volume-title":"NeurIPS","author":"Gan","year":"2020"},{"key":"2026032614365299900_ref129","article-title":"Visionlanguage pre-training: Basics, recent advances, and future trends","volume-title":"Foundations and Trends\u00ae in Computer Graphics and Vision","author":"Gan","year":"2022"},{"key":"2026032614365299900_ref130","article-title":"Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn","volume-title":"arXiv preprint arXiv:2306.08640","author":"Gao","year":"2023"},{"key":"2026032614365299900_ref131","article-title":"Llama-adapter v2: Parameter-efficient visual instruction model","volume-title":"arXiv preprint arXiv:2304.15010","author":"Gao","year":"2023"},{"key":"2026032614365299900_ref132","article-title":"Convmae: Masked convolution meets masked autoencoders","volume-title":"arXiv preprint arXiv:2205.03892","author":"Gao","year":"2022"},{"key":"2026032614365299900_ref133","article-title":"Planting a seed of vision in large language model","volume-title":"arXiv preprint arXiv:2307.08041","author":"Ge","year":"2023"},{"key":"2026032614365299900_ref134","unstructured":"Gen-2\n          , https:\/\/research.runwayml.com\/gen2."},{"key":"2026032614365299900_ref135","unstructured":"X.\n              Geng\n             and H.Liu, Openllama: An open reproduction of llama, May2023. URL: https:\/\/github.com\/openlm-research\/open_llama."},{"key":"2026032614365299900_ref136","article-title":"Instructdiffusion: A generalist modeling interface for vision tasks","volume-title":"arXiv preprint arXiv:2309.03895","author":"Geng","year":"2023"},{"key":"2026032614365299900_ref137","article-title":"Open-vocabulary image segmentation","volume-title":"ECCV","author":"Ghiasi","year":"2022"},{"key":"2026032614365299900_ref138","first-page":"540","volume-title":"European Conference on Computer Vision","author":"Ghiasi","year":"2022"},{"key":"2026032614365299900_ref139","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01457","article-title":"Imagebind: One embedding space to bind them all","volume-title":"CVPR","author":"Girdhar","year":"2023"},{"key":"2026032614365299900_ref140","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2015.169","article-title":"Fast r-cnn","volume-title":"ICCV","author":"Girshick","year":"2015"},{"issue":"1","key":"2026032614365299900_ref141","doi-asserted-by":"crossref","first-page":"142","DOI":"10.1109\/TPAMI.2015.2437384","article-title":"Region-based convolutional networks for accurate object detection and segmentation","volume":"38","author":"Girshick","year":"2015","journal-title":"IEEE transactions on pattern analysis and machine intelligence"},{"key":"2026032614365299900_ref142","article-title":"Multimodal-gpt: A vision and language model for dialogue with humans","volume-title":"arXiv preprint arXiv:2305.04790","author":"Gong","year":"2023"},{"key":"2026032614365299900_ref143","doi-asserted-by":"crossref","DOI":"10.1145\/3422622","article-title":"Generative adversarial networks","volume-title":"Communications of the ACM","author":"Goodfellow","year":"2020"},{"key":"2026032614365299900_ref144","article-title":"Bootstrap your own latent-a new approach to self-supervised 
learning","volume-title":"NeurIPS","author":"Grill","year":"2020"},{"key":"2026032614365299900_ref145","article-title":"Dataseg: Taming a universal multi-dataset multi-task segmentation model","volume-title":"arXiv preprint arXiv:2306.01736","author":"Gu","year":"2023"},{"key":"2026032614365299900_ref146","article-title":"Open-vocabulary object detection via vision and language knowledge distillation","volume-title":"arXiv preprint arXiv:2104.13921","author":"Gu","year":"2021"},{"key":"2026032614365299900_ref147","article-title":"Open-vocabulary object detection via vision and language knowledge distillation","volume-title":"ICLR","author":"Gu","year":"2022"},{"key":"2026032614365299900_ref148","article-title":"The false promise of imitating proprietary llms","volume-title":"arXiv preprint arXiv:2305.15717","author":"Gudibande","year":"2023"},{"key":"2026032614365299900_ref149","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2022.naacl-main.70","article-title":"Kat: A knowledge augmented transformer for vision-and- language","volume-title":"NAACL","author":"Gui","year":"2022"},{"key":"2026032614365299900_ref150","article-title":"Detecting and preventing hallucinations in large vision language models","volume-title":"arXiv preprint arXiv:2308.06394","author":"Gunjal","year":"2023"},{"key":"2026032614365299900_ref151","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01591","article-title":"Towards general purpose vision systems: An end-to-end task-agnostic vision-language architecture","volume-title":"CVPR","author":"Gupta","year":"2022"},{"key":"2026032614365299900_ref152","first-page":"16 399","article-title":"Towards general purpose vision systems: An end-to-end taskagnostic vision-language architecture","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Gupta","year":"2022"},{"key":"2026032614365299900_ref153","article-title":"Visual programming: Compositional visual reasoning without training","volume-title":"arXiv preprint arXiv:2211.11559","author":"Gupta","year":"2022"},{"key":"2026032614365299900_ref154","article-title":"Visual programming: Compositional visual reasoning without training","volume":"abs\/2211.11559","author":"Gupta","year":"2022","journal-title":"ArXiv"},{"key":"2026032614365299900_ref155","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01436","article-title":"Visual programming: Compositional visual reasoning without training","volume-title":"CVPR","author":"Gupta","year":"2023"},{"key":"2026032614365299900_ref156","article-title":"Grit: General robust image task benchmark","volume-title":"arXiv preprint arXiv:2204.13653","author":"Gupta","year":"2022"},{"key":"2026032614365299900_ref157","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2018.00380","article-title":"Vizwiz grand challenge: Answering visual questions from blind people","volume-title":"CVPR","author":"Gurari","year":"2018"},{"key":"2026032614365299900_ref158","article-title":"Noise-contrastive estimation: A new estimation principle for unnormalized statistical models","volume-title":"AISTATS","author":"Gutmann","year":"2010"},{"key":"2026032614365299900_ref159","article-title":"Realm: Retrieval-augmented language model pre-training","volume-title":"arXiv preprint arXiv:2002.08909","author":"Guu","year":"2020"},{"key":"2026032614365299900_ref160","doi-asserted-by":"crossref","DOI":"10.1007\/s13735-020-00195-x","article-title":"A survey on instance segmentation: State of the art","volume-title":"International 
journal of multimedia information retrieval","author":"Hafiz","year":"2020"},{"key":"2026032614365299900_ref161","first-page":"59","volume-title":"European Conference on Computer Vision","author":"Harley","year":"2022"},{"key":"2026032614365299900_ref162","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01553","article-title":"Masked autoencoders are scalable vision learners","volume-title":"CVPR","author":"He","year":"2022"},{"key":"2026032614365299900_ref163","first-page":"9729","article-title":"Momentum contrast for unsupervised visual representation learning","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"He","year":"2020"},{"key":"2026032614365299900_ref164","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2017.322","article-title":"Mask r-cnn","volume-title":"ICCV","author":"He","year":"2017"},{"issue":"12","key":"2026032614365299900_ref165","first-page":"2341","article-title":"Single image haze removal using dark channel prior","volume":"33","author":"He","year":"2010","journal-title":"IEEE transactions on pattern analysis and machine intelligence"},{"key":"2026032614365299900_ref166","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2016.90","article-title":"Deep residual learning for image recognition","volume-title":"CVPR","author":"He","year":"2016"},{"key":"2026032614365299900_ref167","article-title":"DeBERTa: Decoding-enhanced bert with disentangled attention","volume-title":"ICLR","author":"He","year":"2021"},{"key":"2026032614365299900_ref168","article-title":"Is synthetic data from generative models ready for image recognition?","volume-title":"arXiv preprint arXiv:2210.07574","author":"He","year":"2022"},{"key":"2026032614365299900_ref169","article-title":"Data-efficient image recognition with contrastive predictive coding","volume-title":"ICML","author":"Henaff","year":"2020"},{"key":"2026032614365299900_ref170","article-title":"Prompt-to-prompt image editing with crossattention control","volume-title":"The Eleventh International Conference on Learning Representations","author":"Hertz","year":"2022"},{"key":"2026032614365299900_ref171","article-title":"Learning deep representations by mutual information estimation and maximization","volume-title":"arXiv preprint arXiv:1808.06670","author":"Hjelm","year":"2018"},{"key":"2026032614365299900_ref172","article-title":"Imagen video: High definition video generation with diffusion models","volume-title":"arXiv preprint arXiv:2210.02303","author":"Ho","year":"2022"},{"key":"2026032614365299900_ref173","article-title":"Denoising diffusion probabilistic models","volume-title":"NeurIPS","author":"Ho","year":"2020"},{"key":"2026032614365299900_ref174","article-title":"Training compute-optimal large language models","volume-title":"arXiv preprint arXiv:2203.15556","author":"Hoffmann","year":"2022"},{"key":"2026032614365299900_ref175","article-title":"3d-llm: Injecting the 3d world into large language models","volume-title":"arXiv preprint arXiv:2307.12981","author":"Hong","year":"2023"},{"key":"2026032614365299900_ref176","article-title":"Lora: Low-rank adaptation of large language models","volume-title":"arXiv preprint arXiv:2106.09685","author":"Hu","year":"2021"},{"key":"2026032614365299900_ref177","first-page":"108","volume-title":"European Conference on Computer Vision","author":"Hu","year":"2016"},{"key":"2026032614365299900_ref178","first-page":"1439","article-title":"Unit: Multimodal multitask learning with a unified 
transformer","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Hu","year":"2021"},{"key":"2026032614365299900_ref179","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV48922.2021.00147","article-title":"Unit: Multimodal multitask learning with a unified transformer","volume-title":"ICCV","author":"Hu","year":"2021"},{"key":"2026032614365299900_ref180","article-title":"Bliva: A simple multimodal llm for better handling of text-rich visual questions","volume-title":"arXiv preprint arXiv:2308.09936","author":"Hu","year":"2023"},{"key":"2026032614365299900_ref181","doi-asserted-by":"crossref","DOI":"10.52202\/068431-1454","article-title":"Green hierarchical vision transformer for masked image modeling","volume-title":"NeurIPS","author":"Huang","year":"2022"},{"key":"2026032614365299900_ref182","article-title":"Audiogpt: Understanding and generating speech, music, sound, and talking head","volume-title":"arXiv preprint arXiv:2304.12995","author":"Huang","year":"2023"},{"key":"2026032614365299900_ref183","article-title":"Language is not all you need: Aligning perception with language models","volume-title":"arXiv preprint arXiv:2302.14045","author":"Huang","year":"2023"},{"key":"2026032614365299900_ref184","article-title":"Instruct2act: Mapping multi-modality instructions to robotic actions with large language model","volume-title":"arXiv preprint arXiv:2305.11176","author":"Huang","year":"2023"},{"key":"2026032614365299900_ref185","first-page":"9118","article-title":"Language models as zero-shot planners: Extracting actionable knowledge for embodied agents","volume-title":"International Conference on Machine Learning","author":"Huang","year":"2022"},{"key":"2026032614365299900_ref186","article-title":"Sparkles: Unlocking chats across multiple images for multimodal instruction-following models","volume-title":"arXiv preprint arXiv:2308.16463","author":"Huang","year":"2023"},{"key":"2026032614365299900_ref187","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.01278","article-title":"Seeing out of the box: End-to-end pre-training for vision-language representation learning","volume-title":"CVPR","author":"Huang","year":"2021"},{"key":"2026032614365299900_ref188","article-title":"Pixel-BERT: Aligning image pixels with text by deep multi-modal transformers","volume-title":"arXiv preprint arXiv:2004.00849","author":"Huang","year":"2020"},{"key":"2026032614365299900_ref189","first-page":"7020","article-title":"Openvocabulary instance segmentation via robust cross-modal pseudolabeling","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Huynh","year":"2022"},{"key":"2026032614365299900_ref190","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.5143773","volume-title":"Openclip","author":"Ilharco","year":"2021"},{"key":"2026032614365299900_ref191","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.00094","article-title":"Zero-shot text-guided object generation with dream fields","author":"Jain","year":"2022"},{"key":"2026032614365299900_ref192","article-title":"One-former: One transformer to rule universal image segmentation","volume-title":"CVPR","author":"Jain","year":"2023"},{"key":"2026032614365299900_ref193","article-title":"A survey on contrastive self-supervised 
learning","volume-title":"Technologies","author":"Jaiswal","year":"2020"},{"issue":"9","key":"2026032614365299900_ref194","doi-asserted-by":"crossref","first-page":"1896","DOI":"10.1109\/TMM.2016.2576283","article-title":"Image co-segmentation via saliency co-fusion","volume":"18","author":"Jerripothula","year":"2016","journal-title":"IEEE Transactions on Multimedia"},{"key":"2026032614365299900_ref195","article-title":"Scaling up visual and visionlanguage representation learning with noisy text supervision","volume-title":"ICML","author":"Jia","year":"2021"},{"key":"2026032614365299900_ref196","article-title":"Self-supervised visual feature learning with deep neural networks: A survey","volume-title":"IEEE transactions on pattern analysis and machine intelligence","author":"Jing","year":"2020"},{"key":"2026032614365299900_ref197","doi-asserted-by":"crossref","first-page":"1943","DOI":"10.1109\/CVPR.2010.5539868","volume-title":"2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition","author":"Joulin","year":"2010"},{"key":"2026032614365299900_ref198","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV48922.2021.00180","article-title":"Mdetr-modulated detection for end-to-end multi-modal understanding","volume-title":"ICCV","author":"Kamath","year":"2021"},{"key":"2026032614365299900_ref199","first-page":"10 124","article-title":"Scaling up gans for text-to-image synthesis","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Kang","year":"2023"},{"key":"2026032614365299900_ref200","article-title":"Scaling laws for neural language models","volume-title":"arXiv preprint arXiv:2001.08361","author":"Kaplan","year":"2020"},{"key":"2026032614365299900_ref201","first-page":"6007","article-title":"Imagic: Text-based real image editing with diffusion models","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Kawar","year":"2023"},{"key":"2026032614365299900_ref202","doi-asserted-by":"crossref","DOI":"10.3115\/v1\/D14-1086","article-title":"Referitgame: Referring to objects in photographs of natural scenes","volume-title":"EMNLP","author":"Kazemzadeh","year":"2014"},{"key":"2026032614365299900_ref203","article-title":"ViLT: Vision-and-language transformer without convolution or region supervision","volume-title":"ICML","author":"Kim","year":"2021"},{"key":"2026032614365299900_ref204","article-title":"Auto-encoding variational bayes","volume-title":"arXiv preprint arXiv:1312.6114","author":"Kingma","year":"2013"},{"key":"2026032614365299900_ref205","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2019.00963","article-title":"Panoptic segmentation","volume-title":"CVPR","author":"Kirillov","year":"2019"},{"key":"2026032614365299900_ref206","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.00371","article-title":"Segment anything","volume-title":"arXiv preprint arXiv:2304.02643","author":"Kirillov","year":"2023"},{"key":"2026032614365299900_ref207","article-title":"Generating images with multimodal language models","volume-title":"arXiv preprint arXiv:2305.17216","author":"Koh","year":"2023"},{"key":"2026032614365299900_ref208","article-title":"Large language models are zero-shot reasoners","volume-title":"arXiv preprint arXiv:2205.11916","author":"Kojima","year":"2022"},{"key":"2026032614365299900_ref209","first-page":"6129","article-title":"Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse 
datasets and limited memory","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Kokkinos","year":"2017"},{"key":"2026032614365299900_ref210","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58558-7_29","article-title":"Big transfer (bit): General visual representation learning","volume-title":"ECCV","author":"Kolesnikov","year":"2020"},{"key":"2026032614365299900_ref211","article-title":"Uvim: A unified modeling approach for vision with learned guiding codes","volume-title":"arXiv preprint arXiv:2205.10337","author":"Kolesnikov","year":"2022"},{"key":"2026032614365299900_ref212","article-title":"Imagenet classification with deep convolutional neural networks","volume-title":"NeurIPS","author":"Krizhevsky","year":"2012"},{"key":"2026032614365299900_ref213","first-page":"1931","article-title":"Multi-concept customization of text-to-image diffusion","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Kumari","year":"2023"},{"key":"2026032614365299900_ref214","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-20059-5_29","article-title":"Findit: Generalized localization with natural language queries","volume-title":"ECCV","author":"Kuo","year":"2022"},{"key":"2026032614365299900_ref215","article-title":"Lisa: Reasoning segmentation via large language model","volume-title":"arXiv preprint arXiv:2308.00692","author":"Lai","year":"2023"},{"key":"2026032614365299900_ref216","article-title":"Discriminative regularization for generative models","volume-title":"arXiv preprint arXiv:1602.03220","author":"Lamb","year":"2016"},{"key":"2026032614365299900_ref217","first-page":"2879","article-title":"Mseg: A composite dataset for multi-domain semantic segmentation","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Lambert","year":"2020"},{"key":"2026032614365299900_ref218","first-page":"1558","article-title":"Autoencoding beyond pixels using a learned similarity metric","volume-title":"International conference on machine learning","author":"Larsen","year":"2016"},{"key":"2026032614365299900_ref219","article-title":"Obelisc: An open web-scale filtered dataset of interleaved image-text documents","volume-title":"arXiv preprint arXiv:2306.16527","author":"Lauren\u00e7on","year":"2023"},{"key":"2026032614365299900_ref220","first-page":"11 523","article-title":"Autoregressive image generation using residual quantization","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Lee","year":"2022"},{"key":"2026032614365299900_ref221","article-title":"Retrieval-augmented generation for knowledge-intensive nlp tasks","volume-title":"NeurIPS","author":"Lewis","year":"2020"},{"key":"2026032614365299900_ref222","article-title":"Benchmarking and analyzing generative data for visual recognition","volume-title":"arXiv preprint arXiv:2307.13697","author":"Li","year":"2023"},{"key":"2026032614365299900_ref223","article-title":"Mimic-it: Multi-modal in-context instruction tuning","volume-title":"arXiv preprint arXiv:2306.05425","author":"Li","year":"2023"},{"key":"2026032614365299900_ref224","article-title":"Otter: A multi-modal model with in-context instruction tuning","volume-title":"arXiv preprint arXiv:2305.03726","author":"Li","year":"2023"},{"key":"2026032614365299900_ref225","article-title":"Seedbench: Benchmarking multimodal llms with generative comprehension","volume-title":"arXiv 
preprint arXiv:2307.16125","author":"Li","year":"2023"},{"key":"2026032614365299900_ref226","article-title":"Language-driven semantic segmentation","volume-title":"ICLR","author":"Li","year":"2022"},{"key":"2026032614365299900_ref227","article-title":"Elevater: A benchmark and toolkit for evaluating language-augmented visual models","volume-title":"NeurIPS, Track on Datasets and Benchmarks","author":"Li","year":"2022"},{"key":"2026032614365299900_ref228","article-title":"Llava-med: Training a large language-and-vision assistant for biomedicine in one day","volume-title":"arXiv preprint arXiv:2306.00890","author":"Li","year":"2023"},{"key":"2026032614365299900_ref229","article-title":"Efficient self-supervised vision transformers for representation learning","volume-title":"arXiv preprint arXiv:2106.09785","author":"Li","year":"2021"},{"key":"2026032614365299900_ref230","article-title":"Semantic-sam: Segment and recognize anything at any granularity","volume-title":"arXiv preprint arXiv:2307.04767","author":"Li","year":"2023"},{"key":"2026032614365299900_ref231","article-title":"Vision-language intelligence: Tasks, representation learning, and large models","volume-title":"arXiv preprint arXiv:2203.01922","author":"Li","year":"2022"},{"key":"2026032614365299900_ref232","article-title":"Unicodervl: A universal encoder for vision and language by cross-modal pre-training","volume-title":"AAAI","author":"Li","year":"2020"},{"key":"2026032614365299900_ref233","first-page":"2691","article-title":"Uni-perceiver v2: A generalist model for large-scale vision and vision-language tasks","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Li","year":"2023"},{"key":"2026032614365299900_ref234","article-title":"Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models","volume-title":"arXiv preprint arXiv:2301.12597","author":"Li","year":"2023"},{"key":"2026032614365299900_ref235","article-title":"Blip: Bootstrapping languageimage pre-training for unified vision-language understanding and generation","volume-title":"ICML","author":"Li","year":"2022"},{"key":"2026032614365299900_ref236","article-title":"Align before fuse: Vision and language representation learning with momentum distillation","volume-title":"NeurIPS","author":"Li","year":"2021"},{"key":"2026032614365299900_ref237","article-title":"Videochat: Chat-centric video understanding","volume-title":"arXiv preprint arXiv:2305.06355","author":"Li","year":"2023"},{"key":"2026032614365299900_ref238","article-title":"M3it: A large-scale dataset towards multi-modal multilingual instruction tuning","volume-title":"arXiv preprint arXiv:2306.04387","author":"Li","year":"2023"},{"key":"2026032614365299900_ref239","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2019.01041","article-title":"Relation-aware graph attention network for visual question answering","volume-title":"ICCV","author":"Li","year":"2019"},{"key":"2026032614365299900_ref240","article-title":"VisualBERT: A simple and performant baseline for vision and language","volume-title":"arXiv preprint arXiv:1908.03557","author":"Li","year":"2019"},{"key":"2026032614365299900_ref241","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01069","article-title":"Grounded language-image pre-training","volume-title":"CVPR","author":"Li","year":"2022"},{"key":"2026032614365299900_ref242","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01069","article-title":"Grounded language-image 
pre-training","volume-title":"CVPR","author":"Li","year":"2022"},{"key":"2026032614365299900_ref243","article-title":"Apibank: A benchmark for tool-augmented llms","volume-title":"arXiv preprint arXiv:2304.08244","author":"Li","year":"2023"},{"key":"2026032614365299900_ref244","article-title":"Scigraphqa: A large-scale synthetic multi-turn question-answering dataset for scientific graphs","volume-title":"arXiv preprint arXiv:2308.03349","author":"Li","year":"2023"},{"key":"2026032614365299900_ref245","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2021.acl-long.353","article-title":"Prefix-tuning: Optimizing continuous prompts for generation","volume-title":"arXiv preprint arXiv:2101.00190","author":"Li","year":"2021"},{"key":"2026032614365299900_ref246","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58577-8_8","article-title":"Oscar: Object-semantics aligned pre-training for vision-language tasks","volume-title":"ECCV","author":"Li","year":"2020"},{"key":"2026032614365299900_ref247","article-title":"Stablellava: Enhanced visual instruction tuning with synthesized image-dialogue data","volume-title":"arXiv preprint arXiv:2308.10253","author":"Li","year":"2023"},{"key":"2026032614365299900_ref248","article-title":"Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm","volume-title":"ICLR","author":"Li","year":"2022"},{"key":"2026032614365299900_ref249","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.02240","article-title":"Scaling language-image pre-training via masking","volume-title":"CVPR","author":"Li","year":"2023"},{"key":"2026032614365299900_ref250","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.emnlp-main.20","article-title":"Evaluating object hallucination in large vision-language models","volume-title":"arXiv preprint arXiv:2305.10355","author":"Li","year":"2023"},{"key":"2026032614365299900_ref251","unstructured":"Y.\n              Li\n            , S.Bubeck, R.Eldan, A. D.Giorno, S.Gunasekar, and Y. T.Lee, Textbooks are all you need ii: Phi-1.5 technical report, 2023. arXiv: 2309.05463 [cs.CL]."},{"key":"2026032614365299900_ref252","first-page":"22 511","article-title":"Gligen: Open-set grounded text-to-image generation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Li","year":"2023"},{"key":"2026032614365299900_ref253","first-page":"7061","article-title":"Open-vocabulary semantic segmentation with mask-adapted clip","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Liang","year":"2023"},{"key":"2026032614365299900_ref254","article-title":"Taskmatrix. 
,"volume-title":"arXiv preprint arXiv:2303.16434","author":"Liang","year":"2023"},{"key":"2026032614365299900_ref255","article-title":"Video-llava: Learning united visual representation by alignment before projection","volume-title":"arXiv preprint arXiv:2311.10122","author":"Lin","year":"2023"},{"key":"2026032614365299900_ref256","article-title":"Magic3d: High-resolution text-to-3d content creation","volume-title":"IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Lin","year":"2023"},{"key":"2026032614365299900_ref257","first-page":"2980","article-title":"Focal loss for dense object detection","volume-title":"Proceedings of the IEEE international conference on computer vision","author":"Lin","year":"2017"},{"key":"2026032614365299900_ref258","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-319-10602-1_48","article-title":"Microsoft coco: Common objects in context","volume-title":"ECCV","author":"Lin","year":"2014"},{"key":"2026032614365299900_ref259","article-title":"Towards language-guided interactive 3d generation: Llms as layout interpreter with generative feedback","volume-title":"arXiv preprint arXiv:2305.15808","author":"Lin","year":"2023"},{"key":"2026032614365299900_ref260","first-page":"1271","article-title":"Recurrent multimodal interaction for referring image segmentation","volume-title":"Proceedings of the IEEE International Conference on Computer Vision","author":"Liu","year":"2017"},{"key":"2026032614365299900_ref261","article-title":"Aligning large multi-modal model with robust instruction tuning","volume-title":"arXiv preprint arXiv:2306.14565","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref262","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v37i2.25252","article-title":"The devil is in the frequency: Geminated gestalt autoencoder for self-supervised visual pre-training","volume-title":"AAAI","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref263","volume-title":"Improved baselines with visual instruction tuning","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref264","article-title":"Visual instruction tuning","volume-title":"arXiv preprint arXiv:2304.08485","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref265","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01454","article-title":"Learning customized visual models with retrieval-augmented knowledge","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref266","doi-asserted-by":"crossref","unstructured":"J. Liu, H. Ding, Z. Cai, Y. Zhang, R. K. Satzoda, V. Mahadevan, and R. Manmatha, Polyformer: Referring image segmentation as sequential polygon generation, 2023. arXiv: 2302.07387 [cs.CV].","DOI":"10.1109\/CVPR52729.2023.01789"},{"key":"2026032614365299900_ref267","first-page":"423","volume-title":"European Conference on Computer Vision","author":"Liu","year":"2022"},{"key":"2026032614365299900_ref268","doi-asserted-by":"crossref","unstructured":"R. Liu, R. Wu, B. V. Hoorick, P. Tokmakov, S. Zakharov, and C. Vondrick, Zero-1-to-3: Zero-shot one image to 3d object, 2023. arXiv: 2303.11328 [cs.CV].","DOI":"10.1109\/ICCV51070.2023.00853"},{"key":"2026032614365299900_ref269","unstructured":"S. Liu, L. Fan, E. Johns, Z. Yu, C. Xiao, and A. Anandkumar, Prismer: A vision-language model with an ensemble of experts, 2023. arXiv: 2303.02506 [cs.LG]."},{"key":"2026032614365299900_ref270","volume-title":"Llava-plus: Learning to use tools for creating multimodal agents","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref271","article-title":"Grounding dino: Marrying dino with grounded pre-training for open-set object detection","volume-title":"arXiv preprint arXiv:2303.05499","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref272","article-title":"Any-to-any style transfer","volume-title":"arXiv preprint arXiv:2304.09728","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref273","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1007\/978-3-319-46448-0_2","volume-title":"Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11\u201314, 2016, Proceedings, Part I 14","author":"Liu","year":"2016"},{"key":"2026032614365299900_ref274","article-title":"Stone needle: A general multimodal large-scale model framework towards healthcare","volume-title":"arXiv preprint arXiv:2306.16034","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref275","unstructured":"X. Liu, C. Gong, and Q. Liu, Flow straight and fast: Learning to generate and transfer data with rectified flow, 2022. arXiv: 2209.03003 [cs.LG]."},{"key":"2026032614365299900_ref276","article-title":"Wavjourney: Compositional audio creation with large language models","volume-title":"arXiv preprint arXiv:2307.14335","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref277","article-title":"Roberta: A robustly optimized bert pretraining approach","volume-title":"arXiv preprint arXiv:1907.11692","author":"Liu","year":"2019"},{"key":"2026032614365299900_ref278","article-title":"Mmbench: Is your multi-modal model an all-around player?","volume-title":"arXiv preprint arXiv:2307.06281","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref279","article-title":"On the hidden mystery of ocr in large multimodal models","volume-title":"arXiv preprint arXiv:2305.07895","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref280","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV48922.2021.00986","article-title":"Swin transformer: Hierarchical vision transformer using shifted windows","volume-title":"ICCV","author":"Liu","year":"2021"},{"key":"2026032614365299900_ref281","article-title":"Internchat: Solving vision-centric tasks by interacting with chatbots beyond language","volume-title":"arXiv preprint arXiv:2305.05662","author":"Liu","year":"2023"},{"key":"2026032614365299900_ref282","first-page":"11976","article-title":"A convnet for the 2020s","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Liu","year":"2022"},{"key":"2026032614365299900_ref283","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.00683","article-title":"Retrieval augmented classification for long-tail visual recognition","volume-title":"CVPR","author":"Long","year":"2022"},{"key":"2026032614365299900_ref284","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2015.7298965","article-title":"Fully convolutional networks for semantic segmentation"
,"volume-title":"CVPR","author":"Long","year":"2015"},{"key":"2026032614365299900_ref285","article-title":"Delving deeper into data scaling in masked image modeling","volume-title":"arXiv preprint arXiv:2305.15248","author":"Lu","year":"2023"},{"key":"2026032614365299900_ref286","article-title":"Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks","volume-title":"NeurIPS","author":"Lu","year":"2019"},{"key":"2026032614365299900_ref287","article-title":"Unified-io: A unified model for vision, language, and multimodal tasks","volume-title":"arXiv preprint arXiv:2206.08916","author":"Lu","year":"2022"},{"key":"2026032614365299900_ref288","first-page":"10437","article-title":"12-in-1: Multi-task vision and language representation learning","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Lu","year":"2020"},{"key":"2026032614365299900_ref289","article-title":"Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts","volume-title":"arXiv preprint arXiv:2310.02255","author":"Lu","year":"2023"},{"key":"2026032614365299900_ref290","article-title":"Learn to explain: Multimodal reasoning via thought chains for science question answering","volume-title":"Advances in Neural Information Processing Systems","author":"Lu","year":"2022"},{"key":"2026032614365299900_ref291","article-title":"Chameleon: Plug-and-play compositional reasoning with large language models","volume-title":"arXiv preprint arXiv:2304.09842","author":"Lu","year":"2023"},{"key":"2026032614365299900_ref292","article-title":"High-quality entity segmentation","volume-title":"ICCV","author":"Lu","year":"2023"},{"key":"2026032614365299900_ref293","article-title":"An empirical study of scaling instruction-tuned large multimodal models","volume-title":"arXiv preprint","author":"Lu","year":"2023"},{"key":"2026032614365299900_ref294","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.00695","article-title":"Image segmentation using text and image prompts","volume-title":"CVPR","author":"L\u00fcddecke","year":"2022"},{"key":"2026032614365299900_ref295","article-title":"Cheap and quick: Efficient vision-language instruction tuning for large language models","volume-title":"arXiv preprint arXiv:2305.15023","author":"Luo","year":"2023"},{"key":"2026032614365299900_ref296","first-page":"23033","volume-title":"International Conference on Machine Learning","author":"Luo","year":"2023"},{"key":"2026032614365299900_ref297","article-title":"Valley: Video assistant with large language model enhanced ability","volume-title":"arXiv preprint arXiv:2306.07207","author":"Luo","year":"2023"},{"key":"2026032614365299900_ref298","doi-asserted-by":"crossref","first-page":"103448","DOI":"10.1016\/j.artint.2020.103448","article-title":"Multiple object tracking: A literature review","volume":"293","author":"Luo","year":"2021","journal-title":"Artificial intelligence"},{"key":"2026032614365299900_ref299","article-title":"Segment anything in medical images","volume-title":"arXiv preprint arXiv:2304.12306","author":"Ma","year":"2023"},{"key":"2026032614365299900_ref300","article-title":"Can sam count anything? an empirical study on sam counting","volume-title":"arXiv preprint arXiv:2304.10817","author":"Ma","year":"2023"},{"key":"2026032614365299900_ref301","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2016.9","article-title":"Generation and comprehension of unambiguous object descriptions","volume-title":"CVPR","author":"Mao","year":"2016"},{"key":"2026032614365299900_ref302","first-page":"630","article-title":"Dynamic multimodal instance segmentation guided by natural language queries","volume-title":"Proceedings of the European Conference on Computer Vision (ECCV)","author":"Margffoy-Tuay","year":"2018"},{"key":"2026032614365299900_ref303","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.01389","article-title":"Krisp: Integrating implicit and symbolic knowledge for open-domain knowledge-based vqa","volume-title":"CVPR","author":"Marino","year":"2021"},{"key":"2026032614365299900_ref304","article-title":"Dataperf: Benchmarks for data-centric ai development","volume-title":"arXiv preprint arXiv:2207.10062","author":"Mazumder","year":"2022"},{"issue":"2","key":"2026032614365299900_ref305","doi-asserted-by":"crossref","first-page":"434","DOI":"10.1016\/j.patcog.2009.03.008","article-title":"A comparative evaluation of interactive segmentation algorithms","volume":"43","author":"McGuinness","year":"2010","journal-title":"Pattern Recognition"},{"key":"2026032614365299900_ref306","article-title":"Sdedit: Guided image synthesis and editing with stochastic differential equations","volume-title":"International Conference on Learning Representations","author":"Meng","year":"2021"},{"key":"2026032614365299900_ref307","doi-asserted-by":"crossref","first-page":"103441","DOI":"10.1016\/j.dsp.2022.103441","article-title":"Single image depth estimation: An overview","volume":"123","author":"Mertan","year":"2022","journal-title":"Digital Signal Processing"},{"key":"2026032614365299900_ref308","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2019.00272","article-title":"Howto100m: Learning a text-video embedding by watching hundred million narrated video clips","volume-title":"ICCV","author":"Miech","year":"2019"},{"key":"2026032614365299900_ref309","article-title":"Efficient estimation of word representations in vector space","volume-title":"arXiv preprint arXiv:1301.3781","author":"Mikolov","year":"2013"},{"key":"2026032614365299900_ref310","doi-asserted-by":"crossref","unstructured":"M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X. Zhai, T. Kipf, and N. Houlsby, Simple open-vocabulary object detection with vision transformers, 2022. arXiv: 2205.06230 [cs.CV].","DOI":"10.1007\/978-3-031-20080-9_42"},{"key":"2026032614365299900_ref311","article-title":"Self-supervised learning of pretext-invariant representations","volume-title":"CVPR","author":"Misra","year":"2020"},{"key":"2026032614365299900_ref312","first-page":"3994","article-title":"Cross-stitch networks for multi-task learning","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Misra","year":"2016"},{"key":"2026032614365299900_ref313","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.acl-short.43","article-title":"Metavl: Transferring in-context learning ability from language models to vision-language models","volume-title":"arXiv preprint arXiv:2306.01311","author":"Monajatipoor","year":"2023"},{"key":"2026032614365299900_ref314","unstructured":"Moonvalley, 2023. URL: https:\/\/moonvalley.ai\/."}
,{"key":"2026032614365299900_ref315","article-title":"Med-flamingo: A multimodal medical few-shot learner","volume-title":"arXiv preprint arXiv:2307.15189","author":"Moor","year":"2023"},{"key":"2026032614365299900_ref316","unstructured":"Morph, 2023. URL: https:\/\/www.morphstudio.com\/."},{"issue":"5","key":"2026032614365299900_ref317","doi-asserted-by":"crossref","first-page":"349","DOI":"10.1006\/gmip.1998.0480","article-title":"Interactive segmentation with intelligent scissors","volume":"60","author":"Mortensen","year":"1998","journal-title":"Graphical models and image processing"},{"key":"2026032614365299900_ref318","unstructured":"MosaicML NLP Team, Introducing mpt-7b: A new standard for open-source, commercially usable llms, 2023. URL: www.mosaicml.com\/blog\/mpt-7b."},{"key":"2026032614365299900_ref319","first-page":"891","article-title":"The role of context for object detection and semantic segmentation in the wild","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Mottaghi","year":"2014"},{"key":"2026032614365299900_ref320","article-title":"T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models","volume-title":"arXiv preprint arXiv:2302.08453","author":"Mou","year":"2023"},{"key":"2026032614365299900_ref321","article-title":"Slip: Self-supervision meets language-image pre-training","volume-title":"arXiv preprint arXiv:2112.12750","author":"Mu","year":"2021"},{"key":"2026032614365299900_ref322","article-title":"Embodiedgpt: Vision-language pre-training via embodied chain of thought","volume-title":"arXiv preprint arXiv:2305.15021","author":"Mu","year":"2023"},{"key":"2026032614365299900_ref323","unstructured":"S. Munasinghe, R. Thushara, M. Maaz, H. A. Rasheed, S. Khan, M. Shah, and F. Khan, Pg-video-llava: Pixel grounding large video-language models, 2023. arXiv: 2311.13435 [cs.CV]."},{"key":"2026032614365299900_ref324","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58595-2_41","article-title":"A metric learning reality check","volume-title":"ECCV","author":"Musgrave","year":"2020"},{"key":"2026032614365299900_ref325","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-319-46493-0_48","article-title":"Modeling context between objects for referring expression understanding","volume-title":"ECCV","author":"Nagaraja","year":"2016"},{"key":"2026032614365299900_ref326","article-title":"Webgpt: Browser-assisted question-answering with human feedback","volume-title":"arXiv preprint arXiv:2112.09332","author":"Nakano","year":"2021"},{"key":"2026032614365299900_ref327","article-title":"Quality not quantity: On the interaction between dataset design and robustness of clip","volume-title":"NeurIPS","author":"Nguyen","year":"2022"},{"key":"2026032614365299900_ref328","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.01822","article-title":"All in tokens: Unifying output space of visual tasks via soft token","volume-title":"arXiv preprint arXiv:2301.02229","author":"Ning","year":"2023"},{"key":"2026032614365299900_ref329","article-title":"Are large-scale datasets necessary for self-supervised pre-training?","volume-title":"arXiv preprint arXiv:2112.10740","author":"El-Nouby","year":"2021"},{"key":"2026032614365299900_ref330","article-title":"Neural discrete representation learning","volume-title":"NeurIPS","author":"van den Oord","year":"2017"},{"key":"2026032614365299900_ref331","article-title":"Representation learning with contrastive predictive coding","volume-title":"arXiv preprint arXiv:1807.03748","author":"Oord","year":"2018"},{"key":"2026032614365299900_ref332","article-title":"Neural discrete representation learning","volume-title":"arXiv preprint arXiv:1711.00937","author":"Oord","year":"2017"},{"key":"2026032614365299900_ref333","unstructured":"OpenAI, ChatGPT, 2022. URL: https:\/\/openai.com\/blog\/chatgpt\/."},{"key":"2026032614365299900_ref334","unstructured":"OpenAI, GPT-4 technical report, 2023. URL: https:\/\/arxiv.org\/abs\/2303.08774."},{"key":"2026032614365299900_ref335","unstructured":"OpenAI, GPT-4 technical report, 2023. arXiv: 2303.08774 [cs.CL]."}
,{"key":"2026032614365299900_ref336","article-title":"Dinov2: Learning robust visual features without supervision","volume-title":"arXiv preprint arXiv:2304.07193","author":"Oquab","year":"2023"},{"key":"2026032614365299900_ref337","article-title":"Im2text: Describing images using 1 million captioned photographs","volume-title":"NeurIPS","author":"Ordonez","year":"2011"},{"key":"2026032614365299900_ref338","first-page":"27730","article-title":"Training language models to follow instructions with human feedback","volume":"35","author":"Ouyang","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2026032614365299900_ref339","article-title":"Know your self-supervised learning: A survey on image-based generative and discriminative training","volume-title":"arXiv preprint arXiv:2305.13689","author":"Ozbulak","year":"2023"},{"key":"2026032614365299900_ref340","article-title":"Art: Automatic multi-step reasoning and tool-use for large language models","volume-title":"arXiv preprint arXiv:2303.09014","author":"Paranjape","year":"2023"},{"key":"2026032614365299900_ref341","article-title":"Gorilla: Large language model connected with massive apis","volume-title":"arXiv preprint arXiv:2305.15334","author":"Patil","year":"2023"},{"key":"2026032614365299900_ref342","article-title":"Scalable diffusion models with transformers","volume-title":"arXiv preprint arXiv:2212.09748","author":"Peebles","year":"2022"},{"key":"2026032614365299900_ref343","unstructured":"G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, H. Alobeidli, B. Pannier, E. Almazrouei, and J. Launay, \u201cThe RefinedWeb dataset for Falcon LLM: Outperforming curated corpora with web data, and web data only,\u201d arXiv preprint arXiv:2306.01116, 2023. URL: https:\/\/arxiv.org\/abs\/2306.01116."},{"key":"2026032614365299900_ref344","article-title":"Check your facts and try again: Improving large language models with external knowledge and automated feedback","volume-title":"arXiv preprint arXiv:2302.12813","author":"Peng","year":"2023"},{"key":"2026032614365299900_ref345","article-title":"Instruction tuning with GPT-4","volume-title":"arXiv preprint arXiv:2304.03277","author":"Peng","year":"2023"},{"key":"2026032614365299900_ref346","article-title":"A unified view of masked image modeling","volume-title":"arXiv preprint arXiv:2210.10615","author":"Peng","year":"2022"},{"key":"2026032614365299900_ref347","article-title":"Beit v2: Masked image modeling with vector-quantized visual tokenizers","volume-title":"arXiv preprint arXiv:2208.06366","author":"Peng","year":"2022"},{"key":"2026032614365299900_ref348","article-title":"Kosmos-2: Grounding multimodal large language models to the world","volume-title":"arXiv preprint arXiv:2306.14824","author":"Peng","year":"2023"},{"key":"2026032614365299900_ref349","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/D19-1005","article-title":"Knowledge enhanced contextual word representations","volume-title":"arXiv preprint arXiv:1909.04164","author":"Peters","year":"2019"},{"key":"2026032614365299900_ref350","article-title":"Combined scaling for zero-shot transfer learning","volume-title":"arXiv preprint arXiv:2111.10050","author":"Pham","year":"2021"},{"key":"2026032614365299900_ref351","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.emnlp-main.876","article-title":"Detgpt: Detect what you need via reasoning","volume-title":"arXiv preprint arXiv:2305.14167","author":"Pi","year":"2023"},{"key":"2026032614365299900_ref352","unstructured":"Pika 1.0, 2023. URL: https:\/\/pika.art\/."},{"key":"2026032614365299900_ref353","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2015.303","article-title":"Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models","volume-title":"ICCV","author":"Plummer","year":"2015"},{"key":"2026032614365299900_ref354","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58558-7_38","article-title":"Connecting vision and language with localized narratives","volume-title":"ECCV","author":"Pont-Tuset","year":"2020"},{"key":"2026032614365299900_ref355","article-title":"Dreamfusion: Text-to-3d using 2d diffusion","volume-title":"arXiv","author":"Poole","year":"2022"},{"key":"2026032614365299900_ref356","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.findings-emnlp.462","article-title":"Creator: Disentangling abstract and concrete reasonings of large language models through tool creation","volume-title":"arXiv preprint arXiv:2305.14318","author":"Qian","year":"2023"},{"key":"2026032614365299900_ref357","article-title":"Multimodal open-vocabulary video classification via pre-trained vision and language models","volume-title":"arXiv preprint arXiv:2207.07646","author":"Qian","year":"2022"},{"key":"2026032614365299900_ref358","article-title":"Unicontrol: A unified diffusion model for controllable visual generation in the wild","volume-title":"arXiv preprint arXiv:2305.11147","author":"Qin","year":"2023"},{"key":"2026032614365299900_ref359","unstructured":"J. Qin, J. Wu, P. Yan, M. Li, R. Yuxi, X. Xiao, Y. Wang, R. Wang, S. Wen, X. Pan, and X. Wang, Freeseg: Unified, universal and open-vocabulary image segmentation, 2023. arXiv: 2303.17225 [cs.CV]."}
,{"key":"2026032614365299900_ref360","article-title":"Learning transferable visual models from natural language supervision","volume-title":"ICML","author":"Radford","year":"2021"},{"key":"2026032614365299900_ref361","unstructured":"A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, Robust speech recognition via large-scale weak supervision, 2022. arXiv: 2212.04356 [eess.AS]."},{"key":"2026032614365299900_ref362","article-title":"Language models are unsupervised multitask learners","volume-title":"OpenAI blog","author":"Radford","year":"2019"},{"key":"2026032614365299900_ref363","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume-title":"JMLR","author":"Raffel","year":"2020"},{"key":"2026032614365299900_ref364","doi-asserted-by":"crossref","first-page":"11932","DOI":"10.1609\/aaai.v34i07.6868","article-title":"Improved visual-semantic alignment for zero-shot object detection","volume":"34","author":"Rahman","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"2026032614365299900_ref365","article-title":"Segment anything meets point tracking","volume-title":"arXiv preprint arXiv:2307.01197","author":"Raji\u010d","year":"2023"},{"key":"2026032614365299900_ref366","article-title":"Hierarchical text-conditional image generation with clip latents","volume-title":"arXiv preprint arXiv:2204.06125","author":"Ramesh","year":"2022"},{"key":"2026032614365299900_ref367","article-title":"Zero-Shot Text-to-Image Generation","volume-title":"ICML","author":"Ramesh","year":"2021"},{"key":"2026032614365299900_ref368","first-page":"8821","volume-title":"International Conference on Machine Learning","author":"Ramesh","year":"2021"},{"key":"2026032614365299900_ref369","first-page":"18082","article-title":"Denseclip: Language-guided dense prediction with context-aware prompting","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Rao","year":"2022"},{"key":"2026032614365299900_ref370","article-title":"Generating diverse high-fidelity images with vq-vae-2","volume-title":"NeurIPS","author":"Razavi","year":"2019"},{"key":"2026032614365299900_ref371","first-page":"779","article-title":"You only look once: Unified, real-time object detection","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Redmon","year":"2016"},{"key":"2026032614365299900_ref372","article-title":"A generalist agent","volume-title":"arXiv preprint arXiv:2205.06175","author":"Reed","year":"2022"},{"key":"2026032614365299900_ref373","article-title":"Faster r-cnn: Towards real-time object detection with region proposal networks","volume-title":"NeurIPS","author":"Ren","year":"2015"},{"key":"2026032614365299900_ref374","article-title":"Imagenet-21k pretraining for the masses","volume-title":"arXiv preprint arXiv:2104.10972","author":"Ridnik","year":"2021"},{"key":"2026032614365299900_ref375","unstructured":"R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, High-resolution image synthesis with latent diffusion models, 2021. arXiv: 2112.10752 [cs.CV]."},{"key":"2026032614365299900_ref376","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01042","article-title":"High-resolution image synthesis with latent diffusion models","volume-title":"CVPR","author":"Rombach","year":"2022"},{"key":"2026032614365299900_ref377","first-page":"234","volume-title":"Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18","author":"Ronneberger","year":"2015"},{"key":"2026032614365299900_ref378","article-title":"Sam.md: Zero-shot medical image segmentation capabilities of the segment anything model","volume-title":"arXiv preprint arXiv:2304.05396","author":"Roy","year":"2023"},{"key":"2026032614365299900_ref379","doi-asserted-by":"crossref","DOI":"10.1016\/j.aiopen.2022.01.001","article-title":"Survey: Transformer based video-language pre-training","volume-title":"AI Open","author":"Ruan","year":"2022"},{"key":"2026032614365299900_ref380","first-page":"22500","article-title":"Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Ruiz","year":"2023"},{"key":"2026032614365299900_ref381","doi-asserted-by":"crossref","DOI":"10.1007\/s11263-015-0816-y","article-title":"Imagenet large scale visual recognition challenge","volume-title":"IJCV","author":"Russakovsky","year":"2015"},{"key":"2026032614365299900_ref382","article-title":"Photorealistic text-to-image diffusion models with deep language understanding","volume-title":"arXiv preprint arXiv:2205.11487","author":"Saharia","year":"2022"},{"key":"2026032614365299900_ref383","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58598-3_10","article-title":"Learning visual representations with caption annotations","volume-title":"ECCV","author":"Sariyildiz","year":"2020"},{"key":"2026032614365299900_ref384","article-title":"Toolformer: Language models can teach themselves to use tools","volume-title":"arXiv preprint arXiv:2302.04761","author":"Schick","year":"2023"},{"key":"2026032614365299900_ref385","article-title":"Laion-5b: An open large-scale dataset for training next generation image-text models","volume-title":"NeurIPS","author":"Schuhmann","year":"2022"},{"key":"2026032614365299900_ref386","article-title":"Laion-400m: Open dataset of clip-filtered 400 million image-text pairs","volume-title":"arXiv preprint arXiv:2111.02114","author":"Schuhmann","year":"2021"},{"key":"2026032614365299900_ref387","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-20074-8_9","article-title":"A-okvqa: A benchmark for visual question answering using world knowledge","volume-title":"arXiv preprint arXiv:2206.01718","author":"Schwenk","year":"2022"},{"key":"2026032614365299900_ref388","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/P16-1162","article-title":"Neural machine translation of rare words with subword units","volume-title":"ACL","author":"Sennrich","year":"2016"},{"key":"2026032614365299900_ref389","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2019.00852","article-title":"Objects365: A large-scale, high-quality dataset for object detection","volume-title":"ICCV","author":"Shao","year":"2019"},{"key":"2026032614365299900_ref390","article-title":"Tiny lvlm-ehub: Early multimodal experiments with bard","volume-title":"arXiv preprint arXiv:2308.03729","author":"Shao","year":"2023"}
,{"key":"2026032614365299900_ref391","unstructured":"ShareGPT, 2023. URL: https:\/\/sharegpt.com\/."},{"key":"2026032614365299900_ref392","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/P18-1238","article-title":"Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning","volume-title":"ACL","author":"Sharma","year":"2018"},{"key":"2026032614365299900_ref393","article-title":"Anything-3d: Towards single-view anything reconstruction in the wild","volume-title":"arXiv preprint arXiv:2304.10261","author":"Shen","year":"2023"},{"key":"2026032614365299900_ref394","article-title":"K-lite: Learning transferable visual models with external knowledge","volume-title":"NeurIPS","author":"Shen","year":"2022"},{"key":"2026032614365299900_ref395","article-title":"How much can clip benefit vision-and-language tasks?","volume-title":"ICLR","author":"Shen","year":"2022"},{"key":"2026032614365299900_ref396","article-title":"Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face","volume-title":"arXiv preprint arXiv:2303.17580","author":"Shen","year":"2023"},{"key":"2026032614365299900_ref397","article-title":"Knn-diffusion: Image generation via large-scale retrieval","volume-title":"arXiv preprint arXiv:2204.02849","author":"Sheynin","year":"2022"},{"key":"2026032614365299900_ref398","article-title":"Instantbooth: Personalized text-to-image generation without test-time finetuning","volume-title":"arXiv preprint arXiv:2304.03411","author":"Shi","year":"2023"},{"key":"2026032614365299900_ref399","article-title":"Generalist vision foundation models for medical imaging: A case study of segment anything model on zero-shot medical segmentation","volume-title":"Diagnostics","author":"Shi","year":"2023"},{"key":"2026032614365299900_ref400","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2020.emnlp-main.346","article-title":"Autoprompt: Eliciting knowledge from language models with automatically generated prompts","volume-title":"arXiv preprint arXiv:2010.15980","author":"Shin","year":"2020"},{"key":"2026032614365299900_ref401","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58536-5_44","article-title":"Textcaps: A dataset for image captioning with reading comprehension","volume-title":"ECCV","author":"Sidorov","year":"2020"},{"key":"2026032614365299900_ref402","doi-asserted-by":"crossref","first-page":"746","DOI":"10.1007\/978-3-642-33715-4_54","volume-title":"Computer Vision\u2013ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V 12","author":"Silberman","year":"2012"},{"key":"2026032614365299900_ref403","article-title":"Make-a-video: Text-to-video generation without text-video data","volume-title":"arXiv preprint arXiv:2209.14792","author":"Singer","year":"2022"},{"key":"2026032614365299900_ref404","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01519","article-title":"Flava: A foundational language and vision alignment model","volume-title":"CVPR","author":"Singh","year":"2022"},{"key":"2026032614365299900_ref405","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.00505","article-title":"The effectiveness of mae pre-pretraining for billion-scale pretraining","volume-title":"arXiv preprint arXiv:2303.13496","author":"Singh","year":"2023"},{"key":"2026032614365299900_ref406","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.00088","article-title":"Revisiting weakly supervised pre-training of visual perception models","volume-title":"CVPR","author":"Singh","year":"2022"},{"key":"2026032614365299900_ref407","first-page":"2256","volume-title":"International conference on machine learning","author":"Sohl-Dickstein","year":"2015"},{"key":"2026032614365299900_ref408","unstructured":"Y. Song, P. Dhariwal, M. Chen, and I. Sutskever, Consistency models, 2023. arXiv: 2303.01469 [cs.LG]."},{"key":"2026032614365299900_ref409","first-page":"12438","article-title":"Improved techniques for training score-based generative models","volume":"33","author":"Song","year":"2020","journal-title":"Advances in neural information processing systems"},{"key":"2026032614365299900_ref410","article-title":"Restgpt: Connecting large language models with real-world applications via restful apis","volume-title":"arXiv preprint arXiv:2306.06624","author":"Song","year":"2023"},{"key":"2026032614365299900_ref411","doi-asserted-by":"crossref","DOI":"10.1145\/3404835.3463257","article-title":"Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning","volume-title":"arXiv preprint arXiv:2103.01913","author":"Srinivasan","year":"2021"},{"key":"2026032614365299900_ref412","unstructured":"Stable diffusion, 2022. URL: https:\/\/github.com\/CompVis\/stable-diffusion."},{"key":"2026032614365299900_ref413","article-title":"VL-BERT: Pre-training of generic visual-linguistic representations","volume-title":"ICLR","author":"Su","year":"2019"},{"key":"2026032614365299900_ref414","article-title":"Pandagpt: One model to instruction-follow them all","volume-title":"arXiv preprint arXiv:2305.16355","author":"Su","year":"2023"},{"key":"2026032614365299900_ref415","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV.2017.97","article-title":"Revisiting unreasonable effectiveness of data in deep learning era","volume-title":"ICCV","author":"Sun","year":"2017"},{"key":"2026032614365299900_ref416","article-title":"Generative pretraining in multimodality","volume-title":"arXiv preprint arXiv:2307.05222","author":"Sun","year":"2023"},{"key":"2026032614365299900_ref417","article-title":"Imagebrush: Learning visual in-context instructions for exemplar-based image manipulation","volume-title":"arXiv preprint arXiv:2308.00906","author":"Sun","year":"2023"},{"key":"2026032614365299900_ref418","article-title":"Pathasst: Redefining pathology through generative foundation ai assistant for pathology","volume-title":"arXiv preprint arXiv:2305.15072","author":"Sun","year":"2023"},{"key":"2026032614365299900_ref419","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.01092","article-title":"Vipergpt: Visual inference via python execution for reasoning","volume-title":"arXiv preprint arXiv:2303.08128","author":"Sur\u00eds","year":"2023"},{"key":"2026032614365299900_ref420","unstructured":"Svd-xt, 2023. URL: https:\/\/huggingface.co\/stabilityai\/stable-video-diffusion-img2vid-xt."},{"key":"2026032614365299900_ref421","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/D19-1514","article-title":"LXMERT: Learning cross-modality encoder representations from transformers","volume-title":"EMNLP","author":"Tan","year":"2019"},{"key":"2026032614365299900_ref422","article-title":"Can sam segment anything? when sam meets camouflaged object detection"
,"volume-title":"arXiv preprint arXiv:2304.04709","author":"Tang","year":"2023"},{"key":"2026032614365299900_ref423","unstructured":"Z. Tang, Z. Yang, C. Zhu, M. Zeng, and M. Bansal, \u201cAny-to-any generation via composable diffusion,\u201d 2023. arXiv: 2305.11846 [cs.CV]."},{"key":"2026032614365299900_ref424","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.00212","article-title":"Siamese image modeling for self-supervised vision representation learning","volume-title":"CVPR","author":"Tao","year":"2023"},{"key":"2026032614365299900_ref425","unstructured":"R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, Stanford alpaca: An instruction-following llama model, 2023. URL: https:\/\/github.com\/tatsu-lab\/stanford_alpaca."},{"key":"2026032614365299900_ref426","unstructured":"G. G. Team, Gemini: A family of highly capable multimodal models, 2023. arXiv: 2312.11805 [cs.CL]."},{"key":"2026032614365299900_ref427","unstructured":"The Vicuna Team, Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, 2023. URL: https:\/\/vicuna.lmsys.org\/."},{"key":"2026032614365299900_ref428","doi-asserted-by":"crossref","DOI":"10.1145\/2812802","article-title":"Yfcc100m: The new data in multimedia research","volume-title":"Communications of the ACM","author":"Thomee","year":"2016"},{"key":"2026032614365299900_ref429","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-58621-8_45","article-title":"Contrastive multiview coding","volume-title":"ECCV","author":"Tian","year":"2020"},{"key":"2026032614365299900_ref430","doi-asserted-by":"crossref","first-page":"282","DOI":"10.1007\/978-3-030-58452-8_17","volume-title":"Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part I 16","author":"Tian","year":"2020"},{"key":"2026032614365299900_ref431","article-title":"Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training","volume-title":"NeurIPS","author":"Tong","year":"2022"},{"key":"2026032614365299900_ref432","article-title":"Training data-efficient image transformers & distillation through attention","volume-title":"ICML","author":"Touvron","year":"2021"},{"key":"2026032614365299900_ref433","article-title":"Llama: Open and efficient foundation language models","volume-title":"arXiv preprint arXiv:2302.13971","author":"Touvron","year":"2023"},{"key":"2026032614365299900_ref434","article-title":"Image captioners are scalable vision learners too","volume-title":"arXiv preprint arXiv:2306.07915","author":"Tschannen","year":"2023"},{"key":"2026032614365299900_ref435","article-title":"Towards generalist biomedical ai","volume-title":"arXiv preprint arXiv:2307.14334","author":"Tu","year":"2023"},{"key":"2026032614365299900_ref436","article-title":"Nvae: A deep hierarchical variational autoencoder","author":"Vahdat","year":"2020"},{"key":"2026032614365299900_ref437","article-title":"Attention is all you need","volume-title":"NeurIPS","author":"Vaswani","year":"2017"},{"key":"2026032614365299900_ref438","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2015.7298935","article-title":"Show and tell: A neural image caption generator","volume-title":"CVPR","author":"Vinyals","year":"2015"},{"issue":"4","key":"2026032614365299900_ref439","doi-asserted-by":"crossref","first-page":"652","DOI":"10.1109\/TPAMI.2016.2587640","article-title":"Show and tell: Lessons learned from the 2015 mscoco image captioning challenge","volume":"39","author":"Vinyals","year":"2016","journal-title":"IEEE transactions on pattern analysis and machine intelligence"},{"key":"2026032614365299900_ref440","first-page":"I","volume-title":"Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001","author":"Viola","year":"2001"},{"key":"2026032614365299900_ref441","unstructured":"B. Wang and A. Komatsuzaki, GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model, May 2021. URL: https:\/\/github.com\/kingoflolz\/mesh-transformer-jax."},{"key":"2026032614365299900_ref442","article-title":"Vigc: Visual instruction generation and correction","volume-title":"arXiv preprint arXiv:2308.12714","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref443","doi-asserted-by":"crossref","DOI":"10.1109\/TIP.2023.3266169","article-title":"Self-supervised learning by estimating twin class distribution","volume-title":"TIP","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref444","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2018.00552","article-title":"Cosface: Large margin cosine loss for deep face recognition","volume-title":"CVPR","author":"Wang","year":"2018"},{"key":"2026032614365299900_ref445","first-page":"5463","article-title":"Max-deeplab: End-to-end panoptic segmentation with mask transformers","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Wang","year":"2021"},{"key":"2026032614365299900_ref446","article-title":"Git: A generative image-to-text transformer for vision and language","volume-title":"arXiv preprint arXiv:2205.14100","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref447","article-title":"Chatvideo: A tracklet-centric multimodal and versatile video understanding system","volume-title":"arXiv preprint arXiv:2304.14407","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref448","unstructured":"J. Wang, L. Meng, Z. Weng, B. He, Z. Wu, and Y.-G. Jiang, To see is to believe: Prompting gpt-4v for better visual instruction tuning, 2023. arXiv: 2311.07574 [cs.CV]."}
,{"key":"2026032614365299900_ref449","article-title":"Evaluation and analysis of hallucination in large vision-language models","volume-title":"arXiv preprint arXiv:2308.15126","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref450","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01398","article-title":"Videomae v2: Scaling video masked autoencoders with dual masking","volume-title":"CVPR","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref451","article-title":"Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework","volume-title":"ICML","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref452","first-page":"14733","article-title":"Bevt: Bert pretraining of video transformers","volume-title":"Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref453","article-title":"Disco: Disentangled control for referring human dance generation in real world","volume-title":"arXiv preprint arXiv:2307.00040","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref454","article-title":"Caption anything: Interactive image description with diverse multimodal controls","volume-title":"arXiv preprint arXiv:2305.02677","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref455","unstructured":"W. Wang, J. Liu, Z. Lin, J. Yan, S. Chen, C. Low, T. Hoang, J. Wu, J. H. Liew, H. Yan, D. Zhou, and J. Feng, Magicvideo-v2: Multi-stage high-aesthetic video generation, 2024. arXiv: 2401.04468 [cs.CV]."},{"key":"2026032614365299900_ref456","article-title":"VisionLLM: Large language model is also an open-ended decoder for vision-centric tasks","volume-title":"arXiv preprint arXiv:2305.11175","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref457","article-title":"Image as a foreign language: Beit pretraining for all vision and vision-language tasks","volume-title":"arXiv preprint arXiv:2208.10442","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref458","article-title":"Vlmo: Unified vision-language pre-training with mixture-of-modality-experts","volume-title":"arXiv preprint arXiv:2111.02358","author":"Wang","year":"2021"},{"key":"2026032614365299900_ref459","first-page":"6830","article-title":"Images speak in images: A generalist painter for in-context visual learning","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref460","article-title":"Seggpt: Segmenting everything in context","volume-title":"arXiv preprint arXiv:2304.03284","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref461","article-title":"How far can camels go? exploring the state of instruction tuning on open resources","volume-title":"arXiv preprint arXiv:2306.04751","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref462","article-title":"Self-instruct: Aligning language model with self generated instructions","volume-title":"arXiv preprint arXiv:2212.10560","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref463","article-title":"Benchmarking generalization via in-context instructions on 1,600+ language tasks","volume-title":"arXiv preprint arXiv:2204.07705","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref464","article-title":"Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes","volume-title":"arXiv preprint arXiv:2308.08769","author":"Wang","year":"2023"},{"key":"2026032614365299900_ref465","unstructured":"Z. Wang, Y. Jiang, Y. Lu, Y. Shen, P. He, W. Chen, Z. Wang, and M. Zhou, In-context learning unlocked for diffusion models, 2023. arXiv: 2305.01115 [cs.CV]."},{"key":"2026032614365299900_ref466","article-title":"Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation","volume-title":"arXiv preprint arXiv:2305.16213","author":"Wang","year":"2023"},{"issue":"10","key":"2026032614365299900_ref467","doi-asserted-by":"crossref","first-page":"3365","DOI":"10.1109\/TPAMI.2020.2982166","article-title":"Deep learning for image super-resolution: A survey","volume":"43","author":"Wang","year":"2020","journal-title":"IEEE transactions on pattern analysis and machine intelligence"},{"key":"2026032614365299900_ref468","article-title":"Simvlm: Simple visual language model pretraining with weak supervision","volume-title":"ICLR","author":"Wang","year":"2022"},{"key":"2026032614365299900_ref469","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.02244","article-title":"Masked autoencoding does not help natural language supervision at scale","volume-title":"CVPR","author":"Weers","year":"2023"},{"key":"2026032614365299900_ref470","article-title":"Masked feature prediction for self-supervised visual pre-training","volume-title":"CVPR","author":"Wei","year":"2021"},{"key":"2026032614365299900_ref471","article-title":"Chain of thought prompting elicits reasoning in large language models","volume-title":"arXiv preprint arXiv:2201.11903","author":"Wei","year":"2022"},{"key":"2026032614365299900_ref472","article-title":"Instructiongpt-4: A 200-instruction paradigm for fine-tuning minigpt-4","volume-title":"arXiv preprint arXiv:2308.12067","author":"Wei","year":"2023"},{"key":"2026032614365299900_ref473","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-20056-4_20","article-title":"Mvp: Multimodality-guided visual pre-training","volume-title":"ECCV","author":"Wei","year":"2022"},{"key":"2026032614365299900_ref474","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.01461","article-title":"Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation","volume-title":"arXiv preprint arXiv:2302.13848","author":"Wei","year":"2023"},{"key":"2026032614365299900_ref475","unstructured":"L. Weng, \u201cLlm-powered autonomous agents,\u201d lilianweng.github.io, Jun. 2023. URL: https:\/\/lilianweng.github.io\/posts\/2023-06-23-agent\/."}
,{"key":"2026032614365299900_ref476","article-title":"Towards generalist foundation model for radiology","volume-title":"arXiv preprint arXiv:2308.02463","author":"Wu","year":"2023"},{"key":"2026032614365299900_ref477","article-title":"Visual chatgpt: Talking, drawing and editing with visual foundation models","volume-title":"arXiv preprint arXiv:2303.04671","author":"Wu","year":"2023"},{"key":"2026032614365299900_ref478","article-title":"Grit: A generative region-to-text transformer for object understanding","volume-title":"arXiv preprint arXiv:2212.00280","author":"Wu","year":"2022"},{"key":"2026032614365299900_ref479","article-title":"Multi-modal answer validation for knowledge-based VQA","volume-title":"arXiv preprint arXiv:2103.12248","author":"Wu","year":"2021"},{"key":"2026032614365299900_ref480","first-page":"4974","article-title":"Language as queries for referring video object segmentation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Wu","year":"2022"},{"key":"2026032614365299900_ref481","article-title":"Next-gpt: Any-to-any multimodal llm","volume":"abs\/2309.05519","author":"Wu","year":"2023","journal-title":"CoRR"},{"key":"2026032614365299900_ref482","article-title":"Mofi: Learning image representations from noisy entity annotated images","volume-title":"arXiv preprint arXiv:2306.07952","author":"Wu","year":"2023"},{"key":"2026032614365299900_ref483","first-page":"2411","article-title":"Online object tracking: A benchmark","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Wu","year":"2013"},{"key":"2026032614365299900_ref484","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2018.00393","article-title":"Unsupervised feature learning via non-parametric instance discrimination","volume-title":"CVPR","author":"Wu","year":"2018"},{"key":"2026032614365299900_ref485","article-title":"Zero-shot learning\u2014a comprehensive evaluation of the good, the bad and the ugly","volume-title":"TPAMI","author":"Xian","year":"2018"},{"key":"2026032614365299900_ref486","article-title":"Instruction-vit: Multimodal prompts for instruction learning in vit","volume-title":"arXiv preprint arXiv:2305.00201","author":"Xiao","year":"2023"},{"key":"2026032614365299900_ref487","article-title":"Edit everything: A text-guided generative system for images editing","volume-title":"arXiv preprint arXiv:2304.14006","author":"Xie","year":"2023"},{"key":"2026032614365299900_ref488","doi-asserted-by":"crossref","unstructured":"X. Xie, L. Fu, Z. Zhang, Z. Wang, and X. Bai, Toward understanding wordart: Corner-guided transformer for scene text recognition, 2022. arXiv: 2208.00438 [cs.CV].","DOI":"10.1007\/978-3-031-19815-1_18"},{"key":"2026032614365299900_ref489","article-title":"Self-supervised learning with swin transformers","volume-title":"arXiv preprint arXiv:2105.04553","author":"Xie","year":"2021"},{"key":"2026032614365299900_ref490","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.00943","article-title":"Simmim: A simple framework for masked image modeling","volume-title":"CVPR","author":"Xie","year":"2022"},{"key":"2026032614365299900_ref491","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.00999","article-title":"On data scaling in masked image modeling","volume-title":"CVPR","author":"Xie","year":"2023"},{"key":"2026032614365299900_ref492","first-page":"675","article-title":"Pad-net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Xu","year":"2018"},{"key":"2026032614365299900_ref493","doi-asserted-by":"crossref","unstructured":"H. Xu, M. Yan, C. Li, B. Bi, S. Huang, W. Xiao, and F. Huang, E2e-vlp: End-to-end vision-language pre-training enhanced by visual learning, 2021. arXiv: 2106.01804 [cs.CV].","DOI":"10.18653\/v1\/2021.acl-long.42"},{"key":"2026032614365299900_ref494","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01760","article-title":"Groupvit: Semantic segmentation emerges from text supervision","volume-title":"CVPR","author":"Xu","year":"2022"},{"key":"2026032614365299900_ref495","first-page":"2955","article-title":"Open-vocabulary panoptic segmentation with text-to-image diffusion models","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Xu","year":"2023"},{"key":"2026032614365299900_ref496","article-title":"U-llava: Unifying multi-modal tasks via large language model","volume-title":"arXiv preprint arXiv:2311.05348","author":"Xu","year":"2023"},{"key":"2026032614365299900_ref497","article-title":"Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models","volume-title":"arXiv preprint arXiv:2306.09265","author":"Xu","year":"2023"},{"key":"2026032614365299900_ref498","article-title":"Pointllm: Empowering large language models to understand point clouds","volume-title":"arXiv preprint arXiv:2308.16911","author":"Xu","year":"2023"},{"key":"2026032614365299900_ref499","doi-asserted-by":"crossref","unstructured":"Y. Xu, Y. Zhao, Z. Xiao, and T. Hou, Ufogen: You forward once large scale text-to-image generation via diffusion gans, 2023. arXiv: 2311.09257 [cs.CV].","DOI":"10.1109\/CVPR52733.2024.00783"}
arXiv: 2311.09257 [cs.CV].","DOI":"10.1109\/CVPR52733.2024.00783"},{"key":"2026032614365299900_ref500","article-title":"Multiinstruct: Improving multimodal zero-shot learning via instruction tuning","volume-title":"arXiv preprint arXiv:2212.10773","author":"Xu","year":"2022"},{"key":"2026032614365299900_ref501","first-page":"15 325","article-title":"Universal instance perception as object discovery and retrieval","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Yan","year":"2023"},{"key":"2026032614365299900_ref502","first-page":"18 381","article-title":"Paint by example: Exemplar-based image editing with diffusion models","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref503","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01857","article-title":"Unified contrastive learning in image-text-label space","volume-title":"CVPR","author":"Yang","year":"2022"},{"key":"2026032614365299900_ref504","article-title":"Unicl: Unified contrastive learning in image-text-label space","volume-title":"CVPR","author":"Yang","year":"2022"},{"key":"2026032614365299900_ref505","article-title":"Track anything: Segment anything meets videos","volume-title":"arXiv preprint arXiv:2304.11968","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref506","article-title":"Gpt4tools: Teaching large language model to use tools via selfinstruction","volume-title":"arXiv preprint arXiv:2305.18752","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref507","first-page":"18 155","article-title":"Lavt: Language-aware vision transformer for referring image segmentation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Yang","year":"2022"},{"key":"2026032614365299900_ref508","article-title":"Crossing the format boundary of text and boxes: Towards unified vision-language modeling","volume-title":"arXiv preprint arXiv:2111.12085","author":"Yang","year":"2021"},{"key":"2026032614365299900_ref509","first-page":"521","volume-title":"European Conference on Computer Vision","author":"Yang","year":"2022"},{"key":"2026032614365299900_ref510","doi-asserted-by":"crossref","DOI":"10.1609\/aaai.v36i3.20215","article-title":"An empirical study of gpt-3 for few-shot knowledge-based vqa","volume-title":"AAAI","author":"Yang","year":"2022"},{"key":"2026032614365299900_ref511","first-page":"14 246","article-title":"Reco: Region-controlled text-to-image generation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref512","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.findings-emnlp.793","article-title":"Re-vilm: Retrieval-augmented visual language model for zero and few-shot image captioning","volume-title":"arXiv preprint arXiv:2302.04858","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref513","article-title":"Mm-react: Prompting chatgpt for multimodal reasoning and action","author":"Yang","year":"2023"},{"key":"2026032614365299900_ref514","first-page":"23 497","article-title":"Detclipv2: Scalable open-vocabulary object detection pre-training via word-region alignment","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Yao","year":"2023"},{"key":"2026032614365299900_ref515","article-title":"Detclip: 
Dictionary-enriched visual-concept paralleled pre-training for open-world detection","volume-title":"arXiv preprint arXiv:2209.09407","author":"Yao","year":"2022"},{"key":"2026032614365299900_ref516","article-title":"Filip: Fine-grained interactive language-image pre-training","volume-title":"ICLR","author":"Yao","year":"2022"},{"key":"2026032614365299900_ref517","article-title":"React: Synergizing reasoning and acting in language models","volume-title":"arXiv preprint arXiv:2210.03629","author":"Yao","year":"2022"},{"key":"2026032614365299900_ref518","article-title":"Retrieval-augmented multimodal language modeling","volume-title":"arXiv preprint arXiv:2211.12561","author":"Yasunaga","year":"2022"},{"key":"2026032614365299900_ref519","article-title":"Mplug-docowl: Modularized multimodal large language model for document understanding","volume-title":"arXiv preprint arXiv:2307.02499","author":"Ye","year":"2023"},{"key":"2026032614365299900_ref520","first-page":"10502","article-title":"Cross-modal self-attention network for referring image segmentation","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Ye","year":"2019"},{"key":"2026032614365299900_ref521","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2019.00637","article-title":"Unsupervised embedding learning via invariant and spreading instance feature","volume-title":"CVPR","author":"Ye","year":"2019"},{"key":"2026032614365299900_ref522","article-title":"Mplug-owl: Modularization empowers large language models with multimodality","volume-title":"arXiv preprint arXiv:2304.14178","author":"Ye","year":"2023"},{"key":"2026032614365299900_ref523","article-title":"Masked image modeling with denoising contrast","volume-title":"arXiv preprint arXiv:2205.09616","author":"Yi","year":"2022"},{"issue":"4","key":"2026032614365299900_ref524","doi-asserted-by":"crossref","DOI":"10.1145\/1177352.1177355","article-title":"Object tracking: A survey","volume":"38","author":"Yilmaz","year":"2006","journal-title":"ACM Computing Surveys (CSUR)"},{"key":"2026032614365299900_ref525","article-title":"A survey of knowledge-intensive nlp with pre-trained language models","volume-title":"arXiv preprint arXiv:2202.08772","author":"Yin","year":"2022"},{"key":"2026032614365299900_ref526","article-title":"Lamm: Language-assisted multimodal instruction-tuning dataset, framework, and benchmark","volume-title":"arXiv preprint arXiv:2306.06687","author":"Yin","year":"2023"},{"key":"2026032614365299900_ref527","doi-asserted-by":"crossref","first-page":"10718","DOI":"10.1609\/aaai.v35i12.17281","article-title":"Image-to-image retrieval by learning similarity between scene graphs","volume":"35","author":"Yoon","year":"2021","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"2026032614365299900_ref528","article-title":"Ferret: Refer and ground anything anywhere at any granularity","volume-title":"arXiv preprint arXiv:2310.07704","author":"You","year":"2023"},{"key":"2026032614365299900_ref529","article-title":"Vector-quantized image modeling with improved vqgan","volume-title":"arXiv preprint arXiv:2110.04627","author":"Yu","year":"2021"},{"key":"2026032614365299900_ref530","article-title":"Coca: Contrastive captioners are image-text foundation models","volume-title":"TMLR","author":"Yu","year":"2022"},{"key":"2026032614365299900_ref531","article-title":"Scaling autoregressive models for content-rich text-to-image generation","volume-title":"Transactions on Machine Learning 
Research","author":"Yu","year":"2022"},{"key":"2026032614365299900_ref532","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-319-46475-6_5","article-title":"Modeling context in referring expressions","volume-title":"ECCV","author":"Yu","year":"2016"},{"key":"2026032614365299900_ref533","article-title":"Scaling autoregressive multi-modal models: Pretraining and instruction tuning","author":"Yu","year":"2023"},{"key":"2026032614365299900_ref534","article-title":"Interactive data synthesis for systematic vision adaptation via llms-aigcs collaboration","volume-title":"arXiv preprint arXiv:2305.12799","author":"Yu","year":"2023"},{"key":"2026032614365299900_ref535","article-title":"Convolutions die hard: Open-vocabulary segmentation with single frozen convolutional clip","volume-title":"arXiv preprint arXiv:2308.02487","author":"Yu","year":"2023"},{"key":"2026032614365299900_ref536","article-title":"Inpaint anything: Segment anything meets image inpainting","volume-title":"arXiv preprint arXiv:2304.06790","author":"Yu","year":"2023"},{"key":"2026032614365299900_ref537","article-title":"Mm-vet: Evaluating large multimodal models for integrated capabilities","volume-title":"arXiv preprint arXiv:2308.02490","author":"Yu","year":"2023"},{"key":"2026032614365299900_ref538","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2019.00644","article-title":"Deep modular co-attention networks for visual question answering","volume-title":"CVPR","author":"Yu","year":"2019"},{"key":"2026032614365299900_ref539","article-title":"Florence: A new foundation model for computer vision","volume-title":"arXiv preprint arXiv:2111.11432","author":"Yuan","year":"2021"},{"key":"2026032614365299900_ref540","first-page":"3712","article-title":"Taskonomy: Disentangling task transfer learning","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Zamir","year":"2018"},{"key":"2026032614365299900_ref541","article-title":"Contextual object detection with multimodal large language models","volume-title":"arXiv preprint arXiv:2305.18279","author":"Zang","year":"2023"},{"key":"2026032614365299900_ref542","article-title":"Open-vocabulary detr with conditional matching","volume-title":"arXiv preprint arXiv:2203.11876","author":"Zang","year":"2022"},{"key":"2026032614365299900_ref543","article-title":"Open-vocabulary object detection using captions","volume-title":"CVPR","author":"Zareian","year":"2021"},{"key":"2026032614365299900_ref544","article-title":"Barlow twins: Self-supervised learning via redundancy reduction","volume-title":"ICML","author":"Zbontar","year":"2021"},{"key":"2026032614365299900_ref545","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2019.00688","article-title":"From recognition to cognition: Visual commonsense reasoning","volume-title":"CVPR","author":"Zellers","year":"2019"},{"key":"2026032614365299900_ref546","article-title":"Multi-grained vision language pre-training: Aligning texts with visual concepts","volume-title":"ICML","author":"Zeng","year":"2022"},{"key":"2026032614365299900_ref547","first-page":"22468","article-title":"Scenecomposer: Any-level semantic image synthesis","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Zeng","year":"2023"},{"key":"2026032614365299900_ref548","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01179","article-title":"Scaling vision 
transformers","volume-title":"CVPR","author":"Zhai","year":"2022"},{"key":"2026032614365299900_ref549","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.01100","article-title":"Sigmoid loss for language image pre-training","volume-title":"arXiv preprint arXiv:2303.15343","author":"Zhai","year":"2023"},{"key":"2026032614365299900_ref550","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01759","article-title":"Lit: Zero-shot transfer with locked-image text tuning","volume-title":"CVPR","author":"Zhai","year":"2022"},{"key":"2026032614365299900_ref551","doi-asserted-by":"crossref","DOI":"10.1109\/JSTSP.2020.2987728","article-title":"Multimodal intelligence: Representation learning, information fusion, and applications","volume-title":"JSTSP","author":"Zhang","year":"2020"},{"key":"2026032614365299900_ref552","article-title":"A survey on segment anything model (sam): Vision foundation model meets prompt engineering","volume-title":"arXiv preprint arXiv:2306.06211","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref553","article-title":"Text-to-image diffusion model in generative ai: A survey","volume-title":"arXiv preprint arXiv:2303.07909","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref554","article-title":"A comprehensive survey on segment anything model for vision and beyond","volume-title":"arXiv preprint arXiv:2305.08196","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref555","doi-asserted-by":"crossref","DOI":"10.18653\/v1\/2023.emnlp-demo.49","article-title":"Video-llama: An instruction-tuned audio-visual language model for video understanding","volume-title":"arXiv preprint arXiv:2306.02858","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref556","article-title":"Dino: Detr with improved denoising anchor boxes for end-to-end object detection","volume-title":"arXiv preprint arXiv:2203.03605","author":"Zhang","year":"2022"},{"key":"2026032614365299900_ref557","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.00100","article-title":"A simple framework for open-vocabulary segmentation and detection","volume-title":"arXiv preprint arXiv:2303.08131","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref558","volume-title":"Llava-grounding: Grounded visual chat with large multimodal models","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref559","article-title":"Glipv2: Unifying localization and vision-language understanding","volume-title":"ECCV","author":"Zhang","year":"2022"},{"key":"2026032614365299900_ref560","article-title":"Vision-language models for vision tasks: A survey","volume-title":"arXiv preprint arXiv:2304.00685","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref561","first-page":"297","article-title":"Generative domain-migration hashing for sketch-to-image retrieval","volume-title":"Proceedings of the European Conference on Computer Vision (ECCV)","author":"Zhang","year":"2018"},{"key":"2026032614365299900_ref562","doi-asserted-by":"crossref","DOI":"10.1109\/ICCV51070.2023.00355","article-title":"Adding conditional control to text-to-image diffusion models","volume-title":"arXiv preprint arXiv:2302.05543","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref563","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR46437.2021.00553","article-title":"VinVL: Revisiting visual representations in vision-language models","volume-title":"CVPR","author":"Zhang","year":"2021"},{"key":"2026032614365299900_ref564","article-title":"Personalize segment anything model 
with one shot","volume-title":"arXiv preprint arXiv:2305.03048","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref565","article-title":"Instruction tuning for large language models: A survey","volume-title":"arXiv preprint arXiv:2308.10792","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref566","article-title":"Gpt4roi: Instruction tuning large language model on region-of-interest","volume-title":"arXiv preprint arXiv:2307.03601","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref567","article-title":"Automl-gpt: Automatic machine learning with gpt","volume-title":"arXiv preprint arXiv:2305.02499","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref568","article-title":"Bertscore: Evaluating text generation with bert","volume-title":"arXiv preprint arXiv:1904.09675","author":"Zhang","year":"2019"},{"key":"2026032614365299900_ref569","article-title":"M3exam: A multilingual, multimodal, multilevel benchmark for examining large language models","volume-title":"arXiv preprint arXiv:2306.05179","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref570","article-title":"Pmc-vqa: Visual instruction tuning for medical visual question answering","volume-title":"arXiv preprint arXiv:2305.10415","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref571","article-title":"Hivit: Hierarchical vision transformer meets masked image modeling","volume-title":"arXiv preprint arXiv:2205.14949","author":"Zhang","year":"2022"},{"key":"2026032614365299900_ref572","article-title":"Cae v2: Context autoencoder with clip target","volume-title":"arXiv preprint arXiv:2211.09799","author":"Zhang","year":"2022"},{"key":"2026032614365299900_ref573","article-title":"Llavar: Enhanced visual instruction tuning for text-rich image understanding","volume-title":"arXiv preprint arXiv:2306.17107","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref574","doi-asserted-by":"crossref","DOI":"10.2139\/ssrn.4495221","article-title":"How segment anything model (sam) boost medical image segmentation?","volume-title":"arXiv preprint arXiv:2305.03678","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref575","article-title":"Recognize anything: A strong image tagging model","volume-title":"arXiv preprint arXiv:2306.03514","author":"Zhang","year":"2023"},{"key":"2026032614365299900_ref576","article-title":"Svit: Scaling up visual instruction tuning","volume-title":"arXiv preprint arXiv:2307.04087","author":"Zhao","year":"2023"},{"key":"2026032614365299900_ref577","article-title":"Uni-controlnet: All-in-one control to text-to-image diffusion models","volume-title":"arXiv preprint arXiv:2305.16322","author":"Zhao","year":"2023"},{"key":"2026032614365299900_ref578","article-title":"Bubogpt: Enabling visual grounding in multi-modal llms","volume-title":"arXiv preprint arXiv:2307.08581","author":"Zhao","year":"2023"},{"key":"2026032614365299900_ref579","article-title":"On evaluating adversarial robustness of large vision-language models","volume-title":"arXiv preprint arXiv:2305.16934","author":"Zhao","year":"2023"},{"key":"2026032614365299900_ref580","first-page":"12697","article-title":"Calibrate before use: Improving few-shot performance of language models","volume-title":"International Conference on Machine Learning","author":"Zhao","year":"2021"},{"key":"2026032614365299900_ref581","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52688.2022.01629","article-title":"Regionclip: Region-based language-image 
pretraining","volume-title":"CVPR","author":"Zhong","year":"2022"},{"key":"2026032614365299900_ref582","first-page":"16793","article-title":"Regionclip: Region-based language-image pretraining","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Zhong","year":"2022"},{"key":"2026032614365299900_ref583","first-page":"633","article-title":"Scene parsing through ade20k dataset","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Zhou","year":"2017"},{"key":"2026032614365299900_ref584","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-031-19815-1_40","article-title":"Extract free dense labels from clip","volume-title":"ECCV","author":"Zhou","year":"2022"},{"key":"2026032614365299900_ref585","article-title":"Lima: Less is more for alignment","volume-title":"arXiv preprint arXiv:2305.11206","author":"Zhou","year":"2023"},{"key":"2026032614365299900_ref586","article-title":"Navgpt: Explicit reasoning in vision-and-language navigation with large language models","volume-title":"arXiv preprint arXiv:2305.16986","author":"Zhou","year":"2023"},{"key":"2026032614365299900_ref587","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01061","article-title":"Non-contrastive learning meets language-image pre-training","volume-title":"CVPR","author":"Zhou","year":"2023"},{"key":"2026032614365299900_ref588","article-title":"Ibot: Image bert pre-training with online tokenizer","volume-title":"arXiv preprint arXiv:2111.07832","author":"Zhou","year":"2021"},{"key":"2026032614365299900_ref589","article-title":"Can sam segment polyps?","volume-title":"arXiv preprint arXiv:2304.07583","author":"Zhou","year":"2023"},{"key":"2026032614365299900_ref590","first-page":"350","volume-title":"European Conference on Computer Vision","author":"Zhou","year":"2022"},{"key":"2026032614365299900_ref591","article-title":"Lafite2: Few-shot text-to-image generation","volume-title":"arXiv preprint arXiv:2210.14124","author":"Zhou","year":"2022"},{"key":"2026032614365299900_ref592","first-page":"826","article-title":"Vision + language applications: A survey","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Zhou","year":"2023"},{"key":"2026032614365299900_ref593","unstructured":"D. Zhu, J. Chen, X. Shen, X. Li, and M. Elhoseiny, Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. 
arXiv: 2304.10592 [cs.CV]."},{"issue":"4","key":"2026032614365299900_ref594","doi-asserted-by":"crossref","first-page":"998","DOI":"10.1109\/TCSVT.2019.2899569","article-title":"Zero shot detection","volume":"30","author":"Zhu","year":"2019","journal-title":"IEEE Transactions on Circuits and Systems for Video Technology"},{"key":"2026032614365299900_ref595","first-page":"11693","article-title":"Don\u2019t even look once: Synthesizing features for zero-shot detection","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Zhu","year":"2020"},{"key":"2026032614365299900_ref596","article-title":"Multimodal c4: An open, billion-scale corpus of images interleaved with text","volume-title":"arXiv preprint arXiv:2304.06939","author":"Zhu","year":"2023"},{"key":"2026032614365299900_ref597","article-title":"Llava-phi: Efficient multi-modal assistant with small language model","volume-title":"arXiv preprint arXiv:2401.02330","author":"Zhu","year":"2024"},{"key":"2026032614365299900_ref598","doi-asserted-by":"crossref","unstructured":"Z. Zong, G. Song, and Y. Liu, Detrs with collaborative hybrid assignments training, 2023. arXiv: 2211.12860 [cs.CV].","DOI":"10.1109\/ICCV51070.2023.00621"},{"key":"2026032614365299900_ref599","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR52729.2023.01451","article-title":"Generalized decoding for pixel, image, and language","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Zou","year":"2023"},{"key":"2026032614365299900_ref600","article-title":"End-to-end instance edge detection","volume-title":"arXiv preprint arXiv:2204.02898","author":"Zou","year":"2022"},{"key":"2026032614365299900_ref601","article-title":"Segment everything everywhere all at once","volume-title":"arXiv preprint arXiv:2304.06718","author":"Zou","year":"2023"}],"container-title":["Foundations and Trends\u00ae in Computer Graphics and Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/ftcgv\/article-pdf\/16\/1-2\/1\/10901156\/0600000110en.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/www.emerald.com\/ftcgv\/article-pdf\/16\/1-2\/1\/10901156\/0600000110en.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T18:38:40Z","timestamp":1774550320000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.emerald.com\/ftcgv\/article\/16\/1-2\/1\/1320821\/Multimodal-Foundation-Models-From-Specialists-to"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,6]]},"references-count":601,"journal-issue":{"issue":"1-2","published-print":{"date-parts":[[2024,5,6]]}},"URL":"https:\/\/doi.org\/10.1561\/0600000110","relation":{},"ISSN":["1572-2740","1572-2759"],"issn-type":[{"value":"1572-2740","type":"print"},{"value":"1572-2759","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,5,6]]}}}