{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T22:13:51Z","timestamp":1776377631080,"version":"3.51.2"},"reference-count":56,"publisher":"Association for Computing Machinery (ACM)","issue":"7","license":[{"start":{"date-parts":[[2024,6,19]],"date-time":"2024-06-19T00:00:00Z","timestamp":1718755200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62222213, U22B2059, 62072423"],"award-info":[{"award-number":["62222213, U22B2059, 62072423"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2024,8,31]]},"abstract":"<jats:p>Multi-modal large language models (MLLMs), such as GPT-4, exhibit great comprehension capabilities on human instruction, as well as zero-shot ability on new downstream multi-modal tasks. To integrate the different modalities within a unified embedding space, previous MLLMs attempted to conduct visual instruction tuning with massive and high-quality image-text pair data, which requires substantial costs in data collection and training resources. In this article, we propose TOMGPT (Text-Only training Multi-modal GPT), a cost-effective MLLM tuned solely on easily accessible text data with much fewer resources. Along with pre-trained visual-linguistic coupled modality space (e.g., CLIP and ALIGN model), a text-only training strategy is devised to further project the aligned multi-modal latent space to that of LLM, endowing the LLM with visual comprehension capabilities in an efficient manner. Instead of enormous image-text training data required by previous MLLMs, we find that TOMGPT can be well-tuned with fewer yet diverse GPT-generated free-form text data, as we establish the semantic connection between LLM and pre-trained vision-language model. A quantitative evaluation is conducted on both MME and LVLM, which are recently released and extensively utilized MLLM benchmarks. The experiments reveal that TOMGPT achieved reliable performance compared to numerous models trained on a large amount of image-text pair data. 
Case studies are also presented, demonstrating TOMGPT\u2019s broad understanding and dialogue capabilities across diverse image categories.<\/jats:p>","DOI":"10.1145\/3654674","type":"journal-article","created":{"date-parts":[[2024,3,28]],"date-time":"2024-03-28T11:20:01Z","timestamp":1711624801000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":13,"title":["TOMGPT: Reliable Text-Only Training Approach for Cost-Effective Multi-modal Large Language Model"],"prefix":"10.1145","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7197-246X","authenticated-orcid":false,"given":"Yunkai","family":"Chen","sequence":"first","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9715-836X","authenticated-orcid":false,"given":"Qimeng","family":"Wang","sequence":"additional","affiliation":[{"name":"Xiaohongshu Inc., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3206-6827","authenticated-orcid":false,"given":"Shiwei","family":"Wu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-5960-1684","authenticated-orcid":false,"given":"Yan","family":"Gao","sequence":"additional","affiliation":[{"name":"Xiaohongshu Inc., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4246-5386","authenticated-orcid":false,"given":"Tong","family":"Xu","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-1274-7111","authenticated-orcid":false,"given":"Yao","family":"Hu","sequence":"additional","affiliation":[{"name":"Xiaohongshu Inc., Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,6,19]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"Jean-Baptiste Alayrac Jeff Donahue Pauline Luc Antoine Miech Iain Barr Yana Hasson Karel Lenc Arthur Mensch Katherine Millican Malcolm Reynolds Roman Ring Eliza Rutherford Serkan Cabi Tengda Han Zhitao Gong Sina Samangooei Marianne Monteiro Jacob L. Menick Sebastian Borgeaud Andy Brock Aida Nematzadeh Sahand Sharifzadeh Miko\u0142aj Bi\u0144kowski Ricardo Barreira Oriol Vinyals Andrew Zisserman and Kar\u00e9n Simonyan. 2022. Flamingo: A visual language model for few-shot learning. In Advances in Neural Information Processing Systems Curran Associates Inc. 35 (2022) 23716\u201323736."},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW56347.2022.00512"},{"key":"e_1_3_1_4_2","first-page":"1877","article-title":"Language models are few-shot learners","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. 
In Proceedings of the 34th International Conference on Advances in Neural Information Processing Systems, 1877\u20131901.","journal-title":"In Proceedings of the 34th International Conference on Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_5_2","article-title":"A survey on evaluation of large language models","author":"Chang Yupeng","year":"2023","unstructured":"Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang. 2023. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology 15, 3 (2023), 1\u201345.","journal-title":"ACM Transactions on Intelligent Systems and Technology"},{"key":"e_1_3_1_6_2","doi-asserted-by":"crossref","unstructured":"Zhihong Chen Guiming Chen Shizhe Diao Xiang Wan and Benyou Wang. 2023. On the difference of BERT-style and CLIP-style text encoders. In Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics Toronto Canada 13710\u201313721.","DOI":"10.18653\/v1\/2023.findings-acl.866"},{"key":"e_1_3_1_7_2","unstructured":"Wei-Lin Chiang Zhuohan Li Zi Lin Ying Sheng Zhanghao Wu Hao Zhang Lianmin Zheng Siyuan Zhuang Yonghao Zhuang Joseph E. Gonzalez Ion Stoica and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. Retrieved from https:\/\/lmsys.org\/blog\/2023-03-30-vicuna\/"},{"key":"e_1_3_1_8_2","unstructured":"Aakanksha Chowdhery Sharan Narang Jacob Devlin Maarten Bosma Gaurav Mishra Adam Roberts Paul Barham Hyung Won Chung Charles Sutton Sebastian Gehrmann Parker Schuh Kensen Shi Sasha Tsvyashchenko Joshua Maynez Abhishek Rao Parker Barnes Yi Tay Noam Shazeer Vinodkumar Prabhakaran Emily Reif Nan Du Ben Hutchinson Reiner Pope James Bradbury Jacob Austin Michael Isard Guy Gur-Ari Pengcheng Yin Toju Duke Anselm Levskaya Sanjay Ghemawat Sunipa Dev Henryk Michalewski Xavier Garcia Vedant Misra Kevin Robinson Liam Fedus Denny Zhou Daphne Ippolito David Luan Hyeontaek Lim Barret Zoph Alexander Spiridonov Ryan Sepassi David Dohan Shivani Agrawal Mark Omernick Andrew M. Dai Thanumalayan Sankaranarayana Pillai Marie Pellat Aitor Lewkowycz Erica Moreira Rewon Child Oleksandr Polozov Katherine Lee Zongwei Zhou Xuezhi Wang Brennan Saeta Mark Diaz Orhan Firat Michele Catasta Jason Wei Kathy Meier-Hellstern Douglas Eck Jeff Dean Slav Petrov and Noah Fiedel. 2023. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research 24 240 (2023) 1\u2013113."},{"key":"e_1_3_1_9_2","unstructured":"Hyung Won Chung Le Hou Shayne Longpre Barret Zoph Yi Tay William Fedus Yunxuan Li Xuezhi Wang Mostafa Dehghani Siddhartha Brahma Albert Webson Shixiang Shane Gu Zhuyun Dai Mirac Suzgun Xinyun Chen Aakanksha Chowdhery Alex Castro-Ros Marie Pellat Kevin Robinson Dasha Valter Sharan Narang Gaurav Mishra Adams Yu Vincent Zhao Yanping Huang Andrew Dai Hongkun Yu Slav Petrov Ed H. Chi Jeff Dean Jacob Devlin Adam Roberts Denny Zhou Quoc V. Le and Jason Wei. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research 25 70 (2024) 1\u201353."},{"key":"e_1_3_1_10_2","unstructured":"Wenliang Dai Junnan Li Dongxu Li Anthony Tiong Junqi Zhao Weisheng Wang Boyang Li Pascale N. Fung and Steven Hoi. 2023. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. In Advances in Neural Information Processing Systems Curran Associates Inc. 
36 (2023) 49250\u201349267."},{"key":"e_1_3_1_11_2","unstructured":"Jacob Devlin Ming-Wei Chang Kenton Lee and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 1 (Long and Short Papers). Association for Computational Linguistics Minneapolis Minnesota 4171\u20134186."},{"key":"e_1_3_1_12_2","unstructured":"Sivan Doveh Assaf Arbelle Sivan Harary Roei Herzig Donghyun Kim Paola Cascante-Bonilla Amit Alfassy Rameswar Panda Raja Giryes Rogerio Feris Shimon Ullman and Leonid Karlinsky. 2023. Dense and aligned captions (DAC) promote compositional reasoning in VL models. In Advances in Neural Information Processing Systems Curran Associates Inc. 36 (2023) 76137\u201376150."},{"key":"e_1_3_1_13_2","unstructured":"Zhengxiao Du Yujie Qian Xiao Liu Ming Ding Jiezhong Qiu Zhilin Yang and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics Dublin Ireland 320\u2013335."},{"key":"e_1_3_1_14_2","article-title":"MME: A comprehensive evaluation benchmark for multimodal large language models","author":"Fu Chaoyou","year":"2023","unstructured":"Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. 2023. MME: A comprehensive evaluation benchmark for multimodal large language models. arXiv:2306.13394 . Retrieved from https:\/\/arxiv.org\/abs\/2306.13394","journal-title":"arXiv:2306.13394"},{"key":"e_1_3_1_15_2","article-title":"MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models","author":"Fu Chaoyou","year":"2023","unstructured":"Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. 2023. MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models. Retrieved from https:\/\/github.com\/BradyFU\/Awesome-Multimodal-Large-Language-Models\/tree\/Evaluation","journal-title":"R"},{"key":"e_1_3_1_16_2","unstructured":"Samir Yitzhak Gadre Gabriel Ilharco Alex Fang Jonathan Hayase Georgios Smyrnis Thao Nguyen Ryan Marten Mitchell Wortsman Dhruba Ghosh Jieyu Zhang Eyal Orgad Rahim Entezari Giannis Daras Sarah Pratt Vivek Ramanujan Yonatan Bitton Kalyani Marathe Stephen Mussmann Richard Vencu Mehdi Cherti Ranjay Krishna Pang Wei W. Koh Olga Saukh Alexander J. Ratner Shuran Song Hannaneh Hajishirzi Ali Farhadi Romain Beaumont Sewoong Oh Alex Dimakis Jenia Jitsev Yair Carmon Vaishaal Shankar and Ludwig Schmidt. 2023. DataComp: In search of the next generation of multimodal datasets. In Advances in Neural Information Processing Systems Curran Associates Inc. 36 (2023) 27092\u201327112."},{"key":"e_1_3_1_17_2","article-title":"Llama-adapter v2: Parameter-efficient visual instruction model","author":"Gao Peng","year":"2023","unstructured":"Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. 2023. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv:2304.15010 . 
Retrieved from https:\/\/arxiv.org\/abs\/2304.15010","journal-title":"arXiv:2304.15010"},{"key":"e_1_3_1_18_2","doi-asserted-by":"crossref","unstructured":"Sophia Gu Christopher Clark and Aniruddha Kembhavi. 2023. I can\u2019t believe there\u2019s no images! Learning visual tasks using only language supervision. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV\u201923). 2672\u20132683.","DOI":"10.1109\/ICCV51070.2023.00252"},{"key":"e_1_3_1_19_2","doi-asserted-by":"crossref","unstructured":"Or Honovich Thomas Scialom Omer Levy and Timo Schick. 2023. Unnatural instructions: Tuning language models with (almost) no human labor. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics Toronto Canada 14409\u201314428.","DOI":"10.18653\/v1\/2023.acl-long.806"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.5143773"},{"key":"e_1_3_1_21_2","first-page":"4904","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Jia Chao","year":"2021","unstructured":"Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the International Conference on Machine Learning. PMLR, 4904\u20134916."},{"key":"e_1_3_1_22_2","doi-asserted-by":"crossref","unstructured":"Mike Lewis Yinhan Liu Naman Goyal Marjan Ghazvininejad Abdelrahman Mohamed Omer Levy Veselin Stoyanov and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation translation and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics 7871\u20137880.","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"e_1_3_1_23_2","article-title":"Otter: A multi-modal model with in-context instruction tuning","author":"Li Bo","year":"2023","unstructured":"Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023. Otter: A multi-modal model with in-context instruction tuning. arXiv:2305.03726 . Retrieved from https:\/\/arxiv.org\/abs\/2305.03726","journal-title":"arXiv:2305.03726"},{"key":"e_1_3_1_24_2","unstructured":"Junnan Li Dongxu Li Silvio Savarese and Steven Hoi. 2023. BLIP-2: Bootstrapping language-Image pre-training with frozen image encoders and large language models. In Proceedings of the 40th International Conference on Machine Learning (Proceedings of Machine Learning Research Vol. 202). PMLR 19730\u201319742."},{"key":"e_1_3_1_25_2","volume-title":"Proceedings of the 11th International Conference on Learning Representations","author":"Li Wei","year":"2022","unstructured":"Wei Li, Linchao Zhu, Longyin Wen, and Yi Yang. 2022. DeCap: Decoding CLIP latents for zero-shot captioning via text-only training. In Proceedings of the 11th International Conference on Learning Representations."},{"key":"e_1_3_1_26_2","unstructured":"Victor Weixin Liang Yuhui Zhang Yongchan Kwon Serena Yeung and James Y. Zou. 2022. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. In Advances in Neural Information Processing Systems Curran Associates Inc. 
35 (2022) 17612\u201317625."},{"key":"e_1_3_1_27_2","first-page":"74","volume-title":"Proceedings of the Text Summarization Branches Out","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the Text Summarization Branches Out. 74\u201381."},{"key":"e_1_3_1_28_2","first-page":"740","volume-title":"Proceedings of the Computer Vision\u2013ECCV 2014: 13th European Conference","author":"Lin Tsung-Yi","year":"2014","unstructured":"Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision\u2013ECCV 2014: 13th European Conference. Springer, 740\u2013755."},{"key":"e_1_3_1_29_2","unstructured":"Haotian Liu Chunyuan Li Qingyang Wu and Yong Jae Lee. 2023. Visual instruction tuning. In Advances in Neural Information Processing Systems Curran Associates Inc. 36 (2023) 34892\u201334916."},{"key":"e_1_3_1_30_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Loshchilov Ilya","year":"2018","unstructured":"Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_3_1_31_2","article-title":"Clipcap: Clip prefix for image captioning","author":"Mokady Ron","year":"2021","unstructured":"Ron Mokady, Amir Hertz, and Amit H. Bermano. 2021. Clipcap: Clip prefix for image captioning. arXiv:2111.09734 . Retrieved from https:\/\/arxiv.org\/abs\/2111.09734","journal-title":"arXiv:2111.09734"},{"key":"e_1_3_1_32_2","article-title":"Introducing chatgpt","year":"2022","unstructured":"OpenAI. 2022. Introducing chatgpt. Retrieved from https:\/\/openai.com\/blog\/chatgpt","journal-title":"R"},{"key":"e_1_3_1_33_2","unstructured":"OpenAI. 2023. GPT-4 Technical Report. arxiv:2303.08774 . Retrieved from https:\/\/arxiv.org\/abs\/2303.08774"},{"key":"e_1_3_1_34_2","unstructured":"Long Ouyang Jeffrey Wu Xu Jiang Diogo Almeida Carroll Wainwright Pamela Mishkin Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens Amanda Askell Peter Welinder Paul F. Christiano Jan Leike and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems Curran Associates Inc. 35 (2022) 27730\u201327744."},{"key":"e_1_3_1_35_2","first-page":"311","volume-title":"Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics","author":"Papineni Kishore","year":"2002","unstructured":"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 311\u2013318."},{"key":"e_1_3_1_36_2","first-page":"8748","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning. 
PMLR, 8748\u20138763."},{"key":"e_1_3_1_37_2","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans and Ilya Sutskever. 2018. Improving language understanding by generative pre-training."},{"key":"e_1_3_1_38_2","doi-asserted-by":"publisher","DOI":"10.5555\/3455716.3455856"},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_3_1_40_2","article-title":"Bloom: A 176b-parameter open-access multilingual language model","author":"Scao Teven Le","year":"2022","unstructured":"Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili\u0107, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha Luccioni, Fran\u00e7ois Yvon, Matthias Gall\u00e9, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Beno\u00eet Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Lauren\u00e7on, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo Gonz\u00e1lez Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, G\u00e9rard Dupont, Germ\u00e1n Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, J\u00f6rg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Mu\u00f1oz, Maraim Masoud, Mar\u00eda Grandury, Mario \u0160a\u0161ko, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, and Paulo Villegas. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv:2211.05100 . Retrieved from https:\/\/arxiv.org\/abs\/2211.05100","journal-title":"arXiv:2211.05100"},{"key":"e_1_3_1_41_2","article-title":"Tiny lvlm-ehub: Early multimodal experiments with bard","author":"Shao Wenqi","year":"2023","unstructured":"Wenqi Shao, Yutao Hu, Peng Gao, Meng Lei, Kaipeng Zhang, Fanqing Meng, Peng Xu, Siyuan Huang, Hongsheng Li, Yu Qiao, and Ping Luo. 2023. Tiny lvlm-ehub: Early multimodal experiments with bard. arXiv:2308.03729 . Retrieved from https:\/\/arxiv.org\/abs\/2308.03729","journal-title":"arXiv:2308.03729"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3479207"},{"key":"e_1_3_1_43_2","article-title":"Stanford Alpaca: An Instruction-following LLaMA model","author":"Taori Rohan","year":"2023","unstructured":"Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An Instruction-following LLaMA model. 
Retrieved from https:\/\/github.com\/tatsu-lab\/stanford_alpaca","journal-title":"R"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01739"},{"key":"e_1_3_1_45_2","article-title":"Llama: Open and efficient foundation language models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv:2302.13971 . Retrieved from https:\/\/arxiv.org\/abs\/2302.13971","journal-title":"arXiv:2302.13971"},{"key":"e_1_3_1_46_2","article-title":"Llama 2: Open foundation and fine-tuned chat models","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288 . Retrieved from https:\/\/arxiv.org\/abs\/2307.09288","journal-title":"arXiv:2307.09288"},{"issue":"6","key":"e_1_3_1_47_2","first-page":"1","article-title":"A survey on large language model based autonomous agents","volume":"18","author":"Wang Lei","year":"2024","unstructured":"Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science 18, 6 (2024), 1\u201326.","journal-title":"Frontiers of Computer Science"},{"key":"e_1_3_1_48_2","article-title":"Instruction in the Wild: A User-based Instruction Dataset","author":"Xue Fuzhao","year":"2023","unstructured":"Fuzhao Xue, Kabir Jain, Mahir Hitesh Shah, Zangwei Zheng, and Yang You. 2023. Instruction in the Wild: A User-based Instruction Dataset. Retrieved from https:\/\/github.com\/XueFuzhao\/InstructionWild.","journal-title":"R"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01857"},{"key":"e_1_3_1_50_2","article-title":"mplug-owl: Modularization empowers large language models with multimodality","author":"Ye Qinghao","year":"2023","unstructured":"Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. 2023. mplug-owl: Modularization empowers large language models with multimodality. arXiv:2304.14178 . 
Retrieved from https:\/\/arxiv.org\/abs\/2304.14178","journal-title":"arXiv:2304.14178"},{"key":"e_1_3_1_51_2","volume-title":"Proceedings of the 11th International Conference on Learning Representations","author":"Yuksekgonul Mert","year":"2022","unstructured":"Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, and James Zou. 2022. When and why vision-language models behave like bags-of-words, and what to do about it?. In Proceedings of the 11th International Conference on Learning Representations."},{"key":"e_1_3_1_52_2","article-title":"What matters in training a GPT4-style language model with multimodal inputs?","author":"Zeng Yan","year":"2023","unstructured":"Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. 2023. What matters in training a GPT4-style language model with multimodal inputs? arXiv:2307.02469 . Retrieved from https:\/\/arxiv.org\/abs\/2307.02469","journal-title":"arXiv:2307.02469"},{"key":"e_1_3_1_53_2","article-title":"Opt: Open pre-trained transformer language models","author":"Zhang Susan","year":"2022","unstructured":"Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models. arXiv:2205.01068 . Retrieved from https:\/\/arxiv.org\/abs\/2205.01068","journal-title":"arXiv:2205.01068"},{"key":"e_1_3_1_54_2","article-title":"A survey of large language models","author":"Zhao Wayne Xin","year":"2023","unstructured":"Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. arXiv:2303.18223 . Retrieved from https:\/\/arxiv.org\/abs\/2303.18223","journal-title":"arXiv:2303.18223"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01631"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-022-01653-1"},{"key":"e_1_3_1_57_2","unstructured":"Deyao Zhu Jun Chen Xiaoqian Shen Xiang Li and Mohamed Elhoseiny. 2024. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. 
In The Twelfth International Conference on Learning Representations."}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3654674","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3654674","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:49:04Z","timestamp":1750286944000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3654674"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,19]]},"references-count":56,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,8,31]]}},"alternative-id":["10.1145\/3654674"],"URL":"https:\/\/doi.org\/10.1145\/3654674","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"value":"1556-4681","type":"print"},{"value":"1556-472X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,19]]},"assertion":[{"value":"2023-09-15","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-17","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}