{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:35:19Z","timestamp":1761176119441,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>Long-tailed recognition (LTR) has seen a surge in the level of attention it receives due to its practical value. Fine-tuning vision-language models (VLMs) has garnered significant attention among the various long-tailed approaches available, with foundation models thriving. While parameter-efficient fine-tuning (PEFT) methods such as adapter and visual prompt tuning (VPT) exhibit strong performance in long-tailed recognition, low-rank adaptation (LoRA), which is prominent in large language models (LLMs), fails to achieve comparable effectiveness in this context. To address the challenge, we introduce LotoRA, a groundbreaking long-tailed low-rank adaptation module. By leveraging diagonal blocks, LotoRA effectively enhances the rank while simultaneously reducing the number of parameters. This innovative approach overcomes the parameter limitations of traditional LoRA, enabling more efficient and targeted learning. Complementing this, we integrate Semantic Attention Pooling into the vision encoder and Semantic Prompt Embedding into the text encoder. These two components synergistically enhance the model\u2019s capacity to represent tail classes by extracting more profound semantic features, effectively addressing the information deficiency often associated with tail categories in long-tailed datasets. Experimental results demonstrate that our method outperforms existing state-of-the-art (SOTA) approaches based on PEFT. 
Furthermore, our approach provides new insights into parameter-efficient adaptation for long-tailed recognition tasks with foundation models.<\/jats:p>","DOI":"10.3233\/faia250811","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:42:54Z","timestamp":1761126174000},"source":"Crossref","is-referenced-by-count":0,"title":["Rethinking the Effect of LoRA in Foundation Models for Long-Tailed Recognition"],"prefix":"10.3233","author":[{"given":"Haowei","family":"Liu","sequence":"first","affiliation":[{"name":"Soochow University"}]},{"given":"Shijia","family":"Sun","sequence":"additional","affiliation":[{"name":"Soochow University"}]},{"given":"Liang","family":"Chen","sequence":"additional","affiliation":[{"name":"Soochow University"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA250811","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:42:54Z","timestamp":1761126174000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA250811"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia250811","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}
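
The record does not include the authors' code, and the abstract does not give LotoRA's exact construction, so the sketch below is only one plausible reading of the diagonal-block idea, in PyTorch: the low-rank update is a block-diagonal matrix built from g independent rank-r factor pairs. The parameter count is then r * (in_features + out_features), the same as a single rank-r LoRA, while the update's rank can reach g * r. The class name DiagonalBlockLoRA and all hyperparameter choices here are hypothetical, not the paper's.

```python
import torch
import torch.nn as nn


class DiagonalBlockLoRA(nn.Module):
    """Hypothetical diagonal-block low-rank adapter (not the authors' code).

    The weight update is block-diagonal: g independent rank-r factor
    pairs, one per (out_features/g x in_features/g) block.  Parameters:
    r * (in_features + out_features), as in plain rank-r LoRA, but the
    update's rank can reach g * r.
    """

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 2, blocks: int = 4, alpha: float = 4.0):
        super().__init__()
        assert in_features % blocks == 0 and out_features % blocks == 0
        self.g = blocks
        in_b = in_features // blocks
        out_b = out_features // blocks
        # Per-block down/up projections; B is zero-initialized so the
        # adapter contributes nothing at the start (standard LoRA practice).
        self.A = nn.Parameter(torch.randn(blocks, rank, in_b) * 0.01)
        self.B = nn.Parameter(torch.zeros(blocks, out_b, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        *lead, _ = x.shape
        xg = x.view(*lead, self.g, -1)                    # (..., g, in_b)
        h = torch.einsum('...gi,gri->...gr', xg, self.A)  # down-project per block
        y = torch.einsum('...gr,gor->...go', h, self.B)   # up-project per block
        return y.reshape(*lead, -1) * self.scaling        # (..., out_features)


# Usage sketch: add the adapter output to a frozen base projection.
base = nn.Linear(768, 768)
for p in base.parameters():
    p.requires_grad_(False)
lora = DiagonalBlockLoRA(768, 768, rank=2, blocks=4)
x = torch.randn(8, 768)
y = base(x) + lora(x)
```

Under this reading, matching a standard LoRA of rank R with g blocks of rank R/g costs g times fewer parameters, which would be consistent with the abstract's claim of enhancing rank while reducing parameters; the actual LotoRA construction may differ.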