{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:36:37Z","timestamp":1761176197181,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>Pre-trained visual models provide strong representations, yet shifting their massive parameters to downstream tasks is costly. Many parameter-efficient fine-tuning methods have been proposed, most of which require only about 1% additional parameters to achieve comparable results. However, current solutions either treat all feature channels equally or detect saliency within individual layers, leaving many redundancies untouched. To address these issues, this paper proposes a new parameter-efficient fine-tuning method named \u201cGradient Selection Tuning\u201d (GST), which leverages gradients to capture cascading effects across successive channels. Instead of saliency detection, we compress redundancies for channel selection, since the computed gradient values exhibit much lower mutual information. Building on GST, we further design an Information-Guided Adapter following information bottleneck theory, which performs parameter compression while preserving task-specific features. Experimental results demonstrate that our method outperforms baseline methods while adding only 0.075M parameters to the ViT-B backbone. On domain generalization, our proposal also achieves strong performance in low-parameter scenarios.<\/jats:p>","DOI":"10.3233\/faia251069","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:50:39Z","timestamp":1761126639000},"source":"Crossref","is-referenced-by-count":0,"title":["Gradient Selection Tuning via Information Bottleneck"],"prefix":"10.3233","author":[{"given":"Xiaoxu","family":"Lin","sequence":"first","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]},{"given":"Wei","family":"Li","sequence":"additional","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]},{"given":"Junwei","family":"Zhu","sequence":"additional","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]},{"given":"Ni","family":"Xu","sequence":"additional","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]},{"given":"Honghui","family":"Xu","sequence":"additional","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]},{"given":"Jianwei","family":"Zheng","sequence":"additional","affiliation":[{"name":"Zhejiang University of Technology, Hangzhou, China"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA251069","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:50:40Z","timestamp":1761126640000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA251069"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia251069","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}