{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,2]],"date-time":"2025-08-02T14:31:10Z","timestamp":1754145070686,"version":"3.41.2"},"publisher-location":"New York, NY, USA","reference-count":16,"publisher":"ACM","funder":[{"name":"Sichuan Central-Guided Local Science and Technology Development","award":["2023ZYD0165"],"award-info":[{"award-number":["2023ZYD0165"]}]},{"name":"the Natural Science Foundation of Sichuan Province","award":["25QNJJ2131"],"award-info":[{"award-number":["25QNJJ2131"]}]},{"name":"the China Postdoctoral Science Foundation","award":["2024M760357"],"award-info":[{"award-number":["2024M760357"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,1,10]]},"DOI":"10.1145\/3725899.3725924","type":"proceedings-article","created":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T11:33:04Z","timestamp":1752233584000},"page":"165-169","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Dynamic Layer-Wise Strategy for Parameter-Efficient Fine-Tuning"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-7711-086X","authenticated-orcid":false,"given":"Guowei","family":"Peng","sequence":"first","affiliation":[{"name":"University of Electronic Science and Technology of China, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-1257-9933","authenticated-orcid":false,"given":"Yuning","family":"Yang","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4839-0234","authenticated-orcid":false,"given":"Dongyang","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, Chengdu, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3720-4379","authenticated-orcid":false,"given":"Xiurui","family":"Xie","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-2950-2229","authenticated-orcid":false,"given":"Anning","family":"Jiang","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China, Chengdu, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-4896-2958","authenticated-orcid":false,"given":"Fangyi","family":"Ding","sequence":"additional","affiliation":[{"name":"Southwestern University of Finance and Economics, Chengdu, China"}]}],"member":"320","published-online":{"date-parts":[[2025,7,11]]},"reference":[{"key":"e_1_3_3_1_2_2","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared\u00a0D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et\u00a0al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877\u20131901."},{"key":"e_1_3_3_1_3_2","doi-asserted-by":"crossref","unstructured":"Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. 2023. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696 (2023).","DOI":"10.18653\/v1\/2023.emnlp-main.252"},{"key":"e_1_3_3_1_4_2","unstructured":"Demi Guo, Alexander\u00a0M Rush, and Yoon Kim. 2020. Parameter-efficient transfer learning with diff pruning. arXiv preprint arXiv:2012.07463 (2020)."},{"key":"e_1_3_3_1_5_2","unstructured":"Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. 
arXiv preprint arXiv:2111.09543 (2021)."},{"key":"e_1_3_3_1_6_2","unstructured":"Edward\u00a0J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et\u00a0al. 2022. LoRA: Low-rank adaptation of large language models. In ICLR (2022)."},{"key":"e_1_3_3_1_7_2","volume-title":"Proceedings of NAACL-HLT","volume":"1","author":"Jacob Devlin","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, Vol.\u00a01. Minneapolis, Minnesota."},{"key":"e_1_3_3_1_8_2","unstructured":"Dawid\u00a0J Kopiczko, Tijmen Blankevoort, and Yuki\u00a0M Asano. 2023. VeRA: Vector-based random matrix adaptation. arXiv preprint arXiv:2310.11454 (2023)."},{"key":"e_1_3_3_1_9_2","doi-asserted-by":"crossref","unstructured":"Junyi Li, Tianyi Tang, Wayne\u00a0Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2024. Pre-trained language models for text generation: A survey. Comput. Surveys 56, 9 (2024), 1\u201339.","DOI":"10.1145\/3649449"},{"key":"e_1_3_3_1_10_2","unstructured":"Xinyin Ma, Gongfan Fang, and Xinchao Wang. 2023. LLM-Pruner: On the structural pruning of large language models. Advances in Neural Information Processing Systems 36 (2023), 21702\u201321720."},{"key":"e_1_3_3_1_11_2","doi-asserted-by":"crossref","unstructured":"Mourad Mars. 2022. From word embeddings to pre-trained language models: A state-of-the-art walkthrough. Applied Sciences 12, 17 (2022), 8805.","DOI":"10.3390\/app12178805"},{"key":"e_1_3_3_1_12_2","doi-asserted-by":"crossref","unstructured":"Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran\u00a0Ben Veyseh, Thien\u00a0Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2023. Recent advances in natural language processing via large pre-trained language models: A survey. Comput. 
Surveys 56, 2 (2023), 1\u201340.","DOI":"10.1145\/3605943"},{"key":"e_1_3_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/3458817.3476209"},{"key":"e_1_3_3_1_14_2","doi-asserted-by":"crossref","unstructured":"Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don\u2019t know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822 (2018).","DOI":"10.18653\/v1\/P18-2124"},{"key":"e_1_3_3_1_15_2","unstructured":"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et\u00a0al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)."},{"key":"e_1_3_3_1_16_2","doi-asserted-by":"crossref","unstructured":"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel\u00a0R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018).","DOI":"10.18653\/v1\/W18-5446"},{"key":"e_1_3_3_1_17_2","unstructured":"Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023. AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. 
arXiv preprint arXiv:2303.10512 (2023)."}],"event":{"name":"ICSIM 2025: 2025 The 8th International Conference on Software Engineering and Information Management","acronym":"ICSIM 2025","location":"Singapore, Singapore"},"container-title":["Proceedings of the 2025 8th International Conference on Software Engineering and Information Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3725899.3725924","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,16]],"date-time":"2025-07-16T11:33:39Z","timestamp":1752665619000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3725899.3725924"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,10]]},"references-count":16,"alternative-id":["10.1145\/3725899.3725924","10.1145\/3725899"],"URL":"https:\/\/doi.org\/10.1145\/3725899.3725924","relation":{},"subject":[],"published":{"date-parts":[[2025,1,10]]},"assertion":[{"value":"2025-07-11","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}