{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,8]],"date-time":"2026-03-08T20:34:02Z","timestamp":1773002042601,"version":"3.50.1"},"reference-count":46,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T00:00:00Z","timestamp":1760659200000},"content-version":"vor","delay-in-days":289,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/doi.wiley.com\/10.1002\/tdm_license_1.1"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61702516"],"award-info":[{"award-number":["61702516"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["International Journal of Intelligent Systems"],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:p>Current\u200b fine\u2010tuning techniques for large pretrained language models (LLMs) face significant challenges, particularly regarding the high computational costs associated with adapting billions of parameters and their limitations in effectively addressing diverse language understanding tasks. These methods often result in an inability to manage inter\u2010task dependencies effectively, leading to underutilization of inter\u2010task information. To address these issues, we propose tasks\u2010embedded reparameterization (TER), a novel parameter\u2010efficient fine\u2010tuning framework that exploits multitask learning to enhance task\u2010specific capabilities. The TER model integrates prompt tuning and multitask reparameterization, merging task\u2010specific experts and hidden states of target tasks in a unified model framework. Furthermore, it employs a dynamic, task\u2010oriented gating mechanism to optimize the prompts output by the model. This method dynamically adjusts the parameters according to the differing requirements of the task, ensuring that the model optimally adjusts the parameters according to the specific requirements of the task, so that the task can find a suitable balance between different tasks and improve knowledge sharing and task adaptability. 
Experimental evaluations using the SuperGLUE benchmark demonstrate that TER consistently outperforms existing parameter\u2010efficient fine\u2010tuning techniques in both performance and computational efficiency, offering a promising solution for task\u2010specific language understanding in both research and industry.<\/jats:p>","DOI":"10.1155\/int\/1688391","type":"journal-article","created":{"date-parts":[[2025,10,18]],"date-time":"2025-10-18T01:43:13Z","timestamp":1760751793000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Tasks\u2010Embedded Reparameterization: A Novel Framework for Task\u2010Specific Transfer Enhancement With Multitask Prompt Learning"],"prefix":"10.1155","volume":"2025","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7087-1077","authenticated-orcid":false,"given":"Jingjing","family":"Liu","sequence":"first","affiliation":[]},{"given":"Yishuai","family":"Song","sequence":"additional","affiliation":[]},{"given":"Rui","family":"Jiang","sequence":"additional","affiliation":[]},{"given":"Yi","family":"Feng","sequence":"additional","affiliation":[]},{"given":"Mo","family":"Tao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3401-1771","authenticated-orcid":false,"given":"Yinlin","family":"Li","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2025,10,17]]},"reference":[{"key":"e_1_2_12_1_2","first-page":"1","article-title":"A Survey of Knowledge Enhanced Pre-Trained Language Models","author":"Hu L.","year":"2023","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"e_1_2_12_2_2","article-title":"Multilingual Large Language Models: A Systematic Survey","author":"Zhu S.","year":"2024","journal-title":"CoRR abs\/2411"},{"key":"e_1_2_12_3_2","article-title":"Enhancing Aspect-Based Sentiment Analysis in Tourism Using Large Language Models and Positional Information","author":"Xu C.","year":"2024","journal-title":"CoRR abs\/2409"},{"key":"e_1_2_12_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2025.113052"},{"key":"e_1_2_12_5_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.emnlp-main.1015"},{"key":"e_1_2_12_6_2","article-title":"Fine-Tuning Language Models With Just Forward Passes","volume":"36","author":"Malladi S.","year":"2024","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_12_7_2","doi-asserted-by":"crossref","unstructured":"Zhu S. Pan L. Li B. and Xiong D. Landermt: Detecting and Routing Language-Aware Neurons for Selectively Finetuning Llms to Machine Translation Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics August 2024.","DOI":"10.18653\/v1\/2024.acl-long.656"},{"key":"e_1_2_12_8_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.findings-acl.273"},{"key":"e_1_2_12_9_2","doi-asserted-by":"crossref","unstructured":"Asai A. Salehi M. Peters M. E. and Hajishirzi H. Attempt: Parameter-Efficient Multi-Task Tuning via Attentional Mixtures of Soft Prompts Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing June 2022 6655\u20136672.","DOI":"10.18653\/v1\/2022.emnlp-main.446"},{"key":"e_1_2_12_10_2","unstructured":"Kim J. Heo J. Shin H. Lim C. and Yu H. 
Integrated Parameter-Efficient Tuning for General-Purpose Audio Models 2022."},{"key":"e_1_2_12_11_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-023-00626-4"},{"key":"e_1_2_12_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2025.104078"},{"key":"e_1_2_12_13_2","doi-asserted-by":"publisher","DOI":"10.3390\/e24030432"},{"key":"e_1_2_12_14_2","article-title":"Don\u2019t Stop Pretraining? Make Prompt-Based Fine-Tuning Powerful Learner","volume":"36","author":"Shi Z.","year":"2024","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_12_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.aiopen.2021.08.002"},{"key":"e_1_2_12_16_2","article-title":"Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning","author":"Lou R.","year":"2023","journal-title":"arXiv preprint arXiv:2303.10475"},{"key":"e_1_2_12_17_2","first-page":"4582","article-title":"Prefix-Tuning: Optimizing Continuous Prompts for Generation","author":"Li X. L.","year":"2021","journal-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"},{"key":"e_1_2_12_18_2","unstructured":"Sanh V. Webson A. Raffel C. et al. Multitask Prompted Training Enables Zero-Shot Task Generalization International Conference on Learning Representations May 2021."},{"key":"e_1_2_12_19_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-981-96-6603-4_11"},{"key":"e_1_2_12_20_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-acl.421"},{"key":"e_1_2_12_21_2","doi-asserted-by":"crossref","unstructured":"Liu X. Zheng Y. Du Z. et al. Gpt Understands, Too AI Open 2023.","DOI":"10.1016\/j.aiopen.2023.08.012"},{"key":"e_1_2_12_22_2","first-page":"5485","article-title":"Exploring the Limits of Transfer Learning With a Unified Text-To-Text Transformer","volume":"21","author":"Raffel C.","year":"2020","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_12_23_2","doi-asserted-by":"publisher","DOI":"10.1177\/0165551521990616"},{"key":"e_1_2_12_24_2","unstructured":"Wang Z. Panda R. Karlinsky L. Feris R. Sun H. and Kim Y. Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning The Eleventh International Conference on Learning Representations March 2022."},{"key":"e_1_2_12_25_2","first-page":"5232","article-title":"Switch Transformers: Scaling to Trillion Parameter Models With Simple and Efficient Sparsity","volume":"23","author":"Fedus W.","year":"2022","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_12_26_2","unstructured":"Li B. Shen Y. Yang J. et al. Sparse Mixture-of-Experts Are Domain Generalizable Learners 2022."},{"key":"e_1_2_12_27_2","unstructured":"Pfeiffer J. Kamath A. R\u00fcckl\u00e9 A. Cho K. and Gurevych I. Adapterfusion: Non-Destructive Task Composition for Transfer Learning 2020."},{"key":"e_1_2_12_28_2","unstructured":"Zhao H. Luo H. Zhao Y. Wang P. Wang F. and Shou M. Z. Revisit Parameter-Efficient Transfer Learning: A Two-Stage Paradigm 2023."},{"key":"e_1_2_12_29_2","first-page":"12991","article-title":"Lst: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning","volume":"35","author":"Sung Y.-L.","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_12_30_2","unstructured":"Hu E. J. Wallis P. Allen-Zhu Z. et al. 
Lora: Low-Rank Adaptation of Large Language Models International Conference on Learning Representations April 2021."},{"key":"e_1_2_12_31_2","article-title":"Superglue: A Stickier Benchmark for General-Purpose Language Understanding Systems","volume":"32","author":"Wang A.","year":"2019","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_12_32_2","first-page":"107","article-title":"The Commitmentbank: Investigating Projection in Naturally Occurring Discourse","volume":"23","author":"De Marneffe M.-C.","year":"2019","journal-title":"Proceedings of Sinn und Bedeutung"},{"key":"e_1_2_12_33_2","unstructured":"Levesque H. Davis E. and Morgenstern L. The Winograd Schema Challenge Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning May 2012."},{"key":"e_1_2_12_34_2","volume-title":"2011 AAAI Spring Symposium Series","author":"Roemmele M.","year":"2011"},{"key":"e_1_2_12_35_2","doi-asserted-by":"crossref","unstructured":"Giampiccolo D. Magnini B. Dagan I. and Dolan B. The Third Pascal Recognizing Textual Entailment Challenge Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing July 2007 1\u20139 https:\/\/doi.org\/10.3115\/1654536.1654538.","DOI":"10.3115\/1654536.1654538"},{"key":"e_1_2_12_36_2","unstructured":"Pilehvar M. T. and Camacho-Collados J. Wic: the Word-In-Context Dataset for Evaluating Context-Sensitive Meaning Representations Proceedings of NAACL-HLT August 2019 1267\u20131273."},{"key":"e_1_2_12_37_2","unstructured":"Clark C. Lee K. Chang M.-W. Kwiatkowski T. Collins M. and Toutanova K. Boolq: Exploring the Surprising Difficulty of Natural Yes\/No Questions Proceedings of NAACL-HLT March 2019 2924\u20132936."},{"key":"e_1_2_12_38_2","first-page":"252","article-title":"Looking Beyond the Surface: A Challenge Set for Reading Comprehension Over Multiple Sentences","author":"Khashabi D.","year":"2018","journal-title":"Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"},{"key":"e_1_2_12_39_2","unstructured":"Zhang S. Liu X. Liu J. Gao J. Duh K. and Van Durme B. ReCoRD: Bridging the Gap Between Human and Machine Commonsense Reading Comprehension 2023."},{"key":"e_1_2_12_40_2","unstructured":"He J. Zhou C. Ma X. Berg-Kirkpatrick T. and Neubig G. Towards a Unified View of Parameter-Efficient Transfer Learning International Conference on Learning Representations July 2021."},{"key":"e_1_2_12_41_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-emnlp.584"},{"key":"e_1_2_12_42_2","doi-asserted-by":"crossref","unstructured":"Lee H. Jeong M. Yun S.-Y. and Kim K.-E. Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning 2024.","DOI":"10.18653\/v1\/2023.findings-emnlp.329"},{"key":"e_1_2_12_43_2","doi-asserted-by":"crossref","unstructured":"Weller O. Seppi K. and Gardner M. When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-trained Encoder Transfer Learning Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) April 2022 272\u2013282 https:\/\/doi.org\/10.18653\/v1\/2022.acl-short.30.","DOI":"10.18653\/v1\/2022.acl-short.30"},{"key":"e_1_2_12_44_2","doi-asserted-by":"crossref","unstructured":"R\u00fcckl\u00e9 A. Geigle G. Glockner M. et al. 
Adapterdrop: On the Efficiency of Adapters in Transformers Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing June 2021 7930\u20137946.","DOI":"10.18653\/v1\/2021.emnlp-main.626"},{"key":"e_1_2_12_45_2","unstructured":"Vu T. Lester B. Constant N. Al-Rfou R. and Cer D. Spot: Better Frozen Model Adaptation through Soft Prompt Transfer."},{"key":"e_1_2_12_46_2","unstructured":"Le Scao T. Fan A. Akiki C. et al. Bloom: A 176b-Parameter Open-Access Multilingual Language Model 2023."}],"container-title":["International Journal of Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/int\/1688391","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1155\/int\/1688391","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/int\/1688391","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,8]],"date-time":"2026-03-08T18:09:16Z","timestamp":1772993356000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/int\/1688391"}},"subtitle":[],"editor":[{"given":"Richard","family":"Murray","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2025,1]]},"references-count":46,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["10.1155\/int\/1688391"],"URL":"https:\/\/doi.org\/10.1155\/int\/1688391","archive":["Portico"],"relation":{},"ISSN":["0884-8173","1098-111X"],"issn-type":[{"value":"0884-8173","type":"print"},{"value":"1098-111X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1]]},"assertion":[{"value":"2025-03-05","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-03","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-10-17","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"1688391"}}