{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T04:04:04Z","timestamp":1773806644409,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"39","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Adapting large language models (LLMs) to new languages is an expensive and opaque process. Understanding how language models acquire new languages and multilingual abilities is key to achieve efficient adaptation. Prior work on multilingual interpretability research focuses primarily on how trained models process multilingual instructions, leaving unexplored the mechanisms through which they acquire new languages during training. We investigate these training dynamics on decoder-only transformers through the lens of two functional cognitive specializations: language perception (input comprehension) and production (output generation). Through experiments on low-resource languages, we demonstrate how perceptual and productive specialization emerges in different regions of a language model by running layer ablation sweeps from the model\u2019s input and output directions. Based on the observed specialization patterns, we propose CogSym, a layer-wise heuristic that enables effective adaptation by exclusively finetuning a few early and late layers. We show that tuning only the 25% outermost layers achieves downstream task performance within 2\u20133% deviation from the full finetuning baseline. Unlike similar layer-selection methods, the proposed method requires no extra data or computation while retaining comparable performance, which is especially beneficial for low-resource languages. CogSym yields consistent performance with adapter methods such as LoRA, showcasing generalization beyond full finetuning. 
These findings provide insights to better understand how LLMs learn new languages and push toward accessible and inclusive language modeling.<\/jats:p>","DOI":"10.1609\/aaai.v40i39.40568","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:03:33Z","timestamp":1773803013000},"page":"32875-32883","source":"Crossref","is-referenced-by-count":0,"title":["Positional Cognitive Specialization: Where Do LLMs Learn to Comprehend and Speak Your Language?"],"prefix":"10.1609","volume":"40","author":[{"given":"Luis Frentzen","family":"Salim","sequence":"first","affiliation":[]},{"given":"Lun-Wei","family":"Ku","sequence":"additional","affiliation":[]},{"given":"Hsing-Kuo Kenneth","family":"Pao","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40568\/44529","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/40568\/44529","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T03:03:34Z","timestamp":1773803014000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/40568"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"39","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i39.40568","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}
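The layer ablation sweeps mentioned in the abstract can be pictured concretely: skip a growing prefix (or suffix) of transformer blocks and track how a quality metric degrades, revealing where perception- and production-critical layers sit. The sketch below only illustrates that idea under assumptions the abstract does not confirm: treating a skipped block as an identity map, the `SkippableStack` wrapper, and the `evaluate` callable are all hypothetical, and the paper's actual ablation procedure may differ.

```python
# Hedged sketch of a layer ablation sweep: disable k blocks from the input
# side (or output side) for k = 0..n and record an evaluation score each time.
import torch
import torch.nn as nn


class SkippableStack(nn.Module):
    """A stack of blocks where selected indices are skipped (identity)."""

    def __init__(self, blocks: nn.ModuleList):
        super().__init__()
        self.blocks = blocks
        self.skipped: set[int] = set()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for idx, block in enumerate(self.blocks):
            if idx not in self.skipped:  # skipped blocks act as identity
                x = block(x)
        return x


def ablation_sweep(stack: SkippableStack, evaluate, from_input: bool = True):
    """Return scores[k] = metric with k blocks ablated from one direction.

    `evaluate` is a hypothetical callable mapping the model to a scalar
    metric (e.g., perplexity on a held-out low-resource corpus).
    """
    n = len(stack.blocks)
    scores = []
    for k in range(n + 1):
        stack.skipped = set(range(k)) if from_input else set(range(n - k, n))
        scores.append(evaluate(stack))
    stack.skipped = set()  # restore the unablated model
    return scores
```

Comparing the two resulting curves (input-direction vs. output-direction sweeps) is one plausible way to localize perceptual vs. productive specialization, in the spirit of the abstract's description.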
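The CogSym heuristic itself (finetune only the outermost 25% of layers, leaving the middle frozen) is simple enough to sketch. The snippet below assumes the 25% budget is split evenly between the earliest and latest blocks, which the abstract does not state, and the function names are illustrative rather than the authors' API; treatment of embeddings and the LM head is likewise left open here.

```python
# Minimal sketch of a CogSym-style freeze: unfreeze only the outermost
# ~25% of decoder blocks, split between the input and output ends.
import torch.nn as nn


def select_outermost_layers(num_layers: int, fraction: float = 0.25) -> set[int]:
    """Indices of the earliest and latest blocks covering `fraction` of depth."""
    k = max(1, round(num_layers * fraction / 2))  # blocks per end (assumed symmetric)
    return set(range(k)) | set(range(num_layers - k, num_layers))


def apply_cogsym_freeze(blocks: nn.ModuleList, fraction: float = 0.25) -> None:
    """Freeze every block except the selected outermost ones."""
    trainable = select_outermost_layers(len(blocks), fraction)
    for idx, block in enumerate(blocks):
        for param in block.parameters():
            param.requires_grad = idx in trainable


if __name__ == "__main__":
    # Toy stand-in for a 32-block decoder: with fraction=0.25, blocks
    # 0-3 and 28-31 stay trainable while the middle 24 are frozen.
    blocks = nn.ModuleList(nn.Linear(8, 8) for _ in range(32))
    apply_cogsym_freeze(blocks)
    print(sorted(select_outermost_layers(32)))  # [0, 1, 2, 3, 28, 29, 30, 31]
```

Because the heuristic only toggles `requires_grad`, it composes naturally with either full finetuning or adapter methods such as LoRA (restricting where adapters are inserted), consistent with the abstract's claim that CogSym generalizes beyond full finetuning.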