{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T05:02:49Z","timestamp":1750309369386,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":31,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,3,31]],"date-time":"2025-03-31T00:00:00Z","timestamp":1743379200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-sa\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,3,31]]},"DOI":"10.1145\/3672608.3707740","type":"proceedings-article","created":{"date-parts":[[2025,5,14]],"date-time":"2025-05-14T18:26:21Z","timestamp":1747247181000},"page":"936-944","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["A Comparison of the Effects of Model Adaptation Techniques on Large Language Models for Non-Linguistic and Linguistic Tasks"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-9797-5505","authenticated-orcid":false,"given":"Khoa","family":"Nguyen","sequence":"first","affiliation":[{"name":"University of Texas at San Antonio, San Antonio, TX, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5384-3162","authenticated-orcid":false,"given":"Sadia","family":"Jahan","sequence":"additional","affiliation":[{"name":"University of Texas at San Antonio, San Antonio, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1283-2595","authenticated-orcid":false,"given":"Rocky","family":"Slavin","sequence":"additional","affiliation":[{"name":"University of Texas at San Antonio, San Antonio, TX, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,5,14]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"2024. Accuracy. https:\/\/lichess.org\/page\/accuracy. Accessed: 2024-07-17."},{"key":"e_1_3_2_1_2_1","unstructured":"Rishi Bommasani et al. 2021. On the Opportunities and Risks of Foundation Models. CoRR abs\/2108.07258 (2021). arXiv:2108.07258 https:\/\/arxiv.org\/abs\/2108.07258"},{"key":"e_1_3_2_1_3_1","volume-title":"Lin (Eds.)","volume":"33","author":"Tom","year":"1877","unstructured":"Tom B. Brown et al. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877\u20131901. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"},{"volume-title":"The Twelfth International Conference on Learning Representations.","author":"Chan Chi-Min","key":"e_1_3_2_1_4_1","unstructured":"Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. [n. d.]. ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. In The Twelfth International Conference on Learning Representations."},{"key":"e_1_3_2_1_5_1","volume-title":"Llm4ts: Aligning pre-trained llms as data-efficient time-series forecasters. arXiv preprint arXiv:2308.08469","author":"Chang Ching","year":"2024","unstructured":"Ching Chang, Wei-Yao Wang, Wen-Chih Peng, and Tien-Fu Chen. 2024. Llm4ts: Aligning pre-trained llms as data-efficient time-series forecasters. 
arXiv preprint arXiv:2308.08469 (2024)."},{"key":"e_1_3_2_1_6_1","volume-title":"Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al.","author":"Chen Mark","year":"2021","unstructured":"Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating Large Language Models Trained on Code. ArXiv abs\/2107.03374 (2021). https:\/\/api.semanticscholar.org\/CorpusID:235755472"},{"key":"e_1_3_2_1_7_1","first-page":"1","article-title":"Scaling instruction-finetuned language models","volume":"25","author":"Chung Hyung Won","year":"2024","unstructured":"Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research 25, 70 (2024), 1\u201353.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1423"},{"key":"e_1_3_2_1_9_1","first-page":"11763","article-title":"Lift: Language-interfaced fine-tuning for non-language machine learning tasks","volume":"35","author":"Dinh Tuan","year":"2022","unstructured":"Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee. 2022. Lift: Language-interfaced fine-tuning for non-language machine learning tasks. Advances in Neural Information Processing Systems 35 (2022), 11763\u201311784.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_10_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy","author":"Dosovitskiy Alexey","year":"2021","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00494"},{"key":"e_1_3_2_1_12_1","volume-title":"Measuring Massive Multitask Language Understanding. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=d7KBjmI3GmQ","author":"Hendrycks Dan","year":"2021","unstructured":"Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Understanding. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=d7KBjmI3GmQ"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1031"},{"volume-title":"LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on Learning Representations.","author":"Hu Edward J","key":"e_1_3_2_1_14_1","unstructured":"Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. [n. d.]. LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_15_1","volume-title":"The power of scale for parameter-efficient prompt tuning. 
arXiv preprint arXiv:2104.08691","author":"Lester Brian","year":"2021","unstructured":"Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691 (2021)."},{"key":"e_1_3_2_1_16_1","volume-title":"Pretrained Transformers as Universal Computation Engines. CoRR abs\/2103.05247","author":"Lu Kevin","year":"2021","unstructured":"Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch. 2021. Pretrained Transformers as Universal Computation Engines. CoRR abs\/2103.05247 (2021). arXiv:2103.05247 https:\/\/arxiv.org\/abs\/2103.05247"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2022.emnlp-main.90"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER60148.2024.00016"},{"key":"e_1_3_2_1_19_1","volume-title":"Proceedings of the 40th International Conference on Machine Learning (Proceedings of Machine Learning Research","volume":"26128","author":"Ni Ansong","year":"2023","unstructured":"Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-Tau Yih, Sida Wang, and Xi Victoria Lin. 2023. LEVER: Learning to Verify Language-to-Code Generation with Execution. In Proceedings of the 40th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 202), Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (Eds.). PMLR, 26106\u201326128. https:\/\/proceedings.mlr.press\/v202\/ni23b.html"},{"key":"e_1_3_2_1_20_1","unstructured":"Alec Radford Karthik Narasimhan Tim Salimans Ilya Sutskever et al. 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_2_1_21_1","unstructured":"Alec Radford Jeffrey Wu Rewon Child David Luan Dario Amodei Ilya Sutskever et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1 8 (2019) 9."},{"key":"e_1_3_2_1_22_1","volume-title":"Transfer learning gaussian anomaly detection by fine-tuning representations. arXiv preprint arXiv:2108.04116","author":"Rippel Oliver","year":"2021","unstructured":"Oliver Rippel, Arnav Chavan, Chucai Lei, and Dorit Merhof. 2021. Transfer learning gaussian anomaly detection by fine-tuning representations. arXiv preprint arXiv:2108.04116 (2021)."},{"volume-title":"Multitask Prompted Training Enables Zero-Shot Task Generalization. In International Conference on Learning Representations.","author":"Sanh Victor","key":"e_1_3_2_1_23_1","unstructured":"Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. [n. d.]. Multitask Prompted Training Enables Zero-Shot Task Generalization. In International Conference on Learning Representations."},{"volume-title":"Official Rules of Chess","author":"Schiller Eric","key":"e_1_3_2_1_24_1","unstructured":"Eric Schiller. 2003. Official Rules of Chess. Cardoza Publishing."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N19-1380"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.26615\/978-954-452-072-4_153"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3417058"},{"key":"e_1_3_2_1_28_1","volume-title":"Attention is all you need. Advances in neural information processing systems 30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. Advances in neural information processing systems 30 (2017)."},{"volume-title":"International Conference on Learning Representations.","author":"Wei Jason","key":"e_1_3_2_1_29_1","unstructured":"Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. [n. d.]. Finetuned Language Models are Zero-Shot Learners. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-44693-1_54"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-long.245"}],"event":{"name":"SAC '25: 40th ACM\/SIGAPP Symposium on Applied Computing","sponsor":["SIGAPP ACM Special Interest Group on Applied Computing"],"location":"Catania International Airport Catania Italy","acronym":"SAC '25"},"container-title":["Proceedings of the 40th ACM\/SIGAPP Symposium on Applied Computing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672608.3707740","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3672608.3707740","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:06:14Z","timestamp":1750291574000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3672608.3707740"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,31]]},"references-count":31,"alternative-id":["10.1145\/3672608.3707740","10.1145\/3672608"],"URL":"https:\/\/doi.org\/10.1145\/3672608.3707740","relation":{},"subject":[],"published":{"date-parts":[[2025,3,31]]},"assertion":[{"value":"2025-05-14","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
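If the record needs to be retrieved programmatically rather than read from this snapshot, the same "work" message is served by the public Crossref REST API. The following is a minimal sketch, assuming network access; the field names used (message, author, title, container-title, page, DOI) follow the Crossref work schema of the record above.

import json
import urllib.request

# DOI of the work described by the metadata record above.
DOI = "10.1145/3672608.3707740"

# The public Crossref REST API returns the same "work" message shown above.
url = f"https://api.crossref.org/works/{DOI}"
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)["message"]

# Build a compact citation from the metadata fields.
authors = ", ".join(f"{a['given']} {a['family']}" for a in record.get("author", []))
title = record["title"][0]
venue = record["container-title"][0] if record.get("container-title") else ""
pages = record.get("page", "")
print(f"{authors}. {title}. {venue}, pp. {pages}. https://doi.org/{record['DOI']}")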