{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,20]],"date-time":"2026-02-20T18:52:31Z","timestamp":1771613551890,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":41,"publisher":"ACM","funder":[{"name":"This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-RS-2024-00436887) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); and also by IITP grant funded by MSIT (RS-2024-00439803, SW Star Lab)."}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,11,10]]},"DOI":"10.1145\/3746252.3760795","type":"proceedings-article","created":{"date-parts":[[2025,11,8]],"date-time":"2025-11-08T00:52:37Z","timestamp":1762563157000},"page":"5171-5175","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Quantum-Amplitude Embedded Adaptation for Parameter-Efficient Fine-Tuning in Large Language Models"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-0013-6342","authenticated-orcid":false,"given":"Emily Jimin","family":"Roh","sequence":"first","affiliation":[{"name":"Korea University, Seoul, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1794-6076","authenticated-orcid":false,"given":"Joongheon","family":"Kim","sequence":"additional","affiliation":[{"name":"Korea University, Seoul, Republic of Korea"}]}],"member":"320","published-online":{"date-parts":[[2025,11,10]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Soyi Jung, Soohyun Park, and Joongheon Kim.","author":"Ahn Hyojun","year":"2025","unstructured":"Hyojun Ahn, Seungcheol Oh, Gyu Seon Kim, Soyi Jung, Soohyun Park, and Joongheon Kim. 2025. Hallucination-aware generative pretrained transformer for cooperative aerial mobility control. arXiv preprint arXiv:2504.10831 (2025)."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP49660.2025.10890639"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3583780.3615240"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","unstructured":"Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large scale autoregressive language modeling with mesh-tensorflow. doi:10.5281\/zenodo.5297715","DOI":"10.5281\/zenodo.5297715"},{"key":"e_1_3_2_1_5_1","first-page":"1877","volume-title":"Proc. Advances in Neural Information Processing Systems (NeurIPS)","volume":"33","author":"Tom","unstructured":"Tom Brown et al., 2020. Language models are few-shot learners. In Proc. Advances in Neural Information Processing Systems (NeurIPS), Vol. 33. Virtual, 1877-1901."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"crossref","unstructured":"Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large language models. ACM Trans. on Intelligent Systems and Technology, Vol. 15, 3 (March 2024), 1-45.","DOI":"10.1145\/3641289"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1007\/s42484-023-00133-0"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-023-00626-4"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2024.3456145"},{"key":"e_1_3_2_1_10_1","volume-title":"Quantum","volume":"8","author":"Gonzalez-Conde Javier","year":"2024","unstructured":"Javier Gonzalez-Conde, Thomas W Watts, Pablo Rodriguez-Grasa, and Mikel Sanz. 2024. Efficient quantum amplitude encoding of polynomial functions. Quantum, Vol. 8 (March 2024), 1297."},{"key":"e_1_3_2_1_11_1","first-page":"30016","volume-title":"Proc. Advances in Neural Information Processing Systems (NeurIPS)","volume":"35","author":"Jordan","unstructured":"Jordan Hoffmann et al., 2022. An empirical analysis of compute-optimal large language model training. In Proc. Advances in Neural Information Processing Systems (NeurIPS), Vol. 35. New Orleans, Louisiana, USA, 30016-30030."},{"key":"e_1_3_2_1_12_1","volume-title":"Proc. Int'l Conf. Learning Representations (ICLR). Virtual.","author":"Hu Edward J","year":"2022","unstructured":"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al., 2022. LoRA: Low-rank adaptation of large language models. In Proc. Int'l Conf. Learning Representations (ICLR). Virtual."},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2023.3266495"},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.nlp.2023.100048"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Enkelejda Kasneci, Kathrin Se\u00dfler, Stefan K\u00fcchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan G\u00fcnnemann, Eyke H\u00fcllermeier, et al. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, Vol. 103 (April 2023), 102274.","DOI":"10.1016\/j.lindif.2023.102274"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i03.5616"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3728636"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3660647"},{"key":"e_1_3_2_1_19_1","first-page":"109","volume-title":"Proc. Advances in Neural Information Processing Systems (NeurIPS)","volume":"35","author":"Lian Dongze","year":"2022","unstructured":"Dongze Lian, Daquan Zhou, Jiashi Feng, and Xinchao Wang. 2022. Scaling & shifting your features: A new baseline for efficient model tuning. Proc. Advances in Neural Information Processing Systems (NeurIPS), Vol. 35 (November-December 2022), 109-123."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP49660.2025.10887682"},{"key":"e_1_3_2_1_21_1","first-page":"397","volume-title":"Proc. IEEE\/ACM Int'l Conf. Automated Software Engineering (ASE)","author":"Liu Jiaxing","year":"2023","unstructured":"Jiaxing Liu, Chaofeng Sha, and Xin Peng. 2023. An empirical study of parameter-efficient fine-tuning methods for pre-trained code models. In Proc. IEEE\/ACM Int'l Conf. Automated Software Engineering (ASE). Luxembourg, Belgium, 397-408."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01828"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3511808.3557445"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2025.3528116"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11704-024-40663-9"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TE.2024.3467912"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1103\/PhysRevA.111.032429"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41467-024-51844-2"},{"key":"e_1_3_2_1_29_1","volume-title":"Hashimoto","author":"Taori Rohan","year":"2023","unstructured":"Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https:\/\/github.com\/tatsu-lab\/stanford_alpaca."},{"key":"e_1_3_2_1_30_1","first-page":"108439","volume-title":"Proc. Advances in Neural Information Processing Systems (NeurIPS)","volume":"37","author":"Thomas Valentin","year":"2024","unstructured":"Valentin Thomas, Junwei Ma, Rasa Hosseinzadeh, Keyvan Golestan, Guangwei Yu, Maks Volkovs, and Anthony L Caterini. 2024. Retrieval & fine-tuning for in-context tabular models. Proc. Advances in Neural Information Processing Systems (NeurIPS), Vol. 37 (December 2024), 108439-108467."},{"key":"e_1_3_2_1_31_1","volume-title":"Mi Young Lee, and JaKeoung Koo.","author":"Usman Muhammad Talha","year":"2024","unstructured":"Muhammad Talha Usman, Habib Khan, Sushil Kumar Singh, Mi Young Lee, and JaKeoung Koo. 2024. Efficient deepfake detection via layer-frozen assisted dual attention network for consumer imaging devices. IEEE Trans. on Consumer Electronics ((Early Access) 2024), 1-11."},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2024.3506731"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2025.3536928"},{"key":"e_1_3_2_1_34_1","volume-title":"5: Efficient LLM inference with model compression and hardware acceleration. arXiv preprint arXiv:2504.17376 (April","author":"Xiang Maoyang","year":"2025","unstructured":"Maoyang Xiang, Ramesh Fernando, and Bo Wang. 2025. On-device Qwen2.5: Efficient LLM inference with model compression and hardware acceleration. arXiv preprint arXiv:2504.17376 (April 2025)."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3627673.3679913"},{"key":"e_1_3_2_1_36_1","first-page":"83465","volume-title":"Proc. Advances in Neural Information Processing Systems (NeurIPS)","volume":"37","author":"Yang Yu","year":"2024","unstructured":"Yu Yang, Siddhartha Mishra, Jeffrey Chiang, and Baharan Mirzasoleiman. 2024. SmallToLarge (S2L): Scalable data selection for fine-tuning large language models by summarizing training trajectories of small models. In Proc. Advances in Neural Information Processing Systems (NeurIPS), Vol. 37. Vancouver, Canada, 83465-83496."},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3627673.3679233"},{"key":"e_1_3_2_1_38_1","volume-title":"Tinyllama: An open-source small language model. arXiv preprint arXiv:2401.02385 (January","author":"Zhang Peiyuan","year":"2024","unstructured":"Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu. 2024b. TinyLlama: An open-source small language model. arXiv preprint arXiv:2401.02385 (January 2024)."},{"key":"e_1_3_2_1_39_1","volume-title":"Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities","author":"Zhou Hao","year":"2024","unstructured":"Hao Zhou, Chengming Hu, Ye Yuan, Yufei Cui, Yili Jin, Can Chen, Haolun Wu, Dun Yuan, Li Jiang, Di Wu, Xue Liu, Charlie Zhang, Xianbin Wang, and Jiangchuan Liu. 2024a. Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities. IEEE Communications Surveys and Tutorials ((Early Access) 2024), 1-54."},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1038\/s41586-024-07930-y"},{"key":"e_1_3_2_1_41_1","first-page":"1","article-title":"A comparative study of 11 non-linear regression models highlighting autoencoder, DBN, and SVR, enhanced by SHAP importance analysis in soybean branching prediction","volume":"14","author":"Zhou Wei","year":"2024","unstructured":"Wei Zhou, Zhengxiao Yan, and Liting Zhang. 2024c. A comparative study of 11 non-linear regression models highlighting autoencoder, DBN, and SVR, enhanced by SHAP importance analysis in soybean branching prediction. Scientific Reports, Vol. 14, 1 (March 2024), 5905.","journal-title":"Scientific Reports"}],"event":{"name":"CIKM '25: The 34th ACM International Conference on Information and Knowledge Management","location":"Seoul Republic of Korea","acronym":"CIKM '25","sponsor":["SIGIR ACM Special Interest Group on Information Retrieval","SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"]},"container-title":["Proceedings of the 34th ACM International Conference on Information and Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3746252.3760795","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T02:26:43Z","timestamp":1765506403000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3746252.3760795"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,10]]},"references-count":41,"alternative-id":["10.1145\/3746252.3760795","10.1145\/3746252"],"URL":"https:\/\/doi.org\/10.1145\/3746252.3760795","relation":{},"subject":[],"published":{"date-parts":[[2025,11,10]]},"assertion":[{"value":"2025-11-10","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}