{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T23:12:41Z","timestamp":1773270761505,"version":"3.50.1"},"reference-count":27,"publisher":"Association for Computing Machinery (ACM)","issue":"3","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62306119"],"award-info":[{"award-number":["62306119"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Science and Technology Projects in Guangzhou","award":["2025A04J3436"],"award-info":[{"award-number":["2025A04J3436"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2026,3,31]]},"abstract":"<jats:p>Integrating external knowledge with traditional spoken language understanding (SLU) models can effectively mitigate the ambiguity in user utterances in real-world scenarios. Knowledge graph, as a common source of external knowledge, encapsulates entities enriched with diverse attribute information. Nevertheless, existing models consider all entities as relevant, which introduces significant noise into the input. Additionally, not all attribute information of the entities is essential, resulting in considerable noise and redundancy. In this article, we propose a Noise-Removal of Knowledge-Enhanced (NRKE) framework for SLU, which involves two different types of denoising. The first approach involves hard denoising via entity selection, where we leverage a small clean dataset and introduce a BERT-based auxiliary model to filter out entities unrelated to user utterances, effectively eliminating noisy entities. In addition, we further refine entity selection by incorporating Large Language Models (LLMs) to assist in filtering out entities unrelated to user utterances. The second method involves soft denoising through the selection of entity attribute information. This approach utilizes a keywords-based local semantic selection that gives greater weight to relevant local semantics associated with specific keywords. This allows us to capture task-related information from the chosen entities, thereby minimizing noise and redundancy. To evaluate the generalization capability of existing knowledge-enhanced SLU models, we construct a new dataset named KGCAIS. 
The experimental results show that our NRKE achieves better performance than the competing models on both the PROSLU and KGCAIS datasets.<\/jats:p>","DOI":"10.1145\/3796226","type":"journal-article","created":{"date-parts":[[2026,2,6]],"date-time":"2026-02-06T20:51:12Z","timestamp":1770411072000},"page":"1-18","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["NRKE: Noise-Removal of Knowledge-Enhanced Framework for Spoken Language Understanding"],"prefix":"10.1145","volume":"25","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2106-1334","authenticated-orcid":false,"given":"Peijie","family":"Huang","sequence":"first","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-3267-5944","authenticated-orcid":false,"given":"Xinming","family":"Chen","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-7583-9879","authenticated-orcid":false,"given":"Leyi","family":"Lao","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]},{"name":"Southern University of Science and Technology","place":["Shenzhen, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0521-2646","authenticated-orcid":false,"given":"Yuhong","family":"Xu","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-9313-0627","authenticated-orcid":false,"given":"Shuyuan","family":"Liang","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-1484-8121","authenticated-orcid":false,"given":"Hanlin","family":"Liu","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-6465-7456","authenticated-orcid":false,"given":"Yunhao","family":"Ba","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University","place":["Guangzhou, China"]}]}],"member":"320","published-online":{"date-parts":[[2026,3,6]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1002\/9781119992691"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2014.6854368"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/SLT.2014.7078572"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2016-1352"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2016-402"},{"key":"e_1_3_2_7_2","first-page":"2993","volume-title":"Proc. of IJCAI 2016","author":"Zhang Xiaodong","unstructured":"Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In Proc. of IJCAI 2016. 2993\u20132999. 
Retrieved from http:\/\/www.ijcai.org\/Abstract\/16\/425"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/n18-2118"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/n18-2050"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/d18-1417"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/p19-1544"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP43922.2022.9746942"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1214"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i10.21411"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-44693-1_63"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.18653\/V1\/D19-1097"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP48485.2024.10446353"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","unstructured":"Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9 8 (1997) 1735\u20131780. DOI:10.1162\/neco.1997.9.8.1735","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_2_19_2","first-page":"5998","volume-title":"Proc. of NIPS 2017","author":"Vaswani Ashish","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS 2017. 5998\u20136008. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/n19-1423"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-60045-1_44"},{"key":"e_1_3_2_22_2","volume-title":"Proc. of ICLR 2015","author":"Kingma Diederik P.","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR 2015. Retrieved from http:\/\/arxiv.org\/abs\/1412.6980"},{"key":"e_1_3_2_23_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Retrieved from http:\/\/arxiv.org\/abs\/1907.11692"},{"key":"e_1_3_2_24_2","first-page":"5754","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019. Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). Curran Associates, Inc., 5754\u20135764. Retrieved from https:\/\/proceedings.neurips.cc\/paper\/2019\/hash\/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html"},{"key":"e_1_3_2_25_2","volume-title":"Proc. of ICLR 2020","author":"Clark Kevin","unstructured":"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In Proc. of ICLR 2020. 
Retrieved from https:\/\/openreview.net\/forum?id=r1xMH1BtvB"},{"key":"e_1_3_2_26_2","unstructured":"Team GLM Aohan Zeng Bin Xu Bowen Wang Chenhui Zhang Da Yin Diego Rojas Guanyu Feng Hanlin Zhao Hanyu Lai et\u00a0al. 2024. ChatGLM: A family of large language models from GLM-130B to GLM-4 all tools. arXiv preprint arXiv:2406.12793."},{"key":"e_1_3_2_27_2","unstructured":"Qwen Team. 2024. Qwen2.5: A Party of Foundation Models. (September2024). Retrieved December 15 2024 from https:\/\/qwenlm.github.io\/blog\/qwen2.5\/"},{"key":"e_1_3_2_28_2","unstructured":"An Yang Baosong Yang Binyuan Hui Bo Zheng Bowen Yu Chang Zhou Chengpeng Li Chengyuan Li Dayiheng Liu Fei Huang et\u00a0al. 2024. Qwen2 Technical Report. .arXiv:2407.10671. Retrieved from https:\/\/arxiv.org\/abs\/2407.10671"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3796226","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T08:02:38Z","timestamp":1773216158000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3796226"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,6]]},"references-count":27,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2026,3,31]]}},"alternative-id":["10.1145\/3796226"],"URL":"https:\/\/doi.org\/10.1145\/3796226","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"value":"2375-4699","type":"print"},{"value":"2375-4702","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,6]]},"assertion":[{"value":"2024-12-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-01-29","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-03-06","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
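The hard-denoising step described in the abstract (scoring candidate knowledge-graph entities against the utterance and discarding irrelevant ones) can be pictured with a short sketch. This is not the authors' implementation: the paper uses a BERT-based auxiliary model fine-tuned on a small clean dataset, with optional LLM assistance, to judge relevance; the sketch below substitutes an off-the-shelf sentence-transformers cross-encoder, and the entity format, threshold, and helper names are illustrative assumptions.

```python
# Sketch of "hard denoising" entity selection: keep only KG entities the scorer
# judges relevant to the utterance. The cross-encoder is an off-the-shelf stand-in
# for the paper's BERT-based auxiliary model (fine-tuned on a small clean dataset);
# the entity format and threshold here are assumptions, not the authors' settings.
from sentence_transformers import CrossEncoder


def select_relevant_entities(utterance, entities, threshold=0.0,
                             model_name="cross-encoder/ms-marco-MiniLM-L-6-v2"):
    """Return the subset of `entities` scored as relevant to `utterance`.

    entities: list of dicts such as {"name": ..., "attributes": {...}} retrieved
    from a knowledge graph for the current dialogue turn.
    """
    scorer = CrossEncoder(model_name)
    # Pair the utterance with a short textual rendering of each candidate entity.
    pairs = [
        (utterance,
         e["name"] + " " + " ".join(str(v) for v in e.get("attributes", {}).values()))
        for e in entities
    ]
    scores = scorer.predict(pairs)  # one relevance score per (utterance, entity) pair
    # The threshold would normally be tuned on a small clean validation set.
    return [e for e, s in zip(entities, scores) if s >= threshold]


if __name__ == "__main__":
    utterance = "Play the latest album by Jay Chou"
    candidates = [
        {"name": "Jay Chou", "attributes": {"type": "singer"}},
        {"name": "Jay Chou (film)", "attributes": {"type": "movie"}},
        {"name": "Guangzhou Tower", "attributes": {"type": "landmark"}},
    ]
    kept = select_relevant_entities(utterance, candidates)
    print([e["name"] for e in kept])
```

In the framework described in the abstract, an LLM can additionally be prompted to confirm or reject borderline entities; in this simplified sketch the score threshold alone plays that role.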