{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,2]],"date-time":"2026-05-02T07:07:29Z","timestamp":1777705649127,"version":"3.51.4"},"reference-count":8,"publisher":"SAGE Publications","issue":"3","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2022,2,2]]},"abstract":"<jats:p>Intent detection and slot filling are recognized as two very important tasks in a spoken language understanding (SLU) system. In order to model these two tasks at the same time, many joint models based on deep neural networks have been proposed recently and archived excellent results. In addition, graph neural network has made good achievements in the field of vision. Therefore, we combine these two advantages and propose a new joint model with a wheel-graph attention network (Wheel-GAT), which is able to model interrelated connections directly for single intent detection and slot filling. To construct a graph structure for utterances, we create intent nodes, slot nodes, and directed edges. Intent nodes can provide utterance-level semantic information for slot filling, while slot nodes can also provide local keyword information for intent detection. The two tasks promote each other and carry out end-to-end training at the same time. Experiments show that our proposed approach is superior to multiple baselines on ATIS and SNIPS datasets. Besides, we also demonstrate that using bi-directional encoder representation from transformer (BERT) model further boosts the performance of the SLU task.<\/jats:p>","DOI":"10.3233\/jifs-211674","type":"journal-article","created":{"date-parts":[[2021,11,12]],"date-time":"2021-11-12T11:05:47Z","timestamp":1636715147000},"page":"2409-2420","source":"Crossref","is-referenced-by-count":9,"title":["Joint intent detection and slot filling with wheel-graph attention networks"],"prefix":"10.1177","volume":"42","author":[{"given":"Pengfei","family":"Wei","sequence":"first","affiliation":[{"name":"School of Computers, Guangdong University of Technology, Guangzhou, P.R. China"}]},{"given":"Bi","family":"Zeng","sequence":"additional","affiliation":[{"name":"School of Computers, Guangdong University of Technology, Guangzhou, P.R. China"}]},{"given":"Wenxiong","family":"Liao","sequence":"additional","affiliation":[{"name":"School of Computers, Guangdong University of Technology, Guangzhou, P.R. China"}]}],"member":"179","reference":[{"issue":"4","key":"10.3233\/JIFS-211674_ref3","doi-asserted-by":"crossref","first-page":"778","DOI":"10.1109\/TASLP.2014.2303296","article-title":"Application of deep beliefnetworks for natural language understanding","volume":"22","author":"Sarikaya","year":"2014","journal-title":"IEEE\/ACMTransactions on Audio, Speech, and Language Processing"},{"key":"10.3233\/JIFS-211674_ref4","doi-asserted-by":"crossref","unstructured":"Haffner P. , Tur G. and Wright J.H. , Optimizing svms for complex callclassification, in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. 
Proceedings.(ICASSP\u201903), IEEE 1 (2003) I\u2013I.","DOI":"10.1109\/ICASSP.2003.1198860"},{"issue":"2\u20133","key":"10.3233\/JIFS-211674_ref5","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1023\/A:1007649029923","article-title":"Boostexter: A boosting-based system fortext categorization","volume":"39","author":"Schapire","year":"2000","journal-title":"Machine Learning"},{"key":"10.3233\/JIFS-211674_ref10","first-page":"685","article-title":"Attention-based recurrent neural network modelsfor joint intent detection and slot filling","volume":"2016","author":"Liu","year":"2016","journal-title":"Interspeech"},{"issue":"2016","key":"10.3233\/JIFS-211674_ref12","first-page":"2993","article-title":"A joint model of intent determination and slotfilling for spoken language understanding","volume":"16","author":"Zhang","year":"2016","journal-title":"in IJCAI"},{"issue":"8","key":"10.3233\/JIFS-211674_ref34","doi-asserted-by":"crossref","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long short-term memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Computation"},{"issue":"1","key":"10.3233\/JIFS-211674_ref36","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1109\/TNN.2008.2005605","article-title":"The graph neural network model","volume":"20","author":"Scarselli","year":"2008","journal-title":"IEEE Transactions on Neural Networks"},{"key":"10.3233\/JIFS-211674_ref37","first-page":"3","article-title":"Rectifier nonlinearities improveneural network acoustic models","volume":"30","author":"Maas","year":"2013","journal-title":"in Proc. icml"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-211674","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T09:44:35Z","timestamp":1777455875000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-211674"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,2]]},"references-count":8,"journal-issue":{"issue":"3"},"URL":"https:\/\/doi.org\/10.3233\/jifs-211674","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,2,2]]}}}
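The abstract in the record above describes the Wheel-GAT architecture only at a high level: a central intent node connected by directed edges to per-token slot nodes, with attention carrying information in both directions before the two tasks are predicted jointly. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the module names, the single round of message passing, and all dimensions are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code): a "wheel" graph where a central
# intent node exchanges information with per-token slot nodes via attention, and
# intent detection and slot filling are predicted jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WheelGraphAttentionSketch(nn.Module):
    def __init__(self, vocab_size, hidden_dim, num_intents, num_slot_labels):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        # BiLSTM encoder produces contextual token states, used here as slot nodes.
        self.encoder = nn.LSTM(hidden_dim, hidden_dim // 2, batch_first=True,
                               bidirectional=True)
        # Parameters for the two edge directions of the wheel graph:
        # slot nodes -> intent node, and intent node -> slot nodes (assumed form).
        self.slot_to_intent = nn.Linear(hidden_dim, 1)
        self.intent_to_slot = nn.Linear(2 * hidden_dim, hidden_dim)
        self.intent_classifier = nn.Linear(hidden_dim, num_intents)
        self.slot_classifier = nn.Linear(hidden_dim, num_slot_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        token_states, _ = self.encoder(self.embedding(token_ids))

        # Slot -> intent edges: the intent node aggregates token states by attention,
        # so it sees local keyword evidence from every position.
        attn = F.softmax(self.slot_to_intent(token_states), dim=1)  # (batch, seq, 1)
        intent_node = (attn * token_states).sum(dim=1)              # (batch, hidden)

        # Intent -> slot edges: each slot node is updated with the utterance-level
        # intent representation before slot tagging.
        expanded = intent_node.unsqueeze(1).expand_as(token_states)
        slot_nodes = torch.tanh(
            self.intent_to_slot(torch.cat([token_states, expanded], dim=-1)))

        intent_logits = self.intent_classifier(intent_node)  # (batch, num_intents)
        slot_logits = self.slot_classifier(slot_nodes)       # (batch, seq, num_slots)
        return intent_logits, slot_logits


if __name__ == "__main__":
    model = WheelGraphAttentionSketch(vocab_size=100, hidden_dim=64,
                                      num_intents=7, num_slot_labels=20)
    intent_logits, slot_logits = model(torch.randint(0, 100, (2, 9)))
    print(intent_logits.shape, slot_logits.shape)  # [2, 7] and [2, 9, 20]
```

In this reading, the slot-to-intent edges give the intent classifier access to local keyword information, while the intent-to-slot edges inject utterance-level semantics into each tagging decision, mirroring the mutual promotion of the two tasks described in the abstract.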