{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,27]],"date-time":"2026-02-27T15:19:49Z","timestamp":1772205589686,"version":"3.50.1"},"reference-count":34,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2022,11,18]],"date-time":"2022-11-18T00:00:00Z","timestamp":1668729600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100009101","name":"Higher Education Institutions in Henan Province, China","doi-asserted-by":"publisher","award":["22B520040"],"award-info":[{"award-number":["22B520040"]}],"id":[{"id":"10.13039\/501100009101","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Recently, with the rise of deep learning, text classification techniques have developed rapidly. However, existing work usually takes the entire text as the modeling object, paying little attention to the hierarchical structure within the text and ignoring the connections between preceding and following sentences. To address these issues, this paper proposes BHGAttN, a BERT-based hierarchical graph attention network model that combines a large-scale pretrained model with a graph attention network to model the hierarchical relationships of texts. During modeling, the semantic features are enhanced by the outputs of the intermediate layers of BERT, and a multilevel hierarchical graph network corresponding to each layer of BERT is constructed using the dependencies between the whole sentence and its subsentences. The model thus attends to both the layer-by-layer semantic information and the hierarchical relationships within the text. 
The experimental results show that the BHGAttN model has significant competitive advantages over current state-of-the-art baseline models.<\/jats:p>","DOI":"10.3390\/e24111691","type":"journal-article","created":{"date-parts":[[2022,11,21]],"date-time":"2022-11-21T03:07:23Z","timestamp":1669000043000},"page":"1691","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["BHGAttN: A Feature-Enhanced Hierarchical Graph Attention Network for Sentiment Analysis"],"prefix":"10.3390","volume":"24","author":[{"given":"Junjun","family":"Zhang","sequence":"first","affiliation":[{"name":"Department of Computer Information Engineering, Cheongju University, Cheongju 28503, Republic of Korea"}]},{"given":"Zhengyan","family":"Cui","sequence":"additional","affiliation":[{"name":"Department of Computer Information Engineering, Cheongju University, Cheongju 28503, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1365-0232","authenticated-orcid":false,"given":"Hyun Jun","family":"Park","sequence":"additional","affiliation":[{"name":"Division of Software Convergence, Cheongju University, Cheongju 28503, Republic of Korea"}]},{"given":"Giseop","family":"Noh","sequence":"additional","affiliation":[{"name":"Division of Software Convergence, Cheongju University, Cheongju 28503, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2022,11,18]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"111","DOI":"10.1002\/asmb.537","article-title":"A tutorial on \u03bd-support vector machines","volume":"21","author":"Chen","year":"2005","journal-title":"Appl. Stoch. Model. Bus. Ind."},{"key":"ref_2","unstructured":"Masurel, P. (2021, December 16). Of Bayesian Average and Star Ratings. Available online: https:\/\/fulmicoton.com\/posts\/bayesian_rating\/."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Kim, Y. (2014, January 25\u201329). 
Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1181"},{"key":"ref_4","unstructured":"Liu, P., Qiu, X., and Huang, X. (2016, January 9\u201315). Recurrent Neural Network for Text Classification with Multi-Task Learning. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA."},{"key":"ref_5","unstructured":"Huang, Z., Wei, X., and Kai, Y. (2015). Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Cho, K., Van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv.","DOI":"10.3115\/v1\/D14-1179"},{"key":"ref_7","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017, January 3\u20139). Attention is All You Need. Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA."},{"key":"ref_8","unstructured":"Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2022, November 15). Improving Language Understanding by Generative Pre-Training. Available online: https:\/\/www.cs.ubc.ca\/~amuham01\/LING530\/papers\/radford2018improving.pdf."},{"key":"ref_9","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford","year":"2019","journal-title":"OpenAI Blog"},{"key":"ref_10","unstructured":"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., and Amodei, D. (2020, January 6\u201312). Language Models are Few-Shot Learners. 
Proceedings of the Advances in Neural Information Processing Systems 33 (2020), Virtual."},{"key":"ref_11","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel","year":"2020","journal-title":"J. Mach. Learn. Res."},{"key":"ref_12","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Grover, A., and Leskovec, J. (2016, January 13\u201317). node2vec: Scalable Feature Learning for Networks. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939754"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Dong, Y., Chawla, N.V., and Swami, A. (2017, January 13\u201317). metapath2vec: Scalable Representation Learning for Heterogeneous Networks. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada.","DOI":"10.1145\/3097983.3098036"},{"key":"ref_15","unstructured":"Yao, L., Mao, C., and Luo, Y. (2019, January 27\u2013February 1). Graph Convolutional Networks for Text Classification. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Huang, L., Ma, D., Li, S., Zhang, X., and Wang, H. (2019). Text level graph neural network for text classification. arXiv.","DOI":"10.18653\/v1\/D19-1345"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Lin, Y., Meng, Y., Sun, X., Han, Q., Kuang, K., Li, J., and Wu, F. (2021, January 1\u20136). BertGCN: Transductive Text Classification by Combining GCN and BERT. 
Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Bangkok, Thailand.","DOI":"10.18653\/v1\/2021.findings-acl.126"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. (2016, January 12\u201317). Hierarchical Attention Networks for Document Classification. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA.","DOI":"10.18653\/v1\/N16-1174"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Joachims, T. (1998, January 21\u201323). Text Categorization with Support Vector Machines: Learning with Many Relevant Features. Proceedings of the European Conference on Machine Learning, Chemnitz, Germany.","DOI":"10.1007\/BFb0026683"},{"key":"ref_20","unstructured":"Ramos, J. (2003, January 21\u201324). Using tf-idf to Determine Word Relevance in Document Queries. Proceedings of the First Instructional Conference on Machine Learning, Washington, DC, USA."},{"key":"ref_21","unstructured":"Cavnar, W.B., and Trenkle, J.M. (1994, April). N-Gram-Based Text Categorization. Proceedings of the SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, NV, USA."},{"key":"ref_22","unstructured":"Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv."},{"key":"ref_23","unstructured":"Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., and Dean, J. (2013, January 5\u201310). Distributed Representations of Words and Phrases and Their Compositionality. 
Proceedings of the Advances in Neural Information Processing Systems 26, Lake Tahoe, NV, USA."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Pennington, J., Socher, R., and Manning, C.D. (2014, January 25\u201329). GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar.","DOI":"10.3115\/v1\/D14-1162"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Peters, M., Neumann, M., Iyyer, M., Gardner, M., and Zettlemoyer, L. (2018, January 1\u20136). Deep Contextualized Word Representations. Proceedings of the NAACL 2018, New Orleans, LA, USA.","DOI":"10.18653\/v1\/N18-1202"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Peng, H., Li, J., He, Y., Liu, Y., Bao, M., Wang, L., and Yang, Q. (2018, January 23\u201327). Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN. Proceedings of the 2018 World Wide Web Conference, Lyon, France.","DOI":"10.1145\/3178876.3186005"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Yu, X., Cui, Z., Wu, S., Wen, Z., and Wang, L. (2020, January 5\u201310). Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2020), Seattle, WA, USA.","DOI":"10.18653\/v1\/2020.acl-main.31"},{"key":"ref_28","unstructured":"Yang, J., Liu, Z., Xiao, S., Li, C., Lian, D., and Agrawal, S. (2021, January 6\u201314). GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph. Proceedings of the Advances in Neural Information Processing Systems 34 (NIPS 2021), Online."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, Y., and Cui, X. (2021). BERT-enhanced text graph neural network for classification. 
Entropy, 23.","DOI":"10.3390\/e23111536"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Xiao, Z., Wu, J., Chen, Q., and Deng, C. (2021). BERT4GCN: Using BERT Intermediate Layers to Augment GCN for Aspect-based Sentiment Classification. arXiv.","DOI":"10.18653\/v1\/2021.emnlp-main.724"},{"key":"ref_31","unstructured":"Veli\u010dkovi\u0107, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. (2018, April 30\u2013May 3). Graph Attention Networks. Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, Canada."},{"key":"ref_32","first-page":"012064","article-title":"Sentiment Classification of Reviews Based on BiGRU Neural Network and Fine-Grained Attention","volume":"1229","author":"Feng","year":"2019","journal-title":"J. Phys. Conf. Ser. (2019 3rd International Conference on Machine Vision and Information Technology, CMVIT 2019)"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"032097","DOI":"10.1088\/1742-6596\/1345\/3\/032097","article-title":"Improved text sentiment classification method based on BiGRU-Attention","volume":"1345","author":"Zhou","year":"2019","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_34","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. 
arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/11\/1691\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:21:40Z","timestamp":1760145700000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/11\/1691"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,18]]},"references-count":34,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2022,11]]}},"alternative-id":["e24111691"],"URL":"https:\/\/doi.org\/10.3390\/e24111691","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,18]]}}}