{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:50:58Z","timestamp":1760244658184,"version":"build-2065373602"},"reference-count":31,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2022,12,15]],"date-time":"2022-12-15T00:00:00Z","timestamp":1671062400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["61375053"],"award-info":[{"award-number":["61375053"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>The natural language model BERT uses a large-scale unsupervised corpus to accumulate rich linguistic knowledge during its pretraining stage, and the model is then fine-tuned for specific downstream tasks, which greatly improves its understanding across a variety of natural language tasks. For certain tasks, the capability of the model can be further enhanced by introducing external knowledge. Indeed, methods such as ERNIE have been proposed for integrating knowledge graphs into BERT models, significantly enhancing their capabilities in related tasks such as entity recognition. However, for two types of tasks, commonsense causal reasoning and story ending prediction, few previous studies have combined model modification and process optimization to integrate external knowledge. Therefore, drawing on ERNIE, in this paper, we propose enhanced language representation with event chains (EREC), which focuses on keywords in the text corpus and the relations they imply. Event chains are integrated into EREC as external knowledge. Furthermore, various graph networks are used to generate embeddings and to associate keywords in the corpus. 
Finally, via multi-task training, external knowledge is integrated into the pretrained model so as to enhance its performance on downstream tasks. The experiments on the EREC model follow a three-stage design, and the results show that, by integrating event chains, EREC gains a deeper understanding of the causal and event relationships contained in the text and achieves significant improvements on the two tasks.<\/jats:p>","DOI":"10.3390\/info13120582","type":"journal-article","created":{"date-parts":[[2022,12,15]],"date-time":"2022-12-15T04:25:26Z","timestamp":1671078326000},"page":"582","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["EREC: Enhanced Language Representations with Event Chains"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6294-7502","authenticated-orcid":false,"given":"Huajie","family":"Wang","sequence":"first","affiliation":[{"name":"School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai 200433, China"},{"name":"School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China"}]},{"given":"Yinglin","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai 200433, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,15]]},"reference":[{"key":"ref_1","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Li, Z., Ding, X., and Liu, T. (2019). Story ending prediction by transferable bert. 
arXiv.","DOI":"10.24963\/ijcai.2019\/249"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Peters, M.E., Neumann, M., Logan IV, R.L., Schwartz, R., Joshi, V., Singh, S., and Smith, N.A. (2019). Knowledge enhanced contextual word representations. arXiv.","DOI":"10.18653\/v1\/D19-1005"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M., and Liu, Q. (2019). ERNIE: Enhanced language representation with informative entities. arXiv.","DOI":"10.18653\/v1\/P19-1139"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liu, W., Zhou, P., Zhao, Z., Wang, Z., Ju, Q., Deng, H., and Wang, P. (2020, January 7\u201312). K-bert: Enabling language representation with knowledge graph. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i03.5681"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.A. (2008, January 5\u20139). Extracting and composing robust features with denoising autoencoders. Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland.","DOI":"10.1145\/1390156.1390294"},{"key":"ref_7","unstructured":"Roemmele, M., Bejan, C.A., and Gordon, A.S. (2011, January 21\u201323). Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. Proceedings of the AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, Stanford, CA, USA."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Mostafazadeh, N., Chambers, N., He, X., Parikh, D., Batra, D., Vanderwende, L., Kohli, P., and Allen, J. (2016, January 12\u201317). A corpus and cloze evaluation for deeper understanding of commonsense stories. 
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA.","DOI":"10.18653\/v1\/N16-1098"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Li, J., Katsis, Y., Baldwin, T., Kim, H.C., Bartko, A., McAuley, J., and Hsu, C.N. (2022, January 17\u201322). SPOT: Knowledge-Enhanced Language Representations for Information Extraction. Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA.","DOI":"10.1145\/3511808.3557459"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"3781","DOI":"10.1109\/TSMC.2019.2932410","article-title":"Multilevel image-enhanced sentence representation net for natural language inference","volume":"51","author":"Zhang","year":"2019","journal-title":"IEEE Trans. Syst. Man Cybern. Syst."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Xiao, S., Fang, Y., and Ni, L. (2021, January 18\u201322). Multi-modal Sign Language Recognition with Enhanced Spatiotemporal Representation. Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Virtual.","DOI":"10.1109\/IJCNN52387.2021.9533707"},{"key":"ref_12","first-page":"457","article-title":"Word-length algorithm for language identification of under-resourced languages","volume":"28","author":"Selamat","year":"2016","journal-title":"J. King Saud-Univ. Comput. Inf. Sci."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1016\/j.engappai.2014.07.020","article-title":"Cross-lingual sentiment classification using multiple source languages in multi-view semi-supervised learning","volume":"36","author":"Hajmohammadi","year":"2014","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Gururangan, S., Marasovi\u0107, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D., and Smith, N.A. (2020). 
Don\u2019t stop pretraining: Adapt language models to domains and tasks. arXiv.","DOI":"10.18653\/v1\/2020.acl-main.740"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Gu, Y., Zhang, Z., Wang, X., Liu, Z., and Sun, M. (2020). Train no evil: Selective masking for task-guided pre-training. arXiv.","DOI":"10.18653\/v1\/2020.emnlp-main.566"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Granroth-Wilding, M., and Clark, S. (2016, January 12\u201317). What happens next? event prediction using a compositional neural network model. Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA.","DOI":"10.1609\/aaai.v30i1.10344"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Li, X., Zhang, B., Zhang, Z., and Stefanidis, K. (2020). A sentiment-statistical approach for identifying problematic mobile app updates based on user reviews. Information, 11.","DOI":"10.3390\/info11030152"},{"key":"ref_18","unstructured":"Li, X., Zhang, Z., and Stefanidis, K. (2018). Mobile app evolution analysis based on user reviews. New Trends in Intelligent Software Methodologies, Tools and Techniques, IOS Press."},{"key":"ref_19","unstructured":"Luo, Z., Sha, Y., Zhu, K.Q., Hwang, S.W., and Wang, Z. (2016, January 25\u201329). Commonsense causal reasoning between short texts. Proceedings of the Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cape Town, South Africa."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Li, Z., Ding, X., Liu, T., Hu, J.E., and Van Durme, B. (2021). Guided generation of cause and effect. arXiv.","DOI":"10.24963\/ijcai.2020\/502"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Perozzi, B., Al-Rfou, R., and Skiena, S. (2014, January 24\u201327). Deepwalk: Online learning of social representations. 
Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA.","DOI":"10.1145\/2623330.2623732"},{"key":"ref_22","unstructured":"Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., and Mei, Q. (2015, January 18\u201322). Line: Large-scale information network embedding. Proceedings of the 24th International Conference on World Wide Web, Florence, Italy.","DOI":"10.1145\/2736277.2741093"},{"key":"ref_24","unstructured":"Chambers, N., and Jurafsky, D. (2008, January 16\u201318). Unsupervised learning of narrative event chains. Proceedings of the ACL-08: HLT, Columbus, OH, USA."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Ding, X., Zhang, Y., Liu, T., and Duan, J. (2014, January 25\u201329). Using structured events to predict stock price movement: An empirical investigation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.","DOI":"10.3115\/v1\/D14-1148"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Li, Z., Chen, T., and Van Durme, B. (2019). Learning to rank for plausible plausibility. arXiv.","DOI":"10.18653\/v1\/P19-1475"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Li, Z., Ding, X., and Liu, T. (2018). Constructing narrative event evolutionary graph for script event prediction. arXiv.","DOI":"10.24963\/ijcai.2018\/584"},{"key":"ref_28","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_29","unstructured":"Loshchilov, I., and Hutter, F. (2018). Fixing Weight Decay Regularization in Adam. arXiv."},{"key":"ref_30","unstructured":"Zhang, T., Wu, F., Katiyar, A., Weinberger, K.Q., and Artzi, Y. (2020). Revisiting few-sample BERT fine-tuning. 
arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Sharma, R., Allen, J., Bakhshandeh, O., and Mostafazadeh, N. (2018, January 15\u201320). Tackling the story ending biases in the story cloze test. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia.","DOI":"10.18653\/v1\/P18-2119"}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/13\/12\/582\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:41:48Z","timestamp":1760146908000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/13\/12\/582"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,15]]},"references-count":31,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["info13120582"],"URL":"https:\/\/doi.org\/10.3390\/info13120582","relation":{},"ISSN":["2078-2489"],"issn-type":[{"type":"electronic","value":"2078-2489"}],"subject":[],"published":{"date-parts":[[2022,12,15]]}}}