{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T05:05:16Z","timestamp":1750309516469,"version":"3.41.0"},"reference-count":39,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,1,18]],"date-time":"2025-01-18T00:00:00Z","timestamp":1737158400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62162056"],"award-info":[{"award-number":["62162056"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Gansu Provincial Department of Education Industry Support Plan Project","award":["2021CYZC-06"],"award-info":[{"award-number":["2021CYZC-06"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2025,1,31]]},"abstract":"<jats:p>Information extraction is pivotal in natural language processing, where the goal is to convert unstructured text into structured information. A significant challenge in this domain is the diversity and specific needs of various processing tasks. Traditional approaches typically utilize separate frameworks for different information extraction tasks, such as named entity recognition and relationship extraction, which hampers their uniformity and scalability. In this study, this study introduce a Universal Information Extraction (UIE) framework combined with a cue learning strategy, significantly improving the efficiency and accuracy of extracting mine hoist fault data. Initially, domain-specific data is manually labeled to fine-tune the model, and the accuracy is further enhanced by constructing negative examples during this fine-tuning process. The model then focuses on faults using the Structured Extraction Language (SEL) and a schema-based prompt syntax, the Structural Schema Instructor (SSI), which targets and extracts key information from the fault data to meet specific domain requirements. 
Experimental results show that UIE substantially improves processing efficiency and extraction accuracy on mine hoist fault data, with the fine-tuned F1 score increasing from 23.59% to 92.51%.<\/jats:p>","DOI":"10.1145\/3705313","type":"journal-article","created":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T09:01:58Z","timestamp":1732179718000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["UIE-Based Relational Extraction Task for Mine Hoist Fault Data"],"prefix":"10.1145","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9655-0044","authenticated-orcid":false,"given":"Xiaochao","family":"Dang","sequence":"first","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China and Gansu Province Internet of Things Engineering Research Center, Lanzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-0486-3179","authenticated-orcid":false,"given":"Guozhen","family":"Ding","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9228-8552","authenticated-orcid":false,"given":"Xiaohui","family":"Dong","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6051-0737","authenticated-orcid":false,"given":"Fenfang","family":"Li","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9145-2309","authenticated-orcid":false,"given":"Shiwei","family":"Gao","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-4643-1514","authenticated-orcid":false,"given":"Yue","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2025,1,18]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-020-1647-3"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1136\/amiajnl-2011-000464"},{"volume-title":"North American Association for Computational Linguistics (NAACL)","key":"e_1_3_1_4_2","unstructured":"J. Devlin, M. W. Chang, and K. Lee. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL). 1--16."},{"key":"e_1_3_1_5_2","doi-asserted-by":"crossref","unstructured":"J. Li, T. Tang, and W. X. Zhao. 2022. Pretrained language models for text generation: A survey. ACM Computing Surveys 56 (2022), 1--39.","DOI":"10.1145\/3649449"},{"key":"e_1_3_1_6_2","doi-asserted-by":"crossref","first-page":"203","DOI":"10.18653\/v1\/D19-5827","article-title":"Generalizing question answering system with pre-trained language model fine-tuning","author":"Su D.","year":"2019","unstructured":"D. Su, Y. Xu, G. I. Winata et al. 2019. Generalizing question answering system with pre-trained language model fine-tuning. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering.
203\u2013211.","journal-title":"Proceedings of the 2nd Workshop on Machine Reading for Question Answering"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1561\/1900000003"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2021.3070843"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2018.2812203"},{"volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics","key":"e_1_3_1_10_2","unstructured":"Y. Lu, Q. Liu, D. Dai, X. Xiao, H. Lin, X. Han, L. Sun, and H. Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 5755--5772."},{"key":"e_1_3_1_11_2","article-title":"Unified language model pre-training for natural language understanding and generation","volume":"32","author":"Dong L.","year":"2019","unstructured":"L. Dong, N. Yang, W. Wang et al. 2019. Unified language model pre-training for natural language understanding and generation. Advan. Neural Inf. Process. Syst. 32 (2019), 1--14.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_1_12_2","first-page":"830","article-title":"Importance of semantic representation: Dataless classification","author":"Chang M. W.","year":"2008","unstructured":"M. W. Chang, L. A. Ratinov, D. Roth et al. 2008. Importance of semantic representation: Dataless classification. In Proceedings of the AAAI Conference on Artificial Intelligence. 830\u2013835.","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"e_1_3_1_13_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani A.","year":"2017","unstructured":"A. Vaswani, N. Shazeer, N. Parmar et al. 2017. Attention is all you need. Advan. Neural Inf. Process. Syst. 30 (2017), 1--11.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_1_14_2","first-page":"1877","article-title":"Language models are few-shot learners","volume":"33","author":"Brown T.","year":"2020","unstructured":"T. Brown, B. Mann, N. Ryder et al. 2020. Language models are few-shot learners. Advan. Neural Inf. Process. Syst. 33 (2020), 1877\u20131901.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_1_15_2","doi-asserted-by":"crossref","first-page":"1441","DOI":"10.18653\/v1\/P19-1139","article-title":"ERNIE: Enhanced language representation with informative entities","author":"Zhang Zhengyan","year":"2019","unstructured":"Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1441\u20131451.","journal-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics"},{"key":"e_1_3_1_16_2","unstructured":"Y. Sun, S. Wang, and Y. Li. 2019. ERNIE: Enhanced representation through knowledge integration. 1--8. arXiv preprint arXiv:1904.09223."},{"key":"e_1_3_1_17_2","unstructured":"Y. Sun, S. Wang, S. Feng et al. 2021. ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. 1--6. arXiv preprint arXiv:2107.02137."},{"key":"e_1_3_1_18_2","doi-asserted-by":"crossref","first-page":"2901","DOI":"10.1609\/aaai.v34i03.5681","article-title":"K-BERT: Enabling language representation with knowledge graph","author":"Liu W.","year":"2020","unstructured":"W. Liu, P. Zhou, Z. Zhao et al. 2020.
K-BERT: Enabling language representation with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence. 2901\u20132908.","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00360"},{"volume-title":"Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","key":"e_1_3_1_20_2","unstructured":"G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 260--270."},{"volume-title":"Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM'19)","key":"e_1_3_1_21_2","unstructured":"S. Wu and Y. He. 2019. Enriching pre-trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM'19). Association for Computing Machinery, New York, NY, USA, 2361--2364."},{"key":"e_1_3_1_22_2","first-page":"289","article-title":"Joint extraction of events and entities within a document context","author":"Yang B.","year":"2016","unstructured":"B. Yang and T. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 289\u2013299.","journal-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"},{"key":"e_1_3_1_23_2","first-page":"260","article-title":"Neural architectures for named entity recognition","author":"Lample Guillaume","year":"2016","unstructured":"Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 260\u2013270.","journal-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"},{"key":"e_1_3_1_24_2","doi-asserted-by":"crossref","first-page":"2205","DOI":"10.18653\/v1\/D18-1244","article-title":"Graph convolution over pruned dependency trees improves relation extraction","author":"Zhang Yuhao","year":"2018","unstructured":"Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2205\u20132215.","journal-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing"},{"key":"e_1_3_1_25_2","doi-asserted-by":"crossref","first-page":"1105","DOI":"10.18653\/v1\/P16-1105","article-title":"End-to-end relation extraction using LSTMs on sequences and tree structures","author":"Miwa Makoto","year":"2016","unstructured":"Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
1105\u20131116.","journal-title":"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)"},{"key":"e_1_3_1_26_2","first-page":"1","article-title":"The stages of event extraction","author":"Ahn D.","year":"2006","unstructured":"D. Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events. 1\u20138.","journal-title":"Proceedings of the Workshop on Annotating and Reasoning about Time and Events"},{"key":"e_1_3_1_27_2","unstructured":"Y. Liu, M. Ott, N. Goyal et al. 2019. RoBERTa: A robustly optimized BERT pretraining approach. 1--13. arXiv preprint arXiv:1907.11692."},{"key":"e_1_3_1_28_2","unstructured":"Q. Chen, Z. Zhuo, and W. Wang. 2019. BERT for joint intent classification and slot filling. 1--6. arXiv preprint arXiv:1902.10909."},{"key":"e_1_3_1_29_2","first-page":"6442","article-title":"LUKE: Deep contextualized entity representations with entity-aware self-attention","author":"Yamada Ikuya","year":"2020","unstructured":"Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920). 6442\u20136454.","journal-title":"Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP\u201920)"},{"key":"e_1_3_1_30_2","first-page":"1476","article-title":"A novel cascade binary tagging framework for relational triple extraction","author":"Wei Zhepei","year":"2020","unstructured":"Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging framework for relational triple extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 1476\u20131488.","journal-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"},{"key":"e_1_3_1_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3560815"},{"key":"e_1_3_1_32_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1007379606734"},{"key":"e_1_3_1_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3386252"},{"key":"e_1_3_1_34_2","unstructured":"Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. 2020. Label Studio: Data labeling software. Retrieved from https:\/\/github.com\/heartexlabs\/label-studio"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.703"},{"issue":"140","key":"e_1_3_1_36_2","first-page":"1","article-title":"Exploring the limits of transfer learning with a unified text-to-text transformer","volume":"21","author":"Raffel C.","year":"2020","unstructured":"C. Raffel, N. Shazeer, A. Roberts et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 140 (2020), 1\u201367.","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_1_37_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1989.1.2.270"},{"key":"e_1_3_1_38_2","unstructured":"M. A. Ranzato, S. Chopra, M. Auli et al. 2015. Sequence level training with recurrent neural networks. 1--16.
arXiv preprint arXiv:1511.06732."},{"key":"e_1_3_1_39_2","first-page":"236","volume-title":"Findings of the Association for Computational Linguistics: EMNLP\u201920","author":"Zhang Ranran Haoran","year":"2020","unstructured":"Ranran Haoran Zhang, Qianying Liu, Aysa Xuemo Fan, Heng Ji, Daojian Zeng, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2020. Minimize exposure bias of Seq2Seq models in joint entity and relation extraction. In Findings of the Association for Computational Linguistics: EMNLP\u201920. 236\u2013246."},{"key":"e_1_3_1_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2009.191"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3705313","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3705313","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:02Z","timestamp":1750295882000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3705313"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,18]]},"references-count":39,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1,31]]}},"alternative-id":["10.1145\/3705313"],"URL":"https:\/\/doi.org\/10.1145\/3705313","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"type":"print","value":"2375-4699"},{"type":"electronic","value":"2375-4702"}],"subject":[],"published":{"date-parts":[[2025,1,18]]},"assertion":[{"value":"2024-06-02","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-15","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-01-18","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}