{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T22:28:53Z","timestamp":1768343333099,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":17,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,6,16]]},"DOI":"10.1145\/3769126.3769214","type":"proceedings-article","created":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T14:50:37Z","timestamp":1768315837000},"page":"364-368","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Improved Understanding of Legal Text with Graph Attention Networks"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0969-9925","authenticated-orcid":false,"given":"Andrew","family":"Shin","sequence":"first","affiliation":[{"name":"Keio University, Yokohama, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-6107-6151","authenticated-orcid":false,"given":"Kunitake","family":"Kaneko","sequence":"additional","affiliation":[{"name":"Keio University, Yokohama, Japan"}]}],"member":"320","published-online":{"date-parts":[[2026,1,13]]},"reference":[{"key":"e_1_3_3_1_2_2","unstructured":"Iz Beltagy Matthew\u00a0E. Peters and Arman Cohan. 2020. Longformer: The Long-Document Transformer. ArXiv abs\/2004.05150 (2020). https:\/\/api.semanticscholar.org\/CorpusID:215737171"},{"key":"e_1_3_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-2041"},{"key":"e_1_3_3_1_4_2","unstructured":"Ilias Chalkidis Manos Fergadiotis Prodromos Malakasiotis Nikolaos Aletras and Ion Androutsopoulos. 2020. LEGAL-BERT: \u201cPreparing the Muppets for Court\u2019\u201d. ArXiv abs\/2010.02559 (2020). 
https:\/\/api.semanticscholar.org\/CorpusID:222141043"},{"key":"e_1_3_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3936759"},{"key":"e_1_3_3_1_6_2","unstructured":"Yakun Chen Xianzhi Wang and Guandong Xu. 2023. GATGPT: A Pre-trained Large Language Model with Graph Attention Network for Spatiotemporal Imputation. ArXiv abs\/2311.14332 (2023). https:\/\/api.semanticscholar.org\/CorpusID:265444946"},{"key":"e_1_3_3_1_7_2","unstructured":"Aakanksha Chowdhery Sharan Narang Jacob Devlin Maarten Bosma Gaurav Mishra Adam Roberts Paul Barham Hyung\u00a0Won Chung Charles Sutton Sebastian Gehrmann Parker Schuh Kensen Shi Sasha Tsvyashchenko Joshua Maynez Abhishek Rao Parker Barnes Yi Tay Noam\u00a0M. Shazeer Vinodkumar Prabhakaran Emily Reif Nan Du Ben Hutchinson Reiner Pope James Bradbury Jacob Austin Michael Isard Guy Gur-Ari Pengcheng Yin Toju Duke Anselm Levskaya Sanjay Ghemawat Sunipa Dev Henryk Michalewski Xavier Garc\u00eda Vedant Misra Kevin Robinson Liam Fedus Denny Zhou Daphne Ippolito David Luan Hyeontaek Lim Barret Zoph Alexander Spiridonov Ryan Sepassi David Dohan Shivani Agrawal Mark Omernick Andrew\u00a0M. Dai Thanumalayan\u00a0Sankaranarayana Pillai Marie Pellat Aitor Lewkowycz Erica Moreira Rewon Child Oleksandr Polozov Katherine Lee Zongwei Zhou Xuezhi Wang Brennan Saeta Mark D\u00edaz Orhan Firat Michele Catasta Jason Wei Kathleen\u00a0S. Meier-Hellstern Douglas Eck Jeff Dean Slav Petrov and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways. ArXiv abs\/2204.02311 (2022). https:\/\/api.semanticscholar.org\/CorpusID:247951931"},{"key":"e_1_3_3_1_8_2","volume-title":"North American Chapter of the Association for Computational Linguistics","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In North American Chapter of the Association for Computational Linguistics. 
https:\/\/api.semanticscholar.org\/CorpusID:52967399"},{"key":"e_1_3_3_1_9_2","unstructured":"Pengcheng He Xiaodong Liu Jianfeng Gao and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. ArXiv abs\/2006.03654 (2020). https:\/\/api.semanticscholar.org\/CorpusID:219531210"},{"key":"e_1_3_3_1_10_2","unstructured":"Dan Hendrycks and Kevin Gimpel. 2016. Gaussian Error Linear Units (GELUs). ArXiv abs\/1606.08415 (2016). https:\/\/api.semanticscholar.org\/CorpusID:125617073"},{"key":"e_1_3_3_1_11_2","unstructured":"Yinhan Liu Myle Ott Naman Goyal Jingfei Du Mandar Joshi Danqi Chen Omer Levy Mike Lewis Luke Zettlemoyer and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv abs\/1907.11692 (2019). https:\/\/api.semanticscholar.org\/CorpusID:198953378"},{"key":"e_1_3_3_1_12_2","unstructured":"OpenAI. 2022. OpenAI: Introducing ChatGPT. https:\/\/openai.com\/blog\/chatgpt."},{"key":"e_1_3_3_1_13_2","doi-asserted-by":"crossref","unstructured":"Shounak Paul A. Mandal Pawan Goyal and Saptarshi Ghosh. 2022. Pre-trained Language Models for the Legal Domain: A Case Study on Indian Law. Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law (2022). https:\/\/api.semanticscholar.org\/CorpusID:258615145","DOI":"10.1145\/3594536.3595165"},{"key":"e_1_3_3_1_14_2","unstructured":"Hugo Touvron Thibaut Lavril Gautier Izacard Xavier Martinet Marie-Anne Lachaux Timoth\u00e9e Lacroix Baptiste Rozi\u00e8re Naman Goyal Eric Hambro Faisal Azhar Aur\u00e9lien Rodriguez Armand Joulin Edouard Grave and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. ArXiv abs\/2302.13971 (2023). https:\/\/api.semanticscholar.org\/CorpusID:257219404"},{"key":"e_1_3_3_1_15_2","unstructured":"Hugo Touvron Louis Martin Kevin\u00a0R. Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale Daniel\u00a0M. 
Bikel Lukas Blecher Cristian\u00a0Cant\u00f3n Ferrer Moya Chen Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony\u00a0S. Hartshorn Saghar Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel\u00a0M. Kloumann A.\u00a0V. Korenev Punit\u00a0Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi Alan Schelten Ruan Silva Eric\u00a0Michael Smith R. Subramanian Xia Tan Binh Tang Ross Taylor Adina Williams Jian\u00a0Xiang Kuan Puxin Xu Zhengxu Yan Iliyan Zarov Yuchen Zhang Angela Fan Melissa Hall\u00a0Melanie Kambadur Sharan Narang Aur\u00e9lien Rodriguez Robert Stojnic Sergey Edunov and Thomas Scialom. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. ArXiv abs\/2307.09288 (2023). https:\/\/api.semanticscholar.org\/CorpusID:259950998"},{"key":"e_1_3_3_1_16_2","unstructured":"Petar Velickovic Guillem Cucurull Arantxa Casanova Adriana Romero Pietro Lio\u2019 and Yoshua Bengio. 2017. Graph Attention Networks. ArXiv abs\/1710.10903 (2017). https:\/\/api.semanticscholar.org\/CorpusID:3292002"},{"key":"e_1_3_3_1_17_2","unstructured":"Manzil Zaheer Guru Guruganesh Kumar\u00a0Avinava Dubey Joshua Ainslie Chris Alberti Santiago Onta\u00f1\u00f3n Philip Pham Anirudh Ravula Qifan Wang Li Yang and Amr Ahmed. 2020. Big Bird: Transformers for Longer Sequences. ArXiv abs\/2007.14062 (2020). https:\/\/api.semanticscholar.org\/CorpusID:220831004"},{"key":"e_1_3_3_1_18_2","unstructured":"Yue Zhang Yafu Li Leyang Cui Deng Cai Lemao Liu Tingchen Fu Xinting Huang Enbo Zhao Yu Zhang Yulong Chen Longyue Wang Anh\u00a0Tuan Luu Wei Bi Freda Shi and Shuming Shi. 2023. Siren\u2019s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. ArXiv abs\/2309.01219 (2023). 
https:\/\/api.semanticscholar.org\/CorpusID:261530162"}],"event":{"name":"ICAIL 2025: 20th International Conference on Artificial Intelligence and Law","location":"Chicago , IL , USA","acronym":"ICAIL 2025"},"container-title":["Proceedings of the Twentieth International Conference on Artificial Intelligence and Law"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3769126.3769214","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T15:46:30Z","timestamp":1768319190000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3769126.3769214"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,16]]},"references-count":17,"alternative-id":["10.1145\/3769126.3769214","10.1145\/3769126"],"URL":"https:\/\/doi.org\/10.1145\/3769126.3769214","relation":{},"subject":[],"published":{"date-parts":[[2025,6,16]]},"assertion":[{"value":"2026-01-13","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}