{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T07:42:54Z","timestamp":1723016574783},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:p>Entity representation plays a central role in building effective entity retrieval models. Recent works propose to learn entity representations based on entity-centric contexts, which achieve SOTA performances on many tasks. However, these methods lead to poor representations for unseen entities since they rely on a multitude of occurrences for each entity to enable accurate representations. To address this issue, we propose to learn enhanced descriptional representations for unseen entities by distilling knowledge from distributional semantics into descriptional embeddings. Specifically, we infer enhanced embeddings for unseen entities based on descriptions by aligning the descriptional embedding space to the distributional embedding space with different granularities, i.e., element-level, batch-level and space-level alignment. Experimental results on four benchmark datasets show that our approach improves the performance over all baseline methods. In particular, our approach can achieve the effectiveness of the teacher model on almost all entities, and maintain such high performance on unseen entities.<\/jats:p>","DOI":"10.24963\/ijcai.2022\/611","type":"proceedings-article","created":{"date-parts":[[2022,7,15]],"date-time":"2022-07-15T22:55:56Z","timestamp":1657925756000},"page":"4404-4410","source":"Crossref","is-referenced-by-count":0,"title":["MGAD: Learning Descriptional Representation Distilled from Distributional Semantics for Unseen Entities"],"prefix":"10.24963","author":[{"given":"Yuanzheng","family":"Wang","sequence":"first","affiliation":[{"name":"Institute of Computing Technology,"},{"name":"University of Chinese Academy of Sciences"}]},{"given":"Xueqi","family":"Cheng","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology,"},{"name":"University of Chinese Academy of Sciences"}]},{"given":"Yixing","family":"Fan","sequence":"additional","affiliation":[{"name":"CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology,"},{"name":"University of Chinese Academy of Sciences"}]},{"given":"Xiaofei","family":"Zhu","sequence":"additional","affiliation":[{"name":"College of Computer Science and Engineering, Chongqing University of Technology"}]},{"given":"Huasheng","family":"Liang","sequence":"additional","affiliation":[{"name":"Tencent"}]},{"given":"Qiang","family":"Yan","sequence":"additional","affiliation":[{"name":"Tencent"}]},{"given":"Jiafeng","family":"Guo","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology,"},{"name":"University of Chinese Academy of Sciences"}]}],"member":"10584","event":{"number":"31","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2022","name":"Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}","start":{"date-parts":[[2022,7,23]]},"theme":"Artificial Intelligence","location":"Vienna, Austria","end":{"date-parts":[[2022,7,29]]}},"container-title":["Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2022,7,18]],"date-time":"2022-07-18T07:10:41Z","timestamp":1658128241000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2022\/611"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2022,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2022\/611","relation":{},"subject":[],"published":{"date-parts":[[2022,7]]}}}