{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T10:00:09Z","timestamp":1764842409950},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,7]]},"abstract":"<jats:p>In this paper, we tackle a challenging task named video-language segmentation. Given a video and a sentence in natural language, the goal is to segment the object or actor described by the sentence in video frames. To accurately denote a target object, the given sentence usually refers to multiple attributes, such as nearby objects with spatial relations, etc. In this paper, we propose a novel Polar Relative Positional Encoding (PRPE) mechanism that represents spatial relations in a ``linguistic'' way, i.e., in terms of direction and range. Sentence feature can interact with positional embeddings in a more direct way to extract the implied relative positional relations. We also propose parameterized functions for these positional embeddings to adapt real-value directions and ranges. With PRPE, we design a Polar Attention Module (PAM) as the basic module for vision-language fusion. Our method outperforms previous best method by a large margin of 11.4% absolute improvement in terms of mAP on the challenging A2D Sentences dataset. Our method also achieves competitive performances on the J-HMDB Sentences dataset.<\/jats:p>","DOI":"10.24963\/ijcai.2020\/132","type":"proceedings-article","created":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T12:12:10Z","timestamp":1594210330000},"page":"948-954","source":"Crossref","is-referenced-by-count":32,"title":["Polar Relative Positional Encoding for Video-Language Segmentation"],"prefix":"10.24963","author":[{"given":"Ke","family":"Ning","sequence":"first","affiliation":[{"name":"Zhejiang University"}]},{"given":"Lingxi","family":"Xie","sequence":"additional","affiliation":[{"name":"Huawei Noah's Ark Lab"}]},{"given":"Fei","family":"Wu","sequence":"additional","affiliation":[{"name":"Zhejiang University"}]},{"given":"Qi","family":"Tian","sequence":"additional","affiliation":[{"name":"Huawei Noah\u2019s Ark Lab"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-PRICAI-2020","name":"Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}","start":{"date-parts":[[2020,7,11]]},"theme":"Artificial Intelligence","location":"Yokohama, Japan","end":{"date-parts":[[2020,7,17]]}},"container-title":["Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T02:13:29Z","timestamp":1594260809000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2020\/132"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2020,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2020\/132","relation":{},"subject":[],"published":{"date-parts":[[2020,7]]}}}