{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:52:07Z","timestamp":1773798727350,"version":"3.50.1"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2021,8]]},"abstract":"<jats:p>Graph neural networks (GNNs) have been widely used in the 3D human pose estimation task, since the pose representation of a human body can be naturally modeled by the graph structure. Generally, most of the existing GNN-based models utilize the restricted receptive fields of filters and single-scale information, while neglecting the valuable multi-scale contextual information. To tackle this issue, we propose a novel Graph Transformer Encoder-Decoder with Atrous Convolution, named PoseGTAC, to effectively extract multi-scale context and long-range information. In our proposed PoseGTAC model, Graph Atrous Convolution (GAC) and Graph Transformer Layer (GTL), respectively for the extraction of local multi-scale and global long-range information, are combined and stacked in an encoder-decoder structure, where graph pooling and unpooling are adopted for the interaction of multi-scale information from local to global (e.g., part-scale and body-scale). Extensive experiments on the Human3.6M and MPI-INF-3DHP datasets demonstrate that the proposed PoseGTAC model exceeds all previous methods and achieves state-of-the-art performance.<\/jats:p>","DOI":"10.24963\/ijcai.2021\/188","type":"proceedings-article","created":{"date-parts":[[2021,8,11]],"date-time":"2021-08-11T11:00:49Z","timestamp":1628679649000},"page":"1359-1365","source":"Crossref","is-referenced-by-count":20,"title":["PoseGTAC: Graph Transformer Encoder-Decoder with Atrous Convolution for 3D Human Pose Estimation"],"prefix":"10.24963","author":[{"given":"Yiran","family":"Zhu","sequence":"first","affiliation":[{"name":"University of Electronic Science and Technology of China"}]},{"given":"Xing","family":"Xu","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China"}]},{"given":"Fumin","family":"Shen","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China"}]},{"given":"Yanli","family":"Ji","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China"}]},{"given":"Lianli","family":"Gao","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China"}]},{"given":"Heng Tao","family":"Shen","sequence":"additional","affiliation":[{"name":"University of Electronic Science and Technology of China"}]}],"member":"10584","event":{"name":"Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}","theme":"Artificial Intelligence","location":"Montreal, Canada","acronym":"IJCAI-2021","number":"30","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"start":{"date-parts":[[2021,8,19]]},"end":{"date-parts":[[2021,8,27]]}},"container-title":["Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2021,8,11]],"date-time":"2021-08-11T11:01:51Z","timestamp":1628679711000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2021\/188"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2021,8]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2021\/188","relation":{},"subject":[],"published":{"date-parts":[[2021,8]]}}}