{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:14:47Z","timestamp":1775578487391,"version":"3.50.1"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:p>This paper presents a Refinement Pyramid Transformer (RePFormer) for robust facial landmark detection. Most facial landmark detectors focus on learning representative image features. However, these CNN-based feature representations are not robust enough to handle complex real-world scenarios due to ignoring the internal structure of landmarks, as well as the relations between landmarks and context. In this work, we formulate the facial landmark detection task as refining landmark queries along pyramid memories. Specifically, a pyramid transformer head (PTH) is introduced to build both homologous relations among landmarks and heterologous relations between landmarks and cross-scale contexts. Besides, a dynamic landmark refinement (DLR) module is designed to decompose the landmark regression into an end-to-end refinement procedure, where the dynamically aggregated queries are transformed to residual coordinate predictions. Extensive experimental results on four facial landmark detection benchmarks and their various subsets demonstrate the superior performance and high robustness of our framework.<\/jats:p>","DOI":"10.24963\/ijcai.2022\/152","type":"proceedings-article","created":{"date-parts":[[2022,7,15]],"date-time":"2022-07-15T22:55:56Z","timestamp":1657925756000},"page":"1088-1094","source":"Crossref","is-referenced-by-count":14,"title":["RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection"],"prefix":"10.24963","author":[{"given":"Jinpeng","family":"Li","sequence":"first","affiliation":[{"name":"Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China"}]},{"given":"Haibo","family":"Jin","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China"}]},{"given":"Shengcai","family":"Liao","sequence":"additional","affiliation":[{"name":"Inception Institute of Artificial Intelligence (IIAI), UAE"}]},{"given":"Ling","family":"Shao","sequence":"additional","affiliation":[{"name":"Terminus Group, China"}]},{"given":"Pheng-Ann","family":"Heng","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China"},{"name":"Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"}]}],"member":"10584","event":{"name":"Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}","theme":"Artificial Intelligence","location":"Vienna, Austria","acronym":"IJCAI-2022","number":"31","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"start":{"date-parts":[[2022,7,23]]},"end":{"date-parts":[[2022,7,29]]}},"container-title":["Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2022,7,18]],"date-time":"2022-07-18T07:08:00Z","timestamp":1658128080000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2022\/152"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2022,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2022\/152","relation":{},"subject":[],"published":{"date-parts":[[2022,7]]}}}