{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,24]],"date-time":"2025-09-24T00:14:58Z","timestamp":1758672898053,"version":"3.44.0"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Point clouds, as a primary representation of 3D data, can be categorized into scene-domain point clouds and object-domain point clouds. Point cloud self-supervised learning (SSL) has become a mainstream paradigm for learning 3D representations. However, existing point cloud SSL primarily focuses on learning domain-specific 3D representations within a single domain, neglecting the complementary nature of cross-domain knowledge, which limits the learning of 3D representations. In this paper, we propose to learn a comprehensive Point cloud Mixture-of-Domain-Experts model (Point-MoDE) via a block-to-scene pre-training strategy. Specifically, we first propose a mixture-of-domain-experts model consisting of scene-domain experts and multiple shared object-domain experts. Furthermore, we propose a block-to-scene pre-training strategy, which leverages the features of point blocks in the object domain to regress their initial positions in the scene domain through object-level block mask reconstruction and scene-level block position regression. By integrating the complementary knowledge between object and scene, this strategy simultaneously facilitates the learning of both object-domain and scene-domain representations, leading to a more comprehensive 3D representation. Extensive experiments on downstream tasks demonstrate the superiority of our model.<\/jats:p>","DOI":"10.24963\/ijcai.2025\/260","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"2332-2340","source":"Crossref","is-referenced-by-count":0,"title":["Point Cloud Mixture-of-Domain-Experts Model for 3D Self-supervised Learning"],"prefix":"10.24963","author":[{"given":"Yaohua","family":"Zha","sequence":"first","affiliation":[{"name":"Tsinghua University"},{"name":"Pengcheng Laboratory"}]},{"given":"Tao","family":"Dai","sequence":"additional","affiliation":[{"name":"Shenzhen University"}]},{"given":"Hang","family":"Guo","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]},{"given":"Yanzi","family":"Wang","sequence":"additional","affiliation":[{"name":"Tsinghua University"}]},{"given":"Bin","family":"Chen","sequence":"additional","affiliation":[{"name":"Harbin Institute of Technology, Shenzhen"}]},{"given":"Ke","family":"Chen","sequence":"additional","affiliation":[{"name":"Pengcheng Laboratory"}]},{"given":"Shu-Tao","family":"Xia","sequence":"additional","affiliation":[{"name":"Pengcheng Laboratory"}]}],"member":"10584","event":{"number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2025","name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","start":{"date-parts":[[2025,8,16]]},"theme":"Artificial Intelligence","location":"Montreal, Canada","end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:33:29Z","timestamp":1758627209000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/260"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/260","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}