{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,19]],"date-time":"2026-01-19T09:22:17Z","timestamp":1768814537747,"version":"3.49.0"},"reference-count":39,"publisher":"MDPI AG","issue":"17","license":[{"start":{"date-parts":[[2024,8,24]],"date-time":"2024-08-24T00:00:00Z","timestamp":1724457600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100015224","name":"State Grid Zhejiang Electric Power Company","doi-asserted-by":"publisher","award":["5211HZ220007"],"award-info":[{"award-number":["5211HZ220007"]}],"id":[{"id":"10.13039\/501100015224","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Point clouds are essential 3D data representations utilized across various disciplines, often requiring point cloud completion methods to address inherent incompleteness. Existing completion methods such as SnowflakeNet consider only local attention, lacking global information about the complete shape, and tend to suffer from overfitting as the model depth increases. To address these issues, we introduced self-positioning point-based attention to better capture complete global contextual features and designed a Channel Attention module for adaptive feature adjustment within the global vector. Additionally, we implemented a vector attention grouping strategy in both the skip-transformer and self-positioning point-based attention to mitigate overfitting, improving parameter efficiency and generalization. We evaluated our method on the PCN dataset as well as the ShapeNet55\/34 datasets. The experimental results show that our method achieved an average CD-L1 of 7.09 on the PCN benchmark, and average CD-L2 scores of 8.0, 7.8, and 14.4 on the ShapeNet55, ShapeNet34, and ShapeNet-unseen21 benchmarks, respectively. 
Compared to SnowflakeNet, we improved the average CD by 1.6%, 3.6%, 3.7%, and 4.6% on the corresponding benchmarks, while also reducing complexity and computational costs and accelerating training and inference speeds. Compared to other existing point cloud completion networks, our method also achieves competitive results.<\/jats:p>","DOI":"10.3390\/rs16173127","type":"journal-article","created":{"date-parts":[[2024,8,26]],"date-time":"2024-08-26T03:14:31Z","timestamp":1724642071000},"page":"3127","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["GSSnowflake: Point Cloud Completion by Snowflake with Grouped Vector and Self-Positioning Point Attention"],"prefix":"10.3390","volume":"16","author":[{"given":"Yu","family":"Xiao","sequence":"first","affiliation":[{"name":"The Academy of Digital China, Fuzhou University, Fuzhou 350108, China"}]},{"given":"Yisheng","family":"Chen","sequence":"additional","affiliation":[{"name":"The Academy of Digital China, Fuzhou University, Fuzhou 350108, China"}]},{"given":"Chongcheng","family":"Chen","sequence":"additional","affiliation":[{"name":"The Academy of Digital China, Fuzhou University, Fuzhou 350108, China"}]},{"given":"Ding","family":"Lin","sequence":"additional","affiliation":[{"name":"The Academy of Digital China, Fuzhou University, Fuzhou 350108, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,8,24]]},"reference":[{"key":"ref_1","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA."},{"key":"ref_2","unstructured":"Qi, C.R., Yi, L., Su, H., and Guibas, L.J. (2017). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. 
Advances in Neural Information Processing Systems 30, MIT Press."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Pan, X., Xia, Z., Song, S., Li, L.E., and Huang, G. (2021, January 20\u201315). 3d object detection with pointformer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00738"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ali, W., Abdelkarim, S., Zidan, M., Zahran, M., and El Sallab, A. (2018, January 8\u201314). Yolo3d: End-to-end real-time 3d oriented object bounding box detection from lidar point cloud. In Proceedings of the Computer Vision\u2014ECCV 2018 Workshops, Munich, Germany.","DOI":"10.1007\/978-3-030-11015-4_54"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Xie, H., Yao, H., Zhou, S., Mao, J., Zhang, S., and Sun, W. (2020, January 23\u201328). Grnet: Gridding residual network for dense point cloud completion. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58545-7_21"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Yuan, W., Khot, T., Held, D., Mertz, C., and Hebert, M. (2018, January 5\u20138). Pcn: Point completion network. Proceedings of the 2018 IEEE International Conference on 3D Vision (3DV), Verona, Italy.","DOI":"10.1109\/3DV.2018.00088"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., and Zhou, J. (2021, January 10\u201317). Pointr: Diverse point cloud completion with geometry-aware transformers. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.01227"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Zhou, H., Cao, Y., Chu, W., Zhu, J., Lu, T., Tai, Y., and Wang, C. (2022, January 23\u201327). Seedformer: Patch seeds based point cloud completion with upsample transformer. 
Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-20062-5_24"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Wen, X., Xiang, P., Han, Z., Cao, Y.P., Wan, P., Zheng, W., and Liu, Y.S. (2021, January 20\u201315). Pmp-net: Point cloud completion by learning multi-step point moving paths. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00736"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"852","DOI":"10.1109\/TPAMI.2022.3159003","article-title":"Pmp-net++: Point cloud completion by transformer-enhanced multi-step point moving paths","volume":"45","author":"Wen","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2425","DOI":"10.1007\/s11263-023-01820-y","article-title":"Learning geometric transformation for point cloud completion","volume":"131","author":"Zhang","year":"2023","journal-title":"Int. J. Comput. Vis."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Li, S., Gao, P., Tan, X., and Wei, M. (2023, January 17\u201324). Proxyformer: Proxy alignment assisted point cloud completion with missing part sensitive transformer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00913"},{"key":"ref_13","unstructured":"Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., and Xiao, J. (2015, January 7\u201312). 3d shapenets: A deep representation for volumetric shapes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Dai, A., Ruizhongtai Qi, C., and Nie\u00dfner, M. (2017, January 21\u201326). Shape completion using 3d-encoder-predictor cnns and shape synthesis. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.693"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Stutz, D., and Geiger, A. (2018, January 18\u201323). Learning 3d shape completion from laser scan data with weak supervision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00209"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Wang, X., Ang, M.H., and Lee, G.H. (2020, January 13\u201319). Cascaded refinement network for point cloud completion. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00087"},{"key":"ref_17","first-page":"6320","article-title":"Snowflake point deconvolution for point cloud completion and generation with skip-transformer","volume":"45","author":"Xiang","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Cai, P., Scott, D., Li, X., and Wang, S. (2024, January 20\u201327). Orthogonal Dictionary Guided Shape Completion Network for Point Cloud. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v38i2.27845"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Rong, Y., Zhou, H., Yuan, L., Mei, C., Wang, J., and Lu, T. (2024, January 20\u201327). CRA-PCN: Point Cloud Completion with Intra-and Inter-level Cross-Resolution Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v38i5.28268"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Zhao, H., Jiang, L., Jia, J., Torr, P.H., and Koltun, V. (2021, January 10\u201317). Point transformer. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.01595"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Park, J., Lee, S., Kim, S., Xiong, Y., and Kim, H.J. (2023, January 17\u201324). Self-positioning point-based transformer for point cloud understanding. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.02089"},{"key":"ref_22","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems 30, MIT Press."},{"key":"ref_23","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"187","DOI":"10.1007\/s41095-021-0229-5","article-title":"Pct: Point cloud transformer","volume":"7","author":"Guo","year":"2021","journal-title":"Comput. Vis. Media"},{"key":"ref_25","unstructured":"Wu, X., Lao, Y., Jiang, L., Liu, X., and Zhao, H. (2022). Point transformer v2: Grouped vector attention and partition-based pooling. Advances in Neural Information Processing Systems 35, MIT Press."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"228","DOI":"10.1007\/s11263-016-0921-6","article-title":"Adaptive spatial-spectral dictionary learning for hyperspectral image restoration","volume":"122","author":"Fu","year":"2017","journal-title":"Int. J. Comput. Vis."},{"key":"ref_27","unstructured":"Xie, H., Yao, H., Sun, X., Zhou, S., and Zhang, S. (2019, October 27\u2013November 2). Pix2vox: Context-aware 3d reconstruction from single and multi-view images. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Klokov, R., and Lempitsky, V. (2017, January 22\u201329). Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.99"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Tatarchenko, M., Dosovitskiy, A., and Brox, T. (2017, January 22\u201329). Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.230"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Graham, B., Engelcke, M., and Van der Maaten, L. (2018, January 18\u201322). 3d semantic segmentation with submanifold sparse convolutional networks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00961"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Su, H., Jampani, V., Sun, D., Maji, S., Kalogerakis, E., Yang, M.H., and Kautz, J. (2018, January 18\u201323). Splatnet: Sparse lattice networks for point cloud processing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00268"},{"key":"ref_32","first-page":"1","article-title":"Adaptive O-CNN: A patch-based deep representation of 3D shapes","volume":"37","author":"Wang","year":"2018","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Yang, Y., Feng, C., Shen, Y., and Tian, D. (2018, January 18\u201323). Foldingnet: Point cloud auto-encoder via deep grid deformation. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00029"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Xia, Y., Xia, Y., Li, W., Song, R., Cao, K., and Stilla, U. (2021, January 20\u201324). Asfm-net: Asymmetrical siamese feature matching network for point completion. Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China.","DOI":"10.1145\/3474085.3475348"},{"key":"ref_35","unstructured":"Lu, D., Xie, Q., Wei, M., Gao, K., Xu, L., and Li, J. (2022). Transformers in 3d point clouds: A survey. arXiv."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., and Wei, Y. (2017, January 22\u201329). Deformable convolutional networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.89"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., and Hu, Q. (2020, January 13\u201319). ECA-Net: Efficient channel attention for deep convolutional neural networks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01155"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Wen, X., Han, Z., Cao, Y.-P., Wan, P., Zheng, W., and Liu, Y.S. (2021, January 20\u201315). Cycle4completion: Unpaired point cloud completion using cycle transformation with missing region coding. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01288"},{"key":"ref_39","unstructured":"Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., and Su, H. (2015). Shapenet: An information-rich 3d model repository. 
arXiv."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/17\/3127\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:42:56Z","timestamp":1760110976000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/17\/3127"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,24]]},"references-count":39,"journal-issue":{"issue":"17","published-online":{"date-parts":[[2024,9]]}},"alternative-id":["rs16173127"],"URL":"https:\/\/doi.org\/10.3390\/rs16173127","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,8,24]]}}}