{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T00:19:09Z","timestamp":1760660349398,"version":"build-2065373602"},"reference-count":38,"publisher":"SAGE Publications","issue":"5","license":[{"start":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T00:00:00Z","timestamp":1746662400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"funder":[{"DOI":"10.13039\/501100001809","name":"the National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["61871258"],"award-info":[{"award-number":["61871258"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Intelligent &amp; Fuzzy Systems: Applications in Engineering and Technology"],"published-print":{"date-parts":[[2025,11]]},"abstract":"<jats:p>In recent years, with the development of technologies such as computer vision, machine learning, and deep learning, as well as the popularity of large-scale data collection devices, 3D point cloud processing has become increasingly important. 3D point cloud processing can be widely used in fields such as object recognition, robot navigation, building information modeling (BIM), and urban planning. With more and more 3D point cloud data acquired, it has become a challenge for present 3D point cloud processing models to accurately and efficiently process this data. To improve the accuracy of point cloud classification and segmentation tasks, this study proposes an improved point cloud classification and segmentation model based on neighborhood aware information fusion. 
The model includes a Fusion Neighbor Information Feature Enhancement (FNIFE) module, which connects points in the local neighborhood and derives the features of the current point from the feature relationships between neighboring points. By enhancing the feature representation of each point, it reduces the feature loss caused by feature extraction and improves the accuracy of point cloud classification. Additionally, the model includes a Reverse Transmission of Point Features (RToPF) module, in which interpolation parameters are adjusted to ensure that the enhanced feature information is effectively transmitted, thereby improving both the accuracy and the computing speed of the model. Finally, to further improve classification accuracy, a module containing the X-Conv operator replaces the max-pooling in the original network, reducing the feature loss generated during feature extraction. Comparative experiments are conducted on the ModelNet40, ShapeNet, S3DIS, and ScanNet datasets. The experimental results show that the overall accuracy of the proposed model reaches 92.4%. 
The average accuracy reaches 90.2% in the point cloud classification task, and the mean intersection over union reaches 84.5% in the point cloud segmentation task, achieving superior performance in both tasks compared with state-of-the-art models.<\/jats:p>","DOI":"10.1177\/18758967251335691","type":"journal-article","created":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T03:17:27Z","timestamp":1746674247000},"page":"1198-1212","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":0,"title":["Neighbor-Aware Information Fusion for Point Cloud Classification and Segmentation"],"prefix":"10.1177","volume":"49","author":[{"given":"Shuifa","family":"Sun","sequence":"first","affiliation":[{"name":"School of Information Science and Technology, Hangzhou Normal University, Hangzhou, Zhejiang, China"},{"name":"Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, Yichang, Hubei, China"}]},{"given":"Yongheng","family":"Tang","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, Yichang, Hubei, China"}]},{"given":"Anning","family":"Xu","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, Yichang, Hubei, China"}]},{"given":"Xuchen","family":"Li","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, Yichang, Hubei, China"}]},{"given":"Yongwei","family":"Miao","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, Hangzhou Normal University, Hangzhou, Zhejiang, China"}]},{"given":"Ben","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, Hangzhou Normal University, Hangzhou, Zhejiang, 
China"}]},{"given":"Yirong","family":"Wu","sequence":"additional","affiliation":[{"name":"Institute of Advanced Studies in Humanities and Social Sciences, Beijing Normal University, Zhuhai, Guangdong, China"}]}],"member":"179","published-online":{"date-parts":[[2025,5,8]]},"reference":[{"key":"e_1_3_3_2_1","first-page":"1534","volume-title":"3d Semantic parsing of large-scale indoor spacesProceedings of the IEEE conference on computer vision and pattern recognition","author":"Armeni I.","year":"2016","unstructured":"Armeni I., Sener O., Zamir A. R., Jiang H., Brilakis I., Fischer M., Savarese S. (2016). 3d Semantic parsing of large-scale indoor spaces. Proceedings of the IEEE conference on computer vision and pattern recognition, 1534\u20131543."},{"key":"e_1_3_3_3_1","first-page":"13778","volume-title":"Affordances from human videos as a versatile representation for robotics[C]Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Bahl S.","year":"2023","unstructured":"Bahl S., Mendonca R., Chen L., et\u00a0al. (2023). Affordances from human videos as a versatile representation for robotics[C]. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 13778\u201313790."},{"key":"e_1_3_3_4_1","first-page":"117","volume-title":"Location-aware self-supervised transformers for semantic segmentation[C]Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision","author":"Caron M.","year":"2024","unstructured":"Caron M., Houlsby N., Schmid C. (2024). Location-aware self-supervised transformers for semantic segmentation[C]. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, 117\u2013127."},{"key":"e_1_3_3_5_1","unstructured":"Chang A. X. Funkhouser T. Guibas L. Hanrahan P. Huang Q. Li Z. Yu F. (2015). Shapenet: An information-rich 3d model repository. 
arXiv preprint arXiv:1512.03012."},{"key":"e_1_3_3_6_1","first-page":"5828","volume-title":"Scannet: Richly-annotated 3d reconstructions of indoor scenes[C]Proceedings of the IEEE conference on computer vision and pattern recognition","author":"Dai A.","year":"2017","unstructured":"Dai A., Chang A. X., Savva M., et\u00a0al. (2017). Scannet: Richly-annotated 3d reconstructions of indoor scenes[C]. Proceedings of the IEEE conference on computer vision and pattern recognition, 5828\u20135839."},{"key":"e_1_3_3_7_1","unstructured":"Fan H. Yang Y. (2019). PointRNN: Point recurrent neural network for moving point cloud processing. arXiv preprint arXiv:1910.08287."},{"key":"e_1_3_3_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.rcim.2023.102567"},{"key":"e_1_3_3_9_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-022-06148-1"},{"key":"e_1_3_3_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/s41095-021-0229-5"},{"key":"e_1_3_3_11_1","first-page":"2017","article-title":"Spatial transformer networks","volume":"28","author":"Jaderberg M.","year":"2015","unstructured":"Jaderberg M., Simonyan K., Zisserman A. (2015). Spatial transformer networks. Advances in Neural Information Processing Systems, 28, 2017\u20132025. https:\/\/arxiv.org\/abs\/1506.02025","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_12_1","doi-asserted-by":"crossref","unstructured":"Jiang M. Wu Y. Zhao T. Zhao Z. Lu C. (2018). Pointsift: A sift-like network module for 3d point cloud semantic segmentation. 
arXiv preprint arXiv:1807.00652.","DOI":"10.1109\/IGARSS.2019.8900102"},{"key":"e_1_3_3_13_1","doi-asserted-by":"publisher","DOI":"10.3390\/rs14041036"},{"key":"e_1_3_3_14_1","doi-asserted-by":"publisher","DOI":"10.3390\/rs14205099"},{"key":"e_1_3_3_15_1","first-page":"21694","volume-title":"Mseg3d: Multi-modal 3d semantic segmentation for autonomous driving[C]Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Li J.","year":"2023","unstructured":"Li J., Dai H., Han H., et\u00a0al. (2023). Mseg3d: Multi-modal 3d semantic segmentation for autonomous driving[C]. Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, 21694\u201321704."},{"key":"e_1_3_3_16_1","first-page":"1919","volume-title":"End-to-end learning local multi-view descriptors for 3d point clouds[C]Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Li L.","year":"2020","unstructured":"Li L., Zhu S., Fu H., et\u00a0al. (2020). End-to-end learning local multi-view descriptors for 3d point clouds[C]. Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, 1919\u20131928."},{"key":"e_1_3_3_17_1","first-page":"1","article-title":"Multiscale receptive fields graph attention network for point cloud classification","author":"Li X. A.","year":"2021","unstructured":"Li X. A., Wang L. Y., Lu J. (2021). Multiscale receptive fields graph attention network for point cloud classification. Complexity, 1\u20139. https:\/\/doi.org\/10.1155\/2021\/8832081","journal-title":"Complexity"},{"key":"e_1_3_3_18_1","first-page":"828","volume-title":"PointCNN: Convolution on \u03a7-transformed pointsProceedings of the 32nd International Conference on Neural Information Processing Systems","author":"Li Y.","year":"2018","unstructured":"Li Y., Bu R., Sun M., Wu W., Di X., Chen B. (2018). PointCNN: Convolution on \u03a7-transformed points. 
Proceedings of the 32nd International Conference on Neural Information Processing Systems, 828\u2013838 ."},{"key":"e_1_3_3_19_1","first-page":"2690","volume-title":"Simplified markov random fields for efficient semantic labeling of 3D point cloudsProceedings of 2012 IEEE\/RSJ International Conference on Intelligent Robots and Systems","author":"Lu Y.","year":"2012","unstructured":"Lu Y., Rasmussen C. (2012). Simplified markov random fields for efficient semantic labeling of 3D point clouds. Proceedings of 2012 IEEE\/RSJ International Conference on Intelligent Robots and Systems, 2690\u20132697."},{"issue":"11","key":"e_1_3_3_20_1","first-page":"2579","article-title":"Visualizing data using t-SNE","volume":"9","author":"Maaten L.","year":"2008","unstructured":"Maaten L., Hinton G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2579\u20132605.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_3_21_1","first-page":"652","volume-title":"Pointnet: Deep learning on point sets for 3d classification and segmentationProceedings of the IEEE conference on computer vision and pattern recognition","author":"Qi C. R.","unstructured":"Qi C. R., Su H., Mo K., Guibas L. J. (2017a). Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition, 652\u2013660 ."},{"key":"e_1_3_3_22_1","first-page":"5105","volume-title":"Pointnet++: Deep hierarchical feature learning on point sets in a metric spaceAdvances in neural information processing systems","author":"Qi C. R.","unstructured":"Qi C. R., Yi L., Su H., Guibas L. J. (2017b). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. 
Advances in neural information processing systems, 5105\u20135114."},{"key":"e_1_3_3_23_1","first-page":"1","article-title":"MFFNet: Multimodal feature fusion network for point cloud semantic segmentation","volume":"39","author":"Ren D.","year":"2023","unstructured":"Ren D., Li J., Wu Z., Guo J., Wei M., Guo Y. (2023). MFFNet: Multimodal feature fusion network for point cloud semantic segmentation. The Visual Computer, 39, 1\u201313. https:\/\/doi.org\/10.1007\/s00371-023-02907-w","journal-title":"The Visual Computer"},{"key":"e_1_3_3_24_1","first-page":"3577","volume-title":"Octnet: Learning deep 3d representations at high resolutionsProceedings of the IEEE conference on computer vision and pattern recognition","author":"Riegler G.","year":"2017","unstructured":"Riegler G., Osman Ulusoy A., Geiger A. (2017). Octnet: Learning deep 3d representations at high resolutions. Proceedings of the IEEE conference on computer vision and pattern recognition, 3577\u20133586."},{"key":"e_1_3_3_25_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2023.3242346","article-title":"Spectral\u2013spatial morphological attention transformer for hyperspectral image classification[J]","volume":"61","author":"Roy S. K.","year":"2023","unstructured":"Roy S. K., Deria A., Shah C., et\u00a0al. (2023). Spectral\u2013spatial morphological attention transformer for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 61, 1\u201315. https:\/\/doi.org\/10.1109\/TGRS.2023.3245678","journal-title":"IEEE Transactions on Geoscience and Remote Sensing"},{"key":"e_1_3_3_26_1","first-page":"460","volume-title":"Point clouds classification and segmentation based on local fusion network[C]2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI)","author":"Shu Y.","year":"2022","unstructured":"Shu Y., Sui Y., Zhao S., et\u00a0al. (2022). Point clouds classification and segmentation based on local fusion network[C]. 
2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), IEEE, 460\u2013464."},{"key":"e_1_3_3_27_1","first-page":"24993","article-title":"Canonical capsules: Self-supervised capsules in canonical pose","volume":"34","author":"Sun W.","year":"2021","unstructured":"Sun W., Tagliasacchi A., Deng B., Sabour S., Yazdani S., Hinton G. E., Yi K. M. (2021). Canonical capsules: Self-supervised capsules in canonical pose. Advances in Neural Information Processing Systems, 34, 24993\u201325005.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_28_1","first-page":"8489","volume-title":"Contrastive boundary learning for point cloud segmentation[C]Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Tang L.","year":"2022","unstructured":"Tang L., Zhan Y., Chen Z., et\u00a0al. (2022). Contrastive boundary learning for point cloud segmentation[C]. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 8489\u20138499."},{"key":"e_1_3_3_29_1","first-page":"6411","volume-title":"Kpconv: Flexible and deformable convolution for point cloudsProceedings of the IEEE\/CVF international conference on computer vision","author":"Thomas H.","year":"2019","unstructured":"Thomas H., Qi C. R., Deschaud J. E., Marcotegui B., Goulette F., Guibas L. J. (2019). Kpconv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE\/CVF international conference on computer vision, 6411\u20136420."},{"key":"e_1_3_3_30_1","doi-asserted-by":"publisher","DOI":"10.3390\/rs13173484"},{"key":"e_1_3_3_31_1","first-page":"10296","volume-title":"Graph attention convolution for point cloud semantic segmentationProceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Wang L.","unstructured":"Wang L., Huang Y., Hou Y., Zhang S., Shan J. (2019a). Graph attention convolution for point cloud semantic segmentation. 
Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, 10296\u201310305."},{"key":"e_1_3_3_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/3326362"},{"key":"e_1_3_3_33_1","doi-asserted-by":"publisher","DOI":"10.1049\/cvi2.12255"},{"key":"e_1_3_3_34_1","first-page":"1912","volume-title":"3d Shapenets: A deep representation for volumetric shapesProceedings of the IEEE conference on computer vision and pattern recognition","author":"Wu Z.","year":"2015","unstructured":"Wu Z., Song S., Khosla A., Yu F., Zhang L., Tang X., Xiao J. (2015). 3d Shapenets: A deep representation for volumetric shapes. Proceedings of the IEEE conference on computer vision and pattern recognition, 1912\u20131920."},{"key":"e_1_3_3_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2023.3262786"},{"key":"e_1_3_3_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.compmedimag.2022.102088"},{"key":"e_1_3_3_37_1","first-page":"5565","volume-title":"Pointweb: Enhancing local neighborhood features for point cloud processingProceedings of the IEEE\/CVF conference on computer vision and pattern recognition","author":"Zhao H.","year":"2019","unstructured":"Zhao H., Jiang L., Fu C. W., Jia J. (2019). Pointweb: Enhancing local neighborhood features for point cloud processing. Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, 5565\u20135573 ."},{"key":"e_1_3_3_38_1","first-page":"16259","volume-title":"Point transformer[C]Proceedings of the IEEE\/CVF international conference on computer vision","author":"Zhao H.","year":"2021","unstructured":"Zhao H., Jiang L., Jia J., et\u00a0al. (2021). Point transformer[C]. 
Proceedings of the IEEE\/CVF international conference on computer vision, 16259\u201316268."},{"key":"e_1_3_3_39_1","doi-asserted-by":"publisher","DOI":"10.3390\/rs15164015"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems: Applications in Engineering and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/18758967251335691","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.1177\/18758967251335691","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/18758967251335691","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T13:40:17Z","timestamp":1760622017000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.1177\/18758967251335691"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":38,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,11]]}},"alternative-id":["10.1177\/18758967251335691"],"URL":"https:\/\/doi.org\/10.1177\/18758967251335691","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"type":"print","value":"1064-1246"},{"type":"electronic","value":"1875-8967"}],"subject":[],"published":{"date-parts":[[2025,5,8]]}}}