{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,7]],"date-time":"2026-01-07T07:47:57Z","timestamp":1767772077676,"version":"build-2065373602"},"reference-count":54,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2024,1,24]],"date-time":"2024-01-24T00:00:00Z","timestamp":1706054400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key Research and Development Program of China","award":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"],"award-info":[{"award-number":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"]}]},{"name":"Primary Research &amp; Development Plan of Jiangsu Province","award":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"],"award-info":[{"award-number":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"]}]},{"name":"National Natural Science Foundation of China","award":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"],"award-info":[{"award-number":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"]}]},{"name":"Collective Intelligence &amp; Collaboration Laboratory","award":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"],"award-info":[{"award-number":["2022YFC3002603","BE2022053-5","61973079","QXZ23012201"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Since camera and LiDAR sensors provide complementary information for the 3D semantic segmentation of intelligent vehicles, extensive efforts have been invested to fuse information from multi-modal data. Despite considerable advantages, fusion-based methods still have inevitable limitations: a field-of-view disparity between the two modal inputs, a demand for precisely paired data in both the training and inference stages, and higher resource consumption. 
These limitations pose significant obstacles to the practical application of fusion-based methods in real-world scenarios. Therefore, we propose a robust 3D semantic segmentation method based on multi-modal collaborative learning, aiming to enhance feature extraction and segmentation performance for point clouds. In practice, an attention-based cross-modal knowledge distillation module is proposed to effectively acquire comprehensive information from multi-modal data and guide the pure point cloud network; then, a confidence-map-driven late fusion strategy is proposed to dynamically fuse the results of the two modalities at the pixel level to complement their advantages and further optimize segmentation results. The proposed method is evaluated on two public datasets (the urban dataset SemanticKITTI and the off-road dataset RELLIS-3D) and our unstructured test set. The experimental results demonstrate competitiveness with state-of-the-art methods in diverse scenarios and robustness to sensor faults.<\/jats:p>","DOI":"10.3390\/rs16030453","type":"journal-article","created":{"date-parts":[[2024,1,24]],"date-time":"2024-01-24T09:57:42Z","timestamp":1706090262000},"page":"453","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Robust 3D Semantic Segmentation Method Based on Multi-Modal Collaborative Learning"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1684-9936","authenticated-orcid":false,"given":"Peizhou","family":"Ni","sequence":"first","affiliation":[{"name":"School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China"}]},{"given":"Xu","family":"Li","sequence":"additional","affiliation":[{"name":"School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China"}]},{"given":"Wang","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Instrument Science and Engineering, Southeast 
University, Nanjing 210096, China"}]},{"given":"Xiaojing","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China"}]},{"given":"Tao","family":"Jiang","sequence":"additional","affiliation":[{"name":"Xuzhou XCMG Automobile Manufacturing Co., Ltd., Xuzhou 221112, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9483-5559","authenticated-orcid":false,"given":"Weiming","family":"Hu","sequence":"additional","affiliation":[{"name":"China Automotive Engineering Research Institute Company Ltd., Chongqing 401122, China"},{"name":"School of Transportation, Southeast University, Nanjing 211189, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3147","DOI":"10.1109\/TIE.2022.3169849","article-title":"Learning a novel LiDAR submap-based observation model for global positioning in long-term changing environments","volume":"70","author":"Kong","year":"2022","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Richa, J.P., Deschaud, J.-E., Goulette, F., and Dalmasso, N. (2022). AdaSplats: Adaptive Splatting of Point Clouds for Accurate 3D Modeling and Real-Time High-Fidelity LiDAR Simulation. Remote Sens., 14.","DOI":"10.3390\/rs14246262"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"3412","DOI":"10.1109\/TNNLS.2020.3015992","article-title":"Deep learning for lidar point clouds in autonomous driving: A review","volume":"32","author":"Li","year":"2020","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1158","DOI":"10.1109\/TMM.2023.3277281","article-title":"Lif-seg: Lidar and camera image fusion for 3d lidar semantic segmentation","volume":"26","author":"Zhao","year":"2023","journal-title":"IEEE Trans. 
Multimed."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Zhao, J., Wang, Y., Cao, Y., Guo, M., Huang, X., Zhang, R., Dou, X., Niu, X., Cui, Y., and Wang, J. (2021). The fusion strategy of 2D and 3D information based on deep learning: A review. Remote Sens., 13.","DOI":"10.3390\/rs13204029"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"722","DOI":"10.1109\/TITS.2020.3023541","article-title":"Deep learning for image and point cloud fusion in autonomous driving: A review","volume":"23","author":"Cui","year":"2021","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"El Madawi, K., Rashed, H., El Sallab, A., Nasr, O., Kamel, H., and Yogamani, S. (2019, January 27\u201330). Rgb and lidar fusion based 3d semantic segmentation for autonomous driving. Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand.","DOI":"10.1109\/ITSC.2019.8917447"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"85","DOI":"10.1016\/j.isprsjprs.2018.04.022","article-title":"Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning","volume":"143","author":"Zhang","year":"2018","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"5802","DOI":"10.1109\/TITS.2020.2988302","article-title":"Fast road detection by cnn-based camera\u2013lidar fusion and spherical coordinate transformation","volume":"22","author":"Lee","year":"2020","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Vora, S., Lang, A.H., Helou, B., and Beijbom, O. (2020, January 13\u201319). Pointpainting: Sequential fusion for 3d object detection. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00466"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Xu, S., Zhou, D., Fang, J., Yin, J., Bin, Z., and Zhang, L. (2021, January 19\u201322). Fusionpainting: Multimodal fusion with adaptive attention for 3d object detection. Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA.","DOI":"10.1109\/ITSC48978.2021.9564951"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Fang, F., Zhou, T., Song, Z., and Lu, J. (2023). MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors. Remote Sens., 15.","DOI":"10.3390\/rs15041142"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Zhuang, Z., Li, R., Jia, K., Wang, Q., Li, Y., and Tan, M. (2021, January 11\u201317). Perception-aware multi-sensor fusion for 3d lidar semantic segmentation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.01597"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1239","DOI":"10.1007\/s11263-019-01188-y","article-title":"Self-supervised model adaptation for multimodal semantic segmentation","volume":"128","author":"Valada","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Schieber, H., Duerr, F., Schoen, T., and Beyerer, J. (2022, January 5\u20139). Deep Sensor Fusion with Pyramid Fusion Networks for 3D Semantic Segmentation. Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany.","DOI":"10.1109\/IV51971.2022.9827113"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Jaritz, M., Vu, T.-H., Charette, R.D., Wirbel, E., and P\u00e9rez, P. (2020, January 13\u201319). 
Xmuda: Cross-modal unsupervised domain adaptation for 3d semantic segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01262"},{"key":"ref_17","unstructured":"Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (November, January 27). Semantickitti: A dataset for semantic scene understanding of lidar sequences. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Jiang, P., Osteen, P., Wigness, M., and Saripalli, S. (June, January 30). Rellis-3d dataset: Data, benchmarks and analysis. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561251"},{"key":"ref_19","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). Pointnet: Deep learning on point sets for 3d classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA."},{"key":"ref_20","unstructured":"Thomas, H., Qi, C.R., Deschaud, J.-E., Marcotegui, B., Goulette, F., and Guibas, L.J. (November, January 27). Kpconv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Hu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N., and Markham, A. (2020, January 13\u201319). Randla-net: Efficient semantic segmentation of large-scale point clouds. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01112"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhou, Y., and Tuzel, O. (2018, January 18\u201323). 
Voxelnet: End-to-end learning for point cloud based 3d object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00472"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zhao, L., Xu, S., Liu, L., Ming, D., and Tao, W. (2022). SVASeg: Sparse voxel-based attention for 3D LiDAR point cloud semantic segmentation. Remote Sens., 14.","DOI":"10.3390\/rs14184471"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhu, Z., Li, X., Xu, J., Yuan, J., and Tao, J. (2021). Unstructured road segmentation based on road boundary enhancement point-cylinder network using LiDAR sensor. Remote Sens., 13.","DOI":"10.3390\/rs13030495"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Tang, H., Liu, Z., Zhao, S., Lin, Y., Lin, J., Wang, H., and Han, S. (2020, January 23\u201328). Searching efficient 3d architectures with sparse point-voxel convolution. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58604-1_41"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"6807","DOI":"10.1109\/TPAMI.2021.3098789","article-title":"Cylindrical and asymmetrical 3d convolution networks for lidar-based perception","volume":"44","author":"Zhu","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, C., Luo, W., and Urtasun, R. (2018, January 5\u20138). Efficient convolutions for real-time semantic segmentation of 3d point clouds. Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy.","DOI":"10.1109\/3DV.2018.00053"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Zhou, Z., David, P., Yue, X., Xi, Z., Gong, B., and Foroosh, H. (2020, January 13\u201319). Polarnet: An improved grid representation for online lidar point clouds semantic segmentation. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00962"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1473","DOI":"10.1109\/TIV.2022.3195426","article-title":"Location-guided lidar-based panoptic segmentation for autonomous driving","volume":"8","author":"Xian","year":"2022","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Cortinhal, T., Tzelepis, G., and Erdal Aksoy, E. (2020, January 5\u20137). Salsanext: Fast, uncertainty-aware semantic segmentation of lidar point clouds. Proceedings of the Advances in Visual Computing: 15th International Symposium, ISVC 2020, San Diego, CA, USA.","DOI":"10.1007\/978-3-030-64559-5_16"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Yin, X., Li, X., Ni, P., Xu, Q., and Kong, D. (2023). A Novel Real-Time Edge-Guided LiDAR Semantic Segmentation Network for Unstructured Environments. Remote Sens., 15.","DOI":"10.3390\/rs15041093"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1109\/TIV.2021.3085827","article-title":"RangeSeg: Range-aware real time segmentation of 3D LiDAR point clouds","volume":"7","author":"Chen","year":"2021","journal-title":"IEEE Trans. Intell. Veh."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"31461","DOI":"10.1109\/JSEN.2023.3328603","article-title":"Multi-View Fusion Driven 3D Point Cloud Semantic Segmentation Based on Hierarchical Transformer","volume":"23","author":"Xu","year":"2023","journal-title":"IEEE Sens. J."},{"key":"ref_34","unstructured":"Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv."},{"key":"ref_35","unstructured":"Hou, Y., Ma, Z., Liu, C., and Loy, C.C. (February, January 27). Learning to steer by mimicking features from heterogeneous auxiliary networks. 
Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Hu, J., Zhao, M., and Li, Y. (2019). Hyperspectral image super-resolution by deep spatial-spectral exploitation. Remote Sens., 11.","DOI":"10.3390\/rs11101229"},{"key":"ref_37","unstructured":"Hou, Y., Ma, Z., Liu, C., and Loy, C.C. (November, January 27). Learning lightweight lane detection cnns by self attention distillation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Chai, Y., Fu, K., Sun, X., Diao, W., Yan, Z., Feng, Y., and Wang, L. (2020). Compact cloud detection with bidirectional self-attention knowledge distillation. Remote Sens., 12.","DOI":"10.3390\/rs12172770"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Yim, J., Joo, D., Bae, J., and Kim, J. (2017, January 21\u201326). A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.754"},{"key":"ref_40","unstructured":"Tung, F., and Mori, G. (November, January 27). Similarity-preserving knowledge distillation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Park, W., Kim, D., Lu, Y., and Cho, M. (2019, January 15\u201320). Relational knowledge distillation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00409"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Liu, Y., Chen, K., Liu, C., Qin, Z., Luo, Z., and Wang, J. (2019, January 15\u201320). Structured knowledge distillation for semantic segmentation. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00271"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"He, T., Shen, C., Tian, Z., Gong, D., Sun, C., and Yan, Y. (2019, January 15\u201320). Knowledge adaptation for efficient semantic segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00067"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Hou, Y., Zhu, X., Ma, Y., Loy, C.C., and Li, Y. (2022, January 18\u201324). Point-to-voxel knowledge distillation for lidar semantic segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00829"},{"key":"ref_45","unstructured":"Wang, L., Wu, J., Huang, S.-L., Zheng, L., Xu, X., Zhang, L., and Huang, J. (February, January 27). An efficient approach to informative feature extraction from multimodal data. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zhao, L., Peng, X., Chen, Y., Kapadia, M., and Metaxas, D.N. (2020, January 13\u201319). Knowledge as priors: Cross-modal knowledge generalization for datasets without superior knowledge. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00656"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Liu, Z., Qi, X., and Fu, C.-W. (2021, January 20\u201325). 3d-to-2d distillation for indoor scene parsing. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00444"},{"key":"ref_48","unstructured":"Liu, Y.-C., Huang, Y.-K., Chiang, H.-Y., Su, H.-T., Liu, Z.-Y., Chen, C.-T., Tseng, C.-Y., and Hsu, W.H. (2021). 
Learning from 2d: Contrastive pixel-to-point knowledge transfer for 3d pretraining. arXiv."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Xu, C., Yang, S., Galanti, T., Wu, B., Yue, X., Zhai, B., Zhan, W., Vajda, P., Keutzer, K., and Tomizuka, M. (2021). Image2point: 3d point-cloud understanding with 2d image pretrained models. arXiv.","DOI":"10.1007\/978-3-031-19836-6_36"},{"key":"ref_50","unstructured":"Chen, Z., Li, Z., Zhang, S., Fang, L., Jiang, Q., and Zhao, F. (2022). Bevdistill: Cross-modal bev distillation for multi-view 3d object detection. arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"3349","DOI":"10.1109\/TPAMI.2020.2983686","article-title":"Deep high-resolution representation learning for visual recognition","volume":"43","author":"Wang","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_52","unstructured":"Zhang, L., and Ma, K. (2021, January 3\u20137). Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors. Proceedings of the International Conference on Learning Representations, Virtual Event."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Yang, Z., Li, Z., Jiang, X., Gong, Y., Yuan, Z., Zhao, D., and Yuan, C. (2021). Focal and Global Knowledge Distillation for Detectors. arXiv.","DOI":"10.1109\/CVPR52688.2022.00460"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Dai, X., Jiang, Z., Wu, Z., Bao, Y., and Zhou, E. (2021). General Instance Distillation for Object Detection. 
arXiv.","DOI":"10.1109\/CVPR46437.2021.00775"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/3\/453\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:48:39Z","timestamp":1760104119000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/3\/453"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,24]]},"references-count":54,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2024,2]]}},"alternative-id":["rs16030453"],"URL":"https:\/\/doi.org\/10.3390\/rs16030453","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2024,1,24]]}}}