{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,28]],"date-time":"2026-03-28T17:11:59Z","timestamp":1774717919434,"version":"3.50.1"},"reference-count":59,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T00:00:00Z","timestamp":1755561600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T00:00:00Z","timestamp":1755561600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"ZHAW Zurich University of Applied Sciences"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Intell Robot Syst"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>The robust interpretation of 3D environments is crucial for human-robot collaboration (HRC) applications, where safety and operational efficiency are paramount. Semantic segmentation plays a key role in this context by enabling a precise and detailed understanding of the environment. Considering the intense data hunger for real-world industrial annotated data essential for effective semantic segmentation, this paper introduces a pioneering approach in the Sim2Real domain adaptation for semantic segmentation of 3D point cloud data, specifically tailored for HRC. Our focus is on developing a network that robustly transitions from simulated environments to real-world applications, thereby enhancing its practical utility and impact on a safe HRC. In this work, we propose a dual-stream network architecture (FUSION) combining Dynamic Graph Convolutional Neural Networks (DGCNN) and Convolutional Neural Networks (CNN) augmented with residual layers as a Sim2Real domain adaptation algorithm for an industrial environment. 
The proposed model was evaluated on real-world HRC setups and simulated industrial point clouds, where it demonstrated state-of-the-art performance, achieving a segmentation accuracy of 97.76% and superior robustness compared to existing methods. The simulation dataset and source code will be made publicly available at: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/Fatemeh-MA\/Fusion\" ext-link-type=\"uri\">https:\/\/github.com\/Fatemeh-MA\/Fusion<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s10846-025-02290-9","type":"journal-article","created":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T12:45:44Z","timestamp":1755607544000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Enhancing Human-Robot Collaboration: A Sim2Real Domain Adaptation Algorithm for Point Cloud Segmentation in Industrial Environments"],"prefix":"10.1007","volume":"111","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7432-437X","authenticated-orcid":false,"given":"Fatemeh","family":"Mohammadi Amin","sequence":"first","affiliation":[]},{"given":"Darwin\u00a0G.","family":"Caldwell","sequence":"additional","affiliation":[]},{"given":"Hans Wernher","family":"van de Venn","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,8,19]]},"reference":[{"key":"2290_CR1","first-page":"103","volume":"3","author":"M Olender","year":"2019","unstructured":"Olender, M., Banas, W.: Cobots-future in production. Int. J. Mod. Manuf. Technol. Special Issue, XI. 3, 103\u2013109 (2019)","journal-title":"Int. J. Mod. Manuf. Technol. Special Issue, XI."},{"issue":"4","key":"2290_CR2","doi-asserted-by":"publisher","DOI":"10.1115\/1.4046238","volume":"143","author":"F Vicentini","year":"2021","unstructured":"Vicentini, F.: Collaborative robotics: a survey. J. Mech. Des. 143(4), 040802 (2021)","journal-title":"J. Mech. 
Des."},{"issue":"21","key":"2290_CR3","doi-asserted-by":"publisher","first-page":"6347","DOI":"10.3390\/s20216347","volume":"20","author":"F Mohammadi Amin","year":"2020","unstructured":"Mohammadi Amin, F., Rezayati, M., Venn, H.W., Karimpour, H.: A mixed-perception approach for safe human-robot collaboration in industrial automation. Sensors. 20(21), 6347 (2020)","journal-title":"Sensors."},{"key":"2290_CR4","unstructured":"Hamon, R., Junklewitz, H., Sanchez, I.: Robustness and explainability of artificial intelligence. Publications Office of the European Union. (2020)"},{"key":"2290_CR5","doi-asserted-by":"publisher","unstructured":"Milioto, A., Vizzo, I., Behley, J., Stachniss, C.: Rangenet ++: Fast and accurate lidar semantic segmentation. In: 2019 IEEE\/RSJ international conference on intelligent robots and systems (IROS), pp. 4213\u20134220 (2019). https:\/\/doi.org\/10.1109\/IROS40897.2019.8967762","DOI":"10.1109\/IROS40897.2019.8967762"},{"issue":"2","key":"2290_CR6","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1007\/s13735-017-0141-z","volume":"7","author":"Y Guo","year":"2018","unstructured":"Guo, Y., Liu, Y., Georgiou, T., Lew, M.S.: A review of semantic segmentation using deep neural networks. Int. J. Multimedia Inf. Retrieval 7(2), 87\u201393 (2018)","journal-title":"Int. J. Multimedia Inf. Retrieval"},{"key":"2290_CR7","unstructured":"Gao, B., Pan, Y., Li, C., Geng, S., Zhao, H.: Are we hungry for 3d lidar data for semantic segmentation? (2020). CoRR. arXiv:2006.04307"},{"key":"2290_CR8","doi-asserted-by":"crossref","unstructured":"Triess, L.T., Dreissig, M., Rist, C.B., Z\u00f6llner, J.M.: A survey on deep domain adaptation for lidar perception. (2021). CoRR. arXiv:2106.02377","DOI":"10.1109\/IVWorkshops54471.2021.9669228"},{"key":"2290_CR9","doi-asserted-by":"crossref","unstructured":"Wang, X., Jr., M.H.A., Lee, G.H.: Cascaded refinement network for point cloud completion. (2020). CoRR. 
arXiv:2004.03327","DOI":"10.1109\/CVPR42600.2020.00087"},{"key":"2290_CR10","doi-asserted-by":"crossref","unstructured":"Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., Bennamoun, M.: Deep learning for 3D point clouds: A Survey (2020)","DOI":"10.1109\/TPAMI.2020.3005434"},{"key":"2290_CR11","doi-asserted-by":"publisher","unstructured":"Yang, Z., Wang, L.: Learning relationships for multi-view 3d object recognition. In: 2019 IEEE\/CVF international conference on computer vision (ICCV), pp. 7504\u20137513 (2019). https:\/\/doi.org\/10.1109\/ICCV.2019.00760","DOI":"10.1109\/ICCV.2019.00760"},{"key":"2290_CR12","doi-asserted-by":"crossref","unstructured":"Ma, L., St\u00fcckler, J., Kerl, C., Cremers, D.: Multi-view deep learning for consistent semantic mapping with RGB-D cameras. (2017). CoRR. arXiv:1703.08866","DOI":"10.1109\/IROS.2017.8202213"},{"key":"2290_CR13","doi-asserted-by":"crossref","unstructured":"Wu, B., Zhou, X., Zhao, S., Yue, X., Keutzer, K.: Squeezesegv2: Improved model structure and unsupervised domain adaptation for road-object segmentation from a lidar point cloud. In: 2019 international conference on robotics and automation (ICRA), IEEE, pp. 4376\u20134382 (2019)","DOI":"10.1109\/ICRA.2019.8793495"},{"key":"2290_CR14","doi-asserted-by":"crossref","unstructured":"Milioto, A., Vizzo, I., Behley, J., Stachniss, C.: Rangenet++: Fast and accurate lidar semantic segmentation. In: 2019 IEEE\/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp. 4213\u20134220 (2019)","DOI":"10.1109\/IROS40897.2019.8967762"},{"key":"2290_CR15","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Tuzel, O.: Voxelnet: End-to-end learning for point cloud based 3d object detection. (2017). CoRR. arXiv:1711.06396","DOI":"10.1109\/CVPR.2018.00472"},{"key":"2290_CR16","doi-asserted-by":"crossref","unstructured":"Meng, H., Gao, L., Lai, Y., Manocha, D.: Vv-net: Voxel VAE net with group convolutions for point cloud segmentation. (2018). CoRR. 
arXiv:1811.04337","DOI":"10.1109\/ICCV.2019.00859"},{"key":"2290_CR17","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Hua, B., Yeung, S.: Shellnet: Efficient point cloud convolutional neural networks using concentric shells statistics. (2019). CoRR. arXiv:1908.06295","DOI":"10.1109\/ICCV.2019.00169"},{"key":"2290_CR18","doi-asserted-by":"crossref","unstructured":"Huang, Q., Wang, W., Neumann, U.: Recurrent slice networks for 3d segmentation on point clouds. (2018). CoRR. arxiv:1802.04402","DOI":"10.1109\/CVPR.2018.00278"},{"key":"2290_CR19","doi-asserted-by":"crossref","unstructured":"Landrieu, L., Simonovsky, M.: Large-scale point cloud semantic segmentation with superpoint graphs. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), (2018)","DOI":"10.1109\/CVPR.2018.00479"},{"key":"2290_CR20","unstructured":"Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. (2016). CoRR. arXiv:1612.00593"},{"key":"2290_CR21","unstructured":"Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. (2017). CoRR. arXiv:1706.02413"},{"key":"2290_CR22","unstructured":"Li, Y., Bu, R., Sun, M., Chen, B.: Pointcnn. (2018). CoRR. arXiv:1801.07791"},{"key":"2290_CR23","doi-asserted-by":"crossref","unstructured":"Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph CNN for learning on point clouds. (2018). CoRR. arXiv:1801.07829","DOI":"10.1145\/3326362"},{"key":"2290_CR24","doi-asserted-by":"crossref","unstructured":"Aksoy, E.E., Baci, S., Cavdar, S.: Salsanet: Fast road and vehicle segmentation in lidar point clouds for autonomous driving. (2019). CoRR. 
arXiv:1909.08291","DOI":"10.1109\/IV47402.2020.9304694"},{"key":"2290_CR25","doi-asserted-by":"crossref","unstructured":"Wu, B., Wan, A., Yue, X., Keutzer, K.: Squeezeseg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3d lidar point cloud. (2017). CoRR. arXiv:1710.07368","DOI":"10.1109\/ICRA.2018.8462926"},{"issue":"4","key":"2290_CR26","doi-asserted-by":"publisher","first-page":"3434","DOI":"10.1109\/LRA.2018.2852843","volume":"3","author":"Y Zeng","year":"2018","unstructured":"Zeng, Y., Hu, Y., Liu, S., Ye, J., Han, Y., Li, X., Sun, N.: Rt3d: Real-time 3-d vehicle detection in lidar point cloud for autonomous driving. IEEE Robot. Autom. Lett. 3(4), 3434\u20133440 (2018). https:\/\/doi.org\/10.1109\/LRA.2018.2852843","journal-title":"IEEE Robot. Autom. Lett."},{"key":"2290_CR27","doi-asserted-by":"crossref","unstructured":"Cortinhal, T., Tzelepis, G., Aksoy, E.E.: Salsanext: Fast semantic segmentation of lidar point clouds for autonomous driving. (2020). CoRR. arXiv:2003.03653","DOI":"10.1007\/978-3-030-64559-5_16"},{"key":"2290_CR28","doi-asserted-by":"crossref","unstructured":"Cheng, R., Razani, R., Taghavi, E., Li, E., Liu, B.: (af)2-s3net: Attentive feature fusion with adaptive feature selection for sparse semantic segmentation network. (2021). CoRR. arXiv:2102.04530","DOI":"10.1109\/CVPR46437.2021.01236"},{"key":"2290_CR29","doi-asserted-by":"crossref","unstructured":"Thomas, H., Qi, C.R., Deschaud, J.-E., Marcotegui, B., Goulette, F., Guibas, L.J.: Kpconv: Flexible and deformable convolution for point clouds. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp. 6411\u20136420 (2019)","DOI":"10.1109\/ICCV.2019.00651"},{"key":"2290_CR30","doi-asserted-by":"crossref","unstructured":"Feng, D., Rosenbaum, L., Dietmayer, K.: Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection. (2018). CoRR. 
arXiv:1804.05132","DOI":"10.1109\/ITSC.2018.8569814"},{"key":"2290_CR31","doi-asserted-by":"publisher","unstructured":"Zha, Fu, J., Wang, Zhaoyu, G., Yin-Sheng, L., Yidong, C.: Semantic 3d reconstruction for robotic manipulators with an eye-in-hand vision system. Appl. Sci. 10, 1183 (2020). https:\/\/doi.org\/10.3390\/app10031183","DOI":"10.3390\/app10031183"},{"key":"2290_CR32","doi-asserted-by":"crossref","unstructured":"Hu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N., Markham, A.: Randla-net: Efficient semantic segmentation of large-scale point clouds. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 11108\u201311117 (2020)","DOI":"10.1109\/CVPR42600.2020.01112"},{"key":"2290_CR33","doi-asserted-by":"publisher","DOI":"10.1016\/j.rcim.2021.102304","author":"J Fan","year":"2021","unstructured":"Fan, J., Zheng, P., Li, S.: Vision-based holistic scene understanding towards proactive human-robot collaboration. Robot. Comput.-Integr. Manuf. (2021). https:\/\/doi.org\/10.1016\/j.rcim.2021.102304","journal-title":"Robot. Comput.-Integr. Manuf."},{"key":"2290_CR34","doi-asserted-by":"publisher","unstructured":"Angleraud, A., Ekrekli, A., Samarawickrama, K., Sharma, G., Pieters, R.: Sensor-based human\u2013robot collaboration for industrial tasks. Robot. Comput.-Integr. Manuf.. 86, 102663 (2024). https:\/\/doi.org\/10.1016\/j.rcim.2023.102663","DOI":"10.1016\/j.rcim.2023.102663"},{"key":"2290_CR35","doi-asserted-by":"publisher","unstructured":"Su, H., Qi, W., Chen, J., Yang, C., Sandoval, J., Laribi, M.A.: Recent advancements in multimodal human\u2013robot interaction. Front. Neurorobot. 17 (2023). https:\/\/doi.org\/10.3389\/fnbot.2023.1084000","DOI":"10.3389\/fnbot.2023.1084000"},{"key":"2290_CR36","doi-asserted-by":"publisher","unstructured":"Liu, J., Luo, H., Wu, D.: Human\u2013robot collaboration in construction: Robot design, perception and interaction, and task allocation and execution. Adv. Eng. Inf. 
65, 103109 (2025) https:\/\/doi.org\/10.1016\/j.aei.2025.103109","DOI":"10.1016\/j.aei.2025.103109"},{"key":"2290_CR37","doi-asserted-by":"publisher","unstructured":"Duan, J., Zhuang, L., Zhang, Q., Zhou, Y., Qin, J.: Multimodal perception-fusion-control and human\u2013robot collaboration in manufacturing: a review. Int.J. Adv. Manuf. Technol. 132, 1\u201323 (2024). https:\/\/doi.org\/10.1007\/s00170-024-13385-2","DOI":"10.1007\/s00170-024-13385-2"},{"issue":"5","key":"2290_CR38","doi-asserted-by":"publisher","first-page":"4186","DOI":"10.1109\/LRA.2024.3376974","volume":"9","author":"J Lin","year":"2024","unstructured":"Lin, J., Zhong, K., Gong, T., Zhang, X., Wang, N.: Prior information-assisted neural network for point cloud segmentation in human-robot interaction scenarios. IEEE Robot. Autom. Lett. 9(5), 4186\u20134193 (2024). https:\/\/doi.org\/10.1109\/LRA.2024.3376974","journal-title":"IEEE Robot. Autom. Lett."},{"key":"2290_CR39","doi-asserted-by":"publisher","unstructured":"Alharasees, O., Adali, O.H., Kale, U.: Human factors in the age of autonomous uavs: Impact of artificial intelligence on operator performance and safety, pp. 798\u2013805 (2023). https:\/\/doi.org\/10.1109\/ICUAS57906.2023.10156037","DOI":"10.1109\/ICUAS57906.2023.10156037"},{"key":"2290_CR40","doi-asserted-by":"publisher","unstructured":"de Nobile, A., Bibbo, D., Russo, M., Conforto, S.: A focus on quantitative methods to assess human factors in collaborative robotics. Int. J. Ind. Ergon. 104, 103663 (2024). https:\/\/doi.org\/10.1016\/j.ergon.2024.103663","DOI":"10.1016\/j.ergon.2024.103663"},{"key":"2290_CR41","doi-asserted-by":"publisher","unstructured":"Chen, H., Li, S., Fan, J., Duan, A., Yang, C., Navarro-Alarcon, D., Zheng, P.: Human-in-the-loop robot learning for smart manufacturing: A human-centric perspective. IEEE Trans. Autom. Sci. Eng. PP, 1\u20131 (2025). 
https:\/\/doi.org\/10.1109\/TASE.2025.3528051","DOI":"10.1109\/TASE.2025.3528051"},{"key":"2290_CR42","doi-asserted-by":"publisher","unstructured":"Lasota, P.A., Song, T., Shah, J.A.: A survey of methods for safe human-robot interaction. Found. Trends Robot. (2017). https:\/\/doi.org\/10.1561\/2300000052","DOI":"10.1561\/2300000052"},{"key":"2290_CR43","doi-asserted-by":"publisher","unstructured":"Hopko, S., Wang, J., Mehta, R.: Human factors considerations and metrics in shared space human-robot collaboration: A systematic review. Front. Robot. AI. 9 (2022). https:\/\/doi.org\/10.3389\/frobt.2022.799522","DOI":"10.3389\/frobt.2022.799522"},{"key":"2290_CR44","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1016\/0376-6349(87)90023-X","volume":"9","author":"B Jiang","year":"1987","unstructured":"Jiang, B., Gainer, C.: A cause-and-effect analysis of robot accidents. J. Occup. Accid. 9, 27\u201345 (1987)","journal-title":"J. Occup. Accid."},{"key":"2290_CR45","doi-asserted-by":"publisher","unstructured":"El-Shamouty, M., Wu, X., Yang, S., Albus, M., Huber, M.F.: Towards safe human-robot collaboration using deep reinforcement learning. In: 2020 IEEE international conference on robotics and automation (ICRA), pp. 4899\u20134905 (2020). https:\/\/doi.org\/10.1109\/ICRA40945.2020.9196924","DOI":"10.1109\/ICRA40945.2020.9196924"},{"key":"2290_CR46","doi-asserted-by":"publisher","unstructured":"Amaral, P., Silva, F., Santos, V.: Recognition of grasping patterns using deep learning for human\u2013robot collaboration. Sensors. 23(21) (2023). https:\/\/doi.org\/10.3390\/s23218989","DOI":"10.3390\/s23218989"},{"key":"2290_CR47","doi-asserted-by":"publisher","unstructured":"Armeni, I., Sener, O., Zamir, A.R., Jiang, H., Brilakis, I., Fischer, M., Savarese, S.: 3d semantic parsing of large-scale indoor spaces. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp. 1534\u20131543 (2016). 
https:\/\/doi.org\/10.1109\/CVPR.2016.170","DOI":"10.1109\/CVPR.2016.170"},{"key":"2290_CR48","doi-asserted-by":"crossref","unstructured":"Hackel, T., Savinov, N., Ladicky, L., Wegner, J.D., Schindler, K., Pollefeys, M.: Semantic3d.net: A new large-scale point cloud classification benchmark. (2017). CoRR. arXiv:1704.03847","DOI":"10.5194\/isprs-annals-IV-1-W1-91-2017"},{"key":"2290_CR49","doi-asserted-by":"crossref","unstructured":"Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., Gall, J.: Semantickitti: A dataset for semantic scene understanding of lidar sequences. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp. 9297\u20139307 (2019)","DOI":"10.1109\/ICCV.2019.00939"},{"key":"2290_CR50","doi-asserted-by":"publisher","unstructured":"Munasinghe, C., Amin, F.M., Scaramuzza, D., Venn, H.W.: Covered, collaborative robot environment dataset for 3d semantic segmentation. In: 2022 IEEE 27th international conference on emerging technologies and factory automation (ETFA), pp. 1\u20134 (2022). https:\/\/doi.org\/10.1109\/ETFA52439.2022.9921525","DOI":"10.1109\/ETFA52439.2022.9921525"},{"key":"2290_CR51","doi-asserted-by":"crossref","unstructured":"Yi, L., Gong, B., Funkhouser, T.A.: Complete & label: A domain adaptation approach to semantic segmentation of lidar point clouds. CoRR. (2020)","DOI":"10.1109\/CVPR46437.2021.01511"},{"key":"2290_CR52","unstructured":"Jia, D., Hermans, A., Leibe, B.: Domain and modality gaps for lidar-based person detection on mobile robots. (2021). CoRR. arXiv:2106.11239"},{"key":"2290_CR53","doi-asserted-by":"publisher","unstructured":"Langer, F., Milioto, A., Haag, A., Behley, J., Stachniss, C.: Domain transfer for semantic segmentation of lidar data using deep neural networks. In: 2020 IEEE\/RSJ international conference on intelligent robots and systems (IROS) (2020). 
https:\/\/doi.org\/10.1109\/IROS45743.2020.9341508","DOI":"10.1109\/IROS45743.2020.9341508"},{"key":"2290_CR54","doi-asserted-by":"crossref","unstructured":"Zhao, S., Wang, Y., Li, B., Wu, B., Gao, Y., Xu, P., Darrell, T., Keutzer, K.: epointda: An end-to-end simulation-to-real domain adaptation framework for lidar point cloud segmentation. CoRR. (2020)","DOI":"10.1609\/aaai.v35i4.16464"},{"key":"2290_CR55","doi-asserted-by":"crossref","unstructured":"Jiang, P., Saripalli, S.: Lidarnet: A boundary-aware domain adaptation model for lidar point cloud semantic segmentation. CoRR. (2020)","DOI":"10.1109\/ICRA48506.2021.9561255"},{"key":"2290_CR56","doi-asserted-by":"publisher","unstructured":"Yin, C., Wang, B., Gan, V.J.L., Wang, M., Cheng, J.C.P.: Automated semantic segmentation of industrial point clouds using respointnet++. Autom. Construc. 130, 103874 (2021). https:\/\/doi.org\/10.1016\/j.autcon.2021.103874","DOI":"10.1016\/j.autcon.2021.103874"},{"key":"2290_CR57","doi-asserted-by":"crossref","unstructured":"Chen, Z., Xu, H., Chen, W., Zhou, Z., Xiao, H., Sun, B., Xie, X., Kang, W.: PointDC:Unsupervised semantic segmentation of 3D point clouds via cross-modal distillation and super-voxel Clustering (2024)","DOI":"10.1109\/ICCV51070.2023.01314"},{"key":"2290_CR58","doi-asserted-by":"crossref","unstructured":"Wu, C., Bi, X., Pfrommer, J., Cebulla, A., Mangold, S., Beyerer, J.: Sim2real transfer learning for point cloud segmentation: An industrial application case on autonomous disassembly (2023)","DOI":"10.1109\/WACV56688.2023.00451"},{"key":"2290_CR59","unstructured":"NVIDIA: NVIDIA Isaac Sim. Accessed: 01 Feb 2024. 
https:\/\/developer.nvidia.com\/isaac\/sim"}],"container-title":["Journal of Intelligent &amp; Robotic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-025-02290-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10846-025-02290-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-025-02290-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,6]],"date-time":"2025-10-06T04:14:01Z","timestamp":1759724041000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10846-025-02290-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,19]]},"references-count":59,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2025,9]]}},"alternative-id":["2290"],"URL":"https:\/\/doi.org\/10.1007\/s10846-025-02290-9","relation":{},"ISSN":["1573-0409"],"issn-type":[{"value":"1573-0409","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,8,19]]},"assertion":[{"value":"18 July 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 July 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 August 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no relevant financial or non-financial interests to disclose.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not 
Applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"Not Applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"The authors affirm that human research participants provided informed consent for the publication of the images in Fig.\u00a02b.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to publish"}}],"article-number":"94"}}