{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T05:40:05Z","timestamp":1742190005260,"version":"3.38.0"},"reference-count":73,"publisher":"Walter de Gruyter GmbH","issue":"9","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,9,25]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Process automation is essential to establish an economically viable circular factory in high-wage locations. This involves using autonomous production technologies, such as robots, to disassemble, reprocess, and reassemble used products with unknown conditions into the original or a new generation of products. This is a complex and highly dynamic issue that involves a high degree of uncertainty. To adapt robots to these conditions, learning from humans is necessary. Humans are the most flexible resource in the circular factory and they can adapt their knowledge and skills to new tasks and changing conditions. This paper presents an interdisciplinary research framework for learning human action knowledge from complex manipulation tasks through human observation and demonstration. The acquired knowledge will be described in a machine-executable form and will be transferred to industrial automation execution by robots in a circular factory. There are two primary research objectives. First, we investigate the multi-modal capture of human behavior and the description of human action knowledge. Second, the reproduction and generalization of learned actions, such as disassembly and assembly actions on robots is studied.<\/jats:p>","DOI":"10.1515\/auto-2024-0008","type":"journal-article","created":{"date-parts":[[2024,9,10]],"date-time":"2024-09-10T13:34:52Z","timestamp":1725975292000},"page":"844-859","source":"Crossref","is-referenced-by-count":0,"title":["Learning human actions from complex manipulation tasks and their transfer to robots in the circular factory"],"prefix":"10.1515","volume":"72","author":[{"given":"Manuel","family":"Zaremski","sequence":"first","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute of Human and Industrial Engineering , Engler-Bunte-Ring 4, 76131 Karlsruhe , Germany"}]},{"given":"Blanca","family":"Handwerker","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute of Human and Industrial Engineering , Engler-Bunte-Ring 4, 76131 Karlsruhe , Germany"}]},{"given":"Christian R. G.","family":"Dreher","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, High Performance Humanoid Technologies , Adenauerring 12, 76131 Karlsruhe , Germany"}]},{"given":"Fabian","family":"Leven","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute of Industrial Information Technology , Hertzstra\u00dfe 16, 76187 Karlsruhe , Germany"}]},{"given":"David","family":"Schneider","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, Computer Vision for Human-Computer Interaction Lab , Adenauerring 10 , 76131 Karlsruhe , Germany"}]},{"given":"Alina","family":"Roitberg","sequence":"additional","affiliation":[{"name":"University of Stuttgart, Institute for Artificial Intelligence , Universit\u00e4tsstra\u00dfe 32, 70569 Stuttgart , Germany"}]},{"given":"Rainer","family":"Stiefelhagen","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, Computer Vision for Human-Computer Interaction Lab , Adenauerring 10 , 76131 Karlsruhe , Germany"}]},{"given":"Gerhard","family":"Neumann","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, Autonomous Learning Robots Lab , Engler-Bunte-Ring 8, 76131 Karlsruhe , Germany"}]},{"given":"Michael","family":"Heizmann","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute of Industrial Information Technology , Hertzstra\u00dfe 16, 76187 Karlsruhe , Germany"}]},{"given":"Tamim","family":"Asfour","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute for Anthropomatics and Robotics, High Performance Humanoid Technologies , Adenauerring 12, 76131 Karlsruhe , Germany"}]},{"given":"Barbara","family":"Deml","sequence":"additional","affiliation":[{"name":"Karlsruhe Institute of Technology, Institute of Human and Industrial Engineering , Engler-Bunte-Ring 4, 76131 Karlsruhe , Germany"}]}],"member":"374","published-online":{"date-parts":[[2024,9,10]]},"reference":[{"key":"2025031705022694485_j_auto-2024-0008_ref_001","doi-asserted-by":"crossref","unstructured":"C. B. Cetin and G. Zaccour, \u201cRemanufacturing with innovative features: a strategic analysis,\u201d Eur. J. Oper. Res., vol.\u00a0310, no.\u00a02, pp.\u00a0655\u2013669, 2023. https:\/\/doi.org\/10.1016\/j.ejor.2023.03.027.","DOI":"10.1016\/j.ejor.2023.03.027"},{"key":"2025031705022694485_j_auto-2024-0008_ref_002","doi-asserted-by":"crossref","unstructured":"M. Matsumoto, S. Yang, K. Martinsen, and Y. Kainuma, \u201cTrends and research challenges in remanufacturing,\u201d Int. J. Precis. Eng. Manuf. Green Technol., vol.\u00a03, no.\u00a01, pp.\u00a0129\u2013142, 2016. https:\/\/doi.org\/10.1007\/s40684-016-0016-4.","DOI":"10.1007\/s40684-016-0016-4"},{"key":"2025031705022694485_j_auto-2024-0008_ref_003","doi-asserted-by":"crossref","unstructured":"R. Slama, O. Ben-Ammar, H. Tlahig, I. Slama, and P. Slangen, \u201cHuman-centred assembly and disassembly systems: a survey on technologies, ergonomic, productivity and optimisation,\u201d IFAC-PapersOnLine, vol.\u00a055, no.\u00a010, pp.\u00a01722\u20131727, 2022. https:\/\/doi.org\/10.1016\/j.ifacol.2022.09.646.","DOI":"10.1016\/j.ifacol.2022.09.646"},{"key":"2025031705022694485_j_auto-2024-0008_ref_004","unstructured":"S. Kadner, et al.., Circular Economy Roadmap f\u00fcr Deutschland, Circular Economy Initiative Deutschland, Hrsg., Munich, Germany, Acatech, Deutsche Akademie der Technikwissenschaften e.V, 2021."},{"key":"2025031705022694485_j_auto-2024-0008_ref_005","doi-asserted-by":"crossref","unstructured":"T. Pfeifer and R. Schmitt, Autonome Produktionszellen \u2013 Komplexe Produktionsprozesse flexibel automatisieren, Berlin, Springer, 2006.","DOI":"10.1007\/3-540-30811-3"},{"key":"2025031705022694485_j_auto-2024-0008_ref_006","doi-asserted-by":"crossref","unstructured":"D. Sch\u00fctz, C. Budde, A. Raatz, and J. Hesselbach, Parallel Kinematic Structures of the SFB 562, Berlin, Heidelberg, Springer Berlin Heidelberg, 2011, pp.\u00a0109\u2013124.","DOI":"10.1007\/978-3-642-16785-0_7"},{"key":"2025031705022694485_j_auto-2024-0008_ref_007","unstructured":"R. Dillmann and T. Asfour, \u201cCollaborative research center on humanoid robots (sfb 588),\u201d Zeitschrift K\u00fcnstl. Intell., vol. 4, pp. 26\u201328, 2008."},{"key":"2025031705022694485_j_auto-2024-0008_ref_008","doi-asserted-by":"crossref","unstructured":"J. Gausemeier, F. Rammig, and W. Sch\u00e4fer, Design Methodology for Intelligent Technical Systems \u2013 Develop Intelligent Technical Systems of the Future, Berlin Heidelberg, Springer, 2014.","DOI":"10.1007\/978-3-642-45435-6"},{"key":"2025031705022694485_j_auto-2024-0008_ref_009","doi-asserted-by":"crossref","unstructured":"C. Pentzold, A. Kaun, and C. Lohmeier, \u201cImagining and instituting future media: introduction to the special issue,\u201d Convergence, vol.\u00a026, no.\u00a04, pp.\u00a0705\u2013715, 2020. https:\/\/doi.org\/10.1177\/1354856520938584.","DOI":"10.1177\/1354856520938584"},{"key":"2025031705022694485_j_auto-2024-0008_ref_010","unstructured":"F. H. P. Fitzek, S.-C. Li, S. Speidel, T. Strufe, M. \u015eim\u015fek, and M. Reisslein, Eds., Tactile Internet with Human-in-the-Loop, Cambridge, Massachusetts, US, Academic Press, 2021."},{"key":"2025031705022694485_j_auto-2024-0008_ref_011","doi-asserted-by":"crossref","unstructured":"P. Ruppel and J. Zhang, \u201cLearning object manipulation with dexterous hand-arm systems from human demonstration,\u201d in 2020 IEEE\/RSJ International Conference, 2020, pp.\u00a05417\u20135424.","DOI":"10.1109\/IROS45743.2020.9340966"},{"key":"2025031705022694485_j_auto-2024-0008_ref_012","doi-asserted-by":"crossref","unstructured":"C. R. G. Dreher, et al.., \u201cErfassung und Interpretation menschlicher Handlungen f\u00fcr die Programmierung von Robotern in der Produktion,\u201d Automatisierungstechnik, vol.\u00a070, no.\u00a06, pp.\u00a0517\u2013533, 2022. https:\/\/doi.org\/10.1515\/auto-2022-0006.","DOI":"10.1515\/auto-2022-0006"},{"key":"2025031705022694485_j_auto-2024-0008_ref_013","doi-asserted-by":"crossref","unstructured":"J. Carreira and A. Zisserman, \u201cQuo vadis, action recognition? A new model and the kinetics dataset,\u201d arXiv, 2017.","DOI":"10.1109\/CVPR.2017.502"},{"key":"2025031705022694485_j_auto-2024-0008_ref_014","unstructured":"T. Jiang, et al.., \u201cRtmpose: real-time multi-person pose estimation based on mmpose,\u201d arXiv: 2303.07399, 2023, https:\/\/doi.org\/10.48550\/arXiv.2303.07399."},{"key":"2025031705022694485_j_auto-2024-0008_ref_015","doi-asserted-by":"crossref","unstructured":"S. Goel, G. Pavlakos, J. Rajasegaran, A. Kanazawa, and J. Malik, \u201cHumans in 4D: reconstructing and tracking humans with transformers,\u201d in ICCV, 2023.","DOI":"10.1109\/ICCV51070.2023.01358"},{"key":"2025031705022694485_j_auto-2024-0008_ref_016","doi-asserted-by":"crossref","unstructured":"A. Roitberg, et al.., \u201cIs my driver observation model overconfident? Input-guided calibration networks for reliable and interpretable confidence estimates,\u201d IEEE Trans. Intell. Transport. Syst., vol.\u00a023, no.\u00a012, pp.\u00a025271\u201325286, 2022. https:\/\/doi.org\/10.1109\/tits.2022.3196410.","DOI":"10.1109\/TITS.2022.3196410"},{"key":"2025031705022694485_j_auto-2024-0008_ref_017","unstructured":"B. Lakshminarayanan, A. Pritzel, and C. Blundell, \u201cSimple and scalable predictive uncertainty estimation using deep ensembles,\u201d in Conference on Neural Information Processing Systems, vol. 31, 2017."},{"key":"2025031705022694485_j_auto-2024-0008_ref_018","unstructured":"Y. Gal and Z. Ghahramani, \u201cDropout as a bayesian approximation: representing model uncertainty in deep learning,\u201d arXiv, 2015."},{"key":"2025031705022694485_j_auto-2024-0008_ref_019","unstructured":"K. Peng, et al.., \u201cNavigating open set scenarios for skeleton-based action recognition,\u201d arXiv preprint arXiv:2312.06330, 2023."},{"key":"2025031705022694485_j_auto-2024-0008_ref_020","doi-asserted-by":"crossref","unstructured":"F. Mannhardt, et al.., \u201cA taxonomy for combining activity recognition and process discovery in industrial environments,\u201d in Intelligent Data Engineering and Automated Learning \u2013 IDEAL 2018, Cham, Springer International Publishing, 2018, pp.\u00a084\u201393.","DOI":"10.1007\/978-3-030-03496-2_10"},{"key":"2025031705022694485_j_auto-2024-0008_ref_021","doi-asserted-by":"crossref","unstructured":"M. Fayyaz and J. Gall, \u201cSct: set constrained temporal transformer for set supervised action segmentation,\u201d in Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.\u00a0501\u2013510.","DOI":"10.1109\/CVPR42600.2020.00058"},{"key":"2025031705022694485_j_auto-2024-0008_ref_022","doi-asserted-by":"crossref","unstructured":"R. Ghoddoosian, S. Sayed, and V. Athitsos, \u201cHierarchical modeling for task recognition and action segmentation in weakly-labeled instructional videos,\u201d in Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, 2022, pp.\u00a01922\u20131932.","DOI":"10.1109\/WACV51458.2022.00020"},{"key":"2025031705022694485_j_auto-2024-0008_ref_023","unstructured":"M. Ilse, J. Tomczak, and M. Welling, \u201cAttention-based deep multiple instance learning,\u201d in International Conference on Machine Learning, PMLR, 2018, pp.\u00a02127\u20132136."},{"key":"2025031705022694485_j_auto-2024-0008_ref_024","doi-asserted-by":"crossref","unstructured":"A. Villanueva and R. Cabeza, \u201cModels for gaze tracking systems,\u201d EURASIP J. Image Video Process., vol.\u00a02007, no.\u00a01, p.\u00a0023570, 2007. https:\/\/doi.org\/10.1186\/1687-5281-2007-023570.","DOI":"10.1186\/1687-5281-2007-023570"},{"key":"2025031705022694485_j_auto-2024-0008_ref_025","doi-asserted-by":"crossref","unstructured":"J. Merchant, R. Morrissette, and J. L. Porterfield, \u201cRemote measurement of eye direction allowing subject motion over one cubic foot of space,\u201d IEEE Trans. Biomed. Eng., vol.\u00a0BME-21, no.\u00a04, pp.\u00a0309\u2013317, 1974. https:\/\/doi.org\/10.1109\/tbme.1974.324318.","DOI":"10.1109\/TBME.1974.324318"},{"key":"2025031705022694485_j_auto-2024-0008_ref_026","unstructured":"K. Holmqvist, M. Nystr\u00f6m, R. Andersson, R. Dewhurst, H. Jarodzka, and J. van de Weijer, Eye Tracking: A Comprehensive Guide to Methods and Measures, Oxford, OUP, 2011."},{"key":"2025031705022694485_j_auto-2024-0008_ref_027","doi-asserted-by":"crossref","unstructured":"S.-W. Shih and J. Liu, \u201cA novel approach to 3-D gaze tracking using stereo cameras,\u201d IEEE Trans. Syst. Man Cybern. B Cybern., vol. 34, no. 1, pp. 234\u2013245, 2004. https:\/\/doi.org\/10.1109\/tsmcb.2003.811128.","DOI":"10.1109\/TSMCB.2003.811128"},{"key":"2025031705022694485_j_auto-2024-0008_ref_028","doi-asserted-by":"crossref","unstructured":"E. D. Guestrin and M. Eizenman, \u201cGeneral theory of remote gaze estimation using the pupil center and corneal reflections,\u201d IEEE Trans. Biomed. Eng., vol.\u00a053, no.\u00a06, pp.\u00a01124\u20131133, 2006. https:\/\/doi.org\/10.1109\/tbme.2005.863952.","DOI":"10.1109\/TBME.2005.863952"},{"key":"2025031705022694485_j_auto-2024-0008_ref_029","doi-asserted-by":"crossref","unstructured":"C. Hennessey and P. Lawrence, \u201cNoncontact binocular eye-gaze tracking for point-of-gaze estimation in three dimensions,\u201d IEEE Trans. Biomed. Eng., vol.\u00a056, no.\u00a03, pp.\u00a0790\u2013799, 2009. https:\/\/doi.org\/10.1109\/tbme.2008.2005943.","DOI":"10.1109\/TBME.2008.2005943"},{"key":"2025031705022694485_j_auto-2024-0008_ref_030","doi-asserted-by":"crossref","unstructured":"B. Hosp, S. Eivazi, M. Maurer, W. Fuhl, D. Geisler, and E. Kasneci, \u201cRemoteEye: an open-source high-speed remote eye tracker: implementation insights of a pupil- and glint-detection algorithm for high-speed remote eye tracking,\u201d Behav. Res. Methods, vol.\u00a052, no.\u00a03, pp.\u00a01387\u20131401, 2020. https:\/\/doi.org\/10.3758\/s13428-019-01305-2.","DOI":"10.3758\/s13428-019-01305-2"},{"key":"2025031705022694485_j_auto-2024-0008_ref_031","doi-asserted-by":"crossref","unstructured":"Y.-L. Xiao, S. Li, Q. Zhang, J. Zhong, X. Su, and Z. You, \u201cOptical fringe-reflection deflectometry with bundle adjustment,\u201d Opt. Lasers Eng., vol. 105, pp. 132\u2013140, 2018. https:\/\/doi.org\/10.1016\/j.optlaseng.2018.01.013.","DOI":"10.1016\/j.optlaseng.2018.01.013"},{"key":"2025031705022694485_j_auto-2024-0008_ref_032","unstructured":"F. B\u00f6hle, \u201cImplizites Wissen und subjektivierendes Handeln \u2013 Konzepte und empirische Befunde aus der Arbeitsforschung,\u201d in Implizites Wissen, Wirtschaft \u2013 Beruf \u2013 Ethik, R. Hermkes, G. H. Neuweg, and T. Bonowski, Eds., Bielefeld, wbv Media, 2020, pp.\u00a037\u201364."},{"key":"2025031705022694485_j_auto-2024-0008_ref_033","doi-asserted-by":"crossref","unstructured":"B. M. Velichkovsky, \u201cHeterarchy of cognition: the depths and the highs of a framework for memory research,\u201d Memory, vol.\u00a010, nos. 5\u20136, pp.\u00a0405\u2013419, 2002. https:\/\/doi.org\/10.1080\/09658210244000234.","DOI":"10.1080\/09658210244000234"},{"key":"2025031705022694485_j_auto-2024-0008_ref_034","doi-asserted-by":"crossref","unstructured":"A. T. Duchowski, \u201cGaze-based interaction: a 30 year retrospective,\u201d Comput. Graph., vol. 73, pp. 59\u201369, 2018. https:\/\/doi.org\/10.1016\/j.cag.2018.04.002.","DOI":"10.1016\/j.cag.2018.04.002"},{"key":"2025031705022694485_j_auto-2024-0008_ref_035","doi-asserted-by":"crossref","unstructured":"J. Theeuwes, A. Belopolsky, and C. N. L. Olivers, \u201cInteractions between working memory, attention and eye movements,\u201d Acta Psychol., vol.\u00a0132, no.\u00a02, pp.\u00a0106\u2013114, 2009. https:\/\/doi.org\/10.1016\/j.actpsy.2009.01.005.","DOI":"10.1016\/j.actpsy.2009.01.005"},{"key":"2025031705022694485_j_auto-2024-0008_ref_036","doi-asserted-by":"crossref","unstructured":"E. Awh, K. M. Armstrong, and T. Moore, \u201cVisual and oculomotor selection: links, causes and implications for spatial attention,\u201d Trends Cognit. Sci., vol.\u00a010, no.\u00a03, pp.\u00a0124\u2013130, 2006. https:\/\/doi.org\/10.1016\/j.tics.2006.01.001.","DOI":"10.1016\/j.tics.2006.01.001"},{"key":"2025031705022694485_j_auto-2024-0008_ref_037","doi-asserted-by":"crossref","unstructured":"Y.-c. Yeh, J.-L. Tsai, W.-C. Hsu, and C. F. Lin, \u201cA model of how working memory capacity influences insight problem solving in situations with multiple visual representations: an eye tracking analysis,\u201d Think. Skills Creativ., vol. 13, pp. 153\u2013167, 2014. https:\/\/doi.org\/10.1016\/j.tsc.2014.04.003.","DOI":"10.1016\/j.tsc.2014.04.003"},{"key":"2025031705022694485_j_auto-2024-0008_ref_038","doi-asserted-by":"crossref","unstructured":"N. Ayala, A. Zafar, and E. Niechwiej-Szwedo, \u201cGaze behaviour: a window into distinct cognitive processes revealed by the tower of london test,\u201d Vis. Res., vol. 199, p. 108072, 2022. https:\/\/doi.org\/10.1016\/j.visres.2022.108072.","DOI":"10.1016\/j.visres.2022.108072"},{"key":"2025031705022694485_j_auto-2024-0008_ref_039","doi-asserted-by":"crossref","unstructured":"A. Gegenfurtner and M. Sepp\u00e4nen, \u201cTransfer of expertise: an eye tracking and think aloud study using dynamic medical visualizations,\u201d Comput. Educ., vol. 63, pp. 393\u2013403, 2013. https:\/\/doi.org\/10.1016\/j.compedu.2012.12.021.","DOI":"10.1016\/j.compedu.2012.12.021"},{"key":"2025031705022694485_j_auto-2024-0008_ref_040","doi-asserted-by":"crossref","unstructured":"A. H. Fathaliyan, X. Wang, and V. J. Santos, \u201cExploiting three-dimensional gaze tracking for action recognition during bimanual manipulation to enhance human\u2013robot collaboration,\u201d Front. Robot. AI, vol.\u00a05, 2018, Art. no. 25. https:\/\/doi.org\/10.3389\/frobt.2018.00025.","DOI":"10.3389\/frobt.2018.00025"},{"key":"2025031705022694485_j_auto-2024-0008_ref_041","unstructured":"A. Saran, E. S. Short, A. Thomaz, and S. Niekum, \u201cUnderstanding teacher gaze patterns for robot learning,\u201d in 3rd Conference on Robot Learning, 2019."},{"key":"2025031705022694485_j_auto-2024-0008_ref_042","doi-asserted-by":"crossref","unstructured":"J. Pfrommer, et al.., \u201cAn ontology for remanufacturing systems,\u201d Automatisierungstechnik, vol.\u00a070, no.\u00a06, pp.\u00a0534\u2013541, 2022. https:\/\/doi.org\/10.1515\/auto-2021-0156.","DOI":"10.1515\/auto-2021-0156"},{"key":"2025031705022694485_j_auto-2024-0008_ref_043","doi-asserted-by":"crossref","unstructured":"A. Billard and D. Kragi\u0107, \u201cTrends and challenges in robot manipulation,\u201d Science, vol.\u00a0364, no.\u00a06446, p.\u00a0eaat8414, 2019. https:\/\/doi.org\/10.1126\/science.aat8414.","DOI":"10.1126\/science.aat8414"},{"key":"2025031705022694485_j_auto-2024-0008_ref_044","doi-asserted-by":"crossref","unstructured":"S. Niekum, S. Osentoski, G. Konidaris, S. Chitta, B. Marthi, and A. G. Barto, \u201cLearning grounded finite-state representations from unstructured demonstrations,\u201d Int. J. Robot. Res., vol.\u00a034, no.\u00a02, pp.\u00a0131\u2013157, 2015. https:\/\/doi.org\/10.1177\/0278364914554471.","DOI":"10.1177\/0278364914554471"},{"key":"2025031705022694485_j_auto-2024-0008_ref_045","doi-asserted-by":"crossref","unstructured":"S. Calinon, F. Guenter, and A. Billard, \u201cOn learning, representing, and generalizing a task in a humanoid robot,\u201d IEEE Trans. Syst. Man Cybern. B Cybern., vol.\u00a037, no.\u00a02, pp.\u00a0286\u2013298, 2007. https:\/\/doi.org\/10.1109\/tsmcb.2006.886952.","DOI":"10.1109\/TSMCB.2006.886952"},{"key":"2025031705022694485_j_auto-2024-0008_ref_046","doi-asserted-by":"crossref","unstructured":"X. Ye, Z. Lin, and Y. Yang, \u201cRobot learning of manipulation activities with overall planning through precedence graph,\u201d Robot. Autonom. Syst., vol. 116, pp. 126\u2013135, 2019. https:\/\/doi.org\/10.1016\/j.robot.2019.03.011.","DOI":"10.1016\/j.robot.2019.03.011"},{"key":"2025031705022694485_j_auto-2024-0008_ref_047","doi-asserted-by":"crossref","unstructured":"T. Welschehold, N. Abdo, C. Dornhege, and W. Burgard, \u201cCombined task and action learning from human demonstrations for mobile manipulation applications,\u201d in International Conference on Intelligent Robots and Systems (IROS), IEEE, 2019, pp.\u00a04317\u20134324.","DOI":"10.1109\/IROS40897.2019.8968091"},{"key":"2025031705022694485_j_auto-2024-0008_ref_048","doi-asserted-by":"crossref","unstructured":"N. Kr\u00fcger, et al.., \u201cObject\u2013action complexes: grounded abstractions of sensory\u2013motor processes,\u201d Robot. Autonom. Syst., vol.\u00a059, no.\u00a010, pp.\u00a0740\u2013757, 2011. https:\/\/doi.org\/10.1016\/j.robot.2011.05.009.","DOI":"10.1016\/j.robot.2011.05.009"},{"key":"2025031705022694485_j_auto-2024-0008_ref_049","unstructured":"A. Paraschos, C. Daniel, J. R. Peters, and G. Neumann, \u201cProbabilistic movement primitives,\u201d Adv. Neural Inf. Process. Syst., vol. 26, 2013."},{"key":"2025031705022694485_j_auto-2024-0008_ref_050","doi-asserted-by":"crossref","unstructured":"Y. Zhou, J. Gao, and T. Asfour, \u201cMovement primitive learning and generalization: using mixture density networks,\u201d Robot. Autom. Mag., vol.\u00a027, no.\u00a02, pp.\u00a022\u201332, 2020. https:\/\/doi.org\/10.1109\/mra.2020.2980591.","DOI":"10.1109\/MRA.2020.2980591"},{"key":"2025031705022694485_j_auto-2024-0008_ref_051","doi-asserted-by":"crossref","unstructured":"C. R. G. Dreher, M. W\u00e4chter, and T. Asfour, \u201cLearning object-action relations from bimanual human demonstration using graph networks,\u201d Robot. Autom. Lett., vol.\u00a05, no.\u00a01, pp.\u00a0187\u2013194, 2020. https:\/\/doi.org\/10.1109\/lra.2019.2949221.","DOI":"10.1109\/LRA.2019.2949221"},{"key":"2025031705022694485_j_auto-2024-0008_ref_052","doi-asserted-by":"crossref","unstructured":"F. Krebs and T. Asfour, \u201cA bimanual manipulation taxonomy,\u201d Robot. Autom. Lett., vol.\u00a07, no.\u00a04, pp.\u00a011031\u201311038, 2022. https:\/\/doi.org\/10.1109\/lra.2022.3196158.","DOI":"10.1109\/LRA.2022.3196158"},{"key":"2025031705022694485_j_auto-2024-0008_ref_053","doi-asserted-by":"crossref","unstructured":"R. Mirsky, R. Stern, K. Gal, and M. Kalech, \u201cSequential plan recognition: an iterative approach to disambiguating between hypotheses,\u201d Artif. Intell., vol. 260, pp. 51\u201373, 2018. https:\/\/doi.org\/10.1016\/j.artint.2018.03.006.","DOI":"10.1016\/j.artint.2018.03.006"},{"key":"2025031705022694485_j_auto-2024-0008_ref_054","doi-asserted-by":"crossref","unstructured":"K. French, S. Wu, T. Pan, Z. Zhou, and O. C. Jenkins, \u201cLearning behavior trees from demonstration,\u201d Int. Conf. Robot. Autom., pp. 7791\u20137797, 2019.","DOI":"10.1109\/ICRA.2019.8794104"},{"key":"2025031705022694485_j_auto-2024-0008_ref_055","doi-asserted-by":"crossref","unstructured":"T. Aoki, G. Venture, and D. Kuli\u0107, \u201cSegmentation of human body movement using inertial measurement unit,\u201d in International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2013, pp.\u00a01181\u20131186.","DOI":"10.1109\/SMC.2013.205"},{"key":"2025031705022694485_j_auto-2024-0008_ref_056","doi-asserted-by":"crossref","unstructured":"F. Zhou, F. de la Torre, and J. K. Hodgins, \u201cHierarchical aligned cluster analysis for temporal clustering of human motion,\u201d Trans. Pattern Anal. Mach. Intell., vol.\u00a035, no.\u00a03, pp.\u00a0582\u2013596, 2013. https:\/\/doi.org\/10.1109\/tpami.2012.137.","DOI":"10.1109\/TPAMI.2012.137"},{"key":"2025031705022694485_j_auto-2024-0008_ref_057","doi-asserted-by":"crossref","unstructured":"J. F.-S. Lin, V. Joukov, and D. Kuli\u0107, \u201cHuman motion segmentation by data point classification,\u201d in Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2014, pp.\u00a09\u201313.","DOI":"10.1109\/EMBC.2014.6943516"},{"key":"2025031705022694485_j_auto-2024-0008_ref_058","doi-asserted-by":"crossref","unstructured":"M. W\u00e4chter and T. Asfour, \u201cHierarchical segmentation of manipulation actions based on object relations and motion characteristics,\u201d in International Conference on Advanced Robotics (ICAR), IEEE, 2015, pp.\u00a0549\u2013556.","DOI":"10.1109\/ICAR.2015.7251510"},{"key":"2025031705022694485_j_auto-2024-0008_ref_059","doi-asserted-by":"crossref","unstructured":"C. Mandery, \u00d6. Terlemez, M. Do, N. Vahrenkamp, and T. Asfour, \u201cUnifying representations and large-scale whole-body motion databases for studying human motion,\u201d Trans. Robot., vol.\u00a032, no.\u00a04, pp.\u00a0796\u2013809, 2016. https:\/\/doi.org\/10.1109\/tro.2016.2572685.","DOI":"10.1109\/TRO.2016.2572685"},{"key":"2025031705022694485_j_auto-2024-0008_ref_060","doi-asserted-by":"crossref","unstructured":"C. R. G. Dreher and T. Asfour, \u201cLearning temporal task models from human bimanual demonstrations,\u201d in International Conference on Intelligent Robots and Systems (IROS), 2022, pp.\u00a07664\u20137671.","DOI":"10.1109\/IROS47612.2022.9981068"},{"key":"2025031705022694485_j_auto-2024-0008_ref_061","doi-asserted-by":"crossref","unstructured":"G. Li, Z. Jin, M. Volpp, F. Otto, R. Lioutikov, and G. Neumann, \u201cProdmps: a unified perspective on dynamic and probabilistic movement primitives,\u201d arXiv: 2306.12729, 2023, https:\/\/doi.org\/10.1109\/LRA.2023.3248443.","DOI":"10.1109\/LRA.2023.3248443"},{"key":"2025031705022694485_j_auto-2024-0008_ref_062","unstructured":"F. Otto, O. Celik, H. Zhou, H. Ziesche, N. A. Vien, and G. Neumann, \u201cDeep black-box reinforcement learning with movement primitives,\u201d in 6th Conference on Robot Learning (CoRL 2022), 2022."},{"key":"2025031705022694485_j_auto-2024-0008_ref_063","unstructured":"O. Celik, D. Zhou, G. Li, P. Becker, and G. Neumann, \u201cSpecializing versatile skill libraries using local mixture of experts,\u201d in Conference on Robot Learning, 2021."},{"key":"2025031705022694485_j_auto-2024-0008_ref_064","unstructured":"K. Lee, L. M. Smith, and P. Abbeel, \u201cPebble: feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training,\u201d in Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang, Eds., PMLR, 2021, pp.\u00a06152\u20136163."},{"key":"2025031705022694485_j_auto-2024-0008_ref_065","unstructured":"J. Hejna and D. Sadigh, \u201cFew-shot preference learning for human-in-the-loop rl,\u201d in 6th Annual Conference on Robot Learning, 2022."},{"key":"2025031705022694485_j_auto-2024-0008_ref_066","doi-asserted-by":"crossref","unstructured":"P. Sundaresan, R. Antonova, and J. Bohg, \u201cDiffcloud: real-to-sim from point clouds with differentiable simulation and rendering of deformable objects,\u201d arXiv preprint arXiv:2204.03139, 2022.","DOI":"10.1109\/IROS47612.2022.9981101"},{"key":"2025031705022694485_j_auto-2024-0008_ref_067","doi-asserted-by":"crossref","unstructured":"B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, \u201cNerf: representing scenes as neural radiance fields for view synthesis,\u201d Commun. ACM, vol.\u00a065, no.\u00a01, pp.\u00a099\u2013106, 2021. https:\/\/doi.org\/10.1145\/3503250.","DOI":"10.1145\/3503250"},{"key":"2025031705022694485_j_auto-2024-0008_ref_068","unstructured":"D. Driess, I. Schubert, P. Florence, Y. Li, and M. Toussaint, \u201cReinforcement learning with neural radiance fields,\u201d arXiv preprint arXiv:2206.01634, 2022."},{"key":"2025031705022694485_j_auto-2024-0008_ref_069","doi-asserted-by":"crossref","unstructured":"G. Maeda, M. Ewerton, D. Koert, and J. Peters, \u201cAcquiring and generalizing the embodiment mapping from human observations to robot skills,\u201d Robot. Autom. Lett., vol.\u00a01, no.\u00a02, pp.\u00a0784\u2013791, 2016. https:\/\/doi.org\/10.1109\/lra.2016.2525038.","DOI":"10.1109\/LRA.2016.2525038"},{"key":"2025031705022694485_j_auto-2024-0008_ref_070","doi-asserted-by":"crossref","unstructured":"P. Englert, N. A. Vien, and M. Toussaint, \u201cInverse kkt: learning cost functions of manipulation tasks from demonstrations,\u201d Int. J. Robot. Res., vol.\u00a036, nos. 13\u201314, pp.\u00a01474\u20131488, 2017. https:\/\/doi.org\/10.1177\/0278364917745980.","DOI":"10.1177\/0278364917745980"},{"key":"2025031705022694485_j_auto-2024-0008_ref_071","unstructured":"K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi, \u201cXirl: cross-embodiment inverse reinforcement learning,\u201d in Conference on Robot Learning, 2021."},{"key":"2025031705022694485_j_auto-2024-0008_ref_072","unstructured":"W. Chen, et al.., \u201cLearning to predict 3d objects with an interpolation-based differentiable renderer,\u201d in Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA, Curran Associates Inc, 2019."},{"key":"2025031705022694485_j_auto-2024-0008_ref_073","unstructured":"M. Garnelo, et al.., \u201cNeural processes,\u201d arXiv: 1807.01622, 2018, https:\/\/doi.org\/10.48550\/arXiv.1807.01622."}],"container-title":["at - Automatisierungstechnik"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2024-0008\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2024-0008\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T05:03:09Z","timestamp":1742187789000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2024-0008\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,9,1]]},"references-count":73,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2024,9,10]]},"published-print":{"date-parts":[[2024,9,25]]}},"alternative-id":["10.1515\/auto-2024-0008"],"URL":"https:\/\/doi.org\/10.1515\/auto-2024-0008","relation":{},"ISSN":["0178-2312","2196-677X"],"issn-type":[{"type":"print","value":"0178-2312"},{"type":"electronic","value":"2196-677X"}],"subject":[],"published":{"date-parts":[[2024,9,1]]}}}