{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T15:54:46Z","timestamp":1774540486832,"version":"3.50.1"},"reference-count":27,"publisher":"Springer Science and Business Media LLC","issue":"8","license":[{"start":{"date-parts":[[2023,11,24]],"date-time":"2023-11-24T00:00:00Z","timestamp":1700784000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,11,24]],"date-time":"2023-11-24T00:00:00Z","timestamp":1700784000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010669","name":"H2020 LEIT Information and Communication Technologies","doi-asserted-by":"publisher","award":["101000165"],"award-info":[{"award-number":["101000165"]}],"id":[{"id":"10.13039\/100010669","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Intell Manuf"],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The role of Artificial intelligence in achieving high performance in manufacturing systems has been explored over the years. However, with the increasing number of variants in the factories and the advances in digital technologies new opportunities arise for supporting operators in the factory. The hybrid production systems stipulate the efficient collaboration of the workers with the machines. Human action recognition is a major enabler for intuitive machines and robots to achieve more efficient interaction with workers. This paper discusses a software framework called Praxis, aiming to facilitate the deployment of human action recognition (HAR) in assembly. Praxis is designed to provide a flexible and scalable architecture for implementing human action recognition in assembly lines. The framework has been implemented in a real-world case study originating for showcasing and validating the effectiveness of Praxis in real-life applications. It is deployed in an assembly use case for an air compression production industry. 
This study highlights the potential of the Praxis framework for promoting efficient human\u2013robot collaboration (HRC) in modern manufacturing environments through HAR.<\/jats:p>","DOI":"10.1007\/s10845-023-02228-8","type":"journal-article","created":{"date-parts":[[2023,11,24]],"date-time":"2023-11-24T17:01:32Z","timestamp":1700845292000},"page":"3697-3711","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Praxis: a framework for AI-driven human action recognition in assembly"],"prefix":"10.1007","volume":"35","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6834-3075","authenticated-orcid":false,"given":"Christos","family":"Gkournelos","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8003-6622","authenticated-orcid":false,"given":"Christos","family":"Konstantinou","sequence":"additional","affiliation":[]},{"given":"Panagiotis","family":"Angelakis","sequence":"additional","affiliation":[]},{"given":"Eleni","family":"Tzavara","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9687-5925","authenticated-orcid":false,"given":"Sotiris","family":"Makris","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,11,24]]},"reference":[{"key":"2228_CR1","doi-asserted-by":"publisher","unstructured":"Abadi, M, et al. (2015). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. p. 19. https:\/\/doi.org\/10.5281\/zenodo.4724125.","DOI":"10.5281\/zenodo.4724125"},{"key":"2228_CR2","doi-asserted-by":"publisher","first-page":"198","DOI":"10.1016\/j.procir.2020.01.040","volume":"86","author":"G Andrianakos","year":"2019","unstructured":"Andrianakos, G., et al. (2019). An approach for monitoring the execution of human based assembly operations using machine learning. Procedia Cirp, 86, 198\u2013203. https:\/\/doi.org\/10.1016\/j.procir.2020.01.040","journal-title":"Procedia Cirp"},{"key":"2228_CR3","doi-asserted-by":"crossref","unstructured":"Ben-Shabat, Y, et al. (2020). The IKEA ASM dataset: Understanding people assembling furniture through actions, objects and pose. http:\/\/arxiv.org\/abs\/2007.00394.","DOI":"10.1109\/WACV48630.2021.00089"},{"key":"2228_CR4","doi-asserted-by":"publisher","DOI":"10.1007\/0-387-28431-1","volume-title":"Manufacturing systems: Theory and practice","author":"G Chryssolouris","year":"2006","unstructured":"Chryssolouris, G. (2006). Manufacturing systems: Theory and practice. New York: Springer. https:\/\/doi.org\/10.1007\/0-387-28431-1"},{"key":"2228_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/s10845-022-02014-y","author":"M Ciccarelli","year":"2022","unstructured":"Ciccarelli, M., et al. (2022). SPECTRE: A deep learning network for posture recognition in manufacturing. Journal of Intelligent Manufacturing. https:\/\/doi.org\/10.1007\/s10845-022-02014-y","journal-title":"Journal of Intelligent Manufacturing"},{"issue":"1","key":"2228_CR6","doi-asserted-by":"publisher","first-page":"745","DOI":"10.1038\/s41597-022-01843-z","volume":"9","author":"G Cicirelli","year":"2022","unstructured":"Cicirelli, G., et al. (2022). The HA4M dataset: Multi-modal monitoring of an assembly task for human action recognition in manufacturing. Scientific Data, 9(1), 745. https:\/\/doi.org\/10.1038\/s41597-022-01843-z","journal-title":"Scientific Data"},{"key":"2228_CR7","doi-asserted-by":"publisher","unstructured":"Dehzangi, O., & Sahu, V. (2018). 
{"key":"2228_CR7","doi-asserted-by":"publisher","unstructured":"Dehzangi, O., & Sahu, V. (2018). IMU-based robust human activity recognition using feature analysis, extraction, and reduction. In 2018 24th international conference on pattern recognition (ICPR), pp. 1402\u20131407. IEEE Xplore. https:\/\/doi.org\/10.1109\/ICPR.2018.8546311.","DOI":"10.1109\/ICPR.2018.8546311"},{"key":"2228_CR8","doi-asserted-by":"publisher","unstructured":"Herrmann, E., et al. (2019). Motion data and model management for applied statistical motion synthesis. In Smart tools and apps for graphics - Eurographics Italian chapter conference, pp. 079\u2013088. https:\/\/doi.org\/10.2312\/STAG.20191366.","DOI":"10.2312\/STAG.20191366"},{"key":"2228_CR9","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1016\/j.procir.2018.03.130","volume":"72","author":"N Kousi","year":"2018","unstructured":"Kousi, N., et al. (2018). An outlook on future assembly systems introducing robotic mobile dual arm workers. Procedia CIRP, 72, 33\u201338. https:\/\/doi.org\/10.1016\/j.procir.2018.03.130","journal-title":"Procedia CIRP"},{"key":"2228_CR10","unstructured":"Li, M., et al. (2019). Symbiotic graph neural networks for 3D skeleton-based human action recognition and motion prediction. http:\/\/arxiv.org\/abs\/1910.02212."},{"issue":"8","key":"2228_CR11","doi-asserted-by":"publisher","first-page":"8579","DOI":"10.1109\/TIE.2021.3105977","volume":"69","author":"S Li","year":"2022","unstructured":"Li, S., et al. (2022). Toward proactive human-robot collaborative assembly: A multimodal transfer-learning-enabled action prediction approach. IEEE Transactions on Industrial Electronics, 69(8), 8579\u20138588. https:\/\/doi.org\/10.1109\/TIE.2021.3105977","journal-title":"IEEE Transactions on Industrial Electronics"},{"key":"2228_CR12","doi-asserted-by":"publisher","first-page":"287","DOI":"10.1016\/j.jmsy.2017.04.009","volume":"44","author":"H Liu","year":"2017","unstructured":"Liu, H., & Wang, L. (2017). Human motion prediction for human-robot collaboration. Journal of Manufacturing Systems, 44, 287\u2013294. https:\/\/doi.org\/10.1016\/j.jmsy.2017.04.009","journal-title":"Journal of Manufacturing Systems"},{"key":"2228_CR13","doi-asserted-by":"publisher","first-page":"355","DOI":"10.1016\/j.ergon.2017.02.004","volume":"68","author":"H Liu","year":"2018","unstructured":"Liu, H., & Wang, L. (2018). Gesture recognition for human-robot collaboration: A review. International Journal of Industrial Ergonomics, 68, 355\u2013367. https:\/\/doi.org\/10.1016\/j.ergon.2017.02.004","journal-title":"International Journal of Industrial Ergonomics"},{"key":"2228_CR14","doi-asserted-by":"publisher","first-page":"186","DOI":"10.1016\/j.patrec.2021.11.003","volume":"155","author":"U Mahbub","year":"2022","unstructured":"Mahbub, U., & Ahad, M. A. R. (2022). Advances in human action, activity and gesture recognition. Pattern Recognition Letters, 155, 186\u2013190. https:\/\/doi.org\/10.1016\/j.patrec.2021.11.003","journal-title":"Pattern Recognition Letters"},{"key":"2228_CR15","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-51591-1","volume-title":"Cooperating robots for flexible manufacturing","author":"S Makris","year":"2021","unstructured":"Makris, S. (2021). Cooperating robots for flexible manufacturing. Cham: Springer International Publishing. https:\/\/doi.org\/10.1007\/978-3-030-51591-1"},{"key":"2228_CR16","unstructured":"Microsoft HoloLens 2. (2019). https:\/\/www.microsoft.com\/en-us\/hololens."},{"key":"2228_CR17","doi-asserted-by":"publisher","first-page":"820","DOI":"10.1016\/j.future.2021.06.045","volume":"125","author":"K Muhammad","year":"2021","unstructured":"Muhammad, K., et al. (2021). Human action recognition using attention based LSTM network with dilated CNN features. Future Generation Computer Systems, 125, 820\u2013830. https:\/\/doi.org\/10.1016\/j.future.2021.06.045","journal-title":"Future Generation Computer Systems"},{"key":"2228_CR18","unstructured":"Quigley, M., et al. (2009). ROS: An open-source robot operating system. In IEEE international conference on robotics and automation."},{"key":"2228_CR19","doi-asserted-by":"crossref","unstructured":"Sener, F., et al. (2022). Assembly101: A large-scale multi-view video dataset for understanding procedural activities. https:\/\/assembly-101.github.io\/.","DOI":"10.1109\/CVPR52688.2022.02042"},{"key":"2228_CR20","doi-asserted-by":"publisher","unstructured":"Tzavara, E., et al. (2021). Worker in the loop: A framework for enabling human-robot collaborative assembly. In IFIP Advances in information and communication technology, vol. 630 IFIP, pp. 275\u2013283. https:\/\/doi.org\/10.1007\/978-3-030-85874-2_29.","DOI":"10.1007\/978-3-030-85874-2_29"},{"issue":"1","key":"2228_CR21","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1016\/j.cirp.2019.04.052","volume":"68","author":"M Urgo","year":"2019","unstructured":"Urgo, M., et al. (2019). A human modelling and monitoring approach to support the execution of manufacturing operations. CIRP Annals, 68(1), 5\u20138. https:\/\/doi.org\/10.1016\/j.cirp.2019.04.052","journal-title":"CIRP Annals"},{"issue":"2","key":"2228_CR22","doi-asserted-by":"publisher","first-page":"701","DOI":"10.1016\/j.cirp.2019.05.002","volume":"68","author":"L Wang","year":"2019","unstructured":"Wang, L., et al. (2019). Symbiotic human-robot collaborative assembly. CIRP Annals, 68(2), 701\u2013726. https:\/\/doi.org\/10.1016\/j.cirp.2019.05.002","journal-title":"CIRP Annals"},{"key":"2228_CR23","doi-asserted-by":"publisher","unstructured":"Wen, X., et al. (2019). Human assembly task recognition in human-robot collaboration based on 3D CNN. In 2019 IEEE 9th annual international conference on CYBER technology in automation, control, and intelligent systems (CYBER), pp. 1230\u20131234. IEEE Xplore. https:\/\/doi.org\/10.1109\/CYBER46603.2019.9066597.","DOI":"10.1109\/CYBER46603.2019.9066597"},{"key":"2228_CR24","unstructured":"Wu, Y., et al. (2019). Detectron2. https:\/\/github.com\/facebookresearch\/detectron2."},{"key":"2228_CR25","unstructured":"Zhang, F., et al. (2020). MediaPipe hands: On-device real-time hand tracking."},{"issue":"5","key":"2228_CR26","doi-asserted-by":"publisher","first-page":"1005","DOI":"10.3390\/s19051005","volume":"19","author":"HB Zhang","year":"2019","unstructured":"Zhang, H. B., et al. (2019). A comprehensive survey of vision-based human action recognition methods. Sensors, 19(5), 1005. https:\/\/doi.org\/10.3390\/s19051005","journal-title":"Sensors"},{"key":"2228_CR27","doi-asserted-by":"publisher","unstructured":"Zhao, R., et al. (2019). Bayesian hierarchical dynamic model for human action recognition. In 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp. 7725\u20137734. IEEE Xplore. https:\/\/doi.org\/10.1109\/CVPR.2019.00792.","DOI":"10.1109\/CVPR.2019.00792"}],"container-title":["Journal of Intelligent Manufacturing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10845-023-02228-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10845-023-02228-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10845-023-02228-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,18]],"date-time":"2024-11-18T18:05:33Z","timestamp":1731953133000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10845-023-02228-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,24]]},"references-count":27,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["2228"],"URL":"https:\/\/doi.org\/10.1007\/s10845-023-02228-8","relation":{},"ISSN":["0956-5515","1572-8145"],"issn-type":[{"value":"0956-5515","type":"print"},{"value":"1572-8145","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,11,24]]},"assertion":[{"value":"18 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 September 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 November 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}
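
The blob above is a standard Crossref work payload: a thin envelope (status, message-type, message-version) around the work record itself under "message". For readers who want a fresh copy of this metadata, the sketch below shows one way to fetch and parse it by DOI. It is a minimal sketch, not part of the record: it assumes the public Crossref REST API at api.crossref.org and the third-party requests package, and the mailto contact in the User-Agent header is a placeholder to replace with a real address.

import requests

DOI = "10.1007/s10845-023-02228-8"

# Query the public Crossref REST API for the work record shown above.
# Crossref asks clients to include a contact address in User-Agent so
# identified traffic can be routed to its "polite" pool; the address
# below is a placeholder.
resp = requests.get(
    f"https://api.crossref.org/works/{DOI}",
    headers={"User-Agent": "metadata-demo/0.1 (mailto:you@example.org)"},
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()
assert payload["status"] == "ok" and payload["message-type"] == "work"

# "message" has the same shape as the "message" object in the record above.
work = payload["message"]
print(work["title"][0])             # article title
print(work["container-title"][0])   # journal name
authors = ", ".join(
    f"{a.get('given', '')} {a.get('family', '')}".strip()
    for a in work.get("author", [])
)
print(authors)                      # Christos Gkournelos, Christos Konstantinou, ...
print(work["is-referenced-by-count"])  # citation count as currently indexed

Field names here are taken directly from the record above; note that mutable fields such as is-referenced-by-count and indexed change over time as Crossref re-indexes, so a live query may differ from the snapshot shown.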