{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:24:47Z","timestamp":1760243087627,"version":"build-2065373602"},"reference-count":49,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2015,9,24]],"date-time":"2015-09-24T00:00:00Z","timestamp":1443052800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>In this paper we propose a novel algorithm that enables online action segmentation and classification. The algorithm segments, from an incoming motion capture (MoCap) data stream, sport (karate) movement sequences that are later processed by a classification algorithm. The segmentation is based on a Gesture Description Language classifier that is trained with an unsupervised learning algorithm. The classification is performed by a continuous-density forward-only hidden Markov model (HMM) classifier. Our methodology was evaluated on a unique dataset consisting of MoCap recordings of six Oyama karate martial artists, including a multiple champion of Kumite Knockdown Oyama karate. The dataset consists of 10 classes of actions, including dynamic actions of stances, kicks and blocking techniques, with a total of 1236 samples. We examined several HMM classifiers with various numbers of hidden states, as well as a Gaussian mixture model (GMM) classifier, to empirically find the best setup of the proposed method on our dataset, using leave-one-out cross-validation. The recognition rate of our methodology differs between karate techniques, ranging from 81% \u00b1 15% up to 100%. Our method is not limited to this class of actions but can be easily adapted to any other MoCap-based actions. The description of our approach and its evaluation are the main contributions of this paper. The results presented in this paper are the effects of pioneering research on online karate action classification.<\/jats:p>","DOI":"10.3390\/sym7041670","type":"journal-article","created":{"date-parts":[[2015,9,24]],"date-time":"2015-09-24T10:51:07Z","timestamp":1443091867000},"page":"1670-1698","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":30,"title":["Application of Assistive Computer Vision Methods to Oyama Karate Techniques Recognition"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1390-9021","authenticated-orcid":false,"given":"Tomasz","family":"Hachaj","sequence":"first","affiliation":[{"name":"Institute of Computer Science and Computer Methods, Pedagogical University of Krakow, 2 Podchorazych Ave, Krakow 30-084, Poland"}]},{"given":"Marek","family":"Ogiela","sequence":"additional","affiliation":[{"name":"Cryptography and Cognitive Informatics Research Group, AGH University of Science and Technology, 30 Mickiewicza Ave, Krakow 30-059, Poland"}]},{"given":"Katarzyna","family":"Koptyra","sequence":"additional","affiliation":[{"name":"Cryptography and Cognitive Informatics Research Group, AGH University of Science and Technology, 30 Mickiewicza Ave, Krakow 30-059, Poland"}]}],"member":"1968","published-online":{"date-parts":[[2015,9,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1569","DOI":"10.1016\/j.patcog.2012.11.028","article-title":"A conditional random field-based model for joint sequence segmentation and classification","volume":"46","author":"Chatzis","year":"2013","journal-title":"Pattern Recognit."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1016\/j.jvcir.2013.03.001","article-title":"Effective 3D action recognition using EigenJoints","volume":"25","author":"Yang","year":"2014","journal-title":"J. Visual Commun. Image Represent."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1007\/s10844-014-0329-0","article-title":"Classifying actions based on histogram of oriented velocity vectors","volume":"44","author":"Boubou","year":"2015","journal-title":"J. Intell. Inf. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Celiktutan, O., Akgul, C.B., Wolf, C., and Sankur, B. (2013, January 21\u201325). Graph-based analysis of physical exercise actions. Proceedings of the 1st ACM International Workshop on Multimedia Indexing and Information Retrieval for Healthcare (MIIRH \u201913), Barcelona, Catalunya, Spain.","DOI":"10.1145\/2505323.2505330"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"65","DOI":"10.1016\/j.patrec.2013.10.005","article-title":"Online gesture recognition from pose kernel learning and decision forests","volume":"39","author":"Miranda","year":"2014","journal-title":"Pattern Recognit. Lett."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"808","DOI":"10.1016\/j.imavis.2012.06.007","article-title":"Model-based recognition of human actions by trajectory matching in phase spaces","volume":"30","author":"Casas","year":"2012","journal-title":"Image Vis. Comput."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1021","DOI":"10.1007\/s00371-014-0923-8","article-title":"Online robust action recognition based on a hierarchical model","volume":"30","author":"Jiang","year":"2014","journal-title":"Visual Comput."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"15","DOI":"10.1016\/j.cviu.2014.08.001","article-title":"Online action recognition using covariance of shape and motion","volume":"129","author":"Kviatkovsky","year":"2014","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"12","DOI":"10.1016\/j.jvcir.2013.03.008","article-title":"Pose-based human action recognition via sparse representation in dissimilarity space","volume":"25","author":"Theodorakopoulos","year":"2014","journal-title":"J. Visual Commun. Image Represent."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"178","DOI":"10.1016\/j.image.2014.10.008","article-title":"Stratified gesture recognition using the normalized longest common subsequence with rough sets","volume":"30","author":"Nyirarugira","year":"2015","journal-title":"Signal Process. Image Commun."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1007\/s00530-013-0332-2","article-title":"Rule-based approach to recognizing human body poses and gestures in real time","volume":"20","author":"Hachaj","year":"2014","journal-title":"Multimed. Syst."},{"key":"ref_12","unstructured":"M\u00fcller, M. (2007). Information Retrieval for Music and Motion, Springer."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"3045","DOI":"10.1007\/s11042-013-1591-9","article-title":"Robust gesture recognition using feature pre-processing and weighted dynamic time warping","volume":"72","author":"Arici","year":"2014","journal-title":"Multimed. Tools Appl."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"238","DOI":"10.1016\/j.patcog.2013.06.020","article-title":"Ongoing human action recognition with motion capture","volume":"47","author":"Barnachon","year":"2014","journal-title":"Pattern Recognit."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"652","DOI":"10.1016\/j.asoc.2014.04.020","article-title":"Kinect-enabled home-based rehabilitation system using Dynamic Time Warping and fuzzy logic","volume":"22","author":"Su","year":"2014","journal-title":"Appl. Soft Comput."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1125","DOI":"10.1007\/s11042-014-2103-2","article-title":"3D motion matching algorithm using signature feature descriptor","volume":"74","author":"Pham","year":"2015","journal-title":"Multimed. Tools Appl."},{"key":"ref_17","first-page":"112","article-title":"DC machine diagnostics based on sound recognition with application of LPC and Fuzzy Logic","volume":"85","year":"2009","journal-title":"Prz. Elektrotech. Electr. Rev."},{"key":"ref_18","first-page":"231","article-title":"Diagnostics of Direct Current motor with application of acoustic signals, reflection coefficients and K-Nearest Neighbor classifier","volume":"88","year":"2012","journal-title":"Prz. Elektrotech. Electr. Rev."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"367","DOI":"10.1007\/s12369-013-0189-8","article-title":"Automated Proxemic Feature Extraction and Behavior Recognition: Applications in Human-Robot Interaction","volume":"5","author":"Mead","year":"2013","journal-title":"Int. J. Soc. Robot."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kosmopoulos, D., Papoutsakis, K., and Argyros, A. (2014, January 1\u20135). Online segmentation and classification of modeled actions performed in the context of unmodeled ones. Proceedings of the British Machine Vision Conference, Nottingham, UK.","DOI":"10.5244\/C.28.95"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"3807","DOI":"10.1016\/j.patcog.2014.05.010","article-title":"Simultaneous segmentation and classification of human actions in video streams using deeply optimized Hough transform","volume":"47","author":"Achard","year":"2014","journal-title":"Pattern Recognit."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Hoai, M., Lan, Z., and de la Torre, F. (2011, January 20\u201325). Joint segmentation and classification of human actions in video. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA.","DOI":"10.1109\/CVPR.2011.5995470"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","article-title":"3D convolutional neural networks for human action recognition","volume":"35","author":"Ji","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_24","unstructured":"Le, Q., Ngiam, J., Chen, Z., Chia, D., Koh, P., and Ng, A. (2010, January 6\u201311). Tiled convolutional neural networks. Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, BC, Canada."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1527","DOI":"10.1162\/neco.2006.18.7.1527","article-title":"A fast learning algorithm for deep belief nets","volume":"18","author":"Hinton","year":"2006","journal-title":"Neural Comput."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1995","DOI":"10.1016\/j.patrec.2013.02.006","article-title":"A survey of human motion analysis using depth imagery","volume":"34","author":"Chen","year":"2013","journal-title":"Pattern Recognit. Lett."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1225","DOI":"10.1007\/s12555-012-0617-9","article-title":"Development of human-machine interface for teleoperation of a mobile manipulator","volume":"10","author":"Wang","year":"2012","journal-title":"Int. J. Control Autom. Syst."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"16","DOI":"10.1007\/s10956-014-9517-5","article-title":"Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory","volume":"24","author":"Wolski","year":"2015","journal-title":"J. Sci. Educ. Technol."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1062","DOI":"10.1016\/j.gaitpost.2014.01.008","article-title":"Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson\u2019s disease","volume":"39","author":"Galna","year":"2014","journal-title":"Gait Posture"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"32","DOI":"10.1016\/j.compag.2014.08.008","article-title":"Estimation of pig weight using a Microsoft Kinect prototype imaging system","volume":"109","author":"Kongsro","year":"2014","journal-title":"Comput. Electron. Agric."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"635","DOI":"10.1007\/s11554-012-0246-9","article-title":"Fall detection system using Kinect\u2019s infrared sensor","volume":"9","author":"Mastorakis","year":"2014","journal-title":"J. Real Time Image Process."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"1063","DOI":"10.1007\/s00779-012-0552-z","article-title":"Introducing the use of depth data for fall detection","volume":"17","author":"Planinc","year":"2013","journal-title":"Pers. Ubiquitous Comput."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"389","DOI":"10.1007\/s10919-014-0186-0","article-title":"Automatically Detected Nonverbal Behavior Predicts Creativity in Collaborating Dyads","volume":"38","author":"Won","year":"2014","journal-title":"J. Nonverbal Behav."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"711","DOI":"10.1007\/s11423-014-9351-8","article-title":"From psychomotor to \u201cmotorpsycho\u201d: Learning through gestures with body sensory technologies","volume":"62","author":"Xu","year":"2014","journal-title":"Educ. Technol. Res. Dev."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Hachaj, T., and Baraniewicz, D. (2015). Knowledge Bricks\u2014Educational immersive reality environment. Int. J. Inf. Manag.","DOI":"10.1016\/j.ijinfomgt.2015.01.006"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1016\/j.gaitpost.2012.03.033","article-title":"Validity of the Microsoft Kinect for assessment of postural control","volume":"36","author":"Clark","year":"2012","journal-title":"Gait Posture"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"855","DOI":"10.1007\/s00371-014-0965-y","article-title":"Estimation of Kinect depth confidence through self-training","volume":"30","author":"Song","year":"2014","journal-title":"Visual Comput."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1016\/j.bspc.2007.02.001","article-title":"Design of a marker-based human motion tracking system","volume":"2","author":"Kolahi","year":"2007","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"523","DOI":"10.1007\/978-3-540-45243-0_67","article-title":"Estimation of Skill Levels in Sports Based on Hierarchical Spatio-Temporal Correspondences","volume":"2781","author":"Ilg","year":"2003","journal-title":"Pattern Recognit."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Stasinopoulos, S., and Maragos, P. (October, January 30). Human action recognition using Histographic methods and hidden Markov models for visual martial arts applications. Proceedings of the 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA.","DOI":"10.1109\/ICIP.2012.6466967"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Bianco, S., and Tisato, F. (2013, January 12). Karate moves recognition from skeletal motion. Proceedings of the SPIE 8650, Three-Dimensional Image Processing (3DIP) and Applications 2013, Burlingame, CA, USA.","DOI":"10.1117\/12.2006229"},{"key":"ref_42","unstructured":"Hachaj, T., Ogiela, M.R., and Piekarczyk, M. (2013, January 8\u201311). Dependence of Kinect sensors number and position on gestures recognition with Gesture Description Language semantic classifier. Proceedings of the 2013 Federated Conference on Computer Science and Information Systems, Krakow, Poland."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Hachaj, T., and Ogiela, M.R. (2014, January 23). Full-body gestures and movements recognition: User descriptive and unsupervised learning approaches in GDL classifier. Proceedings of the SPIE 9217, Applications of Digital Image Processing XXXVII, San Diego, CA, USA.","DOI":"10.1117\/12.2061171"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Ogiela, M.R., and Hachaj, T. (2015). Natural User Interfaces in Medical Image Analysis: Advances in Computer Vision and Pattern Recognition, Springer International Publishing.","DOI":"10.1007\/978-3-319-07800-7"},{"key":"ref_45","unstructured":"Accord.NET Framework. Available online: http:\/\/accord-framework.net\/."},{"key":"ref_46","unstructured":"GDL Technology. Available online: http:\/\/cci.up.krakow.pl\/gdl\/."},{"key":"ref_47","unstructured":"Obdr\u017e\u00e1lek, \u0160., Kurillo, G., Ofli, F., Bajcsy, R., Seto, E., Jimison, H., and Pavel, M. (September, January 28). Accuracy and Robustness of Kinect Pose Estimation in the Context of Coaching of Elderly Population. Proceedings of the 34th International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"12","DOI":"10.1016\/j.humov.2014.10.006","article-title":"Which technology to investigate visual perception in sport: Video vs. virtual reality","volume":"39","author":"Vignais","year":"2015","journal-title":"Human Mov. Sci."},{"key":"ref_49","first-page":"218","article-title":"Computer Networks","volume":"39","author":"Piorkowski","year":"2009","journal-title":"Commun. Comput. Inf. Sci."}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/7\/4\/1670\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T20:49:12Z","timestamp":1760215752000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/7\/4\/1670"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2015,9,24]]},"references-count":49,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2015,12]]}},"alternative-id":["sym7041670"],"URL":"https:\/\/doi.org\/10.3390\/sym7041670","relation":{},"ISSN":["2073-8994"],"issn-type":[{"type":"electronic","value":"2073-8994"}],"subject":[],"published":{"date-parts":[[2015,9,24]]}}}