{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T16:33:34Z","timestamp":1772555614203,"version":"3.50.1"},"reference-count":29,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2023,8,14]],"date-time":"2023-08-14T00:00:00Z","timestamp":1691971200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,14]],"date-time":"2023-08-14T00:00:00Z","timestamp":1691971200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["SN COMPUT. SCI."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Human activity recognition has been an open problem in computer vision for almost 2 decades. During this time, there have been many approaches proposed to solve this problem, but very few have managed to solve it in a way that is sufficiently computationally efficient for real-time applications. Recently, this has changed, with keypoint-based methods demonstrating a high degree of accuracy with low computational cost. These approaches take a given image and return a set of joint locations for each individual within an image. In order to achieve real-time performance, a sparse representation of these features over a given time frame is required for classification. Previous methods have achieved this using a reduced number of keypoints, but this approach gives a less robust representation of the individual\u2019s body pose and may limit the types of activity that can be detected. We present a novel method for reducing the size of the feature set, by calculating the Euclidean distance and the direction of keypoint changes across a number of frames. 
This allows for a meaningful representation of the individual\u2019s movements over time. We show that this method achieves accuracy on par with current state-of-the-art methods, while demonstrating real-time performance.<\/jats:p>","DOI":"10.1007\/s42979-023-02063-x","type":"journal-article","created":{"date-parts":[[2023,8,14]],"date-time":"2023-08-14T10:02:06Z","timestamp":1692007326000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Keypoint Changes for Fast Human Activity Recognition"],"prefix":"10.1007","volume":"4","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8307-3293","authenticated-orcid":false,"given":"Shane","family":"Reid","sequence":"first","affiliation":[]},{"given":"Sonya","family":"Coleman","sequence":"additional","affiliation":[]},{"given":"Dermot","family":"Kerr","sequence":"additional","affiliation":[]},{"given":"Philip","family":"Vance","sequence":"additional","affiliation":[]},{"given":"Siobhan","family":"O\u2019Neill","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,14]]},"reference":[{"issue":"8","key":"2063_CR1","doi-asserted-by":"publisher","first-page":"7432","DOI":"10.1109\/JIOT.2020.2984544","volume":"7","author":"F Luo","year":"2020","unstructured":"Luo F, Poslad S, Bodanese E. Temporal convolutional networks for multiperson activity recognition using a 2-D LIDAR. IEEE Internet Things J. 2020;7(8):7432\u201342. https:\/\/doi.org\/10.1109\/JIOT.2020.2984544.","journal-title":"IEEE Internet Things J"},{"key":"2063_CR2","doi-asserted-by":"publisher","first-page":"1061","DOI":"10.1016\/J.PATCOG.2020.107561","volume":"108","author":"L Minh Dang","year":"2020","unstructured":"Minh Dang L, Min K, Wang H, Jalil-Piran M, Hee-Lee C, Moon H. Sensor-based and vision-based human activity recognition: a comprehensive survey. Pattern Recognit. 2020;108:1061. 
https:\/\/doi.org\/10.1016\/J.PATCOG.2020.107561.","journal-title":"Pattern Recognit"},{"key":"2063_CR3","doi-asserted-by":"publisher","DOI":"10.5220\/0010420300910098","author":"S Reid","year":"2015","unstructured":"Reid S, Coleman S, Kerr D, Vance P, O\u2019neill S. Fast human activity recognition. Multimedia Model. 2015. https:\/\/doi.org\/10.5220\/0010420300910098.","journal-title":"Multimedia Model"},{"key":"2063_CR4","doi-asserted-by":"publisher","DOI":"10.1109\/IPAS50080.2020.9334948","author":"S Reid","year":"2020","unstructured":"Reid S, Vance P, Coleman S, Kerr D, O\u2019Neill S. Towards real-time activity recognition. IEEE Conf Image Process Appl. 2020. https:\/\/doi.org\/10.1109\/IPAS50080.2020.9334948.","journal-title":"IEEE Conf Image Process Appl"},{"issue":"1229","key":"2063_CR5","doi-asserted-by":"publisher","first-page":"123","DOI":"10.1109\/cvpr.2001.990935","volume":"2","author":"L Zelnik-Manor","year":"2001","unstructured":"Zelnik-Manor L, Irani M. Event-based analysis of video. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2001;2(1229):123\u201330. https:\/\/doi.org\/10.1109\/cvpr.2001.990935.","journal-title":"Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit"},{"key":"2063_CR6","doi-asserted-by":"crossref","unstructured":"Dollar P, Rabaud V, Cottrell G, Belongie S. Behavior recognition via sparse spatio-temporal feature. In: 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2005, pp 65\u201372.","DOI":"10.1109\/VSPETS.2005.1570899"},{"key":"2063_CR7","unstructured":"Laptev I. Local spatio-temporal image features for motion interpretation. Doctoral Thesis, Stockholm. 2004."},{"key":"2063_CR8","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1109\/ICPR.2004.1334462","volume":"3","author":"C Sch\u00fcldt","year":"2004","unstructured":"Sch\u00fcldt C, Laptev I, Caputo B. Recognizing human actions: a local SVM approach. Proc Int Conf Pattern Recognit (ICPR). 
2004;3:32\u20136. https:\/\/doi.org\/10.1109\/ICPR.2004.1334462.","journal-title":"Proc Int Conf Pattern Recognit (ICPR)"},{"key":"2063_CR9","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2011.5995407","author":"H Wang","year":"2011","unstructured":"Wang H, Kl\u00e4ser A, Schmid C, Liu CL. Action recognition by dense trajectories. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2011. https:\/\/doi.org\/10.1109\/CVPR.2011.5995407.","journal-title":"Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit"},{"key":"2063_CR10","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2007.383137","author":"Y Ke","year":"2005","unstructured":"Ke Y, Sukthankar R, Hebert M. Efficient visual event detection using volumetric features. Tenth IEEE Int Conf Comput Vis (ICCV\u201905). 2005. https:\/\/doi.org\/10.1109\/CVPR.2007.383137.","journal-title":"Tenth IEEE Int Conf Comput Vis (ICCV\u201905)"},{"key":"2063_CR11","doi-asserted-by":"publisher","first-page":"726","DOI":"10.1017\/s1358246107000136","volume":"2","author":"AA Efros","year":"2003","unstructured":"Efros AA, Berg AC, Mori G, Malik J. Recognising action at a distance. Proc Ninth IEEE Int Conf Comput Vis. 2003;2:726\u201333. https:\/\/doi.org\/10.1017\/s1358246107000136.","journal-title":"Proc Ninth IEEE Int Conf Comput Vis"},{"key":"2063_CR12","doi-asserted-by":"publisher","first-page":"188","DOI":"10.1109\/AVSS.2010.71","volume":"2010","author":"K Guo","year":"2010","unstructured":"Guo K, Ishwar P, Konrad J. Action recognition using sparse representation on covariance manifolds of optical flow. Proc IEEE Int Conf Adv Video Signal Based Surveill. 2010;2010:188\u201395. https:\/\/doi.org\/10.1109\/AVSS.2010.71.","journal-title":"Proc IEEE Int Conf Adv Video Signal Based Surveill"},{"key":"2063_CR13","unstructured":"D\u2019Sa AG, Prasad BG. An IoT based framework for activity recognition using deep learning technique. 2019. 
http:\/\/arxiv.org\/abs\/1906.07247."},{"key":"2063_CR14","doi-asserted-by":"publisher","first-page":"198","DOI":"10.1016\/j.patcog.2018.08.006","volume":"85","author":"DG Lee","year":"2019","unstructured":"Lee DG, Lee SW. Prediction of partially observed human activity based on pre-trained deep representation. Pattern Recognit. 2019;85:198\u2013206. https:\/\/doi.org\/10.1016\/j.patcog.2018.08.006.","journal-title":"Pattern Recognit"},{"key":"2063_CR15","doi-asserted-by":"publisher","DOI":"10.1007\/s12065-019-00245-2","author":"PT Sheeba","year":"2019","unstructured":"Sheeba PT, Murugan S. Fuzzy dragon deep belief neural network for activity recognition using hierarchical skeleton features. Evol Intell. 2019. https:\/\/doi.org\/10.1007\/s12065-019-00245-2.","journal-title":"Evol Intell"},{"key":"2063_CR16","doi-asserted-by":"crossref","unstructured":"Subedar M, Krishnan R, Meyer PL, Tickoo O, Huang J. Uncertainty-aware audiovisual activity recognition using deep bayesian variational inference. In: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 6301\u201310.","DOI":"10.1109\/ICCV.2019.00640"},{"key":"2063_CR17","doi-asserted-by":"crossref","unstructured":"Camarena F, Chang L, Gonzalez-Mendoza M. Improving the dense trajectories approach towards efficient recognition of simple human activities. In: 2019 7th International Workshop on Biometrics and Forensics (IWBF), 2019, pp. 1\u20136.","DOI":"10.1109\/IWBF.2019.8739244"},{"key":"2063_CR18","doi-asserted-by":"publisher","DOI":"10.1109\/ICME.2010.5583046","author":"J Sun","year":"2010","unstructured":"Sun J, Mu Y, Yan S, Cheong LF. Activity recognition using dense long-duration trajectories. IEEE Int Conf Multimed Expo (ICME). 2010. https:\/\/doi.org\/10.1109\/ICME.2010.5583046.","journal-title":"IEEE Int. Conf. Multimed. Expo (ICME)"},{"key":"2063_CR19","doi-asserted-by":"crossref","unstructured":"Cao Z, Simon T, Wei SE, Sheikh Y. 
Realtime multi-person 2D pose estimation using part affinity fields. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7291\u20139.","DOI":"10.1109\/CVPR.2017.143"},{"key":"2063_CR20","unstructured":"Xiu Y, Wang H, Lu C. Pose flow: efficient online pose tracking. In: British Machine Vision Conference, 2018, pp. 1\u201312."},{"key":"2063_CR21","doi-asserted-by":"crossref","unstructured":"Xu Y, Zhang J, Zhang Q, Tao D. ViTPose: simple vision transformer baselines for human pose estimation. 2022. arXiv preprint arXiv:2204.12484.","DOI":"10.1109\/TPAMI.2023.3330016"},{"issue":"11","key":"2063_CR22","doi-asserted-by":"publisher","first-page":"3991","DOI":"10.3390\/s22113991","volume":"22","author":"H Ramirez","year":"2022","unstructured":"Ramirez H, Velastin SA, Aguayo P, Fabregas E, Farias G. Human activity recognition by sequences of skeleton features. Sensors. 2022;22(11):3991.","journal-title":"Sensors"},{"key":"2063_CR23","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1007\/s11263-012-0594-8","volume":"103","author":"H Wang","year":"2013","unstructured":"Wang H, Kl\u00e4ser A, Schmid C, Liu C. Dense trajectories and motion boundary descriptors for action recognition. Int J Comput Vis. 2013;103:60\u201379. https:\/\/doi.org\/10.1007\/s11263-012-0594-8.","journal-title":"Int J Comput Vis"},{"key":"2063_CR24","doi-asserted-by":"crossref","unstructured":"Reid S, Vance P, Coleman S, Kerr D, O\u2019Neill S. Towards real time activity recognition. In: Procedings of the Fourth IEEE International Conference on Image Processing, Applications and Systems (IPAS 2020), 2020.","DOI":"10.1109\/IPAS50080.2020.9334948"},{"key":"2063_CR25","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2009.5457659","author":"P Matikainen","year":"2009","unstructured":"Matikainen P, Hebert M, Sukthankar R. Trajectons: action recognition through the motion analysis of tracked features. IEEE Twelth Int Conf Comput Vis Workshop (ICCV Workshop). 2009. 
https:\/\/doi.org\/10.1109\/ICCVW.2009.5457659.","journal-title":"IEEE Twelth Int Conf Comput Vis Workshop (ICCV Workshop)"},{"key":"2063_CR26","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2013.330","author":"M Jain","year":"2013","unstructured":"Jain M, J\u00e9gou H, Bouthemy P. Better exploiting motion for better action recognition. IEEE Conf Comput Vis Pattern Recognit. 2013. https:\/\/doi.org\/10.1109\/CVPR.2013.330.","journal-title":"IEEE Conf Comput Vis Pattern Recognit"},{"key":"2063_CR27","unstructured":"Choutas V, Weinzaepfel P, Revaud J, Schmid C. PoTion: pose motion representation for action recognition. IEEE conference on computer vision and pattern recognition (cvpr). 2023."},{"issue":"3","key":"2063_CR28","doi-asserted-by":"publisher","first-page":"391","DOI":"10.1007\/s00371-016-1345-6","volume":"34","author":"Y Yi","year":"2018","unstructured":"Yi Y, Wang H. Motion keypoint trajectory and covariance descriptor for human action recognition. Vis Comput. 2018;34(3):391\u2013403. https:\/\/doi.org\/10.1007\/s00371-016-1345-6.","journal-title":"Vis Comput"},{"key":"2063_CR29","unstructured":"Cai Y et al. Res-steps-net for multi-person pose estimation. In: Joint COCO and Mapillary Workshop at ICCV 2019: COCO Keypoint Challenge Track, 2019, p. 
3."}],"container-title":["SN Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-023-02063-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s42979-023-02063-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-023-02063-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,26]],"date-time":"2024-10-26T03:53:56Z","timestamp":1729914836000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s42979-023-02063-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,14]]},"references-count":29,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2023,9]]}},"alternative-id":["2063"],"URL":"https:\/\/doi.org\/10.1007\/s42979-023-02063-x","relation":{},"ISSN":["2661-8907"],"issn-type":[{"value":"2661-8907","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,14]]},"assertion":[{"value":"2 November 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 August 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"621"}}