{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T19:07:10Z","timestamp":1774033630431,"version":"3.50.1"},"reference-count":47,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,3,30]],"date-time":"2024-03-30T00:00:00Z","timestamp":1711756800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,3,30]],"date-time":"2024-03-30T00:00:00Z","timestamp":1711756800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61876138"],"award-info":[{"award-number":["61876138"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61203311"],"award-info":[{"award-number":["61203311"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013064","name":"Key Research and Development Program of Jiangxi Province","doi-asserted-by":"publisher","award":["2020SF-375"],"award-info":[{"award-number":["2020SF-375"]}],"id":[{"id":"10.13039\/501100013064","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013064","name":"Key Research and Development Program of Jiangxi Province","doi-asserted-by":"publisher","award":["2022GY-315"],"award-info":[{"award-number":["2022GY-315"]}],"id":[{"id":"10.13039\/501100013064","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In the action recognition field based on the characteristics of human skeleton joint points, the selection of keyframes in the skeleton sequence is a significant issue, which directly affects the action recognition accuracy. In order to improve the effectiveness of keyframes selection, this paper proposes inflection point frames, and transforms keyframes selection into a multi-objective optimization problem based on it. First, the pose features are extracted from the input skeleton joint point data, which used to construct the pose feature vector of each frame in time sequence; then, the inflection point frames in the sequence are determined according to the flow of momentum of each body part. Next, the pose feature vectors are input into the keyframes multi-objective optimization model, with the fusion of domain information and the number of keyframes; finally, the output keyframes are input to the action classifier. To verify the effectiveness of the method, the MSR-Action3D, the UTKinect-Action and Florence3D-Action, and the 3 public datasets, are chosen for simulation experiments and the results show that the keyframes sequence obtained by this method can significantly improve the accuracy of multiple action classifiers, and the average recognition accuracy of the three data sets can reach 94.6%, 97.6% and 94.2% respectively. 
Besides, combining the optimized keyframes with deep learning classifier on the NTU RGB\u2009+\u2009D dataset can make the accuracies reaching 83.2% and 93.7%.<\/jats:p>","DOI":"10.1007\/s40747-024-01403-5","type":"journal-article","created":{"date-parts":[[2024,3,30]],"date-time":"2024-03-30T08:01:45Z","timestamp":1711785705000},"page":"4659-4673","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["An optimization method of human skeleton keyframes selection for action recognition"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6418-3043","authenticated-orcid":false,"given":"Hao","family":"Chen","sequence":"first","affiliation":[]},{"given":"Yuekai","family":"Pan","sequence":"additional","affiliation":[]},{"given":"Chenwu","family":"Wang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,3,30]]},"reference":[{"key":"1403_CR1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107561","volume":"108","author":"LM Dang","year":"2020","unstructured":"Dang LM et al (2020) Sensor-based and vision-based human activity recognition: a comprehensive survey. Pattern Recogn 108:107561","journal-title":"Pattern Recogn"},{"key":"1403_CR2","doi-asserted-by":"crossref","unstructured":"Elias P, Sedmidubsky J, Zezula P (2019) Understanding the gap between 2D and 3D skeleton-based action recognition. In: 2019 IEEE International Symposium on Multimedia (ISM), pp 192\u2013195","DOI":"10.1109\/ISM46123.2019.00041"},{"issue":"2","key":"1403_CR3","first-page":"399","volume":"15","author":"TN Xuan","year":"2019","unstructured":"Xuan TN, Ngo TD, Le TH (2019) A Spatial-temporal 3D human pose reconstruction framework. J Inform Process Syst 15(2):399\u2013409","journal-title":"J Inform Process Syst"},{"key":"1403_CR4","doi-asserted-by":"crossref","unstructured":"Lillo I, Soto A, Niebles JC (2014) Discriminative hierarchical modeling of spatio-temporally composable human activities. In: Proceedings of the IEEE International Conference on computer vision, p 812\u2013819","DOI":"10.1109\/CVPR.2014.109"},{"issue":"3","key":"1403_CR5","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1109\/34.910878","volume":"23","author":"AF Bobick","year":"2001","unstructured":"Bobick AF, Davis JW (2001) The recognition of human movement using temporal templates. IEEE Trans Pattern Anal Mach Intell 23(3):257\u2013267","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"2-3","key":"1403_CR6","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1007\/s11263-005-1838-7","volume":"64","author":"I Laptev","year":"2005","unstructured":"Laptev I (2005) On space-time interest points. Int J Comput Vis 64(2\u20133):107\u2013123","journal-title":"Int J Comput Vis"},{"key":"1403_CR7","doi-asserted-by":"crossref","unstructured":"Wang H, Klaser A, Schmid C, et al (2011). action recognition by dense trajectories. In: Proceedings of the IEEE International Conference on computer vision, pp 3169\u20133176","DOI":"10.1109\/CVPR.2011.5995407"},{"issue":"1","key":"1403_CR8","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1007\/s11263-012-0594-8","volume":"103","author":"H Wang","year":"2013","unstructured":"Wang H et al (2013) Dense trajectories and motion boundary descriptors for action recognition. 
{"key":"1403_CR9","doi-asserted-by":"crossref","unstructured":"Wang H, Schmid C (2013) Action recognition with improved trajectories. In: Proceedings of the IEEE International Conference on Computer Vision, pp 3551\u20133558","DOI":"10.1109\/ICCV.2013.441"},{"issue":"1","key":"1403_CR10","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","volume":"35","author":"S Ji","year":"2012","unstructured":"Ji S et al (2012) 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell 35(1):221\u2013231","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1403_CR11","doi-asserted-by":"crossref","unstructured":"Feichtenhofer C, Pinz A, Zisserman A (2016) Convolutional two-stream network fusion for video action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1933\u20131941","DOI":"10.1109\/CVPR.2016.213"},{"key":"1403_CR12","doi-asserted-by":"crossref","unstructured":"Tran D, Bourdev L, Fergus R, Torresani L, Paluri M (2015) Learning spatiotemporal features with 3D convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp 4489\u20134497","DOI":"10.1109\/ICCV.2015.510"},{"key":"1403_CR13","doi-asserted-by":"crossref","unstructured":"Sudhakaran S, Escalera S, Lanz O (2019) LSTA: long short-term attention for egocentric action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 9954\u20139963","DOI":"10.1109\/CVPR.2019.01019"},{"key":"1403_CR14","doi-asserted-by":"crossref","unstructured":"Vemulapalli R, Arrate F, Chellappa R (2014) Human action recognition by representing 3D skeletons as points in a Lie group. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 588\u2013595","DOI":"10.1109\/CVPR.2014.82"},{"issue":"5","key":"1403_CR15","doi-asserted-by":"publisher","first-page":"922","DOI":"10.1109\/TPAMI.2016.2564409","volume":"39","author":"R Anirudh","year":"2016","unstructured":"Anirudh R et al (2016) Elastic functional coding of Riemannian trajectories. IEEE Trans Pattern Anal Mach Intell 39(5):922\u2013936","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1403_CR16","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1016\/j.jvcir.2014.10.009","volume":"26","author":"W Ding","year":"2015","unstructured":"Ding W et al (2015) STFC: spatio-temporal feature chain for skeleton-based human action recognition. J Vis Commun Image Represent 26:329\u2013337","journal-title":"J Vis Commun Image Represent"},{"key":"1403_CR17","doi-asserted-by":"publisher","first-page":"729","DOI":"10.1016\/j.jvcir.2018.08.001","volume":"55","author":"S Ghodsi","year":"2018","unstructured":"Ghodsi S, Mohammadzade H, Korki E (2018) Simultaneous joint and object trajectory templates for human activity recognition from 3-D data. J Vis Commun Image Represent 55:729\u2013741","journal-title":"J Vis Commun Image Represent"},{"issue":"12","key":"1403_CR18","doi-asserted-by":"publisher","first-page":"3007","DOI":"10.1109\/TPAMI.2017.2771306","volume":"40","author":"J Liu","year":"2017","unstructured":"Liu J et al (2017) Skeleton-based action recognition using spatio-temporal LSTM network with trust gates. IEEE Trans Pattern Anal Mach Intell 40(12):3007\u20133021","journal-title":"IEEE Trans Pattern Anal Mach Intell"},
{"key":"1403_CR19","doi-asserted-by":"publisher","first-page":"103371","DOI":"10.1016\/j.jvcir.2021.103371","volume":"82","author":"A Barkoky","year":"2022","unstructured":"Barkoky A, Charkari NM (2022) Complex network-based features extraction in RGB-D human action recognition. J Vis Commun Image Represent 82:103371","journal-title":"J Vis Commun Image Represent"},{"key":"1403_CR20","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2022.108146","volume":"240","author":"Y Liu","year":"2022","unstructured":"Liu Y, Zhang H, Xu D et al (2022) Graph transformer network with temporal kernel attention for skeleton-based action recognition. Knowl-Based Syst 240:108146","journal-title":"Knowl-Based Syst"},{"issue":"1","key":"1403_CR21","doi-asserted-by":"publisher","first-page":"46","DOI":"10.1049\/cit2.12012","volume":"7","author":"J Zhang","year":"2022","unstructured":"Zhang J, Ye G, Tu Z et al (2022) A spatial attentive and temporal dilated (SATD) GCN for skeleton-based action recognition. CAAI Trans Intell Technol 7(1):46\u201355","journal-title":"CAAI Trans Intell Technol"},{"key":"1403_CR22","doi-asserted-by":"crossref","unstructured":"Schindler K, Van Gool L (2008) Action snippets: how many frames does human action recognition require? In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1\u20138","DOI":"10.1109\/CVPR.2008.4587730"},{"key":"1403_CR23","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1016\/j.patrec.2013.10.005","volume":"39","author":"L Miranda","year":"2014","unstructured":"Miranda L et al (2014) Online gesture recognition from pose kernel learning and decision forests. Pattern Recognit Lett 39:65\u201373","journal-title":"Pattern Recognit Lett"},{"key":"1403_CR24","doi-asserted-by":"publisher","unstructured":"Cippitelli E, Gasparrini S, Gambi E et al (2016) A human activity recognition system using skeleton data from RGBD sensors. Comput Intell Neurosci 2016. https:\/\/doi.org\/10.1155\/2016\/4351435","DOI":"10.1155\/2016\/4351435"},{"issue":"4","key":"1403_CR25","doi-asserted-by":"publisher","first-page":"926","DOI":"10.3390\/sym6040926","volume":"6","author":"Z Qiang","year":"2014","unstructured":"Qiang Z, Zhang S, Zhou D (2014) Keyframe extraction from human motion capture data based on a multiple population genetic algorithm. Symmetry 6(4):926\u2013937","journal-title":"Symmetry"},{"issue":"1","key":"1403_CR26","doi-asserted-by":"publisher","first-page":"85","DOI":"10.1007\/s00371-012-0676-1","volume":"29","author":"X-M Liu","year":"2013","unstructured":"Liu X-M, Hao A-M, Zhao D (2013) Optimization-based key frame extraction for motion capture animation. Vis Comput 29(1):85\u201395","journal-title":"Vis Comput"},{"issue":"205","key":"1403_CR27","first-page":"526","volume":"2","author":"T Yang","year":"2014","unstructured":"Yang T, Sun HJ, Jun YE (2014) Extraction of keyframe from motion capture data based on quantum-behaved particle swarm optimization. Appl Res Comput 2(205):526\u2013530","journal-title":"Appl Res Comput"},{"key":"1403_CR28","doi-asserted-by":"publisher","first-page":"189","DOI":"10.1007\/s13198-019-00941-3","volume":"11","author":"PS Kumar","year":"2020","unstructured":"Kumar PS (2020) Algorithms for solving the optimization problems using fuzzy and intuitionistic fuzzy set. Int J Syst Assur Eng Manag 11:189\u2013222. https:\/\/doi.org\/10.1007\/s13198-019-00941-3","journal-title":"Int J Syst Assur Eng Manag"},
{"key":"1403_CR29","doi-asserted-by":"publisher","first-page":"149","DOI":"10.4018\/978-1-6684-8474-6.ch007","volume-title":"Transport and logistics planning and optimization","author":"PS Kumar","year":"2023","unstructured":"Kumar PS (2023) The PSK method: a new and efficient approach to solving fuzzy transportation problems. In: Boukachour J, Benaini A (eds) Transport and logistics planning and optimization. IGI Global, pp 149\u2013197. https:\/\/doi.org\/10.4018\/978-1-6684-8474-6.ch007"},{"issue":"1","key":"1403_CR30","doi-asserted-by":"publisher","first-page":"1","DOI":"10.4018\/IJFSA.2020010101","volume":"9","author":"PS Kumar","year":"2020","unstructured":"Kumar PS (2020) Developing a new approach to solve solid assignment problems under intuitionistic fuzzy environment. Int J Fuzzy Syst Appl (IJFSA) 9(1):1\u201334. https:\/\/doi.org\/10.4018\/IJFSA.2020010101","journal-title":"Int J Fuzzy Syst Appl (IJFSA)"},{"key":"1403_CR31","doi-asserted-by":"publisher","first-page":"137","DOI":"10.4018\/978-1-6684-7684-0.ch007","volume-title":"Perspectives and considerations on the evolution of smart systems","author":"PS Kumar","year":"2023","unstructured":"Kumar PS (2023) The theory and applications of the software-based PSK method for solving intuitionistic fuzzy solid transportation problems. In: Habib M (ed) Perspectives and considerations on the evolution of smart systems. IGI Global, pp 137\u2013186. https:\/\/doi.org\/10.4018\/978-1-6684-7684-0.ch007"},{"key":"1403_CR32","doi-asserted-by":"publisher","first-page":"661","DOI":"10.1007\/s13198-019-00794-w","volume":"10","author":"PS Kumar","year":"2019","unstructured":"Kumar PS (2019) Intuitionistic fuzzy solid assignment problems: a software-based approach. Int J Syst Assur Eng Manag 10:661\u2013675. https:\/\/doi.org\/10.1007\/s13198-019-00794-w","journal-title":"Int J Syst Assur Eng Manag"},{"key":"1403_CR33","doi-asserted-by":"publisher","first-page":"12179","DOI":"10.1007\/s00500-022-07032-9","volume":"26","author":"RM Aziz","year":"2022","unstructured":"Aziz RM (2022) Application of nature-inspired soft computing techniques for gene selection: a novel framework for classification of cancer. Soft Comput 26:12179\u201312196","journal-title":"Soft Comput"},{"issue":"6","key":"1403_CR34","doi-asserted-by":"publisher","first-page":"1627","DOI":"10.1007\/s11517-022-02555-7","volume":"60","author":"RM Aziz","year":"2022","unstructured":"Aziz RM (2022) Nature-inspired metaheuristics model for gene selection and classification of biomedical microarray data. Med Biol Eng Comput 60(6):1627\u20131646","journal-title":"Med Biol Eng Comput"},{"key":"1403_CR35","doi-asserted-by":"crossref","unstructured":"Li W, Zhang Z, Liu Z (2010) Action recognition based on a bag of 3D points. In: IEEE International Workshop on CVPR for Human Communicative Behavior Analysis (in conjunction with CVPR 2010), San Francisco, CA, June 2010","DOI":"10.1109\/CVPRW.2010.5543273"},{"key":"1403_CR36","doi-asserted-by":"crossref","unstructured":"Xia L, Chen CC, Aggarwal JK (2012) View invariant human action recognition using histograms of 3D joints. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 20\u201327","DOI":"10.1109\/CVPRW.2012.6239233"},
{"key":"1403_CR37","doi-asserted-by":"crossref","unstructured":"Seidenari L, Varano V et al (2013) Recognizing actions from depth cameras as weakly aligned multi-part bag-of-poses. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp 479\u2013485","DOI":"10.1109\/CVPRW.2013.77"},{"key":"1403_CR38","doi-asserted-by":"crossref","unstructured":"Shahroudy A, Liu J, Ng TT et al (2016) NTU RGB+D: a large scale dataset for 3D human activity analysis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1010\u20131019","DOI":"10.1109\/CVPR.2016.115"},{"key":"1403_CR39","doi-asserted-by":"crossref","unstructured":"Yang X, Tian YL (2012) Eigenjoints-based action recognition using Naive-Bayes-Nearest-Neighbor. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp 14\u201319","DOI":"10.1109\/CVPRW.2012.6239232"},{"key":"1403_CR40","doi-asserted-by":"crossref","unstructured":"Wang J, Liu Z, Wu Y et al (2012) Mining actionlet ensemble for action recognition with depth cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1290\u20131297","DOI":"10.1109\/CVPR.2012.6247813"},{"key":"1403_CR41","doi-asserted-by":"crossref","unstructured":"Garcia-Hernando G, Kim TK (2017) Transition forests: learning discriminative temporal transitions for action recognition and detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 432\u2013440","DOI":"10.1109\/CVPR.2017.51"},{"key":"1403_CR42","doi-asserted-by":"crossref","unstructured":"Wang C, Flynn J, Wang Y et al (2016) Recognizing actions in 3D using action-snippets and activated simplices. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence, pp 3604\u20133610","DOI":"10.1609\/aaai.v30i1.10456"},{"issue":"7","key":"1403_CR43","doi-asserted-by":"publisher","first-page":"1340","DOI":"10.1109\/TCYB.2014.2350774","volume":"45","author":"M Devanne","year":"2014","unstructured":"Devanne M et al (2014) 3-D human action recognition by shape analysis of motion trajectories on Riemannian manifold. IEEE Trans Cybern 45(7):1340\u20131352","journal-title":"IEEE Trans Cybern"},{"key":"1403_CR44","doi-asserted-by":"crossref","unstructured":"Vemulapalli R, Chellappa R (2016) Rolling rotations for recognizing human actions from 3D skeletal data. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4471\u20134479","DOI":"10.1109\/CVPR.2016.484"},{"key":"1403_CR45","doi-asserted-by":"crossref","unstructured":"Yan S, Xiong Y, Lin D (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence, 32(1):7444\u20137452","DOI":"10.1609\/aaai.v32i1.12328"},{"key":"1403_CR46","doi-asserted-by":"crossref","unstructured":"Zhang P, Lan C, Xing J et al (2017) View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In: Proceedings of the IEEE International Conference on Computer Vision, pp 2117\u20132126","DOI":"10.1109\/ICCV.2017.233"},{"key":"1403_CR47","doi-asserted-by":"crossref","unstructured":"Li M, Chen S, Chen X et al (2019) Actional-structural graph convolutional networks for skeleton-based action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 3595\u20133603","DOI":"10.1109\/CVPR.2019.00371"}],
"container-title":["Complex & Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01403-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01403-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01403-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,15]],"date-time":"2024-11-15T09:23:43Z","timestamp":1731662623000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01403-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,30]]},"references-count":47,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1403"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01403-5","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,30]]},"assertion":[{"value":"23 January 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 February 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 March 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}
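
The abstract in this record outlines a concrete pipeline: pose feature vectors are built per frame, inflection point frames are detected from the momentum flow of body parts, and keyframes are then chosen by multi-objective optimization. The Python sketch below is only a rough, hypothetical illustration of the first two stages; the root-relative feature definition, the summed-velocity momentum proxy with unit joint masses, and all function names are assumptions of this sketch, not the paper's actual formulation, and the multi-objective selection model is not reproduced.

# Illustrative sketch (hypothetical names): per-frame pose features and a
# momentum-flow heuristic for inflection frames in a skeleton sequence.
import numpy as np

def pose_features(seq):
    """seq: (T, J, 3) joint positions -> (T, D) pose feature vectors.
    Assumption: joint offsets relative to a root joint, flattened."""
    root = seq[:, :1, :]                       # treat joint 0 as the root
    return (seq - root).reshape(seq.shape[0], -1)

def inflection_frames(seq, part_ids):
    """Mark frames where the momentum proxy of any body part changes sign.
    part_ids: one array of joint indices per body part (unit mass assumed)."""
    vel = np.diff(seq, axis=0)                 # (T-1, J, 3) per-joint velocity
    idx = set()
    for part in part_ids:
        p = vel[:, part, :].sum(axis=(1, 2))   # scalar momentum proxy per frame
        changes = np.where(np.diff(np.sign(p)) != 0)[0] + 1
        idx.update(changes.tolist())
    return sorted(idx)

# Toy usage: 60 frames, 20 joints, three made-up body parts.
rng = np.random.default_rng(0)
seq = np.cumsum(rng.normal(size=(60, 20, 3)) * 0.01, axis=0)
parts = [np.arange(0, 5), np.arange(5, 10), np.arange(10, 20)]
print(pose_features(seq).shape, inflection_frames(seq, parts)[:10])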