{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,15]],"date-time":"2026-02-15T08:55:34Z","timestamp":1771145734095,"version":"3.50.1"},"reference-count":23,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T00:00:00Z","timestamp":1742169600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"},{"start":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T00:00:00Z","timestamp":1742169600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"}],"funder":[{"name":"Jilin Provincial Department of Education Science and Technology Plan Project","award":["JJKH20230680KJ"],"award-info":[{"award-number":["JJKH20230680KJ"]}]},{"name":"Jilin Science and Technology Development Plan Project","award":["20230401104YY"],"award-info":[{"award-number":["20230401104YY"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"DOI":"10.1007\/s44163-025-00246-4","type":"journal-article","created":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T03:12:48Z","timestamp":1742181168000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Limb movement detection and analysis based on visual recognition of human 
posture"],"prefix":"10.1007","volume":"5","author":[{"given":"Zhiguo","family":"Xiao","sequence":"first","affiliation":[]},{"given":"Chunxiang","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Tianjiao","family":"Ding","sequence":"additional","affiliation":[]},{"given":"Xiangfeng","family":"Shen","sequence":"additional","affiliation":[]},{"given":"Xinyuan","family":"Li","sequence":"additional","affiliation":[]},{"given":"Dongni","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,17]]},"reference":[{"issue":"19","key":"246_CR1","doi-asserted-by":"publisher","first-page":"4129","DOI":"10.3390\/s19194129","volume":"19","author":"Q Lei","year":"2019","unstructured":"Lei Q, Du JX, Zhang HB, et al. A survey of vision-based human action evaluation methods. Sensors. 2019;19(19):4129.","journal-title":"Sensors"},{"key":"246_CR2","doi-asserted-by":"publisher","first-page":"612","DOI":"10.1016\/j.patcog.2017.12.007","volume":"76","author":"F Patrona","year":"2018","unstructured":"Patrona F, Chatzitofis A, Zarpalas D, et al. Motion analysis: action detection, recognition and evaluation based on motion capture data. Pattern Recogn. 2018;76:612\u201322.","journal-title":"Pattern Recogn"},{"key":"246_CR3","doi-asserted-by":"crossref","unstructured":"Weeratunga K, Dharmaratne A, Boon How K. Application of computer vision and vector space model for tactical movement classification in badminton. Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017. 76\u201382.","DOI":"10.1109\/CVPRW.2017.22"},{"issue":"1","key":"246_CR4","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1007\/s00530-021-00815-4","volume":"28","author":"B Debnath","year":"2022","unstructured":"Debnath B, Obrien M, Yamaguchi M, et al. A review of computer vision-based approaches for physical rehabilitation and assessment. Multimedia Syst. 
2022;28(1):209\u201339.","journal-title":"Multimedia Syst"},{"key":"246_CR5","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2023.106835","author":"S Sardari","year":"2023","unstructured":"Sardari S, Sharifzadeh S, Daneshkhah A, et al. Artificial Intelligence for skeleton-based physical rehabilitation action evaluation: a systematic review. Comput Biol Med. 2023. https:\/\/doi.org\/10.1016\/j.compbiomed.2023.106835.","journal-title":"Comput Biol Med"},{"key":"246_CR6","doi-asserted-by":"crossref","unstructured":"Shah D, Rautela V, Sharma C. Yoga pose detection using posenet and k-nn 2021 International conference on computing, communication and green engineering (CCGE). IEEE, 2021: 1\u20134.","DOI":"10.1109\/CCGE50943.2021.9776451"},{"issue":"4","key":"246_CR7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3524497","volume":"55","author":"W Liu","year":"2022","unstructured":"Liu W, Bao Q, Sun Y, et al. Recent advances of monocular 2d and 3d human pose estimation: a deep learning perspective. ACM Comput Surv. 2022;55(4):1\u201341.","journal-title":"ACM Comput Surv"},{"issue":"23","key":"246_CR8","doi-asserted-by":"publisher","first-page":"6966","DOI":"10.3390\/s20236966","volume":"20","author":"K Han","year":"2020","unstructured":"Han K, Yang Q, Huang Z. A two-stage fall recognition algorithm based on human posture features. Sensors. 2020;20(23):6966.","journal-title":"Sensors"},{"issue":"1","key":"246_CR9","doi-asserted-by":"publisher","first-page":"110","DOI":"10.3390\/s24010110","volume":"24","author":"A Zakir","year":"2023","unstructured":"Zakir A, Salman SA, Takahashi H. SOCA-PRNet: spatially oriented attention-infused structured-feature-enabled PoseResNet for 2D human pose estimation. Sensors. 2023;24(1):110.","journal-title":"Sensors"},{"issue":"1","key":"246_CR10","doi-asserted-by":"publisher","first-page":"211","DOI":"10.3390\/s24010211","volume":"24","author":"G Ding","year":"2023","unstructured":"Ding G, Georgilas I, Plummer A. 
A deep learning model with a self-attention mechanism for leg joint angle estimation across varied locomotion modes. Sensors. 2023;24(1):211.","journal-title":"Sensors"},{"issue":"5","key":"246_CR11","doi-asserted-by":"publisher","first-page":"568","DOI":"10.1080\/02640414.2018.1521769","volume":"37","author":"EE Cust","year":"2019","unstructured":"Cust EE, Sweeting AJ, Ball K, et al. Machine and deep learning for sport-specific movement recognition: a systematic review of model development and performance. J Sports Sci. 2019;37(5):568\u2013600.","journal-title":"J Sports Sci"},{"key":"246_CR12","doi-asserted-by":"crossref","unstructured":"Taylor P E, Almeida G J M, Kanade T, et al. Classifying human motion quality for knee osteoarthritis using accelerometers 2010 Annual international conference of the IEEE engineering in medicine and biology. IEEE, 2010: 339\u2013343.","DOI":"10.1109\/IEMBS.2010.5627665"},{"key":"246_CR13","doi-asserted-by":"publisher","first-page":"30283","DOI":"10.1109\/ACCESS.2021.3055960","volume":"9","author":"S Miao","year":"2021","unstructured":"Miao S, Shen C, Feng X, et al. Upper limb rehabilitation system for stroke survivors based on multi-modal sensors and machine learning. IEEE Access. 2021;9:30283\u201391.","journal-title":"IEEE Access"},{"key":"246_CR14","doi-asserted-by":"crossref","unstructured":"Mroz S, Baddour N, McGuirk C, et al. Comparing the quality of human pose estimation with blazepose or openpose 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART). IEEE, 2021: 1\u20134.","DOI":"10.1109\/BioSMART54244.2021.9677850"},{"key":"246_CR15","doi-asserted-by":"crossref","unstructured":"Feliandra Z B, Khadijah S, Rachmadi M F, et al. Classification of stroke and Non-Stroke patients from human body movements using smartphone videos and Deep Neural Networks 2022 International Conference on Advanced Computer Science and Information Systems (ICACSIS). 
IEEE, 2022: 187\u2013192.","DOI":"10.1109\/ICACSIS56558.2022.9923501"},{"key":"246_CR16","first-page":"492","volume":"4","author":"P Wang","year":"2021","unstructured":"Wang P, Zhang Y, Jiang W. Application of K-Nearest neighbor (knn) algorithm for human action recognition 2021 IEEE 4th advanced information management, communicates, electronic and automation control conference (IMCEC). IEEE. 2021;4:492\u20136.","journal-title":"IEEE"},{"key":"246_CR17","first-page":"2013","volume-title":"Full-body human motion capture from monocular depth images time-of-flight and depth imaging. sensors, algorithms, and applications: Dagstuhl 2012 seminar on time-of-flight imaging and GCPR 2013 workshop on imaging new modalities","author":"T Helten","year":"2012","unstructured":"Helten T, Baak A, M\u00fcller M, et al. Full-body human motion capture from monocular depth images time-of-flight and depth imaging. sensors, algorithms, and applications: Dagstuhl 2012 seminar on time-of-flight imaging and GCPR 2013 workshop on imaging new modalities. Berlin Heidelberg: Springer, Berlin Heidelberg; 2012. p. 2013."},{"issue":"3","key":"246_CR18","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1080\/21679169.2017.1296888","volume":"19","author":"S Mani","year":"2017","unstructured":"Mani S, Sharma S, Omar B, et al. Quantitative measurements of forward head posture in a clinical settings: a technical feasibility study. Euro J Physiotherapy. 2017;19(3):119\u201323.","journal-title":"Euro J Physiotherapy"},{"key":"246_CR19","unstructured":"Adolf J, Dolezal J, Kutilek P, et al. Automatic Telerehabilitation System in a Home Environment Using Computer Vision. pHealth. 2020. 142\u2013148."},{"key":"246_CR20","doi-asserted-by":"publisher","unstructured":"Li Y, Wang C, Cao Y, et al. Human pose estimation based in-home lower body rehabilitation system. 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. 
https:\/\/doi.org\/10.1109\/ijcnn48605.2020.9207296.","DOI":"10.1109\/ijcnn48605.2020.9207296"},{"issue":"7","key":"246_CR21","doi-asserted-by":"publisher","first-page":"173","DOI":"10.3390\/fi13070173","volume":"13","author":"J Chua","year":"2021","unstructured":"Chua J, Ong LY, Leow MC. Telehealth using PoseNet-based system for in-home rehabilitation. Future Internet. 2021;13(7):173.","journal-title":"Future Internet"},{"issue":"4","key":"246_CR22","first-page":"90","volume":"12","author":"S Sheikhi","year":"2020","unstructured":"Sheikhi S, Kheirabadi MT, Bazzazi A. A novel scheme for improving accuracy of KNN classification algorithm based on the new weighting technique and stepwise feature selection. J Inf Technol Manag. 2020;12(4):90\u2013104.","journal-title":"J Inf Technol Manag"},{"key":"246_CR23","doi-asserted-by":"crossref","unstructured":"Hansun S. A new approach of moving average method in time series analysis. 2013 conference on new media studies (CoNMedia). IEEE. 2013: 1\u20134.","DOI":"10.1109\/CoNMedia.2013.6708545"}],"container-title":["Discover Artificial 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00246-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-025-00246-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00246-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,17]],"date-time":"2025-03-17T03:13:02Z","timestamp":1742181182000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-025-00246-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,17]]},"references-count":23,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["246"],"URL":"https:\/\/doi.org\/10.1007\/s44163-025-00246-4","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,17]]},"assertion":[{"value":"10 October 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 March 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"This study focuses on the development of human posture recognition technology, for which we conducted specialized data collection. Hereby, we solemnly declare: The data collection and research process for the study were conducted at Changchun University. Dongni Li, as Dr. 
Zhiguo Xiao's doctoral supervisor, provided support in the design of research methods and the resolution of key issues. During the data collection process, we provided each participant with a detailed explanation of the study's purpose, procedures, and potential risks, and obtained voluntarily signed written informed consent from each of them. We ensured participants' right to be informed of the study's progress at any time and their freedom to withdraw unconditionally at any stage. We highly value the privacy rights of participants and have strictly anonymized all collected data. By deleting or encrypting data, we ensured that the dataset contains no information traceable to individual identities. This research protocol underwent rigorous review by the Changchun University Institutional Ethics Review Board and received formal approval, ensuring the study's legality and ethical compliance. We adhere to the principles of research integrity and commit that the results of this study are presented based on genuine and reliable data, without any form of fabrication, falsification, or misleading practices, to maintain the study's objectivity and accuracy.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"We invite you to participate in a study on Limb movement detection and analysis based on Visual Human Posture Recognition, with the aim of designing and implementing more robust and accurate human motion assessment techniques. 
Before participating, you need to know the following information: Location: Data collection for the study was conducted at Changchun University. Research process: You will participate in the collection and preprocessing of human movement data. Risks and discomforts: No potential risks or discomforts associated with participating in this study have been identified. Data confidentiality: Your personal information and research data will be strictly kept confidential and used only for the purpose of this study. Voluntary participation: Participation in this study is completely voluntary, and you have the right to withdraw at any time without giving any reason. Contact information: If you have any questions or discomfort, please contact Dr. Zhiguo Xiao. All individuals participating in this study voluntarily provided written informed consent after fully understanding the research purpose, process, and potential risks. We ensure that participants have the right to be informed of research progress at any time and have the right to unconditionally withdraw from the study at any time. All participating researchers agree to the publication of these results.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}},{"value":"The authors declare no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"27"}}