{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,14]],"date-time":"2025-04-14T16:40:10Z","timestamp":1744648810025,"version":"3.40.4"},"reference-count":22,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,4,14]],"date-time":"2025-04-14T00:00:00Z","timestamp":1744588800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"},{"start":{"date-parts":[[2025,4,14]],"date-time":"2025-04-14T00:00:00Z","timestamp":1744588800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"}],"funder":[{"DOI":"10.13039\/501100002491","name":"Hansung University","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100002491","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Artif Intell"],"DOI":"10.1007\/s44163-025-00259-z","type":"journal-article","created":{"date-parts":[[2025,4,14]],"date-time":"2025-04-14T16:03:03Z","timestamp":1744646583000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["A novel machine learning-based shoulder exercise assistant program"],"prefix":"10.1007","volume":"5","author":[{"given":"Hae-Jun","family":"Kwon","sequence":"first","affiliation":[]},{"given":"Seoung-Ho","family":"Choi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,4,14]]},"reference":[{"key":"259_CR1","doi-asserted-by":"publisher","first-page":"279","DOI":"10.51979\/KSSLS.2022.10.90.279","volume":"90","author":"J Kwon","year":"2022","unstructured":"Kwon J, Nam S, Sang-Back N. How has the home training trend changed before and after the Covid-19 pandemic? J Sport Leisure Stud. 2022;90:279\u201393. 
https:\/\/doi.org\/10.51979\/KSSLS.2022.10.90.279.","journal-title":"J Sport Leisure Stud"},{"issue":"11","key":"259_CR2","doi-asserted-by":"publisher","first-page":"709","DOI":"10.5124\/jkma.2022.65.11.709","volume":"65","author":"H Lee","year":"2022","unstructured":"Lee H, Kim D, Lee Y. Surgical treatment of shoulder pain: a focused study on rotator cuff injury. Korean Med Assoc (KAMJE). 2022;65(11):709\u201318. https:\/\/doi.org\/10.5124\/jkma.2022.65.11.709.","journal-title":"Korean Med Assoc (KAMJE)"},{"key":"259_CR3","unstructured":"Papers with Code: Skeleton-based action recognition. https:\/\/paperswithcode.com\/task\/skeleton-based-action-recognition. Accessed 22 Feb 2023"},{"issue":"2","key":"259_CR4","doi-asserted-by":"publisher","first-page":"925","DOI":"10.3390\/app12020925","volume":"12","author":"KH Kim","year":"2022","unstructured":"Kim KH, Choi W-J, Sohn M-J. Feature importance analysis for postural deformity detection system using explainable predictive modeling technique. Appl Sci. 2022;12(2):925.","journal-title":"Appl Sci"},{"key":"259_CR5","doi-asserted-by":"publisher","unstructured":"Duan H, Zhao Y, Chen K, Lin D, Dai B. Revisiting skeleton-based action recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022;pp. 2969\u20132978 . https:\/\/doi.org\/10.48550\/arXiv.2104.13586","DOI":"10.48550\/arXiv.2104.13586"},{"key":"259_CR6","doi-asserted-by":"publisher","unstructured":"Jiang T, Camgoz N, Bowden R. Skeletor: skeletal transformers for robust body-pose estimation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021;pp. 3394\u20133402 . https:\/\/doi.org\/10.48550\/arXiv.2104.11712","DOI":"10.48550\/arXiv.2104.11712"},{"key":"259_CR7","doi-asserted-by":"crossref","unstructured":"Crabbe B, Paiement A, Hannuna S, Mirmehdi M. Skeleton-free body pose estimation from depth images for movement analysis. 
In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015;pp. 70\u201378","DOI":"10.1109\/ICCVW.2015.49"},{"issue":"07","key":"259_CR8","first-page":"11401","volume":"34","author":"Y Cheng","year":"2020","unstructured":"Cheng Y, Wang B, Yang B, Zhou H, Wang S, Tan R, Wang X. 3d human pose estimation using spatio-temporal networks with explicit occlusion training. Proc AAAI Conf Art Intell. 2020;34(07):11401\u20138.","journal-title":"Proc AAAI Conf Art Intell"},{"key":"259_CR9","doi-asserted-by":"crossref","unstructured":"Wang Z, Shin D, Fowlkes CC. Predicting camera viewpoint improves cross-dataset generalization for 3d human pose estimation. In: Proceedings of the ECCV Workshops 2020.","DOI":"10.1007\/978-3-030-66096-3_36"},{"key":"259_CR10","doi-asserted-by":"crossref","unstructured":"Arnab A, Doersch C, Zisserman A. Exploiting temporal context for 3d human pose estimation in the wild. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019;pp. 3390\u20133399","DOI":"10.1109\/CVPR.2019.00351"},{"key":"259_CR11","doi-asserted-by":"publisher","DOI":"10.4855\/arXiv.2009.10013","author":"A Sengupta","year":"2020","unstructured":"Sengupta A, Budvytis I, Cipolla R. Synthetic training for accurate 3D human pose and shape estimation in the wild. ArXiv Preprint. 2020. https:\/\/doi.org\/10.4855\/arXiv.2009.10013.","journal-title":"ArXiv Preprint"},{"key":"259_CR12","unstructured":"Lin K, Wang L, Liu Z. End-to-end human pose and mesh reconstruction with transformers. arXiv:abs\/2012.09760 2020."},{"key":"259_CR13","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2021.103225","volume":"210","author":"J Wang","year":"2021","unstructured":"Wang J, Tan S, Zhen X, Xu S, Zheng F, He Z, Shao L. Deep 3d human pose estimation: a review. Comput Vis Image Underst. 2021;210: 103225. 
https:\/\/doi.org\/10.1016\/j.cviu.2021.103225.","journal-title":"Comput Vis Image Underst"},{"key":"259_CR14","unstructured":"Scikit-learn: ExtraTreesClassifier. https:\/\/scikit-learn.org\/dev\/modules\/generated\/sklearn.ensemble.ExtraTreesClassifier.html. Accessed 24 Nov 2024"},{"key":"259_CR15","unstructured":"Scikit-learn: LogisticRegression. https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.linear_model.LogisticRegression.html. Accessed 15 Jan 2023"},{"key":"259_CR16","unstructured":"Scikit-learn: RandomForestClassifier. https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.ensemble.RandomForestClassifier.html. Accessed 15 Jan 2023"},{"key":"259_CR17","unstructured":"Scikit-learn: GradientBoostingClassifier. https:\/\/scikit-learn.org\/dev\/modules\/generated\/sklearn.ensemble.GradientBoostingClassifier.html. Accessed 24 Nov 2024"},{"key":"259_CR18","unstructured":"Scikit-learn: HistGradientBoostingClassifier. https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.ensemble.HistGradientBoostingClassifier.html. Accessed 15 Jan 2023"},{"key":"259_CR19","unstructured":"Scikit-learn: SVMClassifier. https:\/\/scikit-learn.org\/1.5\/modules\/svm.html. Accessed 24 Nov 2024"},{"key":"259_CR20","unstructured":"Mokhuri: Shoulder rehabilitation exercise. https:\/\/www.mokhuri.com\/TAclinic_new\/01_intro\/intro_branch08_info29.php. Accessed 22 Feb 2023"},{"key":"259_CR21","unstructured":"CVAT https:\/\/www.cvat.ai\/. Accessed 22 Feb 2023"},{"key":"259_CR22","unstructured":"Blmoistawinde: Classical ML equations in LaTeX. https:\/\/blmoistawinde.github.io\/ml_equations_latex\/. 
Accessed 22 Feb 2023"}],"container-title":["Discover Artificial Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00259-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44163-025-00259-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44163-025-00259-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,14]],"date-time":"2025-04-14T16:03:11Z","timestamp":1744646591000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44163-025-00259-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,14]]},"references-count":22,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["259"],"URL":"https:\/\/doi.org\/10.1007\/s44163-025-00259-z","relation":{},"ISSN":["2731-0809"],"issn-type":[{"value":"2731-0809","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,14]]},"assertion":[{"value":"19 September 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 April 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Informed consent was obtained from all individual participants included in 
the study. Participants were explicitly informed that their motion capture images would be used for research purposes, and they provided their consent to participate. It was further explained that the captured images would be cropped to display only the areas necessary for joint point extraction, specifically the shoulders, elbows, and wrists. The participants were assured that no facial features would be visible in the images. All participants fully understood and agreed to these conditions before the experiment was conducted.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Informed consent"}},{"value":"The authors declare that there are no competing interests in this paper.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"37"}}