{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,10]],"date-time":"2026-02-10T19:02:53Z","timestamp":1770750173287,"version":"3.50.0"},"reference-count":47,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2025,2,10]],"date-time":"2025-02-10T00:00:00Z","timestamp":1739145600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>The purpose of this research is to develop an efficient model for human pose estimation (HPE). The main limitations of the study include the small size of the dataset and confounds in the classification of certain poses, suggesting the need for more data to improve the robustness of the model in uncontrolled environments. The methodology used combines MediaPipe for the detection of key points in images with a CNN1D model that processes preprocessed feature sequences. The Yoga Poses dataset was used for the training and validation of the model, and resampling techniques, such as bootstrapping, were applied to improve accuracy and avoid overfitting in the training. The results show that the proposed model achieves 96% overall accuracy in the classification of five yoga poses, with accuracy metrics above 90% for all classes. 
The implementation of the CNN1D model instead of traditional 2D or 3D architectures accomplishes the goal of maintaining a low computational cost and efficient preprocessing of the images, allowing for its use on mobile devices and real-time environments.<\/jats:p>","DOI":"10.3390\/info16020129","type":"journal-article","created":{"date-parts":[[2025,2,10]],"date-time":"2025-02-10T04:44:03Z","timestamp":1739162643000},"page":"129","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["CNN 1D: A Robust Model for Human Pose Estimation"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8837-3204","authenticated-orcid":false,"given":"Mercedes Hern\u00e1ndez de la","family":"Cruz","sequence":"first","affiliation":[{"name":"Divisi\u00f3n de Estudios de Posgrado e Investigaci\u00f3n, Tecnol\u00f3gico Nacional de M\u00e9xico, Instituto Tecnol\u00f3gico de Chilpancingo, Chilpancingo 39090, Guerrero, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-8185-0868","authenticated-orcid":false,"given":"Uriel","family":"Solache","sequence":"additional","affiliation":[{"name":"Divisi\u00f3n de Estudios de Posgrado e Investigaci\u00f3n, Tecnol\u00f3gico Nacional de M\u00e9xico, Instituto Tecnol\u00f3gico de Chilpancingo, Chilpancingo 39090, Guerrero, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1571-9267","authenticated-orcid":false,"given":"Antonio","family":"Luna-\u00c1lvarez","sequence":"additional","affiliation":[{"name":"Divisi\u00f3n de Estudios de Posgrado e Investigaci\u00f3n, Tecnol\u00f3gico Nacional de M\u00e9xico, Instituto Tecnol\u00f3gico de Chilpancingo, Chilpancingo 39090, Guerrero, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2212-7785","authenticated-orcid":false,"given":"Sergio Ricardo","family":"Zagal-Barrera","sequence":"additional","affiliation":[{"name":"Divisi\u00f3n de Estudios de Posgrado e Investigaci\u00f3n, Tecnol\u00f3gico Nacional de 
M\u00e9xico, Instituto Tecnol\u00f3gico de Chilpancingo, Chilpancingo 39090, Guerrero, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-4990-1154","authenticated-orcid":false,"given":"Daniela Aurora","family":"Morales L\u00f3pez","sequence":"additional","affiliation":[{"name":"Divisi\u00f3n de Estudios de Posgrado e Investigaci\u00f3n, Tecnol\u00f3gico Nacional de M\u00e9xico, Instituto Tecnol\u00f3gico de Chilpancingo, Chilpancingo 39090, Guerrero, Mexico"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8665-4096","authenticated-orcid":false,"given":"Dante","family":"Mujica-Vargas","sequence":"additional","affiliation":[{"name":"Departamento de Ciencias Computacionales, Tecnol\u00f3gico Nacional de M\u00e9xico, Centro Nacional de Investigaci\u00f3n y Desarrollo Tecnol\u00f3gico, Cuernavaca 62490, Morelos, Mexico"}]}],"member":"1968","published-online":{"date-parts":[[2025,2,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Peng, Q., Zheng, C., and Chen, C. (2024, January 16\u201322). A Dual-Augmentor Framework for Domain Generalization in 3D Human Pose Estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.00218"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"110925","DOI":"10.1016\/j.patcog.2024.110925","article-title":"GraphMLP: A graph MLP-like architecture for 3D human pose estimation","volume":"158","author":"Li","year":"2024","journal-title":"Pattern Recognit."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"26565","DOI":"10.1007\/s11042-023-16618-w","article-title":"Pose estimation for swimmers in video surveillance","volume":"83","author":"Cao","year":"2024","journal-title":"Multimed. Tools Appl."},{"key":"ref_4","first-page":"4311350","article-title":"Yoga pose estimation and feedback generation using deep learning","volume":"2022","author":"Srivastava","year":"2022","journal-title":"Comput. Intell. 
Neurosci."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Xia, S., Chu, L., Yang, J., Wu, Q., and Pei, L. (2024, January 16\u201322). Dynamic Inertial Poser (DynaIP): Part-Based Motion Dynamics Learning for Enhanced Human Pose Estimation with Sparse Inertial Sensors. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.00185"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Chen, H., Zendehdel, N., Leu, M.C., Moniruzzaman, M., Yin, Z., and Hajmohammadi, S. (2024, January 21\u201324). Repetitive Action Counting Through Joint Angle Analysis and Video Transformer Techniques. Proceedings of the International Symposium on Flexible Automation, Seattle, Washington, USA.","DOI":"10.1115\/ISFA2024-140665"},{"key":"ref_7","unstructured":"Grishchenko, I., Bazarevsky, V., Zanfir, A., Bazavan, E.G., Zanfir, M., Yee, R., Raveendran, K., Zhdanovich, M., Grundmann, M., and Sminchisescu, C. (2022). BlazePose GHUM holistic: Real-time 3D human landmarks and pose estimation. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"108600","DOI":"10.1016\/j.optlastec.2022.108600","article-title":"Image-free single-pixel segmentation","volume":"157","author":"Liu","year":"2023","journal-title":"Opt. Laser Technol."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Park, S., You, E., Lee, I., and Lee, J. (2023, January 1\u20136). Towards robust and smooth 3D multi-person pose estimation from monocular videos in the wild. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France.","DOI":"10.1109\/ICCV51070.2023.01357"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Dong, Z., Song, J., Chen, X., Guo, C., and Hilliges, O. (2021, January 10\u201317). Shape-aware multi-person pose estimation from multi-view images. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01097"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Goyal, G., Di Pietro, F., Carissimi, N., Glover, A., and Bartolozzi, C. (2023, January 17\u201324). MoveEnet: Online high-frequency human pose estimation with an event camera. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPRW59228.2023.00420"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Aarthy, K., Kruthi, M., Upadhyay, R., Darbhamulla, J., Singh, S., and Pavikars, M.M. (2024, January 17\u201318). Advanced Yoga Pose Estimation: Enhancing PoseNet with Adaptive Key Point Elimination. Proceedings of the 2024 International Conference on Recent Advances in Electrical, Electronics, Ubiquitous Communication, and Computational Intelligence (RAEEUCCI), Chennai, India.","DOI":"10.1109\/RAEEUCCI61380.2024.10547791"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"102","DOI":"10.4236\/jcc.2023.112008","article-title":"Detection of 3D human posture based on improved MediaPipe","volume":"11","author":"Lin","year":"2023","journal-title":"J. Comput. Commun."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ullah, H., and Munir, A. (2023). Human action representation learning using an attention-driven residual 3DCNN network. Algorithms, 16.","DOI":"10.3390\/a16080369"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ullah, R., Asif, M., Shah, W.A., Anjam, F., Ullah, I., Khurshaid, T., Wuttisittikulkij, L., Shah, S., Ali, S.M., and Alibakhshikenari, M. (2023). Speech emotion recognition using convolution neural networks and multi-head convolutional transformer. Sensors, 23.","DOI":"10.3390\/s23136212"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Aldawsari, H., Al-Ahmadi, S., and Muhammad, F. (2023). 
Optimizing 1D-CNN-based emotion recognition process through channel and feature selection from EEG signals. Diagnostics, 13.","DOI":"10.3390\/diagnostics13162624"},{"key":"ref_17","unstructured":"Bazarevsky, V. (2020). BlazePose: On-device Real-time Body Pose tracking. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Lovanshi, M., and Tiwari, V. (2022, January 21\u201323). Human pose estimation: Benchmarking deep learning-based methods. Proceedings of the 2022 IEEE Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), Gwalior, India.","DOI":"10.1109\/IATMSI56455.2022.10119324"},{"key":"ref_19","first-page":"1","article-title":"Posture Detection and Comparison of Different Physical Exercises Based on Deep Learning Using MediaPipe, OpenCV","volume":"7","author":"Kale","year":"2023","journal-title":"Int. J. Sci. Res. Eng. Manag."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Mroz, S., Baddour, N., McGuirk, C., Juneau, P., Tu, A., Cheung, K., and Lemaire, E. (2021, January 8\u201310). Comparing the quality of human pose estimation with BlazePose or OpenPose. Proceedings of the 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France.","DOI":"10.1109\/BioSMART54244.2021.9677850"},{"key":"ref_21","first-page":"767","article-title":"Personal Training with Tai Chi: Classifying Movement using MediaPipe Pose Estimation and LSTM","volume":"6","author":"Suhandi","year":"2024","journal-title":"Build. Inform. Technol. Sci. (BITS)"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Chaudhary, I., Singh, N.T., Chaudhary, M., and Yadav, K. (2023, January 26\u201328). Real-time yoga pose detection using OpenCV and MediaPipe. 
Proceedings of the 2023 4th International Conference for Emerging Technology (INCET), Belgaum, India.","DOI":"10.1109\/INCET57972.2023.10170485"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"563","DOI":"10.1515\/cdbme-2023-1141","article-title":"Accuracy Evaluation of 3D Pose Estimation with MediaPipe Pose for Physical Exercises","volume":"9","author":"Dill","year":"2023","journal-title":"Curr. Dir. Biomed. Eng."},{"key":"ref_24","first-page":"1197","article-title":"Improved Yoga Pose Detection Using MediaPipe and MoveNet in a Deep Learning Model","volume":"37","author":"Parashar","year":"2023","journal-title":"Rev. D\u2019Intell. Artif."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Mountzouris, K., Perikos, I., and Hatzilygeroudis, I. (2023). Speech emotion recognition using convolutional neural networks with attention mechanism. Electronics, 12.","DOI":"10.20944\/preprints202309.1202.v1"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Arrowsmith, C., Burns, D., Mak, T., Hardisty, M., and Whyne, C. (2022). Physiotherapy exercise classification with single-camera pose detection and machine learning. Sensors, 23.","DOI":"10.3390\/s23010363"},{"key":"ref_27","first-page":"V001T09A001","article-title":"Dynamic gesture design and recognition for human-robot collaboration with convolutional neural networks","volume":"Volume 83617","author":"Chen","year":"2020","journal-title":"Proceedings of the International Symposium on Flexible Automation"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"103055","DOI":"10.1016\/j.jvcir.2021.103055","article-title":"Human pose estimation and its application to action recognition: A survey","volume":"76","author":"Song","year":"2021","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Dedhia, U., Bhoir, P., Ranka, P., and Kanani, P. (2023, January 1\u20132). 
Pose Estimation and Virtual Gym Assistant Using MediaPipe and Machine Learning. Proceedings of the 2023 International Conference on Network, Multimedia and Information Technology (NMITCON), Bengaluru, India.","DOI":"10.1109\/NMITCON58196.2023.10275938"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Bhamidipati, V.S.P., Saxena, I., Saisanthiya, D., and Retnadhas, M. (2023, January 5\u20136). Robust intelligent posture estimation for an AI gym trainer using MediaPipe and OpenCV. Proceedings of the 2023 International Conference on Networking and Communications (ICNWC), Chennai, India.","DOI":"10.1109\/ICNWC57852.2023.10127264"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"102861","DOI":"10.1016\/j.mex.2024.102861","article-title":"Estimation of human body 3D pose for parent-infant interaction settings using Azure Kinect and OpenPose","volume":"13","author":"Myowa","year":"2024","journal-title":"MethodsX"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"65","DOI":"10.35912\/jisted.v1i1.1899","article-title":"Indonesian Sign Language Recognition Using MediaPipe with Random Forest and Multinomial Logistic Regression Models","volume":"1","author":"Suyudi","year":"2022","journal-title":"J. Ilmu Siber Dan Teknol. Digit."},{"key":"ref_33","first-page":"46","article-title":"Human Activity Recognition using SVM, KNN & Logistic Regression","volume":"16","author":"Bhattarai","year":"2023","journal-title":"World J. Res. Rev. (WJRR)"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Alsabhan, W. (2023). Human\u2013computer interaction with a real-time speech emotion recognition with ensembling techniques 1D convolution neural network and attention. Sensors, 23.","DOI":"10.3390\/s23031386"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Hartley, R., and Zisserman, A. (2003).
Multiple View Geometry in Computer Vision, Cambridge University Press.","DOI":"10.1017\/CBO9780511811685"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Szeliski, R. (2022). Computer Vision: Algorithms and Applications, Springer Nature.","DOI":"10.1007\/978-3-030-34372-9"},{"key":"ref_37","first-page":"1691","article-title":"On-device real-time pose estimation & correction","volume":"3","author":"Ohri","year":"2021","journal-title":"Int. J. Adv. Eng. Manag. (IJAEM)"},{"key":"ref_38","unstructured":"Gorriz, J.M., Segovia, F., Ramirez, J., Ortiz, A., and Suckling, J. (2024). Is K-fold cross validation the best model selection method for Machine Learning?. arXiv."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"164","DOI":"10.12928\/telkomnika.v22i1.24845","article-title":"Neural network with k-fold cross validation for oil palm fruit ripeness prediction","volume":"22","author":"Shiddiq","year":"2024","journal-title":"TELKOMNIKA (Telecommun. Comput. Electron. Control)"},{"key":"ref_40","unstructured":"Jian, Y., Gao, C., and Vosoughi, S. (2023, January 10\u201316). Bootstrapping vision-language learning with decoupled language pre-training. Proceedings of the Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA."},{"key":"ref_41","unstructured":"Zhao, Q., Zheng, C., Liu, M., and Chen, C. (2023, January 10\u201316). A single 2D pose with context is worth hundreds for 3D human pose estimation. Proceedings of the Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"24699","DOI":"10.1007\/s11042-023-15379-w","article-title":"UV R-CNN: Stable and efficient dense human pose estimation","volume":"83","author":"Jia","year":"2024","journal-title":"Multimed. Tools Appl."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Shan, W., Lu, H., Wang, S., Zhang, X., and Gao, W. 
(2021, January 20\u201324). Improving robustness and accuracy via relative information encoding in 3D human pose estimation. Proceedings of the 29th ACM International Conference on Multimedia, Virtual, China.","DOI":"10.1145\/3474085.3475504"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"e2311436121","DOI":"10.1073\/pnas.2311436121","article-title":"Manifold fitting with CycleGAN","volume":"121","author":"Yao","year":"2024","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Alsaadi, S., Anande, T.J., and Leeson, M.S. (2024, January 21\u201323). Comparative Analysis of 1D-CNN and 2D-CNN for Network Intrusion Detection in Software Defined Networks. Proceedings of the International Conference on Emerging Internet, Data & Web Technologies, Naples, Italy.","DOI":"10.1007\/978-3-031-53555-0_46"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"e39977","DOI":"10.1016\/j.heliyon.2024.e39977","article-title":"A comprehensive analysis of the machine learning pose estimation models used in human movement and posture analyses: A narrative review","volume":"10","author":"Roggio","year":"2024","journal-title":"Heliyon"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Mundt, M., Born, Z., Goldacre, M., and Alderson, J. (2022). Estimating ground reaction forces from two-dimensional pose data: A biomechanics-based comparison of alphapose, blazepose, and openpose. 
Sensors, 23.","DOI":"10.3390\/s23010078"}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/2\/129\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T16:30:19Z","timestamp":1760027419000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/16\/2\/129"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,2,10]]},"references-count":47,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,2]]}},"alternative-id":["info16020129"],"URL":"https:\/\/doi.org\/10.3390\/info16020129","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,2,10]]}}}