{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,13]],"date-time":"2026-01-13T21:17:06Z","timestamp":1768339026373,"version":"3.49.0"},"reference-count":29,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,9,18]],"date-time":"2024-09-18T00:00:00Z","timestamp":1726617600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,9,18]],"date-time":"2024-09-18T00:00:00Z","timestamp":1726617600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100009389","name":"Stiftelsen Promobilia","doi-asserted-by":"publisher","award":["20098"],"award-info":[{"award-number":["20098"]}],"id":[{"id":"10.13039\/100009389","id-type":"DOI","asserted-by":"publisher"}]},{"name":"UU Innovation","award":["2019"],"award-info":[{"award-number":["2019"]}]},{"DOI":"10.13039\/100020248","name":"Svenska Handelsbankens Forskningsstiftelse","doi-asserted-by":"publisher","award":["2020"],"award-info":[{"award-number":["2020"]}],"id":[{"id":"10.13039\/100020248","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Med Biol Eng Comput"],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:sec>\n                <jats:title>Abstract<\/jats:title>\n                <jats:p>Accurate and fast extraction of step parameters from video recordings of gait allows for richer information to be obtained from clinical tests such as Timed Up and Go. Current deep-learning methods are promising, but lack in accuracy for many clinical use cases. Extracting step parameters will often depend on extracted landmarks (keypoints) on the feet. 
We hypothesize that such keypoints can be determined with an accuracy relevant for clinical practice from video recordings by combining an existing general-purpose pose estimation method (OpenPose) with custom convolutional neural networks (convnets) specifically trained to identify keypoints on the heel. The combined method finds keypoints on the posterior and lateral aspects of the heel in side-view and frontal-view images, from which step length and step width can be determined for calibrated cameras. Six candidate convnets were evaluated, combining three different standard architectures as networks for feature extraction (backbone) with two different networks for predicting keypoints on the heel (head networks). Using transfer learning, the backbone networks were pre-trained on the ImageNet dataset, and the combined networks (backbone + head) were fine-tuned on data from 184 trials of older, unimpaired adults. The data were recorded at three different locations and consisted of 193 k side-view images and 110 k frontal-view images. We evaluated the six models using the absolute distance on the floor between predicted keypoints and manually labelled keypoints. For the best-performing convnet, the median error was 0.55 cm and the 75th percentile was below 1.26 cm using data from the side-view camera. The predictions are overall accurate, but show some outliers. 
The results indicate potential for future clinical use by automating a key step in marker-less gait parameter extraction.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Graphical abstract<\/jats:title>\n                \n              <\/jats:sec>","DOI":"10.1007\/s11517-024-03189-7","type":"journal-article","created":{"date-parts":[[2024,9,24]],"date-time":"2024-09-24T06:03:00Z","timestamp":1727157780000},"page":"229-237","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Two-step deep-learning identification of heel keypoints from video-recorded gait"],"prefix":"10.1007","volume":"63","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6916-4148","authenticated-orcid":false,"given":"Kjartan","family":"Halvorsen","sequence":"first","affiliation":[]},{"given":"Wei","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Fredrik","family":"Olsson","sequence":"additional","affiliation":[]},{"given":"Anna Cristina","family":"\u00c5berg","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,9,18]]},"reference":[{"key":"3189_CR1","doi-asserted-by":"publisher","unstructured":"\u00c5hman HB, Berglund L, Cedervall Y, Kilander L, Giedraitis V, McKee KJ, Ingelsson M, Rosendahl E, \u00c5berg AC (2020) Dual-task tests predict conversion to dementia\u2014a prospective memory-clinic-based cohort study. Int J Environ Res Public Health 17(21):8129. https:\/\/doi.org\/10.3390\/ijerph17218129","DOI":"10.3390\/ijerph17218129"},{"key":"3189_CR2","doi-asserted-by":"publisher","DOI":"10.1016\/j.gaitpost.2021.08.004","author":"AC \u00c5berg","year":"2021","unstructured":"\u00c5berg AC, Olsson F, \u00c5hman HB, Tarassova O, Arndt A, Giedraitis V, Berglund L, Halvorsen K (2021) Extraction of gait parameters from marker-free video recordings of timed up-and-go tests: validity, inter- and intra-rater reliability. 
Gait & Posture. https:\/\/doi.org\/10.1016\/j.gaitpost.2021.08.004","journal-title":"Gait & Posture"},{"key":"3189_CR3","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1016\/j.gaitpost.2022.03.015","volume":"94","author":"AC \u00c5berg","year":"2022","unstructured":"\u00c5berg AC, Olsson F, \u00c5hman HB, Tarassova O, Arndt A, Giedraitis V, Berglund L, Halvorsen K (2022) Corrigendum to \u201cExtraction of gait parameters from marker-free video recordings of timed up-and-go tests: validity, inter- and intra-rater reliability\u201d [Gait Posture 90 (2021) 489\u2013495]. Gait Posture 94:195\u2013197. https:\/\/doi.org\/10.1016\/j.gaitpost.2022.03.015","journal-title":"Gait Posture"},{"key":"3189_CR4","unstructured":"Bradski G (2000) The OpenCV Library. Dr. Dobb\u2019s Journal of Software Tools"},{"key":"3189_CR5","unstructured":"Cao Z, Hidalgo Martinez G, Simon T, Wei S, Sheikh YA (2019) OpenPose: realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans Pattern Anal Mach Intell"},{"issue":"5","key":"3189_CR6","doi-asserted-by":"publisher","first-page":"1715","DOI":"10.3390\/ijerph17051715","volume":"17","author":"Y Cedervall","year":"2020","unstructured":"Cedervall Y, Stenberg AM, \u00c5hman HB, Giedraitis V, Tinmark F, Berglund L, Halvorsen K, Ingelsson M, Rosendahl E, \u00c5berg AC (2020) Timed up-and-go dual-task testing in the assessment of cognitive function: a mixed methods observational study for development of the UDDGait protocol. Int J Environ Res Public Health 17(5):1715","journal-title":"Int J Environ Res Public Health"},{"issue":"5","key":"3189_CR7","doi-asserted-by":"publisher","first-page":"744","DOI":"10.3390\/sym12050744","volume":"12","author":"W Chen","year":"2020","unstructured":"Chen W, Jiang Z, Guo H, Ni X (2020) Fall detection based on key points of human-skeleton using OpenPose. 
Symmetry 12(5):744","journal-title":"Symmetry"},{"issue":"15","key":"3189_CR8","doi-asserted-by":"publisher","first-page":"17064","DOI":"10.1109\/JSEN.2021.3081188","volume":"21","author":"E D\u2019Antonio","year":"2021","unstructured":"D\u2019Antonio E, Taborri J, Mileti I, Rossi S, Patan\u00e9 F (2021) Validation of a 3D markerless system for gait analysis based on OpenPose and two RGB webcams. IEEE Sens J 21(15):17064\u201317075. https:\/\/doi.org\/10.1109\/JSEN.2021.3081188","journal-title":"IEEE Sens J"},{"key":"3189_CR9","doi-asserted-by":"crossref","unstructured":"D\u2019Antonio E, Taborri J, Palermo E, Rossi S, Patan\u00e9 F (2020) A markerless system for gait analysis based on the OpenPose library. In: 2020 IEEE international instrumentation and measurement technology conference (I2MTC). IEEE, pp 1\u20136","DOI":"10.1109\/I2MTC43012.2020.9128918"},{"key":"3189_CR10","doi-asserted-by":"publisher","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) ImageNet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, pp 248\u2013255. https:\/\/doi.org\/10.1109\/CVPR.2009.5206848","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"3189_CR11","doi-asserted-by":"publisher","unstructured":"Gu X, Deligianni F, Lo B, Chen W, Yang G (2018) Markerless gait analysis based on a single RGB camera. In: 2018 IEEE 15th international conference on wearable and implantable Body Sensor Networks (BSN), pp 42\u201345. https:\/\/doi.org\/10.1109\/BSN.2018.8329654","DOI":"10.1109\/BSN.2018.8329654"},{"issue":"1","key":"3189_CR12","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41467-020-17807-z","volume":"11","author":"\u0141 Kidzi\u0144ski","year":"2020","unstructured":"Kidzi\u0144ski \u0141, Yang B, Hicks JL, Rajagopal A, Delp SL, Schwartz MH (2020) Deep neural networks enable quantitative movement analysis using single-camera videos. 
Nat Commun 11(1):1\u201310","journal-title":"Nat Commun"},{"key":"3189_CR13","doi-asserted-by":"crossref","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436\u2013444","DOI":"10.1038\/nature14539"},{"key":"3189_CR14","doi-asserted-by":"crossref","unstructured":"Li J, Wang C, Zhu H, Mao Y, Fang HS, Lu C (2018) CrowdPose: efficient crowded scenes pose estimation and a new benchmark. arXiv:1812.00324","DOI":"10.1109\/CVPR.2019.01112"},{"key":"3189_CR15","doi-asserted-by":"crossref","unstructured":"Mathis A, Biasi T, Schneider S, Yuksekgonul M, Rogers B, Bethge M, Mathis MW (2021) Pretraining boosts out-of-domain robustness for pose estimation. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 1859\u20131868","DOI":"10.1109\/WACV48630.2021.00190"},{"key":"3189_CR16","doi-asserted-by":"crossref","unstructured":"Nakano N, Sakura T, Ueda K, Omura L, Kimura A, Iino Y, Fukashiro S, Yoshioka S (2020) Evaluation of 3D markerless motion capture accuracy using OpenPose with multiple video cameras. Front Sports Act Living 2","DOI":"10.3389\/fspor.2020.00050"},{"issue":"8","key":"3189_CR17","doi-asserted-by":"publisher","first-page":"2889","DOI":"10.3390\/s21082889","volume":"21","author":"L Needham","year":"2021","unstructured":"Needham L, Evans M, Cosker DP, Colyer SL (2021) Can markerless pose estimation algorithms estimate 3D mass centre positions and velocities during linear sprinting activities? Sensors 21(8):2889. https:\/\/doi.org\/10.3390\/s21082889","journal-title":"Sensors"},{"key":"3189_CR18","unstructured":"Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S (2019) PyTorch: an imperative style, high-performance deep learning library. 
In: Wallach H, Larochelle H, Beygelzimer A, d\u2019Alch\u00e9-Buc F, Fox E, Garnett R (eds) Advances in neural information processing systems, vol.\u00a032. Curran Associates, Inc"},{"key":"3189_CR19","unstructured":"Sandler M, Howard AG, Zhu M, Zhmoginov A, Chen L (2018) Inverted residuals and linear bottlenecks: mobile networks for classification, detection and segmentation. CoRR abs\/1801.04381. arXiv:1801.04381"},{"key":"3189_CR20","doi-asserted-by":"crossref","unstructured":"Shi D, Wei X, Li L, Ren Y, Tan W (2022) End-to-end multi-person pose estimation with transformers. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 11069\u201311078","DOI":"10.1109\/CVPR52688.2022.01079"},{"key":"3189_CR21","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556"},{"issue":"4","key":"3189_CR22","doi-asserted-by":"publisher","first-page":"e1008935","DOI":"10.1371\/journal.pcbi.1008935","volume":"17","author":"J Stenum","year":"2021","unstructured":"Stenum J, Rossi C, Roemmich RT (2021) Two-dimensional video-based analysis of human gait using pose estimation. PLoS Comput Biol 17(4):e1008935","journal-title":"PLoS Comput Biol"},{"key":"3189_CR23","doi-asserted-by":"publisher","first-page":"205","DOI":"10.1016\/j.gaitpost.2021.10.028","volume":"91","author":"YM Tang","year":"2022","unstructured":"Tang YM, Wang YH, Feng XY, Zou QS, Wang Q, Ding J, Shi RCJ, Wang X (2022) Diagnostic value of a vision-based intelligent gait analyzer in screening for gait abnormalities. Gait Posture 91:205\u2013211","journal-title":"Gait Posture"},{"key":"3189_CR24","doi-asserted-by":"crossref","unstructured":"Thanushree M, Bharamagoudra MR, Noorain SZ, SP SM (2023) Hand gesture detection using transfer learning with deep neural networks. In: 2023 IEEE 8th International conference for convergence in technology (I2CT). 
IEEE, pp 1\u20135","DOI":"10.1109\/I2CT57861.2023.10126207"},{"issue":"146","key":"3189_CR25","first-page":"10","volume":"2006","author":"S Tomar","year":"2006","unstructured":"Tomar S (2006) Converting video formats with ffmpeg. Linux Journal 2006(146):10","journal-title":"Linux Journal"},{"issue":"2","key":"3189_CR26","doi-asserted-by":"publisher","first-page":"e0264302","DOI":"10.1371\/journal.pone.0264302","volume":"17","author":"H Wang","year":"2022","unstructured":"Wang H, Sun MH, Zhang H, Dong LY (2022) LHPE-nets: a lightweight 2D and 3D human pose estimation model with well-structural deep networks and multi-view pose sample simplification method. PLoS ONE 17(2):e0264302","journal-title":"PLoS ONE"},{"key":"3189_CR27","doi-asserted-by":"publisher","first-page":"188","DOI":"10.1016\/j.gaitpost.2022.08.008","volume":"97","author":"EP Washabaugh","year":"2022","unstructured":"Washabaugh EP, Shanmugam TA, Ranganathan R, Krishnan C (2022) Comparing the accuracy of open-source pose estimation methods for measuring gait kinematics. Gait Posture 97:188\u2013195","journal-title":"Gait Posture"},{"key":"3189_CR28","doi-asserted-by":"crossref","unstructured":"Xie S, Girshick RB, Doll\u00e1r P, Tu Z, He K (2016) Aggregated residual transformations for deep neural networks. arXiv:1611.05431","DOI":"10.1109\/CVPR.2017.634"},{"issue":"1","key":"3189_CR29","doi-asserted-by":"publisher","first-page":"43","DOI":"10.1109\/JPROC.2020.3004555","volume":"109","author":"F Zhuang","year":"2020","unstructured":"Zhuang F, Qi Z, Duan K, Xi D, Zhu Y, Zhu H, Xiong H, He Q (2020) A comprehensive survey on transfer learning. 
Proc IEEE 109(1):43\u201376","journal-title":"Proc IEEE"}],"container-title":["Medical &amp; Biological Engineering &amp; Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11517-024-03189-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11517-024-03189-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11517-024-03189-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,2]],"date-time":"2025-01-02T08:34:29Z","timestamp":1735806869000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11517-024-03189-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,9,18]]},"references-count":29,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["3189"],"URL":"https:\/\/doi.org\/10.1007\/s11517-024-03189-7","relation":{},"ISSN":["0140-0118","1741-0444"],"issn-type":[{"value":"0140-0118","type":"print"},{"value":"1741-0444","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,9,18]]},"assertion":[{"value":"22 November 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 August 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 September 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing 
interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interest"}}]}}