{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,11]],"date-time":"2025-09-11T20:46:20Z","timestamp":1757623580606,"version":"3.44.0"},"reference-count":90,"publisher":"Springer Science and Business Media LLC","issue":"9","license":[{"start":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T00:00:00Z","timestamp":1749600000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T00:00:00Z","timestamp":1749600000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000781","name":"European Research Council","doi-asserted-by":"publisher","award":["770784"],"award-info":[{"award-number":["770784"]}],"id":[{"id":"10.13039\/501100000781","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Monocular egocentric 3D human motion capture remains a significant challenge, particularly under conditions of low lighting and fast movements, which are common in head-mounted device applications. Existing methods that rely on RGB cameras often fail under these conditions. To address these limitations, we introduce EventEgo3D++, the first approach that leverages a monocular event camera with a fisheye lens for 3D human motion capture. Event cameras excel in high-speed scenarios and varying illumination due to their high temporal resolution, providing reliable cues for accurate 3D human motion capture. EventEgo3D++ leverages the LNES representation of event streams to enable precise 3D reconstructions. We have also developed a mobile head-mounted device (HMD) prototype equipped with an event camera, capturing a comprehensive dataset that includes real event observations from both controlled studio environments and in-the-wild settings, in addition to a synthetic dataset. Additionally, to provide a more holistic dataset, we include allocentric RGB streams that offer different perspectives of the HMD wearer, along with their corresponding SMPL body model. Our experiments demonstrate that EventEgo3D++ achieves superior 3D accuracy and robustness compared to existing solutions, even in challenging conditions. Moreover, our method supports real-time 3D pose updates at a rate of 140Hz. This work is an extension of the EventEgo3D approach (CVPR 2024) and further advances the state of the art in egocentric 3D human motion capture. 
For more details, visit the project page at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/eventego3d.mpi-inf.mpg.de\" ext-link-type=\"uri\">https:\/\/eventego3d.mpi-inf.mpg.de<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s11263-025-02489-1","type":"journal-article","created":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T11:48:15Z","timestamp":1749642495000},"page":"6432-6455","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["EventEgo3D++: 3D Human Motion Capture from A\u00a0Head-Mounted Event Camera"],"prefix":"10.1007","volume":"133","author":[{"given":"Christen","family":"Millerdurai","sequence":"first","affiliation":[]},{"given":"Hiroyasu","family":"Akada","sequence":"additional","affiliation":[]},{"given":"Jian","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Diogo","family":"Luvizon","sequence":"additional","affiliation":[]},{"given":"Alain","family":"Pagani","sequence":"additional","affiliation":[]},{"given":"Didier","family":"Stricker","sequence":"additional","affiliation":[]},{"given":"Christian","family":"Theobalt","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1630-2006","authenticated-orcid":false,"given":"Vladislav","family":"Golyanik","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,11]]},"reference":[{"key":"2489_CR1","doi-asserted-by":"crossref","unstructured":"Akada, H., Wang, J., Shimada, S. et\u00a0al (2022). Unrealego: A new dataset for robust egocentric 3d human motion capture. In: European Conference on Computer Vision (ECCV)","DOI":"10.1007\/978-3-031-20068-7_1"},{"key":"2489_CR2","doi-asserted-by":"crossref","unstructured":"Akada, H., Wang, J., Golyanik, V. et\u00a0al (2024). 3d human pose perception from egocentric stereo videos. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.00079"},{"key":"2489_CR3","unstructured":"Akada, H., Wang, J., Golyanik, V. et\u00a0al (2025). Bring your rear cameras for egocentric 3d human pose estimation. arXiv preprint arXiv:2503.11652"},{"key":"2489_CR4","doi-asserted-by":"crossref","unstructured":"Aliakbarian, S., Cameron, P., Bogo, F. et\u00a0al (2022). Flag: Flow-based 3d avatar generation from sparse observations. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52688.2022.01290"},{"key":"2489_CR5","unstructured":"Bazarevsky, V., Grishchenko, I., Raveendran, K. et\u00a0al (2020). Blazepose: On-device real-time body pose tracking. arXiv preprint arXiv:2006.10204"},{"key":"2489_CR6","unstructured":"Blender. (2020). Blender - a 3D modelling and rendering package. Blender Foundation, Blender Institute, Amsterdam, http:\/\/www.blender.org"},{"key":"2489_CR7","unstructured":"Captury. (2024). Capturystudio - markerless mocap of humans from pre-recorded, multi-view video footage. http:\/\/www.thecaptury.com\/"},{"key":"2489_CR8","doi-asserted-by":"crossref","unstructured":"Chen, J., Shi, H., Ye, Y. et\u00a0al (2022). Efficient human pose estimation via 3d event point cloud. In: International Conference on 3D Vision (3DV)","DOI":"10.1109\/3DV57658.2022.00023"},{"key":"2489_CR9","unstructured":"CMU. (2006). Cmu graphics lab motion capture database. http:\/\/mocap.cs.cmu.edu\/"},{"key":"2489_CR10","doi-asserted-by":"crossref","unstructured":"Dai, P., Zhang, Y., Liu, T. et\u00a0al (2024). 
Hmd-poser: On-device real-time human motion tracking from scalable sparse observations. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.00089"},{"key":"2489_CR11","doi-asserted-by":"crossref","unstructured":"Du, Y., Kips, R., Pumarola, A. et\u00a0al (2023). Avatars grow legs: Generating smooth human motion from sparse tracking inputs with diffusion model. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52729.2023.00054"},{"key":"2489_CR12","unstructured":"DVXplorer Mini. (2021). Dvxplorer mini specification. https:\/\/netsket.kr\/img\/custom\/board\/DVXplorer-Mini.pdf"},{"key":"2489_CR13","unstructured":"EasyMoCap. (2021). Easymocap - make human motion capture easier. https:\/\/github.com\/zju3dv\/EasyMocap"},{"issue":"3","key":"2489_CR14","doi-asserted-by":"publisher","first-page":"501","DOI":"10.1109\/TPAMI.2016.2557779","volume":"39","author":"A Elhayek","year":"2016","unstructured":"Elhayek, A., de Aguiar, E., Jain, A., et al. (2016). Marconi-convnet-based marker-less motion capture in outdoor and indoor scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 39(3), 501\u2013514.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)"},{"issue":"1","key":"2489_CR15","doi-asserted-by":"publisher","first-page":"154","DOI":"10.1109\/TPAMI.2020.3008413","volume":"44","author":"G Gallego","year":"2020","unstructured":"Gallego, G., Delbr\u00fcck, T., Orchard, G., et al. (2020). Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 44(1), 154\u2013180.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)"},{"key":"2489_CR16","doi-asserted-by":"crossref","unstructured":"Gehrig, D., Gehrig, M., Hidalgo-Carri\u00f3, J. et\u00a0al (2020). Video to events: Recycling video datasets for event cameras. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR42600.2020.00364"},{"key":"2489_CR17","doi-asserted-by":"publisher","first-page":"381","DOI":"10.1007\/s11263-018-1118-y","volume":"127","author":"A Gilbert","year":"2019","unstructured":"Gilbert, A., Trumble, M., Malleson, C., et al. (2019). Fusing visual and inertial sensors with semantics for 3d human pose estimation. International Journal of Computer Vision (IJCV), 127, 381\u2013397.","journal-title":"International Journal of Computer Vision (IJCV)"},{"key":"2489_CR18","doi-asserted-by":"crossref","unstructured":"Guzov, V., Mir, A., Sattler, T. et\u00a0al (2021). Human poseitioning system (hps): 3d human pose estimation and self-localization in large scenes from body-mounted sensors. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR46437.2021.00430"},{"key":"2489_CR19","doi-asserted-by":"crossref","unstructured":"Guzov, V., Jiang, Y., Hong, F. et\u00a0al (2024). $$\\text{Hmd}^2$$: Environment-aware motion generation from single egocentric head-mounted device. arXiv preprint arXiv:2409.13426","DOI":"10.1109\/3DV66043.2025.00132"},{"key":"2489_CR20","doi-asserted-by":"crossref","unstructured":"Helten, T., Muller, M., Seidel, HP. et\u00a0al (2013). Real-time body tracking with one depth camera and inertial sensors. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/ICCV.2013.141"},{"issue":"6","key":"2489_CR21","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3272127.3275108","volume":"37","author":"Y Huang","year":"2018","unstructured":"Huang, Y., Kaufmann, M., Aksan, E., et al. 
(2018). Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. ACM Transactions on Graphics (TOG), 37(6), 1\u201315.","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"2489_CR22","unstructured":"Itseez. (2015). Open source computer vision library. https:\/\/github.com\/itseez\/opencv"},{"key":"2489_CR23","doi-asserted-by":"crossref","unstructured":"Jiang, J., Streli, P., Qiu, H. et\u00a0al (2022a). Avatarposer: Articulated full-body pose tracking from sparse motion sensing. In: European conference on computer vision (ECCV)","DOI":"10.1007\/978-3-031-20065-6_26"},{"key":"2489_CR24","unstructured":"Jiang, J., Streli, P., Meier, M. et\u00a0al (2023). Egoposer: Robust real-time ego-body pose estimation in large scenes. arXiv e-prints pp arXiv\u20132308"},{"issue":"9","key":"2489_CR25","doi-asserted-by":"publisher","first-page":"6416","DOI":"10.1109\/TPAMI.2024.3380648","volume":"46","author":"J Jiang","year":"2024","unstructured":"Jiang, J., Li, J., Zhang, B., et al. (2024). Evhandpose: Event-based 3d hand pose estimation with sparse supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(9), 6416\u20136430.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2489_CR26","doi-asserted-by":"crossref","unstructured":"Jiang, J., Zhou, X., Wang, B. et\u00a0al (2024b). Complementing event streams and rgb frames for hand mesh reconstruction. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.02356"},{"key":"2489_CR27","doi-asserted-by":"crossref","unstructured":"Jiang, J., Streli, P., Luo, X. et\u00a0al (2025). Manikin: biomechanically accurate neural inverse kinematics for human motion estimation. In: European Conference on Computer Vision (ECCV)","DOI":"10.1007\/978-3-031-72627-9_8"},{"key":"2489_CR28","doi-asserted-by":"crossref","unstructured":"Jiang, Y., Ye, Y., Gopinath, D. et\u00a0al (2022b). Transformer inertial poser: Real-time human motion reconstruction from sparse imus with simultaneous terrain generation. In: ACM SIGGRAPH Asia Conference","DOI":"10.1145\/3550469.3555428"},{"key":"2489_CR29","doi-asserted-by":"crossref","unstructured":"Kang, T., Lee, Y. (2024). Attention-propagation network for egocentric heatmap to 3d pose lifting. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.00086"},{"key":"2489_CR30","doi-asserted-by":"crossref","unstructured":"Kang, T., Lee, K., Zhang, J. et\u00a0al (2023). Ego3dpose: Capturing 3d cues from binocular egocentric views. In: ACM SIGGRAPH Asia Conference","DOI":"10.1145\/3610548.3618147"},{"issue":"2","key":"2489_CR31","first-page":"87","volume":"4","author":"DG Kendall","year":"1989","unstructured":"Kendall, D. G. (1989). A survey of the statistical theory of shape. Statistical Science, 4(2), 87\u201399.","journal-title":"Statistical Science"},{"key":"2489_CR32","doi-asserted-by":"publisher","first-page":"103149","DOI":"10.1109\/ACCESS.2020.2996661","volume":"8","author":"N Khan","year":"2020","unstructured":"Khan, N., Iqbal, K., & Martini, M. G. (2020). Lossless compression of data from static and mobile dynamic vision sensors-performance and trade-offs. IEEE Access, 8, 103149\u2013103163.","journal-title":"IEEE Access"},{"key":"2489_CR33","doi-asserted-by":"crossref","unstructured":"Khirodkar, R., Bansal, A., Ma, L., et\u00a0al (2023). Ego-humans: An ego-centric 3d multi-human benchmark. 
In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV51070.2023.01814"},{"key":"2489_CR34","unstructured":"Kingma, D., Ba, J. (2015). Adam: A method for stochastic optimization. In: International Conference on Learning Representations (ICLR)"},{"key":"2489_CR35","unstructured":"Lan, C., Yin, Z., Basu, A. et\u00a0al (2023). Tracking fast by learning slow: An event-based speed adaptive hand tracker leveraging knowledge in rgb domain. arXiv preprint arXiv:2302.14430"},{"key":"2489_CR36","doi-asserted-by":"crossref","unstructured":"Lee, S., Starke, S., Ye, Y. et\u00a0al (2023). Questenvsim: Environment-aware simulated motion tracking from sparse sensors. In: ACM SIGGRAPH Conference","DOI":"10.1145\/3588432.3591504"},{"key":"2489_CR37","unstructured":"Lensagon BF10M14522S118C (2020). Lensagon bf10m14522s118 specification. https:\/\/www.lensation.de\/pdf\/BF10M14522S118.pdf"},{"key":"2489_CR38","doi-asserted-by":"crossref","unstructured":"Li, J., Liu, K., Wu, J. (2023a). Ego-body pose estimation via ego-head pose estimation. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52729.2023.01644"},{"key":"2489_CR39","doi-asserted-by":"crossref","unstructured":"Li, J., Liu, K., Wu, J. (2023b). Ego-body pose estimation via ego-head pose estimation. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52729.2023.01644"},{"key":"2489_CR40","doi-asserted-by":"publisher","first-page":"8880","DOI":"10.1109\/TMM.2023.3242551","volume":"25","author":"Y Liu","year":"2023","unstructured":"Liu, Y., Yang, J., Gu, X., et al. (2023). Egofish3d: Egocentric 3d pose estimation from a fisheye camera via self-supervised learning. IEEE Transactions on Multimedia, 25, 8880\u20138891.","journal-title":"IEEE Transactions on Multimedia"},{"key":"2489_CR41","doi-asserted-by":"crossref","unstructured":"Loper, M., Mahmood, N., Romero, J. et\u00a0al (2015). SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (TOG) 34(6):248:1\u2013248:16","DOI":"10.1145\/2816795.2818013"},{"key":"2489_CR42","unstructured":"Luo, Z., Hachiuma, R., Yuan, Y. et\u00a0al (2021). Dynamics-regulated kinematic policy for egocentric pose estimation. In: Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"2489_CR43","doi-asserted-by":"crossref","unstructured":"Malleson, C., Gilbert, A., Trumble, M. et\u00a0al (2017). Real-time full-body motion capture from video and imus. In: 2017 international conference on 3D vision (3DV)","DOI":"10.1109\/3DV.2017.00058"},{"key":"2489_CR44","unstructured":"MathWorks. (2023). Matlab version: 9.14.0 (r2023a). https:\/\/www.mathworks.com"},{"key":"2489_CR45","doi-asserted-by":"crossref","unstructured":"Mehta, D., Sotnychenko, O., Mueller, F. et\u00a0al (2018). Single-shot multi-person 3d pose estimation from monocular rgb. In: International Conference on 3D Vision (3DV)","DOI":"10.1109\/3DV.2018.00024"},{"key":"2489_CR46","doi-asserted-by":"crossref","unstructured":"Millerdurai, C., Akada, H., Wang, J. et\u00a0al (2024a). Eventego3d: 3d human motion capture from egocentric event streams. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1007\/s11263-025-02489-1"},{"key":"2489_CR47","doi-asserted-by":"crossref","unstructured":"Millerdurai, C., Luvizon, D., Rudnev, V. et\u00a0al (2024b). 3d pose estimation of two interacting hands from a monocular event camera. 
In: International Conference on 3D Vision (3DV)","DOI":"10.1109\/3DV62453.2024.00008"},{"key":"2489_CR48","doi-asserted-by":"crossref","unstructured":"Muglikar, M., Gehrig, M., Gehrig, D. et\u00a0al (2021). How to calibrate your event camera. In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops","DOI":"10.1109\/CVPRW53098.2021.00155"},{"key":"2489_CR49","doi-asserted-by":"crossref","unstructured":"Nehvi, J., Golyanik, V., Mueller, F. et\u00a0al (2021). Differentiable event stream simulator for non-rigid 3d tracking. In: Computer Vision and Pattern Recognition (CVPR) Workshops","DOI":"10.1109\/CVPRW53098.2021.00143"},{"key":"2489_CR50","doi-asserted-by":"crossref","unstructured":"Pan, X., Charron, N., Yang, Y. et\u00a0al (2023). Aria digital twin: A new benchmark dataset for egocentric 3d machine perception. In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV51070.2023.01842"},{"key":"2489_CR51","doi-asserted-by":"crossref","unstructured":"Park, J., Moon, G., Xu, W. et\u00a0al (2024). 3d hand sequence recovery from real blurry images and event stream. In: European Conference on Computer Vision (ECCV)","DOI":"10.1007\/978-3-031-73202-7_20"},{"key":"2489_CR52","unstructured":"Paszke, A., Gross, S., Massa, F. et\u00a0al (2019). Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems (NeurIPS)"},{"key":"2489_CR53","doi-asserted-by":"crossref","unstructured":"Pavlakos, G., Zhu, L., Zhou, X. et\u00a0al (2018). Learning to estimate 3d human pose and shape from a single color image. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR.2018.00055"},{"key":"2489_CR54","unstructured":"Rebecq, H., Gehrig, D., Scaramuzza, D. (2018). Esim: an open event camera simulator. In: Conference on Robot Learning (CORL)"},{"key":"2489_CR55","doi-asserted-by":"crossref","unstructured":"Rebecq, H., Ranftl, R., Koltun, V. et\u00a0al (2019a). Events-to-video: Bringing modern computer vision to event cameras. Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR.2019.00398"},{"issue":"6","key":"2489_CR56","doi-asserted-by":"publisher","first-page":"1964","DOI":"10.1109\/TPAMI.2019.2963386","volume":"43","author":"H Rebecq","year":"2019","unstructured":"Rebecq, H., Ranftl, R., Koltun, V., et al. (2019). High speed and high dynamic range video with an event camera. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 43(6), 1964\u20131980.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)"},{"issue":"6","key":"2489_CR57","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2980179.2980235","volume":"35","author":"H Rhodin","year":"2016","unstructured":"Rhodin, H., Richardt, C., Casas, D., et al. (2016). Egocap: egocentric marker-less motion capture with two fisheye cameras. ACM Transactions on Graphics (TOG), 35(6), 1\u201311.","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"2489_CR58","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention (MICCAI)","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"2489_CR59","doi-asserted-by":"crossref","unstructured":"Rudnev, V., Golyanik, V., Wang, J. et\u00a0al (2021). Eventhands: Real-time neural 3d hand pose estimation from an event stream. 
In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV48922.2021.01216"},{"key":"2489_CR60","doi-asserted-by":"crossref","unstructured":"Rudnev, V., Elgharib, M., Theobalt, C. et\u00a0al (2023). Eventnerf: Neural radiance fields from a single colour event camera. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52729.2023.00483"},{"key":"2489_CR61","doi-asserted-by":"crossref","unstructured":"Scaramuzza, D., Martinelli, A., Siegwart, R. (2006). A toolbox for easily calibrating omnidirectional cameras. In: IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)","DOI":"10.1109\/IROS.2006.282372"},{"key":"2489_CR62","doi-asserted-by":"crossref","unstructured":"Schiopu, I., Bilcu, RC. (2023). Entropy coding-based lossless compression of asynchronous event sequences. In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops","DOI":"10.1109\/CVPRW59228.2023.00407"},{"key":"2489_CR63","doi-asserted-by":"crossref","unstructured":"Tome, D., Peluse, P., Agapito, L. et\u00a0al (2019). xr-egopose: Egocentric 3d human pose from an hmd camera. In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV.2019.00782"},{"issue":"6","key":"2489_CR64","doi-asserted-by":"publisher","first-page":"6794","DOI":"10.1109\/TPAMI.2020.3029700","volume":"45","author":"D Tome","year":"2020","unstructured":"Tome, D., Alldieck, T., Peluse, P., et al. (2020). Selfpose: 3d egocentric pose estimation from a headset mounted camera. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 45(6), 6794\u20136806.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)"},{"key":"2489_CR65","doi-asserted-by":"crossref","unstructured":"Varol, G., Romero, J., Martin, X. et\u00a0al (2017). Learning from synthetic humans. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR.2017.492"},{"issue":"8","key":"2489_CR66","doi-asserted-by":"publisher","first-page":"1533","DOI":"10.1109\/TPAMI.2016.2522398","volume":"38","author":"T Von Marcard","year":"2016","unstructured":"Von Marcard, T., Pons-Moll, G., & Rosenhahn, B. (2016). Human pose estimation from video and imus. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(8), 1533\u20131547.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)"},{"key":"2489_CR67","doi-asserted-by":"crossref","unstructured":"Von\u00a0Marcard, T., Rosenhahn, B., Black, MJ. et\u00a0al (2017). Sparse inertial poser: Automatic 3d human pose estimation from sparse imus. In: Computer graphics forum","DOI":"10.1111\/cgf.13131"},{"key":"2489_CR68","doi-asserted-by":"crossref","unstructured":"Wang, J., Liu, L., Xu, W. et\u00a0al (2021). Estimating egocentric 3d human pose in global space. In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV48922.2021.01130"},{"key":"2489_CR69","doi-asserted-by":"crossref","unstructured":"Wang, J., Liu, L., Xu, W. et\u00a0al (2022a). Estimating egocentric 3d human pose in the wild with external weak supervision. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52688.2022.01281"},{"key":"2489_CR70","doi-asserted-by":"crossref","unstructured":"Wang, J., Luvizon, D., Xu, W. et\u00a0al (2023). Scene-aware egocentric 3d human pose estimation. 
Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52729.2023.01252"},{"key":"2489_CR71","doi-asserted-by":"crossref","unstructured":"Wang, J., Cao, Z., Luvizon, D. et\u00a0al (2024a). Egocentric whole-body motion capture with fisheyevit and diffusion-based motion refinement. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.00080"},{"key":"2489_CR72","doi-asserted-by":"crossref","unstructured":"Wang, J., Cao, Z., Luvizon, D. et\u00a0al (2024b). Egocentric whole-body motion capture with fisheyevit and diffusion-based motion refinement. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52733.2024.00080"},{"key":"2489_CR73","doi-asserted-by":"crossref","unstructured":"Wang, Z., Chaney, K., Daniilidis, K. (2022b). Evac3d: From event-based apparent contours to 3d models via continuous visual hulls. In: European Conference on Computer Vision (ECCV)","DOI":"10.1007\/978-3-031-20071-7_17"},{"key":"2489_CR74","doi-asserted-by":"crossref","unstructured":"Winkler, A., Won, J., Ye, Y. (2022). Questsim: Human motion tracking from sparse sensors with simulated avatars. In: ACM SIGGRAPH Asia Conference","DOI":"10.1145\/3550469.3555411"},{"key":"2489_CR75","doi-asserted-by":"crossref","unstructured":"Xu, L., Xu, W., Golyanik, V. et\u00a0al (2020). Eventcap: Monocular 3d capture of high-speed human motions using an event camera. In: Computer Vision andn Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR42600.2020.00502"},{"issue":"5","key":"2489_CR76","doi-asserted-by":"publisher","first-page":"2093","DOI":"10.1109\/TVCG.2019.2898650","volume":"25","author":"W Xu","year":"2019","unstructured":"Xu, W., Chatterjee, A., Zollhoefer, M., et al. (2019). Mo$$^{2}$$Cap$$^{2}$$: Real-time mobile 3d motion capture with a cap-mounted fisheye camera. IEEE Transactions on Visualization and Computer Graphics (TVCG), 25(5), 2093\u20132101.","journal-title":"IEEE Transactions on Visualization and Computer Graphics (TVCG)"},{"key":"2489_CR77","unstructured":"Xue, Y., Li, H., Leutenegger, S. et\u00a0al (2022). Event-based non-rigid reconstruction from contours. In: British Machine Vision Conference (BMVC)"},{"issue":"4","key":"2489_CR78","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3450626.3459786","volume":"40","author":"X Yi","year":"2021","unstructured":"Yi, X., Zhou, Y., & Xu, F. (2021). Transpose: Real-time 3d human translation and pose estimation with six inertial sensors. ACM Transactions On Graphics (TOG), 40(4), 1\u201313.","journal-title":"ACM Transactions On Graphics (TOG)"},{"key":"2489_CR79","doi-asserted-by":"crossref","unstructured":"Yi, X., Zhou, Y., Habermann, M. et\u00a0al (2022). Physical inertial poser (pip): Physics-aware real-time human motion tracking from sparse inertial sensors. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR52688.2022.01282"},{"issue":"4","key":"2489_CR80","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3592099","volume":"42","author":"X Yi","year":"2023","unstructured":"Yi, X., Zhou, Y., Habermann, M., et al. (2023). Egolocate: Real-time motion capture, localization, and mapping with sparse body-mounted sensors. ACM Transactions on Graphics (TOG), 42(4), 1\u201317.","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"2489_CR81","unstructured":"YouTube (2025). Live encoder settings, bitrates, and resolutions. https:\/\/support.google.com\/youtube\/answer\/2853702?hl=en"},{"key":"2489_CR82","unstructured":"Yu, F., Zhang, Y., Song, S. 
et\u00a0al (2015). Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365"},{"key":"2489_CR83","doi-asserted-by":"crossref","unstructured":"Yuan, Y., Kitani, K. (2019). Ego-pose estimation and forecasting as real-time pd control. In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV.2019.01018"},{"key":"2489_CR84","doi-asserted-by":"crossref","unstructured":"Zahid, S., Rudnev, V., Ilg, E. et\u00a0al (2025). E-3dgs: Event-based novel view rendering of large-scale scenes using 3d gaussian splatting. In: International Conference on 3D Vision (3DV)","DOI":"10.1109\/3DV66043.2025.00090"},{"key":"2489_CR85","doi-asserted-by":"crossref","unstructured":"Zhang, S., Ma, Q., Zhang, Y. et\u00a0al (2022). Egobody: Human body shape and motion of interacting people from head-mounted devices. In: European conference on computer vision (ECCV)","DOI":"10.1007\/978-3-031-20068-7_11"},{"key":"2489_CR86","doi-asserted-by":"crossref","unstructured":"Zhang, S., Ma, Q., Zhang, Y. et\u00a0al (2023). Probabilistic human mesh recovery in 3d scenes from egocentric views. In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV51070.2023.00734"},{"key":"2489_CR87","doi-asserted-by":"crossref","unstructured":"Zhang, Y., You, S., Gevers, T. (2021). Automatic calibration of the fisheye camera for egocentric 3d human pose estimation from a single image. In: Winter Conference on Applications of Computer Vision (WACV)","DOI":"10.1109\/WACV48630.2021.00181"},{"key":"2489_CR88","doi-asserted-by":"crossref","unstructured":"Zhao, D., Wei, Z., Mahmud, J. et\u00a0al (2021). Egoglass: Egocentric-view human pose estimation from an eyeglass frame. In: International Conference on 3D Vision (3DV)","DOI":"10.1109\/3DV53792.2021.00014"},{"key":"2489_CR89","doi-asserted-by":"crossref","unstructured":"Zheng, X., Su, Z., Wen, C. et\u00a0al (2023). Realistic full-body tracking from sparse observations via joint-level modeling. In: Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/ICCV51070.2023.01349"},{"key":"2489_CR90","doi-asserted-by":"crossref","unstructured":"Zou, S., Guo, C., Zuo, X. et\u00a0al (2021). Eventhpe: Event-based 3d human pose and shape estimation. 
In: International Conference on Computer Vision (ICCV)","DOI":"10.1109\/ICCV48922.2021.01081"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02489-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02489-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02489-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,9]],"date-time":"2025-09-09T08:01:58Z","timestamp":1757404918000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02489-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,11]]},"references-count":90,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["2489"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02489-1","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"type":"print","value":"0920-5691"},{"type":"electronic","value":"1573-1405"}],"subject":[],"published":{"date-parts":[[2025,6,11]]},"assertion":[{"value":"18 September 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 May 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}