{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T18:35:55Z","timestamp":1772130955235,"version":"3.50.1"},"reference-count":53,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,10,1]],"date-time":"2024-10-01T00:00:00Z","timestamp":1727740800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"ICT Creative Consilience Program"},{"name":"Institute of Information & Communications Technology Planning & Evaluation","award":["IITP-2024-2020-0-01819"],"award-info":[{"award-number":["IITP-2024-2020-0-01819"]}]},{"name":"Artificial Intelligence Convergence Innovation Human Resources Development, Kyung Hee University","award":["RS-2022-00155911"],"award-info":[{"award-number":["RS-2022-00155911"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2025,2,28]]},"abstract":"<jats:p>\n            Marker-based optical motion capture (mocap) systems are increasingly utilized for acquiring 3D human motion, offering advantages in capturing the subtle nuances of human movement, style consistency, and ease of obtaining desired motion. Motion data acquisition via mocap typically requires laborious marker labeling and motion reconstruction; recent deep-learning solutions have aimed to automate this process. However, such solutions generally presuppose a fixed marker configuration to reduce learning complexity, thereby limiting flexibility. To overcome this limitation, we introduce DAMO, an end-to-end deep solver that proficiently infers arbitrary marker configurations and optimizes pose reconstruction. DAMO outperforms state-of-the-art methods such as SOMA and MoCap-Solver in scenarios with significant noise and unknown marker configurations. 
We expect that DAMO will meet various practical demands such as facilitating dynamic marker configuration adjustments during capture sessions, processing marker clouds irrespective of whether they employ mixed or entirely unknown marker configurations, and allowing custom marker configurations to suit distinct capture scenarios. DAMO code and pretrained models are available at\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/CritBear\/damo\">https:\/\/github.com\/CritBear\/damo<\/jats:ext-link>\n            .\n          <\/jats:p>","DOI":"10.1145\/3695865","type":"journal-article","created":{"date-parts":[[2024,9,14]],"date-time":"2024-09-14T07:46:37Z","timestamp":1726299997000},"page":"1-14","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["DAMO: A Deep Solver for Arbitrary Marker Configuration in Optical Motion Capture"],"prefix":"10.1145","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5168-9563","authenticated-orcid":false,"given":"KyeongMin","family":"Kim","sequence":"first","affiliation":[{"name":"Kyung Hee University, Yongin, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5805-3397","authenticated-orcid":false,"given":"SeungWon","family":"Seo","sequence":"additional","affiliation":[{"name":"Department of Software Convergence, Kyung Hee University, Yongin, Korea (the Republic of)"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7693-7674","authenticated-orcid":false,"given":"DongHeun","family":"Han","sequence":"additional","affiliation":[{"name":"Department of Software Convergence, Kyung Hee University, Yongin, Korea (the Republic of)"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5292-4342","authenticated-orcid":false,"given":"HyeongYeop","family":"Kang","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Korea University, Seongbuk-gu, Korea (the Republic 
of)"}]}],"member":"320","published-online":{"date-parts":[[2024,10]]},"reference":[{"key":"e_1_3_2_2_1","unstructured":"Advanced Computing Center for the Arts and Design (ACCAD) MoCap Dataset. 2019. Retrieved from https:\/\/accad.osu.edu\/research\/motion-lab\/mocap-system-and-data"},{"key":"e_1_3_2_3_1","unstructured":"Carnegie Mellon University (CMU) MoCap Dataset. 2019. Retrieved from http:\/\/mocap.cs.cmu.edu\/"},{"key":"e_1_3_2_4_1","doi-asserted-by":"publisher","DOI":"10.5555\/928525"},{"key":"e_1_3_2_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298751"},{"key":"e_1_3_2_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-011-0671-y"},{"key":"e_1_3_2_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3344383"},{"key":"e_1_3_2_8_1","first-page":"916","article-title":"Sensor-independent target state estimator design and evaluation","author":"Asseo S. J.","year":"1982","unstructured":"S. J. Asseo and R. J. Ardila. 1982. Sensor-independent target state estimator design and evaluation. NAECON 1982 (1982), 916\u2013924.","journal-title":"NAECON 1982"},{"issue":"4","key":"e_1_3_2_9_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3450626.3459681","article-title":"MoCap-Solver: A neural solver for optical motion capture data","volume":"40","author":"Chen Kang","year":"2021","unstructured":"Kang Chen, Yupan Wang, Song-Hai Zhang, Sen-Zhe Xu, Weidong Zhang, and Shi-Min Hu. 2021. MoCap-Solver: A neural solver for optical motion capture data. ACM Trans. Graph. 40, 4 (2021), 1\u201311.","journal-title":"ACM Trans. Graph."},{"key":"e_1_3_2_10_1","unstructured":"Klaus Dorfm\u00fcller-Ulhaas. 2007. Robust optical user motion tracking using a kalman filter. 
Fakult\u00e4t f\u00fcr Angewandte Informatik."},{"key":"e_1_3_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.391"},{"key":"e_1_3_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01093"},{"key":"e_1_3_2_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-22514-8_14"},{"key":"e_1_3_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201399"},{"key":"e_1_3_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CA.2000.889046"},{"key":"e_1_3_2_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0167-9457(01)00050-1"},{"key":"e_1_3_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201302"},{"key":"e_1_3_2_18_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-50115-4_68"},{"key":"e_1_3_2_19_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-33950-0_65"},{"key":"e_1_3_2_20_1","doi-asserted-by":"publisher","unstructured":"R. E. Kalman. 1960. A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82 1 (March 1960) 35\u201345. DOI:10.1115\/1.3662552","DOI":"10.1115\/1.3662552"},{"key":"e_1_3_2_21_1","unstructured":"Sai Charan Mahadevan Karunanidhi Durai Kumar Huang Geng. 2020. SFU Motion Capture Database. Retrieved from https:\/\/mocap.cs.sfu.ca\/"},{"key":"e_1_3_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/1186223.1186260"},{"key":"e_1_3_2_23_1","doi-asserted-by":"publisher","unstructured":"Miriam Klous and Sander Klous. 2010. Marker-based reconstruction of the kinematics of a chain of segments: A new method that incorporates joint kinematic constraints. Journal of Biomechanical Engineering 132 7 (May 2010) 074501. DOI:10.1115\/1.4001396","DOI":"10.1115\/1.4001396"},{"key":"e_1_3_2_24_1","unstructured":"Taras Kucherenko Jonas Beskow and Hedvig Kjellstr\u00f6m. 2018. A neural network approach to missing marker reconstruction in human motion capture. 
Retrieved from https:\/\/arxiv.org\/abs\/1803.02665"},{"key":"e_1_3_2_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00959"},{"key":"e_1_3_2_26_1","doi-asserted-by":"publisher","DOI":"10.1090\/qam\/10666"},{"key":"e_1_3_2_27_1","unstructured":"Lei Li James McCann Nancy Pollard and Christos Faloutsos. 2010. Bolero: A principled technique for including bone length constraints in motion capture occlusion filling. In Proceedings of the ACM SIGGRAPH\/Eurographics Symposium on Computer Animation."},{"key":"e_1_3_2_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2816795.2818013"},{"key":"e_1_3_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661273"},{"key":"e_1_3_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00554"},{"key":"e_1_3_2_31_1","doi-asserted-by":"publisher","DOI":"10.1137\/0111030"},{"key":"e_1_3_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2015.7353481"},{"key":"e_1_3_2_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2014.6907690"},{"issue":"7","key":"e_1_3_2_34_1","article-title":"Mocap database hdm05","volume":"2","author":"M\u00fcller Meinard","year":"2007","unstructured":"Meinard M\u00fcller, Tido R\u00f6der, Michael Clausen, Bernhard Eberhardt, Bj\u00f6rn Kr\u00fcger, and Andreas Weber. 2007. Mocap database hdm05. Institut f\u00fcr Informatik II, Universit\u00e4t Bonn 2, 7 (2007). Retrieved from https:\/\/resources.mpi-inf.mpg.de\/HDM05\/","journal-title":"Institut f\u00fcr Informatik II, Universit\u00e4t Bonn"},{"key":"e_1_3_2_35_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2021.103219"},{"key":"e_1_3_2_36_1","doi-asserted-by":"publisher","DOI":"10.1063\/1.4822961"},{"key":"e_1_3_2_37_1","first-page":"652","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Qi Charles R.","year":"2017","unstructured":"Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. 2017a. 
Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 652\u2013660."},{"key":"e_1_3_2_38_1","article-title":"Pointnet++: Deep hierarchical feature learning on point sets in a metric space","volume":"30","author":"Qi Charles Ruizhongtai","year":"2017","unstructured":"Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. 2017b. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Info. Process. Syst. 30 (2017).","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_2_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.701"},{"key":"e_1_3_2_40_1","doi-asserted-by":"publisher","DOI":"10.5555\/645315.649191"},{"key":"e_1_3_2_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2004.02.011"},{"key":"e_1_3_2_42_1","doi-asserted-by":"publisher","DOI":"10.21236\/ADA406704"},{"key":"e_1_3_2_43_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-62655-6_10"},{"key":"e_1_3_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2015.7139260"},{"issue":"1","key":"e_1_3_2_45_1","first-page":"1","article-title":"Least-squares rigid motion using svd","volume":"1","author":"Sorkine-Hornung Olga","year":"2017","unstructured":"Olga Sorkine-Hornung and Michael Rabinovich. 2017. Least-squares rigid motion using svd. Computing 1, 1 (2017), 1\u20135.","journal-title":"Computing"},{"key":"e_1_3_2_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/MFI.2016.7849550"},{"key":"e_1_3_2_47_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.114"},{"key":"e_1_3_2_48_1","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Adv. Neural Info. Process. Syst. 
30 (2017).","journal-title":"Adv. Neural Info. Process. Syst."},{"key":"e_1_3_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459787"},{"key":"e_1_3_2_50_1","first-page":"1912","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Wu Zhirong","year":"2015","unstructured":"Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 2015. 3D shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1912\u20131920."},{"key":"e_1_3_2_51_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00563"},{"key":"e_1_3_2_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00344"},{"key":"e_1_3_2_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00571"},{"key":"e_1_3_2_54_1","first-page":"245","volume-title":"Proceedings of the ACM SIGGRAPH\/Eurographics Symposium on Computer Animation","author":"Zordan Victor Brian","year":"2003","unstructured":"Victor Brian Zordan and Nicholas C. Van Der Horst. 2003. Mapping optical motion capture data to skeletal motion using a physical model. In Proceedings of the ACM SIGGRAPH\/Eurographics Symposium on Computer Animation. 
Citeseer, 245\u2013250."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3695865","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3695865","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T00:04:29Z","timestamp":1750291469000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3695865"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10]]},"references-count":53,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,2,28]]}},"alternative-id":["10.1145\/3695865"],"URL":"https:\/\/doi.org\/10.1145\/3695865","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10]]},"assertion":[{"value":"2023-07-29","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-03","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-10-01","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}