{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,17]],"date-time":"2026-01-17T19:14:10Z","timestamp":1768677250936,"version":"3.49.0"},"reference-count":73,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T00:00:00Z","timestamp":1701734400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"funder":[{"name":"ERC Consolidator Grant 4DRepLy","award":["770784"],"award-info":[{"award-number":["770784"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2023,12,5]]},"abstract":"<jats:p>\n            Existing methods for 3D tracking from monocular RGB videos predominantly consider articulated and rigid objects (\n            <jats:italic toggle=\"yes\">e.g.<\/jats:italic>\n            , two hands or humans interacting with rigid environments). Modelling dense non-rigid object deformations in this setting (\n            <jats:italic toggle=\"yes\">e.g.<\/jats:italic>\n            when hands are interacting with a face), remained largely unaddressed so far, although such effects can improve the realism of the downstream applications such as AR\/VR, 3D virtual avatar communications, and character animations. This is due to the severe ill-posedness of the monocular view setting and the associated challenges (\n            <jats:italic toggle=\"yes\">e.g.<\/jats:italic>\n            , in acquiring a dataset for training and evaluation or obtaining the reasonable non-uniform stiffness of the deformable object). 
While it is possible to na\u00efvely track multiple non-rigid objects independently using 3D templates or parametric 3D models, such an approach would suffer from multiple artefacts in the resulting 3D estimates such as depth ambiguity, unnatural intra-object collisions and missing or implausible deformations.\n          <\/jats:p>\n          <jats:p>Hence, this paper introduces the first method that addresses the fundamental challenges depicted above and that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos. We model hands as articulated objects inducing non-rigid face deformations during an active interaction. Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system. As a pivotal step in its creation, we process the reconstructed raw 3D shapes with position-based dynamics and an approach for non-uniform stiffness estimation of the head tissues, which results in plausible annotations of the surface deformations, hand-face contact regions and head-hand positions. At the core of our neural approach are a variational auto-encoder supplying the hand-face depth prior and modules that guide the 3D tracking by estimating the contacts and the deformations. Our final 3D hand and face reconstructions are realistic and more plausible compared to several baselines applicable in our setting, both quantitatively and qualitatively. 
https:\/\/vcai.mpi-inf.mpg.de\/projects\/Decaf<\/jats:p>","DOI":"10.1145\/3618329","type":"journal-article","created":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T10:20:48Z","timestamp":1701771648000},"page":"1-16","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["Decaf: Monocular Deformation Capture for Face and Hand Interactions"],"prefix":"10.1145","volume":"42","author":[{"given":"Soshi","family":"Shimada","sequence":"first","affiliation":[{"name":"MPI for Informatics, Germany, SIC, Germany, and VIA Research Center, Germany"}]},{"given":"Vladislav","family":"Golyanik","sequence":"additional","affiliation":[{"name":"MPI for Informatics, Germany and SIC, Germany"}]},{"given":"Patrick","family":"P\u00e9rez","sequence":"additional","affiliation":[{"name":"Valeo.ai, France"}]},{"given":"Christian","family":"Theobalt","sequence":"additional","affiliation":[{"name":"MPI for Informatics, Germany, SIC, Germany, and VIA Research Center, Germany"}]}],"member":"320","published-online":{"date-parts":[[2023,12,5]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.2312\/vcbm.20181230"},{"key":"e_1_2_2_2_1","volume-title":"Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375","author":"Agarap Abien Fred","year":"2018","unstructured":"Abien Fred Agarap. 2018. Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375 (2018)."},{"key":"e_1_2_2_3_1","unstructured":"Aljaz Bozic Pablo Palafox Michael Zoll\u00f6fer Angela Dai Justus Thies and Matthias Nie\u00dfner. 2020. Neural Non-Rigid Tracking. 
(2020)."},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.116"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01219"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01214"},{"key":"e_1_2_2_7_1","volume-title":"EMOCA: Emotion Driven Monocular Face Capture and Animation. In Conference on Computer Vision and Pattern Recognition (CVPR). 20311--20322","author":"Danecek Radek","year":"2022","unstructured":"Radek Danecek, Michael J. Black, and Timo Bolkart. 2022. EMOCA: Emotion Driven Monocular Face Capture and Animation. In Conference on Computer Vision and Pattern Recognition (CVPR). 20311--20322."},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/3DV53792.2021.00088"},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459936"},{"key":"e_1_2_2_10_1","doi-asserted-by":"crossref","unstructured":"Mihai Fieraru Mihai Zanfir Elisabeta Oneata Alin-Ionut Popa Vlad Olaru and Cristian Sminchisescu. 2020. Three-dimensional reconstruction of human interactions. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR42600.2020.00724"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i2.16223"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3082011"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508380"},{"key":"e_1_2_2_14_1","first-page":"219","article-title":"Corrective 3D reconstruction of lips from monocular video","volume":"35","author":"Garrido Pablo","year":"2016","unstructured":"Pablo Garrido, Michael Zollh\u00f6fer, Chenglei Wu, Derek Bradley, Patrick P\u00e9rez, Thabo Beeler, and Christian Theobalt. 2016. Corrective 3D reconstruction of lips from monocular video. ACM Trans. Graph. 35, 6 (2016), 219--1.","journal-title":"ACM Trans. 
Graph."},{"key":"e_1_2_2_15_1","doi-asserted-by":"crossref","unstructured":"Erik G\u00e4rtner Mykhaylo Andriluka Erwin Coumans and Cristian Sminchisescu. 2022a. Differentiable dynamics for articulated 3d human motion reconstruction. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR52688.2022.01284"},{"key":"e_1_2_2_16_1","doi-asserted-by":"crossref","unstructured":"Erik G\u00e4rtner Mykhaylo Andriluka Hongyi Xu and Cristian Sminchisescu. 2022b. Trajectory optimization for physics-based reconstruction of 3d human pose from monocular video. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR52688.2022.01276"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01790-3_4"},{"key":"e_1_2_2_18_1","volume-title":"Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Grady Patrick","unstructured":"Patrick Grady, Chengcheng Tang, Christopher D. Twigg, Minh Vo, Samarth Brahmbhatt, and Charles C. Kemp. 2021. ContactOpt: Optimizing Contact to Improve Grasps. In Conference on Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3083722"},{"key":"e_1_2_2_20_1","volume-title":"German Conference on Pattern Recognition (GCPR).","author":"Habermann Marc","year":"2018","unstructured":"Marc Habermann, Weipeng Xu, Helge Rhodin, Michael Zollhoefer, Gerard Pons-Moll, and Christian Theobalt. 2018. NRST: Non-rigid Surface Tracking from Monocular Video. In German Conference on Pattern Recognition (GCPR)."},{"key":"e_1_2_2_21_1","unstructured":"Kaiming He Xiangyu Zhang Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_22_1","volume-title":"Physical Interaction: Reconstructing Hand-object Interactions with Physics. 
In SIGGRAPH Asia 2022 Conference Papers.","author":"Hu Haoyu","year":"2022","unstructured":"Haoyu Hu, Xinyu Yi, Hao Zhang, Jun-Hai Yong, and Feng Xu. 2022. Physical Interaction: Reconstructing Hand-object Interactions with Physics. In SIGGRAPH Asia 2022 Conference Papers."},{"key":"e_1_2_2_23_1","doi-asserted-by":"crossref","unstructured":"Buzhen Huang Liang Pan Yuan Yang Jingyi Ju and Yangang Wang. 2022. Neural MoCon: Neural Motion Control for Physically Plausible Human Motion Capture. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR52688.2022.00631"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766974"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46484-8_22"},{"key":"e_1_2_2_26_1","doi-asserted-by":"crossref","unstructured":"Navami Kairanda Edgar Tretschk Mohamed Elgharib Christian Theobalt and Vladislav Golyanik. 2022. \u03c6-SfT: Shape-from-Template with a Physics-based Deformation Model. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR52688.2022.00392"},{"key":"e_1_2_2_27_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_2_2_28_1","volume-title":"International Conference on Learning Representations (ICLR).","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Max Welling. 2014. Auto-encoding variational bayes. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_2_29_1","volume-title":"Face touching: a frequent habit that has implications for hand hygiene. American journal of infection control 43, 2","author":"Angela Kwok Yen Lee","year":"2015","unstructured":"Yen Lee Angela Kwok, Jan Gralton, and Mary-Louise McLaws. 2015. 
Face touching: a frequent habit that has implications for hand hygiene. American journal of infection control 43, 2 (2015), 112--114."},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00084"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130813"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/3DV57658.2022.00013"},{"key":"e_1_2_2_33_1","volume-title":"Occlusionfusion: Occlusion-aware motion estimation for real-time dynamic 3d reconstruction. In Computer Vision and Pattern Recognition (CVPR).","author":"Lin Wenbin","year":"2022","unstructured":"Wenbin Lin, Chengwei Zheng, Jun-Hai Yong, and Feng Xu. 2022. Occlusionfusion: Occlusion-aware motion estimation for real-time dynamic 3d reconstruction. In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_34_1","unstructured":"Shaowei Liu Hanwen Jiang Jiarui Xu Sifei Liu and Xiaolong Wang. 2021. Semi-supervised 3d hand-object poses estimation with interactions in time. In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_35_1","volume-title":"Workshop on Computer Vision for AR\/VR at Computer Vision and Pattern Recognition (CVPRW).","author":"Lugaresi Camillo","year":"2019","unstructured":"Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Yong, Juhyun Lee, et al. 2019. Mediapipe: A framework for perceiving and processing reality. In Workshop on Computer Vision for AR\/VR at Computer Vision and Pattern Recognition (CVPRW)."},{"key":"e_1_2_2_36_1","volume-title":"Dynamics-regulated kinematic policy for egocentric pose estimation. Advances in Neural Information Processing Systems (NeurIPS)","author":"Luo Zhengyi","year":"2021","unstructured":"Zhengyi Luo, Ryo Hachiuma, Ye Yuan, and Kris Kitani. 2021. Dynamics-regulated kinematic policy for egocentric pose estimation. 
Advances in Neural Information Processing Systems (NeurIPS) (2021)."},{"key":"e_1_2_2_37_1","volume-title":"Embodied Scene-aware Human Pose Estimation. Advances in Neural Information Processing Systems (NeurIPS)","author":"Luo Zhengyi","year":"2022","unstructured":"Zhengyi Luo, Shun Iwase, Ye Yuan, and Kris Kitani. 2022. Embodied Scene-aware Human Pose Estimation. Advances in Neural Information Processing Systems (NeurIPS) (2022)."},{"key":"e_1_2_2_38_1","volume-title":"International Conference on Machine Learning (ICML).","author":"Maas Andrew L","year":"2013","unstructured":"Andrew L Maas, Awni Y Hannun, Andrew Y Ng, et al. 2013. Rectifier nonlinearities improve neural network acoustic models. In International Conference on Machine Learning (ICML)."},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322958"},{"key":"e_1_2_2_40_1","volume-title":"Black","author":"M\u00fcller Lea","year":"2021","unstructured":"Lea M\u00fcller, Ahmed A. A. Osman, Siyu Tang, Chun-Hao P. Huang, and Michael J. Black. 2021. On Self-Contact and Human Pose. In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jvcir.2007.01.005"},{"key":"e_1_2_2_42_1","volume-title":"Dense Image Registration and Deformable Surface Reconstruction in Presence of Occlusions and Minimal Texture. In International Conference on Computer Vision (ICCV).","author":"Ngo Dat Tien","year":"2015","unstructured":"Dat Tien Ngo, Sanghyuk Park, Anne Jorstad, Alberto Crivellaro, Chang D. Yoo, and Pascal Fua. 2015. Dense Image Registration and Deformable Surface Reconstruction in Presence of Occlusions and Minimal Texture. 
In International Conference on Computer Vision (ICCV)."},{"key":"e_1_2_2_43_1","volume-title":"Pytorch: An Imperative Style","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems (NeurIPS)."},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2018.8593756"},{"key":"e_1_2_2_45_1","unstructured":"Pexels. 2023. Pexels. https:\/\/www.pexels.com\/. Accessed: 2023-10-11."},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58558-7_5"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130883"},{"key":"e_1_2_2_48_1","volume-title":"Proceedings, Part VIII 14","author":"Saito Shunsuke","year":"2016","unstructured":"Shunsuke Saito, Tianye Li, and Hao Li. 2016. Real-time facial segmentation and performance capture from rgb input. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11--14, 2016, Proceedings, Part VIII 14. Springer, 244--261."},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2007.1080"},{"key":"e_1_2_2_50_1","volume-title":"Background Matting: The World is Your Green Screen. In Computer Vision and Pattern Recognition (CVPR).","author":"Sengupta Soumyadip","year":"2020","unstructured":"Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steve Seitz, and Ira Kemelmacher-Shlizerman. 2020. Background Matting: The World is Your Green Screen. 
In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-20047-2_30"},{"key":"e_1_2_2_52_1","volume-title":"Computer Vision and Pattern Recognition Workshops (CVPRW).","author":"Shimada Soshi","year":"2019","unstructured":"Soshi Shimada, Vladislav Golyanik, Christian Theobalt, and Didier Stricker. 2019. Ismogan: Adversarial learning for monocular non-rigid 3d reconstruction. In Computer Vision and Pattern Recognition Workshops (CVPRW)."},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459825"},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3414685.3417877"},{"key":"e_1_2_2_55_1","volume-title":"Killing-fusion: Non-rigid 3d reconstruction without correspondences. In Computer Vision and Pattern Recognition (CVPR).","author":"Slavcheva Miroslava","year":"2017","unstructured":"Miroslava Slavcheva, Maximilian Baust, Daniel Cremers, and Slobodan Ilic. 2017. Killing-fusion: Non-rigid 3d reconstruction without correspondences. In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_56_1","volume-title":"Learning structured output representation using deep conditional generative models. Advances in neural information processing systems (NeurIPS)","author":"Sohn Kihyuk","year":"2015","unstructured":"Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. Advances in neural information processing systems (NeurIPS) (2015)."},{"key":"e_1_2_2_57_1","doi-asserted-by":"crossref","unstructured":"Bugra Tekin Federica Bogo and Marc Pollefeys. 2019. H+ o: Unified egocentric recognition of 3d hand-object poses and interactions. In Computer Vision and Pattern Recognition (CVPR).","DOI":"10.1109\/CVPR.2019.00464"},{"key":"e_1_2_2_58_1","volume-title":"MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. 
In The IEEE International Conference on Computer Vision (ICCV).","author":"Tewari Ayush","year":"2017","unstructured":"Ayush Tewari, Michael Zoll\u00f6fer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Perez, and Theobalt Christian. 2017. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. In The IEEE International Conference on Computer Vision (ICCV)."},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/2929464.2929475"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14774"},{"key":"e_1_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01264-9_30"},{"key":"e_1_2_2_62_1","volume-title":"HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow. Vision, Modeling, and Visualization","author":"Wang Jiayi","year":"2022","unstructured":"Jiayi Wang, Diogo Luvizon, Franziska Mueller, Florian Bernard, Adam Kortylewski, Dan Casas, and Christian Theobalt. 2022. HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow. Vision, Modeling, and Visualization (2022)."},{"key":"e_1_2_2_63_1","volume-title":"An anatomically-constrained local deformation model for monocular face capture. ACM transactions on graphics (TOG) 35, 4","author":"Wu Chenglei","year":"2016","unstructured":"Chenglei Wu, Derek Bradley, Markus Gross, and Thabo Beeler. 2016. An anatomically-constrained local deformation model for monocular face capture. ACM transactions on graphics (TOG) 35, 4 (2016), 1--12."},{"key":"e_1_2_2_64_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01133"},{"key":"e_1_2_2_65_1","unstructured":"Xinyu Yi Yuxiao Zhou Marc Habermann Soshi Shimada Vladislav Golyanik Christian Theobalt and Feng Xu. 2022. Physical Inertial Poser (PIP): Physics-aware Realtime Human Motion Tracking from Sparse Inertial Sensors. 
In Computer Vision and Pattern Recognition (CVPR)."},{"key":"e_1_2_2_66_1","unstructured":"Alex Yu. 2023. Triangle mesh to signed-distance function (SDF). https:\/\/github.com\/sxyu\/sdf."},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.111"},{"key":"e_1_2_2_68_1","volume-title":"Simpoe: Simulated character control for 3d human pose estimation. In Computer vision and pattern recognition (CVPR).","author":"Yuan Ye","year":"2021","unstructured":"Ye Yuan, Shih-En Wei, Tomas Simon, Kris Kitani, and Jason Saragih. 2021. Simpoe: Simulated character control for 3d human pose estimation. In Computer vision and pattern recognition (CVPR)."},{"key":"e_1_2_2_69_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01116"},{"key":"e_1_2_2_70_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3322998"},{"key":"e_1_2_2_71_1","article-title":"Single depth view based real-time reconstruction of hand-object interactions","volume":"40","author":"Zhang Hao","year":"2021","unstructured":"Hao Zhang, Yuxiao Zhou, Yifei Tian, Jun-Hai Yong, and Feng Xu. 2021b. Single depth view based real-time reconstruction of hand-object interactions. 
ACM Transactions on Graphics (TOG) 40, 3 (2021).","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"e_1_2_2_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00589"},{"key":"e_1_2_2_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601165"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3618329","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3618329","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,21]],"date-time":"2025-08-21T10:50:22Z","timestamp":1755773422000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3618329"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,5]]},"references-count":73,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12,5]]}},"alternative-id":["10.1145\/3618329"],"URL":"https:\/\/doi.org\/10.1145\/3618329","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,12,5]]},"assertion":[{"value":"2023-12-05","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}