{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,17]],"date-time":"2026-01-17T08:45:29Z","timestamp":1768639529371,"version":"3.49.0"},"reference-count":70,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2024,11,30]],"date-time":"2024-11-30T00:00:00Z","timestamp":1732924800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"The Scientific and Technological Research Council of Turkey","award":["122E123"],"award-info":[{"award-number":["122E123"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Appl. Percept."],"published-print":{"date-parts":[[2025,4,30]]},"abstract":"<jats:p>We express our personality through verbal and nonverbal behavior. While verbal cues are mostly related to the semantics of what we say, nonverbal cues include our posture, gestures, and facial expressions. Appropriate expression of these behavioral elements improves conversational virtual agents\u2019 communication capabilities and realism. Although previous studies focus on co-speech gesture generation, they do not consider the personality aspect of the synthesized animations. We show that automatically generated co-speech gestures naturally express personality traits, and heuristics-based adjustments for such animations can further improve personality expression. To this end, we present a framework for enhancing co-speech gestures with the different personalities of the Five-Factor model. Our experiments suggest that users perceive increased realism and improved personality expression when combining heuristics-based motion adjustments with co-speech gestures.<\/jats:p>","DOI":"10.1145\/3694905","type":"journal-article","created":{"date-parts":[[2024,9,9]],"date-time":"2024-09-09T15:30:27Z","timestamp":1725895827000},"page":"1-20","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Personality Expression Using Co-Speech Gesture"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9743-6833","authenticated-orcid":false,"given":"Sinan","family":"Sonlu","sequence":"first","affiliation":[{"name":"Bilkent University, Ankara, Turkey"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-2116-1073","authenticated-orcid":false,"given":"Halil \u00d6zg\u00fcr","family":"Demir","sequence":"additional","affiliation":[{"name":"Bilkent University, Ankara, Turkey"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2462-6959","authenticated-orcid":false,"given":"U\u011fur","family":"G\u00fcd\u00fckbay","sequence":"additional","affiliation":[{"name":"Bilkent University, Ankara, 
Turkey"}]}],"member":"320","published-online":{"date-parts":[[2024,11,30]]},"reference":[{"key":"e_1_3_3_2_2","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392469"},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01991"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58523-5_15"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13946"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3550454.3555435"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.5555\/1144457.1144483"},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jrp.2010.07.004"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1111\/j.2517-6161.1995.tb02031.x"},{"issue":"1","key":"e_1_3_3_10_2","first-page":"35","article-title":"Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing","volume":"13","author":"Bereczkei Tamas","year":"2006","unstructured":"Tamas Bereczkei and Norbert Mesko. 2006. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing. Review of Psychology 13, 1 (2006), 35\u201342.","journal-title":"Review of Psychology"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/0097-8493(94)90057-4"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.4324\/9781315789354-27"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2017.2737019"},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2022.3230541"},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1007\/S11263-023-01761-6"},{"key":"e_1_3_3_16_2","first-page":"1","volume-title":"Proceedings of the Context-Awareness in Human-Robot Interaction: Approaches and Challenges, Workshop at 2022 ACM\/IEEE International Conference on Human-Robot Interaction","author":"Deichler Anna","year":"2022","unstructured":"Anna Deichler, Siyang Wang, Simon Alexanderson, and Jonas Beskow. 2022. Towards context-aware human-like pointing gestures with RL motion imitation. In Proceedings of the Context-Awareness in Human-Robot Interaction: Approaches and Challenges, Workshop at 2022 ACM\/IEEE International Conference on Human-Robot Interaction. KTH Royal Institute of Technology Publications, 1\u20136."},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/CA.1997.601034"},{"key":"e_1_3_3_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2018.00312"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.2983620"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1080\/00223980.1957.9713059"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1080\/00332747.1969.11023575"},{"key":"e_1_3_3_22_2","unstructured":"Emiliana. 2019. BVH Tools. Retrieved from https:\/\/assetstore.unity.com\/packages\/tools\/animation\/bvh-tools-144728"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/THMS.2016.2537760"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1002\/cav.2016"},{"key":"e_1_3_3_25_2","unstructured":"Rong Gao Xin Liu Bohao Xing Zitong Yu Bjorn W. Schuller and Heikki K\u00e4lvi\u00e4inen. 2024. Identity-free artificial emotional intelligence via micro-gesture understanding. arXiv:2405.13206 [cs.CV]. 
Retrieved from https:\/\/arxiv.org\/abs\/2405.13206"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14734"},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0253157"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/2791294"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3528233.3530750"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1177\/0956797618799300"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/2851499"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2013.248"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2014.45"},{"key":"e_1_3_3_35_2","doi-asserted-by":"crossref","unstructured":"Kacper Kania Marek Kowalski and Tomasz Trzci\u0144ski. 2021. TrajeVAE\u2013controllable human motion generation from trajectories. arXiv:2104.00351. Retrieved from https:\/\/arxiv.org\/abs\/2104.00351.","DOI":"10.2139\/ssrn.4092912"},{"key":"e_1_3_3_36_2","doi-asserted-by":"publisher","DOI":"10.1016\/B978-012464995-8\/50022-4"},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3301411"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3473041"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1111\/1467-8721.ep13175642"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0065-2601(08)60241-5"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397481.3450692"},{"key":"e_1_3_3_42_2","unstructured":"Deng Li Xin Liu Bohao Xing Baiqiang Xia Yuan Zong Bihan Wen and Heikki K\u00e4lvi\u00e4inen. 2024. EALD-MLLM: Emotion analysis in long-sequential and de-identity videos with multi-modal large language model. arXiv:2405.00574 [cs.CV]. Retrieved from https:\/\/arxiv.org\/abs\/2405.00574"},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2024.3396656"},{"key":"e_1_3_3_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3025171.3025206"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01022"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01049"},{"key":"e_1_3_3_47_2","unstructured":"IBM. 2015. IBM Watson API. Retrieved May 01 2023 from https:\/\/www.ibm.com\/watson"},{"key":"e_1_3_3_48_2","unstructured":"Oculus. 2021. Oculus Lipsync. 
Retrieved August 14 2022 from https:\/\/developer.oculus.com\/downloads\/package\/oculus-lipsync-unity\/"},{"key":"e_1_3_3_49_2","doi-asserted-by":"publisher","DOI":"10.1145\/1462048.1462051"},{"key":"e_1_3_3_50_2","doi-asserted-by":"publisher","DOI":"10.3758\/BF03208096"},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1145\/1640443.1640452"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1002\/(SICI)1099-1778(199901\/03)10:1<39::AID-VIS195>3.0.CO;2-2"},{"key":"e_1_3_3_53_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jrp.2007.02.003"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1177\/0146167209346309"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1111\/CGF.14776"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2015.01.005"},{"key":"e_1_3_3_57_2","doi-asserted-by":"publisher","DOI":"10.2466\/pms.1978.46.3c.1328"},{"key":"e_1_3_3_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201311"},{"key":"e_1_3_3_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSC.2010.41"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1145\/3343036.3343129"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1002\/ejsp.2420080405"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/MIS.2022.3147585"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3340254"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073697"},{"key":"e_1_3_3_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/VRW62533.2024.00123"},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","DOI":"10.1145\/3439795"},{"key":"e_1_3_3_67_2","unstructured":"Guy Tevet Sigal Raab Brian Gordon Yonatan Shafir Daniel Cohen-Or and Amit H Bermano. 2022. Human motion diffusion model. arXiv:2209.14916. Retrieved from https:\/\/arxiv.org\/abs\/2209.14916."},{"key":"e_1_3_3_68_2","first-page":"88","volume-title":"Understanding Social Behavior in Dyadic and Small Group Interactions (Proceedings of Machine Learning Research)","author":"Tuyen Nguyen Tan Viet","year":"2022","unstructured":"Nguyen Tan Viet Tuyen and Oya Celiktutan. 2022. Context-aware human behaviour forecasting in dyadic interactions. In Understanding Social Behavior in Dyadic and Small Group Interactions (Proceedings of Machine Learning Research), Vol. 173. Cristina Palmero, Julio C. S. Jacques Junior, Albert Clap\u00e9s, Isabelle Guyon, Wei-Wei Tu, Thomas B. Moeslund, and Sergio Escalera (Eds.), ML Research Press, Online, 88\u2013106. Retrieved from https:\/\/proceedings.mlr.press\/v173\/tuyen22a.html"},{"key":"e_1_3_3_69_2","doi-asserted-by":"publisher","DOI":"10.1145\/218380.218419"},{"key":"e_1_3_3_70_2","doi-asserted-by":"publisher","DOI":"10.1145\/2874357"},{"key":"e_1_3_3_71_2","unstructured":"Mingyuan Zhang Zhongang Cai Liang Pan Fangzhou Hong Xinying Guo Lei Yang and Ziwei Liu. 2022. MotionDiffuse: Text-driven human motion generation with diffusion model. arXiv:2208.15001. 
Retrieved from https:\/\/arxiv.org\/abs\/2208.15001."},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1145\/3349609"}],"container-title":["ACM Transactions on Applied Perception"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3694905","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3694905","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:07Z","timestamp":1750295887000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3694905"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,11,30]]},"references-count":70,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2025,4,30]]}},"alternative-id":["10.1145\/3694905"],"URL":"https:\/\/doi.org\/10.1145\/3694905","relation":{},"ISSN":["1544-3558","1544-3965"],"issn-type":[{"value":"1544-3558","type":"print"},{"value":"1544-3965","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,11,30]]},"assertion":[{"value":"2023-06-17","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-09-03","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-30","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
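The object above is a Crossref REST API work record (note the "status"/"message" envelope, the "title", "author", and "container-title" fields, and the "reference" list whose entries carry resolved DOIs). A minimal sketch of fetching and parsing such a record follows; it assumes the public Crossref endpoint at api.crossref.org/works and the Python `requests` package, and it only reads fields that appear in the record above.

```python
import requests

# DOI of the work record shown above.
DOI = "10.1145/3694905"

# Fetch the record from the public Crossref REST API (assumption: network
# access and the `requests` package are available).
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()

# The bibliographic payload sits under "message", as in the JSON above.
work = resp.json()["message"]

# Core bibliographic fields ("title" and "container-title" are lists).
title = work["title"][0]
journal = work["container-title"][0]
authors = [
    f'{a.get("given", "")} {a.get("family", "")}'.strip()
    for a in work.get("author", [])
]
published = work["published"]["date-parts"][0]  # e.g. [2024, 11, 30]

# Reference DOIs; entries deposited without a resolved DOI are skipped.
ref_dois = [r["DOI"] for r in work.get("reference", []) if "DOI" in r]

print(f"{title} ({journal}, {'-'.join(map(str, published))})")
print("Authors:", ", ".join(authors))
print(f"{len(ref_dois)} of {work.get('references-count')} references carry DOIs")
```

For the record above, this would report the title "Personality Expression Using Co-Speech Gesture", the journal ACM Transactions on Applied Perception, the three listed authors, and that 59 of the 70 deposited references resolve to DOIs (the remainder are unstructured entries such as the arXiv and software citations).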