{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,13]],"date-time":"2025-10-13T15:37:02Z","timestamp":1760369822540,"version":"3.41.0"},"reference-count":10,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2022,5,27]],"date-time":"2022-05-27T00:00:00Z","timestamp":1653609600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["GetMobile: Mobile Comp. and Comm."],"published-print":{"date-parts":[[2022,5,27]]},"abstract":"<jats:p>Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interactions, facial expression analysis, emotion recognition, etc. However, existing camera-based solutions require users to be confined to a particular location and face a camera at all times without occlusions, which largely limits their usage in practice. To overcome these limitations, we propose the first single-earpiece lightweight biosensing system, Bioface-3D, that can unobtrusively, continuously, and reliably sense the entire facial movements, track 2D facial landmarks, and further render 3D facial animations. 
Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications.<\/jats:p>","DOI":"10.1145\/3539668.3539676","type":"journal-article","created":{"date-parts":[[2022,5,28]],"date-time":"2022-05-28T04:05:46Z","timestamp":1653710746000},"page":"21-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["BioFace-3D"],"prefix":"10.1145","volume":"26","author":[{"given":"Yi","family":"Wu","sequence":"first","affiliation":[{"name":"University of Tennessee, Knoxville, TN, USA"}]},{"given":"Vimal","family":"Kakaraparthi","sequence":"additional","affiliation":[{"name":"University of Colorado Boulder, Boulder, CO, USA"}]},{"given":"Zhuohang","family":"Li","sequence":"additional","affiliation":[{"name":"University of Tennessee, Knoxville, TN, USA"}]},{"given":"Tien","family":"Pham","sequence":"additional","affiliation":[{"name":"University of Texas at Arlington, Arlington, TX, USA"}]},{"given":"Jian","family":"Liu","sequence":"additional","affiliation":[{"name":"University of Tennessee, Knoxville, TN, USA"}]},{"given":"VP","family":"Nguyen","sequence":"additional","affiliation":[{"name":"University of Texas at Arlington, Arlington, TX, USA"}]}],"member":"320","published-online":{"date-parts":[[2022,5,27]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"Sandra Carrasco and Miguel \u00c1ngel Sotelo UAH. 2020. D3.3 Driver Monitoring Concept Report."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.procs.2015.08.011"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2009.08.002"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-018-1097-z"},{"key":"e_1_2_1_5_1","doi-asserted-by":"crossref","unstructured":"Hai X. Pham, Yuting Wang, and Vladimir Pavlovic. 2017. End-to-end learning for 3d facial animation from raw waveforms of speech. arXiv preprint arXiv:1710.00920.","DOI":"10.1145\/3242969.3243017"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROBIO.2007.4522346"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBME.2013.2280900"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00238"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130813"},{"key":"e_1_2_1_10_1","unstructured":"Demo Video for BioFace-3D. 2021. https:\/\/mosis.eecs.utk.edu\/bioface-3d.html"}],"container-title":["GetMobile: Mobile Computing and Communications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3539668.3539676","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3539668.3539676","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:38:03Z","timestamp":1750178283000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3539668.3539676"}},"subtitle":["3D Facial Tracking and Animation via Single-ear Wearable 
Biosensors"],"short-title":[],"issued":{"date-parts":[[2022,5,27]]},"references-count":10,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,5,27]]}},"alternative-id":["10.1145\/3539668.3539676"],"URL":"https:\/\/doi.org\/10.1145\/3539668.3539676","relation":{},"ISSN":["2375-0529","2375-0537"],"issn-type":[{"type":"print","value":"2375-0529"},{"type":"electronic","value":"2375-0537"}],"subject":[],"published":{"date-parts":[[2022,5,27]]},"assertion":[{"value":"2022-05-27","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}