{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T17:46:00Z","timestamp":1776102360406,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":74,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,5,18]],"date-time":"2021-05-18T00:00:00Z","timestamp":1621296000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,5,18]]},"DOI":"10.1145\/3412382.3458268","type":"proceedings-article","created":{"date-parts":[[2021,5,20]],"date-time":"2021-05-20T23:56:47Z","timestamp":1621555007000},"page":"222-237","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["ExGSense"],"prefix":"10.1145","author":[{"given":"Chen","family":"Chen","sequence":"first","affiliation":[{"name":"University of California San Diego"}]},{"given":"Ke","family":"Sun","sequence":"additional","affiliation":[{"name":"University of California San Diego"}]},{"given":"Xinyu","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of California San Diego"}]}],"member":"320","published-online":{"date-parts":[[2021,5,20]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/3214260"},{"key":"e_1_3_2_1_2_1","volume-title":"Gesture Recognition Using an EEG Sensor and an ANN Classifier for Control of a Robotic Manipulator","author":"Alba-Flores Rocio","unstructured":"Rocio Alba-Flores , Fernando Rios , Stephanie Triplett , and Antonio Casas . 2019. Gesture Recognition Using an EEG Sensor and an ANN Classifier for Control of a Robotic Manipulator . In Intelligent Computing, Kohei Arai, Rahul Bhatia, and Supriya Kapoor (Eds.). Springer International Publishing , Cham , 1181--1186. 
Rocio Alba-Flores, Fernando Rios, Stephanie Triplett, and Antonio Casas. 2019. Gesture Recognition Using an EEG Sensor and an ANN Classifier for Control of a Robotic Manipulator. In Intelligent Computing, Kohei Arai, Rahul Bhatia, and Supriya Kapoor (Eds.). Springer International Publishing, Cham, 1181--1186."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2016.7477553"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2018.00019"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1177\/1529100619832930"},{"key":"e_1_3_2_1_6_1","volume-title":"US20150015847A1 - Capacitive eye tracking sensor - Google Patents. https:\/\/patents.google.com\/patent\/US20150015847A1\/en. (Accessed on 04\/09\/2019)","author":"Bergman Janne","year":"2015","unstructured":"Janne Bergman , Jari Saukko , and Jussi Severi Uusitalo . 2015 . US20150015847A1 - Capacitive eye tracking sensor - Google Patents. https:\/\/patents.google.com\/patent\/US20150015847A1\/en. (Accessed on 04\/09\/2019) . Janne Bergman, Jari Saukko, and Jussi Severi Uusitalo. 2015. US20150015847A1 - Capacitive eye tracking sensor - Google Patents. https:\/\/patents.google.com\/patent\/US20150015847A1\/en. (Accessed on 04\/09\/2019)."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3267242.3267268"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/1409635.1409647"},{"key":"e_1_3_2_1_9_1","first-page":"87","article-title":"Research on Animation Lip synchronization technology: A study on application and development of domestic animation Lip synchronization","volume":"17","author":"Cho Jungsung","year":"2013","unstructured":"Jungsung Cho . 2013 . Research on Animation Lip synchronization technology: A study on application and development of domestic animation Lip synchronization . International Journal of Asia Digital Art and Design 17 , 3 (2013), 87 -- 92 . Jungsung Cho. 2013. 
Research on Animation Lip synchronization technology: A study on application and development of domestic animation Lip synchronization. International Journal of Asia Digital Art and Design 17, 3 (2013), 87--92.","journal-title":"International Journal of Asia Digital Art and Design"},{"key":"e_1_3_2_1_10_1","unstructured":"CleveLabs. 2006. Electro-Oculography Laboratory. https:\/\/glneurotech.com\/docrepo\/teaching-labs\/Electro-Oculography_I_Student.pdf (Accessed on 07\/27\/2019).  CleveLabs. 2006. Electro-Oculography Laboratory. https:\/\/glneurotech.com\/docrepo\/teaching-labs\/Electro-Oculography_I_Student.pdf (Accessed on 07\/27\/2019)."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.5220\/0005187200310037"},{"key":"e_1_3_2_1_12_1","volume-title":"Eye Tracking Methodology","author":"Duchowski Andrew","unstructured":"Andrew Duchowski . 2007. Eye Tracking Methodology . Springer-Verlag . Andrew Duchowski. 2007. Eye Tracking Methodology. Springer-Verlag."},{"key":"e_1_3_2_1_13_1","unstructured":"Ekman and Friesen. 2019. FACS (Facial Action Coding System). https:\/\/www.cs.cmu.edu\/~face\/facs.htm. (Accessed on 04\/16\/2019).  Ekman and Friesen. 2019. FACS (Facial Action Coding System). https:\/\/www.cs.cmu.edu\/~face\/facs.htm. (Accessed on 04\/16\/2019)."},{"key":"e_1_3_2_1_14_1","unstructured":"Paul Ekman. 2003. Emotions Revealed. Owl Books New York New York USA.  Paul Ekman. 2003. Emotions Revealed. Owl Books New York New York USA."},{"key":"e_1_3_2_1_15_1","unstructured":"Paul Ekman. 2019. Facial Action Coding System - Paul Ekman Group. https:\/\/www.paulekman.com\/facial-action-coding-system\/. (Accessed on 04\/16\/2019).  Paul Ekman. 2019. Facial Action Coding System - Paul Ekman Group. https:\/\/www.paulekman.com\/facial-action-coding-system\/. (Accessed on 04\/16\/2019)."},{"key":"e_1_3_2_1_16_1","volume-title":"Predicting 3D lip shapes using facial surface EMG. 
PLOS ONE 12, 4 (04","author":"Eskes Merijn","year":"2017","unstructured":"Merijn Eskes , Maarten J. A. van Alphen , Alfons J. M. Balm , Ludi E. Smeele , Dieta Brandsma , and Ferdinand van der Heijden . 2017. Predicting 3D lip shapes using facial surface EMG. PLOS ONE 12, 4 (04 2017 ), 1--16. https:\/\/doi.org\/10.1371\/journal.pone.0175025 10.1371\/journal.pone.0175025 Merijn Eskes, Maarten J. A. van Alphen, Alfons J. M. Balm, Ludi E. Smeele, Dieta Brandsma, and Ferdinand van der Heijden. 2017. Predicting 3D lip shapes using facial surface EMG. PLOS ONE 12, 4 (04 2017), 1--16. https:\/\/doi.org\/10.1371\/journal.pone.0175025"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1080\/20961790.2018.1523703"},{"key":"e_1_3_2_1_18_1","volume-title":"Proceedings of the 15th USENIX Conference on Networked Systems Design and Implementation (NSDI'18)","author":"Gao Chuhan","year":"2018","unstructured":"Chuhan Gao , Yilong Li , and Xinyu Zhang . 2018 . Livetag: Sensing Human-Object Interaction through Passive Chipless WiFi Tags . In Proceedings of the 15th USENIX Conference on Networked Systems Design and Implementation (NSDI'18) . USA, 533--546. Chuhan Gao, Yilong Li, and Xinyu Zhang. 2018. Livetag: Sensing Human-Object Interaction through Passive Chipless WiFi Tags. In Proceedings of the 15th USENIX Conference on Networked Systems Design and Implementation (NSDI'18). USA, 533--546."},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241546"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445576"},{"key":"e_1_3_2_1_21_1","volume-title":"Deep Learning","author":"Goodfellow Ian","unstructured":"Ian Goodfellow , Yoshua Bengio , and Aaron Courville . 2016. Deep Learning . MIT Press . http:\/\/www.deeplearningbook.org Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. 
http:\/\/www.deeplearningbook.org"},{"key":"e_1_3_2_1_22_1","volume-title":"Bahar Khalighinejad.","author":"Herrero Jose L.","year":"2019","unstructured":"Jose L. Herrero Ashesh D. Mehta Nima Mesgarani Hassan Akbari , Bahar Khalighinejad. 2019 . Towards reconstructing intelligible speech from the human auditory cortex | Scientific Reports. Nature ( 01 2019). https:\/\/doi.org\/10.1038\/s41598-018-37359-z 10.1038\/s41598-018-37359-z Jose L. Herrero Ashesh D. Mehta Nima Mesgarani Hassan Akbari, Bahar Khalighinejad. 2019. Towards reconstructing intelligible speech from the human auditory cortex | Scientific Reports. Nature (01 2019). https:\/\/doi.org\/10.1038\/s41598-018-37359-z"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0190420"},{"key":"e_1_3_2_1_24_1","volume-title":"Long Short-term Memory. Neural computation 9 (12","author":"Hochreiter Sepp","year":"1997","unstructured":"Sepp Hochreiter and Jurgen Schmidhuber . 1997. Long Short-term Memory. Neural computation 9 (12 1997 ), 1735--80. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735 10.1162\/neco.1997.9.8.1735 Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long Short-term Memory. Neural computation 9 (12 1997), 1735--80. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_2_1_25_1","unstructured":"Texas Instruements. 2017. Low-Noise 4- 6- 8-Channel 24-Bit Analog-to-Digital Converter for EEG and datasheet (Rev. C). http:\/\/www.ti.com\/lit\/ds\/symlink\/ads1299.pdf (Accessed on 04\/10\/2019).  Texas Instruements. 2017. Low-Noise 4- 6- 8-Channel 24-Bit Analog-to-Digital Converter for EEG and datasheet (Rev. C). http:\/\/www.ti.com\/lit\/ds\/symlink\/ads1299.pdf (Accessed on 04\/10\/2019)."},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300506"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2638728.2638795"},{"key":"e_1_3_2_1_28_1","unstructured":"Robert Kanaat. 2016. 
Five Reasons Why Virtual Reality Is A Game-Changer. https:\/\/www.forbes.com\/sites\/robertadams\/2016\/03\/21\/5-reasons-why-virtual-reality-is-a-game-changer\/#6a8a52bc41be (Accessed on 07\/20\/2019).  Robert Kanaat. 2016. Five Reasons Why Virtual Reality Is A Game-Changer. https:\/\/www.forbes.com\/sites\/robertadams\/2016\/03\/21\/5-reasons-why-virtual-reality-is-a-game-changer\/#6a8a52bc41be (Accessed on 07\/20\/2019)."},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3172944.3172977"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/30.1-2.81"},{"key":"e_1_3_2_1_31_1","volume-title":"Kingma and Jimmy Ba","author":"Diederik","year":"2014","unstructured":"Diederik P. Kingma and Jimmy Ba . 2014 . Adam : A Method for Stochastic Optimization . arXiv:cs.LG\/1412.6980 Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv:cs.LG\/1412.6980"},{"key":"e_1_3_2_1_32_1","volume-title":"Proceedings of the 25th International Conference on Neural Information Processing Systems -","volume":"1","author":"Krizhevsky Alex","unstructured":"Alex Krizhevsky , Ilya Sutskever , and Geoffrey E. Hinton . 2012. ImageNet Classification with Deep Convolutional Neural Networks . In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1 (NIPS'12). Curran Associates Inc., Red Hook, NY, USA, 1097--1105. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1 (NIPS'12). Curran Associates Inc., Red Hook, NY, USA, 1097--1105."},{"key":"e_1_3_2_1_33_1","unstructured":"Ron Kurtus. 2019. List of Worldwide AC Voltages and Frequencies by Ron Kurtus - Physics Lessons: School for Champions. https:\/\/www.school-for-champions.com\/science\/ac_world_volt_freq_list.htm#.XTekIuhKiUk. 
(Accessed on 07\/23\/2019).  Ron Kurtus. 2019. List of Worldwide AC Voltages and Frequencies by Ron Kurtus - Physics Lessons: School for Champions. https:\/\/www.school-for-champions.com\/science\/ac_world_volt_freq_list.htm#.XTekIuhKiUk. (Accessed on 07\/23\/2019)."},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1152\/japplphysiol.00521.2003"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766939"},{"key":"e_1_3_2_1_36_1","volume-title":"Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '14)","author":"Li Mu","unstructured":"Mu Li , Tong Zhang , Yuqiang Chen , and Alexander J. Smola . 2014. Efficient Mini-batch Training for Stochastic Optimization . In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '14) . ACM, New York, NY, USA, 661--670. https:\/\/doi.org\/10.1145\/2623330.2623612 10.1145\/2623330.2623612 Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. 2014. Efficient Mini-batch Training for Stochastic Optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '14). ACM, New York, NY, USA, 661--670. https:\/\/doi.org\/10.1145\/2623330.2623612"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3131672.3131682"},{"key":"e_1_3_2_1_38_1","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled Weight Decay Regularization. arXiv:cs.LG\/1711.05101  Ilya Loshchilov and Frank Hutter. 2017. Decoupled Weight Decay Regularization. arXiv:cs.LG\/1711.05101"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1088\/1361-6579\/aa60b9"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2856767.2856770"},{"key":"e_1_3_2_1_41_1","volume-title":"Proceedings of the 5th Symposium on Spatial User Interaction (SUI '17)","author":"Mavridou I.","unstructured":"I. Mavridou , M. Hamedi , M. Fatoorechi , J. 
Archer , A. Cleal , E. Balaguer-Ballester , E. Seiss , and C. Nduka . 2017. Using Facial Gestures to Drive Narrative in VR . In Proceedings of the 5th Symposium on Spatial User Interaction (SUI '17) . ACM, New York, NY, USA, 152--152. https:\/\/doi.org\/10.1145\/3131277.3134366 10.1145\/3131277.3134366 I. Mavridou, M. Hamedi, M. Fatoorechi, J. Archer, A. Cleal, E. Balaguer-Ballester, E. Seiss, and C. Nduka. 2017. Using Facial Gestures to Drive Narrative in VR. In Proceedings of the 5th Symposium on Spatial User Interaction (SUI '17). ACM, New York, NY, USA, 152--152. https:\/\/doi.org\/10.1145\/3131277.3134366"},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3110292.3110302"},{"key":"e_1_3_2_1_43_1","volume-title":"A System Architecture for Emotion Detection in Virtual Reality (EuroVR '18)","author":"Mavridou Iigeneia","unstructured":"Iigeneia Mavridou , Ellen Seiss , Theodoros Kostoulas , Emili Balaguer-Ballester , and Charles Nduka . 2018. A System Architecture for Emotion Detection in Virtual Reality (EuroVR '18) . EuroVR Association . Iigeneia Mavridou, Ellen Seiss, Theodoros Kostoulas, Emili Balaguer-Ballester, and Charles Nduka. 2018. A System Architecture for Emotion Detection in Virtual Reality (EuroVR '18). EuroVR Association."},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1002\/0471678384"},{"key":"e_1_3_2_1_45_1","unstructured":"MindMaze. 2019. MindMaze reveals Mask to capture your facial expression in virtual reality | VentureBeat. https:\/\/venturebeat.com\/2017\/04\/12\/mindmaze-reveals-mask-to-capture-your-facial-expression-in-virtual-reality\/.  MindMaze. 2019. MindMaze reveals Mask to capture your facial expression in virtual reality | VentureBeat. 
https:\/\/venturebeat.com\/2017\/04\/12\/mindmaze-reveals-mask-to-capture-your-facial-expression-in-virtual-reality\/."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.2197\/ipsjjip.25.142"},{"key":"e_1_3_2_1_47_1","unstructured":"Asher Flynn Nicola Henry Anastasia Powell. 2018. AI can now create fake porn making revenge porn even more complicated. http:\/\/theconversation.com\/ai-can-now-create-fake-porn-making-revenge-porn-even-more-complicated-92267 (Accessed on 08\/13\/2019).  Asher Flynn Nicola Henry Anastasia Powell. 2018. AI can now create fake porn making revenge porn even more complicated. http:\/\/theconversation.com\/ai-can-now-create-fake-porn-making-revenge-porn-even-more-complicated-92267 (Accessed on 08\/13\/2019)."},{"key":"e_1_3_2_1_48_1","unstructured":"OpenBCI. 2019. Cyton Biosensing Board (8-channels) - OpenBCI Online Store. https:\/\/shop.openbci.com\/collections\/frontpage\/products\/cyton-biosensing-board-8-channel?variant=38958638542 (Accessed on 07\/22\/2019).  OpenBCI. 2019. Cyton Biosensing Board (8-channels) - OpenBCI Online Store. https:\/\/shop.openbci.com\/collections\/frontpage\/products\/cyton-biosensing-board-8-channel?variant=38958638542 (Accessed on 07\/22\/2019)."},{"key":"e_1_3_2_1_49_1","unstructured":"Adam Paszke Sam Gross Soumith Chintala Gregory Chanan Edward Yang Zachary DeVito Zeming Lin Alban Desmaison Luca Antiga and Adam Lerer. 2017. Automatic differentiation in PyTorch.  Adam Paszke Sam Gross Soumith Chintala Gregory Chanan Edward Yang Zachary DeVito Zeming Lin Alban Desmaison Luca Antiga and Adam Lerer. 2017. Automatic differentiation in PyTorch."},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.5555\/265013"},{"key":"e_1_3_2_1_51_1","volume-title":"GANimation: Anatomically-aware Facial Animation from a Single Image. CoRR abs\/1807.09251","author":"Pumarola Albert","year":"2018","unstructured":"Albert Pumarola , Antonio Agudo , Aleix M. 
Mart\u00ednez , Alberto Sanfeliu , and Francesc Moreno-Noguer . 2018. GANimation: Anatomically-aware Facial Animation from a Single Image. CoRR abs\/1807.09251 ( 2018 ). Albert Pumarola, Antonio Agudo, Aleix M. Mart\u00ednez, Alberto Sanfeliu, and Francesc Moreno-Noguer. 2018. GANimation: Anatomically-aware Facial Animation from a Single Image. CoRR abs\/1807.09251 (2018)."},{"key":"e_1_3_2_1_52_1","unstructured":"RFDigital. 2013. RFD22301 Data Sheet. https:\/\/www.mouser.com\/ds\/2\/470\/rfd22301.data.sheet.11.24.13_11.38pm-272240.pdf (Accessed on 07\/22\/2019).  RFDigital. 2013. RFD22301 Data Sheet. https:\/\/www.mouser.com\/ds\/2\/470\/rfd22301.data.sheet.11.24.13_11.38pm-272240.pdf (Accessed on 07\/22\/2019)."},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3314111.3322501"},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/3314410"},{"key":"e_1_3_2_1_55_1","unstructured":"Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. arXiv:cs.LG\/1609.04747  Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. arXiv:cs.LG\/1609.04747"},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1177\/0539018405058216"},{"key":"e_1_3_2_1_57_1","volume-title":"A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiological measurement 21 (06","author":"Searle A","year":"2000","unstructured":"A Searle and Les Kirkup . 2000. A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiological measurement 21 (06 2000 ), 271--83. https:\/\/doi.org\/10.1088\/0967-3334\/21\/2\/307 10.1088\/0967-3334 A Searle and Les Kirkup. 2000. A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiological measurement 21 (06 2000), 271--83. 
https:\/\/doi.org\/10.1088\/0967-3334\/21\/2\/307"},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1093\/geronb\/62.1.P53"},{"key":"e_1_3_2_1_59_1","unstructured":"Daniel Swolf. 2019. DanielSWolf\/rhubarb-lip-sync: Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games in animated cartoons or in any other project that requires animating mouths based on existing recordings. https:\/\/github.com\/DanielSWolf\/rhubarb-lip-sync Accessed on 07\/21\/2019.  Daniel Swolf. 2019. DanielSWolf\/rhubarb-lip-sync: Rhubarb Lip Sync is a command-line tool that automatically creates 2D mouth animation from voice recordings. You can use it for characters in computer games in animated cartoons or in any other project that requires animating mouths based on existing recordings. https:\/\/github.com\/DanielSWolf\/rhubarb-lip-sync Accessed on 07\/21\/2019."},{"key":"e_1_3_2_1_60_1","volume-title":"Classiication Assessment Methods. Applied Computing and Informatics","author":"Tharwat Alaa","year":"2018","unstructured":"Alaa Tharwat . 2018. Classiication Assessment Methods. Applied Computing and Informatics ( 2018 ). https:\/\/doi.org\/10.1016\/j.aci.2018.08.003 10.1016\/j.aci.2018.08.003 Alaa Tharwat. 2018. Classiication Assessment Methods. Applied Computing and Informatics (2018). https:\/\/doi.org\/10.1016\/j.aci.2018.08.003"},{"key":"e_1_3_2_1_61_1","first-page":"1","article-title":"Face2Face","volume":"62","author":"Thies Justus","year":"2018","unstructured":"Justus Thies , Michael Zollh\u00f6fer , Marc Stamminger , Christian Theobalt , and Matthias Nie . 2018 . Face2Face : Real-time Face Capture and Reenactment of RGB Videos. Commun. ACM 62 , 1 (Dec. 2018), 96--104. https:\/\/doi.org\/10.1145\/3292039 10.1145\/3292039 Justus Thies, Michael Zollh\u00f6fer, Marc Stamminger, Christian Theobalt, and Matthias Nie. 2018. 
Face2Face: Real-time Face Capture and Reenactment of RGB Videos. Commun. ACM 62, 1 (Dec. 2018), 96--104. https:\/\/doi.org\/10.1145\/3292039","journal-title":"Real-time Face Capture and Reenactment of RGB Videos. Commun. ACM"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3182644"},{"key":"e_1_3_2_1_63_1","unstructured":"David Vintiner. 2018. These face-reading glasses track physical and mental health | WIRED UK. https:\/\/www.wired.co.uk\/article\/emteqvr-digital-phenotyping-charles-nduka (Accessed on 10\/17\/2019).  David Vintiner. 2018. These face-reading glasses track physical and mental health | WIRED UK. https:\/\/www.wired.co.uk\/article\/emteqvr-digital-phenotyping-charles-nduka (Accessed on 10\/17\/2019)."},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306346.3323030"},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3143361.3143381"},{"key":"e_1_3_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/2994551.2994556"},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3286978.3287030"},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411763.3443445"},{"key":"e_1_3_2_1_69_1","volume-title":"FusionAtt: Deep Fusional Attention Networks for Multi-Channel Biomedical Signals. Sensors 19, 11","author":"Yuan Ye","year":"2019","unstructured":"Ye Yuan and Kebin Jia . 2019. FusionAtt: Deep Fusional Attention Networks for Multi-Channel Biomedical Signals. Sensors 19, 11 ( 2019 ). Ye Yuan and Kebin Jia. 2019. FusionAtt: Deep Fusional Attention Networks for Multi-Channel Biomedical Signals. 
Sensors 19, 11 (2019)."},{"key":"e_1_3_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2018.2871678"},{"key":"e_1_3_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1145\/2807442.2807480"},{"key":"e_1_3_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984574"},{"key":"e_1_3_2_1_73_1","volume-title":"Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment","author":"Zheng Wei-Long","year":"2019","unstructured":"Wei-Long Zheng , Kunpeng Gao , Wei Liu , Jing-Quan Liu , Guoxing Wang , and Bao-Liang Lu. 2019. Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment . IEEE Transactions on Intelligent Transportation Systems ( 2019 ), 1--15. https:\/\/doi.org\/10.1109\/TITS.2018.2889962 10.1109\/TITS.2018.2889962 Wei-Long Zheng, Kunpeng Gao, Wei Liu, Jing-Quan Liu, Guoxing Wang, and Bao-Liang Lu. 2019. Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment. IEEE Transactions on Intelligent Transportation Systems (2019), 1--15. https:\/\/doi.org\/10.1109\/TITS.2018.2889962"},{"key":"e_1_3_2_1_74_1","volume-title":"Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 2017 IEEE International Conference on Computer Vision (ICCV) (Oct 2017","author":"Zhu Jun-Yan","year":"2017","unstructured":"Jun-Yan Zhu , Taesung Park , Phillip Isola , and Alexei A. Efros . 2017 . Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 2017 IEEE International Conference on Computer Vision (ICCV) (Oct 2017 ). https:\/\/doi.org\/10.1109\/iccv. 2017 .244 10.1109\/iccv.2017.244 Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 2017 IEEE International Conference on Computer Vision (ICCV) (Oct 2017). 
https:\/\/doi.org\/10.1109\/iccv.2017.244"}],"event":{"name":"IPSN '21: The 20th International Conference on Information Processing in Sensor Networks","location":"Nashville TN USA","acronym":"IPSN '21","sponsor":["IEEE-SPS Signal Processing Society","SIGBED ACM Special Interest Group on Embedded Systems"]},"container-title":["Proceedings of the 20th International Conference on Information Processing in Sensor Networks (co-located with CPS-IoT Week 2021)"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3412382.3458268","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3412382.3458268","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:25:01Z","timestamp":1750195501000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3412382.3458268"}},"subtitle":["Toward Facial Gesture Sensing with a Sparse Near-Eye Sensor Array"],"short-title":[],"issued":{"date-parts":[[2021,5,18]]},"references-count":74,"alternative-id":["10.1145\/3412382.3458268","10.1145\/3412382"],"URL":"https:\/\/doi.org\/10.1145\/3412382.3458268","relation":{},"subject":[],"published":{"date-parts":[[2021,5,18]]},"assertion":[{"value":"2021-05-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
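The object above is a Crossref REST API "work" record, typically fetched from `https://api.crossref.org/works/{doi}` (here, `10.1145/3412382.3458268`). A minimal sketch of how such a record might be unpacked, assuming field shapes as they appear in the payload above (titles and subtitles arrive as lists, `date-parts` as nested `[year, month, day]` lists); the inline sample is a hand-copied subset of the record, not the full response:

```python
import json
from datetime import date

# Hand-copied subset of the Crossref "work" payload above; a live client
# would normally GET https://api.crossref.org/works/10.1145/3412382.3458268
raw = '''
{"status": "ok", "message-type": "work",
 "message": {"DOI": "10.1145/3412382.3458268",
             "title": ["ExGSense"],
             "subtitle": ["Toward Facial Gesture Sensing with a Sparse Near-Eye Sensor Array"],
             "author": [{"given": "Chen", "family": "Chen"},
                        {"given": "Ke", "family": "Sun"},
                        {"given": "Xinyu", "family": "Zhang"}],
             "issued": {"date-parts": [[2021, 5, 18]]},
             "references-count": 74}}
'''

work = json.loads(raw)["message"]

# Crossref wraps titles/subtitles in lists even when there is only one.
title = work["title"][0]
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]

# "date-parts" is a list of [year, month, day] lists; month and day may be
# omitted, so pad with 1s before unpacking.
y, m, d = (work["issued"]["date-parts"][0] + [1, 1])[:3]
issued = date(y, m, d)

print(f'{", ".join(authors)}. {title}. Issued {issued}. doi:{work["DOI"]}')
```

The padding trick matters in practice: Crossref dates such as `"published-print"` sometimes carry only `[[2021]]` or `[[2021, 5]]`, so code that blindly unpacks three parts will fail on real records.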