{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:04:20Z","timestamp":1760241860887,"version":"build-2065373602"},"reference-count":49,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2018,8,31]],"date-time":"2018-08-31T00:00:00Z","timestamp":1535673600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U1764263,U1405255"],"award-info":[{"award-number":["U1764263,U1405255"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Natural Science Basic Research Plan in Shaanxi Province of China","award":["2016JM6074"],"award-info":[{"award-number":["2016JM6074"]}]},{"name":"Shaanxi Science & Technology Coordination & Innovation Project","award":["2016TZC-G-6-3"],"award-info":[{"award-number":["2016TZC-G-6-3"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>We introduce a two-stream model that uses reflexive eye movements for smart mobile device authentication. Our model is based on two pre-trained neural networks, iTracker and PredNet, which target two independent tasks: (i) gaze tracking and (ii) future frame prediction. We design a procedure that randomly generates a visual stimulus on the screen of the mobile device, while the front-facing camera simultaneously captures the head motions of the user as the stimulus is watched. Then, iTracker calculates the gaze-coordinate error, which is treated as a static feature. To compensate for the imprecise gaze coordinates caused by the low resolution of the front-facing camera, we further take advantage of PredNet to extract dynamic features between consecutive frames. To resist traditional attacks (shoulder-surfing and impersonation attacks) during mobile device authentication, we combine the static and dynamic features to train a two-class support vector machine (SVM) classifier. The experimental results show that the classifier achieves an accuracy of 98.6% in authenticating the user identity of mobile devices.<\/jats:p>","DOI":"10.3390\/s18092894","type":"journal-article","created":{"date-parts":[[2018,8,31]],"date-time":"2018-08-31T10:57:52Z","timestamp":1535713072000},"page":"2894","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["Integrating Gaze Tracking and Head-Motion Prediction for Mobile Device Authentication: A Proof of Concept"],"prefix":"10.3390","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6023-2864","authenticated-orcid":false,"given":"Zhuo","family":"Ma","sequence":"first","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710071, China"},{"name":"Shaanxi Key Laboratory of Network and System Security, Xidian University, Xi\u2019an 710071, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9246-7596","authenticated-orcid":false,"given":"Xinglong","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710071, China"}]},{"given":"Ruijie","family":"Ma","sequence":"additional","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710071, China"}]},{"given":"Zhuzhu","family":"Wang","sequence":"additional","affiliation":[{"name":"ZTE Corporation, Xi\u2019an 710114, China"}]},{"given":"Jianfeng","family":"Ma","sequence":"additional","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710071, China"},{"name":"Shaanxi Key Laboratory of Network and System Security, Xidian University, Xi\u2019an 710071, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2018,8,31]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1908","DOI":"10.1109\/TMM.2017.2692648","article-title":"Social Attribute aware Incentive Mechanisms for Video Distribution in Device-to-Device Communications","volume":"8","author":"Wu","year":"2017","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2197","DOI":"10.1109\/TMM.2017.2733300","article-title":"Socially Aware Energy Efficient Mobile Edge Collaboration for Video Distribution","volume":"10","author":"Wu","year":"2017","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"2958","DOI":"10.1109\/JIOT.2017.2768073","article-title":"Dynamic Trust Relationships Aware Data Privacy Protection in Mobile Crowd-Sensing","volume":"5","author":"Wu","year":"2017","journal-title":"IEEE Internet Things J."},{"key":"ref_4","first-page":"1","article-title":"Security analysis and improvement of bio-hashing based three-factor authentication scheme for telecare medical information systems","volume":"9","author":"Jiang","year":"2017","journal-title":"J. Am. Intell. Humaniz. Comput."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"3376","DOI":"10.1109\/ACCESS.2017.2673239","article-title":"Lightweight three-factor authentication and key agreement protocol for internet-integrated wireless sensor networks","volume":"5","author":"Jiang","year":"2017","journal-title":"IEEE Access"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"182","DOI":"10.1016\/j.compeleceng.2017.03.016","article-title":"Efficient end-to-end authentication protocol for wearable health monitoring systems","volume":"63","author":"Jiang","year":"2017","journal-title":"Comput. Electr. 
Eng."},{"key":"ref_7","first-page":"439","article-title":"A survey of password attacks and comparative analysis on methods for secure authentication","volume":"19","author":"Raza","year":"2012","journal-title":"World Appl. Sci. J."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1587\/transfun.E95.A.346","article-title":"Narrow fingerprint sensor verification with template updating technique","volume":"95","author":"Sin","year":"2012","journal-title":"IEICE Trans. Fundam."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2008","DOI":"10.1109\/TIP.2017.2788866","article-title":"Matching Contactless and Contact-Based Conventional Fingerprint Images for Biometrics Identification","volume":"4","author":"Lin","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Parkhi, O.M., Vedaldi, A., and Zisserman, A. (2015, January 10). Deep Face Recognition. Proceedings of the British Machine Vision Conference, Swansea, UK.","DOI":"10.5244\/C.29.41"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2575","DOI":"10.1109\/TIP.2018.2806229","article-title":"BULDP: Biomimetic Uncorrelated Locality Discriminant Projection for Feature Extraction in Face Recognition","volume":"27","author":"Ning","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_12","unstructured":"Delac, K., and Grgic, M. (2004, January 16\u201318). A survey of biometric recognition methods. Proceedings of the Elmar 2004. 46th International Symposium Electronics in Marine, Zadar, Croatia."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Jacob, R.J., and Karn, K.S. (2003). Eye tracking in human\u2013computer interaction and usability research: Ready to deliver the promises. Mind Eye, 573\u2013605.","DOI":"10.1016\/B978-044451020-4\/50031-1"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Majaranta, P., and Bulling, A. (2014). 
Eye tracking and eye-based human\u2013computer interaction. Advances in Physiological Computing, Springer.","DOI":"10.1007\/978-1-4471-6392-3_3"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1016\/j.cviu.2004.07.010","article-title":"Eye gaze tracking techniques for interactive applications","volume":"98","author":"Morimoto","year":"2005","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"310","DOI":"10.1016\/j.eswa.2017.09.017","article-title":"Design and implementation of an eye gesture perception system based on electrooculography","volume":"91","author":"Lv","year":"2018","journal-title":"Expert Syst. Appl."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Karthikeyan, S., Jagadeesh, V., Shenoy, R., Ecksteinz, M., and Manjunath, B.S. (2013, January 6\u201310). From where and how to what we see. Proceedings of the 2013 IEEE International Conference on Computer Vision, Karlsruhe, Germany.","DOI":"10.1109\/ICCV.2013.83"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1109\/TPAMI.2012.89","article-title":"State-of-the-art in visual attention modeling","volume":"35","author":"Borji","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"478","DOI":"10.1109\/TPAMI.2009.30","article-title":"In the eye of the beholder: A survey of models for eyes and gaze","volume":"3","author":"Hansen","year":"2010","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Krafka, K., Khosla, A., Kellnhofer, P., Kannan, H., Bhandarkar, S., Matusik, W., and Torralba, A. (arXiv, 2016). Eye tracking for everyone, arXiv.","DOI":"10.1109\/CVPR.2016.239"},{"key":"ref_21","unstructured":"Lotter, W., Kreiman, G., and Cox, D. (arXiv, 2016). 
Deep predictive coding networks for video prediction and unsupervised learning, arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"716","DOI":"10.1109\/TIFS.2015.2405345","article-title":"Attack of mechanical replicas: Liveness detection with eye movements","volume":"4","author":"Komogortsev","year":"2015","journal-title":"IEEE Trans. Inf. Forensics Secur."},{"key":"ref_23","unstructured":"Ali, A., Deravi, F., and Hoque, S. (2013, January 5\u20136). Spoofing attempt detection using gaze colocation. Proceedings of the 2013 International Conference of the BIOSIG Special Interest Group (BIOSIG), Darmstadt, Germany."},{"key":"ref_24","unstructured":"Zhang, Y., Chi, Z., and Feng, D. (2011, January 24\u201325). An Analysis of Eye Movement Based Authentication Systems. Proceedings of the International Conference on Mechanical Engineering and Technology, London, UK."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"190","DOI":"10.1016\/j.patrec.2015.06.019","article-title":"Eye movements during scene understanding for biometric identification","volume":"82","author":"Saeed","year":"2016","journal-title":"Pattern Recognit. Lett."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Bulling, A., and Gellersen, H. (2012, January 28\u201330). Towards pervasive eye tracking using low-level image features. Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA.","DOI":"10.1145\/2168556.2168611"},{"key":"ref_27","unstructured":"Zhang, Y., Bulling, A., and Gellersen, H. (May, January 27). SideWays: A gaze interface for spontaneous interaction with situated displays. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Bulling, A., and Gellersen, H. (2014, January 27\u201329). Pupil-canthi-ratio: A calibration-free method for tracking horizontal gaze direction. 
Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, Como, Italy.","DOI":"10.1145\/2598153.2598186"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Kumar, M., Garfinkel, T., Boneh, D., and Winograd, T. (2007, January 18\u201320). Reducing shoulder-surfing by using gaze-based password entry. Proceedings of the 3rd symposium on Usable privacy and security, Pittsburgh, PA, USA.","DOI":"10.1145\/1280680.1280683"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Weaver, J., Mock, K., and Hoanca, B. (2011, January 9\u201312). Gaze-based password authentication through automatic clustering of gaze points. Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA.","DOI":"10.1109\/ICSMC.2011.6084072"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Bulling, A., Alt, F., and Schmidt, A. (2012, January 5\u201310). Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA.","DOI":"10.1145\/2207676.2208712"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Boehm, A., Chen, D., Frank, M., Huang, L., Kuo, C., Lolic, T., Martinovic, I., and Song, D. (2013, January 24\u201327). Safe: Secure authentication with face and eyes. Proceedings of the 2013 International Conference on Privacy and Security in Mobile Systems, Atlantic City, NJ, USA.","DOI":"10.1109\/PRISMS.2013.6927175"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"De Luca, A., Denzel, M., and Hussmann, H. (2009, January 15\u201317). Look into my eyes!: Can you guess my password?. Proceedings of the 5th Symposium on Usable Privacy and Security, Mountain View, CA, USA.","DOI":"10.1145\/1572532.1572542"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kocejko, T., and Wtorek, J. (2012). Gaze pattern lock for elders and disabled. 
Information Technologies in Biomedicine, Springer.","DOI":"10.1007\/978-3-642-31196-3_59"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Chen, Y., Li, T., Zhang, R., Zhang, Y., and Hedgpeth, T. (2018, January 20\u201324). EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements. Proceedings of the EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements, San Francisco, CA, USA.","DOI":"10.1109\/SP.2018.00010"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Sluganovic, I., Roeschlin, M., Rasmussen, K.B., and Martinovic, I. (2016, January 24\u201328). Using reflexive eye movements for fast challenge-response authentication. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria.","DOI":"10.1145\/2976749.2978311"},{"key":"ref_37","unstructured":"Simonyan, K., and Zisserman, A. (2014, January 8\u201313). Two-stream convolutional networks for action recognition in videos. Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_38","first-page":"20","article-title":"Temporal segment networks: Towards good practices for deep action recognition","volume":"Volume 10","author":"Wang","year":"2016","journal-title":"Proceedings of the European Conference on Computer Vision"},{"key":"ref_39","unstructured":"Ma, C.Y., Chen, M.H., Kira, Z., and AlRegib, G. (arXiv, 2017). TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for Activity Recognition, arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Tesfaldet, M., Brubaker, M.A., and Derpanis, K.G. (arXiv, 2017). 
Two-stream convolutional networks for dynamic texture synthesis, arXiv.","DOI":"10.1109\/CVPR.2018.00701"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"529","DOI":"10.1037\/0096-1523.15.3.529","article-title":"Speed and accuracy of saccadic eye movements: Characteristics of impulse variability in the oculomotor system","volume":"15","author":"Abrams","year":"1989","journal-title":"J. Exp. Psychol. Hum. Percept. Perform."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Zeiler, M.D., Krishnan, D., Taylor, G.W., and Fergus, R. (2010, January 13\u201318). Deconvolutional networks. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA.","DOI":"10.1109\/CVPR.2010.5539957"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Zeiler, M.D., Taylor, G.W., and Fergus, R. (2011, January 6\u201313). Adaptive deconvolutional networks for mid and high level feature learning. Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126474"},{"key":"ref_44","unstructured":"Zeiler, M.D., and Fergus, R. (12, January 6\u20137). Visualizing and understanding convolutional networks. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_46","unstructured":"Radford, A., Metz, L., and Chintala, S. (arXiv, 2015). 
Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1007\/BF00994018","article-title":"Support-vector networks","volume":"20","author":"Cortes","year":"1995","journal-title":"Mach. Learn."},{"key":"ref_48","first-page":"2825","article-title":"Scikit-learn: Machine Learning in Python","volume":"12","author":"Pedregosa","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_49","unstructured":"Lars, B., Gilles, L., Mathieu, B., Fabian, P., Andreas, M., Olivier, G., Vlad, N., Peter, P., Alexandre, G., and Jaques, G. (2013). API design for machine learning software: Experiences from the scikit-learn project. ECML PKDD Workshop Lang. Data Min. Mach. Learn., 108\u2013122."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/18\/9\/2894\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T15:22:30Z","timestamp":1760196150000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/18\/9\/2894"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,8,31]]},"references-count":49,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2018,9]]}},"alternative-id":["s18092894"],"URL":"https:\/\/doi.org\/10.3390\/s18092894","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2018,8,31]]}}}