{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T17:13:22Z","timestamp":1772298802958,"version":"3.50.1"},"reference-count":47,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2022,3,26]],"date-time":"2022-03-26T00:00:00Z","timestamp":1648252800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2018YFB1600202"],"award-info":[{"award-number":["2018YFB1600202"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>It is critical for intelligent vehicles to be capable of monitoring the health and well-being of the drivers they transport on a continuous basis. This is especially true in the case of autonomous vehicles. To address the issue, an automatic system is developed for driver\u2019s real emotion recognizer (DRER) using deep learning. The emotional values of drivers in indoor vehicles are symmetrically mapped to image design in order to investigate the characteristics of abstract expressions, expression design principles, and an experimental evaluation is conducted based on existing research on the design of driver facial expressions for intelligent products. By substituting a custom-created CNN features learning block with the base 11 layers CNN model in this paper for the development of an improved faster R-CNN face detector that detects the driver\u2019s face at a high frame per second (FPS). Transfer learning is performed in the NasNet large CNN model in order to recognize the driver\u2019s various emotions. Additionally, a custom driver emotion recognition image dataset is being developed as part of this research task. The proposed model, which is a combination of an improved faster R-CNN and transfer learning in NasNet-Large CNN architecture for DER based on facial images, enables greater accuracy than previously possible for DER based on facial images. The proposed model outperforms some recently updated state-of-the-art techniques in terms of accuracy. 
The proposed model achieved the following accuracy on various benchmark datasets: JAFFE 98.48%, CK+ 99.73%, FER-2013 99.95%, AffectNet 95.28%, and 99.15% on a custom-developed dataset.<\/jats:p>","DOI":"10.3390\/sym14040687","type":"journal-article","created":{"date-parts":[[2022,3,27]],"date-time":"2022-03-27T21:31:25Z","timestamp":1648416685000},"page":"687","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":48,"title":["Driver Emotions Recognition Based on Improved Faster R-CNN and Neural Architectural Search Network"],"prefix":"10.3390","volume":"14","author":[{"given":"Khalid","family":"Zaman","sequence":"first","affiliation":[{"name":"Information Engineering School, Chang\u2019an University, Xi\u2019an 710061, China"}]},{"given":"Zhaoyun","family":"Sun","sequence":"additional","affiliation":[{"name":"Information Engineering School, Chang\u2019an University, Xi\u2019an 710061, China"}]},{"given":"Sayyed Mudassar","family":"Shah","sequence":"additional","affiliation":[{"name":"Information Engineering School, Chang\u2019an University, Xi\u2019an 710061, China"}]},{"given":"Muhammad","family":"Shoaib","sequence":"additional","affiliation":[{"name":"Department of Computer Science and IT, CECOS University, Peshawar 25000, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1084-8289","authenticated-orcid":false,"given":"Lili","family":"Pei","sequence":"additional","affiliation":[{"name":"Information Engineering School, Chang\u2019an University, Xi\u2019an 710061, China"}]},{"given":"Altaf","family":"Hussain","sequence":"additional","affiliation":[{"name":"Institute of Computer Science and IT, The University of Agriculture, Peshawar 25000, Pakistan"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"127","DOI":"10.1016\/j.trf.2017.11.019","article-title":"Driver anger in France: The relationships between sex, gender roles, trait and state driving anger and appraisals made while driving","volume":"52","author":"Albentosa","year":"2018","journal-title":"Transp. Res. Part F Traffic Psychol. Behav."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"FakhrHosseini, S., Ko, S., Alvarez, I., and Jeon, M. (2022). Driver Emotions in Automated Vehicles. User Experience Design in the Era of Automated Driving, Springer.","DOI":"10.1007\/978-3-030-77726-5_4"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"225463","DOI":"10.1109\/ACCESS.2020.3027026","article-title":"Automatic Emotion Recognition Using Temporal Multimodal Deep Learning","volume":"8","author":"Nakisa","year":"2020","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Lu, C., Zheng, W., Li, C., Tang, C., Liu, S., Yan, S., and Zong, Y. (2018, January 16\u201320). Multiple spatio-temporal feature learning for video-based emotion recognition in the wild. Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA.","DOI":"10.1145\/3242969.3264992"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1051","DOI":"10.1007\/s12239-019-0099-3","article-title":"Methods to Detect and Reduce Driver Stress: A Review","volume":"20","author":"Chung","year":"2019","journal-title":"Int. J. Automot. Technol."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Chang, W.Y., Hsu, S.H., and Chien, J.H. (2017, January 21\u201326). 
FATAUVA-Net: An integrated deep learning framework for facial attribute recognition, action unit detection, and valence-arousal estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.246"},{"key":"ref_7","unstructured":"Kollias, D., and Zafeiriou, S. (2018). A multi-task learning & generation framework: Valence-arousal, action units & primary expressions. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Theagarajan, R., Bhanu, B., and Cruz, A. (2018, January 20\u201324). Deepdriver: Automated system for measuring valence and arousal in car driver videos. Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China.","DOI":"10.1109\/ICPR.2018.8546284"},{"key":"ref_9","unstructured":"Pavlich, C.A. (2018). A Cold Encounter: The Effects of Aversive Stimulation on Verbal and Nonverbal Leakage Cues to Deception. [Ph.D. Thesis, The University of Arizona]."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1080\/15534510.2018.1473290","article-title":"When do we see that others misrepresent how they feel? detecting deception from emotional faces with direct and indirect measures","volume":"13","author":"Stel","year":"2018","journal-title":"Soc. Influ."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Bruni, V., and Vitulano, D. (2020, January 24\u201326). SSIM based Signature of Facial Micro-Expressions. Proceedings of the International Conference on Image Analysis and Recognition, P\u00f3voa de Varzim, Portugal.","DOI":"10.1007\/978-3-030-50347-5_24"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1128","DOI":"10.3389\/fpsyg.2018.01128","article-title":"A survey of automatic facial micro-expression analysis: Datasets, methods, and challenges","volume":"9","author":"Oh","year":"2018","journal-title":"Front. Psychol."},{"key":"ref_13","first-page":"4831","article-title":"Machine Learning-based Signal Processing by Physiological Signals Detection of Stress","volume":"12","author":"Prasanthi","year":"2021","journal-title":"Turk. J. Comput. Math. Educ."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Al Machot, F., Elmachot, A., Ali, M., Al Machot, E., and Kyamakya, K. (2019). A deep-learning model for subject-independent human emotion recognition using electrodermal activity sensors. Sensors, 19.","DOI":"10.3390\/s19071659"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhong, B., Qin, Z., Yang, S., Chen, J., Mudrick, N., Taub, M., Azevedo, R., and Lobaton, E. (December, January 27). Emotion recognition with facial expressions and physiological signals. Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA.","DOI":"10.1109\/SSCI.2017.8285365"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Dzedzickis, A., Kaklauskas, A., and Bucinskas, V. (2020). Human Emotion Recognition: Review of Sensors and Methods. Sensors, 20.","DOI":"10.3390\/s20030592"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Raheel, A., Majid, M., Alnowami, M., and Anwar, S.M. (2020). Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia. 
Sensors, 20.","DOI":"10.3390\/s20144037"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1710","DOI":"10.1109\/TCBB.2020.3018137","article-title":"Subject-Independent Emotion Recognition of EEG Signals Based on Dynamic Empirical Convolutional Neural Network","volume":"18","author":"Liu","year":"2020","journal-title":"IEEE\/ACM Trans. Comput. Biol. Bioinform."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"33002","DOI":"10.1109\/ACCESS.2020.2974009","article-title":"Emotion Recognition From Multi-Channel EEG Signals by Exploiting the Deep Belief-Conditional Random Field Framework","volume":"8","author":"Chao","year":"2020","journal-title":"IEEE Access"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"710","DOI":"10.1166\/jmihi.2020.2922","article-title":"A Novel Fuzzy Rough Nearest Neighbors Emotion Recognition Approach Based on Multimodal Wearable Biosensor Network","volume":"10","author":"Zheng","year":"2020","journal-title":"J. Med. Imaging Heal. Inform."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Al Machot, F., Ali, M., Ranasinghe, S., Mosa, A.H., and Kyandoghere, K. (2018, January 26\u201329). Improving subject-independent human emotion recognition using electrodermal activity sensors for active and assisted living. Proceedings of the 11th Pervasive Technologies Related to Assistive Environments Conference, Corfu, Greece.","DOI":"10.1145\/3197768.3201523"},{"key":"ref_22","first-page":"57","article-title":"Using Deep Convolutional Neural Network for Emotion Detection on a Physiological Signals Dataset (AMIGOS)","volume":"7","author":"Abdulhay","year":"2018","journal-title":"IEEE Access"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Rayatdoost, S., Rudrauf, D., and Soleymani, M. (2020, January 25\u201329). Multimodal gated information fusion for emotion recognition from EEG signals and facial behaviors. Proceedings of the 2020 International Conference on Multimodal Interaction, Online.","DOI":"10.1145\/3382507.3418867"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1109\/TAFFC.2019.2916015","article-title":"Utilizing Deep Learning Towards Multi-Modal Bio-Sensing and Vision-Based Affective Computing","volume":"13","author":"Siddharth","year":"2022","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"134051","DOI":"10.1109\/ACCESS.2020.3007109","article-title":"Affective robot story-telling human-robot interaction: Exploratory real-time emotion estimation analysis using facial expressions and physiological signals","volume":"8","year":"2020","journal-title":"IEEE Access"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Comas, J., Aspandi, D., and Binefa, X. (2020, January 16\u201320). End-to-end facial and physiological model for affective computing and applications. Proceedings of the 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina.","DOI":"10.1109\/FG47880.2020.00001"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Huang, L., Polanco, M., and Clee, T.E. (2018, January 6\u20138). Initial experiments on improving seismic data inversion with deep learning. 
Proceedings of the 2018 New York Scientific Data Summit (NYSDS), New York, NY, USA.","DOI":"10.1109\/NYSDS.2018.8538956"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"243","DOI":"10.1016\/j.cmpb.2018.05.024","article-title":"Fine-grained leukocyte classification with deep residual learning for microscopic images","volume":"162","author":"Qin","year":"2018","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"168","DOI":"10.1080\/1612197X.2020.1854818","article-title":"The circumplex model of affect in physical activity contexts: A systematic review","volume":"20","author":"Evmenenko","year":"2022","journal-title":"Int. J. Sport Exerc. Psychol."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","article-title":"AffectNet: A dataset for facial expression, valence, and arousal computing in the wild","volume":"10","author":"Mollahosseini","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Sharma, R., Rajvaidya, H., Pareek, P., and Thakkar, A. (2019). A comparative study of machine learning techniques for emotion recognition. Emerging Research in Computing, Information, Communication and Applications, Springer.","DOI":"10.1007\/978-981-13-6001-5_37"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Kosti, R., Alvarez, J.M., Recasens, A., and Lapedriza, A. (2017, January 21\u201326). EMOTIC: Emotions in Context dataset. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.285"},{"key":"ref_35","unstructured":"Song, S., Jaiswal, S., Sanchez, E., Tzimiropoulos, G., Shen, L., and Valstar, M. (2021). Self-supervised Learning of Person-specific Facial Dynamics for Automatic Personality Recognition. IEEE Trans. Affect. Comput., preprint."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Song, T., Lu, G., and Yan, J. (2020, January 15\u201317). Emotion recognition based on physiological signals using convolution neural networks. Proceedings of the 2020 12th International Conference on Machine Learning and Computing, Shenzhen, China.","DOI":"10.1145\/3383972.3384003"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Jeong, D., Kim, B.-G., and Dong, S.-Y. (2020). Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition. Sensors, 20.","DOI":"10.3390\/s20071936"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Riaz, M.N., Shen, Y., Sohail, M., and Guo, M. (2020). eXnet: An Efficient Approach for Emotion Recognition in the Wild. Sensors, 20.","DOI":"10.3390\/s20041087"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Patlar Akbulut, F. (2022). 
Hybrid deep convolutional model-based emotion recognition using multiple physiological signals. Comput. Methods Biomech. Biomed. Eng., online ahead of print.","DOI":"10.1080\/10255842.2022.2032682"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Huang, Y., Yang, J., Liu, S., and Pan, J. (2019). Combining facial expressions and electroencephalography to enhance emotion recognition. Future Internet, 11.","DOI":"10.3390\/fi11050105"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Bandyopadhyay, S., Thakur, S.S., and Mandal, J.K. (2022). Online Recommendation System Using Human Facial Expression Based Emotion Detection: A Proposed Method. International Conference on Advanced Computing Applications, Springer.","DOI":"10.1007\/978-981-16-5207-3_38"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1109\/JBHI.2017.2688239","article-title":"DREAMER: A dataset for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices","volume":"22","author":"Katsigiannis","year":"2017","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"106568","DOI":"10.1016\/j.aap.2022.106568","article-title":"Global lessons learned from naturalistic driving studies to advance traffic safety and operation research: A systematic review","volume":"167","author":"Ahmed","year":"2022","journal-title":"Accid. Anal. Prev."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Swapna, M., Viswanadhula, U.M., Aluvalu, R., Vardharajan, V., and Kotecha, K. (2022). Bio-Signals in Medical Applications and Challenges Using Artificial Intelligence. J. Sens. Actuator Networks, 11.","DOI":"10.3390\/jsan11010017"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Sciaraffa, N., Di Flumeri, G., Germano, D., Giorgi, A., Di Florio, A., Borghini, G., Vozzi, A., Ronca, V., Varga, R., and van Gasteren, M. (2022). Validation of a Light EEG-Based Measure for Real-Time Stress Monitoring during Realistic Driving. Brain Sci., 12.","DOI":"10.3390\/brainsci12030304"},{"key":"ref_46","unstructured":"Stoychev, S., and Gunes, H. (2022). The Effect of Model Compression on Fairness in Facial Expression Recognition. arXiv."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.patrec.2022.02.010","article-title":"Data-aware relation learning-based graph convolution neural network for facial action unit recognition","volume":"155","author":"Jia","year":"2022","journal-title":"Pattern Recognit. Lett."}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/14\/4\/687\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:43:52Z","timestamp":1760136232000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/14\/4\/687"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,26]]},"references-count":47,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2022,4]]}},"alternative-id":["sym14040687"],"URL":"https:\/\/doi.org\/10.3390\/sym14040687","relation":{},"ISSN":["2073-8994"],"issn-type":[{"value":"2073-8994","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,26]]}}}
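For context, the record above is a standard Crossref REST API work object. The following minimal Python sketch is an illustration only (it assumes the public api.crossref.org works endpoint and the requests library; neither is part of the record itself) and shows how such a record can be retrieved and how the fields visible above, such as the title, authors, abstract, and reference count, can be read from it.

import requests

DOI = "10.3390/sym14040687"  # DOI of the work described in the record above

# Fetch the same work object from the Crossref REST API.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # corresponds to the "message" object above

# Read out a few of the fields visible in the record.
title = work["title"][0]
authors = [f'{a.get("given", "")} {a.get("family", "")}'.strip()
           for a in work.get("author", [])]
abstract_jats = work.get("abstract", "")       # JATS-tagged string ("<jats:p>...</jats:p>")
n_refs = work.get("reference-count")           # 47 for this record
cited_by = work.get("is-referenced-by-count")  # citation count at indexing time

print(title)
print("; ".join(authors))
print(f"references deposited: {n_refs}, cited by: {cited_by}")

The field names used here ("title", "author", "abstract", "reference-count", "is-referenced-by-count") are taken directly from the record above; live values such as the citation count may differ from the snapshot shown, since the record is re-indexed over time.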