{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,19]],"date-time":"2025-12-19T15:34:04Z","timestamp":1766158444377,"version":"build-2065373602"},"reference-count":60,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2020,2,18]],"date-time":"2020-02-18T00:00:00Z","timestamp":1581984000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Informatics"],"abstract":"<jats:p>Facial emotion recognition is a crucial task for human-computer interaction, autonomous vehicles, and a multitude of multimedia applications. In this paper, we propose a modular framework for human facial emotion recognition. The framework consists of two machine learning algorithms (for detection and classification) that can be trained offline for real-time applications. Initially, we detect faces in the images using AdaBoost cascade classifiers. We then extract neighborhood difference features (NDF), which represent the features of a face based on localized appearance information. The NDF models patterns based on the relationships between neighboring regions rather than on intensity information alone. The study focuses on the seven facial expressions that are used most extensively in day-to-day life. However, due to the modular design of the framework, it can be extended to classify any number of facial expressions. For facial expression classification, we train a random forest classifier with a latent emotional state that handles mis-\/false detections. Additionally, the proposed method is independent of gender and facial skin color for emotion recognition. Moreover, due to the intrinsic design of NDF, the proposed method is illumination and orientation invariant. 
We evaluate our method on different benchmark datasets and compare it with five reference methods. The proposed method achieves 13% and 24% higher accuracy than the reference methods on the Static Facial Expressions in the Wild (SFEW) and Real-world Affective Faces (RAF) datasets, respectively.<\/jats:p>","DOI":"10.3390\/informatics7010006","type":"journal-article","created":{"date-parts":[[2020,2,19]],"date-time":"2020-02-19T04:06:01Z","timestamp":1582085161000},"page":"6","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":56,"title":["Facial Emotion Recognition Using Hybrid Features"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9034-3909","authenticated-orcid":false,"given":"Abdulrahman","family":"Alreshidi","sequence":"first","affiliation":[{"name":"College of Computer Science and Engineering, University of Ha\u2019il, Ha\u2019il 2440, Saudi Arabia"}]},{"given":"Mohib","family":"Ullah","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Norwegian University of Science and Technology, 2815 Gj\u00f8vik, Norway"}]}],"member":"1968","published-online":{"date-parts":[[2020,2,18]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"2140","DOI":"10.1109\/TIP.2015.2416634","article-title":"Robust representation and recognition of facial emotions using extreme sparse learning","volume":"24","author":"Shojaeilangari","year":"2015","journal-title":"IEEE Trans. Image Process."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Ko, K.E., and Sim, K.B. (2010, January 20\u201322). Development of a Facial Emotion Recognition Method based on combining AAM with DBN. 
Proceedings of the 2010 International Conference on Cyberworlds (CW), Singapore.","DOI":"10.1109\/CW.2010.65"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"11843","DOI":"10.1007\/s11042-017-4834-3","article-title":"Local neighborhood difference pattern: A new feature descriptor for natural and texture image retrieval","volume":"77","author":"Verma","year":"2018","journal-title":"Multimed. Tools Appl."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1113","DOI":"10.1109\/TPAMI.2014.2366127","article-title":"Automatic analysis of facial affect: A survey of registration, representation, and recognition","volume":"37","author":"Sariyanidi","year":"2014","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Likitha, M., Gupta, S.R.R., Hasitha, K., and Raju, A.U. (2017, January 20\u201324). Speech based human emotion recognition using MFCC. Proceedings of the 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India.","DOI":"10.1109\/WiSPNET.2017.8300161"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Lotfidereshgi, R., and Gournay, P. (2017, January 5\u20139). Biologically inspired speech emotion recognition. Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA.","DOI":"10.1109\/ICASSP.2017.7953135"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"564","DOI":"10.1109\/TASL.2010.2041114","article-title":"Source\/filter model for unsupervised main Melody extraction from polyphonic audio signals","volume":"18","author":"Durrieu","year":"2010","journal-title":"IEEE Trans. Audio Speech Lang. 
Process."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"521","DOI":"10.1016\/j.ipl.2005.05.019","article-title":"Isolated word recognition with the liquid state machine: A case study","volume":"95","author":"Verstraeten","year":"2005","journal-title":"Inf. Process. Lett."},{"key":"ref_9","first-page":"4","article-title":"Emotional states associated with music: Classification, prediction of changes, and consideration in recommendation","volume":"5","author":"Deng","year":"2015","journal-title":"ACM Trans. Interact. Intell. Syst. TiiS"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Tzirakis, P., Zhang, J., and Schuller, B.W. (2018, January 15\u201320). End-to-end speech emotion recognition using deep neural networks. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462677"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1186\/s13636-018-0145-5","article-title":"Decision tree SVM model with Fisher feature selection for speech emotion recognition","volume":"2019","author":"Sun","year":"2019","journal-title":"EURASIP J. Audio, Speech Music Process."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"145","DOI":"10.1016\/j.neucom.2018.05.005","article-title":"Speech emotion recognition based on an improved brain emotion learning model","volume":"309","author":"Liu","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Ferdinando, H., Sepp\u00e4nen, T., and Alasaarela, E. (2017, January 24\u201326). Enhancing Emotion Recognition from ECG Signals using Supervised Dimensionality Reduction. Proceedings of the ICPRAM, Porto, Portugal.","DOI":"10.5220\/0006147801120118"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Kanwal, S., Uzair, M., Ullah, H., Khan, S.D., Ullah, M., and Cheikh, F.A. (2019, January 22\u201325). 
An Image Based Prediction Model for Sleep Stage Identification. Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan.","DOI":"10.1109\/ICIP.2019.8803026"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"46","DOI":"10.1016\/j.inffus.2018.09.001","article-title":"Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection","volume":"49","author":"Kanjo","year":"2019","journal-title":"Inf. Fusion"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"143","DOI":"10.1016\/j.eswa.2017.09.062","article-title":"Evolutionary computation algorithms for feature selection of EEG-based emotion recognition using mobile sensors","volume":"93","author":"Nakisa","year":"2018","journal-title":"Expert Syst. Appl."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ray, P., and Mishra, D.P. (2019). Analysis of EEG Signals for Emotion Recognition Using Different Computational Intelligence Techniques. Applications of Artificial Intelligence Techniques in Engineering, Springer.","DOI":"10.1007\/978-981-13-1822-1_49"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"40144","DOI":"10.1109\/ACCESS.2019.2904400","article-title":"Internal emotion classification using eeg signal with sparse discriminative ensemble","volume":"7","author":"Ullah","year":"2019","journal-title":"IEEE Access"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Franzoni, V., Vallverd\u00f9, J., and Milani, A. (2019, January 14\u201317). Errors, Biases and Overconfidence in Artificial Emotional Modeling. 
Proceedings of the IEEE\/WIC\/ACM International Conference on Web Intelligence-Companion Volume, Thessaloniki, Greece.","DOI":"10.1145\/3358695.3361749"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"627892","DOI":"10.1155\/2014\/627892","article-title":"EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation","volume":"2014","author":"Jirayucharoensak","year":"2014","journal-title":"Sci. World J."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"van den Broek, E.L., and Spitters, M. (2013, January 12\u201314). Physiological signals: The next generation authentication and identification methods!?. Proceedings of the 2013 European Intelligence and Security Informatics Conference (EISIC), Uppsala, Sweden.","DOI":"10.1109\/EISIC.2013.35"},{"key":"ref_22","unstructured":"Rota, P., Ullah, H., Conci, N., Sebe, N., and De Natale, F.G. (2013, January 9\u201313). Particles cross-influence for entity grouping. Proceedings of the 21st European Signal Processing Conference (EUSIPCO 2013), Marrakech, Morocco."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1016\/j.patrec.2019.01.008","article-title":"Extended Deep Neural Network for Facial Emotion Recognition","volume":"120","author":"Jain","year":"2019","journal-title":"Pattern Recognit. Lett."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"466-1","DOI":"10.2352\/ISSN.2470-1173.2019.7.IRIACV-466","article-title":"Single shot appearance model (ssam) for multi-target tracking","volume":"2019","author":"Ullah","year":"2019","journal-title":"Electron. Imaging"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Jeong, M., and Ko, B.C. (2018). Driver\u2019s Facial Expression Recognition in Real-Time for Safe Driving. Sensors, 18.","DOI":"10.3390\/s18124270"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Acharya, D., Huang, Z., Pani Paudel, D., and Van Gool, L. 
(2018, January 18\u201322). Covariance pooling for facial expression recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00077"},{"key":"ref_27","unstructured":"Ullah, M., Ullah, H., and Alseadonn, I.M. (2017, January 24\u201326). Human action recognition in videos using stable features. Proceedings of the ICPRAM, Porto, Portugal."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"668","DOI":"10.1016\/j.neucom.2017.08.015","article-title":"Intelligent facial emotion recognition based on stationary wavelet entropy and Jaya algorithm","volume":"272","author":"Wang","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1016\/j.patcog.2017.02.031","article-title":"Collaborative discriminative multi-metric learning for facial expression recognition in video","volume":"75","author":"Yan","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Samadiani, N., Huang, G., Cai, B., Luo, W., Chi, C.H., Xiang, Y., and He, J. (2019). A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors, 19.","DOI":"10.3390\/s19081863"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1016\/j.patrec.2017.10.022","article-title":"Deep spatial-temporal feature fusion for facial expression recognition in static images","volume":"119","author":"Sun","year":"2019","journal-title":"Pattern Recognit. 
Lett."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"610","DOI":"10.1016\/j.patcog.2016.07.026","article-title":"Facial expression recognition with convolutional neural networks: coping with few data and the training sample order","volume":"61","author":"Lopes","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Franzoni, V., Milani, A., Biondi, G., and Micheli, F. (2019, January 14\u201317). A Preliminary Work on Dog Emotion Recognition. Proceedings of the IEEE\/WIC\/ACM International Conference on Web Intelligence-Companion Volume, Thessaloniki, Greece.","DOI":"10.1145\/3358695.3361750"},{"key":"ref_34","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, Communications of the ACM."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1109\/TAFFC.2016.2593719","article-title":"Facial expression recognition in video with multiple feature fusion","volume":"9","author":"Chen","year":"2018","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"74","DOI":"10.1016\/j.neucom.2018.02.045","article-title":"Anomalous entities detection and localization in pedestrian flows","volume":"290","author":"Ullah","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Alshamsi, H., Kepuska, V., Alshamsi, H., and Meng, H. (2018, January 1\u20133). Automated Facial Expression and Speech Emotion Recognition App Development on Smart Phones using Cloud Computing. 
Proceedings of the 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada.","DOI":"10.1109\/IEMCON.2018.8614831"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"753","DOI":"10.1007\/s11036-016-0685-9","article-title":"Audio-visual emotion recognition using big data towards 5G","volume":"21","author":"Hossain","year":"2016","journal-title":"Mob. Netw. Appl."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"140","DOI":"10.1109\/JBHI.2014.2343154","article-title":"Smartphone-based recognition of states and state changes in bipolar disorder patients","volume":"19","author":"Muaremi","year":"2015","journal-title":"IEEE J. Biomed. Health Inform."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Sneha, H., Rafi, M., Kumar, M.M., Thomas, L., and Annappa, B. (2017, January 22\u201324). Smartphone based emotion recognition and classification. Proceedings of the 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, Tamil Nadu, India.","DOI":"10.1109\/ICECCT.2017.8117872"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"2281","DOI":"10.1109\/ACCESS.2017.2672829","article-title":"An emotion recognition system for mobile applications","volume":"5","author":"Hossain","year":"2017","journal-title":"IEEE Access"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"1591","DOI":"10.1109\/TMM.2012.2198802","article-title":"Video completion using bandelet transform","volume":"14","author":"Mosleh","year":"2012","journal-title":"IEEE Trans. Multimed."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"425","DOI":"10.1016\/j.patcog.2008.08.014","article-title":"Description of interest regions with local binary patterns","volume":"42","author":"Schmid","year":"2009","journal-title":"Pattern Recognit."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Sokolov, D., and Patkin, M. 
(2018, January 15\u201319). Real-time emotion recognition on mobile devices. Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00124"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Perikos, I., Paraskevas, M., and Hatzilygeroudis, I. (2018, January 6\u20138). Facial expression recognition using adaptive neuro-fuzzy inference systems. Proceedings of the 2018 IEEE\/ACIS 17th International Conference on Computer and Information Science (ICIS), Singapore.","DOI":"10.1109\/ICIS.2018.8466438"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Franzoni, V., Biondi, G., and Milani, A. (2017, January 3\u20136). A web-based system for emotion vector extraction. Proceedings of the International Conference on Computational Science and Its Applications, Trieste, Italy.","DOI":"10.1007\/978-3-319-62398-6_46"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Aguilar, W.G., Luna, M.A., Moya, J.F., Abad, V., Parra, H., and Ruiz, H. (February, January 30). Pedestrian detection for UAVs using cascade classifiers with meanshift. Proceedings of the 2017 IEEE 11th International Conference on Semantic Computing (ICSC), San Diego, CA, USA.","DOI":"10.1109\/ICSC.2017.83"},{"key":"ref_48","unstructured":"Viola, P., and Jones, M. (2001, January 8\u201314). Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"13","DOI":"10.1080\/2151237X.2007.10129236","article-title":"Adaptive thresholding using the integral image","volume":"12","author":"Bradley","year":"2007","journal-title":"J. Graph. 
Tools"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"349","DOI":"10.4310\/SII.2009.v2.n3.a8","article-title":"Multi-class adaboost","volume":"2","author":"Hastie","year":"2009","journal-title":"Stat. Its Interface"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1984","DOI":"10.1109\/TGRS.2002.803794","article-title":"A multiple-cascade-classifier system for a robust and partially unsupervised updating of land-cover maps","volume":"40","author":"Bruzzone","year":"2002","journal-title":"IEEE Trans. Geosci. Remote. Sens."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Cutler, A., Cutler, D.R., and Stevens, J.R. (2012). Random forests. Ensemble Machine Learning, Springer.","DOI":"10.1007\/978-1-4419-9326-7_5"},{"key":"ref_53","first-page":"1737","article-title":"Random forests, decision trees, and categorical predictors: the Absent levels problem","volume":"19","author":"Au","year":"2018","journal-title":"J. Mach. Learn. Res."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Dhall, A., Goecke, R., Joshi, J., Sikka, K., and Gedeon, T. (2014, January 12\u201316). Emotion recognition in the wild challenge 2014: Baseline, data and protocol. Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey.","DOI":"10.1145\/2663204.2666275"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/MMUL.2012.26","article-title":"Collecting large, richly annotated facial-expression databases from movies","volume":"19","author":"Dhall","year":"2012","journal-title":"IEEE Multimed."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Li, S., Deng, W., and Du, J. (2017, January 21\u201326). Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.277"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Han, J., Zhang, Z., Ren, Z., and Schuller, B. (2019, January 12\u201317). Implicit Fusion by Joint Audiovisual Training for Emotion Recognition in Mono Modality. Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682773"},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"8375","DOI":"10.1109\/ACCESS.2016.2628407","article-title":"Facial emotion recognition based on biorthogonal wavelet entropy, fuzzy support vector machine, and stratified cross validation","volume":"4","author":"Zhang","year":"2016","journal-title":"IEEE Access"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"1272","DOI":"10.1166\/jmihi.2015.1527","article-title":"Facial emotion recognition based on higher-order spectra using support vector machines","volume":"5","author":"Ali","year":"2015","journal-title":"J. Med. Imaging Health Inform."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Vivek, T., and Reddy, G.R.M. (2015, January 4\u20136). A hybrid bioinspired algorithm for facial emotion recognition using CSO-GA-PSO-SVM. 
Proceedings of the 2015 Fifth International Conference on Communication Systems and Network Technologies, Gwalior, India.","DOI":"10.1109\/CSNT.2015.124"}],"container-title":["Informatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2227-9709\/7\/1\/6\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T08:58:52Z","timestamp":1760173132000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2227-9709\/7\/1\/6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,2,18]]},"references-count":60,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2020,3]]}},"alternative-id":["informatics7010006"],"URL":"https:\/\/doi.org\/10.3390\/informatics7010006","relation":{},"ISSN":["2227-9709"],"issn-type":[{"type":"electronic","value":"2227-9709"}],"subject":[],"published":{"date-parts":[[2020,2,18]]}}}