{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:50:45Z","timestamp":1760151045134,"version":"build-2065373602"},"reference-count":49,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2022,2,6]],"date-time":"2022-02-06T00:00:00Z","timestamp":1644105600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Focusing on emotion recognition, this paper addresses the task of emotion classification and its performance with respect to accuracy, by investigating the capabilities of a distributed ensemble model using precision-based weighted blending. Research on emotion recognition and classification refers to the detection of an individual\u2019s emotional state by considering various types of data as input features, such as textual data, facial expressions, vocal, gesture and physiological signal recognition, electrocardiogram (ECG) and electrodermography (EDG)\/galvanic skin response (GSR). The extraction of effective emotional features from different types of input data, as well as the analysis of large volume of real-time data, have become increasingly important tasks in order to perform accurate classification. Taking into consideration the volume and variety of the examined problem, a machine learning model that works in a distributed manner is essential. In this direction, we propose a precision-based weighted blending distributed ensemble model for emotion classification. The suggested ensemble model can work well in a distributed manner using the concepts of Spark\u2019s resilient distributed datasets, which provide quick in-memory processing capabilities and also perform iterative computations effectively. 
Regarding the model validation set, weights are assigned to the different classifiers in the ensemble model, based on their precision values. Each weight determines the importance of the respective classifier in terms of its prediction performance, while a new model is built upon the derived weights. The produced model performs the task of final prediction on the test dataset. The results show that the proposed ensemble model is sufficiently accurate in differentiating between primary emotions (such as sadness, fear, and anger) and secondary emotions. The suggested ensemble model achieved accuracies of 76.2%, 99.4%, and 99.6% on the FER-2013, CK+, and FERG-DB datasets, respectively.<\/jats:p>","DOI":"10.3390\/a15020055","type":"journal-article","created":{"date-parts":[[2022,2,6]],"date-time":"2022-02-06T20:36:47Z","timestamp":1644179807000},"page":"55","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Precision-Based Weighted Blending Distributed Ensemble Model for Emotion Classification"],"prefix":"10.3390","volume":"15","author":[{"given":"Gayathri","family":"Soman","sequence":"first","affiliation":[{"name":"Department of Computer Applications, Cochin University of Science and Technology, Kochi 682022, Kerala, India"}]},{"given":"M. V.","family":"Vivek","sequence":"additional","affiliation":[{"name":"Department of Computer Applications, Cochin University of Science and Technology, Kochi 682022, Kerala, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1843-2330","authenticated-orcid":false,"given":"M. 
V.","family":"Judy","sequence":"additional","affiliation":[{"name":"Department of Computer Applications, Cochin University of Science and Technology, Kochi 682022, Kerala, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2498-9661","authenticated-orcid":false,"given":"Elpiniki","family":"Papageorgiou","sequence":"additional","affiliation":[{"name":"Department of Energy Systems, University of Thessaly, Gaiopolis, 41500 Larissa, Greece"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9895-7606","authenticated-orcid":false,"given":"Vassilis C.","family":"Gerogiannis","sequence":"additional","affiliation":[{"name":"Department of Digital Systems, University of Thessaly, Gaiopolis, 41500 Larissa, Greece"}]}],"member":"1968","published-online":{"date-parts":[[2022,2,6]]},"reference":[{"key":"ref_1","unstructured":"(2021, December 31). SimilarNet: The European Taskforce for Creating Human\u2013Machine Interfaces Similar to Human\u2013Human Communication. Available online: http:\/\/www.similar.cc\/."},{"key":"ref_2","unstructured":"Kayalvizhi, S., and Kumar, S.S. (2017). A neural networks approach for emotion detection in humans. IOSR J. Electr. Comm. Engin., 38\u201345."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1007\/s00779-011-0479-9","article-title":"Ubiquitous emotion-aware computing","volume":"17","year":"2013","journal-title":"Pers. Ubiquit. Comput."},{"key":"ref_4","unstructured":"M\u00e9nard, M., Richard, P., Hamdi, H., Dauc\u00e9, B., and Yamaguchi, T. (2015, January 13). Emotion Recognition based on Heart Rate and Skin Conductance. Proceedings of the 2nd International Conference on Physiological Computing Systems, Angers, France."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"162","DOI":"10.1109\/TAMD.2015.2431497","article-title":"Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks","volume":"7","author":"Zheng","year":"2017","journal-title":"IEEE Trans. Auton. 
Mental Dev."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Ganaie, M.A., Hu, M., Tanveer, M., and Suganthan, P.N. (2021). Ensemble deep learning: A review. arXiv.","DOI":"10.1016\/j.engappai.2022.105151"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1007\/BF00058655","article-title":"Bagging predictors","volume":"24","author":"Breiman","year":"1996","journal-title":"Mach. Learn."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"119","DOI":"10.1006\/jcss.1997.1504","article-title":"A decision-theoretic generalization of on-line learning and an application to boosting","volume":"55","author":"Freund","year":"1997","journal-title":"J. Comput. Syst. Sci."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1109\/MIS.2009.36","article-title":"The unreasonable effectiveness of data","volume":"24","author":"Halevy","year":"2009","journal-title":"IEEE Intell. Syst."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"169","DOI":"10.1080\/02699939208411068","article-title":"An argument for basic emotions","volume":"6","author":"Ekman","year":"1992","journal-title":"Cogn. Emot."},{"key":"ref_11","first-page":"14","article-title":"Emotion recognition: A survey","volume":"3","author":"Matilda","year":"2015","journal-title":"Int. J. Adv. Comp. Res."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1161","DOI":"10.1037\/h0077714","article-title":"A circumplex model of affect","volume":"39","author":"Russell","year":"1980","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_13","first-page":"318","article-title":"Emotion classification in arousal valence model using MAHNOB-HCI database","volume":"8","author":"Wiem","year":"2017","journal-title":"Int. J. Adv. Comput. Sci. Appl."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Liu, Y., and Sourina, O. (2013, January 21\u201323). EEG Databases for Emotion Recognition. 
Proceedings of the 2013 International Conference on Cyberworlds, Yokohama, Japan.","DOI":"10.1109\/CW.2013.52"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Bahari, F., and Janghorbani, A. (2013, January 18\u201320). EEG-based Emotion Recognition Using Recurrence Plot Analysis and K Nearest Neighbor Classifier. Proceedings of the 2013 20th Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran.","DOI":"10.1109\/ICBME.2013.6782224"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Murugappan, M., and Mutawa, A. (2021). Facial geometric feature extraction based emotional expression classification using machine learning algorithms. PLoS ONE, 16.","DOI":"10.1371\/journal.pone.0247131"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Cheng, B., and Liu, G. (2008, January 16\u201318). Emotion Recognition from Surface EMG Signal Using Wavelet Transform and Neural Network. Proceedings of the 2nd International Conference on Bioinformatics and Biomedical Engineering (ICBBE), Shanghai, China.","DOI":"10.1109\/ICBBE.2008.670"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Kim, J.H., Poulose, A., and Han, D.S. (2021). The extensive usage of the facial Image threshing machine for facial emotion recognition performance. Sensors, 21.","DOI":"10.3390\/s21062026"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","article-title":"Joint face detection and alignment using multitask cascaded convolutional networks","volume":"23","author":"Zhang","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1740","DOI":"10.1109\/TIP.2012.2235848","article-title":"Local directional number pattern for face analysis: Face and expression recognition","volume":"22","author":"Rivera","year":"2013","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"541","DOI":"10.1016\/j.cviu.2010.12.001","article-title":"Local binary patterns for multi-view facial expression recognition","volume":"115","author":"Moore","year":"2011","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Happy, S.L., George, A., and Routray, A. (2012, January 27\u201329). A Real Time Facial Expression Classification System Using Local Binary Patterns. Proceedings of the 2012 4th International Conference on Intelligent Human Computer Interaction (IHCI), Kharagpur, India.","DOI":"10.1109\/IHCI.2012.6481802"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"7803","DOI":"10.1007\/s11042-016-3418-y","article-title":"Facial expression recognition based on local region specific features and support vector machines","volume":"76","author":"Ghimire","year":"2017","journal-title":"Multimed. Tools Appl."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"542","DOI":"10.1016\/j.ijar.2007.02.003","article-title":"Facial expression classification: An approach based on the fusion of facial deformations using the transferable belief model","volume":"46","author":"Hammal","year":"2007","journal-title":"Int. J. Approx. Reason."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Devi, M.K., and Prabhu, K. (2020, January 6\u20137). Face Emotion Classification Using AMSER with Artificial Neural Networks. Proceedings of the 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India.","DOI":"10.1109\/ICACCS48705.2020.9074348"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"2057","DOI":"10.1007\/s13042-017-0734-0","article-title":"Ensemble learning on visual and textual data for social image emotion classification","volume":"10","author":"Corchs","year":"2019","journal-title":"Int. J. Mach. Learn. 
Cyber."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1940015","DOI":"10.1142\/S0218001419400159","article-title":"Facial emotion recognition using an ensemble of multi-level convolutional neural networks","volume":"33","author":"Nguyen","year":"2019","journal-title":"Int. J. Pattern Recognit. Artif. Intell."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Fan, Y., Lam, J.C.K., and Li, V.O.K. (2018, January 4\u20137). Multi-region Ensemble Convolutional Neural Network for Facial Expression Recognition. Proceedings of the 27th International Conference on Artificial Neural Networks, Rhodes, Greece.","DOI":"10.1007\/978-3-030-01418-6_9"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Poulose, A., Reddy, C.S., Kim, J.H., and Han, D.S. (2021, January 17\u201320). Foreground Extraction Based Facial Emotion Recognition Using Deep Learning Xception Model. Proceedings of the 12th International Conference on Ubiquitous and Future Networks (ICUFN), Jeju Island, Korea.","DOI":"10.1109\/ICUFN49451.2021.9528706"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Poulose, A., Kim, J.H., and Han, D.S. (2021, January 20\u201322). Feature Vector Extraction Technique for Facial Emotion Recognition Using Facial Landmarks. Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea.","DOI":"10.1109\/ICTC52510.2021.9620798"},{"key":"ref_31","first-page":"1235","article-title":"MLlib: Machine learning in Apache Spark","volume":"17","author":"Meng","year":"2016","journal-title":"J. Mach. Learn. Res."},{"key":"ref_32","unstructured":"(2021, December 31). Challenges in Representation Learning: Facial Expression Recognition Challenge. 
Available online: https:\/\/www.kaggle.com\/c\/challenges-in-representation-learning-facial-expression-recognition-challenge\/data."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-specified Expression. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Aneja, D., Colburn, A., Faigin, G., Shapiro, L., and Mones, B. (2016, January 20\u201324). Modeling Stylized Character Expressions via Deep Learning. Proceedings of the Asian Conference on Computer Vision, Taipei, Taiwan.","DOI":"10.1007\/978-3-319-54184-6_9"},{"key":"ref_35","unstructured":"(2021, December 31). Metrics and Scoring: Quantifying the Quality of Predictions. Available online: https:\/\/scikit-learn.org\/stable\/modules\/model_evaluation.html#precision-recall-f-measure-metrics."},{"key":"ref_36","unstructured":"Khanzada, A., Bai, B., and Celepcikay, F.T. (2020). Facial expression recognition with deep learning. arXiv."},{"key":"ref_37","unstructured":"Pramerdorfer, C., and Kampel, M. (2016). Facial expression recognition using convolutional neural networks: State of the art. arXiv."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Pecoraro, R., Basile, V., Bono, V., and Gallo, S. (2021). Local multi-head channel self-attention for facial expression recognition. arXiv.","DOI":"10.3390\/info13090419"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Shi, J., Zhu, S., and Liang, Z. (2021). Learning to amend facial expression representation via de-albino and affinity. arXiv.","DOI":"10.23919\/CCC55666.2022.9901738"},{"key":"ref_40","unstructured":"Tang, Y. (2013). 
Deep learning using linear support vector machines. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Minaee, S., Minaei, M., and Abdolrashidi, A. (2021). Deep-emotion: Facial expression recognition using attentional convolutional network. Sensors, 21.","DOI":"10.3390\/s21093046"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Giannopoulos, P., Perikos, I., and Hatzilygeroudis, I. (2018). Deep Learning Approaches for Facial Emotion Recognition: A Case Study on FER-2013. Advances in Hybridization of Intelligent Methods, Springer.","DOI":"10.1007\/978-3-319-66790-4_1"},{"key":"ref_43","unstructured":"Pourmirzaei, M., Montazer, G.A., and Esmaili, F. (2021). Using self-supervised auxiliary tasks to improve fine-grained facial representation. arXiv."},{"key":"ref_44","unstructured":"Ding, H., Zhou, S.K., and Chellappa, R. (June, January 30). FaceNet2ExpNet: Regularizing a Deep Face Recognition Net for Expression Recognition. Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition, Washington, DC, USA."},{"key":"ref_45","first-page":"109","article-title":"Incremental boosting convolutional neural network for facial action unit recognition","volume":"29","author":"Han","year":"2016","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_46","unstructured":"Meng, Z., Liu, P., Cai, J., Han, S., and Tong, Y. (June, January 30). Identity-aware Convolutional Neural Network for Facial Expression Recognition. Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition, Washington, DC, USA."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Jung, H., Lee, S., Yim, J., Park, S., and Kim, J. (2015, January 7\u201313). Joint Fine-tuning in Deep Neural Networks for Facial Expression Recognition. 
Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.341"},{"key":"ref_48","unstructured":"Cl\u00e9ment, F., Piantanida, P., Bengio, Y., and Duhamel, P. (2018). Learning anonymized representations with adversarial neural networks. arXiv."},{"key":"ref_49","unstructured":"Hang, Z., Liu, Q., and Yang, Y. (2018, January 13\u201315). Transfer Learning with Ensemble of Multiple Feature Representations. Proceedings of the IEEE 2018 IEEE 16th International Conference on Software Engineering Research, Management and Applications (SERA), Kunming, China."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/2\/55\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:14:51Z","timestamp":1760134491000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/2\/55"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,2,6]]},"references-count":49,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2022,2]]}},"alternative-id":["a15020055"],"URL":"https:\/\/doi.org\/10.3390\/a15020055","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2022,2,6]]}}}