{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,20]],"date-time":"2025-10-20T10:27:35Z","timestamp":1760956055934,"version":"3.37.3"},"reference-count":31,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,8,3]],"date-time":"2020-08-03T00:00:00Z","timestamp":1596412800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,8,3]],"date-time":"2020-08-03T00:00:00Z","timestamp":1596412800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Group-based emotion recognition (GER) is an interesting topic in both security and social area. In this paper, a GER with hybrid optimization based recurrent fuzzy neural network is proposed which is from video sequence. In our work, by utilizing the Neural Network the emotion recognition (ER) is performed from group of people. Initially, original video frames are taken as input and pre-process it from multi user video data. From this pre-processed image, the feature extraction is done by Multivariate Local Texture Pattern (MLTP), gray-level co-occurrence matrix (GLCM), and Local Energy based Shape Histogram (LESH). After extracting the features, certain features are selected using Modified Sea-lion optimization algorithm process. Finally, recurrent fuzzy neural network (RFNN) classifier based Social Ski-Driver (SSD) optimization algorithm is proposed for classification process, SSD is used for updating the weights in the RFNN. 
Python platform is utilized to implement this work and the performance of accuracy, sensitivity, specificity, recall and precision is evaluated with some existing techniques. The proposed method accuracy is 99.16%, recall is 99.33%, precision is 99%, sensitivity is 99.93% and specificity is 99% when compared with other deep learning techniques our proposed method attains good result.<\/jats:p>","DOI":"10.1186\/s40537-020-00326-5","type":"journal-article","created":{"date-parts":[[2020,8,31]],"date-time":"2020-08-31T13:02:04Z","timestamp":1598878924000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":22,"title":["Group based emotion recognition from video sequence with hybrid optimization based recurrent fuzzy neural network"],"prefix":"10.1186","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8434-7252","authenticated-orcid":false,"given":"Velagapudi","family":"Sreenivas","sequence":"first","affiliation":[]},{"given":"Varsha","family":"Namdeo","sequence":"additional","affiliation":[]},{"given":"E. Vijay","family":"Kumar","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,8,3]]},"reference":[{"key":"326_CR1","doi-asserted-by":"crossref","unstructured":"Khorrami, P., Le Paine, T., Brady, K., Dagli, C. and Huang, T.S., 2016, September. How deep neural networks can improve emotion recognition on video data. In 2016 IEEE international conference on image processing (ICIP) (pp. 619-623). IEEE.","DOI":"10.1109\/ICIP.2016.7532431"},{"key":"326_CR2","doi-asserted-by":"crossref","unstructured":"Kahou SE, Pal C, Bouthillier X, Froumenty P, G\u00fcl\u00e7ehre \u00c7, Memisevic R, Vincent P, Courville A, Bengio Y, Ferrari RC, Mirza M. December. Combining modality specific deep neural networks for emotion recognition in video. In: Proceedings of the 15th ACM on International conference on multimodal interaction. 2013, pp. 
543\u201350.","DOI":"10.1145\/2522848.2531745"},{"key":"326_CR3","doi-asserted-by":"crossref","unstructured":"Walecki R, Rudovic O, Pavlovic V, Pantic M. Variable-state latent conditional random fields for facial expression recognition and action unit detection. In: 2015 11th IEEE international conference and workshops on automatic face and gesture recognition (FG), vol. 1. IEEE 2015, pp. 1\u20138.","DOI":"10.1109\/FG.2015.7163137"},{"key":"326_CR4","doi-asserted-by":"crossref","unstructured":"Lee J, Kim S, Kim S, Sohn K. Spatiotemporal Attention Based Deep Neural Networks for Emotion Recognition. In 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE. 2018, pp. 1513\u20137.","DOI":"10.1109\/ICASSP.2018.8461920"},{"key":"326_CR5","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1016\/j.patcog.2017.10.017","volume":"76","author":"O Gupta","year":"2018","unstructured":"Gupta O, Raviv D, Raskar R. Illumination invariants in deep video expression recognition. Pattern Recogn. 2018;76:25\u201335.","journal-title":"Pattern Recogn"},{"issue":"7","key":"326_CR6","doi-asserted-by":"publisher","first-page":"1319","DOI":"10.1109\/TMM.2016.2557721","volume":"18","author":"J Yan","year":"2016","unstructured":"Yan J, Zheng W, Xu Q, Lu G, Li H, Wang B. Sparse kernel reduced-rank regression for bimodal emotion recognition from facial expression and speech. IEEE Trans Multimedia. 2016;18(7):1319\u201329.","journal-title":"IEEE Trans Multimedia"},{"key":"326_CR7","doi-asserted-by":"crossref","unstructured":"Bernal G, Maes P. Emotional beasts: visually expressing emotions through avatars in VR. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems. 2017, pp. 2395\u2013402.","DOI":"10.1145\/3027063.3053207"},{"key":"326_CR8","doi-asserted-by":"crossref","unstructured":"Mavridou I, McGhee JT, Hamedi M, Fatoorechi M, Cleal A, Ballaguer-Balester E, Seiss E, Cox G, Nduka C. 
FACETEQ interface demo for emotion expression in VR. In: 2017 IEEE virtual reality (VR). IEEE. 2017, pp. 441\u20132","DOI":"10.1109\/VR.2017.7892369"},{"key":"326_CR9","doi-asserted-by":"crossref","unstructured":"Fonnegra RD, D\u00edaz GM. Deep learning based video spatio-temporal modeling for emotion recognition. In: International conference on human\u2013computer interaction. Cham: Springer. 2018, pp. 397\u2013408","DOI":"10.1007\/978-3-319-91238-7_32"},{"key":"326_CR10","unstructured":"Li S, Deng W. Deep facial expression recognition: a survey. arXiv preprint arXiv:1804.08348. 2018."},{"key":"326_CR11","doi-asserted-by":"crossref","unstructured":"Lv Y, Feng Z, Xu C. Facial expression recognition via deep learning. In: 2014 International conference on smart computing. IEEE. 2014, pp. 303\u20138.","DOI":"10.1109\/SMARTCOMP.2014.7043872"},{"key":"326_CR12","volume-title":"Human facial expression: an evolutionary view","author":"AJ Fridlund","year":"2014","unstructured":"Fridlund AJ. Human facial expression: an evolutionary view. New York: Academic Press; 2014."},{"issue":"5","key":"326_CR13","doi-asserted-by":"publisher","first-page":"753","DOI":"10.1007\/s11036-016-0685-9","volume":"21","author":"MS Hossain","year":"2016","unstructured":"Hossain MS, Muhammad G, Alhamid MF, Song B, Al-Mutib K. Audio-visual emotion recognition using big data towards 5G. Mobile Netw Appl. 2016;21(5):753\u201363.","journal-title":"Mobile Netw Appl"},{"key":"326_CR14","first-page":"1","volume":"9","author":"M Sajjad","year":"2019","unstructured":"Sajjad M, Zahir S, Ullah A, Akhtar Z, Muhammad K. Human behavior understanding in big multimedia data using CNN based facial expression recognition. Mobile Netw Appl. 2019;9:1\u201311.","journal-title":"Mobile Netw Appl"},{"issue":"3","key":"326_CR15","doi-asserted-by":"publisher","first-page":"431","DOI":"10.1037\/0022-3514.93.3.431","volume":"93","author":"ER Smith","year":"2007","unstructured":"Smith ER, Seger CR, Mackie DM. 
Can emotions be truly group level? Evidence regarding four conceptual criteria. J Pers Soc Psychol. 2007;93(3):431.","journal-title":"J Pers Soc Psychol"},{"key":"326_CR16","doi-asserted-by":"crossref","unstructured":"Lakshmy V, Murthy OR. Image based group happiness intensity analysis. In: Computational vision and bio inspired computing. Cham: Springer. 2018, pp. 1032\u201340.","DOI":"10.1007\/978-3-319-71767-8_88"},{"key":"326_CR17","doi-asserted-by":"crossref","unstructured":"Dhall A, Goecke R, Ghosh S, Joshi J, Hoey J, Gedeon T. From individual to group-level emotion recognition: Emotiw 5.0. In: Proceedings of the 19th ACM international conference on multimodal interaction. 2017, pp. 524\u20138.","DOI":"10.1145\/3136755.3143004"},{"key":"326_CR18","doi-asserted-by":"crossref","unstructured":"Dhall A, Kaur A, Goecke R, Gedeon T. Emotiw 2018: audio-video, student engagement and group-level affect prediction. In: Proceedings of the 20th ACM international conference on multimodal interaction. 2018, pp. 653\u20136.","DOI":"10.1145\/3242969.3264993"},{"key":"326_CR19","doi-asserted-by":"crossref","unstructured":"Nagarajan B, Oruganti VRM. Group Emotion recognition in adverse face detection. In: 2019 14th IEEE international conference on automatic face and gesture recognition (FG 2019). IEEE. 2019, pp. 1\u20135.","DOI":"10.1109\/FG.2019.8756553"},{"key":"326_CR20","doi-asserted-by":"crossref","unstructured":"Jangid M, Paharia P, Srivastava S. Video-based facial expression recognition using a deep learning approach. In: Advances in computer communication and computational sciences. Singapore: Springer. 2019, pp. 653\u201360.","DOI":"10.1007\/978-981-13-6861-5_55"},{"key":"326_CR21","doi-asserted-by":"crossref","unstructured":"Balaji B, Oruganti VRM. Multi-level feature fusion for group-level emotion recognition. In: Proceedings of the 19th ACM international conference on multimodal interaction. 2017, pp. 
583\u20136.","DOI":"10.1145\/3136755.3143013"},{"key":"326_CR22","doi-asserted-by":"crossref","unstructured":"Surace L, Patacchiola M, Battini S\u00f6nmez E, Spataro W, Cangelosi A. Emotion recognition in the wild using deep neural networks and Bayesian classifiers. In: Proceedings of the 19th ACM international conference on multimodal interaction. 2017, pp. 593\u20137.","DOI":"10.1145\/3136755.3143015"},{"key":"326_CR23","doi-asserted-by":"crossref","unstructured":"Abbas A, Chalup SK. Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis. In: Proceedings of the 19th ACM international conference on multimodal interaction. 2017, pp. 561\u20138.","DOI":"10.1145\/3136755.3143010"},{"key":"326_CR24","unstructured":"Shamsi SN, Rawat BPS, Wadhwa M. Group affect prediction using emotion heatmaps and scene information. In: Proceedings of 2018 IEEE winter applications of computer vision workshops (WACVW). 2018, pp. 77\u201383."},{"issue":"3","key":"326_CR25","doi-asserted-by":"publisher","first-page":"427","DOI":"10.1007\/s11554-015-0500-z","volume":"11","author":"L Malinski","year":"2016","unstructured":"Malinski L, Smolka B. Fast averaging peer group filter for the impulsive noise removal in color images. J Real-Time Image Proc. 2016;11(3):427\u201344.","journal-title":"J Real-Time Image Proc"},{"key":"326_CR26","doi-asserted-by":"publisher","first-page":"128","DOI":"10.5201\/ipol.2014.104","volume":"4","author":"YQ Wang","year":"2014","unstructured":"Wang YQ. An analysis of the Viola-Jones face detection algorithm. Image Processing On Line. 2014;4:128\u201348.","journal-title":"Image Processing On Line"},{"key":"326_CR27","doi-asserted-by":"crossref","unstructured":"Ibrahim FN, Zin ZM, Ibrahim N. Eye center detection using combined Viola-Jones and neural network algorithms. In: 2018 international symposium on agent, multi-agent systems and robotics (ISAMSR). IEEE. 2018, pp. 
1\u20136.","DOI":"10.1109\/ISAMSR.2018.8540543"},{"key":"326_CR28","first-page":"5","volume":"10","author":"R Masadeh","year":"2019","unstructured":"Masadeh R, Mahafzah BA, Sharieh A. Sea lion optimization algorithm. Sea. 2019;10:5.","journal-title":"Sea"},{"key":"326_CR29","doi-asserted-by":"publisher","first-page":"74991","DOI":"10.1109\/ACCESS.2020.2988717","volume":"8","author":"BM Nguyen","year":"2020","unstructured":"Nguyen BM, Tran T, Nguyen T, Nguyen G. Hybridization of galactic swarm and evolution whale optimization for global search problem. IEEE Access. 2020;8:74991\u20135010.","journal-title":"IEEE Access"},{"issue":"5","key":"326_CR30","doi-asserted-by":"publisher","first-page":"1175","DOI":"10.1109\/TFUZZ.2016.2599855","volume":"25","author":"M Pratama","year":"2016","unstructured":"Pratama M, Lu J, Lughofer E, Zhang G, Er MJ. An incremental learning of concept drifts using evolving type-2 recurrent fuzzy neural networks. IEEE Trans Fuzzy Syst. 2016;25(5):1175\u201392.","journal-title":"IEEE Trans Fuzzy Syst"},{"key":"326_CR31","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-019-04159-z","author":"A Tharwat","year":"2019","unstructured":"Tharwat A, Gabel T. Parameters optimization of support vector machines for imbalanced data using social ski driver algorithm. Neural Comput Appl. 2019. 
https:\/\/doi.org\/10.1007\/s00521-019-04159-z.","journal-title":"Neural Comput Appl"}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-020-00326-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-020-00326-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-020-00326-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,8,2]],"date-time":"2021-08-02T23:46:57Z","timestamp":1627948017000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-020-00326-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,3]]},"references-count":31,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["326"],"URL":"https:\/\/doi.org\/10.1186\/s40537-020-00326-5","relation":{},"ISSN":["2196-1115"],"issn-type":[{"type":"electronic","value":"2196-1115"}],"subject":[],"published":{"date-parts":[[2020,8,3]]},"assertion":[{"value":"17 March 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 July 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 August 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"Not Applicable.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent 
for publication"}},{"value":"The authors declare that they have no Competing interests.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"56"}}