{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,31]],"date-time":"2026-01-31T00:46:12Z","timestamp":1769820372644,"version":"3.49.0"},"reference-count":31,"publisher":"SAGE Publications","issue":"5","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2022,9,22]]},"abstract":"<jats:p>Human emotion recognition with the evaluation of speech signals is an emerging topic in recent decades. Emotion recognition through speech signals is relatively confusing because of the speaking style, voice quality, cultural background of the speaker, environment, etc. Even though numerous signal processing methods and frameworks exists to detect and characterize the speech signal\u2019s emotions, they do not attain the full speech emotion recognition (SER) accuracy and success rate. This paper proposes a novel algorithm, namely the deep ganitrus algorithm (DGA), to perceive the various categories of emotions from the input speech signal for better accuracy. DGA combines independent component analysis with fisher criterion for feature extraction and deep belief network with wake sleep for emotion classification. This algorithm is inspired by the elaeocarpus ganitrus (rudraksha seed), which has 1 to 21 lines. The single line bead is rarest to find, analogously finding a single emotion from the speech signal is also complex. The proposed DGA is experimentally verified on the Berlin database. 
Finally, the evaluation results were compared with existing frameworks, and the proposed algorithm achieves better recognition accuracy than all other current algorithms.<\/jats:p>","DOI":"10.3233\/jifs-201491","type":"journal-article","created":{"date-parts":[[2022,5,31]],"date-time":"2022-05-31T12:43:44Z","timestamp":1654001024000},"page":"5353-5368","source":"Crossref","is-referenced-by-count":2,"title":["Deep ganitrus algorithm for speech emotion recognition"],"prefix":"10.1177","volume":"43","author":[{"given":"Shilpi","family":"Shukla","sequence":"first","affiliation":[{"name":"Department of Electronics & Communication Engineering, Mahatma Gandhi Mission\u2019s College of Engineering & Technology, A-9 Sector 62, Noida, India"}]},{"given":"Madhu","family":"Jain","sequence":"additional","affiliation":[{"name":"Department of Electronics & Communication Engineering, Jaypee Institute of Information Technology, A-10, Sector 62, Noida (Uttar Pradesh), India"}]}],"member":"179","reference":[{"key":"10.3233\/JIFS-201491_ref1","unstructured":"Bjeki\u0107 D., Zlati\u0107 L. and Bojovi\u0107 M., Students-teachers\u2019 communication competence: basic social communication skills and interaction involvement, Journal of Educational Sciences & Psychology 10(1) (2020)."},{"key":"10.3233\/JIFS-201491_ref2","doi-asserted-by":"crossref","unstructured":"B\u0105k H. Emotional Prosody Processing in Nonnative English Speakers. In Emotional Prosody Processing for Non-Native English Speakers (2016), 141\u2013169, Springer, Cham.","DOI":"10.1007\/978-3-319-44042-2_7"},{"issue":"9","key":"10.3233\/JIFS-201491_ref3","doi-asserted-by":"crossref","first-page":"421","DOI":"10.1177\/0037549704044081","article-title":"Toward an integrative multimodeling interface: A human-computer interface approach to interrelating model structures","volume":"80","author":"Fishwick","year":"2004","journal-title":"Simulation"},{"key":"10.3233\/JIFS-201491_ref5","unstructured":"Adler A. 
Understanding human nature: The psychology of personality. GENERAL PRESS (2020)."},{"issue":"4","key":"10.3233\/JIFS-201491_ref6","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1097\/01376517-200208000-00003","article-title":"Predictors of burden for caregivers of patients with Parkinson\u2019s disease","volume":"34","author":"Edwards","year":"2002","journal-title":"Journal of Neuroscience Nursing"},{"issue":"8","key":"10.3233\/JIFS-201491_ref7","first-page":"2404","article-title":"Increasing the performance of speech recognition system by using different optimization techniques to redesign artificial neural network","volume":"97","author":"Shukla","year":"2019","journal-title":"Journal of Theoretical and Applied Information Technology"},{"issue":"1","key":"10.3233\/JIFS-201491_ref10","doi-asserted-by":"crossref","first-page":"186","DOI":"10.1109\/TASL.2007.909282","article-title":"Unsupervised pattern discovery in speech","volume":"16","author":"Park","year":"2007","journal-title":"IEEE Transactions on Audio, Speech, and Language Processing"},{"key":"10.3233\/JIFS-201491_ref11","doi-asserted-by":"crossref","first-page":"149","DOI":"10.1016\/j.eswa.2016.10.035","article-title":"A new hybrid PSO assisted biogeography-based optimization for emotion and stress recognition from speech signal","volume":"69","author":"Yogesh","year":"2017","journal-title":"Expert Systems with Applications"},{"issue":"4","key":"10.3233\/JIFS-201491_ref12","doi-asserted-by":"crossref","first-page":"959","DOI":"10.1007\/s10772-019-09639-0","article-title":"A novel system for effective speech recognition based on artificial neural network and opposition artificial bee colony algorithm","volume":"22","author":"Shukla","year":"2019","journal-title":"International Journal of Speech Technology"},{"issue":"5","key":"10.3233\/JIFS-201491_ref14","doi-asserted-by":"crossref","first-page":"560","DOI":"10.1049\/iet-spr.2009.0030","article-title":"Novel class of stable wideband recursive 
digital integrators and differentiators","volume":"4","author":"Gupta","year":"2010","journal-title":"IET Signal Processing"},{"key":"10.3233\/JIFS-201491_ref15","doi-asserted-by":"crossref","first-page":"107112","DOI":"10.1109\/ACCESS.2020.3000322","article-title":"An effective training scheme for deep neural network in edge computing enabled Internet of medical things (IoMT) systems","volume":"8","author":"Pustokhina","year":"2020","journal-title":"IEEE Access"},{"key":"10.3233\/JIFS-201491_ref16","doi-asserted-by":"crossref","first-page":"12","DOI":"10.14569\/IJACSA.2019.0101249","article-title":"Accurate speech emotion recognition by using brain-inspired decision-making spiking neural network","volume":"10","author":"Jain","year":"2019","journal-title":"International Journal of Advanced Computer Science and Applications"},{"key":"10.3233\/JIFS-201491_ref17","doi-asserted-by":"crossref","first-page":"152423","DOI":"10.1109\/ACCESS.2020.3017462","article-title":"End-to-end speech emotion recognition with gender information","volume":"8","author":"Sun","year":"2020","journal-title":"IEEE Access"},{"issue":"6","key":"10.3233\/JIFS-201491_ref18","doi-asserted-by":"crossref","first-page":"1576","DOI":"10.1109\/TMM.2017.2766843","article-title":"Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching","volume":"20","author":"Zhang","year":"2017","journal-title":"IEEE Transactions on Multimedia"},{"key":"10.3233\/JIFS-201491_ref19","doi-asserted-by":"crossref","first-page":"114683","DOI":"10.1016\/j.eswa.2021.114683","article-title":"Speech emotion recognition using recurrent neural networks with directional self-attention","volume":"173","author":"Li","year":"2021","journal-title":"Expert Systems with Applications"},{"issue":"1","key":"10.3233\/JIFS-201491_ref20","first-page":"183","article-title":"A CNN-assisted enhanced audio signal processing for speech emotion 
recognition,","volume":"20","author":"Kwon","year":"2020","journal-title":"Sensors"},{"issue":"1\/2","key":"10.3233\/JIFS-201491_ref21","doi-asserted-by":"crossref","first-page":"14","DOI":"10.17743\/jaes.2019.0043","article-title":"Continuous speech emotion recognition with convolutional neural networks","volume":"68","author":"Vryzas","year":"2020","journal-title":"Journal of the Audio Engineering Society"},{"key":"10.3233\/JIFS-201491_ref22","doi-asserted-by":"crossref","first-page":"79861","DOI":"10.1109\/ACCESS.2020.2990405","article-title":"Clustering-based speech emotion recognition by incorporating learned features and deep BiLSTM, 1\u2013","volume":"8","author":"Sajjad","year":"2020","journal-title":"IEEE Access"},{"key":"10.3233\/JIFS-201491_ref24","doi-asserted-by":"crossref","first-page":"150","DOI":"10.1016\/j.ins.2019.09.005","article-title":"Two-layer fuzzy multiple random forest for speech emotion recognition in human-robot interaction","volume":"509","author":"Chen","year":"2020","journal-title":"Information Sciences"},{"issue":"1","key":"10.3233\/JIFS-201491_ref26","doi-asserted-by":"crossref","first-page":"205","DOI":"10.3390\/app10010205","article-title":"An ensemble model for multi-level speech emotion recognition","volume":"10","author":"Zheng","year":"2020","journal-title":"Applied Sciences"},{"key":"10.3233\/JIFS-201491_ref28","doi-asserted-by":"crossref","first-page":"2697","DOI":"10.1109\/TASLP.2020.3023632","article-title":"Semi-supervised speech emotion recognition with ladder networks","volume":"28","author":"Parthasarathy","year":"2020","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"issue":"4","key":"10.3233\/JIFS-201491_ref29","doi-asserted-by":"crossref","first-page":"5175","DOI":"10.3233\/JIFS-191753","article-title":"A novel stochastic deep conviction network for emotion recognition in speech signal","volume":"38","author":"Shukla","year":"2020","journal-title":"Journal of Intelligent & Fuzzy 
Systems"},{"issue":"1","key":"10.3233\/JIFS-201491_ref30","first-page":"55","article-title":"Elaeocarpus ganitrus(Rudraksha): A reservoir plant with their pharmacological effects","volume":"34","author":"Hardainiyan","year":"2015","journal-title":"Int J Pharm Sci Rev Res"},{"issue":"2","key":"10.3233\/JIFS-201491_ref31","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1016\/S1364-6613(00)01813-1","article-title":"Independent component analysis: an introduction","volume":"6","author":"Stone","year":"2002","journal-title":"Trends in Cognitive Sciences"},{"issue":"1","key":"10.3233\/JIFS-201491_ref32","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/s13636-018-0145-5","article-title":"Decision tree SVM model with Fisher feature selection for speech emotion recognition","volume":"2019","author":"Sun","year":"2019","journal-title":"EURASIP Journal on Audio, Speech, and Music Processing"},{"key":"10.3233\/JIFS-201491_ref33","doi-asserted-by":"crossref","first-page":"1149","DOI":"10.1109\/ICMLC.2007.4370317","article-title":"August. Feature selection by combining Fisher criterion and principal feature analysis","volume":"2","author":"Wang","year":"2007","journal-title":"In 2007 International Conference on Machine Learning and Cybernetics"},{"issue":"1","key":"10.3233\/JIFS-201491_ref36","doi-asserted-by":"crossref","first-page":"69","DOI":"10.1109\/TAFFC.2015.2392101","article-title":"Speech emotion recognition using Fourier parameters","volume":"6","author":"Wang","year":"2015","journal-title":"IEEE Transactions on Affective Computing"},{"key":"10.3233\/JIFS-201491_ref37","doi-asserted-by":"crossref","first-page":"1158","DOI":"10.1126\/science.7761831","article-title":"The \u201dwake-sleep\u201d algorithm for unsupervised neural networks","volume":"268","author":"Hinton","year":"1995","journal-title":"Science"},{"key":"10.3233\/JIFS-201491_ref38","unstructured":"Bourez C. Deep learning with Theano. Packt Publishing Ltd. 
(2017)."},{"issue":"3","key":"10.3233\/JIFS-201491_ref40","first-page":"1158","article-title":"Ayadi, M.S. Kamel and F. Karray, Survey on speech emotion recognition: Features, classification schemes, and databases","volume":"44","author":"El","year":"2011","journal-title":"Pattern Recognition"},{"issue":"4","key":"10.3233\/JIFS-201491_ref42","doi-asserted-by":"crossref","first-page":"779","DOI":"10.1007\/s10772-016-9368-y","article-title":"FDBN: Design and development of Fractional Deep Belief Networks for speaker emotion recognition","volume":"19","author":"Mannepalli","year":"2016","journal-title":"International Journal of Speech Technology"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-201491","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,30]],"date-time":"2026-01-30T12:48:43Z","timestamp":1769777323000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-201491"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,22]]},"references-count":31,"journal-issue":{"issue":"5"},"URL":"https:\/\/doi.org\/10.3233\/jifs-201491","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,22]]}}}