{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T15:20:18Z","timestamp":1776093618850,"version":"3.50.1"},"reference-count":48,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2019,12,28]],"date-time":"2019-12-28T00:00:00Z","timestamp":1577491200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Institute for Information &amp; communications Technology Planning &amp; Evaluation","award":["Korea government (MSIT)"],"award-info":[{"award-number":["Korea government (MSIT)"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Speech is the most significant mode of communication among human beings and a potential method for human-computer interaction (HCI) using a microphone sensor. Quantifiable emotion recognition from speech signals captured by these sensors is an emerging area of research in HCI, with applications such as human-robot interaction, virtual reality, behavior assessment, healthcare, and emergency call centers, where the speaker\u2019s emotional state must be determined from an individual\u2019s speech. In this paper, we present two major contributions: (i) increasing the accuracy of speech emotion recognition (SER) compared to the state of the art, and (ii) reducing the computational complexity of the presented SER model. We propose an artificial intelligence-assisted deep stride convolutional neural network (DSCNN) architecture using the plain nets strategy to learn salient and discriminative features from spectrograms of speech signals that are enhanced in prior steps to perform better. Local hidden patterns are learned in convolutional layers with special strides to down-sample the feature maps instead of using a pooling layer, and global discriminative features are learned in fully connected layers. 
A softmax classifier is used for the classification of emotions in speech. The proposed technique is evaluated on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets, improving accuracy by 7.85% and 4.5%, respectively, while reducing the model size by 34.5 MB. These results demonstrate the effectiveness and significance of the proposed SER technique and reveal its applicability in real-world applications.<\/jats:p>","DOI":"10.3390\/s20010183","type":"journal-article","created":{"date-parts":[[2019,12,30]],"date-time":"2019-12-30T05:49:41Z","timestamp":1577684981000},"page":"183","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":279,"title":["A CNN-Assisted Enhanced Audio Signal Processing for Speech Emotion Recognition"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8020-3590","authenticated-orcid":false,"family":"Mustaqeem","sequence":"first","affiliation":[{"name":"Interaction Technology Laboratory, Department of Software, Sejong University, Seoul 05006, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5451-8815","authenticated-orcid":false,"given":"Soonil","family":"Kwon","sequence":"additional","affiliation":[{"name":"Interaction Technology Laboratory, Department of Software, Sejong University, Seoul 05006, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2019,12,28]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Grewe, L., and Hu, C. (2019, January 7). ULearn: Understanding and reacting to student frustration using deep learning, mobile vision and NLP. 
Proceedings of the Signal Processing, Sensor\/Information Fusion, and Target Recognition XXVIII, Baltimore, MD, USA.","DOI":"10.1117\/12.2518262"},{"key":"ref_2","first-page":"35","article-title":"From real to complex: Enhancing radio-based activity recognition using complex-valued CSI","volume":"15","author":"Wei","year":"2019","journal-title":"ACM Trans. Sens. Netw. (TOSN)"},{"key":"ref_3","unstructured":"Zhao, W., Ye, J., Yang, M., Lei, Z., Zhang, S., and Zhao, Z. (2018). Investigating capsule networks with dynamic routing for text classification. arXiv."},{"key":"ref_4","unstructured":"Sabour, S., Frosst, N., and Hinton, G.E. (2017, January 4\u20139). Dynamic routing between capsules. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Bae, J., and Kim, D.-S. (2018, January 2\u20136). End-to-End Speech Command Recognition with Capsule Network. Proceedings of the Interspeech, Hyderabad, India.","DOI":"10.21437\/Interspeech.2018-1888"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Fiore, U., Florea, A., and P\u00e9rez Lechuga, G. (2019). An Interdisciplinary Review of Smart Vehicular Traffic and Its Applications and Challenges. J. Sens. Actuator Netw., 8.","DOI":"10.3390\/jsan8010013"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"541","DOI":"10.1007\/s00371-014-0946-1","article-title":"Velocity-based modeling of physical interactions in dense crowds","volume":"31","author":"Kim","year":"2015","journal-title":"Vis. Comput."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"5571","DOI":"10.1007\/s11042-017-5292-7","article-title":"Deep features-based speech emotion recognition for smart affective services","volume":"78","author":"Badshah","year":"2019","journal-title":"Multimed. 
Tools Appl."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2203","DOI":"10.1109\/TMM.2014.2360798","article-title":"Learning salient features for speech emotion recognition using convolutional neural networks","volume":"16","author":"Mao","year":"2014","journal-title":"IEEE Trans. Multimed."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kang, S., Kim, D., and Kim, Y. (2019). A visual-physiology multimodal system for detecting outlier behavior of participants in a reality TV show. Int. J. Distrib. Sens. Netw., 15.","DOI":"10.1177\/1550147719864886"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Dias, M., Abad, A., and Trancoso, I. (2018, January 15\u201320). Exploring hashing and cryptonet based approaches for privacy-preserving speech emotion recognition. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8461451"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"335","DOI":"10.1007\/s10579-008-9076-6","article-title":"IEMOCAP: Interactive emotional dyadic motion capture database","volume":"42","author":"Busso","year":"2008","journal-title":"Lang. Resour. Eval."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Livingstone, S.R., and Russo, F.A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13.","DOI":"10.1371\/journal.pone.0196391"},{"key":"ref_14","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_15","first-page":"8","article-title":"Memento: An Emotion-driven Lifelogging System with Wearables","volume":"15","author":"Jiang","year":"2019","journal-title":"ACM Trans. Sens. Netw. 
(TOSN)"},{"key":"ref_16","first-page":"1","article-title":"Feature extraction methods LPC, PLP and MFCC in speech recognition","volume":"1","author":"Dave","year":"2013","journal-title":"Int. J. Adv. Res. Eng. Technol."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Luque Sendra, A., G\u00f3mez-Bellido, J., Carrasco Mu\u00f1oz, A., and Barbancho Concejero, J. (2018). Optimal Representation of Anuran Call Spectrum in Environmental Monitoring Systems Using Wireless Sensor Networks. Sensors, 18.","DOI":"10.3390\/s18061803"},{"key":"ref_18","unstructured":"Erol, B., Seyfioglu, M.S., Gurbuz, S.Z., and Amin, M. (2018, January 16\u201318). Data-driven cepstral and neural learning of features for robust micro-Doppler classification. Proceedings of the Radar Sensor Technology XXII, Orlando, FL, USA."},{"key":"ref_19","unstructured":"Liu, G.K. (2018). Evaluating Gammatone Frequency Cepstral Coefficients with Neural Networks for Emotion Recognition from Speech. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1016\/j.neucom.2017.07.050","article-title":"Speech emotion recognition based on feature selection and extreme learning machine decision tree","volume":"273","author":"Liu","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Liu, C.-L., Yin, F., Wang, D.-H., and Wang, Q.-F. (2011, January 18\u201321). CASIA online and offline Chinese handwriting databases. Proceedings of the 2011 International Conference on Document Analysis and Recognition, Beijing, China.","DOI":"10.1109\/ICDAR.2011.17"},{"key":"ref_22","unstructured":"Fahad, M., Yadav, J., Pradhan, G., and Deepak, A. (2018). DNN-HMM based Speaker Adaptive Emotion Recognition using Proposed Epoch and MFCC Features. 
arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1576","DOI":"10.1109\/TMM.2017.2766843","article-title":"Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching","volume":"20","author":"Zhang","year":"2017","journal-title":"IEEE Trans. Multimed."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Trigeorgis, G., Ringeval, F., Brueckner, R., Marchi, E., Nicolaou, M.A., Schuller, B., and Zafeiriou, S. (2016, January 20\u201325). Adieu features? end-to-end speech emotion recognition using a deep convolutional recurrent network. Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China.","DOI":"10.1109\/ICASSP.2016.7472669"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Wen, G., Li, H., Huang, J., Li, D., and Xun, E. (2017). Random deep belief networks for recognizing emotions from speech signals. Comput. Intell. Neurosci., 2017.","DOI":"10.1155\/2017\/1945630"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Zhu, L., Chen, L., Zhao, D., Zhou, J., and Zhang, W. (2017). Emotion recognition from Chinese speech for smart affective services using a combination of SVM and DBN. Sensors, 17.","DOI":"10.3390\/s17071694"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Hajarolasvadi, N., and Demirel, H. (2019). 3D CNN-Based Speech Emotion Recognition Using K-Means Clustering and Spectrograms. Entropy, 21.","DOI":"10.3390\/e21050479"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Tao, F., and Liu, G. (2018, January 15\u201320). Advanced LSTM: A study about better time dependency modeling in emotion recognition. 
Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8461750"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Sahu, S., Gupta, R., Sivaraman, G., AbdAlmageed, W., and Espy-Wilson, C. (2018). Adversarial auto-encoders for speech based emotion recognition. arXiv.","DOI":"10.21437\/Interspeech.2017-1421"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Bao, F., Neumann, M., and Vu, N.T. (2019). CycleGAN-based emotion style transfer as data augmentation for speech emotion recognition. Manuscr. Submitt. Publ., 35\u201337.","DOI":"10.21437\/Interspeech.2019-2293"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"7053","DOI":"10.1007\/s00500-016-2247-2","article-title":"SVM or deep learning? A comparative study on remote sensing image classification","volume":"21","author":"Liu","year":"2017","journal-title":"Soft Comput."},{"key":"ref_32","unstructured":"Yu, D., Seltzer, M.L., Li, J., Huang, J.-T., and Seide, F. (2013). Feature learning in deep neural networks-studies on speech recognition tasks. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"287","DOI":"10.1016\/j.isatra.2010.12.004","article-title":"Variance sensitive adaptive threshold-based PCA method for fault detection with experimental application","volume":"50","author":"Alkaya","year":"2011","journal-title":"ISA Trans."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"1533","DOI":"10.1109\/TASLP.2014.2339736","article-title":"Convolutional neural networks for speech recognition","volume":"22","author":"Mohamed","year":"2014","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_35","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. 
Res."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Latif, S., Qayyum, A., Usman, M., and Qadir, J. (2018, January 17\u201319). Cross Lingual Speech Emotion Recognition: Urdu vs. Western Languages. Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan.","DOI":"10.1109\/FIT.2018.00023"},{"key":"ref_37","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.neunet.2017.02.013","article-title":"Evaluating deep learning architectures for Speech Emotion Recognition","volume":"92","author":"Fayek","year":"2017","journal-title":"Neural Netw."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Luo, D., Zou, Y., and Huang, D. (2019, January 19). Investigation on Joint Representation Learning for Robust Feature Extraction in Speech Emotion Recognition. Proceedings of the Interspeech, Graz, Austria.","DOI":"10.21437\/Interspeech.2018-1832"},{"key":"ref_40","unstructured":"Tripathi, S., Kumar, A., Ramesh, A., Singh, C., and Yenigalla, P. (2019). Deep Learning based Emotion Recognition System Using Speech Features and Transcriptions. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Yenigalla, P., Kumar, A., Tripathi, S., Singh, C., Kar, S., and Vepa, J. (2018, January 2\u20136). Speech Emotion Recognition Using Spectrogram & Phoneme Embedding. Proceedings of the Interspeech, Hyderabad, India.","DOI":"10.21437\/Interspeech.2018-1811"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"1440","DOI":"10.1109\/LSP.2018.2860246","article-title":"3-D convolutional recurrent neural networks with attention model for speech emotion recognition","volume":"25","author":"Chen","year":"2018","journal-title":"IEEE Signal Process. 
Lett."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"3705","DOI":"10.1007\/s11042-017-5539-3","article-title":"Spectrogram based multi-task audio classification","volume":"78","author":"Zeng","year":"2019","journal-title":"Multimed. Tools Appl."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1701","DOI":"10.21437\/Interspeech.2019-3068","article-title":"Learning Temporal Clusters Using Capsule Routing for Speech Emotion Recognition","volume":"2019","author":"Jalal","year":"2019","journal-title":"Proc. Interspeech"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"104886","DOI":"10.1016\/j.knosys.2019.104886","article-title":"Bagged support vector machines for emotion recognition from speech","volume":"184","author":"Bhavan","year":"2019","journal-title":"Knowl.-Based Syst."},{"key":"ref_46","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2017, January 4\u20139). Imagenet classification with deep convolutional neural networks. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_47","unstructured":"Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. (2016). Pruning convolutional neural networks for resource efficient transfer learning. arXiv."},{"key":"ref_48","unstructured":"George, D., Shen, H., and Huerta, E. (2017). Deep Transfer Learning: A new deep learning glitch classification method for advanced LIGO. 
arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/1\/183\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:46:27Z","timestamp":1760190387000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/1\/183"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,12,28]]},"references-count":48,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2020,1]]}},"alternative-id":["s20010183"],"URL":"https:\/\/doi.org\/10.3390\/s20010183","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,12,28]]}}}