{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,17]],"date-time":"2026-02-17T13:25:58Z","timestamp":1771334758974,"version":"3.50.1"},"reference-count":30,"publisher":"MDPI AG","issue":"22","license":[{"start":{"date-parts":[[2022,11,16]],"date-time":"2022-11-16T00:00:00Z","timestamp":1668556800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the project Safe Cities\u2014\u201dInova\u00e7\u00e3o para Construir Cidades Seguras\u201d","award":["POCI-01-0247-FEDER-041435"],"award-info":[{"award-number":["POCI-01-0247-FEDER-041435"]}]},{"name":"the project Safe Cities\u2014\u201dInova\u00e7\u00e3o para Construir Cidades Seguras\u201d","award":["COMPETE 2020"],"award-info":[{"award-number":["COMPETE 2020"]}]},{"name":"the European Regional Development Fund (ERDF)","award":["POCI-01-0247-FEDER-041435"],"award-info":[{"award-number":["POCI-01-0247-FEDER-041435"]}]},{"name":"the European Regional Development Fund (ERDF)","award":["COMPETE 2020"],"award-info":[{"award-number":["COMPETE 2020"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Many relevant sound events occur in urban scenarios, and robust classification models are required to identify abnormal and relevant events correctly. These models need to identify such events within valuable time, being effective and prompt. It is also essential to determine for how much time these events prevail. This article presents an extensive analysis developed to identify the best-performing model to successfully classify a broad set of sound events occurring in urban scenarios. Analysis and modelling of Transformer models were performed using available public datasets with different sets of sound classes. 
The Transformer models\u2019 performance was compared to the one achieved by the baseline model and end-to-end convolutional models. Furthermore, the benefits of using pre-training from image and sound domains and data augmentation techniques were identified. Additionally, complementary methods that have been used to improve the models\u2019 performance and good practices to obtain robust sound classification models were investigated. After an extensive evaluation, it was found that the most promising results were obtained by employing a Transformer model using a novel Adam optimizer with weight decay and transfer learning from the audio domain by reusing the weights from AudioSet, which led to an accuracy score of 89.8% for the UrbanSound8K dataset, 95.8% for the ESC-50 dataset, and 99% for the ESC-10 dataset, respectively.<\/jats:p>","DOI":"10.3390\/s22228874","type":"journal-article","created":{"date-parts":[[2022,11,17]],"date-time":"2022-11-17T06:24:42Z","timestamp":1668666282000},"page":"8874","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["Transformers for Urban Sound Classification\u2014A Comprehensive Performance Evaluation"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9413-3300","authenticated-orcid":false,"given":"Ana Filipa Rodrigues","family":"Nogueira","sequence":"first","affiliation":[{"name":"Faculdade de Ci\u00eancias, Universidade do Porto, Rua do Campo Alegre 1021 1055, 4169-007 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4948-550X","authenticated-orcid":false,"given":"Hugo S.","family":"Oliveira","sequence":"additional","affiliation":[{"name":"Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1094-0114","authenticated-orcid":false,"given":"Jos\u00e9 J. 
M.","family":"Machado","sequence":"additional","affiliation":[{"name":"Departamento de Engenharia Mec\u00e2nica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7603-6526","authenticated-orcid":false,"given":"Jo\u00e3o Manuel R. S.","family":"Tavares","sequence":"additional","affiliation":[{"name":"Departamento de Engenharia Mec\u00e2nica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,11,16]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Virtanen, T., Plumbley, M.D., and Ellis, D. (2018). Sound Analysis in Smart Cities. Computational Analysis of Sound Scenes and Events, Springer International Publishing.","DOI":"10.1007\/978-3-319-63450-0"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Zinemanas, P., Rocamora, M., Miron, M., Font, F., and Serra, X. (2021). An Interpretable Deep Learning Model for Automatic Sound Classification. Electronics, 10.","DOI":"10.3390\/electronics10070850"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"e12804","DOI":"10.1111\/exsy.12804","article-title":"Environmental sound classification using convolution neural networks with different integrated loss functions","volume":"39","author":"Das","year":"2021","journal-title":"Expert Syst."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Das, J.K., Ghosh, A., Pal, A.K., Dutta, S., and Chakrabarty, A. (2020, January 21\u201323). Urban Sound Classification Using Convolutional Neural Network and Long Short Term Memory Based on Multiple Features. Proceedings of the 2020 Fourth International Conference on Intelligent Computing in Data Sciences (ICDS), Fez, Morocco.","DOI":"10.1109\/ICDS50568.2020.9268723"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Mushtaq, Z., and Su, S.F. (2020). 
Efficient Classification of Environmental Sounds through Multiple Features Aggregation and Data Enhancement Techniques for Spectrogram Images. Symmetry, 12.","DOI":"10.3390\/sym12111822"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"21552","DOI":"10.1038\/s41598-021-01045-4","article-title":"Environmental sound classification using temporal-frequency attention based convolutional neural network","volume":"11","author":"Mu","year":"2021","journal-title":"Sci. Rep."},{"key":"ref_7","unstructured":"MacIntyre, J., Maglogiannis, I., Iliadis, L., and Pimenidis, E. (2019). Recognition of Urban Sound Events Using Deep Context-Aware Feature Extractors and Handcrafted Features. IFIP International Conference on Artificial Intelligence Applications and Innovations, Springer International Publishing."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"107819","DOI":"10.1016\/j.apacoust.2020.107819","article-title":"Ensemble of handcrafted and deep features for urban sound classification","volume":"175","author":"Luz","year":"2021","journal-title":"Appl. Acoust."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Gong, Y., Chung, Y., and Glass, J.R. (2021). AST: Audio Spectrogram Transformer. arXiv.","DOI":"10.21437\/Interspeech.2021-698"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"108660","DOI":"10.1016\/j.apacoust.2022.108660","article-title":"Connectogram\u2014A graph-based time dependent representation for sounds","volume":"191","author":"Aksu","year":"2022","journal-title":"Appl. Acoust."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"2450","DOI":"10.1109\/TASLP.2020.3014737","article-title":"Sound Event Detection of Weakly Labelled Data with CNN-Transformer and Automatic Threshold Optimization","volume":"28","author":"Kong","year":"2020","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. 
Process."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1186\/s13636-020-00172-6","article-title":"Multiclass audio segmentation based on recurrent neural networks for broadcast domain data","volume":"2020","author":"Gimeno","year":"2020","journal-title":"EURASIP J. Audio Speech Music Process."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"130327","DOI":"10.1109\/ACCESS.2019.2939495","article-title":"Learning Attentive Representations for Environmental Sound Classification","volume":"7","author":"Zhang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"896","DOI":"10.1016\/j.neucom.2020.08.069","article-title":"Attention based convolutional recurrent neural network for environmental sound classification","volume":"453","author":"Zhang","year":"2020","journal-title":"Neurocomputing"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Qiao, T., Zhang, S., Cao, S., and Xu, S. (2021). High Accurate Environmental Sound Classification: Sub-Spectrogram Segmentation versus Temporal-Frequency Attention Mechanism. Sensors, 21.","DOI":"10.3390\/s21165500"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"409","DOI":"10.1016\/j.neucom.2021.06.031","article-title":"Environment sound classification using an attention-based residual neural network","volume":"460","author":"Tripathi","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ristea, N.C., Ionescu, R.T., and Khan, F.S. (2022). SepTr: Separable Transformer for Audio Spectrogram Processing. arXiv.","DOI":"10.21437\/Interspeech.2022-249"},{"key":"ref_18","unstructured":"Akbari, H., Yuan, L., Qian, R., Chuang, W., Chang, S., Cui, Y., and Gong, B. (2021). VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. arXiv."},{"key":"ref_19","unstructured":"Elliott, D., Otero, C.E., Wyatt, S., and Martino, E. (2021). 
Tiny Transformers for Environmental Sound Classification at the Edge. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wyatt, S., Elliott, D., Aravamudan, A., Otero, C.E., Otero, L.D., Anagnostopoulos, G.C., Smith, A.O., Peter, A.M., Jones, W., and Leung, S. (July, January 14). Environmental Sound Classification with Tiny Transformers in Noisy Edge Environments. Proceedings of the 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA.","DOI":"10.1109\/WF-IoT51360.2021.9596007"},{"key":"ref_21","unstructured":"Park, S., Jeong, Y., and Lee, T. (2021, January 15\u201319). Many-to-Many Audio Spectrogram Transformer: Transformer for Sound Event Localization and Detection. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021, Online."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Koutini, K., Schl\u00fcter, J., Eghbal-zadeh, H., and Widmer, G. (2021). Efficient Training of Audio Transformers with Patchout. arXiv.","DOI":"10.21437\/Interspeech.2022-227"},{"key":"ref_23","unstructured":"Salamon, J., and Bello, J.P. (2021). Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"67","DOI":"10.1109\/4235.585893","article-title":"No free lunch theorems for optimization","volume":"1","author":"Wolpert","year":"1997","journal-title":"IEEE Trans. Evol. Comput."},{"key":"ref_25","unstructured":"Devlin, J., Chang, M.W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)."},{"key":"ref_26","unstructured":"Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (2017). Attention is All you Need. 
Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K.Q. (2016). Densely Connected Convolutional Networks. arXiv.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2014). Going Deeper with Convolutions. arXiv.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/22\/8874\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:19:42Z","timestamp":1760145582000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/22\/8874"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,16]]},"references-count":30,"journal-issue":{"issue":"22","published-online":{"date-parts":[[2022,11]]}},"alternative-id":["s22228874"],"URL":"https:\/\/doi.org\/10.3390\/s22228874","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,16]]}}}