{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,29]],"date-time":"2025-12-29T22:08:37Z","timestamp":1767046117988,"version":"build-2065373602"},"reference-count":40,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2023,2,21]],"date-time":"2023-02-21T00:00:00Z","timestamp":1676937600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"],"award-info":[{"award-number":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"CAS Strategic Leading Science and Technology Project","award":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"],"award-info":[{"award-number":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"]}]},{"name":"High Technology Project","award":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"],"award-info":[{"award-number":["U1936106","U19A2080","XDA27040303","XDA18040400","XDB44000000","31513070501","1916312ZD00902201"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Speech enhancement tasks for audio with a low SNR are challenging. Existing speech enhancement methods are mainly designed for high SNR audio, and they usually use RNNs to model audio sequence features, which causes the model to be unable to learn long-distance dependencies, thus limiting its performance in low-SNR speech enhancement tasks. We design a complex transformer module with sparse attention to overcome this problem. Different from the traditional transformer model, this model is extended to effectively model complex domain sequences, using the sparse attention mask balance model\u2019s attention to long-distance and nearby relations, introducing the pre-layer positional embedding module to enhance the model\u2019s perception of position information, adding the channel attention module to enable the model to dynamically adjust the weight distribution between channels according to the input audio. 
The experimental results show that, in the low-SNR speech enhancement tests, our models have noticeable performance improvements in speech quality and intelligibility, respectively.<\/jats:p>","DOI":"10.3390\/s23052376","type":"journal-article","created":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T02:08:34Z","timestamp":1677031714000},"page":"2376","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["CST: Complex Sparse Transformer for Low-SNR Speech Enhancement"],"prefix":"10.3390","volume":"23","author":[{"given":"Kaijun","family":"Tan","sequence":"first","affiliation":[{"name":"Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"University of Chinese Academy of Sciences, Beijing 100089, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9750-7454","authenticated-orcid":false,"given":"Wenyu","family":"Mao","sequence":"additional","affiliation":[{"name":"Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Chinese Association of Artificial Intelligence, Beijing 100876, China"}]},{"given":"Xiaozhou","family":"Guo","sequence":"additional","affiliation":[{"name":"Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"University of Chinese Academy of Sciences, Beijing 100089, China"}]},{"given":"Huaxiang","family":"Lu","sequence":"additional","affiliation":[{"name":"Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China"},{"name":"University of Chinese Academy of Sciences, Beijing 100089, China"},{"name":"Materials and Optoelectronics Research Center, University of Chinese Academy of Sciences, Beijing 100083, China"},{"name":"College of Microelectronics, University of Chinese Academy of Sciences, Beijing 100083, China"},{"name":"Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Laboratory, Beijing 100083, China"}]},{"given":"Chi","family":"Zhang","sequence":"additional","affiliation":[{"name":"Nanjing Research Institute of Information Technology, Nanjing 210009, China"}]},{"given":"Zhanzhong","family":"Cao","sequence":"additional","affiliation":[{"name":"Nanjing Research Institute of Information Technology, Nanjing 210009, China"}]},{"given":"Xingang","family":"Wang","sequence":"additional","affiliation":[{"name":"Nanjing Research Institute of Information Technology, Nanjing 210009, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,2,21]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"883","DOI":"10.1007\/s10772-020-09674-2","article-title":"Fundamentals, present and future perspectives of speech enhancement","volume":"24","author":"Das","year":"2021","journal-title":"Int. J. Speech Technol."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Hao, X., Su, X., Horaud, R., and Li, X. (2021, January 6\u201311). Fullsubnet: A full-band and sub-band fusion model for real-time single-channel speech enhancement. Proceedings of the ICASSP 2021\u20142021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9414177"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zheng, C., Peng, X., Zhang, Y., Srinivasan, S., and Lu, Y. (2021, January 2\u20139). Interactive speech and noise modeling for speech enhancement. 
Proceedings of the AAAI Conference on Artificial Intelligence, Virtually.","DOI":"10.1609\/aaai.v35i16.17710"},{"key":"ref_4","first-page":"436","article-title":"Speech Enhancement Based on Deep Denoising Autoencoder","volume":"2013","author":"Lu","year":"2013","journal-title":"Interspeech"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_6","unstructured":"Stoller, D., Ewert, S., and Dixon, S. (2018). Wave-u-net: A multi-scale neural network for end-to-end audio source separation. arXiv."},{"key":"ref_7","unstructured":"Macartney, C., and Weyde, T. (2018). Improved speech enhancement with the wave-u-net. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Hao, X., Su, X., Wang, Z., and Zhang, H. (2020). UNetGAN: A robust speech enhancement approach in time domain for extremely low signal-to-noise ratio condition. arXiv.","DOI":"10.21437\/Interspeech.2019-1567"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1256","DOI":"10.1109\/TASLP.2019.2915167","article-title":"Conv-tasnet: Surpassing ideal time\u2013frequency magnitude masking for speech separation","volume":"27","author":"Luo","year":"2019","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Hu, Y., Liu, Y., Lv, S., Xing, M., Zhang, S., Fu, Y., Wu, J., Zhang, B., and Xie, L. (2020). DCCRN: Deep complex convolution recurrent network for phase-aware speech enhancement. arXiv.","DOI":"10.21437\/Interspeech.2020-2537"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"236","DOI":"10.1109\/TASSP.1984.1164317","article-title":"Signal estimation from modified short-time Fourier transform","volume":"32","author":"Griffin","year":"1984","journal-title":"IEEE Trans. Acoust. Speech Signal Process."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Weninger, F., Erdogan, H., Watanabe, S., Vincent, E., Roux, J.L., Hershey, J.R., and Schuller, B. (2015, January 25\u201328). Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR. Proceedings of the International Conference on Latent Variable Analysis and Signal Separation, Liberec, Czech Republic.","DOI":"10.1007\/978-3-319-22482-4_11"},{"key":"ref_13","unstructured":"Zhao, J., Huang, F., Lv, J., Duan, Y., Qin, Z., Li, G., and Tian, G. (2020, January 13\u201318). Do rnn and lstm have long memory?. Proceedings of the International Conference on Machine Learning. PMLR, Virtual Event."},{"key":"ref_14","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1872","DOI":"10.1007\/s11431-020-1647-3","article-title":"Pre-trained models for natural language processing: A survey","volume":"63","author":"Qiu","year":"2020","journal-title":"Sci. China Technol. Sci."},{"key":"ref_16","unstructured":"Liu, Y., Zhang, Y., Wang, Y., Hou, F., Yuan, J., Tian, J., Zhang, Y., Shi, Z., Fan, J., and He, Z. (2021). A survey of visual transformers. 
arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1700","DOI":"10.1109\/LSP.2020.3025020","article-title":"Improving GANs for speech enhancement","volume":"27","author":"Phan","year":"2020","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Pascual, S., Bonafonte, A., and Serra, J. (2017). SEGAN: Speech enhancement generative adversarial network. arXiv.","DOI":"10.21437\/Interspeech.2017-1428"},{"key":"ref_19","unstructured":"Fu, S.W., Liao, C.F., Tsao, Y., and Lin, S.D. (2019, January 9\u201315). Metricgan: Generative adversarial networks based black-box metric scores optimization for speech enhancement. Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Soni, M.H., Shah, N., and Patil, H.A. (2018, January 15\u201320). Time-frequency masking-based speech enhancement using generative adversarial network. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462068"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Donahue, C., Li, B., and Prabhavalkar, R. (2018, January 15\u201320). Exploring speech enhancement with generative adversarial networks for robust speech recognition. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462581"},{"key":"ref_22","unstructured":"Wang, D. (2005). Speech Separation by Humans and Machines, Springer."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Narayanan, A., and Wang, D. (2013, January 26\u201331). Ideal ratio mask estimation using deep neural networks for robust speech recognition. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6639038"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"483","DOI":"10.1109\/TASLP.2015.2512042","article-title":"Complex ratio masking for monaural speech separation","volume":"24","author":"Williamson","year":"2015","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Heymann, J., Drude, L., and Haeb-Umbach, R. (2016, January 20\u201325). Neural network based spectral mask estimation for acoustic beamforming. Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China.","DOI":"10.1109\/ICASSP.2016.7471664"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Tu, Y.H., Du, J., and Lee, C.H. (2019, January 12\u201317). DNN Training Based on Classic Gain Function for Single-channel Speech Enhancement and Recognition. Proceedings of the ICASSP 2019\u20142019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8682195"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Strake, M., Defraene, B., Fluyt, K., Tirry, W., and Fingscheidt, T. (2019, January 20\u201323). Separated Noise Suppression and Speech Restoration: Lstm-Based Speech Enhancement in Two Stages. 
Proceedings of the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA.","DOI":"10.1109\/WASPAA.2019.8937222"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Ali, M.N., Brutti, A., and Falavigna, D. (2020, January 7\u20139). Speech enhancement using dilated wave-u-net: An experimental analysis. Proceedings of the 2020 IEEE 27th Conference of Open Innovations Association (FRUCT), Trento, Italy.","DOI":"10.23919\/FRUCT49677.2020.9211072"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, M., Ma, M.Q., Li, D., Tsai, Y.H.H., and Salakhutdinov, R. (2020, January 4\u20138). Complex transformer: A framework for modeling complex-valued sequence. Proceedings of the ICASSP 2020\u20142020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9054008"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Kim, J., El-Khamy, M., and Lee, J. (2020, January 4\u20138). T-gsa: Transformer with gaussian-weighted self-attention for speech enhancement. Proceedings of the ICASSP 2020\u20142020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053591"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1152","DOI":"10.1007\/s12559-020-09817-2","article-title":"SETransformer: Speech enhancement transformer","volume":"14","author":"Yu","year":"2022","journal-title":"Cogn. Comput."},{"key":"ref_32","first-page":"17283","article-title":"Big bird: Transformers for longer sequences","volume":"33","author":"Zaheer","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_33","unstructured":"Zeng, A., Chen, M., Zhang, L., and Xu, Q. (2022). Are Transformers Effective for Time Series Forecasting?. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_35","unstructured":"Snyder, D., Chen, G., and Povey, D. (2015). Musan: A music, speech, and noise corpus. arXiv."},{"key":"ref_36","unstructured":"Veaux, C., Yamagishi, J., and MacDonald, K. (2023, February 05). CSTR VCTK Corpus: English Multi-Speaker Corpus for CSTR Voice Cloning Toolkit. Available online: https:\/\/www.semanticscholar.org\/paper\/SUPERSEDED-CSTR-VCTK-Corpus%3A-English-Multi-speaker-Veaux-Yamagishi\/d4903c15a7aba8e2c2386b2fe95edf0905144d6a."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"247","DOI":"10.1016\/0167-6393(93)90095-3","article-title":"Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems","volume":"12","author":"Varga","year":"1993","journal-title":"Speech Commun."},{"key":"ref_38","unstructured":"Pigeon, D.I.S. (2023, February 05). My Noise. Available online: https:\/\/mynoise.net\/NoiseMachines\/cafeRestaurantNoiseGenerator.php."},{"key":"ref_39","unstructured":"Loshchilov, I., and Hutter, F. (May, January 30). Fixing Weight Decay Regularization in Adam. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada. 
Available online: https:\/\/openreview.net\/forum?id=rk6qdGgCZ."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Tan, K., and Wang, D. (2018, January 2\u20136). A Convolutional Recurrent Neural Network for Real-Time Speech Enhancement. Proceedings of the Interspeech, Hyderabad, India. Available online: https:\/\/web.cse.ohio-state.edu\/~tan.650\/doc\/papers\/Tan-Wang1.interspeech18.pdf.","DOI":"10.21437\/Interspeech.2018-1405"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/5\/2376\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:38:07Z","timestamp":1760121487000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/5\/2376"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,21]]},"references-count":40,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2023,3]]}},"alternative-id":["s23052376"],"URL":"https:\/\/doi.org\/10.3390\/s23052376","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,2,21]]}}}
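The abstract in this record names three mechanisms: a sparse attention mask that balances attention between nearby and long-distance relations, a pre-layer positional embedding, and a channel attention module (the record cites Squeeze-and-Excitation, ref_34). Below is a minimal PyTorch sketch of the first and third mechanisms. It is real-valued for brevity (the paper's module operates on complex spectra), and the band/stride mask layout, all dimensions, and module names are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a sparse attention mask (local band + strided long-range links)
# and an SE-style channel attention block. Assumed shapes and hyperparameters
# are for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sparse_attention_mask(seq_len: int, local_width: int = 8, stride: int = 16) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask, True = attend. Each frame sees a local
    band of neighbours plus every `stride`-th frame for long-range context."""
    idx = torch.arange(seq_len)
    band = (idx[None, :] - idx[:, None]).abs() <= local_width   # nearby relations
    strided = (idx[None, :] % stride == 0).expand(seq_len, seq_len)  # long-distance
    return band | strided


class MaskedSelfAttention(nn.Module):
    """Single-head self-attention restricted by the sparse mask."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) * self.scale
        scores = scores.masked_fill(~mask, float("-inf"))  # disallowed pairs get zero weight
        return F.softmax(scores, dim=-1) @ v


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting over (batch, channel, time, freq)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pool per channel
        return x * w[:, :, None, None]         # excite: per-channel gate, input-dependent


if __name__ == "__main__":
    T, D = 64, 32
    out = MaskedSelfAttention(D)(torch.randn(2, T, D), sparse_attention_mask(T))
    spec = ChannelAttention(channels=16)(torch.randn(2, 16, 100, 257))
    print(out.shape, spec.shape)  # torch.Size([2, 64, 32]) torch.Size([2, 16, 100, 257])
```

With this layout, the local band keeps fine-grained attention on nearby frames while the strided columns give every frame a cheap path to distant context, which is the trade-off the abstract attributes to the sparse attention mask.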