{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:02:04Z","timestamp":1760144524477,"version":"build-2065373602"},"reference-count":22,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2024,4,28]],"date-time":"2024-04-28T00:00:00Z","timestamp":1714262400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Future Internet"],"abstract":"<jats:p>Speech separation, sometimes known as the \u201ccocktail party problem\u201d, is the process of separating individual speech signals from an audio mixture that includes ambient noises and several speakers. The goal is to extract the target speech in this complicated sound scenario and either make it easier to understand or increase its quality so that it may be used in subsequent processing. Speech separation on overlapping audio data is important for many speech-processing tasks, including natural language processing, automatic speech recognition, and intelligent personal assistants. New speech separation algorithms are often built on a deep neural network (DNN) structure, which seeks to learn the complex relationship between the speech mixture and any specific speech source of interest. DNN-based speech separation algorithms outperform conventional statistics-based methods, although they typically need a lot of processing and\/or a larger model size. This study presents a new end-to-end speech separation network called ESC-MASD-Net (effective speaker separation through convolutional multi-view attention and SuDoRM-RF network), which has relatively fewer model parameters compared with the state-of-the-art speech separation architectures. The network is partly inspired by the SuDoRM-RF++ network, which uses multiple time-resolution features with downsampling and resampling for effective speech separation. ESC-MASD-Net incorporates the multi-view attention and residual conformer modules into SuDoRM-RF++. Additionally, the U-Convolutional block in ESC-MASD-Net is refined with a conformer layer. Experiments conducted on the WHAM! dataset show that ESC-MASD-Net outperforms SuDoRM-RF++ significantly in the SI-SDRi metric. 
Furthermore, the use of the conformer layer has also improved the performance of ESC-MASD-Net.<\/jats:p>","DOI":"10.3390\/fi16050151","type":"journal-article","created":{"date-parts":[[2024,4,29]],"date-time":"2024-04-29T04:26:16Z","timestamp":1714364776000},"page":"151","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Effective Monoaural Speech Separation through Convolutional Top-Down Multi-View Network"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0009-0003-2294-4221","authenticated-orcid":false,"given":"Aye Nyein","family":"Aung","sequence":"first","affiliation":[{"name":"Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan"}]},{"given":"Che-Wei","family":"Liao","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9366-3070","authenticated-orcid":false,"given":"Jeih-Weih","family":"Hung","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan"}]}],"member":"1968","published-online":{"date-parts":[[2024,4,28]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1849","DOI":"10.1109\/TASLP.2014.2352935","article-title":"On training targets for supervised speech separation","volume":"22","author":"Wang","year":"2014","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Hershey, J.R., Chen, Z., Roux, J.L., and Watanabe, S. (2016). Deep clustering: Discriminative embeddings for segmentation and separation. arXiv.","DOI":"10.1109\/ICASSP.2016.7471631"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Isik, Y., Roux, J.L., Chen, Z., Watanabe, S., and Hershey, J.R. (2016). Single-channel multi-speaker separation using deep clustering. arXiv.","DOI":"10.21437\/Interspeech.2016-1176"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Luo, Y., and Mesgarani, N. (2018). Tasnet: Time-domain audio separation network for real-time, single-channel speech separation. arXiv.","DOI":"10.1109\/ICASSP.2018.8462116"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1256","DOI":"10.1109\/TASLP.2019.2915167","article-title":"Conv-TasNet: Surpassing ideal time\u2013frequency magnitude masking for speech separation","volume":"27","author":"Luo","year":"2019","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_6","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Subakan, C., Ravanelli, M., Cornell, S., Bronzi, M., and Zhong, J. (2021). Attention is all you need in speech separation. arXiv.","DOI":"10.1109\/ICASSP39728.2021.9413901"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Chen, J., Mao, Q., and Liu, D. (2020, January 25\u201329). Dual-path Transformer network: Direct context-aware modeling for end-to-end monaural speech separation. Proceedings of the Interspeech 2020, Shanghai, China. Available online: http:\/\/www.interspeech2020.org\/uploadfile\/pdf\/Wed-2-4-6.pdf.","DOI":"10.21437\/Interspeech.2020-2205"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Luo, Y., Chen, Z., and Yoshioka, T. (2020). 
{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Luo, Y., Chen, Z., and Yoshioka, T. (2020). Dual-path RNN: Efficient long sequence modeling for time-domain single-channel speech separation. arXiv.","DOI":"10.1109\/ICASSP40776.2020.9054266"},
{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Maldonado, A., Rascon, C., and Velez, I. (2020). Lightweight online separation of the sound source of interest through BLSTM-based binary masking. arXiv.","DOI":"10.13053\/cys-24-3-3485"},
{"key":"ref_11","unstructured":"Li, K., Yang, R., and Hu, X. (2023). An efficient encoder-decoder architecture with top-down attention for speech separation. arXiv."},
{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tzinis, E., Wang, Z., and Smaragdis, P. (2020). Sudo rm -rf: Efficient networks for universal audio source separation. arXiv.","DOI":"10.1109\/MLSP49062.2020.9231900"},
{"key":"ref_13","doi-asserted-by":"crossref","first-page":"245","DOI":"10.1007\/s11265-021-01683-x","article-title":"Compute and memory efficient universal sound source separation","volume":"94","author":"Tzinis","year":"2022","journal-title":"J. Signal Process. Syst."},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Park, H.J., Kang, B.H., Shin, W., Kim, J.S., and Han, S.W. (2022). MANNER: Multi-view attention network for noise erasure. arXiv.","DOI":"10.1109\/ICASSP43922.2022.9747120"},
{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ravenscroft, W., Goetze, S., and Hain, T. (2023). On time domain conformer models for monaural speech separation in noisy reverberant acoustic environments. arXiv.","DOI":"10.1109\/ASRU57964.2023.10389669"},
{"key":"ref_16","unstructured":"Wichern, G., Antognini, J., Flynn, M., Zhu, L.R., McQuinn, E., Crow, D., Manilow, E., and Roux, J.L. (2019). WHAM!: Extending speech separation to noisy environments. arXiv."},
{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Yu, D., Kolb\u00e6k, M., Tan, Z., and Jensen, J. (2017). Permutation invariant training of deep models for speaker-independent multi-talker speech separation. arXiv.","DOI":"10.1109\/ICASSP.2017.7952154"},
{"key":"ref_18","unstructured":"Available online: https:\/\/github.com\/etzinis\/sudo_rm_rf (accessed on 30 March 2024)."},
{"key":"ref_19","unstructured":"Available online: https:\/\/github.com\/winddori2002\/MANNER (accessed on 30 March 2024)."},
{"key":"ref_20","unstructured":"Available online: https:\/\/github.com\/jwr1995\/pubsep (accessed on 30 March 2024)."},
{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhao, S., and Ma, B. (2023). MossFormer: Pushing the Performance Limit of Monaural Speech Separation using Gated Single-Head Transformer with Convolution-Augmented Joint Self-Attentions. arXiv.","DOI":"10.1109\/ICASSP49357.2023.10096646"},
{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhao, S., Ma, Y., Ni, C., Zhang, C., Wang, H., Nguyen, T.H., Zhou, K., Yip, J., Ng, D., and Ma, B. (2024). MossFormer2: Combining Transformer and RNN-Free Recurrent Network for Enhanced Time-Domain Monaural Speech Separation. arXiv.","DOI":"10.1109\/ICASSP48485.2024.10445985"}],
"container-title":["Future Internet"],"original-title":[],"language":"en",
"link":[{"URL":"https:\/\/www.mdpi.com\/1999-5903\/16\/5\/151\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T14:35:09Z","timestamp":1760106909000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-5903\/16\/5\/151"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,28]]},"references-count":22,
"journal-issue":{"issue":"5","published-online":{"date-parts":[[2024,5]]}},"alternative-id":["fi16050151"],"URL":"https:\/\/doi.org\/10.3390\/fi16050151","relation":{},"ISSN":["1999-5903"],"issn-type":[{"type":"electronic","value":"1999-5903"}],"subject":[],"published":{"date-parts":[[2024,4,28]]}}}