{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:12:57Z","timestamp":1760145177012,"version":"build-2065373602"},"reference-count":43,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2024,7,3]],"date-time":"2024-07-03T00:00:00Z","timestamp":1719964800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Salahaddin University Erbil","award":["SE2"],"award-info":[{"award-number":["SE2"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Recent advancements in text-to-speech (TTS) models have aimed to streamline the two-stage process into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to enhance text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent text-to-speech advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitches and rhythms. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with that of a one-stage system and other two-stage systems as assessed through a subjective human evaluation.<\/jats:p>","DOI":"10.3390\/a17070292","type":"journal-article","created":{"date-parts":[[2024,7,3]],"date-time":"2024-07-03T09:23:59Z","timestamp":1719998639000},"page":"292","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Central Kurdish Text-to-Speech Synthesis with Novel End-to-End Transformer Training"],"prefix":"10.3390","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-4932-3795","authenticated-orcid":false,"given":"Hawraz","family":"Ahmad","sequence":"first","affiliation":[{"name":"Department of Software and Informatics Engineering, Salahaddin University-Erbil, Erbil 44001, Iraq"}]},{"given":"Tarik","family":"Rashid","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, University of Kurdistan Hawler, Erbil 44001, Iraq"}]}],"member":"1968","published-online":{"date-parts":[[2024,7,3]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Shen, J., Pang, R., Weiss, R.J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., and Skerrv-Ryan, R. (2018, January 15\u201320). Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8461368"},{"key":"ref_2","unstructured":"Oord, A.v.d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. (2016). Wavenet: A generative model for raw audio. arXiv."},{"key":"ref_3","unstructured":"Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., Oord, A., Dieleman, S., and Kavukcuoglu, K. (2018, January 10\u201315). Efficient neural audio synthesis. Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden."},{"key":"ref_4","unstructured":"Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., and Liu, T.Y. (2019). Fastspeech: Fast, robust and controllable text to speech. arXiv."},{"key":"ref_5","unstructured":"Peng, K., Ping, W., Song, Z., and Zhao, K. (2020, January 13\u201318). Non-autoregressive neural text-to-speech. Proceedings of the International Conference on Machine Learning, PMLR, Virtual."},{"key":"ref_6","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. arXiv."},{"key":"ref_7","unstructured":"Li, N., Liu, S., Liu, Y., Zhao, S., and Liu, M. (February, January 27). Neural speech synthesis with transformer network. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_8","unstructured":"Kumar, K., Kumar, R., De Boissiere, T., Gestin, L., Teoh, W.Z., Sotelo, J., De Brebisson, A., Bengio, Y., and Courville, A.C. (2019). Melgan: Generative adversarial networks for conditional waveform synthesis. arXiv."},{"key":"ref_9","first-page":"17022","article-title":"Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis","volume":"33","author":"Kong","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_10","unstructured":"Ren, Y., Hu, C., Tan, X., Qin, T., Zhao, S., Zhao, Z., and Liu, T.Y. (2020). Fastspeech 2: Fast and high-quality end-to-end text to speech. arXiv."},{"key":"ref_11","unstructured":"Donahue, J., Dieleman, S., Bi\u0144kowski, M., Elsen, E., and Simonyan, K. (2020). End-to-end adversarial text-to-speech. arXiv."},{"key":"ref_12","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. arXiv."},{"key":"ref_13","unstructured":"Kim, J., Kong, J., and Son, J. (2021). Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ren, Y., Tan, X., Qin, T., Zhao, Z., and Liu, T.Y. (2022, January 22\u201327). Revisiting oversmoothness in text to speech. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.","DOI":"10.18653\/v1\/2022.acl-long.564"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Kong, J., Park, J., Kim, B., Kim, J., Kong, D., and Kim, S. (2023). VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design. arXiv.","DOI":"10.21437\/Interspeech.2023-534"},{"key":"ref_16","unstructured":"Saiteja, K. (2024, January 03). Towards Building Controllable Text to Speech Systems. Available online: https:\/\/cdn.iiit.ac.in\/cdn\/cvit.iiit.ac.in\/images\/Thesis\/MS\/saiteja_kosgi\/Sai_Thesis.pdf."},{"key":"ref_17","unstructured":"Feng, X., and Yoshimoto, A. (2024). Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Casanova, E., Shulby, C., G\u00f6lge, E., M\u00fcller, N.M., De Oliveira, F.S., Junior, A.C., Soares, A.D.S., Aluisio, S.M., and Ponti, M.A. (2021). SC-GlowTTS: An Efficient Zero-Shot Multi-Speaker Text-To-Speech Model. arXiv.","DOI":"10.21437\/Interspeech.2021-1774"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2502","DOI":"10.1109\/LSP.2022.3226655","article-title":"SNAC: Speakernormalized affine coupling layer in flow-based architecture for zero-shot multi-speaker text-to-speech","volume":"29","author":"Choi","year":"2022","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_20","first-page":"8067","article-title":"Glow-tts: A generative flow for text-to-speech via monotonic alignment search","volume":"33","author":"Kim","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"\u0141a\u0144cucki, A. (2021, January 6\u201311). Fastpitch: Parallel Text-to-Speech with Pitch Prediction. Proceedings of the ICASSP 2021\u20132021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9413889"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"McAuliffe, M., Socolof, M., Mihuc, S., Wagner, M., and Sonderegger, M. (2017, January 20\u201324). Montreal Forced Aligner: Trainable text-speech alignment using Kaldi. Proceedings of the Interspeech, Stockholm, Sweden.","DOI":"10.21437\/Interspeech.2017-1386"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Jeong, M., Kim, H., Cheon, S.J., Choi, B.J., and Kim, N.S. (2021). Diff-tts: A denoising diffusion model for text-to-speech. arXiv.","DOI":"10.21437\/Interspeech.2021-469"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Chen, N., Zhang, Y., Zen, H., Weiss, R.J., Norouzi, M., Dehak, N., and Chan, W. (2021). Wavegrad 2: Iterative refinement for text-to-speech synthesis. arXiv.","DOI":"10.21437\/Interspeech.2021-1897"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1650","DOI":"10.1109\/TASLP.2024.3369528","article-title":"EfficientTTS 2: Variational End-to-End Text-to-Speech Synthesis and Voice Conversion","volume":"32","author":"Miao","year":"2024","journal-title":"IEEE ACM Trans. Audio Speech Lang. Process."},{"key":"ref_26","unstructured":"Li, Y.A., Han, C., Raghavan, V., Mischler, G., and Mesgarani, N. (2023). StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Bahrampour, A., Barkhoda, W., and Azami, B.Z. (2009). Implementation of three text to speech systems for Kurdish language. Iberoamerican Congress on Pattern Recognition, Springer.","DOI":"10.1007\/978-3-642-10268-4_38"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Barkhoda, W., ZahirAzami, B., Bahrampour, A., and Shahryari, O.K. (2009, January 9\u201311). December. A comparison between allophone, syllable, and diphone based TTS systems for Kurdish language. Proceedings of the 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA.","DOI":"10.1109\/ISSPIT.2009.5407540"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Daneshfar, F., Barkhoda, W., and Azami, B.Z. (2009, January 20\u201325). Implementation of a Text-to-Speech System for Kurdish Language. Proceedings of the 2009 Fourth International Conference on Digital Telecommunications, Colmar, France.","DOI":"10.1109\/ICDT.2009.29"},{"key":"ref_30","unstructured":"Hassani, H., and Kareem, R. (2011, January 11\u201314). Kurdish text to speech. Proceedings of the KTTS Tenth International Workshop on Internationalisation of Products and Systems, Kuching, Malaysia."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Fahmy, F.K., Khalil, M.I., and Abbas, H.M. (2020, January 2\u20134). September. A transfer learning end-to-end arabic text-to-speech (tts) deep architecture. Proceedings of the IAPR Workshop on Artificial Neural Networks in Pattern Recognition, Cham, Switzerland.","DOI":"10.1007\/978-3-030-58309-5_22"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"3629","DOI":"10.1007\/s11042-021-11719-w","article-title":"Persian speech synthesis using enhanced tacotron based on multi-resolution convolution layers and a convex optimization method","volume":"81","author":"Naderi","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_33","unstructured":"Van Den Oord, A., and Vinyals, O. (2017). Neural discrete representation learning. arXiv."},{"key":"ref_34","unstructured":"(2023, October 13). espeak-ng. Available online: https:\/\/github.com\/espeak-ng\/espeak-ng."},{"key":"ref_35","first-page":"12449","article-title":"wav2vec 2.0: A framework for self-supervised learning of speech representations","volume":"33","author":"Baevski","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Yamamoto, R., Song, E., and Kim, J.M. (2020, January 4\u20138). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. Proceedings of the ICASSP 2020\u20132020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053795"},{"key":"ref_37","unstructured":"Ahmad, H.A., and Rashid, T.A. (2024). Bridging the Gap: Central Kurdish Speech Corpus Construction and Recognition System Integration. Mendeley Data V1."},{"key":"ref_38","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_39","unstructured":"Loshchilov, I., and Hutter, F. (2017). Decoupled weight decay regularization. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Yang, G., Yang, S., Liu, K., Fang, P., Chen, W., and Xie, L. (2020). Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech. arXiv.","DOI":"10.1109\/SLT48900.2021.9383551"},{"key":"ref_41","unstructured":"Lemmetty, S. (1999). Review of Speech Synthesis Technology. [Master\u2019s Thesis, Helsinki University of Technology]."},{"key":"ref_42","unstructured":"Macon, M.W. (1996). Speech Synthesis Based on Sinusoidal Modeling, Georgia Institute of Technology."},{"key":"ref_43","first-page":"176","article-title":"Toward Kurdish language processing: Experiments in collecting and processing the AsoSoft text corpus","volume":"35","author":"Veisi","year":"2019","journal-title":"Digit. Scholarsh. Humanit."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/17\/7\/292\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:09:43Z","timestamp":1760108983000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/17\/7\/292"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,3]]},"references-count":43,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["a17070292"],"URL":"https:\/\/doi.org\/10.3390\/a17070292","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2024,7,3]]}}}