{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,18]],"date-time":"2026-01-18T12:14:48Z","timestamp":1768738488620,"version":"3.49.0"},"reference-count":49,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2022,1,14]],"date-time":"2022-01-14T00:00:00Z","timestamp":1642118400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Speech is our most natural and efficient form of communication and offers a strong potential to improve how we interact with machines. However, speech communication can sometimes be limited by environmental (e.g., ambient noise), contextual (e.g., need for privacy), or health conditions (e.g., laryngectomy), preventing the consideration of audible speech. In this regard, silent speech interfaces (SSI) have been proposed as an alternative, considering technologies that do not require the production of acoustic signals (e.g., electromyography and video). Unfortunately, despite their plentitude, many still face limitations regarding their everyday use, e.g., being intrusive, non-portable, or raising technical (e.g., lighting conditions for video) or privacy concerns. In line with this necessity, this article explores the consideration of contactless continuous-wave radar to assess its potential for SSI development. A corpus of 13 European Portuguese words was acquired for four speakers and three of them enrolled in a second acquisition session, three months later. Regarding the speaker-dependent models, trained and tested with data from each speaker while using 5-fold cross-validation, average accuracies of 84.50% and 88.00% were respectively obtained from Bagging (BAG) and Linear Regression (LR) classifiers, respectively. 
Additionally, recognition accuracies of 81.79% and 81.80% were also, respectively, achieved for the session and speaker-independent experiments, establishing promising grounds for further exploring this technology towards silent speech recognition.<\/jats:p>","DOI":"10.3390\/s22020649","type":"journal-article","created":{"date-parts":[[2022,1,16]],"date-time":"2022-01-16T20:45:21Z","timestamp":1642365921000},"page":"649","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":20,"title":["Exploring Silent Speech Interfaces Based on Frequency-Modulated Continuous-Wave Radar"],"prefix":"10.3390","volume":"22","author":[{"given":"David","family":"Ferreira","sequence":"first","affiliation":[{"name":"Department of Electronics, Telecommunications & Informatics, University of Aveiro, 3810-193 Aveiro, Portugal"},{"name":"Institute of Electronics and Informatics Engineering of Aveiro (IEETA), 3810-193 Aveiro, Portugal"}]},{"given":"Samuel","family":"Silva","sequence":"additional","affiliation":[{"name":"Department of Electronics, Telecommunications & Informatics, University of Aveiro, 3810-193 Aveiro, Portugal"},{"name":"Institute of Electronics and Informatics Engineering of Aveiro (IEETA), 3810-193 Aveiro, Portugal"}]},{"given":"Francisco","family":"Curado","sequence":"additional","affiliation":[{"name":"Department of Electronics, Telecommunications & Informatics, University of Aveiro, 3810-193 Aveiro, Portugal"},{"name":"Institute of Electronics and Informatics Engineering of Aveiro (IEETA), 3810-193 Aveiro, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7675-1236","authenticated-orcid":false,"given":"Ant\u00f3nio","family":"Teixeira","sequence":"additional","affiliation":[{"name":"Department of Electronics, Telecommunications & Informatics, University of Aveiro, 3810-193 Aveiro, Portugal"},{"name":"Institute of Electronics and Informatics Engineering of Aveiro (IEETA), 3810-193 Aveiro, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,1,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Kepuska, V., and Bohouta, G. (2018, January 8\u201310). Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA.","DOI":"10.1109\/CCWC.2018.8301638"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"270","DOI":"10.1016\/j.specom.2009.08.002","article-title":"Silent speech interfaces","volume":"52","author":"Denby","year":"2010","journal-title":"Speech Commun."},{"key":"ref_3","unstructured":"Levelt, W.J. (1993). Speaking: From Intention to Articulation, MIT Press."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Freitas, J., Teixeira, A., Dias, M.S., and Silva, S. (2017). SSI Modalities I: Behind the Scenes\u2014From the Brain to the Muscles. An Introduction to Silent Speech Interfaces, Springer.","DOI":"10.1007\/978-3-319-40174-4_2"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Ahmed, S., and Cho, S.H. (2020). Hand gesture recognition using an IR-UWB radar with an inception module-based classifier. 
Sensors, 20.","DOI":"10.3390\/s20020564"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"125623","DOI":"10.1109\/ACCESS.2019.2938725","article-title":"Short-range radar-based gesture recognition system using 3D CNN with triplet loss","volume":"7","author":"Hazra","year":"2019","journal-title":"IEEE Access"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Freitas, J., Teixeira, A., Dias, M.S., and Silva, S. (2017). Combining Modalities: Multimodal SSI. An Introduction to Silent Speech Interfaces, Springer.","DOI":"10.1007\/978-3-319-40174-4"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Shin, Y.H., and Seo, J. (2016). Towards contactless silent speech recognition based on detection of active and visible articulators using IR-UWB radar. Sensors, 16.","DOI":"10.3390\/s16111812"},{"key":"ref_9","unstructured":"Rohling, H., and Meinecke, M.M. (2001, January 15\u201318). Waveform design principles for automotive radar systems. Proceedings of the 2001 CIE International Conference on Radar Proceedings (Cat No. 01TH8559), Beijing, China."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Winkler, V. (2007, January 10\u201312). Range Doppler detection for automotive FMCW radars. Proceedings of the 2007 European Radar Conference, Munich, Germany.","DOI":"10.1109\/EURAD.2007.4404963"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"4527","DOI":"10.3390\/s130404527","article-title":"Localization and mapping using only a rotating FMCW radar sensor","volume":"13","author":"Vivet","year":"2013","journal-title":"Sensors"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"van Delden, M., Guzy, C., and Musch, T. (2019, January 10\u201313). Investigation on a System for Positioning of Industrial Robots Based on Ultra-Broadband Millimeter Wave FMCW Radar. Proceedings of the 2019 IEEE Asia-Pacific Microwave Conference (APMC), Singapore.","DOI":"10.1109\/APMC46564.2019.9038866"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"646","DOI":"10.21437\/Interspeech.2021-1413","article-title":"RaSSpeR: Radar-Based Silent Speech Recognition","volume":"2021","author":"Ferreira","year":"2021","journal-title":"Proc. Interspeech"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"046031","DOI":"10.1088\/1741-2552\/aac965","article-title":"Development of sEMG sensors and algorithms for silent speech recognition","volume":"15","author":"Meltzner","year":"2018","journal-title":"J. Neural Eng."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Dong, W., Zhang, H., Liu, H., Chen, T., and Sun, L. (2019, January 27\u201331). A Super-Flexible and High-Sensitive Epidermal sEMG Electrode Patch for Silent Speech Recognition. Proceedings of the 2019 IEEE 32nd International Conference on Micro Electro Mechanical Systems (MEMS), Seoul, Korea.","DOI":"10.1109\/MEMSYS.2019.8870672"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41378-019-0127-5","article-title":"An epidermal sEMG tattoo-like patch as a new human\u2013machine interface for patients with loss of voice","volume":"6","author":"Liu","year":"2020","journal-title":"Microsyst. Nanoeng."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ruiz-Olaya, A.F., and L\u00f3pez-Delis, A. (2013, January 11\u201313). Surface EMG signal analysis based on the empirical mode decomposition for human-robot interaction. 
Proceedings of the Symposium of Signals, Images and Artificial Vision-2013: STSIVA-2013, Bogota, Colombia.","DOI":"10.1109\/STSIVA.2013.6644943"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Diener, L., Umesh, T., and Schultz, T. (2019, January 14\u201318). Improving fundamental frequency generation in emg-to-speech conversion using a quantization approach. Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Singapore.","DOI":"10.1109\/ASRU46091.2019.9003804"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Joy, J.E., Yadukrishnan, H.A., Poojith, V., and Prathap, J. (2019, January 3\u20136). Work-in-Progress: Silent Speech Recognition Interface for the Differently Abled. Proceedings of the International Conference on Remote Engineering and Virtual Instrumentation, Bangalore, India.","DOI":"10.1007\/978-3-030-23162-0_73"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kapur, A., Kapur, S., and Maes, P. (2018, January 7\u201311). Alterego: A personalized wearable silent speech interface. Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan.","DOI":"10.1145\/3172944.3172977"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Merletti, R., and Parker, P.J. (2004). Electromyography: Physiology, Engineering, and Non-Invasive Applications, John Wiley & Sons.","DOI":"10.1002\/0471678384"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Shah, N., Shah, N.J., and Patil, H.A. (2018, January 2\u20136). Effectiveness of Generative Adversarial Network for Non-Audible Murmur-to-Whisper Speech Conversion. Proceedings of the INTERSPEECH 2018, Hyderabad, India.","DOI":"10.21437\/Interspeech.2018-1565"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Sarmiento, L., Rodr\u00edguez, J.B., L\u00f3pez, O., Villamizar, S., Guevara, R., and Cortes-Rodriguez, C. (2019, January 14\u201316). Recognition of silent speech syllables for Brain-Computer Interfaces. Proceedings of the 2019 IEEE International Conference on E-health Networking, Application & Services (HealthCom), Bogota, Colombia.","DOI":"10.1109\/HealthCom46333.2019.9009438"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Morooka, T., Ishizuka, K., and Kobayashi, N. (2018, January 9\u201312). Electroencephalographic Analysis of Auditory Imagination to Realize Silent Speech BCI. Proceedings of the 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), Nara, Japan.","DOI":"10.1109\/GCCE.2018.8574677"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Ma, S., Jin, D., Zhang, M., Zhang, B., Wang, Y., Li, G., and Yang, M. (2019, January 22\u201324). Silent Speech Recognition Based on Surface Electromyography. Proceedings of the 2019 Chinese Automation Congress (CAC), Hangzhou, China.","DOI":"10.1109\/CAC48633.2019.8996289"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"839","DOI":"10.1002\/hed.26057","article-title":"Pilot study for a novel and personalized voice restoration device for patients with laryngectomy","volume":"42","author":"Rameau","year":"2020","journal-title":"Head Neck"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Prorokovi\u0107, K., Wand, M., Schultz, T., and Schmidhuber, J. (2019, January 11\u201314). Adaptation of an EMG-Based Speech Recognizer via Meta-Learning. 
Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, ON, Canada.","DOI":"10.1109\/GlobalSIP45357.2019.8969231"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Wand, M., Schultz, T., and Schmidhuber, J. (2018, January 2\u20136). Domain-Adversarial Training for Session Independent EMG-based Speech Recognition. Proceedings of the INTERSPEECH 2018, Hyderabad, India.","DOI":"10.21437\/Interspeech.2018-2318"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Fernandes, R., Huang, L., and Vejarano, G. (2019, January 5\u20137). Non-Audible Speech Classification Using Deep Learning Approaches. Proceedings of the 2019 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA.","DOI":"10.1109\/CSCI49370.2019.00118"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Chen, S., Zheng, Y., Wu, C., Sheng, G., Roussel, P., and Denby, B. (2018, January 15\u201320). Direct, Near Real Time Animation of a 3D Tongue Model Using Non-Invasive Ultrasound Images. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462096"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zhao, C., Zhang, P., Zhu, J., Wu, C., Wang, H., and Xu, K. (2019, January 12\u201317). Predicting tongue motion in unlabeled ultrasound videos using convolutional LSTM neural networks. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK.","DOI":"10.1109\/ICASSP.2019.8683081"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Gosztolya, G., Pint\u00e9r, \u00c1., T\u00f3th, L., Gr\u00f3sz, T., Mark\u00f3, A., and Csap\u00f3, T.G. (2019, January 14\u201319). Autoencoder-based articulatory-to-acoustic mapping for ultrasound silent speech interfaces. Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary.","DOI":"10.1109\/IJCNN.2019.8852153"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Kimura, N., Kono, M., and Rekimoto, J. (2019, January 4\u20139). SottoVoce: An ultrasound imaging-based silent speech interaction using deep neural networks. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK.","DOI":"10.1145\/3290605.3300376"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Csap\u00f3, T.G., Al-Radhi, M.S., N\u00e9meth, G., Gosztolya, G., Gr\u00f3sz, T., T\u00f3th, L., and Mark\u00f3, A. (2019). Ultrasound-based silent speech interface built on a continuous vocoder. arXiv.","DOI":"10.21437\/Interspeech.2019-2046"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Sun, K., Yu, C., Shi, W., Liu, L., and Shi, Y. (2018, January 14). Lip-interact: Improving mobile device interaction with silent speech commands. Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany.","DOI":"10.1145\/3242587.3242599"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Vougioukas, K., Ma, P., Petridis, S., and Pantic, M. (2019). Video-driven speech reconstruction using generative adversarial networks. arXiv.","DOI":"10.21437\/Interspeech.2019-1445"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Uttam, S., Kumar, Y., Sahrawat, D., Aggarwal, M., Shah, R.R., Mahata, D., and Stent, A. (2019, January 15\u201319). 
Hush-Hush Speak: Speech Reconstruction Using Silent Videos. Proceedings of the INTERSPEECH, Graz, Austria.","DOI":"10.21437\/Interspeech.2019-3269"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Petridis, S., Shen, J., Cetin, D., and Pantic, M. (2018, January 15\u201320). Visual-only recognition of normal, whispered and silent speech. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8461596"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"2404","DOI":"10.1109\/TASLP.2018.2865609","article-title":"Non-invasive silent phoneme recognition using microwave signals","volume":"26","author":"Birkholz","year":"2018","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Dash, D., Wisler, A., Ferrari, P., and Wang, J. (2019, January 15\u201319). Towards a Speaker Independent Speech-BCI Using Speaker Adaptation. Proceedings of the INTERSPEECH, Graz, Austria.","DOI":"10.21437\/Interspeech.2019-3109"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Xu, K., Wu, Y., and Gao, Z. (2019, January 21\u201325). Ultrasound-based silent speech interface using sequential convolutional auto-encoder. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350596"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"2257","DOI":"10.1109\/TASLP.2017.2752365","article-title":"Biosignal-based spoken communication: A survey","volume":"25","author":"Schultz","year":"2017","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Thein, T., and San, K.M. (2018, January 23\u201325). Lip localization technique towards an automatic lip reading approach for Myanmar consonants recognition. Proceedings of the 2018 International Conference on Information and Computer Technologies (ICICT), DeKalb, IL, USA.","DOI":"10.1109\/INFOCT.2018.8356854"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Freitas, J., Teixeira, A., Bastos, C., and Dias, M. (2011). Towards a Multimodal Silent Speech Interface for European Portuguese. Speech Technologies, InTech.","DOI":"10.5772\/16935"},{"key":"ref_45","unstructured":"Freitas, J., Teixeira, A., and Dias, M.S. (2013, January 30). Multimodal Silent Speech Interface based on Video, Depth, Surface Electromyography and Ultrasonic Doppler: Data Collection and First Recognition Results. Proceedings of the Workshop on Speech Production in Automatic Speech Recognition, Lyon, France."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Teixeira, A., Vitor, N., Freitas, J., and Silva, S. (2017, January 9\u201314). Silent speech interaction for ambient assisted living scenarios. Proceedings of the International Conference on Human Aspects of IT for the Aged Population, Vancouver, BC, Canada.","DOI":"10.1007\/978-3-319-58530-7_29"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Albuquerque, D.F., Gon\u00e7alves, E.S., Pedrosa, E.F., Teixeira, F.C., and Vieira, J.N. (October, January 30). Robot Self Position based on Asynchronous Millimetre Wave Radar Interference. 
Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy.","DOI":"10.1109\/IPIN.2019.8911809"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"101835","DOI":"10.1016\/j.bspc.2019.101835","article-title":"Study on the usage feasibility of continuous-wave radar for emotion recognition","volume":"58","author":"Gouveia","year":"2020","journal-title":"Biomed. Signal Process. Control."},{"key":"ref_49","unstructured":"Freitas, J. (2015). Articulation in Multimodal Silent Speech Interface for European Portuguese. [Ph.D. Thesis, University of Aveiro]."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/2\/649\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,13]],"date-time":"2025-10-13T13:39:49Z","timestamp":1760362789000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/2\/649"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,14]]},"references-count":49,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2022,1]]}},"alternative-id":["s22020649"],"URL":"https:\/\/doi.org\/10.3390\/s22020649","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,14]]}}}
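This record follows the Crossref REST API "work" message format. As a minimal sketch (assuming the record is fetched live from api.crossref.org, which may return updated counts and timestamps compared with the snapshot above), the same work can be retrieved by its DOI and a few of the fields shown above extracted in Python:

    # Minimal sketch, assuming this record comes from the public Crossref REST API.
    # Requires the third-party 'requests' package.
    import requests

    DOI = "10.3390/s22020649"  # DOI stated in the record above

    # The /works/{doi} endpoint returns the same envelope shown above:
    # {"status": "ok", "message-type": "work", ..., "message": {...}}
    resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]

    # A few of the fields present in this record.
    title = work["title"][0]                       # "Exploring Silent Speech Interfaces ..."
    authors = [f"{a['given']} {a['family']}" for a in work["author"]]
    cited_by = work.get("is-referenced-by-count")  # citation count at retrieval time
    print(title)
    print(", ".join(authors))
    print(f"Cited by: {cited_by}, references: {work['references-count']}")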