{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,23]],"date-time":"2026-04-23T07:34:49Z","timestamp":1776929689549,"version":"3.51.2"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T00:00:00Z","timestamp":1677110400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T00:00:00Z","timestamp":1677110400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Nat Mach Intell"],"DOI":"10.1038\/s42256-023-00616-6","type":"journal-article","created":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T17:03:30Z","timestamp":1677171810000},"page":"169-180","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":145,"title":["Mixed-modality speech recognition and interaction using a wearable artificial 
throat"],"prefix":"10.1038","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5819-4299","authenticated-orcid":false,"given":"Qisheng","family":"Yang","sequence":"first","affiliation":[]},{"given":"Weiqiu","family":"Jin","sequence":"additional","affiliation":[]},{"given":"Qihang","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yuhong","family":"Wei","sequence":"additional","affiliation":[]},{"given":"Zhanfeng","family":"Guo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9463-1350","authenticated-orcid":false,"given":"Xiaoshi","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1161-9488","authenticated-orcid":false,"given":"Yi","family":"Yang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4098-9477","authenticated-orcid":false,"given":"Qingquan","family":"Luo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7328-2182","authenticated-orcid":false,"given":"He","family":"Tian","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7330-0544","authenticated-orcid":false,"given":"Tian-Ling","family":"Ren","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,2,23]]},"reference":[{"key":"616_CR1","doi-asserted-by":"publisher","first-page":"177995","DOI":"10.1109\/ACCESS.2020.3026579","volume":"8","author":"JA Gonzalez-Lopez","year":"2020","unstructured":"Gonzalez-Lopez, J. A., Gomez-Alanis, A., Martin Donas, J. M., Perez-Cordoba, J. L. & Gomez, A. M. Silent speech interfaces for speech restoration: a review. IEEE Access 8, 177995\u2013178021 (2020).","journal-title":"IEEE Access"},{"key":"616_CR2","unstructured":"Betts, B. & Jorgensen, C. 
Small Vocabulary Recognition Using Surface Electromyography in an Acoustically Harsh Environment (NASA, Ames Research Center, 2005)."},{"key":"616_CR3","doi-asserted-by":"publisher","first-page":"243","DOI":"10.1037\/0096-3445.124.3.243","volume":"124","author":"NL Wood","year":"1995","unstructured":"Wood, N. L. & Cowan, N. The cocktail party phenomenon revisited: attention and memory in the classic selective listening procedure of Cherry (1953). J. Exp. Psychol. Gen. 124, 243 (1995).","journal-title":"J. Exp. Psychol. Gen."},{"key":"616_CR4","doi-asserted-by":"publisher","unstructured":"Lopez-Meyer, P., del Hoyo Ontiveros, J. A., Lu, H. & Stemmer, G. Efficient end-to-end audio embeddings generation for audio classification on target applications. In ICASSP 2021\u20132021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 601\u2013605 (IEEE, 2021); https:\/\/doi.org\/10.1109\/ICASSP39728.2021.9414229","DOI":"10.1109\/ICASSP39728.2021.9414229"},{"key":"616_CR5","doi-asserted-by":"publisher","unstructured":"Wang, D. X., Jiang, M. S., Niu, F. L., Cao, Y. D. & Zhou, C. X. Speech enhancement control design algorithm for dual-microphone systems using \u03b2-NMF in a complex environment. Complexity https:\/\/doi.org\/10.1155\/2018\/6153451 (2018).","DOI":"10.1155\/2018\/6153451"},{"key":"616_CR6","doi-asserted-by":"crossref","unstructured":"Akbari, H., Arora, H., Cao, L. & Mesgarani, N. Lip2AudSpec: speech reconstruction from silent lip movements video. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2516\u20132520 (IEEE, 2018).","DOI":"10.1109\/ICASSP.2018.8461856"},{"key":"616_CR7","doi-asserted-by":"crossref","unstructured":"Chung, J. S., Senior, A., Vinyals, O. & Zisserman, A. Lip reading sentences in the wild. In Proc. 
30th IEEE Conference on Computer Vision and Pattern Recognition 3444\u20133450 (IEEE, 2017).","DOI":"10.1109\/CVPR.2017.367"},{"key":"616_CR8","doi-asserted-by":"publisher","unstructured":"Pass, A., Zhang, J. & Stewart, D. An investigation into features for multi-view lipreading. In 2010 IEEE International Conference on Image Processing 2417\u20132420 (IEEE, 2010); https:\/\/doi.org\/10.1109\/ICIP.2010.5650963","DOI":"10.1109\/ICIP.2010.5650963"},{"key":"616_CR9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3389\/fnins.2015.00217","volume":"9","author":"C Herff","year":"2015","unstructured":"Herff, C. et al. Brain-to-text: decoding spoken phrases from phone representations in the brain. Front. Neurosci. 9, 1\u201311 (2015).","journal-title":"Front. Neurosci."},{"key":"616_CR10","doi-asserted-by":"publisher","first-page":"493","DOI":"10.1038\/s41586-019-1119-1","volume":"568","author":"GK Anumanchipalli","year":"2019","unstructured":"Anumanchipalli, G. K., Chartier, J. & Chang, E. F. Speech synthesis from neural decoding of spoken sentences. Nature 568, 493\u2013498 (2019).","journal-title":"Nature"},{"key":"616_CR11","doi-asserted-by":"publisher","first-page":"341","DOI":"10.1016\/j.specom.2009.12.002","volume":"52","author":"T Schultz","year":"2010","unstructured":"Schultz, T. & Wand, M. Modeling coarticulation in EMG-based continuous speech recognition. Speech Commun. 52, 341\u2013353 (2010).","journal-title":"Speech Commun."},{"key":"616_CR12","doi-asserted-by":"publisher","first-page":"2515","DOI":"10.1109\/TBME.2014.2319000","volume":"61","author":"M Wand","year":"2014","unstructured":"Wand, M., Janke, M. & Schultz, A. T. Tackling speaking mode varieties in EMG-based speech recognition. IEEE Trans. Biomed. Eng. 61, 2515\u20132526 (2014).","journal-title":"IEEE Trans. Biomed. 
Eng."},{"key":"616_CR13","doi-asserted-by":"publisher","first-page":"2375","DOI":"10.1109\/TASLP.2017.2738568","volume":"25","author":"M Janke","year":"2017","unstructured":"Janke, M. & Diener, L. EMG-to-speech: direct generation of speech from facial electromyographic signals. IEEE\/ACM Trans. Audio Speech Lang. Process. 25, 2375\u20132385 (2017).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"616_CR14","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41467-019-13993-7","volume":"11","author":"KK Kim","year":"2020","unstructured":"Kim, K. K. et al. A deep-learned skin sensor decoding the epicentral human motions. Nat. Commun. 11, 1\u20138 (2020).","journal-title":"Nat. Commun."},{"key":"616_CR15","doi-asserted-by":"publisher","first-page":"1369","DOI":"10.1002\/adma.201504759","volume":"28","author":"M Su","year":"2016","unstructured":"Su, M. et al. Nanoparticle based curve arrays for multirecognition flexible electronics. Adv. Mater. 28, 1369\u20131374 (2016).","journal-title":"Adv. Mater."},{"key":"616_CR16","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/ncomms14579","volume":"8","author":"LQ Tao","year":"2017","unstructured":"Tao, L. Q. et al. An intelligent artificial throat with sound-sensing ability based on laser induced graphene. Nat. Commun. 8, 1\u20138 (2017).","journal-title":"Nat. Commun."},{"key":"616_CR17","doi-asserted-by":"publisher","first-page":"8639","DOI":"10.1021\/acsnano.9b03218","volume":"13","author":"Y Wei","year":"2019","unstructured":"Wei, Y. et al. A wearable skinlike ultra-sensitive artificial graphene throat. ACS Nano 13, 8639\u20138647 (2019).","journal-title":"ACS Nano"},{"key":"616_CR18","first-page":"892","volume":"29","author":"Y Aytar","year":"2016","unstructured":"Aytar, Y., Vondrick, C. & Torralba, A. SoundNet: learning sound representations from unlabeled video. Adv. Neural Inf. Process. Syst. 29, 892\u2013900 (2016).","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"616_CR19","doi-asserted-by":"publisher","first-page":"2048","DOI":"10.1016\/j.procs.2017.08.250","volume":"112","author":"V Boddapati","year":"2017","unstructured":"Boddapati, V., Petef, A., Rasmusson, J. & Lundberg, L. Classifying environmental sounds using image recognition networks. Procedia Comput. Sci. 112, 2048\u20132056 (2017).","journal-title":"Procedia Comput. Sci."},{"key":"616_CR20","doi-asserted-by":"publisher","unstructured":"Becker, S., Ackermann, M., Lapuschkin, S., M\u00fcller, K.-R. & Samek, W. Interpreting and explaining deep neural networks for classification of audio signals. Preprint at https:\/\/doi.org\/10.48550\/arXiv.1807.03418 (2019).","DOI":"10.48550\/arXiv.1807.03418"},{"key":"616_CR21","doi-asserted-by":"publisher","unstructured":"Hershey, S. et al. CNN architectures for large-scale audio classification. In 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 131\u2013135 (2017); https:\/\/doi.org\/10.1109\/ICASSP.2017.7952132","DOI":"10.1109\/ICASSP.2017.7952132"},{"key":"616_CR22","unstructured":"Titze, I. & Alipour, F. The Myoelastic\u2013Aerodynamic Theory of Phonation 227\u2013244 (American Speech\u2013Language\u2013Hearing Association, 2006)."},{"key":"616_CR23","doi-asserted-by":"publisher","first-page":"239","DOI":"10.4103\/0019-509X.64707","volume":"47","author":"B Elmiyeh","year":"2010","unstructured":"Elmiyeh, B. et al. Surgical voice restoration after total laryngectomy: an overview. Indian J. Cancer 47, 239\u2013247 (2010).","journal-title":"Indian J. Cancer"},{"key":"616_CR24","doi-asserted-by":"publisher","first-page":"536","DOI":"10.1044\/jshr.3803.536","volume":"38","author":"Y Qi","year":"1995","unstructured":"Qi, Y. & Weinberg, B. Characteristics of voicing source waveforms produced by esophageal and tracheoesophageal speakers. J. Speech Hear. Res. 38, 536\u2013548 (1995).","journal-title":"J. Speech Hear. 
Res."},{"key":"616_CR25","doi-asserted-by":"publisher","first-page":"283","DOI":"10.1021\/acsanm.9b01937","volume":"3","author":"W Liu","year":"2020","unstructured":"Liu, W. et al. Stable wearable strain sensors on textiles by direct laser writing of graphene. ACS Appl. Nano Mater. 3, 283\u2013293 (2020).","journal-title":"ACS Appl. Nano Mater."},{"key":"616_CR26","doi-asserted-by":"publisher","first-page":"22531","DOI":"10.1021\/acsami.9b04915","volume":"11","author":"A Chhetry","year":"2019","unstructured":"Chhetry, A. et al. MoS2-decorated laser-induced graphene for a highly sensitive, hysteresis-free, and reliable piezoresistive strain sensor. ACS Appl. Mater. Interfaces 11, 22531\u201322542 (2019).","journal-title":"ACS Appl. Mater. Interfaces"},{"key":"616_CR27","doi-asserted-by":"crossref","unstructured":"Deng, N. Q. et al. Black phosphorus junctions and their electrical and optoelectronic applications. J. Semicond. 42, 081001 (2021).","DOI":"10.1088\/1674-4926\/42\/8\/081001"},{"key":"616_CR28","doi-asserted-by":"publisher","first-page":"082601","DOI":"10.1088\/1674-4926\/43\/8\/082601","volume":"43","author":"S Zhao","year":"2022","unstructured":"Zhao, S., Ran, W., Wang, L. & Shen, G. Interlocked MXene\/rGO aerogel with excellent mechanical stability for a health-monitoring device. J. Semicond. 43, 082601 (2022).","journal-title":"J. Semicond."},{"key":"616_CR29","doi-asserted-by":"crossref","unstructured":"Asadzadeh, S. S., Moosavi, A., Huynh, C. & Saleki, O. Thermo acoustic study of carbon nanotubes in near and far field: theory, simulation, and experiment. J. Appl. Phys. 117, 095101 (2015).","DOI":"10.1063\/1.4914049"},{"key":"616_CR30","first-page":"379","volume":"92","author":"JL Fitch","year":"1970","unstructured":"Fitch, J. L. & Holbrook, A. Modal vocal fundamental frequency of young adults. JAMA Otolaryngol. Head Neck Surg. 92, 379\u2013382 (1970).","journal-title":"JAMA Otolaryngol. 
Head Neck Surg."},{"key":"616_CR31","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1016\/j.csl.2016.06.007","volume":"41","author":"AL Maas","year":"2017","unstructured":"Maas, A. L. et al. Building DNN acoustic models for large vocabulary speech recognition. Comput. Speech Lang. 41, 195\u2013213 (2017).","journal-title":"Comput. Speech Lang."},{"key":"616_CR32","doi-asserted-by":"publisher","unstructured":"Huang, J., Lu, H., Lopez Meyer, P., Cordourier, H. & Del Hoyo Ontiveros, J. Acoustic scene classification using deep learning-based ensemble averaging. In Proc. Detection and Classification of Acoustic Scenes and Events 2019 Workshop 94\u201398 (New York Univ., 2019); https:\/\/doi.org\/10.33682\/8rd2-g787","DOI":"10.33682\/8rd2-g787"},{"key":"616_CR33","doi-asserted-by":"crossref","unstructured":"Kumar, A., Khadkevich, M. & Fugen, C. Knowledge transfer from weakly labeled audio using convolutional neural network for sound events and scenes. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing 326\u2013330 (IEEE, 2018).","DOI":"10.1109\/ICASSP.2018.8462200"},{"key":"616_CR34","doi-asserted-by":"crossref","unstructured":"Selvaraju, R. R. et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proc. IEEE International Conference on Computer Vision 618\u2013626 (IEEE, 2017).","DOI":"10.1109\/ICCV.2017.74"},{"key":"616_CR35","first-page":"145","volume":"70","author":"RL Siegel","year":"2020","unstructured":"Siegel, R. L. et al. Colorectal cancer statistics, 2020. CA: Cancer J. Clin. 70, 145\u2013164 (2020).","journal-title":"CA: Cancer J. Clin."},{"key":"616_CR36","doi-asserted-by":"publisher","first-page":"1941","DOI":"10.1002\/ijc.31937","volume":"144","author":"J Ferlay","year":"2019","unstructured":"Ferlay, J. et al. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. 
Int J Cancer 144, 1941\u20131953 (2019).","journal-title":"Int J Cancer"},{"key":"616_CR37","first-page":"205","volume":"126","author":"BH Burmeister","year":"2000","unstructured":"Burmeister, B. H., Dickie, G., Smithers, B. M., Hodge, R. & Morton, K. Thirty-four patients with carcinoma of the cervical esophagus treated with chemoradiation therapy. JAMA Otolaryngol. Head Neck Surg. 126, 205\u2013208 (2000).","journal-title":"JAMA Otolaryngol. Head Neck Surg."},{"key":"616_CR38","first-page":"1","volume":"30","author":"K Takebayashi","year":"2017","unstructured":"Takebayashi, K. et al. Comparison of curative surgery and definitive chemoradiotherapy as initial treatment for patients with cervical esophageal cancer. Dis. Esophagus 30, 1\u20135 (2017).","journal-title":"Dis. Esophagus"},{"key":"616_CR39","doi-asserted-by":"publisher","first-page":"105397","DOI":"10.1016\/j.compbiomed.2022.105397","volume":"145","author":"Z Luo","year":"2022","unstructured":"Luo, Z. et al. Hierarchical Harris hawks optimization for epileptic seizure classification. Comput. Biol. Med. 145, 105397 (2022).","journal-title":"Comput. Biol. Med."},{"key":"616_CR40","doi-asserted-by":"publisher","first-page":"104252","DOI":"10.1016\/j.compbiomed.2021.104252","volume":"131","author":"W Jin","year":"2021","unstructured":"Jin, W., Dong, S., Dong, C. & Ye, X. Hybrid ensemble model for differential diagnosis between COVID-19 and common viral pneumonia by chest X-ray radiograph. Comput. Biol. Med. 131, 104252 (2021).","journal-title":"Comput. Biol. Med."},{"key":"616_CR41","doi-asserted-by":"publisher","first-page":"2386","DOI":"10.1109\/TASLP.2017.2740000","volume":"25","author":"GS Meltzner","year":"2017","unstructured":"Meltzner, G. S. et al. Silent speech recognition as an alternative communication device for persons with laryngectomy. IEEE\/ACM Trans. Audio Speech Lang. Process 25, 2386\u20132398 (2017).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. 
Process"},{"key":"616_CR42","doi-asserted-by":"publisher","unstructured":"Gonzalez, T. F. Handbook of Approximation Algorithms and Metaheuristics (Chapman and Hall\/CRC, 2007); https:\/\/doi.org\/10.1201\/9781420010749","DOI":"10.1201\/9781420010749"},{"key":"616_CR43","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2818\u20132826 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.308"},{"key":"616_CR44","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition 770\u2013778 (IEEE, 2016).","DOI":"10.1109\/CVPR.2016.90"},{"key":"616_CR45","doi-asserted-by":"publisher","unstructured":"Yang, Q., Jin, W. & Zhang, Q. A COLLECTION OF SAMPLE CODES of \u2018Mixed-Modality Speech Recognition and Interaction Using a Single-Device as Wearable Artificial Throat\u2019 (v.3) (Zenodo, 2022); https:\/\/doi.org\/10.5281\/zenodo.7396184","DOI":"10.5281\/zenodo.7396184"},{"key":"616_CR46","doi-asserted-by":"publisher","first-page":"222","DOI":"10.1038\/nature14002","volume":"516","author":"D Kang","year":"2014","unstructured":"Kang, D. et al. Ultrasensitive mechanical crack-based sensor inspired by the spider sensory system. Nature 516, 222\u2013226 (2014).","journal-title":"Nature"},{"key":"616_CR47","doi-asserted-by":"publisher","first-page":"8130","DOI":"10.1002\/adma.201602425","volume":"28","author":"B Park","year":"2016","unstructured":"Park, B. et al. Dramatically enhanced mechanosensitivity and signal-to-noise ratio of nanoscale crack-based sensors: effect of crack depth. Adv. Mater. 28, 8130\u20138137 (2016).","journal-title":"Adv. 
Mater."},{"key":"616_CR48","doi-asserted-by":"publisher","first-page":"57352","DOI":"10.1021\/acsami.0c16855","volume":"12","author":"T Yang","year":"2020","unstructured":"Yang, T., Wang, W., Huang, Y., Jiang, X. & Zhao, X. Accurate monitoring of small strain for timbre recognition via ductile fragmentation of functionalized graphene multilayers. ACS Appl. Mater. Interfaces 12, 57352\u201357361 (2020).","journal-title":"ACS Appl. Mater. Interfaces"},{"key":"616_CR49","first-page":"1","volume":"29","author":"ML Jin","year":"2017","unstructured":"Jin, M. L. et al. An ultrasensitive, visco-poroelastic artificial mechanotransducer skin inspired by piezo2 protein in mammalian Merkel cells. Adv. Mater. 29, 1\u20139 (2017).","journal-title":"Adv. Mater."},{"key":"616_CR50","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1039\/C2EE23530G","volume":"6","author":"JH Lee","year":"2013","unstructured":"Lee, J. H. et al. Highly sensitive stretchable transparent piezoelectric nanogenerators. Energy Environ. Sci. 6, 169\u2013175 (2013).","journal-title":"Energy Environ. Sci."},{"key":"616_CR51","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/ncomms11108","volume":"7","author":"C Lang","year":"2016","unstructured":"Lang, C., Fang, J., Shao, H., Ding, X. & Lin, T. High-sensitivity acoustic sensors from nanofibre webs. Nat. Commun. 7, 1\u20137 (2016).","journal-title":"Nat. Commun."},{"key":"616_CR52","doi-asserted-by":"publisher","first-page":"194","DOI":"10.1002\/adma.201503957","volume":"28","author":"L Qiu","year":"2016","unstructured":"Qiu, L. et al. Ultrafast dynamic piezoresistive response of graphene-based cellular elastomers. Adv. Mater. 28, 194\u2013200 (2016).","journal-title":"Adv. Mater."},{"key":"616_CR53","doi-asserted-by":"publisher","first-page":"2000262","DOI":"10.1002\/admt.202000262","volume":"2000262","author":"Y Jin","year":"2020","unstructured":"Jin, Y. et al. 
Deep\u2010learning\u2010enabled MXene\u2010based artificial throat: toward sound detection and speech recognition. Adv. Mater. Technol. 2000262, 2000262 (2020).","journal-title":"Adv. Mater. Technol."},{"key":"616_CR54","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1002\/adfm.201907151","volume":"29","author":"C Deng","year":"2019","unstructured":"Deng, C. et al. Ultrasensitive and highly stretchable multifunctional strain sensors with timbre-recognition ability based on vertical graphene. Adv. Funct. Mater. 29, 1\u201311 (2019).","journal-title":"Adv. Funct. Mater."},{"key":"616_CR55","doi-asserted-by":"publisher","first-page":"299","DOI":"10.3390\/s22010299","volume":"22","author":"D Ravenscroft","year":"2021","unstructured":"Ravenscroft, D. et al. Machine learning methods for automatic silent speech recognition using a wearable graphene strain gauge sensor. Sensors 22, 299 (2021).","journal-title":"Sensors"},{"key":"616_CR56","doi-asserted-by":"publisher","first-page":"e1601185","DOI":"10.1126\/sciadv.1601185","volume":"2","author":"Y Liu","year":"2016","unstructured":"Liu, Y. et al. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces. Sci. Adv. 2, e1601185 (2016).","journal-title":"Sci. Adv."},{"key":"616_CR57","doi-asserted-by":"publisher","first-page":"1316","DOI":"10.1002\/adma.201404794","volume":"27","author":"J Yang","year":"2015","unstructured":"Yang, J. et al. Eardrum-inspired active sensors for self-powered cardiovascular system characterization and throat-attached anti-interference voice recognition. Adv. Mater. 27, 1316\u20131326 (2015).","journal-title":"Adv. Mater."},{"key":"616_CR58","doi-asserted-by":"publisher","first-page":"4236","DOI":"10.1021\/acsnano.5b00618","volume":"9","author":"X Fan","year":"2015","unstructured":"Fan, X. et al. Ultrathin, rollable, paper-based triboelectric nanogenerator for acoustic energy harvesting and self-powered sound recording. 
ACS Nano 9, 4236\u20134243 (2015).","journal-title":"ACS Nano"},{"key":"616_CR59","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41378-019-0127-5","volume":"6","author":"H Liu","year":"2020","unstructured":"Liu, H. et al. An epidermal sEMG tattoo-like patch as a new human\u2013machine interface for patients with loss of voice. Microsyst. Nanoeng. 6, 1\u201313 (2020).","journal-title":"Microsyst. Nanoeng."},{"key":"616_CR60","doi-asserted-by":"publisher","unstructured":"Yatani, K. & Truong, K. N. BodyScope: a wearable acoustic sensor for activity recognition. In Proc. 2012 ACM Conference on Ubiquitous Computing\u2014UbiComp\u201912 341 (ACM, 2012); https:\/\/doi.org\/10.1145\/2370216.2370269","DOI":"10.1145\/2370216.2370269"},{"key":"616_CR61","doi-asserted-by":"publisher","unstructured":"Kapur, A., Kapur, S. & Maes, P. AlterEgo: a personalized wearable silent speech interface. In IUI '18: 23rd International Conference on Intelligent User Interfaces 43\u201353 (ACM, 2018); https:\/\/doi.org\/10.1145\/3172944.3172977","DOI":"10.1145\/3172944.3172977"}],"container-title":["Nature Machine 
Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00616-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00616-6","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00616-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T17:17:04Z","timestamp":1677172624000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.nature.com\/articles\/s42256-023-00616-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,23]]},"references-count":61,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["616"],"URL":"https:\/\/doi.org\/10.1038\/s42256-023-00616-6","relation":{},"ISSN":["2522-5839"],"issn-type":[{"value":"2522-5839","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,23]]},"assertion":[{"value":"6 May 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 January 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 February 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}]}}