{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,10]],"date-time":"2025-11-10T12:24:33Z","timestamp":1762777473816,"version":"build-2065373602"},"reference-count":15,"publisher":"Wiley","issue":"6","license":[{"start":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T00:00:00Z","timestamp":1758844800000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Internet Technology Letters"],"published-print":{"date-parts":[[2025,11]]},"abstract":"<jats:title>ABSTRACT<\/jats:title>\n                  <jats:p>Speech emotion recognition based on edge computing technology and deep learning can effectively assist in improving the quality of English short passage reading instruction. Restricted by limited computing resources of different edge devices, existing deep models pose a huge challenge for mobile deployment. To alleviate this issue, this paper proposes a novel hybrid speech emotion recognition model in multi\u2010access edge intelligence scenarios. Firstly, we extract the Log Mel features from the speech signal collected by different clients' microphone sensors. Then, on the cloud platform, we deploy an efficient feature extraction backbone by exploiting 1D convolution operations, a minimal gated unit (MGU) module, and a Mamba module, which is introduced for exploiting long\u2010range dependencies with linear computational complexity. 
We conducted extensive comparative experiments on the public dataset and our own English reading sentiment dataset, and our proposed model achieved the highest recognition performance.<\/jats:p>","DOI":"10.1002\/itl2.70108","type":"journal-article","created":{"date-parts":[[2025,9,26]],"date-time":"2025-09-26T15:09:20Z","timestamp":1758899360000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["A Hybrid Network Speech Recognition Method for English Short Passage Reading Emotion Analysis in Multi\u2010Access Edge Intelligence Scenarios"],"prefix":"10.1002","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-3970-3823","authenticated-orcid":false,"given":"Jun","family":"Liao","sequence":"first","affiliation":[{"name":"Jiaying University  Meizhou China"}]}],"member":"311","published-online":{"date-parts":[[2025,9,26]]},"reference":[{"key":"e_1_2_10_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-025-11197-8"},{"key":"e_1_2_10_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2025.3529125"},{"key":"e_1_2_10_4_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0167-6393(03)00099-2"},{"key":"e_1_2_10_5_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2025.110060"},{"key":"e_1_2_10_6_1","first-page":"125","article-title":"Speech Emotion Recognition From Spectrograms With Deep Convolutional Neural Network","author":"Badshah A. 
M.","year":"2017","journal-title":"IEEE"},{"key":"e_1_2_10_7_1","first-page":"2227","article-title":"Automatic Speech Emotion Recognition Using Recurrent Neural Networks With Local Attention","author":"Mirsamadi S.","year":"2017","journal-title":"IEEE"},{"key":"e_1_2_10_8_1","first-page":"1","article-title":"Speech Emotion Recognition Using Convolutional and Recurrent Neural Networks","author":"Lim W.","year":"2016","journal-title":"IEEE"},{"key":"e_1_2_10_9_1","first-page":"1","article-title":"Hidden Markov Model\u2010Based Speech Emotion Recognition","author":"Schuller B.","year":"2003","journal-title":"IEEE"},{"key":"e_1_2_10_10_1","doi-asserted-by":"publisher","DOI":"10.1186\/s13636-018-0145-5"},{"key":"e_1_2_10_11_1","doi-asserted-by":"crossref","unstructured":"M. Farr\u00fas, J. Hernando, and P. Ejarque, Jitter and Shimmer Measurements for Speaker Recognition. In: 2007:778\u2013781.","DOI":"10.21437\/Interspeech.2007-147"},{"key":"e_1_2_10_12_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2021.107101"},{"key":"e_1_2_10_13_1","doi-asserted-by":"publisher","DOI":"10.1049\/iet-spr.2017.0320"},{"key":"e_1_2_10_14_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.128711"},{"key":"e_1_2_10_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2990405"},{"key":"e_1_2_10_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3163856"}],"container-title":["Internet Technology 
Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/itl2.70108","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,10]],"date-time":"2025-11-10T12:19:50Z","timestamp":1762777190000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/itl2.70108"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,26]]},"references-count":15,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,11]]}},"alternative-id":["10.1002\/itl2.70108"],"URL":"https:\/\/doi.org\/10.1002\/itl2.70108","archive":["Portico"],"relation":{},"ISSN":["2476-1508","2476-1508"],"issn-type":[{"type":"print","value":"2476-1508"},{"type":"electronic","value":"2476-1508"}],"subject":[],"published":{"date-parts":[[2025,9,26]]},"assertion":[{"value":"2025-05-05","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-07-30","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-09-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e70108"}}