{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T21:09:03Z","timestamp":1776114543838,"version":"3.50.1"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:p>Speech Emotion Recognition (SER) has become a growing focus of research in human-computer interaction. An essential challenge in SER is to extract common attributes from different speakers or languages, especially when a model trained on a specific source corpus has to recognize unknown data from another speech corpus. To address this challenge, a Capsule Network (CapsNet) and Transfer Learning based Mixed Task Net (CTL-MTNet) is proposed in this paper to deal with both the single-corpus and cross-corpus SER tasks simultaneously. For the single-corpus task, a Convolution-Pooling and Attention CapsNet (CPAC) module is designed by embedding a self-attention mechanism into the CapsNet, guiding the module to focus on the important features that can be fed into different capsules. The high-level features extracted by CPAC provide sufficient discriminative ability. Furthermore, to handle the cross-corpus task, CTL-MTNet employs a Corpus Adaptation Adversarial Module (CAAM) that combines CPAC with Margin Disparity Discrepancy (MDD), learning domain-invariant emotion representations by extracting strong emotion commonness. Experiments, including ablation studies and visualizations, on both single- and cross-corpus tasks using four well-known SER datasets in different languages are conducted for performance evaluation and comparison. The results indicate that in both tasks CTL-MTNet outperformed a number of state-of-the-art methods in all cases. The source code and the supplementary materials are available at: https:\/\/github.com\/MLDMXM2017\/CTLMTNet.<\/jats:p>","DOI":"10.24963\/ijcai.2022\/320","type":"proceedings-article","created":{"date-parts":[[2022,7,16]],"date-time":"2022-07-16T02:55:56Z","timestamp":1657940156000},"page":"2305-2311","source":"Crossref","is-referenced-by-count":22,"title":["CTL-MTNet: A Novel CapsNet and Transfer Learning-Based Mixed Task Net for Single-Corpus and Cross-Corpus Speech Emotion Recognition"],"prefix":"10.24963","author":[{"given":"Xin-Cheng","family":"Wen","sequence":"first","affiliation":[{"name":"Xiamen University"}]},{"given":"JiaXin","family":"Ye","sequence":"additional","affiliation":[{"name":"Xiamen University"}]},{"given":"Yan","family":"Luo","sequence":"additional","affiliation":[{"name":"Xiamen University"}]},{"given":"Yong","family":"Xu","sequence":"additional","affiliation":[{"name":"Fujian University of Technology"}]},{"given":"Xuan-Ze","family":"Wang","sequence":"additional","affiliation":[{"name":"Xiamen University"}]},{"given":"Chang-Li","family":"Wu","sequence":"additional","affiliation":[{"name":"Xiamen University"}]},{"given":"Kun-Hong","family":"Liu","sequence":"additional","affiliation":[{"name":"Xiamen University"}]}],"member":"10584","event":{"name":"Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}","theme":"Artificial Intelligence","location":"Vienna, Austria","acronym":"IJCAI-2022","number":"31","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"start":{"date-parts":[[2022,7,23]]},"end":{"date-parts":[[2022,7,29]]}},"container-title":["Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2022,7,18]],"date-time":"2022-07-18T11:09:02Z","timestamp":1658142542000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2022\/320"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2022,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2022\/320","relation":{},"subject":[],"published":{"date-parts":[[2022,7]]}}}