{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,20]],"date-time":"2025-10-20T10:21:22Z","timestamp":1760955682352,"version":"3.37.3"},"reference-count":15,"publisher":"Wiley","license":[{"start":{"date-parts":[[2016,1,1]],"date-time":"2016-01-01T00:00:00Z","timestamp":1451606400000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61231002","61375028"],"award-info":[{"award-number":["61231002","61375028"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Journal of Electrical and Computer Engineering"],"published-print":{"date-parts":[[2016]]},"abstract":"<jats:p>Fusing features from separate sources is a key technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is to use the emotional information hidden in the speech spectrum diagram (spectrogram) as image features, based on Deep Belief Nets (DBN) in deep learning, and then fuse these features with traditional emotion features. First, based on spectrogram analysis with the STB\/Itti model, new spectrogram features are extracted from color, brightness, and orientation, respectively; then two alternative DBN models fuse the traditional and spectrogram features, which increases the scale of the feature subset and its ability to characterize emotion. 
In experiments on the ABC database and Chinese corpora, the new feature subset improves cross-corpus recognition results by a distinct 8.8% over traditional speech emotion features. The proposed method provides a new approach to feature fusion for emotion recognition.<\/jats:p>","DOI":"10.1155\/2016\/7437860","type":"journal-article","created":{"date-parts":[[2016,12,28]],"date-time":"2016-12-28T16:20:49Z","timestamp":1482942049000},"page":"1-11","source":"Crossref","is-referenced-by-count":7,"title":["A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition"],"prefix":"10.1155","volume":"2016","author":[{"given":"Zou","family":"Cairong","sequence":"first","affiliation":[{"name":"Department of Information and Communication Engineering, Guangzhou Maritime University, Guangzhou 510006, China"},{"name":"Key Laboratory of Underwater Acoustic signal Processing of Ministry of Education, Southeast University, Nanjing 210096, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7183-2619","authenticated-orcid":true,"given":"Zhang","family":"Xinran","sequence":"additional","affiliation":[{"name":"Key Laboratory of Underwater Acoustic signal Processing of Ministry of Education, Southeast University, Nanjing 210096, China"}]},{"given":"Zha","family":"Cheng","sequence":"additional","affiliation":[{"name":"Key Laboratory of Underwater Acoustic signal Processing of Ministry of Education, Southeast University, Nanjing 210096, China"}]},{"given":"Zhao","family":"Li","sequence":"additional","affiliation":[{"name":"Key Laboratory of Underwater Acoustic signal Processing of Ministry of Education, Southeast University, Nanjing 210096, China"}]}],"member":"311","reference":[{"key":"1","doi-asserted-by":"publisher","DOI":"10.1007\/s10772-011-9125-1"},{"key":"2","doi-asserted-by":"publisher","DOI":"10.1007\/s11235-011-9624-z"},{"key":"5","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2010.09.020"},{"first-page":"873","volume-title":"Sparse deep 
belief net model for visual area V2","year":"2008","key":"8"},{"key":"11","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2012.11.009"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2012.2210727"},{"key":"13","doi-asserted-by":"publisher","DOI":"10.3969\/j.issn.1001-0505.2015.01.002"},{"key":"15","doi-asserted-by":"publisher","DOI":"10.1016\/S0004-3702(02)00399-5"},{"key":"16","doi-asserted-by":"publisher","DOI":"10.1177\/0022219411417877"},{"key":"17","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2013.2267205"},{"year":"2013","key":"18"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.1162\/089976602760128018"},{"key":"21","doi-asserted-by":"publisher","DOI":"10.1016\/j.specom.2015.10.003"},{"issue":"4","key":"23","first-page":"396","volume":"29","year":"2010","journal-title":"Acoustic Technologies"},{"issue":"1","key":"24","first-page":"63","year":"2010","journal-title":"Technical Acoustics"}],"container-title":["Journal of Electrical and Computer Engineering"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2016\/7437860.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2016\/7437860.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2016\/7437860.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2016,12,28]],"date-time":"2016-12-28T16:20:52Z","timestamp":1482942052000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/jece\/2016\/7437860\/"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2016]]},"references-count":15,"alternative-id":["7437860","7437860"],"URL":"https:\/\/doi.org\/10.1155\/2016\/7437860","relation":{},"ISSN":["2090-0147","2090-0155"],"is
sn-type":[{"type":"print","value":"2090-0147"},{"type":"electronic","value":"2090-0155"}],"subject":[],"published":{"date-parts":[[2016]]}}}