{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T05:16:36Z","timestamp":1774934196290,"version":"3.50.1"},"reference-count":16,"publisher":"World Scientific Pub Co Pte Ltd","issue":"07","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J CIRCUIT SYST COMP"],"published-print":{"date-parts":[[2023,5,15]]},"abstract":"<jats:p> Facial expressions can be used to identify human emotions, but they are easily misjudged when deliberately concealed. In addition, emotion recognition from a single modality often suffers a low recognition rate because of the limitations inherent to that modality. To address these problems, a fusion of a spatio-temporal neural network and a separable residual network is proposed to recognize emotions from EEG signals and facial images. The average recognition rates on the EEG and face data sets are 78.14% and 70.89%, respectively, and decision fusion on the DEAP data set achieves a recognition rate of 84.53%. Experimental results show that, compared with a single modality, the proposed bimodal emotion recognition architecture performs better and effectively integrates the emotional information contained in facial visual signals and EEG signals. <\/jats:p>","DOI":"10.1142\/s0218126623501256","type":"journal-article","created":{"date-parts":[[2022,10,14]],"date-time":"2022-10-14T09:02:21Z","timestamp":1665738141000},"source":"Crossref","is-referenced-by-count":6,"title":["Multi-Modal Emotion Recognition Combining Face Image and EEG Signal"],"prefix":"10.1142","volume":"32","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5503-0747","authenticated-orcid":false,"given":"Ying","family":"Hu","sequence":"first","affiliation":[{"name":"Department of Electrical Automation Engineering, ShanXi Polytechnic College, Taiyuan 030002, P. R. China"}]},{"given":"Feng","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Information and Computer, Taiyuan University of Technology, Taiyuan 030000, P. R. China"}]}],"member":"219","published-online":{"date-parts":[[2022,11,19]]},"reference":[{"key":"S0218126623501256BIB001","first-page":"36","volume":"3","author":"Lyu B. L.","year":"2021","journal-title":"J. Intell. Sci. Technol."},{"key":"S0218126623501256BIB002","first-page":"633","volume":"15","author":"Pan J. H.","year":"2020","journal-title":"J. Intell. Syst."},{"key":"S0218126623501256BIB003","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1109\/TCDS.2016.2587290","volume":"9","author":"Zheng W.","year":"2017","journal-title":"IEEE Trans. Cogn. Dev. Syst."},{"key":"S0218126623501256BIB004","first-page":"1363","volume-title":"2nd Int. Conf. Bioinformatics and Biomedical Engineering","author":"Cheng B."},{"key":"S0218126623501256BIB005","doi-asserted-by":"crossref","first-page":"102","DOI":"10.1109\/T-AFFC.2011.28","volume":"3","author":"Agrafioti F.","year":"2012","journal-title":"IEEE Trans. Affective Comput."},{"issue":"22","key":"S0218126623501256BIB006","first-page":"133","volume":"2021","author":"Song J.","year":"2021","journal-title":"Mod. Comput."},{"key":"S0218126623501256BIB008","doi-asserted-by":"crossref","first-page":"105","DOI":"10.3390\/fi11050105","volume":"11","author":"Huang Y. R.","year":"2019","journal-title":"Future Internet"},{"key":"S0218126623501256BIB009","doi-asserted-by":"crossref","first-page":"143303","DOI":"10.1109\/ACCESS.2019.2944273","volume":"7","author":"Wang Z. M.","year":"2019","journal-title":"IEEE Access"},{"key":"S0218126623501256BIB010","first-page":"08047","volume":"164","author":"Chen T.","year":"2020","journal-title":"Measurement"},{"key":"S0218126623501256BIB011","first-page":"ii\/1085","volume-title":"Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing","volume":"2","author":"Hoch S.","year":"2005"},{"key":"S0218126623501256BIB012","first-page":"2890","volume-title":"SICE 2003 Annual Conf.","author":"Go H. J."},{"key":"S0218126623501256BIB013","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1145\/2988257.2988264","volume-title":"Proc. 6th Int. Workshop on Audio\/Visual Emotion Challenge","author":"Brady K.","year":"2016"},{"key":"S0218126623501256BIB014","first-page":"1","volume-title":"Proc. Int. Conf. Digital Image and Signal Processing (DISP\u201919)","author":"Bird J. J."},{"key":"S0218126623501256BIB017","doi-asserted-by":"crossref","first-page":"94106","DOI":"10.1109\/ACCESS.2019.2928983","volume":"7","author":"Chen Y.","year":"2019","journal-title":"IEEE Access"},{"key":"S0218126623501256BIB020","first-page":"1641","volume-title":"2017 3rd IEEE Int. Conf. Computer and Communications (ICCC)","author":"Li Z."},{"key":"S0218126623501256BIB021","doi-asserted-by":"crossref","first-page":"103361","DOI":"10.1016\/j.bspc.2021.103361","volume":"72","author":"Jana G. C.","year":"2022","journal-title":"Biomed. Signal Process. Control"}],"container-title":["Journal of Circuits, Systems and Computers"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.worldscientific.com\/doi\/pdf\/10.1142\/S0218126623501256","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,5,21]],"date-time":"2023-05-21T03:34:15Z","timestamp":1684640055000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.worldscientific.com\/doi\/10.1142\/S0218126623501256"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,19]]},"references-count":16,"journal-issue":{"issue":"07","published-print":{"date-parts":[[2023,5,15]]}},"alternative-id":["10.1142\/S0218126623501256"],"URL":"https:\/\/doi.org\/10.1142\/s0218126623501256","relation":{},"ISSN":["0218-1266","1793-6454"],"issn-type":[{"value":"0218-1266","type":"print"},{"value":"1793-6454","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,19]]},"article-number":"2350125"}}