{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T23:51:48Z","timestamp":1773273108002,"version":"3.50.1"},"reference-count":41,"publisher":"Oxford University Press (OUP)","issue":"3","license":[{"start":{"date-parts":[[2026,1,24]],"date-time":"2026-01-24T00:00:00Z","timestamp":1769212800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62201402"],"award-info":[{"award-number":["62201402"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62501154"],"award-info":[{"award-number":["62501154"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100021171","name":"Guangdong Basic and Applied Basic Research Foundation","doi-asserted-by":"publisher","award":["2023A1515011978"],"award-info":[{"award-number":["2023A1515011978"]}],"id":[{"id":"10.13039\/501100021171","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100021171","name":"Guangdong Basic and Applied Basic Research Foundation","doi-asserted-by":"publisher","award":["2024A1515110039"],"award-info":[{"award-number":["2024A1515110039"]}],"id":[{"id":"10.13039\/501100021171","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2026,2,28]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Missing modalities frequently occur in EEG- and eye movement-based emotion recognition due to the inherent instability of EEG signals and the complexity of multimodal data acquisition. 
Such modality absence leads to inefficient data utilization and substantial performance degradation, especially when missing modalities appear simultaneously across multiple datasets. This issue is further exacerbated when a large portion of a modality is missing within one or more datasets. Consequently, effectively exploiting the remaining multi-modal information under cross-dataset and missing-modality settings remains a critical and unresolved challenge. To address this challenge, we propose a novel framework termed the data generation and cross-dataset learning network (DGCDLNet), which makes the first attempt to integrate a data generation strategy and a cross-dataset learning mechanism in a unified manner. DGCDLNet contains two key modules: (1) a feature reconstruction and fusion module, which leverages complete eye movement signals to compensate for missing EEG data and constructs discriminative multi-modal features via a dual-stream attention mechanism; and (2) a cross-dataset learning module, which jointly learns coarse-grained representations across datasets while incorporating fine-grained features from the target-task dataset to improve classification accuracy. Extensive experiments on SEED, SEED-IV, and SEED-V demonstrate that DGCDLNet consistently outperforms state-of-the-art multi-modal fusion methods and achieves satisfactory performance under various EEG missing ratios. 
These results indicate the potential of DGCDLNet to advance EEG-based multi-modal emotion recognition beyond controlled laboratory settings toward practical real-world applications.<\/jats:p>","DOI":"10.1093\/jcde\/qwag005","type":"journal-article","created":{"date-parts":[[2026,1,23]],"date-time":"2026-01-23T12:38:54Z","timestamp":1769171934000},"page":"84-96","source":"Crossref","is-referenced-by-count":0,"title":["Joint learning across multi-source datasets for multi-modal emotion recognition under missing modalities"],"prefix":"10.1093","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8359-8225","authenticated-orcid":false,"given":"Zhencheng","family":"Li","sequence":"first","affiliation":[{"name":"School of Electronics and Information Engineering, Wuyi University , Jiangmen 529020 ,","place":["China"]},{"name":"College of Computer Science and Software Engineering, Shenzhen University , Shenzhen 518052 ,","place":["China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7524-5999","authenticated-orcid":false,"given":"Sheng-hua","family":"Zhong","sequence":"additional","affiliation":[{"name":"College of Computer Science and Software Engineering, Shenzhen University , Shenzhen 518052 ,","place":["China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5017-3987","authenticated-orcid":false,"given":"Yimin","family":"Wen","sequence":"additional","affiliation":[{"name":"School of Electronics and Information Engineering, Wuyi University , Jiangmen 529020 ,","place":["China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7634-7834","authenticated-orcid":false,"given":"Chen","family":"Li","sequence":"additional","affiliation":[{"name":"School of Mathematics, Foshan University , Foshan 528225 ,","place":["China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7997-8279","authenticated-orcid":false,"given":"Chi-Man","family":"Vong","sequence":"additional","affiliation":[{"name":"Department of Computer and Information Science, University of Macau , Macao 999078 
,","place":["China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3811-296X","authenticated-orcid":false,"given":"Chuangquan","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Electronics and Information Engineering, Wuyi University , Jiangmen 529020 ,","place":["China"]}]}],"member":"286","published-online":{"date-parts":[[2026,1,24]]},"reference":[{"key":"2026031105122569400_bib1","doi-asserted-by":"publisher","first-page":"551","DOI":"10.1093\/jcde\/qwaa045","article-title":"Use of electroencephalogram and long short-term memory networks to recognize design preferences of users toward architectural design alternatives","volume":"7","author":"Chang","year":"2020","journal-title":"Journal of Computational Design and Engineering"},{"key":"2026031105122569400_bib2","doi-asserted-by":"publisher","first-page":"365","DOI":"10.1109\/TETCI.2024.3406422","article-title":"Comprehensive multisource learning network for cross-subject multimodal emotion recognition","volume":"9","author":"Chen","year":"2025","journal-title":"IEEE Transactions on Emerging Topics in Computational Intelligence"},{"key":"2026031105122569400_bib3","first-page":"1","article-title":"Fusing frequency-domain features and brain connectivity features for cross-subject emotion recognition","volume":"71","author":"Chen","year":"2022","journal-title":"IEEE Transactions on Instrumentation and Measurement"},{"key":"2026031105122569400_bib4","doi-asserted-by":"publisher","first-page":"107982","DOI":"10.1016\/j.knosys.2021.107982","article-title":"Easy domain adaptation for cross-subject multi-view emotion recognition","volume":"239","author":"Chen","year":"2022","journal-title":"Knowledge-Based Systems"},{"key":"2026031105122569400_bib5","doi-asserted-by":"publisher","first-page":"345","DOI":"10.1093\/jcde\/qwaf083","article-title":"A cross-domain fault diagnosis method for mixed-fusion samples based on data generation and class-level domain 
adversary","volume":"12","author":"Chen","year":"2025","journal-title":"Journal of Computational Design and Engineering"},{"key":"2026031105122569400_bib6","doi-asserted-by":"publisher","first-page":"031004","DOI":"10.1088\/1741-2552\/ade290","article-title":"EEG-based affective brain-computer interfaces: recent advancements and future challenges","author":"Chen","year":"2025","journal-title":"Journal of Neural Engineering"},{"key":"2026031105122569400_bib7","doi-asserted-by":"publisher","DOI":"10.1145\/3503161.3548367","article-title":"VigilanceNet: decouple intra-and inter-modality learning for multimodal vigilance estimation in RSVP-based BCI","volume-title":"Proceedings of the 30th ACM International Conference on Multimedia","author":"Cheng","year":"2022"},{"key":"2026031105122569400_bib8","doi-asserted-by":"publisher","first-page":"158","DOI":"10.1093\/jcde\/qwae042","article-title":"Integration of eye-tracking and object detection in a deep learning system for quality inspection analysis","volume":"11","author":"Cho","year":"2024","journal-title":"Journal of Computational Design and Engineering"},{"key":"2026031105122569400_bib9","doi-asserted-by":"publisher","first-page":"81","DOI":"10.1109\/NER.2013.6695876","article-title":"Differential entropy feature for EEG-based emotion classification","volume-title":"2013 6th International IEEE\/EMBS Conference on Neural Engineering (NER)","author":"Duan","year":"2013"},{"key":"2026031105122569400_bib10","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/NER52421.2023.10123730","article-title":"EEG-eye movements cross-modal decision confidence measurement with generative adversarial networks","volume-title":"2023 11th International IEEE\/EMBS Conference on Neural Engineering (NER)","author":"Fei","year":"2023"},{"key":"2026031105122569400_bib11","first-page":"1180","article-title":"Unsupervised domain adaptation by backpropagation","volume-title":"International Conference on Machine 
Learning","author":"Ganin","year":"2015"},{"key":"2026031105122569400_bib12","doi-asserted-by":"publisher","first-page":"4254","DOI":"10.1109\/TCSS.2025.3567298","article-title":"HSA-Former: hierarchical spatial aggregation transformer for eeg-based emotion recognition","volume":"12","author":"Huang","year":"2025","journal-title":"IEEE Transactions on Computational Social Systems"},{"key":"2026031105122569400_bib13","doi-asserted-by":"publisher","first-page":"102019","DOI":"10.1016\/j.inffus.2023.102019","article-title":"Emotion recognition and artificial intelligence: a systematic review (2014\u20132023) and research recommendations","volume":"102","author":"Khare","year":"2024","journal-title":"Information Fusion"},{"key":"2026031105122569400_bib14","article-title":"Spatial group-wise enhance: improving semantic feature learning in convolutional networks","author":"Li","year":"2019"},{"key":"2026031105122569400_bib15","doi-asserted-by":"publisher","first-page":"715","DOI":"10.1109\/TCDS.2021.3071170","article-title":"Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition","volume":"14","author":"Liu","year":"2021","journal-title":"IEEE Transactions on Cognitive and Developmental Systems"},{"key":"2026031105122569400_bib16","doi-asserted-by":"publisher","first-page":"2247","DOI":"10.18653\/v1\/P18-1209","article-title":"Efficient low-rank multimodal fusion with modality-specific factors","volume-title":"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)","author":"Liu","year":"2018"},{"key":"2026031105122569400_bib17","volume-title":"Combining Eye Movements and EEG to Enhance Emotion Recognition","author":"Lu","year":"2015"},{"key":"2026031105122569400_bib18","doi-asserted-by":"publisher","first-page":"2302","DOI":"10.1609\/aaai.v35i3.16330","article-title":"Smil: multimodal learning with severely missing modality","volume-title":"Proceedings of the 
AAAI Conference on Artificial Intelligence","author":"Ma","year":"2021"},{"key":"2026031105122569400_bib19","doi-asserted-by":"publisher","first-page":"1286","DOI":"10.3390\/app12031286","article-title":"Comparative analysis of emotion classification based on facial expression and physiological signals using deep learning","volume":"12","author":"Oh","year":"2022","journal-title":"Applied Sciences"},{"key":"2026031105122569400_bib20","doi-asserted-by":"publisher","first-page":"1637","DOI":"10.1109\/TNSRE.2024.3389037","article-title":"Multi-scale masked autoencoders for cross-session emotion recognition","volume":"32","author":"Pang","year":"2024","journal-title":"IEEE Transactions on Neural Systems and Rehabilitation Engineering"},{"key":"2026031105122569400_bib21","doi-asserted-by":"publisher","first-page":"1327","DOI":"10.1093\/jcde\/qwac059","article-title":"Infrared webcam-based non-contact measurement of event-related potentials from event-related pupillary responses: an approach focused on mental workload","volume":"9","author":"Park","year":"2022","journal-title":"Journal of Computational Design and Engineering"},{"key":"2026031105122569400_bib22","doi-asserted-by":"publisher","first-page":"8104","DOI":"10.1109\/TII.2022.3217120","article-title":"Joint EEG feature transfer and semisupervised cross-subject emotion recognition","volume":"19","author":"Peng","year":"2022","journal-title":"IEEE Transactions on Industrial Informatics"},{"key":"2026031105122569400_bib23","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1167\/14.13.14","article-title":"Eye movements during emotion recognition in faces","volume":"14","author":"Schurgin","year":"2014","journal-title":"Journal of Vision"},{"key":"2026031105122569400_bib24","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/ICASSP49660.2025.10889485","article-title":"Enhancing emotion recognition in incomplete data: a novel cross-modal alignment, reconstruction, and refinement 
framework","volume-title":"ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Sun","year":"2025"},{"key":"2026031105122569400_bib25","article-title":"Attention is all you need","volume":"30","author":"Vaswani","year":"2017","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2026031105122569400_bib26","doi-asserted-by":"publisher","DOI":"10.1109\/BIBM52615.2021.9669556","article-title":"Emotion transformer fusion: complementary representation properties of EEG and eye movements on recognizing anger and surprise","volume-title":"2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)","author":"Wang","year":"2021"},{"key":"2026031105122569400_bib27","doi-asserted-by":"publisher","first-page":"1543","DOI":"10.1109\/TAFFC.2024.3524418","article-title":"From EEG to eye movements: cross-modal emotion recognition using constrained adversarial network with dual attention","volume":"16","author":"Wang","year":"2024","journal-title":"IEEE Transactions on Affective Computing"},{"key":"2026031105122569400_bib28","doi-asserted-by":"publisher","first-page":"101095","DOI":"10.1016\/j.aei.2020.101095","article-title":"Prediction of product design decision Making: An investigation of eye movements and EEG features","volume":"45","author":"Wang","year":"2020","journal-title":"Advanced Engineering Informatics"},{"key":"2026031105122569400_bib29","doi-asserted-by":"publisher","first-page":"805","DOI":"10.1109\/TAFFC.2020.2966440","article-title":"Two-stage fuzzy fusion based-convolution neural network for dynamic emotion recognition","volume":"13","author":"Wu","year":"2020","journal-title":"IEEE Transactions on Affective Computing"},{"key":"2026031105122569400_bib30","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-01261-8_1","article-title":"Group normalization","volume-title":"Proceedings of the European Conference on Computer Vision 
(ECCV)","author":"Wu","year":"2018"},{"key":"2026031105122569400_bib31","doi-asserted-by":"publisher","first-page":"438","DOI":"10.1145\/3664647.3681683","article-title":"Leveraging knowledge of modality experts for incomplete multimodal learning","volume-title":"Proceedings of the 32nd ACM International Conference on Multimedia","author":"Xu","year":"2024"},{"key":"2026031105122569400_bib32","doi-asserted-by":"publisher","first-page":"1057","DOI":"10.1145\/3474085.3475701","article-title":"Simplifying multimodal emotion recognition with single eye movement modality","volume-title":"Proceedings of the 29th ACM International Conference on Multimedia","author":"Yan","year":"2021"},{"key":"2026031105122569400_bib33","doi-asserted-by":"publisher","first-page":"16416","DOI":"10.1609\/aaai.v38i15.29578","article-title":"Drfuse: learning disentangled representation for clinical multi-modal fusion with missing modality and modal inconsistency","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence","author":"Yao","year":"2024"},{"key":"2026031105122569400_bib34","doi-asserted-by":"publisher","first-page":"1103","DOI":"10.18653\/v1\/D17-1115","article-title":"Tensor fusion network for multimodal sentiment analysis","volume-title":"Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing","author":"Zadeh","year":"2017"},{"key":"2026031105122569400_bib35","first-page":"55943","article-title":"Towards robust multimodal sentiment analysis with incomplete data","volume":"37","author":"Zhang","year":"2024","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2026031105122569400_bib36","first-page":"203","article-title":"A cross-subject and cross-modal model for multimodal emotion recognition","volume-title":"International Conference on Neural Information 
Processing","author":"Zhang","year":"2021"},{"key":"2026031105122569400_bib37","doi-asserted-by":"publisher","first-page":"5040","DOI":"10.1109\/EMBC.2014.6944757","article-title":"Multimodal emotion recognition using EEG and eye tracking data","volume-title":"2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society","author":"Zheng","year":"2014"},{"key":"2026031105122569400_bib38","doi-asserted-by":"publisher","first-page":"1110","DOI":"10.1109\/TCYB.2018.2797176","article-title":"Emotionmeter: a multimodal framework for recognizing human emotions","volume":"49","author":"Zheng","year":"2018","journal-title":"IEEE Transactions on Cybernetics"},{"key":"2026031105122569400_bib39","doi-asserted-by":"publisher","first-page":"162","DOI":"10.1109\/TAMD.2015.2431497","article-title":"Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks","volume":"7","author":"Zheng","year":"2015","journal-title":"IEEE Transactions on Autonomous Mental Development"},{"key":"2026031105122569400_bib40","doi-asserted-by":"publisher","first-page":"417","DOI":"10.1109\/TAFFC.2017.2712143","article-title":"Identifying stable patterns over time for emotion recognition from EEG","volume":"10","author":"Zheng","year":"2017","journal-title":"IEEE Transactions on Affective Computing"},{"key":"2026031105122569400_bib41","doi-asserted-by":"publisher","first-page":"657","DOI":"10.1109\/TAFFC.2023.3288118","article-title":"PR-PL: A novel prototypical representation based pairwise learning framework for emotion recognition using EEG signals","volume":"15","author":"Zhou","year":"2023","journal-title":"IEEE Transactions on Affective Computing"}],"container-title":["Journal of Computational Design and 
Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/academic.oup.com\/jcde\/advance-article-pdf\/doi\/10.1093\/jcde\/qwag005\/66564608\/qwag005.pdf","content-type":"application\/pdf","content-version":"am","intended-application":"syndication"},{"URL":"https:\/\/academic.oup.com\/jcde\/article-pdf\/13\/3\/84\/66564608\/qwag005.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/academic.oup.com\/jcde\/article-pdf\/13\/3\/84\/66564608\/qwag005.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T09:12:35Z","timestamp":1773220355000},"score":1,"resource":{"primary":{"URL":"https:\/\/academic.oup.com\/jcde\/article\/13\/3\/84\/8440134"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,24]]},"references-count":41,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2026,2,28]]}},"URL":"https:\/\/doi.org\/10.1093\/jcde\/qwag005","relation":{},"ISSN":["2288-5048"],"issn-type":[{"value":"2288-5048","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2026,3]]},"published":{"date-parts":[[2026,1,24]]}}}