{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,11]],"date-time":"2025-09-11T17:34:37Z","timestamp":1757612077797,"version":"3.44.0"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T00:00:00Z","timestamp":1753228800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T00:00:00Z","timestamp":1753228800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100004834","name":"Universitat Jaume I","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100004834","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Pattern Anal Applic"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Enriching multimedia content with affective metadata unlocks opportunities for innovative and enhanced user experiences in recommendation systems, multimedia retrieval, and many other applications. However, it is really challenging to accurately decode human affect. On the one hand, manual procedures for affective annotation are labour-intensive and hardly scalable. On the other hand, content-based approaches rely on the assumption that affective experiences (1) are directly related to the content, (2) are homogeneous for a particular content, and (3) ignore the subjective nature of affective responses. Recent advancements in brain-computer interfacing (BCI) signifies the prospect of partially automating affective annotation by decoding natural affective experiences toward the content. 
We consider affective annotation of videos based on <jats:italic>brainsourcing<\/jats:italic>: crowdsourced affective reactions from brain signals, recorded while participants were watching videos. Our experiments are based on two popular datasets (DEAP and SEED) and three crowdsourcing models. Crowdsourcing models can support affective annotation for all videos in the SEED dataset and most videos in the DEAP dataset. For both datasets and all crowdsourcing models, the performance of affective annotation increases with crowd size, as does the confidence of the classifiers. For example, the average classification accuracy for binary valence in DEAP is below 60% for individual predictions, but increases to about 70% for a crowd size of six participants and reaches about 80% for fourteen participants. Our findings open avenues for utilizing data captured via BCI for understanding and annotating content according to its users\u2019 affective experiences.<\/jats:p>","DOI":"10.1007\/s10044-025-01476-z","type":"journal-article","created":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T11:44:10Z","timestamp":1753271050000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Affective annotation of videos from EEG-based crowdsourcing"],"prefix":"10.1007","volume":"28","author":[{"given":"Yoelvis","family":"Moreno-Alcayde","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2203-4928","authenticated-orcid":false,"given":"Tuukka","family":"Ruotsalo","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5011-1847","authenticated-orcid":false,"given":"Luis A.","family":"Leiva","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1596-8466","authenticated-orcid":false,"given":"V. 
Javier","family":"Traver","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,7,23]]},"reference":[{"key":"1476_CR1","doi-asserted-by":"publisher","DOI":"10.1145\/3297713","author":"A Bablani","year":"2019","unstructured":"Bablani A, Edla DR, Tripathi D, Cheruku R (2019) Survey on brain-computer interface: an emerging computational intelligence paradigm. ACM Comput Surv. https:\/\/doi.org\/10.1145\/3297713","journal-title":"ACM Comput Surv"},{"issue":"4","key":"1476_CR2","doi-asserted-by":"publisher","first-page":"396","DOI":"10.1109\/TAFFC.2017.2661284","volume":"9","author":"Y Baveye","year":"2018","unstructured":"Baveye Y, Chamaret C, Dellandr\u00e9a E, Chen L (2018) Affective video content analysis: a multidisciplinary insight. IEEE Trans Affect Comput 9(4):396\u2013409. https:\/\/doi.org\/10.1109\/TAFFC.2017.2661284","journal-title":"IEEE Trans Affect Comput"},{"key":"1476_CR3","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2021.103289","volume":"72","author":"S Bhosale","year":"2022","unstructured":"Bhosale S, Chakraborty R, Kopparapu SK (2022) Calibration free meta learning based approach for subject independent EEG emotion recognition. Biomed Signal Process Control 72:103289. https:\/\/doi.org\/10.1016\/j.bspc.2021.103289","journal-title":"Biomed Signal Process Control"},{"key":"1476_CR4","first-page":"29","volume-title":"The International Affective Picture System (IASP) in the study of emotion and attention","author":"MM Bradley","year":"2007","unstructured":"Bradley MM, Lang PJ (2007) The International Affective Picture System (IASP) in the study of emotion and attention. Oxford University Press, pp 29\u201346"},{"key":"1476_CR5","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-023-02650-w","author":"J Chen","year":"2023","unstructured":"Chen J, Wang X, Huang C, Hu X, Shen X, Zhang D (2023) A large finer-grained affective computing EEG dataset. Sci Data. 
https:\/\/doi.org\/10.1038\/s41597-023-02650-w","journal-title":"Sci Data"},{"key":"1476_CR6","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511801389","volume-title":"An introduction to support vector machines and other kernel-based learning methods","author":"N Cristianini","year":"2000","unstructured":"Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press. https:\/\/doi.org\/10.1017\/CBO9780511801389"},{"key":"1476_CR7","doi-asserted-by":"publisher","unstructured":"Davis KM, Kangassalo L, Spap\u00e9 MM, Ruotsalo T (2020) Brainsourcing: Crowdsourcing recognition tasks via collaborative brain-computer interfacing. In: Bernhaupt R, Mueller FF, Verweij D, Andres J, McGrenere J, Cockburn A, Avellino I, Goguey A, Bj\u00f8rn P, Zhao S, Samson BP, Kocielnik R (eds) CHI \u201920: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, pages 1\u201314. ACM. https:\/\/doi.org\/10.1145\/3313831.3376288","DOI":"10.1145\/3313831.3376288"},{"issue":"4","key":"1476_CR8","doi-asserted-by":"publisher","first-page":"3094","DOI":"10.1109\/TAFFC.2022.3225885","volume":"14","author":"KM Davis","year":"2023","unstructured":"Davis KM, Spape M, Ruotsalo T (2023) Contradicted by the brain: predicting individual and group preferences via brain-computer interfacing. IEEE Trans Affect Comput 14(4):3094\u20133105. https:\/\/doi.org\/10.1109\/TAFFC.2022.3225885","journal-title":"IEEE Trans Affect Comput"},{"key":"1476_CR9","doi-asserted-by":"publisher","DOI":"10.1109\/TCYB.2024.3406159","author":"C de la Torre-Ortiz","year":"2024","unstructured":"de la Torre-Ortiz C, Spap\u00e9 MM, Ravaja N, Ruotsalo T (2024) Cross-subject EEG feedback for implicit image generation. IEEE Trans Cybern. 
https:\/\/doi.org\/10.1109\/TCYB.2024.3406159","journal-title":"IEEE Trans Cybern"},{"key":"1476_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2023.105745","volume":"89","author":"GC de Melo","year":"2024","unstructured":"de Melo GC, Castellano G, Forner-Cordero A (2024) A procedure to minimize EEG variability for BCI applications. Biomed Signal Process Control 89:105745. https:\/\/doi.org\/10.1016\/j.bspc.2023.105745","journal-title":"Biomed Signal Process Control"},{"key":"1476_CR11","doi-asserted-by":"publisher","unstructured":"de Polavieja GG, Madirolas G (2014) Wisdom of the confident: using social interactions to eliminate the bias in wisdom of the crowds. CoRR, arXiv:1406.7578. https:\/\/doi.org\/10.48550\/arXiv.1406.7578","DOI":"10.48550\/arXiv.1406.7578"},{"key":"1476_CR12","doi-asserted-by":"publisher","first-page":"749","DOI":"10.1007\/s10994-017-5677-x","volume":"107","author":"Y-X Ding","year":"2018","unstructured":"Ding Y-X, Zhou Z-H (2018) Crowdsourcing with unsure option. Mach Learn 107:749\u2013766. https:\/\/doi.org\/10.1007\/s10994-017-5677-x","journal-title":"Mach Learn"},{"key":"1476_CR13","doi-asserted-by":"publisher","first-page":"403","DOI":"10.1016\/j.ins.2023.02.052","volume":"630","author":"W Ding","year":"2023","unstructured":"Ding W, Abdel-Basset M, Hawash H, Abdel-Razek S, Liu C (2023) Fed-ESD: federated learning for efficient epileptic seizure detection in the fog-assisted internet of medical things. Inf Sci 630:403\u2013419. https:\/\/doi.org\/10.1016\/j.ins.2023.02.052","journal-title":"Inf Sci"},{"key":"1476_CR14","doi-asserted-by":"publisher","unstructured":"Duan R-N, Zhu J-Y, Lu B-L (2013) Differential entropy feature for EEG-based emotion classification. In: 6th International IEEE\/EMBS Conference on Neural Engineering (NER), pages 81\u201384. 
https:\/\/doi.org\/10.1109\/NER.2013.6695876","DOI":"10.1109\/NER.2013.6695876"},{"key":"1476_CR15","doi-asserted-by":"publisher","unstructured":"Estrada E, Nazeran H, Nava P, Behbehani K, Burk J, Lucas E (2004) EEG feature extraction for classification of sleep stages. In: The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 1, pages 196\u2013199. https:\/\/doi.org\/10.1109\/IEMBS.2004.1403125","DOI":"10.1109\/IEMBS.2004.1403125"},{"key":"1476_CR16","doi-asserted-by":"publisher","unstructured":"Freitas J, Nguyen A, Bosl W (2020) Confidence in the qualified crowd: a platform for sourcing EEG annotations. In: IEEE Signal Processing in Medicine and Biology Symposium (SPMB), pages 1\u20136. https:\/\/doi.org\/10.1109\/SPMB50085.2020.9353617","DOI":"10.1109\/SPMB50085.2020.9353617"},{"key":"1476_CR17","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2024.3415112","author":"J Gui","year":"2024","unstructured":"Gui J, Chen T, Zhang J, Cao Q, Sun Z, Luo H, Tao D (2024) A survey on self-supervised learning: Algorithms, applications, and future trends. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2024.3415112","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1476_CR18","doi-asserted-by":"publisher","first-page":"249","DOI":"10.1016\/j.neucom.2021.04.112","volume":"459","author":"SCH Hoi","year":"2021","unstructured":"Hoi SCH, Sahoo D, Lu J, Zhao P (2021) Online learning: a comprehensive survey. Neurocomputing 459:249\u2013289. https:\/\/doi.org\/10.1016\/j.neucom.2021.04.112","journal-title":"Neurocomputing"},{"key":"1476_CR19","unstructured":"Huang Zh (1997) Clustering large data sets with mixed numeric and categorical values. 
In: Proceedings of the First Pacific Asia Knowledge Discovery and Data Mining Conference, pages 21\u201334"},{"key":"1476_CR20","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-023-30599-8","author":"F Itsuki","year":"2023","unstructured":"Itsuki F, Kunhao Y, Kazuhiro U (2023) On an effective and efficient method for exploiting the wisdom of the inner crowd. Sci Rep. https:\/\/doi.org\/10.1038\/s41598-023-30599-8","journal-title":"Sci Rep"},{"key":"1476_CR21","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2023.107450","volume":"165","author":"M Jafari","year":"2023","unstructured":"Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, Garc\u00eda DL, Gorriz JM, Rajendra Acharya U (2023) Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 165:107450. https:\/\/doi.org\/10.1016\/j.compbiomed.2023.107450 (last access: April 2025)","journal-title":"Comput Biol Med"},{"key":"1476_CR22","unstructured":"Jiaqi1008 (2020) Emotion detection. https:\/\/github.com\/Jiaqi1008\/Emotion_detection. GitHub repository"},{"issue":"5","key":"1476_CR23","doi-asserted-by":"publisher","first-page":"425","DOI":"10.1177\/10738584211017018","volume":"28","author":"BP Johnson","year":"2022","unstructured":"Johnson BP, Dayan E, Censor N, Cohen LG (2022) Crowdsourcing in cognitive and systems neuroscience. Neuroscientist 28(5):425\u2013437. https:\/\/doi.org\/10.1177\/10738584211017018","journal-title":"Neuroscientist"},{"issue":"1","key":"1476_CR24","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1109\/JBHI.2017.2688239","volume":"22","author":"S Katsigiannis","year":"2018","unstructured":"Katsigiannis S, Ramzan N (2018) DREAMER: a database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J Biomed Health Inform 22(1):98\u2013107. 
https:\/\/doi.org\/10.1109\/JBHI.2017.2688239","journal-title":"IEEE J Biomed Health Inform"},{"issue":"1","key":"1476_CR25","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","volume":"3","author":"S Koelstra","year":"2012","unstructured":"Koelstra S, Muhl C, Soleymani M, Lee J-S, Yazdani A, Ebrahimi T, Pun T, Nijholt A, Patras I (2012) DEAP: a database for emotion analysis using physiological signals. IEEE Trans Affect Comput 3(1):18\u201331. https:\/\/doi.org\/10.1109\/T-AFFC.2011.15","journal-title":"IEEE Trans Affect Comput"},{"key":"1476_CR26","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2018.00162","author":"X Li","year":"2018","unstructured":"Li X, Dawei S, Peng Z, Yazhou Z, Hou Y, Hu B (2018) Exploring EEG features in cross-subject emotion recognition. Front Neurosci. https:\/\/doi.org\/10.3389\/fnins.2018.00162","journal-title":"Front Neurosci"},{"key":"1476_CR27","doi-asserted-by":"publisher","DOI":"10.1016\/j.bspc.2022.103660","volume":"76","author":"J Li","year":"2022","unstructured":"Li J, Xia W, Zhang Y, Yang H, Xiaojun W (2022) DRS-Net: a spatial-temporal affective computing model based on multichannel EEG data. Biomed Signal Process Control 76:103660. https:\/\/doi.org\/10.1016\/j.bspc.2022.103660","journal-title":"Biomed Signal Process Control"},{"key":"1476_CR28","doi-asserted-by":"publisher","DOI":"10.1145\/3524499","author":"X Li","year":"2022","unstructured":"Li X, Zhang Y, Tiwari P, Song D, Hu B, Yang M, Zhao Z, Kumar N, Marttinen P (2022) EEG based emotion recognition: a tutorial and review. ACM Comput Surv. 
https:\/\/doi.org\/10.1145\/3524499","journal-title":"ACM Comput Surv"},{"issue":"2","key":"1476_CR29","doi-asserted-by":"publisher","first-page":"715","DOI":"10.1109\/TCDS.2021.3071170","volume":"14","author":"W Liu","year":"2022","unstructured":"Liu W, Qiu J-L, Zheng W-L, Bao-Liang L (2022) Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition. IEEE Trans Cognit Dev Syst 14(2):715\u2013729. https:\/\/doi.org\/10.1109\/TCDS.2021.3071170","journal-title":"IEEE Trans Cognit Dev Syst"},{"issue":"4","key":"1476_CR30","doi-asserted-by":"publisher","first-page":"456","DOI":"10.1109\/T-AFFC.2012.19","volume":"3","author":"D McDuff","year":"2012","unstructured":"McDuff D, El Kaliouby R, Picard RW (2012) Crowdsourcing facial responses to online videos. IEEE Trans Affect Comput 3(4):456\u2013468. https:\/\/doi.org\/10.1109\/T-AFFC.2012.19","journal-title":"IEEE Trans Affect Comput"},{"issue":"2","key":"1476_CR31","doi-asserted-by":"publisher","first-page":"479","DOI":"10.1109\/TAFFC.2018.2884461","volume":"12","author":"JA Miranda-Correa","year":"2021","unstructured":"Miranda-Correa JA, Abadi MK, Sebe N, Patras I (2021) AMIGOS: a dataset for affect, personality and mood research on individuals and groups. IEEE Trans Affect Comput 12(2):479\u2013493. https:\/\/doi.org\/10.1109\/TAFFC.2018.2884461","journal-title":"IEEE Trans Affect Comput"},{"key":"1476_CR32","doi-asserted-by":"publisher","first-page":"103","DOI":"10.1007\/s13534-023-00316-5","volume":"14","author":"Y Moreno-Alcayde","year":"2024","unstructured":"Moreno-Alcayde Y, Javier Traver V, Leiva L (2024) Sneaky emotions: impact of data partitions in affective computing experiments with brain-computer interfacing. Biomed Eng Lett 14:103\u2013113. 
https:\/\/doi.org\/10.1007\/s13534-023-00316-5","journal-title":"Biomed Eng Lett"},{"key":"1476_CR33","unstructured":"Rajpura P, Pandey P, Miyapuram K (2022) Continual learning for EEG based brain computer interfaces. In: Continual Lifelong Learning Workshop at ACML 2022. https:\/\/openreview.net\/forum?id=9Y_wci2OC3 (last access: April 2025)"},{"key":"1476_CR34","doi-asserted-by":"publisher","DOI":"10.3389\/fnbot.2020.00025","author":"M Rashid","year":"2020","unstructured":"Rashid M, Sulaiman N, Majeed Anwar APP, Musa RM, Nasir AFA, Bari BS, Khatun S (2020) Current status, challenges, and possible solutions of EEG-based brain-computer interface: A comprehensive review. Front Neurorobot. https:\/\/doi.org\/10.3389\/fnbot.2020.00025","journal-title":"Front Neurorobot"},{"key":"1476_CR35","doi-asserted-by":"publisher","DOI":"10.1145\/3472291","author":"P Ren","year":"2021","unstructured":"Ren P, Xiao Y, Chang X, Huang P-Y, Li Z, Gupta BB, Chen X, Wang X (2021) A survey of deep active learning. ACM Comput Surv. https:\/\/doi.org\/10.1145\/3472291","journal-title":"ACM Comput Surv"},{"key":"1476_CR36","volume-title":"Affective computing","author":"RW Picard","year":"2000","unstructured":"Picard RW (2000) Affective computing. MIT Press"},{"issue":"1","key":"1476_CR37","doi-asserted-by":"publisher","first-page":"297","DOI":"10.1109\/TAFFC.2023.3273916","volume":"15","author":"T Ruotsalo","year":"2024","unstructured":"Ruotsalo T, M\u00e4kel\u00e4 K, Spap\u00e9 M (2024) Crowdsourcing affective annotations via fNIRS-BCI. IEEE Trans Affect Comput 15(1):297\u2013308. https:\/\/doi.org\/10.1109\/TAFFC.2023.3273916","journal-title":"IEEE Trans Affect Comput"},{"issue":"6","key":"1476_CR38","doi-asserted-by":"publisher","first-page":"1161","DOI":"10.1037\/h0077714","volume":"39","author":"JA Russell","year":"1980","unstructured":"Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161\u20131178. 
https:\/\/doi.org\/10.1037\/h0077714","journal-title":"J Pers Soc Psychol"},{"issue":"1","key":"1476_CR39","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1109\/MSMC.2021.3103498","volume":"8","author":"SM Salaken","year":"2022","unstructured":"Salaken SM, Hettiarachchi I, Munia AA, Hasan MM, Khosravi A, Mohamed S, Rahman A (2022) Predicting cognitive load of an individual with knowledge gained from others: improvements in performance using crowdsourcing. IEEE Syst Man Cybern Mag 8(1):4\u201315. https:\/\/doi.org\/10.1109\/MSMC.2021.3103498","journal-title":"IEEE Syst Man Cybern Mag"},{"key":"1476_CR40","doi-asserted-by":"publisher","unstructured":"Sheng VS, Zhang J (2019) Machine learning with crowdsourcing: a brief summary of the past research and future directions. In: Proceedings of the AAAI Conference on artificial intelligence 33(01):9837\u20139843. https:\/\/doi.org\/10.1609\/aaai.v33i01.33019837","DOI":"10.1609\/aaai.v33i01.33019837"},{"key":"1476_CR41","doi-asserted-by":"publisher","first-page":"917","DOI":"10.1016\/j.cogsys.2018.09.019","volume":"52","author":"A Singhal","year":"2018","unstructured":"Singhal A, Kumar P, Saini R, Roy PP, Dogra DP, Kim B-G (2018) Summarization of videos by analyzing affective state of the user through crowdsource. Cogn Syst Res 52:917\u2013930. https:\/\/doi.org\/10.1016\/j.cogsys.2018.09.019","journal-title":"Cogn Syst Res"},{"key":"1476_CR42","doi-asserted-by":"publisher","unstructured":"Wallace S, Cai T, Le B, Leiva LA (2022) Debiased label aggregation for subjective crowdsourcing tasks. In: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, New York, NY, USA. Association for Computing Machinery. 
https:\/\/doi.org\/10.1145\/3491101.3519614","DOI":"10.1145\/3491101.3519614"},{"key":"1476_CR43","doi-asserted-by":"publisher","first-page":"19","DOI":"10.1016\/j.inffus.2022.03.009","volume":"83\u201384","author":"Y Wang","year":"2022","unstructured":"Wang Y, Song W, Tao W, Liotta A, Yang D, Li X, Gao S, Sun Y, Ge W, Zhang W, Zhang W (2022) A systematic review on affective computing: emotion models, databases, and recent advances. Inform Fusion 83\u201384:19\u201352. https:\/\/doi.org\/10.1016\/j.inffus.2022.03.009","journal-title":"Inform Fusion"},{"key":"1476_CR44","doi-asserted-by":"publisher","first-page":"1952","DOI":"10.1109\/TNSRE.2023.3263570","volume":"31","author":"X Wang","year":"2023","unstructured":"Wang X, Ma Y, Cammon J, Fang F, Gao Y, Zhang Y (2023) Self-supervised EEG emotion recognition models based on CNN. IEEE Trans Neural Syst Rehabil Eng 31:1952\u20131962. https:\/\/doi.org\/10.1109\/TNSRE.2023.3263570","journal-title":"IEEE Trans Neural Syst Rehabil Eng"},{"issue":"5","key":"1476_CR45","doi-asserted-by":"publisher","first-page":"1363","DOI":"10.1007\/s12559-021-09936-4","volume":"13","author":"P Washington","year":"2021","unstructured":"Washington P, Kalantarian H, Kent J, Husic A, Kline A, Leblanc \u00c9, Hou C, Mutlu C, Dunlap K, Penev Y, Stockham NT, Chrisman BS, Paskov KM, Jung J-Y, Voss C, Haber N, Wall DP (2021) Training affective computer vision models by crowdsourcing soft-target labels. Cogn Comput 13(5):1363\u20131373. https:\/\/doi.org\/10.1007\/s12559-021-09936-4","journal-title":"Cogn Comput"},{"key":"1476_CR46","unstructured":"Weng W, Gu Y, Guo S, Ma Y, Yang Z, Liu Y, Chen Y (2024) Self-supervised learning for electroencephalogram: a systematic survey. 
arXiv:2401.05446"},{"issue":"8","key":"1476_CR47","doi-asserted-by":"publisher","DOI":"10.1016\/j.heliyon.2023.e18433","volume":"9","author":"NS Williams","year":"2023","unstructured":"Williams NS, King W, Mackellar G, Randeniya R, McCormick A, Badcock NA (2023) Crowdsourced EEG experiments: a proof of concept for remote EEG acquisition using EmotivPRO Builder and EmotivLABS. Heliyon 9(8):e18433. https:\/\/doi.org\/10.1016\/j.heliyon.2023.e18433","journal-title":"Heliyon"},{"key":"1476_CR48","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2023.3318321","author":"Y Xu","year":"2023","unstructured":"Xu Y, Du Y, Li L, Lai H, Zou J, Zhou T, Xiao L, Liu L, Ma P (2023) AMDET: attention based multiple dimensions EEG transformer for emotion recognition. IEEE Trans Affect Comput. https:\/\/doi.org\/10.1109\/TAFFC.2023.3318321","journal-title":"IEEE Trans Affect Comput"},{"issue":"2","key":"1476_CR49","doi-asserted-by":"publisher","first-page":"1082","DOI":"10.1109\/TAFFC.2021.3100868","volume":"14","author":"K Yang","year":"2023","unstructured":"Yang K, Wang C, Yue G, Sarsenbayeva Z, Tag B, Dingler T, Wadley G, Goncalves J (2023) Behavioral and physiological signals-based deep multimodal approach for mobile emotion recognition. IEEE Trans Affect Comput 14(2):1082\u20131097. https:\/\/doi.org\/10.1109\/TAFFC.2021.3100868","journal-title":"IEEE Trans Affect Comput"},{"key":"1476_CR50","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2024.108011","volume":"133","author":"L Yang","year":"2024","unstructured":"Yang L, Wang Y, Ouyang R, Niu X, Yang X, Zheng C (2024) Electroencephalogram-based emotion recognition using factorization temporal separable convolution network. Eng Appl Artif Intell 133:108011. 
https:\/\/doi.org\/10.1016\/j.engappai.2024.108011","journal-title":"Eng Appl Artif Intell"},{"key":"1476_CR51","doi-asserted-by":"publisher","unstructured":"Yang J, Sun M, Sun X (2017) Learning visual sentiment distributions via augmented conditional probability neural network. In: Proceedings of the AAAI Conference on Artificial Intelligence, AAAI\u201917, pages 224\u2013230. AAAI Press. https:\/\/doi.org\/10.1609\/aaai.v31i1.10485","DOI":"10.1609\/aaai.v31i1.10485"},{"issue":"2","key":"1476_CR52","doi-asserted-by":"publisher","first-page":"226","DOI":"10.3390\/j2020016","volume":"2","author":"C Yuan","year":"2019","unstructured":"Yuan C, Yang H (2019) Research on K-Value selection method of K-Means clustering algorithm. J 2(2):226\u2013235. https:\/\/doi.org\/10.3390\/j2020016","journal-title":"J"},{"issue":"2","key":"1476_CR53","doi-asserted-by":"publisher","first-page":"107","DOI":"10.1002\/jdn.10166","volume":"82","author":"M Zabcikova","year":"2022","unstructured":"Zabcikova M, Koudelkova Z, Jasek R, Navarro JJL (2022) Recent advances and current trends in brain-computer interface research and their applications. Int J Dev Neurosci 82(2):107\u2013123. https:\/\/doi.org\/10.1002\/jdn.10166","journal-title":"Int J Dev Neurosci"},{"key":"1476_CR54","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.106775","volume":"216","author":"C Zhang","year":"2021","unstructured":"Zhang C, Xie Y, Bai H, Yu B, Li W, Gao Y (2021) A survey on federated learning. Knowl-Based Syst 216:106775. https:\/\/doi.org\/10.1016\/j.knosys.2021.106775","journal-title":"Knowl-Based Syst"},{"key":"1476_CR55","doi-asserted-by":"publisher","unstructured":"Zhang S, He Z, Ye Z, Sun P, Ai Q, Zhang M, Liu Y (2024) EEG-SVRec: an EEG dataset with user multidimensional affective engagement labels in short video recommendation. In: International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 698\u2013708, New York, NY, USA. 
Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3626772.3657890","DOI":"10.1145\/3626772.3657890"},{"key":"1476_CR56","doi-asserted-by":"publisher","first-page":"6729","DOI":"10.1109\/TPAMI.2021.3094362","volume":"44","author":"S Zhao","year":"2022","unstructured":"Zhao S, Yao X, Yang J, Jia G, Ding G, Chua T-S, Schuller BW, Keutzer K (2022) Affective image content analysis: Two decades review and new perspectives. IEEE Trans Pattern Anal Mach Intell 44:6729\u20136751. https:\/\/doi.org\/10.1109\/TPAMI.2021.3094362","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"3","key":"1476_CR57","doi-asserted-by":"publisher","first-page":"162","DOI":"10.1109\/TAMD.2015.2431497","volume":"7","author":"W-L Zheng","year":"2015","unstructured":"Zheng W-L, Bao-Liang L (2015) Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans Auton Ment Dev 7(3):162\u2013175. https:\/\/doi.org\/10.1109\/TAMD.2015.2431497","journal-title":"IEEE Trans Auton Ment Dev"},{"issue":"3","key":"1476_CR58","doi-asserted-by":"publisher","first-page":"417","DOI":"10.1109\/TAFFC.2017.2712143","volume":"10","author":"W-L Zheng","year":"2019","unstructured":"Zheng W-L, Zhu J-Y, Bao-Liang L (2019) Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans Affect Comput 10(3):417\u2013429. 
https:\/\/doi.org\/10.1109\/TAFFC.2017.2712143","journal-title":"IEEE Trans Affect Comput"}],"container-title":["Pattern Analysis and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10044-025-01476-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10044-025-01476-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10044-025-01476-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,4]],"date-time":"2025-09-04T07:15:57Z","timestamp":1756970157000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10044-025-01476-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,23]]},"references-count":58,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["1476"],"URL":"https:\/\/doi.org\/10.1007\/s10044-025-01476-z","relation":{},"ISSN":["1433-7541","1433-755X"],"issn-type":[{"type":"print","value":"1433-7541"},{"type":"electronic","value":"1433-755X"}],"subject":[],"published":{"date-parts":[[2025,7,23]]},"assertion":[{"value":"10 January 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 July 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing 
interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"Not applicable.","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to publish"}}],"article-number":"146"}}