{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T04:30:06Z","timestamp":1772253006253,"version":"3.50.1"},"reference-count":87,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2022,12,29]],"date-time":"2022-12-29T00:00:00Z","timestamp":1672272000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001459","name":"Singapore Ministry of Education","doi-asserted-by":"publisher","award":["MOE2018-T2-2-161"],"award-info":[{"award-number":["MOE2018-T2-2-161"]}],"id":[{"id":"10.13039\/501100001459","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001459","name":"Singapore Ministry of Education","doi-asserted-by":"publisher","award":["A20G8b0102"],"award-info":[{"award-number":["A20G8b0102"]}],"id":[{"id":"10.13039\/501100001459","id-type":"DOI","asserted-by":"publisher"}]},{"name":"RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund","award":["MOE2018-T2-2-161"],"award-info":[{"award-number":["MOE2018-T2-2-161"]}]},{"name":"RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund","award":["A20G8b0102"],"award-info":[{"award-number":["A20G8b0102"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Music is capable of conveying many emotions. The level and type of emotion of the music perceived by a listener, however, is highly subjective. In this study, we present the Music Emotion Recognition with Profile information dataset (MERP). This database was collected through Amazon Mechanical Turk (MTurk) and features dynamical valence and arousal ratings of 54 selected full-length songs. The dataset contains music features, as well as user profile information of the annotators. 
The songs were selected from the Free Music Archive using an innovative method (a Triplet Neural Network with the OpenSmile toolkit) to identify 50 songs with the most distinctive emotions. Specifically, the songs were chosen to fully cover the four quadrants of the valence-arousal space. Four additional songs were selected from the DEAM dataset to act as a benchmark in this study and to filter out low-quality ratings. A total of 452 participants annotated the dataset; 277 remained after thorough data cleaning. Their demographic information, listening preferences, and musical background were recorded. We offer an extensive analysis of the resulting dataset, together with baseline emotion prediction models, based on a fully connected network and an LSTM, for our newly proposed MERP dataset.<\/jats:p>","DOI":"10.3390\/s23010382","type":"journal-article","created":{"date-parts":[[2022,12,30]],"date-time":"2022-12-30T03:19:46Z","timestamp":1672370386000},"page":"382","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":21,"title":["MERP: A Music Dataset with Emotion Ratings and Raters\u2019 Profile Information"],"prefix":"10.3390","volume":"23","author":[{"given":"En Yan","family":"Koh","sequence":"first","affiliation":[{"name":"Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3213-8242","authenticated-orcid":false,"given":"Kin Wai","family":"Cheuk","sequence":"additional","affiliation":[{"name":"Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore"}]},{"given":"Kwan Yee","family":"Heung","sequence":"additional","affiliation":[{"name":"Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372, 
Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7260-2447","authenticated-orcid":false,"given":"Kat R.","family":"Agres","sequence":"additional","affiliation":[{"name":"Yong Siew Toh Conservatory of Music, National University Singapore, Singapore 117376, Singapore"},{"name":"Centre for Music and Health, National University Singapore, Singapore 117376, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8607-1640","authenticated-orcid":false,"given":"Dorien","family":"Herremans","sequence":"additional","affiliation":[{"name":"Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,29]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"913","DOI":"10.1007\/s00521-019-04166-0","article-title":"The emergence of deep learning: New opportunities for music and audio technologies","volume":"32","author":"Herremans","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Yang, Y.H., Su, Y.F., Lin, Y.C., and Chen, H.H. (2007, January 28). Music emotion recognition: The role of individuality. Proceedings of the International Workshop on Human-Centered Multimedia, Augsburg, Bavaria, Germany.","DOI":"10.1145\/1290128.1290132"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Aljanaki, A., Yang, Y.H., and Soleymani, M. (2017). Developing a benchmark for emotional analysis of music. PLoS ONE, 12.","DOI":"10.1371\/journal.pone.0173392"},{"key":"ref_4","unstructured":"Schmidt, E.M., and Kim, Y.E. (2011, January 24\u201328). Modeling Musical Emotion Dynamics with Conditional Random Fields. Proceedings of the ISMIR, Miami, FL, USA."},{"key":"ref_5","unstructured":"Chua, P., Makris, D., Herremans, D., Roig, G., and Agres, K. (2022). 
Predicting emotion from music videos: Exploring the relative contribution of visual and auditory information to affective responses. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1161","DOI":"10.1037\/h0077714","article-title":"A circumplex model of affect","volume":"39","author":"Russell","year":"1980","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1073\/pnas.9.9.323","article-title":"Measurements on the expression of emotion in music","volume":"9","author":"Seashore","year":"1923","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_8","unstructured":"Meyer, L. (1956). Emotion and Meaning in Music, University of Chicago Press."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Juslin, P.N. (2019). Musical Emotions Explained: Unlocking the Secrets of Musical Affect, Oxford University Press.","DOI":"10.1093\/oso\/9780198753421.002.0008"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"215","DOI":"10.3389\/fpsyg.2018.00215","article-title":"Music communicates affects, not basic emotions\u2014A constructionist account of attribution of emotional meanings to music","volume":"9","author":"Eerola","year":"2018","journal-title":"Front. Psychol."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"989","DOI":"10.1177\/0305735620917730","article-title":"Emotions of music listening in Finland and in india: Comparison of an individualistic and a collectivistic culture","volume":"49","author":"Saarikallio","year":"2021","journal-title":"Psychol. Music."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Panda, R., Malheiro, R.M., and Paiva, R.P. (2020). Audio features for music emotion recognition: A survey. IEEE Trans. Affect. 
Comput.","DOI":"10.1109\/TAFFC.2018.2820691"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1622","DOI":"10.2991\/ijcis.d.191216.001","article-title":"Music emotion recognition by using chroma spectrogram and deep visual features","volume":"12","author":"Er","year":"2019","journal-title":"Int. J. Comput. Intell. Syst."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"106","DOI":"10.1109\/MSP.2021.3106232","article-title":"Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications","volume":"38","author":"Cano","year":"2021","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Herremans, D., Yang, S., Chuan, C.H., Barthet, M., and Chew, E. (2017, January 23\u201326). Imma-emo: A multimodal interface for visualising score-and audio-synchronised emotion annotations. Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences, London, UK.","DOI":"10.1145\/3123514.3123545"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Turnbull, D., Barrington, L., Torres, D., and Lanckriet, G. (2007, January 23\u201327). Towards musical query-by-semantic-description using the cal500 data set. Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands.","DOI":"10.1145\/1277741.1277817"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1016\/j.ipm.2015.03.004","article-title":"Studying emotion induced by music through a crowdsourcing game","volume":"52","author":"Aljanaki","year":"2016","journal-title":"Inf. Process. 
Manag."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"494","DOI":"10.1037\/1528-3542.8.4.494","article-title":"Emotions evoked by the sound of music: Characterization, classification, and measurement","volume":"8","author":"Zentner","year":"2008","journal-title":"Emotion"},{"key":"ref_19","unstructured":"Barthet, M., Fazekas, G., and Sandler, M. (2012). Music emotion recognition: From content-to context-based models. International Symposium on Computer Music Modeling and Retrieval, Proceedings of the 9th International Symposium CMMR 2012, London, UK, 19\u201322 June 2012, Springer."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1802","DOI":"10.1109\/TASL.2010.2101596","article-title":"Generalizability and simplicity as criteria in feature selection: Application to mood classification in music","volume":"19","author":"Saari","year":"2010","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"key":"ref_21","unstructured":"Trohidis, K., Tsoumakas, G., Kalliris, G., and Vlahavas, I.P. (2008, January 14\u201318). Multi-label classification of music into emotions. Proceedings of the ISMIR, Philadelphia, PA, USA."},{"key":"ref_22","unstructured":"Hu, X., and Downie, J.S. (2007, January 23\u201327). Exploring Mood Metadata: Relationships with Genre, Artist and Usage Metadata. Proceedings of the ISMIR, Vienna, Austria."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"44617","DOI":"10.1109\/ACCESS.2022.3169744","article-title":"Symbolic music generation conditioned on continuous-valued emotions","volume":"10","author":"Sulun","year":"2022","journal-title":"IEEE Access"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ferreira, L.N., Mou, L., Whitehead, J., and Lelis, L.H. (2022). Controlling Perceived Emotion in Symbolic Music Generation with Monte Carlo Tree Search. arXiv.","DOI":"10.1609\/aiide.v18i1.21960"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Makris, D., Agres, K.R., and Herremans, D. 
(2021, January 18\u201322). Generating Lead Sheets with Affect: A Novel Conditional seq2seq Framework. Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China.","DOI":"10.1109\/IJCNN52387.2021.9533474"},{"key":"ref_26","unstructured":"Tan, H.H., and Herremans, D. (2020, January 12\u201315). Music FaderNets: Controllable music generation based on high-level features via low-level feature modelling. Proceedings of the ISMIR, Virtual."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ehrlich, S.K., Agres, K.R., Guan, C., and Cheng, G. (2019). A closed-loop, music-based brain\u2013computer interface for emotion mediation. PLoS ONE, 14.","DOI":"10.1371\/journal.pone.0213516"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"5017","DOI":"10.1007\/s11042-021-11584-7","article-title":"End-to-end music emotion variation detection using iteratively reconstructed deep features","volume":"81","author":"Orjesek","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_29","unstructured":"Bischoff, K., Firan, C.S., Paiu, R., Nejdl, W., Laurier, C., and Sordo, M. (2009, January 26\u201330). Music mood and theme classification\u2014A hybrid approach. Proceedings of the ISMIR, Kobe, Japan."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Thayer, R.E. (1990). The Biopsychology of Mood and Arousal, Oxford University Press.","DOI":"10.1093\/oso\/9780195068276.001.0001"},{"key":"ref_31","unstructured":"Han, B.j., Rho, S., Dannenberg, R.B., and Hwang, E. (2009, January 26\u201330). SMERS: Music Emotion Recognition Using Support Vector Regression. Proceedings of the ISMIR, Kobe, Japan."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1177\/0305735610362821","article-title":"A comparison of the discrete and dimensional models of emotion in music","volume":"39","author":"Eerola","year":"2011","journal-title":"Psychol. 
Music"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"448","DOI":"10.1109\/TASL.2007.911513","article-title":"A regression approach to music emotion recognition","volume":"16","author":"Yang","year":"2008","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sloboda, J.A., and Juslin, P.N. (2001). Psychological perspectives on music and emotion. Music and Emotion: Theory and Research, Oxford University Press.","DOI":"10.1093\/oso\/9780192631886.003.0004"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"149","DOI":"10.1177\/10298649020050S106","article-title":"Emotional states generated by music: An exploratory study of music experts","volume":"5","author":"Scherer","year":"2001","journal-title":"Music. Sci."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"248","DOI":"10.1037\/a0039279","article-title":"Age-related patterns in emotions evoked by music","volume":"9","author":"Pearce","year":"2015","journal-title":"Psychol. Aesthet. Creat. Arts"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"585","DOI":"10.1080\/02699931.2010.502449","article-title":"Emotion recognition in music changes across the adult life span","volume":"25","author":"Lima","year":"2011","journal-title":"Cogn. Emot."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"153","DOI":"10.3389\/fpsyg.2017.00153","article-title":"Perception and modeling of affective qualities of musical instrument sounds across pitch registers","volume":"8","author":"McAdams","year":"2017","journal-title":"Front. Psychol."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"66","DOI":"10.1007\/s00426-020-01467-1","article-title":"Emotion and expertise: How listeners with formal music training use cues to perceive emotion","volume":"86","author":"Battcock","year":"2021","journal-title":"Psychol. 
Res."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1109\/TAFFC.2017.2663421","article-title":"On the interrelation between listener characteristics and the perception of emotions in classical orchestra music","volume":"9","author":"Schedl","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"43","DOI":"10.2307\/40285811","article-title":"A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues","volume":"17","author":"Balkwill","year":"1999","journal-title":"Music Percept."},{"key":"ref_42","unstructured":"Lee, H., Hoeger, F., Schoenwiesner, M., Park, M., and Jacoby, N. (2021). Cross-cultural Mood Perception in Pop Songs and its Alignment with Mood Detection Algorithms. arXiv."},{"key":"ref_43","unstructured":"Kosta, K., Song, Y., Fazekas, G., and Sandler, M.B. (2013, January 4\u20138). A Study of Cultural Dependence of Perceived Mood in Greek Music. Proceedings of the ISMIR, Curitiba, Brazil."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1177\/1029864916637641","article-title":"Cross-cultural anger communication in music: Towards a stereotype theory of emotion in music","volume":"21","author":"Susino","year":"2017","journal-title":"Music. Sci."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"732865","DOI":"10.3389\/fpsyg.2021.732865","article-title":"A Cross-Cultural Analysis of the Influence of Timbre on Affect Perception in Western Classical Music and Chinese Music Traditions","volume":"12","author":"Wang","year":"2021","journal-title":"Front. Psychol."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"116","DOI":"10.1049\/ccs2.12032","article-title":"Cross-cultural analysis of the correlation between musical elements and emotion","volume":"4","author":"Wang","year":"2022","journal-title":"Cogn. Comput. 
Syst."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Chen, Y.W., Yang, Y.H., and Chen, H.H. (2018, January 17\u201320). Cross-Cultural Music Emotion Recognition by Adversarial Discriminative Domain Adaptation. Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA.","DOI":"10.1109\/ICMLA.2018.00076"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Lin, Y.C., Yang, Y.H., Chen, H.H., Liao, I.B., and Ho, Y.C. (July, January 28). Exploiting genre for music emotion classification. Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, New York, NY, USA.","DOI":"10.1109\/ICME.2009.5202572"},{"key":"ref_49","unstructured":"Song, Y., Dixon, S., and Pearce, M.T. (2012, January 8\u201312). Evaluation of musical features for emotion classification. Proceedings of the ISMIR, Porto, Portugal."},{"key":"ref_50","unstructured":"Panda, R., Malheiro, R., Rocha, B., Oliveira, A., and Paiva, R.P. (2013, January 15\u201318). Multi-modal music emotion recognition: A new dataset, methodology and comparative analysis. Proceedings of the International Symposium on Computer Music Multidisciplinary Research, Marseille, France."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/T-AFFC.2011.15","article-title":"Deap: A database for emotion analysis; using physiological signals","volume":"3","author":"Koelstra","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zhang, H., Li, S., Yang, C., and Sun, L. (2018, January 11\u201314). The PMEmo dataset for music emotion recognition. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, Yokohama, Japan.","DOI":"10.1145\/3206025.3206037"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Lee, J.H., and Hu, X. (2012, January 10\u201314). 
Generating ground truth for music mood classification using mechanical turk. Proceedings of the 12th ACM\/IEEE-CS Joint Conference on Digital Libraries, Washington, DC, USA.","DOI":"10.1145\/2232817.2232842"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Soleymani, M., Caro, M.N., Schmidt, E.M., Sha, C.Y., and Yang, Y.H. (2013, January 22). 1000 Songs for Emotional Analysis of Music. Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, CrowdMM \u201913, Barcelona, Spain.","DOI":"10.1145\/2506364.2506365"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Chen, Y.A., Yang, Y.H., Wang, J.C., and Chen, H. (2015, January 19\u201324). The AMG1608 dataset for music emotion recognition. Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia.","DOI":"10.1109\/ICASSP.2015.7178058"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Malik, M., Adavanne, S., Drossos, K., Virtanen, T., Ticha, D., and Jarina, R. (2017). Stacked convolutional and recurrent neural networks for music emotion recognition. arXiv.","DOI":"10.23919\/EUSIPCO.2017.8081505"},{"key":"ref_57","unstructured":"Speck, J.A., Schmidt, E.M., Morton, B.G., and Kim, Y.E. (2011, January 24\u201328). A Comparative Study of Collaborative vs. Traditional Musical Mood Annotation. Proceedings of the ISMIR, Miami, FL, USA."},{"key":"ref_58","unstructured":"Aljanaki, A., Yang, Y.H., and Soleymani, M. (2014, January 16\u201317). Emotion in Music Task at MediaEval 2014. Proceedings of the MediaEval, Catalunya, Spain."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Thao, H.T.P., Balamurali, B., Roig, G., and Herremans, D. (2021). AttendAffectNet\u2013Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention. 
Sensors, 21.","DOI":"10.3390\/s21248356"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"64","DOI":"10.1016\/j.inffus.2022.10.002","article-title":"EmoMV: Affective music-video correspondence learning datasets for classification and retrieval","volume":"91","author":"Thao","year":"2023","journal-title":"Inf. Fusion"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Kittur, A., Chi, E.H., and Suh, B. (2008, January 5\u201310). Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy.","DOI":"10.1145\/1357054.1357127"},{"key":"ref_62","unstructured":"Defferrard, M., Benzi, K., Vandergheynst, P., and Bresson, X. (2017, January 23\u201327). FMA: A Dataset for Music Analysis. Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China."},{"key":"ref_63","doi-asserted-by":"crossref","unstructured":"Cheuk, K.W., Luo, Y.J., Balamurali, B., Roig, G., and Herremans, D. (2020, January 19\u201324). Regression-based music emotion prediction using triplet neural networks. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK.","DOI":"10.1109\/IJCNN48605.2020.9207212"},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Eyben, F., W\u00f6llmer, M., and Schuller, B. (2010, January 25\u201329). Opensmile: The munich versatile and fast open-source audio feature extractor. Proceedings of the 18th ACM international conference on Multimedia, Firenze, Italy.","DOI":"10.1145\/1873951.1874246"},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11704-021-0569-4","article-title":"A survey of music emotion recognition","volume":"16","author":"Han","year":"2022","journal-title":"Front. Comput. Sci."},{"key":"ref_66","unstructured":"Soleymani, M., and Larson, M. (2010, January 19\u201323). 
Crowdsourcing for affective annotation of video: Development of a viewer-reported boredom corpus. Proceedings of the Workshop on Crowdsourcing for Search Evaluation, SIGIR 2010, Geneva, Switzerland."},{"key":"ref_67","doi-asserted-by":"crossref","first-page":"583","DOI":"10.1080\/01621459.1952.10483441","article-title":"Use of ranks in one-criterion variance analysis","volume":"47","author":"Kruskal","year":"1952","journal-title":"J. Am. Stat. Assoc."},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"241","DOI":"10.1080\/00401706.1964.10490181","article-title":"Multiple comparisons using rank sums","volume":"6","author":"Dunn","year":"1964","journal-title":"Technometrics"},{"key":"ref_69","doi-asserted-by":"crossref","first-page":"170","DOI":"10.1136\/bmj.310.6973.170","article-title":"Multiple significance tests: The Bonferroni method","volume":"310","author":"Bland","year":"1995","journal-title":"BMJ"},{"key":"ref_70","first-page":"760","article-title":"Music emotion recognition using convolutional long short term memory deep neural networks","volume":"24","author":"Hizlisoy","year":"2021","journal-title":"Eng. Sci. Technol. Int. J."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Pandeya, Y.R., Bhattarai, B., and Lee, J. (2021). Deep-learning-based multimodal emotion classification for music videos. Sensors, 21.","DOI":"10.3390\/s21144927"},{"key":"ref_72","unstructured":"Delbouys, R., Hennequin, R., Piccoli, F., Royo-Letelier, J., and Moussallam, M. (2018). Music mood detection based on audio and lyrics with deep neural net. arXiv."},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"6749622","DOI":"10.1155\/2022\/5181899","article-title":"A Music Emotion Classification Model Based on the Improved Convolutional Neural Network","volume":"2022","author":"Jia","year":"2022","journal-title":"Comput. Intell. 
Neurosci."},{"key":"ref_74","doi-asserted-by":"crossref","first-page":"571","DOI":"10.1007\/s10772-020-09781-0","article-title":"Development of music emotion classification system using convolution neural network","volume":"24","author":"Chaudhary","year":"2021","journal-title":"Int. J. Speech Technol."},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"012015","DOI":"10.1088\/1742-6596\/1976\/1\/012015","article-title":"Emotion recognition of musical instruments based on convolution long short time memory depth neural network","volume":"1976","author":"Wang","year":"2021","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_76","doi-asserted-by":"crossref","first-page":"765","DOI":"10.1007\/s11042-019-08192-x","article-title":"Recognition of emotion in music based on deep convolutional neural network","volume":"79","author":"Sarkar","year":"2020","journal-title":"Multimed. Tools Appl."},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"355","DOI":"10.11591\/eei.v12i1.4231","article-title":"Multimodal music emotion recognition in Indonesian songs based on CNN-LSTM, XLNet transformers","volume":"12","author":"Sams","year":"2023","journal-title":"Bull. Electr. Eng. Inform."},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Parthasarathy, S., and Sundaram, S. (2021, January 19\u201322). Detecting expressions with multimodal transformers. Proceedings of the 2021 IEEE Spoken Language Technology Workshop (SLT), Shenzhen, China.","DOI":"10.1109\/SLT48900.2021.9383573"},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Alajanki, A., Yang, Y.H., and Soleymani, M. (2016). Benchmarking music emotion recognition systems. PLoS ONE, 12.","DOI":"10.1371\/journal.pone.0173392"},{"key":"ref_80","doi-asserted-by":"crossref","unstructured":"Eyben, F. (2015). Real-Time Speech and Music Classification by Large Audio Feature Space Extraction, Springer.","DOI":"10.1007\/978-3-319-27299-3"},{"key":"ref_81","unstructured":"Kingma, D.P., and Ba, J. 
(2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_82","unstructured":"Thao, H.T.P., Herremans, D., and Roig, G. (November, January 27). Multimodal Deep Models for Predicting Affective Responses Evoked by Movies. Proceedings of the ICCV Workshops, Seoul, Korea."},{"key":"ref_83","doi-asserted-by":"crossref","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long short-term memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Comput."},{"key":"ref_84","doi-asserted-by":"crossref","unstructured":"Liu, H., Fang, Y., and Huang, Q. (2018, January 22\u201323). Music emotion recognition using a variant of recurrent neural network. Proceedings of the 2018 International Conference on Mathematics, Modeling, Simulation and Statistics Application (MMSSA 2018), Shanghai, China.","DOI":"10.2991\/mmssa-18.2019.4"},{"key":"ref_85","unstructured":"Weninger, F., Eyben, F., and Schuller, B. (2013, January 18\u201319). The TUM approach to the MediaEval music emotion task using generic affective audio features. Proceedings of the MediaEval 2013 Workshop, Barcelona, Spain."},{"key":"ref_86","doi-asserted-by":"crossref","first-page":"355","DOI":"10.1080\/09298215.2021.1977336","article-title":"A multi-genre model for music emotion recognition using linear regressors","volume":"50","author":"Griffiths","year":"2021","journal-title":"J. New Music. Res."},{"key":"ref_87","unstructured":"Cumming, J., Ha Lee, J., McFee, B., Schedl, M., Devaney, J., McKay, C., Zagerle, E., and de Reuse, T. (2020). Joyful for you and tender for us: The influence of individual characteristics and language on emotion labeling and classification. 
Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), Montr\u00e9al, QC, Canada, 11\u201316 October 2020, ISMIR."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/1\/382\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:55:51Z","timestamp":1760147751000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/1\/382"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,29]]},"references-count":87,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,1]]}},"alternative-id":["s23010382"],"URL":"https:\/\/doi.org\/10.3390\/s23010382","relation":{"has-preprint":[{"id-type":"doi","id":"10.20944\/preprints202210.0301.v1","asserted-by":"object"}]},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,12,29]]}}}