{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T06:22:34Z","timestamp":1775715754724,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":26,"publisher":"ACM","license":[{"start":{"date-parts":[[2013,10,22]],"date-time":"2013-10-22T00:00:00Z","timestamp":1382400000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2013,10,22]]},"DOI":"10.1145\/2506364.2506365","type":"proceedings-article","created":{"date-parts":[[2013,10,17]],"date-time":"2013-10-17T12:23:54Z","timestamp":1382012634000},"page":"1-6","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":128,"title":["1000 songs for emotional analysis of music"],"prefix":"10.1145","author":[{"given":"Mohammad","family":"Soleymani","sequence":"first","affiliation":[{"name":"Imperial College London, London, United Kingdom"}]},{"given":"Micheal N.","family":"Caro","sequence":"additional","affiliation":[{"name":"Drexel University, Philadelphia, PA, USA"}]},{"given":"Erik M.","family":"Schmidt","sequence":"additional","affiliation":[{"name":"Drexel University, Philadelphia, PA, USA"}]},{"given":"Cheng-Ya","family":"Sha","sequence":"additional","affiliation":[{"name":"National Taiwan University, Taipei, Taiwan Roc"}]},{"given":"Yi-Hsuan","family":"Yang","sequence":"additional","affiliation":[{"name":"Academia Sinica, Taipei, Taiwan Roc"}]}],"member":"320","published-online":{"date-parts":[[2013,10,22]]},"reference":[{"key":"e_1_3_2_1_1_1","first-page":"492","volume-title":"Computer Music Modelling & Retrieval","author":"Barthet M.","year":"2012","unstructured":"M. Barthet , G. Fazekas , and M. Sandler . 
Multidisciplinary perspectives on music emotion recognition: Implications for content and context-based models. In Int'l Symp. Computer Music Modelling & Retrieval, pages 492--507, 2012."},
{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/0005-7916(94)90063-9"},
{"key":"e_1_3_2_1_3_1","volume-title":"feeltrace': an instrument for recording perceived emotion in real time","author":"Cowie R.","year":"2000","unstructured":"R. Cowie, E. Douglas-Cowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schr\u00f6der. 'feeltrace': an instrument for recording perceived emotion in real time, 2000."},
{"key":"e_1_3_2_1_4_1","first-page":"45","volume-title":"Basic Emotions","author":"Ekman P.","year":"2005","unstructured":"P. Ekman. Basic Emotions, pages 45--60. John Wiley & Sons, Ltd, 2005."},
{"key":"e_1_3_2_1_5_1","unstructured":"D. P. W. Ellis. PLP and RASTA (and MFCC and inversion) in Matlab, 2005. Online web resource."},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.2307\/1415746"},
{"key":"e_1_3_2_1_7_1","first-page":"462","volume-title":"Proc. Int. Soc. Music Info. Retrieval Conf.","author":"Hu X.","year":"2008","unstructured":"X. Hu, J. S. Downie, C. Laurier, M. Bay, and A. F. Ehmann. The 2007 MIREX audio mood classification task: Lessons learned. In Proc. Int. Soc. Music Info. Retrieval Conf., pages 462--467, 2008."},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1080\/09298215.2010.513733"},
{"key":"e_1_3_2_1_9_1","first-page":"231","volume-title":"Proc. Int. Soc. Music Info. Retrieval Conf.","author":"Kim Y. E.","year":"2008","unstructured":"Y. E. Kim, E. Schmidt, and L. Emelle. Moodswings: A collaborative game for music mood label collection. In Proc. Int. Soc. Music Info. Retrieval Conf., pages 231--236, 2008."},
{"key":"e_1_3_2_1_10_1","volume-title":"Proc. Int. Soc. Music Info. Retrieval Conf.","author":"Kim Y. E.","year":"2010","unstructured":"Y. E. Kim, E. M. Schmidt, R. Migneco, B. G. Morton, P. Richardson, J. Scott, J. Speck, and D. Turnbull. Music emotion recognition: A state of the art review. In Proc. Int. Soc. Music Info. Retrieval Conf., 2010."},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1357054.1357127"},
{"key":"e_1_3_2_1_12_1","volume-title":"MIREX task on Audio Mood Classification","author":"Laurier C.","year":"2007","unstructured":"C. Laurier and P. Herrera. Audio music mood classification using support vector machine. In MIREX task on Audio Mood Classification, 2007."},
{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1348\/000712610X506831"},
{"key":"e_1_3_2_1_14_1","first-page":"21","volume-title":"Computational models of emotion","author":"Marsella S.","year":"2010","unstructured":"S. Marsella, J. Gratch, and P. Petta. Computational models of emotion, chapter 1.2, pages 21--41. Oxford University Press, Oxford, UK, 2010."},
{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/T-AFFC.2011.20"},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/1837885.1837899"},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1037\/a0016063"},
{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1037\/h0077714"},
{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1177\/0539018405058216"},
{"key":"e_1_3_2_1_20_1","volume-title":"Proc. Int. Soc. Music Information Retrieval Conf.","author":"Schmidt E. M.","year":"2010","unstructured":"E. M. Schmidt and Y. E. Kim. Prediction of time-varying musical mood distributions from audio. In Proc. Int. Soc. Music Information Retrieval Conf., August 2010."},
{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/1743384.1743431"},
{"key":"e_1_3_2_1_22_1","volume-title":"Workshop on Crowdsourcing for Search Evaluation, SIGIR 2010","author":"Soleymani M.","year":"2010","unstructured":"M. Soleymani and M. Larson. Crowdsourcing for affective annotation of video: Development of a viewer-reported boredom corpus. In Workshop on Crowdsourcing for Search Evaluation, SIGIR 2010, Geneva, Switzerland, 2010."},
{"key":"e_1_3_2_1_23_1","volume-title":"Proc. Int. Soc. Music Info. Retrieval Conf.","author":"Speck J. A.","year":"2011","unstructured":"J. A. Speck, E. M. Schmidt, B. G. Morton, and Y. E. Kim. A comparative study of collaborative vs. traditional musical mood annotation. In Proc. Int. Soc. Music Info. Retrieval Conf., 2011."},
{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2393347.2393367"},
{"key":"e_1_3_2_1_25_1","doi-asserted-by":"crossref","DOI":"10.1201\/b10731","volume-title":"Music Emotion Recognition","author":"Yang Y.-H.","year":"2011","unstructured":"Y.-H. Yang and H. H. Chen. Music Emotion Recognition. CRC Press, Boca Raton, Florida, 2011."},
{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2010.2064164"}],
"event":{"name":"MM '13: ACM Multimedia Conference","location":"Barcelona Spain","acronym":"MM '13","sponsor":["SIGMM ACM Special Interest Group on Multimedia"]},"container-title":["Proceedings of the 2nd ACM international workshop on Crowdsourcing for multimedia"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2506364.2506365","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/2506364.2506365","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T07:28:44Z","timestamp":1750231724000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2506364.2506365"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2013,10,22]]},"references-count":26,"alternative-id":["10.1145\/2506364.2506365","10.1145\/2506364"],"URL":"https:\/\/doi.org\/10.1145\/2506364.2506365","relation":{},"subject":[],"published":{"date-parts":[[2013,10,22]]},"assertion":[{"value":"2013-10-22","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}