{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2022,6,2]],"date-time":"2022-06-02T00:40:14Z","timestamp":1654130414835},"reference-count":31,"publisher":"IGI Global","issue":"2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2012,7,1]]},"abstract":"<p>The recognition and display of synthetic emotions in humanoid robots is a critical attribute for facilitating natural human-robot interaction. The authors utilize an efficient algorithm to estimate the mood in acoustic music, and then use the results of that algorithm to drive movement generation systems to provide motions for the robot that are suitable for the music. This system is evaluated on multiple sets of humanoid robots to determine if the choice of robot platform or number of robots influences the perceived emotional content of the motions. Their tests verify that the authors\u2019 system can accurately identify the emotional content of acoustic music and produce motions that convey a similar emotion to that in the audio. They also determine the perceptual effects of using different sized or different numbers of robots in the motion performances.<\/p>","DOI":"10.4018\/jse.2012070104","type":"journal-article","created":{"date-parts":[[2013,2,5]],"date-time":"2013-02-05T22:26:21Z","timestamp":1360103181000},"page":"68-83","source":"Crossref","is-referenced-by-count":4,"title":["Synthetic Emotions for Humanoids"],"prefix":"10.4018","volume":"3","author":[{"given":"David K.","family":"Grunberg","sequence":"first","affiliation":[{"name":"Drexel University, USA"}]},{"given":"Alyssa M.","family":"Batula","sequence":"additional","affiliation":[{"name":"Drexel University, USA"}]},{"given":"Erik M.","family":"Schmidt","sequence":"additional","affiliation":[{"name":"Drexel University, USA"}]},{"given":"Youngmoo E.","family":"Kim","sequence":"additional","affiliation":[{"name":"Drexel University, USA"}]}],"member":"2432","reference":[{"key":"jse.2012070104-0","doi-asserted-by":"publisher","DOI":"10.1037\/0012-1649.34.5.1007"},{"key":"jse.2012070104-1","doi-asserted-by":"publisher","DOI":"10.1016\/S1071-5819(03)00050-8"},{"key":"jse.2012070104-2","doi-asserted-by":"crossref","unstructured":"Clay, A., Couture, N., & Nigay, L. (2009). Towards an architecture model for emotion recognition in interactive systems: Application to a ballet dance show. In Proceedings of the World Conference on Innovative VR (pp. 19-24).","DOI":"10.1115\/WINVR2009-704"},{"key":"jse.2012070104-3","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2006.885257"},{"key":"jse.2012070104-4","doi-asserted-by":"publisher","DOI":"10.1109\/TASSP.1980.1163420"},{"issue":"2","key":"jse.2012070104-5","first-page":"33","article-title":"Development of an autonomous dancing robot.","volume":"3","author":"D.Grunberg","year":"2010","journal-title":"International Journal of Hybrid Information Technology"},{"key":"jse.2012070104-6","unstructured":"Grunberg, D. K., Batula, A., Schmidt, E. M., & Kim, Y. (2012). Emotion recognition and affective gesturing in response to music. In Proceedings of the International Conference on Intelligent Robots and Systems."},{"key":"jse.2012070104-7","doi-asserted-by":"crossref","unstructured":"Grunberg, D. K., Lofaro, D. M., Oh, P. Y., & Kim, Y. E. (2011). Robot audition and beat identification in noisy environments. In Proceedings of the International Conference on Intelligent Robots and Systems (pp. 
2916-2921).","DOI":"10.1109\/IROS.2011.6094987"},{"key":"jse.2012070104-8","doi-asserted-by":"crossref","unstructured":"Ince, G., Nakadai, K., Rodemann, T., Tsujino, H., & Imura, J.-I. (2010). Robust ego noise suppression of a robot. In N. Garc\u00eda-Pedrajas, F. Herrera, C. Fyfe, J. Ben\u00edtez, & M. Ali (Eds.), Proceedings of the 23rd International Conference on Trends in Applied Intelligent Systems (LNCS 6096, pp. 62-71).","DOI":"10.1007\/978-3-642-13022-9_7"},{"key":"jse.2012070104-9","doi-asserted-by":"crossref","unstructured":"Jiang, D.-N., Lu, L., Zhang, H.-J., Tao, J.-H., & Cai, L.-H. (2002). Music type classification by spectral contrast feature. In Proceedings of the IEEE International Conference on Multimedia and Expo (Vol. 1, pp. 113-116).","DOI":"10.1109\/ICME.2002.1035731"},{"key":"jse.2012070104-10","doi-asserted-by":"crossref","unstructured":"Kim, Y. E., Batula, A. M., Grunberg, D., Lofaro, D. M., Oh, J., & Oh, P. Y. (2010). Developing humanoids for musical interaction. In Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (pp. 36-43).","DOI":"10.1109\/CTS.2011.5928689"},{"key":"jse.2012070104-11","unstructured":"Kim, Y. E., Schmidt, E. M., & Emelle, L. (2009). MoodSwings: A collaborative game for music mood label collection. In Proceedings of the 10th International Society for Music Information Retrieval (pp. 231-236)."},{"key":"jse.2012070104-12","doi-asserted-by":"publisher","DOI":"10.1109\/TSA.2005.854090"},{"key":"jse.2012070104-13","doi-asserted-by":"publisher","DOI":"10.1016\/j.robot.2010.08.006"},{"key":"jse.2012070104-14","doi-asserted-by":"crossref","unstructured":"Michalowski, M., Sabanovic, S., & Kozima, H. (2007). A dancing robot for rhythmic social interaction. In Proceedings of the 2nd Annual Conference on Human-Robot Interaction (pp. 89-96).","DOI":"10.1145\/1228716.1228729"},{"key":"jse.2012070104-15","unstructured":"Murata, K., Nakadai, K., Yoshii, K., Takeda, R., Torii, T., & Okuno, H. G. \u2026Tsujino, H. (2008). A robot singer with music recognition based on real-time beat tracking. In Proceedings of the 9th International Conference on Music Information Retrieval (pp. 199-204)."},{"key":"jse.2012070104-16","doi-asserted-by":"publisher","DOI":"10.1177\/0278364907079430"},{"issue":"1","key":"jse.2012070104-17","doi-asserted-by":"crossref","first-page":"27","DOI":"10.20965\/jrm.2002.p0027","article-title":"Analysis of impression of robot bodily expression.","volume":"14","author":"T.Nakata","year":"2002","journal-title":"Journal of Robotics and Mechatronics"},{"key":"jse.2012070104-18","doi-asserted-by":"publisher","DOI":"10.1163\/156855307781503781"},{"key":"jse.2012070104-19","doi-asserted-by":"publisher","DOI":"10.1109\/TASL.2010.2098869"},{"key":"jse.2012070104-20","doi-asserted-by":"publisher","DOI":"10.1121\/1.421129"},{"key":"jse.2012070104-21","doi-asserted-by":"crossref","unstructured":"Schmidt, E. M., & Kim, Y. E. (2010). Prediction of time-varying musical mood distributions from audio. In Proceedings of the International Conference on Music Information Retrieval (pp. 465-470).","DOI":"10.1109\/ICMLA.2010.101"},{"key":"jse.2012070104-22","unstructured":"Schmidt, E. M., & Kim, Y. E. (2011a). Modeling musical emotion dynamics with conditional random fields. In Proceedings of the International Society for Music Information Retrieval (pp. 777-782)."},{"key":"jse.2012070104-23","doi-asserted-by":"crossref","unstructured":"Schmidt, E. M., & Kim, Y. E. (2011b). 
Learning emotion-based acoustic features with deep belief networks. In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (pp. 65-68).","DOI":"10.1109\/ASPAA.2011.6082328"},{"key":"jse.2012070104-24","doi-asserted-by":"crossref","unstructured":"Schmidt, E. M., Turnbull, D., & Kim, Y. E. (2010). Feature selection for content-based, time-varying musical emotion regression. In Proceedings of the ACM International Conference on Music Information Retrieval (pp. 267-273).","DOI":"10.1145\/1743384.1743431"},{"key":"jse.2012070104-25","doi-asserted-by":"publisher","DOI":"10.2197\/ipsjtrans.1.80"},{"key":"jse.2012070104-26","unstructured":"Speck, J. A., Schmidt, E. M., Morton, B. G., & Kim, Y. E. (2011). A comparative study of collaborative vs. traditional musical mood annotation. In Proceedings of the International Society for Music Information Retrieval (pp. 549-554)."},{"key":"jse.2012070104-27","author":"R. E.Thayer","year":"1989","journal-title":"The biopsychology of mood and arousal"},{"key":"jse.2012070104-28","author":"R.von Laban","year":"1956","journal-title":"Principles of dance and movement notation"},{"key":"jse.2012070104-29","doi-asserted-by":"crossref","unstructured":"Weinberg, G., & Driscoll, S. (2006). Robot-human interaction with an anthropomorphic percussionist. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1229-1232).","DOI":"10.1145\/1124772.1124957"},{"key":"jse.2012070104-30","doi-asserted-by":"crossref","unstructured":"Yoshii, K., Nakadai, K., Torii, T., Hasegawa, Y., Tsujino, H., & Komatani, K. \u2026Okuno, H. G. (2007). A biped robot that keeps steps in time with musical beats while listening to music with its own ears. In Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (pp. 1743-1750).","DOI":"10.1109\/IROS.2007.4399244"}],"container-title":["International Journal of Synthetic Emotions"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.igi-global.com\/viewtitle.aspx?TitleId=70418","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,6,1]],"date-time":"2022-06-01T23:59:02Z","timestamp":1654127942000},"score":1,"resource":{"primary":{"URL":"https:\/\/services.igi-global.com\/resolvedoi\/resolve.aspx?doi=10.4018\/jse.2012070104"}},"subtitle":["Perceptual Effects of Size and Number of Robot Platforms"],"short-title":[],"issued":{"date-parts":[[2012,7,1]]},"references-count":31,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2012,7]]}},"URL":"https:\/\/doi.org\/10.4018\/jse.2012070104","relation":{},"ISSN":["1947-9093","1947-9107"],"issn-type":[{"value":"1947-9093","type":"print"},{"value":"1947-9107","type":"electronic"}],"subject":[],"published":{"date-parts":[[2012,7,1]]}}}
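The record above is a Crossref "work" message (note the "message-type":"work" and "message" envelope). As a usage illustration, the following is a minimal Python sketch that retrieves and unpacks the same record from the public Crossref REST API works route; every field accessed below appears verbatim in the record above, but treat this as a sketch under those assumptions rather than a canonical client.

import json
import urllib.request

# DOI taken from the record's "DOI" field.
DOI = "10.4018/jse.2012070104"

# The Crossref works route serves the same {"status": ..., "message": {...}}
# envelope shown above.
with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    record = json.load(resp)

msg = record["message"]

# Assemble a human-readable citation from the metadata fields.
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in msg["author"])
title = f'{msg["title"][0]}: {msg["subtitle"][0]}'
print(f'{authors}. "{title}." {msg["container-title"][0]} '
      f'{msg["volume"]}({msg["issue"]}), pp. {msg["page"]} '
      f'({msg["issued"]["date-parts"][0][0]}).')
print(f'https://doi.org/{msg["DOI"]} | cited by {msg["is-referenced-by-count"]}, '
      f'{msg["references-count"]} references')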