{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,23]],"date-time":"2025-12-23T18:58:21Z","timestamp":1766516301136,"version":"3.40.2"},"publisher-location":"Singapore","reference-count":43,"publisher":"Springer Nature Singapore","isbn-type":[{"value":"9789819635184","type":"print"},{"value":"9789819635191","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T00:00:00Z","timestamp":1742860800000},"content-version":"vor","delay-in-days":83,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Humor is a nuanced field; thus, prior robot comedy efforts have found the stage a relevant and helpful source of HRI analysis data, particularly when multiple performers can interact with each other. However, hand-animating one robot is already high-intensity, and to our knowledge, no one has sought to scale entertainment robot gesture design via domain-specific automation. Thus, this paper aims to: (1) study the use of head gesture through a video analysis of 20 human standup comedians, (2) algorithmically generate robot head gestures for dueling robot comedy scripts based on linguistic analysis, and (3) explore critical features for robot entertainment editing interfaces, such as replaying a scene from the middle of a script during rehearsals, as automation is intended to enhance speed rather than finesse. 
Human entertainers develop expertise via many hours on the stage, sometimes crashing (failing) or bombing (meeting lackluster response), and other times captivating (success) or \u2018killing it\u2019 (high audience response). The value of effective timing and gesture in a range of bi-directional communication scenarios is well established; thus, this work sought to ease the process of creating new multi-robot comedy performances, leveraging a Portable Robot Comedy stage we had developed and deployed with two Blossom robots at a variety of public festivals. Human comedian annotation results discuss how linguistic context can predict best-match gestures, and identify common expressive uses of gesture during standup comedy: <jats:italic>positive affect<\/jats:italic>, <jats:italic>negative affect<\/jats:italic>, <jats:italic>spatial location<\/jats:italic>, and <jats:italic>audience interaction<\/jats:italic>. The software analyzes word strings within a script to auto-assign gestures that match the above expressive categories. While this work occurred before Large Language Models became easily accessible, the software is relevant to efficiently adding gesture and time to any generated script. 
As such, ongoing work extends these efforts to the higher anthropomorphism Pepper robot platform for LLM-human created guided mindfulness meditations.<\/jats:p>","DOI":"10.1007\/978-981-96-3519-1_24","type":"book-chapter","created":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T04:09:22Z","timestamp":1742875762000},"page":"261-275","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A Semi-automated Multi-robot Comedy Performance System with\u00a0Gesture"],"prefix":"10.1007","author":[{"given":"Janani","family":"Swaminathan","sequence":"first","affiliation":[]},{"given":"Chirag","family":"Jain","sequence":"additional","affiliation":[]},{"given":"Madison","family":"Miller","sequence":"additional","affiliation":[]},{"given":"Heather","family":"Knight","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,25]]},"reference":[{"key":"24_CR1","doi-asserted-by":"crossref","unstructured":"Agnihotri, A., Knight, H.: Persuasive chairbots: a (mostly) robot-recruited experiment. In: 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). pp.\u00a01\u20137. IEEE (2019)","DOI":"10.1109\/RO-MAN46459.2019.8956262"},{"key":"24_CR2","first-page":"129","volume":"2005","author":"J Allwood","year":"2005","unstructured":"Allwood, J., Cerrato, L., Dybkjaer, L., Jokinen, K., Navarretta, C., Paggio, P.: The mumin multimodal coding scheme. NorFA Yearbook 2005, 129\u2013157 (2005)","journal-title":"NorFA Yearbook"},{"issue":"3\u20134","key":"24_CR3","doi-asserted-by":"crossref","first-page":"273","DOI":"10.1007\/s10579-007-9061-5","volume":"41","author":"J Allwood","year":"2007","unstructured":"Allwood, J., Cerrato, L., Jokinen, K., Navarretta, C., Paggio, P.: The mumin coding scheme for the annotation of feedback, turn management and sequencing phenomena. Lang. Resour. Eval. 
41(3\u20134), 273\u2013287 (2007)","journal-title":"Lang. Resour. Eval."},{"issue":"9","key":"24_CR4","doi-asserted-by":"crossref","first-page":"1965","DOI":"10.1007\/s12369-022-00936-4","volume":"14","author":"A Bacula","year":"2022","unstructured":"Bacula, A., Knight, H.: Motis parameters for expressive multi-robot systems: Relative motion, timing, and spacing. Int. J. Soc. Robot. 14(9), 1965\u20131993 (2022)","journal-title":"Int. J. Soc. Robot."},{"issue":"1","key":"24_CR5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3570732","volume":"12","author":"A Bacula","year":"2023","unstructured":"Bacula, A., Mercer, J., Berger, J., Adams, J., Knight, H.: Integrating robot manufacturer perspectives into legible factory robot light communications. ACM Trans. Hum.-Robot Interact. 12(1), 1\u201333 (2023)","journal-title":"ACM Trans. Hum.-Robot Interact."},{"key":"24_CR6","doi-asserted-by":"crossref","unstructured":"Bechade, L., Duplessis, G.D., Devillers, L.: Empirical study of humor support in social human-robot interaction. In: International Conference on Distributed, Ambient, and Pervasive Interactions, pp. 305\u2013316. Springer, Cham (2016)","DOI":"10.1007\/978-3-319-39862-4_28"},{"key":"24_CR7","doi-asserted-by":"crossref","unstructured":"Breazeal, C., et al.: Interactive robot theatre. In: Proceedings 2003 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), vol.\u00a04, pp. 3648\u20133655. IEEE (2003)","DOI":"10.1109\/IROS.2003.1249722"},{"key":"24_CR8","doi-asserted-by":"crossref","unstructured":"Fallatah, A., Chun, B., Balali, S., Knight, H.: \u201cwould you please buy me a coffee?\u201d how microcultures impact people\u2019s helpful actions toward robots. In: Proceedings of the 2020 ACM on Designing Interactive Systems Conference, pp. 
939\u2013950 (2020)","DOI":"10.1145\/3357236.3395446"},{"key":"24_CR9","doi-asserted-by":"crossref","unstructured":"Fallatah, A., Urann, J., Knight, H.: The robot show must go on: effective responses to robot failures. In: International Conference on Intelligent Robots and Systems. IEEE (2019)","DOI":"10.1109\/IROS40897.2019.8967854"},{"key":"24_CR10","unstructured":"Fujita, M.: Digital creatures for future entertainment robotics. In: Proceedings IEEE International Conference on Robotics and Automation (ICRA). IEEE (2000)"},{"key":"24_CR11","doi-asserted-by":"crossref","unstructured":"Gkournelos, C., Konstantinou, C., Makris, S.: An LLM-based approach for enabling seamless human-robot collaboration in assembly. In: CIRP Annals (2024)","DOI":"10.1016\/j.cirp.2024.04.002"},{"key":"24_CR12","doi-asserted-by":"crossref","unstructured":"Hansen, J., Flynn, D., Oo, T.M., Knight, H.: Iterative robot waiter algorithm design: service expectations and social factors. In: Proceedings of ACM\/IEEE International Conference on Human-Robot Interaction, pp. 394\u2013402 (2024)","DOI":"10.1145\/3610977.3634978"},{"key":"24_CR13","doi-asserted-by":"crossref","unstructured":"Hasegawa, D., Sjobergh, J., Rzepka, R., Araki, K.: Automatically choosing appropriate gestures for jokes. In: Proceedings of the Fifth Artificial Intelligence and Interactive Digital Entertainment Conference, pp. 40\u201345. AAAI (2009)","DOI":"10.1609\/aiide.v5i1.12354"},{"key":"24_CR14","doi-asserted-by":"crossref","unstructured":"Hedaoo, S., Williams, A., Wadgaonkar, C., Knight, H.: A robot barista comments on its clients: social attitudes toward robot data use. In: 2019 14th ACM\/IEEE International Conference on Human-Robot Interaction (HRI). 
IEEE (2019)","DOI":"10.1109\/HRI.2019.8673021"},{"key":"24_CR15","unstructured":"Honnibal, M., Montani, I.: spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing, 7(1) (2017, to appear)"},{"issue":"4","key":"24_CR16","doi-asserted-by":"crossref","first-page":"457","DOI":"10.1007\/s12369-016-0370-y","volume":"8","author":"E Jochum","year":"2016","unstructured":"Jochum, E., Vlachos, E., Christoffersen, A., Nielsen, S.G., Hameed, I.A., Tan, Z.H.: Using theatre to study interaction with care robots. Int. J. Soc. Robot. 8(4), 457\u2013470 (2016)","journal-title":"Int. J. Soc. Robot."},{"key":"24_CR17","doi-asserted-by":"crossref","unstructured":"Kahn\u00a0Jr., P.H., Ruckert, J.H., Kanda, T., Ishiguro, H., Gary, H.E., Shen, S.: No joking aside: using humor to establish sociality in HRI. In: Proceedings of the ACM\/IEEE International Conference on Human-Robot Interaction (2014)","DOI":"10.1145\/2559636.2559813"},{"key":"24_CR18","doi-asserted-by":"crossref","unstructured":"Katayama, H.: Humor in Manzai stand-up comedy: a historical and comparative analysis. Int. J. Human. 6(1) (2008)","DOI":"10.18848\/1447-9508\/CGP\/v06i01\/42336"},{"key":"24_CR19","doi-asserted-by":"crossref","unstructured":"Katevas, K., Healey, P., Harris, M.: Robot comedy lab: experimenting with the social dynamics of live performance. Front. Psychol. 6 (2015)","DOI":"10.3389\/fpsyg.2015.01253"},{"key":"24_CR20","unstructured":"Katevas, K., Healey, P.G., Harris, M.: Robot stand-up: engineering a comic performance. In: Proceedings of the Workshop on Humanoid Robots and Creativity at the IEEE-RAS Conference on Humanoid Robots (Madrid) (2014)"},{"key":"24_CR21","doi-asserted-by":"crossref","unstructured":"Knight, H.: Eight lessons learned about non-verbal interactions through robot theater. In: International Conference on Social Robotics, pp. 42\u201351. 
Springer, Cham (2011)","DOI":"10.1007\/978-3-642-25504-5_5"},{"key":"24_CR22","unstructured":"Knight, H., Satkin, S., Ramakrishna, V., Divvala, S.: A savvy robot standup comic: online learning through audience tracking. In: Workshop Paper (TEI\u201910) (2011)"},{"key":"24_CR23","doi-asserted-by":"crossref","unstructured":"Knight, H., Simmons, R.: An intelligent design interface for dancers to teach robots. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1344\u20131350. IEEE (2017)","DOI":"10.1109\/ROMAN.2017.8172479"},{"key":"24_CR24","unstructured":"Lee, Y.K., Jung, Y., Kang, G., Hahn, S.: Developing social robots with empathetic non-verbal cues using large language models (2023). https:\/\/arxiv.org\/abs\/2308.16529"},{"key":"24_CR25","doi-asserted-by":"crossref","unstructured":"Mahadevan, K., et al.: Generative expressive robot behaviors using large language models. In: Proceedings of the 2024 ACM\/IEEE International Conference on Human-Robot Interaction. HRI \u201924. ACM (2024)","DOI":"10.1145\/3610977.3634999"},{"key":"24_CR26","doi-asserted-by":"crossref","unstructured":"Marsella, S., Xu, Y., Lhommet, M., Feng, A., Scherer, S., Shapiro, A.: Virtual character performance from speech. In: Proceedings of the 12th ACM SIGGRAPH\/Eurographics Symposium on Computer Animation, pp. 25\u201335. Association for Computing Machinery, New York, NY, USA (2013)","DOI":"10.1145\/2485895.2485900"},{"key":"24_CR27","unstructured":"Mirchandani, S., et al.: Large language models as general pattern machines (2023). https:\/\/arxiv.org\/abs\/2307.04721"},{"key":"24_CR28","doi-asserted-by":"crossref","unstructured":"Mirnig, N., Stadler, S., Stollnberger, G., Giuliani, M., Tscheligi, M.: Robot humor: how self-irony and schadenfreude influence people\u2019s rating of robot likability. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 166\u2013171. 
IEEE (2016)","DOI":"10.1109\/ROMAN.2016.7745106"},{"key":"24_CR29","doi-asserted-by":"crossref","unstructured":"Mirowski, P., Love, J., Mathewson, K., Mohamed, S.: A robot walks into a bar: can language models serve as creativity support tools for comedy? An evaluation of LLMs\u2019 humour alignment with comedians. In: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (2024)","DOI":"10.1145\/3630106.3658993"},{"key":"24_CR30","doi-asserted-by":"crossref","unstructured":"Pan, Y., Agrawal, R., Singh, K.: S3: speech, script and scene driven head and eye animation. ACM Trans. Graph. 43(4) (2024)","DOI":"10.1145\/3658172"},{"key":"24_CR31","unstructured":"vaderSentiment repository (2019). https:\/\/github.com\/cjhutto\/vaderSentiment"},{"key":"24_CR32","unstructured":"Schraft, R.D., Graf, B., Traub, A., John, D.: A mobile robot platform for assistance and entertainment. Int. J. Ind. Robot (2001)"},{"key":"24_CR33","doi-asserted-by":"crossref","unstructured":"Shah, P.R., Thakkar, C.D., Mali, S.: Computational creativity: automated pun generation. Int. J. Comput. Appl. 140(10) (2016)","DOI":"10.5120\/ijca2016909467"},{"key":"24_CR34","doi-asserted-by":"crossref","unstructured":"Sidner, C.L., Lee, C., Kidd, C., Lesh, N., Rich, C.: Explorations in engagement for humans and robots. arXiv preprint cs\/0507056 (2005)","DOI":"10.1016\/j.artint.2005.03.005"},{"issue":"1","key":"24_CR35","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3310356","volume":"8","author":"M Suguitan","year":"2019","unstructured":"Suguitan, M., Hoffman, G.: Blossom: a handcrafted open-source robot. ACM Trans. Hum.-Robot Interact. (THRI) 8(1), 1\u201327 (2019)","journal-title":"ACM Trans. Hum.-Robot Interact. (THRI)"},{"key":"24_CR36","doi-asserted-by":"publisher","unstructured":"Swaminathan, J., Akintoye, J., Fraune, M.R., Knight, H.: Robots that run their own human experiments: exploring relational humor with multi-robot comedy. 
In: 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1262\u20131268 (2021). https:\/\/doi.org\/10.1109\/RO-MAN50785.2021.9515324","DOI":"10.1109\/RO-MAN50785.2021.9515324"},{"key":"24_CR37","doi-asserted-by":"crossref","unstructured":"Takegoshi, T., Hagiwara, M.: An automatic robot Manzai generation system. Trans. Jpn. Soc. Kansei Eng. TJSKE-D (2015)","DOI":"10.5057\/jjske.TJSKE-D-15-00023"},{"key":"24_CR38","doi-asserted-by":"crossref","unstructured":"Tsai, Y.L., Bana, P.R., Loiselle, S., Knight, H.: Sanitizerbot: how human-in-the-loop social robots can playfully support humans. In: 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8278\u20138285. IEEE (2022)","DOI":"10.1109\/IROS47612.2022.9981917"},{"key":"24_CR39","doi-asserted-by":"crossref","unstructured":"Veloso, M.M.: Entertainment robotics. Commun. ACM 45(3) (2002)","DOI":"10.1145\/504729.504755"},{"key":"24_CR40","doi-asserted-by":"crossref","unstructured":"Vilk, J., Fitter, N.T.: Comedians in cafes getting data: evaluating timing and adaptivity in real-world robot comedy performance. In: 2020 ACM\/IEEE International Conference on Human-Robot Interaction (2020)","DOI":"10.1145\/3319502.3374780"},{"key":"24_CR41","doi-asserted-by":"crossref","unstructured":"Vilk, J., Fitter, N.T.: Jon the robot goes hollywood. In: Companion of International Conference on Human-Robot Interaction (2020)","DOI":"10.1145\/3371382.3378397"},{"key":"24_CR42","doi-asserted-by":"crossref","unstructured":"Wang, C., et al.: Lami: large language models for multi-modal human-robot interaction. In: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. CHI \u201924. ACM (2024)","DOI":"10.1145\/3613905.3651029"},{"key":"24_CR43","unstructured":"Zhu, J.Y., Cano, C.G., Bermudez, D.V., Drozdzal, M.: Incoro: in-context learning for robotics control with feedback loops (2024). 
https:\/\/arxiv.org\/abs\/2402.05188"}],"container-title":["Lecture Notes in Computer Science","Social Robotics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-981-96-3519-1_24","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T04:10:11Z","timestamp":1742875811000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-981-96-3519-1_24"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025]]},"ISBN":["9789819635184","9789819635191"],"references-count":43,"URL":"https:\/\/doi.org\/10.1007\/978-981-96-3519-1_24","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025]]},"assertion":[{"value":"25 March 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"ICSR + AI","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Conference on Social Robotics","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Odense","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Denmark","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2024","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"24 October 2024","order":7,"name":"conference_start_date","label":"Conference Start 
Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"27 October 2024","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"16","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"socrob2024a","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/icsr2024.dk","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}