{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:17:42Z","timestamp":1750220262464,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":50,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T00:00:00Z","timestamp":1667779200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"IITP","award":["No. 2017-0-00451; Development of BCI Brain and Cognitive Computing Technology for Recognizing User's Intentions using DL, No. 2019-0-00079; Department of Artificial Intelligence, KU, No. 2021-0-02068, Artificial Intelligence Innovation Hub"],"award-info":[{"award-number":["No. 2017-0-00451; Development of BCI Brain and Cognitive Computing Technology for Recognizing User's Intentions using DL, No. 2019-0-00079; Department of Artificial Intelligence, KU, No. 
2021-0-02068, Artificial Intelligence Innovation Hub"]}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2017M3C7A1041824, NRF-2019R1A2C2007612, BK21 FOUR"],"award-info":[{"award-number":["NRF-2017M3C7A1041824, NRF-2019R1A2C2007612, BK21 FOUR"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,11,7]]},"DOI":"10.1145\/3536220.3558036","type":"proceedings-article","created":{"date-parts":[[2022,11,4]],"date-time":"2022-11-04T22:11:40Z","timestamp":1667599900000},"page":"127-133","source":"Crossref","is-referenced-by-count":2,"title":["Contextual modulation of affect: Comparing humans and deep neural networks"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2604-9115","authenticated-orcid":false,"given":"Soomin","family":"Shin","sequence":"first","affiliation":[{"name":"Korea University, Republic of Korea"}]},{"given":"DooYon","family":"Kim","sequence":"additional","affiliation":[{"name":"Korea University, Republic of Korea"}]},{"given":"Christian","family":"Wallraven","sequence":"additional","affiliation":[{"name":"Korea University, Republic of Korea"}]}],"member":"320","published-online":{"date-parts":[[2022,11,7]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"The inherently contextualized nature of facial emotion perception. Current opinion in psychology 17","author":"Aviezer Hillel","year":"2017","unstructured":"Hillel Aviezer , Noga Ensenberg , and Ran\u00a0 R Hassin . 2017. The inherently contextualized nature of facial emotion perception. Current opinion in psychology 17 ( 2017 ), 47\u201354. Hillel Aviezer, Noga Ensenberg, and Ran\u00a0R Hassin. 2017. The inherently contextualized nature of facial emotion perception. 
Current opinion in psychology 17 (2017), 47\u201354."},{"key":"e_1_3_2_1_2_1","volume-title":"Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 338, 6111","author":"Aviezer Hillel","year":"2012","unstructured":"Hillel Aviezer , Yaacov Trope , and Alexander Todorov . 2012. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 338, 6111 ( 2012 ), 1225\u20131229. Hillel Aviezer, Yaacov Trope, and Alexander Todorov. 2012. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 338, 6111 (2012), 1225\u20131229."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2016.7477553"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1080\/026999398379574"},{"volume-title":"How emotions are made: The secret life of the brain","author":"Barrett Lisa\u00a0Feldman","key":"e_1_3_2_1_5_1","unstructured":"Lisa\u00a0Feldman Barrett . 2017. How emotions are made: The secret life of the brain . Pan Macmillan . Lisa\u00a0Feldman Barrett. 2017. How emotions are made: The secret life of the brain. Pan Macmillan."},{"key":"e_1_3_2_1_6_1","volume-title":"Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological science in the public interest 20, 1","author":"Barrett Lisa\u00a0Feldman","year":"2019","unstructured":"Lisa\u00a0Feldman Barrett , Ralph Adolphs , Stacy Marsella , Aleix\u00a0 M Martinez , and Seth\u00a0 D Pollak . 2019. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological science in the public interest 20, 1 ( 2019 ), 1\u201368. Lisa\u00a0Feldman Barrett, Ralph Adolphs, Stacy Marsella, Aleix\u00a0M Martinez, and Seth\u00a0D Pollak. 2019. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. 
Psychological science in the public interest 20, 1 (2019), 1\u201368."},{"key":"e_1_3_2_1_7_1","volume-title":"Context is routinely encoded during emotion perception. Psychological science 21, 4","author":"Barrett Lisa\u00a0Feldman","year":"2010","unstructured":"Lisa\u00a0Feldman Barrett and Elizabeth\u00a0 A Kensinger . 2010. Context is routinely encoded during emotion perception. Psychological science 21, 4 ( 2010 ), 595\u2013599. Lisa\u00a0Feldman Barrett and Elizabeth\u00a0A Kensinger. 2010. Context is routinely encoded during emotion perception. Psychological science 21, 4 (2010), 595\u2013599."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1016\/0005-7916(94)90063-9"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.143"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"crossref","unstructured":"James\u00a0M Carroll and James\u00a0A Russell. 1996. Do facial expressions signal specific emotions? Judging emotion from the face in context.Journal of personality and social psychology 70 2(1996) 205.  James\u00a0M Carroll and James\u00a0A Russell. 1996. Do facial expressions signal specific emotions? Judging emotion from the face in context.Journal of personality and social psychology 70 2(1996) 205.","DOI":"10.1037\/0022-3514.70.2.205"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359999.3360495"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1167\/jov.21.4.4"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_2_1_14_1","volume-title":"How does the brain solve visual object recognition?Neuron 73, 3","author":"DiCarlo J","year":"2012","unstructured":"James\u00a0 J DiCarlo , Davide Zoccolan , and Nicole\u00a0 C Rust . 2012. How does the brain solve visual object recognition?Neuron 73, 3 ( 2012 ), 415\u2013434. James\u00a0J DiCarlo, Davide Zoccolan, and Nicole\u00a0C Rust. 2012. 
How does the brain solve visual object recognition?Neuron 73, 3 (2012), 415\u2013434."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1111\/1469-8986.3750693"},{"key":"e_1_3_2_1_16_1","volume-title":"\u201cInternational Affective Picture System\u201d(IAPS) on a sample from Bosnia and Herzegovina. Psihologija 46, 1","author":"Drace Sasa","year":"2013","unstructured":"Sasa Drace , Emir Efendi\u0107 , Mirna Kusturica , and Lamija Land\u017eo . 2013. Cross-cultural validation of the \u201cInternational Affective Picture System\u201d(IAPS) on a sample from Bosnia and Herzegovina. Psihologija 46, 1 ( 2013 ). Sasa Drace, Emir Efendi\u0107, Mirna Kusturica, and Lamija Land\u017eo. 2013. Cross-cultural validation of the \u201cInternational Affective Picture System\u201d(IAPS) on a sample from Bosnia and Herzegovina. Psihologija 46, 1 (2013)."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3382507.3418814"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2019.8925446"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0231968"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.600"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-66415-2_52"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/MMUL.2022.3173430"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-42051-1_16"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1111\/spc3.12393"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2015.7344577"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/FG.2015.7284841"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-02444-3_39"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2916866"},{"key":"e_1_3_2_1_29_1","volume-title":"Deep facial 
expression recognition: A survey","author":"Li Shan","year":"2020","unstructured":"Shan Li and Weihong Deng . 2020. Deep facial expression recognition: A survey . IEEE transactions on affective computing( 2020 ). Shan Li and Weihong Deng. 2020. Deep facial expression recognition: A survey. IEEE transactions on affective computing(2020)."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00218"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Takahiko Masuda Phoebe\u00a0C Ellsworth Batja Mesquita Janxin Leu Shigehito Tanida and Ellen Van\u00a0de Veerdonk. 2008. Placing the face in context: cultural differences in the perception of facial emotion.Journal of personality and social psychology 94 3(2008) 365.  Takahiko Masuda Phoebe\u00a0C Ellsworth Batja Mesquita Janxin Leu Shigehito Tanida and Ellen Van\u00a0de Veerdonk. 2008. Placing the face in context: cultural differences in the perception of facial emotion.Journal of personality and social psychology 94 3(2008) 365.","DOI":"10.1037\/0022-3514.94.3.365"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01424"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2017.2740923"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2016.188"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ibror.2019.07.608"},{"key":"e_1_3_2_1_36_1","first-page":"392","article-title":"Automatic integration of social information in emotion recognition.Journal of Experimental Psychology","volume":"144","author":"Mumenthaler Christian","year":"2015","unstructured":"Christian Mumenthaler and David Sander . 2015 . Automatic integration of social information in emotion recognition.Journal of Experimental Psychology : General 144 , 2 (2015), 392 . Christian Mumenthaler and David Sander. 2015. 
Automatic integration of social information in emotion recognition.Journal of Experimental Psychology: General 144, 2 (2015), 392.","journal-title":"General"},{"key":"e_1_3_2_1_37_1","volume-title":"Valence resolution of ambiguous facial expressions using an emotional oddball task.Emotion 11, 6","author":"Neta Maital","year":"2011","unstructured":"Maital Neta , F\u00a0Caroline Davis , and Paul\u00a0 J Whalen . 2011. Valence resolution of ambiguous facial expressions using an emotional oddball task.Emotion 11, 6 ( 2011 ), 1425. Maital Neta, F\u00a0Caroline Davis, and Paul\u00a0J Whalen. 2011. Valence resolution of ambiguous facial expressions using an emotional oddball task.Emotion 11, 6 (2011), 1425."},{"key":"e_1_3_2_1_38_1","volume-title":"Nebraska symposium on motivation, Vol.\u00a019","author":"Paul Ekman","year":"1972","unstructured":"Ekman Paul and Cole James . 1972 . Universals and cultural differences in facial expressions of emotions . In Nebraska symposium on motivation, Vol.\u00a019 . 207\u2013283. Ekman Paul and Cole James. 1972. Universals and cultural differences in facial expressions of emotions. In Nebraska symposium on motivation, Vol.\u00a019. 207\u2013283."},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/FG52635.2021.9666957"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1093\/cercor\/bhj066"},{"key":"e_1_3_2_1_41_1","volume-title":"Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. Social cognitive and affective neuroscience 3, 3","author":"Righart Ruthger","year":"2008","unstructured":"Ruthger Righart and Beatrice De\u00a0Gelder . 2008. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. Social cognitive and affective neuroscience 3, 3 ( 2008 ), 270\u2013278. Ruthger Righart and Beatrice De\u00a0Gelder. 2008. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. 
Social cognitive and affective neuroscience 3, 3 (2008), 270\u2013278."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.3758\/CABN.8.3.264"},{"key":"e_1_3_2_1_43_1","first-page":"223","article-title":"Relativity in the perception of emotion in facial expressions.Journal of Experimental Psychology","volume":"116","author":"Russell A","year":"1987","unstructured":"James\u00a0 A Russell and Beverley Fehr . 1987 . Relativity in the perception of emotion in facial expressions.Journal of Experimental Psychology : General 116 , 3 (1987), 223 . James\u00a0A Russell and Beverley Fehr. 1987. Relativity in the perception of emotion in facial expressions.Journal of Experimental Psychology: General 116, 3 (1987), 223.","journal-title":"General"},{"key":"e_1_3_2_1_44_1","volume-title":"Body expressions influence recognition of emotions in the face and voice.Emotion 7, 3","author":"Stock Jan Van\u00a0den","year":"2007","unstructured":"Jan Van\u00a0den Stock , Ruthger Righart , and Beatrice De\u00a0Gelder . 2007. Body expressions influence recognition of emotions in the face and voice.Emotion 7, 3 ( 2007 ), 487. Jan Van\u00a0den Stock, Ruthger Righart, and Beatrice De\u00a0Gelder. 2007. Body expressions influence recognition of emotions in the face and voice.Emotion 7, 3 (2007), 487."},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.1403112111"},{"key":"e_1_3_2_1_47_1","volume-title":"Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.Emotion 17, 8","author":"Yitzhak Neta","year":"2017","unstructured":"Neta Yitzhak , Nir Giladi , Tanya Gurevich , Daniel\u00a0 S Messinger , Emily\u00a0 B Prince , Katherine Martin , and Hillel Aviezer . 2017. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.Emotion 17, 8 ( 2017 ), 1187. 
Neta Yitzhak, Nir Giladi, Tanya Gurevich, Daniel\u00a0S Messinger, Emily\u00a0B Prince, Katherine Martin, and Hillel Aviezer. 2017. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.Emotion 17, 8 (2017), 1187."},{"key":"e_1_3_2_1_48_1","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 36\u201347","author":"Zafeiriou Stefanos","year":"2016","unstructured":"Stefanos Zafeiriou , Athanasios Papaioannou , Irene Kotsia , Mihalis Nicolaou , and Guoying Zhao . 2016 . Facial Affect\u201cIn-The-Wild . In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 36\u201347 . Stefanos Zafeiriou, Athanasios Papaioannou, Irene Kotsia, Mihalis Nicolaou, and Guoying Zhao. 2016. Facial Affect\u201cIn-The-Wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 36\u201347."},{"key":"e_1_3_2_1_49_1","volume-title":"Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055(2016).","author":"Zhou Bolei","year":"2016","unstructured":"Bolei Zhou , Aditya Khosla , Agata Lapedriza , Antonio Torralba , and Aude Oliva . 2016 . Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055(2016). Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, and Aude Oliva. 2016. Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055(2016)."},{"key":"e_1_3_2_1_50_1","volume-title":"Emerged human-like facial expression representation in a deep convolutional neural network. Science Advances 8, 12","author":"Zhou Liqin","year":"2022","unstructured":"Liqin Zhou , Anmin Yang , Ming Meng , and Ke Zhou . 2022. Emerged human-like facial expression representation in a deep convolutional neural network. Science Advances 8, 12 ( 2022 ), eabj4383. 
https:\/\/doi.org\/10.1126\/sciadv.abj4383 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/sciadv.abj4383 10.1126\/sciadv.abj4383 Liqin Zhou, Anmin Yang, Ming Meng, and Ke Zhou. 2022. Emerged human-like facial expression representation in a deep convolutional neural network. Science Advances 8, 12 (2022), eabj4383. https:\/\/doi.org\/10.1126\/sciadv.abj4383 arXiv:https:\/\/www.science.org\/doi\/pdf\/10.1126\/sciadv.abj4383"}],"event":{"name":"ICMI '22: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"],"location":"Bengaluru India","acronym":"ICMI '22"},"container-title":["INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3536220.3558036","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3536220.3558036","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:30:54Z","timestamp":1750188654000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3536220.3558036"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,7]]},"references-count":50,"alternative-id":["10.1145\/3536220.3558036","10.1145\/3536220"],"URL":"https:\/\/doi.org\/10.1145\/3536220.3558036","relation":{},"subject":[],"published":{"date-parts":[[2022,11,7]]}}}