{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T15:54:48Z","timestamp":1772553288223,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":24,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,9,24]],"date-time":"2021-09-24T00:00:00Z","timestamp":1632441600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,9,24]]},"DOI":"10.1145\/3488933.3489006","type":"proceedings-article","created":{"date-parts":[[2022,2,25]],"date-time":"2022-02-25T11:36:59Z","timestamp":1645789019000},"page":"260-265","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Deep Facial Expression Recognition Algorithm Combining Channel Attention"],"prefix":"10.1145","author":[{"given":"Peixiang","family":"Zhang","sequence":"first","affiliation":[{"name":"Center for Image and Information Processing, Xi'an University of Posts and Telecommunications, China"}]},{"given":"Ying","family":"Liu","sequence":"additional","affiliation":[{"name":"Center for Image and Information Processing, Xi'an University of Posts and Telecommunications, China"}]},{"given":"Yu","family":"Hao","sequence":"additional","affiliation":[{"name":"Center for Image and Information Processing, Xi'an University of Posts and Telecommunications, China"}]},{"given":"Jiming","family":"Liu","sequence":"additional","affiliation":[{"name":"Xi'an University of Posts and Telecommunications, 
China"}]}],"member":"320","published-online":{"date-parts":[[2022,2,25]]},"reference":[
{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1037\/h0030377"},
{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-42051-1_16"},
{"key":"e_1_3_2_1_3_1","volume-title":"IEMOCAP: Interactive emotional dyadic motion capture database. Language resources and evaluation 42,4 (November","author":"Busso C","year":"2008","unstructured":"Busso C, Bulut M, and Lee C C, 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation 42, 4 (November 2008), 335-359. https:\/\/doi.org\/10.1007\/s10579-008-9076-6."},
{"key":"e_1_3_2_1_4_1","first-page":"3","article-title":"Spatial-temporal attention network for facial expression recognition","volume":"50","author":"Xiao Yi Feng","year":"2020","unstructured":"Xiao Yi Feng, Dong Huang, and Shao Xing Cui, 2020. Spatial-temporal attention network for facial expression recognition. Journal of Northwest University (Natural Science Edition), 50, 3 (Jun 2020), 319-327. https:\/\/doi.org\/10.16152\/j.cnki.xdxbzr.2020-03-002.","journal-title":"Journal of Northwest University (Natural Science Edition)"},
{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12193-015-0209-0"},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2818346.2830593"},
{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.3390\/s21030833"},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"crossref","unstructured":"Gan Y, Chen J, and Yang Z, 2020. Multiple attention network for facial expression recognition. IEEE Access 8 (January 2020), 7383-7393. https:\/\/doi.org\/10.1109\/ACCESS.2020.2963913.","DOI":"10.1109\/ACCESS.2020.2963913"},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV48630.2021.00245"},
{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR48806.2021.9413094"},
{"key":"e_1_3_2_1_11_1","unstructured":"Chen T, Pu T, and Xie Y, 2020. Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning. arXiv:2008.00923. Retrieved from https:\/\/arxiv.org\/abs\/2008.00923."},
{"key":"e_1_3_2_1_12_1","article-title":"A Deep Residual Expression Recognition Network to Enhance Inter-class Discrimination","author":"Hao Huang","year":"2021","unstructured":"Hao Huang, Hong Wei Ge. 2021. A Deep Residual Expression Recognition Network to Enhance Inter-class Discrimination. Journal of Frontiers of Computer Science and Technology (July 2021), 1-10. http:\/\/fcst.ceaj.org\/CN\/10.3778\/j.issn.1673-9418.2011042.","journal-title":"Journal of Frontiers of Computer Science and Technology"},
{"key":"e_1_3_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Mahmoudi M A, Chetouani A, and Boufera F, 2021. Improved Bilinear Model for Facial Expression Recognition. Pattern Recognition and Artificial Intelligence 1322 (February 2021), 47-59. https:\/\/doi.org\/10.1007\/978-3-030-71804-6_4.","DOI":"10.1007\/978-3-030-71804-6_4"},
{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01400"},
{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00693"},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"crossref","unstructured":"Gan Y, Chen J, and Xu L. 2019. Facial expression recognition boosted by soft label with a diverse ensemble. Pattern Recognition Letters 125 (July 2019), 105-112. https:\/\/doi.org\/10.1016\/j.patrec.2019.04.002.","DOI":"10.1016\/j.patrec.2019.04.002"},
{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1007\/s12559-017-9472-6"},
{"key":"e_1_3_2_1_18_1","first-page":"12","article-title":"MRMR-based ensemble pruning for facial expression recognition","volume":"77","author":"Li D","year":"2018","unstructured":"Li D, Wen G. 2018. MRMR-based ensemble pruning for facial expression recognition. Multimedia Tools and Applications 77, 12 (September 2018), 15251-15272. https:\/\/doi.org\/10.1007\/s11042-017-5105-z.","journal-title":"Multimedia Tools and Applications"},
{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/AICCSA47632.2019.9035249"},
{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01155"},
{"key":"e_1_3_2_1_21_1","first-page":"21","volume-title":"2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Li S","year":"2017","unstructured":"Li S, Deng W, and Du J P. 2017. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, 21-26. https:\/\/doi.org\/10.1109\/CVPR.2017.277."},
{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW.2010.5543262"},
{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2016.2603342"},
{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2018.00097"}],
"event":{"name":"AIPR 2021: 2021 4th International Conference on Artificial Intelligence and Pattern Recognition","location":"Xiamen, China","acronym":"AIPR 2021"},"container-title":["2021 4th International Conference on Artificial Intelligence and Pattern Recognition"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3488933.3489006","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3488933.3489006","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:49:00Z","timestamp":1750193340000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3488933.3489006"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,24]]},"references-count":24,"alternative-id":["10.1145\/3488933.3489006","10.1145\/3488933"],"URL":"https:\/\/doi.org\/10.1145\/3488933.3489006","relation":{},"subject":[],"published":{"date-parts":[[2021,9,24]]},"assertion":[{"value":"2022-02-25","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}