{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T16:18:15Z","timestamp":1772554695244,"version":"3.50.1"},"reference-count":72,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2024,3,29]],"date-time":"2024-03-29T00:00:00Z","timestamp":1711670400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62176249"],"award-info":[{"award-number":["62176249"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"China Scholarship Council (CSC) from the Ministry of Education of China","award":["202006310028"],"award-info":[{"award-number":["202006310028"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Intell. Syst. Technol."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>\n            The Facial Action Coding System (FACS) encodes the action units (AUs) in facial images, which has attracted extensive research attention due to its wide use in facial expression analysis. Many methods that perform well on automatic facial action unit (AU) detection primarily focus on modeling various AU relations between corresponding local muscle areas or mining global attention\u2013aware facial features; however, they neglect the dynamic interactions among local-global features. We argue that encoding AU features just from one perspective may not capture the rich contextual information between regional and global face features, as well as the detailed variability across AUs, because of the diversity in expression and individual characteristics. In this article, we propose a novel Multi-level Graph Relational Reasoning Network (termed\n            <jats:italic>MGRR-Net<\/jats:italic>\n            ) for facial AU detection. Each layer of MGRR-Net performs a multi-level (i.e., region-level, pixel-wise, and channel-wise level) feature learning. On the one hand, the region-level feature learning from the local face patch features via graph neural network can encode the correlation across different AUs. On the other hand, pixel-wise and channel-wise feature learning via graph attention networks (GAT) enhance the discrimination ability of AU features by adaptively recalibrating feature responses of pixels and channels from global face features. The hierarchical fusion strategy combines features from the three levels with gated fusion cells to improve AU discriminative ability. 
Extensive experiments on the DISFA and BP4D AU datasets show that the proposed approach outperforms state-of-the-art methods.\n          <\/jats:p>","DOI":"10.1145\/3643863","type":"journal-article","created":{"date-parts":[[2024,2,9]],"date-time":"2024-02-09T11:54:16Z","timestamp":1707479656000},"page":"1-20","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":7,"title":["MGRR-Net: Multi-level Graph Relational Reasoning Network for Facial Action Unit Detection"],"prefix":"10.1145","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3925-4951","authenticated-orcid":false,"given":"Xuri","family":"Ge","sequence":"first","affiliation":[{"name":"University of Glasgow, Glasgow, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9228-1759","authenticated-orcid":false,"given":"Joemon M.","family":"Jose","sequence":"additional","affiliation":[{"name":"University of Glasgow, Glasgow, UK"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-5735-8674","authenticated-orcid":false,"given":"Songpei","family":"Xu","sequence":"additional","affiliation":[{"name":"University of Glasgow, Glasgow, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9326-7801","authenticated-orcid":false,"given":"Xiao","family":"Liu","sequence":"additional","affiliation":[{"name":"Tencent, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6010-1792","authenticated-orcid":false,"given":"Hu","family":"Han","sequence":"additional","affiliation":[{"name":"Institute of Computing Technology, Chinese Academy of Sciences and University of the Chinese Academy of Sciences, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,3,29]]},"reference":[{"key":"e_1_3_1_2_2","first-page":"67","volume-title":"IEEE FG","author":"Cao Qiong","year":"2018","unstructured":"Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, and Andrew Zisserman. 2018. VGGFace2: A dataset for recognising faces across pose and age. In IEEE FG. 67\u201374."},{"key":"e_1_3_1_3_2","first-page":"374","volume-title":"AAAI","author":"Chen Yingjie","year":"2022","unstructured":"Yingjie Chen, Diqi Chen, Tao Wang, Yizhou Wang, and Yun Liang. 2022. Causal intervention for subject-deconfounded facial action unit recognition. In AAAI, Vol. 36. 374\u2013382."},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108355"},{"key":"e_1_3_1_5_2","first-page":"298","volume-title":"ECCV","author":"Corneanu Ciprian","year":"2018","unstructured":"Ciprian Corneanu, Meysam Madadi, and Sergio Escalera. 2018. Deep structure inference network for facial action unit recognition. In ECCV. 298\u2013313."},{"key":"e_1_3_1_6_2","first-page":"8694","volume-title":"CVPR","author":"Cui Zijun","year":"2023","unstructured":"Zijun Cui, Chenyi Kuang, Tian Gao, Kartik Talamadupula, and Qiang Ji. 2023. Biomechanics-guided facial action unit detection through force modeling. In CVPR. 8694\u20138703."},{"key":"e_1_3_1_7_2","first-page":"14338","article-title":"Knowledge augmented deep neural networks for joint facial expression and action unit recognition","volume":"33","author":"Cui Zijun","year":"2020","unstructured":"Zijun Cui, Tengfei Song, Yuru Wang, and Qiang Ji. 2020. Knowledge augmented deep neural networks for joint facial expression and action unit recognition. 
NeurIPS 33 (2020), 14338\u201314349.","journal-title":"NeurIPS"},{"key":"e_1_3_1_8_2","first-page":"248","volume-title":"IEEE CVPR","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In IEEE CVPR. 248\u2013255."},{"key":"e_1_3_1_9_2","volume-title":"What the Face Reveals: Basic and Applied Studies of Spontaneous Expression using the Facial Action Coding System (FACS)","author":"Ekman Paul","year":"1997","unstructured":"Paul Ekman and Erika L. Rosenberg. 1997. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression using the Facial Action Coding System (FACS). Oxford University Press."},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBIOM.2023.3306810"},{"key":"e_1_3_1_11_2","first-page":"01","volume-title":"IEEE FG","author":"Ge Xuri","year":"2021","unstructured":"Xuri Ge, Pengcheng Wan, Hu Han, Joemon M. Jose, Zhilong Ji, Zhongqin Wu, and Xiao Liu. 2021. Local global relational network for facial action units recognition. In IEEE FG. IEEE, 01\u201308."},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2005.06.042"},{"key":"e_1_3_1_13_2","first-page":"770","volume-title":"IEEE CVPR","author":"He Kaiming","year":"2016","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE CVPR. 770\u2013778."},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"e_1_3_1_15_2","first-page":"7680","volume-title":"IEEE CVPR","author":"Jacob Geethu Miriam","year":"2021","unstructured":"Geethu Miriam Jacob and Bjorn Stenger. 2021. Facial action unit detection with transformers. In IEEE CVPR. 7680\u20137689."},{"key":"e_1_3_1_16_2","first-page":"1","volume-title":"IEEE WACV","author":"Jaiswal Shashank","year":"2016","unstructured":"Shashank Jaiswal and Michel Valstar. 2016. Deep learning the dynamic appearance and shape of facial action units. In IEEE WACV. 1\u20138."},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2023.01.001"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2022.02.010"},{"key":"e_1_3_1_19_2","article-title":"Semi-supervised classification with graph convolutional networks","author":"Kipf Thomas N.","year":"2016","unstructured":"Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).","journal-title":"arXiv preprint arXiv:1609.02907"},{"key":"e_1_3_1_20_2","first-page":"8594","volume-title":"AAAI","author":"Li Guanbin","year":"2019","unstructured":"Guanbin Li, Xin Zhu, Yirui Zeng, Qing Wang, and Liang Lin. 2019. Semantic relationships guided representation learning for facial action unit recognition. In AAAI. 8594\u20138601."},{"key":"e_1_3_1_21_2","first-page":"1841","volume-title":"IEEE CVPR","author":"Li Wei","year":"2017","unstructured":"Wei Li, Farnaz Abtahi, and Zhigang Zhu. 2017. Action unit detection with region adaptation, multi-labeling learning and optimal temporal fusing. In IEEE CVPR. 1841\u20131850."},{"key":"e_1_3_1_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2791608"},{"key":"e_1_3_1_23_2","first-page":"4244","volume-title":"IEEE ICPR","author":"Li Xiaobai","year":"2016","unstructured":"Xiaobai Li, Jukka Komulainen, Guoying Zhao, Pong-Chi Yuen, and Matti Pietik\u00e4inen. 2016. 
Generalized face anti-spoofing by detecting pulse from face videos. In IEEE ICPR. 4244\u20134249."},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.01.032"},{"key":"e_1_3_1_25_2","article-title":"Contrastive learning of person-independent representations for facial action unit detection","author":"Li Yong","year":"2023","unstructured":"Yong Li and Shiguang Shan. 2023. Contrastive learning of person-independent representations for facial action unit detection. IEEE Trans. Image Process. (2023).","journal-title":"IEEE Trans. Image Process."},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2013.2253477"},{"key":"e_1_3_1_27_2","first-page":"702","volume-title":"ACM ICMI","author":"Li Yante","year":"2021","unstructured":"Yante Li and Guoying Zhao. 2021. Intra- and inter-contrastive learning for micro-expression action unit detection. In ACM ICMI. 702\u2013706."},{"key":"e_1_3_1_28_2","first-page":"125","volume-title":"ECCV","author":"Liang Xiaodan","year":"2016","unstructured":"Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. 2016. Semantic object parsing with graph LSTM. In ECCV. Springer, 125\u2013143."},{"key":"e_1_3_1_29_2","first-page":"2175","volume-title":"IEEE WACV","author":"Liu Peng","year":"2019","unstructured":"Peng Liu, Zheng Zhang, Huiyuan Yang, and Lijun Yin. 2019. Multi-modality empowered network for facial action unit detection. In IEEE WACV. 2175\u20132184."},{"key":"e_1_3_1_30_2","first-page":"151","volume-title":"ECCV","author":"Liu Ping","year":"2014","unstructured":"Ping Liu, Joey Tianyi Zhou, Ivor Wai-Hung Tsang, Zibo Meng, Shizhong Han, and Yan Tong. 2014. Feature disentangling machine: A novel approach of feature selection and disentangling in facial expression analysis. In ECCV. 151\u2013166."},{"key":"e_1_3_1_31_2","first-page":"489","volume-title":"MMM","author":"Liu Zhilei","year":"2020","unstructured":"Zhilei Liu, Jiahui Dong, Cuicui Zhang, Longbiao Wang, and Jianwu Dang. 2020. Relation modeling with graph convolutional networks for facial action unit detection. In MMM. Springer, 489\u2013501."},{"key":"e_1_3_1_32_2","first-page":"10012","volume-title":"IEEE ICCV","author":"Liu Ze","year":"2021","unstructured":"Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE ICCV. 10012\u201310022."},{"key":"e_1_3_1_33_2","first-page":"1239","volume-title":"IJCAI","author":"Luo Cheng","year":"2022","unstructured":"Cheng Luo, Siyang Song, Weicheng Xie, Linlin Shen, and Hatice Gunes. 2022. Learning multi-dimensional edge feature-based AU relation graph for facial action unit recognition. In IJCAI. 1239\u20131246."},{"key":"e_1_3_1_34_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.03.082"},{"key":"e_1_3_1_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/T-AFFC.2013.4"},{"key":"e_1_3_1_36_2","first-page":"909","volume-title":"NIPS","author":"Niu Xuesong","year":"2019","unstructured":"Xuesong Niu, Hu Han, Shiguang Shan, and Xilin Chen. 2019. Multi-label co-regularization for semi-supervised facial action unit recognition. In NIPS. 909\u2013919."},{"key":"e_1_3_1_37_2","first-page":"11917","volume-title":"IEEE CVPR","author":"Niu Xuesong","year":"2019","unstructured":"Xuesong Niu, Hu Han, Songfan Yang, Yan Huang, and Shiguang Shan. 2019. Local relationship learning with person-specific shape regularization for facial action unit detection. In IEEE CVPR. 
11917\u201311926."},{"key":"e_1_3_1_38_2","first-page":"599","volume-title":"ACM ICMI","author":"Niu Xuesong","year":"2018","unstructured":"Xuesong Niu, Hu Han, Jiabei Zeng, Xuran Sun, Shiguang Shan, Yan Huang, Songfan Yang, and Xilin Chen. 2018. Automatic engagement prediction with GAP feature. In ACM ICMI. 599\u2013603."},{"key":"e_1_3_1_39_2","doi-asserted-by":"publisher","DOI":"10.1097\/MAO.0b013e3181c993dc"},{"key":"e_1_3_1_40_2","first-page":"8026","volume-title":"NIPS","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In NIPS. 8026\u20138037."},{"key":"e_1_3_1_41_2","doi-asserted-by":"publisher","DOI":"10.1016\/0006-3223(92)90120-O"},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2019.107127"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2023.06.004"},{"key":"e_1_3_1_44_2","first-page":"705","volume-title":"ECCV","author":"Shao Zhiwen","year":"2018","unstructured":"Zhiwen Shao, Zhilei Liu, Jianfei Cai, and Lizhuang Ma. 2018. Deep adaptive attention for joint facial action unit detection and face alignment. In ECCV. 705\u2013720."},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-020-01378-z"},{"key":"e_1_3_1_46_2","article-title":"Facial action unit detection using attention and relation learning","author":"Shao Zhiwen","year":"2019","unstructured":"Zhiwen Shao, Zhilei Liu, Jianfei Cai, Yunsheng Wu, and Lizhuang Ma. 2019. Facial action unit detection using attention and relation learning. IEEE Trans. Affect. Comput. (2019).","journal-title":"IEEE Trans. Affect. Comput."},{"key":"e_1_3_1_47_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.11.108"},{"key":"e_1_3_1_48_2","article-title":"Spatio-temporal relation and attention learning for facial action unit detection","author":"Shao Zhiwen","year":"2020","unstructured":"Zhiwen Shao, Lixin Zou, Jianfei Cai, Yunsheng Wu, and Lizhuang Ma. 2020. Spatio-temporal relation and attention learning for facial action unit detection. arXiv preprint arXiv:2001.01168 (2020).","journal-title":"arXiv preprint arXiv:2001.01168"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2019.2926632"},{"key":"e_1_3_1_50_2","first-page":"5993","volume-title":"AAAI","author":"Song Tengfei","year":"2021","unstructured":"Tengfei Song, Lisha Chen, Wenming Zheng, and Qiang Ji. 2021. Uncertain graph neural networks for facial action unit detection. In AAAI. 5993\u20136001."},{"key":"e_1_3_1_51_2","first-page":"4845","volume-title":"IEEE CVPR","author":"Song Tengfei","year":"2021","unstructured":"Tengfei Song, Zijun Cui, Yuru Wang, Wenming Zheng, and Qiang Ji. 2021. Dynamic probabilistic graph convolution for facial action unit intensity estimation. In IEEE CVPR. 4845\u20134854."},{"key":"e_1_3_1_52_2","first-page":"6267","volume-title":"IEEE CVPR","author":"Song Tengfei","year":"2021","unstructured":"Tengfei Song, Zijun Cui, Wenming Zheng, and Qiang Ji. 2021. Hybrid message passing with performance-driven structures for facial action unit detection. In IEEE CVPR. 
6267\u20136276."},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2018.2817622"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2014.2331141"},{"key":"e_1_3_1_55_2","article-title":"Multi-order networks for action unit detection","author":"Tallec Gauthier","year":"2022","unstructured":"Gauthier Tallec, Arnaud Dapogny, and Kevin Bailly. 2022. Multi-order networks for action unit detection. IEEE Trans. Affect. Comput. (2022).","journal-title":"IEEE Trans. Affect. Comput."},{"key":"e_1_3_1_56_2","first-page":"12899","volume-title":"IEEE ICCV","author":"Tang Yang","year":"2021","unstructured":"Yang Tang, Wangding Zeng, Dafei Zhao, and Honggang Zhang. 2021. PIAP-DF: Pixel-interested and anti person-specific facial action unit detection net with discrete feedback learning. In IEEE ICCV. 12899\u201312908."},{"key":"e_1_3_1_57_2","first-page":"1","volume-title":"IEEE CVPR","author":"Tong Yan","year":"2008","unstructured":"Yan Tong and Qiang Ji. 2008. Learning Bayesian networks with qualitative constraints. In IEEE CVPR. 1\u20138."},{"key":"e_1_3_1_58_2","article-title":"Attention is all you need","volume":"30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 30 (2017).","journal-title":"NIPS"},{"key":"e_1_3_1_59_2","first-page":"1","volume-title":"ICLR","author":"Veli\u010dkovi\u0107 Petar","year":"2018","unstructured":"Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In ICLR. 1\u201312."},{"key":"e_1_3_1_60_2","article-title":"Dual learning for joint facial landmark detection and action unit recognition","author":"Wang Shangfei","year":"2021","unstructured":"Shangfei Wang, Yanan Chang, and Can Wang. 2021. Dual learning for joint facial landmark detection and action unit recognition. IEEE Trans. Affect. Comput. (2021).","journal-title":"IEEE Trans. Affect. Comput."},{"key":"e_1_3_1_61_2","article-title":"Deep reasoning with knowledge graph for social relationship understanding","author":"Wang Zhouxia","year":"2018","unstructured":"Zhouxia Wang, Tianshui Chen, Jimmy Ren, Weihao Yu, Hui Cheng, and Liang Lin. 2018. Deep reasoning with knowledge graph for social relationship understanding. arXiv preprint arXiv:1807.00504 (2018).","journal-title":"arXiv preprint arXiv:1807.00504"},{"key":"e_1_3_1_62_2","first-page":"532","volume-title":"IEEE CVPR","author":"Xiong Xuehan","year":"2013","unstructured":"Xuehan Xiong and Fernando De la Torre. 2013. Supervised descent method and its applications to face alignment. In IEEE CVPR. 532\u2013539."},{"key":"e_1_3_1_63_2","volume-title":"AAAI","author":"Yan Sijie","year":"2018","unstructured":"Sijie Yan, Yuanjun Xiong, and Dahua Lin. 2018. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI."},{"key":"e_1_3_1_64_2","first-page":"2982","volume-title":"ACM MM","author":"Yang Huiyuan","year":"2020","unstructured":"Huiyuan Yang, Taoyue Wang, and Lijun Yin. 2020. Adaptive multimodal fusion for facial action units recognition. In ACM MM. 2982\u20132990."},{"key":"e_1_3_1_65_2","first-page":"10482","volume-title":"IEEE CVPR","author":"Yang Huiyuan","year":"2021","unstructured":"Huiyuan Yang, Lijun Yin, Yi Zhou, and Jiuxiang Gu. 2021. 
Exploiting semantic embedding and visual feature for facial action unit detection. In IEEE CVPR. 10482\u201310491."},{"key":"e_1_3_1_66_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2020.01.016"},{"key":"e_1_3_1_67_2","first-page":"11","volume-title":"ACM MM","author":"Zhang Liangfei","year":"2021","unstructured":"Liangfei Zhang, Ognjen Arandjelovic, and Xiaopeng Hong. 2021. Facial action unit detection with local key facial sub-region based multi-label classification for micro-expression analysis. In ACM MM. 11\u201318."},{"key":"e_1_3_1_68_2","article-title":"Short and long range relation based spatio-temporal transformer for micro-expression recognition","author":"Zhang Liangfei","year":"2021","unstructured":"Liangfei Zhang, Xiaopeng Hong, Ognjen Arandjelovic, and Guoying Zhao. 2021. Short and long range relation based spatio-temporal transformer for micro-expression recognition. arXiv preprint arXiv:2112.05851 (2021).","journal-title":"arXiv preprint arXiv:2112.05851"},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2014.06.002"},{"key":"e_1_3_1_70_2","first-page":"5108","volume-title":"IEEE CVPR","author":"Zhang Yong","year":"2018","unstructured":"Yong Zhang, Weiming Dong, Bao-Gang Hu, and Qiang Ji. 2018. Classifier learning with prior probabilities for facial action unit recognition. In IEEE CVPR. 5108\u20135116."},{"key":"e_1_3_1_71_2","first-page":"2207","volume-title":"IEEE CVPR","author":"Zhao Kaili","year":"2015","unstructured":"Kaili Zhao, Wen-Sheng Chu, Fernando De la Torre, Jeffrey F. Cohn, and Honggang Zhang. 2015. Joint patch and multi-label learning for facial action unit detection. In IEEE CVPR. 2207\u20132216."},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2570550"},{"key":"e_1_3_1_73_2","first-page":"3391","volume-title":"IEEE CVPR","author":"Zhao Kaili","year":"2016","unstructured":"Kaili Zhao, Wen-Sheng Chu, and Honggang Zhang. 2016. Deep region and multi-label learning for facial action unit detection. In IEEE CVPR. 3391\u20133399."}],"container-title":["ACM Transactions on Intelligent Systems and Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643863","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3643863","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T23:57:34Z","timestamp":1750291054000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3643863"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,29]]},"references-count":72,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3643863"],"URL":"https:\/\/doi.org\/10.1145\/3643863","relation":{},"ISSN":["2157-6904","2157-6912"],"issn-type":[{"value":"2157-6904","type":"print"},{"value":"2157-6912","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,29]]},"assertion":[{"value":"2023-10-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-23","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-03-29","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
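
Note on the method described in the abstract above: MGRR-Net learns region-level AU features with a graph neural network, pixel-wise and channel-wise global features with graph attention, and combines the three levels through gated fusion cells. The record carries no implementation, so the sketch below is only a plausible reading of the gated fusion step in PyTorch (the toolkit cited in reference e_1_3_1_40_2); the class names, the 512-dimensional features, the 12-AU batch shape, and the fusion order are all illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class GatedFusionCell(nn.Module):
    # Assumed form of a gated fusion cell: a learned sigmoid gate decides, per
    # feature dimension, how much of each of two input streams to keep.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, a, b):
        g = torch.sigmoid(self.gate(torch.cat([a, b], dim=-1)))  # g in (0, 1)
        return g * a + (1.0 - g) * b

class MultiLevelFusion(nn.Module):
    # Hypothetical hierarchy: first fuse the two global streams (pixel-wise
    # and channel-wise), then fuse that result with the region-level stream.
    def __init__(self, dim):
        super().__init__()
        self.fuse_global = GatedFusionCell(dim)
        self.fuse_all = GatedFusionCell(dim)

    def forward(self, region, pixel, channel):
        global_feat = self.fuse_global(pixel, channel)
        return self.fuse_all(region, global_feat)

# Illustrative shapes: a batch of 8 faces, 12 AUs, 512-d features per AU.
fusion = MultiLevelFusion(512)
region = torch.randn(8, 12, 512)
pixel = torch.randn(8, 12, 512)
channel = torch.randn(8, 12, 512)
fused = fusion(region, pixel, channel)  # torch.Size([8, 12, 512])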