{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,4]],"date-time":"2026-05-04T22:16:23Z","timestamp":1777932983755,"version":"3.51.4"},"reference-count":57,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2023,4,22]],"date-time":"2023-04-22T00:00:00Z","timestamp":1682121600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key R&amp;D Program","award":["2018YFB1306601"],"award-info":[{"award-number":["2018YFB1306601"]}]},{"name":"National Key R&amp;D Program","award":["2019-90"],"award-info":[{"award-number":["2019-90"]}]},{"name":"National Key R&amp;D Program","award":["HZ2021011"],"award-info":[{"award-number":["HZ2021011"]}]},{"name":"National Key R&amp;D Program","award":["cstc2021jscx-cylhX0009"],"award-info":[{"award-number":["cstc2021jscx-cylhX0009"]}]},{"name":"Chinese Academy of Sciences \u201cLight of the West\u201d Talent Training Introduction Program","award":["2018YFB1306601"],"award-info":[{"award-number":["2018YFB1306601"]}]},{"name":"Chinese Academy of Sciences \u201cLight of the West\u201d Talent Training Introduction Program","award":["2019-90"],"award-info":[{"award-number":["2019-90"]}]},{"name":"Chinese Academy of Sciences \u201cLight of the West\u201d Talent Training Introduction Program","award":["HZ2021011"],"award-info":[{"award-number":["HZ2021011"]}]},{"name":"Chinese Academy of Sciences \u201cLight of the West\u201d Talent Training Introduction Program","award":["cstc2021jscx-cylhX0009"],"award-info":[{"award-number":["cstc2021jscx-cylhX0009"]}]},{"name":"Cooperation projects between Chongqing universities","award":["2018YFB1306601"],"award-info":[{"award-number":["2018YFB1306601"]}]},{"name":"Cooperation projects between Chongqing universities","award":["2019-90"],"award-info":[{"award-number":["2019-90"]}]},{"name":"Cooperation projects between Chongqing 
universities","award":["HZ2021011"],"award-info":[{"award-number":["HZ2021011"]}]},{"name":"Cooperation projects between Chongqing universities","award":["cstc2021jscx-cylhX0009"],"award-info":[{"award-number":["cstc2021jscx-cylhX0009"]}]},{"name":"Chinese Academy of Sciences","award":["2018YFB1306601"],"award-info":[{"award-number":["2018YFB1306601"]}]},{"name":"Chinese Academy of Sciences","award":["2019-90"],"award-info":[{"award-number":["2019-90"]}]},{"name":"Chinese Academy of Sciences","award":["HZ2021011"],"award-info":[{"award-number":["HZ2021011"]}]},{"name":"Chinese Academy of Sciences","award":["cstc2021jscx-cylhX0009"],"award-info":[{"award-number":["cstc2021jscx-cylhX0009"]}]},{"name":"Chongqing technology innovation and application development special","award":["2018YFB1306601"],"award-info":[{"award-number":["2018YFB1306601"]}]},{"name":"Chongqing technology innovation and application development special","award":["2019-90"],"award-info":[{"award-number":["2019-90"]}]},{"name":"Chongqing technology innovation and application development special","award":["HZ2021011"],"award-info":[{"award-number":["HZ2021011"]}]},{"name":"Chongqing technology innovation and application development special","award":["cstc2021jscx-cylhX0009"],"award-info":[{"award-number":["cstc2021jscx-cylhX0009"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Facial expression recognition methods play a vital role in human\u2013computer interaction and other fields. However, recognition in the wild is affected by occlusion, illumination, and pose changes, as well as class imbalance across datasets, which leads to large variations in recognition rates and low accuracy for some expression categories. This study introduces RCL-Net, a method for recognizing facial expressions in the wild based on an attention mechanism and LBP feature fusion. 
The network consists of two main branches, namely a ResNet-CBAM residual attention branch and a local binary pattern (LBP) feature extraction branch. First, by merging a residual network with a hybrid attention mechanism, a residual attention network is constructed to emphasize the local detail features of facial expressions; salient expression characteristics are extracted along both the channel and spatial dimensions to build the residual attention classification model. Second, we present a locally enhanced residual attention model in which LBP features are introduced at the feature extraction stage to capture texture information from expression images, highlighting facial feature information and improving the recognition accuracy of the model. Lastly, experimental validation on the FER2013, FERPLUS, CK+, and RAF-DB datasets demonstrates that the proposed method achieves superior generalization capability and robustness in both laboratory-controlled and in-the-wild environments compared with the most recent methods.<\/jats:p>","DOI":"10.3390\/s23094204","type":"journal-article","created":{"date-parts":[[2023,4,24]],"date-time":"2023-04-24T03:04:08Z","timestamp":1682305448000},"page":"4204","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":38,"title":["Facial Expression Recognition Methods in the Wild Based on Fusion Feature of Attention Mechanism and LBP"],"prefix":"10.3390","volume":"23","author":[{"given":"Jun","family":"Liao","sequence":"first","affiliation":[{"name":"Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"},{"name":"College of Mechanical Engineering, Chongqing University of Technology, Chongqing 400054, China"},{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, 
Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"}]},{"given":"Yuanchang","family":"Lin","sequence":"additional","affiliation":[{"name":"Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"},{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"}]},{"given":"Tengyun","family":"Ma","sequence":"additional","affiliation":[{"name":"Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"},{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"}]},{"given":"Songxiying","family":"He","sequence":"additional","affiliation":[{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"}]},{"given":"Xiaofang","family":"Liu","sequence":"additional","affiliation":[{"name":"Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"},{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"}]},{"given":"Guotian","family":"He","sequence":"additional","affiliation":[{"name":"Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China"},{"name":"Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing Institute of Green Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2023,4,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1195","DOI":"10.1109\/TAFFC.2020.2981446","article-title":"Deep Facial Expression Recognition: A Survey","volume":"13","author":"Li","year":"2022","journal-title":"IEEE Trans. Affective Comput."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition\u2014Workshops, San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"ref_3","unstructured":"Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998, January 14\u201316). Coding Facial Expressions with Gabor Wavelets. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},{"key":"ref_4","unstructured":"Valstar, M.F., and Pantic, M. (2010, January 29). Induced Disgust, Happiness and Surprise: An Addition to the MMI Facial Expression Database. Proceedings of the 3rd International Workshop on Emotion, Paris, France."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"607","DOI":"10.1016\/j.imavis.2011.07.002","article-title":"Facial Expression Recognition from Near-Infrared Videos","volume":"29","author":"Zhao","year":"2011","journal-title":"Image Vis. Comput."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Li, S., Deng, W., and Du, J. (2017, January 21\u201326). Reliable Crowdsourcing and Deep Locality-Preserving Learning for Expression Recognition in the Wild. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.277"},{"key":"ref_7","unstructured":"Benitez-Quiroz, C.F., Srinivasan, R., and Martinez, A.M. (July, January 26). EmotioNet: An Accurate, Real-Time Algorithm for the Automatic Annotation of a Million Facial Expressions in the Wild. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Barsoum, E., Zhang, C., Ferrer, C.C., and Zhang, Z. (2016, January 31). Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution. Proceedings of the 18th ACM International Conference on Multimodal Interaction, New York, NY, USA.","DOI":"10.1145\/2993148.2993165"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","article-title":"AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild","volume":"10","author":"Mollahosseini","year":"2019","journal-title":"IEEE Trans. Affective Comput."},{"key":"ref_10","unstructured":"Dalal, N., and Triggs, B. (2005, January 20\u201326). Histograms of Oriented Gradients for Human Detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201905), San Diego, CA, USA."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"971","DOI":"10.1109\/TPAMI.2002.1017623","article-title":"Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns","volume":"24","author":"Ojala","year":"2002","journal-title":"IEEE Trans. Pattern Anal. Machine Intell."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive Image Features from Scale-Invariant Keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"38","DOI":"10.1109\/TSMCB.2010.2044788","article-title":"Graph-Preserving Sparse Nonnegative Matrix Factorization with Application to Facial Expression Recognition","volume":"41","author":"Ruicong","year":"2011","journal-title":"IEEE Trans. Syst. Man Cybern. B"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"2140","DOI":"10.1109\/TIP.2015.2416634","article-title":"Robust Representation and Recognition of Facial Emotions Using Extreme Sparse Learning","volume":"24","author":"Shojaeilangari","year":"2015","journal-title":"IEEE Trans. Image Process."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"8828245","DOI":"10.1155\/2021\/8828245","article-title":"Facial Expression Recognition with LBP and ORB Features","volume":"2021","author":"Niu","year":"2021","journal-title":"Comput. Intell. Neurosci."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1016\/j.neucom.2021.05.022","article-title":"Intensity Enhancement via GAN for Multimodal Face Expression Recognition","volume":"454","author":"Yang","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"3699","DOI":"10.3934\/mbe.2021186","article-title":"Identity Preserving Multi-Pose Facial Expression Recognition Using Fine Tuned VGG on the Latent Space Vector of Generative Adversarial Network","volume":"18","author":"Abiram","year":"2021","journal-title":"MBE"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"2439","DOI":"10.1109\/TIP.2018.2886767","article-title":"Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism","volume":"28","author":"Li","year":"2019","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"4057","DOI":"10.1109\/TIP.2019.2956143","article-title":"Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition","volume":"29","author":"Wang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_20","unstructured":"Goodfellow, I.J., Erhan, D., Carrier, P.L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., and Lee, D.-H. (2013). Neural Information Processing, Springer."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Dhall, A., Goecke, R., Ghosh, S., Joshi, J., Hoey, J., and Gedeon, T. (2017, January 3). From Individual to Group-Level Emotion Recognition: EmotiW 5.0. Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK.","DOI":"10.1145\/3136755.3143004"},{"key":"ref_22","unstructured":"Tang, Y. (2015). Deep Learning Using Linear Support Vector Machines. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Kanou, S.E., Ferrari, R.C., Mirza, M., Jean, S., Carrier, P.-L., Dauphin, Y., Boulanger-Lewandowski, N., Aggarwal, A., Zumer, J., and Lamblin, P. (2013, January 9\u201313). Combining Modality Specific Deep Neural Networks for Emotion Recognition in Video. Proceedings of the 15th ACM International Conference on Multimodal Interaction\u2014ICMI \u201913, Sydney, Australia.","DOI":"10.1145\/2522848.2531745"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"4630","DOI":"10.1109\/ACCESS.2017.2784096","article-title":"Facial Expression Recognition Using Weighted Mixture Deep Neural Network Based on Double-Channel Facial Images","volume":"6","author":"Yang","year":"2018","journal-title":"IEEE Access"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Bazzo, J.J., and Lamar, M.V. (2004, January 17\u201319). Recognizing Facial Actions Using Gabor Wavelets with Neutral Face Average Difference. 
Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR 2004), Seoul, Republic of Korea.","DOI":"10.14209\/sbrt.2004.68"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"2767","DOI":"10.1016\/j.ijleo.2012.08.040","article-title":"Facial Expression Recognition Based on Fusion Feature of PCA and LBP with SVM","volume":"124","author":"Luo","year":"2013","journal-title":"Opt.\u2014Int. J. Light Electron Opt."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Mehta, D., Siddiqui, M.F.H., and Javaid, A.Y. (2019). Recognition of Emotion Intensities Using Machine Learning Algorithms: A Comparative Study. Sensors, 19.","DOI":"10.3390\/s19081897"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"562","DOI":"10.1587\/transinf.E96.D.562","article-title":"Computational Models of Human Visual Attention and Their Implementations: A Survey","volume":"E96.D","author":"Kimura","year":"2013","journal-title":"IEICE Trans. Inf. Syst."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Fernandez, P.D.M., Pena, F.A.G., Ren, T.I., and Cunha, A. (2019, January 16\u201320). FERAtt: Facial Expression Recognition with Attention Net. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00112"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zhu, X., He, Z., Zhao, L., Dai, Z., and Yang, Q. (2022). A Cascade Attention Based Facial Expression Recognition Network by Fusing Multi-Scale Spatio-Temporal Features. Sensors, 22.","DOI":"10.3390\/s22041350"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1007\/978-3-030-01234-2_1","article-title":"CBAM: Convolutional Block Attention Module","volume":"Volume 11211","author":"Ferrari","year":"2018","journal-title":"Computer Vision\u2014ECCV 2018"},{"key":"ref_32","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. 
(July, January 26). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_33","unstructured":"Zhang, H., Cisse, M., Dauphin, Y.N., and Lopez-Paz, D. (2018). Mixup: Beyond Empirical Risk Minimization. arXiv."},{"key":"ref_34","unstructured":"Pramerdorfer, C., and Kampel, M. (2016). Facial Expression Recognition Using Convolutional Neural Networks: State of the Art. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"023025","DOI":"10.1117\/1.JEI.31.2.023025","article-title":"Facial Expression Recognition Based on Landmark-Guided Graph Convolutional Neural Network","volume":"31","author":"Meng","year":"2022","journal-title":"J. Electron. Imag."},{"key":"ref_36","unstructured":"Chang, T., Wen, G., Hu, Y., and Ma, J. (2018). Facial Expression Recognition Based on Complexity Perception Classification Algorithm. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"78000","DOI":"10.1109\/ACCESS.2019.2921220","article-title":"Recognizing Facial Expressions Using a Shallow Convolutional Neural Network","volume":"7","author":"Miao","year":"2019","journal-title":"IEEE Access"},{"key":"ref_38","unstructured":"Wang, W., Sun, Q., Chen, T., Cao, C., Zheng, Z., Xu, G., Qiu, H., and Fu, Y. (2019). A Fine-Grained Facial Expression Database for End-to-End Multi-Pose Facial Expression Recognition. arXiv."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1023","DOI":"10.1109\/TAFFC.2020.2986440","article-title":"BReG-NeXt: Facial Affect Computing Using Adaptive Residual Networks with Bounded Gradient","volume":"13","author":"Hasani","year":"2022","journal-title":"IEEE Trans. 
Affective Comput."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"2511","DOI":"10.1002\/int.22391","article-title":"Learning to Disentangle Emotion Factors for Facial Expression Recognition in the Wild","volume":"36","author":"Zhu","year":"2021","journal-title":"Int. J. Intell. Syst."},{"key":"ref_41","unstructured":"Khaireddin, Y., and Chen, Z. (2019). Facial Emotion Recognition: State of the Art Performance on FER2013. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Huang, C. (2017, January 4\u20136). Combining Convolutional Neural Networks for Emotion Recognition. Proceedings of the IEEE MIT Undergraduate Research Technology Conference (URTC), Cambridge, MA, USA.","DOI":"10.1109\/URTC.2017.8284175"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Albanie, S., Nagrani, A., Vedaldi, A., and Zisserman, A. (2018, January 15). Emotion Recognition in Speech Using Cross-Modal Transfer in the Wild. Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea.","DOI":"10.1145\/3240508.3240578"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Ma, F., Sun, B., and Li, S. (2021). Facial Expression Recognition with Visual Transformers and Attentional Selective Fusion. IEEE Trans. Affective Comput., 1-1.","DOI":"10.1109\/TAFFC.2021.3122146"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"20","DOI":"10.1109\/MMUL.2021.3076834","article-title":"Destruction and Reconstruction Learning for Facial Expression Recognition","volume":"28","author":"Xia","year":"2021","journal-title":"IEEE MultiMedia"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1016\/j.patrec.2022.01.013","article-title":"CERN: Compact Facial Expression Recognition Net","volume":"155","author":"Gera","year":"2022","journal-title":"Pattern Recognit. 
Lett."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"4435","DOI":"10.1016\/j.aej.2021.09.066","article-title":"A-MobileNet: An Approach of Facial Expression Recognition","volume":"61","author":"Nan","year":"2022","journal-title":"Alex. Eng. J."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"3314","DOI":"10.1109\/TCYB.2017.2662199","article-title":"Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification","volume":"52","author":"Rodriguez","year":"2022","journal-title":"IEEE Trans. Cybern."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.neucom.2019.05.005","article-title":"Three Convolutional Neural Network Models for Facial Expression Recognition in the Wild","volume":"355","author":"Shao","year":"2019","journal-title":"Neurocomputing"},{"key":"ref_50","unstructured":"Turan, C., Lam, K.-M., and He, X. (2018). Soft Locality Preserving Map (SLPM) for Facial Expression Recognition. arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"11532","DOI":"10.1109\/JSEN.2020.3028075","article-title":"GA-SVM-Based Facial Emotion Recognition Using Facial Geometric Features","volume":"21","author":"Liu","year":"2021","journal-title":"IEEE Sens. J."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"142243","DOI":"10.1109\/ACCESS.2020.3012703","article-title":"PyFER: A Facial Expression Recognizer Based on Convolutional Neural Networks","volume":"8","author":"Kabakus","year":"2020","journal-title":"IEEE Access"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"103110","DOI":"10.1016\/j.jvcir.2021.103110","article-title":"Facial Expression Recognition through Person-Wise Regeneration of Expressions Using Auxiliary Classifier Generative Adversarial Network (AC-GAN) Based Model","volume":"77","author":"Dharanya","year":"2021","journal-title":"J. Vis. Commun. 
Image Represent."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"387","DOI":"10.1007\/s11063-021-10636-1","article-title":"Meaningful Learning for Deep Facial Emotional Features","volume":"54","author":"Filali","year":"2022","journal-title":"Neural. Process. Lett."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"108401","DOI":"10.1016\/j.patcog.2021.108401","article-title":"Co-Attentive Multi-Task Convolutional Neural Network for Facial Expression Recognition","volume":"123","author":"Yu","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"6544","DOI":"10.1109\/TIP.2021.3093397","article-title":"Learning Deep Global Multi-Scale and Local Attention Features for Facial Expression Recognition in the Wild","volume":"30","author":"Zhao","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Li, Z., Han, S., Khan, A.S., Cai, J., Meng, Z., O\u2019Reilly, J., and Tong, Y. (2019, January 8\u201312). Pooling Map Adaptation in Convolutional Neural Network for Facial Expression Recognition. 
Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China.","DOI":"10.1109\/ICME.2019.00194"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4204\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:21:26Z","timestamp":1760124086000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/9\/4204"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,22]]},"references-count":57,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2023,5]]}},"alternative-id":["s23094204"],"URL":"https:\/\/doi.org\/10.3390\/s23094204","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,22]]}}}