{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,2]],"date-time":"2026-03-02T13:10:15Z","timestamp":1772457015840,"version":"3.50.1"},"reference-count":44,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2019,12,16]],"date-time":"2019-12-16T00:00:00Z","timestamp":1576454400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Human beings are particularly inclined to express real emotions through micro-expressions with subtle amplitude and short duration. Though people regularly recognize many distinct emotions, for the most part, research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. Like normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions but reflect more complex mental states and more abundant human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics. Consequently, synthesizing compound micro-expression images presents many challenges and limitations. The proposed method first applied the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. 
The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category have been labeled to serve as the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculated the optical flow information between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing databases of spontaneous micro-expressions (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The experiments demonstrate that the deep network framework designed in this study can effectively recognize the emotional information of both basic and compound micro-expressions.<\/jats:p>","DOI":"10.3390\/s19245553","type":"journal-article","created":{"date-parts":[[2019,12,17]],"date-time":"2019-12-17T02:59:01Z","timestamp":1576551541000},"page":"5553","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":31,"title":["A Convolutional Neural Network for Compound Micro-Expression Recognition"],"prefix":"10.3390","volume":"19","author":[{"given":"Yue","family":"Zhao","sequence":"first","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]},{"given":"Jiancheng","family":"Xu","sequence":"additional","affiliation":[{"name":"School of Electronics and Information, Northwestern Polytechnical University, Xi\u2019an 710129, China"}]}],"member":"1968","published-online":{"date-parts":[[2019,12,16]]},"reference":[{"key":"ref_1","unstructured":"Martin, C.W., and Ekman, P. (2009). The Philosophy of Deception, Oxford University Press. 
[3rd ed.]."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"124","DOI":"10.1037\/h0030377","article-title":"Constants across cultures in the face and emotion","volume":"17","author":"Ekman","year":"1971","journal-title":"J. Personal. Soc. Psychol."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"915","DOI":"10.1109\/TPAMI.2007.1110","article-title":"Dynamic texture recognition using local binary patterns with an application to facial expressions","volume":"29","author":"Zhao","year":"2007","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_4","unstructured":"Sze, T.L., and Kok, S.W. (2017, January 12\u201315). Micro-expression recognition using apex frame with phase information. Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Kuala Lumpur, Malaysia."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"299","DOI":"10.1109\/TAFFC.2015.2485205","article-title":"A Main Directional Mean Optical Flow Feature for Spontaneous Micro-Expression Recognition","volume":"7","author":"Yong","year":"2016","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_6","unstructured":"Sze, T.L., Raphael, W.P., and John, S. (2014, January 1\u20134). Optical strain based recognition of subtle emotions. Proceedings of the 2014 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Kuching, Malaysia."},{"key":"ref_7","unstructured":"Bruce, V., and Young, A.W. (2012). Messages from Facial Movements, Psychology Press. [2nd ed.]."},{"key":"ref_8","unstructured":"Ekman, P., and Friesen, W.V. (1976). Pictures of Facial Affect, Consulting Psychologists Press."},{"key":"ref_9","unstructured":"Hjortsjo, C.H. (1970). 
Man\u2019s Face and Mimic Language, Lund, Studentlitterature."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1454","DOI":"10.1073\/pnas.1322355111","article-title":"Compound facial expressions of emotion","volume":"111","author":"Du","year":"2014","journal-title":"Proc. Natl. Acad. Sci. USA"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"443","DOI":"10.31887\/DCNS.2015.17.4\/sdu","article-title":"Compound facial expressions of emotion: From basic research to clinical applications","volume":"17","author":"Du","year":"2015","journal-title":"Dialogues Clin. Neurosci."},{"key":"ref_12","unstructured":"Yan, W.J., Wu, Q., Liu, Y.J., Wang, S.J., and Fu, X. (2013, January 22\u201326). CASME Database: A Dataset of Spontaneous Micro-Expressions Collected from Neutralized Faces. Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China."},{"key":"ref_13","first-page":"102","article-title":"CASME II: An improved spontaneous micro-expression database and the baseline evaluation","volume":"9","author":"Yan","year":"2014","journal-title":"PLoS ONE"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Li, X., Pfister, T., Huang, X., Zhao, G., and Pietik\u00e4inen, M. (2013, January 22\u201326). A Spontaneous Micro-expression Database: Inducement, collection and baseline. Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China.","DOI":"10.1109\/FG.2013.6553717"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Qu, F., Wang, S.J., Yan, W.J., and Fu, X. (2016, January 17\u201322). CAS(ME)2: A Database of Spontaneous Macro-expressions and Micro-expressions. 
Proceedings of the 2016 International Conference on Human-Computer Interaction, Toronto, ON, Canada.","DOI":"10.1007\/978-3-319-39513-5_5"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"116","DOI":"10.1109\/TAFFC.2016.2573832","article-title":"SAMM: A Spontaneous Micro-Facial Movement Dataset","volume":"9","author":"Davison","year":"2018","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_17","unstructured":"Wang, Y., See, J., Phan, R.C.-W., and Oh, Y. (2014, January 1\u20135). LBP with Six Intersection Points: Reducing Redundant Information in LBP-TOP for Micro-expression Recognition. Proceedings of the 12th Asian Conference on Computer Vision, Singapore."},{"key":"ref_18","unstructured":"Sze, T.L., John, S., and Kok, S.W. (2016, January 20\u201324). Automatic Micro-expression Recognition from Long Video Using a Single Spotted Apex. Proceedings of the 2016 Asian Conference on Computer Vision International Workshops, Taipei, Taiwan."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Li, Y., Huang, X., and Zhao, G. (2018, January 7\u201310). Can Micro-Expression be Recognized Based on Single Apex Frame. Proceedings of the 2018 International Conference on Image Processing, Athens, Greece.","DOI":"10.1109\/ICIP.2018.8451376"},{"key":"ref_20","unstructured":"Sze, T.L., Gan, Y.S., and Wei, C.Y. (2018). OFF-ApexNet on Micro-expression Recognition System. arXiv."},{"key":"ref_21","unstructured":"Sze, T.L., Gan, Y.D., and John, S. (2019, January 14\u201318). Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition. Proceedings of the 14th IEEE International Conference on Automatic Face and Gesture Recognition, Lille, France."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Huang, X., Wang, S., Zhao, G., and Piteikainen, M. (2015, January 7\u201313). Facial Micro-Expression Recognition Using Spatiotemporal Local Binary Pattern with Integral Projection. 
Proceedings of the 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), Santiago, Chile.","DOI":"10.1109\/ICCVW.2015.10"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"564","DOI":"10.1016\/j.neucom.2015.10.096","article-title":"Spontaneous facial micro-expression analysis using Spatiotemporal Completed Local Quantized Patterns","volume":"175","author":"Huang","year":"2016","journal-title":"Neurocomputing"},{"key":"ref_24","unstructured":"Matthew, S., Sridhar, G., Vasant, M., and Dmitry, G. (2009, January 7\u20138). Towards macro- and micro-expression spotting in video using strain patterns. Proceedings of the 2009 Conference: Applications of Computer Vision (WACV), Snowbird, UT, USA."},{"key":"ref_25","unstructured":"Sze, T.L., John, S., and Raphael, C.W. (2014, January 1\u20132). Subtle Expression Recognition Using Optical Strain Weighted Features. Proceedings of the Asian Conference on Computer Vision 2014 Workshops, Singapore."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"135","DOI":"10.1016\/j.neucom.2018.05.083","article-title":"Deep Visual Domain Adaptation: A Survey","volume":"312","author":"Mei","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1007\/s12193-015-0195-2","article-title":"EmoNets: Multimodal deep learning approaches for emotion recognition in video","volume":"10","author":"Samira","year":"2016","journal-title":"J. Multimodal User Interfaces"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Liu, A., Yang, Y., Sun, Q., and Xu, Q. (2018, January 20\u201322). A Deep Fully Convolution Neural Network for Semantic Segmentation Based on Adaptive Feature Fusion. Proceedings of the 2018 5th International Conference on Information Science and Control Engineering (ICISCE), Zhengzhou, China.","DOI":"10.1109\/ICISCE.2018.00013"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Liu, P., Han, S., Meng, Z., and Tong, Y. 
(2014, January 23\u201328). Facial Expression Recognition via a Boosted Deep Belief Network. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.233"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Kim, Y., Lee, H., and Provost, E.M. (2013, January 26\u201331). Deep learning for robust feature generation in audiovisual emotion recognition. Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada.","DOI":"10.1109\/ICASSP.2013.6638346"},{"key":"ref_31","unstructured":"Patel, D., Hong, X., and Zhao, G. (2016, January 4\u20138). Selective deep features for micro-expression recognition. Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"223","DOI":"10.1109\/TAFFC.2017.2695999","article-title":"Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition","volume":"10","author":"Dae","year":"2019","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Ekman, P., and Wallace, V.F. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press. [3rd ed.].","DOI":"10.1037\/t27734-000"},{"key":"ref_34","unstructured":"Ekman, P., and Rosenberg, E. (2005). What the Face Reveals, Oxford University Press. [2nd ed.]."},{"key":"ref_35","first-page":"236","article-title":"Facial expressions of emotion","volume":"17","author":"Ekman","year":"1979","journal-title":"J. Nonverbal Behav."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"519","DOI":"10.1145\/1073204.1073223","article-title":"Motion magnification","volume":"34","author":"Liu","year":"2005","journal-title":"ACM Trans. 
Graph."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"65","DOI":"10.1145\/2185520.2185561","article-title":"Eulerian video magnification for revealing subtle changes in the world","volume":"31","author":"Wu","year":"2012","journal-title":"ACM Trans. Graph."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"145","DOI":"10.1145\/2461912.2461966","article-title":"Phase-Based Video Motion Processing","volume":"32","author":"Wadhwa","year":"2013","journal-title":"ACM Trans. Graph."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.image.2017.11.006","article-title":"Less is More: Micro-expression Recognition from Video using Apex Frame","volume":"62","author":"Sze","year":"2018","journal-title":"Signal Process. Image Commun."},{"key":"ref_40","first-page":"214","article-title":"A Duality Based Approach for Realtime TV-L1 Optical Flow","volume":"9","author":"Christopher","year":"2007","journal-title":"Pattern Recognit."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"137","DOI":"10.5201\/ipol.2013.26","article-title":"TV-L1 Optical Flow Estimation","volume":"3","author":"Javier","year":"2013","journal-title":"Image Process. Line"},{"key":"ref_42","unstructured":"Jia, X., and Gengming, Z. (2017, January 21\u201323). Joint Face Detection and Facial Expression Recognition with MTCNN. Proceedings of the 2017 4th International Conference on Information Science and Control Engineering (ICISCE), Changsha, China."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1016\/0004-3702(81)90024-2","article-title":"Determining optical flow","volume":"17","author":"Horn","year":"1981","journal-title":"Artif. Intell."},{"key":"ref_44","first-page":"1097","article-title":"Imagenet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky","year":"2012","journal-title":"Adv. Neural Inf. Process. 
Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/24\/5553\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:42:42Z","timestamp":1760190162000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/24\/5553"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,12,16]]},"references-count":44,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2019,12]]}},"alternative-id":["s19245553"],"URL":"https:\/\/doi.org\/10.3390\/s19245553","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,12,16]]}}}