{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T04:06:00Z","timestamp":1777435560376,"version":"3.51.4"},"reference-count":145,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2023,5,6]],"date-time":"2023-05-06T00:00:00Z","timestamp":1683331200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,6]],"date-time":"2023-05-06T00:00:00Z","timestamp":1683331200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Real-Time Image Proc"],"published-print":{"date-parts":[[2023,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Facial expression recognition (FER) is utilized in various fields that analyze facial expressions. FER is attracting increasing attention for its role in improving the convenience in human life. It is widely applied in human\u2013computer interaction tasks. However, recently, FER tasks have encountered certain data and training issues. To address these issues in FER, few-shot learning (FSL) has been researched as a new approach. In this paper, we focus on analyzing FER techniques based on FSL and consider the computational complexity and processing time in these models. FSL has been researched as it can solve the problems of training with few datasets and generalizing in a wild-environmental condition. Based on our analysis, we describe certain existing challenges in the use of FSL in FER systems and suggest research directions to resolve these issues. 
FER using FSL can be time efficient and reduce the complexity in many other real-time processing tasks and is an important area for further research.<\/jats:p>","DOI":"10.1007\/s11554-023-01310-x","type":"journal-article","created":{"date-parts":[[2023,5,6]],"date-time":"2023-05-06T02:01:27Z","timestamp":1683338487000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":20,"title":["Few-shot learning for facial expression recognition: a comprehensive survey"],"prefix":"10.1007","volume":"20","author":[{"given":"Chae-Lin","family":"Kim","sequence":"first","affiliation":[]},{"given":"Byung-Gyu","family":"Kim","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,6]]},"reference":[{"issue":"1","key":"1310_CR1","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/TAFFC.2014.2316163","volume":"5","author":"J Whitehill","year":"2014","unstructured":"Whitehill, J., Serpell, Z., Lin, Y.-C., Foster, A., Movellan, J.R.: The faces of engagement: Automatic recognition of student engagement from facial expressions. IEEE Trans. Affect. Comput. 5(1), 86\u201398 (2014)","journal-title":"IEEE Trans. Affect. Comput."},{"key":"1310_CR2","doi-asserted-by":"crossref","unstructured":"Jerritta, S., Murugappan, M., Nagarajan, R., Wan, K.: Physiological signals based human emotion recognition: a review. In: 2011 IEEE 7th International Colloquium on Signal Processing and Its Applications, pp. 410\u2013415 . IEEE (2011)","DOI":"10.1109\/CSPA.2011.5759912"},{"key":"1310_CR3","doi-asserted-by":"publisher","first-page":"136944","DOI":"10.1109\/ACCESS.2021.3113464","volume":"9","author":"OS Ekundayo","year":"2021","unstructured":"Ekundayo, O.S., Viriri, S.: Facial expression recognition: a review of trends and techniques. 
Ieee Access 9, 136944\u2013136973 (2021)","journal-title":"Ieee Access"},{"key":"1310_CR4","unstructured":"Li, S., Deng, W.: Deep facial expression recognition: a survey. IEEE Trans. Affect. Comput. (2020)"},{"issue":"3","key":"1310_CR5","doi-asserted-by":"publisher","first-page":"155","DOI":"10.1049\/iet-bmt.2014.0104","volume":"5","author":"S Deshmukh","year":"2016","unstructured":"Deshmukh, S., Patwardhan, M., Mahajan, A.: Survey on real-time facial expression recognition techniques. Iet Biometrics 5(3), 155\u2013163 (2016)","journal-title":"Iet Biometrics"},{"key":"1310_CR6","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn-Kanade dataset (ck+): a complete dataset for action unit and emotion-specified expression. In: 2010 Ieee Computer Society Conference on Computer Vision and Pattern Recognition-workshops, pp. 94\u2013101 . IEEE (2010)","DOI":"10.1109\/CVPRW.2010.5543262"},{"issue":"1","key":"1310_CR7","doi-asserted-by":"publisher","first-page":"56","DOI":"10.1007\/BF01115465","volume":"1","author":"P Ekman","year":"1976","unstructured":"Ekman, P., Friesen, W.V.: Measuring facial movement. Environ. Psychol. Nonverb. Behav. 1(1), 56\u201375 (1976)","journal-title":"Environ. Psychol. Nonverb. Behav."},{"issue":"1","key":"1310_CR8","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","volume":"10","author":"A Mollahosseini","year":"2017","unstructured":"Mollahosseini, A., Hasani, B., Mahoor, M.H.: Affectnet: a database for facial expression, valence, and arousal computing in the wild. IEEE Trans. Affect. Comput. 10(1), 18\u201331 (2017)","journal-title":"IEEE Trans. Affect. Comput."},{"key":"1310_CR9","doi-asserted-by":"crossref","unstructured":"Dhall, A., Ramana\u00a0Murthy, O., Goecke, R., Joshi, J., Gedeon, T.: Video and image based emotion recognition challenges in the wild: Emotiw 2015. 
In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 423\u2013426 (2015)","DOI":"10.1145\/2818346.2829994"},{"key":"1310_CR10","doi-asserted-by":"crossref","unstructured":"Dhall, A., Goecke, R., Ghosh, S., Joshi, J., Hoey, J., Gedeon, T.: From individual to group-level emotion recognition: Emotiw 5.0. In: Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 524\u2013528 (2017)","DOI":"10.1145\/3136755.3143004"},{"key":"1310_CR11","doi-asserted-by":"crossref","unstructured":"Goodfellow, I.J., Erhan, D., Carrier, P.L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., Lee, D.-H.,: Challenges in representation learning: a report on three machine learning contests. In: International Conference on Neural Information Processing, pp. 117\u2013124. Springer (2013)","DOI":"10.1007\/978-3-642-42051-1_16"},{"key":"1310_CR12","doi-asserted-by":"crossref","unstructured":"Barsoum, E., Zhang, C., Ferrer, C.C., Zhang, Z.: Training deep networks for facial expression recognition with crowd-sourced label distribution. In: Proceedings of the 18th ACM International Conference on Multimodal Interaction, pp. 279\u2013283 (2016)","DOI":"10.1145\/2993148.2993165"},{"key":"1310_CR13","doi-asserted-by":"crossref","unstructured":"Li, S., Deng, W., Du, J.: Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2852\u20132861 (2017)","DOI":"10.1109\/CVPR.2017.277"},{"issue":"15","key":"1310_CR14","doi-asserted-by":"publisher","first-page":"1454","DOI":"10.1073\/pnas.1322355111","volume":"111","author":"S Du","year":"2014","unstructured":"Du, S., Tao, Y., Martinez, A.M.: Compound facial expressions of emotion. Proc. Natl. Acad. Sci. 111(15), 1454\u20131462 (2014)","journal-title":"Proc. Natl. Acad. 
Sci."},{"issue":"11","key":"1310_CR15","first-page":"120","volume":"25","author":"G Bradski","year":"2000","unstructured":"Bradski, G.: The opencv library. Dr. Dobb\u2019s J. Softw. Tools Prof. Program. 25(11), 120\u2013123 (2000)","journal-title":"Dr. Dobb\u2019s J. Softw. Tools Prof. Program."},{"key":"1310_CR16","doi-asserted-by":"crossref","unstructured":"Fabian\u00a0Benitez-Quiroz, C., Srinivasan, R., Martinez, A.M.: Emotionet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5562\u20135570 (2016)","DOI":"10.1109\/CVPR.2016.600"},{"issue":"03","key":"1310_CR17","doi-asserted-by":"publisher","first-page":"34","DOI":"10.1109\/MMUL.2012.26","volume":"19","author":"A Dhall","year":"2012","unstructured":"Dhall, A., Goecke, R., Lucey, S., Gedeon, T.: Collecting large, richly annotated facial-expression databases from movies. IEEE Multimed. 19(03), 34\u201341 (2012)","journal-title":"IEEE Multimed."},{"issue":"1","key":"1310_CR18","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1134\/S1054661815040070","volume":"26","author":"J Zhou","year":"2016","unstructured":"Zhou, J., Zhang, S., Mei, H., Wang, D.: A method of facial expression recognition based on Gabor and nmf. Pattern Recognit Image Anal. 26(1), 119\u2013124 (2016)","journal-title":"Pattern Recognit Image Anal."},{"key":"1310_CR19","doi-asserted-by":"crossref","unstructured":"Darwin, C., Prodger, P.: The Expression of the Emotions in Man and Animals. Oxford University Press, USA,??? (1998)","DOI":"10.1093\/oso\/9780195112719.002.0002"},{"key":"1310_CR20","doi-asserted-by":"crossref","unstructured":"Zeng, J., Shan, S., Chen, X.: Facial expression recognition with inconsistently annotated datasets. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 
222\u2013237 (2018)","DOI":"10.1007\/978-3-030-01261-8_14"},{"issue":"2","key":"1310_CR21","doi-asserted-by":"publisher","first-page":"97","DOI":"10.1109\/34.908962","volume":"23","author":"Y-I Tian","year":"2001","unstructured":"Tian, Y.-I., Kanade, T., Cohn, J.F.: Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23(2), 97\u2013115 (2001)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"issue":"4","key":"1310_CR22","doi-asserted-by":"publisher","first-page":"363","DOI":"10.1007\/BF00992972","volume":"16","author":"D Matsumoto","year":"1992","unstructured":"Matsumoto, D.: More evidence for the universality of a contempt expression. Motiv. Emot. 16(4), 363\u2013368 (1992)","journal-title":"Motiv. Emot."},{"issue":"3\u20134","key":"1310_CR23","doi-asserted-by":"publisher","first-page":"169","DOI":"10.1080\/02699939208411068","volume":"6","author":"P Ekman","year":"1992","unstructured":"Ekman, P.: An argument for basic emotions. Cognit. Emot. 6(3\u20134), 169\u2013200 (1992)","journal-title":"Cognit. Emot."},{"key":"1310_CR24","doi-asserted-by":"publisher","first-page":"69311","DOI":"10.1109\/ACCESS.2020.2986654","volume":"8","author":"SK Jarraya","year":"2020","unstructured":"Jarraya, S.K., Masmoudi, M., Hammami, M.: Compound emotion recognition of autistic children during meltdown crisis based on deep spatio-temporal analysis of facial geometric features. IEEE Access 8, 69311\u201369326 (2020)","journal-title":"IEEE Access"},{"key":"1310_CR25","doi-asserted-by":"publisher","first-page":"26391","DOI":"10.1109\/ACCESS.2018.2831927","volume":"6","author":"J Guo","year":"2018","unstructured":"Guo, J., Lei, Z., Wan, J., Avots, E., Hajarolasvadi, N., Knyazev, B., Kuharenko, A., Junior, J.C.S.J., Bar\u00f3, X., Demirel, H.: Dominant and complementary emotion recognition from still images of faces. 
IEEE Access 6, 26391\u201326403 (2018)","journal-title":"IEEE Access"},{"key":"1310_CR26","first-page":"39","volume":"3","author":"RE Haamer","year":"2017","unstructured":"Haamer, R.E., Rusadze, E., Lsi, I., Ahmed, T., Escalera, S., Anbarjafari, G.: Review on emotion recognition databases. Hum. Robot Interact. Theor. Appl. 3, 39\u201363 (2017)","journal-title":"Hum. Robot Interact. Theor. Appl."},{"key":"1310_CR27","unstructured":"Slimani, K., Ruichek, Y., Messoussi, R.: Compound facial emotional expression recognition using cnn deep features. Eng. Lett. 30(4), 1402\u20131416 (2022)"},{"issue":"22","key":"1310_CR28","doi-asserted-by":"publisher","first-page":"2847","DOI":"10.3390\/electronics10222847","volume":"10","author":"D Kami\u0144ska","year":"2021","unstructured":"Kami\u0144ska, D., Aktas, K., Rizhinashvili, D., Kuklyanov, D., Sham, A.H., Escalera, S., Nasrollahi, K., Moeslund, T.B., Anbarjafari, G.: Two-stage recognition and beyond for compound facial emotion recognition. Electronics 10(22), 2847 (2021)","journal-title":"Electronics"},{"issue":"10","key":"1310_CR29","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23(10), 1499\u20131503 (2016)","journal-title":"IEEE Signal Process. Lett."},{"key":"1310_CR30","first-page":"1755","volume":"10","author":"DE King","year":"2009","unstructured":"King, D.E.: Dlib-ml: a machine learning toolkit. J. Mach. Learn. Res. 10, 1755\u20131758 (2009)","journal-title":"J. Mach. Learn. Res."},{"key":"1310_CR31","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: Retinaface: Single-shot multi-level face localisation in the wild. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
5203\u20135212 (2020)","DOI":"10.1109\/CVPR42600.2020.00525"},{"key":"1310_CR32","doi-asserted-by":"crossref","unstructured":"Bulat, A., Tzimiropoulos, G.: How far are we from solving the 2d & 3d face alignment problem?(and a dataset of 230,000 3d facial landmarks). In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1021\u20131030 (2017)","DOI":"10.1109\/ICCV.2017.116"},{"key":"1310_CR33","doi-asserted-by":"crossref","unstructured":"Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201905), vol. 1, pp. 886\u2013893 . Ieee (2005)","DOI":"10.1109\/CVPR.2005.177"},{"issue":"26","key":"1310_CR34","first-page":"429","volume":"93","author":"D Gabor","year":"1946","unstructured":"Gabor, D.: Theory of communication. Part 1: the analysis of information. J. Inst. Electr. Eng.-Part III: Radio Commun. Eng. 93(26), 429\u2013441 (1946)","journal-title":"J. Inst. Electr. Eng.-Part III: Radio Commun. Eng."},{"issue":"6","key":"1310_CR35","doi-asserted-by":"publisher","first-page":"803","DOI":"10.1016\/j.imavis.2008.08.005","volume":"27","author":"C Shan","year":"2009","unstructured":"Shan, C., Gong, S., McOwan, P.W.: Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis. Comput. 27(6), 803\u2013816 (2009)","journal-title":"Image Vis. Comput."},{"issue":"6","key":"1310_CR36","doi-asserted-by":"publisher","first-page":"1635","DOI":"10.1109\/TIP.2010.2042645","volume":"19","author":"X Tan","year":"2010","unstructured":"Tan, X., Triggs, B.: Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 19(6), 1635\u20131650 (2010)","journal-title":"IEEE Trans. 
Image Process."},{"issue":"1","key":"1310_CR37","first-page":"38","volume":"41","author":"R Zhi","year":"2010","unstructured":"Zhi, R., Flierl, M., Ruan, Q., Kleijn, W.B.: Graph-preserving sparse nonnegative matrix factorization with application to facial expression recognition. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 41(1), 38\u201352 (2010)","journal-title":"IEEE Trans. Syst. Man Cybern. Part B (Cybernetics)"},{"key":"1310_CR38","unstructured":"Valstar, M., Pantic, M.,: Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In: Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect, p. 65 . Paris, France (2010)"},{"issue":"9","key":"1310_CR39","doi-asserted-by":"publisher","first-page":"607","DOI":"10.1016\/j.imavis.2011.07.002","volume":"29","author":"G Zhao","year":"2011","unstructured":"Zhao, G., Huang, X., Taini, M., Li, S.Z., Pietik\u00e4Inen, M.: Facial expression recognition from near-infrared videos. Image Vis. Comput. 29(9), 607\u2013619 (2011)","journal-title":"Image Vis. Comput."},{"key":"1310_CR40","doi-asserted-by":"publisher","first-page":"41273","DOI":"10.1109\/ACCESS.2019.2907327","volume":"7","author":"J-H Kim","year":"2019","unstructured":"Kim, J.-H., Kim, B.-G., Roy, P.P., Jeong, D.-M.: Efficient facial expression recognition algorithm based on hierarchical deep neural network structure. IEEE Access 7, 41273\u201341285 (2019)","journal-title":"IEEE Access"},{"issue":"21","key":"1310_CR41","doi-asserted-by":"publisher","first-page":"6954","DOI":"10.3390\/s21216954","volume":"21","author":"S-J Park","year":"2021","unstructured":"Park, S.-J., Kim, B.-G., Chilamkurti, N.: A robust facial expression recognition algorithm based on multi-rate feature fusion scheme. 
Sensors 21(21), 6954 (2021)","journal-title":"Sensors"},{"issue":"3","key":"1310_CR42","doi-asserted-by":"publisher","first-page":"273","DOI":"10.1007\/BF00994018","volume":"20","author":"C Cortes","year":"1995","unstructured":"Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273\u2013297 (1995)","journal-title":"Mach. Learn."},{"key":"1310_CR43","unstructured":"Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Icml, vol. 96, pp. 148\u2013156 . Citeseer (1996)"},{"issue":"11","key":"1310_CR44","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278\u20132324 (1998)","journal-title":"Proc. IEEE"},{"key":"1310_CR45","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: Imagenet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248\u2013255 . Ieee (2009)","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"1310_CR46","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"issue":"8","key":"1310_CR47","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735\u20131780 (1997)","journal-title":"Neural Comput."},{"key":"1310_CR48","unstructured":"Fe-Fei, L.,: A Bayesian approach to unsupervised one-shot learning of object categories. In: Proceedings Ninth IEEE International Conference on Computer Vision, pp. 1134\u20131141 . 
IEEE (2003)"},{"key":"1310_CR49","unstructured":"Fink, M.: Object classification from a single example utilizing class relevance metrics. Adv. Neural Inf. Process. Syst. 17 (2004)"},{"issue":"2","key":"1310_CR50","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1037\/0033-295X.94.2.115","volume":"94","author":"I Biederman","year":"1987","unstructured":"Biederman, I.: Recognition-by-components: a theory of human image understanding. Psychol. Rev. 94(2), 115 (1987)","journal-title":"Psychol. Rev."},{"key":"1310_CR51","unstructured":"Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In: 2004 Conference on Computer Vision and Pattern Recognition Workshop, pp. 178\u2013178. IEEE (2004)"},{"issue":"4","key":"1310_CR52","doi-asserted-by":"publisher","first-page":"594","DOI":"10.1109\/TPAMI.2006.79","volume":"28","author":"L Fei-Fei","year":"2006","unstructured":"Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 594\u2013611 (2006)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1310_CR53","unstructured":"Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: International Conference on Machine Learning, pp. 1126\u20131135. PMLR (2017)"},{"key":"1310_CR54","doi-asserted-by":"crossref","unstructured":"Cai, Q., Pan, Y., Yao, T., Yan, C., Mei, T.: Memory matching networks for one-shot image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4080\u20134088 (2018)","DOI":"10.1109\/CVPR.2018.00429"},{"key":"1310_CR55","unstructured":"Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. Adv. Neural Inform. Process. Syst. 
30 (2017)"},{"issue":"3","key":"1310_CR56","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3386252","volume":"53","author":"Y Wang","year":"2020","unstructured":"Wang, Y., Yao, Q., Kwok, J.T., Ni, L.M.: Generalizing from a few examples: a survey on few-shot learning. ACM Comput. Surv. (csur) 53(3), 1\u201334 (2020)","journal-title":"ACM Comput. Surv. (csur)"},{"issue":"6266","key":"1310_CR57","doi-asserted-by":"publisher","first-page":"1332","DOI":"10.1126\/science.aab3050","volume":"350","author":"BM Lake","year":"2015","unstructured":"Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science 350(6266), 1332\u20131338 (2015)","journal-title":"Science"},{"key":"1310_CR58","unstructured":"Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning (2016)"},{"key":"1310_CR59","unstructured":"Reed, S., Chen, Y., Paine, T., Oord, A.v.d., Eslami, S., Rezende, D., Vinyals, O., de Freitas, N.: Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304 (2017)"},{"key":"1310_CR60","unstructured":"Rezende, D., Danihelka, I., Gregor, K., Wierstra, D.: One-shot generalization in deep generative models. In: International Conference on Machine Learning, pp. 1521\u20131529. PMLR (2016)"},{"key":"1310_CR61","doi-asserted-by":"crossref","unstructured":"Wu, J., Liu, S., Huang, D., Wang, Y.: Multi-scale positive sample refinement for few-shot object detection. In: European Conference on Computer Vision, pp. 456\u2013472 . Springer (2020)","DOI":"10.1007\/978-3-030-58517-4_27"},{"key":"1310_CR62","unstructured":"Wang, X., Huang, T.E., Darrell, T., Gonzalez, J.E., Yu, F.: Frustratingly simple few-shot object detection. arXiv preprint arXiv:2003.06957 (2020)"},{"key":"1310_CR63","doi-asserted-by":"crossref","unstructured":"Sun, B., Li, B., Cai, S., Yuan, Y., Zhang, C.: Fsce: Few-shot object detection via contrastive proposal encoding. 
In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 7352\u20137362 (2021)","DOI":"10.1109\/CVPR46437.2021.00727"},{"key":"1310_CR64","doi-asserted-by":"crossref","unstructured":"Jung, I., You, K., Noh, H., Cho, M., Han, B.: Real-time object tracking via meta-learning: efficient model adaptation and one-shot channel pruning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 11205\u201311212 (2020)","DOI":"10.1609\/aaai.v34i07.6779"},{"key":"1310_CR65","unstructured":"Garcia, V., Bruna, J.: Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043 (2017)"},{"key":"1310_CR66","unstructured":"Yang, F.S.Y., Zhang, L., Xiang, T., Torr, P., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: CVPR, vol. 1, p. 6 (2018)"},{"key":"1310_CR67","doi-asserted-by":"crossref","unstructured":"Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199\u20131208 (2018)","DOI":"10.1109\/CVPR.2018.00131"},{"key":"1310_CR68","doi-asserted-by":"crossref","unstructured":"Wang, K., Liew, J.H., Zou, Y., Zhou, D., Feng, J.: Panet: Few-shot image semantic segmentation with prototype alignment. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 9197\u20139206 (2019)","DOI":"10.1109\/ICCV.2019.00929"},{"key":"1310_CR69","doi-asserted-by":"crossref","unstructured":"Ouyang, C., Biffi, C., Chen, C., Kart, T., Qiu, H., Rueckert, D.: Self-supervision with superpixels: Training few-shot medical image segmentation without annotation. In: European Conference on Computer Vision, pp. 762\u2013780 . 
Springer (2020)","DOI":"10.1007\/978-3-030-58526-6_45"},{"key":"1310_CR70","doi-asserted-by":"crossref","unstructured":"Liu, Y., Zhang, X., Zhang, S., He, X.: Part-aware prototype network for few-shot semantic segmentation. In: European Conference on Computer Vision, pp. 142\u2013158. Springer (2020)","DOI":"10.1007\/978-3-030-58545-7_9"},{"key":"1310_CR71","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Zhang, Y., Feng, R., Zhang, T., Fan, W.: Zero-shot sketch-based image retrieval via graph convolution network. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 12943\u201312950 (2020)","DOI":"10.1609\/aaai.v34i07.6993"},{"key":"1310_CR72","doi-asserted-by":"crossref","unstructured":"Gui, L.-Y., Wang, Y.-X., Ramanan, D., Moura, J.M.: Few-shot human motion prediction via meta-learning. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 432\u2013450 (2018)","DOI":"10.1007\/978-3-030-01237-3_27"},{"key":"1310_CR73","doi-asserted-by":"crossref","unstructured":"Xian, Y., Korbar, B., Douze, M., Schiele, B., Akata, Z., Torresani, L.: Generalized many-way few-shot video classification. In: European Conference on Computer Vision, pp. 111\u2013127. Springer (2020)","DOI":"10.1007\/978-3-030-65414-6_10"},{"key":"1310_CR74","doi-asserted-by":"crossref","unstructured":"Michalkiewicz, M., Parisot, S., Tsogkas, S., Baktashmotlagh, M., Eriksson, A., Belilovsky, E.: Few-shot single-view 3-d object reconstruction with compositional priors. In: European Conference on Computer Vision, pp. 614\u2013630 . Springer (2020)","DOI":"10.1007\/978-3-030-58595-2_37"},{"issue":"22","key":"1310_CR75","doi-asserted-by":"publisher","first-page":"29799","DOI":"10.1007\/s11042-018-5772-4","volume":"77","author":"L Yan","year":"2018","unstructured":"Yan, L., Zheng, Y., Cao, J.: Few-shot learning for short text classification. Multimed. Tools Appl. 77(22), 29799\u201329810 (2018)","journal-title":"Multimed. 
Tools Appl."},{"key":"1310_CR76","doi-asserted-by":"publisher","first-page":"271","DOI":"10.1016\/j.patrec.2020.05.007","volume":"135","author":"J Xu","year":"2020","unstructured":"Xu, J., Du, Q.: Learning transferable features in meta-learning for few-shot text classification. Pattern Recogn. Lett. 135, 271\u2013278 (2020)","journal-title":"Pattern Recogn. Lett."},{"key":"1310_CR77","doi-asserted-by":"publisher","first-page":"165786","DOI":"10.1109\/ACCESS.2021.3133657","volume":"9","author":"N Kumar","year":"2021","unstructured":"Kumar, N., Baghel, B.K.: Intent focused semantic parsing and zero-shot learning for out-of-domain detection in spoken language understanding. IEEE Access 9, 165786\u2013165794 (2021)","journal-title":"IEEE Access"},{"key":"1310_CR78","unstructured":"Kaiser, \u0141., Nachum, O., Roy, A., Bengio, S.: Learning to remember rare events. arXiv preprint arXiv:1703.03129 (2017)"},{"key":"1310_CR79","doi-asserted-by":"crossref","unstructured":"Han, X., Zhu, H., Yu, P., Wang, Z., Yao, Y., Liu, Z., Sun, M.: Fewrel: a large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. arXiv preprint arXiv:1810.10147 (2018)","DOI":"10.18653\/v1\/D18-1514"},{"key":"1310_CR80","doi-asserted-by":"crossref","unstructured":"Lampert, C.H., Nickisch, H., Harmeling, S.: Learning to detect unseen object classes by between-class attribute transfer. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 951\u2013958. IEEE (2009)","DOI":"10.1109\/CVPR.2009.5206594"},{"key":"1310_CR81","doi-asserted-by":"crossref","unstructured":"Douze, M., Szlam, A., Hariharan, B., J\u00e9gou, H.: Low-shot learning with large-scale diffusion. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
3349\u20133358 (2018)","DOI":"10.1109\/CVPR.2018.00353"},{"key":"1310_CR82","doi-asserted-by":"crossref","unstructured":"Pfister, T., Charles, J., Zisserman, A.: Domain-adaptive discriminative one-shot learning of gestures. In: European Conference on Computer Vision, pp. 814\u2013829 . Springer (2014)","DOI":"10.1007\/978-3-319-10599-4_52"},{"key":"1310_CR83","doi-asserted-by":"crossref","unstructured":"Wu, Y., Lin, Y., Dong, X., Yan, Y., Ouyang, W., Yang, Y.: Exploit the unknown gradually: one-shot video-based person re-identification by stepwise learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5177\u20135186 (2018)","DOI":"10.1109\/CVPR.2018.00543"},{"key":"1310_CR84","unstructured":"Tsai, Y.-H.H., Salakhutdinov, R.: Improving one-shot learning through fusing side information. arXiv preprint arXiv:1710.08347 (2017)"},{"issue":"11","key":"1310_CR85","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1145\/3422622","volume":"63","author":"I Goodfellow","year":"2020","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Commun. ACM 63(11), 139\u2013144 (2020)","journal-title":"Commun. ACM"},{"key":"1310_CR86","unstructured":"Gao, H., Shou, Z., Zareian, A., Zhang, H., Chang, S.-F.: Low-shot learning via covariance-preserving adversarial augmentation networks. Adv. Neural Inform. Process. Syst. 31 (2018)"},{"issue":"1","key":"1310_CR87","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1023\/A:1007379606734","volume":"28","author":"R Caruana","year":"1997","unstructured":"Caruana, R.: Multitask learning. Mach. Learn. 28(1), 41\u201375 (1997)","journal-title":"Mach. Learn."},{"key":"1310_CR88","unstructured":"Hu, Z., Li, X., Tu, C., Liu, Z., Sun, M.: Few-shot charge prediction with discriminative legal attributes. In: Proceedings of the 27th International Conference on Computational Linguistics, pp. 
487\u2013498 (2018)"},{"key":"1310_CR89","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tang, H., Jia, K.: Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 233\u2013248 (2018)","DOI":"10.1007\/978-3-030-01237-3_15"},{"key":"1310_CR90","unstructured":"Motiian, S., Jones, Q., Iranmanesh, S., Doretto, G.: Few-shot adversarial domain adaptation. Adv. Neural Inform. Process. Syst. 30 (2017)"},{"key":"1310_CR91","doi-asserted-by":"crossref","unstructured":"Yan, W., Yap, J., Mori, G.: Multi-task transfer methods to improve one-shot learning for multimedia event detection. In: BMVC, pp. 37\u20131 (2015)","DOI":"10.5244\/C.29.37"},{"key":"1310_CR92","unstructured":"Luo, Z., Zou, Y., Hoffman, J., Fei-Fei, L.F.: Label efficient learning of transferable representations acrosss domains and tasks. Adv. Neural Inform. Process. Syst. 30 (2017)"},{"key":"1310_CR93","unstructured":"Bachman, P., Sordoni, A., Trischler, A.: Learning algorithms for active learning. In: International Conference on Machine Learning, pp. 301\u2013310. PMLR (2017)"},{"issue":"4","key":"1310_CR94","doi-asserted-by":"publisher","first-page":"283","DOI":"10.1021\/acscentsci.6b00367","volume":"3","author":"H Altae-Tran","year":"2017","unstructured":"Altae-Tran, H., Ramsundar, B., Pappu, A.S., Pande, V.: Low data drug discovery with one-shot learning. ACS Cent. Sci. 3(4), 283\u2013293 (2017)","journal-title":"ACS Cent. Sci."},{"key":"1310_CR95","doi-asserted-by":"crossref","unstructured":"Tang, K.D., Tappen, M.F., Sukthankar, R., Lampert, C.H.: Optimizing one-shot recognition with micro-set learning. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3027\u20133034. 
IEEE (2010)","DOI":"10.1109\/CVPR.2010.5540053"},{"key":"1310_CR96","unstructured":"Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML Deep Learning Workshop, vol. 2. Lille (2015)"},{"key":"1310_CR97","unstructured":"Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. Adv. Neural Inform. Process. Syst. 29 (2016)"},{"key":"1310_CR98","unstructured":"Bertinetto, L., Henriques, J.F., Valmadre, J., Torr, P., Vedaldi, A.: Learning feed-forward one-shot learners. Adv. Neural Inform. Process. Syst. 29 (2016)"},{"key":"1310_CR99","unstructured":"Bertinetto, L., Henriques, J.F., Torr, P.H., Vedaldi, A.: Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136 (2018)"},{"key":"1310_CR100","unstructured":"Oreshkin, B., Rodr\u00edguez\u00a0L\u00f3pez, P., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. Adv. Neural Inform. Process. Syst. 31 (2018)"},{"key":"1310_CR101","doi-asserted-by":"crossref","unstructured":"Zhao, F., Zhao, J., Yan, S., Feng, J.: Dynamic conditional networks for few-shot learning. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 19\u201335 (2018)","DOI":"10.1007\/978-3-030-01267-0_2"},{"key":"1310_CR102","doi-asserted-by":"crossref","unstructured":"Park, S., Chun, S., Cha, J., Lee, B., Shim, H.: Few-shot font generation with localized style representations and factorization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 2393\u20132402 (2021)","DOI":"10.1609\/aaai.v35i3.16340"},{"key":"1310_CR103","unstructured":"Bottou, L., Bousquet, O.: The tradeoffs of large scale learning. Adv. Neural Inform. Process. Syst.
20 (2007)"},{"issue":"2","key":"1310_CR104","doi-asserted-by":"publisher","first-page":"223","DOI":"10.1137\/16M1080173","volume":"60","author":"L Bottou","year":"2018","unstructured":"Bottou, L., Curtis, F.E., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223\u2013311 (2018)","journal-title":"SIAM Rev."},{"key":"1310_CR105","doi-asserted-by":"publisher","first-page":"2016","DOI":"10.1109\/TIP.2021.3049955","volume":"30","author":"H Li","year":"2021","unstructured":"Li, H., Wang, N., Ding, X., Yang, X., Gao, X.: Adaptively learning facial expression representation via cf labels and distillation. IEEE Trans. Image Process. 30, 2016\u20132028 (2021)","journal-title":"IEEE Trans. Image Process."},{"key":"1310_CR106","doi-asserted-by":"crossref","unstructured":"Siqueira, H., Magg, S., Wermter, S.: Efficient facial feature learning with wide ensemble-based convolutional neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5800\u20135809 (2020)","DOI":"10.1609\/aaai.v34i04.6037"},{"key":"1310_CR107","first-page":"17616","volume":"34","author":"Y Zhang","year":"2021","unstructured":"Zhang, Y., Wang, C., Deng, W.: Relative uncertainty learning for facial expression recognition. Adv. Neural. Inf. Process. Syst. 34, 17616\u201317627 (2021)","journal-title":"Adv. Neural. Inf. Process. Syst."},{"key":"1310_CR108","doi-asserted-by":"publisher","first-page":"4057","DOI":"10.1109\/TIP.2019.2956143","volume":"29","author":"K Wang","year":"2020","unstructured":"Wang, K., Peng, X., Yang, J., Meng, D., Qiao, Y.: Region attention networks for pose and occlusion robust facial expression recognition. IEEE Trans. Image Process. 29, 4057\u20134069 (2020)","journal-title":"IEEE Trans.
Image Process."},{"key":"1310_CR109","doi-asserted-by":"publisher","first-page":"131988","DOI":"10.1109\/ACCESS.2020.3010018","volume":"8","author":"T-H Vo","year":"2020","unstructured":"Vo, T.-H., Lee, G.-S., Yang, H.-J., Kim, S.-H.: Pyramid with super resolution for in-the-wild facial expression recognition. IEEE Access 8, 131988\u2013132001 (2020)","journal-title":"IEEE Access"},{"key":"1310_CR110","doi-asserted-by":"crossref","unstructured":"Kumar, V., Rao, S., Yu, L.: Noisy student training using body language dataset improves facial expression recognition. In: European Conference on Computer Vision, pp. 756\u2013773 . Springer (2020)","DOI":"10.1007\/978-3-030-66415-2_53"},{"key":"1310_CR111","doi-asserted-by":"crossref","unstructured":"Meng, D., Peng, X., Wang, K., Qiao, Y.: Frame attention networks for facial expression recognition in videos. In: 2019 IEEE International Conference on Image Processing (ICIP), pp. 3866\u20133870. IEEE (2019)","DOI":"10.1109\/ICIP.2019.8803603"},{"key":"1310_CR112","doi-asserted-by":"publisher","first-page":"4057","DOI":"10.1109\/TIP.2019.2956143","volume":"29","author":"K Wang","year":"2020","unstructured":"Wang, K., Peng, X., Yang, J., Meng, D., Qiao, Y.: Region attention networks for pose and occlusion robust facial expression recognition. IEEE Trans. Image Process. 29, 4057\u20134069 (2020)","journal-title":"IEEE Trans. Image Process."},{"key":"1310_CR113","doi-asserted-by":"crossref","unstructured":"Psaroudakis, A., Kollias, D.: Mixaugment & mixup: augmentation methods for facial expression recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 2367\u20132375 (2022)","DOI":"10.1109\/CVPRW56347.2022.00264"},{"key":"1310_CR114","doi-asserted-by":"crossref","unstructured":"Shi, Y., Jain, A.K.: Probabilistic face embeddings. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 
6902\u20136911 (2019)","DOI":"10.1109\/ICCV.2019.00700"},{"key":"1310_CR115","doi-asserted-by":"crossref","unstructured":"Wang, K., Peng, X., Yang, J., Lu, S., Qiao, Y.: Suppressing uncertainties for large-scale facial expression recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 6897\u20136906 (2020)","DOI":"10.1109\/CVPR42600.2020.00693"},{"issue":"4","key":"1310_CR116","doi-asserted-by":"publisher","first-page":"580","DOI":"10.1162\/jocn.2006.18.4.580","volume":"18","author":"G Yovel","year":"2006","unstructured":"Yovel, G., Duchaine, B.: Specialized face perception mechanisms extract both part and spacing information: Evidence from developmental prosopagnosia. J. Cogn. Neurosci. 18(4), 580\u2013593 (2006)","journal-title":"J. Cogn. Neurosci."},{"issue":"1","key":"1310_CR117","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s11263-019-01215-y","volume":"128","author":"Y Luo","year":"2020","unstructured":"Luo, Y., Ye, J., Adams, R.B., Li, J., Newman, M.G., Wang, J.Z.: Arbee: towards automated recognition of bodily expression of emotion in the wild. Int. J. Comput. Vis. 128(1), 1\u201325 (2020)","journal-title":"Int. J. Comput. Vis."},{"key":"1310_CR118","doi-asserted-by":"publisher","first-page":"2340","DOI":"10.1109\/TIP.2021.3051462","volume":"30","author":"Y Jiang","year":"2021","unstructured":"Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Yang, J., Zhou, P., Wang, Z.: Enlightengan: deep light enhancement without paired supervision. IEEE Trans. Image Process. 30, 2340\u20132349 (2021)","journal-title":"IEEE Trans. Image Process."},{"key":"1310_CR119","unstructured":"Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: beyond empirical risk minimization. 
arXiv preprint arXiv:1710.09412 (2017)"},{"issue":"15","key":"1310_CR120","doi-asserted-by":"publisher","first-page":"2800","DOI":"10.1080\/02664763.2018.1441383","volume":"45","author":"C Ju","year":"2018","unstructured":"Ju, C., Bibaut, A., van der Laan, M.: The relative performance of ensemble methods with deep convolutional neural networks for image classification. J. Appl. Stat. 45(15), 2800\u20132818 (2018)","journal-title":"J. Appl. Stat."},{"key":"1310_CR121","doi-asserted-by":"publisher","first-page":"340","DOI":"10.1016\/j.neucom.2020.06.014","volume":"411","author":"J Li","year":"2020","unstructured":"Li, J., Jin, K., Zhou, D., Kubota, N., Ju, Z.: Attention mechanism-based cnn for facial expression recognition. Neurocomputing 411, 340\u2013350 (2020)","journal-title":"Neurocomputing"},{"key":"1310_CR122","unstructured":"Liu, Y., Peng, J., Zeng, J., Shan, S.: Pose-adaptive hierarchical attention network for facial expression recognition. arXiv preprint arXiv:1905.10059 (2019)"},{"issue":"9","key":"1310_CR123","doi-asserted-by":"publisher","first-page":"3046","DOI":"10.3390\/s21093046","volume":"21","author":"S Minaee","year":"2021","unstructured":"Minaee, S., Minaei, M., Abdolrashidi, A.: Deep-emotion: Facial expression recognition using attentional convolutional network. Sensors 21(9), 3046 (2021)","journal-title":"Sensors"},{"key":"1310_CR124","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1016\/j.ins.2021.08.043","volume":"580","author":"Q Huang","year":"2021","unstructured":"Huang, Q., Huang, C., Wang, X., Jiang, F.: Facial expression recognition with grid-wise attention and visual transformer. Inf. Sci. 580, 35\u201354 (2021)","journal-title":"Inf. Sci."},{"key":"1310_CR125","doi-asserted-by":"crossref","unstructured":"Aminbeidokhti, M., Pedersoli, M., Cardinal, P., Granger, E.: Emotion recognition with spatial attention and temporal softmax pooling. In: International Conference on Image Analysis and Recognition, pp. 323\u2013331 . 
Springer (2019)","DOI":"10.1007\/978-3-030-27202-9_29"},{"key":"1310_CR126","doi-asserted-by":"publisher","first-page":"2015","DOI":"10.3389\/fpsyg.2018.02015","volume":"9","author":"X Zeng","year":"2018","unstructured":"Zeng, X., Wu, Q., Zhang, S., Liu, Z., Zhou, Q., Zhang, M.: A false trail to follow: differential effects of the facial feedback signals from the upper and lower face on the recognition of micro-expressions. Front. Psychol. 9, 2015 (2018)","journal-title":"Front. Psychol."},{"issue":"4","key":"1310_CR127","doi-asserted-by":"publisher","first-page":"2132","DOI":"10.1109\/TAFFC.2022.3188390","volume":"13","author":"AV Savchenko","year":"2022","unstructured":"Savchenko, A.V., Savchenko, L.V., Makarov, I.: Classifying emotions and engagement in online learning based on a single facial expression recognition neural network. IEEE Trans. Affect. Comput. 13(4), 2132\u20132143 (2022)","journal-title":"IEEE Trans. Affect. Comput."},{"key":"1310_CR128","doi-asserted-by":"publisher","first-page":"26756","DOI":"10.1109\/ACCESS.2022.3156598","volume":"10","author":"AP Fard","year":"2022","unstructured":"Fard, A.P., Mahoor, M.H.: Ad-corre: adaptive correlation-based loss for facial expression recognition in the wild. IEEE Access 10, 26756\u201326768 (2022)","journal-title":"IEEE Access"},{"key":"1310_CR129","doi-asserted-by":"crossref","unstructured":"Terhorst, P., Kolf, J.N., Damer, N., Kirchbuchner, F., Kuijper, A.: Ser-fiq: unsupervised estimation of face image quality based on stochastic embedding robustness. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
5651\u20135660 (2020)","DOI":"10.1109\/CVPR42600.2020.00569"},{"issue":"9","key":"1310_CR130","doi-asserted-by":"publisher","first-page":"0222713","DOI":"10.1371\/journal.pone.0222713","volume":"14","author":"Y Fang","year":"2019","unstructured":"Fang, Y., Gao, J., Huang, C., Peng, H., Wu, R.: Self multi-head attention-based convolutional neural networks for fake news detection. PLoS One 14(9), 0222713 (2019)","journal-title":"PLoS One"},{"key":"1310_CR131","unstructured":"Lin, Z., Feng, M., Santos, C.N.d., Yu, M., Xiang, B., Zhou, B., Bengio, Y.: A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130 (2017)"},{"key":"1310_CR132","doi-asserted-by":"crossref","unstructured":"She, J., Hu, Y., Shi, H., Wang, J., Shen, Q., Mei, T.: Dive into ambiguity: Latent distribution mining and pairwise uncertainty estimation for facial expression recognition. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 6248\u20136257 (2021)","DOI":"10.1109\/CVPR46437.2021.00618"},{"key":"1310_CR133","doi-asserted-by":"publisher","first-page":"142071","DOI":"10.1109\/ACCESS.2021.3120542","volume":"9","author":"Y Dai","year":"2021","unstructured":"Dai, Y., Feng, L.: Cross-domain few-shot micro-expression recognition incorporating action units. IEEE Access 9, 142071\u2013142083 (2021)","journal-title":"IEEE Access"},{"issue":"7","key":"1310_CR134","doi-asserted-by":"publisher","first-page":"1635","DOI":"10.1109\/TPAMI.2012.253","volume":"35","author":"Y Yang","year":"2012","unstructured":"Yang, Y., Saleemi, I., Shah, M.: Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions. IEEE Trans. Pattern Anal. Mach. Intell. 35(7), 1635\u20131648 (2012)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1310_CR135","doi-asserted-by":"crossref","unstructured":"Cruz, A.C., Bhanu, B., Thakoor, N.S.: One shot emotion scores for facial emotion recognition. 
In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 1376\u20131380. IEEE (2014)","DOI":"10.1109\/ICIP.2014.7025275"},{"key":"1310_CR136","doi-asserted-by":"crossref","unstructured":"Shome, D., Kar, T.: Fedaffect: Few-shot federated learning for facial expression recognition. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 4168\u20134175 (2021)","DOI":"10.1109\/ICCVW54120.2021.00463"},{"key":"1310_CR137","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2021.116046","volume":"189","author":"Q Zhu","year":"2022","unstructured":"Zhu, Q., Mao, Q., Jia, H., Noi, O.E.N., Tu, J.: Convolutional relation network for facial expression recognition in the wild with few-shot learning. Expert Syst. Appl. 189, 116046 (2022)","journal-title":"Expert Syst. Appl."},{"key":"1310_CR138","unstructured":"Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676 (2018)"},{"key":"1310_CR139","unstructured":"Ciubotaru, A.-N., Devos, A., Bozorgtabar, B., Thiran, J.-P., Gabrani, M.: Revisiting few-shot learning for facial expression recognition. arXiv preprint arXiv:1912.02751 (2019)"},{"key":"1310_CR140","doi-asserted-by":"crossref","unstructured":"Zou, X., Yan, Y., Xue, J.-H., Chen, S., Wang, H.: When facial expression recognition meets few-shot learning: A joint and alternate learning framework. arXiv preprint arXiv:2201.06781 (2022)","DOI":"10.1609\/aaai.v36i5.20474"},{"key":"1310_CR141","doi-asserted-by":"crossref","unstructured":"Zou, X., Yan, Y., Xue, J.-H., Chen, S., Wang, H.: Learn-to-decompose: cascaded decomposition network for cross-domain few-shot facial expression recognition. In: European Conference on Computer Vision, pp. 683\u2013700. 
Springer (2022)","DOI":"10.1007\/978-3-031-19800-7_40"},{"key":"1310_CR142","unstructured":"Jiang, L., Zhou, Z., Leung, T., Li, L.-J., Fei-Fei, L.: Mentornet: learning data-driven curriculum for very deep neural networks on corrupted labels. In: International Conference on Machine Learning, pp. 2304\u20132313. PMLR (2018)"},{"key":"1310_CR143","unstructured":"Arpit, D., Jastrz\u0119bski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal, M.S., Maharaj, T., Fischer, A., Courville, A., Bengio, Y.: A closer look at memorization in deep networks. In: International Conference on Machine Learning, pp. 233\u2013242. PMLR (2017)"},{"key":"1310_CR144","unstructured":"Wei, X.-S., Song, Y.-Z., Mac\u00a0Aodha, O., Wu, J., Peng, Y., Tang, J., Yang, J., Belongie, S.: Fine-grained image analysis with deep learning: a survey. IEEE Trans. Pattern Anal. Mach. Intell. (2021)"},{"key":"1310_CR145","doi-asserted-by":"crossref","unstructured":"Dhall, A., Goecke, R., Lucey, S., Gedeon, T.: Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark. In: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 2106\u20132112.
IEEE (2011)","DOI":"10.1109\/ICCVW.2011.6130508"}],"container-title":["Journal of Real-Time Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01310-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11554-023-01310-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01310-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,19]],"date-time":"2024-10-19T22:32:35Z","timestamp":1729377155000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11554-023-01310-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,6]]},"references-count":145,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,6]]}},"alternative-id":["1310"],"URL":"https:\/\/doi.org\/10.1007\/s11554-023-01310-x","relation":{},"ISSN":["1861-8200","1861-8219"],"issn-type":[{"value":"1861-8200","type":"print"},{"value":"1861-8219","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,6]]},"assertion":[{"value":"9 March 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 April 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}}],"article-number":"52"}}