{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,1]],"date-time":"2026-04-01T18:51:08Z","timestamp":1775069468208,"version":"3.50.1"},"reference-count":89,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2023,1,17]],"date-time":"2023-01-17T00:00:00Z","timestamp":1673913600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"GRRC program of Gyeonggi province","award":["GRRC-Gachon2020 (B01)"],"award-info":[{"award-number":["GRRC-Gachon2020 (B01)"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Current artificial intelligence systems for recognizing a person\u2019s emotions rely heavily on lip and mouth movement and on other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically misclassified because of the dark regions around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and feature analysis of the upper face with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. First, the lower part of the input facial image is covered with a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper features of the face. Second, we adopt a feature extraction strategy based on facial landmark detection, applied to the features of the partially covered, masked face. Finally, the extracted features, the coordinates of the detected landmarks, and the histograms of oriented gradients are incorporated into the classification procedure using a convolutional neural network. 
An experimental evaluation shows that the proposed method surpasses others by achieving an accuracy of 69.3% on the AffectNet dataset.<\/jats:p>","DOI":"10.3390\/s23031080","type":"journal-article","created":{"date-parts":[[2023,1,18]],"date-time":"2023-01-18T01:33:26Z","timestamp":1674005606000},"page":"1080","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":96,"title":["Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1424-0799","authenticated-orcid":false,"given":"Mukhriddin","family":"Mukhiddinov","sequence":"first","affiliation":[{"name":"Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0478-7889","authenticated-orcid":false,"given":"Oybek","family":"Djuraev","sequence":"additional","affiliation":[{"name":"Department of Hardware and Software of Control Systems in Telecommunication, Tashkent University of Information Technologies Named after Muhammad al-Khwarizmi, Tashkent 100084, Uzbekistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5360-3479","authenticated-orcid":false,"given":"Farkhod","family":"Akhmedov","sequence":"additional","affiliation":[{"name":"Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1438-0628","authenticated-orcid":false,"given":"Abdinabi","family":"Mukhamadiyev","sequence":"additional","affiliation":[{"name":"Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea"}]},{"given":"Jinsoo","family":"Cho","sequence":"additional","affiliation":[{"name":"Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of 
Korea"}]}],"member":"1968","published-online":{"date-parts":[[2023,1,17]]},"reference":[{"key":"ref_1","first-page":"237","article-title":"Accessibility of brainstorming sessions for blind people","volume":"Volume 8547","author":"Miesenberger","year":"2014","journal-title":"LNCS, Proceedings of the ICCHP, Paris, France, 9\u201311 July 2014"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1111\/j.1467-8721.2009.01633.x","article-title":"How emotions regulate social life: The emotions as social information (EASI) model","volume":"18","year":"2009","journal-title":"Curr. Dir. Psychol. Sci."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"e13675","DOI":"10.1111\/psyp.13675","article-title":"Who to whom and why: The social nature of emotional mimicry","volume":"58","author":"Hess","year":"2020","journal-title":"Psychophysiology"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Mukhamadiyev, A., Khujayarov, I., Djuraev, O., and Cho, J. (2022). Automatic Speech Recognition Method Based on Deep Learning Approaches for Uzbek Language. Sensors, 22.","DOI":"10.3390\/s22103683"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1007\/s10919-019-00293-3","article-title":"Emotional Expression: Advances in Basic Emotion Theory","volume":"43","author":"Keltner","year":"2019","journal-title":"J. Nonverbal Behav."},{"key":"ref_6","first-page":"713","article-title":"Saliency Cuts: Salient Region Extraction based on Local Adaptive Thresholding for Image Information Recognition of the Visually Impaired","volume":"17","author":"Mukhiddinov","year":"2020","journal-title":"Int. Arab. J. Inf. Technol."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"843","DOI":"10.1038\/nn.2138","article-title":"Expressing fear enhances sensory acquisition","volume":"11","author":"Susskind","year":"2008","journal-title":"Nat. 
Neurosci."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"112","DOI":"10.1016\/j.visres.2018.02.001","article-title":"Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion","volume":"157","author":"Guo","year":"2019","journal-title":"Vis. Res."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ramdani, C., Ogier, M., and Coutrot, A. (2022). Communicating and reading emotion with masked faces in the Covid era: A short review of the literature. Psychiatry Res., 114755.","DOI":"10.1016\/j.psychres.2022.114755"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1016\/j.ins.2021.10.005","article-title":"A survey on facial emotion recognition techniques: A state-of-the-art literature review","volume":"582","author":"Canal","year":"2021","journal-title":"Inf. Sci."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"106646","DOI":"10.1016\/j.cmpb.2022.106646","article-title":"Automated emotion recognition: Current trends and future perspectives","volume":"215","author":"Maithri","year":"2022","journal-title":"Comput. Methods Programs Biomed."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"551","DOI":"10.1007\/s00170-022-08811-2","article-title":"Vision-based melt pool monitoring for wire-arc additive manufacturing using deep learning method","volume":"120","author":"Xia","year":"2022","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1999","DOI":"10.1007\/s00170-022-10335-8","article-title":"A new lightweight deep neural network for surface scratch detection","volume":"123","author":"Li","year":"2022","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Mukhiddinov, M., Akmuradov, B., and Djuraev, O. (2019, January 4\u20136). Robust text recognition for Uzbek language in natural scene images. 
Proceedings of the 2019 International Conference on Information Science and Communications Technologies (ICISCT), Tashkent, Uzbekistan.","DOI":"10.1109\/ICISCT47635.2019.9011892"},{"key":"ref_15","first-page":"30","article-title":"A novel method for extracting text from natural scene images and TTS","volume":"1","author":"Khamdamov","year":"2018","journal-title":"Eur. Sci. Rev."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"102444","DOI":"10.1016\/j.media.2022.102444","article-title":"Recent advances and clinical applications of deep learning in medical image analysis","volume":"79","author":"Chen","year":"2022","journal-title":"Med. Image Anal."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"2150054","DOI":"10.1142\/S0219691321500545","article-title":"An improvement for the automatic classification method for ultrasound images used on CNN","volume":"20","author":"Avazov","year":"2021","journal-title":"Int. J. Wavelets Multiresolution Inf. Process."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"689","DOI":"10.1016\/j.procs.2020.07.101","article-title":"Facial emotion recognition using deep learning: Review and insights","volume":"175","author":"Mellouk","year":"2020","journal-title":"Procedia Comput. Sci."},{"key":"ref_19","first-page":"53","article-title":"Emotion Recognition and Detection Methods: A Comprehensive Survey","volume":"2","author":"Saxena","year":"2020","journal-title":"J. Artif. Intell. Syst."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Ko, B.C. (2018). A Brief Review of Facial Emotion Recognition Based on Visual Information. Sensors, 18.","DOI":"10.3390\/s18020401"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Dzedzickis, A., Kaklauskas, A., and Bucinskas, V. (2020). Human Emotion Recognition: Review of Sensors and Methods. Sensors, 20.","DOI":"10.3390\/s20030592"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Mukhiddinov, M., and Cho, J. (2021). 
Smart Glass System Using Deep Learning for the Blind and Visually Impaired. Electronics, 10.","DOI":"10.3390\/electronics10222756"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"4093","DOI":"10.1109\/TMM.2020.3037526","article-title":"TBEFN: A Two-Branch Exposure-Fusion Network for Low-Light Image Enhancement","volume":"23","author":"Lu","year":"2020","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","article-title":"AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild","volume":"10","author":"Mollahosseini","year":"2017","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_25","unstructured":"Aqeel, A. (2022, October 28). MaskTheFace. Available online: https:\/\/github.com\/aqeelanwar\/MaskTheFace."},{"key":"ref_26","unstructured":"(2022, November 02). Available online: https:\/\/google.github.io\/mediapipe\/solutions\/face_mesh.html."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1016\/j.cognition.2012.06.018","article-title":"Shades of emotion: What the addition of sunglasses or masks to faces reveals about the development of facial expression processing","volume":"125","author":"Roberson","year":"2012","journal-title":"Cognition"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"669432","DOI":"10.3389\/fpsyg.2021.669432","article-title":"Masking Emotions: Face Masks Impair How We Read Emotions","volume":"12","author":"Gori","year":"2021","journal-title":"Front. Psychol."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"201169","DOI":"10.1098\/rsos.201169","article-title":"The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers","volume":"8","author":"Noyes","year":"2021","journal-title":"R. Soc. 
Open Sci."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"566886","DOI":"10.3389\/fpsyg.2020.566886","article-title":"Wearing Face Masks Strongly Confuses Counterparts in Reading Emotions","volume":"11","author":"Carbon","year":"2020","journal-title":"Front. Psychol."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Gulbetekin, E., Fidanc\u0131, A., Altun, E., Er, M.N., and G\u00fcrcan, E. (2021). Effects of mask use and race on face perception, emotion recognition, and social distancing during the COVID-19 pandemic. Res. Sq., PPR533073.","DOI":"10.21203\/rs.3.rs-692591\/v1"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Pazhoohi, F., Forby, L., and Kingstone, A. (2021). Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS ONE, 16.","DOI":"10.1371\/journal.pone.0257740"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"2261","DOI":"10.1016\/S0042-6989(01)00097-9","article-title":"Bubbles: A technique to reveal the use of information in recognition tasks","volume":"41","author":"Gosselin","year":"2001","journal-title":"Vis. Res."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"2830","DOI":"10.1016\/j.neuropsychologia.2012.08.010","article-title":"The eyes are not the window to basic emotions","volume":"50","author":"Blais","year":"2012","journal-title":"Neuropsychologia"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Wegrzyn, M., Vogt, M., Kireclioglu, B., Schneider, J., and Kissler, J. (2017). Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE, 12.","DOI":"10.1371\/journal.pone.0177239"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"416","DOI":"10.1080\/02699931.2013.833500","article-title":"Featural processing in recognition of emotional facial expressions","volume":"28","author":"Beaudry","year":"2013","journal-title":"Cogn. 
Emot."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"14","DOI":"10.1167\/14.13.14","article-title":"Eye movements during emotion recognition in faces","volume":"14","author":"Schurgin","year":"2014","journal-title":"J. Vis."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1052","DOI":"10.1016\/j.imavis.2007.11.004","article-title":"An analysis of facial expression recognition under partial facial image occlusion","volume":"26","author":"Kotsia","year":"2008","journal-title":"Image Vis. Comput."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"27","DOI":"10.1016\/j.neucom.2018.03.068","article-title":"Multi-cue fusion for emotion recognition in the wild","volume":"309","author":"Yan","year":"2018","journal-title":"Neurocomputing"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Jung, H., Lee, S., Yim, J., Park, S., and Kim, J. (2015, January 7\u201313). Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.341"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"595","DOI":"10.1109\/TAFFC.2020.3014171","article-title":"Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-Wild Dataset","volume":"12","author":"Kollias","year":"2020","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Hasani, B., and Mahoor, M.H. (2017, January 21\u201326). Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.282"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Fabiano, D., and Canavan, S. (2019, January 14\u201318). Deformable synthesis model for emotion recognition. 
Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France.","DOI":"10.1109\/FG.2019.8756614"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Ngoc, Q.T., Lee, S., and Song, B.C. (2020). Facial Landmark-Based Emotion Recognition via Directed Graph Neural Network. Electronics, 9.","DOI":"10.3390\/electronics9050764"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Khoeun, R., Chophuk, P., and Chinnasarn, K. (2022). Emotion Recognition for Partial Faces Using a Feature Vector Technique. Sensors, 22.","DOI":"10.3390\/s22124633"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"611","DOI":"10.1109\/TMM.2009.2017629","article-title":"3-D Face Detection, Landmark Localization, and Registration Using a Point Distribution Model","volume":"11","author":"Nair","year":"2009","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Shah, M.H., Dinesh, A., and Sharmila, T.S. (2019, January 6\u20137). Analysis of Facial Landmark Features to determine the best subset for finding Face Orientation. Proceedings of the 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Gurugram, India.","DOI":"10.1109\/ICCIDS.2019.8862093"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Riaz, M.N., Shen, Y., Sohail, M., and Guo, M. (2020). eXnet: An Efficient Approach for Emotion Recognition in the Wild. 
Sensors, 20.","DOI":"10.3390\/s20041087"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.neucom.2019.05.005","article-title":"Three convolutional neural network models for facial expression recognition in the wild","volume":"355","author":"Shao","year":"2019","journal-title":"Neurocomputing"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"78000","DOI":"10.1109\/ACCESS.2019.2921220","article-title":"Recognizing Facial Expressions Using a Shallow Convolutional Neural Network","volume":"7","author":"Miao","year":"2019","journal-title":"IEEE Access"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"4057","DOI":"10.1109\/TIP.2019.2956143","article-title":"Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition","volume":"29","author":"Wang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Farzaneh, A.H., and Qi, X. (2021, January 3\u20138). Facial expression recognition in the wild via deep attentive center loss. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV48630.2021.00245"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Shi, J., Zhu, S., and Liang, Z. (2021). Learning to amend facial expression representation via de-albino and affinity. arXiv.","DOI":"10.23919\/CCC55666.2022.9901738"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"356","DOI":"10.1109\/TIP.2018.2868382","article-title":"Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"2439","DOI":"10.1109\/TIP.2018.2886767","article-title":"Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism","volume":"28","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Farkhod, A., Abdusalomov, A.B., Mukhiddinov, M., and Cho, Y.-I. (2022). Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces. Sensors, 22.","DOI":"10.3390\/s22228704"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"807","DOI":"10.1016\/j.imavis.2009.08.002","article-title":"Multi-pie","volume":"28","author":"Gross","year":"2010","journal-title":"Image Vis. Comput."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010, January 13\u201318). The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA.","DOI":"10.1109\/CVPRW.2010.5543262"},{"key":"ref_59","unstructured":"Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998, January 14\u201316). Coding facial expressions with Gabor wavelets. Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan."},{"key":"ref_60","unstructured":"Pantic, M., Valstar, M., Rademaker, R., and Maat, L. (2005, January 6\u20138). Web-Based Database for Facial Expression Analysis. Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"McDuff, D., Kaliouby, R., Senechal, T., Amr, M., Cohn, J., and Picard, R. (2013, January 23\u201328). 
Affectiva-mit facial expression dataset (am-fed): Naturalistic and spontaneous facial expressions collected. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA.","DOI":"10.1109\/CVPRW.2013.130"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"151","DOI":"10.1109\/T-AFFC.2013.4","article-title":"DISFA: A Spontaneous Facial Action Intensity Database","volume":"4","author":"Mavadati","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"32","DOI":"10.1109\/T-AFFC.2011.26","article-title":"The Belfast Induced Natural Emotion Database","volume":"3","author":"Sneddon","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1016\/j.neunet.2014.09.005","article-title":"Challenges in representation learning: A report on three machine learning contests","volume":"64","author":"Goodfellow","year":"2015","journal-title":"Neural Netw."},{"key":"ref_65","unstructured":"(2022, October 28). Available online: https:\/\/www.kaggle.com\/datasets\/msambare\/fer2013."},{"key":"ref_66","doi-asserted-by":"crossref","first-page":"446","DOI":"10.1007\/s42452-020-2234-1","article-title":"Facial emotion recognition using convolutional neural networks (FERC)","volume":"2","author":"Mehendale","year":"2020","journal-title":"SN Appl. Sci."},{"key":"ref_67","unstructured":"Anwar, A., and Raychowdhury, A. (2020). Masked face recognition for secure authentication. arXiv Preprint."},{"key":"ref_68","doi-asserted-by":"crossref","unstructured":"Zafeiriou, S., Papaioannou, A., Kotsia, I., Nicolaou, M.A., and Zhao, G. (2016, January 27\u201330). Facial affect \u201cin-the-wild\u201d: A survey and a new database. 
Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Affect \u201cin-the-wild\u201d Workshop, Las Vegas, NV, USA.","DOI":"10.1109\/CVPRW.2016.186"},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Dhall, A., Goecke, R., Joshi, J., Wagner, M., and Gedeon, T. (2013, January 9\u201313). Emotion recognition in the wild challenge 2013. Proceedings of the 15th ACM on International Conference on Multimodal Interaction, Sydney, Australia.","DOI":"10.1145\/2522848.2531739"},{"key":"ref_70","doi-asserted-by":"crossref","unstructured":"Benitez-Quiroz, C.F., Srinivasan, R., and Martinez, A.M. (2016, January 27\u201330). Emotionet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition (CVPR16), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.600"},{"key":"ref_71","unstructured":"Mollahosseini, A., Hasani, B., Salvador, M.J., Abdollahi, H., Chan, D., and Mahoor, M.H. (July, January 26). Facial expression recognition from world wild web. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Las Vegas, NV, USA."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"2049","DOI":"10.1109\/TIP.2018.2794218","article-title":"Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images","volume":"27","author":"Cai","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_73","unstructured":"Chen, W., Wang, W., Yang, W., and Liu, J. (2018). Deep retinex decomposition for low-light enhancement. arXiv."},{"key":"ref_74","unstructured":"(2022, October 28). Available online: https:\/\/google.github.io\/mediapipe\/solutions\/face_detection.html."},{"key":"ref_75","unstructured":"Bazarevsky, V., Kartynnik, Y., Vakunov, A., Raveendran, K., and Grundmann, M. (2019). 
BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs. arXiv."},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Chen, Y., Wang, J., Chen, S., Shi, Z., and Cai, J. (2019, January 1\u20134). Facial Motion Prior Networks for Facial Expression Recognition. Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, Australia.","DOI":"10.1109\/VCIP47243.2019.8965826"},{"key":"ref_77","doi-asserted-by":"crossref","first-page":"64827","DOI":"10.1109\/ACCESS.2019.2917266","article-title":"Local Learning With Deep and Handcrafted Features for Facial Expression Recognition","volume":"7","author":"Georgescu","year":"2019","journal-title":"IEEE Access"},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Hayale, W., Negi, P., and Mahoor, M. (2019, January 14\u201318). Facial Expression Recognition Using Deep Siamese Neural Networks with a Supervised Loss function. Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition, Lille, France.","DOI":"10.1109\/FG.2019.8756571"},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Zeng, J., Shan, S., and Chen, X. (2018, January 8\u201314). Facial expression recognition with inconsistently annotated datasets. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01261-8_14"},{"key":"ref_80","doi-asserted-by":"crossref","unstructured":"Antoniadis, P., Filntisis, P.P., and Maragos, P. (2021, January 15\u201318). Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition. Proceedings of the 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition, Jodhpur, India.","DOI":"10.1109\/FG52635.2021.9667014"},{"key":"ref_81","doi-asserted-by":"crossref","unstructured":"Mukhiddinov, M., Abdusalomov, A.B., and Cho, J. (2022). 
A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors, 22.","DOI":"10.3390\/s22239384"},{"key":"ref_82","doi-asserted-by":"crossref","unstructured":"Mukhiddinov, M., Muminov, A., and Cho, J. (2022). Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep Learning. Sensors, 22.","DOI":"10.3390\/s22218192"},{"key":"ref_83","doi-asserted-by":"crossref","unstructured":"Mukhiddinov, M., Abdusalomov, A.B., and Cho, J. (2022). Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors, 22.","DOI":"10.3390\/s22093307"},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"455","DOI":"10.34768\/amcs-2022-0033","article-title":"A hybrid approach of a deep learning technique for real-time ecg beat detection","volume":"32","author":"Patro","year":"2022","journal-title":"Int. J. Appl. Math. Comput. Sci."},{"key":"ref_85","doi-asserted-by":"crossref","unstructured":"Li, Y., Zeng, J., Shan, S., and Chen, X. (2018, January 20\u201324). Patch-gated CNN for occlusion-aware facial expression recognition. Proceedings of the 24th International Conference on Pattern Recognition (ICPR), Beijing, China.","DOI":"10.1109\/ICPR.2018.8545853"},{"key":"ref_86","unstructured":"Li, Y., Lu, Y., Li, J., and Lu, G. (2019, January 17\u201319). Separate loss for basic and compound facial expression recognition in the wild. Proceedings of the Asian Conference on Machine Learning, Nagoya, Japan."},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Wang, C., Wang, S., and Liang, G. (2019, January 21\u201325). Identity- and Pose-Robust Facial Expression Recognition through Adversarial Feature Learning. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350872"},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Farzaneh, A.H., and Qi, X. (2020, January 14\u201319). 
Discriminant distribution-agnostic loss for facial expression recognition in the wild. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00211"},{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Wen, Y., Zhang, K., Li, Z., and Qiao, Y. (2016, January 8\u201316). A discriminative feature learning approach for deep face recognition. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46478-7_31"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1080\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:08:26Z","timestamp":1760119706000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1080"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,17]]},"references-count":89,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23031080"],"URL":"https:\/\/doi.org\/10.3390\/s23031080","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1,17]]}}}