{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T10:00:28Z","timestamp":1775815228433,"version":"3.50.1"},"reference-count":72,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2024,8,28]],"date-time":"2024-08-28T00:00:00Z","timestamp":1724803200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,8,28]],"date-time":"2024-08-28T00:00:00Z","timestamp":1724803200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100009367","name":"Mansoura University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100009367","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In this paper, based on facial landmark approaches, the possible vulnerability of ensemble algorithms to the FGSM attack has been assessed using three commonly used models: convolutional neural network-based antialiasing (A_CNN), Xc_Deep2-based DeepLab v2, and SqueezeNet (Squ_Net)-based Fire modules. Firstly, the three individual deep learning classifier-based Facial Emotion Recognition (FER) classifications have been developed; the predictions from all three classifiers are then merged using majority voting to develop the HEM_Net-based ensemble model. Following that, an in-depth investigation of their performance in the case of attack-free has been carried out in terms of the Jaccard coefficient, accuracy, precision, recall, F1 score, and specificity. 
When applied to three benchmark datasets, the ensemble-based method (HEM_Net) significantly outperforms in terms of precision and reliability while also decreasing the dimensionality of the input data, with an accuracy of 99.3%, 87%, and 99% for the Extended Cohn-Kanade (CK+), Real-world Affective Face (RafD), and Japanese female facial expressions (Jaffee) data, respectively. Further, a comprehensive analysis of the drop in performance of every model affected by the FGSM attack is carried out over a range of epsilon values (the perturbation parameter). The results from the experiments show that the advised HEM_Net model accuracy declined drastically by 59.72% for CK\u2009+\u2009data, 42.53% for RafD images, and 48.49% for the Jaffee dataset when the perturbation increased from A to E (attack levels). This demonstrated that a successful Fast Gradient Sign Method (FGSM) can significantly reduce the prediction performance of all individual classifiers with an increase in attack levels. However, due to the majority voting, the proposed HEM_Net model could improve its robustness against FGSM attacks, indicating that the ensemble can lessen deception by FGSM adversarial instances. This generally holds even as the perturbation level of the FGSM attack increases.<\/jats:p>","DOI":"10.1007\/s40747-024-01603-z","type":"journal-article","created":{"date-parts":[[2024,8,28]],"date-time":"2024-08-28T10:02:12Z","timestamp":1724839332000},"page":"8355-8382","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Accuracy is not enough: a heterogeneous ensemble model versus FGSM attack"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1327-6677","authenticated-orcid":false,"given":"Reham A.","family":"Elsheikh","sequence":"first","affiliation":[]},{"given":"M. 
A.","family":"Mohamed","sequence":"additional","affiliation":[]},{"given":"Ahmed Mohamed","family":"Abou-Taleb","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4151-9717","authenticated-orcid":false,"given":"Mohamed Maher","family":"Ata","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,8,28]]},"reference":[{"key":"1603_CR1","volume":"237","author":"J Wei","year":"2024","unstructured":"Wei J et al (2024) Learning facial expression and body gesture visual information for video emotion recognition. Expert Syst Appl 237:121419","journal-title":"Expert Syst Appl"},{"key":"1603_CR2","doi-asserted-by":"crossref","first-page":"721","DOI":"10.1007\/s12652-020-02845-8","volume":"13","author":"S Umer","year":"2022","unstructured":"Umer S, Rout KR, Pero C, Nappi M (2022) Facial expression recognition with trade-offs between data augmentation and deep learning features. J Ambient Intell Humaniz Comput 13:721\u2013735","journal-title":"J Ambient Intell Humaniz Comput"},{"issue":"1","key":"1603_CR3","doi-asserted-by":"crossref","first-page":"73","DOI":"10.1007\/s00530-022-00984-w","volume":"29","author":"R Rashmi Adyapady","year":"2023","unstructured":"Rashmi Adyapady R, Annapp B (2023) A comprehensive review of facial expression recognition techniques. Multimed Syst 29(1):73\u2013103","journal-title":"Multimed Syst"},{"issue":"1","key":"1603_CR4","doi-asserted-by":"crossref","first-page":"78","DOI":"10.20517\/ir.2021.16","volume":"2","author":"I Bah","year":"2022","unstructured":"Bah I, Xue Y (2022) Facial expression recognition using adapted residual based deep neural network. 
Intell Robot 2(1):78\u201388","journal-title":"Intell Robot"},{"issue":"6","key":"1603_CR5","doi-asserted-by":"crossref","first-page":"644","DOI":"10.3390\/machines11060644","volume":"11","author":"QN The Ho","year":"2023","unstructured":"The Ho QN et al (2023) Turning chatter detection using a multi-input convolutional neural network via image and sound signal. Machines 11(6):644","journal-title":"Machines"},{"key":"1603_CR6","doi-asserted-by":"crossref","DOI":"10.1016\/j.engappai.2022.105151","volume":"115","author":"MA Ganaie","year":"2022","unstructured":"Ganaie MA et al (2022) Ensemble deep learning: a review. Eng Appl Artif Intell 115:105151","journal-title":"Eng Appl Artif Intell"},{"key":"1603_CR7","doi-asserted-by":"crossref","first-page":"2731","DOI":"10.1007\/s11760-023-02490-6","volume":"17","author":"R Helaly","year":"2023","unstructured":"Helaly R et al (2023) DTL-I-ResNet18: facial emotion recognition based on deep transfer learning and improved ResNet18. SIViP 17:2731\u20132744","journal-title":"SIViP"},{"issue":"1","key":"1603_CR8","doi-asserted-by":"crossref","first-page":"67","DOI":"10.1007\/s11263-022-01672-y","volume":"131","author":"X Zou","year":"2023","unstructured":"Zou X et al (2023) Delving deeper into anti-aliasing in ConvNets. Int J Comput Vis 131(1):67\u201381","journal-title":"Int J Comput Vis"},{"issue":"1","key":"1603_CR9","volume":"2555","author":"J Zhu","year":"2023","unstructured":"Zhu J, Cao Y (2023) Face expression recognition combining improved deeplabv3+ and migration learning. J Phys: Conf Ser 2555(1):012020","journal-title":"J Phys: Conf Ser"},{"key":"1603_CR10","doi-asserted-by":"crossref","unstructured":"Shehu HA, Browne W, Eisenbarth H (2020) An adversarial attacks resistance-based approach to emotion recognition from images using facial landmarks. In: 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), vol. 20. 
IEEE, pp 1307\u20131314","DOI":"10.1109\/RO-MAN47096.2020.9223510"},{"issue":"5","key":"1603_CR11","doi-asserted-by":"crossref","first-page":"1297","DOI":"10.1109\/LCOMM.2023.3261423","volume":"27","author":"R Li","year":"2023","unstructured":"Li R et al (2023) Intra-class universal adversarial attacks on deep learning-based modulation classifiers. IEEE Commun Lett 27(5):1297\u20131301","journal-title":"IEEE Commun Lett"},{"key":"1603_CR12","doi-asserted-by":"crossref","first-page":"3659","DOI":"10.1109\/TIFS.2024.3359820","volume":"19","author":"H Kuang","year":"2024","unstructured":"Kuang H, Liu H, Lin X, Ji R (2024) Defense against adversarial attacks using topology aligning adversarial training. IEEE Trans Inf Forensics Secur 19:3659\u20133673","journal-title":"IEEE Trans Inf Forensics Secur"},{"issue":"3","key":"1603_CR13","first-page":"36680","volume":"37","author":"J Zheng","year":"2023","unstructured":"Zheng J et al (2023) Attack can benefit: an adversarial approach to recognizing facial expressions under noisy annotations. Proc AAAI Conf Artif Intell 37(3):36680\u201343668","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"1603_CR14","doi-asserted-by":"crossref","first-page":"16875","DOI":"10.1109\/ACCESS.2023.3245830","volume":"11","author":"M Hussain","year":"2023","unstructured":"Hussain M, AboAlSamh HA, Ullah I (2023) Emotion recognition system based on two-level ensemble of deep-convolutional neural network models. IEEE Access 11:16875\u201316895","journal-title":"IEEE Access"},{"key":"1603_CR15","doi-asserted-by":"crossref","first-page":"26756","DOI":"10.1109\/ACCESS.2022.3156598","volume":"10","author":"AP Fard","year":"2022","unstructured":"Fard AP, Mahoor MH (2022) Ad-corre: adaptive correlation-based loss for facial expression recognition in the wild. 
IEEE Access 10:26756\u201326768","journal-title":"IEEE Access"},{"issue":"9","key":"1603_CR16","doi-asserted-by":"crossref","first-page":"3046","DOI":"10.3390\/s21093046","volume":"21","author":"S Minaee","year":"2021","unstructured":"Minaee S, Minaei M, Abdolrashidi A (2021) Deep-emotion: facial expression recognition using attentional convolutional network. Sensors 21(9):3046","journal-title":"Sensors"},{"key":"1603_CR17","doi-asserted-by":"crossref","first-page":"4445","DOI":"10.1109\/TIP.2020.2972114","volume":"29","author":"F Zhang","year":"2020","unstructured":"Zhang F, Zhang T, Mao Q, Xu C (2020) Geometry guided pose-invariant facial expression recognition. IEEE Trans Image Process 29:4445\u20134460","journal-title":"IEEE Trans Image Process"},{"key":"1603_CR18","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TIM.2020.3031835","volume":"70","author":"K Mohan","year":"2020","unstructured":"Mohan K et al (2020) Facial expression recognition using local gravitational force descriptor-based deep convolution neural networks. IEEE Trans Instrum Meas 70:1\u201312","journal-title":"IEEE Trans Instrum Meas"},{"key":"1603_CR19","doi-asserted-by":"crossref","first-page":"1295","DOI":"10.1016\/j.procs.2023.01.108","volume":"218","author":"C Gautam","year":"2023","unstructured":"Gautam C, Seeja KR (2023) Facial emotion recognition using handcrafted features and CNN. Procedia Comput Sci 218:1295\u20131303","journal-title":"Procedia Comput Sci"},{"key":"1603_CR20","unstructured":"Carlijn M (2021) Facial landmark detection under challenging conditions. BS thesis. University of Twente. pp 1\u20139"},{"issue":"1","key":"1603_CR21","doi-asserted-by":"crossref","first-page":"1663","DOI":"10.1007\/s40747-023-01195-0","volume":"10","author":"K Zaman","year":"2023","unstructured":"Zaman K et al (2023) A novel driver emotion recognition system based on deep ensemble classification. 
Complex Intell Syst 10(1):1663","journal-title":"Complex Intell Syst"},{"key":"1603_CR22","unstructured":"Ning J, Spratling M (2024) The importance of anti-aliasing in tiny object detection. In: Asian Conference on Machine Learning, PMLR. pp 975\u2013990"},{"issue":"11","key":"1603_CR23","doi-asserted-by":"crossref","first-page":"3925","DOI":"10.1007\/s10994-022-06222-8","volume":"111","author":"J Grabinski","year":"2022","unstructured":"Grabinski J, Keuper J, Keuper M (2022) Aliasing and adversarial robust generalization of CNNs. Mach Learn 111(11):3925\u20133951","journal-title":"Mach Learn"},{"issue":"12","key":"1603_CR24","doi-asserted-by":"crossref","first-page":"18635","DOI":"10.1007\/s11042-022-14066-6","volume":"82","author":"H Huo","year":"2023","unstructured":"Huo H, Yu Y, Liu Z (2023) Facial expression recognition based on improved depthwise separable convolutional network. Multimed Tools Appl 82(12):18635\u201318652","journal-title":"Multimed Tools Appl"},{"key":"1603_CR25","doi-asserted-by":"crossref","first-page":"120","DOI":"10.1016\/j.isatra.2022.07.030","volume":"132","author":"FH Tseng","year":"2023","unstructured":"Tseng FH, Yeh KH, Kao FY, Chen CY (2023) MiniNet: dense squeeze with depthwise separable convolutions for image classification in resource-constrained autonomous systems. ISA Trans 132:120\u2013130","journal-title":"ISA Trans"},{"issue":"11","key":"1603_CR26","doi-asserted-by":"crossref","first-page":"2152016","DOI":"10.1142\/S0218001421520169","volume":"35","author":"Y Sun","year":"2021","unstructured":"Sun Y, Wu C, Zheng K, Niu X (2021) Adv-emotion: the facial expression adversarial attack. 
Int J Pattern Recognit Artif Intell 35(11):2152016","journal-title":"Int J Pattern Recognit Artif Intell"},{"issue":"1","key":"1603_CR27","doi-asserted-by":"crossref","first-page":"25","DOI":"10.1007\/s44196-024-00406-x","volume":"17","author":"M Anand","year":"2024","unstructured":"Anand M, Babu S (2024) Multi-class facial emotion expression identification using DL-based feature extraction with classification models. Int J Comput Intell Syst 17(1):25","journal-title":"Int J Comput Intell Syst"},{"key":"1603_CR28","doi-asserted-by":"crossref","DOI":"10.1016\/j.patcog.2019.107184","volume":"101","author":"J Hang","year":"2020","unstructured":"Hang J, Han K, Chen H, Li Y (2020) Ensemble adversarial black-box attacks against deep learning systems. Pattern Recogn 101:107184","journal-title":"Pattern Recogn"},{"key":"1603_CR29","doi-asserted-by":"crossref","unstructured":"Zhang Y, Wang C, Xu Ling X, Deng W (2022) Learn from all: Erasing attention consistency for noisy label facial expression recognition. In: European Conference on Computer Vision, vol. 3686. Cham: Springer Nature Switzerland.\u200f pp 418\u2013434","DOI":"10.1007\/978-3-031-19809-0_24"},{"issue":"2","key":"1603_CR30","doi-asserted-by":"crossref","first-page":"2096","DOI":"10.1109\/TNSM.2023.3267831","volume":"20","author":"E Nowroozi","year":"2023","unstructured":"Nowroozi E et al (2023) Employing deep ensemble learning for improving the security of computer networks against adversarial attacks. IEEE Trans Netw Serv Manag 20(2):2096\u20132105","journal-title":"IEEE Trans Netw Serv Manag"},{"issue":"2","key":"1603_CR31","doi-asserted-by":"crossref","first-page":"215","DOI":"10.3390\/e25020215","volume":"25","author":"Z Fu","year":"2023","unstructured":"Fu Z, Cui X (2023) ELAA: an ensemble-learning-based adversarial attack targeting image-classification model. 
Entropy 25(2):215","journal-title":"Entropy"},{"key":"1603_CR32","doi-asserted-by":"crossref","unstructured":"Gard GK et al (2024) Automated facial expression detection using genetic algorithm optimization with fuzzy C-means clustering algorithm. In: 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE","DOI":"10.1109\/ICDCECE60827.2024.10548524"},{"issue":"8","key":"1603_CR33","doi-asserted-by":"crossref","first-page":"9174","DOI":"10.1007\/s10489-022-03991-6","volume":"53","author":"G Ryu","year":"2023","unstructured":"Ryu G, Choi D (2023) A hybrid adversarial training for deep learning model and denoising network resistant to adversarial examples. Appl Intell 53(8):9174\u20139187","journal-title":"Appl Intell"},{"issue":"12","key":"1603_CR34","doi-asserted-by":"crossref","first-page":"10581","DOI":"10.1007\/s12652-020-02866-3","volume":"12","author":"VRR Chirra","year":"2021","unstructured":"Chirra VRR, Uyyala SR, Kolli VKK (2021) Virtual facial expression recognition using deep CNN with ensemble learning. J Ambient Intell Humaniz Comput 12(12):10581\u201310599","journal-title":"J Ambient Intell Humaniz Comput"},{"issue":"18","key":"1603_CR35","doi-asserted-by":"crossref","first-page":"28589","DOI":"10.1007\/s11042-023-14392-3","volume":"82","author":"S Gupta","year":"2023","unstructured":"Gupta S, Parteek Kumar P, Tekchandani R (2023) A multimodal facial cues based engagement detection system in e-learning context using deep learning approach. Multimed Tools Appl 82(18):28589\u201328615","journal-title":"Multimed Tools Appl"},{"issue":"2","key":"1603_CR36","first-page":"757","volume":"35","author":"A Mohammed","year":"2023","unstructured":"Mohammed A, Kora R (2023) A comprehensive review on ensemble deep learning: opportunities and challenges. 
J King Saud Univ-Comput Inf Sci 35(2):757\u2013774","journal-title":"J King Saud Univ-Comput Inf Sci"},{"issue":"10","key":"1603_CR37","doi-asserted-by":"crossref","first-page":"3729","DOI":"10.3390\/s22103729","volume":"22","author":"S Kim","year":"2022","unstructured":"Kim S, Nam J, Ko BC (2022) Facial expression recognition based on squeeze vision transformer. Sensors 22(10):3729","journal-title":"Sensors"},{"issue":"1","key":"1603_CR38","doi-asserted-by":"crossref","first-page":"67","DOI":"10.1007\/s11263-022-01672-y","volume":"131","author":"X Zou","year":"2023","unstructured":"Zou X, Xiao F, Yu Z, Lee JY (2023) Delving deeper into anti-aliasing in ConvNets. Int J Comput Vis 131(1):67\u201381","journal-title":"Int J Comput Vis"},{"key":"1603_CR39","doi-asserted-by":"crossref","first-page":"68384","DOI":"10.1109\/ACCESS.2022.3186101","volume":"10","author":"S Suzuki","year":"2022","unstructured":"Suzuki S et al (2022) Knowledge transferred fine-tuning: convolutional neural network is born again with anti-aliasing even in data-limited situations. IEEE Access 10:68384\u201368396","journal-title":"IEEE Access"},{"key":"1603_CR40","first-page":"7324","volume":"36","author":"R Zhang","year":"2019","unstructured":"Zhang R (2019) Making convolutional networks shift-invariant again. Int Conf Mach Learn (ICML) 36:7324\u20137334","journal-title":"Int Conf Mach Learn (ICML)"},{"key":"1603_CR41","doi-asserted-by":"crossref","first-page":"1244","DOI":"10.1109\/ACCESS.2022.3233362","volume":"11","author":"Y He","year":"2022","unstructured":"He Y (2022) Facial expression recognition using multi-branch attention convolutional neural network. IEEE Access 11:1244\u20131253","journal-title":"IEEE Access"},{"key":"1603_CR42","unstructured":"Banerjee K, Gupta RR, Vyas K, Mishra B (2020) Exploring alternatives to softmax function. arXiv preprint arXiv: 2011.11538. 
pp 1\u20138"},{"issue":"3","key":"1603_CR43","doi-asserted-by":"crossref","first-page":"639","DOI":"10.1007\/s13246-021-01012-3","volume":"44","author":"S Gayathri","year":"2021","unstructured":"Gayathri S, Gopi VP, Palanisamy P (2021) Diabetic retinopathy classification based on multipath CNN and machine learning classifiers. Phys Eng Sci Med 44(3):639\u2013653","journal-title":"Phys Eng Sci Med"},{"key":"1603_CR44","doi-asserted-by":"crossref","first-page":"6769","DOI":"10.1007\/s00521-019-04700-0","volume":"32","author":"W Tang","year":"2020","unstructured":"Tang W et al (2020) A two-stage approach for automatic liver segmentation with Faster R-CNN and DeepLab. Neural Comput Appl 32:6769\u20136778","journal-title":"Neural Comput Appl"},{"issue":"1","key":"1603_CR45","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1038\/s41598-023-50989-2","volume":"14","author":"Y Li","year":"2024","unstructured":"Li Y et al (2024) UNet based on dynamic convolution decomposition and triplet attention. Sci Rep 14(1):271","journal-title":"Sci Rep"},{"issue":"3","key":"1603_CR46","first-page":"4109","volume":"68","author":"K Srinivasan","year":"2021","unstructured":"Srinivasan K et al (2021) Performance comparison of deep cnn models for detecting driver\u2019s distraction. CMC-Comput, Mater Contin 68(3):4109\u20134124","journal-title":"CMC-Comput, Mater Contin"},{"issue":"2","key":"1603_CR47","volume":"16","author":"J Huang","year":"2021","unstructured":"Huang J et al (2021) Fast semantic segmentation method for machine vision inspection based on a fewer-parameters atrous convolution neural network. PLoS ONE 16(2):e0246093","journal-title":"PLoS ONE"},{"issue":"7","key":"1603_CR48","first-page":"1201","volume":"33","author":"M Hassanpour","year":"2020","unstructured":"Hassanpour M, Malek H (2020) Learning document image features with SqueezeNet convolutional neural network. 
Int J Eng 33(7):1201\u20131207","journal-title":"Int J Eng"},{"key":"1603_CR49","doi-asserted-by":"crossref","unstructured":"Beheshti N, Johnsson L (2020) Squeeze U-net: a memory and energy efficient image segmentation network. In: IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work. pp 1495\u20131504","DOI":"10.1109\/CVPRW50498.2020.00190"},{"issue":"2","key":"1603_CR50","doi-asserted-by":"crossref","first-page":"2397","DOI":"10.1007\/s13369-021-06182-6","volume":"47","author":"A Ullah","year":"2022","unstructured":"Ullah A et al (2022) Comparative analysis of AlexNet, ResNet18 and SqueezeNet with diverse modification and arduous implementation. Arab J Sci Eng 47(2):2397\u20132417","journal-title":"Arab J Sci Eng"},{"issue":"1","key":"1603_CR51","volume":"1757","author":"R Lu","year":"2021","unstructured":"Lu R, Li Y, Yang P, Zhang W (2021) Facial expression recognition based on convolutional neural network. J Phys: Conf Ser 1757(1):012100","journal-title":"J Phys: Conf Ser"},{"key":"1603_CR52","first-page":"1633","volume":"33","author":"F Tramer","year":"2020","unstructured":"Tramer F, Carlini N, Brendel W, Aleksander MA (2020) On adaptive attacks to adversarial example defenses. Adv Neural Inf Process Syst 33:1633\u20131645","journal-title":"Adv Neural Inf Process Syst"},{"key":"1603_CR53","doi-asserted-by":"crossref","unstructured":"Chen J et al (2021) Adversarial robustness study of convolutional neural network for lumbar disk shape reconstruction from MR images. Medical Imaging: Image Processing. p 11596","DOI":"10.1117\/12.2580852"},{"key":"1603_CR54","unstructured":"http:\/\/www.whdeng.cn\/raf\/model1.html. Accessed 1 Feb 2024"},{"key":"1603_CR55","unstructured":"https:\/\/www.kaggle.com\/datasets\/shawon10\/ckplus. Accessed 1 Feb 2024"},{"key":"1603_CR56","unstructured":"https:\/\/www.kasrl.org\/jaffe_download.html. 
Accessed 1 Feb 2024"},{"issue":"12","key":"1603_CR57","doi-asserted-by":"crossref","first-page":"16229","DOI":"10.1007\/s12652-022-03843-8","volume":"14","author":"NS Shaik","year":"2023","unstructured":"Shaik NS, Cherukuri TK (2023) Visual attention based composite dense neural network for facial expression recognition. J Ambient Intell Humaniz Comput 14(12):16229\u201316242","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"1603_CR58","doi-asserted-by":"crossref","unstructured":"Banerjee K et al (2021) Exploring alternatives to softmax function. In: Proceedings of the 2nd International Conference on Deep Learning Theory and Applications. pp 81\u201386","DOI":"10.5220\/0010502000810086"},{"key":"1603_CR59","doi-asserted-by":"crossref","first-page":"1","DOI":"10.11591\/csit.v1i1.pp1-12","volume":"1","author":"D Krstinic","year":"2020","unstructured":"Krstinic D et al (2020) Multi-label classifier performance evaluation with confusion matrix. Comput Sci Inf Technol 1:1\u201314","journal-title":"Comput Sci Inf Technol"},{"issue":"1","key":"1603_CR60","doi-asserted-by":"crossref","first-page":"103","DOI":"10.3390\/electronics11010103","volume":"11","author":"O El Gannour","year":"2022","unstructured":"El Gannour O et al (2022) Concatenation of pre-trained convolutional neural networks for enhanced COVID-19 screening using transfer learning technique. Electronics 11(1):103","journal-title":"Electronics"},{"issue":"2","key":"1603_CR61","doi-asserted-by":"crossref","first-page":"168","DOI":"10.36548\/jtcsst.2023.2.006","volume":"5","author":"S Shrestha","year":"2023","unstructured":"Shrestha S, Gautam S, Sharma K, Bhandari A (2023) Winnowing algorithm: a powerful tool for identifying plagiarism in assignment. 
J Trends Comput Sci Smart Technol 5(2):168\u2013189","journal-title":"J Trends Comput Sci Smart Technol"},{"issue":"9","key":"1603_CR62","doi-asserted-by":"crossref","first-page":"4233","DOI":"10.3390\/app11094233","volume":"11","author":"B Pal","year":"2021","unstructured":"Pal B et al (2021) Vulnerability in deep transfer learning models to adversarial fast gradient sign attack for COVID-19 prediction from chest radiography images. Appl Sci 11(9):4233","journal-title":"Appl Sci"},{"issue":"20","key":"1603_CR63","doi-asserted-by":"crossref","first-page":"14963","DOI":"10.1007\/s00521-023-08498-w","volume":"35","author":"Y El Sayed","year":"2023","unstructured":"El Sayed Y (2023) An automatic improved facial expression recognition for masked faces. Neural Comput Appl 35(20):14963\u201314972","journal-title":"Neural Comput Appl"},{"issue":"1","key":"1603_CR64","doi-asserted-by":"crossref","first-page":"387","DOI":"10.1007\/s11063-021-10636-1","volume":"54","author":"H Filali","year":"2022","unstructured":"Filali H et al (2022) Meaningful learning for deep facial emotional features. Neural Process Lett 54(1):387\u2013404","journal-title":"Neural Process Lett"},{"issue":"2","key":"1603_CR65","first-page":"1","volume":"46","author":"N Sharmili","year":"2023","unstructured":"Sharmili N et al (2023) Earthworm optimization with improved SqueezeNet enabled facial expression recognition model. Comput Syst Sci Eng 46(2):1\u20131635","journal-title":"Comput Syst Sci Eng"},{"issue":"7","key":"1603_CR66","doi-asserted-by":"crossref","first-page":"9543","DOI":"10.1007\/s12652-023-04627-4","volume":"14","author":"S Indolia","year":"2023","unstructured":"Indolia S et al (2023) A framework for facial expression recognition using deep self-attention network. 
J Ambient Intell Humaniz Comput 14(7):9543\u20139562","journal-title":"J Ambient Intell Humaniz Comput"},{"key":"1603_CR67","doi-asserted-by":"crossref","unstructured":"Yuan L et al (2021) Tokens-to-Token VIT: Training Vision Transformers from Scratch on ImageNet. In: Proceedings of the IEEE\/CVF International Conference On Computer Vision. pp 558\u2013567","DOI":"10.1109\/ICCV48922.2021.00060"},{"issue":"10","key":"1603_CR68","doi-asserted-by":"crossref","first-page":"8425","DOI":"10.1038\/s41598-023-35446-4","volume":"13","author":"Z-Y Huang","year":"2023","unstructured":"Huang Z-Y et al (2023) A study on computer vision for facial emotion recognition. Sci Rep 13(10):8425","journal-title":"Sci Rep"},{"issue":"6","key":"1603_CR69","doi-asserted-by":"crossref","first-page":"4435","DOI":"10.1016\/j.aej.2021.09.066","volume":"61","author":"Y Nan","year":"2022","unstructured":"Nan Y et al (2022) A-MobileNet: an approach of facial expression recognition. Alex Eng J 61(6):4435\u20134444","journal-title":"Alex Eng J"},{"issue":"4","key":"1603_CR70","doi-asserted-by":"crossref","first-page":"1235","DOI":"10.1007\/s12530-023-09557-2","volume":"15","author":"A Saxena","year":"2024","unstructured":"Saxena A et al (2024) A comprehensive evaluation of Marine predator chaotic algorithm for feature selection of COVID-19. Evol Syst 15(4):1235\u20131248","journal-title":"Evol Syst"},{"issue":"28","key":"1603_CR71","doi-asserted-by":"crossref","first-page":"71721","DOI":"10.1007\/s11042-024-18327-4","volume":"83","author":"AA Joshi","year":"2024","unstructured":"Joshi AA, Rabia MA (2024) A two-phase cuckoo search based approach for gene selection and deep learning classification of cancer disease using gene expression data with a novel fitness function. 
Multimed Tools Appl 83(28):71721\u201371752","journal-title":"Multimed Tools Appl"},{"key":"1603_CR72","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1002\/9781394233953.ch4","volume-title":"Metaheuristics for machine learning: algorithms and applications","author":"A Yaqoob","year":"2024","unstructured":"Yaqoob A et al (2024) Enhancing feature selection through metaheuristic hybrid cuckoo search and Harris Hawks optimization for cancer classification. Metaheuristics for machine learning: algorithms and applications. Wiley, pp 95\u2013134"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01603-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01603-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01603-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,16]],"date-time":"2024-10-16T22:20:43Z","timestamp":1729117243000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01603-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,28]]},"references-count":72,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2024,12]]}},"alternative-id":["1603"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01603-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,8,28]]},"assertion":[{"value":"27 April 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"29 July 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 August 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us. No potential competing interest was reported by the authors.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}