{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,29]],"date-time":"2026-01-29T19:26:46Z","timestamp":1769714806926,"version":"3.49.0"},"reference-count":18,"publisher":"SAGE Publications","issue":"3","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2023,8,24]]},"abstract":"<jats:p>Sign language recognition is a significant cross-modal way to bridge the communication gap between deaf and hearing people. Automatic Sign Language Recognition (ASLR) translates sign language gestures into text and spoken words. Most researchers focus on either manual or non-manual gestures separately; concurrent recognition of both is rarely addressed. Facial expressions and other body movements can improve the accuracy rate and convey a sign\u2019s exact meaning. This paper proposes a Multimodal Sign Language Recognition (MM-SLR) framework that recognizes non-manual features based on facial expressions along with manual gestures in the spatio-temporal domain representing hand movements in ASLR. The proposed architecture has three modules: first, a modified YOLOv5 architecture extracts faces and hands from videos as two regions of interest; second, a refined C3D architecture extracts features from the hand and face regions, and the features of both modalities are concatenated; lastly, an LSTM network produces spatio-temporal descriptors, and attention-based sequential modules perform gesture classification. To validate the proposed framework, we used three publicly available datasets: RWTH-PHOENIX-Weather-2014T, SILFA and PkSLMNM. 
Experimental results show that the proposed MM-SLR framework outperforms comparable approaches on all three datasets.<\/jats:p>","DOI":"10.3233\/jifs-230560","type":"journal-article","created":{"date-parts":[[2023,6,16]],"date-time":"2023-06-16T10:51:08Z","timestamp":1686912668000},"page":"3823-3833","source":"Crossref","is-referenced-by-count":7,"title":["Manual and non-manual sign language recognition framework using hybrid deep learning techniques"],"prefix":"10.1177","volume":"45","author":[{"given":"Sameena","family":"Javaid","sequence":"first","affiliation":[{"name":"Department of Computer Sciences, School of Engineering and Applied Sciences, Bahria University, Karachi Campus, Karachi, Pakistan"}]},{"given":"Safdar","family":"Rizvi","sequence":"additional","affiliation":[{"name":"Department of Computer Sciences, School of Engineering and Applied Sciences, Bahria University, Karachi Campus, Karachi, Pakistan"}]}],"member":"179","reference":[{"key":"10.3233\/JIFS-230560_ref1","doi-asserted-by":"crossref","first-page":"105198","DOI":"10.1016\/j.engappai.2022.105198","article-title":"A comprehensive survey and taxonomy of sign language research","volume":"114","author":"El-Alfy","year":"2022","journal-title":"Engineering Applications of Artificial Intelligence"},{"key":"10.3233\/JIFS-230560_ref3","doi-asserted-by":"crossref","first-page":"113794","DOI":"10.1016\/j.eswa.2020.113794","article-title":"Sign language recognition: A deep survey","volume":"164","author":"Rastgoo","year":"2021","journal-title":"Expert Systems with Applications"},{"issue":"3","key":"10.3233\/JIFS-230560_ref6","doi-asserted-by":"crossref","first-page":"785","DOI":"10.1007\/s11831-019-09384-2","article-title":"Sign language recognition systems: A decade systematic literature review","volume":"28","author":"Wadhawan","year":"2021","journal-title":"Archives of Computational Methods in 
Engineering"},{"issue":"5","key":"10.3233\/JIFS-230560_ref7","doi-asserted-by":"publisher","first-page":"1877","DOI":"10.1016\/j.patcog.2011.10.026","article-title":"Facial expressions in American sign language: Tracking and recognition","volume":"45","author":"Nguyen","year":"2012","journal-title":"Pattern Recognition"},{"issue":"2","key":"10.3233\/JIFS-230560_ref8","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1504\/IJAPR.2016.079048","article-title":"A survey on manual and non-manual sign language recognition for isolated and continuous sign","volume":"3","author":"Agrawal","year":"2016","journal-title":"International Journal of Applied Pattern Recognition"},{"issue":"1","key":"10.3233\/JIFS-230560_ref9","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s11220-019-0225-3","article-title":"Multiple proposals for continuous arabic sign language recognition","volume":"20","author":"Hassan","year":"2019","journal-title":"Sensing and Imaging"},{"issue":"9","key":"10.3233\/JIFS-230560_ref10","doi-asserted-by":"crossref","first-page":"425","DOI":"10.1007\/s00521-016-2522-2","article-title":"An exposition of facial expression recognition techniques","volume":"29","author":"Saeed","year":"2018","journal-title":"Neural Computing and Applications"},{"issue":"10","key":"10.3233\/JIFS-230560_ref12","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.3390\/sym11101189","article-title":"Facial expression recognition: A survey","volume":"11","author":"Huang","year":"2019","journal-title":"Symmetry"},{"issue":"2","key":"10.3233\/JIFS-230560_ref16","doi-asserted-by":"crossref","first-page":"2","DOI":"10.33411\/IJIST\/2022040225","article-title":"Interpretation of Expressions through Hand Signs Using Deep Learning Techniques","volume":"4","author":"Javaid","year":"2022","journal-title":"International Journal of Innovations in Science and 
Technology"},{"issue":"16","key":"10.3233\/JIFS-230560_ref17","doi-asserted-by":"crossref","first-page":"2051","DOI":"10.1016\/j.patrec.2013.06.022","article-title":"Robust sign language recognition by combining manual and non-manual features based on conditional random field and support vector machine","volume":"34","author":"Yang","year":"2013","journal-title":"Pattern Recognition Letters"},{"key":"10.3233\/JIFS-230560_ref18","doi-asserted-by":"crossref","first-page":"30","DOI":"10.1016\/j.ins.2017.10.046","article-title":"Independent bayesian classifier combination based sign language recognition using facial expression","volume":"428","author":"Kumar","year":"2018","journal-title":"Information Sciences"},{"key":"10.3233\/JIFS-230560_ref22","doi-asserted-by":"crossref","first-page":"462","DOI":"10.1016\/j.neucom.2021.08.079","article-title":"Enhancing Neural Sign Language Translation by highlighting the facial expression information","volume":"464","author":"Zheng","year":"2021","journal-title":"Neurocomputing"},{"issue":"14","key":"10.3233\/JIFS-230560_ref23","doi-asserted-by":"crossref","first-page":"1739","DOI":"10.3390\/electronics10141739","article-title":"Towards hybrid multimodal manual and non-manual Arabic sign language recognition: MArSL database and pilot study","volume":"10","author":"Luqman","year":"2021","journal-title":"Electronics"},{"key":"10.3233\/JIFS-230560_ref24","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1016\/j.cola.2019.04.002","article-title":"Multi modal spatio temporal co-trained CNNs with single modal testing on RGB\u2013D based sign language gesture recognition","volume":"52","author":"Ravi","year":"2019","journal-title":"Journal of Computer Languages"},{"key":"10.3233\/JIFS-230560_ref25","doi-asserted-by":"crossref","first-page":"30","DOI":"10.1016\/j.ins.2017.10.046","article-title":"Independent bayesian classifier combination based sign language recognition using facial 
expression","volume":"428","author":"Kumar","year":"2018","journal-title":"Information Sciences"},{"key":"10.3233\/JIFS-230560_ref26","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1016\/j.cola.2019.04.002","article-title":"Multi modal spatio temporal co-trained CNNs with single modal testing on RGB\u2013D based sign language gesture recognition","volume":"52","author":"Ravi","year":"2019","journal-title":"Journal of Computer Languages"},{"issue":"2","key":"10.3233\/JIFS-230560_ref27","doi-asserted-by":"crossref","first-page":"217","DOI":"10.3390\/f12020217","article-title":"A forest fire detection system based on ensemble learning","volume":"12","author":"Xu","year":"2021","journal-title":"Forests"},{"issue":"1","key":"10.3233\/JIFS-230560_ref29","doi-asserted-by":"publisher","first-page":"1","DOI":"10.32604\/cmc.2023.031924","article-title":"A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition","volume":"74","author":"Javaid","year":"2022","journal-title":"Computers, Materials and Continua"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-230560","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,29]],"date-time":"2026-01-29T08:55:56Z","timestamp":1769676956000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-230560"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,24]]},"references-count":18,"journal-issue":{"issue":"3"},"URL":"https:\/\/doi.org\/10.3233\/jifs-230560","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,24]]}}}