{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T17:18:37Z","timestamp":1771003117488,"version":"3.50.1"},"reference-count":21,"publisher":"SAGE Publications","issue":"1","license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Computational Methods in Sciences and Engineering"],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:p>The significance of lip recognition lies in using visual information to compensate for auditory information; it is widely applied in speech impairment recognition, security monitoring, computer-aided, virtual reality, and other fields. The biggest challenge in lip language recognition lies in the diversity of recognition objects and environmental interference, and the key to addressing it is to enhance the robustness of the model to external influences. This study proposes a lip language recognition algorithm that combines a reverse double-layer long short-term memory with a position-sensitive attention mechanism. An ablation experiment demonstrated the effectiveness of the proposed improvement, with an accuracy gain of 15.19%. Three factors, namely resolution, video length, and shooting angle, were varied to verify the robustness of the proposed algorithm to external factors. These experiments confirm that the proposed algorithm performs well under different resolution conditions and video lengths; compared to long short-term memory and recurrent neural networks, its accuracy improves by 22.5% and 26.5%, respectively. The proposed algorithm is almost unaffected by different shooting angles, with minimal fluctuations in accuracy; compared to long short-term memory and recurrent neural networks, it improves by 61% and 67%, respectively. In summary, the proposed algorithm for human\u2013computer interaction lip recognition achieves better accuracy and higher robustness in the face of external interference.<\/jats:p>","DOI":"10.1177\/14727978251321690","type":"journal-article","created":{"date-parts":[[2025,3,11]],"date-time":"2025-03-11T19:51:36Z","timestamp":1741722696000},"page":"876-887","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":0,"title":["LSTM algorithm for human machine interaction lip recognition"],"prefix":"10.1177","volume":"25","author":[{"given":"Na","family":"Liang","sequence":"first","affiliation":[{"name":"Shijiazhuang College of Applied Technology"}]}],"member":"179","published-online":{"date-parts":[[2025,3,11]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1108\/ECAM-12-2019-0732"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.05.104"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.yebeh.2020.107600"},{"issue":"6","key":"e_1_3_2_5_2","first-page":"643","article-title":"A novel method for lip movement detection using deep neural network","volume":"81","author":"Srilakshmi K","year":"2022","unstructured":"Srilakshmi K, Karthik R. A novel method for lip movement detection using deep neural network. J Sci Ind Res 2022; 81(6): 643\u2013650.","journal-title":"J Sci Ind Res"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jmp.2020.102404"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1049\/ipr2.12337"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2020.01.022"},{"issue":"12","key":"e_1_3_2_9_2","first-page":"7266","article-title":"Hearme: Accurate and real-time lip reading based on commercial rfid devices","volume":"22","author":"Zhang SG","year":"2023","unstructured":"Zhang SG, Ma ZJ, Lu KX, et al. Hearme: Accurate and real-time lip reading based on commercial rfid devices. IEEE T Mobile Comput 2023; 22(12): 7266\u20137278.","journal-title":"IEEE T Mobile Comput"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.1117\/1.JEI.30.6.063003"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1002\/smll.202205058"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22813"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.11.089"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2022.07.016"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2022.04.014"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00170-021-07911-9"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2021.3095855"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.3233\/IDA-205524"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.07.018"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1021\/acs.jproteome.0c00431"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2021.3102433"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-022-00550-z"}],"container-title":["Journal of Computational Methods in Sciences and Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/14727978251321690","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.1177\/14727978251321690","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/14727978251321690","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T16:31:28Z","timestamp":1771000288000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.1177\/14727978251321690"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1]]},"references-count":21,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["10.1177\/14727978251321690"],"URL":"https:\/\/doi.org\/10.1177\/14727978251321690","relation":{},"ISSN":["1472-7978","1875-8983"],"issn-type":[{"value":"1472-7978","type":"print"},{"value":"1875-8983","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1]]}}}