{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T17:17:14Z","timestamp":1771003034340,"version":"3.50.1"},"reference-count":26,"publisher":"SAGE Publications","issue":"5","license":[{"start":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T00:00:00Z","timestamp":1746662400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Computational Methods in Sciences and Engineering"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>The effectiveness of online music education relies heavily on understanding and addressing students\u2019 emotional states, which can impact engagement and learning outcomes. This paper presents a novel emotion recognition method based on an improved frame attention network (IFAN), designed specifically for online music teaching. The method utilizes facial expression data to identify four key emotional states\u2014pleasure, concentration, confusion, and boredom\u2014by introducing deformable convolution to better capture dynamic facial features and a feature aggregation module to enhance emotional temporal patterns. The proposed model achieves recognition accuracies of 96%, 94%, 93%, and 98% for each emotional state, outperforming existing emotion recognition methods. Experimental results indicate that the model is robust and highly accurate in the context of online music education. 
This research provides a foundation for real-time emotion recognition systems in online teaching environments, with potential for future work to incorporate multimodal data, such as audio and physiological signals, to further enhance model performance.<\/jats:p>","DOI":"10.1177\/14727978251341493","type":"journal-article","created":{"date-parts":[[2025,5,8]],"date-time":"2025-05-08T20:28:26Z","timestamp":1746736106000},"page":"4751-4761","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":0,"title":["Analysis of students\u2019 emotion based on visual clues in online music teaching"],"prefix":"10.1177","volume":"25","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-1414-4878","authenticated-orcid":false,"given":"Xiaoping","family":"Yi","sequence":"first","affiliation":[{"name":"College of Education Science, Guangxi Minzu Normal University, Chongzuo, China"}]}],"member":"179","published-online":{"date-parts":[[2025,5,8]]},"reference":[{"issue":"2","key":"e_1_3_5_2_2","first-page":"47","article-title":"Research and implementation of data-driven online learning burnout early warning model","volume":"42","author":"Huang CQ","year":"2021","unstructured":"Huang CQ, Tu YX, Yu JH, et al. Research and implementation of data-driven online learning burnout early warning model. E-education Res 2021; 42(2): 47\u201354.","journal-title":"E-education Res"},{"issue":"7","key":"e_1_3_5_3_2","first-page":"65","article-title":"Solutions to the lack of teaching emotion between teachers and students in online teaching under the epidemic situation","author":"Guo CL","year":"2021","unstructured":"Guo CL. Solutions to the lack of teaching emotion between teachers and students in online teaching under the epidemic situation. 
The Chinese J ICT in Educ 2021; (7): 65\u201369.","journal-title":"The Chinese J ICT in Educ"},{"key":"e_1_3_5_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2020.3037903"},{"key":"e_1_3_5_5_2","doi-asserted-by":"publisher","DOI":"10.3103\/S1060992X22030055"},{"issue":"1","key":"e_1_3_5_6_2","first-page":"32","article-title":"Research on recognition method of learning participation from the perspective of artificial intelligence analysis of deep learning experiment based on multimodal data fusion","author":"Cao XM","year":"2019","unstructured":"Cao XM, Zhang YH, Pan M, et al. Research on recognition method of learning participation from the perspective of artificial intelligence analysis of deep learning experiment based on multimodal data fusion. J Distance Educ 2019; (1): 32\u201344.","journal-title":"J Distance Educ"},{"issue":"3","key":"e_1_3_5_7_2","first-page":"186","article-title":"Research on emotion recognition based on eye movement features in online learning environment","volume":"31","author":"Tao XM","year":"2021","unstructured":"Tao XM, Chen XY. Research on emotion recognition based on eye movement features in online learning environment. Comput Technol Dev 2021; 31(3): 186\u2013190.","journal-title":"Comput Technol Dev"},{"issue":"3","key":"e_1_3_5_8_2","first-page":"29","article-title":"Emotion recognition and education","author":"Yu ZT","year":"2019","unstructured":"Yu ZT, Li XB, Zhao GY. Emotion recognition and education. Artif Intell 2019; (3): 29\u201336.","journal-title":"Artif Intell"},{"issue":"2","key":"e_1_3_5_9_2","first-page":"1","article-title":"Facial emotion recognition using temporal relational network: an application to E-learning","volume":"81","author":"Pise A","year":"2020","unstructured":"Pise A, Vadapall H, Sanders I. Facial emotion recognition using temporal relational network: an application to E-learning. 
Multimed Tool Appl 2020; 81(2): 1\u201312.","journal-title":"Multimed Tool Appl"},{"key":"e_1_3_5_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2020.3003243"},{"key":"e_1_3_5_11_2","doi-asserted-by":"publisher","DOI":"10.1080\/10447318.2018.1469710"},{"key":"e_1_3_5_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10639-019-10004-6"},{"key":"e_1_3_5_13_2","volume-title":"Emotion recognition of learning picture in intelligent learning environment and its application","author":"Xu ZG","year":"2019","unstructured":"Xu ZG. Emotion recognition of learning picture in intelligent learning environment and its application. Jinan: Shandong Normal University, 2019."},{"key":"e_1_3_5_14_2","volume-title":"Design and implementation of emotion analysis model for online learning based on multimodality","author":"Ma YT","year":"2019","unstructured":"Ma YT. Design and implementation of emotion analysis model for online learning based on multimodality. Nanjing: Nanjing Normal University, 2019."},{"issue":"12","key":"e_1_3_5_15_2","first-page":"82","article-title":"Classroom student state analysis based on artificial intelligence video processing","volume":"29","author":"Jia LY","year":"2019","unstructured":"Jia LY, Zhang ZH, Zhao XY, et al. Classroom student state analysis based on artificial intelligence video processing. Mod Educ Technol 2019; 29(12): 82\u201388.","journal-title":"Mod Educ Technol"},{"key":"e_1_3_5_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIM.2018.2879706"},{"issue":"7","key":"e_1_3_5_17_2","first-page":"12","article-title":"Opinion mining and emotion recognition applied to learning environments","volume":"150","author":"Estrada MLB","year":"2020","unstructured":"Estrada MLB, Cabada RZ, Bustillos RO, et al. Opinion mining and emotion recognition applied to learning environments. 
Expert Syst Appl 2020; 150(7): 12\u201323.","journal-title":"Expert Syst Appl"},{"issue":"11","key":"e_1_3_5_18_2","first-page":"1604","article-title":"Mining of educational opinions with deep learning","author":"Cabada RZ","year":"2018","unstructured":"Cabada RZ, Estrada MLB, Bustillos RO. Mining of educational opinions with deep learning. J Univers Comput Sci 2018; (11): 1604\u20131626.","journal-title":"J Univers Comput Sci"},{"key":"e_1_3_5_19_2","doi-asserted-by":"crossref","unstructured":"Balalahadia FF Fernando M Juanatas IC. Teacher\u2019s performance evaluation tool using opinion mining with sentiment analysis. In: 2016 IEEE Region 10 Symposium (TENSYMP) Bali Indonesia 09\u201311 May 2016.","DOI":"10.1109\/TENCONSpring.2016.7519384"},{"key":"e_1_3_5_20_2","article-title":"Multilogue-net: a context-aware RNN for multi-modal emotion detection and sentiment analysis in conversation","author":"Shenoy A","year":"2020","unstructured":"Shenoy A, Sardana A. Multilogue-net: a context-aware RNN for multi-modal emotion detection and sentiment analysis in conversation. arXiv preprint arXiv: 2002.08267 2020.","journal-title":"arXiv preprint arXiv: 2002.08267"},{"key":"e_1_3_5_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_5_22_2","doi-asserted-by":"crossref","unstructured":"Meng D Peng X Wang K et al. Frame attention networks for facial expression recognition in videos. In: 2019 IEEE international conference on image processing(ICIP) 2019 pp. 3866\u20133870. Taipei Taiwan.","DOI":"10.1109\/ICIP.2019.8803603"},{"key":"e_1_3_5_23_2","volume-title":"Research on non-contact heart rate detection based on face video","author":"Wei ZK","year":"2022","unstructured":"Wei ZK. Research on non-contact heart rate detection based on face video. Chengdu: Xihua University, 2022."},{"key":"e_1_3_5_24_2","first-page":"87","volume-title":"European conference on computer vision","author":"Guo Y","year":"2016","unstructured":"Guo Y, Zhang L, Hu Y, et al. 
MS-Celeb-1M: a dataset and benchmark for large-scale face recognition. In: European conference on computer vision. Cham: Springer, 2016, pp. 87\u2013102."},{"key":"e_1_3_5_25_2","doi-asserted-by":"publisher","DOI":"10.1145\/2993148.2993165"},{"key":"e_1_3_5_26_2","doi-asserted-by":"publisher","DOI":"10.1142\/S0218126622501390"},{"key":"e_1_3_5_27_2","doi-asserted-by":"publisher","DOI":"10.1186\/s41239-022-00370-6"}],"container-title":["Journal of Computational Methods in Sciences and Engineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/14727978251341493","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.1177\/14727978251341493","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/14727978251341493","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T16:31:15Z","timestamp":1771000275000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.1177\/14727978251341493"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,8]]},"references-count":26,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["10.1177\/14727978251341493"],"URL":"https:\/\/doi.org\/10.1177\/14727978251341493","relation":{},"ISSN":["1472-7978","1875-8983"],"issn-type":[{"value":"1472-7978","type":"print"},{"value":"1875-8983","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,8]]}}}