{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,2]],"date-time":"2026-04-02T21:27:07Z","timestamp":1775165227939,"version":"3.50.1"},"reference-count":210,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,11,12]],"date-time":"2022-11-12T00:00:00Z","timestamp":1668211200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,11,12]],"date-time":"2022-11-12T00:00:00Z","timestamp":1668211200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"JSPS KAKENHI","award":["20H04294"],"award-info":[{"award-number":["20H04294"]}]},{"name":"Photron limited"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Smart Learn. Environ."],"abstract":"<jats:title>Abstract<\/jats:title><jats:sec>\n                <jats:title>Background<\/jats:title>\n                <jats:p>Recognizing learners\u2019 engagement during learning processes is important for providing personalized pedagogical support and preventing dropouts. As learning processes shift from traditional offline classrooms to distance learning, methods for automatically identifying engagement levels should be developed.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Objective<\/jats:title>\n                <jats:p>This article aims to present a literature review of recent developments in automatic engagement estimation, including engagement definitions, datasets, and machine learning-based methods for automation estimation. 
The information, figures, and tables presented in this review aim to provide new researchers with insight into automatic engagement estimation to enhance smart learning with automatic engagement recognition methods.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Methods<\/jats:title>\n                <jats:p>A literature search was carried out using Scopus, Mendeley references, the IEEE Xplore digital library, and ScienceDirect following the four phases of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA): identification, screening, eligibility, and inclusion. The selected studies included research articles published between 2010 and 2022 that focused on three research questions (RQs) related to the engagement definitions, datasets, and methods used in the literature. The article selection excluded books, magazines, news articles, and posters.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Results<\/jats:title>\n                <jats:p>Forty-seven articles were selected to address the RQs and discuss engagement definitions, datasets, and methods. First, we introduce a clear taxonomy that defines engagement according to different types and the components used to measure it. Guided by this taxonomy, we reviewed the engagement types defined in the selected articles, with emotional engagement (n = 40; 65.57%) measured by affective cues appearing most often (n = 38; 57.58%). Then, we reviewed engagement and engagement-related datasets in the literature, with most studies assessing engagement with external observations (n = 20; 43.48%) and self-reported measures (n = 9; 19.57%). 
Finally, we summarized machine learning (ML)-based methods, including deep learning, used in the literature.<\/jats:p>\n              <\/jats:sec><jats:sec>\n                <jats:title>Conclusions<\/jats:title>\n                <jats:p>This review examines engagement definitions, datasets and ML-based methods from forty-seven selected articles. A taxonomy and three tables are presented to address three RQs and provide researchers in this field with guidance on enhancing smart learning with automatic engagement recognition. However, several key challenges remain, including cognitive and personalized engagement and ML issues that may affect real-world implementations.<\/jats:p>\n              <\/jats:sec>","DOI":"10.1186\/s40561-022-00212-y","type":"journal-article","created":{"date-parts":[[2022,11,12]],"date-time":"2022-11-12T09:03:04Z","timestamp":1668243784000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":42,"title":["Automatic engagement estimation in smart education\/learning settings: a systematic review of engagement definitions, datasets, and methods"],"prefix":"10.1186","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5721-3205","authenticated-orcid":false,"given":"Shofiyati Nur","family":"Karimah","sequence":"first","affiliation":[]},{"given":"Shinobu","family":"Hasegawa","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,11,12]]},"reference":[{"key":"212_CR1","doi-asserted-by":"publisher","unstructured":"Abdellaoui, B., Moumen, A., El\u00a0Bouzekri El\u00a0Idrissi, Y. & Remaida, A. (2020). Face detection to recognize students\u2019 emotion and their engagement: A systematic review. In: 2020 IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), pp. 
1\u20136 https:\/\/doi.org\/10.1109\/ICECOCS50124.2020.9314600","DOI":"10.1109\/ICECOCS50124.2020.9314600"},{"key":"212_CR2","doi-asserted-by":"publisher","unstructured":"Abedi, A. & Khan, S.S. (2021). Improving state-of-the-art in detecting student engagement with Resnet and TCN hybrid network. In: 2021 18th Conference on Robots and Vision (CRV), pp. 151\u2013157 https:\/\/doi.org\/10.1109\/CRV52889.2021.00028","DOI":"10.1109\/CRV52889.2021.00028"},{"key":"212_CR3","unstructured":"ACM International Conference on Multimodal Interaction 2020: Eighth Emotion Recognition in the Wild Challenge (EmotiW) (2020). https:\/\/sites.google.com\/view\/emotiw2020\/challenge-details"},{"key":"212_CR4","doi-asserted-by":"publisher","unstructured":"Akker, R., Hofs, D., Hondorp, H., Akker, H., Zwiers, J. & Nijholt, A. (2009). Supporting engagement and floor control in hybrid meetings, pp. 276\u2013290 https:\/\/doi.org\/10.1007\/978-3-642-03320-9_26","DOI":"10.1007\/978-3-642-03320-9_26"},{"issue":"3","key":"212_CR5","doi-asserted-by":"publisher","first-page":"374","DOI":"10.1109\/TAFFC.2017.2714671","volume":"10","author":"SM Alarc\u00e3o","year":"2019","unstructured":"Alarc\u00e3o, S. M., & Fonseca, M. J. (2019). Emotions recognition using EEG signals: A survey. IEEE Transactions on Affective Computing, 10(3), 374\u2013393. https:\/\/doi.org\/10.1109\/TAFFC.2017.2714671.","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"2","key":"212_CR6","doi-asserted-by":"publisher","first-page":"87","DOI":"10.2307\/2673158","volume":"70","author":"KL Alexander","year":"1997","unstructured":"Alexander, K. L., Entwisle, D. R., & Horsey, C. S. (1997). From first grade forward: Early foundations of high school dropout. Sociology of Education, 70(2), 87. 
https:\/\/doi.org\/10.2307\/2673158.","journal-title":"Sociology of Education"},{"issue":"7","key":"212_CR7","doi-asserted-by":"publisher","first-page":"1387","DOI":"10.1007\/s11760-021-01869-7","volume":"15","author":"K Altuwairqi","year":"2021","unstructured":"Altuwairqi, K., Jarraya, S. K., Allinjawi, A., & Hammami, M. (2021). Student behavior analysis to measure engagement levels in online learning environments. Signal, Image and Video Processing, 15(7), 1387\u20131395. https:\/\/doi.org\/10.1007\/s11760-021-01869-7.","journal-title":"Signal, Image and Video Processing"},{"issue":"1","key":"212_CR8","doi-asserted-by":"publisher","first-page":"99","DOI":"10.1016\/j.jksuci.2018.12.008","volume":"33","author":"K Altuwairqi","year":"2021","unstructured":"Altuwairqi, K., Jarraya, S. K., Allinjawi, A., & Hammami, M. (2021). A new emotion-based affective model to detect student\u2019s engagement. Journal of King Saud University\u2013Computer and Information Sciences, 33(1), 99\u2013109. https:\/\/doi.org\/10.1016\/j.jksuci.2018.12.008.","journal-title":"Journal of King Saud University\u2013Computer and Information Sciences"},{"issue":"3","key":"212_CR9","doi-asserted-by":"publisher","first-page":"298","DOI":"10.1109\/T-AFFC.2012.4","volume":"3","author":"O AlZoubi","year":"2012","unstructured":"AlZoubi, O., D\u2019Mello, S. K., & Calvo, R. A. (2012). Detecting naturalistic expressions of nonbasic affect using physiological signals. IEEE Transactions on Affective Computing, 3(3), 298\u2013310. https:\/\/doi.org\/10.1109\/T-AFFC.2012.4.","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"1","key":"212_CR10","doi-asserted-by":"publisher","first-page":"5857","DOI":"10.1038\/s41598-022-09578-y","volume":"12","author":"A Apicella","year":"2022","unstructured":"Apicella, A., Arpaia, P., Frosolone, M., Improta, G., Moccaldi, N., & Pollastro, A. (2022). EEG-based measurement system for monitoring student engagement in learning 4.0. 
Scientific Reports, 12(1), 5857. https:\/\/doi.org\/10.1038\/s41598-022-09578-y.","journal-title":"Scientific Reports"},{"key":"212_CR11","doi-asserted-by":"publisher","first-page":"334","DOI":"10.1016\/j.future.2020.02.075","volume":"108","author":"TS Ashwin","year":"2020","unstructured":"Ashwin, T. S., & Guddeti, R. M. R. (2020). Affective database for e-learning and classroom environments using Indian students\u2019 faces, hand gestures and body postures. Future Generation Computer Systems, 108, 334\u2013348. https:\/\/doi.org\/10.1016\/j.future.2020.02.075.","journal-title":"Future Generation Computer Systems"},{"issue":"2","key":"212_CR12","doi-asserted-by":"publisher","first-page":"1387","DOI":"10.1007\/s10639-019-10004-6","volume":"25","author":"TS Ashwin","year":"2020","unstructured":"Ashwin, T. S., & Guddeti, R. M. R. (2020). Automatic detection of students\u2019 affective states in classroom environment using hybrid convolutional neural networks. Education and Information Technologies, 25(2), 1387\u20131415. https:\/\/doi.org\/10.1007\/s10639-019-10004-6.","journal-title":"Education and Information Technologies"},{"issue":"5","key":"212_CR13","doi-asserted-by":"publisher","first-page":"759","DOI":"10.1007\/s11257-019-09254-3","volume":"30","author":"TS Ashwin","year":"2020","unstructured":"Ashwin, T. S., & Guddeti, R. M. R. (2020). Impact of inquiry interventions on students in e-learning and classroom environments using affective computing framework. User Modeling and User-Adapted Interaction, 30(5), 759\u2013801. https:\/\/doi.org\/10.1007\/s11257-019-09254-3.","journal-title":"User Modeling and User-Adapted Interaction"},{"issue":"1","key":"212_CR14","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1080\/00461520.2015.1004069","volume":"50","author":"R Azevedo","year":"2015","unstructured":"Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. 
Educational Psychologist, 50(1), 84\u201394. https:\/\/doi.org\/10.1080\/00461520.2015.1004069.","journal-title":"Educational Psychologist"},{"key":"212_CR15","doi-asserted-by":"publisher","unstructured":"Ba, S.O. & Odobez, J.-M. (2006). Head pose tracking and focus of attention recognition algorithms in meeting rooms. In: Multimodal Technologies for Perception of Humans, pp. 345\u2013357. Springer. https:\/\/doi.org\/10.1007\/978-3-540-69568-4_32","DOI":"10.1007\/978-3-540-69568-4_32"},{"key":"212_CR16","unstructured":"Bahdanau, D., Cho, K. & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate"},{"key":"212_CR17","doi-asserted-by":"publisher","unstructured":"Baltrusaitis, T., Robinson, P. & Morency, L.-P. (2013). Constrained local neural fields for robust facial landmark detection in the wild. In: 2013 IEEE International Conference on Computer Vision Workshops, pp. 354\u2013361. https:\/\/doi.org\/10.1109\/ICCVW.2013.54","DOI":"10.1109\/ICCVW.2013.54"},{"key":"212_CR18","doi-asserted-by":"publisher","unstructured":"Baltrusaitis, T., Robinson, P. & Morency, L.-P. (2016). OpenFace: An open source facial behavior analysis toolkit. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1\u201310 https:\/\/doi.org\/10.1109\/WACV.2016.7477553","DOI":"10.1109\/WACV.2016.7477553"},{"key":"212_CR19","doi-asserted-by":"publisher","unstructured":"Baltrusaitis, T., Zadeh, A., Lim, Y.C. & Morency, L.-P. (2018). OpenFace 2.0: Facial behavior analysis toolkit. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 59\u201366 https:\/\/doi.org\/10.1109\/FG.2018.00019","DOI":"10.1109\/FG.2018.00019"},{"issue":"2","key":"212_CR20","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2764921","volume":"5","author":"T Baur","year":"2015","unstructured":"Baur, T., Mehlmann, G., Damian, I., Lingenfelser, F., Wagner, J., Lugrin, B., et al. (2015). 
Context-aware automated analysis and annotation of human\u2013agent interactions. ACM Transactions on Interactive Intelligent Systems, 5(2), 1\u201333. https:\/\/doi.org\/10.1145\/2764921.","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"key":"212_CR21","doi-asserted-by":"publisher","unstructured":"Bengio, Y. (2011). Deep learning of representations for unsupervised and transfer learning. In: Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning Workshop-Volume 27. UTLW\u201911, pp. 17\u201337. https:\/\/doi.org\/10.5555\/3045796.3045800","DOI":"10.5555\/3045796.3045800"},{"key":"212_CR22","doi-asserted-by":"publisher","unstructured":"Ben-Youssef, A., Clavel, C., Essid, S., Bilac, M., Chamoux, M. & Lim, A. (2017). UE-HRI: A new dataset for the study of user engagement in spontaneous human-robot interactions. In: Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 464\u2013472. ACM, New York. https:\/\/doi.org\/10.1145\/3136755.3136814","DOI":"10.1145\/3136755.3136814"},{"issue":"3","key":"212_CR23","doi-asserted-by":"publisher","first-page":"776","DOI":"10.1109\/TAFFC.2019.2898399","volume":"12","author":"A Ben-Youssef","year":"2021","unstructured":"Ben-Youssef, A., Clavel, C., & Essid, S. (2021). Early detection of user engagement breakdown in spontaneous human-humanoid interaction. IEEE Transactions on Affective Computing, 12(3), 776\u2013787. https:\/\/doi.org\/10.1109\/TAFFC.2019.2898399.","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"5","key":"212_CR24","doi-asserted-by":"publisher","first-page":"815","DOI":"10.1007\/s12369-019-00591-2","volume":"11","author":"A Ben-Youssef","year":"2019","unstructured":"Ben-Youssef, A., Varni, G., Essid, S., & Clavel, C. (2019). On-the-fly detection of user engagement decrease in spontaneous human-robot interaction using recurrent and deep neural networks. 
International Journal of Social Robotics, 11(5), 815\u2013828. https:\/\/doi.org\/10.1007\/s12369-019-00591-2.","journal-title":"International Journal of Social Robotics"},{"issue":"3","key":"212_CR25","doi-asserted-by":"publisher","first-page":"401","DOI":"10.1162\/jocn\\_a_01274","volume":"31","author":"D Bevilacqua","year":"2019","unstructured":"Bevilacqua, D., Davidesco, I., Wan, L., Chaloner, K., Rowland, J., Ding, M., et al. (2019). Brain-to-brain synchrony and learning outcomes vary by student\u2013teacher dynamics: Evidence from a real-world classroom electroencephalography study. Journal of Cognitive Neuroscience, 31(3), 401\u2013411. https:\/\/doi.org\/10.1162\/jocn_a_01274.","journal-title":"Journal of Cognitive Neuroscience"},{"key":"212_CR26","doi-asserted-by":"publisher","DOI":"10.1016\/j.compeleceng.2021.107277","author":"P Bhardwaj","year":"2021","unstructured":"Bhardwaj, P., Gupta, P. K., Panwar, H., Siddiqui, M. K., Morales-Menendez, R., & Bhaik, A. (2021). Application of deep learning on student engagement in e-learning environments. Computers and Electrical Engineering. https:\/\/doi.org\/10.1016\/j.compeleceng.2021.107277.","journal-title":"Computers and Electrical Engineering"},{"key":"212_CR27","doi-asserted-by":"publisher","unstructured":"Bosch, N. (2016). Detecting student engagement: Human versus machine. UMAP 2016: Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization, pp. 317\u2013320. https:\/\/doi.org\/10.1145\/2930238.2930371","DOI":"10.1145\/2930238.2930371"},{"issue":"2","key":"212_CR28","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2946837","volume":"6","author":"N Bosch","year":"2016","unstructured":"Bosch, N., D\u2019mello, S. K., Ocumpaugh, J., Baker, R. S., & Shute, V. (2016). Using video to automatically detect learner affect in computer-enabled classrooms. ACM Transactions on Interactive Intelligent Systems, 6(2), 1\u201326. 
https:\/\/doi.org\/10.1145\/2946837.","journal-title":"ACM Transactions on Interactive Intelligent Systems"},{"issue":"7","key":"212_CR29","doi-asserted-by":"publisher","first-page":"1145","DOI":"10.1016\/S0031-3203(96)00142-2","volume":"30","author":"AP Bradley","year":"1997","unstructured":"Bradley, A. P. (1997). The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7), 1145\u20131159. https:\/\/doi.org\/10.1016\/S0031-3203(96)00142-2.","journal-title":"Pattern Recognition"},{"key":"212_CR30","unstructured":"Brugman, H. & Russel, A. (2004). Annotating multi-media\/multi-modal resources with ELAN. In: Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC\u201904). European Language Resources Association (ELRA), Lisbon. http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/480.pdf"},{"key":"212_CR31","doi-asserted-by":"publisher","unstructured":"Cao, Q., Shen, L., Xie, W., Parkhi, O.M. & Zisserman, A. (2018). VGGFace2: A dataset for recognising faces across pose and age. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67\u201374. https:\/\/doi.org\/10.1109\/FG.2018.00020","DOI":"10.1109\/FG.2018.00020"},{"key":"212_CR32","doi-asserted-by":"publisher","unstructured":"Cao, Z., Simon, T., Wei, S.-E. & Sheikh, Y. (2017). Realtime multi-person 2D pose estimation using part affinity fields. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2017-January, pp. 1302\u20131310. https:\/\/doi.org\/10.1109\/CVPR.2017.143","DOI":"10.1109\/CVPR.2017.143"},{"issue":"1","key":"212_CR33","doi-asserted-by":"publisher","first-page":"314","DOI":"10.3390\/app10010314","volume":"10","author":"E Carlotta Olivetti","year":"2019","unstructured":"Carlotta Olivetti, E., Violante, M. G., Vezzetti, E., Marcolin, F., & Eynard, B. (2019). 
Engagement evaluation in a virtual learning environment via facial expression recognition and self-reports: A preliminary approach. Applied Sciences, 10(1), 314. https:\/\/doi.org\/10.3390\/app10010314.","journal-title":"Applied Sciences"},{"key":"212_CR34","doi-asserted-by":"publisher","unstructured":"Carreira, J. & Zisserman, A. (2017). Quo Vadis, action recognition? A new model and the kinetics dataset. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724\u20134733. https:\/\/doi.org\/10.1109\/CVPR.2017.502","DOI":"10.1109\/CVPR.2017.502"},{"key":"212_CR35","doi-asserted-by":"publisher","unstructured":"Castellano, G., Leite, I., Pereira, A., Martinho, C., Paiva, A. & McOwan, P.W. (2012). Detecting engagement in HRI: An exploration of social and task-based context. In: 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing, pp. 421\u2013428 https:\/\/doi.org\/10.1109\/SocialCom-PASSAT.2012.51","DOI":"10.1109\/SocialCom-PASSAT.2012.51"},{"key":"212_CR36","doi-asserted-by":"publisher","unstructured":"Castellano, G., Pereira, A., Leite, I., Paiva, A. & McOwan, P.W. (2009). Detecting user engagement with a robot companion using task and social interaction-based features. In: Proceedings of the 2009 International Conference on Multimodal Interfaces - ICMI-MLMI \u201909, p. 119. ACM Press, New York. https:\/\/doi.org\/10.1145\/1647314.1647336","DOI":"10.1145\/1647314.1647336"},{"issue":"4","key":"212_CR37","doi-asserted-by":"publisher","first-page":"484","DOI":"10.1109\/TAFFC.2017.2737019","volume":"10","author":"O Celiktutan","year":"2019","unstructured":"Celiktutan, O., Skordos, E., & Gunes, H. (2019). Multimodal human-human-robot interactions (MHHRI) dataset for studying personality and engagement. IEEE Transactions on Affective Computing, 10(4), 484\u2013497. 
https:\/\/doi.org\/10.1109\/TAFFC.2017.2737019.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR38","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1016\/J.COMPEDU.2016.02.006","volume":"96","author":"R Cerezo","year":"2016","unstructured":"Cerezo, R., S\u00e1nchez-Santill\u00e1n, M., Paule-Ruiz, M. P., & N\u00fa\u00f1ez, J. C. (2016). Students\u2019 LMS interaction patterns and their relationship with achievement: A case study in higher education. Computers & Education, 96, 42\u201354. https:\/\/doi.org\/10.1016\/J.COMPEDU.2016.02.006.","journal-title":"Computers & Education"},{"key":"212_CR39","unstructured":"Chaouachi, M., Chalfoun, P., Jraidi, I. & Frasson, C. (2010) Affect and mental engagement: Towards adaptability for intelligent systems. In: Proceedings of the 23rd International Florida Artificial Intelligence Research Society Conference, FLAIRS-23, Flairs, pp. 355\u2013360."},{"key":"212_CR40","doi-asserted-by":"publisher","DOI":"10.3389\/fnins.2021.757381","author":"I Chatterjee","year":"2021","unstructured":"Chatterjee, I., Gor\u0161i\u010d, M., Clapp, J. D., & Novak, D. (2021). Automatic estimation of interpersonal engagement during naturalistic conversation using dyadic physiological measurements. Frontiers in Neuroscience. https:\/\/doi.org\/10.3389\/fnins.2021.757381.","journal-title":"Frontiers in Neuroscience"},{"key":"212_CR41","doi-asserted-by":"publisher","first-page":"321","DOI":"10.1613\/jair.953","volume":"16","author":"NV Chawla","year":"2002","unstructured":"Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321\u2013357. https:\/\/doi.org\/10.1613\/jair.953.","journal-title":"Journal of Artificial Intelligence Research"},{"key":"212_CR42","doi-asserted-by":"publisher","unstructured":"Chen, Y.-W. & Lin, C.-J. (2006). Combining SVMs with various feature selection strategies. 
In: Feature Extraction. Studies in Fuzziness and Soft Computing, vol. 207, pp. 315\u2013324. Springer. https:\/\/doi.org\/10.1007\/978-3-540-35488-8_13","DOI":"10.1007\/978-3-540-35488-8_13"},{"key":"212_CR43","doi-asserted-by":"publisher","DOI":"10.1016\/J.CAEAI.2020.100002","volume":"1","author":"X Chen","year":"2020","unstructured":"Chen, X., Xie, H., Zou, D., & Hwang, G. J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, 100002. https:\/\/doi.org\/10.1016\/J.CAEAI.2020.100002.","journal-title":"Computers and Education: Artificial Intelligence"},{"issue":"4","key":"212_CR44","doi-asserted-by":"publisher","first-page":"219","DOI":"10.1080\/00461520.2014.965823","volume":"49","author":"MTH Chi","year":"2014","unstructured":"Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219\u2013243. https:\/\/doi.org\/10.1080\/00461520.2014.965823.","journal-title":"Educational Psychologist"},{"key":"212_CR45","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-2018-7","volume-title":"Handbook of Research on Student Engagement","author":"Sandra Christenson","year":"2012","unstructured":"Christenson, Sandra, Reschly, Amy L., & Wylie, Cathy. (2012). Handbook of Research on Student Engagement. Springer. https:\/\/doi.org\/10.1007\/978-1-4614-2018-7."},{"issue":"2","key":"212_CR46","doi-asserted-by":"publisher","first-page":"114","DOI":"10.1109\/TLT.2010.14","volume":"4","author":"M Cocea","year":"2011","unstructured":"Cocea, M., & Weibelzahl, S. (2011). Disengagement detection in online learning: Validation studies and perspectives. IEEE Transactions on Learning Technologies, 4(2), 114\u2013124. 
https:\/\/doi.org\/10.1109\/TLT.2010.14.","journal-title":"IEEE Transactions on Learning Technologies"},{"key":"212_CR47","doi-asserted-by":"publisher","unstructured":"Conti, D., Cattani, A., Di\u00a0Nuovo, S. & Di\u00a0Nuovo, A. (2015). A cross-cultural study of acceptance and use of robotics by future psychology practitioners. In: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 555\u2013560. https:\/\/doi.org\/10.1109\/ROMAN.2015.7333601","DOI":"10.1109\/ROMAN.2015.7333601"},{"issue":"12","key":"212_CR48","doi-asserted-by":"publisher","first-page":"0225709","DOI":"10.1371\/journal.pone.0225709","volume":"14","author":"DK Darnell","year":"2019","unstructured":"Darnell, D. K., & Krieg, P. A. (2019). Student engagement, assessed using heart rate, shows no reset following active learning sessions in lectures. PloS ONE, 14(12), 0225709. https:\/\/doi.org\/10.1371\/journal.pone.0225709.","journal-title":"PloS ONE"},{"key":"212_CR49","doi-asserted-by":"publisher","unstructured":"De Carolis, B., D\u2019Errico, F., Macchiarulo, N. & Palestra, G. (2019). \u201cEngaged faces\u201d: Measuring and monitoring student engagement from face and gaze behavior. In: Proceedings\u20132019 IEEE\/WIC\/ACM International Conference on Web Intelligence Workshops, WI 2019 Companion, pp. 80\u201385. https:\/\/doi.org\/10.1145\/3358695.3361748","DOI":"10.1145\/3358695.3361748"},{"key":"212_CR50","doi-asserted-by":"publisher","DOI":"10.1002\/9781119152484","volume-title":"Classification Parameter Estimation and State Estimation","author":"D De Ridder","year":"2017","unstructured":"de Ridder, D., Tax, D. M. J., Lei, B., Xu, G., Feng, M., Zou, Y., & van der Heijden, F. (2017). Classification Parameter Estimation and State Estimation. John Wiley & Sons Ltd. https:\/\/doi.org\/10.1002\/9781119152484."},{"key":"212_CR51","unstructured":"DeepLearning.AI: Bad Machine Learning Makes Bad Science (2022). 
https:\/\/info.deeplearning.ai\/science-plagued-by-machine-learning-mistakes-deepfakes-censor-profanity-wearable-ai-helps-impaired-walking-ensemble-models-simplified-1?ecid=ACsprvvjRjD_WkUlMQXnAK1TiHleIgJOX2XELDoR_6xpahkNmpZLD_oxcL1fuZIAWbOw7KN2KNa5 &utm_campaign=The%20Batch &utm_medium=email &_hsmi=223142202 &_hsenc=p2ANqtz-_Jn2sqcU_uSZ2VW0RvExQAbB3YAplOltKhk6DX3uDJ1lEEfgy_XpZlKf_PpFaM-fatABYOHrJciMBEfqNa6UEA9aYcFg &utm_content=223128787 &utm_source=hs_email"},{"key":"212_CR52","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2020.00116","author":"F Del Duchetto","year":"2020","unstructured":"Del Duchetto, F., Baxter, P., & Hanheide, M. (2020). Are you still with me? Continuous engagement assessment from a robot\u2019s point of view. Frontiers in Robotics and AI. https:\/\/doi.org\/10.3389\/frobt.2020.00116.","journal-title":"Frontiers in Robotics and AI"},{"key":"212_CR53","doi-asserted-by":"publisher","unstructured":"Delgado, K., Origgi, J.M., Hasanpoor, T., Yu, H., Allessio, D., Arroyo, I., Lee, W., Betke, M., Woolf, B. & Bargal, S.A. (2021). Student engagement dataset. In: Proceedings of the IEEE International Conference on Computer Vision, vol. 2021-October, pp. 3621\u20133629. Institute of Electrical and Electronics Engineers Inc., IEEE. https:\/\/doi.org\/10.1109\/ICCVW54120.2021.00405","DOI":"10.1109\/ICCVW54120.2021.00405"},{"key":"212_CR54","doi-asserted-by":"publisher","unstructured":"Deng, D., Chen, Z., Zhou, Y. & Shi, B. (2020). MIMAMO Net: Integrating micro- and macro-motion for video emotion recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2621\u20132628. https:\/\/doi.org\/10.1609\/aaai.v34i03.5646","DOI":"10.1609\/aaai.v34i03.5646"},{"key":"212_CR55","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Zhou, Y., Yu, J., Kotsia, I. & Zafeiriou, S. (2019) RetinaFace: Single-stage dense face localisation in the wild. 
arXiv abs\/1905.00641","DOI":"10.1109\/CVPR42600.2020.00525"},{"key":"212_CR56","doi-asserted-by":"publisher","unstructured":"Dewan, M.A.A., Lin, F., Wen, D., Murshed, M. & Uddin, Z. (2018). A deep learning approach to detecting engagement of online learners. In: 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld\/SCALCOM\/UIC\/ATC\/CBDCom\/IOP\/SCI), pp. 1895\u20131902. IEEE. https:\/\/doi.org\/10.1109\/SmartWorld.2018.00318","DOI":"10.1109\/SmartWorld.2018.00318"},{"issue":"1","key":"212_CR57","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40561-018-0080-z","volume":"6","author":"MAA Dewan","year":"2019","unstructured":"Dewan, M. A. A., Murshed, M., & Lin, F. (2019). Engagement detection in online learning: A review. Smart Learning Environments, 6(1), 1. https:\/\/doi.org\/10.1186\/s40561-018-0080-z.","journal-title":"Smart Learning Environments"},{"key":"212_CR58","doi-asserted-by":"publisher","unstructured":"Dhall, A., Kaur, A., Goecke, R. & Gedeon, T. (2018). EmotiW 2018: Audio-video, student engagement and group-level affect prediction. In: Proceedings of the 2018 on International Conference on Multimodal Interaction-ICMI \u201918, pp. 653\u2013656. ACM Press. https:\/\/doi.org\/10.1145\/3242969.3264993","DOI":"10.1145\/3242969.3264993"},{"key":"212_CR59","doi-asserted-by":"publisher","unstructured":"Dhall, A., Sharma, G., Goecke, R. & Gedeon, T. (2020). EmotiW 2020: Driver gaze, group emotion, student engagement and physiological signal based challenges. In: Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 784\u2013789. ACM. 
https:\/\/doi.org\/10.1145\/3382507.3417973","DOI":"10.1145\/3382507.3417973"},{"issue":"3","key":"212_CR60","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3264913","volume":"2","author":"E Di Lascio","year":"2018","unstructured":"Di Lascio, E., Gashi, S., & Santini, S. (2018). Unobtrusive assessment of students\u2019 emotional engagement during lectures using electrodermal activity sensors. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3), 1\u201321. https:\/\/doi.org\/10.1145\/3264913.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},{"issue":"2","key":"212_CR61","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1080\/00461520.2017.1281747","volume":"52","author":"S D\u2019Mello","year":"2017","unstructured":"D\u2019Mello, S., Dieterle, E., & Duckworth, A. (2017). Advanced, analytic, automated (AAA) measurement of engagement during learning. Educational Psychologist, 52(2), 104\u2013123. https:\/\/doi.org\/10.1080\/00461520.2017.1281747.","journal-title":"Educational Psychologist"},{"issue":"4","key":"212_CR62","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1109\/MIS.2007.79","volume":"22","author":"S D\u2019Mello","year":"2007","unstructured":"D\u2019Mello, S., Picard, R. W., & Graesser, A. (2007). Toward an affect-sensitive AutoTutor. IEEE Intelligent Systems, 22(4), 53\u201361. https:\/\/doi.org\/10.1109\/MIS.2007.79.","journal-title":"IEEE Intelligent Systems"},{"key":"212_CR63","doi-asserted-by":"publisher","unstructured":"Dong, L., Di, H., Tao, L., Xu, G. & Oliver, P. (2010). Visual focus of attention recognition in the ambient kitchen. In: Asian Conference on Computer Vision, pp. 548\u2013559. https:\/\/doi.org\/10.1007\/978-3-642-12297-2_53","DOI":"10.1007\/978-3-642-12297-2_53"},{"key":"212_CR64","doi-asserted-by":"publisher","unstructured":"Dresvyanskiy, D., Minker, W. & Karpov, A. (2021). 
Deep learning based engagement recognition in highly imbalanced data. In: Speech and Computer, pp. 166\u2013178. https:\/\/doi.org\/10.1007\/978-3-030-87802-3_16","DOI":"10.1007\/978-3-030-87802-3_16"},{"key":"212_CR65","doi-asserted-by":"publisher","first-page":"104495","DOI":"10.1016\/j.compedu.2022.104495","volume":"183","author":"I Dubovi","year":"2022","unstructured":"Dubovi, I. (2022). Cognitive and emotional engagement while learning with VR: The perspective of multimodal methodology. Computers & Education, 183, 104495. https:\/\/doi.org\/10.1016\/j.compedu.2022.104495.","journal-title":"Computers & Education"},{"issue":"2","key":"212_CR66","doi-asserted-by":"publisher","first-page":"136","DOI":"10.1177\/1073191120957102","volume":"29","author":"G Eisele","year":"2022","unstructured":"Eisele, G., Vachon, H., Lafit, G., Kuppens, P., Houben, M., Myin-Germeys, I., & Viechtbauer, W. (2022). The effects of sampling frequency and questionnaire length on perceived burden, compliance, and careless responding in experience sampling data in a student population. Assessment, 29(2), 136\u2013151. https:\/\/doi.org\/10.1177\/1073191120957102.","journal-title":"Assessment"},{"key":"212_CR67","volume-title":"Facial Action Coding System","author":"P Ekman","year":"1978","unstructured":"Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto: Consulting Psychologists Press."},{"issue":"2","key":"212_CR68","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3503799","volume":"11","author":"O Engwall","year":"2022","unstructured":"Engwall, O., Cumbal, R., Lopes, J., Ljung, M., & Mansson, L. (2022). Identification of low-engaged learners in robot-led second language conversations with adults. ACM Transactions on Human-Robot Interaction, 11(2), 1\u201333. 
https:\/\/doi.org\/10.1145\/3503799.","journal-title":"ACM Transactions on Human-Robot Interaction"},{"key":"212_CR69","doi-asserted-by":"publisher","unstructured":"Eyben, F., Weninger, F., Gross, F. & Schuller, B. (2013). Recent developments in openSMILE, the munich open-source multimedia feature extractor. In: Proceedings of the 21st ACM International Conference on Multimedia, pp. 835\u2013838. ACM. https:\/\/doi.org\/10.1145\/2502081.2502224","DOI":"10.1145\/2502081.2502224"},{"key":"212_CR70","doi-asserted-by":"publisher","unstructured":"Finn, J.D. & Zimmer, K.S. (2012). Student engagement: What is it? Why does it matter? In: Handbook of Research on Student Engagement, pp. 97\u2013131. Springer. https:\/\/doi.org\/10.1007\/978-1-4614-2018-7_5","DOI":"10.1007\/978-1-4614-2018-7_5"},{"key":"212_CR71","doi-asserted-by":"publisher","unstructured":"Fredricks, J.A. & McColskey, W. (2012). The measurement of student engagement: A comparative analysis of various methods and student self-report instruments. In: Handbook of Research on Student Engagement, pp. 763\u2013782. Springer. https:\/\/doi.org\/10.1007\/978-1-4614-2018-7_37","DOI":"10.1007\/978-1-4614-2018-7_37"},{"issue":"1","key":"212_CR72","doi-asserted-by":"publisher","first-page":"59","DOI":"10.3102\/00346543074001059","volume":"74","author":"JA Fredricks","year":"2004","unstructured":"Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59\u2013109. https:\/\/doi.org\/10.3102\/00346543074001059.","journal-title":"Review of Educational Research"},{"key":"212_CR73","doi-asserted-by":"publisher","first-page":"99112","DOI":"10.1109\/ACCESS.2021.3096136","volume":"9","author":"MTH Fuad","year":"2021","unstructured":"Fuad, M. T. H., Fime, A. A., Sikder, D., Iftee, M. A. R., Rabbi, J., Al-Rakhami, M. S., et al. (2021). Recent advances in deep learning techniques for face recognition. 
IEEE Access, 9, 99112\u201399142. https:\/\/doi.org\/10.1109\/ACCESS.2021.3096136.","journal-title":"IEEE Access"},{"issue":"3","key":"212_CR74","doi-asserted-by":"publisher","first-page":"769","DOI":"10.1109\/72.846747","volume":"11","author":"B Gabrys","year":"2000","unstructured":"Gabrys, B., & Bargiela, A. (2000). General fuzzy min-max neural network for clustering and classification. IEEE Transactions on Neural Networks, 11(3), 769\u2013783. https:\/\/doi.org\/10.1109\/72.846747.","journal-title":"IEEE Transactions on Neural Networks"},{"issue":"4","key":"212_CR75","doi-asserted-by":"publisher","first-page":"463","DOI":"10.1109\/TSMCC.2011.2161285","volume":"42","author":"M Galar","year":"2012","unstructured":"Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., & Herrera, F. (2012). A review on ensembles for the class imbalance problem: Bagging-, boosting-, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4), 463\u2013484. https:\/\/doi.org\/10.1109\/TSMCC.2011.2161285.","journal-title":"IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)"},{"issue":"1","key":"212_CR76","doi-asserted-by":"publisher","first-page":"13","DOI":"10.1016\/J.KNOSYS.2011.06.013","volume":"25","author":"V Garc\u00eda","year":"2012","unstructured":"Garc\u00eda, V., S\u00e1nchez, J. S., & Mollineda, R. A. (2012). On the effectiveness of preprocessing methods when dealing with different levels of class imbalance. Knowledge-Based Systems, 25(1), 13\u201321. https:\/\/doi.org\/10.1016\/J.KNOSYS.2011.06.013.","journal-title":"Knowledge-Based Systems"},{"key":"212_CR77","doi-asserted-by":"publisher","DOI":"10.5334\/jors.ar","author":"JM Girard","year":"2014","unstructured":"Girard, J. M. (2014). CARMA: Software for continuous affect rating and media annotation. Journal of Open Research Software. 
https:\/\/doi.org\/10.5334\/jors.ar.","journal-title":"Journal of Open Research Software"},{"issue":"1","key":"212_CR78","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1007\/s10648-019-09514-z","volume":"33","author":"P Goldberg","year":"2021","unstructured":"Goldberg, P., S\u00fcmer, \u00d6., St\u00fcrmer, K., Wagner, W., G\u00f6llner, R., Gerjets, P., et al. (2021). Attentive or not? Toward a machine learning approach to assessing students\u2019 visible engagement in classroom instruction. Educational Psychology Review, 33(1), 27\u201349. https:\/\/doi.org\/10.1007\/s10648-019-09514-z.","journal-title":"Educational Psychology Review"},{"key":"212_CR79","volume-title":"Deep Learning","author":"I Goodfellow","year":"2016","unstructured":"Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Cambridge: MIT Press."},{"key":"212_CR80","doi-asserted-by":"publisher","first-page":"59","DOI":"10.1016\/j.neunet.2014.09.005","volume":"64","author":"IJ Goodfellow","year":"2013","unstructured":"Goodfellow, I. J., Erhan, D., Luc Carrier, P., Courville, A., Mirza, M., Hamner, B., et al. (2013). Challenges in representation learning: A report on three machine learning contests. Neural Networks, 64, 59\u201363. https:\/\/doi.org\/10.1016\/j.neunet.2014.09.005.","journal-title":"Neural Networks"},{"issue":"1","key":"212_CR81","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1080\/00461520.2014.989230","volume":"50","author":"BA Greene","year":"2015","unstructured":"Greene, B. A. (2015). Measuring cognitive engagement with self-report scales: Reflections from over 20 years of research. Educational Psychologist, 50(1), 14\u201330. https:\/\/doi.org\/10.1080\/00461520.2014.989230.","journal-title":"Educational Psychologist"},{"key":"212_CR82","doi-asserted-by":"publisher","unstructured":"Gudi, A., Tasli, H.E., den Uyl, T.M. & Maroulis, A. (2015). Deep learning based FACS action unit occurrence and intensity estimation.
In: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 2015-January, pp. 1\u20135. https:\/\/doi.org\/10.1109\/FG.2015.7284873","DOI":"10.1109\/FG.2015.7284873"},{"key":"212_CR83","doi-asserted-by":"publisher","unstructured":"Gupta, A., D\u2019Cunha, A., Awasthi, K. & Balasubramanian, V. (2016). DAiSEE: Towards User Engagement Recognition in the Wild 14(8), 1\u201312. https:\/\/doi.org\/10.48550\/arXiv.1609.01885","DOI":"10.48550\/arXiv.1609.01885"},{"issue":"3","key":"212_CR84","doi-asserted-by":"publisher","first-page":"392","DOI":"10.1016\/j.robot.2013.09.012","volume":"62","author":"J Hall","year":"2014","unstructured":"Hall, J., Tritton, T., Rowe, A., Pipe, A., Melhuish, C., & Leonards, U. (2014). Perception of own and robot engagement in human-robot interactions and their dependence on robotics knowledge. Robotics and Autonomous Systems, 62(3), 392\u2013399. https:\/\/doi.org\/10.1016\/j.robot.2013.09.012.","journal-title":"Robotics and Autonomous Systems"},{"key":"212_CR85","doi-asserted-by":"publisher","first-page":"3423","DOI":"10.1016\/J.PROCS.2021.09.115","volume":"192","author":"MN Hasnine","year":"2021","unstructured":"Hasnine, M. N., Bui, H. T. T., Tran, T. T. T., Nguyen, H. T., Ak\u00e7ap\u0131nar, G., & Ueda, H. (2021). Students\u2019 emotion extraction and visualization for engagement detection in online learning. Procedia Computer Science, 192, 3423\u20133431. https:\/\/doi.org\/10.1016\/J.PROCS.2021.09.115.","journal-title":"Procedia Computer Science"},{"key":"212_CR86","doi-asserted-by":"publisher","unstructured":"He, K., Zhang, X., Ren, S. & Sun, J. (2016). Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770\u2013778. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"212_CR87","doi-asserted-by":"publisher","unstructured":"Hernandez, J., Liu, Z., Hulten, G., DeBarr, D., Krum, K.
& Zhang, Z. (2013). Measuring the engagement level of TV viewers. In: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1\u20137. https:\/\/doi.org\/10.1109\/FG.2013.6553742","DOI":"10.1109\/FG.2013.6553742"},{"issue":"7","key":"212_CR88","doi-asserted-by":"publisher","first-page":"1527","DOI":"10.1162\/neco.2006.18.7.1527","volume":"18","author":"GE Hinton","year":"2006","unstructured":"Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527\u20131554. https:\/\/doi.org\/10.1162\/neco.2006.18.7.1527.","journal-title":"Neural Computation"},{"issue":"8","key":"212_CR89","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735\u20131780. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735.","journal-title":"Neural Computation"},{"key":"212_CR90","doi-asserted-by":"publisher","unstructured":"Holmes, G., Donkin, A. & Witten, I.H. (1994). WEKA: A machine learning workbench. In: Proceedings of ANZIIS \u201994-Australian New Zealand Intelligent Information Systems Conference, pp. 357\u2013361. IEEE. https:\/\/doi.org\/10.1109\/ANZIIS.1994.396988","DOI":"10.1109\/ANZIIS.1994.396988"},{"issue":"2","key":"212_CR91","doi-asserted-by":"publisher","first-page":"984","DOI":"10.1109\/LRA.2016.2529686","volume":"1","author":"F Husain","year":"2016","unstructured":"Husain, F., Dellen, B., & Torras, C. (2016). Action recognition based on efficient deep feature learning in the spatio-temporal domain. IEEE Robotics and Automation Letters, 1(2), 984\u2013991.
https:\/\/doi.org\/10.1109\/LRA.2016.2529686.","journal-title":"IEEE Robotics and Automation Letters"},{"key":"212_CR92","doi-asserted-by":"publisher","DOI":"10.1155\/2018\/6347186","author":"M Hussain","year":"2018","unstructured":"Hussain, M., Zhu, W., Zhang, W., & Abidi, S. M. R. (2018). Student engagement predictions in an e-learning system and their impact on student course assessment scores. Computational Intelligence and Neuroscience. https:\/\/doi.org\/10.1155\/2018\/6347186.","journal-title":"Computational Intelligence and Neuroscience"},{"issue":"1","key":"212_CR93","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","volume":"35","author":"S Ji","year":"2013","unstructured":"Ji, S., Xu, W., Yang, M., & Yu, K. (2013). 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 221\u2013231. https:\/\/doi.org\/10.1109\/TPAMI.2012.59.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"issue":"2","key":"212_CR94","doi-asserted-by":"publisher","first-page":"505","DOI":"10.1007\/s11042-010-0632-x","volume":"51","author":"H Joho","year":"2011","unstructured":"Joho, H., Staiano, J., Sebe, N., & Jose, J. M. (2011). Looking at the viewer: Analysing facial activity to detect personal highlights of multimedia contents. Multimedia Tools and Applications, 51(2), 505\u2013523. https:\/\/doi.org\/10.1007\/s11042-010-0632-x.","journal-title":"Multimedia Tools and Applications"},{"key":"212_CR95","unstructured":"Jordan, M.I. (1990) Attractor dynamics and parallelism in a connectionist sequential machine. In: Artificial Neural Networks: Concept Learning, pp. 112\u2013127."},{"key":"212_CR96","doi-asserted-by":"publisher","unstructured":"Kapoor, S. & Narayanan, A. (2022). Leakage and the reproducibility crisis in ML-based science. 
https:\/\/doi.org\/10.48550\/arXiv.2207.07048","DOI":"10.48550\/arXiv.2207.07048"},{"key":"212_CR97","doi-asserted-by":"publisher","unstructured":"Kaur, A., Mustafa, A., Mehta, L. & Dhall, A. (2018). Prediction and localization of student engagement in the wild. In: 2018 Digital Image Computing: Techniques and Applications (DICTA), pp. 1\u20138. IEEE. https:\/\/doi.org\/10.1109\/DICTA.2018.8615851","DOI":"10.1109\/DICTA.2018.8615851"},{"issue":"2","key":"212_CR98","doi-asserted-by":"publisher","first-page":"130","DOI":"10.1375\/ajse.33.2.130","volume":"33","author":"D Keen","year":"2009","unstructured":"Keen, D. (2009). Engagement of children with autism in learning. Australasian Journal of Special Education, 33(2), 130\u2013140. https:\/\/doi.org\/10.1375\/ajse.33.2.130.","journal-title":"Australasian Journal of Special Education"},{"key":"212_CR99","unstructured":"Kipp, M. (2008). Spatiotemporal coding in ANVIL. In: Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC\u201908). European Language Resources Association (ELRA). http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/521_paper.pdf"},{"key":"212_CR100","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1016\/J.DSS.2018.09.002","volume":"115","author":"B Kratzwald","year":"2018","unstructured":"Kratzwald, B., Ili\u0107, S., Kraus, M., Feuerriegel, S., & Prendinger, H. (2018). Deep learning for affective computing: Text-based emotion recognition in decision support. Decision Support Systems, 115, 24\u201335. https:\/\/doi.org\/10.1016\/J.DSS.2018.09.002.","journal-title":"Decision Support Systems"},{"issue":"6","key":"212_CR101","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A Krizhevsky","year":"2017","unstructured":"Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84\u201390. 
https:\/\/doi.org\/10.1145\/3065386.","journal-title":"Communications of the ACM"},{"issue":"6","key":"212_CR102","doi-asserted-by":"publisher","first-page":"1121","DOI":"10.1037\/0022-3514.77.6.1121","volume":"77","author":"J Kruger","year":"1999","unstructured":"Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one\u2019s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121\u20131134. https:\/\/doi.org\/10.1037\/0022-3514.77.6.1121.","journal-title":"Journal of Personality and Social Psychology"},{"key":"212_CR103","doi-asserted-by":"publisher","unstructured":"Larson, R. & Csikszentmihalyi, M. (2014). The experience sampling method. In: Flow and the Foundations of Positive Psychology, pp. 21\u201334. Springer. https:\/\/doi.org\/10.1007\/978-94-017-9088-8_2","DOI":"10.1007\/978-94-017-9088-8_2"},{"issue":"3","key":"212_CR104","doi-asserted-by":"publisher","first-page":"517","DOI":"10.2224\/sbp.7054","volume":"46","author":"H Lei","year":"2018","unstructured":"Lei, H., Cui, Y., & Zhou, W. (2018). Relationships between student engagement and academic achievement: A meta-analysis. Social Behavior and Personality: An International Journal, 46(3), 517\u2013528. https:\/\/doi.org\/10.2224\/sbp.7054.","journal-title":"Social Behavior and Personality: An International Journal"},{"key":"212_CR105","doi-asserted-by":"publisher","unstructured":"Leite, I., McCoy, M., Ullman, D., Salomons, N. & Scassellati, B. (2015). Comparing models of disengagement in individual and group interactions. In: Proceedings of the Tenth Annual ACM\/IEEE International Conference on Human-Robot Interaction, pp. 99\u2013105. ACM. https:\/\/doi.org\/10.1145\/2696454.2696466","DOI":"10.1145\/2696454.2696466"},{"key":"212_CR106","doi-asserted-by":"publisher","unstructured":"Li, S., Deng, W. & Du, J. (2017). 
Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2584\u20132593. https:\/\/doi.org\/10.1109\/CVPR.2017.277","DOI":"10.1109\/CVPR.2017.277"},{"issue":"10","key":"212_CR107","doi-asserted-by":"publisher","first-page":"6609","DOI":"10.1007\/s10489-020-02139-8","volume":"51","author":"J Liao","year":"2021","unstructured":"Liao, J., Liang, Y., & Pan, J. (2021). Deep facial spatiotemporal network for engagement prediction in online learning. Applied Intelligence, 51(10), 6609\u20136621. https:\/\/doi.org\/10.1007\/s10489-020-02139-8.","journal-title":"Applied Intelligence"},{"issue":"11","key":"212_CR108","doi-asserted-by":"publisher","first-page":"1789","DOI":"10.1109\/JPROC.2004.835366","volume":"92","author":"AV Libin","year":"2004","unstructured":"Libin, A. V., & Libin, E. V. (2004). Person-robot interactions from the robopsychologists\u2019 point of view: The robotic psychology and robotherapy approach. Proceedings of the IEEE, 92(11), 1789\u20131803. https:\/\/doi.org\/10.1109\/JPROC.2004.835366.","journal-title":"Proceedings of the IEEE"},{"issue":"c","key":"212_CR109","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TAFFC.2020.2981446","volume":"3045","author":"S Li","year":"2020","unstructured":"Li, S., & Deng, W. (2020). Deep facial expression recognition: A survey. IEEE Transactions on Affective Computing, 3045(c), 1\u20131. https:\/\/doi.org\/10.1109\/TAFFC.2020.2981446.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR110","doi-asserted-by":"publisher","first-page":"104114","DOI":"10.1016\/J.COMPEDU.2020.104114","volume":"163","author":"S Li","year":"2021","unstructured":"Li, S., Lajoie, S. P., Zheng, J., Wu, H., & Cheng, H. (2021). Automated detection of cognitive engagement to inform the art of staying engaged in problem-solving. Computers & Education, 163, 104114. 
https:\/\/doi.org\/10.1016\/J.COMPEDU.2020.104114.","journal-title":"Computers & Education"},{"key":"212_CR111","doi-asserted-by":"publisher","unstructured":"Lin, T.-Y., Goyal, P., Girshick, R., He, K. & Dollar, P. (2017). Focal loss for dense object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999\u20133007. https:\/\/doi.org\/10.1109\/ICCV.2017.324","DOI":"10.1109\/ICCV.2017.324"},{"key":"212_CR112","doi-asserted-by":"publisher","unstructured":"Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J. & Bartlett, M. (2011). The computer expression recognition toolbox (CERT). In: Face and Gesture 2011, pp. 298\u2013305. IEEE. https:\/\/doi.org\/10.1109\/FG.2011.5771414","DOI":"10.1109\/FG.2011.5771414"},{"key":"212_CR113","doi-asserted-by":"publisher","unstructured":"Liu, M., Shan, S., Wang, R. & Chen, X. (2014). Learning expressionlets on spatio-temporal manifold for dynamic facial expression recognition. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1749\u20131756. https:\/\/doi.org\/10.1109\/CVPR.2014.226","DOI":"10.1109\/CVPR.2014.226"},{"key":"212_CR114","doi-asserted-by":"publisher","unstructured":"Lucey, P., Cohn, J.F., Prkachin, K.M., Solomon, P.E. & Matthews, I. (2011). Painful data: The UNBC-McMaster shoulder pain expression archive database. In: Face and Gesture 2011, pp. 57\u201364. IEEE. https:\/\/doi.org\/10.1109\/FG.2011.5771462","DOI":"10.1109\/FG.2011.5771462"},{"issue":"6","key":"212_CR115","doi-asserted-by":"publisher","first-page":"904","DOI":"10.1080\/13825585.2018.1546820","volume":"26","author":"D Lufi","year":"2019","unstructured":"Lufi, D., & Haimov, I. (2019). Effects of age on attention level: Changes in performance between the ages of 12 and 90. Aging, Neuropsychology, and Cognition, 26(6), 904\u2013919. 
https:\/\/doi.org\/10.1080\/13825585.2018.1546820.","journal-title":"Aging, Neuropsychology, and Cognition"},{"key":"212_CR116","doi-asserted-by":"publisher","unstructured":"Lyons, M., Akamatsu, S., Kamachi, M. & Gyoba, J. (2002). Coding facial expressions with Gabor wavelets. In: Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200\u2013205. IEEE Internet Computing. https:\/\/doi.org\/10.1109\/AFGR.1998.670949","DOI":"10.1109\/AFGR.1998.670949"},{"issue":"9","key":"212_CR117","doi-asserted-by":"publisher","first-page":"1315","DOI":"10.1097\/JTO.0b013e3181ec173d","volume":"5","author":"JN Mandrekar","year":"2010","unstructured":"Mandrekar, J. N. (2010). Receiver operating characteristic curve in diagnostic test assessment. Journal of Thoracic Oncology, 5(9), 1315\u20131316. https:\/\/doi.org\/10.1097\/JTO.0b013e3181ec173d.","journal-title":"Journal of Thoracic Oncology"},{"issue":"1","key":"212_CR118","doi-asserted-by":"publisher","first-page":"331","DOI":"10.1175\/2008MWR2553.1","volume":"137","author":"SJ Mason","year":"2009","unstructured":"Mason, S. J., & Weigel, A. P. (2009). A generic forecast verification framework for administrative purposes. Monthly Weather Review, 137(1), 331\u2013349. https:\/\/doi.org\/10.1175\/2008MWR2553.1.","journal-title":"Monthly Weather Review"},{"issue":"3","key":"212_CR119","doi-asserted-by":"publisher","first-page":"107","DOI":"10.18178\/ijiet.2021.11.3.1497","volume":"11","author":"X Ma","year":"2021","unstructured":"Ma, X., Xu, M., Dong, Y., & Sun, Z. (2021). Automatic student engagement in online learning environment based on neural turing machine. International Journal of Information and Education Technology, 11(3), 107\u2013111. https:\/\/doi.org\/10.18178\/ijiet.2021.11.3.1497.","journal-title":"International Journal of Information and Education Technology"},{"key":"212_CR120","doi-asserted-by":"publisher","unstructured":"McDuff, D., Karlson, A., Kapoor, A., Roseway, A. 
& Czerwinski, M. (2012). AffectAura: An intelligent system for emotional memory. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 849\u2013858. ACM. https:\/\/doi.org\/10.1145\/2207676.2208525","DOI":"10.1145\/2207676.2208525"},{"issue":"4","key":"212_CR121","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1187\/cbe.19-08-0158","volume":"19","author":"KS McNeal","year":"2020","unstructured":"McNeal, K. S., Zhong, M., Soltis, N. A., Doukopoulos, L., Johnson, E. T., Courtney, S., et al. (2020). Biosensors show promise as a measure of student engagement in a large introductory biology course. CBE-Life Sciences Education, 19(4), 50. https:\/\/doi.org\/10.1187\/cbe.19-08-0158.","journal-title":"CBE-Life Sciences Education"},{"key":"212_CR122","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-022-03200-4","author":"NK Mehta","year":"2022","unstructured":"Mehta, N. K., Prasad, S. S., Saurav, S., Saini, R., & Singh, S. (2022). Three-dimensional DenseNet self-attention neural network for automatic detection of student\u2019s engagement. Applied Intelligence. https:\/\/doi.org\/10.1007\/s10489-022-03200-4.","journal-title":"Applied Intelligence"},{"key":"212_CR123","doi-asserted-by":"publisher","unstructured":"Jang, M., Lee, D.-H., Kim, J. & Cho, Y. (2013). Identifying principal social signals in private student-teacher interactions for robot-enhanced education. In: 2013 IEEE RO-MAN, pp. 621\u2013626. https:\/\/doi.org\/10.1109\/ROMAN.2013.6628417","DOI":"10.1109\/ROMAN.2013.6628417"},{"key":"212_CR124","doi-asserted-by":"publisher","unstructured":"Mohamad Nezami, O., Dras, M., Hamey, L., Richards, D., Wan, S., Paris, C. (2020). Automatic recognition of student engagement using deep learning and facial expression. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, vol. 2, pp. 273\u2013289. Springer.
https:\/\/doi.org\/10.1007\/978-3-030-46133-1_17","DOI":"10.1007\/978-3-030-46133-1_17"},{"issue":"1","key":"212_CR125","doi-asserted-by":"publisher","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","volume":"10","author":"A Mollahosseini","year":"2019","unstructured":"Mollahosseini, A., Hasani, B., & Mahoor, M. H. (2019). AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1), 18\u201331. https:\/\/doi.org\/10.1109\/TAFFC.2017.2740923.","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"1","key":"212_CR126","doi-asserted-by":"publisher","first-page":"15","DOI":"10.1109\/TAFFC.2016.2515084","volume":"8","author":"H Monkaresi","year":"2017","unstructured":"Monkaresi, H., Bosch, N., Calvo, R. A., & D\u2019Mello, S. K. (2017). Automated detection of engagement using video-based estimation of facial expressions and heart rate. IEEE Transactions on Affective Computing, 8(1), 15\u201328. https:\/\/doi.org\/10.1109\/TAFFC.2016.2515084.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR127","doi-asserted-by":"publisher","unstructured":"Nakano, Y. I., & Ishii, R. (2010). Estimating user\u2019s engagement from eye-gaze behaviors in human-agent conversations. In: International Conference on Intelligent User Interfaces, Proceedings IUI, pp. 139\u2013148. https:\/\/doi.org\/10.1145\/1719970.1719990.","DOI":"10.1145\/1719970.1719990"},{"key":"212_CR128","doi-asserted-by":"publisher","DOI":"10.1016\/j.compedu.2019.103641","volume":"142","author":"M Ninaus","year":"2019","unstructured":"Ninaus, M., Greipl, S., Kiili, K., Lindstedt, A., Huber, S., Klein, E., et al. (2019). Increased emotional engagement in game-based learning\u2014A machine learning approach on facial emotion detection data. Computers & Education, 142, 103641. 
https:\/\/doi.org\/10.1016\/j.compedu.2019.103641.","journal-title":"Computers & Education"},{"key":"212_CR129","doi-asserted-by":"publisher","unstructured":"Noh, H., Hong, S. & Han, B. (2015). Learning deconvolution network for semantic segmentation. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1520\u20131528. https:\/\/doi.org\/10.1109\/ICCV.2015.178","DOI":"10.1109\/ICCV.2015.178"},{"issue":"1","key":"212_CR130","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1002\/asi.21229","volume":"61","author":"HL O\u2019Brien","year":"2010","unstructured":"O\u2019Brien, H. L., & Toms, E. G. (2010). The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology, 61(1), 50\u201369. https:\/\/doi.org\/10.1002\/asi.21229.","journal-title":"Journal of the American Society for Information Science and Technology"},{"key":"212_CR131","doi-asserted-by":"publisher","unstructured":"Okubo, F., Yamashita, T., Shimada, A. & Ogata, H. (2017). A neural network approach for students\u2019 performance prediction. In: Proceedings of the Seventh International Learning Analytics & Knowledge Conference, pp. 598\u2013599. ACM. https:\/\/doi.org\/10.1145\/3027385.3029479","DOI":"10.1145\/3027385.3029479"},{"key":"212_CR132","doi-asserted-by":"publisher","first-page":"100020","DOI":"10.1016\/J.CAEAI.2021.100020","volume":"2","author":"F Ouyang","year":"2021","unstructured":"Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, 100020. https:\/\/doi.org\/10.1016\/J.CAEAI.2021.100020.","journal-title":"Computers and Education: Artificial Intelligence"},{"key":"212_CR133","doi-asserted-by":"publisher","DOI":"10.1111\/exsy.12839","author":"C Pabba","year":"2022","unstructured":"Pabba, C., & Kumar, P. (2022). 
An intelligent system for monitoring students\u2019 engagement in large classroom teaching through facial expression recognition. Expert Systems. https:\/\/doi.org\/10.1111\/exsy.12839.","journal-title":"Expert Systems"},{"issue":"1","key":"212_CR134","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1186\/s13643-021-01626-4","volume":"10","author":"MJ Page","year":"2021","unstructured":"Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10(1), 89. https:\/\/doi.org\/10.1186\/s13643-021-01626-4.","journal-title":"Systematic Reviews"},{"key":"212_CR135","doi-asserted-by":"publisher","unstructured":"Parkhi, O.M., Vedaldi, A. & Zisserman, A. (2015). Deep face recognition. In: Proceedings of the British Machine Vision Conference 2015, pp. 1\u201312. https:\/\/doi.org\/10.5244\/C.29.41","DOI":"10.5244\/C.29.41"},{"issue":"6","key":"212_CR136","doi-asserted-by":"publisher","first-page":"1774","DOI":"10.3758\/s13423-017-1242-7","volume":"24","author":"G Pennycook","year":"2017","unstructured":"Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning\u2013Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review, 24(6), 1774\u20131784. https:\/\/doi.org\/10.3758\/s13423-017-1242-7.","journal-title":"Psychonomic Bulletin & Review"},{"key":"212_CR137","doi-asserted-by":"publisher","unstructured":"Peters, C., Pelachaud, C., Bevacqua, E., Mancini, M., & Poggi, I. (2005). A model of attention and interest using gaze behavior. In: International Workshop on Intelligent Virtual Agents, pp. 229\u2013240. Springer.
https:\/\/doi.org\/10.1007\/11550617_20.","DOI":"10.1007\/11550617_20"},{"issue":"3","key":"212_CR138","doi-asserted-by":"publisher","first-page":"487","DOI":"10.2307\/1162912","volume":"21","author":"PL Peterson","year":"1984","unstructured":"Peterson, P. L., Swing, S. R., Stark, K. D., & Waas, G. A. (1984). Students\u2019 cognitions and time on task during mathematics instruction. American Educational Research Journal, 21(3), 487\u2013515. https:\/\/doi.org\/10.2307\/1162912.","journal-title":"American Educational Research Journal"},{"issue":"1","key":"212_CR139","doi-asserted-by":"publisher","first-page":"102","DOI":"10.1080\/02796015.2009.12087852","volume":"38","author":"CC Ponitz","year":"2009","unstructured":"Ponitz, C. C., Rimm-Kaufman, S. E., Grimm, K. J., & Curby, T. W. (2009). Kindergarten classroom quality, behavioral engagement, and reading achievement. School Psychology Review, 38(1), 102\u2013120. https:\/\/doi.org\/10.1080\/02796015.2009.12087852.","journal-title":"School Psychology Review"},{"issue":"1","key":"212_CR140","doi-asserted-by":"publisher","first-page":"43916","DOI":"10.1038\/srep43916","volume":"7","author":"AT Poulsen","year":"2017","unstructured":"Poulsen, A. T., Kamronn, S., Dmochowski, J., Parra, L. C., & Hansen, L. K. (2017). EEG in the classroom: Synchronised neural recordings during video presentation. Scientific Reports, 7(1), 43916. https:\/\/doi.org\/10.1038\/srep43916.","journal-title":"Scientific Reports"},{"key":"212_CR141","doi-asserted-by":"publisher","unstructured":"Psaltis, A., Kaza, K., Stefanidis, K., Thermos, S., Apostolakis, K.C., Dimitropoulos, K. & Daras, P. (2016). Multimodal affective state recognition in serious games applications. In: IST 2016-2016 IEEE International Conference on Imaging Systems and Techniques, Proceedings, pp. 435\u2013439. 
https:\/\/doi.org\/10.1109\/IST.2016.7738265","DOI":"10.1109\/IST.2016.7738265"},{"issue":"3","key":"212_CR142","doi-asserted-by":"publisher","first-page":"292","DOI":"10.1109\/TCIAIG.2017.2743341","volume":"10","author":"A Psaltis","year":"2018","unstructured":"Psaltis, A., Apostolakis, K. C., Dimitropoulos, K., & Daras, P. (2018). Multimodal student engagement recognition in prosocial games. IEEE Transactions on Games, 10(3), 292\u2013303. https:\/\/doi.org\/10.1109\/TCIAIG.2017.2743341.","journal-title":"IEEE Transactions on Games"},{"key":"212_CR143","doi-asserted-by":"publisher","first-page":"101745","DOI":"10.1016\/j.bspc.2019.101745","volume":"57","author":"W Qiao","year":"2020","unstructured":"Qiao, W., & Bi, X. (2020). Ternary-task convolutional bidirectional neural turing machine for assessment of EEG-based cognitive workload. Biomedical Signal Processing and Control, 57, 101745. https:\/\/doi.org\/10.1016\/j.bspc.2019.101745.","journal-title":"Biomedical Signal Processing and Control"},{"key":"212_CR144","doi-asserted-by":"publisher","unstructured":"Ramanarayanan, V., Leong, C.W. & Suendermann-Oeft, D. (2017a). Rushing to judgement: How do laypeople rate caller engagement in thin-slice videos of human-machine dialog? In: Interspeech 2017, pp. 2526\u20132530. ISCA, ISCA https:\/\/doi.org\/10.21437\/Interspeech.2017-1205","DOI":"10.21437\/Interspeech.2017-1205"},{"key":"212_CR145","doi-asserted-by":"publisher","unstructured":"Ramanarayanan, V., Leong, C.W., Suendermann-Oeft, D. & Evanini, K. (2017b). Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: Benefits and pitfalls. In: Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 281\u2013287. ACM. https:\/\/doi.org\/10.1145\/3136755.3136767","DOI":"10.1145\/3136755.3136767"},{"key":"212_CR146","unstructured":"Ren, S., He, K., Girshick, R. & Sun, J. (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. 
In: Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1. NIPS\u201915, pp. 91\u201399. MIT Press."},{"issue":"2","key":"212_CR147","doi-asserted-by":"publisher","first-page":"178","DOI":"10.31686\/ijier.vol9.iss2.2935","volume":"9","author":"F Ribeiro Trindade","year":"2021","unstructured":"Ribeiro Trindade, F., & James Ferreira, D. (2021). Student performance prediction based on a framework of teacher\u2019s features. International Journal for Innovation Education and Research, 9(2), 178\u2013196. https:\/\/doi.org\/10.31686\/ijier.vol9.iss2.2935.","journal-title":"International Journal for Innovation Education and Research"},{"key":"212_CR148","doi-asserted-by":"publisher","unstructured":"Rich, C., Ponsler, B., Holroyd, A. & Sidner, C.L. (2010). Recognizing engagement in human-robot interaction. In: 2010 5th ACM\/IEEE International Conference on Human-Robot Interaction (HRI), pp. 375\u2013382 https:\/\/doi.org\/10.1109\/hri.2010.5453163","DOI":"10.1109\/hri.2010.5453163"},{"issue":"2","key":"212_CR149","doi-asserted-by":"publisher","first-page":"524","DOI":"10.1109\/TAFFC.2018.2890471","volume":"12","author":"PV Rouast","year":"2021","unstructured":"Rouast, P. V., Adam, M. T. P., & Chiong, R. (2021). Deep learning for human affect recognition: Insights and new developments. IEEE Transactions on Affective Computing, 12(2), 524\u2013543. https:\/\/doi.org\/10.1109\/TAFFC.2018.2890471.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR150","doi-asserted-by":"publisher","unstructured":"Rudovic, O., Park, H.W., Busche, J., Schuller, B., Breazeal, C. & Picard, R.W. (2019b). Personalized estimation of engagement from videos using active learning with deep reinforcement learning. In: 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 217\u2013226. 
https:\/\/doi.org\/10.1109\/CVPRW.2019.00031","DOI":"10.1109\/CVPRW.2019.00031"},{"key":"212_CR151","doi-asserted-by":"publisher","unstructured":"Rudovic, O., Utsumi, Y., Lee, J., Hernandez, J., Ferrer, E.C., Schuller, B. & Picard, R.W. (2018a). CultureNet: A deep learning approach for engagement intensity estimation from face images of children with autism. In: IEEE International Conference on Intelligent Robots and Systems, pp. 339\u2013346. https:\/\/doi.org\/10.1109\/IROS.2018.8594177","DOI":"10.1109\/IROS.2018.8594177"},{"key":"212_CR152","doi-asserted-by":"publisher","unstructured":"Rudovic, O., Zhang, M., Schuller, B. & Picard, R. (2019a). Multi-modal active learning from human data: A deep reinforcement learning approach. In: 2019 International Conference on Multimodal Interaction, pp. 6\u201315. ACM. https:\/\/doi.org\/10.1145\/3340555.3353742","DOI":"10.1145\/3340555.3353742"},{"key":"212_CR153","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.aao6760","author":"O Rudovic","year":"2018","unstructured":"Rudovic, O., Lee, J., Dai, M., Schuller, B., & Picard, R. W. (2018). Personalized machine learning for robot perception of affect and engagement in autism therapy. Science Robotics. https:\/\/doi.org\/10.1126\/scirobotics.aao6760.","journal-title":"Science Robotics"},{"issue":"6","key":"212_CR154","doi-asserted-by":"publisher","first-page":"1161","DOI":"10.1037\/h0077714","volume":"39","author":"JA Russell","year":"1980","unstructured":"Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161\u20131178. https:\/\/doi.org\/10.1037\/h0077714.","journal-title":"Journal of Personality and Social Psychology"},{"key":"212_CR155","doi-asserted-by":"publisher","unstructured":"Sanghvi, J., Castellano, G., Leite, I., Pereira, A., McOwan, P.W. & Paiva, A. (2011). Automatic analysis of affective postures and body motion to detect engagement with a game companion. 
In: HRI 2011-Proceedings of the 6th ACM\/IEEE International Conference on Human-Robot Interaction, pp. 305\u2013311. https:\/\/doi.org\/10.1145\/1957656.1957781","DOI":"10.1145\/1957656.1957781"},{"key":"212_CR156","unstructured":"Sayash Kapoor, Priyanka Nanayakkara, Kenny Peng, Hien Pham. & Arvind Narayanan. (2022). The reproducibility crisis in ML-based science https:\/\/sites.google.com\/princeton.edu\/rep-workshop?utm_campaign=The%20Batch &utm_medium=email &_hsmi=223142202 &_hsenc=p2ANqtz-9bv16UMU819WtwyR5st61wc5IsAY27TZ3DBYTsGNcHzkmoYckmHvNSrW6AxtVgRZBSlu0w8dh_5h6c9GEY7Bil_my3sQ &utm_content=223128787 &utm_source=hs_email"},{"issue":"2","key":"212_CR157","doi-asserted-by":"publisher","first-page":"197","DOI":"10.3233\/IA-140073","volume":"8","author":"G Schiavo","year":"2014","unstructured":"Schiavo, G., Cappelletti, A., & Zancanaro, M. (2014). Engagement recognition using easily detectable behavioral cues. Intelligenza Artificiale, 8(2), 197\u2013210. https:\/\/doi.org\/10.3233\/IA-140073.","journal-title":"Intelligenza Artificiale"},{"key":"212_CR158","doi-asserted-by":"publisher","unstructured":"Schmidt, A. & Kasi\u0144ski, A. (2007). The Performance of the Haar Cascade Classifiers Applied to the Face and Eyes Detection, pp. 816\u2013823. https:\/\/doi.org\/10.1007\/978-3-540-75175-5_101","DOI":"10.1007\/978-3-540-75175-5_101"},{"issue":"2","key":"212_CR159","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1016\/0010-0285(89)90008-X","volume":"21","author":"MF Schober","year":"1989","unstructured":"Schober, M. F., & Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21(2), 211\u2013232. https:\/\/doi.org\/10.1016\/0010-0285(89)90008-X.","journal-title":"Cognitive Psychology"},{"key":"212_CR160","doi-asserted-by":"publisher","unstructured":"Schroff, F., Kalenichenko, D. & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. 
In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815\u2013823. https:\/\/doi.org\/10.1109\/CVPR.2015.7298682","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"212_CR161","doi-asserted-by":"publisher","unstructured":"Schuller, B. (2015). Deep learning our everyday emotions. Advances in neural networks: Computational and theoretical issues, pp. 339\u2013346. https:\/\/doi.org\/10.1007\/978-3-319-18164-6_33","DOI":"10.1007\/978-3-319-18164-6_33"},{"key":"212_CR162","doi-asserted-by":"publisher","first-page":"8","DOI":"10.15377\/2409-5761.2020.07.2","volume":"7","author":"Abdel-Nasser Sharkawy","year":"2020","unstructured":"Sharkawy, Abdel-Nasser. (2020). Principle of neural network and its main types: Review. Journal of Advances in Applied & Computational Mathematics, 7, 8\u201319. https:\/\/doi.org\/10.15377\/2409-5761.2020.07.2.","journal-title":"Journal of Advances in Applied & Computational Mathematics"},{"issue":"4","key":"212_CR163","first-page":"19","volume":"251","author":"Abdel-Nasser Sharkawy","year":"2021","unstructured":"Sharkawy, Abdel-Nasser. (2021). A survey on applications of human-robot interaction. Sensors & Transducers Journal, 251(4), 19\u201327.","journal-title":"Sensors & Transducers Journal"},{"issue":"2","key":"212_CR164","doi-asserted-by":"publisher","first-page":"469","DOI":"10.1007\/s00530-021-00854-x","volume":"28","author":"J Shen","year":"2022","unstructured":"Shen, J., Yang, H., Li, J., & Cheng, Z. (2022). Assessing learning engagement based on facial expression recognition in MOOC\u2019s scenario. Multimedia Systems, 28(2), 469\u2013478. https:\/\/doi.org\/10.1007\/s00530-021-00854-x.","journal-title":"Multimedia Systems"},{"key":"212_CR165","unstructured":"Simonyan, K. & Zisserman, A. (2014) Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations, ICLR 2015-Conference Track Proceedings, pp. 
1\u201314."},{"issue":"5","key":"212_CR166","doi-asserted-by":"publisher","first-page":"776","DOI":"10.1109\/72.159066","volume":"3","author":"PK Simpson","year":"1992","unstructured":"Simpson, P. K. (1992). Fuzzy min-max neural networks. I. Classification. IEEE Transactions on Neural Networks, 3(5), 776\u2013786. https:\/\/doi.org\/10.1109\/72.159066.","journal-title":"IEEE Transactions on Neural Networks"},{"key":"212_CR167","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2021.3127692","author":"O Sumer","year":"2021","unstructured":"Sumer, O., Goldberg, P., D\u2019Mello, S., Gerjets, P., Trautwein, U., & Kasneci, E. (2021). Multimodal engagement analysis from facial videos in the classroom. IEEE Transactions on Affective Computing. https:\/\/doi.org\/10.1109\/TAFFC.2021.3127692.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR168","doi-asserted-by":"publisher","unstructured":"Szegedy, C., Wei Liu, Yangqing Jia, Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. & Rabinovich, A. (2015). Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1\u20139. https:\/\/doi.org\/10.1109\/CVPR.2015.7298594","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"212_CR169","doi-asserted-by":"publisher","first-page":"236","DOI":"10.22266\/ijies2021.0430.21","volume":"14","author":"M Thiruthuvanathan","year":"2021","unstructured":"Thiruthuvanathan, M., Krishnan, B., & Rangaswamy, M. A. D. (2021). Engagement detection through facial emotional recognition using a shallow residual convolutional neural networks. International Journal of Intelligent Engineering and Systems, 14, 236\u2013247.","journal-title":"International Journal of Intelligent Engineering and Systems"},{"key":"212_CR170","doi-asserted-by":"publisher","first-page":"100079","DOI":"10.1016\/j.caeai.2022.100079","volume":"3","author":"C Thomas","year":"2022","unstructured":"Thomas, C., Puneeth Sarma, K. A. 
V., Swaroop Gajula, S., & Jayagopi, D. B. (2022). Automatic prediction of presentation style and student engagement from videos. Computers and Education: Artificial Intelligence, 3, 100079. https:\/\/doi.org\/10.1016\/j.caeai.2022.100079.","journal-title":"Computers and Education: Artificial Intelligence"},{"key":"212_CR171","doi-asserted-by":"publisher","unstructured":"Thong Huynh, V., Kim, S.-H., Lee, G.-S. & Yang, H.-J. (2019). Engagement intensity prediction with facial behavior features. In: 2019 International Conference on Multimodal Interaction, pp. 567\u2013571. ACM. https:\/\/doi.org\/10.1145\/3340555.3355714","DOI":"10.1145\/3340555.3355714"},{"issue":"3\u20134","key":"212_CR172","doi-asserted-by":"publisher","first-page":"81","DOI":"10.2511\/rpsd.34.3-4.81","volume":"34","author":"M Tincani","year":"2009","unstructured":"Tincani, M., Travers, J., & Boutot, A. (2009). Race, culture, and autism spectrum disorder: understanding the role of diversity in successful educational interventions. Research and Practice for Persons with Severe Disabilities, 34(3\u20134), 81\u201390. https:\/\/doi.org\/10.2511\/rpsd.34.3-4.81.","journal-title":"Research and Practice for Persons with Severe Disabilities"},{"issue":"4","key":"212_CR173","doi-asserted-by":"publisher","first-page":"1027","DOI":"10.1109\/TSMCB.2012.2195170","volume":"42","author":"Wu Tingfan","year":"2012","unstructured":"Tingfan, Wu., Butko, N. J., Ruvolo, P., Whitehill, J., Bartlett, M. S., & Movellan, J. R. (2012). Multilayer architectures for facial action unit recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(4), 1027\u20131038. https:\/\/doi.org\/10.1109\/TSMCB.2012.2195170.","journal-title":"IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)"},{"key":"212_CR174","doi-asserted-by":"publisher","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L. & Paluri, M. (2015). 
Learning spatiotemporal features with 3D convolutional networks. In: 2015 IEEE International Conference on Computer Vision (ICCV), vol. 2015 Inter, pp. 4489\u20134497 https:\/\/doi.org\/10.1109\/ICCV.2015.510","DOI":"10.1109\/ICCV.2015.510"},{"issue":"3","key":"212_CR175","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/math9030287","volume":"9","author":"P Vanneste","year":"2021","unstructured":"Vanneste, P., Oramas, J., Verelst, T., Tuytelaars, T., Raes, A., Depaepe, F., & Noortgate, W. V. D. (2021). Computer vision and human behaviour, emotion and cognition detection: A use case on student engagement. Mathematics, 9(3), 1\u201320. https:\/\/doi.org\/10.3390\/math9030287.","journal-title":"Mathematics"},{"key":"212_CR176","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L. & Polosukhin, I. (2017). Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. NIPS\u201917, pp. 6000\u20136010. Curran Associates Inc. https:\/\/dl.acm.org\/doi\/10.5555\/3295222.3295349"},{"key":"212_CR177","doi-asserted-by":"publisher","unstructured":"Viola, P. & Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1, pp. 511\u2013518. https:\/\/doi.org\/10.1109\/CVPR.2001.990517","DOI":"10.1109\/CVPR.2001.990517"},{"issue":"2","key":"212_CR178","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1023\/B:VISI.0000013087.49260.fb","volume":"57","author":"P Viola","year":"2004","unstructured":"Viola, P., & Jones, M. J. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137\u2013154. 
https:\/\/doi.org\/10.1023\/B:VISI.0000013087.49260.fb.","journal-title":"International Journal of Computer Vision"},{"key":"212_CR179","doi-asserted-by":"publisher","unstructured":"Voit, M. & Stiefelhagen, R. (2008). Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios. In: Proceedings of the 10th International Conference on Multimodal Interfaces - IMCI \u201908, p. 173. ACM Press. https:\/\/doi.org\/10.1145\/1452392.1452425","DOI":"10.1145\/1452392.1452425"},{"key":"212_CR180","doi-asserted-by":"publisher","unstructured":"Wagner, J., Jonghwa Kim, Andre, E. (2005). From physiological signals to emotions: Implementing and comparing selected methods for feature extraction and classification. In: 2005 IEEE International Conference on Multimedia and Expo, pp. 940\u2013943. IEEE. https:\/\/doi.org\/10.1109\/ICME.2005.1521579","DOI":"10.1109\/ICME.2005.1521579"},{"key":"212_CR181","doi-asserted-by":"publisher","unstructured":"Wang, Y., Kotha, A., Hong, P.H. & Qiu, M. (2020). Automated student engagement monitoring and evaluation during learning in the wild. In: Proceedings-2020 7th IEEE International Conference on Cyber Security and Cloud Computing and 2020 6th IEEE International Conference on Edge Computing and Scalable Cloud, CSCloud-EdgeCom 2020, pp. 270\u2013275. https:\/\/doi.org\/10.1109\/CSCloud-EdgeCom49738.2020.00054","DOI":"10.1109\/CSCloud-EdgeCom49738.2020.00054"},{"key":"212_CR182","doi-asserted-by":"publisher","first-page":"215","DOI":"10.1016\/J.NEUCOM.2020.10.081","volume":"429","author":"M Wang","year":"2021","unstructured":"Wang, M., & Deng, W. (2021). Deep face recognition: A survey. Neurocomputing, 429, 215\u2013244. 
https:\/\/doi.org\/10.1016\/J.NEUCOM.2020.10.081.","journal-title":"Neurocomputing"},{"issue":"7","key":"212_CR183","doi-asserted-by":"publisher","first-page":"682","DOI":"10.1109\/TMM.2010.2060716","volume":"12","author":"S Wang","year":"2010","unstructured":"Wang, S., Liu, Z., Lv, S., Lv, Y., Wu, G., Peng, P., et al. (2010). A natural visible and infrared facial expression database for expression recognition and emotion inference. IEEE Transactions on Multimedia, 12(7), 682\u2013691. https:\/\/doi.org\/10.1109\/TMM.2010.2060716.","journal-title":"IEEE Transactions on Multimedia"},{"issue":"6","key":"212_CR184","doi-asserted-by":"publisher","first-page":"1063","DOI":"10.1037\/0022-3514.54.6.1063","volume":"54","author":"D Watson","year":"1988","unstructured":"Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063\u20131070. https:\/\/doi.org\/10.1037\/0022-3514.54.6.1063.","journal-title":"Journal of Personality and Social Psychology"},{"issue":"1","key":"212_CR185","doi-asserted-by":"publisher","first-page":"86","DOI":"10.1109\/TAFFC.2014.2316163","volume":"5","author":"J Whitehill","year":"2014","unstructured":"Whitehill, J., Serpell, Z., Lin, Y. C., Foster, A., & Movellan, J. R. (2014). The faces of engagement: Automatic recognition of student engagement from facial expressions. IEEE Transactions on Affective Computing, 5(1), 86\u201398. https:\/\/doi.org\/10.1109\/TAFFC.2014.2316163.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR186","doi-asserted-by":"publisher","unstructured":"Winata, G.I., Kampman, O.P. & Fung, P. (2018). Attention-based LSTM for psychological stress detection from spoken language using distant supervision. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6204\u20136208. 
https:\/\/doi.org\/10.1109\/ICASSP.2018.8461990","DOI":"10.1109\/ICASSP.2018.8461990"},{"key":"212_CR187","doi-asserted-by":"publisher","unstructured":"Winne, P. H., & Perry, N. E. (2000). Measuring Self-Regulated Learning. Handbook of Self-Regulation, pp. 531\u2013566. https:\/\/doi.org\/10.1016\/B978-012109890-2\/50045-7.","DOI":"10.1016\/B978-012109890-2\/50045-7"},{"key":"212_CR188","unstructured":"Wittenburg, P., Brugman, H., Russel, A., Klassmann, A. & Sloetjes, H. (2006) ELAN: A professional framework for multimodality research. In: LREC."},{"key":"212_CR189","volume-title":"Data Mining: Practical Machine Learning Tools and Techniques","author":"Ian Witten","year":"2005","unstructured":"Witten, Ian, & Frank, Eibe. (2005). Data Mining: Practical Machine Learning Tools and Techniques (2nd ed.). Morgan Kaufmann.","edition":"2"},{"key":"212_CR190","doi-asserted-by":"publisher","unstructured":"Wolters, C.A. & Taylor, D.J. (2012). A self-regulated learning perspective on student engagement. In: Handbook of Research on Student Engagement, pp. 635\u2013651. Springer. https:\/\/doi.org\/10.1007\/978-1-4614-2018-7_30","DOI":"10.1007\/978-1-4614-2018-7_30"},{"key":"212_CR191","doi-asserted-by":"publisher","unstructured":"Wood, E., Baltruaitis, T., Zhang, X., Sugano, Y., Robinson, P. & Bulling, A. (2015). Rendering of eyes for eye-shape registration and gaze estimation. In: 2015 IEEE International Conference on Computer Vision (ICCV), vol. 2015 Inter, pp. 3756\u20133764. https:\/\/doi.org\/10.1109\/ICCV.2015.428","DOI":"10.1109\/ICCV.2015.428"},{"key":"212_CR192","doi-asserted-by":"publisher","unstructured":"Wu, J., Yang, B., Wang, Y. & Hattori, G. (2020). Advanced multi-instance learning method with multi-features engineering and conservative optimization for engagement intensity prediction. In: Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 777\u2013783. ACM. 
https:\/\/doi.org\/10.1145\/3382507.3417959","DOI":"10.1145\/3382507.3417959"},{"key":"212_CR193","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1016\/j.compedu.2018.09.020","volume":"128","author":"K Xie","year":"2019","unstructured":"Xie, K., Heddy, B. C., & Greene, B. A. (2019). Affordances of using mobile technology to support experience-sampling method in examining college students\u2019 engagement. Computers & Education, 128, 183\u2013198. https:\/\/doi.org\/10.1016\/j.compedu.2018.09.020.","journal-title":"Computers & Education"},{"key":"212_CR194","doi-asserted-by":"publisher","unstructured":"Yang, D., Alsadoon, A., Prasad, P.W.C., Singh, A.K. & Elchouemi, A. (2018). An emotion recognition model based on facial recognition in virtual learning environment. In: Procedia Computer Science, vol. 125, pp. 2\u201310. https:\/\/doi.org\/10.1016\/j.procs.2017.12.003","DOI":"10.1016\/j.procs.2017.12.003"},{"key":"212_CR195","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1016\/J.IHEDUC.2015.11.003","volume":"29","author":"JW You","year":"2016","unstructured":"You, J. W. (2016). Identifying significant indicators using LMS data to predict course achievement in online learning. The Internet and Higher Education, 29, 23\u201330. https:\/\/doi.org\/10.1016\/J.IHEDUC.2015.11.003.","journal-title":"The Internet and Higher Education"},{"key":"212_CR196","doi-asserted-by":"publisher","first-page":"149554","DOI":"10.1109\/ACCESS.2019.2947091","volume":"7","author":"J Yue","year":"2019","unstructured":"Yue, J., Tian, F., Chao, K.-M., Shah, N., Li, L., Chen, Y., & Zheng, Q. (2019). Recognizing multidimensional engagement of e-learners based on multi-channel data in e-learning environment. IEEE Access, 7, 149554\u2013149567. 
https:\/\/doi.org\/10.1109\/ACCESS.2019.2947091.","journal-title":"IEEE Access"},{"key":"212_CR197","doi-asserted-by":"publisher","DOI":"10.5772\/50648","author":"S-S Yun","year":"2012","unstructured":"Yun, S.-S., Choi, M.-T., Kim, M., & Song, J.-B. (2012). Intention reading from a Fuzzy-based human engagement model and behavioural features. International Journal of Advanced Robotic Systems. https:\/\/doi.org\/10.5772\/50648.","journal-title":"International Journal of Advanced Robotic Systems"},{"issue":"2","key":"212_CR198","doi-asserted-by":"publisher","first-page":"148","DOI":"10.7763\/IJMLC.2015.V5.499","volume":"5","author":"W-H Yun","year":"2015","unstructured":"Yun, W.-H., Lee, D., Park, C., & Kim, J. (2015). Automatic engagement level estimation of kids in a learning environment. International Journal of Machine Learning and Computing, 5(2), 148\u2013152. https:\/\/doi.org\/10.7763\/IJMLC.2015.V5.499.","journal-title":"International Journal of Machine Learning and Computing"},{"issue":"4","key":"212_CR199","doi-asserted-by":"publisher","first-page":"696","DOI":"10.1109\/TAFFC.2018.2834350","volume":"11","author":"WH Yun","year":"2020","unstructured":"Yun, W. H., Lee, D., Park, C., Kim, J., & Kim, J. (2020). Automatic recognition of children engagement from facial video using convolutional neural networks. IEEE Transactions on Affective Computing, 11(4), 696\u2013707. https:\/\/doi.org\/10.1109\/TAFFC.2018.2834350.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR200","doi-asserted-by":"publisher","unstructured":"Zadeh, A., Lim, Y.C., Baltrusaitis, T. & Morency, L.-P. (2017). Convolutional experts constrained local model for 3D facial landmark detection. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), vol. 2018-January, pp. 2519\u20132528. 
https:\/\/doi.org\/10.1109\/ICCVW.2017.296","DOI":"10.1109\/ICCVW.2017.296"},{"issue":"1","key":"212_CR201","doi-asserted-by":"publisher","first-page":"80","DOI":"10.1186\/s13640-017-0228-8","volume":"2017","author":"J Zaletelj","year":"2017","unstructured":"Zaletelj, J., & Ko\u0161ir, A. (2017). Predicting students\u2019 attention in the classroom from Kinect facial and body features. EURASIP Journal on Image and Video Processing, 2017(1), 80. https:\/\/doi.org\/10.1186\/s13640-017-0228-8.","journal-title":"EURASIP Journal on Image and Video Processing"},{"issue":"3","key":"212_CR202","doi-asserted-by":"publisher","first-page":"300","DOI":"10.1109\/TAFFC.2016.2553038","volume":"8","author":"S Zhalehpour","year":"2017","unstructured":"Zhalehpour, S., Onder, O., Akhtar, Z., & Erdem, C. E. (2017). BAUM-1: A spontaneous audio-visual face database of affective and mental states. IEEE Transactions on Affective Computing, 8(3), 300\u2013313. https:\/\/doi.org\/10.1109\/TAFFC.2016.2553038.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"212_CR203","doi-asserted-by":"publisher","unstructured":"Zhang, Z., Hu, Y., Liu, M. & Huang, T. (2007). Head pose estimation in seminar room using multi view face detectors, pp. 299\u2013304 https:\/\/doi.org\/10.1007\/978-3-540-69568-4_27","DOI":"10.1007\/978-3-540-69568-4_27"},{"key":"212_CR204","doi-asserted-by":"publisher","unstructured":"Zhang, H., Xiao, X., Huang, T., Liu, S., Xia, Y. & Li, J. (2019). An novel end-to-end network for automatic student engagement recognition. In: 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC), pp. 342\u2013345. https:\/\/doi.org\/10.1109\/ICEIEC.2019.8784507","DOI":"10.1109\/ICEIEC.2019.8784507"},{"issue":"1","key":"212_CR205","doi-asserted-by":"publisher","first-page":"63","DOI":"10.1177\/0735633119825575","volume":"58","author":"Z Zhang","year":"2020","unstructured":"Zhang, Z., Li, Z., Liu, H., Cao, T., & Liu, S. (2020). 
Data-driven online learning engagement detection via facial expression and mouse behavior recognition technology. Journal of Educational Computing Research, 58(1), 63\u201386. https:\/\/doi.org\/10.1177\/0735633119825575.","journal-title":"Journal of Educational Computing Research"},{"issue":"10","key":"212_CR206","doi-asserted-by":"publisher","first-page":"1499","DOI":"10.1109\/LSP.2016.2603342","volume":"23","author":"K Zhang","year":"2016","unstructured":"Zhang, K., Zhang, Z., Li, Z., & Qiao, Y. (2016). Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10), 1499\u20131503. https:\/\/doi.org\/10.1109\/LSP.2016.2603342.","journal-title":"IEEE Signal Processing Letters"},{"issue":"3s","key":"212_CR207","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3363560","volume":"15","author":"S Zhao","year":"2019","unstructured":"Zhao, S., Wang, S., Soleymani, M., Joshi, D., & Ji, Q. (2019). Affective computing for large-scale heterogeneous multimedia data. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(3s), 1\u201332. https:\/\/doi.org\/10.1145\/3363560.","journal-title":"ACM Transactions on Multimedia Computing, Communications, and Applications"},{"key":"212_CR208","doi-asserted-by":"publisher","unstructured":"Zheng, X., Hasegawa, S., Tran, M.-T., Ota, K. & Unoki, T. (2021). Estimation of learners\u2019 engagement using face and body features by transfer learning, pp. 541\u2013552. https:\/\/doi.org\/10.1007\/978-3-030-77772-2_36","DOI":"10.1007\/978-3-030-77772-2_36"},{"key":"212_CR209","doi-asserted-by":"publisher","unstructured":"Zhu, B., Lan, X., Guo, X., Barner, K.E. & Boncelet, C. (2020). Multi-rate attention based gru model for engagement prediction. In: Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 841\u2013848. ACM. 
https:\/\/doi.org\/10.1145\/3382507.3417965","DOI":"10.1145\/3382507.3417965"},{"key":"212_CR210","doi-asserted-by":"publisher","unstructured":"Zhu, X., Lei, Z., Liu, X., Shi, H. & Li, S.Z. (2016). Face alignment across large poses: A 3D solution. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2016-December, pp. 146\u2013155. https:\/\/doi.org\/10.1109\/CVPR.2016.23","DOI":"10.1109\/CVPR.2016.23"}],"container-title":["Smart Learning Environments"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40561-022-00212-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40561-022-00212-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40561-022-00212-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,12]],"date-time":"2022-11-12T09:13:36Z","timestamp":1668244416000},"score":1,"resource":{"primary":{"URL":"https:\/\/slejournal.springeropen.com\/articles\/10.1186\/s40561-022-00212-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,12]]},"references-count":210,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["212"],"URL":"https:\/\/doi.org\/10.1186\/s40561-022-00212-y","relation":{},"ISSN":["2196-7091"],"issn-type":[{"value":"2196-7091","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,12]]},"assertion":[{"value":"12 July 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 October 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 November 
2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"31"}}