{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,20]],"date-time":"2025-07-20T03:38:26Z","timestamp":1752982706012,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":41,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,10,9]],"date-time":"2023-10-09T00:00:00Z","timestamp":1696809600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,10,9]]},"DOI":"10.1145\/3577190.3614108","type":"proceedings-article","created":{"date-parts":[[2023,10,7]],"date-time":"2023-10-07T22:30:48Z","timestamp":1696717848000},"page":"670-678","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-3032-3840","authenticated-orcid":false,"given":"Vijay Kumar","family":"Singh","sequence":"first","affiliation":[{"name":"CSE, IIIT Delhi, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3366-0171","authenticated-orcid":false,"given":"Pragma","family":"Kar","sequence":"additional","affiliation":[{"name":"Information Technology, Jadavpur University, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2777-647X","authenticated-orcid":false,"given":"Ayush Madhan","family":"Sohini","sequence":"additional","affiliation":[{"name":"ECE, IIIT Delhi, India"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-6719-8283","authenticated-orcid":false,"given":"Madhav","family":"Rangaiah","sequence":"additional","affiliation":[{"name":"CSE, IIIT Delhi, 
India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3531-968X","authenticated-orcid":false,"given":"Sandip","family":"Chakraborty","sequence":"additional","affiliation":[{"name":"CSE, IIT Kharagpur, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4240-4746","authenticated-orcid":false,"given":"Mukulika","family":"Maity","sequence":"additional","affiliation":[{"name":"CSE, IIIT Delhi, India"}]}],"member":"320","published-online":{"date-parts":[[2023,10,9]]},"reference":[
{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCITM53167.2021.9677878"},
{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300534"},
{"key":"e_1_3_2_2_3_1","volume-title":"Rigid head motion in expressive speech animation: Analysis and synthesis","author":"Busso Carlos","year":"2007","unstructured":"Carlos Busso, Zhigang Deng, Michael Grimm, Ulrich Neumann, and Shrikanth Narayanan. 2007. Rigid head motion in expressive speech animation: Analysis and synthesis. IEEE transactions on audio, speech, and language processing 15, 3 (2007), 1075\u20131086."},
{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445243"},
{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3242969.3264986"},
{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3446382.3448361"},
{"key":"e_1_3_2_2_7_1","volume-title":"Virtual Classrooms: A Meeting Wrapper for the Teachers. In India HCI","author":"Das Snigdha","year":"2021","unstructured":"Snigdha Das, Sandip Chakraborty, and Bivas Mitra. 2021. Quantifying Students\u2019 Involvement during Virtual Classrooms: A Meeting Wrapper for the Teachers. In India HCI 2021. 133\u2013139."},
{"key":"e_1_3_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3550328"},
{"key":"e_1_3_2_2_9_1","first-page":"1","article-title":"n-gage: Predicting in-class emotional, behavioural and cognitive engagement in the wild","volume":"4","author":"Gao Nan","year":"2020","unstructured":"Nan Gao, Wei Shao, Mohammad\u00a0Saiedur Rahaman, and Flora\u00a0D Salim. 2020. n-gage: Predicting in-class emotional, behavioural and cognitive engagement in the wild. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 3 (2020), 1\u201326.","journal-title":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"},
{"key":"e_1_3_2_2_10_1","first-page":"1","article-title":"Towards Robust Gesture Recognition by Characterizing the Sensing Quality of WiFi Signals","volume":"6","author":"Gao Ruiyang","year":"2022","unstructured":"Ruiyang Gao, Wenwei Li, Yaxiong Xie, Enze Yi, Leye Wang, Dan Wu, and Daqing Zhang. 2022. Towards Robust Gesture Recognition by Characterizing the Sensing Quality of WiFi Signals. Proc. Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 1 (2022), 1\u201326.","journal-title":"Proc. Interactive, Mobile, Wearable and Ubiquitous Technologies"},
{"key":"e_1_3_2_2_11_1","first-page":"1","article-title":"SonicFace: Tracking Facial Expressions Using a Commodity Microphone Array","volume":"5","author":"Gao Yang","year":"2021","unstructured":"Yang Gao, Yincheng Jin, Seokmin Choi, Jiyang Li, Junjie Pan, Lin Shu, Chi Zhou, and Zhanpeng Jin. 2021. SonicFace: Tracking Facial Expressions Using a Commodity Microphone Array. Proceedings of the ACM on IMWUT 5, 4 (2021), 1\u201333.","journal-title":"Proceedings of the ACM on IMWUT"},
{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.11648\/j.ijamtp.20180402.13"},
{"key":"e_1_3_2_2_13_1","volume-title":"Lightweight and Standalone IoT Based WiFi Sensing for Active Repositioning and Mobility","author":"Hernandez Steven\u00a0M.","year":"2020","unstructured":"Steven\u00a0M. Hernandez and Eyuphan Bulut. 2020. Lightweight and Standalone IoT Based WiFi Sensing for Active Repositioning and Mobility. In WoWMoM 2020. Cork, Ireland."},
{"key":"e_1_3_2_2_14_1","volume-title":"Covert and overt voluntary attention: linked or independent? Cognitive Brain Research 18, 1","author":"Hunt R","year":"2003","unstructured":"Amelia\u00a0R Hunt and Alan Kingstone. 2003. Covert and overt voluntary attention: linked or independent? Cognitive Brain Research 18, 1 (2003), 102\u2013105."},
{"key":"e_1_3_2_2_15_1","volume-title":"Gestalt-based feature similarity measure in trademark database. Pattern recognition 39, 5","author":"Jiang Hui","year":"2006","unstructured":"Hui Jiang, Chong-Wah Ngo, and Hung-Khoon Tan. 2006. Gestalt-based feature similarity measure in trademark database. Pattern recognition 39, 5 (2006), 988\u20131001."},
{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3394974"},
{"key":"e_1_3_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3555151"},
{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1177\/2158244018824505"},
{"key":"e_1_3_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3517482"},
{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3310194"},
{"key":"e_1_3_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1111\/1467-8721.01256"},
{"key":"e_1_3_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858202"},
{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-46133-1_17"},
{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2016.2515084"},
{"key":"e_1_3_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3242969.3242973"},
{"key":"e_1_3_2_2_26_1","volume-title":"KLUE: Korean Language Understanding Evaluation. arxiv:2105.09680\u00a0[cs.CL]","author":"Park Sungjoon","year":"2021","unstructured":"Sungjoon Park, Jihyung Moon, Sungdong Kim, Won\u00a0Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Yongsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jungwoo Ha, and Kyunghyun Cho. 2021. KLUE: Korean Language Understanding Evaluation. arxiv:2105.09680\u00a0[cs.CL]"},
{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/EUSIPCO.2015.7362835"},
{"volume-title":"Implementation of Machine Learning for CAPTCHAs Authentication Using Facial Recognition","author":"Raavi Rupendra","key":"e_1_3_2_2_28_1","unstructured":"Rupendra Raavi, Mansour Alqarni, and Patrick\u00a0CK Hung. 2022. Implementation of Machine Learning for CAPTCHAs Authentication Using Facial Recognition. In ICDSIS. IEEE, 1\u20135."},
{"key":"e_1_3_2_2_29_1","first-page":"146423","article-title":"Speech Function in Feature Stories in Reader\u2019s Digest","volume":"1","author":"Rosaen Arta","year":"2012","unstructured":"Arta Rosaen and Lidiman Sinaga. 2012. Speech Function in Feature Stories in Reader\u2019s Digest. Linguistica 1, 1 (2012), 146423.","journal-title":"Linguistica"},
{"key":"e_1_3_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3340555.3353742"},
{"key":"e_1_3_2_2_31_1","volume-title":"The Performance of LSTM and BiLSTM in Forecasting Time Series. In 2019 IEEE International Conference on Big Data (Big Data). 3285\u20133292","author":"Siami-Namini Sima","year":"2019","unstructured":"Sima Siami-Namini, Neda Tavakoli, and Akbar\u00a0Siami Namin. 2019. The Performance of LSTM and BiLSTM in Forecasting Time Series. In 2019 IEEE International Conference on Big Data (Big Data). 3285\u20133292."},
{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2021.3120098"},
{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-77703-0_28"},
{"key":"e_1_3_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3359264"},
{"key":"e_1_3_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICUFN49451.2021.9528612"},
{"key":"e_1_3_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2789168.2790093"},
{"key":"e_1_3_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2017.2679658"},
{"key":"e_1_3_2_2_38_1","volume-title":"Environment-Independent Wi-Fi Human Activity Recognition with Adversarial Network. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 3330\u20133334","author":"Wang Zhengyang","year":"2021","unstructured":"Zhengyang Wang, Sheng Chen, Wei Yang, and Yang Xu. 2021. Environment-Independent Wi-Fi Human Activity Recognition with Adversarial Network. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 3330\u20133334."},
{"key":"e_1_3_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/2971648.2971658"},
{"key":"e_1_3_2_2_40_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TIM.2022.3216419","article-title":"CSI-Based Location-Independent Human Activity Recognition Using Feature Fusion","volume":"71","author":"Zhang Yong","year":"2022","unstructured":"Yong Zhang, Qingqing Liu, Yujie Wang, and Guangwei Yu. 2022. CSI-Based Location-Independent Human Activity Recognition Using Feature Fusion. IEEE Transactions on Instrumentation and Measurement 71 (2022), 1\u201312.","journal-title":"IEEE Transactions on Instrumentation and Measurement"},
{"key":"e_1_3_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3307334.3326081"}],
"event":{"name":"ICMI '23: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"],"location":"Paris France","acronym":"ICMI '23"},"container-title":["INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3577190.3614108","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3577190.3614108","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:51:11Z","timestamp":1750182671000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3577190.3614108"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,9]]},"references-count":41,"alternative-id":["10.1145\/3577190.3614108","10.1145\/3577190"],"URL":"https:\/\/doi.org\/10.1145\/3577190.3614108","relation":{},"subject":[],"published":{"date-parts":[[2023,10,9]]},"assertion":[{"value":"2023-10-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}