{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T06:38:10Z","timestamp":1763015890029,"version":"3.45.0"},"reference-count":35,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T00:00:00Z","timestamp":1762992000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Comput. Sci."],"abstract":"<jats:p>\n                    Wearable Activity Recognition consists of recognizing actions of people from on-body sensor data using machine learning. Developing suitable machine learning models typically requires substantial amounts of annotated training data. Manually annotating large datasets is tedious and time intensive. Interactive machine learning systems can be used to support this, with the aim of reducing annotation time or improving accuracy. We contribute a new web-based annotation tool for time series signals synchronized with a video recording with integrated automated suggestions, facilitated by ML models, to assist and improve the annotation process of annotators. This is enabled by focusing user attention toward points of interest. This is particularly relevant for the annotation of long periodic activities to allow fast navigation in large datasets without skipping start and end points of activities. To evaluate the efficacy of this system, we conducted a user study with 32 participants who were tasked with annotating modes of locomotion in a dataset composed of multiple long (over 12 h) consecutive sensor recordings captured by body-worn accelerometers. We analyzed the quantitative impact on annotation performance and the qualitative impact on the user experience. The results show that the implemented annotation assistance improved the annotation quality by 11%\n                    <jats:italic>F<\/jats:italic>\n                    1 Score but reduced annotation speed by 20%, whereas the NASA Task Load Index results show that participants perceived the assistance as beneficial for annotation speed but not for annotation quality.\n                  <\/jats:p>","DOI":"10.3389\/fcomp.2025.1696178","type":"journal-article","created":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T06:30:47Z","timestamp":1763015447000},"update-policy":"https:\/\/doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Assisting annotators of wearable activity recognition datasets through automated sensor-based suggestions"],"prefix":"10.3389","volume":"7","author":[{"given":"Lukas","family":"G\u00fcnthermann","sequence":"first","affiliation":[]},{"given":"Ivor","family":"Simpson","sequence":"additional","affiliation":[]},{"given":"Phil","family":"Birch","sequence":"additional","affiliation":[]},{"given":"Daniel","family":"Roggen","sequence":"additional","affiliation":[]}],"member":"1965","published-online":{"date-parts":[[2025,11,13]]},"reference":[{"key":"B1","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3448083","article-title":"Attend and discriminate: Beyond the state-of-the-art for human activity recognition using wearable sensors","volume":"5","author":"Abedin","year":"2021","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol"},
{"key":"B2","doi-asserted-by":"publisher","first-page":"38","DOI":"10.1007\/978-3-642-14715-9_5","article-title":"\u201cHuman activity recognition using inertial\/magnetic sensor units,\u201d","author":"Altun","year":"2010","journal-title":"Human Behavior Understanding"},{"key":"B3","doi-asserted-by":"publisher","first-page":"2","DOI":"10.1609\/hcomp.v7i1.5285","article-title":"Beyond accuracy: The role of mental models in human-ai team performance","volume":"7","author":"Bansal","year":"2019","journal-title":"Proc. AAAI Conf. Hum. Comput. Crowdsourc"},{"key":"B4","doi-asserted-by":"publisher","first-page":"143","DOI":"10.1007\/s13218-020-00632-3","article-title":"explainable cooperative machine learning with nova","volume":"34","author":"Baur","year":"2020","journal-title":"K\u00fcnstliche Intell"},{"key":"B5","first-page":"189","volume-title":"SUS- A Quick and Dirty Usability Scale","author":"Brooke","year":"1996"},{"key":"B6","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/2499621","article-title":"A tutorial on human activity recognition using body-worn inertial sensors","volume":"46","author":"Bulling","year":"2014","journal-title":"ACM Comput. Surv"},{"key":"B7","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/ICCCI56745.2023.10128242","article-title":"\u201cHuman activity recognition for analysing fitness dataset using a fitness tracker,\u201d","author":"Chadha","year":"2023"},{"key":"B8","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3447744","article-title":"Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities","volume":"54","author":"Chen","year":"2021","journal-title":"ACM Comput. Surv"},
{"key":"B9","article-title":"A comprehensive review of automated data annotation techniques in human activity recognition","author":"Demrozi","year":"2023","journal-title":"arXiv [Preprint]"},{"key":"B10","doi-asserted-by":"publisher","first-page":"2639","DOI":"10.3390\/s18082639","article-title":"Exploring semi-supervised methods for labeling support in multimodal datasets","volume":"18","author":"Diete","year":"2018","journal-title":"Sensors"},{"key":"B11","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1007\/978-3-642-39666-3","article-title":"\u201cSmart video browsing with augmented navigation bars,\u201d","author":"Fabro","year":"2013"},{"key":"B12","doi-asserted-by":"publisher","first-page":"42592","DOI":"10.1109\/ACCESS.2018.2858933","article-title":"The university of sussex-huawei locomotion and transportation dataset for multimodal analytics with mobile devices","volume":"6","author":"Gjoreski","year":"2018","journal-title":"IEEE Access"},{"key":"B13","doi-asserted-by":"publisher","first-page":"233","DOI":"10.1145\/3581754.3584112","article-title":"\u201cApplication for doctoral consortium IUI 2023,\u201d","author":"Gunthermann","year":"2023"},{"key":"B14","doi-asserted-by":"publisher","first-page":"7085","DOI":"10.1109\/JSEN.2023.3349191","article-title":"Egocentric human activities recognition with multimodal interaction sensing","volume":"24","author":"Hao","year":"2024","journal-title":"IEEE Sens. J"},
{"key":"B15","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1016\/S0166-4115(08)62386-9","article-title":"\u201cDevelopment of nasa-tlx (task load index): results of empirical and theoretical research,\u201d","author":"Hart","year":"1988","journal-title":"Human Mental Workload, Volume 52 of Advances in Psychology"},{"key":"B16","doi-asserted-by":"publisher","DOI":"10.1109\/ACII.2019.8925519","article-title":"\u201cNOVA - a tool for eXplainable cooperative machine learning,\u201d","author":"Heimerl","year":"2019"},{"key":"B17","doi-asserted-by":"publisher","first-page":"1155","DOI":"10.1109\/TAFFC.2020.3043603","article-title":"Unraveling ml models of emotion with nova: multi-level explainable ai for non-experts","volume":"13","author":"Heimerl","year":"2022","journal-title":"IEEE Trans. Affect. Comput"},{"key":"B18","doi-asserted-by":"publisher","first-page":"2005","DOI":"10.1007\/s12144-023-04400-y","article-title":"Workers' whole day workload and next day cognitive performance","volume":"43","author":"Hernandez","year":"2023","journal-title":"Curr. Psychol"},{"key":"B19","doi-asserted-by":"publisher","first-page":"1379788","DOI":"10.3389\/fcomp.2024.1379788","article-title":"A matter of annotation: an empirical study on in situ and self-recall activity annotations from wearable sensors","volume":"6","author":"Hoelzemann","year":"2024","journal-title":"Front. Comput. Sci"},{"key":"B20","doi-asserted-by":"publisher","first-page":"118","DOI":"10.1109\/EDOC.2001.950428","article-title":"\u201cWeb-application development using the model\/view\/controller design pattern,\u201d","author":"Leff","year":"2001"},{"key":"B21","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3432222","article-title":"Crowdact: achieving high-quality crowdsourced datasets in mobile activity recognition","volume":"5","author":"Mairittha","year":"2021","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol"},
{"key":"B22","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1145\/67243.67247","article-title":"Dynamic versus static menus: an exploratory comparison","volume":"20","author":"Mitchell","year":"1989","journal-title":"ACM SIGCHI Bull"},{"key":"B23","doi-asserted-by":"publisher","first-page":"341","DOI":"10.1016\/j.neucom.2019.08.092","article-title":"Online active learning for human activity recognition from sensory data streams","volume":"390","author":"Mohamad","year":"2020","journal-title":"Neurocomputing"},{"key":"B24","doi-asserted-by":"publisher","first-page":"115","DOI":"10.3390\/s16010115","article-title":"Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition","volume":"16","author":"Ord\u00f3\u00f1ez","year":"2016","journal-title":"Sensors"},{"key":"B25","doi-asserted-by":"publisher","DOI":"10.1109\/CBMI.2014.6849850","article-title":"\u201cLabelMovie: semi-supervised machine annotation tool with quality assurance and crowd-sourcing options for videos,\u201d","author":"Palotai","year":"2014"},{"key":"B26","doi-asserted-by":"publisher","first-page":"2491","DOI":"10.3390\/s24082491","article-title":"A multi-modal egocentric activity recognition approach towards video domain generalization","volume":"24","author":"Papadakis","year":"2024","journal-title":"Sensors"},{"key":"B27","doi-asserted-by":"publisher","DOI":"10.1145\/3311350.3347153","article-title":"\u201cDesigning videogames to crowdsource accelerometer data annotation for activity recognition research,\u201d","author":"Ponnada","year":"2019"},{"key":"B28","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2023.122538","article-title":"Transfer learning and its extensive appositeness in human activity recognition: a survey","author":"Ray","year":"2024","journal-title":"Expert Syst. Appl"},
{"key":"B29","doi-asserted-by":"publisher","first-page":"233","DOI":"10.1109\/INSS.2010.5573462","article-title":"\u201cCollecting complex activity datasets in highly rich networked sensor environments,\u201d","author":"Roggen","year":"2010"},{"key":"B30","doi-asserted-by":"publisher","first-page":"1599","DOI":"10.1145\/1518701.1518946","article-title":"\u201cComparison of three one-question, post-task usability questionnaires,\u201d","author":"Sauro","year":"2009"},{"key":"B31","doi-asserted-by":"publisher","first-page":"101817","DOI":"10.1016\/j.pmcj.2023.101817","article-title":"Online continual learning for human activity recognition","volume":"93","author":"Schiemer","year":"2023","journal-title":"Pervasive Mob. Comput"},{"key":"B32","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/1889681.1889687","article-title":"Performance metrics for activity recognition","volume":"2","author":"Ward","year":"2011","journal-title":"ACM Trans. Intell. Syst. Technol"},{"key":"B33","doi-asserted-by":"publisher","first-page":"108110","DOI":"10.1016\/j.engappai.2024.108110","article-title":"MFCANN: a feature diversification framework based on local and global attention for human activity recognition","volume":"133","author":"Yang","year":"2024","journal-title":"Eng. Appl. Artif. Intell"},
{"key":"B34","doi-asserted-by":"publisher","first-page":"24315","DOI":"10.1109\/JIOT.2022.3188785","article-title":"Online learning of wearable sensing for human activity recognition","volume":"9","author":"Zhang","year":"2022","journal-title":"IEEE Internet Things J"},{"key":"B35","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1145\/3544794.3558467","article-title":"\u201cTinyhar: a lightweight deep learning model designed for human activity recognition,\u201d","author":"Zhou","year":"2022"}],"container-title":["Frontiers in Computer Science"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fcomp.2025.1696178\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T06:30:48Z","timestamp":1763015448000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fcomp.2025.1696178\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,13]]},"references-count":35,"alternative-id":["10.3389\/fcomp.2025.1696178"],"URL":"https:\/\/doi.org\/10.3389\/fcomp.2025.1696178","relation":{},"ISSN":["2624-9898"],"issn-type":[{"value":"2624-9898","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,13]]},"article-number":"1696178"}}