{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T12:37:37Z","timestamp":1773751057475,"version":"3.50.1"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2020,12,17]],"date-time":"2020-12-17T00:00:00Z","timestamp":1608163200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["17H01779"],"award-info":[{"award-number":["17H01779"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002241","name":"Japan Science and Technology Agency","doi-asserted-by":"publisher","award":["PMJCR17A5"],"award-info":[{"award-number":["PMJCR17A5"]}],"id":[{"id":"10.13039\/501100002241","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2020,12,17]]},"abstract":"<jats:p>Gesture recognition and human-activity recognition from multi-channel sensory data are important tasks in wearable and ubiquitous computing. In these tasks, increasing both the number of recognizable activity classes and the recognition accuracy is essential. However, this is usually an ill-posed problem because individual differences in the same gesture class may affect the discrimination of different gesture classes. 
One promising solution is to use personal classifiers, but this requires personal gesture samples for re-training the classifiers.<\/jats:p>\n          <jats:p>We propose a method of solving this issue that obtains personal gesture classifiers using few user gesture samples, thus achieving accurate gesture recognition for an increased number of gesture classes without requiring extensive user calibration. The novelty of our method is introducing a generative adversarial network (GAN)-based style transformer to 'generate' a user's gesture data. The method synthesizes gesture examples of the target class of a target user by transforming a) gesture data of another class of the same user (intra-user transformation) or b) gesture data of the same class of another user (inter-user transformation). The synthesized data are then used to train the personal gesture classifier.<\/jats:p>\n          <jats:p>We conducted comprehensive experiments using 1) different classifiers including SVM and CNN, 2) intra- and inter-user transformations, 3) various data-missing patterns, and 4) two different types of sensory data. Results showed that the proposed method improved performance. Specifically, the CNN-based classifiers increased in average accuracy from 0.747 to 0.822 on the CheekInput dataset and from 0.856 to 0.899 on the USC-HAD dataset. Moreover, the experimental results under various data-missing conditions revealed a relation between the number of missing gesture classes and the accuracy of the existing and proposed methods, and we were able to clarify several advantages of the proposed method. 
These results indicate the potential of considerably reducing the number of required training samples of target users.<\/jats:p>","DOI":"10.1145\/3432199","type":"journal-article","created":{"date-parts":[[2020,12,18]],"date-time":"2020-12-18T15:39:14Z","timestamp":1608305954000},"page":"1-20","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["GAN-based Style Transformation to Improve Gesture-recognition Accuracy"],"prefix":"10.1145","volume":"4","author":[{"given":"Noeru","family":"Suzuki","sequence":"first","affiliation":[{"name":"Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Kyoto"}]},{"given":"Yuki","family":"Watanabe","sequence":"additional","affiliation":[{"name":"Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Kyoto"}]},{"given":"Atsushi","family":"Nakazawa","sequence":"additional","affiliation":[{"name":"Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, Kyoto"}]}],"member":"320","published-online":{"date-parts":[[2020,12,18]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Proc. ICML.","author":"Almahairi Amjad","year":"2018"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/2499621"},{"key":"e_1_2_2_3_1","volume-title":"Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337","author":"Chen Tian Qi","year":"2016"},{"key":"e_1_2_2_4_1","volume-title":"StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. CoRR abs\/1711.09020","author":"Choi Yunjey","year":"2017"},{"key":"e_1_2_2_5_1","volume-title":"A learned representation for artistic style. arXiv preprint arXiv:1610.07629","author":"Dumoulin Vincent","year":"2016"},{"key":"e_1_2_2_6_1","volume-title":"2017 ACM\/IEEE 8th International Conference on Cyber-Physical Systems (ICCPS). 
293--302","author":"Fallahzadeh R."},{"key":"e_1_2_2_7_1","volume-title":"A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576","author":"Gatys Leon A","year":"2015"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.265"},{"key":"e_1_2_2_9_1","unstructured":"Ian Goodfellow Jean Pouget-Abadie Mehdi Mirza Bing Xu David Warde-Farley Sherjil Ozair Aaron Courville and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. 2672--2680."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00858"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3090076"},{"key":"e_1_2_2_12_1","volume-title":"Multi-modal Convolutional Neural Networks for Activity Recognition. In 2015 IEEE International Conference on Systems, Man, and Cybernetics. 3017--3022","author":"Ha S.","year":"2015"},{"key":"e_1_2_2_13_1","volume-title":"CoRR abs\/1604.08880","author":"Hammerla Nils Y.","year":"2016"},{"key":"e_1_2_2_14_1","volume-title":"Deep Residual Learning for Image Recognition. CoRR abs\/1512.03385","author":"He Kaiming","year":"2015"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2017.3271464"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925975"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.167"},{"key":"e_1_2_2_18_1","volume-title":"Deep Recurrent Neural Network for Mobile Human Activity Recognition with High Throughput. CoRR abs\/1611.03607","author":"Inoue Masaya","year":"2016"},{"key":"e_1_2_2_19_1","volume-title":"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 
CoRR abs\/1502.03167","author":"Ioffe Sergey","year":"2015"},{"key":"e_1_2_2_20_1","volume-title":"Proc. CVPR.","author":"Isola Phillip"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/212430.212431"},{"key":"e_1_2_2_24_1","volume-title":"Proc. ICML.","author":"Kim Taeksoo","year":"2017"},{"key":"e_1_2_2_25_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/ROBOT.1996.509165"},{"key":"e_1_2_2_27_1","unstructured":"Yijun Li Chen Fang Jimei Yang Zhaowen Wang Xin Lu and Ming-Hsuan Yang. 2017. Universal style transfer via feature transforms. In Advances in neural information processing systems. 386--396."},{"key":"e_1_2_2_28_1","volume-title":"Proc. NIPS.","author":"Liu Ming-Yu","year":"2017"},{"key":"e_1_2_2_29_1","volume-title":"Switzerland) 16, 1","author":"Ord\u00f3\u00f1ez Francisco Javier","year":"2016"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00860"},{"key":"e_1_2_2_31_1","unstructured":"Pekka Siirtola Heli Koskim\u00e4ki and Juha R\u00f6ning. 2019. Personalizing human activity recognition models using incremental learning. arXiv:cs.LG\/1905.12628"},{"key":"e_1_2_2_32_1","volume-title":"Very deep convolutional networks for large-scale image recognition. 
arXiv preprint arXiv:1409.1556","author":"Simonyan Karen","year":"2014"},{"key":"e_1_2_2_33_1","volume-title":"Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022","author":"Ulyanov Dmitry","year":"2016"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2018.02.010"},{"key":"e_1_2_2_35_1","volume-title":"Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence.","author":"Weiss Gary Mitchell","year":"2012"},{"key":"e_1_2_2_36_1","volume-title":"18th International Conference on Pattern Recognition (ICPR'06)","volume":"3","author":"Xu Deyou","year":"2006"},{"key":"e_1_2_2_37_1","doi-asserted-by":"crossref","volume-title":"CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-Mounted Display (VRST '17)","author":"Yamashita Koki","DOI":"10.1145\/3139131.3139146"},{"key":"e_1_2_2_38_1","first-page":"1","article-title":"CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-mounted Display (VRST '17)","volume":"19","author":"Yamashita Koki","year":"2017","journal-title":"ACM"},{"key":"e_1_2_2_39_1","volume-title":"Phyo Phyo San, Xiao Li Li, and Shonali Krishnaswamy.","author":"Yang Jian Bo","year":"2015"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.310"},{"key":"e_1_2_2_41_1","doi-asserted-by":"crossref","unstructured":"Mi Zhang and Alexander Sawchuk. 2012. USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors. 1036--1043. 
https:\/\/doi.org\/10.1145\/2370216.2370438","DOI":"10.1145\/2370216.2370438"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.244"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3432199","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3432199","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3432199","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:47:09Z","timestamp":1750193229000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3432199"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,12,17]]},"references-count":42,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,12,17]]}},"alternative-id":["10.1145\/3432199"],"URL":"https:\/\/doi.org\/10.1145\/3432199","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,12,17]]},"assertion":[{"value":"2020-12-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}