{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,30]],"date-time":"2025-10-30T07:10:10Z","timestamp":1761808210227,"version":"build-2065373602"},"reference-count":43,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2018,8,1]],"date-time":"2018-08-01T00:00:00Z","timestamp":1533081600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This paper presents a simple yet effective method for improving the performance of zero-shot learning (ZSL). ZSL classifies instances of unseen classes, from which no training data is available, by utilizing the attributes of the classes. Conventional ZSL methods have treated all the available attributes equally, but this sometimes causes misclassification. This is because an attribute that is effective for classifying instances of one class is not always effective for another class. In such cases, the metric used to classify the latter class can be undesirably influenced by the irrelevant attribute. This paper solves this problem by taking the importance of each attribute for each class into account when calculating the metric. In addition to the proposal of this new method, this paper also contributes by providing a dataset for pose classification based on wearable sensors, named HDPoseDS. It contains 22 classes of poses performed by 10 subjects with 31 IMU sensors across the full body. To the best of our knowledge, it is the richest wearable-sensor dataset, especially in terms of sensor density, and thus it is suitable for studying zero-shot pose\/action recognition.
The presented method was evaluated on HDPoseDS and achieved a relative improvement of 5.9% over the best baseline method.<\/jats:p>","DOI":"10.3390\/s18082485","type":"journal-article","created":{"date-parts":[[2018,8,1]],"date-time":"2018-08-01T11:22:34Z","timestamp":1533122554000},"page":"2485","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["Attributes\u2019 Importance for Zero-Shot Pose-Classification Based on Wearable Sensors"],"prefix":"10.3390","volume":"18","author":[{"given":"Hiroki","family":"Ohashi","sequence":"first","affiliation":[{"name":"Research &amp; Development Group, Hitachi, Ltd., Tokyo 185-8601, Japan"}]},{"given":"Mohammad","family":"Al-Naser","sequence":"additional","affiliation":[{"name":"German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4239-6520","authenticated-orcid":false,"given":"Sheraz","family":"Ahmed","sequence":"additional","affiliation":[{"name":"German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]},{"given":"Katsuyuki","family":"Nakamura","sequence":"additional","affiliation":[{"name":"Research &amp; Development Group, Hitachi, Ltd., Tokyo 185-8601, Japan"}]},{"given":"Takuto","family":"Sato","sequence":"additional","affiliation":[{"name":"Research &amp; Development Group, Hitachi, Ltd., Tokyo 185-8601, Japan"}]},{"given":"Andreas","family":"Dengel","sequence":"additional","affiliation":[{"name":"German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany"}]}],"member":"1968","published-online":{"date-parts":[[2018,8,1]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1016\/j.imavis.2017.01.010","article-title":"Going deeper into action recognition: A survey","volume":"60","author":"Herath","year":"2017","journal-title":"Image Vis. 
Comput."},{"key":"ref_2","unstructured":"Wang, J., Chen, Y., Hao, S., Peng, X., and Hu, L. (arXiv, 2017). Deep learning for sensor-based activity recognition: A survey, arXiv."},{"key":"ref_3","unstructured":"Larochelle, H., Erhan, D., and Bengio, Y. (2008, January 13\u201317). Zero-data Learning of New Tasks. Proceedings of the National Conference on Artificial Intelligence (AAAI), Chicago, IL, USA."},{"key":"ref_4","unstructured":"Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Ranzato, M.A., and Mikolov, T. (2013, January 5\u201310). DeViSE: A Deep Visual-Semantic Embedding Model. Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"112","DOI":"10.1109\/MSP.2017.2763441","article-title":"Recent Advances in Zero-Shot Recognition: Toward Data-Efficient Understanding of Visual Content","volume":"35","author":"Fu","year":"2018","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Lampert, C.H., Nickisch, H., and Harmeling, S. (2009, January 20\u201325). Learning to detect unseen object classes by between-class attribute transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA.","DOI":"10.1109\/CVPRW.2009.5206594"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"453","DOI":"10.1109\/TPAMI.2013.140","article-title":"Attribute-based classification for zero-shot visual object categorization","volume":"36","author":"Lampert","year":"2014","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Liu, J., Kuipers, B., and Savarese, S. (2011, January 20\u201325). Recognizing human actions by attributes. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA.","DOI":"10.1109\/CVPR.2011.5995353"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Cheng, H.T., Griss, M., Davis, P., Li, J., and You, D. (2013, January 8\u201312). Towards zero-shot learning for human activity recognition using semantic attribute sequence model. Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), Zurich, Switzerland.","DOI":"10.1145\/2493432.2493511"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Xu, X., Hospedales, T.M., and Gong, S. (2016, January 8\u201316). Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46475-6_22"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Li, Y., Hu, S.H., and Li, B. (2016, January 25\u201328). Recognizing unseen actions in a domain-adapted embedding space. Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.","DOI":"10.1109\/ICIP.2016.7533150"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Qin, J., Liu, L., Shao, L., Shen, F., Ni, B., Chen, J., and Wang, Y. (2017, January 21\u201326). Zero-shot action recognition with error-correcting output codes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.117"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Iqbal, U., Milan, A., and Gall, J. (2017, January 21\u201326). PoseTrack: Joint Multi-Person Pose Estimation and Tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.495"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Chen, Y., Shen, C., Wei, X.S., Liu, L., and Yang, J. 
(2017, January 22\u201329). Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.137"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"G\u00fcler, R.A., Neverova, N., and Kokkinos, I. (2018, January 18\u201322). Densepose: Dense human pose estimation in the wild. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00762"},{"key":"ref_16","unstructured":"Palatucci, M., Pomerleau, D., Hinton, G.E., and Mitchell, T.M. (2009, January 6\u201311). Zero-shot learning with semantic output codes. Proceedings of the Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Cheng, H.T., Sun, F.T., Griss, M., Davis, P., Li, J., and You, D. (2013, January 25\u201328). NuActiv: Recognizing Unseen New Activities Using Semantic Attribute-Based Learning. Proceedings of the International Conference on Mobile Systems, Applications, and Services (MobiSys), Taipei, Taiwan.","DOI":"10.1145\/2462456.2464438"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Xu, X., Hospedales, T., and Gong, S. (2015, January 27\u201330). Semantic embedding space for zero-shot action recognition. Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada.","DOI":"10.1109\/ICIP.2015.7350760"},{"key":"ref_19","unstructured":"Socher, R., Ganjoo, M., Manning, C.D., and Ng, A. (2013, January 5\u201310). Zero-shot learning through cross-modal transfer. Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA."},{"key":"ref_20","unstructured":"Jayaraman, D., and Grauman, K. (2014, January 8\u201313). Zero-shot recognition with unreliable attributes. 
Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montr\u00e9al, QC, Canada."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Alexiou, I., Xiang, T., and Gong, S. (2016, January 25\u201328). Exploring synonyms as context in zero-shot action recognition. Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.","DOI":"10.1109\/ICIP.2016.7533149"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Wang, Q., and Chen, K. (2017, January 18\u201322). Alternative semantic representations for zero-shot human action recognition. Proceedings of the Joint European Conference on Machine Learning & Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Skopje, Macedonia.","DOI":"10.1007\/978-3-319-71249-9_6"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1667","DOI":"10.1109\/LSP.2016.2612247","article-title":"Beyond Semantic Attributes: Discrete Latent Attributes Learning for Zero-Shot Recognition","volume":"23","author":"Qin","year":"2016","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Tong, B., Klinkigt, M., Chen, J., Cui, X., Kong, Q., Murakami, T., and Kobayashi, Y. (2017, January 4\u20139). Adversarial Zero-Shot Learning with Semantic Augmentation. Proceedings of the National Conference On Artificial Intelligence (AAAI), San Francisco, CA, USA.","DOI":"10.1609\/aaai.v32i1.11886"},{"key":"ref_25","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative Adversarial Nets. Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montr\u00e9al, QC, Canada."},{"key":"ref_26","unstructured":"Liu, H., Sun, F., Fang, B., and Guo, D. (2018). Cross-Modal Zero-Shot-Learning for Tactile Object Recognition. IEEE Trans. Syst. Man Cybern. 
Syst."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1192","DOI":"10.1109\/SURV.2012.110112.00192","article-title":"A survey on human activity recognition using wearable sensors","volume":"15","author":"Lara","year":"2013","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1145\/2499621","article-title":"A tutorial on human activity recognition using body-worn inertial sensors","volume":"46","author":"Bulling","year":"2014","journal-title":"ACM Comput. Surv."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1321","DOI":"10.1109\/JSEN.2014.2370945","article-title":"Wearable sensors for human activity monitoring: A review","volume":"15","author":"Mukhopadhyay","year":"2015","journal-title":"IEEE Sens. J."},{"key":"ref_30","unstructured":"Guan, X., Raich, R., and Wong, W.K. (2016, January 19\u201324). Efficient Multi-Instance Learning for Activity Recognition from Time Series Data Using an Auto-Regressive Hidden Markov Model. Proceedings of the International Conference on Machine Learning (ICML), New York, NY, USA."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1145\/2134203.2134205","article-title":"Multimodal recognition of reading activity in transit using body-worn sensors","volume":"9","author":"Bulling","year":"2012","journal-title":"ACM Trans. Appl. Percept."},{"key":"ref_32","unstructured":"Adams, R.J., Parate, A., and Marlin, B.M. (2016, January 19\u201324). Hierarchical Span-Based Conditional Random Fields for Labeling and Segmenting Events in Wearable Sensor Data Streams. Proceedings of the International Conference on Machine Learning (ICML), New York, NY, USA."},{"key":"ref_33","unstructured":"Zheng, Y., Wong, W.K., Guan, X., and Trost, S. (2013, January 14\u201318). Physical Activity Recognition from Accelerometer Data Using a Multi-Scale Ensemble Method. 
Proceedings of the Innovative Applications of Artificial Intelligence Conference (IAAI), Bellevue, WA, USA."},{"key":"ref_34","unstructured":"Yang, J., Nguyen, M.N., San, P.P., Li, X.L., and Krishnaswamy, S. (2015, January 25\u201331). Deep convolutional neural networks on multichannel time series for human activity recognition. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Jiang, W., and Yin, Z. (2015, January 26\u201330). Human activity recognition using wearable sensors by deep convolutional neural networks. Proceedings of the ACM International Conference on Multimedia (MM), Brisbane, Australia.","DOI":"10.1145\/2733373.2806333"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ronao, C.A., and Cho, S.B. (2015, January 9\u201312). Deep convolutional neural networks for human activity recognition with smartphone sensors. Proceedings of the International Conference on Neural Information Processing (ICONIP), Istanbul, Turkey.","DOI":"10.1007\/978-3-319-26561-2_6"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ord\u00f3\u00f1ez, F.J., and Roggen, D. (2016). Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors, 16.","DOI":"10.3390\/s16010115"},{"key":"ref_38","unstructured":"Hammerla, N.Y., Halloran, S., and Ploetz, T. (2016, January 9\u201315). Deep, convolutional, and recurrent models for human activity recognition using wearables. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Wang, W., Miao, C., and Hao, S. (2017, January 23\u201326). Zero-shot human activity recognition via nonlinear compatibility based method. 
Proceedings of the International Conference on Web Intelligence (WI), Leipzig, Germany.","DOI":"10.1145\/3106426.3106526"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Al-Naser, M., Ohashi, H., Ahmed, S., Nakamura, K., Akiyama, T., Sato, T., Nguyen, P., and Dengel, A. (2018, January 16\u201318). Hierarchical Model for Zero-shot Activity Recognition using Wearable Sensors. Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART), Madeira, Portugal.","DOI":"10.5220\/0006595204780485"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Xian, Y., Schiele, B., and Akata, Z. (2017, January 21\u201326). Zero-Shot Learning\u2014The Good, the Bad and the Ugly. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.328"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Kumar Verma, V., Arora, G., Mishra, A., and Rai, P. (2018, January 18\u201322). Generalized Zero-Shot Learning via Synthesized Examples. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00450"},{"key":"ref_43","unstructured":"Snell, J., Swersky, K., and Zemel, R. (2017, January 4\u20139). Prototypical networks for few-shot learning. 
Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/18\/8\/2485\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T15:15:42Z","timestamp":1760195742000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/18\/8\/2485"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,8,1]]},"references-count":43,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2018,8]]}},"alternative-id":["s18082485"],"URL":"https:\/\/doi.org\/10.3390\/s18082485","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2018,8,1]]}}}