{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T16:33:36Z","timestamp":1740155616389,"version":"3.37.3"},"reference-count":41,"publisher":"National Library of Serbia","issue":"3","license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["ComSIS","COMPUT SCI INF SYST","COMPUT SCI INFORM SY","COMPUTER SCI INFORM","COMSIS J"],"published-print":{"date-parts":[[2022]]},"abstract":"<jats:p>Motion recognition is an active and challenging topic in computer vision, and recognition performance depends closely on the network input, the network structure, and feature fusion. Because of noise in video, traditional methods cannot adequately extract feature information, which leads to inaccurate motion recognition. Feature selection directly affects recognition efficiency, and many problems in multi-level feature fusion remain unsolved. In this paper, we propose a novel motion recognition method based on an improved two-stream convolutional neural network and sparse feature fusion. Since sparse features in the low-rank space effectively capture the moving objects in a video, we use them to supplement the network input data. To address the lack of information interaction within the network, we introduce an attention mechanism that fuses high-level semantic information with low-level detail information, which further improves the performance of the two-stream convolutional neural network. 
Experimental results on the UCF101 and HMDB51 data sets show that the proposed method effectively improves motion recognition performance.<\/jats:p>","DOI":"10.2298\/csis220105043c","type":"journal-article","created":{"date-parts":[[2022,9,19]],"date-time":"2022-09-19T14:37:16Z","timestamp":1663598236000},"page":"1329-1348","source":"Crossref","is-referenced-by-count":0,"title":["A novel motion recognition method based on improved two-stream convolutional neural network and sparse feature fusion"],"prefix":"10.2298","volume":"19","author":[{"given":"Chen","family":"Chen","sequence":"first","affiliation":[{"name":"Sports Institute, Henan University of Technology Zhengzhou City, China"}]}],"member":"1078","reference":[{"key":"ref1","doi-asserted-by":"crossref","unstructured":"Yao, G., Lei, T., Zhong, J. \u201dA Review of Convolutional-Neural-Network-Based Action Recognition,\u201d Pattern Recognition Letters, vol. 118, pp. 14-22. (2018)","DOI":"10.1016\/j.patrec.2018.05.018"},{"key":"ref2","unstructured":"Li, H., Ding, Y., Li, C., et al. \u201dAction recognition of temporal segment network based on feature fusion,\u201d Journal of Computer Research and Development, Vol. 57, No. 1, pp. 145-158. (2020)"},{"key":"ref3","doi-asserted-by":"crossref","unstructured":"Olivieri, D. N., Conde, I.G., Sobrino, X.A.V. \u201dEigenspace-based fall detection and activity recognition from motion templates and machine learning,\u201d Expert Systems with Applications, Vol. 39, No. 5, pp. 5935-5945. (2012)","DOI":"10.1016\/j.eswa.2011.11.109"},{"key":"ref4","doi-asserted-by":"crossref","unstructured":"Zheng, D., Li, H., Yin, S. \u201dAction Recognition Based on the Modified Two-stream CNN,\u201d International Journal of Mathematical Sciences and Computing (IJMSC), Vol. 6, No. 6, pp. 15-23. (2020)","DOI":"10.5815\/ijmsc.2020.06.03"},{"key":"ref5","doi-asserted-by":"crossref","unstructured":"J. Long, X. Wang, W. Zhou, J. Zhang, D. Dai and G. Zhu. 
\u201dA Comprehensive Review of Signal Processing and Machine Learning Technologies for UHF PD Detection and Diagnosis (I): Preprocessing and Localization Approaches,\u201d IEEE Access, vol. 9, pp. 69876-69904, (2021).","DOI":"10.1109\/ACCESS.2021.3077483"},{"key":"ref6","doi-asserted-by":"crossref","unstructured":"Wang, P., Li, W., Ogunbona, P., et al. \u201dRGB-D-based Human Motion Recognition with Deep Learning: A Survey,\u201d Computer vision and image understanding, Vol. 171, pp. 118-139. (2017)","DOI":"10.1016\/j.cviu.2018.04.007"},{"key":"ref7","doi-asserted-by":"crossref","unstructured":"Kim, K., Yong, K.C. \u201dEffective inertial sensor quantity and locations on a body for deep learning-based worker\u2019s motion recognition,\u201d Automation in Construction, Vol. 113. (2020)","DOI":"10.1016\/j.autcon.2020.103126"},{"key":"ref8","doi-asserted-by":"crossref","unstructured":"Yin, S., Li, H. \u201dGSAPSO-MQC: medical image encryption based on genetic simulated annealing particle swarm optimization and modified quantum chaos system,\u201d Evolutionary Intelligence, vol. 14, pp. 1817-1829. (2021)","DOI":"10.1007\/s12065-020-00440-6"},{"key":"ref9","doi-asserted-by":"crossref","unstructured":"Ji, S., Xu, W., Yang, M., and Yu, K. \u201d3D Convolutional Neural Networks for Human Action Recognition,\u201d IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 1, pp. 221-231. (2013)","DOI":"10.1109\/TPAMI.2012.59"},{"key":"ref10","doi-asserted-by":"crossref","unstructured":"Feichtenhofer, C., Pinz, A., Zisserman, A. \u201dConvolutional Two-Stream Network Fusion for Video Action Recognition,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1933-1941.","DOI":"10.1109\/CVPR.2016.213"},{"key":"ref11","doi-asserted-by":"crossref","unstructured":"Wang, H., Schmid, C. \u201dAction Recognition with Improved Trajectories,\u201d 2013 IEEE International Conference on Computer Vision, 2013, pp. 
3551-3558.","DOI":"10.1109\/ICCV.2013.441"},{"key":"ref12","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., et al. \u201dLearning Spatiotemporal Features with 3D Convolutional Networks,\u201d 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 4489-4497.","DOI":"10.1109\/ICCV.2015.510"},{"key":"ref13","doi-asserted-by":"crossref","unstructured":"Zhu, W., Hu, J., Sun, G., Cao, X., et al. \u201dA Key Volume Mining Deep Framework for Action Recognition,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1991-1999.","DOI":"10.1109\/CVPR.2016.219"},{"key":"ref14","doi-asserted-by":"crossref","unstructured":"Kar, A., Rai, N., Sikka, K., and Sharma, G. \u201dAdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos,\u201d 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5699-5708.","DOI":"10.1109\/CVPR.2017.604"},{"key":"ref15","unstructured":"Yi, Z., Lan, Z., Newsam, S., et al. Hidden Two-Stream Convolutional Networks for Action Recognition. 2017. arXiv:1704.00389"},{"key":"ref16","doi-asserted-by":"crossref","unstructured":"Sevilla-Lara, L., Liao, Y., G\u00fcney, F., et al. \u201dOn the Integration of Optical Flow and Action Recognition,\u201d Pattern Recognition. GCPR 2018. Lecture Notes in Computer Science, vol. 11269, pp. 281-297, Springer, Cham. (2019)","DOI":"10.1007\/978-3-030-12939-2_20"},{"key":"ref17","doi-asserted-by":"crossref","unstructured":"Zhang, B., Wang, L., Wang, Z., et al. \u201dReal-Time Action Recognition With Deeply Transferred Motion Vector CNNs,\u201d IEEE Transactions on Image Processing, Vol. 27, No. 5, pp. 2326-2339. (2018)","DOI":"10.1109\/TIP.2018.2791180"},{"key":"ref18","doi-asserted-by":"crossref","unstructured":"Choutas, V., Weinzaepfel, P., Revaud, J. 
\u201dPoTion: Pose MoTion Representation for Action Recognition,\u201d 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7024-7033.","DOI":"10.1109\/CVPR.2018.00734"},{"key":"ref19","doi-asserted-by":"crossref","unstructured":"Wang, L., et al. \u201dTemporal Segment Networks: Towards Good Practices for Deep Action Recognition,\u201d Computer Vision-ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol. 9912, pp. 20-36, Springer, Cham. (2016)","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"ref20","doi-asserted-by":"crossref","unstructured":"Lan, Z., Zhu, Y., Hauptmann, A. G., and Newsam, S. \u201dDeep Local Video Feature for Action Recognition,\u201d 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1219-1225. (2017)","DOI":"10.1109\/CVPRW.2017.161"},{"key":"ref21","doi-asserted-by":"crossref","unstructured":"Zhou, B., Andonian, A., Oliva, A., et al. \u201dTemporal Relational Reasoning in Videos,\u201d Computer Vision-ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11205, pp. 831-846, Springer, Cham. (2018)","DOI":"10.1007\/978-3-030-01246-5_49"},{"key":"ref22","doi-asserted-by":"crossref","unstructured":"Xu, H., Das, A., and Saenko, K. \u201dR-C3D: Region Convolutional 3D Network for Temporal Activity Detection,\u201d 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5794-5803. (2017)","DOI":"10.1109\/ICCV.2017.617"},{"key":"ref23","doi-asserted-by":"crossref","unstructured":"Yin, S., Li, H., Teng, L. \u201dAirport Detection Based on Improved Faster RCNN in Large Scale Remote Sensing Images,\u201d Sensing and Imaging, Vol. 21. (2020).","DOI":"10.1007\/s11220-020-00314-2"},{"key":"ref24","doi-asserted-by":"crossref","unstructured":"Chen, J., Kong, J., Sun, H. et al. \u201dSpatiotemporal Interaction Residual Networks with Pseudo3D for Video Action Recognition,\u201d Sensors, Vol. 20, No. 11, 3126. 
(2020)","DOI":"10.3390\/s20113126"},{"key":"ref25","unstructured":"Jiang, D., Li, H., Yin, S. \u201dSpeech Emotion Recognition Method Based on Improved Long Short-term Memory Networks,\u201d International Journal of Electronics and Information Engineering, Vol. 12, No. 4, pp. 147-154. (2020)"},{"key":"ref26","doi-asserted-by":"crossref","unstructured":"Jiang, Y., Wu, Z., Tang, J., et al. \u201dModeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification,\u201d IEEE Transactions on Multimedia, vol. 20, no. 11, pp. 3137-3147. (2018)","DOI":"10.1109\/TMM.2018.2823900"},{"key":"ref27","doi-asserted-by":"crossref","unstructured":"Du, W., Wang, Y., Qiao, Y. \u201dRPAN: An End-to-End Recurrent Pose-Attention Network for Action Recognition in Videos,\u201d 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3745-3754.","DOI":"10.1109\/ICCV.2017.402"},{"key":"ref28","doi-asserted-by":"crossref","unstructured":"Duan, Z., Zhang, T., Tan, J. et al. \u201dNon-Local Multi-Focus Image Fusion With Recurrent Neural Networks,\u201d IEEE Access, Vol. 8, pp. 135284-135295. (2020)","DOI":"10.1109\/ACCESS.2020.3010542"},{"key":"ref29","doi-asserted-by":"crossref","unstructured":"Byeon, Y.H., Kwak, K.C. \u201dFacial Expression Recognition Using 3D Convolutional Neural Network,\u201d International Journal of Advanced Computer Science & Applications, Vol. 5, No. 12. (2014).","DOI":"10.14569\/IJACSA.2014.051215"},{"key":"ref30","doi-asserted-by":"crossref","unstructured":"Cai, Z., Wang, L., Peng, X., Qiao, Y. \u201dMulti-view Super Vector for Action Recognition,\u201d 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 596-603.","DOI":"10.1109\/CVPR.2014.83"},{"key":"ref31","doi-asserted-by":"crossref","unstructured":"Luong, V. D., Wang, L., Xiao, G. 
\u201dAction Recognition Using Hierarchical Independent Subspace Analysis with Trajectory,\u201d Springer International Publishing, 2015.","DOI":"10.1007\/978-3-319-13359-1_42"},{"key":"ref32","doi-asserted-by":"crossref","unstructured":"Peng, X., Wang, L., Wang, X., et al. \u201dBag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice,\u201d Computer Vision & Image Understanding, Vol. 150, pp. 109-125. (2016)","DOI":"10.1016\/j.cviu.2016.03.013"},{"key":"ref33","doi-asserted-by":"crossref","unstructured":"Wang, L., Qiao, Y., Tang, X. \u201dMoFAP: A Multi-level Representation for Action Recognition,\u201d International Journal of Computer Vision, Vol. 119, No. 3, pp. 254-271. (2016)","DOI":"10.1007\/s11263-015-0859-0"},{"key":"ref34","doi-asserted-by":"crossref","unstructured":"Wang, L., Qiao, Y., Tang, X. \u201dAction recognition with trajectory-pooled deep-convolutional descriptors,\u201d 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4305-4314.","DOI":"10.1109\/CVPR.2015.7299059"},{"key":"ref35","doi-asserted-by":"crossref","unstructured":"Varol, G., Laptev, I., Schmid, C. \u201dLong-Term Temporal Convolutions for Action Recognition,\u201d IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 6, pp. 1510-1517. (2018)","DOI":"10.1109\/TPAMI.2017.2712608"},{"key":"ref36","doi-asserted-by":"crossref","unstructured":"Qiu, Z., Yao, T., Mei, T. \u201dLearning Spatio-Temporal Representation with Pseudo-3D Residual Networks,\u201d 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017.","DOI":"10.1109\/ICCV.2017.590"},{"key":"ref37","unstructured":"Simonyan, K., Zisserman, A. \u201dTwo-stream convolutional networks for action recognition in videos,\u201d Neural Information Processing Systems, Vol. 1, No. 4, pp. 568-576. (2014)"},{"key":"ref38","doi-asserted-by":"crossref","unstructured":"Joe Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. 
Monga and G. Toderici. \u201dBeyond short snippets: Deep networks for video classification,\u201d 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4694-4702.","DOI":"10.1109\/CVPR.2015.7299101"},{"key":"ref39","doi-asserted-by":"crossref","unstructured":"Wang, X., Farhadi A., and Gupta, A. \u201dActions Transformations,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2658-2667.","DOI":"10.1109\/CVPR.2016.291"},{"key":"ref40","doi-asserted-by":"crossref","unstructured":"Dianhuai Shen, Xueying Jiang, Lin Teng. \u201dResidual network based on convolution attention model and feature fusion for dance motion recognition,\u201d EAI Endorsed Transactions on Scalable Information Systems, 21(33), e8, 2021. http:\/\/dx.doi.org\/10.4108\/eai.6-10-2021.171247","DOI":"10.4108\/eai.6-10-2021.171247"},{"key":"ref41","unstructured":"Jisi A and Shoulin Yin. \u201dA New Feature Fusion Network for Student Behavior Recognition in Education,\u201d Journal of Applied Science and Engineering, vol. 24, no. 2, pp. 133-140. (2021)"}],"container-title":["Computer Science and Information Systems"],"original-title":[],"language":"en","deposited":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T08:22:05Z","timestamp":1691742125000},"score":1,"resource":{"primary":{"URL":"https:\/\/doiserbia.nb.rs\/Article.aspx?ID=1820-02142200043C"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022]]},"references-count":41,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022]]}},"URL":"https:\/\/doi.org\/10.2298\/csis220105043c","relation":{},"ISSN":["1820-0214","2406-1018"],"issn-type":[{"type":"print","value":"1820-0214"},{"type":"electronic","value":"2406-1018"}],"subject":[],"published":{"date-parts":[[2022]]}}}