{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T07:03:14Z","timestamp":1769065394015,"version":"3.49.0"},"reference-count":42,"publisher":"SAGE Publications","issue":"3","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AIC"],"published-print":{"date-parts":[[2023,8,21]]},
"abstract":"<jats:p>The most widely used two-stream architectures and building blocks for human action recognition in videos generally consist of 2D or 3D convolutional neural networks. 3D convolution can extract motion information across video frames, which is essential for video classification. 3D convolutional neural networks usually outperform their 2D counterparts, but at a higher computational cost. In this paper, we propose a heterogeneous two-stream architecture that incorporates two convolutional networks: one uses a mixed convolution network (MCN), which inserts 3D convolutions among 2D convolutions, to learn from RGB frames, while the other adopts a BN-Inception network to learn from optical flow frames. Considering the redundancy of neighboring video frames, we adopt a sparse sampling strategy to reduce the computational cost. Our architecture is trained and evaluated on the standard action recognition benchmarks HMDB51 and UCF101. Experimental results show that our approach achieves state-of-the-art performance on HMDB51 (73.04%) and UCF101 (95.27%).<\/jats:p>",
"DOI":"10.3233\/aic-220188","type":"journal-article","created":{"date-parts":[[2023,8,18]],"date-time":"2023-08-18T15:30:37Z","timestamp":1692372637000},"page":"219-233","source":"Crossref","is-referenced-by-count":1,"title":["A heterogeneous two-stream network for human action recognition"],"prefix":"10.1177","volume":"36",
"author":[{"given":"Shengbin","family":"Liao","sequence":"first","affiliation":[{"name":"National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China"}]},{"given":"Xiaofeng","family":"Wang","sequence":"additional","affiliation":[{"name":"National Engineering Laboratory for Educational Big Data Technology, Central China Normal University, Wuhan, China"}]},{"given":"ZongKai","family":"Yang","sequence":"additional","affiliation":[{"name":"National Engineering Laboratory for Educational Big Data Technology, Central China Normal University, Wuhan, China"}]}],"member":"179",
"reference":[{"key":"10.3233\/AIC-220188_ref1","doi-asserted-by":"publisher","first-page":"3691","DOI":"10.1109\/TIP.2021.3064256","article-title":"Attend and guide (AG-net): A keypoints-driven attention-based deep network for image recognition","volume":"30","author":"Bera","year":"2021","journal-title":"IEEE Transactions on Image Processing"},
{"key":"10.3233\/AIC-220188_ref2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.502"},
{"issue":"3","key":"10.3233\/AIC-220188_ref3","doi-asserted-by":"publisher","first-page":"664","DOI":"10.1109\/TCSVT.2016.2615324","article-title":"Fast optical flow estimation based on the split Bregman method","volume":"28","author":"Chen","year":"2018","journal-title":"IEEE Transactions on Circuits and Systems for Video Technology"},
{"key":"10.3233\/AIC-220188_ref4","doi-asserted-by":"publisher","first-page":"498","DOI":"10.1109\/LSP.2022.3144074","article-title":"Rethinking lightweight: Multiple angle strategy for efficient video action recognition","volume":"29","author":"Chen","year":"2022","journal-title":"IEEE Signal Processing Letters"},
{"key":"10.3233\/AIC-220188_ref5","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00807"},
{"key":"10.3233\/AIC-220188_ref6","doi-asserted-by":"publisher","first-page":"5195","DOI":"10.1109\/JSEN.2019.2903645","article-title":"A robust framework for abnormal human action recognition using R-transform and Zernike moments in depth videos","volume":"19","author":"Dhiman","year":"2019","journal-title":"IEEE Sensors Journal"},
{"key":"10.3233\/AIC-220188_ref7","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1016\/j.engappai.2018.08.014","article-title":"A review of state-of-the-art techniques for abnormal human activity recognition","volume":"77","author":"Dhiman","year":"2019","journal-title":"Engineering Applications of Artificial Intelligence"},
{"key":"10.3233\/AIC-220188_ref8","doi-asserted-by":"publisher","first-page":"3835","DOI":"10.1109\/TIP.2020.2965299","article-title":"View-invariant deep architecture for human action recognition using two-stream motion and shape temporal dynamics","volume":"29","author":"Dhiman","year":"2020","journal-title":"IEEE Transactions on Image Processing"},
{"issue":"3","key":"10.3233\/AIC-220188_ref9","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3441628","article-title":"Part-wise spatio-temporal attention driven CNN-based 3D human action recognition","volume":"17","author":"Dhiman","year":"2021","journal-title":"ACM Transactions on Multimedia Computing Communications and Applications"},
{"key":"10.3233\/AIC-220188_ref10","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298878"},
{"key":"10.3233\/AIC-220188_ref11","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.787"},
{"key":"10.3233\/AIC-220188_ref12","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.213"},
{"issue":"5","key":"10.3233\/AIC-220188_ref13","doi-asserted-by":"publisher","first-page":"2550","DOI":"10.1109\/TCSVT.2020.3042178","article-title":"Hierarchical deep CNN feature set-based representation learning for robust cross-resolution face recognition","volume":"32","author":"Gao","year":"2022","journal-title":"IEEE Transactions on Circuits and Systems for Video Technology"},
{"key":"10.3233\/AIC-220188_ref14","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2017.373"},
{"key":"10.3233\/AIC-220188_ref15","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00685"},
{"key":"10.3233\/AIC-220188_ref16","unstructured":"S.\u00a0Ioffe and C.\u00a0Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), Lille, France, 6\u201311 July, 2015, pp.\u00a0448\u2013456."},
{"key":"10.3233\/AIC-220188_ref17","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.118406"},
{"key":"10.3233\/AIC-220188_ref18","doi-asserted-by":"publisher","first-page":"6001","DOI":"10.1007\/s10489-020-02176-3","article-title":"Action recognition using interrelationships of 3D joints and frames based on angle sine relation and distance features using interrelationships","volume":"51","author":"Islam","year":"2021","journal-title":"Applied Intelligence"},
{"key":"10.3233\/AIC-220188_ref19","doi-asserted-by":"crossref","first-page":"1481","DOI":"10.1007\/s11063-021-10585-9","article-title":"Applied human action recognition network based on SNSP features","volume":"54","author":"Islam","year":"2022","journal-title":"Neural Processing Letters"},
{"issue":"3","key":"10.3233\/AIC-220188_ref20","doi-asserted-by":"publisher","first-page":"417","DOI":"10.1049\/iet-ipr.2018.6437","article-title":"CAD: Concatenated action descriptor for one and two person(s), using silhouette and silhouette\u2019s skeleton","volume":"14","author":"Islam","year":"2020","journal-title":"IET Image Processing"},
{"issue":"1","key":"10.3233\/AIC-220188_ref21","doi-asserted-by":"publisher","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","article-title":"3D convolutional neural networks for human action recognition","volume":"35","author":"Ji","year":"2013","journal-title":"IEEE Transactions on Pattern Analysis & Machine Intelligence"},
{"issue":"7","key":"10.3233\/AIC-220188_ref22","doi-asserted-by":"publisher","first-page":"4584","DOI":"10.1109\/TII.2020.3018487","article-title":"D3D: Dual 3-D convolutional network for real-time action recognition","volume":"17","author":"Jiang","year":"2021","journal-title":"IEEE Transactions on Industrial Informatics"},
{"issue":"1","key":"10.3233\/AIC-220188_ref23","doi-asserted-by":"publisher","first-page":"145","DOI":"10.1109\/TCSVT.2018.2887408","article-title":"Rate-accuracy trade-off in video classification with deep convolutional neural networks","volume":"30","author":"Jubran","year":"2020","journal-title":"IEEE Transactions on Circuits and Systems for Video Technology"},
{"key":"10.3233\/AIC-220188_ref25","doi-asserted-by":"publisher","first-page":"60179","DOI":"10.1109\/ACCESS.2020.2983427","article-title":"Action recognition in videos using pre-trained 2D convolutional neural networks","volume":"8","author":"Kim","year":"2020","journal-title":"IEEE Access"},
{"key":"10.3233\/AIC-220188_ref26","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2011.6126543"},
{"key":"10.3233\/AIC-220188_ref27","doi-asserted-by":"publisher","first-page":"2392","DOI":"10.1109\/TMM.2021.3080076","article-title":"Scene recognition mechanism for service robot adapting various families: A CNN-based approach using multi-type cameras","volume":"24","author":"Liu","year":"2022","journal-title":"IEEE Transactions on Multimedia"},
{"key":"10.3233\/AIC-220188_ref28","doi-asserted-by":"crossref","unstructured":"Z.\u00a0Liu, L.\u00a0Wang, W.\u00a0Wu, C.\u00a0Qian and T.\u00a0Lu, TAM: Temporal adaptive module for video recognition, in: IEEE International Conference on Computer Vision (ICCV 2021), 2021, pp.\u00a011\u201317, https:\/\/arxiv.org\/abs\/2005.06803.","DOI":"10.1109\/ICCV48922.2021.01345"},
{"key":"10.3233\/AIC-220188_ref29","first-page":"76","article-title":"TS-LSTM and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition","volume":"24","author":"Ma","year":"2018","journal-title":"Signal Processing: Image Communication"},
{"issue":"2","key":"10.3233\/AIC-220188_ref30","doi-asserted-by":"publisher","first-page":"111","DOI":"10.3233\/AIC-210172","article-title":"Explaining transformer-based image captioning models: An empirical analysis","volume":"35","author":"Marcella","year":"2022","journal-title":"AI Communications"},
{"key":"10.3233\/AIC-220188_ref31","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.590"},
{"issue":"3","key":"10.3233\/AIC-220188_ref32","doi-asserted-by":"publisher","first-page":"187","DOI":"10.3233\/AIC-210250","article-title":"Channel attention and multi-scale graph neural networks for skeleton-based action recognition","volume":"35","author":"Ronghao","year":"2022","journal-title":"AI Communications"},
{"key":"10.3233\/AIC-220188_ref33","unstructured":"K.\u00a0Simonyan and A.\u00a0Zisserman, Two-stream convolutional networks for action recognition in videos, in: Proceedings of Advances in Neural Information Processing Systems (NIPS 2014), Montreal, QC, Canada, 8\u201313 December, 2014, pp.\u00a0568\u2013576."},
{"issue":"9","key":"10.3233\/AIC-220188_ref34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1111\/exsy.12805","article-title":"A sparse coded composite descriptor for human activity recognition","volume":"8","author":"Singh","year":"2021","journal-title":"Expert Systems"},
{"key":"10.3233\/AIC-220188_ref36","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00151"},
{"key":"10.3233\/AIC-220188_ref37","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3183112"},
{"key":"10.3233\/AIC-220188_ref38","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.510"},
{"issue":"6","key":"10.3233\/AIC-220188_ref39","doi-asserted-by":"publisher","first-page":"1510","DOI":"10.1109\/TPAMI.2017.2712608","article-title":"Long-term temporal convolutions for action recognition","volume":"40","author":"Varol","year":"2018","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},
{"key":"10.3233\/AIC-220188_ref40","doi-asserted-by":"publisher","first-page":"1595","DOI":"10.1007\/s00371-018-1560-4","article-title":"A unified model for human activity recognition using spatial distribution of gradients and difference of Gaussian kernel","volume":"35","author":"Vishwakarma","year":"2019","journal-title":"The Visual Computer"},
{"key":"10.3233\/AIC-220188_ref41","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.441"},
{"key":"10.3233\/AIC-220188_ref42","doi-asserted-by":"crossref","unstructured":"L.\u00a0Wang, Z.\u00a0Tong, B.\u00a0Ji and G.\u00a0Wu, TDN: Temporal difference networks for efficient action recognition, in: 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 19\u201325 June, 2021. https:\/\/arxiv.org\/abs\/2012.10071.","DOI":"10.1109\/CVPR46437.2021.00193"},
{"key":"10.3233\/AIC-220188_ref43","doi-asserted-by":"crossref","unstructured":"L.\u00a0Wang, Y.\u00a0Xiong, Z.\u00a0Wang, Y.\u00a0Qiao, D.\u00a0Lin, X.\u00a0Tang and L.\u00a0Van Gool, Temporal segment networks: Towards good practices for deep action recognition, in: European Conference on Computer Vision (ECCV 2016), Amsterdam, Netherlands, 8\u201316 October, 2016. https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-319-46484-8_2.pdf.","DOI":"10.1007\/978-3-319-46484-8_2"},
{"issue":"7","key":"10.3233\/AIC-220188_ref44","doi-asserted-by":"publisher","first-page":"3436","DOI":"10.1109\/TPAMI.2021.3054886","article-title":"Event-stream representation for human gaits identification using deep neural networks","volume":"44","author":"Wang","year":"2022","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"}],
"container-title":["AI Communications"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/AIC-220188","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,10]],"date-time":"2025-03-10T14:04:35Z","timestamp":1741615475000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/AIC-220188"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,21]]},"references-count":42,"journal-issue":{"issue":"3"},"URL":"https:\/\/doi.org\/10.3233\/aic-220188","relation":{},"ISSN":["1875-8452","0921-7126"],"issn-type":[{"value":"1875-8452","type":"electronic"},{"value":"0921-7126","type":"print"}],"subject":[],"published":{"date-parts":[[2023,8,21]]}}}