{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T23:05:27Z","timestamp":1773788727983,"version":"3.50.1"},"reference-count":36,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2020,2,17]],"date-time":"2020-02-17T00:00:00Z","timestamp":1581897600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100012167","name":"National Science and Technology Infrastructure Program","doi-asserted-by":"publisher","award":["4444-10099609"],"award-info":[{"award-number":["4444-10099609"]}],"id":[{"id":"10.13039\/501100012167","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The detection of pig behavior helps detect abnormal conditions such as diseases and dangerous movements in a timely and effective manner, which plays an important role in ensuring the health and well-being of pigs. Monitoring pig behavior by staff is time consuming, subjective, and impractical. Therefore, there is an urgent need to implement methods for identifying pig behavior automatically. In recent years, deep learning has been gradually applied to the study of pig behavior recognition. Existing studies judge the behavior of the pig only based on the posture of the pig in a still image frame, without considering the motion information of the behavior. However, optical flow can well reflect the motion information. Thus, this study took image frames and optical flow from videos as two-stream input objects to fully extract the temporal and spatial behavioral characteristics. 
Two-stream convolutional network models based on deep learning were proposed, including inflated 3D convnet (I3D) and temporal segment networks (TSN) whose feature extraction network is Residual Network (ResNet) or the Inception architecture (e.g., Inception with Batch Normalization (BN-Inception), InceptionV3, InceptionV4, or InceptionResNetV2), to achieve pig behavior recognition. A standard pig video behavior dataset was created that included 1000 videos covering five different behavioral actions of pigs under natural conditions: feeding, lying, walking, scratching, and mounting. The dataset was used to train and test the proposed models, and a series of comparative experiments was conducted. The experimental results showed that the TSN model whose feature extraction network was ResNet101 was able to recognize pig feeding, lying, walking, scratching, and mounting behaviors with a higher average accuracy of 98.99%, and the average recognition time of each video was 0.3163 s. The TSN model (ResNet101) is superior to the other models in solving the task of pig behavior recognition.<\/jats:p>","DOI":"10.3390\/s20041085","type":"journal-article","created":{"date-parts":[[2020,2,20]],"date-time":"2020-02-20T03:20:03Z","timestamp":1582168803000},"page":"1085","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":50,"title":["Automated Video Behavior Recognition of Pigs Using Two-Stream Convolutional Networks"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6920-3787","authenticated-orcid":false,"given":"Kaifeng","family":"Zhang","sequence":"first","affiliation":[{"name":"College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3656-1358","authenticated-orcid":false,"given":"Dan","family":"Li","sequence":"additional","affiliation":[{"name":"College of Information and Electrical 
Engineering, China Agricultural University, Beijing 100083, China"}]},{"given":"Jiayun","family":"Huang","sequence":"additional","affiliation":[{"name":"College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China"}]},{"given":"Yifei","family":"Chen","sequence":"additional","affiliation":[{"name":"College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China"}]}],"member":"1968","published-online":{"date-parts":[[2020,2,17]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"25","DOI":"10.1016\/j.livsci.2017.05.014","article-title":"Implementation of machine vision for detecting behaviour of cattle and pigs","volume":"202","author":"Nasirahmadi","year":"2017","journal-title":"Livest. Sci."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.compag.2018.01.023","article-title":"Automatic recognition of lactating sow postures from depth images by deep learning detector","volume":"147","author":"Zheng","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1017\/S1751731114002213","article-title":"Monitoring of behavior using a video-recording system for recognition of Salmonella infection in experimentally infected growing pigs","volume":"9","author":"Ahmed","year":"2015","journal-title":"Animal"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"210","DOI":"10.1016\/j.livsci.2015.09.003","article-title":"Effects of clinical lameness and tail biting lesions on voluntary feed intake in growing pigs","volume":"181","author":"Munsterhjelm","year":"2015","journal-title":"Livest. Sci."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1016\/S0149-7634(88)80004-6","article-title":"Biological basis of the behavior of sick animals","volume":"12","author":"Hart","year":"1988","journal-title":"Neurosci. Biobehav. 
R."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1016\/S0147-9571(99)00016-8","article-title":"Experimental model of enterotoxigenic Escherichia coli infection in pigs: Potential for an early recognition of colibacillosis by monitoring of behaviour","volume":"22","author":"Krsnik","year":"1999","journal-title":"Comp. Immunol. Microbiol. Infect. Dis."},{"key":"ref_7","first-page":"109","article-title":"Aggressive and sexual behaviour of growing and finishing pigs reared in groups, without castration","volume":"56","author":"Rydhmer","year":"2006","journal-title":"Acta Agric. Scand. Sect. Anim. Sci."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1016\/j.compag.2016.04.022","article-title":"Automatic detection of mounting behaviours among pigs using image analysis","volume":"124","author":"Nasirahmadi","year":"2016","journal-title":"Comput. Electron. Agric."},{"key":"ref_9","unstructured":"Rydhmer, L., Zamaratskaia, G., Andersson, H.K., Algers, B., and Lundstr\u00f6m, K. (2004, January 5\u20139). Problems with aggressive and sexual behaviour when rearing entire male pigs. Proceedings of the 55th Annual Meeting of the European Association for Animal Production, Bled, Slovenia."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Li, D., Chen, Y., Zhang, K., and Li, Z. (2019). Mounting Behaviour Recognition for Pigs Based on Deep Learning. Sensors, 19.","DOI":"10.3390\/s19224924"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"334","DOI":"10.1016\/j.vetpar.2006.04.001","article-title":"Comparison of scratching behaviour of growing pigs with sarcoptic mange before and after treatment, employing two distinct approaches","volume":"140","author":"Loewenstein","year":"2006","journal-title":"Vet. 
Parasitol."},{"key":"ref_12","first-page":"65","article-title":"Investigation of parasitic diseases in some large-scale pig farms in Fujian Province","volume":"03","author":"Jiang","year":"2010","journal-title":"Pig Rais."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1016\/j.tvjl.2016.09.005","article-title":"Early detection of health and welfare compromises through automated detection of behavioural changes in pigs","volume":"217","author":"Matthews","year":"2016","journal-title":"Vet. J."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"2821","DOI":"10.2527\/2000.78112821x","article-title":"Relationships between human-animal interactions and productivity of commercial dairy cows","volume":"78","author":"Hemsworth","year":"2000","journal-title":"J. Anim. Sci."},{"key":"ref_15","first-page":"59","article-title":"Research Advance on Computer Vision in Behavioral Analysis of Pigs","volume":"21","author":"Li","year":"2019","journal-title":"J. Agric. Sci. Tech. China"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Nasirahmadi, A., Sturm, B., Edwards, S., Jeppsson, K.-H., Olsson, A.-C., M\u00fcller, S., and Hensel, O. (2019). Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors, 19.","DOI":"10.3390\/s19173738"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1016\/j.compag.2014.03.010","article-title":"Image feature extraction for classification of aggressive interactions among pigs","volume":"104","author":"Viazzi","year":"2014","journal-title":"Comput. Electron. Agric."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1016\/j.livsci.2013.11.007","article-title":"Automatic monitoring of pig locomotion using image analysis","volume":"159","author":"Kashiha","year":"2014","journal-title":"Livest. 
Sci."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"164","DOI":"10.1016\/j.compag.2012.09.015","article-title":"The automatic monitoring of pigs water use by cameras","volume":"90","author":"Kashiha","year":"2013","journal-title":"Comput. Electron. Agric."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"56","DOI":"10.1016\/j.compag.2016.04.026","article-title":"Automatic recognition of lactating sow behaviors through depth image processing","volume":"125","author":"Lao","year":"2016","journal-title":"Comp. Electron. Agric."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1016\/j.anbehav.2016.12.005","article-title":"Applications of machine learning in animal behaviour studies","volume":"124","author":"Valletta","year":"2017","journal-title":"Anim. Behav."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"453","DOI":"10.1016\/j.compag.2018.11.002","article-title":"Feeding behavior recognition for group-housed pigs with the Faster R-CNN","volume":"155","author":"Yang","year":"2018","journal-title":"Comp. Electron. Agric."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1016\/j.biosystemseng.2018.09.011","article-title":"Automatic recognition of sow nursing behaviour using deep learning-based segmentation and spatial and temporal features","volume":"175","author":"Yang","year":"2018","journal-title":"Biosyst. Eng."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"104884","DOI":"10.1016\/j.compag.2019.104884","article-title":"Real-time sow behavior detection based on deep learning","volume":"163","author":"Zhang","year":"2019","journal-title":"Comp. Electron. Agric."},{"key":"ref_25","unstructured":"Simonyan, K., and Zisserman, A. (2014, January 7\u201312). Two-stream convolutional networks for action recognition in videos. 
Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Carreira, J., and Zisserman, A. (2017, January 21\u201326). Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.502"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., and Gool, L.V. (2016, January 8\u201316). Temporal Segment Networks: Towards Good Practices for Deep Action Recognition. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"ref_28","unstructured":"Zach, C., Pock, T., and Bischof, H. (2007, January 12\u201314). A duality based approach for realtime tv-L1 optical flow. Proceedings of the 29th DAGM Symposium on Pattern Recognition, Heidelberg, Germany."},{"key":"ref_29","unstructured":"Ioffe, S., and Szegedy, C. (2015, January 6\u201311). Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., and Shlens, J. (2016, January 27\u201330). Rethinking the inception architecture for computer vision. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2016, January 27\u201330). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"221","DOI":"10.1109\/TPAMI.2012.59","article-title":"3D convolutional neural networks for human action recognition","volume":"35","author":"Ji","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L., Li, K., and Li, F. (2009, January 20\u201325). ImageNet: A large-scale hierarchical image database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_36","unstructured":"Sevilla-Lara, L., Liao, Y., Guney, F., Jampani, V., Geiger, A., and Black, M. (2018, January 18\u201322). On the integration of optical flow and action recognition. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/4\/1085\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T08:58:30Z","timestamp":1760173110000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/4\/1085"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,2,17]]},"references-count":36,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2020,2]]}},"alternative-id":["s20041085"],"URL":"https:\/\/doi.org\/10.3390\/s20041085","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,2,17]]}}}