{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,1,4]],"date-time":"2025-01-04T05:27:34Z","timestamp":1735968454312,"version":"3.32.0"},"reference-count":49,"publisher":"Wiley","issue":"4","license":[{"start":{"date-parts":[[2022,12,1]],"date-time":"2022-12-01T00:00:00Z","timestamp":1669852800000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61772209"],"award-info":[{"award-number":["61772209"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012245","name":"Science and Technology Planning Project of Guangdong Province","doi-asserted-by":"publisher","award":["2019A050510034","2019B020219001"],"award-info":[{"award-number":["2019A050510034","2019B020219001"]}],"id":[{"id":"10.13039\/501100012245","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Quant. Biol."],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:sec><jats:title>Background<\/jats:title><jats:p>Cows actions are important factors of cows health and their well\u2010being. By monitoring the individual cows actions, we prevent cows diseases and realize modern precision cows rearing. However, traditional cows actions monitoring is usually conducted through video recording or direct visual observation, which is time\u2010consuming and laborious, and often lead to misjudgement due to the subjective consciousness or negligence.<\/jats:p><\/jats:sec><jats:sec><jats:title>Methods<\/jats:title><jats:p>This paper proposes a method of cows actions recognition based on tracked trajectories to automatically recognize and evaluate the actions of cows. First, we construct a dataset including 60 videos to describe the popular actions existing in the daily life of cows, providing the basic data for designing our actions recognition method. Second, eight famous trackers are used to track and obtain temporal and spatial information of targets. 
Third, after studying and analysing the tracked trajectories of different cow actions, we design a rigorous and effective constraint method to realize action recognition.<\/jats:p><\/jats:sec><jats:sec><jats:title>Results<\/jats:title><jats:p>Extensive experiments demonstrate that our action recognition method performs favourably in detecting the actions of cows, and the proposed dataset largely satisfies farmers' needs for action evaluation.<\/jats:p><\/jats:sec><jats:sec><jats:title>Conclusion<\/jats:title><jats:p>The proposed tracking\u2010guided action recognition provides a feasible way to maintain and promote cow health and welfare.<\/jats:p><\/jats:sec>","DOI":"10.15302\/j-qb-022-0291","type":"journal-article","created":{"date-parts":[[2023,3,22]],"date-time":"2023-03-22T08:29:26Z","timestamp":1679473766000},"page":"351-365","source":"Crossref","is-referenced-by-count":0,"title":["Tracking guided actions recognition for cows"],"prefix":"10.1002","volume":"10","author":[{"given":"Yun","family":"Liang","sequence":"first","affiliation":[{"name":"Guangzhou Key Laboratory of Intelligent Agriculture College of Mathematics and Informatics South China Agricultural University Guangzhou 510642 China"}]},{"given":"Xiaoming","family":"Chen","sequence":"additional","affiliation":[{"name":"Guangzhou Key Laboratory of Intelligent Agriculture College of Mathematics and Informatics South China Agricultural University Guangzhou 510642 China"}]}],"member":"311","published-online":{"date-parts":[[2022,12]]},"reference":[{"key":"e_1_2_8_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2018.2803740"},{"key":"e_1_2_8_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0301\u20106226(00)00234\u20107"},{"key":"e_1_2_8_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0168\u20101591(00)00175\u20101"},{"key":"e_1_2_8_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/0168-1591(94)90148-1"},{"key":"e_1_2_8_6_2","doi-asserted-by":"publisher","DOI":"10.3168\/jds.S0022\u20100302(02)74304\u2010X"},{"key":"e_1_2_8_7_2","doi-asserted-by":"publisher","DOI":"10.2527\/jas.2012\u20105554"},{"key":"e_1_2_8_8_2","doi-asserted-by":"publisher","DOI":"10.1017\/S1751731115000890"},{"key":"e_1_2_8_9_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.theriogenology.2004.07.007"},{"key":"e_1_2_8_10_2","doi-asserted-by":"publisher","DOI":"10.3168\/jds.S0022\u20100302(77)83859\u20109"},{"key":"e_1_2_8_11_2","doi-asserted-by":"crossref","unstructured":"Hou R. Chen C.(2017)Tube convolutional neural network (T\u2010CNN) for action detection in videos. In:Proceedings of the IEEE International Conference on Computer Vision (ICCV) pp.5823\u20135832","DOI":"10.1109\/ICCV.2017.620"},{"key":"e_1_2_8_12_2","doi-asserted-by":"crossref","unstructured":"Kuehne H. Jhuang H. Garrote E. Poggio T.(2011)Hmdb: a large video database for human motion recognition. In:Proceedings of the IEEE international conference on computer vision (ICCV) pp.2556\u20132563","DOI":"10.1109\/ICCV.2011.6126543"},{"key":"e_1_2_8_13_2","unstructured":"Soomro K. Zamir A. R.(2012)Ucf101: a dataset of 101 human actions classes from videos in the wild. arXiv 1212.0402"},{"key":"e_1_2_8_14_2","doi-asserted-by":"crossref","unstructured":"Liu M. Meng F. Chen C.(2019).Joint dynamic pose image and space time reversal for human action recognition from videos.In:Proceedings of the 33rd AAAI Conference on Artificial Intelligence pp.8762\u20138769","DOI":"10.1609\/aaai.v33i01.33018762"},{"key":"e_1_2_8_15_2","unstructured":"Kristan M. 
Pflugfelder R.(2015).The visual object tracking VOT2015 challenge results.In:Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW) pp.564\u2013586"},{"key":"e_1_2_8_16_2","doi-asserted-by":"publisher","DOI":"10.3390\/s18113979"},{"key":"e_1_2_8_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2388226"},{"key":"e_1_2_8_18_2","doi-asserted-by":"crossref","unstructured":"Henriques J. F. Caseiro R. Martins P.(2012).Exploiting the circulant structure of tracking\u2010by\u2010detection with kernels.In:Proceedings of the 12th European conference on Computer Vision (ECCV) pp.702\u2013715","DOI":"10.1007\/978-3-642-33765-9_50"},{"key":"e_1_2_8_19_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0149\u20107634(88)80004\u20106"},{"key":"e_1_2_8_20_2","doi-asserted-by":"crossref","unstructured":"Norouzzadeh M. S. Nguyen A. Kosmala M. Swanson A. Palmer M. S. Packer C.(2018).Automatically identifying counting and describing wild animals in camera\u2010trap images with deep learning.In:Proceedings of the National Academy of Science of the United States of America pp.E5716\u2013E5725","DOI":"10.1073\/pnas.1719367115"},{"key":"e_1_2_8_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2016.2642981"},{"key":"e_1_2_8_22_2","first-page":"5167","article-title":"PoseTrack: A Benchmark for Human Pose Estimation and Tracking.","author":"Andriluka M.","year":"2018","journal-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_2_8_23_2","first-page":"3686","article-title":"2D human pose estimation: New benchmark and state of the art analysis.","author":"Andriluka M.","year":"2014","journal-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_2_8_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.219"},{"key":"e_1_2_8_25_2","doi-asserted-by":"crossref","unstructured":"Gilani S. O. Subramanian R. Yan Y. Melcher D. Sebe N.(2015).PET: An eye\u2010tracking dataset for animal\u2010centric Pascal object classes.In:Proceeding of the IEEE International Conference on Multimedia and Expo (ICME) pp.1\u20136","DOI":"10.1109\/ICME.2015.7177450"},{"key":"e_1_2_8_26_2","doi-asserted-by":"publisher","DOI":"10.3390\/s17030433"},{"key":"e_1_2_8_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2345390"},{"key":"e_1_2_8_28_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-017-5533-9"},{"key":"e_1_2_8_29_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-017-5538-4"},{"key":"e_1_2_8_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2889070"},{"key":"e_1_2_8_31_2","first-page":"4904","article-title":"Learning spatial\u2010temporal regularized correlation filters for visual tracking.","author":"Li F.","year":"2018","journal-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_2_8_32_2","doi-asserted-by":"crossref","unstructured":"Zhang K. 
Zhang L.Yang M.(2012).Real\u2010time compressive tracking.In:Proceedings of the 12th European conference on Computer Vision (ECCV) pp.864\u2013877","DOI":"10.1007\/978-3-642-33712-3_62"},{"key":"e_1_2_8_33_2","doi-asserted-by":"crossref","unstructured":"Sevilla\u2010Lara L.(2012).Distribution fields for tracking.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.1910\u20131917","DOI":"10.1109\/CVPR.2012.6247891"},{"key":"e_1_2_8_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2014.2345390"},{"key":"e_1_2_8_35_2","doi-asserted-by":"crossref","unstructured":"Li Y. Zhu J.Hoi S. C.(2015).Reliable Patch Trackers: Robust visual tracking by exploiting reliable patches.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.353\u2013361","DOI":"10.1109\/CVPR.2015.7298632"},{"key":"e_1_2_8_36_2","doi-asserted-by":"crossref","unstructured":"Galoogahi H. K. Fagg A.(2017).Learning backgroundaware correlation filters for visual tracking.In:Proceedings of the IEEE International Conference on Computer Vision (ICCV) pp.1144\u20131152","DOI":"10.1109\/ICCV.2017.129"},{"key":"e_1_2_8_37_2","first-page":"850","article-title":"Fully\u2010convolutional siamese networks for object tracking.","author":"Bertinetto L.","year":"2016","journal-title":"European Conference on Computer Vision"},{"key":"e_1_2_8_38_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.279"},{"key":"e_1_2_8_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2016.2605305"},{"key":"e_1_2_8_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2009.11.014"},{"key":"e_1_2_8_41_2","doi-asserted-by":"crossref","unstructured":"Dollar P. Rabaud V.(2005).Behavior recognition via sparse spatio\u2010temporal features.In:Proceedings of the IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance pp.65\u201372","DOI":"10.1109\/VSPETS.2005.1570899"},{"key":"e_1_2_8_42_2","unstructured":"Yin L. Wei X.(2006).A 3D facial expression database for facial behavior research.In:Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition pp.211\u2013216"},{"key":"e_1_2_8_43_2","doi-asserted-by":"crossref","unstructured":"Tanfous A. B. Drira H.Amor B.(2018).Coding kendall\u2019s shape trajectories for 3D action recognition.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.2840\u20132849","DOI":"10.1109\/CVPR.2018.00300"},{"key":"e_1_2_8_44_2","doi-asserted-by":"crossref","unstructured":"Tang Y. Tian Y. Lu J. Li P.(2018).Deep progressive reinforcement learning for skeleton\u2010based action recognition.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.5323\u20135332","DOI":"10.1109\/CVPR.2018.00558"},{"key":"e_1_2_8_45_2","doi-asserted-by":"crossref","unstructured":"Gallego G. Rebecq H.(2018).A unifying contrast maximization framework for event cameras with applications to motion depth and optical flow estimation.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.3867\u20133876","DOI":"10.1109\/CVPR.2018.00407"},{"key":"e_1_2_8_46_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2014.04.018"},{"key":"e_1_2_8_47_2","doi-asserted-by":"crossref","unstructured":"Zhang T. Zhang Y. Cai J. Kot A.(2016)Efficient object feature selection for action recognition. 
In:Proceeding of the IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP) pp.2707\u20132711","DOI":"10.1109\/ICASSP.2016.7472169"},{"key":"e_1_2_8_48_2","doi-asserted-by":"crossref","unstructured":"Feichtenhofer C. Pinz A.Wildes R.(2017).Spatiotemporal multiplier networks for video action recognition.In:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp.7445\u20137454","DOI":"10.1109\/CVPR.2017.787"},{"key":"e_1_2_8_49_2","first-page":"350","article-title":"Detect\u2010and\u2010track: Efficient pose estimation in videos.","author":"Girdhar R.","year":"2018","journal-title":"The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_2_8_50_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-017-5251-3"}],"container-title":["Quantitative Biology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.15302\/J-QB-022-0291","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,3]],"date-time":"2025-01-03T09:54:07Z","timestamp":1735898047000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.15302\/J-QB-022-0291"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12]]},"references-count":49,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["10.15302\/J-QB-022-0291"],"URL":"https:\/\/doi.org\/10.15302\/j-qb-022-0291","archive":["Portico"],"relation":{},"ISSN":["2095-4689","2095-4697"],"issn-type":[{"type":"print","value":"2095-4689"},{"type":"electronic","value":"2095-4697"}],"subject":[],"published":{"date-parts":[[2022,12]]}}}