{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T02:33:09Z","timestamp":1777429989319,"version":"3.51.4"},"reference-count":39,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2021,12,15]],"date-time":"2021-12-15T00:00:00Z","timestamp":1639526400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Enli Lv","award":["31901736"],"award-info":[{"award-number":["31901736"]}]},{"name":"Enli Lv","award":["31971806"],"award-info":[{"award-number":["31971806"]}]},{"name":"Enli Lv","award":["2019B020225001"],"award-info":[{"award-number":["2019B020225001"]}]},{"name":"Enli Lv","award":["YDWS1904"],"award-info":[{"award-number":["YDWS1904"]}]},{"name":"Huazhong Lu","award":["2018YFD0701002"],"award-info":[{"award-number":["2018YFD0701002"]}]},{"name":"enli Lv","award":["2019KJ101"],"award-info":[{"award-number":["2019KJ101"]}]},{"name":"Zhixiong Zeng","award":["2018A0123"],"award-info":[{"award-number":["2018A0123"]}]},{"name":"Zhixiong Zeng","award":["HXKJHT2020058"],"award-info":[{"award-number":["HXKJHT2020058"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In this paper, a lightweight channel-wise attention model is proposed for the real-time detection of five representative pig postures: standing, lying on the belly, lying on the side, sitting, and mounting. 
An optimized compressed block with symmetrical structure is proposed based on model structure and parameter statistics, and the efficient channel attention modules are considered as a channel-wise mechanism to improve the model architecture. The results show that the algorithm\u2019s average precision in detecting standing, lying on the belly, lying on the side, sitting, and mounting is 97.7%, 95.2%, 95.7%, 87.5%, and 84.1%, respectively, and the speed of inference is around 63 ms (CPU = i7, RAM = 8G) per posture image. Compared with state-of-the-art models (ResNet50, Darknet53, CSPDarknet53, MobileNetV3-Large, and MobileNetV3-Small), the proposed model has fewer model parameters and lower computational complexity. The statistical results of the postures (with continuous 24 h monitoring) show that some pigs will eat in the early morning, and the peak of the pig\u2019s feeding appears after the input of new feed, which reflects the health of the pig herd for farmers.<\/jats:p>","DOI":"10.3390\/s21248369","type":"journal-article","created":{"date-parts":[[2021,12,15]],"date-time":"2021-12-15T21:47:36Z","timestamp":1639604856000},"page":"8369","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":22,"title":["Posture Detection of Individual Pigs Based on Lightweight Convolution Neural Networks and Efficient Channel-Wise Attention"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8362-5481","authenticated-orcid":false,"given":"Yizhi","family":"Luo","sequence":"first","affiliation":[{"name":"College of Engineering, South China Agricultural University, Guangzhou 510642, China"}]},{"given":"Zhixiong","family":"Zeng","sequence":"additional","affiliation":[{"name":"College of Engineering, South China Agricultural University, Guangzhou 510642, China"}]},{"given":"Huazhong","family":"Lu","sequence":"additional","affiliation":[{"name":"Guangdong Academy of Agricultural Sciences, 
Guangzhou 510640, China"}]},{"given":"Enli","family":"Lv","sequence":"additional","affiliation":[{"name":"College of Engineering, South China Agricultural University, Guangzhou 510642, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,12,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"104976","DOI":"10.1016\/j.applanim.2020.104976","article-title":"Characterization of the lying and rising sequence in lame and non-lame sows","volume":"226","author":"Mumm","year":"2020","journal-title":"Appl. Anim. Behav. Sci."},{"key":"ref_2","first-page":"10","article-title":"Measuring animal emotions-and why it matters","volume":"37","author":"Neethirajan","year":"2021","journal-title":"Pig Progr."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"660565","DOI":"10.3389\/fvets.2021.660565","article-title":"A systematic review on validated Precision Livestock Farming technologies for pig production and its potential to assess animal welfare","volume":"8","author":"Stygar","year":"2021","journal-title":"Front. Vet. Sci."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"475","DOI":"10.1016\/j.compag.2018.12.009","article-title":"Automatic scoring of lateral and sternal lying posture in grouped pigs using image processing and Support Vector Machine","volume":"156","author":"Nasirahmadi","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"104530","DOI":"10.1016\/j.livsci.2021.104530","article-title":"An overview of the current trends in precision pig farming technologies","volume":"249","author":"Tzanidakis","year":"2021","journal-title":"Livest. 
Sci."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"100103","DOI":"10.1016\/j.animal.2020.100103","article-title":"Human\u2013animal relationship influences husbandry practices, animal welfare and productivity in pig farming","volume":"15","author":"Pol","year":"2021","journal-title":"Animal"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"521","DOI":"10.1016\/j.compag.2016.07.017","article-title":"Porcine lie detectors: Automatic quantification of posture state and transitions in sows using inertial sensors","volume":"127","author":"Thompson","year":"2016","journal-title":"Comput. Electron. Agric."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"106030","DOI":"10.1016\/j.compag.2021.106030","article-title":"A computer vision approach based on deep learning for the detection of dairy cows in free stall barn","volume":"182","author":"Tassinari","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"59","DOI":"10.1016\/j.biosystemseng.2018.08.011","article-title":"Recognising blueberry fruit of different maturity using histogram oriented gradients and colour features in outdoor scenes","volume":"176","author":"Tan","year":"2018","journal-title":"Biosystems. Eng."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kim, D.W., Yun, H., Jeong, S.J., Kwon, Y.S., Kim, S.G., Lee, W., and Kim, H.J. (2018). Modeling and Testing of Growth Status for Chinese Cabbage and White Radish with UAV-Based RGB Imagery. Remote Sens., 10.","DOI":"10.3390\/rs10040563"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"100001","DOI":"10.1016\/j.atech.2021.100001","article-title":"Strawberry Maturity Classification from UAV and Near-Ground Imaging Using Deep Learning","volume":"1","author":"Zhou","year":"2021","journal-title":"Smart. Agric. 
Technol."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"106490","DOI":"10.1016\/j.compag.2021.106490","article-title":"Stereovision-based ridge-furrow detection and tracking for auto-guided cultivator","volume":"191","author":"Yun","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"105937","DOI":"10.1016\/j.compag.2020.105937","article-title":"Stereo-vision-based crop height estimation for agricultural robots","volume":"181","author":"Kim","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Woo, S., Uyeh, D.D., Kim, J., Kim, Y., Kang, S., Kim, K.C., Lee, S.Y., Ha, Y., and Lee, W.S. (2020). Analyses of Work Efficiency of a Strawberry-Harvesting Robot in an Automated Greenhouse. Agronomy, 10.","DOI":"10.3390\/agronomy10111751"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"105672","DOI":"10.1016\/j.compag.2020.105672","article-title":"A review of computer vision technologies for plant phenotyping","volume":"176","author":"Li","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.compag.2018.01.023","article-title":"Automatic recognition of lactating sow postures from depth images by deep learning detector","volume":"147","author":"Zheng","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"105391","DOI":"10.1016\/j.compag.2020.105391","article-title":"Automatically detecting pig position and posture by 2D camera imaging and deep learning","volume":"174","author":"Riekert","year":"2020","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"116","DOI":"10.1016\/j.biosystemseng.2019.11.013","article-title":"Automatic recognition of lactating sow postures by refined two-stream RGB-D faster R-CNN","volume":"189","author":"Zhu","year":"2020","journal-title":"Biosyst. Eng."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"272","DOI":"10.1016\/j.compag.2018.03.032","article-title":"A comparative study of fine-tuning deep learning models for plant disease identification","volume":"161","author":"Too","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_20","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv, Available online: https:\/\/arxiv.org\/abs\/1704.04861."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018, January 18\u201322). Mobilenetv2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., and Vasudevan, V. (2019, January 27\u201328). Searching for mobilenetv3. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea.","DOI":"10.1109\/ICCV.2019.00140"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., Lin, M., and Sun, J. (2018, January 18\u201323). Shufflenet: An extremely efficient convolutional neural network for mobile devices. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00716"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ma, N., Zhang, X., Zheng, H., and Sun, J. (2018, January 8\u201314). Shufflenet v2: Practical guidelines for efficient cnn architecture design. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"105735","DOI":"10.1016\/j.compag.2020.105735","article-title":"Grape disease image classification based on lightweight convolution neural networks and channel-wise attention","volume":"178","author":"Tang","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Seo, J., Ahn, H., Kim, D., Lee, S., Chung, Y., and Park, D. (2020). EmbeddedPigDet\u2014Fast and Accurate Pig Detection for Embedded Board Implementations. Appl. Sci., 10.","DOI":"10.3390\/app10082878"},{"key":"ref_27","unstructured":"Wada, K. (2020, June 08). Labelme: Image Polygonal Annotation with Python. Available online: https:\/\/github.com\/wkentaro\/labelme."},{"key":"ref_28","first-page":"81","article-title":"Automatic posture detection of pigs on real-time using Yolo framework","volume":"5","author":"Sivamani","year":"2020","journal-title":"Int. J. Res. Trends Innov."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., and Hu, Q. (2020, January 13\u201319). ECA-Net: Efficient channel attention for deep convolutional neural networks. Proceedings of the CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01155"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018, January 18\u201322). Path aggregation network for instance segmentation. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_32","first-page":"arXiv-1803","article-title":"Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks","volume":"11070","author":"Navab","year":"2018","journal-title":"arXiv"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Hou, Q., Zhou, D., and Feng, J. (2021, January 21\u201325). Coordinate Attention for Efficient Mobile Network Design. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual.","DOI":"10.1109\/CVPR46437.2021.01350"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Paisitkriangkrai, S., Shen, C., and Van Den Hengel, A. (2013, January 1\u20138). Efficient Pedestrian Detection by Directly Optimizing the Partial Area under the ROC Curve. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia.","DOI":"10.1109\/ICCV.2013.135"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"106255","DOI":"10.1016\/j.compag.2021.106255","article-title":"Behaviour recognition of pigs and cattle: Journey from computer vision to deep learning","volume":"187","author":"Chen","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Mekhalfi, M.L., Nicol\u00f2, C., Bazi, Y., Al Rahhal, M.M., Alsharif, N.A., and Al Maghayreh, E. (2021). Contrasting YOLOv5, Transformer, and EfficientDet Detectors for Crop Circle Detection in Desert. IEEE Geosci. Remote Sens. Lett., 1\u20135. 
Available online: https:\/\/ieeexplore.ieee.org\/abstract\/document\/9453822.","DOI":"10.1109\/LGRS.2021.3085139"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"721","DOI":"10.1007\/s42835-020-00343-7","article-title":"Hyperparameter optimization using a genetic algorithm considering verification time in a convolutional neural network","volume":"15","author":"Han","year":"2020","journal-title":"J. Electr. Eng. Technol."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"227","DOI":"10.1016\/j.biosystemseng.2020.04.005","article-title":"Automatic posture change analysis of lactating sows by action localisation and tube optimisation from untrimmed depth videos","volume":"194","author":"Zheng","year":"2018","journal-title":"Biosystems. Eng."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Shao, H., Pu, J., and Mu, J. (2021). Pig-Posture Recognition Based on Computer Vision: Dataset and Exploration. Animals, 11.","DOI":"10.3390\/ani11051295"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/24\/8369\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:48:57Z","timestamp":1760168937000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/24\/8369"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,12,15]]},"references-count":39,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2021,12]]}},"alternative-id":["s21248369"],"URL":"https:\/\/doi.org\/10.3390\/s21248369","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,12,15]]}}}