{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T16:47:16Z","timestamp":1772642836515,"version":"3.50.1"},"reference-count":46,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2022,9,23]],"date-time":"2022-09-23T00:00:00Z","timestamp":1663891200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Central Universities of China","award":["2662022XXYJ009"],"award-info":[{"award-number":["2662022XXYJ009"]}]},{"name":"Central Universities of China","award":["SZYJY2022034"],"award-info":[{"award-number":["SZYJY2022034"]}]},{"name":"HZAU-AGIS Cooperation Fund","award":["2662022XXYJ009"],"award-info":[{"award-number":["2662022XXYJ009"]}]},{"name":"HZAU-AGIS Cooperation Fund","award":["SZYJY2022034"],"award-info":[{"award-number":["SZYJY2022034"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The body size of pigs is a vital evaluation indicator for growth monitoring and selective breeding. The detection of joint points is critical for accurately estimating pig body size. However, most joint point detection methods focus on improving detection accuracy while neglecting detection speed and model parameters. In this study, we propose an HRNet with Swin Transformer block (HRST) based on HRNet for detecting the joint points of pigs. It can improve model accuracy while significantly reducing model parameters by replacing the fourth stage of parameter redundancy in HRNet with a Swin Transformer block. Moreover, we implemented joint point detection for multiple pigs following two steps: first, CenterNet was used to detect pig posture (lying or standing); then, HRST was used for joint point detection for standing pigs. 
The results indicated that CenterNet achieved an average precision (AP) of 86.5%, and HRST achieved an AP of 77.4% and a real-time detection speed of 40 images per second. Compared with HRNet, the AP of HRST improved by 6.8%, while the number of model parameters and the computational cost were reduced by 72.8% and 41.7%, respectively. The study provides technical support for the accurate and rapid detection of pig joint points, which can be used for contact-free body size estimation of pigs.<\/jats:p>","DOI":"10.3390\/s22197215","type":"journal-article","created":{"date-parts":[[2022,9,26]],"date-time":"2022-09-26T03:34:17Z","timestamp":1664163257000},"page":"7215","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["HRST: An Improved HRNet for Detecting Joint Points of Pigs"],"prefix":"10.3390","volume":"22","author":[{"given":"Xiaopin","family":"Wang","sequence":"first","affiliation":[{"name":"Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China"},{"name":"Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China"}]},{"given":"Wei","family":"Wang","sequence":"additional","affiliation":[{"name":"Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China"}]},{"given":"Jisheng","family":"Lu","sequence":"additional","affiliation":[{"name":"Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China"}]},{"given":"Haiyan","family":"Wang","sequence":"additional","affiliation":[{"name":"Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of 
Informatics, Huazhong Agricultural University, Wuhan 430070, China"},{"name":"Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"399","DOI":"10.1016\/j.compag.2018.11.042","article-title":"Mobile measuring system based on LabVIEW for pig body components estimation in a large-scale farm","volume":"156","author":"Shi","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"10","DOI":"10.1016\/j.biosystemseng.2022.03.014","article-title":"Body size measurement and live body weight estimation for pigs based on back surface point clouds","volume":"218","author":"Li","year":"2022","journal-title":"Biosyst. Eng."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhang, J., Zhuang, Y., Ji, H., and Teng, G. (2021). Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method. Sensors, 21.","DOI":"10.3390\/s21093218"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"29","DOI":"10.1016\/j.compag.2018.03.003","article-title":"On-barn pig weight estimation based on body measurements by a Kinect v1 depth camera","volume":"148","author":"Pezzuolo","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1016\/j.compag.2018.07.033","article-title":"Algorithm of sheep body dimension measurement and its applications based on image analysis","volume":"153","author":"Zhang","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, W., Zhang, Y., He, J., Chen, Z., Li, D., Ma, C., Ba, Y., Baima, Q., Li, X., and Song, R. (2022). 
Research on Yak Body Ruler and Weight Measurement Method Based on Deep Learning and Binocular Vision. Math. Comput. Sci.","DOI":"10.20944\/preprints202112.0349.v2"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"102897","DOI":"10.1016\/j.cviu.2019.102897","article-title":"Monocular human pose estimation: A survey of deep learning-based methods","volume":"192","author":"Chen","year":"2020","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Cheng, B., Xiao, B., Wang, J., Shi, H., Huang, T.S., and Zhang, L. (2020, January 19). Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00543"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"012033","DOI":"10.1088\/1742-6596\/1544\/1\/012033","article-title":"Overview of two-stage object detection algorithms","volume":"1544","author":"Du","year":"2020","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_10","unstructured":"Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y.M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv."},{"key":"ref_11","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_12","unstructured":"Zhou, X., Wang, D., and Kr\u00e4henb\u00fchl, P. (2019). Objects as points. arXiv."},{"key":"ref_13","unstructured":"Tian, Z., Shen, C., Chen, H., and He, T. (November, January 27). Fcos: Fully convolutional one-stage object detection. Proceedings of the IEEE\/CVF International Conference On Computer Vision, Seoul, Korea."},{"key":"ref_14","first-page":"91","article-title":"Faster r-cnn: Towards real-time object detection with region proposal networks","volume":"28","author":"Ren","year":"2015","journal-title":"Adv. Neural Inf. 
Processing Syst."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 7\u201313). Fast r-cnn. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Ding, J., Xue, N., Long, Y., Xia, G.-S., and Lu, Q. (2019, January 20). Learning RoI transformer for oriented object detection in aerial images. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00296"},{"key":"ref_17","first-page":"1","article-title":"Anchor-free oriented proposal generator for object detection","volume":"60","author":"Cheng","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Chen, Y., Wang, Z., Peng, Y., Zhang, Z., Yu, G., and Sun, J. (2018, January 18\u201322). Cascaded pyramid network for multi-person pose estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00742"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Xiao, B., Wu, H., and Wei, Y. (2018, January 8\u201314). Simple baselines for human pose estimation and tracking. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01231-1_29"},{"key":"ref_20","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (July, January 26). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Sun, K., Xiao, B., Liu, D., and Wang, J. (2019, January 20). Deep high-resolution representation learning for human pose estimation. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00584"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"3349","DOI":"10.1109\/TPAMI.2020.2983686","article-title":"Deep high-resolution representation learning for visual recognition","volume":"43","author":"Wang","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Li, Y., Zhang, S., Wang, Z., Yang, S., Yang, W., Xia, S.-T., and Zhou, E. (2021, January 10\u201317). Tokenpose: Learning keypoint tokens for human pose estimation. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01112"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Yu, F., Wang, D., Shelhamer, E., and Darrell, T. (2018, January 18\u201322). Deep layer aggregation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00255"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"157","DOI":"10.1007\/s11263-007-0090-8","article-title":"LabelMe: A database and web-based tool for image annotation","volume":"77","author":"Russell","year":"2008","journal-title":"Int. J. Comput. Vis."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Ruggero Ronchi, M., and Perona, P. (2017, January 22\u201329). Benchmarking and error diagnosis in multi-instance pose estimation. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.48"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Li, S., Li, J., Tang, H., Qian, R., and Lin, W. (2019). ATRW: A benchmark for Amur tiger re-identification in the wild. 
arXiv, preprint.","DOI":"10.1145\/3394171.3413569"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"108254","DOI":"10.1016\/j.knosys.2022.108254","article-title":"Multi-expert learning for fusion of pedestrian detection bounding box","volume":"241","author":"Tang","year":"2022","journal-title":"Knowl.-Based Syst."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Hu, R., Tang, Z.-R., Wu, E.Q., Mo, Q., Yang, R., and Li, J. (2022). RDC-SAL: Refine distance compensating with quantum scale-aware learning for crowd counting and localization. Appl. Intell., 1\u201313.","DOI":"10.1007\/s10489-022-03238-4"},{"key":"ref_30","first-page":"179","article-title":"Construction of the animal skeletons keypoint detection model based on transformer and scale fusion","volume":"37","author":"Zhang","year":"2021","journal-title":"Trans. Chin. Soc. Agric. Eng."},{"key":"ref_31","unstructured":"Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., and Vasudevan, V. (November, January 27). Searching for mobilenetv3. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_32","unstructured":"Tan, M., and Le, Q. (2021, January 13\u201315). Efficientnetv2: Smaller models and faster training. Proceedings of the International Conference on Machine Learning, Pasadena, CA, USA."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022, January 19\u201324). A convnet for the 2020s. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_34","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. 
arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018, January 18\u201322). Mobilenetv2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_36","unstructured":"Tan, M., and Le, Q. (2019, January 2\u20134). Efficientnet: Rethinking model scaling for convolutional neural networks. Proceedings of the International Conference on Machine Learning, Taipei, Taiwan."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"57","DOI":"10.1007\/s00530-021-00795-5","article-title":"Scale-aware attention-based multi-resolution representation for multi-person pose estimation","volume":"28","author":"Yang","year":"2022","journal-title":"Multimed. Syst."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"106054","DOI":"10.1016\/j.compag.2021.106054","article-title":"A high-precision detection method of hydroponic lettuce seedlings status based on improved Faster RCNN","volume":"182","author":"Li","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Zhang, J., Lin, S., Ding, L., and Bruzzone, L. (2020). Multi-scale context aggregation for semantic segmentation of remote sensing images. Remote Sens., 12.","DOI":"10.3390\/rs12040701"},{"key":"ref_41","first-page":"237","article-title":"Human pose estimation based on parallel high-resolution net","volume":"43","author":"Liu","year":"2022","journal-title":"Comput. Eng. 
Des."},{"key":"ref_42","first-page":"6000","article-title":"Attention is all you need","volume":"30","author":"Vaswani","year":"2017","journal-title":"Adv. Neural Inf. Processing Syst."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_44","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Cha, J.-Y., Yoon, H.-I., Yeo, I.-S., Huh, K.-H., and Han, J.-S. (2021). Peri-implant bone loss measurement using a region-based convolutional neural network on dental periapical radiographs. J. Clin. Med., 10.","DOI":"10.3390\/jcm10051009"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. 
Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7215\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:38:03Z","timestamp":1760143083000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7215"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,23]]},"references-count":46,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["s22197215"],"URL":"https:\/\/doi.org\/10.3390\/s22197215","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,23]]}}}