{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,23]],"date-time":"2026-01-23T10:46:14Z","timestamp":1769165174416,"version":"3.49.0"},"reference-count":49,"publisher":"MDPI AG","issue":"11",
"license":[{"start":{"date-parts":[[2023,5,31]],"date-time":"2023-05-31T00:00:00Z","timestamp":1685491200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],
"funder":[{"name":"Anhui Province Science and Technology Major Special Project","award":["201903a06020009"],"award-info":[{"award-number":["201903a06020009"]}]},{"name":"Anhui Province Science and Technology Major Special Project","award":["202103b06020013"],"award-info":[{"award-number":["202103b06020013"]}]}],
"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],
"abstract":"<jats:p>Individual identification of pigs is a critical component of intelligent pig farming. Traditional pig ear-tagging requires significant human resources and suffers from issues such as difficulty in recognition and low accuracy. This paper proposes the YOLOv5-KCB algorithm for non-invasive identification of individual pigs. Specifically, the algorithm utilizes two datasets\u2014pig faces and pig necks\u2014which are divided into nine categories. Following data augmentation, the total sample size was increased to 19,680. The distance metric used for K-means clustering is changed from the original algorithm to 1-IOU, which improves the adaptability of the model\u2019s target anchor boxes. Furthermore, the algorithm introduces SE, CBAM, and CA attention mechanisms, with the CA attention mechanism being selected for its superior performance in feature extraction. Finally, CARAFE, ASFF, and BiFPN are used for feature fusion, with BiFPN selected for its superior performance in improving the detection ability of the algorithm. The experimental results indicate that the YOLOv5-KCB algorithm achieved the highest accuracy rates in individual pig recognition, surpassing all other improved algorithms in average accuracy rate (IOU = 0.5). The accuracy rate of pig head and neck recognition was 98.4%, while the accuracy rate for pig face recognition was 95.1%, representing improvements of 4.8% and 13.8%, respectively, over the original YOLOv5 algorithm. Notably, the average accuracy rate of identifying pig head and neck was consistently higher than pig face recognition across all algorithms, with YOLOv5-KCB demonstrating an impressive 2.9% improvement. These results emphasize the potential for utilizing the YOLOv5-KCB algorithm for precise individual pig identification, facilitating subsequent intelligent management practices.<\/jats:p>",
"DOI":"10.3390\/s23115242","type":"journal-article","created":{"date-parts":[[2023,6,1]],"date-time":"2023-06-01T02:39:47Z","timestamp":1685587187000},"page":"5242","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":31,
"title":["YOLOv5-KCB: A New Method for Individual Pig Detection Using Optimized K-Means, CA Attention Mechanism and a Bi-Directional Feature Pyramid Network"],"prefix":"10.3390","volume":"23",
"author":[{"given":"Guangbo","family":"Li","sequence":"first","affiliation":[{"name":"School of Information and Computer, Anhui Agricultural University, Hefei 230036, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3231-9983","authenticated-orcid":false,"given":"Guolong","family":"Shi","sequence":"additional","affiliation":[{"name":"School of Information and Computer, Anhui Agricultural University, Hefei 230036, China"},{"name":"Key Laboratory of Agricultural Sensors, Ministry of Agriculture and Rural Affairs, Hefei 230036, China"},{"name":"Anhui Provincial Key Laboratory of Smart Agricultural Technology and Equipment, Hefei 230036, China"}]},{"given":"Jun","family":"Jiao","sequence":"additional","affiliation":[{"name":"School of Information and Computer, Anhui Agricultural University, Hefei 230036, China"}]}],
"member":"1968","published-online":{"date-parts":[[2023,5,31]]},
"reference":[
{"key":"ref_1","doi-asserted-by":"crossref","first-page":"107606","DOI":"10.1016\/j.compag.2022.107606","article-title":"Recognition of aggressive behavior of group-housed pigs based on CNN-GRU hybrid model with spatio-temporal attention mechanism","volume":"205","author":"Gao","year":"2023","journal-title":"Comput. Electron. Agric."},
{"key":"ref_2","doi-asserted-by":"crossref","first-page":"107560","DOI":"10.1016\/j.compag.2022.107560","article-title":"An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size","volume":"205","author":"Hao","year":"2023","journal-title":"Comput. Electron. Agric."},
{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Ma, C., Tao, J., Tan, C., Liu, W., and Li, X. (2023). Negative Media Sentiment about the Pig Epidemic and Pork Price Fluctuations: A Study on Spatial Spillover Effect and Mechanism. Agriculture, 13.","DOI":"10.3390\/agriculture13030658"},
{"key":"ref_4","doi-asserted-by":"crossref","first-page":"107707","DOI":"10.1016\/j.compag.2023.107707","article-title":"Animal behavior classification via deep learning on embedded systems","volume":"207","author":"Arablouei","year":"2023","journal-title":"Comput. Electron. Agric."},
{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Busch, P., Ewald, H., and St\u00fcpmann, F. (2017, January 4\u20136). Determination of standing-time of dairy cows using 3D-accelerometer data from collars. Proceedings of the 2017 Eleventh International Conference on Sensing Technology (ICST), Sydney, NSW, Australia.","DOI":"10.1109\/ICSensT.2017.8304492"},
{"key":"ref_6","doi-asserted-by":"crossref","first-page":"4037","DOI":"10.1109\/TPAMI.2020.2992393","article-title":"Self-supervised visual feature learning with deep neural networks: A survey","volume":"43","author":"Jing","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Hao, W., Han, W., Han, M., and Li, F. (2022). A Novel Improved YOLOv3-SC Model for Individual Pig Detection. Sensors, 22.","DOI":"10.3390\/s22228792"},
{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Kim, T., Kim, Y., Kim, S., and Ko, J. (2023). Estimation of Number of Pigs Taking in Feed Using Posture Filtration. Sensors, 23.","DOI":"10.3390\/s23010238"},
{"key":"ref_9","doi-asserted-by":"crossref","first-page":"106737","DOI":"10.1016\/j.compag.2022.106737","article-title":"Two-stage method based on triplet margin loss for pig face recognition","volume":"194","author":"Wang","year":"2022","journal-title":"Comput. Electron. Agric."},
{"key":"ref_10","doi-asserted-by":"crossref","first-page":"145","DOI":"10.1016\/j.compind.2018.02.016","article-title":"Towards on-farm pig face recognition using convolutional neural networks","volume":"98","author":"Hansen","year":"2018","journal-title":"Comput. Ind."},
{"key":"ref_11","doi-asserted-by":"crossref","first-page":"105386","DOI":"10.1016\/j.compag.2020.105386","article-title":"An adaptive pig face recognition approach using Convolutional Neural Networks","volume":"173","author":"Marsot","year":"2020","journal-title":"Comput. Electron. Agric."},
{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Wang, R., Gao, R., Li, Q., and Dong, J. (2023). Pig Face Recognition Based on Metric Learning by Combining a Residual Network and Attention Mechanism. Agriculture, 13.","DOI":"10.3390\/agriculture13010144"},
{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Psota, E.T., Mittek, M., P\u00e9rez, L.C., Schmidt, T., and Mote, B. (2019). Multi-pig part detection and association with a fully-convolutional network. Sensors, 19.","DOI":"10.3390\/s19040852"},
{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ahn, H., Son, S., Kim, H., Lee, S., Chung, Y., and Park, D. (2021). EnsemblePigDet: Ensemble deep learning for accurate pig detection. Appl. Sci., 11.","DOI":"10.3390\/app11125577"},
{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ocepek, M., \u017dnidar, A., Lavri\u010d, M., \u0160korjanc, D., and Andersen, I.L. (2021). DigiPig: First developments of an automated monitoring system for body, head and tail detection in intensive pig farming. Agriculture, 12.","DOI":"10.3390\/agriculture12010002"},
{"key":"ref_16","first-page":"186","article-title":"Individual pig object detection algorithm based on Gaussian mixture model","volume":"10","author":"Li","year":"2017","journal-title":"Int. J. Agric. Biol. Eng."},
{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Zhuang, Y., Zhou, K., Zhou, Z., Ji, H., and Teng, G. (2022). Systems to Monitor the Individual Feeding and Drinking Behaviors of Growing Pigs Based on Machine Vision. Agriculture, 13.","DOI":"10.3390\/agriculture13010103"},
{"key":"ref_18","first-page":"583","article-title":"Accuracy Improvement of Pig Detection using Image Processing and Deep Learning Techniques on an Embedded Board","volume":"25","author":"Yu","year":"2022","journal-title":"J. Korea Multimed. Soc."},
{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Seo, J., Ahn, H., Kim, D., Lee, S., Chung, Y., and Park, D. (2020). EmbeddedPigDet\u2014Fast and accurate pig detection for embedded board implementations. Appl. Sci., 10.","DOI":"10.3390\/app10082878"},
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Sa, J., Choi, Y., Lee, H., Chung, Y., Park, D., and Cho, J. (2019). Fast pig detection with a top-view camera under various illumination conditions. Symmetry, 11.","DOI":"10.3390\/sym11020266"},
{"key":"ref_21","doi-asserted-by":"crossref","first-page":"108049","DOI":"10.1109\/ACCESS.2019.2933060","article-title":"Automated individual pig localisation, tracking and behaviour metric extraction using deep learning","volume":"7","author":"Cowton","year":"2019","journal-title":"IEEE Access"},
{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Psota, E.T., Schmidt, T., Mote, B., and P\u00e9rez, L.C. (2020). Long-term tracking of group-housed livestock using keypoint detection and MAP estimation for individual animal identification. Sensors, 20.","DOI":"10.3390\/s20133670"},
{"key":"ref_23","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.biosystemseng.2022.07.017","article-title":"Towards re-identification for long-term tracking of group housed pigs","volume":"222","author":"Wang","year":"2022","journal-title":"Biosyst. Eng."},
{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Bhujel, A., Arulmozhi, E., Moon, B.E., and Kim, H.T. (2021). Deep-Learning-Based Automatic Monitoring of Pigs\u2019 Physico-Temporal Activities at Different Greenhouse Gas Concentrations. Animals, 11.","DOI":"10.20944\/preprints202110.0319.v1"},
{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Zhang, L., Gray, H., Ye, X., Collins, L., and Allinson, N. (2019). Automatic individual pig detection and tracking in pig farms. Sensors, 19.","DOI":"10.3390\/s19051188"},
{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1488","DOI":"10.1109\/TIP.2011.2173206","article-title":"On the mathematical properties of the structural similarity index","volume":"21","author":"Brunet","year":"2011","journal-title":"IEEE Trans. Image Process."},
{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},
{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},
{"key":"ref_29","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},
{"key":"ref_30","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv."},
{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial pyramid pooling in deep convolutional networks for visual recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018, January 18\u201323). Path aggregation network for instance segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00913"},
{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},
{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},
{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature Pyramid Networks for Object Detection. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},
{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ahmed, M., Seraj, R., and Islam, S.M.S. (2020). The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9.","DOI":"10.3390\/electronics9081295"},
{"key":"ref_37","first-page":"7132","article-title":"Squeeze-and-Excitation Networks","volume":"Volume 7","author":"Hu","year":"2018","journal-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)"},
{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). CBAM: Convolutional Block Attention Module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},
{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Hou, Q., Zhou, D., and Feng, J. (2021, January 20\u201325). Coordinate attention for efficient mobile network design. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01350"},
{"key":"ref_40","unstructured":"Wang, J., Chen, K., Xu, R., Liu, Z., Loy, C.C., and Lin, D. (2019, October 27\u2013November 2). CARAFE: Content-aware reassembly of features. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, South Korea."},
{"key":"ref_41","unstructured":"Liu, S., Huang, D., and Wang, Y. (2019). Learning spatial fusion for single-shot object detection. arXiv."},
{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Tan, M., Pang, R., and Le, Q.V. (2020, January 13\u201319). EfficientDet: Scalable and efficient object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01079"},
{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Fan, J., Cui, L., and Fei, S. (2023). Waste Detection System Based on Data Augmentation and YOLO_EC. Sensors, 23.","DOI":"10.3390\/s23073646"},
{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Zhang, J., Zhang, J., Zhou, K., Zhang, Y., Chen, H., and Yan, X. (2023). An Improved YOLOv5-Based Underwater Object-Detection Framework. Sensors, 23.","DOI":"10.3390\/s23073693"},
{"key":"ref_45","unstructured":"Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). YOLOX: Exceeding YOLO series in 2021. arXiv."},
{"key":"ref_46","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv."},
{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Wutke, M., Heinrich, F., Das, P.P., Lange, A., Gentz, M., Traulsen, I., Warns, F.K., Schmitt, A.O., and G\u00fcltas, M. (2021). Detecting animal contacts\u2014A deep learning-based pig detection and tracking approach for the quantification of social contacts. Sensors, 21.","DOI":"10.3390\/s21227512"},
{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Chen, J., Zhou, J., Liu, L., Shu, C., Shen, M., and Yao, W. (2023). Sow Farrowing Early Warning and Supervision for Embedded Board Implementations. Sensors, 23.","DOI":"10.3390\/s23020727"},
{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Wang, X., Wang, W., Lu, J., and Wang, H. (2022). HRST: An Improved HRNet for Detecting Joint Points of Pigs. Sensors, 22.","DOI":"10.3390\/s22197215"}
],
"container-title":["Sensors"],"original-title":[],"language":"en",
"link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/11\/5242\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:46:25Z","timestamp":1760125585000},"score":1,
"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/11\/5242"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,31]]},"references-count":49,
"journal-issue":{"issue":"11","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["s23115242"],"URL":"https:\/\/doi.org\/10.3390\/s23115242","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,31]]}}}