{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T10:08:51Z","timestamp":1774433331180,"version":"3.50.1"},"reference-count":28,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2021,5,7]],"date-time":"2021-05-07T00:00:00Z","timestamp":1620345600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the National Natural Science Foundation of China","award":["the National Natural Science Foundation of China"],"award-info":[{"award-number":["the National Natural Science Foundation of China"]}]},{"name":"the Science and Technology Planning Project of Guangdong Province","award":["2015A020224038"],"award-info":[{"award-number":["2015A020224038"]}]},{"name":"College Students' Innovation and Entrepreneurship Competition","award":["202010564031"],"award-info":[{"award-number":["202010564031"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Instance segmentation is an accurate and reliable method to segment adhesive pigs\u2019 images, and is critical for providing health and welfare information on individual pigs, such as body condition score, live weight, and activity behaviors in group-housed pig environments. In this paper, a PigMS R-CNN framework based on mask scoring R-CNN (MS R-CNN) is explored to segment adhesive pig areas in group-pig images, enabling the separate identification and localization of group-housed pigs. The PigMS R-CNN consists of three processes. First, a 101-layer residual network, combined with the feature pyramid network (FPN), is used as a feature extraction network to obtain feature maps for input images. Then, according to these feature maps, the region candidate network generates the regions of interest (RoIs). 
Finally, for each RoI, we obtain the location, classification, and segmentation results of detected pigs through the regression, category, and mask branches of the PigMS R-CNN head network. To avoid missed target pigs and erroneous detections in overlapping or stuck areas of group-housed pigs, the PigMS R-CNN framework replaces traditional NMS with soft non-maximum suppression (soft-NMS) for the post-processing selection of pigs. The MS R-CNN framework with traditional NMS obtains results with an F1 of 0.9228. By setting the soft-NMS threshold to 0.7 on PigMS R-CNN, detection of the target pigs achieves an F1 of 0.9374. The work explores a new instance segmentation method for adhesive group-housed pig images, providing a valuable basis for vision-based, real-time automatic pig monitoring and welfare evaluation.<\/jats:p>","DOI":"10.3390\/s21093251","type":"journal-article","created":{"date-parts":[[2021,5,7]],"date-time":"2021-05-07T22:36:24Z","timestamp":1620426984000},"page":"3251","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":31,"title":["Automatic Detection and Segmentation for Group-Housed Pigs Based on PigMS R-CNN"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5844-2360","authenticated-orcid":false,"given":"Shuqin","family":"Tu","sequence":"first","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China"}]},{"given":"Weijun","family":"Yuan","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China"}]},{"given":"Yun","family":"Liang","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, 
China"}]},{"given":"Fan","family":"Wang","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China"}]},{"given":"Hua","family":"Wan","sequence":"additional","affiliation":[{"name":"College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,7]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"28","DOI":"10.1016\/j.biosystemseng.2019.02.018","article-title":"Group-housed pig detection in video surveillance of overhead views using multi-feature template matching","volume":"181","author":"Li","year":"2019","journal-title":"Biosyst. Eng."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"26","DOI":"10.1016\/j.compag.2015.05.004","article-title":"An approach based on digital image analysis to estimate the live weights of pigs in farm environments","volume":"115","author":"Wongsriworaphon","year":"2015","journal-title":"Comput. Electron. Agric."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhang, L., Gray, H., Ye, X., Collins, L., and Allinson, N. (2019). Automatic Individual Pig Detection and Tracking in Pig Farms. Sensors, 19.","DOI":"10.3390\/s19051188"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"184","DOI":"10.1016\/j.compag.2015.10.023","article-title":"Using machine vision for investigation of changes in pig group lying patterns","volume":"119","author":"Nasirahmadi","year":"2015","journal-title":"Comput. Electron. Agric."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1016\/j.compag.2016.04.022","article-title":"Automatic detection of mounting behaviours among pigs using image analysis","volume":"124","author":"Nasirahmadi","year":"2016","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1016\/j.livsci.2018.10.013","article-title":"A kinetic energy model based on machine vision for recognition of aggressive behaviours among group-housed pigs","volume":"218","author":"Chen","year":"2018","journal-title":"Livest. Sci."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1016\/j.biosystemseng.2018.09.011","article-title":"Automatic recognition of sow nursing behaviour using deep learning-based segmentation and spatial and temporal features","volume":"175","author":"Yang","year":"2018","journal-title":"Biosyst. Eng."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Ju, M., Choi, Y., Seo, J., Sa, J., Lee, S., Chung, Y., and Park, D. (2018). A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring. Sensors, 18.","DOI":"10.3390\/s18061746"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"111","DOI":"10.1016\/j.compag.2013.01.013","article-title":"Automatic identification of marked pigs in a pen using image pattern recognition","volume":"93","author":"Kashiha","year":"2013","journal-title":"Comput. Electron. Agric."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1016\/j.biosystemseng.2014.07.002","article-title":"Foreground detection of group-housed pigs based on the combination of Mixture of Gaussians using prediction mechanism and threshold segmentation","volume":"125","author":"Guo","year":"2014","journal-title":"Biosyst. Eng."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"54","DOI":"10.1016\/j.biosystemseng.2015.05.001","article-title":"Multi-object extraction from topview group-housed pig images based on adaptive partitioning and multilevel thresholding segmentation","volume":"135","author":"Guo","year":"2015","journal-title":"Biosyst. 
Eng."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1016\/j.biosystemseng.2017.11.007","article-title":"Identification of group-housed pigs based on Gabor and Local Binary Pattern features","volume":"166","author":"Huang","year":"2018","journal-title":"Biosyst. Eng."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft COCO: Common Objects in Context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.compag.2018.01.023","article-title":"Automatic recognition of lactating sow postures from depth images by deep learning detector","volume":"147","author":"Zheng","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"453","DOI":"10.1016\/j.compag.2018.11.002","article-title":"Feeding behavior recognition for group-housed pigs with the Faster R-CNN","volume":"155","author":"Yang","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"36","DOI":"10.1016\/j.biosystemseng.2018.10.005","article-title":"High-accuracy image segmentation for lactating sows using a fully convolutional network","volume":"176","author":"Yang","year":"2018","journal-title":"Biosyst. Eng."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). 
Fully Convolutional Network for Semantic Segmentation. Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Qiao, Y., Truman, M., and Sukkarieh, S. (2019). Cattle segmentation and contour extraction based on Mask R-CNN for precision livestock farming. Comput. Electron. Agric., 165.","DOI":"10.1016\/j.compag.2019.104958"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1016\/j.compag.2015.11.008","article-title":"An automatic splitting method for the adhesive piglets\u2019 gray scale image based on the ellipse shape feature","volume":"120","author":"Lu","year":"2016","journal-title":"Comput. Electron. Agric."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Huang, Z., Huang, L., Gong, Y., Huang, C., and Wang, X. (2019, January 16\u201320). Mask Scoring R-CNN. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition CVPR, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00657"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Tu, S., Liu, H., Li, J., Huang, J., and Xue, Y. (2020, January 27\u201329). Instance Segmentation Based on Mask Scoring R-CNN for Group-housed Pigs. Proceedings of the 2020 International Conference on Computer Engineering and Application (ICCEA), Guangzhou, China.","DOI":"10.1109\/ICCEA50009.2020.00105"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Bodla, N., Singh, B., Chellappa, R., and Davis, L.S. (2017, January 22\u201329). Improving Object Detection with One Line of Code. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.593"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017, January 21\u201326). Feature Pyramid Networks for Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.106"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Toshev, A., and Szegedy, C. (2014, January 24\u201327). DeepPose: Human Pose Estimation via Deep Neural Networks. Proceedings of the Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.214"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C., and Murphy, K. (2017, January 21\u201326). Towards Accurate Multi-person Pose Estimation in the Wild. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.395"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/9\/3251\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:58:05Z","timestamp":1760162285000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/9\/3251"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,7]]},"references-count":28,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2021,5]]}},"alternative-id":["s21093251"],"URL":"https:\/\/doi.org\/10.3390\/s21093251","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,7]]}}}