{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:30:24Z","timestamp":1760059824579,"version":"build-2065373602"},"reference-count":53,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T00:00:00Z","timestamp":1752192000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Major scientific and technological project of Shenzhen","award":["KJZD 20230923114611023","42301515"],"award-info":[{"award-number":["KJZD 20230923114611023","42301515"]}]},{"name":"National Natural Science Foundation of China","award":["KJZD 20230923114611023","42301515"],"award-info":[{"award-number":["KJZD 20230923114611023","42301515"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>The flowering stage is a critical phase in the growth of rapeseed crops, and non-destructive, high-throughput quantitative analysis of rape flower clusters in field environments holds significant importance for rapeseed breeding. However, detecting and counting rape flower clusters remains challenging in complex field conditions due to their small size, severe overlapping and occlusion, and the large parameter sizes of existing models. To address these challenges, this study proposes a lightweight rape flower cluster detection model, SPL-YOLOv8. First, the model introduces StarNet as a lightweight backbone network for efficient feature extraction, significantly reducing computational complexity and parameter counts. Second, a feature fusion module (C2f-Star) is integrated into the backbone to enhance the feature representation capability of the neck through expanded spatial dimensions, mitigating the impact of occluded regions on detection performance. 
Additionally, a lightweight Partial Group Convolution Detection Head (PGCD) is proposed, which employs Partial Convolution combined with Group Normalization to enable multi-scale feature interaction. By incorporating additional learnable parameters, the PGCD enhances the detection and localization of small targets. Finally, channel pruning based on the Layer-Adaptive Magnitude-based Pruning (LAMP) score is applied to reduce model parameters and runtime memory. Experimental results on the Rapeseed Flower-Raceme Benchmark (RFRB) demonstrate that the SPL-YOLOv8n-prune model achieves a detection accuracy of 92.2% in Average Precision (AP50), comparable to SOTA methods, while reducing the giga floating-point operations (GFLOPs) and parameters by 86.4% and 95.4%, respectively. The model size is only 0.5 MB and the real-time frame rate is 171 fps. The proposed model effectively detects rape flower clusters with minimal computational overhead, offering technical support for yield prediction and elite cultivar selection in rapeseed breeding.<\/jats:p>","DOI":"10.3390\/a18070428","type":"journal-article","created":{"date-parts":[[2025,7,11]],"date-time":"2025-07-11T15:19:31Z","timestamp":1752247171000},"page":"428","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["SPL-YOLOv8: A Lightweight Method for Rape Flower Cluster Detection and Counting Based on YOLOv8n"],"prefix":"10.3390","volume":"18","author":[{"given":"Yue","family":"Fang","sequence":"first","affiliation":[{"name":"Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China"}]},{"given":"Chenbo","family":"Yang","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, 
China"}]},{"given":"Jie","family":"Li","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China"}]},{"given":"Jingmin","family":"Tu","sequence":"additional","affiliation":[{"name":"Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, Wuhan 430068, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,7,11]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"416","DOI":"10.1016\/j.indcrop.2012.06.021","article-title":"Rapid estimation of seed yield using hyperspectral images of oilseed rape leaves","volume":"42","author":"Zhang","year":"2013","journal-title":"Ind. Crops Prod."},{"key":"ref_2","first-page":"221","article-title":"Security strategy for the nation\u2019s edible vegetable oil supplies under the new circumstances","volume":"46","author":"Feng","year":"2024","journal-title":"Chin. J. Oil Crop Sci."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Mamalis, M., Kalampokis, E., Kalfas, I., and Tarabanis, K. (2023). Deep learning for detecting verticillium fungus in olive trees: Using yolo in uav imagery. Algorithms, 16.","DOI":"10.3390\/a16070343"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Roman, A., Rahman, M.M., Haider, S.A., Akram, T., and Naqvi, S.R. (2025). Integrating Feature Selection and Deep Learning: A Hybrid Approach for Smart Agriculture Applications. 
Algorithms, 18.","DOI":"10.3390\/a18040222"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"180","DOI":"10.1016\/j.fcr.2018.03.018","article-title":"The critical period for yield and quality determination in canola (Brassica napus L.)","volume":"222","author":"Kirkegaard","year":"2018","journal-title":"Field Crops Res."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"506","DOI":"10.1111\/pce.13946","article-title":"The transition to flowering in winter rapeseed during vernalization","volume":"44","author":"Matar","year":"2021","journal-title":"Plant Cell Environ."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1186\/s13007-023-01017-x","article-title":"Automatic rape flower cluster counting method based on low-cost labelling and UAV-RGB images","volume":"19","author":"Li","year":"2023","journal-title":"Plant Methods"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Li, J., Li, Y., Qiao, J., Li, L., Wang, X., Yao, J., and Liao, G. (2023). Automatic counting of rapeseed inflorescences using deep learning method and UAV RGB imagery. Front. Plant Sci., 14.","DOI":"10.3389\/fpls.2023.1101143"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Han, J., Zhang, Z., and Cao, J. (2020). Developing a new method to identify flowering dynamics of rapeseed using landsat 8 and sentinel-1\/2. Remote Sens., 13.","DOI":"10.3390\/rs13010105"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"111660","DOI":"10.1016\/j.rse.2020.111660","article-title":"Detecting flowering phenology in oil seed rape parcels with Sentinel-1 and-2 time series","volume":"239","author":"Taymans","year":"2020","journal-title":"Remote Sens. Environ."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Zhang, T., Vail, S., Duddu, H.S., Parkin, I.A., Guo, X., Johnson, E.N., and Shirtliffe, S.J. (2021). Phenotyping flowering in canola (Brassica napus L.) 
and estimating seed yield using an unmanned aerial vehicle-based imagery. Front. Plant Sci., 12.","DOI":"10.3389\/fpls.2021.686332"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Gong, G., Wang, X., Zhang, J., Shang, X., Pan, Z., Li, Z., and Zhang, J. (2025). MSFF: A Multi-Scale Feature Fusion Convolutional Neural Network for Hyperspectral Image Classification. Electronics, 14.","DOI":"10.3390\/electronics14040797"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Colucci, G.P., Battilani, P., Camardo Leggieri, M., and Trinchero, D. (2025). Algorithms for Plant Monitoring Applications: A Comprehensive Review. Algorithms, 18.","DOI":"10.3390\/a18020084"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"S\u00e1ri-Barn\u00e1cz, F.E., Zalai, M., Milics, G., T\u00f3thn\u00e9 Kun, M., M\u00e9sz\u00e1ros, J., \u00c1rvai, M., and Kiss, J. (2024). Monitoring Helicoverpa armigera Damage with PRISMA Hyperspectral Imagery: First Experience in Maize and Comparison with Sentinel-2 Imagery. Remote Sens., 16.","DOI":"10.3390\/rs16173235"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"9757948","DOI":"10.34133\/2022\/9757948","article-title":"Simultaneous prediction of wheat yield and grain protein content using multitask deep learning from time-series proximal sensing","volume":"2022","author":"Sun","year":"2022","journal-title":"Plant Phenomics"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Shi, Z., Wang, L., Yang, Z., Li, J., Cai, L., Huang, Y., Zhang, H., and Han, L. (2025). Unmanned Aerial Vehicle-Based Hyperspectral Imaging Integrated with a Data Cleaning Strategy for Detection of Corn Canopy Biomass, Chlorophyll, and Nitrogen Contents at Plant Scale. Remote Sens., 17.","DOI":"10.3390\/rs17050895"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Sun, K., Yang, J., Li, J., Yang, B., and Ding, S. (2025). 
Proximal Policy Optimization-Based Hierarchical Decision-Making Mechanism for Resource Allocation Optimization in UAV Networks. Electronics, 14.","DOI":"10.3390\/electronics14040747"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"109103","DOI":"10.1016\/j.compag.2024.109103","article-title":"TasselNetV2++: A dual-branch network incorporating branch-level transfer learning and multilayer fusion for plant counting","volume":"223","author":"Xue","year":"2024","journal-title":"Comput. Electron. Agric."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"107905","DOI":"10.1016\/j.compag.2023.107905","article-title":"SwinT-YOLO: Detection of densely distributed maize tassels in remote sensing images","volume":"210","author":"Zhang","year":"2023","journal-title":"Comput. Electron. Agric."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1586","DOI":"10.1016\/j.cj.2023.04.005","article-title":"Rpnet: Rice plant counting after tillering stage based on plant attention and multiple supervision network","volume":"11","author":"Bai","year":"2023","journal-title":"Crop J."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Yadav, P.K., Thomasson, J.A., Hardin, R., Searcy, S.W., Braga-Neto, U., Popescu, S.C., Rodriguez, R., Martin, D.E., and Enciso, J. (2024). AI-Driven Computer Vision Detection of Cotton in Corn Fields Using UAS Remote Sensing Data and Spot-Spray Application. Remote Sens., 16.","DOI":"10.3390\/rs16152754"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhang, M., Chen, W., Gao, P., Li, Y., Tan, F., Zhang, Y., Ruan, S., Xing, P., and Guo, L. (2024). YOLO SSPD: A small target cotton boll detection model during the boll-spitting period based on space-to-depth convolution. Front. 
Plant Sci., 15.","DOI":"10.3389\/fpls.2024.1409194"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"109342","DOI":"10.1016\/j.compag.2024.109342","article-title":"MFNet: Multi-scale feature enhancement networks for wheat head detection and counting in complex scene","volume":"225","author":"Qian","year":"2024","journal-title":"Comput. Electron. Agric."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1680","DOI":"10.3390\/make5040083","article-title":"A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas","volume":"5","author":"Terven","year":"2023","journal-title":"Mach. Learn. Knowl. Extr."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Ma, X., Dai, X., Bai, Y., Wang, Y., and Fu, Y. (2024, January 17\u201321). Rewrite the stars. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.00544"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, N., Liu, H., Li, Y., Zhou, W., and Ding, M. (2023). Segmentation and phenotype calculation of rapeseed pods based on YOLO v8 and mask R-convolution neural networks. Plants, 12.","DOI":"10.3390\/plants12183328"},{"key":"ref_27","first-page":"114","article-title":"Improved light-weight military aircraft detection algorithm of YOLOv8","volume":"60","author":"Liu","year":"2024","journal-title":"J. Comput. Eng. Appl."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Akyon, F.C., Altinuc, S.O., and Temizel, A. (2022, January 16\u201319). Slicing aided hyper inference and fine-tuning for small object detection. Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France.","DOI":"10.1109\/ICIP46576.2022.9897990"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Woo, S., Debnath, S., Hu, R., Chen, X., Liu, Z., Xie, S., and He, K. (2023). 
ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders. arXiv.","DOI":"10.1109\/CVPR52729.2023.01548"},{"key":"ref_30","first-page":"123","article-title":"FarnerNet: A Lightweight Convolutional Neural Network for Agricultural Image Analysis","volume":"15","author":"Doe","year":"2023","journal-title":"J. Agric. Inform."},{"key":"ref_31","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv."},{"key":"ref_32","first-page":"12934","article-title":"EfficientFormer: Vision Transformers at MobileNet Speed","volume":"35","author":"Li","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_33","unstructured":"Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022). EfficientViT: Lightweight Vision Transformers for Real-Time Semantic Segmentation. arXiv."},{"key":"ref_34","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2022). BiFormer: Bidirectional Network for Visual Recognition. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_36","unstructured":"Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H.P. (2016). Pruning Filters for Efficient ConvNets. arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Fang, G., Ma, X., Song, M., Mi, M.B., and Wang, X. (2023, January 18\u201322). Depgraph: Towards any structural pruning. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01544"},{"key":"ref_38","unstructured":"Zhou, X., Wang, D., and Kr\u00e4henb\u00fchl, P. (2019, January 16\u201320). Objects as Points. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Goyal, P., Girshick, R., He, K., and Doll\u00e1r, P. (2017, January 22\u201329). Focal Loss for Dense Object Detection. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016, January 11\u201314). SSD: Single Shot MultiBox Detector. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_41","first-page":"9","article-title":"Detection algorithm of safety helmet wear based on MobileNet-SSD","volume":"47","author":"Xu","year":"2021","journal-title":"Comput. Eng."},{"key":"ref_42","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv."},{"key":"ref_43","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv."},{"key":"ref_44","unstructured":"Jocher, G., Chaurasia, A., Stoken, A., Borovec, J., Kwon, Y., Michael, K., Fang, J., Wong, C., Zeng, Y., and V, A. (2025, July 08). ultralytics\/yolov5: v6. 2-YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci. ai Integrations. Available online: https:\/\/ui.adsabs.harvard.edu\/abs\/2022zndo...7002879J\/abstract."},{"key":"ref_45","unstructured":"Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., and Nie, W. 
(2022). YOLOv6: A single-stage object detection framework for industrial applications. arXiv."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2023, January 18\u201322). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"ref_47","first-page":"107984","article-title":"Yolov10: Real-time end-to-end object detection","volume":"37","author":"Wang","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_48","unstructured":"Khanam, R., and Hussain, M. (2024). Yolov11: An overview of the key architectural enhancements. arXiv."},{"key":"ref_49","unstructured":"Wang, J., Zhang, Y., Li, X., and Zhang, J. (2023). RT-DETR: Real-Time Detection Transformer. arXiv."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Zhou, D., Chen, S., Gao, S., and Ma, Y. (2016, January 27\u201330). Single-Image Crowd Counting via Multi-Column Convolutional Neural Network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.70"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Li, Y., Zhang, X., and Chen, D. (2018, January 18\u201322). CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00120"},{"key":"ref_52","first-page":"225","article-title":"TasselNet: Counting Maize Tassels in the Wild via Local Counts Regression","volume":"264","author":"Lu","year":"2019","journal-title":"Agric. For. Meteorol."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Lu, H., and Cao, Z. (2020). 
TasselNetV2+: A fast implementation for high-throughput plant counting from high-resolution RGB imagery. Front. Plant Sci., 11.","DOI":"10.3389\/fpls.2020.541960"}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/7\/428\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:08:48Z","timestamp":1760033328000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/7\/428"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,11]]},"references-count":53,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2025,7]]}},"alternative-id":["a18070428"],"URL":"https:\/\/doi.org\/10.3390\/a18070428","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2025,7,11]]}}}