{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T00:45:22Z","timestamp":1759970722636,"version":"build-2065373602"},"reference-count":53,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2025,1,29]],"date-time":"2025-01-29T00:00:00Z","timestamp":1738108800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Multi-class object detectors often suffer from the class imbalance issue, where substantial model performance discrepancies exist between classes. Generative adversarial networks (GANs), an emerging deep learning research topic, are able to learn from existing data distributions and generate similar synthetic data, which might serve as valid training data for improving object detectors. The current study investigated the utility of lightweight unconditional GAN in addressing weak object detector class performance by incorporating synthetic data into real data for model retraining, under an agricultural context. AriAplBud, a multi-growth stage aerial apple flower bud dataset was deployed in the study. A baseline YOLO11n detector was first developed based on training, validation, and test datasets derived from AriAplBud. Six FastGAN models were developed based on dedicated subsets of the same YOLO training and validation datasets for different apple flower bud growth stages. Positive sample rates and average instance number per image of synthetic data generated by each of the FastGAN models were investigated based on 1000 synthetic images and the baseline detector at various confidence thresholds. 
In total, 13 new YOLO11n detectors were retrained specifically for the two weak growth stages, tip and half-inch green, by including synthetic data, pseudo-labeled by the baseline detector, in the training datasets to increase the total instance numbers to 1000, 2000, 4000, and 8000. FastGAN proved resilient in generating positive samples, despite apple flower bud instances being generally small and randomly distributed in the images. As expected, the positive sample rates of the synthetic datasets, which ranged from 0 to 1, were negatively correlated with the detector confidence thresholds. Higher overall positive sample rates were observed for the growth stages with higher detector performance. The synthetic images generally contained fewer detector-detectable instances per image than the corresponding real training images. The best achieved YOLO11n AP improvements of the retrained detectors for tip and half-inch green were 30.13% and 14.02%, respectively, while the best achieved YOLO11n mAP improvement was 2.83%. However, the relationship between synthetic training instance quantity and detector class performance has yet to be determined. GANs were concluded to be beneficial for retraining object detectors and improving their performance. 
Further studies are still needed to investigate the influence of synthetic training data quantity and quality on retrained object detector performance.<\/jats:p>","DOI":"10.3390\/bdcc9020028","type":"journal-article","created":{"date-parts":[[2025,1,29]],"date-time":"2025-01-29T07:45:12Z","timestamp":1738136712000},"page":"28","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Lightweight GAN-Assisted Class Imbalance Mitigation for Apple Flower Bud Detection"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6300-2973","authenticated-orcid":false,"given":"Wenan","family":"Yuan","sequence":"first","affiliation":[{"name":"Independent Researcher, Oak Brook, IL 60523, USA"}]},{"given":"Peng","family":"Li","sequence":"additional","affiliation":[{"name":"Independent Researcher, Oak Brook, IL 60523, USA"}]}],"member":"1968","published-online":{"date-parts":[[2025,1,29]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"257","DOI":"10.1109\/JPROC.2023.3238524","article-title":"Object Detection in 20 Years: A Survey","volume":"111","author":"Zou","year":"2023","journal-title":"Proc. IEEE"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"9029","DOI":"10.1007\/s00521-019-04412-5","article-title":"Object Manipulation with a Variable-Stiffness Robotic Mechanism Using Deep Neural Networks for Visual Semantics and Load Estimation","volume":"32","author":"Bayraktar","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"3212","DOI":"10.1109\/TNNLS.2018.2876865","article-title":"Object Detection with Deep Learning: A Review","volume":"30","author":"Zhao","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. 
Syst."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Maheswaran, S., Sathesh, S., Gomathi, R.D., Indhumathi, N., Prasanth, S., Charumathi, K., Balanisharitha, P., Murugesan, G., and Duraisamy, P. (2024, January 24\u201328). Automated Weed Identification And Classification Using Artificial Intelligence. Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kamand, India.","DOI":"10.1109\/ICCCNT61001.2024.10725737"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"108993","DOI":"10.1016\/j.compag.2024.108993","article-title":"Precision Agriculture in the United States: A Comprehensive Meta-Review Inspiring Further Research, Innovation, and Adoption","volume":"221","author":"Trentin","year":"2024","journal-title":"Comput. Electron. Agric."},{"key":"ref_6","unstructured":"Crassweller, R. (2024, December 17). Home Orchards: Flowering Habits of Apples and Pears. Available online: https:\/\/extension.psu.edu\/home-orchards-flowering-habits-of-apples-and-pears."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"109260","DOI":"10.1016\/j.compag.2024.109260","article-title":"Monitoring Apple Flowering Date at 10 m Spatial Resolution Based on Crop Reference Curves","volume":"225","author":"Duan","year":"2024","journal-title":"Comput. Electron. Agric."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"3388","DOI":"10.1109\/TPAMI.2020.2981890","article-title":"Imbalance Problems in Object Detection: A Review","volume":"43","author":"Oksuz","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_9","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2024, December 17). Generative Adversarial Networks. 
Available online: https:\/\/arxiv.org\/abs\/1406.2661."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1319","DOI":"10.1109\/TNNLS.2021.3105227","article-title":"Training Generative Adversarial Networks via Stochastic Nash Games","volume":"34","author":"Franci","year":"2023","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"3313","DOI":"10.1109\/TKDE.2021.3130191","article-title":"A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications","volume":"35","author":"Gui","year":"2023","journal-title":"IEEE Trans. Knowl. Data Eng."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1109\/MSP.2017.2765202","article-title":"Generative Adversarial Networks: An Overview","volume":"35","author":"Creswell","year":"2018","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"3106","DOI":"10.1080\/01431161.2022.2085069","article-title":"Sensitivity Examination of YOLOv4 Regarding Test Image Distortion and Training Dataset Attribute for Apple Flower Bud Classification","volume":"43","author":"Yuan","year":"2022","journal-title":"Int. J. Remote Sens."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"23","DOI":"10.1007\/s00138-018-0966-3","article-title":"A Hybrid Image Dataset toward Bridging the Gap between Real and Simulation Environments for Robotics: Annotated Desktop Objects Real and Synthetic Images Dataset: ADORESet","volume":"30","author":"Bayraktar","year":"2019","journal-title":"Mach. Vis. Appl."},{"key":"ref_15","unstructured":"Bohacek, M., and Farid, H. (2024, December 17). Nepotistically Trained Generative-AI Models Collapse. 
Available online: https:\/\/arxiv.org\/abs\/2311.12202."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"755","DOI":"10.1038\/s41586-024-07566-y","article-title":"AI Models Collapse When Trained on Recursively Generated Data","volume":"631","author":"Shumailov","year":"2024","journal-title":"Nature"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4217","DOI":"10.1109\/TPAMI.2020.2970919","article-title":"A Style-Based Generator Architecture for Generative Adversarial Networks","volume":"43","author":"Karras","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. (2020, January 13\u201319). Analyzing and Improving the Image Quality of StyleGAN. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"ref_19","unstructured":"Brock, A., Donahue, J., and Simonyan, K. (2019, January 6\u20139). Large Scale GAN Training for High Fidelity Natural Image Synthesis. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_20","unstructured":"Kushwaha, V., and Nandi, G.C. (2020, January 3\u20135). Study of Prevention of Mode Collapse in Generative Adversarial Network (GAN). Proceedings of the 2020 IEEE 4th Conference on Information & Communication Technology (CICT), Chennai, India."},{"key":"ref_21","unstructured":"Gan, I., and Verbeek, J. (2024, December 17). Instance-Conditioned GAN. Available online: https:\/\/arxiv.org\/abs\/2109.05070."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Li, J., Liang, X., Wei, Y., Xu, T., Feng, J., and Yan, S. (2017, January 21\u201326). Perceptual Generative Adversarial Networks for Small Object Detection. 
Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.211"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Wang, H., Wang, J., Bai, K., and Sun, Y. (2021). Centered Multi-Task Generative Adversarial Network for Small Object Detection. Sensors, 21.","DOI":"10.3390\/s21155194"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"105022","DOI":"10.1109\/ACCESS.2022.3211394","article-title":"Ganster R-CNN: Occluded Object Detection Network Based on Generative Adversarial Nets and Faster R-CNN","volume":"10","author":"Sun","year":"2022","journal-title":"IEEE Access"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Dewi, C., Chen, R.C., and Liu, Y.T. (2019, January 23\u201325). Similar Music Instrument Detection via Deep Convolution YOLO-Generative Adversarial Network. Proceedings of the 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST), Morioka, Japan.","DOI":"10.1109\/ICAwST.2019.8923404"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"17531","DOI":"10.1109\/JSEN.2023.3281399","article-title":"RodNet: An Advanced Multidomain Object Detection Approach Using Feature Transformation With Generative Adversarial Networks","volume":"23","author":"Jaw","year":"2023","journal-title":"IEEE Sens. J."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Ni, L., Huo, C., Zhang, X., Wang, P., Zhang, L., Guo, K., and Zhou, Z. (2022). NaGAN: Nadir-like Generative Adversarial Network for Off-Nadir Object Detection of Multi-View Remote Sensing Imagery. Remote Sens., 14.","DOI":"10.3390\/rs14040975"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"210","DOI":"10.1007\/978-3-030-01261-8_13","article-title":"SOD-MTGAN: Small Object Detection via Multi-Task Generative Adversarial Network","volume":"Volume 11217","author":"Bai","year":"2018","journal-title":"Computer Vision\u2014ECCV 2018. 
ECCV 2018"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Chen, G., Liu, L., Hu, W., and Pan, Z. (2018, January 22\u201327). Semi-Supervised Object Detection in Remote Sensing Images Using Generative Adversarial Networks. Proceedings of the IGARSS 2018\u20142018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain.","DOI":"10.1109\/IGARSS.2018.8519132"},{"key":"ref_30","unstructured":"Jiang, W., and Ying, N. (2024, December 17). Improve Object Detection by Data Enhancement Based on Generative Adversarial Nets. Available online: https:\/\/arxiv.org\/abs\/1903.01716."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1117\/1.OE.58.1.013108","article-title":"Compressive Sensing Ghost Imaging Object Detection Using Generative Adversarial Networks","volume":"58","author":"Zhai","year":"2019","journal-title":"Opt. Eng."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"108998","DOI":"10.1016\/j.patcog.2022.108998","article-title":"A Full Data Augmentation Pipeline for Small Object Detection Based on Generative Adversarial Networks","volume":"133","author":"Bosquet","year":"2023","journal-title":"Pattern Recognit."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"361","DOI":"10.1016\/j.neucom.2021.06.094","article-title":"Generative Adversarial Network with Object Detector Discriminator for Enhanced Defect Detection on Ultrasonic B-Scans","volume":"459","author":"Medak","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Lee, H., Kang, S., and Chung, K. (2023). Robust Data Augmentation Generative Adversarial Network for Object Detection. 
Sensors, 23.","DOI":"10.3390\/s23010157"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"967","DOI":"10.1007\/s10489-021-02445-9","article-title":"Data Augmentation for Thermal Infrared Object Detection with Cascade Pyramid Generative Adversarial Network","volume":"52","author":"Dai","year":"2022","journal-title":"Appl. Intell."},{"key":"ref_36","unstructured":"Liu, L., Muelly, M., Deng, J., Pfister, T., and Li, L.J. (November, January 27). Generative Modeling for Small-Data Object Detection. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1016\/j.neucom.2019.10.065","article-title":"Diverse Sample Generation with Multi-Branch Conditional Generative Adversarial Network for Remote Sensing Objects Detection","volume":"381","author":"Zhu","year":"2020","journal-title":"Neurocomputing"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"3604","DOI":"10.1364\/OSAC.412523","article-title":"Generative Adversarial Networks and Faster-Region Convolutional Neural Networks Based Object Detection in X-Ray Baggage Security Imagery","volume":"3","author":"Kim","year":"2020","journal-title":"OSA Contin."},{"key":"ref_39","first-page":"443","article-title":"Generating Synthetic Training Data for Object Detection Using Multi-Task Generative Adversarial Networks","volume":"5","author":"Lin","year":"2020","journal-title":"ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"47","DOI":"10.1111\/mice.12561","article-title":"Generative Adversarial Network for Road Damage Detection","volume":"36","author":"Maeda","year":"2021","journal-title":"Comput. Civ. Infrastruct. Eng."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Courtrai, L., Pham, M.T., and Lef\u00e8vre, S. (2020). 
Small Object Detection in Remote Sensing Images Based on Super-Resolution with Auxiliary Generative Adversarial Networks. Remote Sens., 12.","DOI":"10.3390\/rs12193152"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Nath, N., and Behzadan, A.H. (2020, January 14\u201318). Deep Generative Adversarial Network to Enhance Image Quality for Fast Object Detection in Construction Sites. Proceedings of the 2020 Winter Simulation Conference, Orlando, FL, USA.","DOI":"10.1109\/WSC48552.2020.9383890"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"7327","DOI":"10.1080\/01431161.2020.1757782","article-title":"Evaluating Generative Adversarial Networks Based Image-Level Domain Transfer for Multi-Source Remote Sensing Image Segmentation and Object Detection","volume":"41","author":"Li","year":"2020","journal-title":"Int. J. Remote Sens."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Yuan, W. (2024). AriAplBud: An Aerial Multi-Growth Stage Apple Flower Bud Dataset for Agricultural Object Detection Benchmarking. Data, 9.","DOI":"10.3390\/data9020036"},{"key":"ref_45","unstructured":"Jocher, G., and Qiu, J. (2024, December 17). Ultralytics YOLO11. Available online: https:\/\/github.com\/ultralytics\/ultralytics."},{"key":"ref_46","unstructured":"Khanam, R., and Hussain, M. (2024). YOLOv11: An Overview of the Key Architectural Enhancements. arXiv."},{"key":"ref_47","unstructured":"Liu, B., Zhu, Y., Song, K., and Elgammal, A. (2021, January 3\u20137). Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis. Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","article-title":"The Pascal Visual Object Classes (VOC) Challenge","volume":"88","author":"Everingham","year":"2010","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_49","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2024, December 17). GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. Available online: https:\/\/arxiv.org\/abs\/1706.08500."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going Deeper with Convolutions. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_51","unstructured":"(2024, December 17). A Fast and Stable GAN for Small and High Resolution Imagesets\u2014Pytorch. Available online: https:\/\/github.com\/odegeasslbc\/FastGAN-pytorch."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"413","DOI":"10.3390\/agriengineering5010027","article-title":"Accuracy Comparison of YOLOv7 and YOLOv4 Regarding Image Annotation Quality for Apple Flower Bud Classification","volume":"5","author":"Yuan","year":"2023","journal-title":"AgriEngineering"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Yuan, W., and Choi, D. (2021). UAV-Based Heating Requirement Determination for Frost Management in Apple Orchard. 
Remote Sens., 13.","DOI":"10.3390\/rs13020273"}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/2\/28\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T10:38:18Z","timestamp":1759919898000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/2\/28"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,29]]},"references-count":53,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,2]]}},"alternative-id":["bdcc9020028"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9020028","relation":{},"ISSN":["2504-2289"],"issn-type":[{"type":"electronic","value":"2504-2289"}],"subject":[],"published":{"date-parts":[[2025,1,29]]}}}