{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T00:44:19Z","timestamp":1759970659263,"version":"build-2065373602"},"reference-count":27,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2025,1,30]],"date-time":"2025-01-30T00:00:00Z","timestamp":1738195200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computers"],"abstract":"<jats:p>Japan faces a significant labor shortage due to an aging population, particularly in the agricultural sector. The rising average age of farmers and the declining participation of younger individuals threaten the sustainability of farming practices. These trends reduce the availability of agricultural labor and risk lowering Japan\u2019s food self-sufficiency rate. The reliance on food imports raises concerns regarding price fluctuations and sanitation standards. Moreover, the challenging working conditions in agriculture and a lack of technological innovation have hindered productivity and increased the burden on the existing workforce. To address these challenges, \u201csmart agriculture\u201d presents a promising solution. By leveraging advanced technologies such as sensors, drones, the Internet of Things (IoT), and automation, smart agriculture aims to optimize farm operations. Real-time data collection and AI-driven analysis play a crucial role in monitoring crop growth, assessing soil conditions, and improving overall efficiency. This study proposes enhancements to the YOLO (You Only Look Once) object detection model to develop an automated tomato harvesting system. This system uses a camera to detect tomatoes and assess their ripeness for harvest. Our objective is to streamline the harvesting process through AI technology. Our improved YOLO model integrates two novel loss functions to enhance detection accuracy. The first, \u201cVSR\u201d, refines the model\u2019s ability to classify tomatoes and determine their harvest readiness. The second, \u201cSBCE\u201d, enhances the detection of small tomatoes by training the model to recognize a range of object sizes within the dataset. These improvements have significantly increased the system\u2019s detection performance. Our experimental results demonstrate that the mean Average Precision (mAP) of YOLOv7-tiny improved from 61.81% to 70.21%. Additionally, the F1 score increased from 0.61 to 0.71, and the mean Intersection over Union (mIoU) rose from 65.03% to 66.44% on the tomato dataset. These findings underscore the potential of our proposed system to enhance efficiency in agricultural practices.<\/jats:p>","DOI":"10.3390\/computers14020044","type":"journal-article","created":{"date-parts":[[2025,1,30]],"date-time":"2025-01-30T04:01:39Z","timestamp":1738209699000},"page":"44","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Optimizing Loss Functions for You Only Look Once Models: Improving Object Detection in Agricultural Datasets"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0009-0004-3544-9470","authenticated-orcid":false,"given":"Atsuki","family":"Matsui","sequence":"first","affiliation":[{"name":"Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1, Nojihigashi, Kusatsu 525-8577, Shiga, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-3161-2200","authenticated-orcid":false,"given":"Ryuto","family":"Ishibashi","sequence":"additional","affiliation":[{"name":"Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1, Nojihigashi, Kusatsu 525-8577, Shiga, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4351-6923","authenticated-orcid":false,"given":"Lin","family":"Meng","sequence":"additional","affiliation":[{"name":"College of Science and Engineering, Ritsumeikan University, 1-1-1, Nojihigashi, Kusatsu 525-8577, Shiga, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2025,1,30]]},"reference":[{"key":"ref_1","unstructured":"(2025, January 20). Research of the Ministry of Agriculture, Forestry, and Fisheries. Statistics on Agricultural Labor Force. Available online: https:\/\/www.maff.go.jp\/j\/tokei\/sihyo\/data\/08.html."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"215","DOI":"10.1016\/S0168-1699(02)00093-5","article-title":"Computer vision based system for apple surface defect detection","volume":"36","author":"Li","year":"2002","journal-title":"Comput. Electron. Agric."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"9142753","DOI":"10.1155\/2019\/9142753","article-title":"Identification of tomato disease types and detection of infected areas based on deep convolutional neural networks and object detection techniques","volume":"2019","author":"Wang","year":"2019","journal-title":"Comput. Intell. Neurosci."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"7630926","DOI":"10.1155\/2019\/7630926","article-title":"Detection of apple lesions in orchards based on deep learning methods of cyclegan and YOLOv3-dense","volume":"2019","author":"Tian","year":"2019","journal-title":"J. Sens."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Matsui, A., Meng, L., and Hattori, K. (2023, January 4\u20137). Enhanced YOLO using Attention for Apple grading. Proceedings of the 2023 International Conference on Advanced Mechatronic Systems (ICAMechS), Melbourne, Australia.","DOI":"10.1109\/ICAMechS59878.2023.10272790"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1016\/j.neucom.2020.01.085","article-title":"Recent advances in deep learning for object detection","volume":"396","author":"Wu","year":"2020","journal-title":"Neurocomputing"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Ishibashi, R., Kaneko, H., and Meng, L. (2023, January 4\u20137). Enhancing DETR with Attention-Based Thresholding for Efficient Early Japanese Book Reorganization. Proceedings of the 2023 International Conference on Advanced Mechatronic Systems (ICAMechS), Melbourne, Australia.","DOI":"10.1109\/ICAMechS59878.2023.10272820"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2016","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016, January 11\u201314). Ssd: Single shot multibox detector. Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands. Proceedings, Part I 14.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Redmon, J. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tan, M., Pang, R., and Le, Q.V. (2020, January 13\u201319). Efficientdet: Scalable and efficient object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020, January 23\u201328). End-to-end object detection with transformers. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2023, January 17\u201324). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"ref_16","unstructured":"Wen, H., Dai, F., and Yuan, Y. (2021, January 21\u201324). A Study of YOLO Algorithm for Target Detection. Proceedings of the 2021 International Conference on Artificial Life and Robotics (ICAROB2021), Online."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., and Savarese, S. (2019, January 15\u201320). Generalized intersection over union: A metric and a loss for bounding box regression. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00075"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zheng, Z., Wang, P., Liu, W., Li, J., Ye, R., and Ren, D. (2020, January 7\u201312). Distance-IoU loss: Faster and better learning for bounding box regression. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6999"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"8574","DOI":"10.1109\/TCYB.2021.3095305","article-title":"Enhancing geometric factors in model learning and inference for object detection and instance segmentation","volume":"52","author":"Zheng","year":"2021","journal-title":"IEEE Trans. Cybern."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Lin, T. (2017). Focal Loss for Dense Object Detection. arXiv.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhang, H., Wang, Y., Dayoub, F., and Sunderhauf, N. (2021, January 20\u201325). Varifocalnet: An iou-aware dense object detector. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00841"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"4345","DOI":"10.3390\/heritage6050230","article-title":"Deteriorated characters restoration for early Japanese books using enhanced cyclegan","volume":"6","author":"Kaneko","year":"2023","journal-title":"Heritage"},{"key":"ref_23","unstructured":"Hoffer, E., and Ailon, N. (2015, January 12\u201314). Deep metric learning using triplet network. Proceedings of the Similarity-Based Pattern Recognition: Third International Workshop, SIMBAD 2015, Copenhagen, Denmark. Proceedings 3."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Qi, C., and Su, F. (2017, January 17\u201320). Contrastive-center loss for deep neural networks. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296803"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Xue, N., and Zafeiriou, S. (2019, January 15\u201320). Arcface: Additive angular margin loss for deep face recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00482"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","article-title":"The pascal visual object classes (voc) challenge","volume":"88","author":"Everingham","year":"2010","journal-title":"Int. J. Comput. Vis."},{"key":"ref_27","unstructured":"(2025, January 20). Sylhet Agricultural University Tomato Leaf Diseases Detect Dataset. 2024. Sylhet Agricultural University, Tomato Leaf Diseases Detect Computer Vision Project. Available online: https:\/\/universe.roboflow.com\/sylhet-agricultural-university\/tomato-leaf-diseases-detect."}],"container-title":["Computers"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/2\/44\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T10:38:42Z","timestamp":1759919922000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/2\/44"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,30]]},"references-count":27,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,2]]}},"alternative-id":["computers14020044"],"URL":"https:\/\/doi.org\/10.3390\/computers14020044","relation":{},"ISSN":["2073-431X"],"issn-type":[{"type":"electronic","value":"2073-431X"}],"subject":[],"published":{"date-parts":[[2025,1,30]]}}}