{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,24]],"date-time":"2026-04-24T15:01:39Z","timestamp":1777042899360,"version":"3.51.4"},"reference-count":192,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2024,12,16]],"date-time":"2024-12-16T00:00:00Z","timestamp":1734307200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Portuguese Foundation for Science and Technology (FCT)","award":["UIDB\/04033\/2020"],"award-info":[{"award-number":["UIDB\/04033\/2020"]}]},{"name":"Portuguese Foundation for Science and Technology (FCT)","award":["PRT\/BD\/154883\/2023"],"award-info":[{"award-number":["PRT\/BD\/154883\/2023"]}]},{"name":"Portuguese Foundation for Science and Technology (FCT)","award":["C644866286-00000011"],"award-info":[{"award-number":["C644866286-00000011"]}]},{"name":"doctoral scholarship","award":["UIDB\/04033\/2020"],"award-info":[{"award-number":["UIDB\/04033\/2020"]}]},{"name":"doctoral scholarship","award":["PRT\/BD\/154883\/2023"],"award-info":[{"award-number":["PRT\/BD\/154883\/2023"]}]},{"name":"doctoral scholarship","award":["C644866286-00000011"],"award-info":[{"award-number":["C644866286-00000011"]}]},{"name":"Vine &amp; Wine Portugal Project","award":["UIDB\/04033\/2020"],"award-info":[{"award-number":["UIDB\/04033\/2020"]}]},{"name":"Vine &amp; Wine Portugal Project","award":["PRT\/BD\/154883\/2023"],"award-info":[{"award-number":["PRT\/BD\/154883\/2023"]}]},{"name":"Vine &amp; Wine Portugal Project","award":["C644866286-00000011"],"award-info":[{"award-number":["C644866286-00000011"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AgriEngineering"],"abstract":"<jats:p>The Eurasian grapevine (Vitis vinifera L.) 
is one of the most extensively cultivated horticultural crops worldwide, with significant economic relevance, particularly in wine production. Accurate grapevine variety identification is essential for ensuring product authenticity, quality control, and regulatory compliance. Traditional identification methods have inherent limitations: ampelography is subjective and dependent on skilled experts, while molecular analysis is costly and time-consuming. To address these challenges, recent research has focused on applying deep learning (DL) and machine learning (ML) techniques for grapevine variety identification. This study systematically analyses 37 recent studies that employed DL and ML models for this purpose. The objective is to provide a detailed analysis of classification pipelines, highlighting the strengths and limitations of each approach. Most studies use DL models trained on leaf images captured in controlled environments at distances of up to 1.2 m. However, these studies often fail to address practical challenges, such as including a broader range of grapevine varieties, using data acquired directly in the vineyards, and evaluating models under adverse conditions. 
This review also suggests potential directions for advancing research in this field.<\/jats:p>","DOI":"10.3390\/agriengineering6040277","type":"journal-article","created":{"date-parts":[[2024,12,17]],"date-time":"2024-12-17T06:26:12Z","timestamp":1734416772000},"page":"4851-4888","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Advancing Grapevine Variety Identification: A Systematic Review of Deep Learning and Machine Learning Approaches"],"prefix":"10.3390","volume":"6","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7097-3260","authenticated-orcid":false,"given":"Gabriel A.","family":"Carneiro","sequence":"first","affiliation":[{"name":"Engineering Department, School of Science and Technology, University of Tr\u00e1s-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal"},{"name":"Centre for Robotics in Industry and Intelligent Systems (CRIIS), Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3458-7693","authenticated-orcid":false,"given":"Ant\u00f3nio","family":"Cunha","sequence":"additional","affiliation":[{"name":"Engineering Department, School of Science and Technology, University of Tr\u00e1s-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal"},{"name":"ALGORITMI Research Centre, University of Minho, 4800-058 Guimaraes, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0071-3361","authenticated-orcid":false,"given":"Thierry J.","family":"Aubry","sequence":"additional","affiliation":[{"name":"C\u00f4a Parque, Funda\u00e7\u00e3o para a Salvaguarda e Valoriza\u00e7\u00e3o do Vale do C\u00f4a, 5150-620 Vila Nova de Foz Coa, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4533-930X","authenticated-orcid":false,"given":"Joaquim","family":"Sousa","sequence":"additional","affiliation":[{"name":"Engineering Department, School of Science and Technology, University of 
Tr\u00e1s-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal"},{"name":"Centre for Robotics in Industry and Intelligent Systems (CRIIS), Institute for Systems and Computer Engineering, Technology and Science (INESC-TEC), 4200-465 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2024,12,16]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Eyduran, S.P., Akin, M., Ercisli, S., Eyduran, E., and Maghradze, D. (2015). Sugars, organic acids, and phenolic compounds of ancient grape cultivars (Vitis vinifera L.) from Igdir province of Eastern Turkey. Biol. Res., 48.","DOI":"10.1186\/0717-6287-48-2"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.plaphy.2019.01.026","article-title":"Early stage metabolic events associated with the establishment of Vitis vinifera\u2014Plasmopara viticola compatible interaction","volume":"137","author":"Nascimento","year":"2019","journal-title":"Plant Physiol. Biochem."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"975","DOI":"10.1007\/s10722-009-9416-4","article-title":"Portuguese traditional grapevine cultivars and wild vines (Vitis vinifera L.) share morphological and genetic traits","volume":"56","author":"Cunha","year":"2009","journal-title":"Genet. Resour. Crop Evol."},{"key":"ref_4","first-page":"197","article-title":"Verifying synonymies between grape cultivars from France and Northwestern Italy using molecular markers","volume":"40","author":"Schneider","year":"2001","journal-title":"VITIS-J. Grapevine Res."},{"key":"ref_5","unstructured":"Lacombe, T. (2012). Contribution \u00e0 l\u2019\u00c9tude de l\u2019Histoire \u00c9volutive de la Vigne Cultiv\u00e9e (Vitis vinifera L.) par l\u2019Analyse de la Diversit\u00e9 g\u00e9n\u00e9Tique Neutre et de G\u00e8nes d\u2019Int\u00e9r\u00eat. [Ph.D. 
Thesis, Institut National d\u2019Etudes Sup\u00e9rieures Agronomiques de Montpellier]."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"110425","DOI":"10.1016\/j.measurement.2021.110425","article-title":"A CNN-SVM study based on selected deep features for grapevine leaves classification","volume":"188","author":"Koklu","year":"2022","journal-title":"Measurement"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"185","DOI":"10.1016\/j.talanta.2016.05.059","article-title":"Classification of red wine based on its protected designation of origin (PDO) using Laser-induced Breakdown Spectroscopy (LIBS)","volume":"158","author":"Moncayo","year":"2016","journal-title":"Talanta"},{"key":"ref_8","unstructured":"The International Organisation of Vine and Wine (2020). State of the World Vitivinicultural Sector in 2020, The International Organisation of Vine and Wine. Technical Report."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Villano, C., Corrado, G., Basile, B., Serio, E.D., Mataffo, A., Ferrara, E., and Aversano, R. (2023). Morphological and Genetic Clonal Diversity within the `Greco Bianco\u2019 Grapevine (Vitis vinifera L.) Variety. Plants, 12.","DOI":"10.3390\/plants12030515"},{"key":"ref_10","unstructured":"Barnes, A. (2024, December 04). Carmen\u00e8re Day and the Story of Chilean Carmen\u00e8re, 2016. Section: Features. Available online: https:\/\/southamericawineguide.com\/carmenere-day-chilean-carmenere\/."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Iorizzo, M., Sicilia, A., Nicolosi, E., Forino, M., Picariello, L., Piero, A.R.L., Vitale, A., Monaco, E., Ferlito, F., and Succi, M. (2023). Investigating the impact of pedoclimatic conditions on the oenological performance of two red cultivars grown throughout southern Italy. Front. 
Plant Sci., 14.","DOI":"10.3389\/fpls.2023.1250208"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"383","DOI":"10.1504\/IJGW.2012.049448","article-title":"Impact of climate change on wine production: A global overview and regional assessment in the Douro Valley of Portugal","volume":"4","author":"Jones","year":"2012","journal-title":"Int. J. Glob. Warm."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1104\/pp.113.229708","article-title":"A Modern Ampelography: A Genetic Basis for Leaf Shape and Venation Patterning in Grape","volume":"164","author":"Chitwood","year":"2014","journal-title":"Plant Physiol."},{"key":"ref_14","first-page":"125","article-title":"Ampelography\u2014An old technique with future uses: The case of minor varieties of Vitis vinifera L. from the Balearic Islands","volume":"45","author":"Cabello","year":"2011","journal-title":"J. Int. Sci. Vigne Vin"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"165","DOI":"10.1023\/A:1022947605916","article-title":"Selecting in situ conservation sites for grape genetic resources in the USA","volume":"50","author":"Pavek","year":"2003","journal-title":"Genet. Resour. Crop Evol."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1448","DOI":"10.1007\/s00122-004-1760-3","article-title":"Development of a standard set of microsatellite reference alleles for identification of grape cultivars","volume":"109","author":"This","year":"2004","journal-title":"Theor. Appl. Genet."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"217","DOI":"10.17660\/ActaHortic.1996.427.27","article-title":"Relationship between environmental factors and the dynamics of growth and composition of the grapevine","volume":"427","author":"Calo","year":"1996","journal-title":"Acta Hortic."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., and Garcia-Rodriguez, J. (2017). 
A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv.","DOI":"10.1016\/j.asoc.2018.05.018"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Guti\u00e9rrez, S., Tardaguila, J., Fern\u00e1ndez-Novales, J., and Diago, M.P. (2015). Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer. PLoS ONE, 10.","DOI":"10.1371\/journal.pone.0143197"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Karakizi, C., Oikonomou, M., and Karantzalos, K. (2016). Vineyard Detection and Vine Variety Discrimination from Very High Resolution Satellite Data. Remote Sens., 8.","DOI":"10.3390\/rs8030235"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"7","DOI":"10.1016\/j.compag.2013.08.021","article-title":"Identification of grapevine varieties using leaf spectroscopy and partial least squares","volume":"99","author":"Diago","year":"2013","journal-title":"Comput. Electron. Agric."},{"key":"ref_22","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012). ImageNet Classification with Deep Convolutional Neural Networks, Curran Associates, Inc."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Rahim, U.F., Utsumi, T., and Mineno, H. (2021, January 21\u201327). Comparison of grape flower counting using patch-based instance segmentation and density-based estimation with convolutional neural networks. Proceedings of the International Symposium on Artificial Intelligence and Robotics 2021, Fukuoka, Japan.","DOI":"10.1117\/12.2605670"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"92","DOI":"10.1016\/j.compag.2015.10.009","article-title":"Grapevine flower estimation by applying artificial vision techniques on images with uncontrolled scene and multi-model analysis","volume":"119","author":"Aquino","year":"2015","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"113588","DOI":"10.1016\/j.eswa.2020.113588","article-title":"Grape detection with convolutional neural networks","volume":"159","author":"Cecotti","year":"2020","journal-title":"Expert Syst. Appl."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Rose, J.C., Kicherer, A., Wieland, M., Klingbeil, L., T\u00f6pfer, R., and Kuhlmann, H. (2016). Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions. Sensors, 16.","DOI":"10.3390\/s16122136"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Morellos, A., Pantazi, X.E., Paraskevas, C., and Moshou, D. (2022). Comparison of Deep Neural Networks in Detecting Field Grapevine Diseases Using Transfer Learning. Remote Sens., 14.","DOI":"10.3390\/rs14184648"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Elsherbiny, O., Elaraby, A., Alahmadi, M., Hamdan, M., and Gao, J. (2024). Rapid Grapevine Health Diagnosis Based on Digital Imaging and Deep Learning. Plants, 13.","DOI":"10.3390\/plants13010135"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"2042","DOI":"10.1002\/jsfa.10824","article-title":"Non-invasive setup for grape maturation classification using deep learning","volume":"101","author":"Ramos","year":"2021","journal-title":"J. Sci. Food Agric."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"107944","DOI":"10.1016\/j.compag.2023.107944","article-title":"Comparison of deep learning methods for grapevine growth stage recognition","volume":"211","author":"Schieck","year":"2023","journal-title":"Comput. Electron. Agric."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Kierdorf, J., Weber, I., Kicherer, A., Zabawa, L., Drees, L., and Roscher, R. (2022). Behind the Leaves: Estimation of Occluded Grapevine Berries With Conditional Generative Adversarial Networks. Front. Artif. 
Intell., 5.","DOI":"10.3389\/frai.2022.830026"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"105360","DOI":"10.1016\/j.compag.2020.105360","article-title":"A vision-based robust grape berry counting algorithm for fast calibration-free bunch weight estimation in the field","volume":"173","author":"Liu","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"108072","DOI":"10.1016\/j.compag.2023.108072","article-title":"Plant image recognition with deep learning: A review","volume":"212","author":"Chen","year":"2023","journal-title":"Comput. Electron. Agric."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Mohimont, L., Alin, F., Rondeau, M., Gaveau, N., and Steffenel, L.A. (2022). Computer Vision and Deep Learning for Precision Viticulture. Agronomy, 12.","DOI":"10.3390\/agronomy12102463"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Ferro, M.V., and Catania, P. (2023). Technologies and Innovative Methods for Precision Viticulture: A Comprehensive Review. Horticulturae, 9.","DOI":"10.3390\/horticulturae9030399"},{"key":"ref_36","first-page":"100134","article-title":"Deep learning in computer vision: A critical review of emerging techniques and application scenarios","volume":"6","author":"Chai","year":"2021","journal-title":"Mach. Learn. Appl."},{"key":"ref_37","first-page":"200","article-title":"Transformers in Vision: A Survey","volume":"54","author":"Khan","year":"2021","journal-title":"ACM Comput. Surv."},{"key":"ref_38","first-page":"211","article-title":"Deep learning based computer vision approaches for smart agricultural applications","volume":"6","author":"Dhanya","year":"2022","journal-title":"Artif. Intell. Agric."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Khatri, N., and Shinde, G.U. (2021). Computer Vision and Image Processing for Precision Agriculture. 
Cognitive Behavior and Human Computer Interaction Based on Machine Learning Algorithm, John Wiley & Sons, Ltd.","DOI":"10.1002\/9781119792109.ch11"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Reddy, G.P.O., Raval, M.S., Adinarayana, J., and Chaudhary, S. (2022). Computer Vision and Machine Learning in Agriculture. Data Science in Agriculture and Natural Resource Management, Springer.","DOI":"10.1007\/978-981-16-5847-1"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Colliot, O. (2023). Classic machine learning methods. Machine Learning for Brain Disorders, Springer.","DOI":"10.1007\/978-1-0716-3195-9"},{"key":"ref_42","first-page":"7668","article-title":"A survey on image feature descriptors","volume":"5","author":"Kumar","year":"2014","journal-title":"Int. J. Comput. Sci. Inf. Technol."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"e1230","DOI":"10.1002\/cl2.1230","article-title":"PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis","volume":"18","author":"Haddaway","year":"2022","journal-title":"Campbell Syst. Rev."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Abbasi, A.A., and Jalal, A. (2024, January 19\u201320). Data Driven Approach to Leaf Recognition: Logistic Regression for Smart Agriculture. Proceedings of the 2024 5th International Conference on Advancements in Computational Sciences, ICACS 2024, Lahore, Pakistan.","DOI":"10.1109\/ICACS60934.2024.10473258"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Garcia, L.C., Concepcion, R., Dadios, E., and Dulay, A.E. (2022, January 1\u20134). Spectro-morphological Feature-based Machine Learning Approach for Grape Leaf Variety Classification. 
Proceedings of the 2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022, Boracay Island, Philippines.","DOI":"10.1109\/HNICEM57413.2022.10109536"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"2011","DOI":"10.1111\/1750-3841.15715","article-title":"Research on nondestructive identification of grape varieties based on EEMD-DWT and hyperspectral image","volume":"86","author":"Xu","year":"2021","journal-title":"J. Food Sci."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Landa, V., Shapira, Y., David, M., Karasik, A., Weiss, E., Reuveni, Y., and Drori, E. (2021). Accurate classification of fresh and charred grape seeds to the varietal level, using machine learning based classification method. Sci. Rep., 11.","DOI":"10.1038\/s41598-021-92559-4"},{"key":"ref_48","first-page":"186","article-title":"Grapevine Varieties Classification Using Machine Learning","volume":"Volume 11804","author":"Marques","year":"2019","journal-title":"Progress Artificial Intelligence, Proceedings of the 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, 3\u20136 September 2019"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Gutierrez, S., Fernandez-Novales, J., Diago, M.P., and Tardaguila, J. (2018). On-the-Go Hyperspectral Imaging Under Field Conditions and Machine Learning for the Classification of Grapevine Varieties. Front. Plant Sci., 9.","DOI":"10.3389\/fpls.2018.01102"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"311","DOI":"10.1016\/j.compag.2018.06.035","article-title":"Automated grapevine cultivar classification based on machine learning using leaf morpho-colorimetry, fractal dimension and near-infrared spectroscopy parameters","volume":"151","author":"Fuentes","year":"2018","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Sarosa, M., Maa\u2019rifah, P.N., Kusumawardani, M., and Al Riza, D.F. (2024). Vitis vinera L. Leaf Detection using Faster R-CNN. BIO Web Conf., 117.","DOI":"10.1051\/bioconf\/202411701021"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Peng, Y., Zhao, S., Liu, J., Peng, Y., Zhao, S., and Liu, J. (2021). Fused Deep Features-Based Grape Varieties Identification Using Support Vector Machine. Agriculture, 11.","DOI":"10.3390\/agriculture11090869"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Elkassar, A. (2024, January 21\u201323). Deep Learning based Grapevine Leaf Classification using Augmented Images and Multi-Classifier Fusion for Improved Accuracy and Precision. Proceedings of the 2024 14th International Conference on Electrical Engineering, ICEENG 2024, Cairo, Egypt.","DOI":"10.1109\/ICEENG58856.2024.10566412"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"L\u00f3pez, A., Ogayar, C.J., Feito, F.R., and Sousa, J.J. (2024). Classification of Grapevine Varieties Using UAV Hyperspectral Imaging. Remote Sens., 16.","DOI":"10.3390\/rs16122103"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"113340","DOI":"10.1016\/j.scienta.2024.113340","article-title":"Automatic detection of grape varieties with the newly proposed CNN model using ampelographic characteristics","volume":"334","author":"Terzi","year":"2024","journal-title":"Sci. Hortic."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"\u00d6zalt\u0131n, \u00d6., and Koyuncu, N. (2024). A Novel Feature Selection Approach-Based Sampling Theory on Grapevine Images Using Convolutional Neural Networks. Arab. J. Sci. 
Eng.","DOI":"10.1007\/s13369-024-09192-2"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"19","DOI":"10.1017\/S0021859624000145","article-title":"Vine variety identification through leaf image classification: A large-scale study on the robustness of five deep learning models","volume":"162","author":"Gardiman","year":"2024","journal-title":"J. Agric. Sci."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"1061","DOI":"10.1007\/s41348-024-00896-z","article-title":"Advancements in deep learning for accurate classification of grape leaves and diagnosis of grape diseases","volume":"131","author":"Kunduracioglu","year":"2024","journal-title":"J. Plant Dis. Prot."},{"key":"ref_59","first-page":"445","article-title":"Classification of grapevine leaves images using VGG-16 and VGG-19 deep learning nets","volume":"22","author":"Rajab","year":"2024","journal-title":"Telkomnika Telecommun. Comput. Electron. Control."},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"7669","DOI":"10.1007\/s00521-024-09488-2","article-title":"A new hybrid approach for grapevine leaves recognition based on ESRGAN data augmentation and GASVM feature selection","volume":"36","author":"Imak","year":"2024","journal-title":"Neural Comput. Appl."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Sun, Y., Tian, B., Ni, C., Wang, X., Fei, C., and Chen, Q. (2023, January 15\u201317). Image classification of small sample grape leaves based on deep learning. Proceedings of the ITOEC 2023\u2014IEEE 7th Information Technology and Mechatronics Engineering Conference, Chongqing, China.","DOI":"10.1109\/ITOEC57671.2023.10291790"},{"key":"ref_62","doi-asserted-by":"crossref","unstructured":"Lv, Q. (2023, January 12\u201314). Classification of Grapevine Leaf Images with Deep Learning Ensemble Models. 
Proceedings of the 2023 4th International Conference on Computer Vision, Image and Deep Learning, CVIDL 2023, Zhuhai, China.","DOI":"10.1109\/CVIDL58838.2023.10165757"},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"10132","DOI":"10.1109\/JSEN.2023.3261544","article-title":"Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models","volume":"23","author":"Magalhaes","year":"2023","journal-title":"IEEE Sens. J."},{"key":"ref_64","doi-asserted-by":"crossref","unstructured":"Carneiro, G., Neto, A., Teixeira, A., Cunha, A., and Sousa, J. (2023, January 16\u201321). Evaluating Data Augmentation for Grapevine Varieties Identification. Proceedings of the IGARSS 2023\u20142023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA.","DOI":"10.1109\/IGARSS52108.2023.10283128"},{"key":"ref_65","first-page":"351","article-title":"Can the Segmentation Improve the Grape Varieties\u2019 Identification Through Images Acquired On-Field?","volume":"Volume 14116","author":"Carneiro","year":"2023","journal-title":"Progress in Artificial Intelligence"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Gupta, R., and Gill, K.S. (2023, January 17\u201318). Grapevine Augmentation and Classification using Enhanced EfficientNetB5 Model. Proceedings of the 2023 IEEE Renewable Energy and Sustainable E-Mobility Conference, RESEM 2023, Bhopal, India.","DOI":"10.1109\/RESEM57584.2023.10236406"},{"key":"ref_67","doi-asserted-by":"crossref","unstructured":"Carneiro, G.A., Padua, L., Peres, E., Morais, R., Sousa, J.J., and Cunha, A. (2022, January 17\u201322). Segmentation as a Preprocessing Tool for Automatic Grapevine Classification. 
Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Kuala Lumpur, Malaysia.","DOI":"10.1109\/IGARSS46834.2022.9884946"},{"key":"ref_68","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1016\/j.procs.2021.12.025","article-title":"Analyzing the Fine Tuning\u2019s impact in Grapevine Classification","volume":"196","author":"Carneiro","year":"2022","journal-title":"Procedia Comput. Sci."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Carneiro, G.A., P\u00e1dua, L., Peres, E., Morais, R., Sousa, J.J., and Cunha, A. (2022, January 17\u201322). Grapevine Varieties Identification Using Vision Transformers. Proceedings of the IGARSS 2022\u20142022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia.","DOI":"10.1109\/IGARSS46834.2022.9883286"},{"key":"ref_70","doi-asserted-by":"crossref","unstructured":"Carneiro, G., Padua, L., Sousa, J.J., Peres, E., Morais, R., and Cunha, A. (2021, January 11\u201316). Grapevine Variety Identification Through Grapevine Leaf Images Acquired in Natural Environment. Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium.","DOI":"10.1109\/IGARSS47720.2021.9555141"},{"key":"ref_71","first-page":"172","article-title":"Development of a mobile application for identification of grapevine (Vitis vinifera L.) cultivars via deep learning","volume":"14","author":"Liu","year":"2021","journal-title":"Int. J. Agric. Biol. Eng."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"216","DOI":"10.1007\/978-3-030-57802-2_21","article-title":"RGB Images Driven Recognition of Grapevine Varieties","volume":"Volume 1268","author":"Junek","year":"2021","journal-title":"Advances in Intelligent Systems and Computing"},{"key":"ref_73","doi-asserted-by":"crossref","unstructured":"Nasiri, A., Taheri-Garavand, A., Fanourakis, D., Zhang, Y.D., and Nikoloudakis, N. (2021). 
Automated grapevine cultivar identification via leaf imaging and deep convolutional neural networks: A proof-of-concept study employing primary iranian varieties. Plants, 10.","DOI":"10.3390\/plants10081628"},{"key":"ref_74","doi-asserted-by":"crossref","first-page":"1211","DOI":"10.1016\/j.procs.2020.09.117","article-title":"Deep learning for grape variety recognition","volume":"176","author":"Franczyk","year":"2020","journal-title":"Procedia Comput. Sci."},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"104855","DOI":"10.1016\/j.compag.2019.104855","article-title":"Grapevine variety identification using \u201cBig Data\u201d collected with miniaturized spectrometer combined with support vector machines and convolutional neural networks","volume":"163","author":"Fernandes","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Ad\u00e3o, T., Pinho, T.M., Ferreira, A., Sousa, A., P\u00e1dua, L., Sousa, J., Sousa, J.J., Peres, E., and Morais, R. (2019). Digital Ampelographer: A CNN Based Preliminary Approach, Springer.","DOI":"10.1007\/978-3-030-30241-2_23"},{"key":"ref_77","doi-asserted-by":"crossref","unstructured":"Pereira, C.S., Morais, R., and Reis, M.J.C.S. (2019). Deep learning techniques for grape plant species identification in natural images. Sensors, 19.","DOI":"10.3390\/s19224850"},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Decker, R., and Lenz, H.J. (2007). VOS: A New Method for Visualizing Similarities Between Objects. Advances in Data Analysis, Springer.","DOI":"10.1007\/978-3-540-70981-7"},{"key":"ref_79","doi-asserted-by":"crossref","unstructured":"Peng, J., Ouyang, C., Peng, H., Hu, W., Wang, Y., and Jiang, P. (2024). MultiFuseYOLO: Redefining Wine Grape Variety Recognition through Multisource Information Fusion. 
Sensors, 24.","DOI":"10.3390\/s24092953"},{"key":"ref_80","doi-asserted-by":"crossref","first-page":"98","DOI":"10.18178\/joig.11.1.98-103","article-title":"Deep Learning in Grapevine Leaves Varieties Classification Based on Dense Convolutional Network","volume":"11","author":"Ahmed","year":"2023","journal-title":"J. Image Graph."},{"key":"ref_81","unstructured":"Santos, T., de Souza, L., Andreza, d.S., and Avila, S. (2019). Embrapa Wine Grape Instance Segmentation Dataset\u2014Embrapa WGISD, Zenodo."},{"key":"ref_82","unstructured":"Vlah, M. (2024, September 11). Grapevine Leaves. Available online: https:\/\/www.kaggle.com\/datasets\/maximvlah\/grapevine-leaves."},{"key":"ref_83","doi-asserted-by":"crossref","first-page":"108906","DOI":"10.1016\/j.dib.2023.108906","article-title":"Image dataset of important grape varieties in the commercial and consumer market","volume":"47","author":"Mohammed","year":"2023","journal-title":"Data Brief"},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"108466","DOI":"10.1016\/j.dib.2022.108466","article-title":"wGrapeUNIPD-DL: An open dataset for white grape bunch detection","volume":"43","author":"Sozzi","year":"2022","journal-title":"Data Brief"},{"key":"ref_85","doi-asserted-by":"crossref","first-page":"67494","DOI":"10.1109\/ACCESS.2018.2875862","article-title":"Computer vision and machine learning for viticulture technology","volume":"6","author":"Seng","year":"2018","journal-title":"IEEE Access"},{"key":"ref_86","unstructured":"Rodrigues, A. (1952). Um M\u00e9todo Filom\u00e9trico de Caracteriza\u00e7\u00e3o Ampelogr\u00e1fica, Universidade Nova de Lisboa."},{"key":"ref_87","unstructured":"Organisation Internationale de La Vigne et du Vin (2013). 
International List of Vine Varieties and Their Synonyms, Organisation Internationale de La Vigne et du Vin."},{"key":"ref_88","doi-asserted-by":"crossref","first-page":"1483","DOI":"10.1162\/neco.1997.9.7.1483","article-title":"A Fast Fixed-Point Algorithm for Independent Component Analysis","volume":"9","author":"Oja","year":"1997","journal-title":"Neural Comput."},{"key":"ref_89","doi-asserted-by":"crossref","first-page":"679","DOI":"10.1109\/TPAMI.1986.4767851","article-title":"A Computational Approach to Edge Detection","volume":"PAMI-8","author":"Canny","year":"1986","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_90","doi-asserted-by":"crossref","first-page":"25","DOI":"10.5815\/ijigsp.2010.02.04","article-title":"Leaf Vein Extraction Based on Gray-scale Morphology","volume":"2","author":"Zheng","year":"2010","journal-title":"Int. J. Image Graph. Signal Process."},{"key":"ref_91","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1007\/978-3-319-93000-8_12","article-title":"Pixel-Based Leaf Segmentation from Natural Vineyard Images Using Color Model and Threshold Techniques","volume":"Volume 10882","author":"Pereira","year":"2018","journal-title":"Image Analysis and Recognition"},{"key":"ref_92","doi-asserted-by":"crossref","first-page":"017008","DOI":"10.1117\/1.2151172","article-title":"Independent-component analysis for hyperspectral remote sensing imagery classification","volume":"45","author":"Du","year":"2006","journal-title":"Opt. Eng."},{"key":"ref_93","unstructured":"Vaseghi, S., and Jetelova, H. (2006, January 23\u201327). Principal and independent component analysis in image processing. 
In Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, Santa Barbara, CA, USA."},{"key":"ref_94","doi-asserted-by":"crossref","first-page":"62","DOI":"10.1109\/TSMC.1979.4310076","article-title":"A Threshold Selection Method from Gray-Level Histograms","volume":"9","author":"Otsu","year":"1979","journal-title":"IEEE Trans. Syst. Man Cybern."},{"key":"ref_95","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the 18th International Conference, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_96","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_97","doi-asserted-by":"crossref","unstructured":"Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Loy, C.C., Qiao, Y., and Tang, X. (2018). ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. arXiv.","DOI":"10.1007\/978-3-030-11021-5_5"},{"key":"ref_98","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1145\/3422622","article-title":"Generative Adversarial Networks","volume":"63","author":"Goodfellow","year":"2014","journal-title":"Commun. ACM"},{"key":"ref_99","doi-asserted-by":"crossref","unstructured":"Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., and Schmid, C. (2012). KAZE Features. Computer Vision\u2014ECCV 2012, Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 7\u201313 October 2012, Springer.","DOI":"10.1007\/978-3-642-33783-3"},{"key":"ref_100","doi-asserted-by":"crossref","unstructured":"Tareen, S.A.K., and Saleem, Z. (2018, January 3\u20134). 
A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan.","DOI":"10.1109\/ICOMET.2018.8346440"},{"key":"ref_101","unstructured":"Simonyan, K., and Zisserman, A. (2015, January 7\u20139). Very Deep Convolutional Networks for Large-Scale Image Recognition. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015\u2014Conference Track Proceedings, San Diego, CA, USA."},{"key":"ref_102","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely Connected Convolutional Networks. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_104","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_105","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018, January 18\u201323). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_106","unstructured":"Tan, M., and Le, Q.V. (2021). EfficientNetV2: Smaller Models and Faster Training. 
arXiv."},{"key":"ref_107","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2015). Rethinking the Inception Architecture for Computer Vision. arXiv.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_108","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., and Vanhoucke, V. (2016). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_109","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1186\/s40537-021-00444-8","article-title":"Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions","volume":"8","author":"Alzubaidi","year":"2021","journal-title":"J. Big Data"},{"key":"ref_110","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv."},{"key":"ref_111","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_112","unstructured":"Mehta, S., and Rastegari, M. (2022). MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. arXiv."},{"key":"ref_113","unstructured":"Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and J\u00e9gou, H. (2021). Training data-efficient image transformers & distillation through attention. arXiv."},{"key":"ref_114","doi-asserted-by":"crossref","unstructured":"Tu, Z., Talebi, H., Zhang, H., Yang, F., Milanfar, P., Bovik, A., and Li, Y. (2022). MaxViT: Multi-Axis Vision Transformer. 
arXiv.","DOI":"10.1007\/978-3-031-20053-3_27"},{"key":"ref_115","doi-asserted-by":"crossref","first-page":"2437","DOI":"10.1016\/j.patcog.2004.12.013","article-title":"A new method of feature fusion and its application in image recognition","volume":"38","author":"Sun","year":"2005","journal-title":"Pattern Recognit."},{"key":"ref_116","doi-asserted-by":"crossref","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2016). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv.","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"ref_117","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv.","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"ref_118","doi-asserted-by":"crossref","unstructured":"Peng, J., Wang, Y., Jiang, P., Zhang, R., and Chen, H. (2023). RiceDRA-Net: Precise Identification of Rice Leaf Diseases with Complex Backgrounds Using a Res-Attention Mechanism. Appl. Sci., 13.","DOI":"10.3390\/app13084928"},{"key":"ref_119","doi-asserted-by":"crossref","first-page":"9600","DOI":"10.1109\/TGRS.2020.3048128","article-title":"Attention-Based Second-Order Pooling Network for Hyperspectral Image Classification","volume":"59","author":"Xue","year":"2021","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_120","doi-asserted-by":"crossref","unstructured":"Liu, K.H., Yang, M.H., Huang, S.T., and Lin, C. (2022). Plant Species Classification Based on Hyperspectral Imaging via a Lightweight Convolutional Neural Network Model. Front. Plant Sci., 13.","DOI":"10.3389\/fpls.2022.855660"},{"key":"ref_121","unstructured":"Moraga, J., and Duzgun, H.S. (2022). JigsawHSI: A network for Hyperspectral Image classification. arXiv."},{"key":"ref_122","unstructured":"Chakraborty, T., and Trehan, U. (2021). SpectralNET: Exploring Spatial-Spectral WaveletCNN for Hyperspectral Image Classification. 
arXiv."},{"key":"ref_123","doi-asserted-by":"crossref","first-page":"277","DOI":"10.1109\/LGRS.2019.2918719","article-title":"HybridSN: Exploring 3-D\u20132-D CNN Feature Hierarchy for Hyperspectral Image Classification","volume":"17","author":"Roy","year":"2020","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_124","unstructured":"Kingma, D.P., and Ba, J.L. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_125","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Goyal, P., Girshick, R.B., He, K., and Doll\u00e1r, P. (2017). Focal Loss for Dense Object Detection. arXiv.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_126","doi-asserted-by":"crossref","unstructured":"Jiang, T., Zhou, J., Xie, B., Liu, L., Ji, C., Liu, Y., Liu, B., and Zhang, B. (2024). Improved YOLOv8 Model for Lightweight Pigeon Egg Detection. Animals, 14.","DOI":"10.3390\/ani14081226"},{"key":"ref_127","unstructured":"Mukhoti, J., Kulharia, V., Sanyal, A., Golodetz, S., Torr, P.H., and Dokania, P.K. (2020, January 6\u201312). Calibrating deep neural networks using focal loss. Proceedings of the Advances in Neural Information Processing Systems 33 (NeurIPS 2020), Virtual."},{"key":"ref_128","unstructured":"Yu, Z., Huang, H., Chen, W., Su, Y., Liu, Y., and Wang, X. (2022). YOLO-FaceV2: A Scale and Occlusion Aware Face Detector. arXiv."},{"key":"ref_129","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","article-title":"Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI","volume":"58","author":"Bennetot","year":"2020","journal-title":"Inf. Fusion"},{"key":"ref_130","doi-asserted-by":"crossref","first-page":"336","DOI":"10.1007\/s11263-019-01228-7","article-title":"Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization","volume":"128","author":"Selvaraju","year":"2016","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_131","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016). \u201cWhy Should I Trust You?\u201d: Explaining the Predictions of Any Classifier, Association for Computing Machinery.","DOI":"10.18653\/v1\/N16-3020"},{"key":"ref_132","doi-asserted-by":"crossref","unstructured":"Cui, Y., Jia, M., Lin, T.Y., Song, Y., and Belongie, S. (2019, January 15\u201320). Class-Balanced Loss Based on Effective Number of Samples. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00949"},{"key":"ref_133","doi-asserted-by":"crossref","unstructured":"van Leeuwen, C. (2010). Terroir: The effect of the physical environment on vine growth, grape ripening and wine sensory attributes. Managing Wine Quality: Viticulture and Wine Quality, Woodhead Publishing.","DOI":"10.1533\/9781845699284.3.273"},{"key":"ref_134","doi-asserted-by":"crossref","first-page":"114181","DOI":"10.1016\/j.eswa.2020.114181","article-title":"Survey of feature extraction and classification techniques to identify plant through leaves","volume":"167","author":"Sachar","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_135","unstructured":"Barratt, S., and Sharma, R. (2018). A Note on the Inception Score. arXiv."},{"key":"ref_136","unstructured":"Ravuri, S., and Vinyals, O. Seeing is Not Necessarily Believing: Limitations of BigGANs for Data Augmentation. 2019; pp. 1\u20135."},{"key":"ref_137","first-page":"218","article-title":"How good is my GAN?","volume":"Volume 11206","author":"Shmelkov","year":"2018","journal-title":"Computer Vision\u2014ECCV 2018, Proceedings of the 5th European Conference, Munich, Germany, 8\u201314 September 2018"},{"key":"ref_138","first-page":"514","article-title":"Early Prediction of Plant Diseases using CNN and GANs","volume":"12","author":"Gomaa","year":"2021","journal-title":"Int. J. Adv. Comput. Sci. 
Appl."},{"key":"ref_139","first-page":"46","article-title":"Image-to-Image Translation with GAN for Synthetic Data Augmentation in Plant Disease Datasets","volume":"8","author":"Nazki","year":"2019","journal-title":"Smart Media J."},{"key":"ref_140","doi-asserted-by":"crossref","unstructured":"Talukdar, B. (2020, January 26\u201328). Handling of Class Imbalance for Plant Disease Classification with Variants of GANs. Proceedings of the 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India.","DOI":"10.1109\/ICIIS51140.2020.9342728"},{"key":"ref_141","doi-asserted-by":"crossref","unstructured":"Yilma, G., Belay, S., Qin, Z., Gedamu, K., and Ayalew, M. (2020, January 18\u201320). Plant Disease Classification Using Two Pathway Encoder GAN Data Generation. Proceedings of the 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing, ICCWAMTIP 2020, Chengdu, China.","DOI":"10.1109\/ICCWAMTIP51612.2020.9317494"},{"key":"ref_142","doi-asserted-by":"crossref","first-page":"172882","DOI":"10.1109\/ACCESS.2020.3025196","article-title":"GANS-based data augmentation for citrus disease severity detection using deep learning","volume":"8","author":"Zeng","year":"2020","journal-title":"IEEE Access"},{"key":"ref_143","doi-asserted-by":"crossref","unstructured":"Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q.V. (2018). AutoAugment: Learning Augmentation Policies from Data. arXiv.","DOI":"10.1109\/CVPR.2019.00020"},{"key":"ref_144","unstructured":"Zhang, H., Cisse, M., Dauphin, Y.N., and Lopez-Paz, D. (2018). mixup: Beyond Empirical Risk Minimization. arXiv."},{"key":"ref_145","unstructured":"DeVries, T., and Taylor, G.W. (2017). Dataset Augmentation in Feature Space. arXiv."},{"key":"ref_146","doi-asserted-by":"crossref","unstructured":"Chu, P., Bian, X., Liu, S., and Ling, H. (2020). Feature Space Augmentation for Long-Tailed Data. 
arXiv.","DOI":"10.1007\/978-3-030-58526-6_41"},{"key":"ref_147","unstructured":"Giese, G., Velasco-Cruz, C., and Leonardelli, M. (2020). Grapevine Phenology: Annual Growth and Development, College of Agricultural, Consumer and Environmental Sciences."},{"key":"ref_148","doi-asserted-by":"crossref","first-page":"311","DOI":"10.1016\/j.compag.2018.01.009","article-title":"Deep learning models for plant disease detection and diagnosis","volume":"145","author":"Ferentinos","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_149","doi-asserted-by":"crossref","unstructured":"Kc, K., Yin, Z., Li, D., and Wu, Z. (2021). Impacts of Background Removal on Convolutional Neural Networks for Plant Disease Classification In-Situ. Agriculture, 11.","DOI":"10.3390\/agriculture11090827"},{"key":"ref_150","doi-asserted-by":"crossref","first-page":"93","DOI":"10.18178\/joig.4.2.93-98","article-title":"Improving Leaf Classification Rate via Background Removal and ROI Extraction","volume":"4","author":"Wu","year":"2016","journal-title":"J. Image Graph."},{"key":"ref_151","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive Image Features from Scale-Invariant Keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_152","doi-asserted-by":"crossref","unstructured":"Leonardis, A., Bischof, H., and Pinz, A. (2006). SURF: Speeded Up Robust Features. Computer Vision\u2014ECCV 2006, Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7\u201313 May 2006, Springer.","DOI":"10.1007\/11744023"},{"key":"ref_153","doi-asserted-by":"crossref","unstructured":"Howard, A., Sandler, M., Chen, B., Wang, W., Chen, L.C., Tan, M., Chu, G., Vasudevan, V., Zhu, Y., and Pang, R. (November, January 27). Searching for mobileNetV3. 
Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00140"},{"key":"ref_154","unstructured":"Tan, M., and Le, Q.V. (2019, January 9\u201315). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA."},{"key":"ref_155","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022). A ConvNet for the 2020s. arXiv.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_156","doi-asserted-by":"crossref","unstructured":"Woo, S., Debnath, S., Hu, R., Chen, X., Liu, Z., Kweon, I.S., and Xie, S. (2023). ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders. arXiv.","DOI":"10.1109\/CVPR52729.2023.01548"},{"key":"ref_157","unstructured":"Sabour, S., Frosst, N., and Hinton, G.E. (2017, January 4\u20139). Dynamic Routing Between Capsules. Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_158","first-page":"1295","article-title":"Capsule Networks\u2014A survey","volume":"34","author":"Edward","year":"2022","journal-title":"J. King Saud Univ.-Comput. Inf. Sci."},{"key":"ref_159","doi-asserted-by":"crossref","first-page":"757","DOI":"10.1007\/s00521-023-09058-y","article-title":"Capsule network-based disease classification for Vitis vinifera leaves","volume":"36","author":"Andrushia","year":"2024","journal-title":"Neural Comput. Appl."},{"key":"ref_160","unstructured":"Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., and Dosovitskiy, A. (2021, January 6\u201314). Do Vision Transformers See Like Convolutional Neural Networks?. 
Proceedings of the Advances in Neural Information Processing Systems 34 (NeurIPS 2021), Virtual."},{"key":"ref_161","unstructured":"Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., and Beyer, L. (2021). How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers. arXiv."},{"key":"ref_162","unstructured":"El-Nouby, A., Izacard, G., Touvron, H., Laptev, I., Jegou, H., and Grave, E. (2021). Are Large-scale Datasets Necessary for Self-Supervised Pre-training?. arXiv."},{"key":"ref_163","unstructured":"Doersch, C., Gupta, A., and Zisserman, A. (2021). CrossTransformers: Spatially-aware few-shot transfer. arXiv."},{"key":"ref_164","doi-asserted-by":"crossref","first-page":"105760","DOI":"10.1016\/j.compag.2020.105760","article-title":"A survey of public datasets for computer vision tasks in precision agriculture","volume":"178","author":"Lu","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_165","doi-asserted-by":"crossref","first-page":"105117","DOI":"10.1016\/j.compag.2019.105117","article-title":"Unsupervised image translation using adversarial networks for improved plant disease recognition","volume":"168","author":"Nazki","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_166","doi-asserted-by":"crossref","first-page":"101475","DOI":"10.1016\/j.ecoinf.2021.101475","article-title":"Automated feature-specific tree species identification from natural images using deep semi-supervised learning","volume":"66","author":"Homan","year":"2021","journal-title":"Ecol. Inform."},{"key":"ref_167","doi-asserted-by":"crossref","first-page":"106510","DOI":"10.1016\/j.compag.2021.106510","article-title":"Self-supervised contrastive learning on agricultural images","volume":"191","author":"Nalpantidis","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_168","doi-asserted-by":"crossref","unstructured":"Van Horn, G., Cole, E., Beery, S., Wilber, K., Belongie, S., and Mac Aodha, O. 
(2021). Benchmarking Representation Learning for Natural World Image Collections. arXiv.","DOI":"10.1109\/CVPR46437.2021.01269"},{"key":"ref_169","doi-asserted-by":"crossref","first-page":"249","DOI":"10.1016\/j.neunet.2018.07.011","article-title":"A systematic study of the class imbalance problem in convolutional neural networks","volume":"106","author":"Buda","year":"2018","journal-title":"Neural Netw."},{"key":"ref_170","doi-asserted-by":"crossref","first-page":"105542","DOI":"10.1016\/j.compag.2020.105542","article-title":"Few-Shot Learning approach for plant disease classification using images taken in the field","volume":"175","author":"Picon","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_171","doi-asserted-by":"crossref","unstructured":"Park, S., Lim, J., Jeon, Y., and Choi, J.Y. (2021, January 10\u201317). Influence-Balanced Loss for Imbalanced Visual Classification. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00077"},{"key":"ref_172","doi-asserted-by":"crossref","unstructured":"Wei, X.S., Song, Y.Z., Mac Aodha, O., Wu, J., Peng, Y., Tang, J., Yang, J., and Belongie, S. (2021). Fine-Grained Image Analysis with Deep Learning: A Survey. arXiv.","DOI":"10.1109\/TPAMI.2021.3126648"},{"key":"ref_173","unstructured":"Lin, T.Y., RoyChowdhury, A., and Maji, S. (2017). Bilinear CNNs for Fine-grained Visual Recognition. arXiv."},{"key":"ref_174","doi-asserted-by":"crossref","unstructured":"Gao, Y., Beijbom, O., Zhang, N., and Darrell, T. (2016, January 27\u201330). Compact Bilinear Pooling. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.41"},{"key":"ref_175","doi-asserted-by":"crossref","first-page":"4996","DOI":"10.1109\/TIP.2020.2977457","article-title":"Multi-Objective Matrix Normalization for Fine-grained Visual Recognition","volume":"29","author":"Min","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_176","doi-asserted-by":"crossref","unstructured":"Dubey, A., Gupta, O., Guo, P., Raskar, R., Farrell, R., and Naik, N. (2018). Pairwise Confusion for Fine-Grained Visual Classification. arXiv.","DOI":"10.1007\/978-3-030-01258-8_5"},{"key":"ref_177","doi-asserted-by":"crossref","unstructured":"Sun, G., Cholakkal, H., Khan, S., Khan, F.S., and Shao, L. (2019). Fine-grained Recognition: Accounting for Subtle Differences between Similar Classes. arXiv.","DOI":"10.1609\/aaai.v34i07.6882"},{"key":"ref_178","doi-asserted-by":"crossref","first-page":"4683","DOI":"10.1109\/TIP.2020.2973812","article-title":"The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification","volume":"29","author":"Chang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_179","unstructured":"Subramanya, A., Pillai, V., and Pirsiavash, H. (November, January 27). Fooling network interpretation in image classification. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_180","unstructured":"Alvarez-Melis, D., and Jaakkola, T.S. (2018). On the Robustness of Interpretability Methods. arXiv."},{"key":"ref_181","unstructured":"Garreau, D., and von Luxburg, U. (2020, January 26\u201328). Explaining the Explainer: A First Theoretical Analysis of LIME. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108, Virtual."},{"key":"ref_182","unstructured":"Stiffler, M., Hudler, A., Lee, E., Braines, D., Mott, D., and Harborne, D. 
(2018, January 18\u201320). An Analysis of Reliability Using LIME with Deep Learning Models. Proceedings of the Annual Fall Meeting of the Distributed Analytics and Information Science International Technology Alliance, AFM DAIS ITA, Madrid, Spain."},{"key":"ref_183","doi-asserted-by":"crossref","unstructured":"Kapishnikov, A., Venugopalan, S., Avci, B., Wedin, B., Terry, M., and Bolukbasi, T. (2021, January 20\u201325). Guided Integrated Gradients: An Adaptive Path Method for Removing Noise. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00501"},{"key":"ref_184","unstructured":"Kapishnikov, A., Bolukbasi, T., Viegas, F., and Terry, M. (November, January 27). XRAI: Better Attributions Through Regions. Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_185","unstructured":"Smilkov, D., Thorat, N., Kim, B., Vi\u00e9gas, F., and Wattenberg, M. (2017). SmoothGrad: Removing noise by adding noise. arXiv."},{"key":"ref_186","doi-asserted-by":"crossref","first-page":"106458","DOI":"10.1016\/j.engappai.2023.106458","article-title":"CWAN: Self-supervised learning for deep grape disease image composition","volume":"123","author":"Jin","year":"2023","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_187","doi-asserted-by":"crossref","first-page":"107055","DOI":"10.1016\/j.compag.2022.107055","article-title":"GrapeGAN: Unsupervised image enhancement for improved grape leaf disease recognition","volume":"198","author":"Jin","year":"2022","journal-title":"Comput. Electron. Agric."},{"key":"ref_188","doi-asserted-by":"crossref","first-page":"122717","DOI":"10.1016\/j.eswa.2023.122717","article-title":"Learning multiple attention transformer super-resolution method for grape disease recognition","volume":"241","author":"Jin","year":"2024","journal-title":"Expert Syst. 
Appl."},{"key":"ref_189","doi-asserted-by":"crossref","first-page":"4843","DOI":"10.1109\/ACCESS.2020.3048415","article-title":"Machine Learning Applications for Precision Agriculture: A Comprehensive Review","volume":"9","author":"Sharma","year":"2021","journal-title":"IEEE Access"},{"key":"ref_190","doi-asserted-by":"crossref","first-page":"108412","DOI":"10.1016\/j.compag.2023.108412","article-title":"Label-efficient learning in agriculture: A comprehensive review","volume":"215","author":"Li","year":"2023","journal-title":"Comput. Electron. Agric."},{"key":"ref_191","unstructured":"Autz, J., Mishra, S., Herrmann, L., and Hertzberg, J. (2022). The pitfalls of transfer learning in computer vision for agriculture. GIL-Jahrestagung, K\u00fcnstliche Intelligenz in der Agrar- und Ern\u00e4hrungswirtschaft, Gesellschaft f\u00fcr Informatik e.V."},{"key":"ref_192","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1109\/MITP.2020.2986122","article-title":"Multimodal AI to Improve Agriculture","volume":"23","author":"Parr","year":"2021","journal-title":"IT Prof."}],"container-title":["AgriEngineering"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2624-7402\/6\/4\/277\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T16:53:15Z","timestamp":1760115195000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2624-7402\/6\/4\/277"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,16]]},"references-count":192,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["agriengineering6040277"],"URL":"https:\/\/doi.org\/10.3390\/agriengineering6040277","relation":{},"ISSN":["2624-7402"],"issn-type":[{"value":"2624-7402","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,16]]}}}