{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,31]],"date-time":"2026-03-31T10:34:03Z","timestamp":1774953243850,"version":"3.50.1"},"reference-count":56,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,5,29]],"date-time":"2021-05-29T00:00:00Z","timestamp":1622246400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100009171","name":"Natural Resources Conservation Service","doi-asserted-by":"publisher","award":["000-000"],"award-info":[{"award-number":["000-000"]}],"id":[{"id":"10.13039\/100009171","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Recent computer vision techniques based on convolutional neural networks (CNNs) are considered state-of-the-art tools in weed mapping. However, their performance has been shown to be sensitive to image quality degradation. Variation in lighting conditions adds another level of complexity to weed mapping. We focus on determining the influence of image quality and light consistency on the performance of CNNs in weed mapping by simulating the image formation pipeline. Faster Region-based CNN (R-CNN) and Mask R-CNN were used as CNN examples for object detection and instance segmentation, respectively, while semantic segmentation was represented by Deeplab-v3. The degradations simulated in this study included resolution reduction, overexposure, Gaussian blur, motion blur, and noise. The results showed that the CNN performance was most impacted by resolution, regardless of plant size. When the training and testing images had the same quality, Faster R-CNN and Mask R-CNN were moderately tolerant to low levels of overexposure, Gaussian blur, motion blur, and noise. 
Deeplab-v3, on the other hand, tolerated overexposure, motion blur, and noise at all tested levels. In most cases, quality inconsistency between the training and testing images reduced CNN performance. However, CNN models trained on low-quality images were more tolerant of quality inconsistency than those trained on high-quality images. Light inconsistency also reduced CNN performance. Increasing the diversity of lighting conditions in the training images may alleviate the performance reduction, but simply increasing the number of images with the same lighting condition does not provide the same benefit. These results provide insights into the impact of image quality and light consistency on CNN performance. The quality threshold established in this study can be used to guide the selection of camera parameters in future weed mapping applications.<\/jats:p>","DOI":"10.3390\/rs13112140","type":"journal-article","created":{"date-parts":[[2021,5,31]],"date-time":"2021-05-31T03:45:29Z","timestamp":1622432729000},"page":"2140","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":48,"title":["Influence of Image Quality and Light Consistency on the Performance of Convolutional Neural Networks for Weed Mapping"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4220-8566","authenticated-orcid":false,"given":"Chengsong","family":"Hu","sequence":"first","affiliation":[{"name":"Department of Soil and Crop Sciences, Texas A&M University, College Station, TX 77843, USA"}]},{"given":"Bishwa B.","family":"Sapkota","sequence":"additional","affiliation":[{"name":"Department of Soil and Crop Sciences, Texas A&M University, College Station, TX 77843, USA"}]},{"given":"J. 
Alex","family":"Thomasson","sequence":"additional","affiliation":[{"name":"Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39759, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1107-7148","authenticated-orcid":false,"given":"Muthukumar V.","family":"Bagavathiannan","sequence":"additional","affiliation":[{"name":"Department of Soil and Crop Sciences, Texas A&M University, College Station, TX 77843, USA"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,29]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"173","DOI":"10.1016\/S0168-1699(02)00100-X","article-title":"Machine vision technology for agricultural applications","volume":"36","author":"Chen","year":"2002","journal-title":"Comput. Electron. Agric."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"487","DOI":"10.1007\/s11947-010-0411-8","article-title":"Advances in machine vision applications for automatic inspection and quality evaluation of fruits and vegetables","volume":"4","author":"Cubero","year":"2011","journal-title":"Food Bioprocess Technol."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1007\/s13197-011-0321-4","article-title":"Machine vision system: A tool for quality inspection of food and agricultural products","volume":"49","author":"Patel","year":"2012","journal-title":"J. Food Sci. Technol."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"312","DOI":"10.1017\/S0021859618000436","article-title":"A review of the use of convolutional neural networks in agriculture","volume":"156","author":"Kamilaris","year":"2018","journal-title":"J. Agric. 
Sci."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/s13007-020-00570-z","article-title":"Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields","volume":"16","author":"Gao","year":"2020","journal-title":"Plant Methods"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Dodge, S., and Karam, L. (2016, January 6\u20138). Understanding how image quality affects deep neural networks. Proceedings of the 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal.","DOI":"10.1109\/QoMEX.2016.7498955"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Karahan, S., Yildirum, M.K., Kirtac, K., Rende, F.S., Butun, G., and Ekenel, H.K. (2016, January 21\u201323). How image degradations affect deep CNN-based face recognition?. Proceedings of the 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany.","DOI":"10.1109\/BIOSIG.2016.7736924"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Dodge, S., and Karam, L. (August, January 31). A study and comparison of human and deep learning recognition performance under visual distortions. Proceedings of the 2017 26th International Conference on Computer Communication and Networks (ICCCN), Vancouver, Canada.","DOI":"10.1109\/ICCCN.2017.8038465"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Liu, D., and Huang, T. (2018, January 15\u201319). Survey of face detection on low-quality images. Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00121"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1239","DOI":"10.1109\/TPAMI.2019.2950923","article-title":"Effects of image degradation and degradation removal to CNN-based image classification","volume":"43","author":"Pei","year":"2019","journal-title":"IEEE Trans. 
Pattern Anal. Mach. Intell."},{"key":"ref_11","first-page":"1","article-title":"Recovering high dynamic range radiance maps from photographs","volume":"2008","author":"Debevec","year":"2008","journal-title":"ACM SIGGRAPH"},{"key":"ref_12","first-page":"5","article-title":"An introduction to appearance analysis","volume":"13","author":"Harold","year":"2001","journal-title":"GATF World"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"460","DOI":"10.1109\/30.468045","article-title":"Automatic white balance for digital still camera","volume":"41","author":"Liu","year":"1995","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Akenine-M\u00f6ller, T., Haines, E., and Hoffman, N. (2019). Real-Time Rendering, CRC Press. [4th ed.].","DOI":"10.1201\/9781315365459"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Peng, B., Yang, H., Li, D., and Zhang, Z. (2018, January 15\u201319). An empirical study of face recognition under variations. Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00052"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Gao, X., Li, S.Z., Liu, R., and Zhang, P. (2007). Standardization of face image sample quality. International Conference on Biometrics, Springer.","DOI":"10.1007\/978-3-540-74549-5_26"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"315","DOI":"10.1016\/j.patrec.2009.09.010","article-title":"The role of intensity standardization in medical image registration","volume":"31","author":"Udupa","year":"2010","journal-title":"Pattern Recognit. Lett."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1045","DOI":"10.1177\/0278364917720510","article-title":"Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields","volume":"36","author":"Chebrolu","year":"2017","journal-title":"Int. J. 
Robot. Res."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Kounalakis, T., Malinowski, M.J., Chelini, L., Triantafyllidis, G.A., and Nalpantidis, L. (2018, January 16\u201318). A robotic system employing deep learning for visual recognition and detection of weeds in grasslands. Proceedings of the 2018 IEEE International Conference on Imaging Systems and Techniques (IST), Krakow, Poland.","DOI":"10.1109\/IST.2018.8577153"},{"key":"ref_20","first-page":"1","article-title":"DeepWeeds: A multiclass weed species image dataset for deep learning","volume":"9","author":"Olsen","year":"2019","journal-title":"Sci. Rep."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Farrell, J.E., Xiao, F., Catrysse, P.B., and Wandell, B.A. (2004, January 18\u201322). A simulation tool for evaluating digital camera image quality. Proceedings of the Electronic Imaging Symposium, San Jose, CA, USA.","DOI":"10.1117\/12.537474"},{"key":"ref_22","unstructured":"Tsin, Y., Ramesh, V., and Kanade, T. (2001, January 7\u201314). Statistical calibration of CCD imaging process. Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, Canada."},{"key":"ref_23","unstructured":"Liu, C., Freeman, W.T., Szeliski, R., and Kang, S.B. (2006, January 17\u201322). Noise estimation from a single image. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201906), New York, NY, USA."},{"key":"ref_24","unstructured":"Sumner, R. (2014). 
Processing RAW Images in MATLAB, Department of Electrical Engineering, University of California Santa Cruz."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"267","DOI":"10.1109\/34.276126","article-title":"Radiometric CCD camera calibration and noise estimation","volume":"16","author":"Healey","year":"1994","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"A80","DOI":"10.1364\/AO.51.000A80","article-title":"Digital camera simulation","volume":"51","author":"Farrell","year":"2012","journal-title":"Appl. Opt."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Catrysse, P.B., and Wandell, B.A. (2005, January 16\u201320). Roadmap for CMOS image sensors: Moore meets Planck and Sommerfeld. Proceedings of the Electronic Imaging, San Jose, CA, USA.","DOI":"10.1117\/12.592483"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Hu, H., and De Haan, G. (2006, January 8\u201311). Low cost robust blur estimator. Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA.","DOI":"10.1109\/ICIP.2006.312411"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"159","DOI":"10.1364\/JOSAA.25.000159","article-title":"Measurement of the point-spread function of a noisy imaging system","volume":"25","author":"Claxton","year":"2008","journal-title":"J. Opt. Soc. Am. A"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"4097","DOI":"10.1364\/AO.54.004097","article-title":"Slant edge method for point spread function estimation","volume":"54","author":"Fan","year":"2015","journal-title":"Appl. Opt."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/MSP.2005.1407713","article-title":"Color image processing pipeline","volume":"22","author":"Ramanath","year":"2005","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_32","unstructured":"European Machine Vision Association (2016). 
EMVA standard 1288, standard for characterization of image sensors and cameras. Release, 3, 6\u201327."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"376","DOI":"10.1038\/256376a0","article-title":"Signal-to-noise ratio of electron micrographs obtained by cross correlation","volume":"256","author":"Frank","year":"1975","journal-title":"Nature"},{"key":"ref_34","unstructured":"Carlsson, K. (2009). Imaging Physics, KTH Applied Physics Department."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"1144","DOI":"10.1109\/TCE.2006.273126","article-title":"Design considerations of color image processing pipeline for digital cameras","volume":"52","author":"Kao","year":"2006","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ren, Y., and Cheng, X. (2018, January 8\u201310). Review of convolutional neural network optimization and training in image processing. Proceedings of the Tenth International Symposium on Precision Engineering Measurements and Instrumentation, Kunming, China.","DOI":"10.1117\/12.2512087"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., and Garcia-Rodriguez, J. (2017). A review on deep learning techniques applied to semantic segmentation. arXiv.","DOI":"10.1016\/j.asoc.2018.05.018"},{"key":"ref_39","unstructured":"Chen, L.C., Papandreou, G., Schroff, F., and Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). 
Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_41","unstructured":"Varghese, D., Wanat, R., and Mantiuk, R.K. (2014, January 4\u20135). Colorimetric calibration of high dynamic range images with a ColorChecker chart. Proceedings of the HDRi 2014: Second International Conference and SME Workshop on HDR Imaging, Sarajevo, Bosnia and Herzegovina."},{"key":"ref_42","unstructured":"Pascale, D. (2006). RGB Coordinates of the Macbeth Color Checker, The BabelColor Company."},{"key":"ref_43","unstructured":"Lindbloom, B.J., and RGB\/XYZ Matrices (2021, January 23). Bruce Lindbloom\u2019s Web Site. Available online: http:\/\/www.brucelindbloom.com\/index.html?Eqn_RGB_XYZ_Matrix.html."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1109\/5.982410","article-title":"Perceptual assessment of demosaicing algorithm performance","volume":"90","author":"Longere","year":"2002","journal-title":"Proc. IEEE"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Rani, S.K., and Hans, W.J. (2013, January 3\u20135). FPGA implementation of bilinear interpolation algorithm for CFA demosaicing. Proceedings of the 2013 International Conference on Communication and Signal Processing, Melmaruvathur, India.","DOI":"10.1109\/iccsp.2013.6577178"},{"key":"ref_46","unstructured":"Gonzales, R.C., and Woods, R.E. (2018). Digital image processing, Pearson. [4th ed.]."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Darbon, J., Cunha, A., Chan, T.F., Osher, S., and Jensen, G.J. (2008, January 14\u201317). Fast nonlocal filtering applied to electron cryomicroscopy. Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France.","DOI":"10.1109\/ISBI.2008.4541250"},{"key":"ref_48","unstructured":"(2021, January 23). Image Scaling Wikipedia. 
Available online: https:\/\/en.wikipedia.org\/wiki\/Image_scaling."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Guo, D., Cheng, Y., Zhuo, S., and Sim, T. (2010, January 13\u201318). Correcting over-exposure in photographs. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA.","DOI":"10.1109\/CVPR.2010.5540170"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Pr\u00e4kel, D. (2016). Photography Exposure, Bloomsbury Publishing. [2nd ed.].","DOI":"10.5040\/9781474222495"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1819","DOI":"10.1364\/AO.46.001819","article-title":"Gaussian approximations of fluorescence microscope point-spread function models","volume":"46","author":"Zhang","year":"2007","journal-title":"Appl. Opt."},{"key":"ref_52","first-page":"176","article-title":"Review of motion blur estimation techniques","volume":"1","author":"Tiwari","year":"2013","journal-title":"J. Image Graph."},{"key":"ref_53","first-page":"8026","article-title":"PyTorch: An imperative style, high-performance deep learning library","volume":"32","author":"Paszke","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014). Microsoft COCO: Common objects in context. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_55","unstructured":"Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1007\/s11263-014-0733-5","article-title":"The pascal visual object classes challenge: A retrospective","volume":"111","author":"Everingham","year":"2015","journal-title":"Int. J. Comput. 
Vis."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/11\/2140\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:10:22Z","timestamp":1760163022000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/11\/2140"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,29]]},"references-count":56,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,6]]}},"alternative-id":["rs13112140"],"URL":"https:\/\/doi.org\/10.3390\/rs13112140","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,29]]}}}