{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T23:34:23Z","timestamp":1773876863828,"version":"3.50.1"},"reference-count":58,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2021,2,17]],"date-time":"2021-02-17T00:00:00Z","timestamp":1613520000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100014440","name":"MINISTERIO DE CIENCIA, INNOVACI\u00d3N Y UNIVERSIDADES","doi-asserted-by":"publisher","award":["BFU 2017-88300-C2-1-R"],"award-info":[{"award-number":["BFU 2017-88300-C2-1-R"]}],"id":[{"id":"10.13039\/100014440","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100014440","name":"MINISTERIO DE CIENCIA, INNOVACI\u00d3N Y UNIVERSIDADES","doi-asserted-by":"publisher","award":["BFU 2017-88300-C2-2-R"],"award-info":[{"award-number":["BFU 2017-88300-C2-2-R"]}],"id":[{"id":"10.13039\/100014440","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001872","name":"CDTI","doi-asserted-by":"publisher","award":["5117\/17CTA-P"],"award-info":[{"award-number":["5117\/17CTA-P"]}],"id":[{"id":"10.13039\/501100001872","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Current predefined architectures for deep learning are computationally very heavy and use tens of millions of parameters. Thus, computational costs may be prohibitive for many experimental or technological setups. We developed an ad hoc architecture for the classification of multispectral images using deep learning techniques. The architecture, called 3DeepM, is composed of 3D filter banks especially designed for the extraction of spatial-spectral features in multichannel images. 
The new architecture has been tested on a sample of 12210 multispectral images of seedless table grape varieties: Autumn Royal, Crimson Seedless, Itum4, Itum5 and Itum9. 3DeepM was able to classify 100% of the images and obtained the best overall results in terms of accuracy, number of classes, number of parameters and training time compared to similar work. In addition, this paper presents a flexible and reconfigurable computer vision system designed for the acquisition of multispectral images in the range of 400 nm to 1000 nm. The vision system enabled the creation of the first dataset consisting of 12210 37-channel multispectral images (12 VIS + 25 IR) of five seedless table grape varieties that have been used to validate the 3DeepM architecture. Compared to predefined classification architectures such as AlexNet, ResNet or ad hoc architectures with a very high number of parameters, 3DeepM shows the best classification performance despite using 130-fold fewer parameters than the architecture to which it was compared. 3DeepM can be used in a multitude of applications that use multispectral images, such as remote sensing or medical diagnosis. 
In addition, the small number of parameters of 3DeepM makes it ideal for application in online classification systems aboard autonomous robots or unmanned vehicles.<\/jats:p>","DOI":"10.3390\/rs13040729","type":"journal-article","created":{"date-parts":[[2021,2,17]],"date-time":"2021-02-17T04:49:01Z","timestamp":1613537341000},"page":"729","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["3DeepM: An Ad Hoc Architecture Based on Deep Learning Methods for Multispectral Image Classification"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8367-2934","authenticated-orcid":false,"given":"Pedro J.","family":"Navarro","sequence":"first","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda de Telecomunicaci\u00f3n (DSIE), Campus Muralla del Mar, s\/n, Universidad Polit\u00e9cnica de Cartagena, 30202 Cartagena, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7408-6255","authenticated-orcid":false,"given":"Leanne","family":"Miller","sequence":"additional","affiliation":[{"name":"Escuela T\u00e9cnica Superior de Ingenier\u00eda de Telecomunicaci\u00f3n (DSIE), Campus Muralla del Mar, s\/n, Universidad Polit\u00e9cnica de Cartagena, 30202 Cartagena, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3677-2964","authenticated-orcid":false,"given":"Alberto","family":"Gila-Navarro","sequence":"additional","affiliation":[{"name":"Gen\u00e9tica Molecular, Instituto de Biotecnolog\u00eda Vegetal, Edificio I+D+I, Plaza del Hospital s\/n, Universidad Polit\u00e9cnica de Cartagena, 30202 Cartagena, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2543-6081","authenticated-orcid":false,"given":"Mar\u00eda Victoria","family":"D\u00edaz-Gali\u00e1n","sequence":"additional","affiliation":[{"name":"Gen\u00e9tica Molecular, Instituto de Biotecnolog\u00eda Vegetal, Edificio I+D+I, Plaza del Hospital s\/n, Universidad Polit\u00e9cnica de Cartagena, 30202 
Cartagena, Spain"}]},{"given":"Diego J.","family":"Aguila","sequence":"additional","affiliation":[{"name":"Sociedad Cooperativa Las Cabezuelas, 30840 Alhama de Murcia, Spain"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4693-9948","authenticated-orcid":false,"given":"Marcos","family":"Egea-Cortines","sequence":"additional","affiliation":[{"name":"Gen\u00e9tica Molecular, Instituto de Biotecnolog\u00eda Vegetal, Edificio I+D+I, Plaza del Hospital s\/n, Universidad Polit\u00e9cnica de Cartagena, 30202 Cartagena, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2021,2,17]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/s13007-017-0233-z","article-title":"Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress","volume":"13","author":"Lowe","year":"2017","journal-title":"Plant Methods"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Pereira, C.S., Morais, R., and Reis, M.J.C.S. (2019). Deep Learning Techniques for Grape Plant Species Identification in Natural Images. Sensors, 19.","DOI":"10.3390\/s19224850"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"105247","DOI":"10.1016\/j.compag.2020.105247","article-title":"Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association","volume":"170","author":"Santos","year":"2020","journal-title":"Comput. Electron. Agric."},{"key":"ref_4","first-page":"41","article-title":"Classification of Grape Type Using Deep Learning","volume":"3","author":"Alshawwa","year":"2020","journal-title":"Int. J. Acad. Eng. Res."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"70","DOI":"10.1016\/j.compag.2018.02.016","article-title":"Deep learning in agriculture: A survey","volume":"147","author":"Kamilaris","year":"2018","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Hsieh, T.H., and Kiang, J.F. (2020). Comparison of CNN algorithms on hyperspectral image classification in agricultural lands. Sensors, 20.","DOI":"10.3390\/s20061734"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1949","DOI":"10.1007\/s12524-019-01041-2","article-title":"Evaluation of Deep Learning CNN Model for Land Use Land Cover Classification and Crop Identification Using Hyperspectral Remote Sensing Images","volume":"47","author":"Bhosle","year":"2019","journal-title":"J. Indian Soc. Remote Sens."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Xie, B., Zhang, H.K., and Xue, J. (2019). Deep convolutional neural network for mapping smallholder agriculture using high spatial resolution satellite image. Sensors, 19.","DOI":"10.3390\/s19102398"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1016\/j.compag.2019.04.019","article-title":"Hyperspectral fruit and vegetable classification using convolutional neural networks","volume":"162","author":"Steinbrener","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kandpal, L., Lee, J., Bae, J., Lohumi, S., and Cho, B.-K. (2019). Development of a Low-Cost Multi-Waveband LED Illumination Imaging Technique for Rapid Evaluation of Fresh Meat Quality. Appl. Sci., 9.","DOI":"10.3390\/app9050912"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1016\/j.postharvbio.2018.03.008","article-title":"Application of hyperspectral imaging for nondestructive measurement of plum quality attributes","volume":"141","author":"Li","year":"2018","journal-title":"Postharvest Biol. 
Technol."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1186\/s13007-019-0389-9","article-title":"Multispectral imaging for presymptomatic analysis of light leaf spot in oilseed rape","volume":"15","author":"Veys","year":"2019","journal-title":"Plant Methods"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1016\/S1537-5110(02)00269-6","article-title":"Early disease detection in wheat fields using spectral reflectance","volume":"84","author":"Bravo","year":"2003","journal-title":"Biosyst. Eng."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Yu, Y., and Liu, F. (2018). Dense connectivity based two-stream deep feature fusion framework for aerial scene classification. Remote Sens., 10.","DOI":"10.3390\/rs10071158"},{"key":"ref_15","first-page":"gix092","article-title":"Plant phenomics: An overview of image acquisition technologies and image data analysis algorithms","volume":"6","author":"Navarro","year":"2017","journal-title":"Gigascience"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1016\/j.isprsjprs.2019.09.006","article-title":"Deep learning classifiers for hyperspectral imaging: A review","volume":"158","author":"Paoletti","year":"2019","journal-title":"Isprs J. Photogramm. Remote Sens."},{"key":"ref_17","unstructured":"Lacar, F.M., Lewis, M.M., and Grierson, I.T. (2001, January 9\u201313). Use of hyperspectral imagery for mapping grape varieties in the Barossa Valley, South Australia. 
Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Sydney, NSW, Australia."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"47","DOI":"10.1186\/s13007-017-0198-y","article-title":"Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images","volume":"13","author":"Knauer","year":"2017","journal-title":"Plant Methods"},{"key":"ref_19","first-page":"1","article-title":"Detection and Classification of Early Decay on Blueberry Based on Improved Deep Residual 3D Convolutional Neural Network in Hyperspectral Images","volume":"4","author":"Qiao","year":"2020","journal-title":"Sci. Program."},{"key":"ref_20","unstructured":"Russell, B., Torralba, A., and Freeman, W.T. (2020, December 30). Labelme: The Open Annotation Tool. Available online: http:\/\/labelme.csail.mit.edu\/Release3.0\/browserTools\/php\/dataset.php."},{"key":"ref_21","unstructured":"Santos, T., de Souza, L., Andreza, d.S., and Avila, S. (2021, January 17). Embrapa Wine Grape Instance Segmentation Dataset\u2013Embrapa WGISD. Available online: https:\/\/zenodo.org\/record\/3361736#.YCx3LXm-thE."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"15356","DOI":"10.3390\/s121115356","article-title":"Development of a configurable growth chamber with a computer vision system to study circadian rhythm in plants","volume":"12","author":"Navarro","year":"2012","journal-title":"Sensors"},{"key":"ref_23","unstructured":"Navarro, P.J., P\u00e9rez Sanz, F., Weiss, J., and Egea-Cortines, M. (2016, January 10\u201312). Machine learning for leaf segmentation in NIR images based on wavelet transform. Proceedings of the II Simposio Nacional de Ingenier\u00eda Hort\u00edcola. 
Automatizaci\u00f3n y TICs en agricultura, Almer\u00eda, Spain."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"D\u00edaz-Gali\u00e1n, M.V., Perez-Sanz, F., Sanchez-Pag\u00e1n, J.D., Weiss, J., Egea-Cortines, M., and Navarro, P.J. (2019). A proposed methodology to analyze plant growth and movement from phenomics data. Remote Sens., 11.","DOI":"10.3390\/rs11232839"},{"key":"ref_25","unstructured":"(2021, January 02). Multispectral Camera MV1-D2048x1088-HS03-96-G2 | Photonfocus AG. Available online: https:\/\/www.photonfocus.com\/products\/camerafinder\/camera\/mv1-d2048x1088-hs03-96-g2\/."},{"key":"ref_26","unstructured":"(2021, January 02). Multispectral Camera MV1-D2048x1088-HS02-96-G2 | Photonfocus AG. Available online: https:\/\/www.photonfocus.com\/products\/camerafinder\/camera\/mv1-d2048x1088-hs02-96-g2\/."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"62","DOI":"10.1109\/TSMC.1979.4310076","article-title":"A Threshold Selection Method from Gray-Level Histograms","volume":"9","author":"Otsu","year":"1979","journal-title":"IEEE Trans. Syst. Man Cybern."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Buslaev, A., Iglovikov, V.I., Khvedchenya, E., Parinov, A., Druzhinin, M., and Kalinin, A.A. (2020). Albumentations: Fast and flexible image augmentations. Information, 11.","DOI":"10.3390\/info11020125"},{"key":"ref_29","unstructured":"Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, MIT Press."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Tran, D., Bourdev, L., Fergus, R., Torresani, L., and Paluri, M. (2015, January 11\u201318). Learning spatiotemporal features with 3D convolutional networks. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.510"},{"key":"ref_31","first-page":"1","article-title":"Hierarchical neural networks for image interpretation","volume":"2766","author":"Behnke","year":"2003","journal-title":"Lect. 
Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Yamaguchi, K., Sakamoto, K., Akabane, T., and Fujimoto, Y. (1990, January 18\u201322). A Neural Network for Speaker-Independent Isolated Word Recognition. Proceedings of the First International Conference on Spoken Language Processing (ICSLP 90), Kobe, Japan.","DOI":"10.21437\/ICSLP.1990-282"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. Acm"},{"key":"ref_35","unstructured":"Simonyan, K., and Zisserman, A. (2015, January 7\u20139). Very deep convolutional networks for large-scale image recognition. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015\u2013Conference Track Proceedings, San Diego, CA, USA."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going Deeper with Convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., and Shlens, J. (2016, January 27\u201330). Rethinking the Inception Architecture for Computer Vision. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2017, January 4\u20139). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_40","unstructured":"Hong, S., Kim, S., Joh, M., and Songy, S.K. (2017). Globenet: Convolutional neural networks for typhoon eye tracking from remote sensing imagery. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"37","DOI":"10.1080\/15481603.2019.1658960","article-title":"Mapping Rice Paddies in Complex Landscapes with Convolutional Neural Networks and Phenological Metrics","volume":"57","author":"Zhao","year":"2020","journal-title":"GIScience Remote Sens."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"913","DOI":"10.1007\/978-981-15-3250-4_117","article-title":"Remote Sensing Image Classification Based on AlexNet Network Model","volume":"Volume 551","author":"Zhou","year":"2020","journal-title":"Lecture Notes in Electrical Engineering"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"3030","DOI":"10.1109\/JSTARS.2018.2846178","article-title":"Deep Convolutional Neural Network for Complex Wetland Classification Using Optical Remote Sensing Imagery","volume":"11","author":"Rezaee","year":"2018","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. 
Remote Sens."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Mahdianpari, M., Salehi, B., Rezaee, M., Mohammadimanesh, F., and Zhang, Y. (2018). Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens., 10.","DOI":"10.3390\/rs10071119"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"4738","DOI":"10.1109\/JSTARS.2020.3017676","article-title":"An Improved InceptionV3 Network for Obscured Ship Classification in Remote Sensing Images","volume":"13","author":"Liu","year":"2020","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Ma, H., Liu, Y., Ren, Y., Wang, D., Yu, L., and Yu, J. (2020). Improved CNN classification method for groups of buildings damaged by earthquake, based on high resolution remote sensing images. Remote Sens., 12.","DOI":"10.3390\/rs12020260"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1986","DOI":"10.1109\/JSTARS.2020.2988477","article-title":"Classification of High-Spatial-Resolution Remote Sensing Scenes Method Using Transfer Learning and Deep Convolutional Neural Network","volume":"13","author":"Li","year":"2020","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Jie, B.X., Zulkifley, M.A., and Mohamed, N.A. (2020). Remote Sensing Approach to Oil Palm Plantations Detection Using Xception. 2020 11th IEEE Control and System Graduate Research Colloquium, ICSGRC 2020\u2212Proceedings, IEEE.","DOI":"10.1109\/ICSGRC49013.2020.9232547"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Zhu, T., Li, Y., Ye, Q., Huo, H., and Tao, F. (2017, January 2\u20134). Integrating saliency and ResNet for airport detection in large-size remote sensing images. 
Proceedings of the 2017 2nd International Conference on Image, Vision and Computing, ICIVC 2017, Chengdu, China.","DOI":"10.1109\/ICIVC.2017.7984451"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"2144","DOI":"10.3389\/fpls.2020.540821","article-title":"The Effect of Post-harvest Conditions in Narcissus sp. Cut Flowers Scent Profile","volume":"11","author":"Terry","year":"2021","journal-title":"Front. Plant Sci."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"453","DOI":"10.1111\/soru.12186","article-title":"Designer Grapes: The Socio-Technical Construction of the Seedless Table Grapes. A Case Study of Quality Control","volume":"58","year":"2018","journal-title":"Sociol. Rural."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1234","DOI":"10.1104\/pp.18.00259","article-title":"The major origin of seedless grapes is associated with a missense mutation in the MADS-box gene VviAGL11","volume":"177","author":"Royo","year":"2018","journal-title":"Plant Physiol."},{"key":"ref_53","first-page":"120","article-title":"The OpenCV Library","volume":"25","author":"Bradski","year":"2000","journal-title":"Dr. Dobb\u2019s J. Softw. Tools"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Kai, L., and Li, F.-F. (2010). ImageNet: A large-scale hierarchical image database. Institute of Electrical and Electronics Engineers (IEEE), IEEE.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_55","unstructured":"Seng, J., Ang, K., Schmidtke, L., and Rogiers, S. (2020, December 30). Grape Image Database\u2013Charles Sturt University Research Output. 
Available online: https:\/\/researchoutput.csu.edu.au\/en\/datasets\/grape-image-database."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"1211","DOI":"10.1016\/j.procs.2020.09.117","article-title":"Deep learning for grape variety recognition","volume":"Volume 176","author":"Franczyk","year":"2020","journal-title":"Procedia Computer Science"},{"key":"ref_57","first-page":"185","article-title":"DeepGrapes: Precise Detection of Grapes in Low-resolution Images","volume":"51","year":"2018","journal-title":"Ifac Pap."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Ramos, R.P., Gomes, J.S., Prates, R.M., Simas Filho, E.F., Teruel, B.J., and dos Santos Costa, D. (2020). Non-invasive setup for grape maturation classification using deep learning. J. Sci. Food Agric.","DOI":"10.1002\/jsfa.10824"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/4\/729\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:25:04Z","timestamp":1760160304000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/13\/4\/729"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,2,17]]},"references-count":58,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2021,2]]}},"alternative-id":["rs13040729"],"URL":"https:\/\/doi.org\/10.3390\/rs13040729","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,2,17]]}}}