{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,28]],"date-time":"2026-04-28T04:23:18Z","timestamp":1777350198732,"version":"3.51.4"},"reference-count":25,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,5,28]],"date-time":"2024-05-28T00:00:00Z","timestamp":1716854400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,5,28]],"date-time":"2024-05-28T00:00:00Z","timestamp":1716854400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100007613","name":"Ankara University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100007613","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Real-Time Image Proc"],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Lightweight multiscale-feature-fusion network (LMFFNet), a proficient real-time CNN architecture, adeptly achieves a balance between inference time and accuracy. Capturing the intricate details of precision agriculture target objects in remote sensing images requires deep SEM-B blocks in the LMFFNet model design. However, employing numerous SEM-B units leads to instability during backward gradient flow. This work proposes the novel residual-LMFFNet (ResLMFFNet) model for ensuring smooth gradient flow within SEM-B blocks. By incorporating residual connections, ResLMFFNet achieves improved accuracy without affecting the inference speed and the number of trainable parameters. 
The results of the experiments demonstrate that this architecture has achieved superior performance compared to other real-time architectures across diverse precision agriculture applications involving UAV and satellite images. Compared to LMFFNet, the ResLMFFNet architecture enhances the Jaccard Index values by 2.1% for tree detection, 1.4% for crop detection, and 11.2% for wheat-yellow rust detection. Achieving these remarkable accuracy levels involves maintaining almost identical inference time and computational complexity as the LMFFNet model. The source code is available on GitHub: <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/iremulku\/Semantic-Segmentation-in-Precision-Agriculture\">https:\/\/github.com\/iremulku\/Semantic-Segmentation-in-Precision-Agriculture<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s11554-024-01474-0","type":"journal-article","created":{"date-parts":[[2024,5,28]],"date-time":"2024-05-28T21:03:06Z","timestamp":1716930186000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["ResLMFFNet: a real-time semantic segmentation network for precision agriculture"],"prefix":"10.1007","volume":"21","author":[{"given":"Irem","family":"Ulku","sequence":"first","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,5,28]]},"reference":[{"key":"1474_CR1","doi-asserted-by":"publisher","first-page":"242","DOI":"10.1016\/j.neucom.2022.11.020","volume":"518","author":"S Jinya","year":"2023","unstructured":"Jinya, S., Zhu, X., Li, S., Chen, W.-H.: Ai meets uavs: a survey on ai empowered uav perception systems for precision agriculture. 
Neurocomputing 518, 242\u2013270 (2023)","journal-title":"Neurocomputing"},{"key":"1474_CR2","doi-asserted-by":"publisher","first-page":"7589","DOI":"10.1109\/JSTARS.2022.3203145","volume":"15","author":"I Ulku","year":"2022","unstructured":"Ulku, I., Akag\u00fcnd\u00fcz, E., Ghamisi, P.: Deep semantic segmentation of trees using multispectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 15, 7589\u20137604 (2022)","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"issue":"13","key":"1474_CR3","doi-asserted-by":"publisher","first-page":"3334","DOI":"10.3390\/rs15133334","volume":"15","author":"D Sch\u00fcrholz","year":"2023","unstructured":"Sch\u00fcrholz, D., Castellanos-Galindo, G.A., Casella, E., Mej\u00eda-Renter\u00eda, J.C., Chennu, A.: Seeing the forest for the trees: mapping cover and counting trees from aerial images of a mangrove forest using artificial intelligence. Remote Sens. 15(13), 3334 (2023)","journal-title":"Remote Sens."},{"key":"1474_CR4","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234\u2013241. Springer (2015)","DOI":"10.1007\/978-3-319-24574-4_28"},{"issue":"1","key":"1474_CR5","doi-asserted-by":"publisher","first-page":"588","DOI":"10.1109\/LRA.2017.2774979","volume":"3","author":"I Sa","year":"2017","unstructured":"Sa, I., Chen, Z., Popovi\u0107, M., Khanna, R., Liebisch, F., Nieto, J., Siegwart, R.: weednet: dense semantic weed classification using multispectral images and mav for smart farming. IEEE Robot. Autom. Lett. 3(1), 588\u2013595 (2017)","journal-title":"IEEE Robot. Autom. 
Lett."},{"issue":"20","key":"1474_CR6","doi-asserted-by":"publisher","first-page":"7132","DOI":"10.3390\/app10207132","volume":"10","author":"J Deng","year":"2020","unstructured":"Deng, J., Zhong, Z., Huang, H., Lan, Y., Han, Y., Zhang, Y.: Lightweight semantic segmentation network for real-time weed mapping using unmanned aerial vehicles. Appl. Sci. 10(20), 7132 (2020)","journal-title":"Appl. Sci."},{"key":"1474_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2023.122980","volume":"246","author":"J Gao","year":"2024","unstructured":"Gao, J., Liao, W., Nuyttens, D., Lootens, P., Xue, W., Alexandersson, E., Pieters, J.: Cross-domain transfer learning for weed segmentation and mapping in precision farming using ground and uav images. Expert Syst. Appl. 246, 122980 (2024)","journal-title":"Expert Syst. Appl."},{"key":"1474_CR8","doi-asserted-by":"crossref","unstructured":"Milioto, A., Lottes, P., Stachniss, C.: Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in cnns. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2229\u20132235. IEEE (2018)","DOI":"10.1109\/ICRA.2018.8460962"},{"issue":"2","key":"1474_CR9","doi-asserted-by":"publisher","first-page":"33","DOI":"10.1007\/s11554-023-01264-0","volume":"20","author":"F Qi","year":"2023","unstructured":"Qi, F., Wang, Y., Tang, Z., Chen, S.: Real-time and effective detection of agricultural pest using an improved yolov5 network. J. Real-Time Image Proc. 20(2), 33 (2023)","journal-title":"J. Real-Time Image Proc."},{"key":"1474_CR10","doi-asserted-by":"publisher","DOI":"10.1016\/j.compag.2024.108623","volume":"217","author":"B Yang","year":"2024","unstructured":"Yang, B., Yang, S., Wang, P., Wang, H., Jiang, J., Ni, R., Yang, C.: Frpnet: an improved faster-resnet with paspp for real-time semantic segmentation in the unstructured field scene. Comput. Electron. Agric. 217, 108623 (2024)","journal-title":"Comput. Electron. 
Agric."},{"issue":"12","key":"1474_CR11","doi-asserted-by":"publisher","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","volume":"39","author":"V Badrinarayanan","year":"2017","unstructured":"Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481\u20132495 (2017)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"1474_CR12","unstructured":"Paszke, A., Chaurasia, A., Kim, S., Culurciello, E.: Enet: a deep neural network architecture for real-time semantic segmentation (2016). arXiv:1606.02147"},{"key":"1474_CR13","doi-asserted-by":"crossref","unstructured":"Wang, Y., Zhou, Q., Liu, J., Xiong, J., Gao, G., Xiaofu, W., Latecki, L.J.: Lednet: a lightweight encoder-decoder network for real-time semantic segmentation. In: IEEE International Conference on Image Processing (ICIP), pp. 1860\u20131864. IEEE (2019)","DOI":"10.1109\/ICIP.2019.8803154"},{"key":"1474_CR14","doi-asserted-by":"publisher","first-page":"226524","DOI":"10.1109\/ACCESS.2020.3045147","volume":"8","author":"M Kim","year":"2020","unstructured":"Kim, M., Park, B., Chi, S.: Accelerator-aware fast spatial feature network for real-time semantic segmentation. IEEE Access 8, 226524\u2013226537 (2020)","journal-title":"IEEE Access"},{"key":"1474_CR15","doi-asserted-by":"crossref","unstructured":"Wang, Y., Zhou, Q., Xiong, J., Xiaofu, W., Jin, X.: Esnet: an efficient symmetric network for real-time semantic segmentation. In: Conference on Pattern Recognition and Computer Vision, pp. 41\u201352. Springer (2019)","DOI":"10.1007\/978-3-030-31723-2_4"},{"key":"1474_CR16","doi-asserted-by":"crossref","unstructured":"Li, H., Xiong, P., Fan, H., Sun, J.: Dfanet: deep feature aggregation for real-time semantic segmentation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 
9522\u20139531 (2019)","DOI":"10.1109\/CVPR.2019.00975"},{"issue":"9","key":"1474_CR17","doi-asserted-by":"publisher","first-page":"14349","DOI":"10.1109\/TITS.2021.3127553","volume":"23","author":"L Rosas-Arias","year":"2021","unstructured":"Rosas-Arias, L., Benitez-Garcia, G., Portillo-Portillo, J., Olivares-Mercado, J., Sanchez-Perez, G., Yanai, K.: Fassd-net: fast and accurate real-time semantic segmentation for embedded systems. IEEE Trans. Intell. Transp. Syst. 23(9), 14349\u201314360 (2021)","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"1474_CR18","doi-asserted-by":"publisher","first-page":"196","DOI":"10.1016\/j.isprsjprs.2022.06.008","volume":"190","author":"L Wang","year":"2022","unstructured":"Wang, L., Li, R., Zhang, C., Fang, S., Duan, C., Meng, X., Atkinson, P.M.: Unetformer: a unet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS J. Photogramm. Remote. Sens. 190, 196\u2013214 (2022)","journal-title":"ISPRS J. Photogramm. Remote. Sens."},{"key":"1474_CR19","doi-asserted-by":"crossref","unstructured":"Shi, M., Shen, J., Yi, Q., Weng, J., Huang, Z., Luo, A., Zhou, Y.: Lmffnet: a well-balanced lightweight network for fast and accurate semantic segmentation. IEEE Trans. Neural Netw. Learn. Syst. (2022)","DOI":"10.1109\/TNNLS.2022.3176493"},{"key":"1474_CR20","unstructured":"Jie, H., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132\u20137141 (2018)"},{"key":"1474_CR21","first-page":"7561","volume":"35","author":"A Jaiswal","year":"2022","unstructured":"Jaiswal, A., Wang, P., Chen, T., Rousseau, J., Ding, Y., Wang, Z.: Old can be gold: better gradient flow can make vanilla-gcns great again. Adv. Neural. Inf. Process. Syst. 35, 7561\u20137574 (2022)","journal-title":"Adv. Neural. Inf. Process. 
Syst."},{"key":"1474_CR22","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"key":"1474_CR23","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.isprsjprs.2018.04.014","volume":"145","author":"R Kemker","year":"2018","unstructured":"Kemker, R., Salvaggio, C., Kanan, C.: Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning. ISPRS J. Photogramm. Remote. Sens. 145, 60\u201377 (2018)","journal-title":"ISPRS J. Photogramm. Remote. Sens."},{"issue":"3","key":"1474_CR24","first-page":"2242","volume":"17","author":"S Jinya","year":"2020","unstructured":"Jinya, S., Yi, D., Baofeng, S., Mi, Z., Liu, C., Xiaoping, H., Xiangming, X., Guo, L., Chen, W.-H.: Aerial visual perception in smart farming: field study of wheat yellow rust monitoring. IEEE Trans. Ind. Inf. 17(3), 2242\u20132249 (2020)","journal-title":"IEEE Trans. Ind. Inf."},{"key":"1474_CR25","doi-asserted-by":"publisher","DOI":"10.3389\/fpls.2021.645899","volume":"12","author":"Y Wang","year":"2021","unstructured":"Wang, Y., Qin, Y., Cui, J.: Occlusion robust wheat ear counting algorithm based on deep learning. Front. Plant Sci. 12, 645899 (2021)","journal-title":"Front. 
Plant Sci."}],"container-title":["Journal of Real-Time Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-024-01474-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11554-024-01474-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-024-01474-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,27]],"date-time":"2024-08-27T16:21:24Z","timestamp":1724775684000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11554-024-01474-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,28]]},"references-count":25,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1474"],"URL":"https:\/\/doi.org\/10.1007\/s11554-024-01474-0","relation":{},"ISSN":["1861-8200","1861-8219"],"issn-type":[{"value":"1861-8200","type":"print"},{"value":"1861-8219","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,5,28]]},"assertion":[{"value":"7 January 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 May 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 May 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declaration"}},{"value":"The authors declare that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"101"}}