{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,25]],"date-time":"2026-03-25T15:57:12Z","timestamp":1774454232415,"version":"3.50.1"},"reference-count":34,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T00:00:00Z","timestamp":1759104000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["www.mdpi.com"],"crossmark-restriction":true},"short-container-title":["MAKE"],"abstract":"<jats:p>Deploying deep learning (DL) models in real-world environments remains a major challenge, particularly under resource-constrained conditions where achieving both high accuracy and compact architectures is essential. While effective, conventional pruning methods often suffer from high computational overhead, accuracy degradation, or disruption of the end-to-end training process, limiting their practicality for embedded and real-time applications. We present Dynamic Attention-Guided Pruning (DAGP), a soft channel pruning framework that overcomes these limitations by embedding learnable, differentiable pruning masks directly within convolutional neural networks (CNNs). These masks act as implicit attention mechanisms, adaptively suppressing non-informative channels during training. A progressively scheduled L1 regularization, activated after a warm-up phase, enables gradual sparsity while preserving early learning capacity. Unlike prior methods, DAGP is retraining-free, introduces minimal architectural overhead, and supports optional hard pruning for deployment efficiency. Joint optimization of classification and sparsity objectives ensures stable convergence and task-adaptive channel selection. Experiments on CIFAR-10 (VGG16, ResNet56) and PlantVillage (custom CNN) achieve up to 98.82% FLOPs reduction with accuracy gains over baselines. 
Real-world validation on an enhanced PlantDoc dataset for agricultural monitoring achieves 60 ms inference with only 2.00 MB RAM on a Raspberry Pi 4, confirming efficiency under field conditions. These results illustrate DAGP\u2019s potential to scale beyond agriculture to diverse edge-intelligent systems requiring lightweight, accurate, and deployable models.<\/jats:p>","DOI":"10.3390\/make7040110","type":"journal-article","created":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T08:00:32Z","timestamp":1759132832000},"page":"110","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Attention-Guided Differentiable Channel Pruning for Efficient Deep Networks"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-4333-8698","authenticated-orcid":false,"given":"Anouar","family":"Chahbouni","sequence":"first","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco"}]},{"given":"Khaoula","family":"El Manaa","sequence":"additional","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco"}]},{"given":"Yassine","family":"Abouch","sequence":"additional","affiliation":[{"name":"DAKAI Laboratory, Nextronic by Aba Technology, Casablanca 20253, Morocco"}]},{"given":"Imane","family":"El Manaa","sequence":"additional","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8126-7804","authenticated-orcid":false,"given":"Badre","family":"Bossoufi","sequence":"additional","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco"}]},{"given":"Mohammed","family":"El Ghzaoui","sequence":"additional","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah 
University, Fez 30000, Morocco"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8524-9053","authenticated-orcid":false,"given":"Rachid","family":"El Alami","sequence":"additional","affiliation":[{"name":"Faculty of Sciences Dhar El Mehraz, Sidi Mohammed Ben Abdellah University, Fez 30000, Morocco"}]}],"member":"1968","published-online":{"date-parts":[[2025,9,29]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Khaoula, E.M., and Hassan, S. (2024, January 19\u201320). Moroccan Arabic Darija Automatic Speech Recognition System Using CNN Model for Drone Control Application. Proceedings of the 2024 3rd International Conference on Embedded Systems and Artificial Intelligence (ESAI), Fez, Morocco.","DOI":"10.1109\/ESAI62891.2024.10913812"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"160284","DOI":"10.1109\/ACCESS.2024.3487313","article-title":"A novel lightweight CNN for constrained IoT devices: Achieving high accuracy with parameter efficiency on the MSTAR dataset","volume":"12","author":"Rahman","year":"2024","journal-title":"IEEE Access"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"El Manaa, K., Laaidi, N., Ezzine, A., and Satori, H. (2025). Phoneme-based automatic speech recognition for real-time Arabic command drone control. Iran J. Comput. Sci., 1\u201317.","DOI":"10.1007\/s42044-025-00275-3"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"11804","DOI":"10.1007\/s10489-024-05747-w","article-title":"A comprehensive review of model compression techniques in machine learning","volume":"54","author":"Dantas","year":"2024","journal-title":"Appl. Intell."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"8388","DOI":"10.1109\/TWC.2023.3349330","article-title":"Semantic communications for image recovery and classification via deep joint source and channel coding","volume":"23","author":"Lyu","year":"2024","journal-title":"IEEE Trans. Wirel. 
Commun."},{"key":"ref_6","unstructured":"Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H.P. (2017, January 24\u201326). Pruning Filters for Efficient ConvNets. Proceedings of the International Conference on Learning Representations, Toulon, France."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. (2017, January 22\u201329). Learning Efficient Convolutional Networks Through Network Slimming. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.298"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"He, Y., Zhang, X., and Sun, J. (2017, January 22\u201329). Channel Pruning for Accelerating very Deep Neural Networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.155"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"107860","DOI":"10.1016\/j.asoc.2021.107860","article-title":"RFPruning: A retraining-free pruning method for accelerating convolutional neural networks","volume":"113","author":"Wang","year":"2021","journal-title":"Appl. Soft Comput."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"402","DOI":"10.1007\/s10489-025-06332-5","article-title":"DAAR: Dual attention cooperative adaptive pruning rate by data-driven for filter pruning","volume":"55","author":"Lian","year":"2025","journal-title":"Appl. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"110229","DOI":"10.1016\/j.asoc.2023.110229","article-title":"Convolutional neural network pruning based on multi-objective feature map selection for image classification","volume":"139","author":"Jiang","year":"2023","journal-title":"Appl. 
Soft Comput."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"110724","DOI":"10.1016\/j.patcog.2024.110724","article-title":"Structured pruning adapters","volume":"156","author":"Hedegaard","year":"2024","journal-title":"Pattern Recognit."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"111697","DOI":"10.1016\/j.patcog.2025.111697","article-title":"Growing-before-pruning: A progressive neural architecture search strategy via group sparsity and deterministic annealing","volume":"166","author":"Lu","year":"2025","journal-title":"Pattern Recognit."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"He, Y., Kang, G., Dong, X., Fu, Y., and Yang, Y. (2018, January 13\u201319). Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks. Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden.","DOI":"10.24963\/ijcai.2018\/309"},{"key":"ref_15","unstructured":"Kang, M., and Han, B. (2020, January 12\u201318). Operation-Aware Soft Channel Pruning Using Differentiable Masks. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"107461","DOI":"10.1016\/j.patcog.2020.107461","article-title":"Autopruner: An end-to-end trainable filter pruning method for efficient deep model inference","volume":"107","author":"Luo","year":"2020","journal-title":"Pattern Recognit."},{"key":"ref_17","unstructured":"Gao, X., Zhao, Y., Dudziak, \u0141., Mullins, R., and Xu, C.Z. (2018). Dynamic channel pruning: Feature boosting and suppression. arXiv."},{"key":"ref_18","first-page":"14747","article-title":"Storage efficient and dynamic flexible runtime channel pruning via deep reinforcement learning","volume":"33","author":"Chen","year":"2020","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"16818","DOI":"10.1007\/s10489-022-03383-w","article-title":"Dynamic channel pruning via activation gates","volume":"52","author":"Liu","year":"2022","journal-title":"Appl. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1796","DOI":"10.3390\/make5040087","article-title":"Detecting adversarial examples using surrogate models","volume":"5","author":"Feldsar","year":"2023","journal-title":"Mach. Learn. Knowl. Extr."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Yan, W., Feng, Q., Yang, S., Zhang, J., and Yang, W. (2024). Prune-FSL: Pruning-based lightweight few-shot learning for plant disease identification. Agronomy, 14.","DOI":"10.3390\/agronomy14091878"},{"key":"ref_22","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems 30, Curran Associates Inc."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-Excitation Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_24","unstructured":"Tishby, N., Pereira, F.C., and Bialek, W. (2020). The information bottleneck method. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"988","DOI":"10.1109\/72.788640","article-title":"An overview of statistical learning theory","volume":"10","author":"Vapnik","year":"1999","journal-title":"IEEE Trans. 
Neural Netw."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"465","DOI":"10.1016\/0005-1098(78)90005-5","article-title":"Modeling by shortest data description","volume":"14","author":"Rissanen","year":"1978","journal-title":"Automatica"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Vasu, P.K.A., Gabriel, J., Zhu, J., Tuzel, O., and Ranjan, A. (2023, January 17\u201324). Mobileone: An Improved One Millisecond Mobile Backbone. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00764"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022, January 18\u201324). A Convnet for the 2020s. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_29","unstructured":"(2025, June 05). Raspberry Pi Foundation, Raspberry Pi 4 documentation. Available online: https:\/\/www.raspberrypi.com\/products\/raspberry-pi-4-model-b."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Li, B., Wu, B., Su, J., and Wang, G. (2020, January 23\u201328). Eagleeye: Fast Sub-Net Evaluation for Efficient Neural Network Pruning. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58536-5_38"},{"key":"ref_31","first-page":"1884","article-title":"Channel gating neural networks","volume":"32","author":"Hua","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Gao, S., Zhang, Y., Huang, F., and Huang, H. (2024, January 16\u201322). Bilevelpruning: Unified Dynamic and Static Channel Pruning for Convolutional Neural Networks. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.01523"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Chen, R., Qi, H., Liang, Y., and Yang, M. (2022). Identification of plant leaf diseases by deep learning based on channel attention and channel pruning. Front. Plant Sci., 13.","DOI":"10.3389\/fpls.2022.1023515"},{"key":"ref_34","unstructured":"Ahmed, T., Jannat, S., Islam, M.F., and Noor, J. (2025). Involution-Infused DenseNet with Two-Step Compression for Resource-Efficient Plant Disease Classification. arXiv."}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/110\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T08:52:12Z","timestamp":1759135932000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/110"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,29]]},"references-count":34,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["make7040110"],"URL":"https:\/\/doi.org\/10.3390\/make7040110","relation":{},"ISSN":["2504-4990"],"issn-type":[{"value":"2504-4990","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,9,29]]}}}