{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T00:59:05Z","timestamp":1760057945911,"version":"build-2065373602"},"reference-count":36,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2025,3,4]],"date-time":"2025-03-04T00:00:00Z","timestamp":1741046400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the Tianjin Enterprise Science and Technology Commissioner Project, Technology Innovation Guidance Special Fund (Fund)","award":["24YDTPJC00410"],"award-info":[{"award-number":["24YDTPJC00410"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>Large convolution kernels offer better performance advantages. They can cover a wider area and capture a broader range of spatial information in a single convolution operation. This is of great importance when dealing with tasks that have significant spatial variations. However, increasing the kernel size brings substantial memory and computational costs to deep convolutional neural networks. The computational complexity becomes unimaginable. Therefore, we proposed a learnable kernel element position convolution using the symmetric Whittaker\u2013Shannon interpolation function (WSIPC). We also performed channel-level pruning (CP) on this large convolutional neural network to achieve network compression. Specifically, WSIPC permits any number of kernel elements. The positions of non-zero elements are learned in a gradient-based manner. We made use of the normal distribution of effective receptive fields to reduce computation, parameter complexity, and improve classification performance. This method achieved the best performance on the large-kernel ConvNeXt. CP used the scaling factor in the Layer Normalization (LN) of the ConvNeXt network as a proxy for channel selection. During training, it automatically identified and pruned unimportant channels in wide and large network models. As a result, it produced a streamlined model with comparable accuracy. 
This model was more compact in terms of model size, runtime memory, and computational operations.<\/jats:p>","DOI":"10.3390\/sym17030390","type":"journal-article","created":{"date-parts":[[2025,3,4]],"date-time":"2025-03-04T13:31:31Z","timestamp":1741095091000},"page":"390","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Channel-Pruning Convolutional Neural Network with Learnable Kernel Element Position Convolution Utilizing the Symmetric Whittaker\u2013Shannon Interpolation Function"],"prefix":"10.3390","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3539-6688","authenticated-orcid":false,"given":"Chunmiao","family":"Yuan","sequence":"first","affiliation":[{"name":"Key Laboratory of Independent Intelligent Technology and System, Tiangong University, Tianjin 300387, China"}]},{"given":"Xiyan","family":"Jiang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Technology, Tiangong University, Tianjin 300387, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4036-8630","authenticated-orcid":false,"given":"Qingyong","family":"Yang","sequence":"additional","affiliation":[{"name":"School of Software and Communication, Tianjin Sino-German University of Applied Sciences, Tianjin 300350, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,3,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"e21","DOI":"10.23915\/distill.00021","article-title":"Computing Receptive Fields of Convolutional Neural Networks","volume":"4","author":"Araujo","year":"2019","journal-title":"Distill"},{"key":"ref_2","unstructured":"Yu, F., and Koltun, V. (2015). Multi-Scale Context Aggregation by Dilated Convolutions. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Ding, X., Zhang, X., Han, J., and Ding, G. (2022). Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs. arXiv.","DOI":"10.1109\/CVPR52688.2022.01166"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ding, X., Hao, T., Tan, J., Liu, J., Han, J., Guo, Y., and Ding, G. (2021, January 19\u201325). ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting. Proceedings of the International Conference on Computer Vision, Nashville, TN, USA.","DOI":"10.1109\/ICCV48922.2021.00447"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Gao, S., Huang, F., Cai, W., and Huang, H. (2021, January 19\u201325). Network Pruning via Performance Maximization. Proceedings of the Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00915"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Fang, G., Ma, X., Song, M., Mi, M.B., and Wang, X. (2023, January 17\u201324). DepGraph: Towards Any Structural Pruning. Proceedings of the 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01544"},{"key":"ref_7","unstructured":"Meng, F., Cheng, H., Li, K., Luo, H., Guo, X., Lu, G., and Sun, X. (2020). Pruning Filter in Filter. arXiv."},{"key":"ref_8","unstructured":"Park, S., Lee, J., Mo, S., and Shin, J. (2020). Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning. arXiv."},{"key":"ref_9","unstructured":"Luo, W., Li, Y., Urtasun, R., and Zemel, R. (2017). Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. arXiv."},{"key":"ref_10","unstructured":"Chen, Q., Li, C., Ning, J., and He, K. (2023). 
Gaussian Mask Convolution for Convolutional Neural Networks. arXiv."},{"key":"ref_11","unstructured":"Khalfaoui-Hassani, I., Pellegrini, T., and Masquelier, T. (2023). Dilated Convolution with Learnable Spacings: Beyond bilinear interpolation. arXiv."},{"key":"ref_12","unstructured":"Kim, B.J., Choi, H., Jang, H., and Kim, S.W. (2023, January 20\u201324). Understanding Gaussian Attention Bias of Vision Transformers Using Effective Receptive Fields. Proceedings of the British Machine Vision Conference, Aberdeen, UK."},{"key":"ref_13","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021, January 4). An Image is Worth 16 \u00d7 16 Words: Transformers for Image Recognition at Scale. Proceedings of the International Conference on Learning Representations, Vienna, Austria."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., and Xie, S.A. (2022). A ConvNet for the 2020s. arXiv.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"ref_15","unstructured":"Trockman, A., and Zico Kolter, J. (2022). Patches Are All You Need?. arXiv."},{"key":"ref_16","unstructured":"Liu, S., Chen, T., Chen, X., Chen, X., Xiao, Q., Wu, B., K\u00e4rkk\u00e4inen, T., Pechenizkiy, M., Mocanu, D., and Wang, Z. (2022). More ConvNets in the 2020s: Scaling up Kernels Beyond 51 \u00d7 51 using Sparsity. arXiv."},{"key":"ref_17","unstructured":"Megvii (2023, May 15). Megengine: A Fast, Scalable and Easy-to-Use Deep Learning Framework. Available online: https:\/\/github.com\/MegEngine\/MegEngine."},{"key":"ref_18","unstructured":"Celarek, A., Hermosilla, P., Kerbl, B., Ropinski, T., and Wimmer, M. (2022). Gaussian Mixture Convolution Networks. arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Kim, S., and Park, E. (2023, January 17\u201324). SMPConv: Self-Moving Point Representations for Continuous Convolution. Proceedings of the 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00992"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Thomas, H., Qi, C.R., Deschaud, J.E., Marcotegui, B., Goulette, F., and Guibas, L.J. (2019). KPConv: Flexible and Deformable Convolution for Point Clouds. arXiv.","DOI":"10.1109\/ICCV.2019.00651"},{"key":"ref_21","unstructured":"Romero, D.W., Bruintjes, R.J., Tomczak, J.M., Bekkers, E.J., Hoogendoorn, M., and van Gemert, J.C. (2021). FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes. arXiv."},{"key":"ref_22","unstructured":"Romero, D.W., Kuzina, A., Bekkers, E.J., Tomczak, J.M., and Hoogendoorn, M. (2021). CKConv: Continuous Kernel Convolution for Sequential Data. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Jacobsen, J.H., Van Gemert, J., Lou, Z., and Smeulders, A.W. (2016, January 27\u201330). Structured Receptive Fields in CNNs. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.286"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"8342","DOI":"10.1109\/TIP.2021.3115001","article-title":"Resolution Learning in Deep Convolutional Networks Using Scale-Space Theory","volume":"30","author":"Pintea","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_25","unstructured":"Shelhamer, E., Wang, D., and Darrell, T. (2019). 
Blurring the Line Between Structure and Learning to Optimize and Adapt Receptive Fields. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"108369","DOI":"10.1016\/j.patcog.2021.108369","article-title":"ADCNN: Towards learning adaptive dilation for convolutional neural networks","volume":"123","author":"Yao","year":"2022","journal-title":"Pattern Recognit. J. Pattern Recognit. Soc."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., and Wei, Y. (2017, January 22). Deformable Convolutional Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.89"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Zhu, X., Hu, H., Lin, S., and Dai, J. (2020, January 14\u201319). Deformable ConvNets V2: More Deformable, Better Results. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR.2019.00953"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, W., Dai, J., Chen, Z., Huang, Z., Li, Z., Zhu, X., Hu, X., Lu, T., Lu, L., and Li, H. (2022, January 18\u201324). InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions. Proceedings of the 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52729.2023.01385"},{"key":"ref_30","unstructured":"Huang, T.H., Huang, C.Y., Ding, C.K., Hsu, Y.C., and Giles, C.L. (2020). CODA-19: Using a Non-Expert Crowd to Annotate Research Aspects on 10,000+ Abstracts in the COVID-19 Open Research Dataset. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Chiliang, Z., Tao, H., Yingda, G., and Zuochang, Y. (2019, January 26\u201329). Accelerating Convolutional Neural Networks with Dynamic Channel Pruning. Proceedings of the 2019 Data Compression Conference (DCC), Snowbird, UT, USA.","DOI":"10.1109\/DCC.2019.00075"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Kokol, P., Kokol, M., and Zagoranski, S. (2021). Machine learning on small size samples: A synthetic knowledge synthesis. arXiv.","DOI":"10.1177\/00368504211029777"},{"key":"ref_33","unstructured":"Sellars, P., Aviles-Rivero, A.I., and Schnlieb, C.B. (2021). LaplaceNet: A Hybrid Energy-Neural Model for Deep Semi-Supervised Classification. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"284","DOI":"10.1038\/s41597-021-01060-0","article-title":"A Computed Tomography Vertebral Segmentation Dataset with Anatomical Variations and Multi-Vendor Scanner Data","volume":"8","author":"Liebl","year":"2021","journal-title":"Sci. Data"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"10","DOI":"10.1109\/JRPROC.1949.232969","article-title":"Communication in the Presence of Noise","volume":"Volume 37","author":"Shannon","year":"1949","journal-title":"Proceedings of the IRE"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015, January 11\u201318). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. 
Proceedings of the International Conference on Computer Vision, Las Condes, Chile.","DOI":"10.1109\/ICCV.2015.123"}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/3\/390\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T16:47:10Z","timestamp":1760028430000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/3\/390"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,4]]},"references-count":36,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2025,3]]}},"alternative-id":["sym17030390"],"URL":"https:\/\/doi.org\/10.3390\/sym17030390","relation":{},"ISSN":["2073-8994"],"issn-type":[{"type":"electronic","value":"2073-8994"}],"subject":[],"published":{"date-parts":[[2025,3,4]]}}}
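The abstract above describes two techniques: WSIPC, which places a small number of kernel elements at continuous, learnable positions and materializes them on a dense kernel grid with the symmetric Whittaker–Shannon (sinc) interpolation function, and CP, which prunes channels using the Layer Normalization scaling factor as an importance proxy. The sketch below is a minimal PyTorch illustration of the WSIPC idea only, under assumed shapes, initializations, and names (SincPositionedConv2d, n_elements); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincPositionedConv2d(nn.Module):
    """Illustrative depthwise 2D convolution whose few non-zero kernel elements sit at
    continuous (row, col) positions learned by gradient descent; the dense kernel is
    materialized with the Whittaker-Shannon (sinc) interpolant."""

    def __init__(self, channels: int, kernel_size: int = 31, n_elements: int = 9):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Learnable element weights and continuous positions, per channel (assumed init).
        self.weight = nn.Parameter(0.1 * torch.randn(channels, n_elements))
        self.pos = nn.Parameter(torch.rand(channels, n_elements, 2) * (kernel_size - 1))
        # Fixed integer grid of the dense kernel support.
        self.register_buffer("grid", torch.arange(kernel_size, dtype=torch.float32))

    def _dense_kernel(self) -> torch.Tensor:
        # sinc(x) = sin(pi x) / (pi x); separable interpolation along rows and columns.
        rows = torch.sinc(self.grid[None, None, :] - self.pos[..., 0:1])   # (C, E, K)
        cols = torch.sinc(self.grid[None, None, :] - self.pos[..., 1:2])   # (C, E, K)
        bumps = rows[..., :, None] * cols[..., None, :]                    # (C, E, K, K)
        kernel = (self.weight[..., None, None] * bumps).sum(dim=1)         # (C, K, K)
        return kernel.unsqueeze(1)                                         # depthwise: (C, 1, K, K)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = self._dense_kernel()
        return F.conv2d(x, k, padding=self.kernel_size // 2, groups=self.channels)


# Example: a 31x31 depthwise kernel with only 9 learnable elements per channel.
conv = SincPositionedConv2d(channels=8, kernel_size=31, n_elements=9)
out = conv(torch.randn(2, 8, 56, 56))   # -> torch.Size([2, 8, 56, 56])
```

A similarly hedged sketch of the CP idea, in the spirit of sparsity-induced channel selection as described in the abstract: an L1 penalty on LayerNorm gamma during training, then keeping only the channels with the largest |gamma|. The function names and the sparsity_lambda / keep_ratio hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch


def ln_sparsity_penalty(model: torch.nn.Module, sparsity_lambda: float = 1e-4) -> torch.Tensor:
    """L1 regularizer on all LayerNorm scale factors (gamma); added to the task loss
    so that unimportant channels are pushed toward zero during training."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, torch.nn.LayerNorm):
            penalty = penalty + m.weight.abs().sum()
    return sparsity_lambda * penalty


def channels_to_keep(ln: torch.nn.LayerNorm, keep_ratio: float = 0.75) -> torch.Tensor:
    """Indices of the channels with the largest |gamma|; the remaining channels are pruned."""
    n_keep = max(1, int(ln.weight.numel() * keep_ratio))
    return ln.weight.abs().argsort(descending=True)[:n_keep].sort().values
```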