{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,18]],"date-time":"2026-01-18T06:20:51Z","timestamp":1768717251030,"version":"3.49.0"},"reference-count":43,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T00:00:00Z","timestamp":1664409600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T00:00:00Z","timestamp":1664409600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Xi'an Beilin District Science and Technology Project","award":["GX2137"],"award-info":[{"award-number":["GX2137"]}]},{"name":"Xi'an Beilin District Science and Technology Project","award":["GX2137"],"award-info":[{"award-number":["GX2137"]}]},{"name":"Xi'an Beilin District Science and Technology Project","award":["GX2137"],"award-info":[{"award-number":["GX2137"]}]},{"name":"Shaanxi Provincial Department of Science and Technology Regional Innovation Capability Guidance Plan","award":["2022QFY01-14"],"award-info":[{"award-number":["2022QFY01-14"]}]},{"name":"Shaanxi Provincial Department of Science and Technology Regional Innovation Capability Guidance Plan","award":["2022QFY01-14"],"award-info":[{"award-number":["2022QFY01-14"]}]},{"name":"Shaanxi Provincial Department of Science and Technology Regional Innovation Capability Guidance Plan","award":["2022QFY01-14"],"award-info":[{"award-number":["2022QFY01-14"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Cloud Comp"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Deep convolutional neural networks have produced excellent results when utilized for image classification tasks, and they are being applied in a growing number of contexts. 
Model inference on edge devices is challenging because the increasingly complex structures needed to improve performance impose a significant computational burden. Recent research has shown that the residual structure commonly used in these models hinders inference. The idea of structural re-parameterization was put forward to address this shortcoming: RepVGG, produced with this method, is a high-performance single-path network with fast inference. Even after re-parameterization, however, the model still relies on GPUs and other specialized computing libraries to accelerate inference, which limits how quickly it can run at the edge. We construct RDPNet using depthwise separable convolution and structural re-parameterization to further reduce model size and accelerate inference, yielding a straightforward network whose inference can run on an Intel CPU. Specifically, we adopt depthwise separable convolution as the basic convolution form for re-parameterization: a multi-branch model is built for training and then collapsed into a single-branch model that edge devices can infer efficiently. 
Experiments demonstrate that RDPNet offers a superior trade-off between accuracy and latency compared with alternative lightweight networks that attain SOTA performance.<\/jats:p>","DOI":"10.1186\/s13677-022-00330-5","type":"journal-article","created":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T05:02:34Z","timestamp":1664427754000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["RDPNet: a single-path lightweight CNN with re-parameterization for CPU-type edge devices"],"prefix":"10.1186","volume":"11","author":[{"given":"Jiarui","family":"Xu","sequence":"first","affiliation":[]},{"given":"Yufeng","family":"Zhao","sequence":"additional","affiliation":[]},{"given":"Fei","family":"Xu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,9,29]]},"reference":[{"key":"330_CR1","doi-asserted-by":"crossref","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. Commun ACM  60(6):84-90","DOI":"10.1145\/3065386"},{"key":"330_CR2","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014"},{"key":"330_CR3","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, et al. (2016) Deep residual learning for image recognition. arXiv Comput Vis Patt Recognit  2016:770-778","DOI":"10.1109\/CVPR.2016.90"},{"key":"330_CR4","doi-asserted-by":"crossref","unstructured":"Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. arXiv Comput Vis Patt Recognit\u00a02015:1-9","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"330_CR5","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016a) Rethinking the inception architecture for computer vision. 
Comput Vis Patt Recognit  2016:2818-2826","DOI":"10.1109\/CVPR.2016.308"},{"key":"330_CR6","doi-asserted-by":"crossref","unstructured":"Szegedy C, Ioffe S, Vanhoucke V, Alemi AA (2016b) Inception-v4, inception-resnet and the impact of residual connections on learning. Thirty-first AAAI conference on artificial intelligence 2017","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"330_CR7","unstructured":"Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. International conference on machine learning PMLR\u00a02015:448-456"},{"key":"330_CR8","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, van der Maaten L, Weinberger KQ (2016) Densely connected convolutional networks. Proceedings of the IEEE conference on computer vision and pattern recognition 2017:4700-4708","DOI":"10.1109\/CVPR.2017.243"},{"key":"330_CR9","unstructured":"Crowley EJ, Gray G, Storkey A (2018) Moonshine: Distilling with cheap convolutions.  Advances in Neural Information Processing Systems 2018:31"},{"key":"330_CR10","unstructured":"Polino A, Pascanu R, Alistarh D (2018) Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018"},{"key":"330_CR11","unstructured":"Han S, Mao H, Dally WJ (2015a) Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015"},{"key":"330_CR12","unstructured":"Han S, Pool J, Tran J, Dally WJ (2015b) Learning both weights and connections for efficient neural networks. Advances in neural information processing systems 2015:28"},{"key":"330_CR13","unstructured":"Courbariaux M, Bengio Y, David JP (2015) Binaryconnect: Training deep neural networks with binary weights during propagations. 
Advances in neural information processing systems 2015:28"},{"key":"330_CR14","doi-asserted-by":"crossref","unstructured":"Rastegari M, Ordonez V, Redmon J, Farhadi A (2016) Xnor-net: Imagenet classification using binary convolutional neural networks. European conference on computer vision. Springer, Cham, 2016: 525-542","DOI":"10.1007\/978-3-319-46493-0_32"},{"key":"330_CR15","doi-asserted-by":"crossref","unstructured":"Zhang X, Zhou X, Lin M, Sun J (2018) Shufflenet: An extremely efficient convolutional neural network for mobile devices. Proceedings of the IEEE conference on computer vision and pattern recognition 2018:6848-6856","DOI":"10.1109\/CVPR.2018.00716"},{"key":"330_CR16","doi-asserted-by":"crossref","unstructured":"Ma N, Zhang X, Zheng HT, Sun J (2018) Shufflenet v2: Practical guidelines for efficient cnn architecture design. Proceedings of the European conference on computer vision (ECCV) 2018:116-131","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"330_CR17","unstructured":"Howard A, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017"},{"key":"330_CR18","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC (2018) Mobilenetv2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE conference on computer vision and pattern recognition 2018:4510-4520","DOI":"10.1109\/CVPR.2018.00474"},{"key":"330_CR19","doi-asserted-by":"crossref","unstructured":"Tan M, Chen B, Pang R, Vasudevan VK, Sandler M, Howard A, Le QV (2018) Mnasnet: Platform-aware neural architecture search for mobile. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition 2019: 2820-2828","DOI":"10.1109\/CVPR.2019.00293"},{"key":"330_CR20","doi-asserted-by":"crossref","unstructured":"Ding X, Zhang X, Ma N, Han J, Ding G, Sun J (2021a) Repvgg: Making vgg-style convnets great again. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition 2021:13733-13742","DOI":"10.1109\/CVPR46437.2021.01352"},{"key":"330_CR21","doi-asserted-by":"crossref","unstructured":"Ding X, Zhang X, Han J, Ding G (2021b) Diverse branch block: Building a convolution as an inception-like unit. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition 2021:10886-10895","DOI":"10.1109\/CVPR46437.2021.01074"},{"key":"330_CR22","doi-asserted-by":"crossref","unstructured":"Chollet F (2017) Xception: Deep learning with depthwise separable convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition 2017:1251-1258","DOI":"10.1109\/CVPR.2017.195"},{"key":"330_CR23","doi-asserted-by":"crossref","unstructured":"Yang H, Shen Z, Zhao Y (2021) Asymmnet: Towards ultralight convolution neural networks using asymmetrical bottlenecks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition 2021:2339-2348","DOI":"10.1109\/CVPRW53098.2021.00266"},{"key":"330_CR24","doi-asserted-by":"crossref","unstructured":"Han K, Wang Y, Tian Q, Guo J, Xu C, Xu C (2019) Ghostnet: More features from cheap operations. Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition 2020:1580-1589","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"330_CR25","unstructured":"Iandola F, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K (2016) Squeezenet: Alexnet-level accuracy with 50x fewer parameters and<0.5mb model size.  
arXiv preprint arXiv:1602.07360 2016"},{"key":"330_CR26","unstructured":"Veit A, Wilber MJ, Belongie S (2016) Residual networks behave like ensembles of relatively shallow networks. Advances in neural information processing systems 2016:29"},{"key":"330_CR27","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Identity mappings in deep residual networks. European conference on computer vision. Springer, Cham\u00a02016:630-645","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"330_CR28","doi-asserted-by":"crossref","unstructured":"Li Y, Chen Y, Dai X, Chen D, Liu M, Yuan L, Liu Z, Zhang L, Vasconcelos N (2021) Micronet: Improving image recognition with extremely low flops. Proceedings of the IEEE\/CVF International Conference on Computer Vision\u00a02021:468-477","DOI":"10.1109\/ICCV48922.2021.00052"},{"key":"330_CR29","doi-asserted-by":"crossref","unstructured":"Zhou D, Hou Q, Chen Y, Feng J, Yan S (2020) Rethinking bottleneck structure for efficient mobile network design. European Conference on Computer Vision. Springer, Cham\u00a02020:680-697","DOI":"10.1007\/978-3-030-58580-8_40"},{"key":"330_CR30","doi-asserted-by":"crossref","unstructured":"Howard A, Pang R, Adam H, Le QV, Sandler M, Chen B, Wang W, Chen LC, Tan M, Chu G, Vasudevan VK, Zhu Y (2019) Searching for mobilenetv3. Proceedings of the IEEE\/CVF international conference on computer vision\u00a02019:1314-1324","DOI":"10.1109\/ICCV.2019.00140"},{"key":"330_CR31","doi-asserted-by":"crossref","unstructured":"Wu B, Dai X, Zhang P, Wang Y, Sun F, Wu Y, Tian Y, Vajda P, Jia Y, Keutzer K (2018) Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition\u00a02019:10734-10742","DOI":"10.1109\/CVPR.2019.01099"},{"key":"330_CR32","doi-asserted-by":"crossref","unstructured":"Real E, Aggarwal A, Huang Y, Le QV (2019) Regularized evolution for image classifier architecture search. Proceedings of the aaai conference on artificial intelligence 33(01):4780-4789","DOI":"10.1609\/aaai.v33i01.33014780"},{"key":"330_CR33","unstructured":"Tan M, Le QV (2019) Efficientnet: Rethinking model scaling for convolutional neural networks. International conference on machine learning. PMLR\u00a02019:6105-6114"},{"key":"330_CR34","unstructured":"Zagoruyko S, Komodakis N (2017) Diracnets: Training very deep neural networks without skip-connections. arXiv preprint arXiv:1706.00388, 2017"},{"key":"330_CR35","unstructured":"Guo S, Alvarez JM, Salzmann M (2018) Expandnets: Linear over-parameterization to train compact convolutional networks. Advances in Neural Information Processing Systems 33:1298-1310"},{"key":"330_CR36","unstructured":"Cao J, Li Y, Sun M, Chen Y, Lischinski D, Cohen-Or D, Chen B, Tu C (2020) Do-conv: Depthwise over-parameterized convolutional layer. arXiv preprint arXiv:2006.12030"},{"key":"330_CR37","doi-asserted-by":"crossref","unstructured":"Ding X, Guo Y, Ding G, Han J (2019) Acnet: Strengthening the kernel skeletons for powerful cnn via asymmetric convolution blocks. Proceedings of the IEEE\/CVF international conference on computer vision. 2019:1911-1920","DOI":"10.1109\/ICCV.2019.00200"},{"key":"330_CR38","doi-asserted-by":"crossref","unstructured":"Everingham M, Gool LV, Williams C, Winn J, Zisserman A (2010) The pascal visual object classes (voc) challenge. International journal of computer vision  88(2):303-338","DOI":"10.1007\/s11263-009-0275-4"},{"key":"330_CR39","unstructured":"Krizhevsky A (2009) Learning multiple layers of features from tiny images. 
Technical report\u00a02009:7"},{"key":"330_CR40","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. 2009 IEEE conference on computer vision and pattern recognition Ieee\u00a02009:248-255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"330_CR41","unstructured":"Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems\u00a02015:28"},{"key":"330_CR42","unstructured":"Chen LC, Papandreou G, Schroff F, Adam H (2017) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017"},{"key":"330_CR43","doi-asserted-by":"crossref","unstructured":"Chen LC, Zhu Y, Papandreou G, Schroff F, Adam H (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European conference on computer vision (ECCV) 2018:801-818","DOI":"10.1007\/978-3-030-01234-2_49"}],"container-title":["Journal of Cloud 
Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13677-022-00330-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13677-022-00330-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13677-022-00330-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,29]],"date-time":"2022-09-29T05:13:27Z","timestamp":1664428407000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofcloudcomputing.springeropen.com\/articles\/10.1186\/s13677-022-00330-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,29]]},"references-count":43,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["330"],"URL":"https:\/\/doi.org\/10.1186\/s13677-022-00330-5","relation":{},"ISSN":["2192-113X"],"issn-type":[{"value":"2192-113X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,29]]},"assertion":[{"value":"23 July 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 September 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 September 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"54"}}