{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T22:00:10Z","timestamp":1769551210162,"version":"3.49.0"},"reference-count":41,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2023,12,15]],"date-time":"2023-12-15T00:00:00Z","timestamp":1702598400000},"content-version":"vor","delay-in-days":348,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Study on the Effectiveness of RF Data and Recognition Models in Wireless Sensing","award":["202203021222049"],"award-info":[{"award-number":["202203021222049"]}]},{"name":"Study on the Effectiveness of RF Data and Recognition Models in Wireless Sensing","award":["202101010101018"],"award-info":[{"award-number":["202101010101018"]}]},{"name":"Study on the Effectiveness of RF Data and Recognition Models in Wireless Sensing","award":["202201010101004"],"award-info":[{"award-number":["202201010101004"]}]},{"name":"Shanxi Province Major Scientific and Technological Project","award":["202203021222049"],"award-info":[{"award-number":["202203021222049"]}]},{"name":"Shanxi Province Major Scientific and Technological Project","award":["202101010101018"],"award-info":[{"award-number":["202101010101018"]}]},{"name":"Shanxi Province Major Scientific and Technological Project","award":["202201010101004"],"award-info":[{"award-number":["202201010101004"]}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["International Journal of Intelligent Systems"],"published-print":{"date-parts":[[2023,1]]},"abstract":"<jats:p>Data\u2010hunger is a persistent challenge in machine learning, particularly in the field of image processing based on convolutional neural networks (CNNs). This study systematically investigates the factors contributing to data\u2010hunger in machine\u2010learning\u2010based image\u2010processing algorithms. 
The results revealed that the proliferation of model parameters, the lack of interpretability, and the complexity of model structure are significant factors influencing data\u2010hunger. Based on these findings, this paper introduces a novel semi\u2010white\u2010box neural network model construction strategy. This approach effectively reduces the number of model parameters while enhancing the interpretability of model components. It accomplishes this by constraining uninterpretable processes within the model and leveraging prior knowledge of image processing for model construction. Rather than relying on a single all\u2010in\u2010one model, a semi\u2010white\u2010box model is composed of multiple smaller models, each responsible for extracting fundamental semantic features. The final output is derived from these features and prior knowledge. The proposed strategy holds the potential to substantially decrease data requirements under specific data source conditions while improving the interpretability of model components. Validation experiments are conducted on well\u2010established datasets, including MNIST, Fashion MNIST, CIFAR, and generated data. The results demonstrate the superiority of the semi\u2010white\u2010box strategy over the traditional all\u2010in\u2010one approach in terms of accuracy when trained with equivalent data volumes. Impressively, on the tested datasets, a simplified semi\u2010white\u2010box model achieves performance close to that of ResNet while utilizing a small number of parameters. Furthermore, the semi\u2010white\u2010box strategy offers improved interpretability and parameter reusability features that are challenging to achieve with the all\u2010in\u2010one approach. 
In conclusion, this paper contributes to mitigating data\u2010hunger challenges in machine\u2010learning\u2010based image processing through the introduction of a novel semi\u2010white\u2010box model construction strategy, backed by empirical evidence of its effectiveness.<\/jats:p>","DOI":"10.1155\/2023\/9227348","type":"journal-article","created":{"date-parts":[[2023,12,15]],"date-time":"2023-12-15T23:05:06Z","timestamp":1702681506000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Semi\u2010White\u2010Box Strategy: Enhancing Data Efficiency and Interpretability of Convolutional Neural Networks in Image Processing"],"prefix":"10.1155","volume":"2023","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9099-9695","authenticated-orcid":false,"given":"Qi","family":"Wang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7755-7550","authenticated-orcid":false,"given":"Jianchao","family":"Zeng","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9086-7366","authenticated-orcid":false,"given":"Pinle","family":"Qin","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1274-5224","authenticated-orcid":false,"given":"Pengcheng","family":"Zhao","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4506-9963","authenticated-orcid":false,"given":"Rui","family":"Chai","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1999-2965","authenticated-orcid":false,"given":"Zhaomin","family":"Yang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0001-7274-0521","authenticated-orcid":false,"given":"Jianshan","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2023,12,15]]},"reference":[{"key":"e_1_2_9_1_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-021-00419-9"},{"key":"e_
1_2_9_2_2","doi-asserted-by":"publisher","DOI":"10.1186\/1471-2288-14-137"},{"key":"e_1_2_9_3_2","doi-asserted-by":"publisher","DOI":"10.3390\/technologies11020040"},{"key":"e_1_2_9_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},{"key":"e_1_2_9_5_2","unstructured":"BrownT. B. MannB. RyderN. SubbiahM. KaplanJ. DhariwalP. NeelakantanA. ShyamP. SastryG. AskellA. AgarwalS. HerbertVossA. KruegerG. HenighanT. ChildR. RameshA. ZieglerD. M. WuJ. WinterC. HesseC. ChenM. SiglerE. LitwinM. GrayS. ChessB. ClarkJ. BernerC. McCandlishS. RadfordA. SutskeverI. andAmodeiD. Language models are few-shot learners 2020."},{"key":"e_1_2_9_6_2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0197-0"},{"key":"e_1_2_9_7_2","doi-asserted-by":"publisher","DOI":"10.1111\/1754-9485.13261"},{"key":"e_1_2_9_8_2","unstructured":"GoodfellowI. J. Pouget-AbadieJ. MirzaM. XuB. Warde-FarleyD. OzairS. CourvilleA. andBengioY. Generative adversarial networks 2014."},{"key":"e_1_2_9_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2021.3049346"},{"key":"e_1_2_9_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compag.2022.107208"},{"key":"e_1_2_9_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.compbiomed.2022.105382"},{"key":"e_1_2_9_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3527850"},{"key":"e_1_2_9_13_2","doi-asserted-by":"publisher","DOI":"10.1088\/1361-6501\/ab6ade"},{"key":"e_1_2_9_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108139"},{"key":"e_1_2_9_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/tim.2021.3111977"},{"key":"e_1_2_9_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2023.3292075"},{"key":"e_1_2_9_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-022-10183-8"},{"key":"e_1_2_9_18_2","volume-title":"Introduction to Mathematical Statistics","author":"Hogg R. V.","year":"2019"},{"key":"e_1_2_9_19_2","doi-asserted-by":"crossref","unstructured":"ValiantL. G. 
A theory of the learnable Proceedings of the sixteenth annual ACM symposium on Theory of computing- STOC December 1984 New York NY USA 436\u2013445 https:\/\/doi.org\/10.1145\/800057.808710 2-s2.0-85049646507.","DOI":"10.1145\/800057.808710"},{"key":"e_1_2_9_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-7138-7"},{"key":"e_1_2_9_21_2","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9781107298019"},{"key":"e_1_2_9_22_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8030292"},{"key":"e_1_2_9_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/access.2019.2956508"},{"key":"e_1_2_9_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2021.3059968"},{"key":"e_1_2_9_25_2","unstructured":"ChengH. ZhangM. andShiJ. Q. A survey on deep neural network pruning-taxonomy comparison analysis and recommendations 2023."},{"key":"e_1_2_9_26_2","doi-asserted-by":"crossref","unstructured":"FangG. MaX. SongM. MiM. B. andWangX. Depgraph: towards any structural pruning Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition June 2023 Vancouver Canada 16091\u201316101.","DOI":"10.1109\/CVPR52729.2023.01544"},{"key":"e_1_2_9_27_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.110386"},{"key":"e_1_2_9_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/jsac.2023.3242704"},{"key":"e_1_2_9_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/tcomm.2023.3277878"},{"key":"e_1_2_9_30_2","unstructured":"BakhtiarniaA. Milo\u02c7sevi\u00b4cN. ZhangQ. Bajovi\u00b4cD. andIosifidisA. Dynamic split computing for efficient deep edge intelligence 2022."},{"key":"e_1_2_9_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/tifs.2023.3274391"},{"key":"e_1_2_9_32_2","doi-asserted-by":"publisher","DOI":"10.1146\/annurev.neuro.24.1.1193"},{"key":"e_1_2_9_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_2_9_34_2","unstructured":"XiaoH. RasulK. andVollgrafR. 
Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms 2017."},{"key":"e_1_2_9_35_2","unstructured":"KrizhevskyA. Learning multiple layers of features from tiny images 2009 https:\/\/www.cs.toronto.edu\/%7Ekriz\/learning-features-2009-TR.pdf."},{"key":"e_1_2_9_36_2","volume-title":"Very Deep Convolutional Networks for Large-Scale Image Recognition","author":"Simonyan K.","year":"2014"},{"key":"e_1_2_9_37_2","doi-asserted-by":"crossref","unstructured":"HeK. ZhangX. RenS. andSunJ. Deep residual learning for image recognition Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2015 Las Vegas NV USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_9_38_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4842-6168-2_10"},{"key":"e_1_2_9_39_2","unstructured":"TanM.andLeQ. Efficientnetv2: smaller models and faster training Proceedings of the International Conference on Machine Learning July 2021 PMLR 10096\u201310106."},{"key":"e_1_2_9_40_2","unstructured":"DosovitskiyA. BeyerL. KolesnikovA. WeissenbornD. ZhaiX. UnterthinerT. DehghaniM. MindererM. HeigoldG. GellyS. UszkoreitJ. andHoulsbyN. An image is worth 16x16 words: transformers for image recognition at scale 2020."},{"key":"e_1_2_9_41_2","doi-asserted-by":"crossref","unstructured":"LiuZ. HuH. LinY. YaoZ. XieZ. WeiY. NingJ. CaoY. ZhangZ. DongL. WeiF. andGuoB. 
Swin transformer v2: scaling up capacity and resolution Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition June 2022 New Orleans LA USA 12009\u201312019.","DOI":"10.1109\/CVPR52688.2022.01170"}],"container-title":["International Journal of Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijis\/2023\/9227348.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijis\/2023\/9227348.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2023\/9227348","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,31]],"date-time":"2024-12-31T05:32:18Z","timestamp":1735623138000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2023\/9227348"}},"subtitle":[],"editor":[{"given":"Alexander","family":"Ho\u0161ovsk\u00fd","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2023,1]]},"references-count":41,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,1]]}},"alternative-id":["10.1155\/2023\/9227348"],"URL":"https:\/\/doi.org\/10.1155\/2023\/9227348","archive":["Portico"],"relation":{},"ISSN":["0884-8173","1098-111X"],"issn-type":[{"value":"0884-8173","type":"print"},{"value":"1098-111X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1]]},"assertion":[{"value":"2022-12-07","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-11-30","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2023-12-15","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"9227348"}}