{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T01:37:02Z","timestamp":1760233022625,"version":"build-2065373602"},"reference-count":63,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2022,12,12]],"date-time":"2022-12-12T00:00:00Z","timestamp":1670803200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Slovak Grant Agency VEGA","award":["1\/0202\/23","2022-23-04"],"award-info":[{"award-number":["1\/0202\/23","2022-23-04"]}]},{"name":"Internal FEI STU Bratislava project","award":["1\/0202\/23","2022-23-04"],"award-info":[{"award-number":["1\/0202\/23","2022-23-04"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This paper explores extensions and restrictions of shallow convolutional neural networks with fixed kernels trained with a limited number of training samples. We extend the work recently done in research on Receptive Field Neural Networks (RFNN) and show their behaviour using different bases and step-by-step changes within the network architecture. To ensure the reproducibility of the results, we simplified the baseline RFNN architecture to a single-layer CNN network and introduced a deterministic methodology for RFNN training and evaluation. This methodology enabled us to evaluate the significance of changes using the (recently widely used in neural networks) Bayesian comparison. The results indicate that a change in the base may have less of an effect on the results than re-training using another seed. We show that the simplified network with tested bases has similar performance to the chosen baseline RFNN architecture. 
The data also show the positive impact of energy normalization of the filters used, which improves classification accuracy even with randomly initialized filters.<\/jats:p>","DOI":"10.3390\/s22249743","type":"journal-article","created":{"date-parts":[[2022,12,13]],"date-time":"2022-12-13T03:32:32Z","timestamp":1670902352000},"page":"9743","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Structure and Base Analysis of Receptive Field Neural Networks in a Character Recognition Task"],"prefix":"10.3390","volume":"22","author":[{"given":"Jozef","family":"Goga","sequence":"first","affiliation":[{"name":"Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava, Slovakia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0617-5237","authenticated-orcid":false,"given":"Radoslav","family":"Vargic","sequence":"additional","affiliation":[{"name":"Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava, Slovakia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8490-5821","authenticated-orcid":false,"given":"Jarmila","family":"Pavlovicova","sequence":"additional","affiliation":[{"name":"Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava, Slovakia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5706-2259","authenticated-orcid":false,"given":"Slavomir","family":"Kajan","sequence":"additional","affiliation":[{"name":"Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovicova 3, 812 19 Bratislava, Slovakia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5106-4803","authenticated-orcid":false,"given":"Milos","family":"Oravec","sequence":"additional","affiliation":[{"name":"Faculty of Electrical Engineering and Information Technology, Slovak University of 
Technology, Ilkovicova 3, 812 19 Bratislava, Slovakia"}]}],"member":"1968","published-online":{"date-parts":[[2022,12,12]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.cviu.2017.05.007","article-title":"Systematic Evaluation of Convolution Neural Network Advances on the Imagenet","volume":"161","author":"Mishkin","year":"2017","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., and Garcia-Rodriguez, J. (2017). A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv.","DOI":"10.1016\/j.asoc.2018.05.018"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet Classification with Deep Convolutional Neural Networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Ozturk, S., Ozkaya, U., Akdemir, B., and Seyfi, L. (2018, January 1\u20133). Convolution Kernel Size Effect on Convolutional Neural Network in Histopathological Image Processing Applications. Proceedings of the 2018 International Symposium on Fundamentals of Electrical Engineering, ISFEE 2018, Bucharest, Romania.","DOI":"10.1109\/ISFEE.2018.8742484"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"6685","DOI":"10.1007\/s00521-019-04603-0","article-title":"Applying Depthwise Separable and Multi-Channel Convolutional Neural Networks of Varied Kernel Size on Semantic Trajectories","volume":"32","author":"Karatzoglou","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_6","unstructured":"Chollet, F. (2017). Deep Learning with Python, Manning."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Yao, H., Chuyi, L., Dan, H., and Weiyu, Y. (2016, January 8\u201310). 
Gabor Feature Based Convolutional Neural Network for Object Recognition in Natural Scene. Proceedings of the Proceedings\u20142016 3rd International Conference on Information Science and Control Engineering, ICISCE 2016, Beijing, China.","DOI":"10.1109\/ICISCE.2016.91"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","article-title":"Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead","volume":"1","author":"Rudin","year":"2019","journal-title":"Nat. Mach. Intell."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1186\/s40537-019-0197-0","article-title":"A Survey on Image Data Augmentation for Deep Learning","volume":"6","author":"Shorten","year":"2019","journal-title":"J. Big Data"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1186\/s40537-016-0043-6","article-title":"A Survey of Transfer Learning","volume":"3","author":"Weiss","year":"2016","journal-title":"J. Big Data"},{"key":"ref_11","first-page":"979","article-title":"Differential Data Augmentation Techniques for Medical Imaging Classification Tasks","volume":"2017","author":"Hussain","year":"2017","journal-title":"AMIA Annu. Symp. Proc."},{"key":"ref_12","unstructured":"Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_14","unstructured":"Lin, M., Chen, Q., and Yan, S. (2014, January 14\u201316). Network in Network. 
Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada. Conference Track Proceedings."},{"key":"ref_15","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). MobileNets. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Sifre, L., and Mallat, S. (2013, January 23\u201328). Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Portland, OR, USA.","DOI":"10.1109\/CVPR.2013.163"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Chang, S.Y., and Morgan, N. (2014, January 14\u201318). Robust CNN-Based Speech Recognition with Gabor Filter Kernels. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, Singapore.","DOI":"10.21437\/Interspeech.2014-226"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Li, J., Wang, T., Zhou, Y., Wang, Z., and Snoussi, H. (2017, January 26\u201328). Using Gabor Filter in 3D Convolutional Neural Networks for Human Action Recognition. Proceedings of the Chinese Control Conference, CCC, Dalian, China.","DOI":"10.23919\/ChiCC.2017.8029134"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Sarwar, S.S., Panda, P., and Roy, K. (2017, January 24\u201326). Gabor Filter Assisted Energy Efficient Fast Learning Convolutional Neural Networks. Proceedings of the International Symposium on Low Power Electronics and Design, Taipei, Taiwan.","DOI":"10.1109\/ISLPED.2017.8009202"},{"key":"ref_20","unstructured":"Shelhamer, E., Wang, D., and Darrell, T. (2019, January 6\u20139). Efficient Receptive Field Learning by Dynamic Gaussian Structure. 
Proceedings of the ICLR 2019 Workshop, New Orleans, LA, USA."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"4357","DOI":"10.1109\/TIP.2018.2835143","article-title":"Gabor Convolutional Networks","volume":"27","author":"Luan","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Tabernik, D., Kristan, M., and Leonardis, A. (2018, January 18\u201323). Spatially-Adaptive Filter Units for Deep Neural Networks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00978"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"2049","DOI":"10.1007\/s11263-019-01282-1","article-title":"Spatially-Adaptive Filter Units for Compact and Efficient Deep Neural Networks","volume":"128","author":"Tabernik","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_24","first-page":"1229","article-title":"A Survey of Model Compression for Deep Neural Networks","volume":"41","author":"Li","year":"2019","journal-title":"Gongcheng Kexue Xuebao\/Chinese J. Eng."},{"key":"ref_25","unstructured":"Blalock, D., Gonzalez Ortiz, J.J., Frankle, J., and Guttag, J. What is the state of neural network pruning? In Proceedings of the 3rd MLSys Conference, Austin, TX, USA, 2\u20134 March 2020."},{"key":"ref_26","unstructured":"Jacobsen, J.H., Van Gemert, J., Lou, Z., and Smeulders, A.W.M. (July, January 26). Structured Receptive Fields in CNNs. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_27","unstructured":"Schlimbach, R.J. (2018). Investigating Scale in Receptive Fields Neural Networks. 
[Bachelor\u2019s Thesis, University of Amsterdam]."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"8342","DOI":"10.1109\/TIP.2021.3115001","article-title":"Resolution Learning in Deep Convolutional Networks Using Scale-Space Theory","volume":"30","author":"Pintea","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_29","unstructured":"Hilbert, A., Veeling, B.S., and Marquering, H.A. (2018, January 4\u20136). Data-Efficient Convolutional Neural Networks for Treatment Decision Support in Acute Ischemic Stroke 2018. Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely Connected Convolutional Networks. Proceedings of the Proceedings\u201430th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_31","unstructured":"Verkes, G. (2017). Receptive Fields Neural Networks Using the Gabor Kernel Family. [Bachelor\u2019s Thesis, University of Amsterdam]."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Labate, D., Safaripoorfatide, M., Karantzas, N., Prasad, S., and Foroozandeh Shahraki, F. (2019, January 11\u201315). Structured Receptive Field Networks and Applications to Hyperspectral Image Classification. Proceedings of the Wavelets Sparsity XVIII, San Diego, CA, USA.","DOI":"10.1117\/12.2527712"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Karantzas, N., Safari, K., Haque, M., Sarmadi, S., and Papadakis, M. (2019, January 11\u201315). Compactly Supported Frame Wavelets and Applications in Convolutional Neural Networks. 
Proceedings of the Wavelets Sparsity XVIII, San Diego, CA, USA.","DOI":"10.1117\/12.2530342"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"108707","DOI":"10.1016\/j.patcog.2022.108707","article-title":"Harmonic Convolutional Networks Based on Discrete Cosine Transform","volume":"129","author":"Ulicny","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_35","unstructured":"Ulicny, M., Krylov, V.A., and Dahyot, R. (2019, January 9\u201312). Harmonic Networks for Image Classification. Proceedings of the British Machine Vision Conference (BMVC), Cardiff, UK."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Ulicny, M., Krylov, V.A., and Dahyot, R. Harmonic Networks with Limited Training Samples. In Proceedings of the 27th European Signal Processing Conference, EUSIPCO, Coru\u00f1a, Spain, 2\u20136 September 2019.","DOI":"10.23919\/EUSIPCO.2019.8902831"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Kumawat, S., and Raman, S. (2020, January 4\u20138). Depthwise-STFT Based Separable Convolutional Neural Networks. Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053898"},{"key":"ref_38","unstructured":"Tomen, N., Pintea, S.-L., and Van Gemert, J. (2021, January 18\u201324). Deep Continuous Networks. Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10324-10335."},{"key":"ref_39","unstructured":"Saldanha, N., Pintea, S.L., Van Gemert, J.C., and Tomen, N. (2021). Frequency Learning for Structured CNN Filters with Gaussian Fractional Derivatives. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"223","DOI":"10.1007\/s10851-021-01057-9","article-title":"Scale-Covariant and Scale-Invariant Gaussian Derivative Networks","volume":"64","author":"Lindeberg","year":"2022","journal-title":"J. Math. 
Imaging Vis."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Elmoataz, A., Fadili, J., Qu\u00e9au, Y., Rabin, J., and Simon, L. (2021). Scale Space and Variational Methods in Computer Vision: 8th International Conference, SSVM 2021, Virtual Event, May 16\u201320, 2021, Proceedings, Springer. Lecture Notes in Computer Science.","DOI":"10.1007\/978-3-030-75549-2"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"104328","DOI":"10.1109\/ACCESS.2022.3210710","article-title":"Principal Components of Neural Convolution Filters","volume":"10","author":"Fukuzaki","year":"2022","journal-title":"IEEE Access"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Penaud\u2013Polge, V., Velasco-Forero, S., and Angulo, J. (2022, January 16\u201319). Fully Trainable Gaussian Derivative Convolutional Layer. Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France.","DOI":"10.1109\/ICIP46576.2022.9897734"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Wei, H., Wang, Z., and Hua, G. (2021, January 28\u201331). Dynamically Mixed Group Convolution to Lighten Convolution Operation. Proceedings of the 2021 4th International Conference on Artificial Intelligence and Big Data, ICAIBD 2021, Chengdu, China.","DOI":"10.1109\/ICAIBD51990.2021.9459076"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the Proceedings\u201430th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_46","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury Google, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., and Antiga, L. (2019, January 8\u201314). PyTorch: An Imperative Style, High-Performance Deep Learning Library. 
Proceedings of the 33rd Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada."},{"key":"ref_47","unstructured":"LeCun, Y., Cortes, C., and Burges, C.J.C. (2022, October 01). The MNIST Database of Handwritten Digits. Courant Inst. Math. Sci., Available online: http:\/\/yann.lecun.com\/exdb\/mnist\/."},{"key":"ref_48","first-page":"2653","article-title":"Time for a Change: A Tutorial for Comparing Multiple Classifiers through Bayesian Analysis","volume":"18","author":"Benavoli","year":"2017","journal-title":"J. Mach. Learn. Res."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"1817","DOI":"10.1007\/s10994-017-5641-9","article-title":"Statistical Comparison of Classifiers through Bayesian Hierarchical Modelling","volume":"106","author":"Corani","year":"2017","journal-title":"Mach. Learn."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Nilsson, A., Smith, S., Ulm, G., Gustavsson, E., and Jirstrand, M. (2018, January 10). A Performance Evaluation of Federated Learning Algorithms. Proceedings of the DIDL 2018\u2014Proceedings of the 2nd Workshop on Distributed Infrastructures for Deep Learning, Part of Middleware 2018, Rennes, France.","DOI":"10.1145\/3286490.3286559"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"241","DOI":"10.1162\/tacl_a_00018","article-title":"Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results","volume":"6","author":"Crane","year":"2018","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"181","DOI":"10.1080\/00031305.1998.10480559","article-title":"Violin Plots: A Box Plot-Density Trace Synergism","volume":"52","author":"Hintze","year":"1998","journal-title":"Am. 
Stat."},{"key":"ref_53","first-page":"55","article-title":"Early Stopping\u2014But When?","volume":"7700","author":"Prechelt","year":"2012","journal-title":"Lect. Notes Comput. Sci."},{"key":"ref_54","unstructured":"Choi, D., Shallue, C.J., Nado, Z., Lee, J., Maddison, C.J., and Dahl, G.E. (2019). On Empirical Comparisons of Optimizers for Deep Learning. arXiv."},{"key":"ref_55","unstructured":"Zeiler, M.D. (2012). ADADELTA: An Adaptive Learning Rate Method. arXiv."},{"key":"ref_56","unstructured":"Kingma, D.P., and Ba, J.L. (2015, January 7\u20139). Adam: A Method for Stochastic Optimization. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015\u2014Conference Track Proceedings, San Diego, CA, USA."},{"key":"ref_57","unstructured":"Loshchilov, I., and Hutter, F. (2019, January 6\u20139). Decoupled Weight Decay Regularization. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_58","unstructured":"Luo, L., Xiong, Y., Liu, Y., and Sun, X. (2019, January 6\u20139). Adaptive Gradient Methods with Dynamic Bound of Learning Rate. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_59","unstructured":"Dozat, T. (2016, January 2\u20134). Incorporating Nesterov Momentum into Adam. Proceedings of the 4th International Conference on Learning Representations, ICLR 2016 \u2014Conference Track Proceedings, San Juan, Puerto Rico."},{"key":"ref_60","unstructured":"Shao, J., Hu, K., Wang, C., Xue, X., and Raj, B. (2020, January 6\u201312). Is Normalization Indispensable for Training Deep Neural Networks?. Proceedings of the Advances in Neural Information Processing Systems, Online."},{"key":"ref_61","unstructured":"Huang, L., Qin, J., Zhou, Y., Zhu, F., Liu, L., and Shao, L. (2020). Normalization Techniques in Training DNNs: Methodology, Analysis and Application. 
arXiv."},{"key":"ref_62","unstructured":"Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., and Ha, D. (1998). Deep Learning for Classical Japanese Literature. arXiv, Available online: https:\/\/github.com\/rois-codh\/kmnist."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"562","DOI":"10.1007\/978-3-030-58607-2_33","article-title":"LST-Net: Learning a Convolutional Neural Network with a Learnable Sparse Transform","volume":"12355","author":"Li","year":"2020","journal-title":"Lect. Notes Comput. Sci."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/24\/9743\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:39:55Z","timestamp":1760146795000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/24\/9743"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,12,12]]},"references-count":63,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["s22249743"],"URL":"https:\/\/doi.org\/10.3390\/s22249743","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,12,12]]}}}