{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T16:59:19Z","timestamp":1777568359674,"version":"3.51.4"},"reference-count":32,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2022,1,4]],"date-time":"2022-01-04T00:00:00Z","timestamp":1641254400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,1,4]],"date-time":"2022-01-04T00:00:00Z","timestamp":1641254400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U1603261"],"award-info":[{"award-number":["U1603261"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100009110","name":"Natural Science Foundation of Xinjiang Province","doi-asserted-by":"publisher","award":["2016D01A080"],"award-info":[{"award-number":["2016D01A080"]}],"id":[{"id":"10.13039\/100009110","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["SIViP"],"published-print":{"date-parts":[[2022,7]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In image classification field, existing work tends to modify the network structure to obtain higher accuracy or faster speed. However, some studies have found that the neural network usually has texture bias effect, which means that the neural network is more sensitive to the texture information than the shape information. Based on such phenomenon, we propose a new way to improve network performance by making full use of gradient information. The dual features network (DuFeNet) is proposed in this paper. 
In DuFeNet, one sub-network is used to learn the information of gradient features, and the other is a traditional neural network with texture bias. The structure of DuFeNet is easy to implement within an existing network architecture. The experimental results clearly show that DuFeNet can achieve better accuracy in image classification and detection. It can increase the shape bias of the network, bringing it closer to human visual perception. Moreover, DuFeNet can be used without modifying the structure of the original network, at a low additional parameter cost.<\/jats:p>","DOI":"10.1007\/s11760-021-02065-3","type":"journal-article","created":{"date-parts":[[2022,1,4]],"date-time":"2022-01-04T00:03:41Z","timestamp":1641254621000},"page":"1153-1160","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["DuFeNet: Improve the Accuracy and Increase Shape Bias of Neural Network Models"],"prefix":"10.1007","volume":"16","author":[{"given":"Zecong","family":"Ye","sequence":"first","affiliation":[]},{"given":"Zhiqiang","family":"Gao","sequence":"additional","affiliation":[]},{"given":"Xiaolong","family":"Cui","sequence":"additional","affiliation":[]},{"given":"Yaojie","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Nanliang","family":"Shan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,1,4]]},"reference":[{"key":"2065_CR1","unstructured":"Brochu, F.: Increasing shape bias in imagenet-trained networks using transfer learning and domain-adversarial methods. arXiv preprint arXiv:1907.12892 (2019)"},{"key":"2065_CR2","doi-asserted-by":"publisher","first-page":"679","DOI":"10.1109\/TPAMI.1986.4767851","volume":"6","author":"J Canny","year":"1986","unstructured":"Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 6, 679\u2013698 (1986)","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"2065_CR3","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248\u2013255. IEEE (2009)","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"2065_CR4","unstructured":"Everingham, M., Van\u00a0Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http:\/\/www.pascal-network.org\/challenges\/VOC\/voc2012\/workshop\/index.html"},{"key":"2065_CR5","unstructured":"Gao, W., Zhang, X., Yang, L., Liu, H.: An improved sobel edge detection. In: 2010 3rd International Conference on Computer Science and Information Technology, vol.\u00a05, pp. 67\u201371. IEEE (2010)"},{"key":"2065_CR6","unstructured":"Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., Brendel, W.: Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231 (2018)"},{"key":"2065_CR7","unstructured":"Han, K., Wang, Y., Chen, H., Chen, X., Guo, J., Liu, Z., Tang, Y., Xiao, A., Xu, C., Xu, Y., Yang, Z., Zhang, Y., Tao, D.: A survey on visual transformer. arXiv preprint arXiv:2012.12556 (2020)"},{"key":"2065_CR8","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"key":"2065_CR9","doi-asserted-by":"crossref","unstructured":"Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et\u00a0al.: Searching for mobilenetv3. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 
1314\u20131324 (2019)","DOI":"10.1109\/ICCV.2019.00140"},{"key":"2065_CR10","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)"},{"key":"2065_CR11","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132\u20137141 (2018)","DOI":"10.1109\/CVPR.2018.00745"},{"key":"2065_CR12","unstructured":"JDAI-CV: centerx. https:\/\/github.com\/JDAI-CV\/centerX (2020)"},{"key":"2065_CR13","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2939201","author":"L Jiao","year":"2019","unstructured":"Jiao, L., Zhang, F., Liu, F., Yang, S., Li, L., Feng, Z., Qu, R.: A survey of deep learning-based object detection. IEEE Access (2019). https:\/\/doi.org\/10.1109\/ACCESS.2019.2939201","journal-title":"IEEE Access"},{"key":"2065_CR14","unstructured":"Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)"},{"key":"2065_CR15","doi-asserted-by":"crossref","unstructured":"Kriegeskorte, Nikolaus: Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1(1), 417 (2015)","DOI":"10.1146\/annurev-vision-082114-035447"},{"key":"2065_CR16","unstructured":"Krizhevsky, A., Hinton, G., et\u00a0al.: Learning multiple layers of features from tiny images (2009)"},{"key":"2065_CR17","unstructured":"Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 
1097\u20131105 (2012)"},{"key":"2065_CR18","doi-asserted-by":"publisher","first-page":"759","DOI":"10.1167\/16.12.759","volume":"16","author":"J Kubilius","year":"2016","unstructured":"Kubilius, J., Bracci, S., Op de Beeck, H.: Deep neural networks as a computational model for human shape sensitivity. J. Vis. 16, 759 (2016). https:\/\/doi.org\/10.1167\/16.12.759","journal-title":"J. Vis."},{"key":"2065_CR19","doi-asserted-by":"publisher","first-page":"299","DOI":"10.1016\/0885-2014(88)90014-7","volume":"3","author":"B Landau","year":"1988","unstructured":"Landau, B., Smith, L., Jones, S.: The importance of shape in early lexical learning. Cognit. Dev. 3, 299\u2013321 (1988). https:\/\/doi.org\/10.1016\/0885-2014(88)90014-7","journal-title":"Cognit. Dev."},{"key":"2065_CR20","doi-asserted-by":"publisher","unstructured":"Long, J., Shelhamer, E., Darrell, T.: Fully Convolutional Networks for Semantic Segmentation, pp. 3431\u20133440 (2015). https:\/\/doi.org\/10.1109\/CVPR.2015.7298965","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"2065_CR21","doi-asserted-by":"crossref","unstructured":"Ma, N., Zhang, X., Zheng, H.T., Sun, J.: Shufflenet v2: practical guidelines for efficient CNN architecture design. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 116\u2013131 (2018)","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"2065_CR22","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et\u00a0al.: Pytorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, pp. 8026\u20138037 (2019)"},{"issue":"9","key":"2065_CR23","doi-asserted-by":"publisher","first-page":"2352","DOI":"10.1162\/neco_a_00990","volume":"29","author":"W Rawat","year":"2017","unstructured":"Rawat, W., Wang, Z.: Deep convolutional neural networks for image classification: a comprehensive review. Neural Comput. 
29(9), 2352\u20132449 (2017)","journal-title":"Neural Comput."},{"key":"2065_CR24","unstructured":"Ren, P., Xiao, Y., Chang, X., Huang, P.Y., Li, Z., Chen, X., Wang, X.: A comprehensive survey of neural architecture search: challenges and solutions. arXiv preprint arXiv:2006.02903 (2020)"},{"key":"2065_CR25","doi-asserted-by":"publisher","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: Inverted Residuals and Linear Bottlenecks, pp. 4510\u20134520 (2018). https:\/\/doi.org\/10.1109\/CVPR.2018.00474","DOI":"10.1109\/CVPR.2018.00474"},{"key":"2065_CR26","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618\u2013626 (2017)","DOI":"10.1109\/ICCV.2017.74"},{"key":"2065_CR27","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)"},{"key":"2065_CR28","doi-asserted-by":"publisher","unstructured":"Xiao, B., Jaiswal, M., Poovendran, R.: Assessing Shape Bias Property of Convolutional Neural Networks, pp. 2004\u201320048 (2018). https:\/\/doi.org\/10.1109\/CVPRW.2018.00258","DOI":"10.1109\/CVPRW.2018.00258"},{"key":"2065_CR29","doi-asserted-by":"crossref","unstructured":"Yang, L., Wu, X., Zhao, D., Li, H., Zhai, J.: An improved prewitt algorithm for edge detection based on noised image. In: 2011 4th International Congress on Image and Signal Processing, vol.\u00a03, pp. 1197\u20131200. IEEE (2011)","DOI":"10.1109\/CISP.2011.6100495"},{"key":"2065_CR30","unstructured":"Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. 
arXiv preprint arXiv:1511.07122 (2015)"},{"key":"2065_CR31","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848\u20136856 (2018)","DOI":"10.1109\/CVPR.2018.00716"},{"key":"2065_CR32","unstructured":"Zhou, X., Wang, D., Kr\u00e4henb\u00fchl, P.: Objects as points. arXiv preprint arXiv:1904.07850 (2019)"}],"container-title":["Signal, Image and Video Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11760-021-02065-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11760-021-02065-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11760-021-02065-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,5,19]],"date-time":"2022-05-19T06:19:59Z","timestamp":1652941199000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11760-021-02065-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,4]]},"references-count":32,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,7]]}},"alternative-id":["2065"],"URL":"https:\/\/doi.org\/10.1007\/s11760-021-02065-3","relation":{},"ISSN":["1863-1703","1863-1711"],"issn-type":[{"value":"1863-1703","type":"print"},{"value":"1863-1711","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,4]]},"assertion":[{"value":"10 March 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 August 
2021","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 October 2021","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 January 2022","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}