{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,21]],"date-time":"2025-12-21T06:24:43Z","timestamp":1766298283489,"version":"3.37.3"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,4,13]],"date-time":"2022-04-13T00:00:00Z","timestamp":1649808000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,13]],"date-time":"2022-04-13T00:00:00Z","timestamp":1649808000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61872425"],"award-info":[{"award-number":["61872425"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Research Clusters program of Tokushima University","award":["2003002"],"award-info":[{"award-number":["2003002"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2023,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>We propose an Inner-Imaging three-dimensional (3D) attentional feature fusion module, a simple yet effective approach for residual networks. In our attention module, we constructed a 3D soft attention feature map to refine the input feature. The map fuses the attentional features from different dimensions, including the channel and spatial axes, to create a 3D attention map. Then, we implemented a feature fusion module to further fuse the attentional features. Lastly, the attention module outputs a 3D soft attention map that is applied to the residual branch. 
The attention module can also model the relationships between attentional features from different dimensions and enable interaction between them. This allows our attention module to acquire more attentional features. To demonstrate the effectiveness of our method, extensive experiments were conducted on several computer vision benchmark datasets, including the ImageNet 2012 and Microsoft COCO (MS COCO) 2017 datasets. The experimental results show that our method outperformed the baseline methods on image classification, object detection, and instance segmentation tasks.<\/jats:p>","DOI":"10.1007\/s10489-022-03225-9","type":"journal-article","created":{"date-parts":[[2022,4,13]],"date-time":"2022-04-13T20:10:46Z","timestamp":1649880646000},"page":"141-152","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Inner-imaging 3D attention module for residual network"],"prefix":"10.1007","volume":"53","author":[{"given":"Wenjie","family":"Liu","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0236-8135","authenticated-orcid":false,"given":"Guoqing","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Fuji","family":"Ren","sequence":"additional","affiliation":[]},{"given":"Quan","family":"Shi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,13]]},"reference":[{"key":"3225_CR1","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"3225_CR2","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S et al (2016) Identity mappings in deep residual networks. In: European conference on computer vision. 
Springer, Cham, pp 630\u2013645","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"3225_CR3","doi-asserted-by":"crossref","unstructured":"Szegedy C, Liu W, Jia Y et al (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1\u20139","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"3225_CR4","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L et al (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"key":"3225_CR5","doi-asserted-by":"crossref","unstructured":"Pang Y, Zhao X, Zhang L et al (2020) Multi-scale interactive network for salient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 9413\u20139422","DOI":"10.1109\/CVPR42600.2020.00943"},{"key":"3225_CR6","doi-asserted-by":"crossref","unstructured":"Carion N, Massa F, Synnaeve G et al (2020) End-to-end object detection with transformers. In: European conference on computer vision. Springer, Cham, pp 213\u2013229","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"3225_CR7","doi-asserted-by":"crossref","unstructured":"Li X, Lai S, Qian X (2021) DBCFace: Towards pure convolutional neural network face detection. IEEE Trans Circ Syst Video Technol","DOI":"10.1109\/TCSVT.2021.3082635"},{"key":"3225_CR8","first-page":"91","volume":"28","author":"S Ren","year":"2015","unstructured":"Ren S, He K, Girshick R et al (2015) Faster r-cnn: Towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 28:91\u201399","journal-title":"Adv Neural Inf Process Syst"},{"key":"3225_CR9","doi-asserted-by":"crossref","unstructured":"Wang Y, Xu Z, Wang X et al (2021) End-to-end video instance segmentation with transformers. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8741\u20138750","DOI":"10.1109\/CVPR46437.2021.00863"},{"issue":"3","key":"3225_CR10","doi-asserted-by":"publisher","first-page":"1066","DOI":"10.1109\/TCSVT.2020.2995122","volume":"31","author":"K Lin","year":"2020","unstructured":"Lin K, Wang L, Luo K et al (2020) Cross-domain complementary learning using pose for multi-person part segmentation. IEEE Trans Circ Syst Video Technol 31(3):1066\u20131078","journal-title":"IEEE Trans Circ Syst Video Technol"},{"key":"3225_CR11","doi-asserted-by":"crossref","unstructured":"Dong J, Cong Y, Sun G et al (2020) Weakly-supervised cross-domain adaptation for endoscopic lesions segmentation. IEEE Trans Circ Syst Video Technol 31(5)","DOI":"10.1109\/TCSVT.2020.3016058"},{"key":"3225_CR12","unstructured":"Rao Y, Zhao W, Zhu Z et al (2021) Global filter networks for image classification. Adv Neural Inf Process Syst:34"},{"key":"3225_CR13","doi-asserted-by":"publisher","first-page":"157","DOI":"10.1016\/j.neucom.2021.06.009","volume":"458","author":"B Yang","year":"2021","unstructured":"Yang B, Wang L, Wong DF et al (2021) Context-aware self-attention networks for natural language processing. Neurocomputing 458:157\u2013169","journal-title":"Neurocomputing"},{"key":"3225_CR14","doi-asserted-by":"crossref","unstructured":"Galassi A, Lippi M, Torroni P (2020) Attention in natural language processing. IEEE Trans Neural Netw Learn Syst","DOI":"10.1109\/TNNLS.2020.3019893"},{"key":"3225_CR15","doi-asserted-by":"crossref","unstructured":"Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7132\u20137141","DOI":"10.1109\/CVPR.2018.00745"},{"key":"3225_CR16","doi-asserted-by":"crossref","unstructured":"Woo S, Park J, Lee JY et al (2018) Cbam: Convolutional block attention module. 
In: Proceedings of the European conference on computer vision (ECCV), pp 3\u201319","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"3225_CR17","doi-asserted-by":"crossref","unstructured":"Li X, Wang W, Hu X et al (2019) Selective kernel networks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 510\u2013519","DOI":"10.1109\/CVPR.2019.00060"},{"key":"3225_CR18","doi-asserted-by":"crossref","unstructured":"Cao Y, Xu J, Lin S et al (2019) Gcnet: Non-local networks meet squeeze-excitation networks and beyond. In: Proceedings of the IEEE\/CVF international conference on computer vision workshops, pp 0\u20130","DOI":"10.1109\/ICCVW.2019.00246"},{"key":"3225_CR19","doi-asserted-by":"crossref","unstructured":"Wang Q, Wu B et al (2020) ECA-Net: efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition","DOI":"10.1109\/CVPR42600.2020.01155"},{"issue":"6","key":"3225_CR20","doi-asserted-by":"publisher","first-page":"1412","DOI":"10.1109\/TMM.2018.2877886","volume":"21","author":"L Wu","year":"2018","unstructured":"Wu L, Wang Y, Gao J et al (2018) Where-and-when to look: Deep siamese attention networks for video-based person re-identification. IEEE Trans Multimed 21(6):1412\u20131424","journal-title":"IEEE Trans Multimed"},{"key":"3225_CR21","doi-asserted-by":"publisher","first-page":"6963","DOI":"10.1109\/TIP.2020.2995272","volume":"29","author":"G Chen","year":"2020","unstructured":"Chen G, Lu J, Yang M et al (2020) Learning recurrent 3D attention for video-based person re-identification. 
IEEE Trans Image Process 29:6963\u20136976","journal-title":"IEEE Trans Image Process"},{"key":"3225_CR22","doi-asserted-by":"publisher","first-page":"301","DOI":"10.1016\/j.neucom.2019.10.054","volume":"377","author":"J Guan","year":"2020","unstructured":"Guan J, Lai R, Xiong A et al (2020) Fixed pattern noise reduction for infrared images based on cascade residual attention CNN. Neurocomputing 377:301\u2013313","journal-title":"Neurocomputing"},{"key":"3225_CR23","doi-asserted-by":"publisher","first-page":"340","DOI":"10.1016\/j.neucom.2020.06.014","volume":"411","author":"J Li","year":"2020","unstructured":"Li J, Jin K, Zhou D et al (2020) Attention mechanism-based CNN for facial expression recognition. Neurocomputing 411:340\u2013350","journal-title":"Neurocomputing"},{"key":"3225_CR24","doi-asserted-by":"crossref","unstructured":"Liu W, Wu G, Ren F (2020) Deep multi-branch fusion residual network for insect pest recognition. IEEE Trans Cogn Dev Syst","DOI":"10.1109\/TCDS.2020.2993060"},{"key":"3225_CR25","doi-asserted-by":"publisher","first-page":"355","DOI":"10.1016\/j.neunet.2021.04.013","volume":"141","author":"Z Zheng","year":"2021","unstructured":"Zheng Z, Yu Z, Wu Y et al (2021) Generative adversarial network with multi-branch discriminator for imbalanced cross-species image-to-image translation. Neural Netw 141:355\u2013371","journal-title":"Neural Netw"},{"key":"3225_CR26","doi-asserted-by":"crossref","unstructured":"Hern\u00e1ndez-Luquin F, Escalante HJ (2021) Multi-branch deep radial basis function networks for facial emotion recognition. Neural Comput and Appl:1\u201315","DOI":"10.1007\/s00521-021-06420-w"},{"key":"3225_CR27","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S et al (2016) Rethinking the inception architecture for computer vision. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"3225_CR28","doi-asserted-by":"crossref","unstructured":"Szegedy C, Ioffe S, Vanhoucke V et al (2017) Inception- v4 inception-resnet and the impact of residual connections on learning. In: Thirty-first AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v31i1.11231"},{"issue":"4","key":"3225_CR29","doi-asserted-by":"publisher","first-page":"300","DOI":"10.26599\/BDMA.2020.9020021","volume":"3","author":"W Liu","year":"2020","unstructured":"Liu W, Wu G, Ren F et al (2020) DFF-ResNet: An insect pest recognition model based on residual networks. Big Data Min Analytics 3(4):300\u2013310","journal-title":"Big Data Min Analytics"},{"key":"3225_CR30","doi-asserted-by":"publisher","first-page":"122758","DOI":"10.1109\/ACCESS.2019.2938194","volume":"7","author":"F Ren","year":"2019","unstructured":"Ren F, Liu W, Wu G (2019) Feature reuse residual networks for insect pest recognition. IEEE Access 7:122758\u2013122768","journal-title":"IEEE Access"},{"key":"3225_CR31","doi-asserted-by":"crossref","unstructured":"Lin TY, Doll\u00e1r P, Girshick R et al (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2117\u20132125","DOI":"10.1109\/CVPR.2017.106"},{"key":"3225_CR32","first-page":"8026","volume":"32","author":"A Paszke","year":"2019","unstructured":"Paszke A, Gross S, Massa F et al (2019) Pytorch: An imperative style, high-performance deep learning library. Adv Neural Inf Process Syst 32:8026\u20138037","journal-title":"Adv Neural Inf Process Syst"},{"key":"3225_CR33","doi-asserted-by":"crossref","unstructured":"Selvaraju RR, Cogswell M, Das A et al (2017) Grad-cam: Visual explanations from deep networks via gradient-based localization. 
In: Proceedings of the IEEE international conference on computer vision, pp 618\u2013626","DOI":"10.1109\/ICCV.2017.74"},{"key":"3225_CR34","doi-asserted-by":"publisher","first-page":"107405","DOI":"10.1016\/j.asoc.2021.107405","volume":"108","author":"Y Wu","year":"2021","unstructured":"Wu Y, Jiang X, Fang Z et al (2021) Multi-modal 3D object detection by 2D-guided precision anchor proposal and multi-layer fusion. Appl Soft Comput 108:107405","journal-title":"Appl Soft Comput"},{"issue":"4","key":"3225_CR35","doi-asserted-by":"publisher","first-page":"112","DOI":"10.1109\/MMUL.2020.2999464","volume":"27","author":"H Wang","year":"2020","unstructured":"Wang H, Peng J, Chen D et al (2020) Attribute-guided feature learning network for vehicle reidentification. IEEE MultiMedia 27(4):112\u2013121","journal-title":"IEEE MultiMedia"},{"key":"3225_CR36","doi-asserted-by":"crossref","unstructured":"Wang H, Wang Y, Zhang Z, et al. (2020) Kernelized multiview subspace analysis by self-weighted learning. IEEE Trans Multimed","DOI":"10.1109\/TMM.2020.3032023"},{"issue":"10","key":"3225_CR37","doi-asserted-by":"publisher","first-page":"10484","DOI":"10.1109\/TVT.2020.3009162","volume":"69","author":"H Wang","year":"2020","unstructured":"Wang H, Peng J, Zhao Y et al (2020) Multi-path deep CNNs for fine-grained car recognition. IEEE Trans Veh Technol 69(10):10484\u201310493","journal-title":"IEEE Trans Veh Technol"},{"key":"3225_CR38","doi-asserted-by":"publisher","first-page":"55","DOI":"10.1016\/j.neucom.2020.06.148","volume":"438","author":"H Wang","year":"2021","unstructured":"Wang H, Peng J, Jiang G et al (2021) Discriminative feature and dictionary learning with part-aware model for vehicle re-identification. 
Neurocomputing 438:55\u201362","journal-title":"Neurocomputing"},{"key":"3225_CR39","doi-asserted-by":"publisher","first-page":"98005","DOI":"10.1109\/ACCESS.2019.2929512","volume":"7","author":"Z Yan","year":"2019","unstructured":"Yan Z, Liu W, Wen S et al (2019) Multi-label image classification by feature attention network. IEEE Access 7:98005\u201398013","journal-title":"IEEE Access"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03225-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03225-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03225-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T04:34:03Z","timestamp":1672720443000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03225-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,13]]},"references-count":39,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,1]]}},"alternative-id":["3225"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03225-9","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"type":"print","value":"0924-669X"},{"type":"electronic","value":"1573-7497"}],"subject":[],"published":{"date-parts":[[2022,4,13]]},"assertion":[{"value":"9 January 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 April 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}