{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,23]],"date-time":"2026-01-23T15:40:03Z","timestamp":1769182803106,"version":"3.49.0"},"reference-count":33,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2022,7,4]],"date-time":"2022-07-04T00:00:00Z","timestamp":1656892800000},"content-version":"vor","delay-in-days":184,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/doi.wiley.com\/10.1002\/tdm_license_1.1"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["51577086"],"award-info":[{"award-number":["51577086"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100010014","name":"Six Talent Peaks Project in Jiangsu Province","doi-asserted-by":"publisher","award":["TD-XNY004"],"award-info":[{"award-number":["TD-XNY004"]}],"id":[{"id":"10.13039\/501100010014","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Complexity"],"published-print":{"date-parts":[[2022,1]]},"abstract":"<jats:p>Few\u2010shot segmentation is a challenging task due to the limited class cues provided by a few of annotations. Discovering more class cues from known and unknown classes is the essential to few\u2010shot segmentation. Existing method generates class cues mainly from common cues intra new classes where the similarity between support images and query images is measured to locate the foreground regions. However, the support images are not sufficient enough to measure the similarity since one or a few of support mask cannot describe the object of new class with large variations. 
In this paper, we capture the class cues by considering all images in the unknown classes, i.e., not only the support images but also the query images are used to capture the foreground regions. Moreover, the class\u2010level labels in the known classes are also considered to capture the discriminative features of new classes. Both aspects are achieved via the class activation map, which is used as an attention map to improve feature extraction. A new few\u2010shot segmentation method based on mask transferring and the class activation map is proposed, and a new class activation map based on feature clustering is proposed to refine the class activation map. The proposed method is validated on the PASCAL VOC dataset. Experimental results demonstrate the effectiveness of the proposed method with larger mIoU values.<\/jats:p>","DOI":"10.1155\/2022\/4901746","type":"journal-article","created":{"date-parts":[[2022,7,4]],"date-time":"2022-07-04T20:05:07Z","timestamp":1656965107000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Few\u2010Shot Segmentation via Capturing Interclass and Intraclass Cues Using Class Activation Map"],"prefix":"10.1155","volume":"2022","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0643-9607","authenticated-orcid":false,"given":"Yan","family":"Zhao","sequence":"first","affiliation":[]},{"given":"Ganyun","family":"Lv","sequence":"additional","affiliation":[]},{"given":"Gongyi","family":"Hong","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2022,7,4]]},"reference":[{"key":"e_1_2_10_1_2","doi-asserted-by":"crossref","unstructured":"LongJ. ShelhamerE. andDarrellT. Fully convolutional networks for semantic segmentation 39 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2015 Boston MA USA no. 
4 3431\u20133440 https:\/\/doi.org\/10.1109\/cvpr.2015.7298965 2-s2.0-84959205572.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"e_1_2_10_2_2","doi-asserted-by":"publisher","DOI":"10.26599\/tst.2020.9010025"},{"key":"e_1_2_10_3_2","doi-asserted-by":"crossref","unstructured":"HeK. ZhangX. RenS. andSunJ. Deep residual learning for image recognition Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2016 Las Vegas NV USA 770\u2013778 https:\/\/doi.org\/10.1109\/cvpr.2016.90 2-s2.0-84986274465.","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_10_4_2","unstructured":"SimonyanK.andZissermanA. Very deep convolutional networks for large-scale image recognition Proceedings of the International Conference on Learning Representation (ICLR) April 2015 San Diego CA USA."},{"key":"e_1_2_10_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/s40747-021-00581-w"},{"key":"e_1_2_10_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/tmm.2018.2890360"},{"key":"e_1_2_10_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447032"},{"key":"e_1_2_10_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2017.2699184"},{"key":"e_1_2_10_9_2","doi-asserted-by":"crossref","unstructured":"LiuY. LiuN. CaoQ. YaoX. HanJ. andShaoL. Learning non-target knowledge for few-shot semantic segmentation Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) May 2022 New Orleans Louisiana.","DOI":"10.1109\/CVPR52688.2022.01128"},{"key":"e_1_2_10_10_2","doi-asserted-by":"crossref","unstructured":"ShabanA. BansalS. LiuZ. EssaI. andBootsB. One-shot learning for semantic segmentation Proceedings of the British Machine Vision Conference 2017 BMVC September 2017 London UK https:\/\/doi.org\/10.5244\/c.31.167.","DOI":"10.5244\/C.31.167"},{"key":"e_1_2_10_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/tcyb.2020.2992433"},{"key":"e_1_2_10_12_2","unstructured":"DongN.andXingE. 
Few-shot semantic segmentation with prototype learning Proceedings of the British Machine Vision Conference (BMVC) September 2018 Newcastle UK 79\u201391."},{"key":"e_1_2_10_13_2","doi-asserted-by":"crossref","unstructured":"ZhangC. LinG. LiuF. GuoJ. WuQ. andYaoR. Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV) November 2019 Seoul Korea (South) 9587\u20139595 https:\/\/doi.org\/10.1109\/iccv.2019.00968.","DOI":"10.1109\/ICCV.2019.00968"},{"key":"e_1_2_10_14_2","doi-asserted-by":"crossref","unstructured":"YangY. MengF. LiH. NganK. N. andWuQ. A new few-shot segmentation network based on class representation Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP) December 2019 Sydney NSW Australia IEEE 1\u20134.","DOI":"10.1109\/VCIP47243.2019.8965780"},{"key":"e_1_2_10_15_2","doi-asserted-by":"crossref","unstructured":"ZhangS. WuT. WuS. andGuoG. Catrans: context and affinity transformer for few-shot segmentation Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI) July 2022 Vienna Austria.","DOI":"10.24963\/ijcai.2022\/231"},{"key":"e_1_2_10_16_2","doi-asserted-by":"crossref","unstructured":"GairolaS. HemaniM. ChopraA. andKrishnamurthyB. Simpropnet: Improved Similarity Propagation for Few-Shot Image Segmentation 2020 https:\/\/arxiv.org\/abs\/2004.15014.","DOI":"10.24963\/ijcai.2020\/80"},{"key":"e_1_2_10_17_2","doi-asserted-by":"crossref","unstructured":"LiuJ. BaoY. XieG. XiongH. SonkeJ. andGavvesE. 
Dynamic prototype convolution network for few-shot semantic segmentation Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) April 2022.","DOI":"10.1109\/CVPR52688.2022.01126"},{"key":"e_1_2_10_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/3401979"},{"key":"e_1_2_10_19_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.comcom.2021.07.021"},{"key":"e_1_2_10_20_2","doi-asserted-by":"crossref","unstructured":"WangK. LiewJ. H. ZouY. ZhouD. andFengJ. Panet: few-shot image semantic segmentation with prototype alignment Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV) October 2019 Seoul Korea 9197\u20139206 https:\/\/doi.org\/10.1109\/iccv.2019.00929.","DOI":"10.1109\/ICCV.2019.00929"},{"key":"e_1_2_10_21_2","doi-asserted-by":"crossref","unstructured":"ZhangC. LinG. LiuF. YaoR. andShenC. Canet: class-agnostic segmentation networks with iterative refinement and attentive few-shot learning Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) June 2019 Long Beach CA USA 5217\u20135226 https:\/\/doi.org\/10.1109\/cvpr.2019.00536.","DOI":"10.1109\/CVPR.2019.00536"},{"key":"e_1_2_10_22_2","doi-asserted-by":"crossref","unstructured":"LiuY. ZhangX. ZhangS. andHeX. Part-aware prototype network for few-shot semantic segmentation Proceedings of the Computer Vision \u2013 ECCV 2020 European Conference on Computer Vision August 2020 Glasgow UK Springer 142\u2013158 https:\/\/doi.org\/10.1007\/978-3-030-58545-7_9.","DOI":"10.1007\/978-3-030-58545-7_9"},{"key":"e_1_2_10_23_2","doi-asserted-by":"crossref","unstructured":"YangL. ZhuoW. QiL. ShiY. andGaoY. Mining latent classes for few-shot segmentation Proceedings of the IEEE\/CVF International Conference on Computer Vision October 2021 Montreal BC Canada 8721\u20138730.","DOI":"10.1109\/ICCV48922.2021.00860"},{"key":"e_1_2_10_24_2","unstructured":"ChenJ. GaoB.-B. LuZ. XueJ.-H. WangC. andLiaoQ. 
Scnet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes 2021 https:\/\/arxiv.org\/abs\/2104.09216."},{"key":"e_1_2_10_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3013717"},{"key":"e_1_2_10_26_2","doi-asserted-by":"crossref","unstructured":"YangY. MengF. LiH. WuQ. XuX. andChenS. A new local transformation module for few-shot segmentation Proceedings of the International Conference on Multimedia Modeling January 2020 Daejeon South Korea Springer 76\u201387 https:\/\/doi.org\/10.1007\/978-3-030-37734-2_7.","DOI":"10.1007\/978-3-030-37734-2_7"},{"key":"e_1_2_10_27_2","doi-asserted-by":"crossref","unstructured":"LuZ. HeS. ZhuX. ZhangL. SongY.-Z. andXiangT. Simpler is better: few-shot semantic segmentation with classifier weight transformer Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision (ICCV) October 2021 Montreal BC Canada 8741\u20138750 https:\/\/doi.org\/10.1109\/iccv48922.2021.00862.","DOI":"10.1109\/ICCV48922.2021.00862"},{"key":"e_1_2_10_28_2","doi-asserted-by":"publisher","DOI":"10.2307\/2346830"},{"key":"e_1_2_10_29_2","doi-asserted-by":"crossref","unstructured":"DengJ. DongW. SocherR. LiL.-J. Kai LiK. andLi Fei-FeiL. Imagenet: a large-scale hierarchical image database Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2009 Miami FL USA 248\u2013255 https:\/\/doi.org\/10.1109\/cvpr.2009.5206848.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_2_10_30_2","unstructured":"RakellyK. ShelhamerE. DarrellT. EfrosA. andLevineS. Conditional networks for few-shot semantic segmentation Proceedings of the International Conference on Learning Representation workshop (ICLRW) April 2018 Vancouver BC Canada."},{"key":"e_1_2_10_31_2","doi-asserted-by":"crossref","unstructured":"NguyenK.andTodorovicS. 
Feature weighting and boosting for few-shot segmentation Proceedings of the IEEE\/CVF International Conference on Computer Vision September 2019 Seoul South Korea 622\u2013631.","DOI":"10.1109\/ICCV.2019.00071"},{"key":"e_1_2_10_32_2","doi-asserted-by":"crossref","unstructured":"LiuW. ZhangC. LinG. andLiuF. Crnet: cross-reference networks for few-shot segmentation Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) June 2020 Seattle WA USA 4165\u20134173 https:\/\/doi.org\/10.1109\/cvpr42600.2020.00422.","DOI":"10.1109\/CVPR42600.2020.00422"},{"key":"e_1_2_10_33_2","doi-asserted-by":"crossref","unstructured":"WangH. ZhangX. HuY. YangY. CaoX. andZhenX. Few-shot semantic segmentation with democratic attention networks Proceedings of the Computer Vision \u2013 ECCV 2020 Proceedings of the European Conference on Computer Vision (ECCV) August 2020 Glasgow UK Springer 730\u2013746 https:\/\/doi.org\/10.1007\/978-3-030-58601-0_43.","DOI":"10.1007\/978-3-030-58601-0_43"}],"container-title":["Complexity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2022\/4901746","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1155\/2022\/4901746","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2022\/4901746","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T21:26:41Z","timestamp":1769117201000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2022\/4901746"}},"subtitle":[],"editor":[{"given":"Xuyun","family":"Zhang","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2022,1]]},"references-
count":33,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,1]]}},"alternative-id":["10.1155\/2022\/4901746"],"URL":"https:\/\/doi.org\/10.1155\/2022\/4901746","archive":["Portico"],"relation":{},"ISSN":["1076-2787","1099-0526"],"issn-type":[{"value":"1076-2787","type":"print"},{"value":"1099-0526","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1]]},"assertion":[{"value":"2021-12-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-04-22","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-07-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"4901746"}}