{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,14]],"date-time":"2026-04-14T13:02:29Z","timestamp":1776171749506,"version":"3.50.1"},"reference-count":75,"publisher":"Springer Science and Business Media LLC","issue":"12","license":[{"start":{"date-parts":[[2023,8,8]],"date-time":"2023-08-08T00:00:00Z","timestamp":1691452800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,8]],"date-time":"2023-08-08T00:00:00Z","timestamp":1691452800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Self-similarity refers to the image prior widely used in image restoration algorithms that small but similar patterns tend to occur at different locations and scales. However, recent advanced deep convolutional neural network-based methods for image restoration do not take full advantage of self-similarities by relying on self-attention neural modules that only process information at the same scale. To solve this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the fact that corruptions, such as noise or compression artifacts, drop drastically at coarser image scales, our attention module is designed to be able to <jats:italic>borrow<\/jats:italic> clean signals from their \u201cclean\u201d correspondences at the coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. 
Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super resolution. Without any bells and whistles, our PANet (pyramid attention module with simple network backbones) can produce state-of-the-art results with superior accuracy and visual quality. Our code is available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/SHI-Labs\/Pyramid-Attention-Networks\">https:\/\/github.com\/SHI-Labs\/Pyramid-Attention-Networks<\/jats:ext-link><\/jats:p>","DOI":"10.1007\/s11263-023-01843-5","type":"journal-article","created":{"date-parts":[[2023,8,8]],"date-time":"2023-08-08T16:01:45Z","timestamp":1691510505000},"page":"3207-3225","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":117,"title":["Pyramid Attention Network for Image Restoration"],"prefix":"10.1007","volume":"131","author":[{"given":"Yiqun","family":"Mei","sequence":"first","affiliation":[]},{"given":"Yuchen","family":"Fan","sequence":"additional","affiliation":[]},{"given":"Yulun","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Jiahui","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Yuqian","family":"Zhou","sequence":"additional","affiliation":[]},{"given":"Ding","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Yun","family":"Fu","sequence":"additional","affiliation":[]},{"given":"Thomas S.","family":"Huang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2922-5663","authenticated-orcid":false,"given":"Humphrey","family":"Shi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,8]]},"reference":[{"key":"1843_CR1","doi-asserted-by":"crossref","unstructured":"Anwar, S., & Barnes, N. (2019). 
Real image denoising with feature attention. In ICCV (pp. 3155\u20133164).","DOI":"10.1109\/ICCV.2019.00325"},{"issue":"3","key":"1843_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3390462","volume":"53","author":"S Anwar","year":"2020","unstructured":"Anwar, S., Khan, S., & Barnes, N. (2020). A deep journey into super-resolution: A survey. ACM Computing Surveys (CSUR), 53(3), 1\u201334.","journal-title":"ACM Computing Surveys (CSUR)"},{"key":"1843_CR3","doi-asserted-by":"crossref","unstructured":"Bahat, Y., Efrat, N., & Irani, M. (2017). Non-uniform blind deblurring by reblurring. In ICCV (pp. 3286\u20133294).","DOI":"10.1109\/ICCV.2017.356"},{"key":"1843_CR4","doi-asserted-by":"crossref","unstructured":"Bahat, Y., & Irani, M. (2016). Blind dehazing using internal patch recurrence. In ICCP (pp. 1\u20139). IEEE.","DOI":"10.1109\/ICCPHOT.2016.7492870"},{"key":"1843_CR5","doi-asserted-by":"crossref","unstructured":"Buades, A., Coll, B., & Morel, J.M. (2005). A non-local algorithm for image denoising. In CVPR.","DOI":"10.1109\/CVPR.2005.38"},{"key":"1843_CR6","doi-asserted-by":"publisher","first-page":"208","DOI":"10.5201\/ipol.2011.bcm_nlm","volume":"1","author":"A Buades","year":"2011","unstructured":"Buades, A., Coll, B., & Morel, J. M. (2011). Non-local means denoising. Image Processing On Line, 1, 208\u2013212.","journal-title":"Image Processing On Line"},{"key":"1843_CR7","doi-asserted-by":"crossref","unstructured":"Cao, Y., Xu, J., Lin, S., Wei, F., & Hu, H. (2019). Gcnet: Non-local networks meet squeeze-excitation networks and beyond. In ICCV Workshops (pp. 0\u20130).","DOI":"10.1109\/ICCVW.2019.00246"},{"key":"1843_CR8","unstructured":"Chen, C., Chen, Q., Xu, J., & Koltun, V. (xxxx). Learning to see in the dark."},{"key":"1843_CR9","doi-asserted-by":"crossref","unstructured":"Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., & Gao, W. (2021). Pre-trained image processing transformer. In CVPR (pp. 
12299\u201312310).","DOI":"10.1109\/CVPR46437.2021.01212"},{"key":"1843_CR10","doi-asserted-by":"crossref","unstructured":"Chen, Y., & Pock, T. (2017). Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. In TPAMI.","DOI":"10.1109\/TPAMI.2016.2596743"},{"key":"1843_CR11","doi-asserted-by":"crossref","unstructured":"Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In ICIP.","DOI":"10.1109\/ICIP.2007.4378954"},{"key":"1843_CR12","doi-asserted-by":"crossref","unstructured":"Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Image denoising by sparse 3-d transform-domain collaborative filtering. In TIP.","DOI":"10.1117\/12.766355"},{"key":"1843_CR13","doi-asserted-by":"crossref","unstructured":"Dai, T., Cai, J., Zhang, Y., Xia, S.T., & Zhang, L. (2019). Second-order attention network for single image super-resolution. In CVPR (pp. 11065\u201311074).","DOI":"10.1109\/CVPR.2019.01132"},{"key":"1843_CR14","doi-asserted-by":"crossref","unstructured":"Dong, C., Deng, Y., Change\u00a0Loy, C., & Tang, X. (2015). Compression artifacts reduction by a deep convolutional network. In ICCV.","DOI":"10.1109\/ICCV.2015.73"},{"key":"1843_CR15","doi-asserted-by":"crossref","unstructured":"Dong, C., Loy, C. C., He, K., & Tang, X. (2014). Learning a deep convolutional network for image super-resolution. In ECCV.","DOI":"10.1007\/978-3-319-10593-2_13"},{"key":"1843_CR16","doi-asserted-by":"crossref","unstructured":"Dong, C., Loy, C. C., & Tang, X. (2016). Accelerating the super-resolution convolutional neural network. In ECCV.","DOI":"10.1007\/978-3-319-46475-6_25"},{"key":"1843_CR17","unstructured":"Fan, Y., Yu, J., Liu, D., & Huang, T. S. (2019). Scale-wise convolution for image restoration. 
arXiv preprint arXiv:1912.09028."},{"key":"1843_CR18","first-page":"15394","volume":"33","author":"Y Fan","year":"2020","unstructured":"Fan, Y., Yu, J., Mei, Y., Zhang, Y., Fu, Y., Liu, D., & Huang, T. S. (2020). Neural sparse representation for image restoration. NeurIPS, 33, 15394\u201315404.","journal-title":"NeurIPS"},{"key":"1843_CR19","doi-asserted-by":"crossref","unstructured":"Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Pointwise shape-adaptive dct for high-quality denoising and deblocking of grayscale and color images. In TIP.","DOI":"10.1109\/TIP.2007.891788"},{"issue":"2","key":"1843_CR20","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/1944846.1944852","volume":"30","author":"G Freedman","year":"2011","unstructured":"Freedman, G., & Fattal, R. (2011). Image and video upscaling from local self-examples. ACM Transactions on Graphics (TOG), 30(2), 1\u201311.","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"1843_CR21","doi-asserted-by":"crossref","unstructured":"Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., & Lu, H. (2019). Dual attention network for scene segmentation. In CVPR (pp. 3146\u20133154).","DOI":"10.1109\/CVPR.2019.00326"},{"key":"1843_CR22","doi-asserted-by":"crossref","unstructured":"Glasner, D., Bagon, S., & Irani, M. (2009). Super-resolution from a single image. In ICCV (pp. 349\u2013356). IEEE.","DOI":"10.1109\/ICCV.2009.5459271"},{"key":"1843_CR23","doi-asserted-by":"crossref","unstructured":"Haris, M., Shakhnarovich, G., & Ukita, N. (2018). Deep back-projection networks for super-resolution. In CVPR (pp. 1664\u20131673).","DOI":"10.1109\/CVPR.2018.00179"},{"issue":"12","key":"1843_CR24","first-page":"2341","volume":"33","author":"K He","year":"2010","unstructured":"He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. 
TPAMI, 33(12), 2341\u20132353.","journal-title":"TPAMI"},{"key":"1843_CR25","doi-asserted-by":"crossref","unstructured":"He, X., Mo, Z., Wang, P., Liu, Y., Yang, M., Cheng, J. (2019). Ode-inspired network design for single image super-resolution. In CVPR (pp. 1732\u20131741).","DOI":"10.1109\/CVPR.2019.00183"},{"key":"1843_CR26","doi-asserted-by":"crossref","unstructured":"Huang, J. B., Singh, A., & Ahuja, N. (2015). Single image super-resolution from transformed self-exemplars. In CVPR (pp. 5197\u20135206).","DOI":"10.1109\/CVPR.2015.7299156"},{"key":"1843_CR27","doi-asserted-by":"crossref","unstructured":"Jo, Y., & Kim, S. J. (2021) Practical single-image super-resolution using look-up table. In CVPR (pp. 691\u2013700).","DOI":"10.1109\/CVPR46437.2021.00075"},{"key":"1843_CR28","doi-asserted-by":"crossref","unstructured":"Kim, J., Kwon\u00a0Lee, J., & Mu\u00a0Lee, K. (2016). Accurate image super-resolution using very deep convolutional networks. In CVPR.","DOI":"10.1109\/CVPR.2016.182"},{"key":"1843_CR29","doi-asserted-by":"crossref","unstructured":"Kong, X., Liu, X., Gu, J., Qiao, Y., & Dong, C. (2022). Reflash dropout in image super-resolution. In CVPR (pp. 6002\u20136012)","DOI":"10.1109\/CVPR52688.2022.00591"},{"key":"1843_CR30","doi-asserted-by":"crossref","unstructured":"Lai, W. S., Huang, J. B., Ahuja, N., Yang, M. H. (2017). Deep laplacian pyramid networks for fast and accurate super-resolution. In CVPR.","DOI":"10.1109\/CVPR.2017.618"},{"key":"1843_CR31","doi-asserted-by":"crossref","unstructured":"Li, B., Peng, X., Wang, Z., Xu, J., Feng, D. (2017). Aod-net: All-in-one dehazing network. In ICCV (pp. 4770\u20134778).","DOI":"10.1109\/ICCV.2017.511"},{"key":"1843_CR32","doi-asserted-by":"crossref","unstructured":"Li, J., Chen, C., Cheng, Z., Xiong, Z. (2022). Mulut: Cooperating multiple look-up tables for efficient image super-resolution. In European conference on computer vision (pp. 238\u2013256). 
Springer.","DOI":"10.1007\/978-3-031-19797-0_14"},{"key":"1843_CR33","doi-asserted-by":"crossref","unstructured":"Li, S., Araujo, I. B., Ren, W., Wang, Z., Tokuda, E. K., Junior, R. H., Cesar-Junior, R., Zhang, J., Guo, X., & Cao, X. (2019). Single image deraining: A comprehensive benchmark analysis. In CVPR (pp. 3838\u20133847).","DOI":"10.1109\/CVPR.2019.00396"},{"key":"1843_CR34","doi-asserted-by":"crossref","unstructured":"Li, Z., Yang, J., Liu, Z., Yang, X., Jeon, G., & Wu, W. (2019). Feedback network for image super-resolution. In CVPR (pp. 3867\u20133876).","DOI":"10.1109\/CVPR.2019.00399"},{"key":"1843_CR35","doi-asserted-by":"crossref","unstructured":"Liang, J., Cao, J., Sun, G., Zhang, K., Van\u00a0Gool, L., & Timofte, R. (2021). Swinir: Image restoration using swin transformer. In ICCV (pp. 1833\u20131844).","DOI":"10.1109\/ICCVW54120.2021.00210"},{"key":"1843_CR36","doi-asserted-by":"crossref","unstructured":"Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. (2017). Enhanced deep residual networks for single image super-resolution. In CVPRW.","DOI":"10.1109\/CVPRW.2017.151"},{"key":"1843_CR37","unstructured":"Liu, D., Wen, B., Fan, Y., Loy, C. C., & Huang, T. S. (2018). Non-local recurrent network for image restoration. In NeurIPS."},{"key":"1843_CR38","doi-asserted-by":"crossref","unstructured":"Liu, J., Zhang, W., Tang, Y., Tang, J., & Wu, G. (2020). Residual feature aggregation network for image super-resolution. In CVPR (pp. 2359\u20132368).","DOI":"10.1109\/CVPR42600.2020.00243"},{"key":"1843_CR39","doi-asserted-by":"crossref","unstructured":"Lotan, O., & Irani, M. (2016). Needle-match: Reliable patch matching under high uncertainty. In CVPR (pp. 439\u2013448).","DOI":"10.1109\/CVPR.2016.54"},{"key":"1843_CR40","doi-asserted-by":"crossref","unstructured":"Magid, S. A., Zhang, Y., Wei, D., Jang, W. D., Lin, Z., Fu, Y., & Pfister, H. (2021). Dynamic high-pass filtering and multi-spectral attention for image super-resolution. In ICCV (pp. 
4288\u20134297).","DOI":"10.1109\/ICCV48922.2021.00425"},{"key":"1843_CR41","doi-asserted-by":"crossref","unstructured":"Mairal, J., Bach, F., Ponce, J., Sapiro, G., & Zisserman, A. (2009). Non-local sparse models for image restoration. In ICCV (pp. 2272\u20132279). IEEE.","DOI":"10.1109\/ICCV.2009.5459452"},{"key":"1843_CR42","unstructured":"Mao, X., Shen, C., & Yang, Y. B. (2016). Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In NeurIPS."},{"key":"1843_CR43","doi-asserted-by":"crossref","unstructured":"Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV.","DOI":"10.1109\/ICCV.2001.937655"},{"key":"1843_CR44","doi-asserted-by":"crossref","unstructured":"Mei, Y., Fan, Y., & Zhou, Y. (2021). Image super-resolution with non-local sparse attention. In CVPR (pp. 3517\u20133526).","DOI":"10.1109\/CVPR46437.2021.00352"},{"key":"1843_CR45","doi-asserted-by":"crossref","unstructured":"Mei, Y., Fan, Y., Zhou, Y., Huang, L., Huang, T. S., & Shi, H. (2020). Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining. In CVPR (pp. 5690\u20135699).","DOI":"10.1109\/CVPR42600.2020.00573"},{"key":"1843_CR46","doi-asserted-by":"crossref","unstructured":"Michaeli, T., & Irani, M. (2014). Blind deblurring using internal patch recurrence. In ECCV (pp. 783\u2013798). Springer.","DOI":"10.1007\/978-3-319-10578-9_51"},{"key":"1843_CR47","doi-asserted-by":"crossref","unstructured":"Niu, B., Wen, W., Ren, W., Zhang, X., Yang, L., Wang, S., Zhang, K., Cao, X., & Shen, H. (2020). Single image super-resolution via a holistic attention network. In European conference on computer vision (pp. 191\u2013207). 
Springer.","DOI":"10.1007\/978-3-030-58610-2_12"},{"key":"1843_CR48","unstructured":"Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., & Lerer, A. (2017). Automatic differentiation in pytorch."},{"key":"1843_CR49","doi-asserted-by":"crossref","unstructured":"Roth, S., & Black, M. J. (2005). Fields of experts: A framework for learning image priors. In CVPR (vol.\u00a02, pp. 860\u2013867). IEEE.","DOI":"10.1109\/CVPR.2005.160"},{"key":"1843_CR50","unstructured":"Sheikh, H. R., Wang, Z., Cormack, L., & Bovik, A. C. (2005). Live image quality assessment database release 2."},{"key":"1843_CR51","doi-asserted-by":"crossref","unstructured":"Singh, A., & Ahuja, N. (2014). Super-resolution using sub-band self-similarity. In ACCV (pp. 552\u2013568). Springer.","DOI":"10.1007\/978-3-319-16808-1_37"},{"key":"1843_CR52","doi-asserted-by":"crossref","unstructured":"Tai, Y., Yang, J., Liu, X., & Xu, C. (2017). Memnet: A persistent memory network for image restoration. In ICCV.","DOI":"10.1109\/ICCV.2017.486"},{"key":"1843_CR53","doi-asserted-by":"publisher","first-page":"251","DOI":"10.1016\/j.neunet.2020.07.025","volume":"131","author":"C Tian","year":"2020","unstructured":"Tian, C., Fei, L., Zheng, W., Xu, Y., Zuo, W., & Lin, C. W. (2020). Deep learning on image denoising: An overview. Neural Networks, 131, 251\u2013275.","journal-title":"Neural Networks"},{"key":"1843_CR54","doi-asserted-by":"crossref","unstructured":"Timofte, R., Agustsson, E., Van\u00a0Gool, L., Yang, M. H., Zhang, L., Lim, B., Son, S., Kim, H., Nah, S., Lee, K. M., et\u00a0al. (2017). Ntire 2017 challenge on single image super-resolution: Methods and results. In CVPRW.","DOI":"10.1109\/CVPRW.2017.150"},{"key":"1843_CR55","doi-asserted-by":"crossref","unstructured":"Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. (2008). Extracting and composing robust features with denoising autoencoders. 
In ICML.","DOI":"10.1145\/1390156.1390294"},{"key":"1843_CR56","doi-asserted-by":"crossref","unstructured":"Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local neural networks. In CVPR.","DOI":"10.1109\/CVPR.2018.00813"},{"key":"1843_CR57","doi-asserted-by":"crossref","unstructured":"Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. In TIP.","DOI":"10.1109\/TIP.2003.819861"},{"key":"1843_CR58","doi-asserted-by":"crossref","unstructured":"Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., & Li, H. (2022). Uformer: A general u-shaped transformer for image restoration. In CVPR (pp. 17683\u201317693).","DOI":"10.1109\/CVPR52688.2022.01716"},{"key":"1843_CR59","unstructured":"Xia, B. N., Gong, Y., Zhang, Y., & Poellabauer, C. (2019). Second-order non-local attention networks for person re-identification. In ICCV (pp. 3760\u20133769)."},{"key":"1843_CR60","doi-asserted-by":"crossref","unstructured":"Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., & Yang, M. H. (2022). Restormer: Efficient transformer for high-resolution image restoration. In CVPR (pp. 5728\u20135739).","DOI":"10.1109\/CVPR52688.2022.00564"},{"key":"1843_CR61","doi-asserted-by":"crossref","unstructured":"Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., Yang, M. H., & Shao, L. (2020). Learning enriched features for real image restoration and enhancement. In ECCV (pp. 492\u2013511). Springer.","DOI":"10.1007\/978-3-030-58595-2_30"},{"key":"1843_CR62","doi-asserted-by":"crossref","unstructured":"Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., Yang, M. H., & Shao, L. (2021). Multi-stage progressive image restoration. In CVPR (pp. 14821\u201314831).","DOI":"10.1109\/CVPR46437.2021.01458"},{"key":"1843_CR63","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). 
Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. In TIP.","DOI":"10.1109\/TIP.2017.2662206"},{"key":"1843_CR64","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zuo, W., Gu, S., Zhang, L. (2017). Learning deep cnn denoiser prior for image restoration. In CVPR.","DOI":"10.1109\/CVPR.2017.300"},{"key":"1843_CR65","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zuo, W., & Zhang, L. (2017). Ffdnet: Toward a fast and flexible solution for cnn based image denoising. arXiv preprint arXiv:1710.04026.","DOI":"10.1109\/TIP.2018.2839891"},{"key":"1843_CR66","doi-asserted-by":"crossref","unstructured":"Zhang, K., Zuo, W., & Zhang, L. (2018). Learning a single convolutional super-resolution network for multiple degradations. In CVPR (pp. 3262\u20133271).","DOI":"10.1109\/CVPR.2018.00344"},{"key":"1843_CR67","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., & Fu, Y. (2018). Image super-resolution using very deep residual channel attention networks. In ECCV.","DOI":"10.1007\/978-3-030-01234-2_18"},{"key":"1843_CR68","unstructured":"Zhang, Y., Li, K., Li, K., Zhong, B., & Fu, Y. (2019). Residual non-local attention networks for image restoration. In ICLR."},{"key":"1843_CR69","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tian, Y., Kong, Y., Zhong, B., & Fu, Y. (2018). Residual dense network for image super-resolution. In CVPR.","DOI":"10.1109\/CVPR.2018.00262"},{"key":"1843_CR70","first-page":"2695","volume":"34","author":"Y Zhang","year":"2021","unstructured":"Zhang, Y., Wang, H., Qin, C., & Fu, Y. (2021). Aligned structured sparsity learning for efficient image super-resolution. NeurIPS, 34, 2695\u20132706.","journal-title":"NeurIPS"},{"key":"1843_CR71","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Wei, D., Qin, C., Wang, H., Pfister, H., & Fu, Y. (2021). Context reasoning attention network for image super-resolution. In ICCV (pp. 
4278\u20134287).","DOI":"10.1109\/ICCV48922.2021.00424"},{"key":"1843_CR72","first-page":"3499","volume":"33","author":"S Zhou","year":"2020","unstructured":"Zhou, S., Zhang, J., Zuo, W., & Loy, C. C. (2020). Cross-scale internal graph neural network for image super-resolution. NeurIPS, 33, 3499\u20133509.","journal-title":"NeurIPS"},{"key":"1843_CR73","doi-asserted-by":"crossref","unstructured":"Zontak, M., & Irani, M. (2011). Internal statistics of a single natural image. In CVPR (pp. 977\u2013984). IEEE.","DOI":"10.1109\/CVPR.2011.5995401"},{"key":"1843_CR74","unstructured":"Zontak, M., Mosseri, I., & Irani, M. (xxxx). Separating signal from noise using patch recurrence across scales."},{"key":"1843_CR75","doi-asserted-by":"crossref","unstructured":"Zoran, D., & Weiss, Y. (2011). From learning models of natural image patches to whole image restoration. In ICCV (pp. 479\u2013486). IEEE.","DOI":"10.1109\/ICCV.2011.6126278"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-023-01843-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-023-01843-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-023-01843-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T14:08:02Z","timestamp":1698415682000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-023-01843-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,8]]},"references-count":75,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["1843"],"URL":"https:\/\/doi.org\/10.1007\/s11263-023-01843-5","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,8]]},"assertion":[{"value":"24 August 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 August 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}