{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:16:48Z","timestamp":1764969408544,"version":"3.46.0"},"reference-count":103,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62276251"],"award-info":[{"award-number":["62276251"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"the Joint Lab of CAS-HK"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:p>Deep image restoration models aim to learn a mapping from degraded image space to natural image space. However, they face several critical challenges: removing degradation, generating realistic details, and ensuring pixel-level consistency. Over time, three major classes of methods have emerged, including MSE-based, GAN-based, and diffusion-based methods. However, they fail to achieve a good balance between restoration quality, fidelity, and speed. We propose a novel method, HYPIR, to address these challenges. Our solution pipeline is straightforward: it involves initializing the image restoration model with a pre-trained diffusion model and then fine-tuning it with adversarial training. This approach does not rely on diffusion loss, iterative sampling, or additional adapters. We theoretically demonstrate that initializing adversarial training from a pre-trained diffusion model positions the initial restoration model very close to the natural image distribution. Consequently, this initialization improves numerical stability, avoids mode collapse, and substantially accelerates the convergence of adversarial training. 
Moreover, HYPIR inherits the capabilities of diffusion models with rich user control, enabling text-guided restoration and adjustable texture richness. Requiring only a single forward pass, it achieves faster convergence and inference speed than diffusion-based methods. Extensive experiments show that HYPIR outperforms previous state-of-the-art methods, achieving efficient and high-quality image restoration.<\/jats:p>","DOI":"10.1145\/3763346","type":"journal-article","created":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T17:15:39Z","timestamp":1764868539000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Harnessing Diffusion-Yielded Score Priors for Image Restoration"],"prefix":"10.1145","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0009-0008-9918-7704","authenticated-orcid":false,"given":"Xinqi","family":"Lin","sequence":"first","affiliation":[{"name":"Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"},{"name":"University of Chinese Academy of Sciences, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-9744-404X","authenticated-orcid":false,"given":"Fanghua","family":"Yu","sequence":"additional","affiliation":[{"name":"Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3969-0945","authenticated-orcid":false,"given":"Jinfan","family":"Hu","sequence":"additional","affiliation":[{"name":"Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"},{"name":"University of Chinese Academy of Sciences, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-8546-3478","authenticated-orcid":false,"given":"Zhiyuan","family":"You","sequence":"additional","affiliation":[{"name":"Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"},{"name":"The Chinese University of Hong 
Kong, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7080-6662","authenticated-orcid":false,"given":"Wu","family":"Shi","sequence":"additional","affiliation":[{"name":"Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5888-3083","authenticated-orcid":false,"given":"Jimmy S.","family":"Ren","sequence":"additional","affiliation":[{"name":"SenseTime Research, Hong Kong, Hong Kong"},{"name":"Hong Kong Metropolitan University, Hong Kong, Hong Kong"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4389-6236","authenticated-orcid":false,"given":"Jinjin","family":"Gu","sequence":"additional","affiliation":[{"name":"INSAIT, Sofia University, Sofia, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2260-8079","authenticated-orcid":false,"given":"Chao","family":"Dong","sequence":"additional","affiliation":[{"name":"Shenzhen Key Laboratory of Computer Vision and Pattern Recognition, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China"},{"name":"Shenzhen University of Advanced Technology, Shenzhen, Guangdong, China"}]}],"member":"320","published-online":{"date-parts":[[2025,12,4]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862","author":"Arjovsky Martin","year":"2017","unstructured":"Martin Arjovsky and L\u00e9on Bottou. 2017. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862 (2017)."},{"key":"e_1_2_1_2_1","volume-title":"International conference on machine learning. PMLR, 214\u2013223","author":"Arjovsky Martin","year":"2017","unstructured":"Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou. 2017. Wasserstein generative adversarial networks. In International conference on machine learning. 
PMLR, 214\u2013223."},{"key":"e_1_2_1_3_1","volume-title":"Semantic photo manipulation with a generative image prior. arXiv preprint arXiv:2005.07727","author":"Bau David","year":"2020","unstructured":"David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. 2020. Semantic photo manipulation with a generative image prior. arXiv preprint arXiv:2005.07727 (2020)."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00652"},{"key":"e_1_2_1_5_1","volume-title":"Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096","author":"Brock Andrew","year":"2018","unstructured":"Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 (2018)."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00318"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00951"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01402"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00169"},{"key":"e_1_2_1_10_1","volume-title":"Recursive generalization transformer for image super-resolution. arXiv preprint arXiv:2303.06373","author":"Chen Zheng","year":"2023","unstructured":"Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, and Xiaokang Yang. 2023b. Recursive generalization transformer for image super-resolution. arXiv preprint arXiv:2303.06373 (2023)."},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01131"},{"key":"e_1_2_1_12_1","volume-title":"Diffusion posterior sampling for general noisy inverse problems. arXiv preprint arXiv:2209.14687","author":"Chung Hyungjin","year":"2022","unstructured":"Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. 2022. 
Diffusion posterior sampling for general noisy inverse problems. arXiv preprint arXiv:2209.14687 (2022)."},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2007.901238"},{"key":"e_1_2_1_14_1","volume-title":"Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34","author":"Dhariwal Prafulla","year":"2021","unstructured":"Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34 (2021), 8780\u20138794."},{"key":"e_1_2_1_15_1","volume-title":"Kaiming He, and Xiaoou Tang.","author":"Dong Chao","year":"2015","unstructured":"Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. 2015. Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38, 2 (2015), 295\u2013307."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46475-6_25"},{"key":"e_1_2_1_17_1","volume-title":"TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution. arXiv preprint arXiv:2411.18263","author":"Dong Linwei","year":"2024","unstructured":"Linwei Dong, Qingnan Fan, Yihong Guo, Zhonghao Wang, Qi Zhang, Jinwei Chen, Yawei Luo, and Changqing Zou. 2024. TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution. arXiv preprint arXiv:2411.18263 (2024)."},{"key":"e_1_2_1_18_1","volume-title":"Forty-first international conference on machine learning.","author":"Esser Patrick","year":"2024","unstructured":"Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. 2024. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first international conference on machine learning."},{"key":"e_1_2_1_19_1","volume-title":"Generative adversarial nets. 
Advances in neural information processing systems 27","author":"Goodfellow Ian J","year":"2014","unstructured":"Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in neural information processing systems 27 (2014)."},{"key":"e_1_2_1_20_1","volume-title":"Image quality assessment for perceptual image restoration: A new dataset, benchmark and metric. arXiv preprint arXiv:2011.15002","author":"Gu Jinjin","year":"2020","unstructured":"Jinjin Gu, Haoming Cai, Haoyu Chen, Xiaoxing Ye, Jimmy Ren, and Chao Dong. 2020. Image quality assessment for perceptual image restoration: A new dataset, benchmark and metric. arXiv preprint arXiv:2011.15002 (2020)."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00170"},{"key":"e_1_2_1_22_1","volume-title":"Improved training of wasserstein gans. Advances in neural information processing systems 30","author":"Gulrajani Ishaan","year":"2017","unstructured":"Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_2_1_23_1","volume-title":"Denoising diffusion probabilistic models. Advances in neural information processing systems 33","author":"Ho Jonathan","year":"2020","unstructured":"Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems 33 (2020), 6840\u20136851."},{"key":"e_1_2_1_24_1","volume-title":"International Conference on Learning Representations","volume":"1","author":"Hu Edward J","year":"2022","unstructured":"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. Lora: Low-rank adaptation of large language models.. In International Conference on Learning Representations, Vol. 
1. 3."},{"key":"e_1_2_1_25_1","volume-title":"Proceedings, Part XI 16","author":"Jinjin Gu","year":"2020","unstructured":"Gu Jinjin, Cai Haoming, Chen Haoyu, Ye Xiaoxing, Jimmy S Ren, and Dong Chao. 2020. Pipal: a large-scale image quality assessment dataset for perceptual image restoration. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XI 16. Springer, 633\u2013651."},{"key":"e_1_2_1_26_1","volume-title":"European Conference on Computer Vision. Springer, 428\u2013447","author":"Kang Minguk","year":"2024","unstructured":"Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha Kwak, Jaesik Park, Eli Shechtman, Jun-Yan Zhu, and Taesung Park. 2024. Distilling diffusion models into conditional gans. In European Conference on Computer Vision. Springer, 428\u2013447."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00976"},{"key":"e_1_2_1_28_1","volume-title":"Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196","author":"Karras Tero","year":"2017","unstructured":"Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2017. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196 (2017)."},{"key":"e_1_2_1_29_1","volume-title":"Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems 35","author":"Karras Tero","year":"2022","unstructured":"Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. 2022. Elucidating the design space of diffusion-based generative models. 
Advances in neural information processing systems 35 (2022), 26565\u201326577."},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_2_1_31_1","first-page":"23593","article-title":"Denoising diffusion restoration models","volume":"35","author":"Kawar Bahjat","year":"2022","unstructured":"Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. 2022. Denoising diffusion restoration models. Advances in Neural Information Processing Systems 35 (2022), 23593\u201323606.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00510"},{"key":"e_1_2_1_33_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_2_1_34_1","volume-title":"On convergence and stability of gans. arXiv preprint arXiv:1705.07215","author":"Kodali Naveen","year":"2017","unstructured":"Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. 2017. On convergence and stability of gans. arXiv preprint arXiv:1705.07215 (2017)."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00591"},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01039"},{"key":"e_1_2_1_37_1","unstructured":"Black Forest Labs. 2024. FLUX.1 Dev: A Text-to-Image Diffusion Model. https:\/\/huggingface.co\/black-forest-labs\/FLUX.1-dev. Accessed: 2025-05-22."},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.19"},{"key":"e_1_2_1_39_1","volume-title":"Improving the training of rectified flows. Advances in neural information processing systems 37","author":"Lee Sangyun","year":"2024","unstructured":"Sangyun Lee, Zinan Lin, and Giulia Fanti. 2024. 
Improving the training of rectified flows. Advances in neural information processing systems 37 (2024), 63082\u201363109."},{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW54120.2021.00210"},{"key":"e_1_2_1_41_1","volume-title":"European Conference on Computer Vision. Springer, 430\u2013448","author":"Lin Xinqi","year":"2024","unstructured":"Xinqi Lin, Jingwen He, Ziyan Chen, Zhaoyang Lyu, Bo Dai, Fanghua Yu, Yu Qiao, Wanli Ouyang, and Chao Dong. 2024. Diffbir: Toward blind image restoration with generative diffusion prior. In European Conference on Computer Vision. Springer, 430\u2013448."},{"key":"e_1_2_1_42_1","volume-title":"Visual instruction tuning. Advances in neural information processing systems 36","author":"Liu Haotian","year":"2023","unstructured":"Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. Advances in neural information processing systems 36 (2023), 34892\u201334916."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"e_1_2_1_44_1","volume-title":"Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101","author":"Loshchilov Ilya","year":"2017","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017)."},{"key":"e_1_2_1_45_1","volume-title":"Are gans created equal? a large-scale study. Advances in neural information processing systems 31","author":"Lucic Mario","year":"2018","unstructured":"Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. 2018. Are gans created equal? a large-scale study. Advances in neural information processing systems 31 (2018)."},{"key":"e_1_2_1_46_1","volume-title":"Latent consistency models: Synthesizing high-resolution images with few-step inference. 
arXiv preprint arXiv:2310.04378","author":"Luo Simian","year":"2023","unstructured":"Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. 2023b. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378 (2023)."},{"key":"e_1_2_1_47_1","first-page":"76525","article-title":"Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models","volume":"36","author":"Luo Weijian","year":"2023","unstructured":"Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang. 2023a. Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models. Advances in Neural Information Processing Systems 36 (2023), 76525\u201376546.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_48_1","volume-title":"International conference on machine learning. PMLR, 3481\u20133490","author":"Mescheder Lars","year":"2018","unstructured":"Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. 2018. Which training methods for GANs do actually converge?. In International conference on machine learning. PMLR, 3481\u20133490."},{"key":"e_1_2_1_49_1","volume-title":"Making a \"completely blind\" image quality analyzer","author":"Mittal Anish","year":"2012","unstructured":"Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. 2012. Making a \"completely blind\" image quality analyzer. IEEE Signal processing letters 20, 3 (2012), 209\u2013212."},{"key":"e_1_2_1_50_1","volume-title":"f-gan: Training generative neural samplers using variational divergence minimization. Advances in neural information processing systems 29","author":"Nowozin Sebastian","year":"2016","unstructured":"Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. 2016. f-gan: Training generative neural samplers using variational divergence minimization. 
Advances in neural information processing systems 29 (2016)."},{"key":"e_1_2_1_51_1","volume-title":"International conference on machine learning. PMLR, 2642\u20132651","author":"Odena Augustus","year":"2017","unstructured":"Augustus Odena, Christopher Olah, and Jonathon Shlens. 2017. Conditional image synthesis with auxiliary classifier gans. In International conference on machine learning. PMLR, 2642\u20132651."},{"key":"e_1_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3115428"},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00387"},{"key":"e_1_2_1_54_1","volume-title":"SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952","author":"Podell Dustin","year":"2023","unstructured":"Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas M\u00fcller, Joe Penna, and Robin Rombach. 2023. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)."},{"key":"e_1_2_1_55_1","series-title":"SIAM journal on control and optimization 30, 4","volume-title":"Acceleration of stochastic approximation by averaging","author":"Polyak Boris T","year":"1992","unstructured":"Boris T Polyak and Anatoli B Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM journal on control and optimization 30, 4 (1992), 838\u2013855."},{"key":"e_1_2_1_56_1","unstructured":"Prolific. 2024. Prolific - Participant Recruitment for Research. https:\/\/www.prolific.com. Accessed: 2025-05-24."},{"key":"e_1_2_1_57_1","volume-title":"International conference on machine learning. PMLR, 8748\u20138763","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. 
In International conference on machine learning. PMLR, 8748\u20138763."},{"key":"e_1_2_1_58_1","volume-title":"Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434","author":"Radford Alec","year":"2015","unstructured":"Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)."},{"key":"e_1_2_1_59_1","volume-title":"Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 1, 2","author":"Ramesh Aditya","year":"2022","unstructured":"Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 1, 2 (2022), 3."},{"key":"e_1_2_1_60_1","volume-title":"International conference on machine learning. PMLR, 8821\u20138831","author":"Ramesh Aditya","year":"2021","unstructured":"Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In International conference on machine learning. PMLR, 8821\u20138831."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_2_1_62_1","volume-title":"Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60, 1\u20134","author":"Rudin Leonid I","year":"1992","unstructured":"Leonid I Rudin, Stanley Osher, and Emad Fatemi. 1992. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60, 1\u20134 (1992), 259\u2013268."},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3204461"},{"key":"e_1_2_1_64_1","volume-title":"Improved techniques for training gans. 
Advances in neural information processing systems 29","author":"Salimans Tim","year":"2016","unstructured":"Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. Advances in neural information processing systems 29 (2016)."},{"key":"e_1_2_1_65_1","volume-title":"Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512","author":"Salimans Tim","year":"2022","unstructured":"Tim Salimans and Jonathan Ho. 2022. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512 (2022)."},{"key":"e_1_2_1_66_1","volume-title":"European Conference on Computer Vision. Springer, 87\u2013103","author":"Sauer Axel","year":"2024","unstructured":"Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. 2024. Adversarial diffusion distillation. In European Conference on Computer Vision. Springer, 87\u2013103."},{"key":"e_1_2_1_67_1","volume-title":"Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502","author":"Song Jiaming","year":"2020","unstructured":"Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502 (2020)."},{"key":"e_1_2_1_68_1","unstructured":"Yang Song Prafulla Dhariwal Mark Chen and Ilya Sutskever. 2023. Consistency models. (2023)."},{"key":"e_1_2_1_69_1","volume-title":"Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32","author":"Song Yang","year":"2019","unstructured":"Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)."},{"key":"e_1_2_1_70_1","volume-title":"Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model. 
arXiv preprint arXiv:2410.04161","author":"Tao Keda","year":"2024","unstructured":"Keda Tao, Jinjin Gu, Yulun Zhang, Xiucheng Wang, and Nan Cheng. 2024. Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model. arXiv preprint arXiv:2410.04161 (2024)."},{"key":"e_1_2_1_71_1","volume-title":"NTIRE 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 114\u2013125","author":"Timofte Radu","year":"2017","unstructured":"Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. 2017. NTIRE 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 114\u2013125."},{"key":"e_1_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.1998.710815"},{"key":"e_1_2_1_73_1","volume-title":"A connection between score matching and denoising autoencoders. Neural computation 23, 7","author":"Vincent Pascal","year":"2011","unstructured":"Pascal Vincent. 2011. A connection between score matching and denoising autoencoders. Neural computation 23, 7 (2011), 1661\u20131674."},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i2.25353"},{"key":"e_1_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-024-02168-7"},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00905"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW54120.2021.00217"},{"key":"e_1_2_1_78_1","volume-title":"Proceedings of the European conference on computer vision (ECCV) workshops. 0\u20130.","author":"Wang Xintao","year":"2018","unstructured":"Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. 2018. Esrgan: Enhanced super-resolution generative adversarial networks. 
In Proceedings of the European conference on computer vision (ECCV) workshops. 0\u20130."},{"key":"e_1_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02437"},{"key":"e_1_2_1_80_1","volume-title":"Image quality assessment: from error visibility to structural similarity","author":"Wang Zhou","year":"2004","unstructured":"Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13, 4 (2004), 600\u2013612."},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58598-3_7"},{"key":"e_1_2_1_82_1","first-page":"92529","article-title":"One-step effective diffusion network for real-world image super-resolution","volume":"37","author":"Wu Rongyuan","year":"2024","unstructured":"Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, and Lei Zhang. 2024a. One-step effective diffusion network for real-world image super-resolution. Advances in Neural Information Processing Systems 37 (2024), 92529\u201392553.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02405"},{"key":"e_1_2_1_84_1","volume-title":"Addsr: Accelerating diffusion-based blind super-resolution with adversarial diffusion distillation. arXiv preprint arXiv:2404.01717","author":"Xie Rui","year":"2024","unstructured":"Rui Xie, Chen Zhao, Kai Zhang, Zhenyu Zhang, Jun Zhou, Jian Yang, and Ying Tai. 2024. Addsr: Accelerating diffusion-based blind super-resolution with adversarial diffusion distillation. arXiv preprint arXiv:2404.01717 (2024)."},{"key":"e_1_2_1_85_1","volume-title":"Rethinking Diffusion Posterior Sampling: From Conditional Score Estimator to Maximizing a Posterior. 
arXiv preprint arXiv:2501.18913","author":"Xu Tongda","year":"2025","unstructured":"Tongda Xu, Xiyan Cai, Xinjie Zhang, Xingtong Ge, Dailan He, Ming Sun, Jingjing Liu, Ya-Qin Zhang, Jian Li, and Yan Wang. 2025. Rethinking Diffusion Posterior Sampling: From Conditional Score Estimator to Maximizing a Posterior. arXiv preprint arXiv:2501.18913 (2025)."},{"key":"e_1_2_1_86_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW56347.2022.00126"},{"key":"e_1_2_1_87_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00073"},{"key":"e_1_2_1_88_1","volume-title":"European Conference on Computer Vision. Springer, 74\u201391","author":"Yang Tao","year":"2024","unstructured":"Tao Yang, Rongyuan Wu, Peiran Ren, Xuansong Xie, and Lei Zhang. 2024. Pixel-aware stable diffusion for realistic image super-resolution and personalized stylization. In European Conference on Computer Vision. Springer, 74\u201391."},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.00632"},{"key":"e_1_2_1_90_1","volume-title":"Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution. arXiv preprint arXiv:2501.11561","author":"You Zhiyuan","year":"2025","unstructured":"Zhiyuan You, Xin Cai, Jinjin Gu, Tianfan Xue, and Chao Dong. 2025. Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution. arXiv preprint arXiv:2501.11561 (2025)."},{"key":"e_1_2_1_91_1","volume-title":"UniCon: Unidirectional Information Flow for Effective Control of Large-Scale Diffusion Models. arXiv preprint arXiv:2503.17221","author":"Yu Fanghua","year":"2025","unstructured":"Fanghua Yu, Jinjin Gu, Jinfan Hu, Zheyuan Li, and Chao Dong. 2025. UniCon: Unidirectional Information Flow for Effective Control of Large-Scale Diffusion Models. 
arXiv preprint arXiv:2503.17221 (2025)."},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02425"},{"key":"e_1_2_1_93_1","volume-title":"Arbitrary-steps Image Super-resolution via Diffusion Inversion. arXiv preprint arXiv:2412.09013","author":"Yue Zongsheng","year":"2024","unstructured":"Zongsheng Yue, Kang Liao, and Chen Change Loy. 2024. Arbitrary-steps Image Super-resolution via Diffusion Inversion. arXiv preprint arXiv:2412.09013 (2024)."},{"key":"e_1_2_1_94_1","first-page":"13294","article-title":"Resshift: Efficient diffusion model for image super-resolution by residual shifting","volume":"36","author":"Yue Zongsheng","year":"2023","unstructured":"Zongsheng Yue, Jianyi Wang, and Chen Change Loy. 2023. Resshift: Efficient diffusion model for image super-resolution by residual shifting. Advances in Neural Information Processing Systems 36 (2023), 13294\u201313307.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_1_95_1","volume-title":"Degradation-guided one-step image super-resolution with diffusion priors. arXiv preprint arXiv:2409.17058","author":"Zhang Aiping","year":"2024","unstructured":"Aiping Zhang, Zongsheng Yue, Renjing Pei, Wenqi Ren, and Xiaochun Cao. 2024. Degradation-guided one-step image super-resolution with diffusion priors. arXiv preprint arXiv:2409.17058 (2024)."},{"key":"e_1_2_1_96_1","volume-title":"Accurate image restoration with attention retractable transformer. arXiv preprint arXiv:2210.01427","author":"Zhang Jiale","year":"2022","unstructured":"Jiale Zhang, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, and Xin Yuan. 2022. Accurate image restoration with attention retractable transformer. 
arXiv preprint arXiv:2210.01427 (2022)."},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00475"},{"key":"e_1_2_1_98_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.300"},{"key":"e_1_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"e_1_2_1_100_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_2_1_101_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_2_1_102_1","volume-title":"Residual non-local attention networks for image restoration. arXiv preprint arXiv:1903.10082","author":"Zhang Yulun","year":"2019","unstructured":"Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. 2019. Residual non-local attention networks for image restoration. arXiv preprint arXiv:1903.10082 (2019)."},{"key":"e_1_2_1_103_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-022-01598-5"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3763346","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:13:16Z","timestamp":1764969196000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3763346"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12]]},"references-count":103,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["10.1145\/3763346"],"URL":"https:\/\/doi.org\/10.1145\/3763346","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2025,12]]},"assertion":[{"value":"2025-05-24","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-08-09","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}