{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,28]],"date-time":"2026-03-28T12:45:46Z","timestamp":1774701946464,"version":"3.50.1"},"reference-count":21,"publisher":"Springer Science and Business Media LLC","issue":"9","license":[{"start":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T00:00:00Z","timestamp":1749600000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T00:00:00Z","timestamp":1749600000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Text-to-image diffusion models have demonstrated unprecedented capabilities for flexible and realistic image synthesis. Nevertheless, these models rely on a time-consuming sampling procedure, which has motivated attempts to reduce their latency. When improving efficiency, researchers often use the original diffusion model to train an additional network designed specifically for fast image generation. In contrast, our approach seeks to reduce latency directly, without any retraining, fine-tuning, or knowledge distillation. In particular, we find the repeated calculation of attention maps to be costly yet redundant, and instead suggest reusing them during sampling. Our specific reuse strategies are based on ODE theory, which implies that the later a map is reused, the smaller the distortion in the final image. 
We empirically compare our reuse strategies with few-step sampling procedures of comparable latency, finding that reuse generates images that are closer to those produced by the original high-latency diffusion model.<\/jats:p>","DOI":"10.1007\/s11263-025-02463-x","type":"journal-article","created":{"date-parts":[[2025,6,11]],"date-time":"2025-06-11T13:52:02Z","timestamp":1749649922000},"page":"6422-6431","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Fast Sampling Through The Reuse Of Attention Maps In Diffusion Models"],"prefix":"10.1007","volume":"133","author":[{"given":"Rosco","family":"Hunter","sequence":"first","affiliation":[]},{"given":"\u0141ukasz","family":"Dudziak","sequence":"additional","affiliation":[]},{"given":"Mohamed S.","family":"Abdelfattah","sequence":"additional","affiliation":[]},{"given":"Abhinav","family":"Mehrotra","sequence":"additional","affiliation":[]},{"given":"Sourav","family":"Bhattacharya","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1159-090X","authenticated-orcid":false,"given":"Hongkai","family":"Wen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,11]]},"reference":[{"key":"2463_CR1","doi-asserted-by":"crossref","unstructured":"Anderljung, M., & Hazell, J. Protecting society from ai misuse: When are restrictions on capabilities warranted? arXiv preprint arXiv:2303.09377 (2023)","DOI":"10.1007\/s00146-024-02130-8"},{"key":"2463_CR2","unstructured":"Berthelot, D., Autef, A., Lin, J., Yap, D.A., Zhai, S., Hu, S., Zheng, D., Talbot, W., & Gu, E. Tract: Denoising diffusion models with transitive closure time-distillation. arXiv preprint arXiv:2303.04248 (2023)"},{"key":"2463_CR3","unstructured":"Bhojanapalli, S., Chakrabarti, A., Veit, A., Lukasik, M., Jain, H., Liu, F., Chang, Y.-W., & Kumar, S. Leveraging redundancy in attention with reuse transformers. 
arXiv preprint arXiv:2110.06821 (2021)"},{"key":"2463_CR4","unstructured":"Dhariwal, P., & Nichol, A. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 8780\u20138794 (2021)"},{"key":"2463_CR5","unstructured":"Dockhorn, T., Vahdat, A., & Kreis, K. Genie: Higher-order denoising diffusion solvers. Advances in Neural Information Processing Systems, 30150\u201330166 (2022)"},{"key":"2463_CR6","unstructured":"Gu, J., Zhai, S., Zhang, Y., Liu, L., & Susskind, J. Boot: Data-free distillation of denoising diffusion models with bootstrapping. arXiv preprint arXiv:2306.05544 (2023)"},{"key":"2463_CR7","unstructured":"Kim, B.-K., Song, H.-K., Castells, T., & Choi, S. On architectural compression of text-to-image diffusion models. arXiv preprint arXiv:2305.15798 (2023)"},{"key":"2463_CR8","doi-asserted-by":"crossref","unstructured":"Layek, G. An Introduction to Dynamical Systems and Chaos vol. 449. Springer (2015)","DOI":"10.1007\/978-81-322-2556-0"},{"key":"2463_CR9","unstructured":"Li, Y., Wang, H., Jin, Q., Hu, J., Chemerys, P., Fu, Y., Wang, Y., Tulyakov, S., & Ren, J. Snapfusion: Text-to-image diffusion model on mobile devices within two seconds. arXiv preprint arXiv:2306.00980 (2023)"},{"key":"2463_CR10","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., & Zitnick, C.L. Microsoft coco: Common objects in context. In: Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740\u2013755 (2014). Springer","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"2463_CR11","unstructured":"Liu, X., Zhang, X., Ma, J., Peng, J., & Liu, Q. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. 
arXiv preprint arXiv:2309.06380 (2023)"},{"key":"2463_CR12","first-page":"5775","volume":"35","author":"C Lu","year":"2022","unstructured":"Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., & Zhu, J. (2022). Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35, 5775\u20135787.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2463_CR13","doi-asserted-by":"crossref","unstructured":"Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., & Salimans, T. On distillation of guided diffusion models. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 14297\u201314306 (2023)","DOI":"10.1109\/CVPR52729.2023.01374"},{"key":"2463_CR14","unstructured":"Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., & Chen, M. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021)"},{"key":"2463_CR15","unstructured":"Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 (2022)"},{"key":"2463_CR16","doi-asserted-by":"crossref","unstructured":"Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684\u201310695 (2022)","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"2463_CR17","unstructured":"Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E.L., Ghasemipour, K., Gontijo\u00a0Lopes, R., Karagol\u00a0Ayan, B., Salimans, T., & et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 36479\u201336494 (2022)"},{"key":"2463_CR18","unstructured":"Salimans, T., & Ho, J. 
Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512 (2022)"},{"key":"2463_CR19","unstructured":"Song, Y., Dhariwal, P., Chen, M., & Sutskever, I. Consistency models (2023)"},{"key":"2463_CR20","unstructured":"Song, J., Meng, C., & Ermon, S. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502 (2020)"},{"key":"2463_CR21","unstructured":"Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., & Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456 (2020)"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02463-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02463-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02463-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,9]],"date-time":"2025-09-09T08:07:22Z","timestamp":1757405242000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02463-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,11]]},"references-count":21,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["2463"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02463-x","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,11]]},"assertion":[{"value":"17 April 
2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflicts of interest"}},{"value":"N\/A","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Consent for publication has been obtained.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"N\/A","order":5,"name":"Ethics","group":{"name":"EthicsHeading","label":"Materials availability"}},{"value":"Code will be made available to the reviewers and the public.","order":6,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code availability"}}]}}