{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,2]],"date-time":"2026-01-02T07:44:47Z","timestamp":1767339887885,"version":"build-2065373602"},"reference-count":46,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2023,4,7]],"date-time":"2023-04-07T00:00:00Z","timestamp":1680825600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Huawei Paris"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Score-based diffusion models are a class of generative models whose dynamics is described by stochastic differential equations that map noise into data. While recent works have started to lay down a theoretical foundation for these models, a detailed understanding of the role of the diffusion time T is still lacking. Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution; however, a smaller value of T should be preferred for a better approximation of the score-matching objective and higher computational efficiency. Starting from a variational interpretation of diffusion models, in this work we quantify this trade-off and suggest a new method to improve quality and efficiency of both training and sampling, by adopting smaller diffusion times. Indeed, we show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process. Empirical results support our analysis; for image data, our method is competitive with regard to the state of the art, according to standard sample quality metrics and log-likelihood.<\/jats:p>","DOI":"10.3390\/e25040633","type":"journal-article","created":{"date-parts":[[2023,4,10]],"date-time":"2023-04-10T03:26:06Z","timestamp":1681097166000},"page":"633","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["How Much Is Enough? 
A Study on Diffusion Times in Score-Based Generative Models"],"prefix":"10.3390","volume":"25","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4244-2053","authenticated-orcid":false,"given":"Giulio","family":"Franzese","sequence":"first","affiliation":[{"name":"EURECOM Data Science Department, 06410 Biot, France"}]},{"given":"Simone","family":"Rossi","sequence":"additional","affiliation":[{"name":"EURECOM Data Science Department, 06410 Biot, France"}]},{"given":"Lixuan","family":"Yang","sequence":"additional","affiliation":[{"name":"Huawei Technologies Paris, 92100 Boulogne-Billancourt, France"}]},{"given":"Alessandro","family":"Finamore","sequence":"additional","affiliation":[{"name":"Huawei Technologies Paris, 92100 Boulogne-Billancourt, France"}]},{"given":"Dario","family":"Rossi","sequence":"additional","affiliation":[{"name":"Huawei Technologies Paris, 92100 Boulogne-Billancourt, France"}]},{"given":"Maurizio","family":"Filippone","sequence":"additional","affiliation":[{"name":"EURECOM Data Science Department, 06410 Biot, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4675-7677","authenticated-orcid":false,"given":"Pietro","family":"Michiardi","sequence":"additional","affiliation":[{"name":"EURECOM Data Science Department, 06410 Biot, France"}]}],"member":"1968","published-online":{"date-parts":[[2023,4,7]]},"reference":[{"key":"ref_1","unstructured":"Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. (2015, January 6\u201311). Deep unsupervised learning using nonequilibrium thermodynamics. Proceedings of the International Conference on Machine Learning, Lille, France."},{"key":"ref_2","unstructured":"Song, Y., and Ermon, S. (2019, January 8\u201314). Generative Modeling by Estimating Gradients of the Data Distribution. Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_3","unstructured":"Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., and Poole, B. (May, January 30). Score-Based Generative Modeling through Stochastic Differential Equations. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_4","unstructured":"Vahdat, A., Kreis, K., and Kautz, J. (2021, January 6\u201314). Score-based Generative Modeling in Latent Space. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_5","unstructured":"Kingma, D., Salimans, T., Poole, B., and Ho, J. (2021, January 6\u201314). Variational Diffusion Models. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_6","unstructured":"Ho, J., Jain, A., and Abbeel, P. (2020, January 6\u201312). Denoising Diffusion Probabilistic Models. Proceedings of the Advances in Neural Information Processing Systems, Online."},{"key":"ref_7","unstructured":"Song, J., Meng, C., and Ermon, S. (May, January 30). Denoising Diffusion Implicit Models. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_8","unstructured":"Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. (May, January 30). DiffWave: A Versatile Diffusion Model for Audio Synthesis. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_9","unstructured":"Lee, S.G., Kim, H., Shin, C., Tan, X., Liu, C., Meng, Q., Qin, T., Chen, W., Yoon, S., and Liu, T.Y. (2022, January 25\u201329). PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior. 
Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_10","unstructured":"Dhariwal, P., and Nichol, A. (2021, January 6\u201314). Diffusion Models Beat GANs on Image Synthesis. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_11","unstructured":"Nichol, A.Q., and Dhariwal, P. (2021, January 18\u201324). Improved Denoising Diffusion Probabilistic Models. Proceedings of the International Conference on Machine Learning, Virtual."},{"key":"ref_12","unstructured":"Tashiro, Y., Song, J., Song, Y., and Ermon, S. (2021, January 6\u201314). CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_13","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative Adversarial Nets. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_14","unstructured":"Kingma, D.P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. (2016, January 5\u201310). Improved Variational Inference with Inverse Autoregressive Flow. Proceedings of the Advances in Neural Information Processing Systems 29, Barcelona, Spain."},{"key":"ref_15","unstructured":"Kingma, D.P., and Welling, M. (2014, January 14\u201316). Auto-Encoding Variational Bayes. Proceedings of the International Conference on Learning Representations, Banff, AB, Canada."},{"key":"ref_16","unstructured":"Tran, B.H., Rossi, S., Milios, D., Michiardi, P., Bonilla, E.V., and Filippone, M. (2021, January 6\u201314). Model Selection for Bayesian Autoencoders. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1016\/0304-4149(82)90051-5","article-title":"Reverse-Time Diffusion Equation Models","volume":"12","author":"Anderson","year":"1982","journal-title":"Stoch. Process. Their Appl."},{"key":"ref_18","unstructured":"Song, Y., Durkan, C., Murray, I., and Ermon, S. (2021, January 6\u201314). Maximum Likelihood Training of Score-Based Diffusion Models. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"S\u00e4rkk\u00e4, S., and Solin, A. (2019). Applied Stochastic Differential Equations, Institute of Mathematical Statistics Textbooks, Cambridge University Press.","DOI":"10.1017\/9781108186735"},{"key":"ref_20","unstructured":"Zheng, H., He, P., Chen, W., and Zhou, M. (2023, March 28). Truncated Diffusion Probabilistic Models. CoRR 2022. abs\/2202.09671, Available online: http:\/\/xxx.lanl.gov\/abs\/2202.09671."},{"key":"ref_21","unstructured":"Austin, J., Johnson, D.D., Ho, J., Tarlow, D., and van den Berg, R. (2021, January 6\u201314). Structured Denoising Diffusion Models in Discrete State-Spaces. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_22","unstructured":"Jolicoeur-Martineau, A., Li, K., Pich\u00e9-Taillefer, R., Kachman, T., and Mitliagkas, I. (2023, March 28). Gotta Go Fast When Generating Data with Score-Based Models. CoRR 2021. abs\/2105.14080, Available online: http:\/\/xxx.lanl.gov\/abs\/2105.14080."},{"key":"ref_23","unstructured":"Salimans, T., and Ho, J. (2022, January 25\u201329). 
Progressive Distillation for Fast Sampling of Diffusion Models. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_24","unstructured":"Xiao, Z., Kreis, K., and Vahdat, A. (2022, January 25\u201329). Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_25","unstructured":"Watson, D., Ho, J., Norouzi, M., and Chan, W. (2023, March 28). Learning to Efficiently Sample from Diffusion Probabilistic Models. CoRR 2021. abs\/2106.03802, Available online: http:\/\/xxx.lanl.gov\/abs\/2106.03802."},{"key":"ref_26","unstructured":"Dockhorn, T., Vahdat, A., and Kreis, K. (2022, January 25\u201329). Score-Based Generative Modeling with Critically-Damped Langevin Diffusion. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. (2022, January 19\u201320). High-Resolution Image Synthesis with Latent Diffusion Models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"ref_28","unstructured":"Bao, F., Li, C., Zhu, J., and Zhang, B. (2022, January 25\u201329). Analytic-DPM: An Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_29","unstructured":"De Bortoli, V., Thornton, J., Heng, J., and Doucet, A. (2021, January 6\u201314). Diffusion Schr\u00f6dinger Bridge with Applications to Score-Based Generative Modeling. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_30","unstructured":"De Bortoli, V. (2022). Convergence of denoising diffusion models under the manifold hypothesis. arXiv."},{"key":"ref_31","unstructured":"Lee, H., Lu, J., and Tan, Y. (2022). Convergence for score-based generative modeling with polynomial complexity. arXiv."},{"key":"ref_32","unstructured":"Huang, C.W., Lim, J.H., and Courville, A.C. (2021, January 6\u201314). A Variational Perspective on Diffusion-Based Generative Models and Score Matching. Proceedings of the Advances in Neural Information Processing Systems, Virtual."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Villani, C. (2009). Optimal Transport: Old and New, Springer.","DOI":"10.1007\/978-3-540-71050-9"},{"key":"ref_34","unstructured":"Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., and Zhang, A.R. (2022). Sampling is as easy as learning the score: Theory for diffusion models with minimal data assumptions. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"249","DOI":"10.1137\/20M1339982","article-title":"Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrodinger bridge","volume":"63","author":"Chen","year":"2021","journal-title":"SIAM Rev."},{"key":"ref_36","unstructured":"Chen, T., Liu, G.H., and Theodorou, E. (2022, January 25\u201329). Likelihood Training of Schr\u00f6dinger Bridge using Forward-Backward SDEs Theory. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_37","unstructured":"Chen, R.T.Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D.K. (2018, January 3\u20138). Neural Ordinary Differential Equations. 
Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_38","unstructured":"Grathwohl, W., Chen, R.T.Q., Bettencourt, J., and Duvenaud, D. (2019, January 6\u20139). Scalable Reversible Generative Models with Free-form Continuous Dynamics. Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_39","unstructured":"Kynk\u00e4\u00e4nniemi, T., Karras, T., Aittala, M., Aila, T., and Lehtinen, J. (2023, March 28). The Role of ImageNet Classes in Fr\u00e9chet Inception Distance. CoRR 2022. abs\/2203.06026, Available online: http:\/\/xxx.lanl.gov\/abs\/2203.06026."},{"key":"ref_40","unstructured":"Theis, L., van den Oord, A., and Bethge, M. (2016, January 2\u20134). A Note on the Evaluation of Generative Models. Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico."},{"key":"ref_41","unstructured":"Rasmussen, C. (December, January 29). The Infinite Gaussian Mixture Model. Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"653","DOI":"10.1007\/s11390-010-9355-8","article-title":"Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution","volume":"25","year":"2010","journal-title":"J. Comput. Sci. Technol."},{"key":"ref_43","unstructured":"Kingma, D.P., and Dhariwal, P. (2018, January 3\u20138). Glow: Generative Flow with Invertible 1 \u00d7 1 Convolutions. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_44","unstructured":"Hoogeboom, E., Gritsenko, A.A., Bastings, J., Poole, B., van den Berg, R., and Salimans, T. (2022, January 25\u201329). Autoregressive Diffusion Models. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_45","unstructured":"Kloeden, P.E., and Platen, E. (1995). Numerical Solution of Stochastic Differential Equations, Springer."},{"key":"ref_46","unstructured":"Karras, T., Aittala, M., Aila, T., and Laine, S. (2022). Elucidating the Design Space of Diffusion-Based Generative Models. arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/25\/4\/633\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:12:10Z","timestamp":1760123530000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/25\/4\/633"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,7]]},"references-count":46,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2023,4]]}},"alternative-id":["e25040633"],"URL":"https:\/\/doi.org\/10.3390\/e25040633","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2023,4,7]]}}}
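
The abstract in the record turns on the trade-off in the diffusion time T: the forward dynamics must run long enough for p_T to approach a simple noise prior, while a smaller T eases score matching and reduces compute. As a reading aid only, the following is a minimal LaTeX sketch of the standard forward/reverse SDE pair the abstract refers to (the variance-preserving SDE of ref_3 and the time reversal of ref_17); the paper's exact parameterization may differ.

```latex
% Standard VP-SDE forward dynamics on t \in [0, T]: data is progressively
% noised toward an approximately Gaussian p_T. Shown only to unpack the
% abstract's terms; not necessarily the paper's exact formulation.
\[
  \mathrm{d}x_t = -\tfrac{1}{2}\beta(t)\, x_t\, \mathrm{d}t
                + \sqrt{\beta(t)}\, \mathrm{d}w_t .
\]
% Time-reversed dynamics (Anderson, ref_17), run from t = T back to 0, with
% the score \nabla_x \log p_t(x_t) replaced in practice by a learned
% approximation. A larger T drives p_T closer to the Gaussian prior; a
% smaller T eases score matching and reduces compute, which is the trade-off
% the abstract quantifies.
\[
  \mathrm{d}x_t = \Bigl[ -\tfrac{1}{2}\beta(t)\, x_t
                - \beta(t)\, \nabla_x \log p_t(x_t) \Bigr] \mathrm{d}t
                + \sqrt{\beta(t)}\, \mathrm{d}\bar{w}_t .
\]
```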
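
The record itself is a standard Crossref REST API "work" message (status/message-type/message envelope). Below is a minimal Python sketch, under the assumption that the public endpoint https://api.crossref.org/works/{DOI} and the third-party requests package are used; the DOI and all field names are taken verbatim from the record above.

```python
# Minimal sketch (not part of the record): retrieve and read this exact
# Crossref "work" message via the public Crossref REST API.
import requests

DOI = "10.3390/e25040633"  # from the "DOI" field of the record above

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # unwrap the status/message-type/message envelope

# Field names below appear verbatim in the record: "title" is a list,
# "author" entries carry "given"/"family", "references-count" is an integer.
title = work["title"][0]
authors = [f'{a.get("given", "")} {a["family"]}'.strip() for a in work["author"]]

print(title)
print("; ".join(authors))
print(f"{work['references-count']} references, issued {work['issued']['date-parts'][0]}")
```

Note that Crossref deposits abstracts wrapped in JATS markup (the `<jats:p>` tags visible in the "abstract" field above), so the abstract string needs tag stripping before display.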