{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T02:27:50Z","timestamp":1760236070413,"version":"build-2065373602"},"reference-count":31,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,10,28]],"date-time":"2021-10-28T00:00:00Z","timestamp":1635379200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Stochastic gradient (SG)-based algorithms for Markov chain Monte Carlo sampling (SG-MCMC) tackle large-scale Bayesian modeling problems by operating on mini-batches and injecting noise on SG steps. The sampling properties of these algorithms are determined by user choices, such as the covariance of the injected noise and the learning rate, and by problem-specific factors, such as assumptions on the loss landscape and the covariance of the SG noise. However, current SG-MCMC algorithms applied to popular complex models such as Deep Nets cannot simultaneously satisfy the assumptions on loss landscapes and on the behavior of the covariance of the SG noise, while operating with the practical requirement of non-vanishing learning rates. In this work, we propose a novel, practical method that makes the SG noise isotropic, using a fixed learning rate that we determine analytically. 
Extensive experimental validation indicates that our proposal is competitive with the state of the art in SG-MCMC.<\/jats:p>","DOI":"10.3390\/e23111426","type":"journal-article","created":{"date-parts":[[2021,10,28]],"date-time":"2021-10-28T23:50:28Z","timestamp":1635465028000},"page":"1426","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4244-2053","authenticated-orcid":false,"given":"Giulio","family":"Franzese","sequence":"first","affiliation":[{"name":"Data Science Department, Eurecom, 06410 Biot, France"}]},{"given":"Dimitrios","family":"Milios","sequence":"additional","affiliation":[{"name":"Data Science Department, Eurecom, 06410 Biot, France"}]},{"given":"Maurizio","family":"Filippone","sequence":"additional","affiliation":[{"name":"Data Science Department, Eurecom, 06410 Biot, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4675-7677","authenticated-orcid":false,"given":"Pietro","family":"Michiardi","sequence":"additional","affiliation":[{"name":"Data Science Department, Eurecom, 06410 Biot, France"}]}],"member":"1968","published-online":{"date-parts":[[2021,10,28]]},"reference":[{"key":"ref_1","unstructured":"Welling, M., and Teh, Y.W. (July, January 28). Bayesian learning via stochastic gradient Langevin dynamics. Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA."},{"key":"ref_2","unstructured":"Ahn, S., Korattikara, A., and Welling, M. (2012). Bayesian posterior sampling via stochastic gradient Fisher scoring. arXiv."},{"key":"ref_3","unstructured":"Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K.Q. (2013). Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. 
Advances in Neural Information Processing Systems 26, Curran Associates, Inc."},{"key":"ref_4","unstructured":"Chen, T., Fox, E., and Guestrin, C. (2014, January 22\u201324). Stochastic gradient Hamiltonian Monte Carlo. Proceedings of the International Conference on Machine Learning, Beijing, China."},{"key":"ref_5","unstructured":"Ma, Y.A., Chen, T., and Fox, E. (2015, January 7\u201312). A complete recipe for stochastic gradient MCMC. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_6","unstructured":"Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F.A. (2018). Essentially no barriers in neural network energy landscape. arXiv."},{"key":"ref_7","unstructured":"Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D., and Wilson, A.G. (2018, January 3\u20138). Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs. Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montr\u00e9al, QC, Canada."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Chaudhari, P., and Soatto, S. (2018, January 11\u201316). Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. Proceedings of the 2018 Information Theory and Applications Workshop (ITA), San Diego, CA, USA.","DOI":"10.1109\/ITA.2018.8503224"},{"key":"ref_9","unstructured":"Maddox, W.J., Izmailov, P., Garipov, T., Vetrov, D.P., and Wilson, A.G. (2019, January 8\u201314). A simple baseline for Bayesian uncertainty in deep learning. Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada."},{"key":"ref_10","first-page":"4873","article-title":"Stochastic gradient descent as approximate Bayesian inference","volume":"18","author":"Mandt","year":"2017","journal-title":"J. Mach. Learn. Res."},{"key":"ref_11","unstructured":"Springenberg, J.T., Klein, A., Falkner, S., and Hutter, F. (2016, January 5\u201310). 
Bayesian optimization with robust Bayesian neural networks. Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_12","unstructured":"Gal, Y., and Ghahramani, Z. (2016, January 19\u201324). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. Proceedings of the International Conference on Machine Learning, ICML, New York, NY, USA."},{"key":"ref_13","unstructured":"Bishop, C.M. (2006). Pattern Recognition and Machine Learning, Springer. [1st ed.]. 2006. corr. 2nd printing 2011 ed."},{"key":"ref_14","unstructured":"Chen, C., Carlson, D., Gan, Z., Li, C., and Carin, L. (2016, January 9\u201311). Bridging the gap between stochastic gradient MCMC and stochastic optimization. Proceedings of the Artificial Intelligence and Statistics, Cadiz, Spain."},{"key":"ref_15","unstructured":"Gardiner, C.W. (2004). Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, Springer. [3rd ed.]."},{"key":"ref_16","unstructured":"Kushner, H., and Yin, G. (2003). Stochastic Approximation and Recursive Algorithms and Applications, Springer. Stochastic Modelling and Applied Probability."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Ljung, L., Pflug, G., and Walk, H. (1992). Stochastic Approximation and Optimization of Random Systems, Birkhauser Verlag.","DOI":"10.1007\/978-3-0348-8609-3"},{"key":"ref_18","unstructured":"Kloeden, P.E., and Platen, E. (2013). 
Numerical Solution of Stochastic Differential Equations, Springer Science & Business Media."},{"key":"ref_19","first-page":"2","article-title":"MCMC using Hamiltonian dynamics","volume":"Volume 2","author":"Neal","year":"2011","journal-title":"Handbook of Markov Chain Monte Carlo"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1111\/j.1467-9868.2010.00765.x","article-title":"Riemann manifold Langevin and Hamiltonian Monte Carlo methods","volume":"73","author":"Girolami","year":"2011","journal-title":"J. R. Stat. Soc. Ser. B (Stat. Methodol.)"},{"key":"ref_21","unstructured":"Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, MIT Press."},{"key":"ref_22","first-page":"5504","article-title":"Exploration of the (non-) asymptotic bias and variance of stochastic gradient Langevin dynamics","volume":"17","author":"Vollmer","year":"2016","journal-title":"J. Mach. Learn. Res."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"124020","DOI":"10.1088\/1742-5468\/ab3985","article-title":"On the information bottleneck theory of deep learning","volume":"2019","author":"Saxe","year":"2019","journal-title":"J. Stat. Mech. Theory Exp."},{"key":"ref_24","unstructured":"Zhu, Z., Wu, J., Yu, B., Wu, L., and Ma, J. (2018). The anisotropic noise in stochastic gradient descent: Its behavior of escaping from minima and regularization effects. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"599","DOI":"10.1080\/00949650213744","article-title":"Maximum likelihood estimation using the empirical Fisher information matrix","volume":"72","author":"Scott","year":"2002","journal-title":"J. Stat. Comput. Simul."},{"key":"ref_26","unstructured":"Dua, D., and Graff, C. (2021, October 25). UCI Machine Learning Repository. Available online: https:\/\/archive.ics.uci.edu\/ml\/index.php."},{"key":"ref_27","unstructured":"LeCun, Y., and Cortes, C. (2021, October 25). MNIST Handwritten Digit Database. 
Available online: http:\/\/yann.lecun.com\/exdb\/mnist\/."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_29","unstructured":"Krizhevsky, A., Nair, V., and Hinton, G. (2021, October 25). CIFAR-10 (Canadian Institute for Advanced Research). Available online: http:\/\/www.cs.toronto.edu\/~kriz\/cifar.html."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_31","unstructured":"Guo, C., Pleiss, G., Sun, Y., and Weinberger, K.Q. (2017). On calibration of modern neural networks. arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/11\/1426\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:22:17Z","timestamp":1760167337000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/11\/1426"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,28]]},"references-count":31,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,11]]}},"alternative-id":["e23111426"],"URL":"https:\/\/doi.org\/10.3390\/e23111426","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2021,10,28]]}}}