{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T03:34:13Z","timestamp":1767929653578,"version":"3.49.0"},"reference-count":76,"publisher":"Association for Computing Machinery (ACM)","issue":"PLDI","license":[{"start":{"date-parts":[[2024,6,20]],"date-time":"2024-06-20T00:00:00Z","timestamp":1718841600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Program. Lang."],"published-print":{"date-parts":[[2024,6,20]]},"abstract":"<jats:p>\n            Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for\n            <jats:italic toggle=\"yes\">variational inference<\/jats:italic>\n            (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs, that may employ first-class constructs for computing\n            <jats:italic toggle=\"yes\">densities of<\/jats:italic>\n            and\n            <jats:italic toggle=\"yes\">expected values under<\/jats:italic>\n            user-defined models and variational families. We then transform these programs systematically into unbiased gradient estimators for optimizing the objectives they define. Our design enables modular reasoning about many interacting concerns, including automatic differentiation, density accumulation, tracing, and the application of unbiased gradient estimation strategies. 
Additionally, relative to existing support for VI in PPLs, our design increases expressiveness along three axes: (1) it supports an open-ended set of user-defined variational objectives, rather than a fixed menu of options; (2) it supports a combinatorial space of gradient estimation strategies, many not automated by today\u2019s PPLs; and (3) it supports a broader class of models and variational families, because it supports constructs for approximate marginalization and normalization (previously introduced only for Monte Carlo inference). We implement our approach in an extension to the Gen probabilistic programming system (genjax.vi, implemented in JAX), and evaluate our automation on several deep generative modeling tasks, showing minimal performance overhead vs. hand-coded implementations and performance competitive with well-established open-source PPLs.\n          <\/jats:p>","DOI":"10.1145\/3656463","type":"journal-article","created":{"date-parts":[[2024,6,20]],"date-time":"2024-06-20T16:27:20Z","timestamp":1718900840000},"page":"2123-2147","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Probabilistic Programming with Programmable Variational Inference"],"prefix":"10.1145","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0009-0000-1930-8150","authenticated-orcid":false,"given":"McCoy R.","family":"Becker","sequence":"first","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9262-4392","authenticated-orcid":false,"given":"Alexander K.","family":"Lew","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7058-4679","authenticated-orcid":false,"given":"Xiaoyan","family":"Wang","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3052-7412","authenticated-orcid":false,"given":"Matin","family":"Ghavami","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5294-9088","authenticated-orcid":false,"given":"Mathieu","family":"Huot","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8095-8523","authenticated-orcid":false,"given":"Martin C.","family":"Rinard","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2507-0833","authenticated-orcid":false,"given":"Vikash K.","family":"Mansinghka","sequence":"additional","affiliation":[{"name":"Massachusetts Institute of Technology, Cambridge, USA"}]}],"member":"320","published-online":{"date-parts":[[2024,6,20]]},"reference":[{"key":"e_1_3_1_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-30499-9_86"},{"key":"e_1_3_1_3_1","doi-asserted-by":"publisher","DOI":"10.1007\/11693024_6"},{"key":"e_1_3_1_4_1","article-title":"Automatic Differentiation of Programs with Discrete Randomness","author":"Arya Gaurav","year":"2022","unstructured":"Gaurav Arya, Moritz Schauer, Frank Sch\u00e4fer, and Christopher Rackauckas. 2022. Automatic Differentiation of Programs with Discrete Randomness. InAdvances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022,Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (Eds.). 
http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/43d8e5fc816c692f342493331d5e98fc-Abstract-Conference.html","journal-title":"InAdvances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022,Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (Eds.)"},{"key":"e_1_3_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459775"},{"key":"e_1_3_1_6_1","doi-asserted-by":"publisher","unstructured":"McCoy R. Becker Alexander K. Lew and Xiaoyan Wang. 2024. probcomp\/programmable-vi-pldi-2024: v0.1.2. Zenodo. https:\/\/doi.org\/10.5281\/zenodo.10935596 10.5281\/zenodo.10935596","DOI":"10.5281\/zenodo.10935596"},{"key":"e_1_3_1_7_1","unstructured":"Eli Bingham Jonathan P. Chen Martin Jankowiak Fritz Obermeyer Neeraj Pradhan Theofanis Karaletsos Rohit Singh Paul A. Szerlip Paul Horsfall and Noah D. Goodman. 2019. Pyro: Deep Universal Probabilistic Programming. 28:1-28:6 pages.http:\/\/jmlr.org\/papers\/v20\/18-403.html"},{"key":"e_1_3_1_8_1","doi-asserted-by":"crossref","unstructured":"David M Blei and Michael I Jordan. 2006. Variational inference for Dirichlet process mixtures (2006).","DOI":"10.1214\/06-BA104"},{"key":"e_1_3_1_9_1","article-title":"Variational Inference: A Review for Statisticians","author":"Blei David M.","year":"2016","unstructured":"David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. 2016. Variational Inference: A Review for Statisticians.CoRR abs\/1601.00670 (2016). arXiv:1601.00670 http:\/\/arxiv.org\/abs\/1601.00670","journal-title":"CoRR abs\/1601.00670 (2016). arXiv:1601.00670"},{"key":"e_1_3_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3022670.2951942"},{"key":"e_1_3_1_11_1","doi-asserted-by":"publisher","unstructured":"J\u00f6rg Bornschein and Yoshua Bengio. 2015. Reweighted Wake-Sleep. 
https:\/\/doi.org\/10.48550\/arXiv.1406.2751 10.48550\/arXiv.1406.2751 arXiv:1406.2751 [cs].","DOI":"10.48550\/arXiv.1406.2751"},{"key":"e_1_3_1_12_1","doi-asserted-by":"publisher","unstructured":"Yuri Burda Roger Grosse and Ruslan Salakhutdinov. 2016. Importance Weighted Autoencoders. https:\/\/doi.org\/10.48550\/arXiv.1509.00519 10.48550\/arXiv.1509.00519 arXiv:1509.00519 [cs stat].","DOI":"10.48550\/arXiv.1509.00519"},{"key":"e_1_3_1_13_1","doi-asserted-by":"publisher","DOI":"10.18637\/jss.v076.i01"},{"key":"e_1_3_1_14_1","article-title":"AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms","author":"Cusumano-Towner Marco","year":"2017","unstructured":"Marco Cusumano-Towner and Vikash K Mansinghka. 2017. AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms. InAdvances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2017\/hash\/acab0116c354964a558e65bdd07ff047-Abstract.html","journal-title":"InAdvances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc"},{"key":"e_1_3_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3314221.3314642"},{"key":"e_1_3_1_16_1","first-page":"1","article-title":"Maximum Likelihood from Incomplete Data via the EM Algorithm","author":"Dempster A. P.","year":"1977","unstructured":"A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm.Journal ofthe Royal Statistical Society. Series B (Methodological) 39, 1 (1977), 1-38. https:\/\/www.jstor.org\/stable\/2984875 Publisher: [Royal Statistical Society, Wiley].","journal-title":"Journal ofthe Royal Statistical Society. Series B (Methodological) 39, 1 (1977)"},{"key":"e_1_3_1_17_1","doi-asserted-by":"publisher","unstructured":"Justin Domke. 2021. An Easy to Interpret Diagnostic for Approximate Inference: Symmetric Divergence Over Simulations. 
https:\/\/doi.org\/10.48550\/arXiv.2103.01030 10.48550\/arXiv.2103.01030 arXiv:2103.01030 [cs stat].","DOI":"10.48550\/arXiv.2103.01030"},{"key":"e_1_3_1_18_1","doi-asserted-by":"publisher","unstructured":"S. M. Ali Eslami Nicolas Heess Theophane Weber Yuval Tassa David Szepesvari Koray Kavukcuoglu and Geoffrey E. Hinton. 2016. Attend Infer Repeat: Fast Scene Understanding with Generative Models. https:\/\/doi.org\/10.48550\/arXiv.1603.08575 10.48550\/arXiv.1603.08575 arXiv:1603.08575 [cs].","DOI":"10.48550\/arXiv.1603.08575"},{"key":"e_1_3_1_19_1","first-page":"1529","article-title":"Dice: The infinitely differentiable Monte Carlo estimator","author":"Foerster Jakob","year":"2018","unstructured":"Jakob Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rockt\u00e4schel, Eric Xing, and Shimon Whiteson. 2018. Dice: The infinitely differentiable Monte Carlo estimator. InInternational Conference on Machine Learning. PMLR, 1529-1538.","journal-title":"InInternational Conference on Machine Learning. PMLR"},{"key":"e_1_3_1_20_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-011-9236-8"},{"key":"e_1_3_1_21_1","article-title":"Compiling machine learning programs via high-level tracing","author":"Frostig Roy","year":"2018","unstructured":"Roy Frostig, Matthew James Johnson, and Chris Leary. 2018. Compiling machine learning programs via high-level tracing.Systems for Machine Learning 4, 9 (2018).","journal-title":"Systems for Machine Learning 4, 9 (2018)"},{"key":"e_1_3_1_22_1","first-page":"1682","article-title":"Turing: a language for flexible probabilistic inference","author":"Ge Hong","year":"2018","unstructured":"Hong Ge, Kai Xu, and Zoubin Ghahramani. 2018. Turing: a language for flexible probabilistic inference. InInternational conference on artificial intelligence and statistics. PMLR, 1682-1690.","journal-title":"InInternational conference on artificial intelligence and statistics. 
PMLR"},{"key":"e_1_3_1_23_1","article-title":"Neural Adaptive Sequential Monte Carlo","author":"Gu Shixiang (Shane)","year":"2015","unstructured":"Shixiang (Shane) Gu, Zoubin Ghahramani, and Richard E Turner. 2015. Neural Adaptive Sequential Monte Carlo. InAdvances in Neural Information Processing Systems, Vol. 28. Curran Associates, Inc. https:\/\/papers.nips.cc\/paper_files\/paper\/2015\/hash\/99adff456950dd9629a5260c4de21858-Abstract.html","journal-title":"InAdvances in Neural Information Processing Systems, Vol. 28. Curran Associates, Inc"},{"key":"e_1_3_1_24_1","first-page":"1","article-title":"A convenient category for higher-order probability theory","author":"Heunen Chris","year":"2017","unstructured":"Chris Heunen, Ohad Kammar, Sam Staton, and Hongseok Yang. 2017. A convenient category for higher-order probability theory. InProceedings ofthe 32nd Annual ACM\/IEEE Symposium on Logic in Computer Science (LICS \u201817). IEEE Press, Reykjavik, Iceland, 1-12.","journal-title":"InProceedings ofthe 32nd Annual ACM\/IEEE Symposium on Logic in Computer Science (LICS \u201817). IEEE Press, Reykjavik, Iceland"},{"key":"e_1_3_1_25_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.7761831"},{"key":"e_1_3_1_26_1","article-title":"Stochastic variational inference","author":"Hoffman Matthew D","year":"2013","unstructured":"Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference.Journal of Machine Learning Research (2013).","journal-title":"Journal of Machine Learning Research (2013)"},{"key":"e_1_3_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-45231-5_17"},{"key":"e_1_3_1_28_1","first-page":"21696","article-title":"Variational diffusion models","author":"Kingma Diederik","year":"2021","unstructured":"Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. 2021. 
Variational diffusion models.Advances in Neural Information Processing Systems 34 (2021), 21696-21707.","journal-title":"Advances in Neural Information Processing Systems 34 (2021)"},{"key":"e_1_3_1_29_1","first-page":"3581","article-title":"Semi-supervised learning with deep generative models","author":"Kingma Diederik P.","year":"2014","unstructured":"Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, and Max Welling. 2014. Semi-supervised learning with deep generative models. InProceedings ofthe 27th International Conference on Neural Information Processing Systems - Volume 2 (NIPS\u201914). MIT Press, Cambridge, MA, USA, 3581-3589.","journal-title":"InProceedings ofthe 27th International Conference on Neural Information Processing Systems - Volume 2 (NIPS\u201914). MIT Press, Cambridge, MA, USA"},{"key":"e_1_3_1_30_1","doi-asserted-by":"publisher","unstructured":"Diederik P. Kingma and Max Welling. 2022. Auto-Encoding Variational Bayes. https:\/\/doi.org\/10.48550\/arXiv.1312.6114 10.48550\/arXiv.1312.6114 arXiv:1312.6114 [cs stat].","DOI":"10.48550\/arXiv.1312.6114"},{"key":"e_1_3_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3342136"},{"key":"e_1_3_1_32_1","first-page":"7574","article-title":"Storchastic: A framework for general stochastic automatic differentiation","author":"Krieken Emile","year":"2021","unstructured":"Emile Krieken, Jakub Tomczak, and Annette Ten Teije. 2021. Storchastic: A framework for general stochastic automatic differentiation.Advances in Neural Information Processing Systems 34 (2021), 7574-7587.","journal-title":"Advances in Neural Information Processing Systems 34 (2021)"},{"key":"e_1_3_1_33_1","article-title":"Automatic differentiation variational inference","author":"Kucukelbir Alp","year":"2017","unstructured":"Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. 2017. 
Automatic differentiation variational inference.Journal ofmachine learning research (2017).","journal-title":"Journal ofmachine learning research (2017)"},{"key":"e_1_3_1_34_1","unstructured":"Tuan Anh Le Adam R. Kosiorek N. Siddharth Yee Whye Teh and Frank Wood. 2019. Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow. 1039-1049 pages. http:\/\/proceedings.mlr.press\/v115\/le20a.html"},{"key":"e_1_3_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3571205"},{"key":"e_1_3_1_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3371084"},{"key":"e_1_3_1_37_1","article-title":"Reparameterization gradient for non-differentiable models","author":"Lee Wonyeol","year":"2018","unstructured":"Wonyeol Lee, Hangyeol Yu, and Hongseok Yang. 2018. Reparameterization gradient for non-differentiable models.Advances in Neural Information Processing Systems 31 (2018).","journal-title":"Advances in Neural Information Processing Systems 31 (2018)"},{"key":"e_1_3_1_38_1","first-page":"1096","article-title":"Recursive Monte Carlo and variational inference with auxiliary variables","author":"Lew Alexander K.","year":"2022","unstructured":"Alexander K. Lew, Marco F. Cusumano-Towner, and Vikash K. Mansinghka. 2022. Recursive Monte Carlo and variational inference with auxiliary variables. InUncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI2022, 1-5 August 2022, Eindhoven, The Netherlands (Proceedings of Machine Learning Research, Vol. 180). PMLR, 1096-1106. https:\/\/proceedings.mlr.press\/v180\/lew22a.html","journal-title":"InUncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI2022, 1-5 August 2022, Eindhoven, The Netherlands (Proceedings of Machine Learning Research, Vol. 180). 
PMLR"},{"key":"e_1_3_1_39_1","first-page":"1","article-title":"Trace types and denotational semantics for sound programmable inference in probabilistic languages","author":"Lew Alexander K","year":"2019","unstructured":"Alexander K Lew, Marco F Cusumano-Towner, Benjamin Sherman, Michael Carbin, and Vikash K Mansinghka. 2019. Trace types and denotational semantics for sound programmable inference in probabilistic languages.Proceedings of the ACM on Programming Languages 4, POPL (2019), 1-32.","journal-title":"Proceedings of the ACM on Programming Languages 4, POPL (2019)"},{"key":"e_1_3_1_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3591290"},{"key":"e_1_3_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/3571198"},{"key":"e_1_3_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3571243"},{"key":"e_1_3_1_43_1","unstructured":"Michael Y. Li Dieterich Lawson and Scott Linderman. 2023. Neural Adaptive Smoothing via Twisting. https:\/\/openreview.net\/forum?id=rC6-kGN-0v"},{"key":"e_1_3_1_44_1","first-page":"404","article-title":"Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages","author":"Lund\u00e9n Daniel","year":"2021","unstructured":"Daniel Lund\u00e9n, Johannes Borgstr\u00f6m, and David Broman. 2021. Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages. InESOP. 404-431.","journal-title":"InESOP"},{"key":"e_1_3_1_45_1","first-page":"1445","article-title":"Auxiliary Deep Generative Models","author":"Maal\u00f8e Lars","year":"2016","unstructured":"Lars Maal\u00f8e, Casper Kaae S\u00f8nderby, S\u00f8ren Kaae S\u00f8nderby, and Ole Winther. 2016. Auxiliary Deep Generative Models. InProceedings of The 33rd International Conference on Machine Learning. PMLR, 1445-1453. https:\/\/proceedings.mlr.press\/v48\/maaloe16.html ISSN: 1938-7228.","journal-title":"InProceedings of The 33rd International Conference on Machine Learning. 
PMLR"},{"key":"e_1_3_1_46_1","doi-asserted-by":"publisher","unstructured":"Chris J. Maddison Dieterich Lawson George Tucker Nicolas Heess Mohammad Norouzi Andriy Mnih Arnaud Doucet and Yee Whye Teh. 2017. Filtering Variational Objectives. https:\/\/doi.org\/10.48550\/arXiv.1705.09279 10.48550\/arXiv.1705.09279 arXiv:1705.09279 [cs stat].","DOI":"10.48550\/arXiv.1705.09279"},{"key":"e_1_3_1_47_1","article-title":"GFlowNets and variational inference","author":"Malkin Nikolay","year":"2022","unstructured":"Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio. 2022. GFlowNets and variational inference.arXiv preprint arXiv:2210.00580 (2022).","journal-title":"arXiv preprint arXiv:2210.00580 (2022)"},{"key":"e_1_3_1_48_1","article-title":"Venture: a higher-order probabilistic programming platform with programmable inference","author":"Mansinghka Vikash","year":"2014","unstructured":"Vikash Mansinghka, Daniel Selsam, and Yura Perov. 2014. Venture: a higher-order probabilistic programming platform with programmable inference.arXivpreprint arXiv:1404.0099 (2014).","journal-title":"arXivpreprint arXiv:1404.0099 (2014)"},{"key":"e_1_3_1_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/3192366.3192409"},{"key":"e_1_3_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3649843"},{"key":"e_1_3_1_51_1","first-page":"1","article-title":"Monte Carlo gradient estimation in machine learning","author":"Mohamed Shakir","year":"2020","unstructured":"Shakir Mohamed, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. 2020. Monte Carlo gradient estimation in machine learning.Journal of Machine Learning Research 21, 132 (2020), 1-62.","journal-title":"Journal of Machine Learning Research 21, 132 (2020)"},{"key":"e_1_3_1_52_1","first-page":"968","article-title":"Variational sequential Monte Carlo","author":"Naesseth Christian","year":"2018","unstructured":"Christian Naesseth, Scott Linderman, Rajesh Ranganath, and David Blei. 
2018. Variational sequential Monte Carlo. InInternational conference on artificial intelligence and statistics. PMLR, 968-977.","journal-title":"InInternational conference on artificial intelligence and statistics. PMLR"},{"key":"e_1_3_1_53_1","first-page":"15499","article-title":"Markovian score climbing: variational inference with KL(p||q)","author":"Naesseth Christian A.","year":"2020","unstructured":"Christian A. Naesseth, Fredrik Lindsten, and David Blei. 2020. Markovian score climbing: variational inference with KL(p||q). InProceedings of the 34th International Conference on Neural Information Processing Systems (NIPS\u201920). Curran Associates Inc., Red Hook, NY, USA, 15499-15510.","journal-title":"InProceedings of the 34th International Conference on Neural Information Processing Systems (NIPS\u201920). Curran Associates Inc., Red Hook, NY, USA"},{"key":"e_1_3_1_54_1","first-page":"307","article-title":"Elements of sequential Monte Carlo","author":"Naesseth Christian A","year":"2019","unstructured":"Christian A Naesseth, Fredrik Lindsten, Thomas B Sch\u00f6n, et al. 2019. Elements of sequential Monte Carlo.Foundations and Trends\u00ae in Machine Learning 12, 3 (2019), 307-392.","journal-title":"Foundations and Trends\u00ae in Machine Learning 12, 3 (2019)"},{"key":"e_1_3_1_55_1","doi-asserted-by":"crossref","first-page":"62","DOI":"10.1007\/978-3-319-29604-3_5","article-title":"Probabilistic inference by program transformation in Hakaru (system description)","author":"Narayanan Praveen","year":"2016","unstructured":"Praveen Narayanan, Jacques Carette, Wren Romano, Chung-chieh Shan, and Robert Zinkov. 2016. Probabilistic inference by program transformation in Hakaru (system description). InFunctional and Logic Programming: 13th International Symposium, FLOPS 2016, Kochi, Japan, March 4-6, 2016, Proceedings 13. 
Springer, 62-79.","journal-title":"InFunctional and Logic Programming: 13th International Symposium, FLOPS 2016, Kochi, Japan, March 4-6, 2016, Proceedings 13. Springer"},{"key":"e_1_3_1_56_1","article-title":"Functional tensors for probabilistic programming","author":"Obermeyer Fritz","year":"2019","unstructured":"Fritz Obermeyer, Eli Bingham, Martin Jankowiak, Du Phan, and Jonathan P Chen. 2019. Functional tensors for probabilistic programming.arXivpreprint arXiv:1910.10775 (2019).","journal-title":"arXivpreprint arXiv:1910.10775 (2019)"},{"key":"e_1_3_1_57_1","first-page":"4871","article-title":"Tensor variable elimination for plated factor graphs","author":"Obermeyer Fritz","year":"2019","unstructured":"Fritz Obermeyer, Eli Bingham, Martin Jankowiak, Neeraj Pradhan, Justin Chiu, Alexander Rush, and Noah Goodman. 2019. Tensor variable elimination for plated factor graphs. InInternational Conference on Machine Learning. PMLR, 4871-4880.","journal-title":"InInternational Conference on Machine Learning. PMLR"},{"key":"e_1_3_1_58_1","article-title":"Variational autoencoder for deep learning of images, labels and captions","author":"Pu Yunchen","year":"2016","unstructured":"Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. 2016. Variational autoencoder for deep learning of images, labels and captions.Advances in Neural Information Processing Systems 29 (2016).","journal-title":"Advances in Neural Information Processing Systems 29 (2016)"},{"key":"e_1_3_1_59_1","article-title":"You only linearize once: Tangents transpose to gradients","author":"Radul Alexey","year":"2022","unstructured":"Alexey Radul, Adam Paszke, Roy Frostig, Matthew Johnson, and Dougal Maclaurin. 2022. You only linearize once: Tangents transpose to gradients.arXiv preprint arXiv:2204.10923 (2022).","journal-title":"arXiv preprint arXiv:2204.10923 (2022)"},{"key":"e_1_3_1_60_1","unstructured":"Tom Rainforth Adam R. Kosiorek Tuan Anh Le Chris J. 
Maddison Maximilian Igl Frank Wood and Yee Whye Teh. 2018. Tighter Variational Bounds are Not Necessarily Better. https:\/\/arxiv.org\/abs\/1802.04537v3"},{"key":"e_1_3_1_61_1","first-page":"324","article-title":"Hierarchical Variational Models","author":"Ranganath Rajesh","year":"2016","unstructured":"Rajesh Ranganath, Dustin Tran, and David Blei. 2016. Hierarchical Variational Models. InProceedings of The 33rd International Conference on Machine Learning. PMLR, 324-333. https:\/\/proceedings.mlr.press\/v48\/ranganath16.html ISSN: 1938-7228.","journal-title":"InProceedings of The 33rd International Conference on Machine Learning. PMLR"},{"key":"e_1_3_1_62_1","unstructured":"D.B. Rubin. 1988. Using the SIR algorithm to simulate posterior distributions. https:\/\/api.semanticscholar.org\/CorpusID:115305396"},{"key":"e_1_3_1_63_1","first-page":"1218","article-title":"Markov chain Monte Carlo and variational inference: Bridging the gap","author":"Salimans Tim","year":"2015","unstructured":"Tim Salimans, Diederik Kingma, and Max Welling. 2015. Markov chain Monte Carlo and variational inference: Bridging the gap. InInternational Conference on Machine Learning. PMLR, 1218-1226.","journal-title":"InInternational Conference on Machine Learning. PMLR"},{"key":"e_1_3_1_64_1","article-title":"Gradient estimation using stochastic computation graphs","author":"Schulman John","year":"2015","unstructured":"John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. 2015. Gradient estimation using stochastic computation graphs.Advances in Neural Information Processing Systems 28 (2015).","journal-title":"Advances in Neural Information Processing Systems 28 (2015)"},{"key":"e_1_3_1_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/3434284"},{"key":"e_1_3_1_66_1","doi-asserted-by":"publisher","unstructured":"Artem Sobolev and Dmitry Vetrov. 2019. Importance Weighted Hierarchical Variational Inference. 
https:\/\/doi.org\/10.48550\/arXiv.1905.03290 10.48550\/arXiv.1905.03290arXiv:1905.03290 [cs stat].","DOI":"10.48550\/arXiv.1905.03290"},{"key":"e_1_3_1_67_1","article-title":"Learning Structured Output Representation using Deep Conditional Generative Models","author":"Sohn Kihyuk","year":"2015","unstructured":"Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning Structured Output Representation using Deep Conditional Generative Models. InAdvances in Neural Information Processing Systems, Vol. 28. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2015\/hash\/8d55a249e6baa5c06772297520da2051-Abstract.html","journal-title":"InAdvances in Neural Information Processing Systems, Vol. 28. Curran Associates, Inc"},{"key":"e_1_3_1_68_1","first-page":"1056","article-title":"Learning proposals for probabilistic programs with inference combinators","author":"Stites Sam","year":"2021","unstructured":"Sam Stites, Heiko Zimmermann, Hao Wu, Eli Sennesh, and Jan-Willem van de Meent. 2021. Learning proposals for probabilistic programs with inference combinators. InProceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence. PMLR, 1056-1066. https:\/\/proceedings.mlr.press\/v161\/stites21a.html ISSN: 2640-3498.","journal-title":"InProceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence. PMLR"},{"key":"e_1_3_1_69_1","first-page":"7609","article-title":"Simple, Distributed, and Accelerated Probabilistic Programming","author":"Tran Dustin","year":"2018","unstructured":"Dustin Tran, Matthew D. Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, and Alexey Radul. 2018. Simple, Distributed, and Accelerated Probabilistic Programming. InAdvances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, Samy Bengio, Hanna M. 
Wallach, Hugo Larochelle, Kristen Grauman, Nicolo Cesa-Bianchi, and Roman Garnett (Eds.). 7609-7620. https:\/\/proceedings.neurips.cc\/paper\/2018\/hash\/201e5bacd665709851b77148e225b332-Abstract.html","journal-title":"InAdvances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolo Cesa-Bianchi, and Roman Garnett (Eds.)"},{"key":"e_1_3_1_70_1","article-title":"Deep Probabilistic Programming","author":"Tran Dustin","year":"2017","unstructured":"Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. 2017. Deep Probabilistic Programming. In5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. https:\/\/openreview.net\/forum?id=Hy6b4Pqee","journal-title":"In5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net"},{"key":"e_1_3_1_71_1","first-page":"19667","author":"Vahdat Arash","year":"2020","unstructured":"Arash Vahdat and Jan Kautz. 2020. NVAE: A deep hierarchical variational autoencoder.Advances in Neural Information Processing Systems 33 (2020), 19667-19679.","journal-title":"NVAE: A deep hierarchical variational autoencoder.Advances in Neural Information Processing Systems 33 (2020)"},{"key":"e_1_3_1_72_1","article-title":"Fast and correct variational inference for probabilistic programming: Differentiability, reparameterisation and smoothing","author":"Wagner Dominik","year":"2023","unstructured":"Dominik Wagner. 2023.Fast and correct variational inference for probabilistic programming: Differentiability, reparameterisation and smoothing. Ph.D. Dissertation. University of Oxford.","journal-title":"Ph.D. Dissertation. 
University of Oxford"},{"key":"e_1_3_1_73_1","first-page":"788","article-title":"Sound probabilistic inference via guide types","author":"Wang Di","year":"2021","unstructured":"Di Wang, Jan Hoffmann, and Thomas Reps. 2021. Sound probabilistic inference via guide types. InProceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation. 788-803.","journal-title":"InProceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation"},{"key":"e_1_3_1_74_1","first-page":"2650","article-title":"Credit assignment techniques in stochastic computation graphs","author":"Weber Th\u00e9ophane","year":"2019","unstructured":"Th\u00e9ophane Weber, Nicolas Heess, Lars Buesing, and David Silver. 2019. Credit assignment techniques in stochastic computation graphs. InThe 22ndInternational Conference on Artificial Intelligence and Statistics. PMLR, 2650-2660.","journal-title":"InThe 22ndInternational Conference on Artificial Intelligence and Statistics. PMLR"},{"key":"e_1_3_1_75_1","unstructured":"Heiko Zimmermann Hao Wu Babak Esmaeili and Jan-Willem van de Meent. 2021. Nested Variational Inference. 
https:\/\/openreview.net\/forum?id=kBrHzFtwdp"},{"key":"e_1_3_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236778"},{"key":"e_1_3_1_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3158148"}],"container-title":["Proceedings of the ACM on Programming Languages"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3656463","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3656463","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,4]],"date-time":"2025-07-04T20:44:10Z","timestamp":1751661850000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3656463"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,20]]},"references-count":76,"journal-issue":{"issue":"PLDI","published-print":{"date-parts":[[2024,6,20]]}},"alternative-id":["10.1145\/3656463"],"URL":"https:\/\/doi.org\/10.1145\/3656463","relation":{},"ISSN":["2475-1421"],"issn-type":[{"value":"2475-1421","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,20]]},"assertion":[{"value":"2024-06-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}