Abstract
Probabilistic programming draws on ideas from artificial intelligence, statistics, and programming languages, and aims to combine them in a way that builds on their respective strengths. In this talk, I will discuss how we can integrate deep learning and importance sampling to perform inference in probabilistic programs. This approach has tremendous potential to make inference scalable, but it requires model-specific designs for networks and samplers. I will show how programming language abstractions can make the design of these components more practical and accessible by allowing us to reason compositionally about importance samplers. This opens up opportunities for new model and inference designs, both in simulation-based inference and in deep generative models.
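To make the core idea concrete, here is a minimal sketch, not the speaker's implementation, of combining deep learning with importance sampling: a small network is trained on samples from a toy probabilistic program's joint distribution to serve as an amortized proposal (in the spirit of inference compilation), and inference is then performed by self-normalized importance sampling. The model, architecture, and training scheme are all hypothetical choices made for illustration.

```python
# Illustrative sketch: neural proposal + self-normalized importance sampling.
# Toy model (assumed for this example): z ~ N(0, 1), x ~ N(z, 0.5^2).
import torch
import torch.nn as nn

torch.manual_seed(0)
PRIOR = torch.distributions.Normal(0.0, 1.0)
OBS_STD = 0.5

def likelihood(z):
    return torch.distributions.Normal(z, OBS_STD)

# Proposal network: maps an observation x to the parameters of q(z | x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def proposal(x):
    params = net(x.unsqueeze(-1))
    mu, log_sigma = params[..., 0], params[..., 1]
    return torch.distributions.Normal(mu, log_sigma.exp())

# Amortized training: sample (z, x) from the joint and maximize log q(z | x),
# which minimizes the inclusive KL from the posterior to the proposal.
for step in range(2000):
    z = PRIOR.sample((256,))
    x = likelihood(z).sample()
    loss = -proposal(x).log_prob(z).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: self-normalized importance sampling with the learned proposal.
x_obs = torch.tensor(1.3)
q = proposal(x_obs.expand(10_000))
z = q.sample()
log_w = PRIOR.log_prob(z) + likelihood(z).log_prob(x_obs) - q.log_prob(z)
w = torch.softmax(log_w, dim=0)  # normalized importance weights

post_mean = (w * z).sum()             # estimate of E[z | x_obs]
exact = x_obs / (1.0 + OBS_STD**2)    # conjugate-Gaussian closed form
print(f"IS estimate: {post_mean.item():.3f}, exact: {exact.item():.3f}")
```

The amortization is what bears on scalability: one trained network proposes for any observation, so the cost of learning is paid once rather than per query. The model-specific design burden mentioned in the abstract shows up here in the choice of proposal family and architecture, which the talk's compositional abstractions aim to make more systematic.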