Abstract
Probabilistic circuits are a type of neural network that represents joint distributions through the computation graph of probabilistic inference itself. They move beyond other deep generative models and probabilistic graphical models by guaranteeing tractable probabilistic inference for certain classes of queries, such as marginal probabilities, entropies, expectations, and causal effects. Probabilistic circuit models can now also be learned effectively from data at scale, and they achieve state-of-the-art results in constrained sampling from both language models and natural image distributions. They thus enable new solutions to some key problems in machine learning. This talk will give an overview of these recent developments in learning and probabilistic inference, as well as connections to the theory of probability generating polynomials.