Probabilistic circuits are a type of neural network that represents joint distributions through the computation graph of probabilistic inference itself. They move beyond other deep generative models and probabilistic graphical models by guaranteeing tractable probabilistic inference for certain classes of queries: marginal probabilities, entropies, expectations, causal effects, etc. Probabilistic circuit models can now also be learned effectively from data at scale, and they achieve state-of-the-art results in constrained sampling from both language models and natural image distributions. They thus enable new solutions to some key problems in machine learning. This talk will overview these recent developments in learning and probabilistic inference, as well as connections to the theory of probability generating polynomials.
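To make the tractability claim concrete, here is a minimal sketch (not taken from the talk; structure and weights are illustrative assumptions) of a tiny sum-product circuit over two binary variables. Marginalizing a variable only requires setting its leaves to 1, so the marginal is computed in a single bottom-up pass over the circuit:

```python
# Illustrative sketch: a tiny probabilistic circuit (sum node over two
# product nodes of Bernoulli leaves). All numbers are made up for the example.

def leaf(p, value):
    """Bernoulli leaf: p(X = value); value=None marginalizes the variable out."""
    if value is None:
        return 1.0                      # sum over both leaf values: p + (1-p) = 1
    return p if value == 1 else 1.0 - p

def circuit(x1, x2):
    """Sum (mixture) over two product nodes of independent leaves."""
    comp_a = leaf(0.9, x1) * leaf(0.2, x2)
    comp_b = leaf(0.3, x1) * leaf(0.7, x2)
    return 0.6 * comp_a + 0.4 * comp_b  # mixture weights sum to 1

# Marginal p(X1 = 1): one evaluation with X2 marginalized out.
marginal = circuit(1, None)

# Brute-force check: summing the joint over X2 gives the same answer.
assert abs(marginal - (circuit(1, 0) + circuit(1, 1))) < 1e-12
print(marginal)   # 0.6*0.9 + 0.4*0.3 = 0.66
```

The same single-pass evaluation scales to circuits with millions of nodes, which is what distinguishes these models from generative models whose marginals require approximate inference.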
Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the StarAI lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, and OOPSLA. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.