Dave Moore

I am currently at Google, working with the Bayesflow team to create tools for probabilistic modeling at scale. Previously I was a PhD student in computer science at UC Berkeley, advised by Stuart Russell. Before coming to Berkeley, I was an undergrad at Williams College, where I majored in CS and math and wrote a senior thesis with Andrea Danyluk.

My thesis project applied Bayesian inference to nuclear test monitoring: given seismic waveforms from a global network of stations, the goal is to infer the seismic events that plausibly explain the observed signals. Portions of this work were funded by the CTBTO and DTRA. My thesis on signal-based Bayesian monitoring of seismic events is now available! For a shorter summary of this work, see our AISTATS paper.
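
The real system works with raw waveforms and a global station network, but the basic inference pattern can be shown in toy form. The sketch below is not the thesis model: it is a made-up one-dimensional example (all station positions, wave speed, and noise levels are invented) that infers a single event's location and origin time from noisy arrival times by gridding a Gaussian-likelihood posterior.

```python
import numpy as np

# Toy illustration only: one event in 1-D, three hypothetical stations.
rng = np.random.default_rng(0)

stations = np.array([0.0, 40.0, 100.0])   # station positions (km), made up
v = 6.0                                    # assumed wave speed (km/s)
true_x, true_t0, noise_std = 62.0, 5.0, 0.3

# Forward model: arrival time = origin time + travel time + noise.
arrivals = true_t0 + np.abs(stations - true_x) / v + rng.normal(0, noise_std, 3)

# Grid over candidate event locations and origin times.
xs = np.linspace(0, 100, 201)
t0s = np.linspace(0, 10, 101)
X, T0 = np.meshgrid(xs, t0s, indexing="ij")

# Gaussian likelihood of the observed arrivals under each candidate event;
# with a flat prior, the posterior is just the normalized likelihood.
pred = T0[..., None] + np.abs(stations - X[..., None]) / v
log_lik = -0.5 * np.sum((arrivals - pred) ** 2, axis=-1) / noise_std**2
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

ix, it = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: x = {xs[ix]:.1f} km, t0 = {t0s[it]:.2f} s")
```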

More generally, I'm interested in model-based machine learning: how can we build intelligent agents that understand the laws by which the world works and can exploit this knowledge to predict and plan in novel circumstances? Relevant topics include deep generative models, automated Bayesian inference and probabilistic programming, causal inference, and applications of ML in science and medicine.

I'm also very interested in ensuring that intelligent systems contribute to human flourishing and help us lead our best lives: this includes systems that want to understand and optimize for their users' values. An area that I haven't worked in, but would like to, is applying insights from artificial agents to better understand human purpose, relationships, and mental health. If you're thinking about any of these things, or would like to be, please get in touch!


Research

Selective conferences

Workshops and other lightly-refereed venues

Theses and other reports


Software

Note: please contact me if you plan to use any of this code! Some of it may be poorly documented or broken in its current form, but I'm glad to help figure out how it can be useful to you.

Elbow is a flexible framework for probabilistic programming, built atop TensorFlow. It's focused on modular construction of probabilistic models and variational posterior representations.
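
Elbow has its own modeling API; the snippet below is not that API, just a hedged sketch of the underlying idea it packages: write down a generative model, pair it with a variational posterior, and maximize a Monte Carlo estimate of the ELBO with reparameterized gradients in TensorFlow. The toy model and all names here are my own, for illustration only.

```python
import math
import numpy as np
import tensorflow as tf

# Toy model (not Elbow code): mu ~ Normal(0, 10); x_i ~ Normal(mu, 1), i = 1..50.
# Variational family: q(mu) = Normal(m, s), with s parameterized on the log scale.
data = tf.constant(np.random.default_rng(1).normal(3.0, 1.0, size=50), tf.float32)

m = tf.Variable(0.0)       # variational mean
log_s = tf.Variable(0.0)   # log of the variational standard deviation
opt = tf.keras.optimizers.Adam(0.05)

def log_normal(x, mean, std):
    # Elementwise log-density of Normal(mean, std) evaluated at x.
    return -0.5 * math.log(2 * math.pi) - tf.math.log(std) - 0.5 * ((x - mean) / std) ** 2

for step in range(500):
    with tf.GradientTape() as tape:
        s = tf.exp(log_s)
        eps = tf.random.normal([16])     # reparameterization: mu = m + s * eps
        mu = m + s * eps
        log_joint = (log_normal(mu, 0.0, 10.0) +
                     tf.reduce_sum(log_normal(data[None, :], mu[:, None], 1.0), axis=-1))
        elbo = tf.reduce_mean(log_joint - log_normal(mu, m, s))  # Monte Carlo ELBO estimate
        loss = -elbo
    grads = tape.gradient(loss, [m, log_s])
    opt.apply_gradients(zip(grads, [m, log_s]))

print("q(mu) is approximately Normal(%.2f, %.2f)" % (float(m), float(tf.exp(log_s))))
```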

Matrizer is an optimizing compiler for linear algebra expressions: it tries to infer matrix properties and rewrite computations for efficient and numerically stable execution. New: try the interactive web interface!
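
To illustrate the kind of rewrite it performs (this is plain NumPy, not Matrizer's input language): replace an explicit matrix inverse with a linear solve, and reassociate a matrix chain so that the cheap matrix-vector products run first.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
A = A @ A.T + 500 * np.eye(500)     # symmetric positive definite
B = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

# Rewrite 1: never form an explicit inverse; solve the linear system instead.
x_naive = np.linalg.inv(A) @ b      # slower and less numerically stable
x_better = np.linalg.solve(A, b)    # what the rewritten expression computes

# Rewrite 2: reassociate so matrix-vector products come before matrix-matrix ones.
y_naive = (A @ B) @ b               # one O(n^3) matrix-matrix product
y_better = A @ (B @ b)              # two O(n^2) matrix-vector products

print(np.allclose(x_naive, x_better), np.allclose(y_naive, y_better))
```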

The TreeGP package implements Gaussian process regression in Python, with efficient posterior calculations via cover trees, as described in our UAI paper above.
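
For reference, the quantity being computed is the standard GP posterior; the dense NumPy sketch below (not the TreeGP API) spells it out, and the package's contribution is evaluating these posteriors efficiently using tree structure rather than dense linear algebra.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between two sets of 1-D inputs.
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=40)                 # training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(40)   # noisy observations
Xstar = np.linspace(-3, 3, 100)                 # test inputs

noise_var = 0.1 ** 2
K = rbf(X, X) + noise_var * np.eye(40)
Ks = rbf(X, Xstar)

# Standard GP posterior: mean = Ks^T K^{-1} y, var = k(x*, x*) - Ks^T K^{-1} Ks.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = rbf(Xstar, Xstar).diagonal() - np.sum(v * v, axis=0)

print("rmse vs. sin(x):", np.sqrt(np.mean((mean - np.sin(Xstar)) ** 2)))
```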


Teaching

I'm not currently teaching. Courses I've TA'd in the past:


Personal

If you're a Williams or Berkeley undergrad thinking about a career in AI, or in particular about applying to CS grad schools, feel free to get in touch; I'm more than happy to talk about my experience with the process!

I've started a blog for writing on CS and non-CS topics; I'm not sure yet how often it'll be updated.