My research is on reinforcement learning in brains and machines. I am particularly interested in how agents learn representations that support generalization, enabling flexible, data-efficient RL.

Some background: Deep RL is a highly general and powerfully expressive framework, with impressive victories to its name (Atari, Go, more Go, and data-center cooling, to list a few of my employer's faves). Nevertheless, deep RL has a big problem with data efficiency. A massive amount of data is required to train a deep RL agent, and the agent will remain highly specialized for the task it was trained to perform. This is because agents trained with reinforcement alone do not encode anything about the structure of the environment unless its usefulness is immediately apparent. When reward is sparse, as is usually the case, agents learn slowly and discard information that could have been useful later on.

Human and animal brains are comparatively frugal. We are constantly stashing information that might be useful for unknowable future problems and identifying patterns in the information we store. For instance, if you spot sugar while looking for salt to season your eggs, you can still recall the steps that led you to the sugar when you later want to sweeten your coffee, and the lousiest human chef understands that stirring coffee is fundamentally similar to the motion of whisking eggs and can recycle shared machinery across these tasks. When we learn about the similarity structure of the world even before it is obvious what that structure might be useful for, we prepare ourselves to plan rapidly in the future by analogy with relevant episodes from the past. Endowing machines with this capability remains largely an open problem, which is one reason we solved Go before we solved amateur cooking.

So I study what kinds of representations in the brain support these analogies and work on expressing them in a mathematical form for machine learning purposes. My doctoral research investigated specifically how the hippocampus and entorhinal cortex jointly support this type of flexible learning and planning. I am now working at DeepMind, where my goals are to learn more about the nature and function of representations in the brain, and to port this representational capacity over to machines. I have been particularly interested in training models to learn physical simulation, where there are diverse kinds of structure governing the dynamics, and rich inductive biases to be discovered. Some of the tools I use are Graph Neural Networks (for combining structured relational reasoning with the expressiveness of deep learning), Spectral Graph Theory (for representing geometry in graphs), and RL representation learning (for motivating representations of structure in the context of end-to-end learning).
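To give a flavor of the spectral graph theory piece: the eigenvectors of a graph's Laplacian provide a low-dimensional embedding in which nearby states get similar coordinates, which is one standard way to represent geometry in graphs. Below is a minimal illustrative sketch in numpy (a toy example of the general technique, not code from any specific project of mine); the ring graph and function name are my own choices for the demo.

```python
import numpy as np

def laplacian_eigenmaps(adjacency, k):
    """Return the k smallest eigenvalues and eigenvectors of the graph Laplacian.

    The eigenvectors form a spectral embedding that reflects the graph's
    geometry: states that are close in the graph receive similar coordinates.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency  # combinatorial graph Laplacian L = D - A
    eigenvalues, eigenvectors = np.linalg.eigh(laplacian)  # ascending order
    return eigenvalues[:k], eigenvectors[:, :k]

# Toy example: a ring of 8 states, like a circular corridor in a gridworld.
n = 8
adjacency = np.zeros((n, n))
for i in range(n):
    adjacency[i, (i + 1) % n] = 1
    adjacency[(i + 1) % n, i] = 1

vals, vecs = laplacian_eigenmaps(adjacency, k=3)
# The smallest eigenvalue is 0 with a constant eigenvector; the next
# eigenvectors are sinusoids over the ring, a graph analogue of Fourier modes.
```

On a ring the embedding recovers the circular geometry from connectivity alone, which is the sense in which these representations encode structure before any particular task demands it.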