Kimberly Stachenfeld


About

I am a Senior Research Scientist at DeepMind, where I use insights from neuroscience to build machines that learn structure in the world that can support efficient learning and planning.

Compared to the current state of the art in Machine Learning, humans and animals are highly flexible and adaptive learners. Typical deep learning models require vast amounts of experience and remain rigidly specialized for the particular task on which they were trained. Animals, by contrast, constantly stash away information for unknown future problems and extract patterns from past experience. The ability to learn and represent “structure” (e.g. how different entities relate to each other) permits flexible behaviors in which knowledge is reused or recombined in novel ways. In Machine Learning, this problem is referred to as “Representation Learning,” and it provides a useful formalism for thinking about how structure should be represented in the brain to best support downstream learning processes.

On the neuroscience side, this perspective is useful for understanding neural representations not just in terms of how they are constructed by the brain, but in terms of the demands of downstream problem-solving. In my research, we have applied this basic idea to understand properties of neuronal populations in the hippocampus and entorhinal cortex. On the ML side, how the brain represents knowledge, and the theories developed to describe it, are useful for building machines that learn more efficiently. We have applied these ideas to develop efficient Graph Neural Networks and learned simulators. Some of the computational tools I use in my research are Graph Neural Networks, Spectral Graph Theory, and Representation Learning for RL.

CV • Google Scholar page