Representing States & Spaces, CCN2019 Tutorial by Kim & Tim

Tim Behrens and I put together a CCN 2019 Tutorial on “Representing States and Spaces.” It was a blast: a whirlwind of RL, hippocampal/entorhinal data, philosophizing about representation learning theories, predictive representations, dimensionality reduction, generative models, factorization theories, and a four-way fight for the soul of Psychology (Thorndike / Skinner / Tolman / Harlow). Thanks to all who made it through the FOUR AND A HALF HOUR journey.

Slides are linked here: stachenfeld_behrens_ccn_tutorial_13sep2019
If you notice anything missing or incorrect, please message me (email: stachenfeld[at]google[dot]com).

Some informal but very nice notes from the session were posted by Rob Gulli here:

A Jupyter notebook with code for trying some of this stuff out is available on the GitHub page for Dartmouth’s MIND 2019 Summer School:
While you’re there, check out some of the other resources available on the page too! Lots of stuff for people interested in cognitive maps.


The hippocampus as a predictive map

We published a paper on a theory of predictive representations in the hippocampus and entorhinal cortex, and how these representations help the hippocampus support quick and flexible planning. It is a pretty good paper in a lot of ways, so if any of those words appeal to you, you should check it out! Here is a link to the full text without a paywall (I think):

The hippocampus as a predictive map.
KL Stachenfeld, MM Botvinick, SJ Gershman.
Nature Neuroscience (2017).
View on readcube.

We also wrote a blog post about it (with some help from the DeepMind comms team) if you want a more approachable version.

Basically, the main idea is that the hippocampus represents current position as a blur of present and future locations. This sacrifices some information about exactly where you are and exactly where certain sequences of actions will take you next. But it lets you very quickly compute stats like “on average, how much reward will I get in the future from this starting position?” without having to recompute everything from scratch when reward moves around. That stat ends up being a pretty major one for humans and animals seeking joy in life, and the need to compute it rapidly forms the normative backbone of the theory.
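To make that concrete, here is a minimal numeric sketch of the successor representation (SR) on a toy 5-state chain. This is my own illustrative example with made-up parameters, not code from the paper: the predictive matrix M is computed once from the transition structure, and value under any new reward layout is just a matrix-vector product.

```python
import numpy as np

n = 5        # states on a small chain
gamma = 0.9  # discount factor (illustrative choice)

# Random-walk transition matrix on the chain, reflecting at the ends.
T = np.zeros((n, n))
for s in range(n):
    if s > 0:
        T[s, s - 1] += 0.5
    if s < n - 1:
        T[s, s + 1] += 0.5
T[0, 0] += 0.5
T[n - 1, n - 1] += 0.5

# SR: M[s, s'] = expected discounted future occupancy of s' starting from s.
M = np.linalg.inv(np.eye(n) - gamma * T)

# When reward moves, only this cheap matrix-vector product needs redoing;
# M itself does not have to be re-learned.
R = np.zeros(n)
R[-1] = 1.0   # put reward at the last state
V = M @ R     # expected discounted future reward from each state
```

With reward at the end of the chain, V increases toward the rewarded state, and moving the reward only requires changing R.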

Successor representation of a linear track
A “predictive representation” of location for a hypothetical well-trained rat on a track. When the animal is at the start of the track (top), cells encoding upcoming locations will start firing. As the animal moves along the track (middle), the cells encoding locations behind the animal will stop firing and those encoding upcoming locations will remain active. This pattern of firing produces a backward-skewed place field (bottom).
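The backward skew in the figure falls right out of the SR math. Here is a toy sketch (assumed track length and discount, not the paper’s simulation): on a track the animal always runs rightward, so a cell encoding a given state fires at all positions before that state, where the state is still “predicted,” and not after the animal has passed it.

```python
import numpy as np

n, gamma = 10, 0.8       # hypothetical 10-state track, illustrative discount
T = np.zeros((n, n))
for s in range(n - 1):
    T[s, s + 1] = 1.0    # animal always steps rightward
T[n - 1, n - 1] = 1.0    # stays put at the end of the track

M = np.linalg.inv(np.eye(n) - gamma * T)

# Column j of M is the "place field" of the cell encoding state j:
# its firing as a function of the animal's current position.
j = 6
field = M[:, j]
# The field ramps up at positions before state j (firing grows as the
# predicted state gets closer) and is zero after the animal passes it:
# a backward-skewed place field.
```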

We also talk about representation compression in entorhinal cortex, and have a model of entorhinal cortex performing dimensionality reduction on the hippocampal representations. This means entorhinal cortex has to get rid of some of the information in hippocampus, forcing it to keep only the most important information. Dimensionality reduction has a lot of potential benefits, and in the paper we focus on how it can smooth and stabilize representations in hippocampus.
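As a loose illustration of what that compression looks like (again a toy sketch, not the paper’s entorhinal model): eigendecomposing the SR for a random walk on a ring yields smooth, periodic low-dimensional components, and keeping only the top few gives a smoothed, low-rank version of the representation.

```python
import numpy as np

n, gamma = 20, 0.9
# Symmetric random walk on a ring of n states.
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

M = np.linalg.inv(np.eye(n) - gamma * T)
M = 0.5 * (M + M.T)  # symmetrize against floating-point noise

# Eigendecomposition: for this symmetric walk the components are
# smooth, periodic functions on the ring (discrete Fourier modes).
vals, vecs = np.linalg.eigh(M)
order = np.argsort(vals)[::-1]   # descending eigenvalue
k = 4
top = vecs[:, order[:k]]         # the k most important components

# Rank-k reconstruction: fine detail is discarded, large-scale
# (low-frequency) structure of M is preserved.
M_low = (top * vals[order[:k]]) @ top.T
```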

And the model fits a bunch of data, makes some predictions, etc.