COSYNE 2018 Workshop

Thanks to all of our speakers for contributing such brilliant talks and to the researchers who participated in really wonderful discussions.

Many of our speakers shared the slides from their talks (Drive folder). If you don’t see the slides you’re looking for, they probably contained unpublished work, so stay tuned for upcoming publications!
— Kim, Bas, Kevin, Roozbeh

Workshop on Model-Based Cognition: Hierarchical Reasoning and Sequential Planning

Workshop program: Day 1, Day 2
Where: Peak 16, Beaver Run Resort, Breckenridge, Colorado
5-6 March 2018 (Workshop Days 1&2)
Kevin Miller, Kim Stachenfeld, Bas van Opheusden, Roozbeh Kiani
Group discussions will occur at the end of each workshop day from 6:45-7:30pm.

Decision making in a complex natural environment requires humans and animals to construct internal models of the world around them. These internal models support a wide variety of flexible behaviors, ranging from relatively simple learning procedures (e.g. outcome revaluation) to decision-making in complex, elaborately structured domains (e.g. games like chess). Although there is a growing consensus that humans and animals rely on models of their environment for goal-oriented behavior, it has proven challenging to draft theories and design experiments to study model-based reasoning and planning in the brain.

Consequently, many questions remain about the mechanisms by which models of the environment are built, revised, and deployed during decision-making, and our workshop will seek to address them. A guiding principle of our workshop will be to consider decision-making in natural environments as a hierarchy of inference processes that generate a sequence of actions or action plans to attain a goal. In this hierarchical framework, a high-level strategy guides lower-level choices, and the outcomes of those choices in turn inform the strategy. Choosing a good strategy requires an internal model of the world that is rarely explicitly known and must therefore be inferred from past experience. A complete understanding of this framework must answer how models of the environment are learned, how suitable decision strategies are selected and executed based on such models, how these strategies guide ongoing choices, and how these processes adapt to improve performance in dynamic environments.
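The loop sketched above (a high-level strategy guiding low-level choices, with outcomes feeding back into a learned model) can be caricatured in a few lines of code. Everything here is an invented toy, assuming a two-armed bandit environment with hidden reward probabilities; it is not any speaker's model:

```python
import random

random.seed(0)

# Hypothetical environment: two actions with hidden reward probabilities
# that the agent must infer from experience.
TRUE_REWARD_PROB = {"left": 0.8, "right": 0.2}

def pull(action):
    """Environment step: reward 1 with the arm's (hidden) probability."""
    return 1 if random.random() < TRUE_REWARD_PROB[action] else 0

# Learned world model: reward/trial counts per action (Laplace prior).
counts = {"left": [1, 1], "right": [1, 1]}

def estimate(action):
    rewards, trials = counts[action]
    return rewards / trials

def choose_strategy(epsilon=0.1):
    """High level: pick a strategy (which arm to favor) from the model."""
    if random.random() < epsilon:        # occasional exploration
        return random.choice(["left", "right"])
    return max(counts, key=estimate)     # exploit the current world model

for _ in range(500):
    strategy = choose_strategy()         # high level guides the choice
    outcome = pull(strategy)             # low level acts and observes
    counts[strategy][0] += outcome       # outcome revises the model...
    counts[strategy][1] += 1             # ...which informs future strategy

print(round(estimate("left"), 2), round(estimate("right"), 2))
```

After a few hundred trials the learned estimates separate the two arms, illustrating (in a drastically simplified form) how outcomes at the lower level reshape the model that the higher level consults.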

Our workshop builds on this framework and aims to provide new avenues to overcome existing challenges. We will identify points of connection across various perspectives from animal physiology, human neuroscience, and machine learning, and we will provide a forum for discussing recent advances in the field and their theoretical and conceptual implications.

Thomas Akam, Postdoctoral Fellow, Champalimaud Institute and Oxford University
Studying model-based cognition in rodents using multi-step decision tasks
Google Scholar

Bruno Averbeck, Senior Investigator, NIMH
Bayesian and reinforcement learning models of reversal learning
Webpage • Google Scholar

David Foster, Associate Professor, University of California Berkeley 
Hippocampal sequences and learning
Webpage • Google Scholar

Stephanie Groman, Associate Research Scientist, Yale University
Model-free and model-based influences in addiction-like behaviors in rats
Webpage • Google Scholar

Sam Gershman, Assistant Professor, Harvard University
What is the model in model-based reinforcement learning?
Webpage • Google Scholar

Joshua Gold, Professor, University of Pennsylvania
A bias-variance trade-off in human inference
Webpage • ResearchGate

Jessica Hamrick, Research Scientist, DeepMind
Metareasoning and mental simulation in humans and artificial agents
Webpage • Google Scholar

Ben Hayden, Assistant Professor, University of Minnesota
Transformation of options to choices in economic choice
Webpage • Google Scholar

Roozbeh Kiani, Assistant Professor, NYU
Hierarchical decisions about choice and change of strategy
Webpage • Google Scholar

Rani Moran, Postdoctoral Research Associate, Max Planck Centre for Computational Psychiatry and Aging Research, UCL
Interaction between model-based and model-free systems in human reinforcement learning
Webpage • Google Scholar

David Reichert, Research Scientist, DeepMind
Deep reinforcement learning with imagination-augmented agents
Google Scholar

Geoffrey Schoenbaum, Senior Investigator, NIMH
Dopamine neurons respond to errors in the prediction of sensory features of expected rewards
Webpage • Google Scholar

Hyojung Seo, Assistant Professor, Yale University
Decision-making and reasoning in the prefrontal cortex
Webpage • Google Scholar

Alireza Soltani, Assistant Professor, Dartmouth College
Model adoption through hierarchical decision making and learning

Matthijs van der Meer, Assistant Professor, Dartmouth College
Reward revaluation biases hippocampal sequence content away from the preferred outcome
Webpage • Google Scholar

Bas van Opheusden, Graduate Student, Princeton University and New York University
Expertise in sequential decision-making relies on attention and tree search
Google Scholar

Xiaohong Wan, Professor, Beijing Normal University
Neural systems for decision-making and metacognition
Webpage • Google Scholar

Marco Wittmann, Postdoctoral fellow, University of Oxford
Multiple time-linked reward representations in anterior cingulate cortex
Webpage • Google Scholar

About the Organizers

  • Kevin Miller recently completed his PhD at Princeton University with Matthew Botvinick and Carlos Brody. He is interested in the algorithmic and neural mechanisms of human and animal decision-making, broadly construed. His recent work focuses on using the tools of rodent neuroscience to understand the mechanisms of model-based planning. Webpage • Google Scholar
  • Bas van Opheusden is a graduate student in neuroscience with Wei Ji Ma and Nathaniel Daw at New York University. He is broadly interested in how humans make decisions in complex sequential environments like board and video games, how they adopt sophisticated strategies with little-to-no training, and how they adapt their strategies to specific opponents. Google Scholar
  • Kimberly Stachenfeld is a research scientist at DeepMind and is pursuing her PhD at the Princeton Neuroscience Institute with Matthew Botvinick. She is interested in the intersection of machine learning and animal learning, and her recent research centers on theoretical perspectives on learning and planning in the hippocampus. Webpage • Google Scholar
  • Roozbeh Kiani is an assistant professor in the Center for Neural Science at NYU. His lab focuses on understanding the neural mechanisms by which sensory and mnemonic information is used to guide behavior in complex environments. Webpage • Google Scholar

For more information on the COSYNE 2018 Conference and Workshops, including registration and Travel Grant applications, visit the COSYNE website.
