Thanks to all of our speakers for contributing such brilliant talks and to the researchers who participated in really wonderful discussions.
Many of our speakers shared the slides from their talks (Drive folder). If you don’t see the slides you’re looking for, they probably contained unpublished work, so stay tuned for upcoming publications!
— Kim, Bas, Kevin, Roozbeh
Workshop on Model-Based Cognition: Hierarchical Reasoning and Sequential Planning
Workshop program: Day 1, Day 2
Where: Peak 16, Beaver Run Resort, Breckenridge, Colorado
When: 5-6 March 2018 (Workshop Days 1&2)
Organizers: Kevin Miller, Kim Stachenfeld, Bas van Opheusden, Roozbeh Kiani
Group discussions will occur at the end of each workshop day from 6:45-7:30pm.
Decision making in a complex natural environment requires humans and animals to construct internal models of the world around them. These internal models support a wide variety of flexible behaviors, ranging from relatively simple learning procedures (e.g. outcome revaluation) to decision-making in complex, elaborately structured domains (e.g. games like chess). Although there is a growing consensus that humans and animals rely on models of their environment for goal-oriented behavior, it has proven challenging to draft theories and design experiments to study model-based reasoning and planning in the brain.
Consequently, many questions remain about the mechanisms by which models of the environment are built, revised, and deployed during decision-making behavior, questions our workshop will seek to address. A guiding principle of our workshop will be to consider decision-making in natural environments as a hierarchy of inference processes that generate a sequence of actions or action plans to attain a goal. In this hierarchical framework, a high-level strategy guides lower-level choices, and the outcome of those choices informs the strategy. Choosing a good strategy requires an internal model of the world that is rarely explicitly known and must therefore be inferred from past experience. A complete understanding of this framework must explain how models of the environment are learned, how suitable decision strategies are selected and executed based on such models, how these strategies guide ongoing choices, and how these processes adapt to improve performance in dynamic environments.
Our workshop builds on this framework and aims to provide new avenues to overcome existing challenges. We will identify points of connection across various perspectives from animal physiology, human neuroscience, and machine learning, and we will provide a forum for discussing recent advances in the field and their theoretical and conceptual implications.
Thomas Akam, Postdoctoral Fellow, Champalimaud Institute and Oxford University
Studying model-based cognition in rodents using multi-step decision tasks
Rani Moran, Postdoctoral Research Associate, Max Planck Centre for Computational Psychiatry and Aging Research, UCL
Interaction between model-based and model-free systems in human reinforcement learning
Webpage • Google Scholar
David Reichert, Research Scientist, DeepMind
Deep reinforcement learning with imagination-augmented agents
Alireza Soltani, Assistant Professor, Dartmouth College
Model adoption through hierarchical decision making and learning
Bas van Opheusden, Graduate Student, Princeton University and New York University
Expertise in sequential decision-making relies on attention and tree search
About the Organizers
- Kevin Miller recently completed his PhD at Princeton University with Matthew Botvinick and Carlos Brody. He is interested in the algorithmic and neural mechanisms of human and animal decision-making, broadly construed. His recent work focuses on using the tools of rodent neuroscience to understand the mechanisms of model-based planning. Webpage • Google Scholar
- Bas van Opheusden is a graduate student in neuroscience with Wei Ji Ma and Nathaniel Daw at New York University. He is broadly interested in human decision-making in complex sequential environments like board and video games: how people adopt sophisticated strategies with little to no training, and how they adapt those strategies to specific opponents. Google Scholar
- Kimberly Stachenfeld is a research scientist at DeepMind and is pursuing her PhD at the Princeton Neuroscience Institute with Matthew Botvinick. She is interested in the intersection of machine learning and animal learning, and her recent research centers on theoretical perspectives on learning and planning in the hippocampus. Webpage • Google Scholar
- Roozbeh Kiani is an assistant professor in the Center for Neural Science at NYU. His lab focuses on understanding the neural mechanisms by which sensory and mnemonic information is used to guide behavior in complex environments. Webpage • Google Scholar
For more information on the COSYNE 2018 Conference and Workshops, including information on registration and applying for Travel Grants, visit www.cosyne.org.