Beyond Simple Choices: Computational and Neuronal Mechanisms for Complex Spatial Behaviours

Decisions can arise from multiple systems in the brain, which can be dissociated behaviorally, neurally, and computationally. Much progress has been made in our understanding of the “Pavlovian” and “habit” systems, but the neural mechanisms underlying the “planning” system remain elusive. An emerging model system for the study of planning at the neurobiological level is the recording and manipulation of “place cells” in the hippocampus of rats solving navigation problems. Sequences of place cell activity can be decoded to reveal signatures of a planning process: possible trajectories are represented serially at decision points and following reward receipt, and include constructed, never-experienced trajectories. However, computational work clearly shows that serial, brute-force search will fail in complex environments. Accordingly, many efficient algorithms for approximate search have been developed in artificial intelligence (AI) and reinforcement learning (RL), but the algorithms used by the brain to plan in complex environments are unknown.
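
To make the scale of this problem concrete, the sketch below (illustrative only, not part of the proposal) counts the trajectories that a serial, brute-force search would have to evaluate in a hypothetical maze with a modest branching factor, and contrasts this with the fixed budget of sampled rollouts used by approximate search methods in AI and RL. The branching factor, planning horizon, and rollout budget are assumptions chosen for illustration.

```python
# A minimal sketch (illustrative assumptions only): compares the number of
# trajectories a serial, brute-force search must evaluate against a fixed
# budget of sampled rollouts, as used by approximate search methods.
import random

BRANCHING = 3     # choices available at each maze junction (assumed)
HORIZON = 12      # planning depth in junctions (assumed)
N_ROLLOUTS = 100  # sampling budget for the approximate search (assumed)

def exhaustive_count(branching, horizon):
    """Distinct trajectories that exhaustive serial search must consider."""
    return branching ** horizon

def sampled_rollouts(branching, horizon, n_rollouts, rng):
    """Approximate search: evaluate only a fixed budget of random trajectories."""
    return [[rng.randrange(branching) for _ in range(horizon)]
            for _ in range(n_rollouts)]

if __name__ == "__main__":
    rng = random.Random(0)
    full = exhaustive_count(BRANCHING, HORIZON)   # 3**12 = 531,441 paths
    sampled = sampled_rollouts(BRANCHING, HORIZON, N_ROLLOUTS, rng)
    print(f"brute-force search must serially evaluate {full} trajectories")
    print(f"a rollout-based approximation evaluates only {len(sampled)}")
```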

We propose to elucidate the neural mechanisms of planning in complex environments through the recording, manipulation, and simulation of hippocampal activity, afforded by interdependent approaches rooted in neurobiology, engineering, and computational modeling. We will record place cell activity on-line during planning and off-line during consolidation periods, on tasks that are sufficiently complex to require approximate planning methods and that satisfy established behavioral criteria for planning. AI- and RL-derived simulations will be applied to these tasks to generate contrasting predictions about the content and frequency of hippocampal sequences, and to provide a computational basis for targeted manipulations.
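
As one illustration of how an RL-derived simulation can yield such predictions, the sketch below runs a standard Dyna-Q agent (the classical Dyna architecture, not the project's actual model) on a hypothetical five-state linear track and tallies which state transitions it reactivates during its simulated-experience ("planning") updates; these tallies stand in for predicted sequence content and frequency. The task layout and all learning parameters are illustrative assumptions.

```python
# A minimal Dyna-Q sketch (standard Dyna architecture, not the project's model):
# simulated experience during planning predicts which transitions should be
# reactivated, and how often. Task layout and parameters are assumptions.
import random
from collections import defaultdict, Counter

N_STATES = 5          # linear track; the last state is rewarded (assumed)
ACTIONS = (-1, +1)    # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
PLANNING_STEPS = 20   # simulated transitions replayed per real step (assumed)

def step(state, action):
    """Deterministic transition on the track, reward on reaching the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def run(n_episodes=30, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)   # action values, keyed by (state, action)
    model = {}               # learned model: (state, action) -> (next state, reward)
    replay = Counter()       # predicted reactivation count per (state, next state)

    for _ in range(n_episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy choice with random tie-breaking
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: (Q[(s, x)], rng.random()))
            s2, r = step(s, a)
            Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
            model[(s, a)] = (s2, r)

            # planning: replay remembered transitions and update off-line
            for _ in range(PLANNING_STEPS):
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, x)] for x in ACTIONS)
                                        - Q[(ps, pa)])
                replay[(ps, ps2)] += 1
            s = s2
    return replay

if __name__ == "__main__":
    for transition, n in run().most_common(5):
        print(f"replayed transition {transition}: {n} times")
```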

To test the impact of hippocampal sequences on associated circuits and behavior, we will develop real-time methods for sequence decoding, enabling the disruption and reinforcement of specific sequences with electrical and optogenetic tools. The combination of these approaches permits the disentangling of interactions between fast, “on-line” planning and the slower “structure learning” of appropriate task representations. The proposed work will reveal the real-time dynamics and associated computational principles that underlie navigational planning in complex environments, and will uniquely link the neural circuit, representational, and computational levels of a model planning system.
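
For concreteness, the sketch below implements the standard memoryless Bayesian (Poisson) decoder widely used to read out position from place cell spike counts, together with a toy criterion for judging whether a decoded trajectory matches a target sequence, the kind of test a real-time closed-loop system would apply before triggering electrical or optogenetic intervention. The tuning curves, window length, and trigger threshold are illustrative assumptions, not the proposal's actual pipeline.

```python
# A minimal sketch of memoryless Bayesian (Poisson) position decoding from
# place cell spike counts, plus a toy closed-loop trigger test. Tuning curves,
# window length, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_CELLS, N_POS = 50, 50   # place cells and linearized position bins (assumed)
TAU = 0.025               # decoding window in seconds (assumed)

# Gaussian place fields tiling the track; rates in Hz per position bin
centers = np.linspace(0, N_POS - 1, N_CELLS)
positions = np.arange(N_POS)
tuning = 1.0 + 25.0 * np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / 3.0) ** 2)

def decode(spike_counts):
    """Posterior over position bins for one window of spike counts."""
    log_post = spike_counts @ np.log(tuning) - TAU * tuning.sum(axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

def simulate_window(true_pos):
    """Poisson spike counts for one decoding window at a given position bin."""
    return rng.poisson(tuning[:, true_pos] * TAU)

if __name__ == "__main__":
    # Decode a simulated trajectory and compare it with a target sequence
    target = np.arange(10, 20)     # hypothetical trajectory of interest
    decoded = np.array([int(np.argmax(decode(simulate_window(p)))) for p in target])
    match = float(np.mean(np.abs(decoded - target) <= 2))
    print("decoded bins:", decoded.tolist())
    if match >= 0.7:               # stand-in for a real-time trigger criterion
        print("decoded sequence matches target: trigger closed-loop intervention")
```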

Project Funding: 
Funding Source: 
International Human Frontier Science Program Organization (HFSP)
Project Timeframe: 
01 Nov 2014
Group & ISTC Labs: 
Status: 
Ongoing
Project ID: 
RGY0088/2014