Sensorimotor experience remaps visual input to a heading-direction network.

December 1, 2019

Fisher YE, Lu J, D'Alessandro I, Wilson RI.

In the Drosophila brain, 'compass' neurons track the orientation of the body and head (the fly's heading) during navigation. In the absence of visual cues, the compass neuron network estimates heading by integrating self-movement signals over time. When a visual cue is present, the network's estimate is more accurate. Visual inputs to compass neurons are thought to originate from inhibitory neurons called R neurons (also known as ring neurons); the receptive fields of R neurons tile visual space. The axon of each R neuron overlaps with the dendrites of every compass neuron, raising the question of how visual cues are integrated into the compass. Here, using in vivo whole-cell recordings, we show that a visual cue can evoke synaptic inhibition in compass neurons and that R neurons mediate this inhibition. Each compass neuron is inhibited only by specific visual cue positions, indicating that many potential connections from R neurons onto compass neurons are actually weak or silent. We also show that the pattern of visually evoked inhibition can reorganize over minutes as the fly explores an altered virtual-reality environment. Using ensemble calcium imaging, we demonstrate that this reorganization causes persistent changes in the compass coordinate frame. Taken together, our data suggest a model in which correlated pre- and postsynaptic activity triggers associative long-term synaptic depression of visually evoked inhibition in compass neurons. Our findings provide evidence for the theoretical proposal that associative plasticity of sensory inputs, when combined with attractor dynamics, can reconcile self-movement information with changing external cues to generate a coherent sense of direction.
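The plasticity rule suggested by this abstract can be illustrated with a toy rate model. This is a minimal sketch, not the authors' model: the network sizes, tuning curves, and learning rate (`ETA`) are hypothetical, and the only feature taken from the abstract is the rule itself — anatomically all-to-all inhibitory weights from R neurons onto compass neurons that undergo associative depression wherever pre- and postsynaptic activity coincide.

```python
import numpy as np

N_COMPASS, N_RING = 16, 16  # hypothetical network sizes

# Inhibitory weights from each R (ring) neuron onto each compass neuron.
# Anatomically the connections are all-to-all; start them uniform.
W = np.ones((N_COMPASS, N_RING))

PREFS = np.linspace(0.0, 2.0 * np.pi, N_RING, endpoint=False)

def ring_activity(cue_az):
    """R-neuron responses: receptive fields tiling visual azimuth."""
    return np.exp(np.cos(cue_az - PREFS) - 1.0)

def compass_bump(heading):
    """Localized bump of compass-neuron activity at the current heading."""
    return np.exp(np.cos(heading - PREFS) - 1.0)

ETA = 0.1  # depression rate (hypothetical)

def update(W, heading, cue_az):
    """Associative depression: inhibition weakens wherever presynaptic
    (R-neuron) and postsynaptic (compass) activity coincide."""
    W = W - ETA * np.outer(compass_bump(heading), ring_activity(cue_az))
    return np.clip(W, 0.0, 1.0)

# Repeatedly pair a cue at azimuth 0 with a heading bump at pi/2:
for _ in range(10):
    W = update(W, heading=np.pi / 2, cue_az=0.0)

# Compass neurons near pi/2 are now least inhibited by that cue, so the
# cue can pin the heading bump there: a learned cue-to-heading mapping.
print(W[:, 0].round(2))
```

After pairing, the weight matrix is no longer uniform: the cue's R neuron most weakly inhibits the compass neurons that were co-active with it, which is how depression of inhibition can carve a specific visual-to-heading map out of all-to-all anatomy.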


Single-Cell Profiles of Retinal Ganglion Cells Differing in Resilience to Injury Reveal Neuroprotective Genes

November 26, 2019

Nicholas M. Tran, Karthik Shekhar, Irene E. Whitney, Anne Jacobi, Inbal Benhar, Guosong Hong, Wenjun Yan, Xian Adiconis, McKinzie E. Arnold, Jung Min Lee, Joshua Z. Levin, Dingchang Lin, Chen Wang, Charles M. Lieber, Aviv Regev, Zhigang He, Joshua R. Sanes

Neuronal injury is characterized by the selective death of specific types of neurons, but the reasons are poorly understood. In particular, Joshua Sanes, Zhigang He, and colleagues earlier found that different retinal ganglion cell (RGC) types differ in their resilience to axonal damage. Now, by sequencing the genes expressed in tens of thousands of individual RGCs, Sanes and colleagues (Tran et al., Neuron 2019) correlated differential gene expression with injury response to systematically identify neuroprotective genes. By manipulating several of these genes, they point to potential therapeutic targets.
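The screening logic described here — correlating per-type gene expression with per-type survival after injury — can be sketched in a few lines. This is a toy illustration on synthetic data, not the paper's pipeline: the matrix sizes, the planted "protective" genes, and the correlation cutoff are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (hypothetical): expression of G genes across T RGC types,
# plus each type's measured survival fraction after axonal injury.
T, G = 40, 200
expression = rng.random((T, G))
survival = rng.random(T)

# Plant three "neuroprotective" genes whose expression tracks survival.
for g in range(3):
    expression[:, g] = survival + 0.05 * rng.standard_normal(T)

# Screen: Pearson correlation of each gene's expression with survival.
x = survival - survival.mean()
y = expression - expression.mean(axis=0)
r = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y, axis=0))

# Genes most positively correlated with survival are candidates.
candidates = np.argsort(r)[::-1][:5]
print(candidates)
```

On this synthetic data the three planted genes dominate the ranking; in the real study, candidates surfaced this way were then tested by direct manipulation.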


Elements of a stochastic 3D prediction engine in larval zebrafish prey capture.

November 26, 2019

Bolton AD, Haesemeyer M, Jordi J, Schaechtle U, Saad FA, Mansinghka VK, Tenenbaum JB, Engert F.

The computational principles underlying predictive capabilities in animals are poorly understood. Here, we wondered whether predictive models mediating prey capture could be reduced to a simple set of sensorimotor rules performed by a primitive organism. For this task, we chose the larval zebrafish, a tractable vertebrate that pursues and captures swimming microbes. Using a novel naturalistic 3D setup, we show that the zebrafish combines position and velocity perception to construct a future positional estimate of its prey, indicating an ability to project trajectories forward in time. Importantly, the stochasticity in the fish's sensorimotor transformations provides a considerable advantage over equivalent noise-free strategies. This surprising result coalesces with recent findings that illustrate the benefits of biological stochasticity to adaptive behavior. In sum, our study reveals that zebrafish are equipped with a recursive prey capture algorithm, built up from simple stochastic rules, that embodies an implicit predictive model of the world.
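The prediction described in this abstract — combining prey position and velocity into a future positional estimate, executed through noisy sensorimotor rules — can be caricatured in a short simulation. This is a hedged sketch, not the authors' algorithm: the bout interval `DT`, the bout-length cap, and the Gaussian directional noise are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
DT = 0.2  # assumed interval between orienting bouts (s)

def predict_prey(pos, vel, dt=DT):
    """Linear forward projection: aim at where the prey will be,
    not where it currently is (position + velocity integration)."""
    return pos + vel * dt

def choose_bout(fish_pos, prey_pos, prey_vel, noise_sd=0.1):
    """Aim a movement bout at the predicted prey position, with
    stochastic noise on the executed direction (hypothetical)."""
    aim = predict_prey(prey_pos, prey_vel) - fish_pos
    angle = np.arctan2(aim[1], aim[0]) + rng.normal(0.0, noise_sd)
    step = min(np.linalg.norm(aim), 0.5)  # bounded bout length
    return fish_pos + step * np.array([np.cos(angle), np.sin(angle)])

# Simulate a short pursuit of prey drifting at constant velocity:
fish = np.array([0.0, 0.0])
prey, vel = np.array([2.0, 1.0]), np.array([-0.3, 0.1])
for _ in range(30):
    fish = choose_bout(fish, prey, vel)
    prey = prey + vel * DT

print(np.linalg.norm(fish - prey))
```

Because each bout targets the projected rather than the current prey position, the fish intercepts a moving target; a purely reactive version of the same loop (aiming at `prey_pos` directly) lags behind it.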


Binary Fate Choice between Closely Related Interneuronal Types Is Determined by a Fezf1-Dependent Postmitotic Transcriptional Switch.

November 25, 2019

Peng YR, James RE, Yan W, Kay JN, Kolodkin AL, Sanes JR.

Many neuronal types occur as pairs that are similar in most respects but differ in a key feature. In some pairs of retinal neurons, called paramorphic, one member responds to increases and the other to decreases in luminance (ON and OFF responses). Here, we focused on one such pair, starburst amacrine cells (SACs), to explore how closely related neuronal types diversify. We find that ON and OFF SACs are transcriptionally distinct prior to their segregation, dendritic outgrowth, and synapse formation. The transcriptional repressor Fezf1 is selectively expressed by postmitotic ON SACs and promotes the ON fate and gene expression program while repressing the OFF fate and program. The atypical Rho GTPase Rnd3 is selectively expressed by OFF SACs and regulates their migration but is repressed by Fezf1 in ON SACs, enabling differential positioning of the two types. These results define a transcriptional program that controls diversification of a paramorphic pair.


The core episodic simulation network dissociates as a function of subjective experience and objective content.

November 16, 2019

Thakral PP, Madore KP, Schacter DL.

Episodic simulation - the mental construction of a possible future event - has been consistently associated with enhanced activity in a set of neural regions referred to as the core network. In the current functional neuroimaging study, we assessed whether members of the core network are differentially associated with the subjective experience of future events (i.e., vividness) versus the objective content comprising those events (i.e., the amount of episodic detail). During scanning, participants imagined future events in response to object cues. On each trial, participants rated the subjective vividness associated with each future event. Participants completed a post-scan interview in which they viewed each object cue from the scanning session and verbally reported whatever they had thought about. For imagined events, we quantified the number of episodic or internal details in accordance with the Autobiographical Interview (i.e., who, what, when, and where details of each central event). To test whether core network regions are differentially associated with subjective experience or objective episodic content, imagined future events were sorted as a function of their rated vividness or the amount of episodic detail. Univariate analyses revealed that some regions of the core network were uniquely sensitive to the vividness of imagined future events, including the hippocampus (i.e., high > low vividness), whereas other regions, such as the lateral parietal cortex, were sensitive to the amount of episodic detail in the event (i.e., high > low episodic detail). The present results indicate that members of the core network support distinct episodic simulation-related processes.