Fruit fly larvae develop a simple, learned visual recognition system to determine membership in cooperative feeding groups; we propose to study the underlying neural circuit.
Drosophila larvae provide a powerful model for understanding complex neural coding. Much of the synapse-level connectome is available, along with genetic tools to modify and monitor activity in any single neuron. This makes the larva an unprecedented system in which to develop a systems-level model of neuronal coding.
The larval visual system, in contrast to that of the adult, is relatively simple. Nevertheless, larvae show complex visual behavior, including learned social recognition of other larvae. Only 12 photoreceptor cells project to an isolated neuropil where all of the visual computation likely occurs. Within this neuropil are seven projection neurons, which transmit information out, as well as three aminergic modulatory cells. Two local interneurons compute motion. Motion processing is learned within a critical period and is associated with growth of one class of primary input boutons and with changes in the local interneuron circuit. The local motion computation circuit consists of a reciprocally connected pair of one inhibitory and one excitatory neuron. Such a pair would be expected to oscillate, and the phase of this cycle may correspond to the visual perception of motion. The objective of this project is to measure this oscillation, build a model, and generate a framework for how such a circuit functions and is regulated.
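The expectation that a reciprocally connected excitatory-inhibitory pair oscillates can be made concrete with a linear stability sketch: when the Jacobian of the linearized E-I dynamics has a complex-conjugate pair of eigenvalues, the circuit rings at the corresponding frequency. The weights and time constants below are illustrative placeholders, not measurements from the larval circuit.

```python
import cmath
import math

def ei_eigenvalues(w_ee, w_ei, w_ie, tau_e=1.0, tau_i=1.0):
    """Eigenvalues of the linearized reciprocal E-I pair:
       tau_e * dE/dt = -E + w_ee*E - w_ei*I
       tau_i * dI/dt = -I + w_ie*E
    """
    a = (w_ee - 1.0) / tau_e   # dE'/dE
    b = -w_ei / tau_e          # dE'/dI
    c = w_ie / tau_i           # dI'/dE
    d = -1.0 / tau_i           # dI'/dI
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Illustrative weights: modest recurrent excitation, strong E<->I loop.
lam1, lam2 = ei_eigenvalues(w_ee=1.2, w_ei=2.0, w_ie=2.0)
oscillates = abs(lam1.imag) > 0          # complex pair -> oscillation
freq = abs(lam1.imag) / (2 * math.pi)    # cycles per unit time
```

With these toy numbers the pair is a damped oscillator (negative real part, nonzero imaginary part); sustained oscillation in the real circuit would additionally require drive or nonlinearity, which the simulations planned in aim 3 are meant to capture.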
At the end of this project, we aim to have achieved the following:
1) Generated neuronal activity data for each neuron in the larval visual motion-processing circuit. We will also have refined the technology used to generate the correct mutant animals and to capture neuronal activity.
2) Developed a software framework to identify (segment) specific neurons and measure their Ca2+ levels. This framework will be generalized so that it applies to other neurons as we extend the project to other parts of the circuit.
3) Developed a simple integrate-and-fire model of the circuit in both MATLAB and NEURON. We will use this model both to simulate and match the circuit activity we observe in vivo and to run parameter sweeps to gain a deeper understanding of the neural code.
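As a sketch of the kind of processing the aim-2 framework will perform, the snippet below thresholds a reference frame into a cell mask (a stand-in for real segmentation) and converts the masked fluorescence trace into a ΔF/F signal. The synthetic "movie", the threshold, and the baseline rule are all illustrative assumptions, shown here in Python rather than the project's eventual implementation language.

```python
# Toy calcium-imaging pipeline: threshold segmentation + dF/F.
# The synthetic "movie" stands in for real imaging frames.

def make_frame(cell_value, size=6, background=10.0):
    """A size x size frame with a bright 2x2 'cell' at the center."""
    frame = [[background] * size for _ in range(size)]
    for r in (2, 3):
        for c in (2, 3):
            frame[r][c] = cell_value
    return frame

# Frames 0-2 are baseline; frame 3 carries a calcium transient.
movie = [make_frame(v) for v in (100.0, 100.0, 100.0, 150.0)]

# Segment: pixels well above background in the first frame define the ROI.
threshold = 50.0
roi = [(r, c) for r, row in enumerate(movie[0])
       for c, v in enumerate(row) if v > threshold]

# Mean fluorescence inside the ROI for each frame.
trace = [sum(frame[r][c] for r, c in roi) / len(roi) for frame in movie]

# dF/F against a baseline F0 (here the trace minimum; real data
# would use a low percentile of a much longer recording).
f0 = min(trace)
dff = [(f - f0) / f0 for f in trace]
```

Generalizing this to arbitrary neurons mainly means replacing the threshold step with a segmentation routine that can separate overlapping cells, while the ΔF/F step stays the same.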
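The aim-3 integrate-and-fire model can be prototyped as a minimal pair before committing to MATLAB or NEURON: a tonically driven excitatory cell excites an inhibitory partner, which feeds inhibition back, and the loop settles into a rhythmic spike cycle. All parameters (time constants, weights, drive) are placeholder values for illustration, not fits to larval data; the sketch is in Python for self-containment.

```python
def simulate_ei_pair(t_max=300.0, dt=0.1, tau=10.0, v_th=1.0,
                     i_drive=1.5, w_exc=0.9, w_inh=0.5):
    """Euler simulation of a reciprocally connected LIF pair.

    The E cell receives constant drive; each E spike kicks the I cell
    up by w_exc, and each I spike kicks the E cell down by w_inh.
    Returns the two spike-time lists (ms).
    """
    v_e = v_i = 0.0
    e_spikes, i_spikes = [], []
    for step in range(int(t_max / dt)):
        t = step * dt
        v_e += dt * (-v_e + i_drive) / tau   # leaky integration, E
        if v_e >= v_th:                      # E spike
            e_spikes.append(t)
            v_e = 0.0
            v_i += w_exc                     # excite I
        v_i += dt * (-v_i) / tau             # leaky integration, I
        if v_i >= v_th:                      # I spike
            i_spikes.append(t)
            v_i = 0.0
            v_e -= w_inh                     # inhibit E
    return e_spikes, i_spikes

e_spikes, i_spikes = simulate_ei_pair()
```

Each E-I spike cycle defines a phase; sweeping the weights and time constants, as the planned parameter sweeps would, shows how the cycle period depends on the loop's parameters and gives in-vivo recordings something concrete to match against.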
This funding will support all or some of three graduate students, spread equally across three areas: image generation, image processing, and circuit modeling. We will recruit students from the Biology, Neuroscience, Psychology, and ECE PhD programs.