| | |
|---|---|
| title | Small and near or big and far away: representation of depth from motion parallax in mouse V1 |
| start_date | 2023/02/21 |
| schedule | 13h |
| online | no |
| visio | https://ucl.zoom.us/j/93586016957?pwd=K0loT2Ira05sNmQ3bGZBZ1F5YWhGUT09 |
| location_info | 305BW, 3rd Floor & zoom |
| summary | Distinguishing near and far visual cues is an essential computation that animals must carry out to guide behavior using vision. This ability does not require visual experience in many animals, including mice. When animals move, self-motion creates motion parallax – an important but poorly understood source of depth information – whereby the speed of optic flow generated by self-motion depends on the depth of visual cues. This enables animals to estimate depth by comparing visual motion and self-motion speeds. As neurons in the mouse primary visual cortex (V1) are broadly modulated by locomotion, we hypothesized that they may integrate visual- and locomotion-related signals to estimate depth from motion parallax. To test this hypothesis, we used two-photon calcium imaging to record V1 activity in mice navigating in virtual reality environments, where motion parallax acted as the only cue for depth. I will present our findings on depth selectivity from motion parallax in mouse V1 and discuss how optic flow and locomotion-related signals contribute to these responses. |
| responsibles | Esposito |
Workflow history

| from state | to state | comment | date |
|---|---|---|---|
| submitted | published | | 2023/02/16 13:32 UTC |