Visualizing Vision
How we perceive color might not be as black and white as first thought
Using a combination of adaptive optics and high-speed retinal tracking technologies, a group of researchers from the University of California, Berkeley, and the University of Washington, Seattle, have, for the first time, been able to target and stimulate individual cone photoreceptor cells in a living human retina (1). The team were able to stimulate individual long (L), middle (M) and short (S) wavelength-sensitive cones with brief flashes of cone-sized spots of light (Figure 1) in two male volunteers, who then reported what they saw. Two distinct cone populations were revealed: a larger population linked to achromatic percepts and a smaller population linked to chromatic percepts. These findings indicate that separate neural pathways exist for achromatic and chromatic perception, challenging current models of how color is perceived. Ramkumar Sabesan and Brian Schmidt, joint first authors of the paper, share their thoughts.
What did you hope to learn from your research?
Our goal was to study how the activity of an individual cone maps onto perception, and we wanted to answer two questions. Firstly, how much information does a single cone convey to the brain, and how reliably? Secondly, does the wavelength of light a photoreceptor is most sensitive to directly map onto the perception it elicits? By studying the relationship between the isolated activity of a single neuron and visual perception, we hoped to learn how the brain uses the entire population of photoreceptors to create a rich sense of the visual world.
Figure 1. Montage of the human retina illustrating study design. Each spot is a single photoreceptor, and each ring indicates one degree of visual angle (~300 µm) from the fovea (represented by a blue dot). The inset is an enlarged pseudo-colored image of the area where individual cones (L [red], M [green] and S [blue]) were stimulated with green light. Inset size 100 µm. Credit: Ramkumar Sabesan, Brian Schmidt, William Tuten and Austin Roorda
Why use adaptive optics and live retinal tracking?
Adaptive optics uses a deformable mirror to correct for the aberrations in the eye – from the tear film, cornea, lens and vitreous – permitting clinicians and researchers to see into the eye as if these imperfections did not exist. The result is a retinal image with a resolution fine enough to visualize individual cells – in our case, individual cones. However, the eye is never perfectly still, so targeting light to a specific location to stimulate a single cone had previously been impossible. To overcome this, we developed sophisticated eye-tracking algorithms that monitor the eye’s every movement. This gave us the ability to steer our beam of light to exactly match the eye’s micro-saccades and confine the light spot to the targeted cone.
Were there any challenges?
To be confident we were isolating the activity of only a single receptor, we needed to carefully calibrate and align our optical systems, and validate the paradigm – we spent a lot of time early on piloting different conditions. Also, stimulating ~150 cones at least 20 times in two subjects meant each volunteer had to name the color of these tiny flashes of light many thousands of times. This was an exhausting effort and required nearly two years to complete.
Of your findings, what do you find most interesting?
That any given cone tended to produce either a white or a colored percept, rather than a random mix of the two. Also, in quite a few cases we stimulated a cone 20 or more times and the subject reported the same color sensation every single time. This repeatability suggests the brain has evolved sophisticated neural machinery for transmitting even the tiniest signals with very little corruption – remarkable, considering how “noisy” any single brain cell can be.
What impact do you think your work will have?
The finding that some L- and M-cones elicited repeatable color percepts whilst most drove white percepts is an important reminder that even within a class of cells, some perform different functions based on differences in the way those cells communicate with other neurons. For the general field of neuroscience, this finding demonstrates how important it is to consider not just a single neuron and the stimulus that best modulates its activity, but also the next set of neurons it talks to.
Next steps?
The role of S-cones in vision is still somewhat mysterious, and we are excited to find out what they see and how they interact with L- and M-cone pathways. We are also eager to learn what types of percepts are elicited by simultaneous stimulation of multiple cones. This will bring us closer to unravelling the circuitry underlying the most elementary aspects of our vision.
Another future direction is to study more people. Color vision is famously variable between people (think of #thedress!). Because these studies were exhausting, we were limited to studying two people, and we are excited to find more volunteers. In particular, we are interested in how variability in the relative number of L- and M-cones in a person’s retina (the L:M ratio varies from ~1:1 to ~16:1) influences color perception. Finally, we are also interested in individuals who are colorblind. With gene therapies and other vision restoration techniques on the horizon, we hope the information we glean from these studies will play a key role in testing the efficacy of new treatments and translating them to the clinic.
1. R Sabesan et al., “The elementary representation of spatial and color vision in the human retina”, Science Advances, 2, e1600797 (2016). PMID: 27652339.