April 2019

The Experience of Color vs. The Biology of Color
This week I’ve been exposed to this handy little graphic:
Absorbance vs wavelength of human color-distinguishing proteins
Here are a few facts to put this image in context.  Some are rather well-known:
  • The human eye has three light-absorbing proteins, which are housed in three different kinds of cone cells
  • These three cones produce the sensations of red, green, and blue.  In combination, this produces the entire visual range of color that humans can experience.
LED screens take particular advantage of this.  Each pixel contains three LEDs, which shine according to the relative red, green, and blue content needed at that particular patch of screen.  The red LED's output peaks within a very narrow wavelength band (somewhere in the upper 600s of nanometers), and the same goes for green and blue.  You can create a perfectly viewable picture by superimposing three images, each containing only the color content corresponding to red, green, or blue.
Superimposed images containing only red or blue color information.  Note around the sleeve, how parts of the image appear white or purple depending on the relative color intensity.
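The mixing idea can be sketched in a few lines.  This is a minimal Python sketch, assuming 8-bit (0–255) channel intensities; the function name is mine, not from any real screen driver:

```python
def mix(red, green, blue):
    """Combine three single-channel intensities (0-255) into one RGB pixel.

    A screen pixel is just a triple of sub-pixel brightnesses; colors like
    white or purple fall out of the relative intensities, as in the
    sleeve example above.
    """
    for value in (red, green, blue):
        if not 0 <= value <= 255:
            raise ValueError("channel intensity must be in 0-255")
    return (red, green, blue)

# Strong red and blue with no green reads as purple:
purple = mix(200, 0, 200)
# All three channels at full intensity read as white:
white = mix(255, 255, 255)
```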

I’m saying a lot of things we already know, but bear with me!  You’d expect, then, that the light-absorbing abilities of the three red/green/blue proteins in your retina would mirror the LED screen – narrow absorbance centered firmly on the wavelength band corresponding to each color.  But let’s take another look at the graph.  Peak absorbance for blue is actually deep into the color I would consider violet.  And… and… what the heck is up with red?
Reposted for convenience

Peak “red” in the human eye is actually yellowish green. 
Note a couple of other weird things.  We are clearly capable of detecting very faint “red”, if the rainbow bar at the bottom is at all accurate.  But even at the relatively high absorbance of the “blue” protein near 400 nm on the left, our perception drops off to deep violet and then to nothing.  The experience of vision seems to have very little to do with the actual proteins in our eyes.

Notably, I have no idea how this graph is normalized, or whether peak absorbance is actually equivalent between the proteins.  Maybe “blue” responds only weakly to light compared to green and “red”.  But I still think this says something profound about the amount of post-processing that has to go on after an image leaves the retina, particularly if you think about how vividly people experience red versus blue.  Then again, we might have expected this: you have an entire lobe (the occipital) devoted to vision.  The backmost chunk of brainmeat is devoted to working out how to see.

Red detection is important in an evolutionary-history sense, since red is the color of blood, meat, and (maybe) fertile sex partners.  Three of the four F’s are represented here (fight, flight, food, and processes conducive to the propagation of the species).

So what’s going on here?  Are reds super-enhanced in optical processing?  Maybe, maybe not.  “Red” cones are the only cone type responding at all during the experience of the reddest reds.
Green cones only begin responding at around 650 nm, where “red” already has a relative absorbance of 2.  And, more importantly, once both are active we can compare their relative absorbances to discriminate colors.

At 600 nm, green has a relative absorbance of about 2.5 and red about 7, a ratio of roughly 0.4.  The ratio of green activity to red activity keeps climbing as wavelength decreases – by yellow-green, it is closer to 0.9.
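The ratio logic can be sketched numerically.  This is a minimal Python sketch; the absorbance values below are my rough reads off the graph (eyeballed estimates, not measured data):

```python
# Green and "red" cone relative absorbances, eyeballed from the graph above.
relative_absorbance = {
    # wavelength (nm): (green, red)
    650: (0.0, 2.0),  # green cones just beginning to respond
    600: (2.5, 7.0),  # orange
    560: (6.3, 7.0),  # yellow-green
}

def green_red_ratio(wavelength):
    """Ratio of green-cone to red-cone activity: the brain's color cue."""
    green, red = relative_absorbance[wavelength]
    return green / red

# The ratio climbs as wavelength decreases, giving a graded signal
# the visual system can use to discriminate orange from yellow-green.
for wavelength in sorted(relative_absorbance, reverse=True):
    print(f"{wavelength} nm: green/red = {green_red_ratio(wavelength):.2f}")
```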

It is difficult to discriminate a slightly increased firing rate from a single sensory channel (say, red), especially since, unlike a computer, your brain does not have a system clock (more on this in a future post).  Perhaps the increased overlap of the red and green absorbance curves is adaptive.  Perhaps the high clarity of our red detection compared to purple or blue is hard-coded into our retinas; perhaps it is the result of selective post-processing further into the brain.  Perhaps both.

More musings to come on the biology of color detection vs. other, learned or post-processing factors.

--- Elizabeth Broadwell
Elizabeth Broadwell is a master's student in the College of Arts and Sciences.
