I’m frequently asked this. But how can I answer?
The first thing to note is that we aren’t really color-blind, we just get a little mixed up about red and green. And, yes, we do see red and green; we just see them differently than you do.
Can I see the colors in stop-lights? (Is it safe for me to drive?)
Yes, I can see three different colors in a stop-light. The “green” light doesn’t look green to me; it looks almost white. But that’s irrelevant: all three colors are easily distinguishable, and that’s what matters. (Also, the order of lights in a stop-light is fixed, world-wide as far as I can tell. Even if I couldn’t see colors I could remember: if the bottom light is on, go; if the top light is on, stop.)
Most of us aren’t blind to colors, but there are a few people with no color vision at all: no cones in the eye, just rods. They don’t see well in daylight, nor do they have very fine-grained vision. This is very rare, but it does happen. I shan’t be talking about them, but it’s fairly easy to answer for them (I think): their world is black, grey, and white; the daytime is too bright, and it’s hard for them to see at all then; and their vision is fuzzy at the best of times.
What is color?
Color is a trick our minds play on us. It is vaguely related to the wavelength of light, but not precisely.
A given wavelength of visible light can be assigned a color, but the reverse is not true. There are many colors that do not appear on the spectrum: magenta, brown. These colors are made up of several different wavelengths (or the absence of some), combined in the mind to produce a single thing called “color”.
The history of color vision
Many years ago, probably a little before the Cambrian explosion (or 530 million years BP), the eye evolved.
Early eyes were simple. Just a light sensor. But over time they became more complex and gained sensors for multiple wavelengths, lenses for focusing and brains for processing.
As early as 500 million years BP, color sensors (the cones) had differentiated from the early light sensors (the rods). Both were useful: rods respond well at low levels of light but provide no color information, while cones need high levels but do provide color data.
Birds, reptiles and teleost fish mostly have four color sensors in their eyes (as well as the rod light sensors which I’ll now ignore).
True mammals evolved about 200 million years ago, presumably with the full 4-sensor complement of other tetrapods, but they were nocturnal and burrowing animals. They couldn’t use color vision, and their eyes degenerated to 2-color (blue-green). About 40 million years ago a mutation in the old-world monkey line duplicated the green sensor gene, and the products of two genes (the original and the duplicate) diverged to give monkeys the 3-color vision we are familiar with today.
This change was so important that there are no old-world monkeys left with 2-color vision. There are also, essentially, no monkeys who are color-blind. Color-blind monkeys take too long to find good food and slowly starve and die.
Humans don’t have the same selection pressure on them that monkeys do. It’s much easier to find ripe (red) fruit in a supermarket than in a tree, so humans are slowly losing the sensitivity of their red sensor. About 8% of Caucasian males are color-blind.
Our red and green sensors (and those of all other old-world primates) are 98% similar in their amino acid sequence (there are either 15 or 16 different amino acids between the two). The green sensor responds best to wavelengths of 530–535 nm, and the red to 560–565 nm. Although there are 15 or 16 differences, only 7 are functional, and of those 7, one in particular accounts for a 14 nm difference in sensitivity peak (almost half the total wavelength difference), another for 7 nm, and a third for 4 nm.
Sensor response in normal 3-color vision (from Wikimedia Commons). The curve labeled “S” is for short-wavelength light (blue), “M” for medium-wavelength light (green), and “L” for long-wavelength light (reddish).
So there are seven possible single-point mutations that can cause some level of color-blindness, and many more multiple-point mutations (127 possible combinations in all). But the result of any of these changes is the same in kind: the red sensor moves closer to the green sensor, and the ability to distinguish between red and green drops. The amount of movement will vary depending on which mutation(s) occur.
How do we represent photographs on a computer?
Back in the nineteenth century people worked out that most colors could be represented by a mixture of three primary colors (sometimes additive primaries are used: red, green, blue; sometimes subtractive primaries: magenta, cyan, yellow). Color photography is based on this idea.
In a computer every pixel of an image is represented as a percentage of full red, of full green, and of full blue. The same is true of a television signal. Computer monitors and color TV screens are similar: every dot we think we see is actually made up of three small dots close together, one red, one green, and one blue.
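To make this concrete, here is a tiny Python sketch of one pixel as three primary fractions. The 0–255 form is the common 8-bit-per-channel convention, used here purely for illustration:

```python
# A pixel is just three numbers, one per primary,
# each a fraction of "full" intensity.
pixel = (1.0, 0.5, 0.0)  # (red, green, blue), each in 0.0-1.0

# The same pixel in the common 8-bit-per-channel encoding (0-255):
pixel_8bit = tuple(round(c * 255) for c in pixel)
print(pixel_8bit)  # (255, 128, 0)
```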
Oddly enough, there is no standard fixing exactly what red, green and blue are. So two different monitors might display the same image slightly differently, and a photographic image on a monitor might look different from the original.
You might think that everyone would want to use the peaks of the three color sensors in the eye. Go back and look at the image above for a moment: the peak for the long-wavelength sensor is nowhere near red, it’s greenish. A monitor can only represent colors that lie (on the spectrum) between its two extreme primaries, so if we used that greenish peak as the basis for our monitors we would never be able to represent any reds at all.
On the other hand if we use a color which is too far from the peak, then we’d never be able to display a bright red, simply because our eyes would only respond dimly to the color we chose.
So a compromise is used, and people generally choose a color that is about half-way down from the long-wavelength peak.
How do color-blind people see pictures on the computer?
This is a much simpler question, but it is still basically unanswerable. The best I can do is talk about what basic color sensor outputs in the eye might be like.
When talking about the real world we’d need to worry about how the color-blind eye would respond differently to lots of different wavelengths, but when talking about an image displayed on a computer monitor all we need to do is worry about how the eye responds to the three wavelengths used in the image. Red, green and blue.
We don’t know exactly what shades of red, green or blue will be used but the general idea can be conveyed even if we don’t know the specifics.
The blue color will look basically the same to a normal and to a color-blind person. (The blue curve doesn’t shift, and the long-wavelength curve barely intersects it.)
But for a color-blind person the long-wavelength (red-most, but not really red) sensor peak in the eye moves closer to the green peak. This means that for green light the long-wavelength sensor will be a little more responsive than for normal people (so green light will appear slightly brighter and perhaps redder) while for red light the sensor will be a lot less responsive (so red light will appear much dimmer and less red).
Here I have taken the cone response curve from above and superimposed a black line representing the long-wavelength sensor for a color-blind person. Vaguely. Again, each color-blind person is different. I’ve chosen a spot midway between the medium and long sensors of a normal eye (remember, a single amino acid change can cause a 14 nm movement, so this is a reasonable value).
Note that this sensor now responds a bit more vigorously to green (shown by the dark blue arrow under the green peak) and a lot less vigorously to red (shown by the light blue arrow in the red area).
When the green of your monitor enters your eye you get a strong response from both the “M” (green) and “L” (sort of reddish) sensors. For a normal eye the “L” sensor responds at about 91% of its peak value; for a color-blind eye it responds at about 96%. So for a green signal, the red sensor of a color-blind eye sees an extra response of about 5% of the green value.
When the red of your monitor enters your eye you get about a 50% response from your “L” sensor, while a color-blind eye gets only about 33% from its “L” sensor. So the color-blind eye sees red at about 2/3 of the level seen by a normal eye.
So if I were to take a normal RGB photograph, decrease the red channel to 2/3 of its value, and then increase it by 5% of the green channel’s value, it might give you some idea of what I see.
My friend Greg tells me he can easily see the digits in the color-blindness test even after the transformation. He points out that power is proportional to the square of the intensity of the incident light, and therefore suggests that the ratios above should be squared:

red = (2/3)^2 * red + (1.05^2 - 1) * green
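Both versions of the transformation can be sketched in a few lines of Python. This is purely illustrative: the function name is mine, channels are assumed to be floats in 0.0–1.0, and the 2/3 and 5% figures are the rough estimates above, not measurements:

```python
def simulate_colorblind(r, g, b, squared=False):
    """Rough per-pixel red/green color-blindness simulation (a sketch).

    Channels are floats in 0.0-1.0. The 2/3 and 5% figures are
    estimates from the text; squared=True applies Greg's suggestion
    of squaring the ratios.
    """
    if squared:
        new_r = (2 / 3) ** 2 * r + (1.05 ** 2 - 1) * g
    else:
        new_r = (2 / 3) * r + 0.05 * g
    # Only the red channel changes; clamp it back into range.
    return (max(0.0, min(new_r, 1.0)), g, b)

print(simulate_colorblind(1.0, 0.0, 0.0))  # pure red dims to about 2/3
```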
I wondered if it might be possible to perform the reverse transformation, so that I might see what everyone else does. There turns out to be a major problem with this, though: red needs a much greater dynamic range than it actually has (it needs to go from −5% to 150% instead of 0–100%). Nonetheless, I tried to apply this to the color-blindness test above. It didn’t work (the test still looked like random dots), but it did run up against the dynamic-range issue, so I don’t know whether it failed because the ideas are wrong or because I could not do a real de-color-blindness transformation.
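Inverting the linear transformation shows the dynamic-range problem directly. This is my own algebra, a sketch rather than the original experiment: solving red' = (2/3)·red + 0.05·green for red gives red = 1.5·red' − 0.075·green, which leaves the 0–1 range at both ends:

```python
def reverse_transform(r, g, b):
    # Invert r' = (2/3)*r + 0.05*g for r (linear version, my algebra):
    #   r = 1.5*r' - 0.075*g
    new_r = 1.5 * r - 0.075 * g
    return (new_r, g, b)

# Pure red would need 150% intensity; a pure-green pixel would need
# slightly negative red. Neither fits the 0.0-1.0 range a channel has.
print(reverse_transform(1.0, 0.0, 0.0))  # (1.5, 0.0, 0.0)
print(reverse_transform(0.0, 1.0, 0.0))  # (-0.075, 1.0, 0.0)
```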
I don’t really know if I’m doing anything reasonable here. I’ve managed to convince myself that there is something to my argument… but… I don’t really know…