I'm interested in how people experience color. These pages are a work in progress. As I write this, I'm hoping to begin forming answers to the following two questions:
- In the space the human brain builds to represent the surrounding
world, the color the brain assigns to an object's surface depends on
much more than the light arriving from the corresponding surface in the
real world. This is a well-known phenomenon, and one of the reasons the
field of color appearance models was developed. With this in mind, it is
interesting to ask: do modern image-recognition neural networks
integrate the same contextual information when identifying colors? If
not, there are likely conditions under which they perform poorly as a
result.
- The question above can also be played in reverse: if neural networks do model color well, can inspecting the internals of a trained network help reveal how color is processed in human brains?
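To make the first question concrete: the simplest ingredient of a color appearance model is chromatic adaptation, which rescales cone responses by the prevailing illuminant so that the same surface keeps a stable appearance under different lights. Below is a minimal sketch of a von Kries-style adaptation, assuming a Hunt-Pointer-Estevez-style XYZ-to-LMS matrix and standard D65 / Illuminant A white points; it is illustrative only and not code from this project.

```python
import numpy as np

# XYZ -> LMS cone-response matrix (a Hunt-Pointer-Estevez-style
# variant; the exact matrix is an assumption for illustration).
M = np.array([
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0000, 0.0000,  0.9182],
])

def adapt(xyz, white_src, white_dst):
    """Von Kries adaptation: rescale each cone response by the ratio
    of the destination and source illuminant responses, so the
    stimulus keeps the same cone ratios relative to its white point."""
    gain = (M @ white_dst) / (M @ white_src)  # per-cone scaling
    return np.linalg.inv(M) @ (gain * (M @ xyz))

# White points in XYZ (standard published values).
d65 = np.array([0.9505, 1.0000, 1.0890])  # daylight
ill_a = np.array([1.0985, 1.0000, 0.3558])  # incandescent

patch = np.array([0.3, 0.4, 0.5])
print(adapt(patch, d65, ill_a))
```

The relevant point for the question above is that the adapted color depends on the scene's white point, not just the patch's own tristimulus values, which is exactly the kind of contextual dependence one could probe a trained network for.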
The experiments below are exported from JupyterLab notebooks in this project's git repository.