Current and recent projects in my lab include color perception and color memory, high dynamic range lightness perception, ideal observer models of lightness perception, color philosophy, visual memory capacity, and the philosophy of science and evolutionary psychology.

My experimental approach relies on collecting psychophysical data to guide the development of quantitative, computational models that explain and predict visual perception and memory.

Percolating ideas include Bayesian models of perceptual uncertainty, category estimation, and perceptual biases in data interpretation.

Color perception and color memory

Visual perception tells us about properties of objects in the world. For visual perception of objects to be useful, however, it must be combined with some kind of memory. For example, the color of a fruit indicates its ripeness only if it can be compared to a memory of what constitutes ripe fruit color. Everyday experience suggests that although color memory is sometimes very good (e.g. recalling the color of stop signs and bananas), it can also be very poor (e.g. picking out paint to match the walls at home). This everyday experience mirrors a deep division in the color memory literature about the fidelity of color memory. How can these differences be reconciled? I am beginning a series of experiments that integrate psychophysical and computational approaches to color perception and memory with the aim of providing a single framework that explains the disparate theories of color memory.

Ideal observer models of lightness perception

I am also interested in modeling the visual perception of complex scenes. I take a computational approach with the ultimate goal of creating a model that will take any real image as input and return quantitative predictions about its perception. So far I have focused on models of lightness perception that mimic vision in the sense that they take the retinal image as input and from it estimate the reflectance (or perceived lightness) and illumination at each location in the world. I use a Bayesian framework, so the model's estimates are constrained by observations about which surfaces and illuminants are likely to be present in the world. I find the Bayesian framework attractive because the visual system evolved in a world with particular statistical properties. Given the ambiguity inherent in the retinal image, it makes sense to use real-world observations to constrain the solutions.
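The core estimation problem can be sketched in a few lines, under strong simplifying assumptions that are mine, not a description of my published models: a single image location, a multiplicative generative model (luminance = reflectance × illumination, so the logs add), and independent Gaussian priors on log reflectance and log illumination with illustrative parameter values. With these assumptions the MAP estimate has a closed form: the deviation of the observed log luminance from its prior expectation is split between the two causes in proportion to their prior variances.

```python
import numpy as np

def map_lightness(log_luminance, mu_r=-1.0, var_r=0.25, mu_i=1.0, var_i=1.0):
    """MAP estimates of log reflectance and log illumination at one location.

    Generative model: luminance = reflectance * illumination, so in log
    units l = r + i. With independent Gaussian priors r ~ N(mu_r, var_r)
    and i ~ N(mu_i, var_i), the MAP solution attributes the prediction
    error (l - mu_r - mu_i) to each cause in proportion to its prior
    variance. All prior parameters here are illustrative placeholders,
    not measurements of real-world surface or illuminant statistics.
    """
    l = np.asarray(log_luminance, dtype=float)
    err = l - (mu_r + mu_i)           # deviation from expected log luminance
    w_r = var_r / (var_r + var_i)     # share of the error assigned to reflectance
    r_hat = mu_r + w_r * err          # estimated log reflectance (lightness)
    i_hat = mu_i + (1.0 - w_r) * err  # estimated log illumination
    return r_hat, i_hat

# The same surface under a bright vs. a dim illuminant: because the
# illumination prior is broader, most of the luminance change is
# attributed to the illuminant, yielding partial lightness constancy.
r_bright, i_bright = map_lightness(1.0)
r_dim, i_dim = map_lightness(-1.0)
```

The toy example illustrates why the framework is attractive: constancy (or its failure) falls out of the relative reliability the priors assign to surfaces versus illuminants, rather than being built in by hand.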