Visual object perception is a complex cognitive process. Given the same input, processing can occur at multiple distinct levels. At the finest level, we notice object texture and material properties; at the next level, we recognize individual objects; further up the processing hierarchy, we attend to and interact with several objects at once; and at the highest level, we perceive entire ensembles of objects. Despite this complexity, human vision has the remarkable ability to navigate effortlessly across these levels of visual processing and extract information at the level appropriate for the task at hand. This highlights two issues fundamental to understanding visual object perception in the human brain: (1) How is information computed and represented at each level of the visual processing hierarchy, and are similar or distinct neural mechanisms required at each level? (2) How is task-relevant visual information selected and represented, and what neural mechanisms mediate moment-to-moment visual perception? Using fMRI, my research aims to address these two questions.

Visual information representation in the mind and brain