The overarching goal of the Vision Sciences Lab is to understand how the mind and brain construct perceptual representations, how the format of those representations impacts visual cognition (e.g., recognition, comparison, search, tracking, attention, memory), and how perceptual representations interface with higher-level cognition (e.g., judgment, decision-making and reasoning).
To this end, ongoing projects in the lab leverage advances in deep learning and computer vision, aiming to understand how humans and machines encode visual information at an algorithmic level, and how different formats of representation impact visual perception and cognition. We import algorithmic and technical insights from machine vision to build models of human vision, and we apply theories of human vision and the "experimental scalpel" of human vision science to probe the inner workings of deep neural networks and to build more robust, human-like machine vision systems. Ultimately we hope to contribute to the virtuous cycle between the fields of human vision science, cognitive neuroscience, and machine vision.
My early research focused on characterizing and understanding limits on our ability to attend to, keep track of, and remember visual information — our visual cognitive capacities. In many cases, deeper understanding of these limits seemed to demand a deeper understanding of visual representation formats, but these ideas were not easily testable, because our field had not yet developed scalable, performant models of visual encoding beyond relatively early visual processing stages.
However, since 2012, we have seen a veritable explosion in the availability of highly performant vision models from the fields of deep learning, machine vision, and artificial intelligence (or is that one field?). On a quarterly basis, new models with new algorithms and new abilities are released, each presenting intriguing hypotheses for the nature of visual representation in humans, and opportunities for a deeper understanding of visual cognition in both humans and machines.
Thus, ongoing work in the lab focuses primarily on this intersection between human and machine vision.
Overview:
In many working memory and attention tasks, we observe a tradeoff between "quantity and quality." Here you can experience that tradeoff for yourself, witnessing a tradeoff between the number of objects tracked and the speed limit at which you can track those objects (which we propose reflects a tradeoff between the number of attended items and spatial resolution).
What you will see:
You will see 8 black circles moving on a gray background.
To do.
First, you will find the fastest speed at which you can track a single target. The numbers 1-14 correspond to different speeds; click on a number to try tracking an item at that speed. At the beginning of each trial, one circle will blink, and that's the one you should track. The items then move for several seconds and finally stop, and the target turns red so you can check your accuracy. If you got it right, try a faster speed. Keep going until you find the maximum speed at which you can keep track of the target.
Important note.
Make sure to keep your eyes on the central "+" sign, and "mentally track" the target in your peripheral vision. We are testing how fast you can track things with your attention (rather than with your eyes).
Track 1 Target
Find the fastest speed at which you can track 1 target (keeping your eyes on the "+").
Track 4 Targets
After you determine the fastest speed at which you can track 1 target, try to keep track of 4 targets at that speed. This time, 4 items will blink at the beginning, and you want to try your best to keep track of ALL 4 of them.
To notice.
You were probably able to keep track of 1 item quite fast, even without moving your eyes. However, when you divided your attention and tried to keep track of 4 items at that same speed, you probably experienced the items "scattering" immediately. If you tried to hang on to all 4 items, you likely lost them all. You might think this is because you could never keep track of four things at once, but if you try slower speeds, you are likely to find one at which you can keep track of all four items perfectly.
Super-trackers.
The effect is strongest if you actually reached your limit for 1 target. Some people can easily track 1 target at the maximum speed tested here (video resolution limits the speeds we can show in this demo, but in the lab we can test much faster speeds). Those individuals would likely be able to hang onto many (possibly all) of the 4 targets, but might have found it "more effortful" to do so. Nobody we've tested in the lab can track multiple objects at the same maximum speed at which they can track 1 target, even with extensive cognitive training.
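If you're curious how a trial like this could be put together, below is a minimal sketch of a multiple-object-tracking trial in Python using matplotlib animation. This is an illustrative mock-up, not the lab's actual demo code; the parameter names (N_DISCS, N_TARGETS, SPEED, CUE, MOTION) and all timing values are assumptions chosen for readability.

```python
# Minimal multiple-object-tracking (MOT) trial sketch.
# NOTE: illustrative mock-up, not the VisionLab's demo code; parameter
# names and values are assumptions chosen for readability.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

N_DISCS = 8      # total discs on screen
N_TARGETS = 4    # discs to track (compare 1 vs. 4)
SPEED = 0.02     # displacement per frame; raise this to find your limit
CUE, MOTION = 50, 300  # frames for the cue and motion phases (~50 fps)

rng = np.random.default_rng(0)
pos = rng.uniform(0.1, 0.9, size=(N_DISCS, 2))   # starting positions
vel = rng.normal(size=(N_DISCS, 2))
vel = SPEED * vel / np.linalg.norm(vel, axis=1, keepdims=True)

fig, ax = plt.subplots(figsize=(5, 5))
ax.set_xlim(0, 1); ax.set_ylim(0, 1)
ax.set_xticks([]); ax.set_yticks([])
ax.set_facecolor("gray")
ax.text(0.5, 0.5, "+", ha="center", va="center", fontsize=18)  # fixation
discs = ax.scatter(pos[:, 0], pos[:, 1], s=300, color="black")

def update(frame):
    global pos
    colors = ["black"] * N_DISCS
    if frame < CUE:                       # cue phase: targets blink white
        if (frame // 10) % 2 == 0:
            for i in range(N_TARGETS):
                colors[i] = "white"
    elif frame < CUE + MOTION:            # motion phase: bounce off walls
        pos = pos + vel
        out = (pos < 0.05) | (pos > 0.95)
        vel[out] *= -1
        pos = np.clip(pos, 0.05, 0.95)
    else:                                 # reveal phase: targets turn red
        for i in range(N_TARGETS):
            colors[i] = "red"
    discs.set_offsets(pos)
    discs.set_color(colors)
    return (discs,)

anim = FuncAnimation(fig, update, frames=CUE + MOTION + 50,
                     interval=20, blit=False, repeat=False)
plt.show()
```

To reproduce the quantity/quality tradeoff described above, raise SPEED with N_TARGETS = 1 until you start losing the target, then rerun with N_TARGETS = 4 at that same speed.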
I'm George A. Alvarez, Professor of Psychology at Harvard University, and co-director of the Vision Sciences Laboratory.
Here I thought I would briefly share a little bit of personal information. I was born in Honolulu, Hawaii, and raised in Watsonville, California (go Wildcats!), where I attended public schools through high school. I'm a first-generation college student, and was fortunate to attend Princeton University (go Tigers!) as an undergraduate, Harvard University for graduate school, and MIT for my postdoctoral work. If for some reason you would like to learn more about my path to professorship, you can read this American Psychological Society writeup.
In my earlier days my hobbies included sports (baseball, basketball, American football), movies, and studying filmmaking. These days my time is divided between running the VisionLab, teaching, and raising 2 kiddos, so my hobbies have gravitated towards typical dad stuff. I'm told I make a killer grilled cheese sandwich.
I'm also chair of Harvard Psychology's Diversity, Inclusion, and Belonging Committee, where we are working to increase representation and well-being for all members of the Psychology community.
Contact
George Alvarez
alvarez@wjh.harvard.edu | CV | Google Scholar | @grez72
William James Hall, Room 760
33 Kirkland St
Cambridge, MA
Whether you are interested in human perception and cognition, in machine vision, deep learning, and artificial intelligence, or in the intersection between these fields, you're invited to apply to work in the VisionLab. We invite applications at all levels, including undergraduate students, graduate students, and postdocs.
The VisionLab is a joint lab between myself (George Alvarez) and Professor Talia Konkle. We are located on the 7th floor of William James Hall, in the Department of Psychology at Harvard University, where we share an integrated lab space. We endeavor to be an inclusive and fun place to work and socialize, and to support our students in pursuing their own ideas and interests (while keeping things close enough to home for us to provide support!).