What science is learning about how we see can help you take more compelling pictures.
If you've seen particularly evocative photos on a hospital's walls, Adam Gazzaley may have shot them. "The most powerful photos are warm and comfortable, but also new and exciting," says the M.D./Ph.D., who teaches cognitive neuroscience and runs the Neuroscience Imaging Center at the University of California, San Francisco (www.gazzaleylab.ucsf.edu).
His first photographs, taken through a microscope, depicted brain slices and neurons. As he studied underlying patterns in how the brain converts images into cognition, he also developed a passion for outdoor nature photography, and he has sold many prints to hospitals (www.comewander.com).
Now he studies how people's brains light up in an MRI while they look at his pictures. His data is helping to show why our aging brains find it increasingly difficult to filter out irrelevant information: As we grow older, our saliency maps tend to become more crowded, and our ability to weigh the relative importance of elements in a scene diminishes.
His advice for taking good pictures? "You have to devolve a couple notches, to shift the balance back from top-down to seeing bottom-up," Gazzaley says. "There's a price in being goal-oriented, too top-down -- you miss the flowers along the stream bank as you rush to the waterfall." The idea is to be "more stimulus-driven than goal-directed."
I ask if that means we need to regress to seeing like cavemen. "Maybe below that," he replies. "Back to pre-human. Being predominantly top-down is how we've evolved and survived, but you lose appreciation for the subtleties in the world."
Gazzaley says he's shot enough pictures to be able to put most technical questions out of his mind as he tries to see bottom-up. "You have to tune in to what it is that's changing your emotions, and try to capture that," he says. "What's the point of the picture? If you can't describe a photo in three or four words, and you don't feel emotion while you're taking it, the viewer won't feel much either."
MEGA VS. GIGA
Eyes -- lips -- shiny things. My own eyes zombie through 12 photos in the USC iLab. It isn't exactly A Clockwork Orange, with pincers holding my eyelids open. But I do have to keep my chin resting solidly on a T-stand as Laurent Itti raises my chair to aim my gaze slightly downward at a high-definition, 42-inch TV screen about 4 feet away.
As he aims a camera at my eyes to record their movements, I say through clenched teeth, "So all you guys in this lab must be excellent photographers, knowing so much about how people see."
"That knowledge may work against us," he replies. "None of us are very good photographers." He confesses to knowing why creating "scan lines" for the eye to follow through a picture is important, but says that the overly left-brained (e.g., research scientists) have trouble transporting that knowledge across the synapse between science and art.

After my eyes are pointed at a target at the neutral center of the photograph, I bring the pictures up one by one on the screen by tapping the space bar on a computer keyboard. The results -- as with the photo of Angelina Jolie -- closely match the way a computer model of human sight, built from hundreds of such tests, predicted my eyes would move.
"Where the eye lands in an image is not much different between monkeys and humans," says USC grad student David Berg, who studies visual stimuli in monkey brains. "Eyes, mouth corners, the top of the lip -- people and monkeys look for emotional significance in a face."
How can a photographer take advantage of these findings? If you want to draw the eye first to something nonhuman in a photograph, leave out faces, be they human, dog, or mask. If you want to tap the viewer's strongest subconscious impulses, hide faces or face-like shadows in the trees and clouds. (After all, painters such as Van Gogh and Cezanne did this -- intentionally or not -- and we're still looking at their pictures.)
Of course, all this research and development isn't happening just to make you a better photographer, though that's a nice side benefit. It's chiefly aimed at industrial applications. Some of this eye-knowledge has already made its way into cameras, which now routinely include face-detection technology in their exposure and focusing systems, under the assumption -- scientifically proven -- that faces are what photographers, and viewers, care most about.
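Camera makers don't publish their metering algorithms, but the principle is simple: once a detector has found face rectangles, the exposure calculation weights those pixels more heavily than the rest of the frame. A toy sketch of that idea (the function, weights, and box format are hypothetical, not any manufacturer's implementation):

```python
import numpy as np

def face_weighted_exposure(gray, faces, face_weight=4.0):
    """Toy face-priority metering: average scene luminance, with
    detected face rectangles counted `face_weight` times as heavily.
    `gray` is a 2-D luminance array; `faces` is a list of (x, y, w, h)
    boxes assumed to come from some upstream face detector."""
    weights = np.ones_like(gray, dtype=float)
    for x, y, w, h in faces:
        weights[y:y + h, x:x + w] = face_weight
    return float((gray * weights).sum() / weights.sum())

# A bright face against a dark scene: plain averaging would
# overexpose the face; weighting pulls the meter toward it.
scene = np.full((100, 100), 30.0)        # dim background
scene[40:60, 40:60] = 200.0              # bright face region
metered = face_weighted_exposure(scene, [(40, 40, 20, 20)])
```

With the face counted four times as heavily, the metered value lands much closer to the face's luminance than the plain frame average does, so the camera exposes for the face.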
And there's more to come. Electrical engineers at Stanford have developed a digital camera with 12,616 tiny lenses that sees in super 3D. They've shrunk the imaging sensor's pixels down to 0.7 microns, less than a tenth the size of the pixels in many DSLRs, and grouped them into arrays topped by miniature lenses much smaller than the ones used in today's sensors.
This, in turn, is helping pave the way for the gigapixel camera, with 100 times more pixels than today's 10MP clunker.
Still, the real goal isn't to build better DSLRs. It's to build better robots, with the visual acuity of the Terminator. I have to wonder whether such super-seeing cyborgs will one day realize they have more visual power than we do, decide to take over, and begin laying waste to major cities. One thing's certain: We'll at least know why we can't tear our eyes away from the pictures.