Deep Impact

Professor Dan Kersten

Neuroscientist Dan Kersten works to understand how the brain visually processes the space in front of us, allowing us to negotiate it on a second-to-second basis: driving a car through traffic, maneuvering a pen over paper, dribbling a basketball toward a net.

By Danny LaChance

Dan Kersten studies that immediate relationship we have with space. He's a CLA neuroscientist who has spent years trying to understand how the space that lies in front of us gets processed visually by the brain, allowing us to know what material objects occupy that space and where those objects are located.

For decades, neuroscientists have known that light from the world is initially projected two-dimensionally on the retina, a screen-like piece of tissue in the back of our eyes.

"The eyes are built to extract information about the world from projection. So there's a difference right at the start. You start with a three-dimensional scene and you've got two dimensional data," says Kersten.

Those two-dimensional signals soon travel to area V1, a part of the brain's cortex located at the back of the head, where they light up clusters of cells in patterns that approximate the space of the visual field. So an apple in front of you activates cells in your V1 area corresponding to its location in your visual field. Moving the apple to the left will change the location of activated cells in your V1 area.
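That kind of layout is called a retinotopic map: neighboring points in the visual field drive neighboring clusters of cells. As a rough illustration (our own simplification, with a made-up 10-by-10 grid standing in for a patch of V1), moving an object shifts which cluster responds:

    # A toy retinotopic map: each visual-field position activates one
    # cluster in a 10x10 sheet of simulated V1 cells.
    import numpy as np

    GRID = 10

    def v1_response(azimuth, elevation):
        """Activation map for an object at a visual-field position,
        with both coordinates given in [0, 1)."""
        activity = np.zeros((GRID, GRID))
        activity[int(elevation * GRID), int(azimuth * GRID)] = 1.0
        return activity

    apple = v1_response(0.6, 0.5)   # apple slightly right of center
    moved = v1_response(0.3, 0.5)   # move it to the left
    print(np.argwhere(apple))       # [[5 6]]
    print(np.argwhere(moved))       # [[5 3]] -- a different cluster fires

Note what the grid lacks: it is flat. Each cluster codes where the apple sits in the visual field, but nothing in this layout says how far away it is.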

But if, like the retina, area V1 represents space only in two dimensions, how do we perceive depth? How do we know, when we look into our dining room, that the candle on the table ahead of us isn't touching the curtain hanging three feet behind it?

For years, Kersten says, depth processing was mostly thought to happen elsewhere in the brain, after those initial signals passed through V1. So it was a surprise, he says, when evidence collected by his laboratory last year suggested that V1 does take distance, or depth, into account.

That's good information to know, especially for those seeking to replicate the human eye through technology. In the future, scientists may be able to help people with eye damage see by stimulating their V1 areas directly, through cortical implants. In order to translate the two-dimensional data from a camera lens into signals meaningful to V1, they'll need to know just how V1 processes depth. That's where Kersten's finding and the research it's spawning come in.

The robotics industry also stands to benefit. "In the long term, artificial intelligence may need to draw on what we learn about the way the human brain works in order to achieve or even go beyond human visual and cognitive competence," he says.

After years of thinking about V1 in a certain way, Kersten says, it's hard to adjust to his new findings. It wasn't exactly like seeing water boil in a freezer, but the results do run against years of research and speculation about the way we see.

"This is actually one case where hindsight is not helping a lot," he remarks. Figuratively speaking, of course.
