Visionaries

by Deane Morrison

University of Minnesota psychologists are the best in sight -- finding answers to how we make sense of what we see

The experiment is one of many by which Jiang, an Assistant Professor of Psychology, probes the human mind's ability to manage visual information. She is a new face in the department's formidable array of vision researchers, whose discoveries are lifting the curtain on how our eyes and brains work together to produce the miracle of vision.

The department began building its sterling reputation for vision research in 1968 with the arrival of Dwight Burkhardt (see sidebar). It has since expanded to include researchers in nearly every aspect of visual science and is considered one of the top departments nationwide in this area.

The shifty human visual system

Psychology faculty members take full advantage of University resources such as functional magnetic resonance imaging (fMRI) to study how various parts of the brain respond -- or don't respond -- to visual inputs. This research yields insight into how the brain processes and interprets visual information.

A big discovery of the past few decades is the brain's adaptability, or "plasticity," and no one knows that better than Psychology Professor and department chair Gordon Legge. Visually impaired himself, he has undergone fMRI studies to learn how the condition affects neurons of the visual cortex. Located at the back of the head, the visual cortex is the first part of the cerebral cortex to receive visual information, but its function can be "reassigned" to other senses.

"A portion of my visual cortex has been allocated to touch," says Legge. "We can see that when I read Braille, a lot of my visual cortex is activated. In the totally blind, the visual cortex seems to be taken over by touch."

And touch can be jealous of its new territory. "In some cases, 'sight restoration' surgery has not led to full restoration of visual function, possibly because touch won't let the visual cortex go," says Legge.

Apart from restorative surgery, some visual abilities can improve with effort. In his studies of how visual experience shapes vision in adults, Professor Stephen Engel has found that with practice, people can begin to see very faint lines or patterns that were previously invisible to them.

A burning question is whether the adult primary visual cortex can rewire itself and, if so, how. Finding out, says Engel, could lead to optimizing people's ability to pick out everything from tumors in an X-ray to enemies in a military image.

Surmounting limits

Back in Jiang's office, a visitor watches red and green dots move randomly across the computer screen. The task is to follow the red dots, even after they suddenly turn green. Tracking one isn't so hard, but tracking multiple dots is difficult. It's a test of working memory -- the ability to remember visual information after it's no longer in sight. People can improve their scores a little, but not much, says Jiang, and individuals vary widely in their working memory abilities.
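
For readers who think in code, here is a minimal sketch of a tracking trial of the kind described above; the dot counts, motion, and random-guessing "observer" are assumptions for illustration, not the design or settings of Jiang's experiments.

    import random

    # Minimal sketch of a tracking trial like the one described above. All
    # parameters (dot counts, steps, step size) are illustrative, not the
    # settings used in Jiang's experiments.

    N_DOTS = 8        # dots on screen
    N_TARGETS = 3     # dots that start out red and must be tracked
    N_STEPS = 200     # frames of random motion after the color cue vanishes

    def random_step(pos, step=0.02):
        # Nudge a dot a small random amount, keeping it inside a unit square.
        x, y = pos
        x = min(1.0, max(0.0, x + random.uniform(-step, step)))
        y = min(1.0, max(0.0, y + random.uniform(-step, step)))
        return (x, y)

    def run_trial():
        dots = [(random.random(), random.random()) for _ in range(N_DOTS)]
        targets = set(range(N_TARGETS))      # these start out red
        for _ in range(N_STEPS):             # then every dot turns green and wanders
            dots = [random_step(p) for p in dots]
        # The observer now guesses which dots were the original targets. A real
        # participant relies on visual working memory; here the guess is random,
        # which is the floor that any amount of training would have to beat.
        guess = set(random.sample(range(N_DOTS), N_TARGETS))
        return len(guess & targets) / N_TARGETS

    scores = [run_trial() for _ in range(1000)]
    print("mean proportion of targets recovered:", sum(scores) / len(scores))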

"Why are some people better than others? Why can't we make other people better with training? We're looking for answers," says Jiang. Some visual limitations can improve, however. A case in point is the reading skills of adults suffering from macular degeneration, a loss of vision in the central area of the visual field that leaves a person reliant on peripheral vision.

The trouble with relying on peripheral vision, says Legge, is that when normally sighted people look at a line of text, they can recognize eight to ten letters on each eye fixation, but in peripheral vision that span shrinks to only two or three letters. As a result, reading speed drops.

"We're working on visual training techniques that may enlarge the span and increase reading speed," he says. "With repetitive practice, people can read faster."

Division of labor

When we open our eyes, we usually see a seamless image. We get no hint that it took a great effort, but it did: numerous cells and groups of cells in our retinas and brains, each working on a different task, combine their outputs into a coherent whole. Our visual system is like a high-speed factory that puts out a new product -- the world as we see it -- several times a second.

Division of labor plays a role. To gain a complete picture of the world, our brains appear to contain separate, but physically intertwined, populations of neurons that respond to only one small aspect of our environment, such as vertical lines or motion from left to right. The brain then bases its interpretation of images largely on which neurons fire.
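
A toy sketch of the "which neurons fire" idea: a handful of units, each tuned to one line orientation, and a readout that simply reports the preference of the most active unit. The tuning curve and preferred orientations are invented for illustration.

    import math

    # Toy population of orientation-tuned units. The preferred orientations and
    # tuning width are invented for illustration; real cortical populations are
    # far larger and richer.

    PREFERRED = [0, 45, 90, 135]   # preferred line orientations, in degrees
    TUNING_WIDTH = 30.0            # assumed width of each unit's tuning curve

    def response(preferred, stimulus):
        # A unit fires hardest when the stimulus matches its preferred orientation.
        d = abs(stimulus - preferred)
        d = min(d, 180 - d)                      # orientations wrap every 180 degrees
        return math.exp(-(d / TUNING_WIDTH) ** 2)

    def decode(stimulus):
        # The "interpretation": the preference of whichever unit fires most.
        rates = {p: response(p, stimulus) for p in PREFERRED}
        return max(rates, key=rates.get)

    print(decode(80))   # a nearly vertical line is read out as 90 degrees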

Also, the brain gains efficiency by being organized into "centers." These are groups of neurons that act like many committees, each charged with processing a certain kind of visual information.

Faces, for example, get special treatment. "Many studies show that there is a specialized mechanism to deal with faces," says Professor Sheng He, and it includes an area of the cerebral cortex whose main job seems to be face recognition. In fact, so fundamental is this skill that our brains can "read" emotional information from faces without our being aware of it.

Professor He discovered this by showing images of faces displaying emotions -- for example, fear -- or neutral expressions to experimental subjects. Simultaneously, the subjects were distracted by "noise" in the rest of their visual fields and didn't realize any image was in front of them. Specific areas in their brains showed different responses to fearful and neutral faces -- a clear indication that emotional processing of images goes on below our mind's "radar."

Nor are faces the only objects that register subconsciously. Using the same technique, He has found that "invisible" images of people in erotic poses or of hand tools can also be subconsciously perceived. In the case of the tools, fMRI studies found heightened activity in the upper (dorsal) part of the cerebral cortex -- an area that plays an important role in reaching and grasping -- as the images were being presented.

Taken together, these studies reveal how evolution has given us the ability to pick out highly significant elements of the visual environment and alert emotional and action-oriented areas of our brains, all without our having to think about it. Thus we are primed to direct our attention to important objects like potential mates, weapons, or survival tools and can respond quickly.

The secret lives of neurons

If science is to find treatments for blindness, dyslexia, and other conditions, much more must be learned about how neurons function separately and together. A lot of work focuses on the part of the cerebral cortex -- the brain's intellectual powerhouse -- that first receives visual information. It is called the primary visual cortex, or V1, and is located in the occipital lobe at the back of the head. The primary visual cortex passes information to at least 30 or 40 other areas of the cortex, including areas associated with vision, conscious perception, movement, emotion, and reasoning. Knowledge of how V1 "decides" what to do is in its infancy.

Now fMRI allows researchers to watch how neurons in V1 and neighboring areas increase or decrease their activity as people perform various visual tasks. These observations yield clues to the sorts of "cues" the different neurons respond to, a first step in learning what roles they play in vision.
"How do neurons know what to respond to?" says Assistant Professor Cheryl Olman. Trained in physics, she is interested in the basic mechanisms of sight, starting with what incoming data from the eyes would look like without the filters of emotion and previous experience.
"For example, what are the cues that you're looking at a distant mountain?" she says. "A computer might mistake it for a hill of beans. I want to know how we parse scenes and link the clues together to form an idea."

Olman, working with Scott Sponheim, an Adjunct Assistant Professor, is also asking how the early stages of vision work for schizophrenic subjects. One question concerns how brain areas work together when a person tries to find patterns amid chaos -- picking out a circle of dots within a sea of dots on a computer screen, for instance.

"Activity occurs in V1 neurons when people look at a simple pattern," Olman says. But what about at the next step, when people recognize a pattern as being, say, egg-shaped or square? These are different stages of vision, and she wants to know whether the same neurons are involved in both.

Sometimes our visual environment throws us a curve, and that offers a great opportunity for learning how the primary visual cortex functions. Take, for example, the well-known moon illusion: a full moon looks bigger when it is rising or setting than it does when it's high in the sky.

We misperceive the moon's size because the brain takes into account depth, or perspective, when it is judging physical size, says Professor Daniel Kersten. Wherever the moon is, he explains, it appears the same size at the level of the retina. But when the moon is close to the horizon, our brains also get depth cues; that is, we interpret it as being farther away because it appears next to distant objects like mountains. And if it appears to be farther away, our brains interpret it as bigger.
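
Kersten's explanation can be captured in one relation: for a fixed angular size on the retina, the size the brain attributes to an object grows with how far away it seems. A minimal numeric sketch, with the perceived distances made up for illustration:

    import math

    # Size-distance scaling behind the moon illusion: same retinal (angular)
    # size, different assumed distances, different perceived sizes. The
    # perceived-distance values are arbitrary units chosen for illustration.

    MOON_ANGLE_DEG = 0.5   # the moon spans roughly half a degree of visual angle

    def perceived_size(perceived_distance):
        # Small-angle approximation: linear size = angle (in radians) * distance.
        return perceived_distance * math.radians(MOON_ANGLE_DEG)

    print(perceived_size(1.0))   # moon high in an "empty" sky
    print(perceived_size(1.5))   # horizon moon, judged farther because of terrain cues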

Kersten has computer graphics of optical illusions that work the same way as the moon illusion. In experiments, he has recorded activity in the V1 area as subjects viewed the graphics and misjudged the sizes of objects.

"My colleagues and I thought that V1 would not be depth-sensitive because it's too early [a stage in processing of visual information]," says Kersten. Yet subjects in the process of judging, and misjudging, objects showed more widespread activity in their V1 areas.

Therefore, sophisticated processing of visual information seems to begin as soon as it reaches the primary visual cortex. But, says Kersten, V1 may only be helping higher centers in the brain interpret the information. There's a lot of "crosstalk" between V1 and higher areas, and understanding this communication may open a window onto how the cerebral cortex works generally, and even onto what makes us intelligent.

Moving targets

Vision allows us to navigate our environment. For that to happen, eye movements and visual information have to be properly coupled to physical actions, and that's where Paul Schrater, a Professor of both Psychology and Computer Science and Engineering, comes in.

Schrater studies the unconscious choices we make every time we reach for something or scan the scene before us, and he is developing theories of how people ought to blend vision and movement to derive the most benefit.

For example, one key to survival is a system of perception that can use information about intended actions -- such as where your hand is going as you reach for something -- to guide subsequent eye and hand movements, says Schrater. But this chain can break down; schizophrenics, for example, have difficulty coordinating eye movements and may have trouble grasping a cup or a pen. Thus, a theory of how healthy people perform such tasks may point to ways of helping those for whom they are hard.

How people decide what to pay attention to is also an active area of Schrater's research.

"Once we understand which options people should spend time on, it points to how they should allocate their visual attention," he says. "For example, moving your eyes is a decision about how informative various spots in the visual field are. How long you stay focused on one spot is important in driving. For instance, do you check the mirrors often enough?"

Knowledge of how visual perception translates to behaviors like keeping one's eyes on the car ahead rather than checking the mirrors could help in diagnosing poor drivers' specific attention problems, he says.
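
One way to picture the "where to look next" decision Schrater describes: assign each candidate spot an expected payoff in information and fixate the best one. The locations and values below are invented for illustration, not parameters from his models.

    # Toy version of gaze as a decision: look wherever you expect to learn the
    # most. The candidate locations and their expected information values are
    # invented for illustration, not parameters from Schrater's research.

    expected_information = {
        "road ahead": 0.7,
        "rear-view mirror": 0.4,
        "left mirror": 0.2,
        "dashboard": 0.1,
    }

    next_fixation = max(expected_information, key=expected_information.get)
    print("next fixation:", next_fixation)   # -> road ahead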

Beyond the practical implications of their work, vision researchers can't help but have an even deeper effect: a sense of wonder at how the human eye and mind can make so much of so little.
"What keeps me going as a psychologist is the phenomenon that we can see a tree and immediately know it's a tree," says Olman. "It's amazing how strong and tangible perception is, even with weak incoming data."
