Decoding How the Brain Accurately Depicts Ever-changing Visual Landscapes

A collaborative study finds that deeper regions of the brain encode visual information more slowly, enabling the brain to identify fast-moving objects and images more accurately and persistently.

by Erica K. Brockmeier

Busy pedestrian crossing in Hong Kong

New research from the University of Pennsylvania, the Scuola Internazionale Superiore di Studi Avanzati (SISSA), and KU Leuven details the time scales of visual information processing across different regions of the brain. Using state-of-the-art experimental and analytical techniques, the researchers found that deeper regions of the brain encode visual information slowly and persistently, which provides a mechanism for explaining how the brain accurately identifies fast-moving objects and images. The findings were published in Nature Communications.

Understanding how the brain works is a major research challenge, with many theories and models developed to explain how complex information is processed and represented. One area of particular interest is vision, a major component of neural activity. In humans, for example, there is evidence that around half of the neurons in the cortex are related to vision.

Researchers are eager to understand how the visual cortex processes information about objects in motion in a way that allows people to take in dynamic scenes while still recognizing the objects around them.

“One of the biggest challenges of all the sensory systems is to maintain a consistent representation of our surroundings, despite the constant changes taking place around us. The same holds true for the visual system,” says Davide Zoccolan, director of SISSA’s Visual Neuroscience Laboratory. “Just look around us: objects, animals, people, all on the move. We ourselves are moving. This triggers rapid fluctuations in the signals acquired by the retina, and until now it was unclear whether the same type of variations apply to the deeper layers of the visual cortex, where information is integrated and processed. If this was the case, we would live in tremendous confusion.”

Experiments using static stimuli, such as photographs, have found that information from the sensory periphery is processed in the visual cortex according to a finely tuned hierarchy. Deeper regions of the brain then translate this information about visual scenes into more complex shapes, objects, and concepts. But how this process works in more dynamic, real-world settings is not well understood.

To shed light on this, the researchers analyzed neural activity patterns in multiple visual cortical areas in rodents while the animals were being shown dynamic visual stimuli. “We used three distinct datasets: one from SISSA, one from a group at KU Leuven led by Hans Op de Beeck, and one from the Allen Institute for Brain Science in Seattle,” says Zoccolan. “The visual stimuli used in each were of different types. At SISSA, we created dedicated video clips showing objects moving at different speeds. The other datasets were acquired using various kinds of clips, including from films.”

Next, the researchers analyzed the signals recorded in different areas of the visual cortex using a combination of sophisticated algorithms and models developed by Penn’s Eugenio Pasini and Vijay Balasubramanian. The two developed a theoretical framework connecting the images in the movies to the activity of specific neurons, in order to determine how neural signals evolve over different time scales.

“The art in this science was figuring out an analysis method to show that the processing of visual images is getting slower as you go deeper and deeper in the brain,” says Balasubramanian. “Different levels of the brain process information over different time scales; some things could be more stable, some quicker. It’s very hard to tell if the time scales across the brain are changing, so our contribution was to devise a method for doing this.”
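The study's actual analysis is far more sophisticated, but the underlying idea — that a brain region's processing speed can be read off from how slowly its activity decorrelates over time — can be illustrated with a minimal sketch. The snippet below uses synthetic signals, not real neural data, and all function names and parameter values are hypothetical: a signal's timescale is estimated as the lag at which its autocorrelation drops below 1/e, and a "deeper" area is stood in for by more slowly fluctuating filtered noise.

```python
import numpy as np

def autocorr_timescale(x, dt=1.0):
    """Estimate a signal's intrinsic timescale as the first lag at which
    its normalized autocorrelation falls below 1/e."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0, 1, 2, ...
    ac = ac / ac[0]                                    # normalize: ac[0] == 1
    below = np.where(ac < 1.0 / np.e)[0]
    return below[0] * dt if below.size else len(x) * dt

def filtered_noise(tau, n=20000, dt=1.0, seed=0):
    """Exponentially filtered noise with correlation time ~tau,
    a crude stand-in for a neuron's fluctuating response."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] * np.exp(-dt / tau) + rng.standard_normal()
    return x

fast = filtered_noise(tau=5.0)    # "early" visual area: quick fluctuations
slow = filtered_noise(tau=50.0)   # "deeper" area: slow, persistent encoding
print(autocorr_timescale(fast), autocorr_timescale(slow))
```

Running this prints a much larger estimated timescale for the slowly varying signal, mirroring the qualitative finding that deeper areas carry slower, more persistent representations.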

Read the full story in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences and a member of the Penn Bioengineering Graduate Group at the University of Pennsylvania.