Computational model provides insight into function of brain’s visual cortex

Computational neuroscientists at Carnegie Mellon University have developed a computational model that provides insight into the function of the brain’s visual cortex and the information processing that enables people to perceive contours and surfaces, and understand what they see in the world around them.

Visual neurons known as simple cells can detect lines, or edges, but the computation they perform is insufficient to make sense of natural scenes, said Michael S. Lewicki, associate professor in Carnegie Mellon's Computer Science Department and the Center for the Neural Basis of Cognition. Edges often are obscured by variations in the foreground and background surfaces within a scene, he said, so more sophisticated processing is necessary to understand the complete picture. But little is known about how the visual system accomplishes this feat.

In a paper published online by the journal Nature, Lewicki and his graduate student, Yan Karklin, outline their computational model of this visual processing. The model employs an algorithm that analyzes the myriad local patterns that compose natural scenes and statistically characterizes them to determine which patterns are most likely to be associated with each other.
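The general idea of associating image patches by their statistics, rather than their raw pixel values, can be sketched in a toy example. This is only an illustration of the approach described above, not the authors' actual model; the two synthetic "textures," the roughness summary, and the threshold are all assumptions chosen for the demonstration.

```python
# Illustrative sketch: patches drawn from the same "texture" share a
# statistical structure even when their pixel values differ, so a
# summary statistic can group patches that belong together.
import numpy as np

rng = np.random.default_rng(0)

def sample_patches(cov, n, side=8):
    """Draw n flattened side x side patches from a zero-mean Gaussian
    with the given pixel covariance."""
    return rng.multivariate_normal(np.zeros(side * side), cov, size=n)

d = 64  # 8 x 8 patches, flattened
# Two synthetic "textures": smooth (strongly correlated neighboring
# pixels, like bark) versus uncorrelated noise (like foliage clutter).
smooth_cov = np.fromfunction(lambda i, j: 0.95 ** np.abs(i - j), (d, d))
noisy_cov = np.eye(d)

smooth = sample_patches(smooth_cov, 200)
noisy = sample_patches(noisy_cov, 200)

def roughness(patches):
    """Variance of local pixel differences: low for smooth textures,
    high for noisy ones."""
    return np.var(np.diff(patches, axis=1), axis=1)

threshold = 1.0  # chosen by inspecting the two roughness distributions
smooth_grouped = np.mean(roughness(smooth) < threshold)
noisy_grouped = np.mean(roughness(noisy) >= threshold)
print(f"smooth patches grouped correctly: {smooth_grouped:.2f}")
print(f"noisy patches grouped correctly: {noisy_grouped:.2f}")
```

Even this crude second-order summary separates the two patch populations almost perfectly; the model described in the article learns far richer statistical characterizations, but the principle of grouping by shared statistics is the same.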

The bark of a tree, for instance, is composed of a multitude of different local image patterns, but the computational model can determine that all of these patches represent bark and belong to the same tree, and that they are not part of a bush in the foreground or the hill behind it.

“Our model takes a statistical approach to making these generalizations about each patch in the image,” said Lewicki, who currently is on sabbatical at the Institute for Advanced Study in Berlin. “We don’t know if the visual system computes exactly in this way, but it is behaving as if it is.”

Lewicki and Karklin report that the response of their model neurons to images used in physiological experiments matches well with the response of neurons in higher visual stages. These “complex cells,” so-called for their more complex response properties, have been extensively studied, but the role they play in visual processing has been elusive. “We were astonished that the model reproduced so many of the properties of these cells just as a result of solving this computational problem,” Lewicki said.

The human brain makes these interpretations of visual stimuli effortlessly, but computer scientists have long struggled to program computers to do the same. “We don’t have computer vision algorithms that function as well as the brain,” Lewicki said, so computers often have trouble recognizing objects, understanding their three-dimensional nature and appreciating how the objects they see are juxtaposed across a landscape. A deeper understanding of how the brain perceives the world could translate into improved computer vision systems.

In the meantime, the functional explanation of complex cells suggested by the computer model will enable scientists to develop new ways of investigating the visual system and other brain areas. “It’s still a theory, after all, so naturally you want to test it further,” Lewicki noted. But if the model is confirmed, it could establish a new paradigm for how we derive the general from the specific.

