Pocket-Sized AI Brain

A Smaller, More Efficient AI “Brain”

One of the main problems with current artificial intelligence systems is efficiency. While the human brain can do a lot with very little energy, AI systems consume huge amounts of energy and water while producing large quantities of pollution and greenhouse gas emissions. But a study published in Nature suggests a more efficient, more lifelike AI brain model may be just around the corner.

The Study

The study looked at one small part of the brain: the visual system. Specifically, it sought to create a more efficient model of one part of that system. The researchers wanted to understand how the brain translates bits of light into images it recognizes. They also wanted the model to help answer questions like “How do you recognize a cat?”

In the study, the researchers started by creating a predictive deep neural network (DNN) model of neural responses, simulating V4 neurons recorded from macaques. Visual area V4 is a midlevel stage of the visual system involved in processing features such as shape and color. The researchers trained the model in adaptive, closed-loop experiments, alternating between data collection and model training.
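To make the setup concrete, here is a minimal sketch in PyTorch of what a predictive model of this kind might look like: a small convolutional network takes a stimulus image and outputs a predicted firing rate for each recorded neuron. The study’s actual architecture isn’t reproduced here; every layer size and name below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class V4ResponseModel(nn.Module):
    """Toy convolutional network mapping a stimulus image to predicted
    firing rates for a population of recorded neurons (illustrative only)."""
    def __init__(self, num_neurons: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # One linear readout per recorded neuron.
        self.readout = nn.Linear(64 * 4 * 4, num_neurons)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images).flatten(1)
        return torch.relu(self.readout(x))  # firing rates are non-negative

model = V4ResponseModel()
images = torch.rand(8, 3, 64, 64)   # a batch of stimulus images
predicted_rates = model(images)     # shape: (8, 100)
```

In a closed-loop setup, the model’s predictions would guide which stimuli to show next, and the newly recorded responses would feed back into training.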

The original model had 60 million parameters, which the researchers then compressed into a more compact model with only 10,000 parameters, using a variety of techniques, such as weeding out redundant connections and applying statistical methods similar to those used to compress digital photographs.
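The article doesn’t spell out the exact compression pipeline, but one standard statistical technique of the kind described, analogous to how photo compression discards small transform coefficients, is low-rank approximation of a weight matrix. A hedged sketch, with all sizes illustrative:

```python
import torch

def low_rank_compress(weight: torch.Tensor, rank: int):
    """Approximate a weight matrix with two thin factors, keeping only
    the top singular values (much as transform coding in photo
    compression discards small coefficients)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape: (out, rank)
    B = Vh[:rank, :]             # shape: (rank, in)
    return A, B

W = torch.randn(1024, 1024)          # 1,048,576 parameters
A, B = low_rank_compress(W, rank=8)  # 2 * 1024 * 8 = 16,384 parameters
reconstruction = A @ B               # approximates W at ~1.6% of the size
```

Applied across a network’s layers, tricks like this (along with pruning redundant connections) can shrink a model by orders of magnitude while preserving most of its behavior.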

The Results

The result was a model small enough to send as an email attachment. And it worked more like a human brain than the researchers’ original model. Also, because the model was smaller and simpler, the researchers were able to actually see what its artificial neurons were doing.

Some of the neurons, for example, responded to shapes with curves and strong edges. Others appeared to respond only to dots within an image. The researchers believe that this specialization of V4 neurons could help explain how primates’ brains, including ours, make sense of what the eyes see, and do so efficiently, using so little energy.
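As a rough illustration of how one might inspect what individual model neurons are doing, the sketch below probes the toy model from the earlier example with simple synthetic stimuli, a dot and a strong edge, and counts which units prefer each. This is a simplified stand-in for the study’s analysis, not a reproduction of it.

```python
import torch

# Assumes the toy V4ResponseModel class from the earlier sketch is in scope.
model = V4ResponseModel()
model.eval()

def make_dot_image(size: int = 64) -> torch.Tensor:
    """A blank image with a small bright dot in the center."""
    img = torch.zeros(3, size, size)
    img[:, size // 2 - 2 : size // 2 + 2, size // 2 - 2 : size // 2 + 2] = 1.0
    return img

def make_edge_image(size: int = 64) -> torch.Tensor:
    """A blank image with one bright half, creating a strong vertical edge."""
    img = torch.zeros(3, size, size)
    img[:, :, size // 2 :] = 1.0
    return img

stimuli = torch.stack([make_dot_image(), make_edge_image()])
with torch.no_grad():
    rates = model(stimuli)          # shape: (2, num_neurons)

preferred = rates.argmax(dim=0)     # 0 = dot-preferring, 1 = edge-preferring
print((preferred == 0).sum().item(), "units respond more to the dot")
print((preferred == 1).sum().item(), "units respond more to the edge")
```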

In addition to size and efficiency, researchers say the compressed model performed in a more lifelike way than the original model. 

Possible Applications

The study’s results have implications in a number of areas: not only studying brain diseases like Alzheimer’s, but also, more generally, understanding how human brains work, and, of course, designing better, more powerful, and more efficient artificial intelligence.

One of the limitations the researchers faced in designing their model was an underlying lack of information about the inner workings of both the brain and AI systems. “We’re very impoverished in our understanding of how these AI systems work,” said Benjamin R. Cowley, one of the study’s authors and an assistant professor at Cold Spring Harbor Laboratory, “much like our own brain.”

The study showed that by taking an AI model of a very small part of the brain and compressing it down, it’s possible to learn more about both the brain and AI. And increasing our knowledge of how the brain functions can help future researchers better understand brain diseases, which could lead to more effective treatments.

Cowley said, “The compact model also appears to work more like a living brain, which could help scientists study what goes wrong in diseases like Alzheimer's.”

The fact that the simpler model was both more lifelike and more efficient also has implications for future artificial intelligence.

Cowley said, "If our brains have less complex models and yet can do more than these AI systems, that tells us something about our AI systems." 

Specifically, AI systems could be smaller and simpler than they are right now and still do a better job more efficiently. And that could mean other technologies that depend on artificial intelligence, such as self-driving cars, could one day run on less powerful computer systems. That, in turn, could mean less pollution, lower energy consumption, and possibly lower costs.

We’re Not Quite There Yet

There’s still more work to be done, of course. A lot more.

For example, AI systems still struggle to recognize people and objects that change over time. Mitya Chklovskii, a group leader at the Simons Foundation’s Flatiron Institute who was not involved in the study, gives the example of facial recognition. The human brain can easily recognize a familiar face in any setting and from different angles, even if some characteristics have changed, such as a new haircut or a suntan. AI systems, however, struggle with this sort of task even when powered by supercomputers.

Chklovskii believes that part of the problem may be that current AI models are based on last century’s understanding of the human brain. 

"Since then, we [have] learned a lot more about the brain," he says. "So maybe we should update the foundations of the artificial networks."

Ultimately, though, the study provided an important peek into how both primate brains and artificial intelligence work.