Feb 12, 2019
To advance artificial intelligence, reverse-engineer the brain

Your three-pound brain runs on just 20 watts of power — barely enough to light a dim bulb. Yet the machine behind our eyes has built civilizations from scratch, explored the stars, and pondered our existence. In contrast, IBM’s Watson, a supercomputer that runs on 20,000 watts, can outperform humans at calculation and “Jeopardy!” but is still no match for human intelligence.

Neither Watson nor any other artificially “intelligent” system can navigate new situations, infer what others believe, use language to communicate, write poetry and music to express how it feels, or create math to build bridges, devices, and life-saving medicines. Why not? The society that solves the problem of intelligence will lead the future, and recent progress shows how we can seize that opportunity.

Imagine human intelligence as a skyscraper. Instead of girders and concrete, this structure is built with algorithms: sequences of rules that process information, layered upon and interacting with one another like the floors of that building.

The floors above the street represent the layers of intelligence that humans have some conscious access to, like logical reasoning. These layers inspired the pursuit of artificial intelligence in the 1950s. But the most important layers are the many floors that you don’t see, in the basement and foundation. These are the algorithms of everyday intelligence that are at work every time we recognize someone we know, tune in to a single voice at a crowded party, or learn the rules of physics by playing with toys as a baby. While these subconscious layers are so embedded in our biology that they often go unnoticed, without them the entire structure of intelligence collapses.

As an engineer-turned-neuroscientist, I study the brain’s algorithms for one of these foundational layers — visual perception, or how your brain interprets your surroundings using vision. My field has recently experienced a remarkable breakthrough.

For decades, engineers built many algorithms for machine vision, yet those algorithms each fell far short of human capabilities. In parallel, cognitive scientists and neuroscientists like myself accumulated myriad measurements describing how the brain processes visual information. They described the neuron (the fundamental building block of the brain), discovered that many neurons are arranged in a specific type of multi-layered, “deep” network, and measured how neurons inside that neural network respond to images of the surroundings. They characterized how humans quickly and accurately respond to those images, and they proposed mathematical models of how neural networks might learn from experience. Yet, these approaches alone failed to uncover the brain’s algorithms for intelligent visual perception.

The key breakthrough came when researchers combined science and engineering. Specifically, some researchers began to build algorithms out of brain-like, multi-level, artificial neural networks, so that their internal responses resembled those neuroscientists had measured in the brain. They also used mathematical models proposed by scientists to teach these deep neural networks to perform visual tasks at which humans are especially good, such as recognizing objects from many perspectives.
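To make the general recipe concrete, here is a minimal sketch in Python with PyTorch. The network, layer sizes, and data below are illustrative assumptions for demonstration only, not the models or datasets used in the research described: a small multi-layered, "deep" network of artificial neurons is taught by adjusting its connections so that its answers match labeled example images.

import torch
import torch.nn as nn

# A toy "deep" network: stacked layers of simple units, each layer
# transforming the output of the layer below, loosely analogous to
# successive stages of visual processing. All sizes are illustrative.
class TinyVisualNet(nn.Module):
    def __init__(self, num_categories=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: simple local features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: more complex combinations
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_categories)  # assumes 32x32-pixel inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# "Teaching" the network: nudge its connection strengths so that its
# outputs agree with labeled example images.
model = TinyVisualNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)     # stand-in for a small batch of photographs
labels = torch.randint(0, 10, (8,))    # stand-in for object-category labels

loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step complete, loss = {loss.item():.3f}")

Real systems repeat such training steps millions of times on millions of photographs, but the core idea is the same: the network's internal layers gradually come to transform images in ways that support recognition, much as the brain's visual areas do.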

This combined approach rocketed to prominence in 2012, when computer hardware had advanced enough for engineers to build these networks and teach them using millions of visual images. Remarkably, these brain-like, artificial neural networks suddenly rivaled human visual capabilities in several domains, and as a result, concepts like self-driving cars aren’t as far-fetched as they once seemed. Using algorithms inspired by the brain, engineers have improved the ability of self-driving cars to process their environments safely and efficiently. Similarly, Facebook uses these visual recognition algorithms to recognize and tag friends in photos even faster than you can.

This deep learning revolution launched a new era in AI. It has completely reshaped technologies ranging from face, object, and speech recognition to automated language translation and autonomous driving, among many others. The technological capability of our species was revolutionized in just a few years, the blink of an eye on the timescale of human civilization.

But this is just the beginning. Deep learning algorithms resulted from new understanding of just one layer of human intelligence — visual perception. There is no limit to what can be achieved from a deeper understanding of other algorithmic layers of intelligence. As we aspire to this goal, we should heed the lesson that progress did not result from engineers and scientists working in silos; it resulted from the convergence of engineering and science. Because many possible algorithms might explain a single layer of human intelligence, engineers are searching for the proverbial needle in a haystack. However, when engineers guide their algorithm-building and testing efforts with discoveries and measurements from brain and cognitive science, we get a Cambrian explosion in AI.

This approach of working backwards from measurements of the functioning system to engineer models of how that system works is called reverse engineering. Discovering how the human brain works in the language of engineers will not only lead to transformative AI but will also illuminate new approaches to helping those who are blind, deaf, autistic, schizophrenic, or who have learning disabilities or age-related memory loss. Armed with an engineering description of the brain, scientists will see new ways to repair, educate, and augment our own minds.

The race is on to see if reverse engineering will continue to provide a faster and safer route to real AI than traditional, so-called forward engineering that ignores the brain. The winner of this race will lead the economy of the future, and the United States is positioned to seize this opportunity. But to do so, it needs significant new financial commitments from government, philanthropy, and industry devoted to supporting novel teams of scientists and engineers. In addition, universities must create new industry-university partnership models. Schools will need to train brain and cognitive scientists in engineering and computation, train engineers in the brain and cognitive sciences, and uphold mechanisms of career advancement that reward such teamwork. To advance AI, reverse engineering the brain is the way forward. The solution is right behind our eyes.

James J. DiCarlo is a professor of neuroscience and an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds and Machines.

Illustration Credit: Ellen Weinstein
