AI's Big Challenge
The recently signed executive order establishing the American AI Initiative correctly identifies artificial intelligence as central to American competitiveness and national defense. However, it is far from clear that AI has accomplished anywhere near as much as many have claimed. Indeed, current technology offers no convincing demonstration of anything remotely approaching “intelligence.”
To maintain U.S. supremacy in AI, we should adopt a strategy that hews more closely to the way humans learn. Doing so will put us on the surest path to the economic growth and widespread social benefits promised by full-fledged artificial intelligence.
Here’s the challenge with most deep learning neural networks, which reflect the prevailing approach to AI: calling them both deep and intelligent assumes they achieve ever more abstract and meaningful representations of the data at deeper and deeper levels of the network. It further assumes that at some point they transcend rote memorization to achieve actual cognition, or intelligence. But they do not.
Consider computer vision, where deep neural networks have achieved stunning performance improvements on benchmark image-categorization tasks. Say we task our computer vision algorithm with labeling images as either cats or dogs. If it labels the images correctly, we might conclude that the underlying deep neural network has learned to distinguish cats from dogs.
Now suppose all of the dogs are wearing shiny metallic dog tags and none of the cats are wearing cat tags. Most likely, the deep neural network didn’t learn to see cats and dogs at all but simply learned to detect shiny metallic tags. Recent work has shown that something like this actually underpins the performance of deep neural networks on computer vision tasks. The explanation may not be as obvious as shiny metallic tags, but most academic data sets contain analogous unintentional cues that deep learning algorithms exploit.
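To make the idea of an unintentional cue concrete, here is a minimal toy sketch, my own illustration rather than anything from the studies mentioned above, in which a simple classifier earns near-perfect training accuracy by leaning on a spurious “tag” feature and then collapses once that shortcut disappears. The feature names and data are invented for the example.

```python
# Toy illustration of shortcut learning: a spurious "tag" feature perfectly
# predicts the label during training, so the model relies on it instead of
# the weak "genuine" feature and falls apart once the shortcut is removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, tag_matches_label):
    labels = rng.integers(0, 2, n)                  # 0 = cat, 1 = dog
    genuine = labels + rng.normal(0, 2.0, n)        # weak, noisy "real" signal
    tags = labels if tag_matches_label else rng.integers(0, 2, n)  # spurious cue
    return np.column_stack([genuine, tags]), labels

X_train, y_train = make_data(5000, tag_matches_label=True)
X_test, y_test = make_data(5000, tag_matches_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near 1.0, thanks to the tag
print("test accuracy:", model.score(X_test, y_test))     # far lower once the cue is gone
```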
Adversarial examples, which are designed to foil neural networks, provide even more disturbing evidence that deep neural networks might not be “seeing” at all but merely detecting superficial image features. In a nutshell, adversarial examples are created by running in reverse the same computational tools used to train a deep neural network. Researchers have found that very slight modifications to an image, imperceptible to humans, can trick a deep neural network into misclassifying it, often radically.
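For readers who want to see what “running in reverse” can look like in practice, here is a minimal sketch of one common gradient-based attack, the fast gradient sign method (FGSM). I am not claiming this is the exact method behind the findings above, and `model`, `image` and `true_label` are assumed placeholders for a trained PyTorch classifier, a batched input image and its correct label.

```python
# A minimal FGSM-style sketch: instead of using the loss gradient to update
# the network's weights, we take the gradient with respect to the *input*
# and nudge each pixel slightly in the direction that increases the loss.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` (shape [1, C, H, W])
    that the classifier tends to mislabel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                                  # gradient w.r.t. the input
    perturbed = image + epsilon * image.grad.sign()  # imperceptible nudge
    return perturbed.clamp(0, 1).detach()            # keep pixels in a valid range
```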
The problem, it turns out, is one of computational misdirection. Adding or deleting just a few pixels can eliminate a particular cue that the deep neural network has learned to depend on. More fundamentally, this error demonstrates that deep neural networks rely on superficial image features that typically lack meaning, at least to humans.
That creates an opportunity for serious mischief by bad actors using targeted adversarial examples. If you’re counting on consistent image recognition, for example in self-driving cars that must read road signs, or in security systems that match fingerprints … you’re in trouble.
This flaw is built into the architecture. Recent research in Israel led by Naftali Tishby has found that a deep neural network selectively drops non-essential information at each layer. A fully trained deep neural network has thrown away so much information and become so dependent on a few key superficial features (the “shiny metallic tags”) that it has lost all semblance of intelligence. Deep learning is more accurately described as deep forgetting.
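For context, Tishby’s analysis is framed in terms of the information bottleneck principle, stated here in its standard form as background rather than as the article’s own formulation: each internal representation T should compress the input X while keeping only what is informative about the label Y, roughly

\[ \min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y), \]

where \( I(\cdot;\cdot) \) denotes mutual information and \( \beta \) sets how much predictive information is worth keeping. In this view, “deep forgetting” is the compression term doing its job.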
Even more damning, deep neural networks exhibit no capacity to learn by analogy, the basis of all intelligence. For example, humans and other animals use analogy to learn that the world consists of objects that possess common attributes. Whether it’s a rock, an apple or a baseball, all such objects fall to the ground because they obey the laws of an intuitive physics learned during the development of intelligence.
Researchers at Brown University recently tested whether deep neural networks could learn by analogy. The team found that the networks failed to learn the concept of sameness. Instead of abstracting, by analogy, the underlying concept that linked the similar images in the training set, the deep neural networks simply memorized a set of templates for labeling those images correctly. The networks gained no capacity to generalize outside the training data.
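As a purely hypothetical sketch, and not the Brown team’s actual experiment, the kind of generalization at issue can be pictured as a same/different task in which the test pairs are built from shapes the network never saw during training; the shape names below are invented for illustration.

```python
# Hypothetical same/different split: the relation ("same" vs. "different")
# is identical in training and test, but the test items use novel shapes.
# A template-memorizing model can ace the training pairs yet fail the test
# pairs, because it never learned the abstract concept of sameness.
import numpy as np

rng = np.random.default_rng(0)
TRAIN_SHAPES = ["circle", "square", "triangle"]
TEST_SHAPES = ["star", "hexagon"]            # never seen in training

def make_pairs(shapes, n):
    pairs = []
    for _ in range(n):
        if rng.random() < 0.5:               # a "same" example
            s = rng.choice(shapes)
            pairs.append(((s, s), 1))
        else:                                 # a "different" example
            a, b = rng.choice(shapes, size=2, replace=False)
            pairs.append(((a, b), 0))
    return pairs

train_pairs = make_pairs(TRAIN_SHAPES, 1000)
test_pairs = make_pairs(TEST_SHAPES, 200)
```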
It is difficult to imagine a more searing indictment of deep learning than the inability to learn by analogy. Essentially all cognitive development rests on learning and abstracting the principles underlying a set of concrete examples. The failure, thus far, of deep learning to do so reveals the emptiness behind the facade of intelligence presented by current AI systems.
By jumping over the long, slow process of cognitive development and instead focusing on solving specific tasks with high commercial or marketing value, we have robbed AI of any ability to process information in an intelligent manner.
When truly intelligent machines do finally arise—and they probably will—they will not be anything like deep neural networks or other current AI algorithms. The path ahead runs through systems that mimic biology. Like their biological counterparts, intelligent machines must learn by analogy to acquire an intuitive understanding of the physical phenomena around them. To go forward into this future, we must first go backward and grant our machines a period of infancy in which to stumble through the structure of the world and discover the intuitive physics upon which all intelligent inference depends.