Derek Thompson reports in The Atlantic:
Can artificial intelligence be smarter than a person? Answering that question often hinges on the definition of artificial intelligence. But it might make more sense, instead, to focus on defining what we mean by “smart.”
In the 1950s, the psychologist J. P. Guilford divided creative thought into two categories: convergent thinking and divergent thinking. Convergent thinking, which Guilford defined as the ability to answer questions correctly, is predominantly a display of memory and logic. Divergent thinking, the ability to generate many potential answers from a single problem or question, shows a flair for curiosity, an ability to think “outside the box.” It’s the difference between remembering the capital of Austria and figuring out how to start a thriving business in Vienna without knowing a lick of German.
When most people think of AI’s relative strengths over humans, they think of its convergent intelligence. With superior memory capacity and processing power, computers outperform people at rules-based games, complex calculations, and data storage: chess, advanced math, and Jeopardy. What computers lack, some might say, is any form of imagination, or rule-breaking curiosity—that is, divergence.

But what if that common view is wrong? What if AI’s real comparative advantage over humans is precisely its divergent intelligence—its creative potential?

One of the more interesting applications of AI today is a field called generative design, where a machine is fed oodles of data and asked to come up with hundreds or thousands of designs that meet specific criteria. It is, essentially, an exercise in divergent potential.
For example, when the architecture-software firm Autodesk wanted to design a new office, it asked its employees what they wanted from the ideal workplace: How much light? Or privacy? Or open space? Programmers entered these survey responses into the AI, and the generative-design technology produced more than 10,000 different blueprints. Then human architects took their favorite details from these computer-generated designs to build the world’s first large-scale office created using AI.
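To make that generate-and-filter loop concrete, here is a minimal Python sketch of the idea. The criterion names echo the Autodesk survey questions, but the weights, the random candidate generator, and the scoring rule are all hypothetical placeholders, not Autodesk's actual method.

```python
import random

# Hypothetical survey-derived preferences; the weights are illustrative, not Autodesk's.
PREFERENCES = {"light": 0.5, "privacy": 0.3, "open_space": 0.2}

def random_layout():
    """Propose one candidate office layout with made-up attribute scores in [0, 1]."""
    return {name: random.random() for name in PREFERENCES}

def score(layout):
    """Rate a layout by how well it matches the weighted preferences."""
    return sum(PREFERENCES[name] * value for name, value in layout.items())

def generate_designs(n_candidates=10_000, keep=20):
    """Divergent step: propose thousands of layouts.
    Convergent step: keep only the best few for human architects to review."""
    candidates = [random_layout() for _ in range(n_candidates)]
    return sorted(candidates, key=score, reverse=True)[:keep]

if __name__ == "__main__":
    for layout in generate_designs(keep=3):
        print({k: round(v, 2) for k, v in layout.items()}, "->", round(score(layout), 2))
```

A real generative-design system searches a structured space of parameters and simulates physical constraints rather than drawing random numbers, but the divergent-then-convergent shape of the loop is the same: produce far more candidates than a human team could, then hand the best few back to people.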
“Generative design is like working with an all-powerful, really painfully stupid genie,” said Astro Teller, the head of X, the secret research lab at Google’s parent company Alphabet. That is, it can be both magical and mind-numbingly over-literal. So I asked Teller where companies could use this painfully dense genie. “Everywhere!” he said. Most importantly, generative design could help biologists simulate the effect of new drugs without putting sick humans at risk. By testing thousands of variations of a new medicine in a biological simulator, we could one day design drugs the way we design commercial airplanes—by exhaustively testing their specifications before we put them in the air with several hundred passengers.
AI’s divergent potential is one of the hottest subjects in the field. This spring, several dozen computer scientists published an unusual paper on the history of AI. This paper was not a work of research. It was a collection of stories—some ominous, some hilarious—that showed AI shocking its own designers with its ingenuity. Most of the stories involved a kind of AI called machine learning, where programmers give the computer data and a problem to solve without explicit instructions, in the hopes that the algorithm will figure out how to answer it.
First, an ominous example. One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its own memory and count it as a perfect score. So the AI crashed the plane, over and over again, presumably killing all the virtual people on board. This is the sort of nefarious rules-hacking that makes AI alarmists fear that a sentient AI could ultimately destroy mankind. (To be clear, there is a cavernous gap between a simulator snafu and SkyNet.)

But the benign examples were just as interesting. In one test of locomotion, a simulated robot was programmed to travel forward as quickly as possible. But instead of building legs and walking, it built itself into a tall tower and fell forward. How is growing tall and falling on your face anything like walking? Well, both cover a horizontal distance pretty quickly. And the AI took its task very, very literally.
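The landing anecdote is, at bottom, an overflow story: the crash produced a force too large for the memory that stored it, so the recorded value wrapped around to something the "minimize force" objective read as ideal. The paper does not say what data type the simulator used, so the 16-bit register in this sketch is purely illustrative.

```python
REGISTER_MAX = 2**16  # toy 16-bit register standing in for the simulator's memory

def recorded_landing_force(true_force: int) -> int:
    """Store the landing force in a fixed-width counter; values too large
    for the register wrap around toward zero (classic integer overflow)."""
    return true_force % REGISTER_MAX

# A gentle landing records a modest force, but a violent crash overflows
# the counter and wraps to a tiny number, which a "minimize the landing
# force" objective mistakes for a perfect score.
print(recorded_landing_force(1_200))    # 1200  (honest landing)
print(recorded_landing_force(65_537))   # 1     (crash that wraps around)
print(recorded_landing_force(2**16))    # 0     ("perfect" landing, by crashing)
```

Once the measurement wraps, crashing genuinely is the optimal policy as far as the learner can tell; the bug is in the reward plumbing, not in the optimizer.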
According to Janelle Shane, a research scientist who publishes a website about artificial intelligence, there is an eerie genius to this forward-falling strategy. “After I had posted [this paper] online, I heard from some biologists who said, ‘Oh yeah, wheat uses this strategy to propagate!’” she told me. “At the end of each season, these tall stalks of wheat fall over, and their seeds land just a little bit farther from where the wheat stalk heads started.”
From the perspective of the computer programmer, the AI failed to walk. But from the perspective of the AI, it rapidly mutated in a simulated environment to discover something that had taken wheat stalks millions of years to learn: Why walk, when you can just fall? A relatable sentiment.
The stories in this paper are not just evidence of the dim-wittedness of artificial intelligence. In fact, they are evidence of the opposite: A divergent intelligence that mimics biology. “These anecdotes thus serve as evidence that evolution, whether biological or computational, is inherently creative and should routinely be expected to surprise, delight, and even outwit us,” the lead authors write in the conclusion. Sometimes, a machine is more clever than its makers.
This is not to say that AI displays what psychologists would call human creativity. These machines cannot turn themselves on, or become self-motivated, or ask alternate questions, or even explain their discoveries. Without consciousness or comprehension, a creature cannot be truly creative.
But if AI, and machine learning in particular, does not think as a person does, perhaps it’s more accurate to say it evolves, as an organism can. Consider the familiar two-step of evolution. With mutation, genes diverge from their preexisting structure. With natural selection, organisms converge on the mutation best adapted to their environment. Thus, evolutionary biology displays a divergent and convergent intelligence that is a far better metaphor for the process of machine learning, like generative design, than the tangle of human thought.
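That two-step maps directly onto the simplest kind of evolutionary algorithm. The sketch below is a toy illustration rather than any particular system from the article: the "organism" is a single number, the "environment" is a fixed target, and fitness is just closeness to that target.

```python
import random

TARGET = 42.0          # the "environment" the population must adapt to
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_SCALE = 1.0

def fitness(x: float) -> float:
    """Closer to the target means fitter (higher is better)."""
    return -abs(x - TARGET)

def evolve() -> float:
    # Start from a random population of candidate "organisms".
    population = [random.uniform(-100, 100) for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Divergent step (mutation): each survivor spawns two perturbed offspring.
        offspring = [x + random.gauss(0, MUTATION_SCALE)
                     for x in population for _ in range(2)]
        # Convergent step (selection): keep only the best-adapted candidates.
        population = sorted(population + offspring, key=fitness, reverse=True)[:POPULATION_SIZE]
    return population[0]

if __name__ == "__main__":
    print(evolve())   # drifts toward 42.0 over the generations
```

Mutation supplies the divergence, selection supplies the convergence, and nothing in the loop resembles deliberate human reasoning, which is the article's point.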
AI might not be “smart” in a human sense of the word. But it has already shown that it can perform an eerie simulation of evolution. And that is a spooky kind of genius.