A Blog by Jonathan Low

 

Jul 11, 2017

How an Army of Humans Wielding Phones Is Teaching Artificial Intelligence To Drive

For the artificial intelligence steering cars to learn, it needs tens of millions of labeled images.

To speed up that learning, as many as 200,000 part-time contractors are sent images, which they annotate. Those annotations are fed into the models so that the AI develops a clearer sense of how to differentiate, say, a tree from a traffic light.

And you thought humans were redundant! JL

Jack Stewart reports in Wired:


The onboard cameras helping prototype robocars navigate the world photograph almost every environment and circumstance you can imagine. Automakers and tech companies send those photos by the millions to an outfit like Mighty AI, which makes a game of identifying everything in those photos. Those millions of annotated photos help an AI identify patterns. Eventually AI will grow smart enough to identify, say, kangaroos. Relying on an army of amateurs remains the most efficient way of training AI.
As her fellow patients read dog-eared magazines or swipe through Instagram, Shari Forrest opens an app on her phone and gets busy training artificial intelligence.
Forrest isn’t an engineer or programmer. She writes textbooks for a living. But when the 54-year-old from suburban St. Louis needs a break or has a free moment, she logs on to Mighty AI, and whiles away her time identifying pedestrians and trash cans and other things you don't want driverless cars running into. “If I am sitting waiting for a doctor's appointment and I can make a few pennies, that’s not a bad deal,” she says.
The work is a pleasant distraction for Forrest, but absolutely essential to the coming age of driverless cars. The volume of data needed to train the AI underpinning those vehicles staggers the imagination. The Googles and GMs of the world rarely mention it, but their shiny machines and humming data centers rely on a growing, and global, army of people like Forrest to help provide it.
You've probably heard by now that almost everyone expects AI to revolutionize almost everything. Automakers in particular love this idea, because robocars promise to increase safety, reduce congestion, and generally make life easier. “The automotive space is one of the hottest and most advanced fields applying machine learning,” says Matt Bencke, CEO of Mighty AI. He won't name names, but claims his company is working with at least 10 automakers.
The challenge lies in teaching a computer how to drive. The DMV rule book provides a good place to start, because it covers rudimentary things like "Yield to pedestrians." Ah, but what does a pedestrian look like? Well, a pedestrian usually has two legs. But a skirt can make two legs look like one. What about a fellow in a wheelchair, or a mother pushing a stroller? Is that a small child, or a large dog? Or a trash can? Any artificial intelligence controlling a two-ton chunk of steel must learn how to identify such things, and make sense of an often confusing world. This is second nature for humans, but utterly foreign to a computer.
Cue Forrest and 200,000 other Mighty AI users around the world.
The onboard cameras helping prototype robocars navigate the world photograph almost every environment and circumstance you can imagine. Automakers and tech companies send those photos by the millions to an outfit like Mighty AI, which makes a game of identifying everything in those photos. It sounds tedious, but Mighty AI makes it a 10-minute task with points, skills, and level-ups to keep it engaging. “It’s more like Candy Crush than a labor farm,” says Bencke. The monetary rewards, although small, help, too.
Forrest carefully draws a box around every person in each picture, then around every approaching car, and then around the tires on each car. That done, she zooms in, and working pixel-by-pixel, meticulously outlines things like trees. Click, click, click. She selects a different color pointer and highlights traffic lights, a telegraph pole, a safety cone. When she’s finished, the scene is annotated in language a computer understands. Engineers call it a "semantic segmentation mask."
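To make the idea concrete, here is a minimal sketch of what a semantic segmentation mask amounts to in code: an image-sized grid where every pixel holds a class ID. The class names and numbers below are purely illustrative; Mighty AI's actual label taxonomy and data format aren't public.

```python
import numpy as np

# Hypothetical class IDs -- illustrative only, not Mighty AI's real taxonomy.
CLASSES = {"background": 0, "pedestrian": 1, "car": 2, "tire": 3,
           "tree": 4, "traffic_light": 5, "pole": 6, "safety_cone": 7}

# A semantic segmentation mask is the same height and width as the photo,
# with one integer per pixel saying what that pixel belongs to.
height, width = 720, 1280
mask = np.zeros((height, width), dtype=np.uint8)  # everything starts as background

# An annotator's box around a pedestrian becomes a block of "pedestrian" pixels...
mask[300:550, 600:680] = CLASSES["pedestrian"]
# ...and a pixel-by-pixel outline of a tree fills in its own region.
mask[0:200, 0:150] = CLASSES["tree"]

# The (photo, mask) pair is what the training system consumes: the photo is
# the question, the mask is the human-provided answer key.
print(np.unique(mask))  # -> [0 1 4]
```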
The need for accuracy makes for painstaking work, but Forrest, who makes a few cents per picture, enjoys it. “It’s like why some adults color,” she says. “It’s become a relaxing task.”
Those millions of annotated photos help an AI identify patterns that help it understand, say, what a human looks like. Eventually the AI grows smart enough to draw boxes around pedestrians. People like Forrest will help double-check the AI's work. Over time, AI will grow smart enough to reliably identify, say, kangaroos.
Relying on an army of amateurs might seem odd, but it remains the most efficient way of training AI. “It’s pretty much the only way,” says Premkumar Natarajan, who specializes in computer vision at the USC Information Sciences Institute. He’s been working in the field for more than two decades.
There's been some promising research into so-called unsupervised learning, where computers learn with minimal input, but for now the intelligence in artificial intelligence depends on the quality of the data it's trained on.
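The dependence on labeled data comes from the supervised training loop itself: the model is only ever corrected against the human-drawn masks, so noisy or sparse labels cap what it can learn. Below is a minimal sketch of that loop, assuming PyTorch; the network, data, and names (SegmentationNet, labeled_photos) are toy stand-ins, not anyone's real pipeline.

```python
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Toy stand-in for a real per-pixel segmentation network."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.conv = nn.Conv2d(3, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        return self.conv(x)

model = SegmentationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# In practice this would be millions of (camera image, annotator mask) pairs;
# here a single random batch stands in for them.
labeled_photos = [(torch.rand(1, 3, 64, 64), torch.randint(0, 8, (1, 64, 64)))]

for image, mask in labeled_photos:
    pred = model(image)           # per-pixel class predictions
    loss = loss_fn(pred, mask)    # penalized wherever it disagrees with the human label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The only "teacher" in this loop is the annotator's mask, which is why the article calls the crowd of labelers essential rather than incidental.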
Bencke says his platform uses its own machine learning to determine what each member of the Mighty AI community is best at, then assigns them those jobs. No one is getting rich doing this essential work, but for Forrest, that's beside the point.
She says she made about $300 last year, money she put toward online shopping. She's never seen an autonomous vehicle, much less ridden in one. But knowing that she's helping make them smarter will make her more likely to trust the technology when she finally does.
