A Blog by Jonathan Low


Sep 6, 2018

IBM's New System Automatically Selects Optimal AI Algorithm

When the system knows what you need better than you do. And more accurately and faster, too. JL

Kyle Wiggers reports in VentureBeat:

"The evolutionary algorithm is designed to reduce the search time for the right deep learning architecture to just hours, making the optimization of deep learning network architecture affordable for everyone. (It) had slightly higher classification error but required significantly less time compared with state-of-the-art human-designed architectures, results of architecture search methods based on reinforcement learning, and results for other automated methods based on evolutionary algorithms.”
Not all deep learning systems — that is to say, systems consisting of layered nodes that ingest data, transform it, output it, and pass it on — are created equal. No algorithm is appropriate for every task, and finding the optimal one can be a long and frustrating exercise. Luckily, there’s hope: IBM developed a system that automates the process.
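As a purely illustrative picture of that layered flow, here is a minimal NumPy sketch, not taken from the article: each layer ingests its input, transforms it, and passes the result on to the next. The layer sizes and random weights are placeholder assumptions.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes: 8 inputs -> 16 hidden units -> 4 outputs.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]

def forward(x, layers):
    # Each weight matrix transforms the data and hands it to the next layer.
    for w in layers:
        x = relu(x @ w)
    return x

print(forward(rng.normal(size=(1, 8)), layers).shape)  # -> (1, 4)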
Martin Wistuba, a data scientist at IBM Research Ireland, described the method in a recent blog post and accompanying paper. He claims it’s 50,000 times faster than other approaches, with only a small increase in error rate.
“At IBM, engineers and scientists select the best architecture for a deep learning model from a large set of possible candidates. Today this is a time-consuming manual process; however, using a more powerful automated AI solution to select the neural network can save time and enable non-experts to apply deep learning faster,” he wrote. “My evolutionary algorithm is designed to reduce the search time for the right deep learning architecture to just hours, making the optimization of deep learning network architecture affordable for everyone.”
Above: A chart showing the mutations undergone by a single neural network.
Image Credit: IBM
Here’s the crux of Wistuba’s “neuro-evolutional” system: Convolutional neural network architectures are treated as sequences of “neuro-cells,” which mutate (gaining or losing layers) until a structure that improves performance on a given dataset and task is identified. The mutations don’t change the network’s predictions, so each candidate starts from its parent’s learned behavior rather than training from scratch, which substantially speeds up training time, he wrote.
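To make that search loop concrete, here is a toy Python sketch of the evolutionary idea. It is not Wistuba’s implementation: the cell encoding, the mutation operators, and the fitness() stand-in (which in practice would be a short training-and-validation run) are all hypothetical.

import copy
import random

def fitness(architecture):
    # Stand-in: in practice, briefly train the candidate network and
    # return its validation accuracy on the target dataset.
    return -abs(len(architecture) - 6) + random.random()

def mutate(architecture):
    # Return a child architecture that gains or loses one cell.
    child = copy.deepcopy(architecture)
    if random.random() < 0.5 or len(child) <= 1:
        # Insert a new cell. A function-preserving variant would initialize
        # it to the identity, leaving the network's predictions unchanged.
        child.insert(random.randrange(len(child) + 1), {"type": "conv3x3"})
    else:
        child.pop(random.randrange(len(child)))
    return child

best = [{"type": "conv3x3"}]   # trivial one-cell starting network
best_score = fitness(best)
for _ in range(100):           # evolutionary search loop
    child = mutate(best)
    score = fitness(child)
    if score > best_score:     # keep the child only if it scores better
        best, best_score = child, score

print(f"best architecture has {len(best)} cells")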
To test the method’s efficacy, he used it to select an image classification algorithm for the CIFAR-10 and CIFAR-100 datasets (labeled images made publicly available by the University of Toronto). The result?
“Accuracy increas[ed] quickly over the first 10 hours [of training], then progress [was] slow but steady afterward,” he wrote. “My algorithm had slightly higher classification error but required significantly less time compared with state-of-the-art human-designed architectures, results of architecture search methods based on reinforcement learning, and results for other automated methods based on evolutionary algorithms.”
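For readers who want a feel for the evaluation setup, here is a hedged sketch of scoring one candidate on CIFAR-10 with PyTorch and torchvision; the article does not say what tooling Wistuba used, and the tiny model below is a placeholder for a searched architecture, not his.

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# CIFAR-10: the labeled image set released by the University of Toronto.
train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

# Placeholder candidate; a searched architecture would be plugged in here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # one pass as a cheap fitness estimate
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()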
Wistuba hopes to integrate the system into IBM’s cloud services in the future and to extend it to larger datasets and additional domains, like natural language processing.
Automated algorithm selection isn’t new — it’s one of the methods Google used to improve facial recognition and object detection on smartphones — but if Wistuba’s system works as well as advertised, it could represent a significant advancement in the field.
