Selecting an architecture is a critical step in crafting any AI model, but it’s easier said than done. Except for architectures generated by “AutoML” systems that work from a basic task outline, architecture design is informed by a combination of historical precedent, domain knowledge, and trial and error.
Amazon researchers believe there’s a better way — namely one involving computational methods that generate tailored architectures. In a paper (“On the Bounds of Function Approximations”) presented last week at the International Conference on Artificial Neural Networks in Munich, they explore techniques that apply to any computational model, provided the model can compute the same functions a Turing machine can. (In this context, a Turing machine refers to a model defining an abstract machine that manipulates symbols according to rules.)
“Selection of a neural architecture is unlikely to provide the best solution to a given machine learning problem, regardless of the learning algorithm used, the architecture selected, or the tuning of training parameters such as batch size or learning rate,” said Adrian de Wynter, a research engineer with Alexa AI’s Machine Learning Platform Services organization and a lead author on the paper. “Only by considering a vast space of possibilities can we identify an architecture that comes with theoretical guarantees on the accuracy of its computations.”
To this end, the team evaluates solutions to the function approximation problem, a mathematical abstraction of the way AI algorithms search through parameters to approximate the outputs of a target function. They reformulate it as the problem of finding a sequence of known functions that approximates the outputs of a target function, which they say makes the resulting system easier to model and analyze.
The researchers’ study suggests the components of an AI model should be selected so that they guarantee Turing equivalence, and it suggests that models are best identified through an automated search that designs architectures for particular tasks. Such searches begin by generating candidate algorithms for solving a problem; the best-performing candidates are then combined with one another and evaluated again.
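The generate-select-recombine loop described above is the core of a genetic algorithm. The sketch below is purely illustrative — the candidate encoding (a list of layer widths) and the fitness proxy are invented for demonstration and are not taken from the paper.

```python
import random

# Hypothetical fitness proxy: prefer candidate "architectures" (lists of
# layer widths) whose total width is close to a target of 64. A real
# search would instead train and score each candidate on the task.
def fitness(candidate):
    return -abs(sum(candidate) - 64)

def mutate(candidate):
    # Randomly nudge one layer width, keeping it at least 1.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] = max(1, child[i] + random.choice([-4, 4]))
    return child

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=50, genome_len=4):
    # Generate an initial population of random candidates.
    population = [[random.randint(1, 32) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best-performing half, then refill the population with
        # mutated recombinations of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because survivors carry over unchanged, the best score found never degrades from one generation to the next — a common design choice (elitism) in searches like this.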
“The paper’s … immediately applicable result is the identification of genetic algorithms — and, more specifically, coevolutionary algorithms … whose performance metric depends on their interactions with each other — as the most practical way to find an optimal (or nearly optimal) architecture,” wrote de Wynter. “Based on experience, many researchers have come to the conclusion that coevolutionary algorithms provide the best way to build machine learning systems. But the function-approximation framework from my paper helps provide a more secure theoretical foundation for their intuition.”
Amazon isn’t the only one advocating evolutionary approaches to AI architecture searches. In July, Uber open-sourced a dev library for evolutionary algorithms dubbed EvoGrad. And last October, Google introduced AdaNet, a tool for combining machine learning algorithms to achieve better predictive insights.