A Blog by Jonathan Low

 

Nov 21, 2019

An Ultra-fast AI Chip Is Being Used To Identify Better Cancer Drugs

Simultaneous advances in AI software and hardware are leading to healthcare breakthroughs. JL

Karen Hao reports in MIT Technology Review:

Deep-learning algorithms excel at quickly finding patterns in reams of data, which has sped up scientific discovery. (But) inefficiencies cap the speed at which the chips can run deep-learning algorithms and cause them to soak up huge amounts of energy. A new computer accelerates the training of deep-learning algorithms by orders of magnitude. The computer, which houses the world's largest chip, is part of a new generation of specialized AI hardware. The goal is to develop a deep-learning model that can predict how a tumor might respond to a drug or combination of drugs. The model can be used to develop new drug candidates for a specific tumor, or to predict the effects of a single drug on many different tumors.
At Argonne National Laboratory, roughly 30 miles from downtown Chicago, scientists try to understand the origin and evolution of the universe, create longer-lasting batteries, and develop precision cancer drugs.
All these different problems have one thing in common: they are tough because of their sheer scale. In drug discovery, it's estimated that there could be more potential drug-like molecules than there are atoms in the solar system. Searching such a vast space of possibilities within human time scales requires powerful and fast computation. Until recently, such computation was unavailable, making the task pretty much intractable.
But in the last few years, AI has changed the game. Deep-learning algorithms excel at quickly finding patterns in reams of data, which has sped up key processes in scientific discovery. Now, along with these software improvements, a hardware revolution is also on the horizon.
Yesterday Argonne announced that it has begun to test a new computer from the startup Cerebras that promises to accelerate the training of deep-learning algorithms by orders of magnitude. The computer, which houses the world’s largest chip, is part of a new generation of specialized AI hardware that is only now being put to use.
“We’re interested in accelerating the AI applications that we have for scientific problems,” says Rick Stevens, Argonne’s associate lab director for computing, environment, and life sciences. “We have huge amounts of data and big models, and we’re interested in pushing their performance.”
Currently, the most common chips used in deep learning are known as graphics processing units, or GPUs. GPUs are great parallel processors. Before their adoption by the AI world, they were widely used for games and graphics production. By coincidence, the same characteristics that allow them to quickly render pixels are also the ones that make them the preferred choice for deep learning.
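To make the parallelism point concrete, here is a minimal sketch (mine, not from the article) in Python with NumPy: the bulk of deep learning's arithmetic boils down to matrix multiplication, where every output element is an independent dot product, which is exactly the workload a massively parallel chip is built for. Shapes below are purely illustrative.

    import numpy as np

    # A dense layer's forward pass is a matrix multiply: each output
    # element is an independent dot product, so thousands of them can
    # be computed at once on parallel hardware.
    batch, n_in, n_out = 64, 1024, 512
    x = np.random.randn(batch, n_in).astype(np.float32)  # activations
    w = np.random.randn(n_in, n_out).astype(np.float32)  # weights

    # Sequential view: one output element at a time.
    y_slow = np.empty((batch, n_out), dtype=np.float32)
    for i in range(batch):
        for j in range(n_out):
            y_slow[i, j] = x[i] @ w[:, j]

    # Parallel-friendly view: a single matmul the hardware can spread
    # across thousands of cores at once.
    y_fast = x @ w

    assert np.allclose(y_slow, y_fast, atol=1e-2)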
But fundamentally, GPUs are general purpose; while they have successfully powered this decade’s AI revolution, their designs are not optimized for the task. These inefficiencies cap the speed at which the chips can run deep-learning algorithms and cause them to soak up huge amounts of energy in the process.
In response, companies have raced to design new chip architectures specially suited for AI. Done well, such chips have the potential to train deep-learning models up to 1,000 times faster than GPUs, with far less energy. Cerebras is one of a long list of companies that have jumped to capitalize on the opportunity. Others include startups like Graphcore, SambaNova, and Groq, and incumbents like Intel and Nvidia.
A successful new AI chip will have to meet several criteria, says Stevens. At a minimum, it has to be 10 to 100 times faster than general-purpose processors when working with the lab's AI models. Many of the specialized chips are optimized for commercial deep-learning applications, like computer vision and language, but may not perform as well when handling the kinds of data common in scientific research. "We have a lot of higher-dimensional data sets," Stevens says, referring to sets that weave together massive, disparate data sources and are far more complex to process than a two-dimensional photo.
The chip must also be reliable and easy to use. “We’ve got thousands of people doing deep learning at the lab, and not everybody’s a ninja programmer,” says Stevens. “Can people use the chip without having to spend time learning something new on the coding side?”
Thus far, Cerebras’s computer has checked all the boxes. Thanks to its chip size—it is larger than an iPad and has 1.2 trillion transistors for making calculations—it isn’t necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours. “We want to be able to train these models fast enough so the scientist that’s doing the training still remembers what the question was when they started,” says Stevens.
Initially, Argonne has been testing the computer on its cancer drug research. The goal is to develop a deep-learning model that can predict how a tumor might respond to a drug or combination of drugs. The model can then be used in one of two ways: to develop new drug candidates that could have desired effects on a specific tumor, or to predict the effects of a single drug candidate on many different types of tumors.
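In code terms, such a model is just a function from a tumor representation and a drug representation to a predicted response score; the two deployment modes differ only in which argument is held fixed. Here is a toy Python sketch of that interface, with the model stubbed out by random weights; the real architecture, feature sets, and training data are not described in the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for the trained drug-response model: a linear scorer
    # with random weights. A real model would be a deep network
    # trained on experimental tumor/drug screening data.
    N_TUMOR, N_DRUG = 128, 64
    W = rng.normal(size=N_TUMOR + N_DRUG)

    def predict_response(tumor_vec, drug_vec):
        """Score one tumor/drug pair (higher = stronger response)."""
        return float(np.concatenate([tumor_vec, drug_vec]) @ W)

    tumors = rng.normal(size=(1000, N_TUMOR))  # many tumor profiles
    drugs = rng.normal(size=(5000, N_DRUG))    # many drug candidates

    # Mode 1: screen every candidate against one specific tumor.
    scores = [predict_response(tumors[0], d) for d in drugs]
    best_candidate = int(np.argmax(scores))

    # Mode 2: predict one candidate's effect across many tumor types.
    profile = [predict_response(t, drugs[0]) for t in tumors]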
Stevens expects Cerebras’s system to dramatically speed up both development and deployment of the cancer drug model, which could involve training the model hundreds of thousands of times and then running it billions more times to make predictions on every drug candidate.
He also hopes it will boost the lab's research on other topics, such as battery materials and traumatic brain injury. The former would involve developing an AI model to predict the properties of millions of molecular combinations and find alternatives to lithium-ion chemistry. The latter would involve developing a model to predict the best treatment options. It's a surprisingly hard task because it requires processing many types of data (brain images, biomarkers, text) very quickly.
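The difficulty Stevens describes is at heart a data-fusion problem: each modality needs its own encoder, and the resulting embeddings are joined before a shared prediction head. A purely illustrative Python sketch, with fixed random projections standing in for real encoders:

    import numpy as np

    rng = np.random.default_rng(1)

    def embed(x, out_dim, seed):
        # Stub encoder: a fixed random projection to a short vector.
        # In practice this would be a CNN (images), a tabular network
        # (biomarkers), or a language model (clinical text).
        proj = np.random.default_rng(seed).normal(size=(x.size, out_dim))
        return x.ravel() @ proj

    image = rng.normal(size=(64, 64))   # e.g. one brain-scan slice
    biomarkers = rng.normal(size=20)    # lab measurements
    notes_vec = rng.normal(size=300)    # clinical text, pre-vectorized

    # Fuse the per-modality embeddings into one feature vector that
    # a downstream model could score treatment options against.
    fused = np.concatenate([
        embed(image, 32, seed=10),
        embed(biomarkers, 8, seed=11),
        embed(notes_vec, 16, seed=12),
    ])
    print(fused.shape)  # (56,)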
Ultimately, Stevens is excited by the potential that the combination of AI software and hardware advances will bring to scientific exploration. "It's going to change dramatically how scientific simulation happens," he says.
