A Blog by Jonathan Low

 

Sep 20, 2019

AI and Machine Learning Applied To Make Data Storage More Efficient and Productive

As the compilation and analysis of data expands exponentially, finding ways to make its storage and access more effective could be one of the greatest challenges - and financial successes - of the techno-socio-economic future.

And it appears AI can help achieve that goal. JL

Jim Salter reports in ars technica:

As the scale and complexity of storage workloads increase, it becomes more and more difficult to manage them efficiently. With thousands of services competing for resources with differing performance and confidentiality targets, management of storage outpaces the human ability to make informed and useful changes. An AI architect might choose a convolutional or recurrent neural network to discover patterns in storage availability. Neural networks learn to spot anomalies and performance problems. AI management may also be able to provide a degree of efficiency not otherwise possible.

While the words "artificial intelligence" generally conjure up visions of Skynet, HAL 9000, and the Demon Seed, machine learning and other types of AI technology have already been brought to bear on many analytical tasks, doing things that humans can't or don't want to do—from catching malware to predicting when jet engines need repair. Now it's getting attention for another seemingly impossible task for humans: properly configuring data storage.
As the scale and complexity of storage workloads increase, it becomes more and more difficult to manage them efficiently. Jobs that could originally be planned and managed by a single storage architect now require increasingly large teams of specialists—which sets the stage for artificial intelligence (née machine learning) techniques to enter the picture, allowing fewer storage engineers to effectively manage larger and more diverse workloads.
Storage administrators have five major metrics they contend with, and finding a balance among them to match application demands approaches being a dark art. Those metrics are:
Throughput: Throughput is the most commonly understood metric at the consumer level. At the network level it is usually measured in Mbps—megabits per second—such as you'd see on a typical Internet speed test. In the storage world, the most common unit of measurement is MB/sec—megabytes per second—because storage itself is measured in bytes rather than bits. (For reference, there are eight bits in a byte, so 1MB per second is equal to 8Mbps; a short worked example of this unit arithmetic follows the list.)
Latency: Latency—at least where storage is concerned—is the amount of time it takes between making a request and having it fulfilled and is typically measured in milliseconds. This may be discussed in a pure, non-throughput-constrained sense—the amount of time to fulfill a request for a single storage block—or in an application latency sense, meaning the time it takes to fulfill a typical storage request. Pure latency is not affected by throughput, while application latency may decrease significantly with increased throughput if individual storage requests are large.
IOPS: IOPS is short for "input/output operations per second" and generally refers to the raw count of discrete disk read or write operations that the storage stack can handle. This is the limit most storage systems run into first. IOPS limits can be reached either at the storage controller or on the underlying medium. An example is the difference between reading a single large file versus a lot of tiny files from a traditional spinning hard disk drive: the large file might read at 110MB/sec or more, while the tiny files, stored on the same drive, may read at 1MB/sec or even less.
Capacity: The concept is simple—it's how much data you can cram onto the device or stack—but the units are unfortunately a hot mess. Capacity can be expressed in GiB, TiB, or PiB—so-called "gibibytes," "tebibytes," or "pebibytes"—but is typically expressed in the more familiar GB, TB, or PB (that's gigabytes, terabytes, or petabytes). The difference is that "giga," "tera," and "peta" are decimal prefixes based on powers of ten (so 1GB properly equals 1000^3 bytes, or exactly one billion bytes), whereas "gibi," "tebi," and "pebi" are binary prefixes based on powers of two (so one "gibibyte" is 1024^3 bytes, or 1,073,741,824 bytes). Filesystems almost universally report capacity in powers of two, whereas storage device specifications are almost universally given in powers of ten. There are complex historical reasons for the different ways of reckoning, but one reason the two methods continue to coexist is that it conveniently allows drive manufacturers to over-represent their devices' capacities on the box in the store.
Security: For the most part, security only comes into play when you're balancing cloud storage against local storage. With highly confidential data, on-premises storage may be more tightly locked down, with physical access strictly limited to the personnel who work directly for a company and have an actual need for that physical access. Cloud storage, by contrast, typically involves a much larger set of personnel having physical access, many of whom may not work directly for the company that owns the data. Security can be a huge concern for the company that owns the data or a matter of regulatory compliance under frameworks such as HIPAA or PCI DSS.
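To make the unit arithmetic above concrete, here is a minimal sketch in Python (made-up numbers rather than any device's actual specifications) that converts between network and storage throughput units, between decimal and binary capacity units, and from an IOPS figure plus average request size to effective throughput.

# Illustrative arithmetic for the storage metrics above; all figures are made up.

def mbps_from_mb_per_sec(mb_per_sec):
    """Convert storage throughput (MB/sec) to network throughput (Mbps): 8 bits per byte."""
    return mb_per_sec * 8

def gib_from_gb(gb):
    """Convert decimal gigabytes (1000^3 bytes) to binary gibibytes (1024^3 bytes)."""
    return gb * 1000**3 / 1024**3

def throughput_mb_per_sec(iops, avg_request_kb):
    """Effective throughput is roughly IOPS multiplied by the average request size."""
    return iops * avg_request_kb / 1000  # KB to MB, decimal

print(mbps_from_mb_per_sec(1))           # 1 MB/sec equals 8 Mbps
print(round(gib_from_gb(4000), 1))       # a "4TB" (4000GB) drive is about 3725.3 GiB
print(throughput_mb_per_sec(100, 4))     # 100 IOPS of 4KB requests is roughly 0.4 MB/sec
print(throughput_mb_per_sec(100, 1024))  # the same 100 IOPS at 1MB requests is roughly 102.4 MB/sec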
Enterprise administrators face an increasingly vast variety of storage types and an equally varied list of services to support with different I/O metrics to meet. A large file share might need massive scale and decent throughput as cheaply as it can be gotten but also must tolerate latency penalties. A private email server might need fairly massive storage with good latency and throughput but have a relatively undemanding IOPS profile. A database-backed application might not need to move much data, but it might also require very low latency while under an incredibly punishing IOPS profile.
If we only had these three services to deploy, the job seems simple: put the big, non-confidential file share on relatively cheap Amazon S3 buckets, the private mail server on local spinning rust (that's storage admin speak for traditional hard disk drives), and throw the database on local SSDs. Done! But like most "simple" problems, this gets increasingly complex and difficult to manage as the number of variables scales up. Even a small business with fewer than fifty employees might easily have many dozens of business-critical services; an enterprise typically has thousands.
With thousands of services competing for resources with differing performance and confidentiality targets—some long-running, others relatively ephemeral and likely only being up for days or weeks at a time—management of the underlying storage rapidly outpaces the human ability to make informed and useful changes. Management effort quickly falls back to best-effort, "shotgun" approaches tailored to the preferences of the organization or department—spend too much but get high performance and/or minimal maintenance requirements in return; or gamble on cheaper services, hoping that the cost savings outweigh penalties in missed performance targets or increased IT payroll.
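To see why this becomes a constraint-satisfaction problem rather than a handful of obvious choices, here is a deliberately simplistic Python sketch (hypothetical services, hypothetical backends, invented numbers) that greedily places each service on the cheapest backend meeting its IOPS, latency, and confidentiality requirements. A real placement engine weighs far more dimensions, and does so continuously.

# A hypothetical, deliberately simplistic workload-placement sketch.

backends = {
    # name: (max_iops, latency_ms, cost_per_tb_month, on_premises)
    "s3_bucket": (3500,  60,  23, False),
    "local_hdd": (200,   10,  15, True),
    "local_ssd": (50000, 0.5, 90, True),
}

services = [
    # (name, needed_iops, max_latency_ms, capacity_tb, must_be_on_prem)
    ("file_share",  500,   100, 200, False),
    ("mail_server", 150,   20,  10,  True),
    ("database",    20000, 2,   1,   True),
]

for name, iops, max_lat, tb, on_prem in services:
    candidates = [
        (cost * tb, b)
        for b, (b_iops, b_lat, cost, b_prem) in backends.items()
        if b_iops >= iops and b_lat <= max_lat and (not on_prem or b_prem)
    ]
    if candidates:
        monthly_cost, choice = min(candidates)
        print(f"{name}: place on {choice} (~${monthly_cost}/month)")
    else:
        print(f"{name}: no backend satisfies the requirements")

Even this toy reproduces the S3/spinning-rust/SSD split described above; the hard part is doing the equivalent for thousands of services whose profiles keep changing.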

The fox, the chicken, and the storage rack


This is where AI (or "AI," at least) comes into the equation. We already know that artificial intelligence techniques can handle complex tasks that generally have required human expertise in the past. Neural networks can be trained to recognize visual cues faster and more accurately than humans can and can interpret patterns in human language well enough to reliably understand the difference between a hurled epithet and an album title in a social media post. So can such systems manage storage decisions at an enterprise scale better than humans typically do?

There's reason to believe so, as AI has already apparently gotten a grip on the concept of resource contention—a problem illustrated by the Gedankenexperiment of the fox, the chicken, and the bag of corn. It goes like this: you're on one side of a river with a fox, a chicken, and a bag of corn, and your task is to get all three to the other side of the river in your canoe. If you take the corn but leave the fox alone with the chicken, the fox will eat the chicken. If you take the fox but leave the chicken alone with the corn, the chicken will eat the corn. And your canoe can only fit one item at a time—so how can you shuttle all your resources over to the other side of the river without losing any? Although this is technically a simple problem, most people struggle mightily with it because it requires some lateral thinking.
(The correct answer is that you take the chicken over first—because the fox won't eat the corn—then come back and get the corn, leaving the fox alone. Then you bring the chicken back to the original side of the river, drop it off, and grab the fox. Lastly, you drop the fox off on the destination side of the river with the corn, go back one more time and get the chicken, and Bob's your uncle.)
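The puzzle is also small enough to solve mechanically. As an aside, the short Python sketch below finds the same crossing sequence by brute-force breadth-first search over the possible bank configurations, which is the toy version of what a resource-contention solver does.

from collections import deque

ITEMS = frozenset({"fox", "chicken", "corn"})

def safe(far, farmer_far):
    """The unattended bank must not pair fox+chicken or chicken+corn."""
    unattended = (ITEMS - far) if farmer_far else far
    return not ({"fox", "chicken"} <= unattended or {"chicken", "corn"} <= unattended)

def solve():
    start = (frozenset(), False)          # nothing on the far bank, farmer on the near bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (far, farmer_far), path = queue.popleft()
        if far == ITEMS and farmer_far:
            return path
        here = far if farmer_far else (ITEMS - far)
        for cargo in [None, *here]:       # cross empty-handed or with one item
            new_far = set(far)
            if cargo:
                (new_far.remove if farmer_far else new_far.add)(cargo)
            state = (frozenset(new_far), not farmer_far)
            if state not in seen and safe(*state):
                seen.add(state)
                queue.append((state, path + [f"take {cargo or 'nothing'}"]))
    return None

print(solve())
# e.g. ['take chicken', 'take nothing', 'take fox', 'take chicken',
#       'take corn', 'take nothing', 'take chicken']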
I used this problem a couple of years ago to explain how the cloud optimizer in the mesh Wi-Fi kit Plume manages the topology of a consumer's network. While Plume's cloud optimizer is much simpler than the type of AI needed to monitor and manage thousands of workloads across various storage providers, it uses ARIMA—autoregressive integrated moving average, a statistical modeling technique well-suited to the analysis of time-series data—to solve very similar resource contention problems.

Plume's cloud optimizer makes predictions about near-future service needs based on telemetry taken from its Wi-Fi pods and changes the pods' configuration accordingly. (Image: Plume)
Every night, Plume's cloud optimizer ingests telemetry data from all deployed customer pods and uses machine learning to predict what data usage patterns are likely to be encountered at each household or site. These predictions are then fed into analytical rulesets that modify the individual configurations of pods—primarily how they interconnect wirelessly with one another—in order to satisfy the workloads and metrics the optimizer has predicted. A cloud AI that effectively manages large-scale storage deployments would be performing a very similar task.
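As a rough sketch of that nightly prediction step (using synthetic telemetry and the statsmodels ARIMA implementation, not Plume's actual pipeline), the following projects the next few hours of throughput demand from a week of hourly samples:

# Minimal ARIMA forecasting sketch on synthetic telemetry; not Plume's real pipeline.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = np.arange(24 * 7)                        # one week of hourly samples
# Synthetic per-hour throughput demand (MB/sec) with a daily cycle plus noise
demand = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

model = ARIMA(demand, order=(2, 0, 1))           # (p, d, q) chosen arbitrarily for the sketch
fitted = model.fit()
forecast = fitted.forecast(steps=6)              # predict the next six hours

print(np.round(forecast, 1))
# The predictions would then feed a rule engine (or another model) that reshapes
# the configuration ahead of the expected load.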
In addition to monitoring the activity of the storage system itself, enterprise storage engineers often analyze the code of the applications that system must service. Understanding the storage patterns of mission-critical applications leads to a better understanding of how their storage should be designed around their needs.
Jeffrey Layton, senior solution architect for Nvidia, wrote:
When I'm trying to understand the IO pattern of an application or a workflow, one technique that I use is to capture the strace of the application, focusing on IO functions. For one recent application I examined, the strace output had more than 9,000,000 lines, of which a bit more than 8,000,000 lines were involved in IO. Trying to extract an IO pattern from more than 8,000,000 lines is a bit difficult.
Difficult for you and me with our meat brains, but not for a machine learning algorithm. This is only one example of problematic storage patterns that are best uncovered by looking outside the storage system itself. Simpler examples might include Windows or VMware updates which, when deployed, lead to poorer storage outcomes due to un-caught bugs in the updates that introduce new (and unoptimized) IO patterns.
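Even before a neural network enters the picture, extracting a crude IO profile from millions of strace lines is easy to automate. The Python sketch below (assuming output captured with something like "strace -f -T -e trace=read,write -o app.strace", where the log file name is made up) simply tallies call counts and byte totals per syscall:

# Crude IO-profile extraction from an strace log; the file name is hypothetical.
import re
from collections import Counter

call_counts = Counter()
bytes_moved = Counter()
pattern = re.compile(r"\b(read|write)\((\d+),.*\)\s*=\s*(-?\d+)")

with open("app.strace") as log:
    for line in log:
        m = pattern.search(line)
        if not m:
            continue
        syscall, result = m.group(1), int(m.group(3))
        call_counts[syscall] += 1
        if result > 0:
            bytes_moved[syscall] += result

for syscall in ("read", "write"):
    calls = call_counts[syscall]
    avg = bytes_moved[syscall] / calls if calls else 0
    print(f"{syscall}: {calls} calls, {bytes_moved[syscall]} bytes, avg {avg:.0f} bytes/call")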
Due to its ability to ingest and analyze much larger volumes of data than humans can, machine learning offers the possibility of discovering—and mitigating—these problems more effectively than any number of humans could. The analysis can also range much further than real-time observation—a neural network can be trained on the long-term time-series history of a particular stack and its applications. That kind of historical analysis can uncover, and plan around, problems that simpler approaches would never surface.
(Keen observers will note that this sounds a lot like the definition of "Big Data," a term that was terribly popular a few years ago and still pops up from time to time. The idea behind "Big Data" and the application of AI and machine learning outlined in this piece is essentially the same—we're talking about throwing vast computing resources against sets of data too large for humans to conveniently analyze and teasing out patterns too subtle or too convoluted for human analysis to dig up.)

Multiple techniques for multiple problems

There's probably no single machine learning/artificial intelligence technique that would solve all the problems in managing storage at scale. Prediction of near-future needs—including simulations of the likely outcomes of changes being considered—would require time-series analysis. This might be accomplished with ARIMA, as Plume's cloud optimizer uses, or with a Long Short-Term Memory (LSTM) neural network.
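A minimal LSTM forecasting sketch along those lines, built on synthetic hourly demand data and Keras and purely illustrative rather than a production model, might look like this:

# Minimal LSTM time-series sketch on synthetic demand data; illustrative only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
t = np.arange(2000, dtype=np.float32)
series = 50 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size).astype(np.float32)

window = 48                                      # two days of hourly history per sample
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_hour = model.predict(series[-window:][None, :, None], verbose=0)
print(float(next_hour[0, 0]))                    # predicted demand for the next hour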
So far, so good—but an enterprise must function and be managed at a much more massive scale than any individual Wi-Fi deployment. This larger scale, along with the complexity of available storage technologies' performance profiles, suggests that applying fixed analytical rulesets to the time-series predictions would likely not work so well for the enterprise.
An AI architect might instead choose a convolutional neural network or recurrent neural network to discover patterns in storage availability. In human terms, this is cognitively similar to looking for "holes" shaped and sized appropriately to fit "pegs" represented by various service workloads. The patterns and solutions discovered might then be checked or further refined with simpler feed-forward neural networks or other techniques applied to the candidate deployments found in the earlier stages.

Scaling the brains

We're still in the early days of AI being applied to storage in any commercial way. AI is currently finding its way into some storage area networks to handle the task of optimizing storage performance based on simplified performance metrics selected by humans through a management interface. One SAN vendor uses a learning AI to automatically do what's called "storage tiering"—that is, it shifts data from slow (conventional disk) to fast (solid state) tiers within the SAN. Human operators can mark storage collections (such as virtual machines) on the network with desired metrics, and the AI shifts data between hot and cold tiers as necessary to fulfill those metrics.
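A stripped-down illustration of that tiering logic (not any particular SAN vendor's algorithm, and with invented data) is to rank volumes by a decayed access score and promote the hottest ones to the SSD tier until it fills up:

# Simplified hot/cold tiering sketch: promote the most-accessed volumes to SSD
# until the fast tier is full. Data and policy are made up for illustration.

SSD_TIER_GB = 2000

# volume -> (size_gb, decayed_access_score); scores would come from decayed IO counters
volumes = {
    "vm-database":   (500,  9800.0),
    "vm-mailserver": (800,  2100.0),
    "vm-fileshare":  (4000, 350.0),
    "vm-build":      (600,  4700.0),
}

placement = {}
ssd_free = SSD_TIER_GB
# Greedy promotion: hottest volumes first, as long as they fit on the SSD tier
for name, (size_gb, score) in sorted(volumes.items(), key=lambda kv: -kv[1][1]):
    if size_gb <= ssd_free:
        placement[name] = "ssd"
        ssd_free -= size_gb
    else:
        placement[name] = "hdd"

print(placement)
# {'vm-database': 'ssd', 'vm-build': 'ssd', 'vm-mailserver': 'ssd', 'vm-fileshare': 'hdd'}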
AI is also being applied in the form of neural networks that learn to spot anomalies and performance problems. An AI that has learned to "fingerprint" an issue that can cause performance problems at one customer site could then be used to scan all of that vendor's customers for other instances of the problem, proactively issuing warnings and suggestions to human operators.
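A minimal anomaly-spotting sketch in that spirit (synthetic latency and IOPS samples, scikit-learn's IsolationForest, not any vendor's actual model) could look like this:

# Minimal anomaly detection on latency/IOPS telemetry with an Isolation Forest; illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Normal behaviour: roughly 2ms latency and 5,000 IOPS, with some jitter
normal = np.column_stack([rng.normal(2.0, 0.3, 1000), rng.normal(5000, 300, 1000)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New samples: two ordinary readings, one latency spike under collapsing IOPS
new_samples = np.array([[2.1, 5100], [1.8, 4800], [35.0, 400]])
print(model.predict(new_samples))   # 1 = looks normal, -1 = flagged as anomalous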
So far, most vendors seem more concerned with focusing on storage designed to service AI networks and not the other way around. Learning neural networks are massively parallel—a single Nvidia DGX-1 supercomputer has 28,672 individual cores—and keeping all those cores saturated non-stop requires an incredible amount of storage performance.
Adopting artificial intelligence as a force multiplier for storage management can lead to a decrease in IT payroll—fewer skilled personnel would be needed to manage the same set of resources—or an increase in performance, as a similar number of skilled personnel become capable of devoting their time to advanced analysis and management as opposed to day-to-day grunt work.
At truly massive scale, AI management may also be able to provide a degree of efficiency that's simply not otherwise possible. Large departments of humans don't always coordinate well, but smaller, more tightly focused departments can operate more efficiently and adhere to policy more coherently.
