A Blog by Jonathan Low


Mar 30, 2016

Why The Next Logical Step Past Analytics Is Cognitive Computing

Just as the scale of computing has changed the way that individuals, organizations, and the economy they influence perform, so the scale of the data being generated requires new systems that can sense, learn, infer, and interact.

Cognitive computing, as the following article explains, may give economists and managers the ability to make better use of the analytical tools they have already been given. JL 

Tom Davenport comments in DataInformed:

The key benefit of cognitive technologies is that they solve problems traditional analytics can’t. Cognitive models are based on statistical models, (but) the big data from sensors, social media, and online applications often flow and accumulate much faster than humans could possibly analyze or act on them. Cognitive computing is the next step for any organization that has been pursuing analytical models driven by human hypotheses and wishes to improve speed and scale.
Many people and companies seem to think of “cognitive computing” as an area separate from analytics. Most large organizations today have significant analytical initiatives underway, but they think of the cognitive space as being an exotic science project. One executive told me, “We have no desire to win Jeopardy,” an allusion, of course, to the IBM Watson project from 2011. But cognitive computing is not just about Watson, and it’s not an exotic science project.
In fact, I’d argue that cognitive computing is a logical extension of analytics work. It’s the next step for any organization that has been pursuing traditional analytics, i.e., analytical models driven by human hypotheses. Any organization that wishes to improve the speed and scale of its analytical activities should be exploring at least some cognitive capabilities now. Cognitive methods are a straightforward extension of previous analytical methods, and there are several reasons why they are better for many applications.
Most cognitive methods are, in fact, based on statistical models. Your organization may be doing “cognitive” work without even knowing it. Perhaps, for example, you are using some form of “machine learning,” which attempts to automatically improve the fit of models and “learns” its way to a better set of explanations or predictions. Machine learning often uses logistic regression, a statistical method that has been around since the 1930s. Automated fitting of models has been around only since about 1957, when Cornell researchers created the “perceptron.” That same invention was the beginning of neural networks as well, which are the basis of the “deep learning” approaches used by many cognitive applications today. So all of these cognitive approaches have deep roots in statistical approaches that are very familiar to analytical folks.
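To make that lineage concrete, here is a minimal sketch (mine, not from the article) that fits both a logistic regression and a perceptron on the same synthetic data using scikit-learn; the dataset and settings are illustrative assumptions.

    # Logistic regression (1930s statistics) and the perceptron (1957)
    # fit side by side on synthetic data standing in for business records.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression, Perceptron
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Automated model fitting: the "machine learning" step described above.
    logit = LogisticRegression().fit(X_train, y_train)
    perceptron = Perceptron().fit(X_train, y_train)

    print("logistic regression accuracy:", logit.score(X_test, y_test))
    print("perceptron accuracy:", perceptron.score(X_test, y_test))

The perceptron here is the same basic building block that, stacked into many layers, becomes the deep neural networks mentioned above.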
Since cognitive tools are only a small step from traditional analytics, it’s not surprising that many vendors are mixing the two. IBM, for example, is clearly fuzzing the line between analytics and Watson with “Watson Analytics.” SAS offers a machine learning capability as well as event streaming for automated analytics. And TIBCO is increasingly focused on “streaming analytics” for real-time automated decision-making – what it calls “fast data.” For these vendors and others, “cognitive” and “analytics” are increasingly intertwined.
The key benefit of cognitive technologies is that they can solve some problems that traditional analytics can’t. In the world of big data, for example, the data from sensors, social media, and online applications often flow and accumulate much faster than humans could possibly analyze or act on them. Without machine learning to create the models for such data, they couldn’t be analyzed at all.
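As a rough illustration of that point (my example, not Davenport’s), the sketch below uses scikit-learn’s SGDClassifier to update a model incrementally as simulated sensor batches arrive, the way a streaming pipeline would.

    # Online learning on a simulated data stream: the model is updated
    # incrementally on each arriving batch rather than built by hand.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()   # linear model trained by stochastic gradient descent
    classes = np.array([0, 1])
    rng = np.random.default_rng(0)

    for _ in range(1000):     # each pass = one incoming mini-batch
        X_batch = rng.normal(size=(32, 5))               # e.g., sensor readings
        y_batch = (X_batch.sum(axis=1) > 0).astype(int)  # stand-in outcome label
        model.partial_fit(X_batch, y_batch, classes=classes)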
The great challenge for human-centric analytics has always been that many human decision-makers don’t use the models and data provided to them. As far back as 1981, researchers Martha Feldman and James March published an article arguing that managers often ask for information that they don’t use. They want to appear to be making analytical decisions, but are more comfortable with their intuition. Therefore, it’s important to bypass human decision-makers with automated decisions when we know that data and analysis are critical to decision outcomes.
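One concrete (and hypothetical) way to act on that advice is to wire a model’s output directly to a decision rule, so routine cases are handled automatically and only uncertain ones reach a person; the threshold, labels, and data below are illustrative assumptions.

    # A model-driven decision rule: confident predictions trigger an
    # automatic action; uncertain ones are escalated to a human.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=1)  # stand-in data
    model = LogisticRegression().fit(X, y)

    def decide(features, threshold=0.8):
        """Act automatically when the model is confident; escalate otherwise."""
        p = model.predict_proba([features])[0, 1]
        return "auto-approve" if p >= threshold else "route to a human analyst"

    print(decide(X[0]))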
If your organization is interested in moving in a more cognitive direction and you are already doing work with analytics, there are some easy steps to get started. Machine learning algorithms are available in the cloud on Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Google and Microsoft have released open-source versions of their machine learning tools (TensorFlow and Computational Network Toolkit, or CNTK, respectively). These tools facilitate exploration of “deep learning” neural network applications like speech and image recognition.
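For a taste of what those tools involve, here is a minimal sketch using TensorFlow’s Keras interface (an API that postdates this article) to train a small image-recognition network on the bundled MNIST digit dataset; the network shape is an illustrative assumption.

    # A small "deep learning" network for handwritten-digit recognition.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> vector
        tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
        tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2)
    print(model.evaluate(x_test, y_test))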
If your people could use some instruction in machine learning and neural networks, that is less of a barrier than it once was. There are now many free or inexpensive online courses in these methods: Coursera and Udacity offer commercial versions, and MIT, Caltech, Stanford, and other schools have noncommercial courses online. And if you want a general overview of machine learning, I highly recommend Pedro Domingos’ book, The Master Algorithm.
The key step is to identify some problem within your organization that might benefit from a cognitive approach. Perhaps it’s a “knowledge bottleneck” – a situation that might benefit from the application of knowledge that has previously been inaccessible. Or perhaps it’s a situation with so much data that humans couldn’t possibly handle it. Then start your experimentation with cognitive technologies on that problem. The whole process doesn’t need to be exotic, and it doesn’t have to cost very much. But it does offer a lot of potential benefits.
