A Blog by Jonathan Low


Jul 14, 2017

How Artificial Intelligence Is Changing the 80-20 Rule

Greater volumes of better data reinforce economic decision-making as the algorithms trained on that data learn and improve. That reinforces the dominance of leading firms and contributes to extreme distributions at either end of the spectrum. Successful enterprises seek those insights - and use them. JL

Michael Schrage reports in Harvard Business Review:

Traditional distributions have disruptively changed. Extreme distributions dominate industry. The productivity secret of big data is that Pareto’s 80/20 insight has decayed into empirical anachronism. Greater volumes and variety of data guarantee algorithms get the training to get smarter. Digital networks consequently become Pareto platforms that transform vital vectors of variables into new value. C-suites want data-driven questions algorithmically addressed.
Many high-performance organizations remain passionate about Vilfredo Pareto, the incisive Italian engineer and economist. They continue to be inspired by his 80/20 principle, the idea that 80% of effects (sales, revenue, etc.) come from 20% of causes (products, employees, etc.). As machine learning and AI algorithmic innovation transform analytics, I’m betting that next-generation algorithms will supercharge Pareto’s empirically provocative paradigm. Here are three important ways that AI and machine learning will redefine how organizations use the Pareto principle to digitally drive profitable innovation to levels beyond conventional analytics.

Smart Paretos

First, greater volumes and variety of data guarantee that algorithms get the training they need to get smarter. Digital networks consequently become Pareto platforms that transform vital vectors of variables into new value.
Novel workplace analytics, for example, mean more organizations can more readily identify the 20% of employees contributing 80% of value to a product, process, or user experience. Ongoing digitalization of business processes, platforms, and customer experiences similarly invites creative Pareto perspectives: What 20% of the platform upgrade creates 80% of its impact? What 20% of customer experience evokes 80% of delight or distaste? Serious C-suites want those data-driven questions algorithmically addressed.


Second, traditional distributions have disruptively changed. The dirty little productivity secret of big data is that Pareto’s 80/20 insight has decayed into empirical anachronism. Analytically aggressive firms increasingly see Pareto proportions closer to 10/90, 5/50, 2/30, and 1/25. Depending on how rigorously the data is digitally sliced, diced, and defined, 1/50, 5/75, and, yes, 10/150 Paretos emerge. Pareto’s “vital few” becomes a “vital fewer.”
Extreme distributions transcend and dominate industry. Fewer than 10% of drinkers, for example, account for over half the hard liquor sold. Even more extreme, less than 0.25% of mobile gamers are responsible for half of all in-game revenue.
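Concentration ratios like these are easy to measure once per-item contributions are tallied. As a minimal sketch (the `pareto_share` helper and the lognormal toy data are illustrative assumptions, not from the article), this computes the share of a total contributed by the top fraction of items:

```python
import numpy as np

def pareto_share(values, top_frac):
    """Share of the total contributed by the top `top_frac` of items."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # rank largest first
    k = max(1, int(round(top_frac * len(v))))           # size of the "vital few"
    return v[:k].sum() / v.sum()

rng = np.random.default_rng(0)
revenue = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)  # heavy-tailed toy data
print(pareto_share(revenue, 0.20))  # top 20% of customers' share of revenue
print(pareto_share(revenue, 0.01))  # top 1%
```

Run against real per-customer or per-SKU figures, the same two lines reveal whether a business is living in an 80/20 world or a far more extreme one.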
Merely identifying and cosseting the “super-Paretos,” however, doesn’t go analytically far enough; margin and market growth demand that those descriptive statistics lead to predictive and prescriptive statistics. In other words, turn those data sets into “training sets” for smart algorithms.
Organizations need to identify Pareto propensities, as well — they need to algorithmically crack the code on the tiny adjustments that promote order-of-magnitude business impacts. Managers and their data science teams must reorganize themselves around extreme Pareto potentials and possibilities, not just more and better data.
For instance, one multibillion-euro industrial equipment company with over 2,000 SKUs determined that less than 4% of its offers were responsible for one-third of sales and roughly half of profitability. But extending the analysis to include service and maintenance revealed that roughly 100 products were responsible for over two-thirds of profitability. That pushed the firm to fundamentally rethink pricing and bundling strategies.
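A cumulative version of the same ranking answers the question this firm asked: how many products cover a given share of profitability? A toy sketch, with hypothetical per-SKU profit figures standing in for the company's 2,000-plus real ones:

```python
import numpy as np

def items_for_share(values, target_share):
    """Smallest number of items (ranked by value) whose sum reaches target_share of the total."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # largest contributors first
    cum = np.cumsum(v) / v.sum()                        # cumulative share of the total
    return int(np.searchsorted(cum, target_share) + 1)

profit = [50, 30, 10, 5, 3, 1, 1]  # toy per-SKU profit, not real data
print(items_for_share(profit, 2 / 3))  # SKUs needed to cover two-thirds of profit
```

Swapping the unit of analysis from products to features, as the company did, means rerunning the same function over a different `values` vector.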
Finer-grained Pareto analytics around product attributes and features, not just the products themselves, offered more provocative insights. The company’s engineering and account teams explored data-driven redesigns around desired features and function sets rather than the products themselves. Processing a different unit of analysis led to even more valuable Pareto insights. Targeted feature removal, for example, not only reduced costs but also led directly to measurably better user experiences that, in turn, increased share in a growing customer segment.


Third, as data become more granular and algorithms process complex patterns in smarter ways, Pareto portfolio management has changed. The analytically and operationally astute already manage Pareto portfolios — that is, a number of different Pareto insights across the entire enterprise. For them, KPI stands for “key Pareto information,” not just “key performance indicator.” If KPI dashboards don’t facilitate data-driven looks at key Pareto information, people are blind to future optimization and value-creation opportunities.
Where individual process owners, product managers, and sales teams once emphasized optimizing their own core Paretos, they now poke, probe, and play with other people’s Paretos. Serious managers and executives break down and burst out of analytic silos. They recognize that their Paretos can analytically intersect, overlap, and productively recombine with Paretos across the enterprise.
Increasingly, the surest way to rethink and revitalize a Pareto is to link it to another Pareto. As data-rich and algorithmically aware firms shift from individually managing a dozen key Pareto indicators to overseeing hundreds, even thousands, of enterprise KPIs, brave new Pareto ensembles will emerge. Which ensembles will offer the greatest insights and opportunities for new creation and capture?
Networking Paretos has consequently become one of the most exciting and productive analytic initiatives I see. What 10% of KPI clusters might explain 90% of new customers, growth, or margins? The challenge of supra-Pareto creativity demands data-driven cross-functional collaboration. Sophisticated managers and intrapreneurs across the enterprise want to innovatively fuse their vital fews.
At one global telecom, Pareto analytics of all kinds — descriptive, predictive, and prescriptive — had been developed to anticipate, prevent, and minimize churn. The churn management team had done excellent work identifying and retaining literally millions of at-risk customers. But diminishing returns had set in; performance had plateaued.
Everything changed when the group decided to go wide. Instead of emphasizing Pareto insights around customer satisfaction, complaints, or service, they discovered several sales and marketing Pareto data sets emphasizing upselling: the 20% of customers who accounted for 80% of new services purchased; the 25% of customers responsible for 75% of the new lines or data plans.
Analytically armed with these Paretos, the churn team asked whether they could actually upsell their customers, not just retain them. Straightforward regression analysis and simple agent-based modeling techniques found significant profile correlations between Pareto churners and Pareto “upsellees.”
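The kind of overlap the churn team found can be illustrated with simulated data. In this sketch everything is invented for illustration: the shared `usage` feature, the score weights, and the 20% cutoffs. Because both scores load on the same underlying profile, the top-churn-risk and top-upsell populations overlap far more than independence would predict:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
usage = rng.normal(size=n)  # hypothetical shared profile feature

# Toy scores: churn risk and upsell propensity both correlate with usage
churn_risk = 0.6 * usage + rng.normal(scale=0.8, size=n)
upsell = 0.5 * usage + rng.normal(scale=0.9, size=n)

def top_mask(scores, frac=0.20):
    """Boolean mask selecting the top `frac` of customers by score."""
    return scores >= np.quantile(scores, 1 - frac)

churners = top_mask(churn_risk)
upsellees = top_mask(upsell)
overlap = (churners & upsellees).sum() / churners.sum()
print(f"share of at-risk customers who are also prime upsell targets: {overlap:.0%}")
# Under independence this share would sit near 20%; correlated profiles push it higher.
```

That excess overlap is exactly the signal that justified scripting upsell offers to at-risk customers.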
Writing scripts and experimentally testing offers proved fairly fast, simple, and cheap. While the ultimate results weren’t revolutionary, they went well beyond incremental. Not only did retention numbers improve, but the churn team spent less to retain those customers and managed to successfully upsell a percent or two of them.
But this Pareto ensemble also generated a serendipitous, if obvious, business bonus. The churn team’s new Paretos proved helpful to the upsell sales and marketing function. Their innovative ensembles boosted customer satisfaction and NPS numbers while reducing their own churn rates. Everybody won.
The preliminary success of Pareto ensembles recalls the critical insight from the Netflix Prize competition: The best results came not from improving individual model performance but from creating ensembles where the best attributes were collectively amplified. Ironically but appropriately, Pareto analytics could determine the most valuable ensembles.
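The Netflix-style ensemble effect is easy to demonstrate numerically: averaging several independently noisy models beats any single one of them. A minimal sketch with synthetic data (the ten "models" are just the truth plus independent noise, an assumption chosen to keep the example self-contained):

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(size=1_000)  # the quantity the models try to predict

# Several mediocre "models": truth plus independent noise of equal magnitude
models = [truth + rng.normal(scale=1.0, size=truth.size) for _ in range(10)]

def rmse(pred):
    """Root-mean-square error against the known truth."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

single = rmse(models[0])                  # one model alone
ensemble = rmse(np.mean(models, axis=0))  # simple average of all ten
print(single, ensemble)
```

Because the errors are independent, the averaged ensemble's error shrinks roughly with the square root of the number of models, which is also why a few well-chosen, genuinely different models capture most of the gain.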
The lesson here is that having lots of models is useful for the incremental results needed to win competitions, but, practically, excellent systems can be built with just a few well-selected models.
Rigorously applying Pareto analytics to Pareto analytics seems obvious, but few organizations demonstrate that discipline every day. That must change. Strategic plans and technology road maps need to be analytically informed by “Pareto pathways.” The ability to better predict tomorrow’s vital few, and the opportunity to recombine KPIs from across the enterprise, will become not just sources of greater efficiency but also determinants of disruptive value creation.
The smarter your algorithms, the more they — and your organization — need to be learning from and with Pareto.

