A Blog by Jonathan Low


Jun 5, 2023

How Leaders Are Fighting Their Companies' AI Hype Distraction

Here they go again. A plug-and-play solution to everything, everywhere, every time, requiring no effort, planning, or disruption, just bigger budgets.

That is the challenge leaders face as their customers, employees, and boards of directors answer "generative AI" whenever they are asked how to meet the future. Smart and successful leaders recognize that the key to an implementation that delivers on its goals is measurable performance based on operational improvement of existing organizations. JL

Eric Siegel reports in Harvard Business Review:

Even before ChatGPT and other generative AI tools, the narrative about all-powerful AI went too far. It inflates expectations and distracts from the precise way ML will improve business operations. ML projects often lack focus on their value - how ML will render business processes more effective. ML projects that keep operational objectives front and center stand a good chance of achieving those objectives. Improving measurable performance is supervised machine learning in a nutshell. Practical use cases of ML are designed to improve the efficiencies of existing business operations and innovate in straightforward ways. This includes resisting the temptation to ride hype waves.

You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If only. Even before the latest splashes — most notably OpenAI’s ChatGPT and other generative AI tools — the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword “AI” goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.

Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics. These predictions deliver real value, so long as you eschew the false hype that ML is “highly accurate,” like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It’s practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML.
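The loop described above — a model predicts, and the prediction directly drives an operational decision — can be sketched in a few lines. This is a minimal, hypothetical illustration: the features, data, and decision threshold are all invented for the sake of the example, not drawn from any real deployment.

```python
# Minimal sketch: an ML model's churn predictions drive an operational
# decision (whether to offer a retention incentive). All data is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-customer features: [months_active, support_tickets, monthly_spend]
X_train = [
    [24, 0, 50.0], [3, 4, 20.0], [36, 1, 80.0],
    [2, 5, 15.0], [18, 2, 40.0], [1, 6, 10.0],
]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = the customer canceled (churned)

model = LogisticRegression().fit(X_train, y_train)

def should_offer_incentive(customer, threshold=0.5):
    """Operational decision rule: offer the incentive when the
    predicted probability of churn exceeds the threshold."""
    churn_probability = model.predict_proba([customer])[0][1]
    return churn_probability > threshold

# The prediction translates directly into an action per customer.
print(should_offer_incentive([2, 5, 18.0]))   # short-tenured, many tickets
print(should_offer_incentive([30, 0, 70.0]))  # long-tenured, no tickets
```

The same shape applies to the fraud example: swap the churn label for a fraud label and the incentive for a transaction block. The point is how uncomplicated the value mechanism is — a probability, a threshold, an action.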

Here’s the problem: Most people conceive of ML as “AI.” This is a reasonable misunderstanding. But “AI” suffers from an unrelenting, incurable case of vagueness — it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you couldn’t overpromise more than you do when you call something “AI.” The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.

This exacerbates a significant problem with ML projects: They often lack a keen focus on their value — exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.

What Does AI Actually Mean?

“‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’”

–Devin Coldewey, TechCrunch

AI cannot get away from AGI for two reasons. First, the term “AI” is generally thrown around without clarifying whether we’re talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.

Second, there’s no satisfactory way to define AI besides AGI. Defining “AI” as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn’t mean AGI, it doesn’t mean anything — other suggested definitions either fail to qualify as “intelligent” in the ambitious spirit implied by “AI” or fail to establish an objective goal. We face this conundrum whether trying to pinpoint 1) a definition for “AI,” 2) the criteria by which a computer would qualify as “intelligent,” or 3) a performance benchmark that would certify true AI. These three are one and the same.

The problem is with the word “intelligence” itself. When used to describe a machine, it’s relentlessly nebulous. That’s bad news if AI is meant to be a legitimate field. Engineering can’t pursue an imprecise goal. If you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is — how well it performs and how close you are to the goal — so that you know you’re making progress and so that you ultimately know when you’ve succeeded in developing it.

In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No, it’s intelligence demonstrated by machines (even more circular, if that’s possible). Rather, it’s a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn’t automatically qualify a system as intelligent).

But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn’t distinguish it from a human, say, by interrogating it in a chatroom — the famous Turing Test. But the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once — fool us twice, shame on humanity. Another reason that passing the Turing Test misses the mark is that there’s limited value or utility in doing so. If AI is to exist, surely it’s supposed to be useful.

What if we define AI by what it’s capable of? For example, we could define AI as software that can perform a task so difficult that it traditionally requires a human, such as driving a car, mastering chess, or recognizing human faces. It turns out that this definition doesn’t work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well understood and well specified. Once surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn’t seem “intelligent” after all, at least not to the wholehearted extent intended by the term “AI.” Once computers mastered chess, there was little sentiment that we’d “solved” AI.

This paradox, known as The AI Effect, tells us that, if it’s possible, it’s not intelligent. Suffering from an ever-elusive objective, AI inadvertently equates to “getting computers to do things too difficult for computers to do” — artificial impossibility. No destination will satisfy once you arrive; AI categorically defies definition. With due irony, the computer science pioneer Larry Tesler famously suggested that we might as well define AI as “whatever machines haven’t done yet.”

Ironically, it was ML’s measurable success that hyped up AI in the first place. After all, improving measurable performance is supervised machine learning in a nutshell. The feedback from evaluating the system against a benchmark — such as a sample of labeled data — guides its next improvement. By doing so, ML delivers unprecedented value in countless ways. It has earned its title as “the most important general-purpose technology of our era,” as Harvard Business Review put it. More than anything else, ML’s proven leaps and bounds have fueled AI hype.
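That feedback loop — evaluate against a labeled benchmark, keep what scores best — is concrete enough to sketch. The data, candidate models, and scores below are hypothetical, chosen only to show the mechanic: measurable performance, not intuition, decides which model survives.

```python
# Minimal sketch of supervised learning's feedback loop: score candidate
# models against a held-out labeled benchmark and keep the best. Data invented.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical transactions: [amount, hour_of_day] -> 1 = fraudulent
X = [[100, 9], [5000, 2], [20, 14], [9000, 3], [50, 11], [7000, 23],
     [30, 10], [8000, 2]]
y = [0, 1, 0, 1, 0, 1, 0, 1]
X_train, y_train = X[:6], y[:6]
X_test, y_test = X[6:], y[6:]  # held-out labeled sample: the benchmark

# Candidate models; the benchmark score guides which one to keep.
candidates = {
    "stump": DecisionTreeClassifier(max_depth=1, random_state=0),
    "deeper_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)  # measurable performance picks the winner
print(best, scores)
```

This is the whole "in a nutshell": an objective score against labeled data tells you whether the next change was an improvement.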

All in with Artificial General Intelligence

“I predict we will see the third AI Winter within the next five years… When I graduated with my Ph.D. in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.”

–Usama Fayyad, June 23, 2022, speaking at Machine Learning Week

There is one way to overcome this definition dilemma: Go all in and define AI as AGI, software capable of any intellectual task humans can do. If this science fiction-sounding goal were achieved, I submit that there would be a strong argument that it qualified as “intelligent.” And it’s a measurable goal, at least in principle if not in practicality. For example, its developers could benchmark the system against a set of 1,000,000 tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee you’d just as well issue to a robot, and even brief, one-paragraph overviews for how the machine should, in the role of CEO, run a Fortune 500 company to profitability.

AGI may set a clear-cut objective, but it’s out of this world — as unwieldy an ambition as there can be. Nobody knows if and when it could be achieved.

Therein lies the problem for typical ML projects. By calling them “AI,” we convey that they sit on the same spectrum as AGI, that they’re built on technology that is actively inching along in that direction. “AI” haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.

It’s understandable that so many would want to claim a piece of the AI pie, if it’s made of the same ingredients as AGI. The wish fulfillment AGI promises — a kind of ultimate power — is so seductive that it’s nearly irresistible.

But there’s a better way forward, one that’s realistic and that I would argue is already exciting enough: running major operations — the main things we do as organizations — more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we’ve got to come down to earth. If your aim is to deliver operational value, don’t buy “AI” and don’t sell “AI.” Say what you mean and mean what you say. If a technology consists of ML, let’s call it that.

Reports of the human mind’s looming obsolescence have been greatly exaggerated, which means another era of AI disillusionment is nigh. And, in the long run, we will continue to experience AI winters so long as we continue to hyperbolically apply the term “AI.” But if we tone down the “AI” rhetoric — or otherwise differentiate ML from AI — we will properly insulate ML as an industry from the next AI winter. This includes resisting the temptation to ride hype waves and refraining from passively affirming starry-eyed decision-makers who appear to be bowing at the altar of an all-capable AI. Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML’s true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.

