A Blog by Jonathan Low

 

Jun 1, 2020

Why AI Won't Supplant the Behavioral Economics of Reopening Businesses

Decisions driven by the behavioral economics of a pandemic and recession are unlikely to be helped by predictive analytics built on past experience that may now be irrelevant in a post-Covid world.

Businesses facing significant drops in revenue are not motivated to experiment with models based on flawed data. To optimize performance, leaders have to rely on forward-looking business judgment, which AI may supplement but not replace. JL

Christopher Mims reports in the Wall Street Journal:

What do you do when a sudden break from past trends reorders the way the world works? Businesses can’t turn to existing artificial intelligence. AI requires vast quantities of relevant data. When things change this quickly, there’s no time to gather enough. Many pre-pandemic models are no longer useful; some might even point businesses in the wrong direction. Among the biggest barriers to the use of AI in businesses is the difficulty of finding problems for which AI might be useful. Hiring for data-science roles is down 50% since before the pandemic.
What do you do when a sudden break from past trends profoundly reorders the way the world works? If you’re a business, one thing you probably can’t do is turn to existing artificial intelligence.
To carry out one of its primary applications, predictive analytics, today’s AI requires vast quantities of relevant data. When things change this quickly, there’s no time to gather enough. Many pre-pandemic models for many business functions are no longer useful; some might even point businesses in the wrong direction.
AI has seemed to many experts like some kind of magic sauce that could be poured over any business process to transform it into a moneymaking Terminator, an unstoppable deliverer of self-driving cars and destroyer of white-collar work. As the date for those types of disruption continues to be pushed back, it’s clear that AI isn’t progressing as fast as we were once told, and that it won’t be a cure-all.
It is hardly an AI winter, but a chill is definitely in the air. Businesses for which AI is more of an add-on, as well as struggling startups and smaller firms, are furloughing data scientists previously awarded stratospheric salaries, and complaining they can’t find uses for AI. Suddenly, there’s a vindication of those who have argued that the systems most closely associated with modern AI—ones that can learn from huge pools of data—aren’t as capable as their superfans suggested.
What’s happening is not so much a reckoning as a ‘rationalization’ of the application of AI in businesses.
— Rajeev Sharma, Pactera Edge
The hype around AI, among those who actually use it, is subsiding. The flip side of this trend is we’re starting to see that, far from being magical, AI is most useful for accomplishing some pretty mundane stuff. We use AI daily, every time we talk to a voice-activated personal assistant or unlock our phones with our faces or fingerprints. Beyond that, for most businesses, academics, public-health researchers and actual rocket scientists, AI is mostly about assisting humans in making decisions, says Rachel Roumeliotis, a vice president at O’Reilly Media, a publishing and events company that, among other things, teaches coders how to use AI.
When SharpestMinds, a startup that sells mentoring services to data scientists, surveyed its alumni in April and again in May, it found that 6% of respondents had been affected by furloughs, pay cuts or layoffs. That’s a drop in the ocean compared with the enormous layoffs in, say, the restaurant business, but it’s notable because these jobs are generally thought to be business-critical roles requiring high-demand specialized skill sets.
Uber recently shut down its AI research lab, and Airbnb’s layoffs included at least 29 full-time data scientists, according to its directory of those let go.
The pain for data scientists will likely increase as companies rethink how they spend, predicts SharpestMinds founder Edouard Harris. Hiring for such roles has slowed significantly, down by 50% since before the pandemic, he adds. On the other hand, that means there’s still demand, though it’s diminished.
What’s happening is not so much a reckoning as a “rationalization” of the application of AI in businesses, says Rajeev Sharma, head of enterprise AI at Pactera Edge, a technology-consulting firm. “[Companies] feel this is a time they can get rid of extra hires or lower performers who are not a good cultural fit,” he adds.
Top algorithms are left flat-footed when data they’re trained on no longer represents the world we live in.
— Gary Marcus, New York University
By contrast, the deep-pocketed big tech companies clearly see AI as not merely important but core to their businesses, and plan to keep hiring like crazy. Google Chief Executive Sundar Pichai has said that in the sweep of human history, AI is more important than electricity or fire, and all the Big Five have said they’ll continue to add to their engineering ranks during this downturn, including data scientists and AI experts. Now is a great time to hire them, says Mr. Sharma. He says it’s like buying discounted shares after a stock-market crash.
A just-released survey of nearly 1,400 AI professionals, conducted by O’Reilly Media, found the two biggest barriers to the use of AI in businesses are leaders who don’t appreciate its value, and the difficulty of finding business problems in these firms for which AI might be useful.
Right now, the deep-learning algorithms that are AI’s top-shelf approach to solving many problems are good at things like identifying cats in pictures and beating humans at the strategy game Go. However, they require enormous quantities of data to train, and they are left flat-footed when that data no longer represents the world we live in, says Gary Marcus, a New York University professor and former head of Uber’s AI labs. He is a frequent critic of these algorithms, believing that their resulting models are “brittle.” That is, rather than resembling the mock-ups of the world that human brains construct, they’re just big engines for finding statistical correlations.
The pandemic and the current business challenges in applying AI are “a wake-up call about how shitty the AI we’re building is,” says Prof. Marcus. The biggest tech companies have enough data to build AI systems that can identify things in images or recognize human voices, but other types of AI—say, an algorithm to predict the best way to route goods through a supply chain or the buying habits of shoppers—break down during events like the coronavirus pandemic. Even in good times, small- and medium-size businesses simply don’t have enough data to train useful AI systems, says Prof. Marcus.
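To make that brittleness concrete, here is a deliberately tiny, purely illustrative sketch in Python: a toy trend model (a stand-in for a far larger forecaster, using entirely synthetic numbers) is fit on pre-pandemic demand and then scored on post-shock weeks, where its error balloons because it can only extrapolate the correlations it was trained on.

```python
# Illustrative sketch only: a toy "demand forecaster" fit on pre-pandemic
# data, then scored on post-shock data whose pattern has changed.
# All numbers are synthetic and chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pre-pandemic regime: weekly demand grows steadily with time.
weeks_past = np.arange(0, 100)
demand_past = 200 + 3.0 * weeks_past + rng.normal(0, 10, size=100)

# Fit a simple linear trend (stand-in for a much larger learned model).
slope, intercept = np.polyfit(weeks_past, demand_past, 1)

# Post-shock regime: demand collapses and the old trend no longer applies.
weeks_new = np.arange(100, 120)
demand_new = 60 + 0.5 * weeks_new + rng.normal(0, 10, size=20)

predicted = slope * weeks_new + intercept
mae = np.mean(np.abs(predicted - demand_new))
print(f"Mean absolute error on post-shock weeks: {mae:.0f} units")
# The error balloons: the model keeps extrapolating the pattern it was
# trained on, because it encodes past correlations, not the changed world.
```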
A number of studies comparing new and supposedly improved AI algorithms with “old-fashioned” ones have found they perform no better, and sometimes worse, than systems developed years before. One analysis of information-retrieval algorithms, for example, found that the high-water mark was actually a system developed more than a decade ago.
‘Basically it’s a thing that can help you analyze data and make choices.’
— Rachel Roumeliotis, O’Reilly Media
Researchers on the AI-development front lines are bullish about its long-term value and say any retrenchments are only temporary. At the Allen Institute for AI, founded by the late Microsoft co-founder Paul Allen, researchers are currently applying AI to the discovery of treatments, vaccines, and clinical insights into the behavior of Covid-19, says Oren Etzioni, who heads the institute.
When AI is applied to well-defined tasks, the increasing availability of massive data sets, better deep-learning algorithms, and growing computational power are the very things that will make it “a transformative tool for scientists as we face pandemics, climate change, or any other of humanity’s thorniest problems,” he adds.
Researchers are working to fix the core problem in modern AI—the demand for so much data—but a solution is a long way off, says Chris Mattmann, deputy chief technology and innovation officer at NASA’s Jet Propulsion Laboratory. His own team had to employ a posse of Ph.D. students and postdocs to spend three years labeling images taken from the surface of Mars, just so they had enough data to train a system to automatically identify geographic and geological features of a Martian landscape. The system will make its debut on Mars Helicopter, a drone that’s part of the Perseverance rover mission scheduled to launch this summer. Mercifully, the rocks of Mars don’t change much from year to year.
As out-of-this-world as that technology might be, it’s actually an example of the most useful workaday AI applications, be they in research, medicine or business, says Dr. Mattmann. His team isn’t building new kinds of AI, he adds, just using off-the-shelf software and hardware.
Even firms that still want to make AI part of their business processes are discovering that they might have less need for specialists who build AI than for the kind of day-to-day software engineering required to suck up data, clean it, then send it off to some cloud AI service run by the likes of Microsoft, Google or Amazon, says Mr. Sharma. Everyone I spoke with agreed that this role, known as “data engineering,” is the next evolution of data science. These kinds of jobs don’t require as much specialized knowledge and are accessible to a much broader array of people with coding experience.
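As a rough illustration of what that data-engineering work looks like, here is a minimal Python sketch of the ingest-clean-score loop described above. The file name, endpoint URL, credential and field names are hypothetical placeholders, not any particular cloud provider’s API.

```python
# Minimal sketch of the "data engineering" loop: ingest raw records,
# clean them, then hand them to a hosted model for scoring.
# The CSV path, endpoint URL, API key, and column names are all
# hypothetical placeholders, not a specific vendor's interface.
import pandas as pd
import requests

RAW_CSV = "daily_orders.csv"               # assumed local data export
SCORING_URL = "https://example.com/score"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                   # placeholder credential

def build_payload(path: str) -> list[dict]:
    """Load, clean, and reshape the raw data for the scoring service."""
    df = pd.read_csv(path)
    df = df.dropna(subset=["order_id", "amount"])  # drop incomplete rows
    df["amount"] = df["amount"].clip(lower=0)      # remove negative noise
    return df[["order_id", "amount", "region"]].to_dict(orient="records")

def send_for_scoring(records: list[dict]) -> dict:
    """POST the cleaned records to the hosted prediction service."""
    resp = requests.post(
        SCORING_URL,
        json={"instances": records},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    payload = build_payload(RAW_CSV)
    print(send_for_scoring(payload))
```

Most of the effort in jobs like this sits in the cleaning and plumbing steps, not in the model itself, which is exactly the shift from data science toward data engineering described above.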
Yes, AI is failing to deliver on some of its biggest promises. (Sorry, people eager to welcome our robot overlords.) But we’re entering a time when the things it does well are becoming more apparent.
“I think when people say ‘AI’ they still think robots and chatbots,” says Ms. Roumeliotis. “But basically it’s a thing that can help you analyze data and make choices.”
