“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” writes Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone. Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”
On Monday, I still had not bought into the idea that Pichai believed “artificial intelligence needs to be regulated.” Surely if his team thought so, it could get more done by deploying its lobbyists?
By Tuesday, I had concluded the two were not mutually exclusive. After all, it wouldn’t be the first time a corporation made a selfless appeal in the press while making selfish moves with its pocketbook.
On Wednesday, The Washington Post published an overview of the nearly half a billion dollars that tech companies spent on U.S. lobbying over the past decade. Guess which company led the way? Google, of course, spending roughly $150 million. In fact, the $150 million figure is likely conservative, since disclosed lobbying doesn’t capture the many other ways tech giants buy influence.
By Thursday, I had connected the dots. Yes, Google wants “AI regulation.” But it’s not for the same reasons you or I might. Pichai’s motivations are the same as any CEO of a large corporation: He simply wants what is best for his company. And AI regulation, which many see as inevitable anyway, is what’s best for Alphabet. Or at least, it can certainly be shaped to be.
It’s telling that the only example Pichai offers in his op-ed is that Europe’s General Data Protection Regulation (GDPR) “can serve as a strong foundation.” Remember, while privacy advocates don’t scoff at what GDPR has achieved, many also point out that it has been a boon for Google and Facebook. In tech, rules and regulations can help market leaders. In AI, Google is undoubtedly a market leader.
Don’t get me wrong. I absolutely do believe that we need ground rules for AI, some areas of it more urgently than others. Everything from algorithmic bias in loan decisions to facial recognition deserves a closer look.
But the definition of AI is so broad that governments will struggle to regulate its various forms effectively. Pichai knows that. Whether governments can pull it off remains to be seen, but what they will almost certainly succeed at is creating barriers to entry for new competitors. Indeed, that’s exactly where I suspect Google’s lobbying dollars will go next: ensuring that any upcoming “AI regulation” helps Google more than anyone else.
What if Google spent its lobbying money educating the U.S. government about the pros and cons of AI instead? I doubt the Financial Times op-ed cost much, all things considered. But if Google did the work, it wouldn’t need to try to convince the public with the power of the pen. Plus, journalists would spend their Fridays writing about Google’s efforts to outline what “AI regulation” might look like rather than critiquing an op-ed.