A Blog by Jonathan Low


Apr 23, 2023

AI Has Already Transformed Human Life But Most Of Its Changes Are Invisible

Many of AI's most transformative impacts, on everything from search to insurance coverage to advertising targeting, have already been felt thanks to the quiet background implementation of algorithmic models, which are becoming even more common as they are updated with better data. JL

Christopher Mims reports in the Wall Street Journal:

If you’re worried artificial intelligence will transform your job, insinuate itself into your daily routines, or lead to wars fought with autonomous systems, you’re a little late - all of those have already come to pass. They’re integrated into search and productivity tools from Microsoft, Google, and startups in every field, from healthcare and logistics to tax prep and videogames. Much of what AI does on a daily basis is invisible. The AI-powered prediction algorithms that decide which advertisements to deliver to your social feed with such accuracy leverage (the same math and algorithms) insurers use to decide what to charge for a policy, both enabled by big data. (But) computers rather than humans are now building the models.

If you’re worried that artificial intelligence will transform your job, insinuate itself into your daily routines, or lead to wars fought with lethal autonomous systems, you’re a little late—all of those things have already come to pass.

The AI revolution is here. Recent developments like AI chatbots are important, but serve mostly to highlight that AI has been profoundly affecting our lives for decades—and will continue to for many more.

What’s unique about this moment is that new systems like text-generating AIs, such as ChatGPT, and image-generating AIs, like DALL·E 2 and Midjourney, are the first consumer applications of AI. They allow regular people to use AI to make things. That’s awoken many of us to its potential.

As Cara LaPointe, co-director of the Johns Hopkins Institute for Assured Autonomy, told me recently, “In terms of public consciousness of AI, we are at an inflection point.”


In the past you had to have the resources of Google to create something useful with AI. Now anyone with an internet connection can. And this is just the beginning of the potential utility of these systems.

Microsoft co-founder Bill Gates said in a recent essay that we are now living in “the age of AI.” He compared these systems to the first graphical user interfaces—that is, the first versions of the Windows and Macintosh operating systems. He outlined a not-too-far-fetched future in which talking with machines through natural language interfaces becomes the new, dominant way to interact with them.

In the meantime, artificial intelligence has been an essential tool for fighting our wars, protecting our finances, operating our capital markets, insuring our assets, targeting our advertisements and powering our search results—for more than a decade, and in some cases decades.

Most people don’t know this history, says David MacInnis, vice president of analytics and actuarial modernization at insurer Allstate. For most of its history, AI was the sole purview of mathematicians and computer scientists, after all. And it wasn’t called AI, because that term had fallen out of fashion. Instead, engineers talked about generalized linear models, generalized boosted models, or decision trees.

Later, broad classes of these algorithms were all grouped under “machine learning,” and engineers started using artificial neural networks, inspired by neurons in the brain, in place of other mathematical techniques. In general, these systems were designed to recognize patterns and predict outcomes—giving them a significant overlap with another blanket term, “predictive analytics.”

“Insurers have been doing really deep levels of predictive analytics for over two decades now,” Dr. MacInnis says. The AI-powered prediction algorithms that decide which advertisements to deliver to your social feed with such uncanny accuracy that they have convinced millions of people their phones are listening to them? They leverage the same kind of math and algorithms insurers use to decide what to charge for a policy—and both systems are enabled by big data.
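The “same kind of math” point can be made concrete with a toy sketch. The example below is purely illustrative (synthetic data, a hypothetical one-feature logistic model fit by gradient descent, not any insurer's or ad platform's actual system): the identical learning code produces an ad-click predictor in one run and a claims-risk predictor in the other, differing only in the data fed in.

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit a one-feature logistic model p = sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                          # gradient w.r.t. w
            gb += (p - y)                              # gradient w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Same algorithm, two domains (synthetic data):
# 1) ad targeting: feature = minutes on site, label = clicked the ad
ad_w, ad_b = fit_logistic([1, 2, 3, 8, 9, 10], [0, 0, 0, 1, 1, 1])
# 2) insurance pricing: feature = prior claims, label = filed a claim this year
ins_w, ins_b = fit_logistic([0, 0, 1, 2, 3, 4], [0, 0, 0, 1, 1, 1])

print(predict(ad_w, ad_b, 9) > 0.5)    # heavy user: likely to click
print(predict(ins_w, ins_b, 0) < 0.5)  # no prior claims: low predicted risk
```

Real systems use thousands of features and far more sophisticated models, but the shape is the same: one generic fitting procedure, specialized entirely by the data it sees.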

Dr. LaPointe calls our current era “learning-based AI.” What characterizes this time is that computers—rather than humans—are now building the models that machines use to accomplish a task.

Even “generative” AI is a bit of a misnomer—ChatGPT is using many of the same prediction algorithms and related technologies AI scientists have been developing for years, but it uses them to predict which word to add next to a block of text, instead of, say, whether an image is of a cat.
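A vastly simplified sketch of that “predict the next word” idea (assumptions: a tiny made-up corpus and a bigram counter standing in for billions of learned parameters; real language models like ChatGPT work on sub-word tokens with neural networks, not lookup tables):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' (follows "the" most often above)
```

The output is just a remix of the training data, which is the essence of the point: scale up the corpus and the statistical machinery enormously, and the predictions start to look like writing.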


These new generative AI systems, which pull together almost every trick cooked up by AI researchers since the turn of the millennium, are doing things no AI has ever done before. And that’s why they’re being integrated into search and productivity tools from Microsoft, Google, and countless startups in every field imaginable, from healthcare and logistics to tax prep and videogames.

But then, the last dozen or so new AI systems rolled out in the past couple of decades have also accomplished things no AI had ever done before. And, without most of us being fully cognizant of it, they’ve transformed whole industries, from retail and logistics to media and banking.

The AIs that did all that are all around us. We invoke them every time our speech is decoded by our smart assistants, we find what we are looking for on Google, we order something and it arrives the same day, our social media feed is sorted for us by Facebook or TikTok, we get an instant quote online from an insurance broker, or a cruise missile finds its target a thousand miles from where it was launched.

Part of the reason that people are so excited about AI right now is that when we see rapid progress in a field, we tend to project that into the future, says Adam Ozimek, chief economist at the Economic Innovation Group, a policy research organization in Washington, D.C. What we forget is that technology often progresses in fits and starts.

As with every new technology, one way to get a preview of the potential impacts of AI is to understand how it works, and what it’s actually good for. Cal Newport, an assistant professor of computer science at Georgetown University, has written about as accessible an essay on how ChatGPT actually works as we’re likely to get (it still clocks in at more than 4,000 words). He concludes that ChatGPT confuses otherwise sophisticated human thinkers into believing it’s more capable than it is, because when they read its well-crafted prose, they mistakenly imagine the mind that would be required to generate such prose.

But ChatGPT has no mind. It has more in common with a search engine than even the most primitive of brains. If we are impressed by its abilities, we must remember that they are a product not of its intelligence, but its scale. ChatGPT required that its engineers cram into it more or less all the text on the entire open web, so that it would have enough reference material to be able to remix it in a way that seems like original thinking, but isn’t. Dr. Newport concludes his essay in the New Yorker by saying that “a system like ChatGPT doesn’t create, it imitates.”

That doesn’t exactly sound like the kind of AI that is about to attain sentience and decide it’s better off without its pesky human overlords. When our machines do things that we once thought were the sole domain of humans—whether it’s beating us at chess, or writing an essay—the general cultural and economic trend is that humans reassign themselves to the tasks the machines aren’t nearly as good at, and become more productive in the process.

We have been here many times before. The financial incentives to hype a new technology never change, nor does our tendency to both fear and celebrate whatever is the newest, shiniest product of our civilization’s ever-growing expenditure on research and development.

That doesn’t mean that AI won’t be transformative—clearly, it already has been.

