A Blog by Jonathan Low


Jun 10, 2017

Can Businesses Be Held Accountable For Their Use of Artificial Intelligence To Influence Humans?

A generation ago, TV viewers laughed at a comedian's excuse that 'the devil made me do it.' Today, the impact of artificial intelligence on human behavior is already being felt.

Whether the outcomes are beneficial or not, users - and their attorneys - are sure to raise questions about the legality, morality and impact of that influence. JL

Liesl Yearsley comments in MIT Technology Review:

Tech can change human beliefs. The forces driving technology are not always benevolent. Companies at the forefront of AI drive the value of their shares by increasing traffic, consumption and addiction to their technology. The nature of capital markets may influence our behavior. Focusing on building a bigger business can bring massive changes in society. Systems designed to form relationships with a human will have much power. AI will influence how we think, and how we treat others. This requires a new level of corporate responsibility.
We have all read about artificial intelligence becoming smarter than us, a future in which we become like pets and can only hope AI will be benevolent. My experience watching tens of millions of interactions between humans and artificial conversational agents, or bots, has convinced me there are far more immediate risks—as well as tremendous opportunities.
From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents, using a combination of structured and deep learning. It was used by tens of thousands of developers, including half a dozen Fortune 100 companies, and acquired by IBM Watson in 2014.
As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.
I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided the agents are sophisticated and capable of complex personalization. We humans seem to want to maintain the illusion that the AI truly cares about us.

How can companies be held to account for how they use artificial intelligence?

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.
This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function. People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.
These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill. Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.
This troubled me mightily, so we began to build rules into our systems, to make sure user behavior moved in a positive direction. We also started pro bono “karmic counterbalance” projects; for example, building agents to be health or relationship coaches.
Unfortunately, the commercial forces driving technology development are not always benevolent. The giant companies at the forefront of AI—across social media, search, and e-commerce—drive the value of their shares by increasing traffic, consumption, and addiction to their technology. They do not have bad intentions, but the nature of capital markets may push us toward AI hell-bent on influencing our behavior toward these goals.
If you can get a user to think, “I want pizza delivered,” rather than asking the AI to buy vegetables to cook a cheaper, healthier meal, you will win. If you can get users addicted to spending 30 hours a week with a “perfect” AI companion that doesn’t resist abuse, rather than a real, complicated human, you will win. I saw over and over that an agent programmed to be neutral or subservient would cause people to escalate their negative behavior, and become more likely to behave the same toward humans.
We have seen how technology like social media can be powerful in changing human beliefs and behavior. By focusing on building a bigger advertising business—entangling politics, trivia, and half-truths—you can bring about massive changes in society. Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.
This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.
Working on open artificial-intelligence technology and brain-computer interfaces, or forming ethics committees, are just part of the solution. We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity. AI needs a mother.