A Blog by Jonathan Low

 

Mar 26, 2017

Does Amazon's Alexa Have Free Speech Rights? Should It?

Amazon is trying to protect the rights to which it believes it is entitled as the manufacturer and seller of the Alexa system. The real issue for consumers is what rights they have, and what those rights are worth. JL

Toni Massaro and colleagues report in Slate:

Alexa does not “think” or speak on her own; her actions are traceable to her programmers’ choices. So Amazon claims Alexa’s response to users is actually Amazon’s protected speech. For a human right, free speech is inattentive to the humanness of speakers. Alexa might push courts to stop using the First Amendment to deregulate, and spend more time determining what harms are worth preventing and when human listeners have rights, too.
In November 2015, Victor Collins was found dead in a hot tub in James Bates’ home in Bentonville, Arkansas. Bates was charged with murder. During their investigation, police discovered that he owned an Amazon Echo—a device that, upon voice activation through the “wake” word “Alexa,” answers questions, provides sports scores, and “[h]ears you from across the room with far-field voice recognition, even while music is playing.” Alexa, in other words, both speaks to users and listens to and records them. The police sought and received a warrant to obtain audio recordings made by the Echo. Concerned that “rumors of an Orwellian federal criminal investigation into the reading habits of Amazon’s customers could frighten countless potential customers,” Amazon filed a motion to quash the warrant on Feb. 17, 2017. Buried in that motion was a striking claim: that Alexa’s responses to user queries are protected by the First Amendment.
Amazon has dropped its objection since Bates himself agreed to have the information handed over to law enforcement, so the First Amendment argument will not be addressed in this case. But it’s highly likely to crop up again in the future. Alexa is an example of weak artificial intelligence, or applied A.I. It (“she”?) responds to narrow requests with a narrow range of responses, and is a far cry from A.I. that can think like a human (called “strong A.I.” or “artificial general intelligence”). Alexa does not “think” or speak on her own; her actions are traceable to her programmers’ choices. So Amazon does not, in fact, claim that Alexa herself has First Amendment rights. Instead, it claims that Alexa’s response to users is actually Amazon’s protected speech.
Whether the First Amendment protects Amazon’s speech through Alexa echoes a debate from a few years ago about whether search engine results are protected. Back in 2003, Google asserted that its search engine results were protected by the First Amendment. Eugene Volokh, a law professor at the University of California–Los Angeles known for his First Amendment scholarship, wrote a Google-commissioned white paper arguing that search engine results were like the pages of a newspaper: protectable because of editorial choices. Some agreed. Others countered that search engines were more like information platforms or conduits that should be regulated to prevent unfair behavior; or like advisers who owe duties of disclosure and loyalty to users. In 2014, a district court held that Baidu’s search engine results were in fact protected by the First Amendment, citing Volokh’s reasoning and analogizing the search engine to a newspaper. This kind of decision makes it difficult, if not impossible, to regulate search engine outputs, for better or for worse.
In the realm of weak A.I.—which some believe includes search engines—courts may be comfortable granting First Amendment protection to A.I. speech as an extension of the rights of the programmer. But what if Alexa were strong A.I.? Many scientists say the question is moot: They believe artificial intelligence will never achieve that status. But for the sake of argument, let’s imagine it does. Then Amazon’s analogy to editing breaks down. As Alexa’s emergent behavior becomes more and more unpredictable, more divorced from the intentions of her programmers, it will be harder to use old analogies to determine whether her output is protected speech.
Despite the limitations of old analogies, as we argue in a recent paper, a strong A.I. Alexa might well be protected under current First Amendment law. When legal analogies break down, courts and scholars turn to theory: explanations for why we provide rights protections in the first place. The First Amendment has been justified under a range of theories, all of which may support providing First Amendment protection for strong A.I. speech. This is especially the case when you focus not on the A.I. speaker but on human listeners and readers. Under the marketplace of ideas theory, we protect speech to increase the stock of ideas from which people can draw. A.I. speech adds ideas to the marketplace.
Another popular theory points out that the First Amendment enables democratic self-governance, and A.I. speech (say, A.I.-written news stories) could help human individuals make important decisions about government. Only one major theory—autonomy theory—potentially runs into concerns when a speaker is not human. And even there, A.I. speech may be covered by the First Amendment because the law cares about the autonomy of human listeners, too.
Today’s First Amendment law seems to care very little about the humanness of speakers. The Supreme Court has notoriously recognized speech protection for corporations. First Amendment cases don’t hinge on how caring or shame-filled a human speaker might be. In fact, the exact opposite occurs: The First Amendment protects even the most callous human speakers.
Nonetheless, arguing that First Amendment coverage may extend to strong A.I. speakers raises a number of legitimate concerns. If A.I. is protected, why not protect speech by cats or by parts of nature, like waves? Well, for one thing, unlike a meow or a crashing sound, A.I. speech uses words and is therefore more likely to be understood as conveying a message. A.I. is also more likely to be central to some human communications effort, supplanting human communication. In other words, we often construct A.I. to serve an essentially communicative function. When it doesn’t—when A.I. “speech” is really more like conduct or an expressive act, say by dancing or staging a protest march or building things—then courts will have to engage in their usual difficult disentangling of speech from nonspeech harms. This is not a problem unique to A.I.: Courts face it when analyzing flag-burning, parades, software, and even 3-D printing. Each of these acts has expressive elements, and each can also have a nonspeech component that causes physical or similar harms.
As with protecting search engine speech, protecting A.I. speech risks subordinating the rights of users. But the theoretical justifications for protecting A.I. speech—because it contributes to the marketplace of ideas, because it helps users participate in democracy, and because it protects listeners’ autonomy—are all grounded in the rights of human listeners. Thus, if A.I. is deceptive or dangerous, there is a stronger justification for the government to intervene to protect human listeners than there would be when a speaker is human and has rights, too.
When strong A.I. Hello Barbie tries to sell your child candy or an upgrade, the government may have a strong interest in intervening despite the First Amendment values at stake. First Amendment coverage (that something is considered speech) does not always mean First Amendment protection (that speech wins out over other concerns).

Ultimately, contemplating First Amendment coverage of A.I. speech teaches us about current First Amendment law. For a human right, free speech is surprisingly inattentive to the humanness of speakers. Alexa’s stronger progeny might thus push courts to stop using the First Amendment as a tool of inevitable deregulation, and to spend more time determining what harms are worth preventing and when human listeners have rights, too.
