A Blog by Jonathan Low


Mar 9, 2023

Why AI Chatbots May Be Vulnerable To Significant Legal Liability

Since AI - and especially generative AI - actually creates content, versus ostensibly 'merely' hosting it, it could theoretically fall outside Section 230, the law that provides big tech and social media companies such protection.

If judges were to rule that is the case, the owners of chatbots could be sued over what those bots generate. It is prudent to assume this very business-friendly court will try to avoid that outcome, but even so, legal challenges are sure to follow. JL

Will Oremus and Cristiano Lima report in the Washington Post:

The Supreme Court stumbled on a cutting-edge legal debate: the legal protections that shield social networks from lawsuits over user content might not apply to work generated by AI, like ChatGPT. There’s a case to be made that the output of a chatbot would be considered content developed by the search engine itself, rendering Google or Microsoft the “publisher or speaker” of the AI’s responses. If judges agree, that could expose tech companies to lawsuits accusing chatbots of everything from providing libelous descriptions to offering faulty investment advice to helping a terrorist group craft its recruiting materials. If the court looks to draw limits on Section 230, AI makers should start bracing for legal headwinds.

During oral arguments last week for Gonzalez v. Google, a case about whether social networks are liable for recommending terrorist content, the Supreme Court stumbled on a separate cutting-edge legal debate: Who should be at fault when AI chatbots go awry?

While the court may not be, as Justice Elena Kagan quipped, “the nine greatest experts on the internet,” the justices’ questions could have far-reaching implications for Silicon Valley, according to tech experts.

Justice Neil M. Gorsuch posited at the session that the legal protections that shield social networks from lawsuits over user content — which the court is directly taking up for the first time — might not apply to work that’s generated by AI, like the popular ChatGPT bot.


“Artificial intelligence generates poetry,” he said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected. Let’s assume that’s right.”

While Gorsuch’s suggestion was a hypothesis, not settled law, the exchange got tech policy experts debating: Is he right?

Entire business models, and perhaps the future of AI, could hinge on the answer. 

The past year has brought a profusion of AI tools that can craft pictures and prose, and tech giants are racing to roll out their own versions of OpenAI’s ChatGPT. 

Already, Google and Microsoft are embracing a near future in which search engines don’t just return a list of links to users’ queries, but generate direct answers and even converse with users. Facebook, Snapchat and Chinese giants Baidu and Tencent are hot on their heels. And some of those AI tools are already making mistakes.


In the past, courts have found that Section 230, a law shielding tech platforms from being liable for content posted on their sites, applies to search engines when they link to or even publish excerpts of content from third-party websites. 

But there’s a case to be made that the output of a chatbot would be considered content developed, at least in part, by the search engine itself — rendering Google or Microsoft the “publisher or speaker” of the AI’s responses. 

If judges agree, that could expose tech companies to a flood of lawsuits accusing their chatbots of everything from providing libelous descriptions to offering faulty investment advice to aiding a terrorist group in crafting its recruiting materials. 

In a post on the legal site Lawfare titled, “Section 230 won’t protect ChatGPT,” Matt Perault of the University of North Carolina argued just that. And he thinks it’s going to be a big problem, unless Congress or the courts step in.


“I think it’s a massive chill on innovation” if AI start-ups have to worry that they could be sued for artificially generated content, said Perault, a former policy official at Facebook who now directs a tech policy center at UNC. 

He suggested that a better approach might be for Congress to grant AI tools temporary immunity, allowing the booming sector to grow unfettered, while studying a longer-term solution that provides partial but not blanket immunity.

Not everyone agrees that Section 230 wouldn’t apply to AI tools, however.

“Just because technology is new doesn’t mean that the established legal principles underpinning the modern web should necessarily be changed,” said Jess Miers, legal advocacy counsel for the left-leaning trade group Chamber of Progress.

The group receives funding from tech companies including Google, Apple and Amazon. (Amazon founder Jeff Bezos owns The Washington Post.)


Miers noted that generative AI typically produces content only in response to prompts or queries from a user; those responses could be seen as simply remixing content from the third-party websites whose data the model was trained on.

How the Supreme Court rules in Gonzalez v. Google could offer clues as to the future of tech company liability for generative AI. 

If the court heartily affirms that Section 230 protects YouTube’s recommendation software, that could clear a path for an expansive interpretation of the law that covers tools like Bing, Bard and ChatGPT, too. If the court looks to draw limits on Section 230 here, that could be a sign that Gorsuch got it right — and AI makers should start bracing for legal headwinds.

Google and Microsoft declined to comment for this story.
