A Blog by Jonathan Low


Jun 24, 2023

Lawyers Keep Trying To Use ChatGPT - And Its Phony Citations Keep Costing Them

Lawyers are supposed to be well educated and detail-oriented - but they keep taking shortcuts with ChatGPT despite the fact that it has been proven to invent fake legal citations to bolster the arguments it is asked to write. 

The question is why they feel such pressure to use a tool whose unreliability is becoming legendary in their profession. Or maybe legal education just ain't what it used to be. JL

Jon Brodkin reports in Ars Technica:

A federal judge tossed a lawsuit and issued a $5,000 fine to the plaintiff's lawyers after they used ChatGPT to research court filings that cited six fake cases invented by the AI tool. The lawyers "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question," US District Judge Kevin Castel wrote yesterday. The lawyers "advocated for the fake cases and legal arguments" even "after being informed by their adversary's submission that their citations were non-existent and could not be found."


Lawyers Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow, & Oberman "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question," US District Judge Kevin Castel wrote in an order yesterday. The lawyers, Castel wrote, "advocated for the fake cases and legal arguments" even "after being informed by their adversary's submission that their citations were non-existent and could not be found."

Case dismissed

As we wrote last month, Schwartz admitted using ChatGPT for research and did not verify whether the "legal opinions" provided by the AI chatbot were accurate. Schwartz wrote in an affidavit that he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."

The real case, Roberto Mata vs. Avianca, was originally filed in a New York state court but was moved to US District Court for the Southern District of New York. Schwartz was representing Mata in state court but wasn't admitted to practice in the federal court. Schwartz continued to write the legal briefs and LoDuca filed them under his own name.

Mata sought damages for injuries suffered during an Avianca flight from El Salvador to New York in August 2019 when a metal snack and drink cart struck his knee. Mata's lawyers used the phony citations from ChatGPT to argue that the case should be moved back to the New York state court where a three-year statute of limitations would apply.

Unsurprisingly, their argument citing phony cases wasn't persuasive to the judge. In addition to punishing the lawyers, Castel yesterday granted Avianca's motion to dismiss the case. The judge agreed with the defendant that a two-year statute of limitations under the Montreal Convention applies and that the plaintiff's lawsuit was filed too late.

“I just never thought it could be made up”

The dispute over fake precedents played out over a few months. On March 1, Mata's lawyers cited the fake cases in a brief that opposed Avianca's motion to dismiss the case.

"But if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant's March 15 brief questioning the existence of the cases, or after they reviewed the Court's Orders of April 11 and 12 requiring production of the cases, the record now would look quite different," Castel wrote. "Instead, the individual Respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an Order to Show Cause why one of the individual Respondents ought not be sanctioned."

Castel found that the lawyers were guilty of "bad faith" and "acts of conscious avoidance and false and misleading statements to the Court." While Schwartz wrote the bogus legal filings, LoDuca didn't check them for accuracy.

"Mr. LoDuca simply relied on a belief that work produced by Mr. Schwartz, a colleague of more than twenty-five years, would be reliable," Castel wrote. But Schwartz's practice was exclusively in state court. The lawyers admitted in a memorandum of law that Schwartz attempted "to research a federal bankruptcy issue with which he was completely unfamiliar."

At a June 8 hearing on potential sanctions, Schwartz testified that he was "operating under the false perception that this website [ChatGPT] could not possibly be fabricating cases on its own." Schwartz stated, "I just was not thinking that the case could be fabricated, so I was not looking at it from that point of view... My reaction was, ChatGPT is finding that case somewhere. Maybe it's unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up."

The Levidow firm did not have Westlaw or LexisNexis accounts, instead using a Fastcase account that had limited access to federal cases. Schwartz testified that he "heard about this new site which I assumed—I falsely assumed was like a super search engine called ChatGPT, and that's what I used."

Lawyer relied exclusively on ChatGPT

According to Castel, "Schwartz testified that he began by querying ChatGPT for broad legal guidance and then narrowed his questions to cases that supported the argument that the federal bankruptcy stay tolled the limitations period for a claim under the Montreal Convention."

Schwartz untruthfully claimed that ChatGPT merely "supplemented" his research, Castel wrote. Schwartz later admitted that his entire research consisted of querying ChatGPT. "It became my last resort," he testified.

In April, the judge issued orders directing LoDuca to file copies of the precedents that no one could locate. LoDuca requested an extension of time, claiming that he was out of the office on vacation.

"Mr. LoDuca's statement was false and he knew it to be false at the time he made the statement. Under questioning by the Court at the sanctions hearing, Mr. LoDuca admitted that he was not out of the office on vacation," Castel wrote. LoDuca testified that Schwartz was away and that "I just attempted to get Mr. Schwartz the additional time he needed because he was out of the office at the time."

According to Castel, this "lie had the intended effect of concealing Mr. Schwartz's role in preparing the March 1 Affirmation and the April 25 Affidavit and concealing Mr. LoDuca's lack of meaningful role in confirming the truth of the statements in his affidavit. This is evidence of the subjective bad faith of Mr. LoDuca."

The April 25 filings included a series of "excerpts" from the bogus decisions. The filing "did not comply with the Court's Orders of April 11 and 12 because it did not attach the full text of any of the 'cases' that are now admitted to be fake," Castel wrote.

“Gibberish,” “borders on nonsensical”

The fake Varghese v. China Southern Airlines excerpt submitted by LoDuca and Schwartz, supposedly issued by the US Court of Appeals for the 11th Circuit, "shows stylistic and reasoning flaws that do not generally appear in decisions issued by United States Courts of Appeals," Castel wrote. "Its legal analysis is gibberish... The summary of the case's procedural history is difficult to follow and borders on nonsensical."


The Varghese decision also "includes internal citations and quotes from decisions that are themselves non-existent," the judge noted. The other fake precedents submitted to the court had similar flaws.

Despite that, the lawyers have never "written to this Court seeking to withdraw the March 1 Affirmation in Opposition or advise the Court that it may no longer rely upon it," Castel wrote.

Castel also found that Schwartz acted in bad faith and violated a federal rule when, in his April 25 affidavit, he did not reveal that he couldn't find the full decisions cited in his previous filing. "Poor and sloppy research would merely have been objectively unreasonable. But Mr. Schwartz was aware of facts that alerted him to the high probability that 'Varghese' and 'Zicherman' did not exist and consciously avoided confirming that fact," Castel wrote.

"Mr. Schwartz's subjective bad faith is further supported by the untruthful assertion that ChatGPT was merely a 'supplement' to his research, his conflicting accounts about his queries to ChatGPT as to whether 'Varghese' is a 'real' case, and the failure to disclose reliance on ChatGPT in the April 25 Affidavit," the judge added.

Some changes are in the works at the law firm to prevent repeats of the fiasco. Thomas Corvino, the sole equity partner of the Levidow firm, "has acknowledged responsibility, identified remedial measures taken by the Levidow Firm, including an expanded Fastcase subscription and CLE [continuing legal education] programming, and expressed his regret for Respondents' submissions," Castel wrote.

The judge doesn't think it will happen again. "The Court credits the sincerity of Respondents when they described their embarrassment and remorse," Castel wrote. "The fake cases were not submitted for any respondent's financial gain and were not done out of personal animus. Respondents do not have a history of disciplinary violations and there is a low likelihood that they will repeat the actions described herein."
