Cutting corners: Legal fees certainly aren't cheap, so when we retain legal representation, we assume we're paying for that professional's time and expertise. Rather than provide those services the traditional way, one Manhattan lawyer tried to shorten the research process by letting ChatGPT generate his case citations for a federal court filing. As he found out the hard way, fact-checking is pretty important, especially when your AI has a penchant for making up facts.

Attorney Steven A. Schwartz was retained to represent a client in a personal injury case against Avianca Airlines. According to the claim, Schwartz's client was struck in the knee by a serving cart during a 2019 flight into Kennedy International Airport.

As one would expect in this type of legal situation, the airline asked a Manhattan federal judge to toss the case, a request Schwartz immediately opposed. So far, it sounds like a pretty typical courtroom exchange. That is, until Schwartz, who by his own admission had never used ChatGPT before, decided it was time to let technology do the talking.

In his opposition to Avianca's request, Schwartz submitted a 10-page brief citing several seemingly relevant court decisions. The citations referenced similar cases, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. According to The New York Times, the last citation even provided a lengthy discussion of federal law and "the tolling effect of the automatic stay on a statute of limitations."

While it sounds like Schwartz came armed and ready to defend the case, there was one underlying problem: none of those cases are real. Martinez, Zicherman, and Varghese don't exist. ChatGPT fabricated them all for the sole purpose of supporting Schwartz's submission.

When Judge P. Kevin Castel confronted him with the error, Schwartz conceded that he had no intent to deceive the court or the airline. He also expressed regret for relying on the AI service, admitting that he was "unaware of the possibility that its content could be false." According to Schwartz's statements, he at one point attempted to verify the authenticity of the citations by asking the AI whether the cases were in fact real. It simply responded with "yes."

Judge Castel has ordered a follow-up hearing on June 8 to discuss potential sanctions related to Schwartz's actions. Castel's order aptly described the strange new situation as "an unprecedented circumstance," with a filing littered with "bogus judicial decisions, with bogus quotes and bogus internal citations." And in a cruel twist of fate, Schwartz's case could very well end up cited in future AI-related court cases.