Two lawyers who submitted fake cases generated by ChatGPT were handed a joint $5,000 fine and ordered to notify the judges named in the fabricated cases about the situation, marking the first major sanctions to come from the use of artificial intelligence in the legal field.
In his decision, New York District Judge P. Kevin Castel said Steven Schwartz, the lawyer who used ChatGPT, and Peter LoDuca, Schwartz's colleague and the attorney who took over the case when it moved out of Schwartz's jurisdiction, "consciously avoided" signs that the cases they cited were fake, thereby acting in bad faith and misleading the court.
Castel said that while there is "nothing inherently improper about using a reliable artificial intelligence tool for assistance," the two lawyers were sanctioned because they abandoned their responsibilities and stood by the false cases, letting "many harms (to) flow from the submission."
Had the pair come clean about their mistakes, Castel wrote, "the record now would look quite different," but instead they doubled down and presented the fake court cases and opinions created by artificial intelligence as fact.
Levidow Levidow & Oberman, PC, Schwartz’s law firm, told Forbes they are considering their options and “have made no decision as to whether to appeal.”
In a statement to Forbes, the firm said they "fully intend to comply" with the court's order but "respectfully disagree" that anyone at the firm "acted in bad faith." The statement continued: "We continue to believe that in the face of what the Court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth."
Schwartz had served as legal counsel for Roberto Mata, who sued Avianca Airlines on allegations that he was "struck by a metal serving cart" aboard a 2019 flight and suffered personal injuries. After Avianca moved to dismiss the case, Schwartz responded by citing six cases as precedent, including Varghese v. China Southern Airlines and Shaboon v. EgyptAir. But the court found that the cases, later revealed to have been created by ChatGPT, didn't exist and contained "bogus judicial decisions with bogus quotes and bogus internal citations."

When the defense questioned some of the cases and asked for the underlying documents and opinions, Schwartz went back to ChatGPT, which gave him fabricated legal documents that he then presented to the court as fact, leading Castel to consider sanctions.

Schwartz signed an affidavit admitting he had used the AI chatbot but maintained he had no intent to deceive the court and did not act in bad faith. He said he was "mortified" upon learning about the false cases and "did not understand (ChatGPT) was not a search engine, but a generative language-processing tool."
Castel also threw out Mata's suit against Avianca, the case in which ChatGPT was used, ruling it had been filed too late.
Phony ChatGPT Brief Leads to $5,000 Fine for NY Lawyers (Bloomberg)
Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions (Forbes)