Lawyer Faces Sanctions for Using Fake ChatGPT Citations

A New York lawyer is in hot water for submitting a legal brief with references to cases that were made up by ChatGPT

As The New York Times reports, Steven Schwartz of the firm Levidow, Levidow and Oberman cited six fake judicial decisions in a 10-page brief while representing a plaintiff who was suing the Colombian airline Avianca over an injury sustained on a flight.

The brief, which argued that the suit should be allowed to proceed, cited cases that had been entirely invented by ChatGPT and that Schwartz had failed to verify.

In an affidavit, Schwartz admitted to using ChatGPT while researching the brief and accepted responsibility for not verifying the AI chatbot’s sources.

Schwartz said he “was unaware of the possibility that its content could be false” and maintained that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

This came after US District Judge Kevin Castel wrote in a May 4 order: “The court is presented with an unprecedented circumstance… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”

The affidavit contained screenshots of the attorney being told by ChatGPT that the cases it was providing were real and could be found on any “reputable legal database.” The screenshots also show Schwartz asking the AI chatbot for the source of one bogus case: Varghese v. China Southern Airlines.


ChatGPT replied: “I apologize for the confusion earlier. Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

As the Times notes, Schwartz is now set to face a sanctions hearing on June 8.

The saga is the latest in a series of incidents of AI chatbots producing misinformation. Long conversations with Microsoft’s Bing resulted in the chatbot exhibiting manipulation tactics such as gaslighting, while Bard, Google’s AI chatbot, made a false claim about the James Webb Space Telescope during its debut.
