Bloomberg Law
April 18, 2024, 9:00 AM UTC

ChatGPT Will Come for Partners’ Work in Contract Law, Says Prof

Roy Strom
Reporter

Welcome back to the Big Law Business column. I’m Roy Strom, and today we look at a unique idea about how ChatGPT could transform the business of law. Sign up to receive this column in your inbox on Thursday mornings.

David Hoffman is a University of Pennsylvania law professor who specializes in contracts. When he looks into the future of contract disputes, he sees a world that’s been dramatically altered by the technology underpinning ChatGPT. Big Law partners might not enjoy his view.

That’s because of a simple argument Hoffman makes about generative artificial intelligence.

He says “generative interpretation” can replace the messy and expensive way lawyers currently hash out the meaning of words in legal agreements, using dictionaries and Latin canons of construction.

“Giving courts a convenient way to commit to a cheap and predictable contract interpretation methodology would be a major advance in contract law,” Hoffman wrote last year in a research paper that he co-authored with Yonathan Arbel, a University of Alabama law school professor. “As generative interpretation offers this possibility, we argue it can become the new workhorse of contractual interpretation.”

The paper was an effort to educate judges on how the technology could be used to help them double-check their gut instincts.

But the downstream effect of judges actually adopting this method could have massive ramifications for the business and practice of law.

This is the transformative idea: If judges accept large language models as a valid way to interpret contested contract terms, parties drafting contracts could preemptively run those models to settle potential disputes before they arise, letting the models decide the outcome. That would eliminate much of the uncertainty around contracts, making litigation rare.

“It 100% seems like a real fit. But the thing that’s challenging is it’s so core to what judges and lawyers think is their expertise that there will be a lot of pushback,” Hoffman said. “This is the partner work. This is the high-value work. It’s not the low-value work we want to get rid of. This is the stuff we want to keep inside the castle.”

Hoffman admits the idea sounds to many like science fiction. And he said he doesn’t mind being dismissed as “an ivory tower academic.”

The 60-page paper takes readers through real-life cases that turned on how words in contracts were interpreted. It describes how judges reached their conclusions, and then explains what the large language models thought of the results.

Some of the differences are stark.

In one example, GPT-4 disagreed 100% of the time with a New York Court of Appeals ruling that said Duke Ellington’s record company could deduct from the musician’s revenue the fees it paid itself for services that another company used to perform.

“Of course, even uniformity between powerful models cannot decide cases,” Hoffman and Arbel write. “The point, rather, is to illustrate the value of LLMs as a convenient check against overconfidence, and a spur to greater reflection.”

The authors also show that large language models can consider evidence beyond the words of the contract itself.

In another example, they asked GPT-4 to assess a case based on the contract alone. The model gave one side a 10% likelihood of winning. When they then asked it to consider the contents of a real-life phone call that was part of the evidence at trial, that side’s odds of winning jumped to 75%.

“So convenient are today’s LLMs, and so seductive are their outputs, that it would be genuinely surprising if judges were not using them to resolve questions of contract interpretation as we write this article, only a few months after the tools went mainstream,” Hoffman and Arbel said in the paper.

Indeed, the paper cites a study by the National Judicial College, in which 17% of 332 surveyed judges said they had used ChatGPT in their jobs and liked it.

“The problem then is not whether courts will use LLMs as an aid to interpretation, but how,” Hoffman and Arbel write.

Shock Factor

Still, there’s a big difference between judges using the tool to aid in their work and parties to a contract choosing it as the method to interpret and potentially resolve disputes.

So, I asked Hoffman: How could that second step happen?

More parties first need to develop trust in the models, he said, despite their flaws, which include an inability to explain why they reached a particular result.

Parties might then migrate to the models if there is a “huge shock” in traditional contract law. Perhaps New York courts, the current preferred venue for contract disputes, take an unexpected turn.

“There needs to be some big thing that happens where everyone says, ‘We did not think about that,’” Hoffman said. “That’s why it’s not going to happen today. You need some kind of shock and consensus.”

There is also a possibility that an enterprising law firm could develop the precedent with a powerful client. Hoffman compared the idea to Wachtell, Lipton, Rosen & Katz developing the poison pill in the 1980s. It was, in effect, a technological innovation that burnished the reputation of Wachtell and, later, Skadden.

A similar thing could happen with the first firm to adopt generative interpretation.

“You need to have a ton of market power so you can insist on your terms,” said Hoffman, a former Cravath Swaine & Moore litigation associate. “You might think it comes from a West Coast firm who has a client which is aligned with them on the utility of driving down litigation costs and feels comfortable with the tech.”

Hoffman’s idea may not be something that will cause Big Law leaders to lose sleep tonight. They are likely, and rightly, concerned with more urgent challenges: Keeping high-performing partners and winning the next big piece of work.

But I credit Hoffman for thinking deeply about how generative AI could transform an important part of the legal market. His way of thinking might help other subject matter experts consider potential big changes AI might bring to their fields.

“I’m a contracts scholar. I see a huge problem that faces contracts scholars and lawyers, which is that legal interpretation is expensive and unpredictable,” he said. “That’s a problem lawyers ought to be interested in trying to fix. So my view was to ask: What’s the vision of how these models will help us fix that problem?”

Worth Your Time

On Litigation Funding: Bench Walk Advisors saw an investment pay off when an Arkansas jury last week sided with London Luxury, a textile vendor that accused Walmart of backing out of a contract to buy more than $500 million in Covid-era personal protective equipment. The litigation funder invested $5.1 million in the case and others shortly before trial, Tatyana Monnay and Emily Siegel report.

On Legal Briefs: Winston & Strawn reached a settlement with a Boston boutique firm that accused it of ripping off its motion to dismiss brief while representing a different defendant against a common plaintiff, Kyle Jahner reports.

On Clare Locke: The small law firm known for taking on large media organizations is forging an alliance with the UK’s Schillings and Australia’s Giles George. The agreement, while not a formal merger, will see the firms share resources to serve cross-border clients, Brian Baxter reports.

That’s it for this week! Thanks for reading and please send me your thoughts, critiques, and tips.

To contact the reporter on this story: Roy Strom in Chicago at rstrom@bloomberglaw.com

To contact the editors responsible for this story: Chris Opfer at copfer@bloombergindustry.com; John Hughes at jhughes@bloombergindustry.com; Alessandra Rafferty at arafferty@bloombergindustry.com
