West Virginia University

04/30/2026 | Press release | Distributed by Public on 04/30/2026 10:03

WVU legal expert finds judges cautiously adopting AI while guarding human authority

New research from West Virginia University shows that as generative artificial intelligence begins to show up in courtrooms across the country, judges aren't rushing to hand over the gavel.

A white paper co-authored by Amy Cyphert, associate professor in the WVU College of Law, offers a closer look at how judges are beginning to use generative AI in their day-to-day work. While the tools are helping improve efficiency and accessibility in some areas, judges remain firmly committed to maintaining human control over judicial decision-making.

"This project really started from a gap," Cyphert said. "We were all talking about generative AI in the abstract, thinking about guidance, training and risks, but we didn't actually have much data on how judges themselves were using it. So, we decided to ask them directly."

The report draws on in-depth interviews with 13 state and federal judges across the United States, conducted through the AI Policy Consortium for Law and Courts. The goal was to better understand how judges are approaching these tools, how they weigh potential benefits against risks and what kinds of support they may need moving forward.

Early findings suggest some judges are already incorporating generative AI into their workflows. Among other uses, they reported turning to the technology to summarize lengthy documents, organize case materials, draft speeches and prepare questions ahead of oral arguments.

"Every single judge we spoke with was clear-eyed about this," Cyphert said. "They see these tools as helpful, but they also believe very strongly that the responsibility for decision-making must remain entirely human."

Many described using generative AI in ways similar to a junior assistant - helpful for administrative or preparatory tasks, but not a substitute for legal reasoning or final judgment. That perspective held across different courts, regions and levels of experience.

"Judges talked about using AI as a kind of force multiplier," Cyphert said. "If it can help with organizing information or preparing materials, that frees them up to spend more time on the core work of judging."

Some judges pointed to broader opportunities, including tools that could make court processes easier to understand for people navigating the system without legal representation.

"There are real opportunities here to make the system more accessible," Cyphert said. "Things like clearer explanations, better communication and easier navigation of court procedures could make a meaningful difference."

Judges also emphasized that those benefits come with tradeoffs. In some cases, using AI can introduce new inefficiencies, particularly when additional time is required to verify the accuracy of outputs.

The report identifies several risks that judges are actively working to manage. Among the most frequently cited were "hallucinations," or instances in which AI generates false or misleading information.

"Hallucinations were a concern that every judge raised," Cyphert said. "These systems can confidently produce information that simply isn't real, and sometimes that's easy to catch, but sometimes it's not. That means careful verification is essential."

Concerns about accuracy are closely tied to broader questions of public trust, particularly the risk of errors appearing in opinions or filings.

"They are very aware that even a single error like that could affect confidence in the courts," Cyphert said. "So, they are approaching these tools with a high level of caution."

Privacy and cybersecurity also remain top of mind. Many judges reported avoiding the use of AI tools for confidential or sealed materials and being mindful of how information is shared, even at the prompt stage.

"There's a lot of thoughtfulness around what information can safely be used with these tools," Cyphert said. "Judges are not just thinking about their own use, but also how their staff are using them."

As the technology continues to evolve, the research points to a growing need for clearer policies around disclosure, acceptable use and ethical guidelines, though establishing those standards may take time.

"These tools are increasingly embedded in everyday software," Cyphert said. "What matters most is that judges and lawyers continue to do ethical work and strive for fairness in every case."

Looking ahead, judges expressed strong interest in additional training, particularly when it comes to using generative AI effectively and identifying potential errors.

"They want practical guidance," Cyphert said. "How to use these tools well, how to spot problems, how to share best practices - that's where the field is headed."

For Cyphert, one of the most notable takeaways was the level of care judges are bringing to the issue.

"I was really impressed," she said. "They are thoughtful, deliberate and taking this very seriously."

The findings suggest that the role of AI in the judiciary will continue to develop alongside the technology itself, shaped by a combination of training, policy development and an ongoing emphasis on human judgment.

The white paper is part of a broader effort by the AI Policy Consortium for Law and Courts, a collaboration between the National Center for State Courts and the Thomson Reuters Institute, focused on understanding how emerging technologies are influencing the legal system.

-WVU-

hlt/4/30/26

MEDIA CONTACT: Andrew Marvin
Assistant Director of Communications and Marketing
WVU College of Law
[email protected]

Call 1-855-WVU-NEWS for the latest West Virginia University news and information from WVUToday.
