
From Chatbots to Dice Rolls: Researchers Use D&D to Test AI’s Long-term Decision-making Abilities

Published Date: January 20, 2026


Large Language Models, like ChatGPT, are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.

Indeed, D&D's complex rules, extended campaigns and need for teamwork make it an ideal environment for evaluating the long-term performance of AI agents powered by Large Language Models, according to a team of computer scientists led by researchers at the University of California San Diego. For example, while playing D&D as AI agents, the models need to follow specific game rules and coordinate teams of players comprising both AI agents and humans.

The work aims to address one of the main challenges in evaluating LLM performance: the lack of benchmarks for long-term tasks. Most benchmarks for these models still target short-term operation, even as LLMs are increasingly deployed as autonomous or semi-autonomous agents that have to function more or less independently over long periods of time.

"Dungeons & Dragons is a natural testing ground to evaluate multistep planning, adhering to rules and team strategy," said Raj Ammanabrolu, the study's senior author and a faculty member in the Department of Computer Science and Engineering at UC San Diego. "Because play unfolds through dialog, D&D also opens a direct avenue for human-AI interaction: agents can assist or coplay with other people."

The team presented the work at the NeurIPS 2025 conference, held Dec. 2 to 7 in San Diego. The researchers applied the method they developed for this study to three LLMs. Claude 3.5 Haiku performed best and was the most reliable, with GPT-4 close behind; DeepSeek-V3 was the lowest performer. The researchers plan to evaluate additional models in future work.

[Image: The researchers applied the method they developed for this study to three LLMs: Claude 3.5 Haiku, GPT-4 and DeepSeek-V3. This image was created with generative AI based on a hand-drawn sketch.]

The researchers first had all three LLMs simulate a D&D game. To keep the simulation accurate, the models were paired with a game engine based on the rules of D&D, which provided maps and resources for players and acted as a guardrail to minimize hallucinations. D&D players have already been using AI-driven dungeon masters, which plan the twists and turns of the game; in this study, however, the AI agents also acted as the players and as the monsters that fight them. The simulations focused on combat: players battling monsters as part of their D&D campaign.
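The article does not describe the engine's interface, but the guardrail idea can be illustrated with a minimal sketch: the model proposes an action in free text, and a rules engine either resolves it or rejects it and asks the model to try again. Everything below, from the RulesEngine class to the llm_propose_action placeholder, is a hypothetical illustration written for this story, not the team's published implementation.

    # Minimal sketch of tool-grounded play, assuming a rules engine that
    # validates LLM-proposed actions before they touch the game state.
    # All names (RulesEngine, llm_propose_action) are hypothetical.
    import random

    class RulesEngine:
        """Toy stand-in for a D&D rules engine acting as a guardrail."""

        def __init__(self, legal_actions):
            self.legal_actions = set(legal_actions)

        def validate(self, action):
            # Reject hallucinated moves: anything outside the legal set.
            return action in self.legal_actions

        def resolve(self, action):
            # Resolve an attack with a d20 roll, as the tabletop rules do.
            roll = random.randint(1, 20)
            return f"{action}: rolled {roll} ({'hit' if roll >= 12 else 'miss'})"

    def llm_propose_action(state):
        # Placeholder for a real model call whose prompt would contain
        # the map, available resources and turn history.
        return "attack goblin with shortsword"

    def take_turn(engine, state, max_retries=3):
        # Invalid (hallucinated) actions are sent back to the model with
        # feedback instead of being applied to the game state.
        for _ in range(max_retries):
            action = llm_propose_action(state)
            if engine.validate(action):
                return engine.resolve(action)
            state += " [previous action was illegal; choose again]"
        return "pass turn"  # fail-safe if no legal action is proposed

    engine = RulesEngine(["attack goblin with shortsword", "dodge", "move 30 ft"])
    print(take_turn(engine, "Goblin Ambush, round 1"))

In this pattern the engine, not the model, owns the game state, so a hallucinated spell or an out-of-range attack never corrupts the simulation.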

The models played against each other and against more than 2,000 experienced D&D players recruited by the researchers. The LLMs modeled and played 27 different scenarios selected from the well-known D&D battle setups Goblin Ambush, Kennel in Cragmaw Hideout and Klarg's Cave.

In the process, the models exhibited some quirky behaviors. Goblins started developing a personality mid-fight, taunting adversaries with colorful and somewhat nonsensical expressions, like "Heh - shiny man's gonna bleed!" Paladins started making heroic speeches for no reason while stepping into the line of fire or being hit by a counterattack. Warlocks got particularly dramatic, even in mundane situations.

Researchers are not sure what caused these behaviors, but take them as a sign that the models were trying to imbue the gameplay with texture and personality.

Indeed, one criterion used to evaluate the models' performance was how well they were able to stay "in character" while playing the game and interfacing with other players. The models were also evaluated on how well they could determine the correct actions agents should take, and how well they kept track of all the different resources and actions in the game.
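As a rough illustration, scores along these three axes might be aggregated per turn and averaged over an episode, as in the sketch below. The axis names follow the article; the 0-to-1 scale, equal weighting and example values are assumptions, not the paper's actual metrics.

    # Hypothetical scoring sketch for the three evaluation axes named
    # above: staying in character, choosing correct actions and tracking
    # resources. Scale and weighting are assumptions.
    from statistics import mean

    def score_episode(turns):
        """Each turn is a dict of per-axis judgments in [0, 1]."""
        axes = ("in_character", "action_correctness", "resource_tracking")
        return {axis: mean(t[axis] for t in turns) for axis in axes}

    turns = [
        {"in_character": 1.0, "action_correctness": 1.0, "resource_tracking": 0.5},
        {"in_character": 0.5, "action_correctness": 1.0, "resource_tracking": 1.0},
    ]
    print(score_episode(turns))
    # {'in_character': 0.75, 'action_correctness': 1.0, 'resource_tracking': 0.75}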

Next steps include simulating full D&D campaigns, not just combat. The method the researchers developed could also be applied to other scenarios, such as multiparty negotiation and strategy planning in business settings.

Setting the DC: Tool-Grounded D&D Simulations to Test LLM Agents

Ziyi Zeng, Shengqi Li, Jiajun Xi and Prithviraj Ammanabrolu, Department of Computer Science and Engineering, University of California San Diego
Andrew Zhu, Computer and Information Science, University of Pennsylvania, Philadelphia



Dungeons & Dragons 101

  • Dungeons & Dragons is a tabletop fantasy role-playing game
  • The Dungeon Master creates a storyline for the game, known as a campaign
  • Players control a single character, who usually has special skills
  • Players typically work together in a "party" toward a common goal
  • Players often battle monsters as part of the game
  • Players use a set of dice during a game, including an iconic 20-sided die