04/23/2026 | Press release | Distributed by Public on 04/23/2026 11:04
Introducing Dynatrace for AI, an open-source collection of agent skills and prompts that give any skills-compatible AI coding assistant the domain expertise it needs to work productively and accurately with Dynatrace.
If you've already wired an AI coding assistant up to Dynatrace, through the MCP server, the Dynatrace CLI (dtctl), or a custom agent you built yourself, you've seen the pattern: the agent has access to the data but doesn't quite know how to use it. It writes queries that look plausible but reference fields that don't exist. The gap isn't access; it's expertise, an understanding of how Dynatrace is organized and operates. That's the gap Dynatrace for AI is built to close.
Agent skills are an open format for packaging domain knowledge that AI agents can load on demand. A skill is a folder containing a SKILL.md file with focused instructions, examples, and optional reference material. Compatible agents, such as Claude Code, GitHub Copilot, Cursor, Cline, or others, discover installed skills and load the full content only when it's relevant to the task at hand.
The net effect: you can install dozens of skills without bloating an agent's context window. Agents pull in exactly what's relevant when it's relevant, and ignore the rest.
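To make the format concrete, here's a minimal sketch of the on-disk layout described above: a skill is just a folder containing a SKILL.md file with instructions for the agent. The folder name (`dql-basics`), the frontmatter values, and the instruction text are all illustrative placeholders, not contents of the actual Dynatrace repo.

```shell
# Create a hypothetical skill folder. A skill is a directory whose
# SKILL.md carries YAML frontmatter (name, description) followed by
# the focused instructions an agent loads on demand.
mkdir -p my-skills/dql-basics

cat > my-skills/dql-basics/SKILL.md <<'EOF'
---
name: dql-basics
description: How to write well-formed DQL queries against common Dynatrace data types.
---

# DQL basics

When querying logs, start from the `logs` source and filter early:

    fetch logs
    | filter dt.entity.host == "HOST-123"
    | limit 20
EOF

# The agent discovers the skill by its frontmatter; the body is only
# loaded into context when the task calls for it.
ls my-skills/dql-basics
```

A compatible agent pointed at this directory would surface `dql-basics` by its description and pull in the full instructions only when a DQL-related task comes up.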
Figure 1. Install Dynatrace agent skills via a terminal

Dynatrace for AI is a curated set of skills that give an agent the three things it needs to do real work on Dynatrace:
Skills don't connect to Dynatrace directly. You have to pair them with the MCP server or dtctl to perform live queries and initiate actions. Together, they turn an agent with generic observability intuition into one that speaks fluent Dynatrace.
The first release of Dynatrace for AI agent skills is focused on the workflows that engineering teams run every day:
Alongside the skills, the repo hosts a small set of prompt templates you can use as structured starting points to invoke the right skills for specific tasks. These save teams from having to design their approach from scratch and make outcomes more consistent across agents and users.
Current templates include:
These are a starting point, not a ceiling, designed to be forked and shaped to your team's on-call runbooks.
Skills and prompts are a knowledge and workflow layer. They don't connect to your Dynatrace environment, define what actions your agent can take, or set guardrails. That's the job of the tool you pair them with and your Dynatrace permission model.
The quality of what your agent can produce also depends on what your environment is instrumented to capture. Skills help agents ask better questions of your data, but they don't control what data is collected.
Think of these skills as onboarding a smart new hire who already knows software but needs to learn your platform. The skills are the platform user guide; your observability data is the work itself.
It's super simple to install the skills and prompts in one go. Just run:
npx skills add dynatrace/dynatrace-for-ai
…or activate the skills as a Claude Code plugin:
claude plugin marketplace add dynatrace/dynatrace-for-ai
claude plugin install dynatrace@dynatrace-for-ai
Make sure your agent can reach Dynatrace, then try a real agent-skill task. A few good starting prompts:
The difference in output quality is immediate: fewer corrections, cleaner queries, and answers that actually reflect how Dynatrace models your environment.
The Dynatrace for AI project is open source and actively developed. Issues, discussions, and pull requests are all welcome, especially from teams running agent skills against real workloads. We'd love to hear from you.
Start making your agents work smarter; teach them how to use Dynatrace.