04/30/2026 | Press release | Distributed by Public on 04/30/2026 08:57
AI coding agents are quickly becoming a core part of how modern engineering teams build, review, deploy, and troubleshoot software. But as usage grows, so do the operational questions: Which agents are being adopted? What are the associated costs? How reliable are coding agents within real developer workflows? Which tools do they invoke, and where are they slowing down, failing, or creating unnecessary risk in production?
Dynatrace helps you answer these questions by extending AI observability to a new wave of coding agents, including Claude Code, Google Gemini CLI, OpenAI Codex CLI, OpenCode, and GitHub Copilot SDK. Together, these integrations give engineering leaders, platform teams, and developers a consistent way to understand agent activity, token consumption, costs, tool behavior, and runtime impact, without forcing teams to stitch together fragmented telemetry across terminals, SDKs, dashboards, and development workflows. Dynatrace's public AI agent instrumentation examples already include Claude Code, and the broader Dynatrace AI Observability and MCP portfolio is designed to bring real-time production context and governance into agentic developer workflows.
As organizations adopt multiple coding agents, new adoption challenges emerge. One team might use Claude Code in the terminal, another may build internal tools with GitHub Copilot SDK, while others experiment with Gemini CLI or Codex CLI. Platform teams want visibility into usage, availability, and costs. Engineering leaders want to know whether agents improve delivery. Security and governance teams want confidence that all prompts, tool usage, and actions can be monitored appropriately.
Dynatrace provides a practical answer to these challenges: a single observability layer for agentic development workflows. For agents that emit OpenTelemetry directly, such as Claude Code, Gemini CLI, and Codex CLI, Dynatrace can ingest telemetry related to sessions, tokens, costs, tool executions, errors, and performance. For GitHub Copilot workflows, Dynatrace adds production context, software delivery automation, and GitHub-based integrations that connect agent activity to real engineering workflows.
The payoff is clear. Developers gain visibility into how agents behave in real work. Platform teams can track adoption, usage trends, and cost signals. Engineering leaders can correlate agent activity with commits, pull requests, and delivery outcomes. And with an MCP-enabled production context, teams can connect coding-agent actions to what is happening in production.
"Before we instrumented Claude Code, we had no easy way to break down how our engineers actually used AI, which models, for what tasks, and at what cost. Now we can pinpoint inefficient model use and guide usage toward better cost-performance tradeoffs."
- Marcus Heimbach, Senior Director Software Development
Each coding agent has a different operating model, which is why a common observability layer matters.
Claude Code already supports built-in OpenTelemetry, making it easy to send metrics and logs to Dynatrace with no code changes. Teams can track sessions, tokens, costs, tool activity, API health, and engineering output such as commits and pull requests. Logs, dashboards, and alerts help teams investigate failures, spot latency spikes, and catch unusual spend or error patterns early.
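As an illustration of this no-code setup, here is a minimal shell sketch. The telemetry variables follow Anthropic's documented Claude Code OpenTelemetry settings and the standard OTLP exporter variables; the Dynatrace environment ID and API token are placeholders to replace with your own values.

```shell
# Opt in to Claude Code's built-in OpenTelemetry export.
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Emit metrics and logs over OTLP/HTTP.
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# Point the exporter at your Dynatrace environment's OTLP ingest API
# (placeholders: replace the environment ID and API token).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<env-id>.live.dynatrace.com/api/v2/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token <your-token>"

# Start Claude Code as usual; session, token, cost, and tool telemetry
# now flows to Dynatrace with no code changes.
claude
```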
Gemini CLI includes OpenTelemetry-based observability and preconfigured dashboards, making it a strong fit for Dynatrace AI observability. Teams can correlate agent activity with broader platform signals and move quickly from raw telemetry to action. This includes debugging failed runs, identifying slow or error-prone tool calls, and alerting on cost or reliability regressions.
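As a hedged sketch of how this might look in practice, Gemini CLI's telemetry can be enabled through its settings file and pointed at an OTLP endpoint, such as a local OpenTelemetry Collector that forwards to Dynatrace. The field names below reflect Gemini CLI's telemetry settings but should be verified against the current Gemini CLI documentation.

```shell
# Hedged sketch: enable Gemini CLI telemetry via ~/.gemini/settings.json.
# If the file already exists, merge the "telemetry" section into it
# rather than overwriting the whole file as done here.
mkdir -p ~/.gemini
cat > ~/.gemini/settings.json <<'EOF'
{
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317"
  }
}
EOF
# An OpenTelemetry Collector listening on localhost:4317 can then
# forward the data to the Dynatrace OTLP ingest API.
```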
Codex CLI supports opt-in OpenTelemetry monitoring, giving teams a path to audit usage and strengthen governance across CLI, IDE, and app experiences. With Dynatrace, logs and traces help investigate request flows, delays, and failures across agent workflows. Alerts can flag degraded reliability, unexpected behavior, or rising token consumption before they become larger issues.
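A minimal sketch of the export side, assuming Codex CLI honors the standard OTLP exporter environment variables once its opt-in telemetry is enabled (the opt-in switch itself lives in the Codex CLI configuration; consult its documentation). Endpoint and token are placeholders.

```shell
# Hedged sketch: standard OTLP exporter variables for opt-in telemetry.
# Whether Codex CLI reads these directly is an assumption; the opt-in
# mechanism is configured per the Codex CLI docs.
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<env-id>.live.dynatrace.com/api/v2/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token <your-token>"
```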
GitHub Copilot SDK lets teams embed agentic workflows directly into applications, while Dynatrace adds live observability and security context. This matters because embedded agents become part of real engineering and production-adjacent workflows. Dynatrace helps trace execution paths, use logs for debugging and auditability, and set alerts for failures, latency, or policy-relevant events.
OpenCode is a terminal-based AI coding agent that helps developers work through coding tasks directly from the command line. Because OpenCode ships with native OpenTelemetry support, teams can route telemetry to Dynatrace without code changes by setting standard OTLP environment variables. With Dynatrace, teams can track LLM call volume, session activity, tool usage, request latency, and workflow behavior across real developer sessions. Traces help teams inspect LLM requests, tool executions, session lifecycle events, message processing, file snapshots, or diff operations.
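Because OpenCode's OpenTelemetry support is native, the setup reduces to the standard OTLP environment variables described above. A short sketch, with the Dynatrace environment ID and API token as placeholders:

```shell
# OpenCode ships with native OpenTelemetry support, so the standard
# OTLP environment variables are enough; no code changes required.
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<env-id>.live.dynatrace.com/api/v2/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token <your-token>"

# Start OpenCode; traces for LLM requests, tool executions, and
# session lifecycle events now export to Dynatrace.
opencode
```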
Across all operating models, the value is the same: one strategy for monitoring adoption and impact, understanding costs, logging and tracing agent activity, alerting on reliability issues, and debugging real-world workflows as coding agents scale across the enterprise.
Teams are no longer asking whether coding agents are useful. They're asking how to drive adoption, scale them safely, govern them consistently, and prove their impact. That requires visibility into usage, cost, reliability, and engineering outcomes across teams and tools. Dynatrace helps organizations make that shift with the observability and production context needed to expand coding-agent adoption with confidence.
The coding-agent market is moving fast. Claude Code, GitHub Copilot SDK, Google Gemini CLI, and OpenAI Codex CLI each represent a different path toward agentic software delivery, from terminal-based workflows to embedded SDKs and governed local execution. At the same time, Dynatrace has been expanding its developer-facing AI surface with the Dynatrace MCP Server, GitHub Copilot integrations, and AI observability capabilities built to connect agent behavior with real production systems. The timing matters because teams are past evaluating whether coding agents are useful; they're deciding how to drive adoption, scale safely, govern consistently, and measure real impact.
With Dynatrace, teams can move beyond isolated agent demos and start operating AI coding agents with the same rigor they apply to applications and platforms. This means having the ability to understand adoption, spend, reliability, tool behavior, and engineering outcomes in one place, while giving agents access to the live production context they need to make better decisions.
Whether your developers are working in Claude Code, building on GitHub Copilot SDK, experimenting with Gemini CLI, or adopting Codex CLI, Dynatrace helps bring observability, governance, and production awareness into the heart of agentic software delivery.
Public examples already demonstrate this approach for Claude Code, and the broader Dynatrace MCP and AI observability ecosystem provides the foundation to extend the same value across the next generation of coding agents.
In our Git repository, you'll find step-by-step examples for supported coding-agent workflows, including how to configure OpenTelemetry export, send telemetry data to Dynatrace, and use the provided dashboards to analyze the activity of your AI coding agents.
Visit our Git repo for detailed instructions and AI Coding Agent instrumentation examples.