09/10/2025 | Press release | Distributed by Public on 09/10/2025 13:01
Figure 3: Example diagram of observability workflows
This layered approach (developer-friendly traces in pre-production, scalable structured observability in production, and unified trace-based evaluation across both) allows us to move fast without sacrificing visibility or reliability.
Balancing experimentation and integration with developer tooling
While there is intense focus on bringing agents to product and quickly showing value, experimentation and iteration still have a huge role to play in building a successful agent experience. Developer tooling is one of the best avenues for creating that space to test, learn, innovate, and evaluate when to integrate and scale an agent initiative.
We built a Playground as a testing ground for rapid prototyping and experimentation, enabling developers to conceptualize and iterate on ideas without committing to extensive integration efforts. Key features include:
Agent experimentation: Developers can experiment with agents and engage in two-way communication, facilitating testing and validation of agent behaviors.
Skill exploration: The Playground offers tools to search for registered skills, inspect metadata, and directly invoke them using user inputs.
Memory inspection: Developers can examine memory contents and observe how they change over time by viewing historical revisions.
Identity management: The Playground provides tools to manage and inspect identity data, enabling developers to test with varied authorization scenarios and product licenses.
Observability: Experimental invocations provide observability traces, giving quick insight into failures during development.
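To make the skill-exploration feature concrete, here is a minimal sketch of searching a skill registry, inspecting metadata, and invoking a skill with user inputs. All names here (`SkillRegistry`, `Skill`, the example skill itself) are hypothetical illustrations, not LinkedIn's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Skill:
    """A registered skill: metadata plus an invocable handler."""
    name: str
    description: str
    metadata: Dict[str, str] = field(default_factory=dict)
    handler: Callable[[Dict[str, Any]], Any] = lambda inputs: None


class SkillRegistry:
    """In-memory stand-in for a registered-skill store."""

    def __init__(self) -> None:
        self._skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def search(self, query: str) -> List[Skill]:
        # Naive substring search over name and description.
        q = query.lower()
        return [s for s in self._skills.values()
                if q in s.name.lower() or q in s.description.lower()]

    def invoke(self, name: str, inputs: Dict[str, Any]) -> Any:
        # Direct invocation with user-supplied inputs, as in the Playground.
        return self._skills[name].handler(inputs)


registry = SkillRegistry()
registry.register(Skill(
    name="summarize_profile",
    description="Summarize a member profile",
    metadata={"owner": "example-team", "version": "1"},
    handler=lambda inputs: f"Summary for {inputs['member']}",
))

matches = registry.search("profile")
result = registry.invoke("summarize_profile", {"member": "alice"})
```

The point of the Playground workflow is that search, metadata inspection, and invocation happen against the same registry the production platform uses, so behavior validated here carries over.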
Taking note of emergent agent design patterns
While we've shared many practical ways we've extended our tech stack within LinkedIn, we are also constantly evaluating how our learned observations and experiences might reflect or be shaped by the emerging agent design patterns across domains.
UX trends: So, what has advanced in the agentic UX space? Agentic experiences have been evolving to keep pace with the exponential advancements in AI: reasoning models explaining their chain of thought, deep research agents, browser-use agents, and background agents. Users have grown accustomed to chat-based experiences and increasingly find them more comfortable than a traditional GUI, and they are delegating more of their tasks to agents to work semi- or fully autonomously.
Intent alignment, explainability and control: The key aspects of agentic experiences now include intent alignment, explainability, and control. Agents must align with user expectations, even when those expectations are vague, by explaining their thought process and grounding their statements with verifiable facts and citations. They need to seek feedback and give the user enough control even when working autonomously.
Unleashing data: Data has evolved from a passive asset to the driving force behind agentic workflows, where intelligent agents amplify its power by generating insights, making it digestible, and acting upon it. RAG and knowledge graphs surface the latent meaning hidden within data, making it more usable for both agents and users. Central to this transformation is context engineering, a practice that involves feeding LLMs the right data and memory in alignment with specific goals, unlocking a new tier of responsiveness and intelligence. By infusing the right user memories into multi-agent systems, we enable personalized, responsive experiences that feel truly adaptive. We also leverage data-intensive offline jobs to curate and refine long-term agent memories.
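A minimal sketch of the context-engineering idea described above: select only the memories aligned with the current goal and assemble them into the prompt. The tag-overlap scoring and the memory shapes are illustrative assumptions, not a production retrieval pipeline.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Memory:
    text: str
    tags: List[str]


def select_memories(memories: List[Memory], goal_tags: List[str],
                    k: int = 2) -> List[Memory]:
    """Rank memories by tag overlap with the current goal; keep top-k relevant."""
    scored = sorted(
        memories,
        key=lambda m: len(set(m.tags) & set(goal_tags)),
        reverse=True,
    )
    return [m for m in scored[:k] if set(m.tags) & set(goal_tags)]


def build_context(goal: str, memories: List[Memory],
                  goal_tags: List[str]) -> str:
    """Feed the LLM only the memories aligned with the specific goal."""
    relevant = select_memories(memories, goal_tags)
    memory_block = "\n".join(f"- {m.text}" for m in relevant)
    return f"Goal: {goal}\nRelevant memories:\n{memory_block}"


memories = [
    Memory("Prefers concise weekly summaries", ["summaries", "preferences"]),
    Memory("Works in fintech recruiting", ["profile", "industry"]),
    Memory("Last search: senior ML engineers", ["search", "recent"]),
]
prompt = build_context(
    goal="Draft a candidate outreach message",
    memories=memories,
    goal_tags=["search", "industry"],
)
```

In practice the ranking step would be a retrieval system (embeddings, a knowledge graph) rather than tag overlap, but the shape is the same: goal in, curated context out.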
Background agents: Background agents can take on longer tasks, perform them autonomously behind the scenes, and finally present the finished work for review. Users can assign tasks to them through a task-assignment system like GitHub Actions or Jira, and agents methodically perform the tasks in the background. Coding assistants and deep research agents are well suited for such background work, which is also one way to optimize the use of GPU compute at idle, off-peak times.
Frameworks: Every major company has released its own agentic framework, and there are more than a hundred available today, but no single framework has a dominant market share yet. LangGraph is quite popular due to its vast integrations, production readiness, and observability features. We have embraced LangGraph and adapted it to work with LinkedIn messaging and memory infrastructure using custom-built providers. This allows agent developers to use popular frameworks to work with our agentic platform.
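A hypothetical sketch of the custom-provider pattern: an adapter that backs a framework's checkpoint interface with an internal memory store. The interface names here are illustrative; LangGraph's actual checkpointer interfaces differ in detail.

```python
from typing import Any, Dict, Optional, Protocol


class MemoryStore(Protocol):
    """Shape of an internal memory backend (hypothetical)."""
    def get(self, key: str) -> Optional[Dict[str, Any]]: ...
    def put(self, key: str, value: Dict[str, Any]) -> None: ...


class InMemoryStore:
    """Trivial stand-in for real memory infrastructure."""

    def __init__(self) -> None:
        self._data: Dict[str, Dict[str, Any]] = {}

    def get(self, key: str) -> Optional[Dict[str, Any]]:
        return self._data.get(key)

    def put(self, key: str, value: Dict[str, Any]) -> None:
        self._data[key] = value


class CheckpointProvider:
    """Adapter the agent framework calls to persist and restore graph state."""

    def __init__(self, store: MemoryStore) -> None:
        self._store = store

    def save(self, thread_id: str, state: Dict[str, Any]) -> None:
        self._store.put(f"checkpoint:{thread_id}", state)

    def load(self, thread_id: str) -> Dict[str, Any]:
        return self._store.get(f"checkpoint:{thread_id}") or {}


provider = CheckpointProvider(InMemoryStore())
provider.save("thread-1", {"messages": ["hi"], "step": 1})
restored = provider.load("thread-1")
```

Because the framework only sees the adapter interface, the same agent code can run against a local store during development and internal infrastructure in production.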
Final thoughts
Our LinkedIn generative AI application tech stack reflects how far we've come on our journey from simple generative AI use cases to complex multi-agent experiences. At this stage in our work with agents, we are sharing some final thoughts on areas of importance and shifting influence in this space.
Security and privacy: Within the platform and individual agents, user data is handled with strict boundaries to support privacy, security, and control. Experiential Memory, Conversation Memory, and other data stores are siloed by design, with privacy-preserving methods governing how information flows between components like the Client Data Layer, the Playground/other agents, and the Agent Lifecycle Service. Any sharing between these domains is designed to happen through explicit, policy-driven interfaces rather than direct access, with strong authentication and authorization checks enforced for every cross-component call, including tool invocations. These safeguards ensure that only permitted agents or services can access specific data, and that all access is logged and auditable, keeping member information compartmentalized and secure.
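The policy-driven interface described above can be sketched as an explicit allow-list check with an audit trail on every cross-component call. The policy shape, component names, and resource strings are illustrative assumptions.

```python
from typing import Dict, List, Set, Tuple

# (caller, resource) pairs the policy explicitly permits; anything
# not listed is denied by default (no direct access between silos).
POLICY: Set[Tuple[str, str]] = {
    ("playground", "experiential_memory:read"),
    ("agent_lifecycle_service", "conversation_memory:read"),
}

AUDIT_LOG: List[Dict[str, str]] = []


def authorize(caller: str, resource: str) -> bool:
    """Allow only explicitly permitted pairs; log every decision."""
    allowed = (caller, resource) in POLICY
    AUDIT_LOG.append({
        "caller": caller,
        "resource": resource,
        "decision": "allow" if allowed else "deny",
    })
    return allowed
```

The key properties are deny-by-default and the fact that denied attempts are logged just like allowed ones, which is what makes access auditable.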
Sync vs async agent invocation: In addition to the async messaging-based delivery for agent invocation, we have now enabled a new sync delivery mode, which bypasses the async queue and directly invokes the agent with sideways message creation. This significantly speeds up delivery with predictable latencies for user-facing interactive agentic experiences. Agent developers now have the option of strong consistency with async delivery versus eventual consistency with sync delivery.
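A small sketch of the delivery-mode choice: async enqueues the message for a worker to drain later, while sync invokes the agent directly for a predictable, low-latency response. The queue, agent, and payload shapes are stand-ins, not LinkedIn's messaging API.

```python
from enum import Enum
from typing import Callable, Dict, List


class DeliveryMode(Enum):
    ASYNC = "async"   # enqueue; message persisted before the agent runs
    SYNC = "sync"     # invoke directly; message record created "sideways"


def deliver(mode: DeliveryMode, payload: Dict[str, str],
            queue: List[Dict[str, str]],
            agent: Callable[[Dict[str, str]], str]) -> str:
    if mode is DeliveryMode.ASYNC:
        queue.append(payload)   # a worker drains this queue later
        return "enqueued"
    return agent(payload)       # predictable, low-latency direct response


def echo_agent(payload: Dict[str, str]) -> str:
    return f"handled:{payload['text']}"


queue: List[Dict[str, str]] = []
ack = deliver(DeliveryMode.ASYNC, {"text": "hi"}, queue, echo_agent)
reply = deliver(DeliveryMode.SYNC, {"text": "hi"}, queue, echo_agent)
```

The trade-off is visible in the return values: async gives back only an acknowledgment while the durable queue guarantees delivery, whereas sync returns the agent's answer immediately at the cost of eventual consistency in the message store.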
MCP and A2A: MCP and A2A have become foundational protocols for enabling dynamic agent discovery and collaboration across diverse frameworks and distributed environments. MCP empowers agents to explore and interact with the world through tool-based interfaces, while A2A facilitates seamless teamwork among agents. With widespread support from leading model providers like Anthropic, OpenAI, and Azure, MCP makes it easier for companies to surface and activate their data via standardized interfaces. We are incrementally adopting these open protocols and moving away from a proprietary skill registry, paving the way for more intelligent, interoperable, and context-aware agent ecosystems.
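To illustrate the tool-based interface MCP standardizes: each tool advertises a name, a description, and a JSON Schema for its inputs, which is what lets any MCP-aware agent discover and call it. The field names follow the MCP tool shape; the tool itself is a made-up example.

```python
import json

# An MCP-style tool descriptor: name, description, and input schema.
tool = {
    "name": "search_members",
    "description": "Search member profiles by keyword",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

# Agents typically receive such descriptors in a tools/list response.
listing = json.dumps({"tools": [tool]}, indent=2)
```

Because the schema travels with the tool, a client agent can validate inputs and construct calls without any proprietary registry, which is what makes the move away from a custom skill registry feasible.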
There is no one correct path to building successfully with agents. Our hope is that sharing the approaches, lessons, and insights we've collected will help others make progress on their own exciting journey with agents.